I hope we're wrong about a lot of this, and AGI turns out to either be impossible, or much less useful than we think it will be. I hope we end up in a world where humans' value increases, instead of decreasing. At a minimum, if AGI is possible, I hope we can imbue it with ethics that allow it to make decisions that value other sentient life.
Do I think this will actually happen in two years, let alone five or ten or fifty? Not really. I think it is wildly optimistic to assume we can get there from here - where "here" is LLM technology, mostly. But five years ago, I thought the idea of LLMs themselves working as well as they do at speaking conversational English was essentially fiction - so really, anything is possible, or at least worth considering.
"May you live in interesting times" is a curse for a reason.
But even so, solving that problem feels much more attainable than it used to be.
I assume that, thanks to the universal approximation theorem, it’s theoretically possible to emulate the physical mechanism, but at what hardware and training cost? I’ve done back-of-the-napkin math on this before [1], and the number of “parameters” in the brain is at least 2-4 orders of magnitude greater than in state-of-the-art models. And that’s just the current weights - what about the history that actually enables the plasticity? Channel threshold potentials are also continuous rather than discrete, and emulating them might require full fp64, so I’m not sure how we’re even going to meet the memory requirements in the next decade, let alone whether any architecture on the horizon can emulate neuroplasticity.
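To make the napkin math concrete, here's a rough sketch of the arithmetic (Python), assuming ~1e14-1e15 synapses and one fp64 value per synapse - both are assumptions, and almost certainly an undercount of the state plasticity actually needs:

    # Back-of-the-envelope memory estimate; every figure here is a rough assumption.
    synapse_counts = (1e14, 1e15)   # commonly cited range for synapses in a human brain
    bytes_per_param = 8             # fp64, per the assumption above

    for synapses in synapse_counts:
        memory_pb = synapses * bytes_per_param / 1e15   # petabytes of weight storage
        ratio = synapses / 1e12                         # vs. an assumed ~1T-parameter LLM
        print(f"{synapses:.0e} synapses -> ~{memory_pb:.1f} PB of fp64 weights, "
              f"~{ratio:.0f}x a 1T-parameter model")

That lands in the high-hundreds-of-terabytes to petabytes range for the weights alone, i.e. roughly 2-3 orders of magnitude beyond a trillion-parameter model, before accounting for any of the per-synapse history or state that plasticity seems to require.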
Then there’s the whole problem of a true physical feedback loop with which the AI can run experiments and learn against external reward functions. The survival reward function at the heart of evolution might itself be critical, but that’s getting deep into the research and philosophy on the nature of intelligence.
[1] >>40313672