Google could have given us GPT-4 if they weren't busy tearing themselves asunder with people convinced a GPT-3 level model was sentient, and now we see OpenAI can't seem to escape that same poison
I wouldn't say "refuses to answer" for that.
Doubt. When was the last time Google showed they had the ability to execute on anything?
Imagine the hubris.
Those who lost their livelihoods and then died did not get those positive outcomes.
How is your comment doubting that? Do you have an alternative reason, or do you think they're executing and mistyped?
For instance, it's still very possible that humanity will eventually destroy itself with atomic bombs (getting more likely every day).
To create true AGI, you would need to make the software aware of its surroundings and provide it with a way to experience the real world.
"Many were increasingly of the opinion that they’d all made a big mistake in coming down from the trees in the first place. And some said that even the trees had been a bad move, and that no one should ever have left the oceans"
If you do manage to make a thinking, working AGI machine, would you call it "a living being"?
No, the machine still needs to have individuality, a way to experience the "oneness" that all living humans (and perhaps animals, we don't know) feel. Some call it "a soul", others "consciousness".
The machine would have to live independently from its creators, to be self-aware, to multiply. Otherwise, it is just a shell filled with random data gathered from the Internet and its surroundings.
An AI without a body, but access to every API currently hosted on the internet, and the ability to reason about them and compose them… that is something that needs serious consideration.
It sounds like you’re dismissing it because it won’t fit the mold of sci-fi humanoid-like robots, and I think that’s a big miss.
There's nothing "specific" about being crippled by people pushing an agenda; you'd think the fact that this post is about Sam Altman being fired from OpenAI would make that clear enough.