I'm not saying this will happen, but it seems to me like an incredibly silly move.
Not that I think AGI is possible or desirable in the first place, but that's a different discussion.
If he didn't manage to keep OpenAI consistent with its founding principles and all interests aligned, then wouldn't booting him be right? The name OpenAI had become a source of mockery. If Altman/Brockman take employees with them for a commercial venture, it just seems to prove their insincerity about the OpenAI mission.
I can't imagine him doing that. He cares about getting well aligned AGI and profit motives fuck that up.
Of course, not for the petty reasons that you list. Sama has comprehensively explained why the original open-source model did not work, and so far the argument – it's very expensive – seems to align with a reality where every single semi-competitive available LLM (since they all pale in comparison to GPT-4 anyway) has been trained with a whole lot of corporate money. Meta side-chaining "open" models with their social media ad money is obviously not a comparable business, or any business at all. I get that the HN crowd + Elon are super salty about that, but it's just a bit silly.
No, Sam's failure as CEO is not having done what was necessary to align the right people in the company with the course he decided on, and then losing control over that.
A fitting typo!
Show HN: FerenGUI - the UI framework for immoral arch-capitalists
Every dark pattern included as a standard component. Upgrade to Pro to get the price-fixing and hidden Monero miner modules.
They already moved the goalposts and they'll do it again. AI used to mean AGI, but marketing got hold of it. Once something resembling AGI comes out, they'll say, well, it's not Level 5 AGI, or something similar.
We barely understand how consciousness works; we should stop talking about "AGI". It is just empty, ridiculous techno-babble. Sorry for the harsh language; there's no nice way to drive home this point.
And Microsoft are risk-averse enough that I think they do care about AI safety, even if only from a "what's best for the business" standpoint.
Tbh idc if we get AGI. There'll be a point in the future where we have AGI and the technology is accessible enough that anybody can create one. We need to stop this pointless bickering over this sort of stuff, because as usual, the larger problem is always going to be the human using the tool, not the tool itself.
Since Shank's comment didn't specify what they meant, I should have made a more charitable interpretation (i.e. assume it was "weak AGI").
Also, I hope my response to tempestn clarifies a bit more.
Edit: I'll be more explicit about what I mean by "nuance" — see Stuart Russell. Check out his book, "Human Compatible". It's written with cutting clarity, restraint, thoughtfulness, and simplicity (not to be confused with "easy"!); it's an absolute delight to read. It's excellent science writing, and a model for anyone thinking of writing a book in this space. (See also Russell's principles for "provably beneficial AI".)
> Once you have their money, you never give it back.
There is no official Rule 2, so the non-canon one is as good as any, and the unwritten rule [2]:
> When no appropriate rule applies, make one up
means they probably would have been covered either way.
[0] https://memory-alpha.fandom.com/wiki/Rules_of_Acquisition#Ap...
[1] https://memory-alpha.fandom.com/wiki/Rules_of_Acquisition#Of...
[2] https://memory-alpha.fandom.com/wiki/Rules_of_Acquisition#Un...
They may be geniuses, but AGI is an idea whose time has come: geniuses are no longer required to get us there.
The Singularity train has already left the station.
Inevitability.
Now humanity is just waiting for it to arrive at our stop.
I think AGI is going to arrive via a different technology, many years in the future still.
LLMs will get to the point where they appear to be AGI, but only in the same way the latest 3D rendering technology can create images that appear to be real.
This, combined with it being possible with DNA, is a very rare view. How did you come by it?
I think the path to AGI is: embodiment. Give it a body, let it explore a world, fight to survive, learn action and consequence. Then AGI you will have.
It's like a real-life example, i.e. what would you do if you were in the CEO's position?
I suspect AGI is quite possible, it just won't be what everyone thinks it will be.
Most humans cannot write as well, and most lack the reasoning ability. Even the mistakes ChatGPT makes on mathematical reasoning are typical human behavior.
A distinction without a difference.
Human behavior is highly optimized around having a meat-based shell it has to keep alive. The vast majority of our behaviors have little to nothing to do with our intelligence. Any non-organic intelligence is going to be highly divergent in its trajectory.
What is intelligence?
This is a nearly impossible question to answer for human intelligence, as the answer could fill libraries. You have subcellular intelligence, cellular-level intelligence, organ-level intelligence, body-systems-level intelligence, whole-body-level intelligence, and then the intellectual intelligence that puts us above and beyond other animals.
These are all different things that work in concert to keep you alive and everything working, and in a human they cannot be separated. But what happens when you have 'intelligence' that isn't worried about staying alive? Which parts of the system are or are not important for what we consider human intelligence? It's going to look a lot different from a person.
For example, you're limited to one body, but an AGI (or ASI) could have thousands of different bodies feeding data back to a processing facility, learning from billions of different sensors.
It does not and never has.
What has happened with the term AI as time has progressed has more to do with the word "intelligence" itself. When we went about trying to ascribe intelligence to systems, we started to realize we were really bad at doing the same with animal and human systems. We were also terrible at separating component-level from system-level intelligence. For example, you seem to think that intelligence requires meat, but you don't give any reasoning for that conclusion.
These problems with defining intelligence will only get worse over time as we build more capable systems and learn about new forms of intelligence we didn't expect were possible.
There is no reason embodiment for AGI should need to be physical or mammalian-like in any way.
But I disagree about a human or animal body not being required.
I think we have to take the world as we see it and appreciate our own limitations: what we think of as intelligence fundamentally arises out of our evolution in this world, out of our embodiment in and response to it.
So I think we do need to give it a body and let it explore this world.
I don't think the virtual-bodies thing is gonna work. I don't think letting it explore the Internet is gonna work. You have to give it a body and multiple senses, and let it survive. That's how you get AGI, not virtual embodiment. Which I never meant, but I thought that was obvious given that the term "embodiment" itself strongly suggests something that's not virtual! Hahaha! :)
But there is so much more to reality than what we can consciously describe, like 10,000 to 1, and none of that is captured by any of these synthetic representations.
So far, at least. And yet all of that, or a lot of it, is understood, responded to, and dealt with by the intelligence that resides within our bodies and in our subconscious.
And our own intelligence arises out of that; you cannot have general intelligence without reality. No matter how much Internet data you train it on, it's never gonna be as rich as putting it in a body in the real world and letting it grow, learn, experience, and evolve. So any "intelligence" you get out of this virtual, synthetic training is never going to be real. It is always gonna be a poor copy of intelligence, and it is not gonna be an AGI.
Intelligence is not the defining characteristic of humanity, which is what you're getting at here. But it is something that can be automated.
Plenty of very intelligent people are completely paralyzed. The sensation of physical embodiment is highly overrated and surely not necessary for intelligence.