The PR hit will be bad for a few days. Good time to buy MS stock at a discount, but this won't matter in a year or two.
Not rational if (and unlike Sutskever, Hinton, and Bengio) you are not a "doomer" / "decel". Ilya is very vocal and on record that he suspects there may be "something else" going on with these models. He and DeepMind claim AlphaGo is already AGI (correction: ASI) in a very narrow domain (https://www.arxiv-vanity.com/papers/2311.02462/). Ilya in particular predicts it is a given that neural networks will achieve broad AGI (superintelligence) before alignment is figured out, unless researchers start putting more resources into it.
(like LeCun, I am not a doomer; but I am also not Hinton to know any better)
The definition of AGI always puzzles me, because the "G" in AGI stands for general, and that word certainly doesn't play well with "narrow". AGI is a new buzzword, I guess.
I think when people say "takeover" or "coup" it's because they want to convey their view of the moral character of events, that they believe it was an improper decision. But it muddies the waters and I wish they'd be more direct. "It's a coup" is a criticism of how things happened, but the substantive disagreements are actually about the fact that it happened and why it happened.
I see lots of polarized debate any time something AI safety related comes up, so I just don't really believe that most people would feel differently if the same thing happened but the corporate structure was more conventional, or if Brockman's board seat happened to be occupied by someone who was sympathetic to ousting Altman.
I think the guy you're replying to misunderstood the article he's alluding to, though. They don't claim anything about a narrow AGI.
Ilya is pretty serious about alignment (precisely?) due to his gut instinct: https://www.youtube.com/watch?v=Ft0gTO2K85A (2 Nov 2023)
Sam has been agreeing with this group and using it as the reason to go commercial, to provide funding for that goal. The problem is that these new products are coming too fast and consuming resources that could otherwise go toward safety training.
This group never wanted to release ChatGPT but was forced to because a rival company made up of ex-OpenAI employees was going to release its own version. To the safety group, things have been getting worse since that release.
Sam is smart enough to use the safety group's fear against them. They finally clued in.
OpenAI never wanted to give us ChatGPT. Their hands were forced by a rival, and Sam and the board made a decision that brought in the next breakthrough. From that point things snowballed. Sam knew he needed to run before bigger players moved in. It became too obvious after DevDay that the safety team would never be able to catch up, and they pulled the brakes.
OpenAI's vision of a safe AI has turned into a vision of human censorship rather than protecting society from a rogue AI with the power to harm.
https://x.com/esyudkowsky/status/1725630614723084627?s=46
Mr. Yudkowsky is a lot like Richard Stallman. He’s a historically vital but now-controversial figure whom a lot of AI Safety people tend to distance themselves from nowadays, because he has a tendency to exaggerate for rhetorical effect. This means that he ends up “preaching to the choir” while pushing away or offending people in the general public who might be open to learning about AI x-risk scenarios but haven’t made up their mind yet.
But we in this field owe him a huge debt. I’d sincerely like to publicly thank Mr. Yudkowsky and say that even if he has fallen out of favor for being too extreme in his views and statements, Mr. Yudkowsky was one of the 3 or 4 people most central to creating the field of AI safety, and without him, OpenAI and Anthropic would most certainly not exist.
I don’t agree with him that opacity is safer, but he’s a brilliant guy, and I personally only discovered the field of AI safety through his writings, in which I read about and agreed with the many ways he had thought of by which AGI could cause extinction. I, along with a college friend, decided to heed his call for people to start doing something to avert potential extinction.
He’s not always right (a more moderate and accurate figure is someone like Prof. Stuart Russell) but our whole field owes him our gratitude.
Additionally, no one (not insiders at OpenAI, and certainly not a journalist) other than the people in those conversations actually knows what happened, and no one other than Ilya actually knows why he did what he did. Everyone else is relying on rumor and hearsay. For sure, the closer people are to the matter, the more insight they are likely to have, but no one who wasn't in the room actually knows.
...which, as several subreddits dedicated to LLM porn or trolling could tell you, is both mostly pointless and also flags as "unsafe" a ton of stuff you could find on any high school nerd's bookshelf.