zlacker

[parent] [thread] 38 comments
1. valine+(OP)[view] [source] 2023-11-20 05:39:23
Not a word from Ilya. I can’t wrap my mind around his motivation. Did he really fire Sam over “AI safety” concerns? How is that remotely rational?
replies(6): >>buffer+Z >>seanhu+33 >>ignora+x3 >>singul+k4 >>ah765+C4 >>tdubhr+u5
2. buffer+Z[view] [source] 2023-11-20 05:44:07
>>valine+(OP)
Because that's not the actual reason. It looks like a hostile takeover. The "king" of arguably the most important company in the world got kicked out with very little effort. It's pretty extraordinary, and the power shift is extraordinary too.
replies(3): >>voidfu+72 >>sdwvit+J2 >>yreg+w3
3. voidfu+72[view] [source] [discussion] 2023-11-20 05:51:06
>>buffer+Z
"Kicked out" is a bit hyperbolic. They don't have their champion anymore, but the deal and their minority ownership stake are inked. They still get the tech and the profits. They might not have a path to owning OpenAI now, but that was a problem a few years down the road. They can also invest in Altman's new thing and poach OpenAI talent to bolster their internal AI research, which is probably going to get a massive funding boost.

The PR hit will be bad for a few days. Good time to buy MS stock at a discount, but this won't matter in a year or two.

4. sdwvit+J2[view] [source] [discussion] 2023-11-20 05:54:09
>>buffer+Z
Maybe someone from higher up called the board?
replies(1): >>boriss+hc
5. seanhu+33[view] [source] 2023-11-20 05:55:57
>>valine+(OP)
No, he didn't fire Sam over AI safety concerns. That's completely made up by people in the twittersphere. The only thing we know is that the board said the reason was that he lied to the board. The Guardian[1] reported that he was working on a new startup and that staff had been told it was due to a breakdown in communication, and not to do with anything regarding safety, security, malfeasance, or a bunch of other things.

[1] https://www.theguardian.com/technology/2023/nov/18/earthquak...

replies(2): >>frabcu+s5 >>sashan+db
6. yreg+w3[view] [source] [discussion] 2023-11-20 05:59:21
>>buffer+Z
The board firing a CEO is hardly a hostile takeover.
replies(2): >>surrea+S3 >>juped+Ba
7. ignora+x3[view] [source] 2023-11-20 05:59:22
>>valine+(OP)
> Did he really fire Sam over "AI safety" concerns? How is that remotely rational?

Not rational only if (unlike Sutskever, Hinton, and Bengio) you are not a "doomer" / "decel". Ilya is very vocal and on record that he suspects there may be "something else" going on with these models. He and DeepMind claim AlphaGo is already AGI (correction: ASI) in a very narrow domain (https://www.arxiv-vanity.com/papers/2311.02462/). Ilya in particular predicts it is a given that neural networks will achieve broad AGI (superintelligence) before alignment is figured out, unless researchers start putting more resources into it.

(like LeCun, I am not a doomer; but I am also not Hinton to know any better)

replies(3): >>mcpack+65 >>sgregn+Y6 >>esjeon+y7
8. surrea+S3[view] [source] [discussion] 2023-11-20 06:01:54
>>yreg+w3
If you fire one board member (Altman) and remove another from the board (Brockman), it's not exactly friendly either.
replies(1): >>maxbon+C7
9. singul+k4[view] [source] 2023-11-20 06:04:49
>>valine+(OP)
To shine some light on the true nature of the "AI safety tribe", I highly recommend reading the other top HN post/article: https://archive.is/Vqjpr
10. ah765+C4[view] [source] 2023-11-20 06:06:07
>>valine+(OP)
It might be because of AI safety, but I think it's more likely because Sam was executing plans without informing the board, such as making deals with outside companies, allocating funds to profit-oriented products and making announcements about them, and so on. Perhaps he also wanted to reduce investment in the alignment research that Ilya considered important. Hopefully we'll learn the truth soon, though I suspect that it involves confidential deals with other companies and that's why we haven't heard anything.
replies(1): >>ipaddr+Cf
11. mcpack+65[view] [source] [discussion] 2023-11-20 06:09:20
>>ignora+x3
> "[Artificial General Intelligence] in a very narrow domain."

Which is it?

replies(2): >>ignora+p7 >>maxlin+48
12. frabcu+s5[view] [source] [discussion] 2023-11-20 06:10:23
>>seanhu+33
The Atlantic article makes it pretty clear that the fast growth of the commercial business was leaving Ilya with too few resources and too little time to do the safety work he wanted to do: https://archive.ph/UjqmQ
replies(1): >>seanhu+EJ
13. tdubhr+u5[view] [source] 2023-11-20 06:10:42
>>valine+(OP)
If it really was about “safety”, then why wouldn’t Ilya have made some statement about opening the details of their model, at least to some independent researchers under tight controls? This is what makes it look like a simple power grab: the board has said absolutely nothing about what actions they would take to move toward a safer model of development.
replies(2): >>snovv_+I8 >>victor+gi
14. sgregn+Y6[view] [source] [discussion] 2023-11-20 06:19:09
>>ignora+x3
Can you please share the sources for Ilya's views?
replies(1): >>ignora+f7
15. ignora+f7[view] [source] [discussion] 2023-11-20 06:20:43
>>sgregn+Y6
https://archive.is/yjOmt
replies(1): >>zxexz+Eb
16. ignora+p7[view] [source] [discussion] 2023-11-20 06:21:52
>>mcpack+65
Read the paper linked above, and if you don't agree, that's okay. There are many who don't.
replies(2): >>maxlin+B8 >>calf+fa
17. esjeon+y7[view] [source] [discussion] 2023-11-20 06:22:29
>>ignora+x3
> AGI in a very narrow domain

The definition of AGI always puzzles me, because the "G" in AGI is "general", and that word certainly doesn't play well with "narrow". AGI is just the new buzzword, I guess.

replies(1): >>famous+F8
18. maxbon+C7[view] [source] [discussion] 2023-11-20 06:22:58
>>surrea+S3
Firing generally isn't friendly, but no one "took over." The people who had the power exercised it. Maybe they shouldn't have (I feel no compulsion to argue on their behalf), but calling it a "takeover" isn't correct.

I think when people say "takeover" or "coup" it's because they want to convey their view of the moral character of events, that they believe it was an improper decision. But it muddies the waters and I wish they'd be more direct. "It's a coup" is a criticism of how things happened, but the substantive disagreements are actually about the fact that it happened and why it happened.

I see lots of polarized debate any time something AI safety related comes up, so I just don't really believe that most people would feel differently if the same thing happened but the corporate structure was more conventional, or if Brockman's board seat happened to be occupied by someone who was sympathetic to ousting Altman.

replies(1): >>calf+Ha
19. maxlin+48[view] [source] [discussion] 2023-11-20 06:26:21
>>mcpack+65
I think the guy read the paper he linked the wrong way. The paper explicitly separates "narrow" and "general" types: AlphaGo is in the "virtuoso" bracket for narrow AI, and ChatGPT is in the "emerging" bracket for general AI. The only thing it puts as AGI is a few levels up from virtuoso, but in the "general" type.
20. maxlin+B8[view] [source] [discussion] 2023-11-20 06:30:02
>>ignora+p7
Check it again; I think you might have misread it. It categorizes things in a way that clearly separates AlphaGo from anything even shooting for "AGI". The "general" part of AGI can't really be skipped, or the words don't make sense anymore.
replies(1): >>ignora+Y8
21. famous+F8[view] [source] [discussion] 2023-11-20 06:30:30
>>esjeon+y7
Well, there's nothing narrow about SotA LLMs. The main hinge is just competence.

I think the guy you're replying to misunderstood the article he's alluding to, though. They don't claim anything about a narrow AGI.

22. snovv_+I8[view] [source] [discussion] 2023-11-20 06:30:50
>>tdubhr+u5
Because they want to slow down further research that would push AGI closer, until the safety/alignment side can catch up.
replies(2): >>lyu072+qc >>ffgjgf+2j
23. ignora+Y8[view] [source] [discussion] 2023-11-20 06:32:40
>>maxlin+B8
Ah, gotcha; I meant "superintelligence" (which is ASI and not AGI).
24. calf+fa[view] [source] [discussion] 2023-11-20 06:41:32
>>ignora+p7
Has anyone written a response to this paper? Its main gist is to try to define AGI empirically, using only what is measurable.
25. juped+Ba[view] [source] [discussion] 2023-11-20 06:44:01
>>yreg+w3
Boards have exactly one job.

(It's firing the CEO, if anyone wasn't aware.)

26. calf+Ha[view] [source] [discussion] 2023-11-20 06:44:19
>>maxbon+C7
"It's a coup" is loaded language and lets the user insinuate their position without actually explaining and justifying it.
27. sashan+db[view] [source] [discussion] 2023-11-20 06:47:38
>>seanhu+33
Having spoken to a bunch of folks at OpenAI, it really does seem to be about safety. Ilya was extremely worried and did not like the idea of GPTs, since users can train AIs to do arbitrarily harmful stuff.
28. zxexz+Eb[view] [source] [discussion] 2023-11-20 06:50:41
>>ignora+f7
For what it's worth, the MIT Technology Review these days is considered to be closer to a "tech tabloid" than an actual news source. I personally would find it hard to believe (on gut instinct, nothing empirical) that AGI has been achieved before we have a general playbook for domain-specific SotA models. And I'm of the 'faction' that AGI can't come soon enough.
replies(1): >>ignora+ye
29. boriss+hc[view] [source] [discussion] 2023-11-20 06:55:14
>>sdwvit+J2
ChatGPT-5?
30. lyu072+qc[view] [source] [discussion] 2023-11-20 06:55:55
>>snovv_+I8
But if you really cared about that, why would you be so opaque about everything? Usually people with strong convictions try to convince other people of those convictions. For a non-profit that is supposedly acting in the interests of all mankind, they aren't actually telling us shit. Transparency is pretty much the first thing anyone does who actually cares about ethics and social responsibility.
replies(1): >>upward+dj
31. ignora+ye[view] [source] [discussion] 2023-11-20 07:10:44
>>zxexz+Eb
> hard to believe (on gut instinct, nothing empirical) that AGI has been achieved before we have a general playbook for domain-specific SotA models

Ilya is pretty serious about alignment (precisely?) due to his gut instinct: https://www.youtube.com/watch?v=Ft0gTO2K85A (2 Nov 2023)

replies(1): >>zxexz+6Sj
32. ipaddr+Cf[view] [source] [discussion] 2023-11-20 07:16:51
>>ah765+C4
It has to do with a tribe in OpenAI that believes AI will take over the world in the next 10 years, so we need to spend much of our effort toward that goal. What that translates to is strong prompt censorship and automated tools to ban those who keep asking things we don't want you to ask.

Sam has been agreeing with this group and using this as the reason to go commercial: to provide funding for that goal. The problem is that these new products are coming too fast and eating into the resources they can use for safety training.

This group never wanted to release ChatGPT but was forced to because a rival company made up of ex-OpenAI employees was going to release their own version. To the safety group, things have been getting worse since that release.

Sam is smart enough to use the safety group's fear against them. They finally clued in.

OpenAI never wanted to give us ChatGPT. Their hands were forced by a rival, and Sam and the board made a decision that brought in the next breakthrough. From that point things snowballed. Sam knew he needed to run before bigger players moved in. It became too obvious after DevDay that the safety team would never be able to catch up, and they pulled the brakes.

replies(1): >>crooke+et2
33. victor+gi[view] [source] [discussion] 2023-11-20 07:34:29
>>tdubhr+u5
That's because it's not about safety, it's about ego, vanity, and delusion.
34. ffgjgf+2j[view] [source] [discussion] 2023-11-20 07:40:14
>>snovv_+I8
Wouldn't that mean that they'll just be left behind, and it won't matter what their goal is?
replies(1): >>snovv_+ZZ
35. upward+dj[view] [source] [discussion] 2023-11-20 07:41:22
>>lyu072+qc
Ilya might be a believer in what Eliezer Yudkowsky is currently saying, which is that opacity is safer.

https://x.com/esyudkowsky/status/1725630614723084627?s=46

Mr. Yudkowsky is a lot like Richard Stallman. He’s a historically vital but now-controversial figure whom a lot of AI Safety people tend to distance themselves from nowadays, because he has a tendency to exaggerate for rhetorical effect. This means that he ends up “preaching to the choir” while pushing away or offending people in the general public who might be open to learning about AI x-risk scenarios but haven’t made up their mind yet.

But we in this field owe him a huge debt. I’d sincerely like to publicly thank Mr. Yudkowsky and say that even if he has fallen out of favor for being too extreme in his views and statements, Mr. Yudkowsky was one of the 3 or 4 people most central to creating the field of AI safety, and without him, OpenAI and Anthropic would most certainly not exist.

I don’t agree with him that opacity is safer, but he’s a brilliant guy, and I personally only discovered the field of AI safety through his writings, through which I read about and agreed with the many ways he had thought of by which AGI can cause extinction. Another of my college friends and I decided to heed his call for people to start doing something to avert potential extinction.

He’s not always right (a more moderate and accurate figure is someone like Prof. Stuart Russell) but our whole field owes him our gratitude.

36. seanhu+EJ[view] [source] [discussion] 2023-11-20 09:59:18
>>frabcu+s5
I can buy that they fired him over a disagreement about strategy (i.e. are we going too fast, are we concentrating on the wrong things, etc.), because in general, of course, board members get fired if they can't work together on a common strategy. But the narrative that has taken over lots of places over the weekend is more along the lines of: he got fired because they had created a sentient AI and Ilya was worried about it. That just makes no sense to me.

Additionally, no one (not insiders at OpenAI and certainly not a journalist) other than the people in those conversations actually knows what happened, and no one other than Ilya actually knows why he did what he did. Everyone else is relying on rumor and hearsay. For sure, the closer people are to the matter the more insight they are likely to have, but no one who wasn't in the room actually knows.

37. snovv_+ZZ[view] [source] [discussion] 2023-11-20 11:50:54
>>ffgjgf+2j
It depends how far ahead they currently are.
38. crooke+et2[view] [source] [discussion] 2023-11-20 18:40:52
>>ipaddr+Cf
> What that translates to is strong prompt censorship and automated tools to ban those who keep asking things we don't want you to ask.

...which, as several subreddits dedicated to LLM porn or trolling could tell you, is both mostly pointless and blocks as "unsafe" a ton of stuff you could find on any high school nerd's bookshelf.

39. zxexz+6Sj[view] [source] [discussion] 2023-11-26 04:49:27
>>ignora+ye
I don't doubt he's serious in what he believes. I respect Ilya greatly as a researcher. Do you have notes or a time-point for me to listen to in that podcast? I bemoan the trend toward podcasts-as-references - even a time-point reference (or even multiple!) into the transcript would be greatly appreciated!