zlacker

[return to "Emmett Shear becomes interim OpenAI CEO as Altman talks break down"]
1. valine+v3 2023-11-20 05:39:23
>>andsoi+(OP)
Not a word from Ilya. I can’t wrap my mind around his motivation. Did he really fire Sam over “AI safety” concerns? How is that remotely rational?
2. tdubhr+Z8 2023-11-20 06:10:42
>>valine+v3
If it really was about “safety,” why wouldn’t Ilya have made some statement about opening the details of their model, at least to some independent researchers under tight controls? This is what makes it look like a simple power grab: the board has said absolutely nothing about what actions they would take to move toward a safer model of development.
3. snovv_+dc 2023-11-20 06:30:50
>>tdubhr+Z8
Because they want to slow down further research that would push AGI closer, until the safety/alignment work can catch up.
4. lyu072+Vf 2023-11-20 06:55:55
>>snovv_+dc
But if you really cared about that, why would you be so opaque about everything? Usually people with strong convictions try to convince others of those convictions. For a nonprofit that is supposedly acting in the interests of all mankind, they aren't actually telling us shit. Transparency is pretty much the first thing anyone does who actually cares about ethics and social responsibility.
5. upward+Im 2023-11-20 07:41:22
>>lyu072+Vf
Ilya might believe what Eliezer Yudkowsky is currently saying: that opacity is safer.

https://x.com/esyudkowsky/status/1725630614723084627?s=46

Mr. Yudkowsky is a lot like Richard Stallman: a historically vital but now-controversial figure whom a lot of AI Safety people distance themselves from nowadays, because he has a tendency to exaggerate for rhetorical effect. As a result, he ends up “preaching to the choir” while pushing away or offending people in the general public who might be open to learning about AI x-risk scenarios but haven’t made up their minds yet.

But we in this field owe him a huge debt. I’d sincerely like to publicly thank Mr. Yudkowsky and say that even if he has fallen out of favor for being too extreme in his views and statements, he was one of the 3 or 4 people most central to creating the field of AI safety, and without him, OpenAI and Anthropic would most certainly not exist.

I don’t agree with him that opacity is safer, but he’s a brilliant guy, and I personally discovered the field of AI safety through his writings. In them I read about, and came to agree with, the many ways he thought AGI could cause extinction, and I, along with a college friend, decided to heed his call for people to start doing something to avert potential extinction.

He’s not always right (a more moderate and accurate figure is someone like Prof. Stuart Russell), but our whole field owes him our gratitude.
