
upward (OP) | 2023-11-20 07:41:22
Ilya might be a believer in what Eliezer Yudkowsky is currently saying, which is that opacity is safer.

https://x.com/esyudkowsky/status/1725630614723084627?s=46

Mr. Yudkowsky is a lot like Richard Stallman: a historically vital but now-controversial figure whom a lot of AI Safety people tend to distance themselves from, because he has a tendency to exaggerate for rhetorical effect. As a result, he ends up “preaching to the choir” while pushing away or offending people in the general public who might be open to learning about AI x-risk scenarios but haven’t made up their minds yet.

But we in this field owe him a huge debt. I’d sincerely like to publicly thank Mr. Yudkowsky and say that even if he has fallen out of favor for being too extreme in his views and statements, he was one of the 3 or 4 people most central to creating the field of AI safety, and without him, OpenAI and Anthropic would most certainly not exist.

I don’t agree with him that opacity is safer, but he’s a brilliant guy, and I personally only discovered the field of AI safety through his writings. Through them I read about, and agreed with, the many ways he had thought of by which AGI could cause extinction, and another of my college friends and I decided to heed his call for people to start doing something to avert that potential extinction.

He’s not always right (a more moderate and accurate figure is someone like Prof. Stuart Russell), but our whole field owes him our gratitude.
