zlacker

[return to "Jan Leike's OpenAI departure statement"]
1. llamai+lg 2024-05-17 17:44:43
>>jnnnth+(OP)
This won't make a dent in the logical armor of AI optimists:

[ ] If you are not intimately familiar with the development of AI, your warnings on safety can be disregarded due to your basic ignorance about the development of AI

[x] If you are intimately familiar with the development of AI, your warnings on safety can be disregarded due to potential conflicts of interest and koolaid drinking

Unbridled optimism lives another day!

2. skepti+vy 2024-05-17 19:44:52
>>llamai+lg
All I’d like to see from AI safety folks is an empirical argument demonstrating that we’re remotely close to AGI, and that AGI is dangerous.

Sorry, but sci-fi novels are not going to cut it here. If anything, the last year and a half has only supported the notion that we’re not close to AGI.
