zlacker

[return to "Jan Leike's OpenAI departure statement"]
1. daniel+bh 2024-05-17 17:51:09
>>jnnnth+(OP)
> “Over the past few months […] we were struggling for compute

OpenAI did say they were setting aside 20% of compute to ensure alignment [1], but if you read the fine print, what they actually said was that they are "dedicating 20% of the compute we've secured *to date* to this effort" (emphasis mine). That pledge is a fixed absolute amount, not a fraction of future capacity: if their overall compute grows 10x, that 20% becomes 2% of the total. Is OpenAI going to be responsible, or is it just a mad race (modelled from the top) to "win" the AI game?

[1] https://openai.com/index/introducing-superalignment/?utm_sou...
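The dilution arithmetic above can be sketched with made-up numbers (the compute units and the 10x growth factor are illustrative, not OpenAI figures):

```python
# Sketch of the dilution argument: the pledge fixes an absolute
# amount (20% of compute secured at pledge time), not a fraction
# of whatever compute exists later.
compute_at_pledge = 100.0           # arbitrary units, hypothetical
pledged = 0.20 * compute_at_pledge  # fixed absolute allocation

total_now = 10 * compute_at_pledge  # total compute after 10x growth
effective_share = pledged / total_now
print(f"{effective_share:.0%}")     # -> 2%
```

The point is just that a pledge denominated in past compute shrinks, as a share, at the same rate that total compute grows.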

2. PeterS+gz 2024-05-17 19:50:26
>>daniel+bh
Alignment with whom?
3. jonono+iC1 2024-05-18 09:35:39
>>PeterS+gz
Alignment with anyone or anything at all is probably the first step, and we still don't know how to do even that, even conceptually. But the first real target would be alignment with the creator of the AI, and the goals and values they set.

That is, alignment with humanity/society as a whole is even further away, and might even be considered out of scope for AI research: ensuring that the AI's creator (a person or organization) is aligned with society is arguably a problem for the political domain.
