zlacker

[parent] [thread] 3 comments
1. daniel+(OP)[view] [source] 2024-05-17 17:51:09
> “Over the past few months […] we were struggling for compute

OpenAI literally said they were setting aside 20% of compute to ensure alignment [1], but if you read the fine print, what they said was that they are “dedicating 20% of the compute we’ve secured ‘to date’ to this effort” (emphasis mine). So if their overall compute has since increased 10x, that fixed pledge is suddenly only 2% of the total, right? Is OpenAI going to be responsible, or is it just a mad race (modelled from the top) to “win” the AI game?

[1] https://openai.com/index/introducing-superalignment/?utm_sou...
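The dilution arithmetic in the comment above can be sketched quickly (the 10x growth factor is the commenter's hypothetical, not a published figure):

```python
# A fixed pledge of 20% of compute "secured to date" shrinks as a share
# of the total whenever overall compute grows afterwards.

def pledged_share_of_total(pledge_fraction: float, growth_factor: float) -> float:
    """Fraction of the *current* total that the original fixed pledge
    represents after total compute grows by `growth_factor`."""
    return pledge_fraction / growth_factor

base = 100.0                       # compute secured "to date" (arbitrary units)
pledge = 0.20 * base               # the fixed 20% set-aside: 20.0 units
total_now = base * 10              # hypothetical: compute has grown 10x
print(pledge / total_now)          # 0.02 -> the pledge is now 2% of the total
```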

replies(1): >>PeterS+5i
2. PeterS+5i[view] [source] 2024-05-17 19:50:26
>>daniel+(OP)
Alignment with whom?
replies(2): >>ben_w+zr >>jonono+7l1
3. ben_w+zr[view] [source] [discussion] 2024-05-17 21:02:31
>>PeterS+5i
Last I heard, the concept of AI-alignment was pre-paradigmatic.

This means that we're not ready to answer "with whom", because we don't know what "aligned" even means.

4. jonono+7l1[view] [source] [discussion] 2024-05-18 09:35:39
>>PeterS+5i
Alignment with anyone or anything at all is probably the first step, and we still don't know how to do that, even conceptually. But the first real target would be alignment with the creator of the AI, and the goals and values they set.

That is, alignment with humanity/society as a whole is even further away, and might even be considered out of scope for AI research: ensuring that the AI's creator (a person or organization) is aligned with society arguably belongs to the political domain.
