zlacker

[return to "Introducing Superalignment"]
1. voldac+sb1[view] [source] 2023-07-05 22:15:29
>>tim_sw+(OP)
>How do we ensure AI systems much smarter than humans follow human intent?

What is human intent? My intent may be very different from most humans'. It seems like ClosedAI wants their system to follow the desires of some people and not others, without saying which ones or why.

◧◩
2. ben_w+kf1[view] [source] 2023-07-05 22:37:12
>>voldac+sb1
You're seeing what you want to see.

They're repeatedly very specific about the whole "this can kill all of us if we do it wrong", so it's more than a little churlish to parrot the name "ClosedAI" when they're announcing hiring a researcher to figure out how to align with anyone, at all, even in principle.

◧◩◪
3. junon+hs1[view] [source] 2023-07-05 23:56:15
>>ben_w+kf1
I'm still a bit hung up on "it can kill all of us". How?
◧◩◪◨
4. ctoth+bN1[view] [source] 2023-07-06 02:29:26
>>junon+hs1
Here's an article[0] and a good short story[1] explaining exactly this.

[0]: No Physical Substrate, No Problem https://slatestarcodex.com/2015/04/07/no-physical-substrate-...

[1]: It Looks Like You're Trying To Take Over The World https://gwern.net/fiction/clippy

◧◩◪◨⬒
5. junon+Dm2[view] [source] 2023-07-06 07:26:40
>>ctoth+bN1
The clippy example already starts out with many assumptions that simply aren't true today.

LLMs are not going to destroy humanity. We need a paradigm shift and a new model for AI for that to happen. ClosedAI is irresponsibly trying to create hype and mystery around their product, which always sells.

◧◩◪◨⬒⬓
6. ben_w+3s2[view] [source] 2023-07-06 08:14:16
>>junon+Dm2
Will you please stop calling them ClosedAI? That just comes across like a playground taunt, like "libtard" or "CONservative".
◧◩◪◨⬒⬓⬔
7. junon+rA6[view] [source] 2023-07-07 08:09:41
>>ben_w+3s2
I'll call them what they are: closed, and antithetical to their original goals.