zlacker

[parent] [thread] 2 comments
1. nabla9+(OP)[view] [source] 2023-11-20 14:40:22
So this was a completely unnecessary cock-up -- still ongoing. Without Ilya's vote this would not even be a thing. This is really comical, a Naked Gun type mess.

Ilya Sutskever is one of the best in AI research, but everything he and others do related to AI alignment turns into shit without substance.

It makes me wonder if AI alignment is possible even in theory, and if it is, maybe it's a bad idea.

replies(1): >>coffee+W7
2. coffee+W7[view] [source] 2023-11-20 15:22:41
>>nabla9+(OP)
We can’t even get people aligned. Thinking we can control a super intelligence seems kind of silly.
replies(1): >>colins+sV1
3. colins+sV1[view] [source] [discussion] 2023-11-20 23:13:48
>>coffee+W7
i always thought it was the opposite. the different entities in a society are frequently misaligned, yet societies regularly persist beyond the span of any single person.

companies in a capitalist system are explicitly misaligned with each other; success of the individual within a company is misaligned with the success of the company whenever it grows large enough. parties within an electoral system are misaligned with each other; the individual is often more aligned with a third party, yet the lesser-aligned two-party system frequently rules. the three branches of democratic government (executive, legislative, judicial) are said to exist for the sake of being misaligned with each other.

so AI agents, potentially more powerful than the individual human, might be misaligned with the broader interests of the society (or of its human individuals). so are you and i and every other entity: why is this instance of misalignment worrisome to any disproportionate degree?
