Honestly, pretty sick
I think that anecdote made me like this guy even if I disagree with him about the dangers of AI.
Imagine if the US or any other government of the 1800s had gained that much power, 'locking in' their repugnant values as the moral truth, backed by total control of the world.
The problem of defining "what's a good outcome" is a subproblem of alignment.
The 2003 Northeast blackout that affected 50 million people was partially caused by a race condition. https://www.theregister.com/2004/04/08/blackout_bug_report/
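For anyone unfamiliar with the term: the details of the actual XA/21 alarm-software bug were never published in full, but a minimal Python sketch of the general failure mode (a lost-update race, with the counter and timing entirely my own illustration) looks like this:

    # Not the real blackout code; just an illustrative lost-update race.
    # Two threads each read a shared counter, yield, then write back an
    # incremented value. Without a lock, writes clobber each other and
    # increments are silently lost.
    import threading
    import time

    counter = 0

    def unsafe_increment(times):
        global counter
        for _ in range(times):
            current = counter      # read shared state
            time.sleep(0)          # yield to the other thread, widening the race window
            counter = current + 1  # write back; may overwrite the other thread's update

    threads = [threading.Thread(target=unsafe_increment, args=(10_000,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(f"expected 20000, got {counter}")  # typically far fewer

Run it and the final count comes up short, which is the essence of the bug class the report describes: correct-looking code whose behavior depends on thread scheduling.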
A world where everyone is paperclipped is probably better than one controlled by psychopathic totalitarian human overlords supported by AI, yet the direction of current research seems to be leading us into the latter scenario.
The fire is OpenAI controlling an AI with their alignment efforts. The analogy here is that some other company could recreate the AGI-under-alignment and just... decide to remove the alignment controls. In other words, build another effigy and not set it on fire.
Brockman had a robot as the ring bearer at his wedding. And instead of asking how colleagues were doing, they would ask "What is your life a function of?" This was 2020.
https://www.theatlantic.com/technology/archive/2023/11/sam-a...
That doesn't sound simple. Not all humans have the same moral code, so who gets to decide which is the "correct" one?