This part in particular caught my eye: "Other assumptions could also break down in the future, like favorable generalization properties during deployment". There have been actual experiments in which AIs appeared to learn their objective successfully in training, and then behaved unexpectedly when deployed in a broader environment.[1]
I've seen some leading AI researchers dismiss alignment concerns without engaging with the arguments at all, and I've yet to see a serious rebuttal that addresses what alignment people are actually concerned about.
When it comes to people…
Expert who’s worried: conflict of interest or a quack
Non-expert: dismissible because non-expert
Was always worried: paranoiac
Recently became worried: flip-flopper with no conviction
When it comes to the tech itself…
Bullish case: AI is super powerful and will change the world for the better
Bearish case: AI can't do much, lol, what are you worried about? They're just words on a screen