Locking in values in that way would be considered a failure of alignment by anyone I've read who writes about alignment. Not the worst possible alignment failure (compared to locking in “the value of the entity legally known as OpenAI”, for example), but definitely a straightforward failure to achieve alignment.
>> cwillu (OP):
I know it's a theme; MacAskill discusses it in his book. In practice, this is the direction all the "AI safety" departments and organisations seem to be heading in.
A world where everyone is paperclipped is probably better than one controlled by psychopathic totalitarian human overlords supported by AI, yet the direction of current research seems to be leading us toward the latter scenario.