trasht (OP) 2023-11-20 10:51:42
I suppose "safety" means different things to different people. Elon seems to be the type who cares about existential risks. One reading of him is that he sees Tesla, Twitter, and SpaceX all as tools to mitigate what he regards as existential risks.

In the case of Tesla, to accelerate the development of electric cars; in the case of Twitter, to reduce the probability of civil war; and in the case of SpaceX, to eventually have humanity (or our descendants) spread out enough that a single catastrophic event (a meteor impact, gray goo, or similar) doesn't wipe us all out at once.

His detractors will obviously question both his motives and his methods, but if we imagine he's acting in good faith (whether or not he's wrong), his approach to AI fits the pattern, including his story about why he helped start OpenAI in the first place.

For someone with an x-risk approach to AI safety, the first concern is, to quote Ilya from the recent Alignment Workshop: "As a bare minimum, let's make it so that if the tech does 'bad things', it's because of its operators, rather than due to some unexpected behavior".

In other words, for someone concerned with existential risk, even intentional "bad uses", such as deploying AI in killer robots at large scale in war or a dictator using AI to suppress a population, are secondary concerns.

And it appears to me that Elon and Ilya both have this outlook, while Sam may be more concerned with shorter-term social impacts.
