Why? Because if I’m not right then I am convinced that AI is going to be a force for evil. It will power scams on an unimaginable scale. It will destabilize labor at a speed that will make the Industrial Revolution seem like a gentle breeze. It will concentrate immense power and wealth in the hands of people who I don’t trust. And it will do all of this while consuming truly shocking amounts of energy.
Not only do I think these things will happen, I think the Altmans of the world would eagerly agree that they will happen. They just think it will be interesting/profitable for them. It won't be for us.
And we, the engineers, are in a unique position. Unlike people in any other industry, we can affect the trajectory of AI. My skepticism (and unwillingness to aid in the advancement of AI) might slow things down a billionth of a percent. Maybe if there are more of me, things will slow down enough that we can find some sort of effective safeguards on this stuff before it’s out of hand.
So I’ll keep being skeptical, until it’s over.
I firmly believe that too. That's why I've been investing a great deal of effort in helping people understand what this stuff can and can't do and how best to make use of it.
I don't think we can stop it, but I do think (hope) we can show people how to use it in ways where the good applications outweigh the bad ones.
I stand by what I wrote about it though: https://simonwillison.net/2025/Mar/19/vibe-coding/
I think it's a net positive for regular humans to be able to build tools for their own personal use, and I think my section on "when is it OK to vibe code?" (only for low stakes projects, treat with extreme caution if private data or security is involved) is something I wish people had paid more attention to! https://simonwillison.net/2025/Mar/19/vibe-coding/#when-is-i...