If you take any of the downsides seriously, whether misinformation, surveillance, laundered bias, or x-risk, how does open-sourcing model weights or training data solve them? Open source is a lot of things, but misuse-resistant isn't one of them (and the "given enough eyeballs, all bugs are shallow" claim hasn't held up in practice even for high-level code, much less for giant matrices and terabytes of text). Is there a path forward that doesn't involve either a lot of downside risk (even if mostly for people who aren't on HN tinkering with frontier models themselves, in the worlds where surveillance or bias is the main problem) or significant regulation?
I don't particularly like or trust Altman, but I don't think he'd obviously be less self-serving if he opposed all regulation instead.
The laypeople in the middle, the ones who have been happily plugging prompts into ChatGPT and calling themselves "prompt experts," are the most excited.
For those who truly understand AI, there is a lot to genuinely worry about. Don't confuse that with saying we shouldn't work on AI or should abandon it. I truly believe this is the next great revolution: 1,000x more transformative than the industrial revolution, and 100x more transformative than the internet revolution. But it is worth pausing to consider the effects of our work before we run headlong into changes that could drastically reshape everyone's daily life.