But even if those types of problems don't surface anytime soon, this wave of AI is almost certainly going to be a powerful, society-altering technology; potentially more powerful than any in decades. We've all seen what happens when powerful tech is put in the hands of companies and a culture whose only incentives are growth, revenue, and valuation -- the results are often not great. And I'm pretty sure a lot of the general public (and OpenAI staff) care about THAT.
For me, the safety/existential stuff is just one facet of the general problem of aligning tech companies + their technology with humanity at large better than we have lately. And that's especially important for landscape-altering tech like AI, even if it's not literally existential (although it may be).
Eh? Polls on the matter show widespread public support for a pause due to safety concerns.
We've seen it happen in multiple cases over the past decade. That's a safety issue too.
The decision this topic discusses means business is winning, and they will absolutely reinforce the idea that the only thing that matters is whether these systems serve the business case.
That's bad, and unsafe.