zlacker

2 comments
1. jkapla+(OP) 2023-11-22 07:22:15
I feel like the "safety" crowd lost the PR battle, in part, by framing it as "safety" and over-emphasizing existential risk. Like you say, not that many people truly take that seriously right now.

But even if those types of problems don't surface anytime soon, this wave of AI is almost certainly going to be a powerful, society-altering technology; potentially more powerful than any in decades. We've all seen what can happen when powerful tech is put in the hands of companies and a culture whose only incentives are growth, revenue, and valuation -- the results can be not great. And I'm pretty sure a lot of the general public (and OpenAI staff) care about THAT.

For me, the safety/existential stuff is just one facet of the broader problem of aligning tech companies + their technology with humanity at large better than we have lately. And that's especially important for landscape-altering tech like AI, even if it's not literally existential (although it may be).

replies(2): >>concor+db >>cyanyd+aB
2. concor+db 2023-11-22 08:47:14
>>jkapla+(OP)
> Like you say, not that many people truly take that seriously right now.

Eh? Polls on the matter show widespread public support for a pause due to safety concerns.

3. cyanyd+aB 2023-11-22 12:28:02
>>jkapla+(OP)
No one who wants to capitalize on AI appears to take it seriously. Especially given how grey that "safety" is. I'm not concerned AI is going to nuke humanity; I'm more concerned it'll reinforce racism, bias, and the rest of humanity's irrational behavior, because it's _blindly_ using existing history to predict the future.

We've seen it happen multiple times in the past decade. That's safety.

The decision this topic discusses means business is winning, and they will absolutely reinforce the idea that the only thing that matters is whether these systems serve their business cases.

That's bad, and unsafe.
