
1. lordma+(OP) 2024-05-17 20:11:15
Good. As someone who is a paid-up OpenAI user, I absolutely don't agree that there should be a role for a team screaming to put the brakes on because of some nebulous, imagined "existential risk" from hypothetical future AGI.

There are huge risks from AI today, in terms of upheaval to economies and harm to individuals and minorities, but those need to be tackled by carefully designed legislation focused on real harms, like the EU's AI Act.

That approach imposes very specific obligations that every AI product must meet.

It's better targeted, has wider impact across the industry, and probably allows the technology itself to move faster.

replies(1): >>croes+P32
2. croes+P32 2024-05-18 20:09:22
>>lordma+(OP)
Bad. As someone who is a paid-up OpenAI user, I absolutely agree that there should be a role for a team that can put the brakes on, because some people value profit over risk.

Two years ago, you wouldn't have believed it if someone had promised results like the ones we have now. AGI could appear suddenly, or not for decades. But if nobody pays attention to it, we will definitely notice too late if it does happen.
