zlacker

[return to "Sam Altman goes before US Congress to propose licenses for building AI"]
1. elil17+xC[view] [source] 2023-05-16 14:39:28
>>vforgi+(OP)
This is the message I shared with my senator (edited to remove information which could identify me). I hope others will send similar messages.

Dear Senator [X],

I am an engineer working for [major employer in the state]. I am extremely concerned about the message that Sam Altman is sharing with the Judiciary committee today.

Altman wants to create regulatory roadblocks to developing AI. My company produces AI-enabled products. If these roadblocks had been in place two years ago, my company would not have been able to invest in AI. Now, because we had the freedom to innovate, AI will be bringing new, high-paying jobs to our factories in our state.

While AI regulation is important, it is crucial that there are no roadblocks stopping companies and individuals from even trying to build AIs. Rather, regulation should focus on ensuring the safety of AIs once they are ready to be put into widespread use - this would allow companies and individuals to research new AIs freely while still ensuring that AI products are properly reviewed.

Altman and his ilk claim that aggressive regulation (which will only serve to give them a monopoly over AI) is necessary because an AI could hack its way out of a laboratory. Yet they cannot explain how an AI would accomplish this in practice. I hope you will push back against anyone who fear-mongers about sci-fi-inspired AI scenarios.

Congress should focus on the real impacts that AI will have on employment. Congress should also consider the realistic risks that AI poses to the public, such as risks from the use of AI to control national infrastructure (e.g., the electric grid) or to make healthcare decisions.

Thank you, [My name]

2. kubota+sN[view] [source] 2023-05-16 15:30:07
>>elil17+xC
You lost me at "While AI regulation is important" - nope, congress does not need to regulate AI.
3. wnevet+561[view] [source] 2023-05-16 16:41:57
>>kubota+sN
> nope, congress does not need to regulate AI.

Not regulating the quality of the air we breathe for decades turned out amazing for millions of Americans. Yes, let's do the same with AI! What could possibly go wrong?

4. pizza+781[view] [source] 2023-05-16 16:49:10
>>wnevet+561
I think this is a great argument in the opposite direction: atoms are matter, information isn't. A small group of people subjected many others to poisonous matter. That matter affected their bodies, and a causal link could be made.

Even if you really believe that somewhere in the chain of consequences derived from LLMs there could be grave and material damage or other affronts to human dignity, there is almost always a more direct causal link that makes that damage kinetic and physical. And that's the proper locus for regulation. Otherwise this is all just a bit reminiscent of banning numbers and research into numbers.

Want to protect people's employment? Just do that! Enshrine it in law. Want to improve the safety of critical infrastructure and make sure it's reliable? Again, just do that! Want to prevent mass surveillance? Do that! Want to protect against a lack of oversight in complex systems allowing for subterfuge via bad actors? Well, make regulation about proper standards of oversight and human accountability. AI doesn't obviate human responsibility, and when humans who should have been responsible cut corners instead, the blame falls not on the tool that cut the corners but on the corner-cutters themselves.

5. DirkH+kX1[view] [source] 2023-05-16 20:55:19
>>pizza+781
Your argument could just as easily be used to argue why human cloning and genetic engineering for specific desirable traits should not be illegal.

And it isn't a strong argument here for the same reason it isn't a good argument there: allow human cloning and just focus on regulating the more direct causal links - employment loss for non-clones from mass-produced hyper-intelligent clones, ensuring clones have legal rights, and requiring proper oversight and non-clone human accountability.

Maybe those things could all make ethical human cloning viable. But I think the world coming together and being like "holy shit this is happening too fast. Our institutions aren't ready at all nor will they adapt fast enough. Global ban" was the right call.

It is not impossible that a similar call is also appropriate here with AI. I personally dunno what the right call is, but I'm pretty skeptical of any strong claim that it could never be the right call to outright ban some forms of advanced AI research just like we did with some forms of advanced genetic engineering research.

This isn't like banning numbers at all. The blame falling on the corner-cutters doesn't mean the right call is always to just tell the blamed not to cut corners. In some cases the right call is instead taking away their corner-cutting tool.

At least until our institutions can catch up.
