zlacker

[return to "Sam Altman goes before US Congress to propose licenses for building AI"]
1. elil17+xC[view] [source] 2023-05-16 14:39:28
>>vforgi+(OP)
This is the message I shared with my senator (edited to remove information which could identify me). I hope others will send similar messages.

Dear Senator [X],

I am an engineer working for [major employer in the state]. I am extremely concerned about the message that Sam Altman is sharing with the Judiciary committee today.

Altman wants to create regulatory roadblocks to developing AI. My company produces AI-enabled products. If these roadblocks had been in place two years ago, my company would not have been able to invest in AI. Now, because we had the freedom to innovate, AI will be bringing new, high-paying jobs to our factories in our state.

While AI regulation is important, it is crucial that there are no roadblocks stopping companies and individuals from even trying to build AIs. Rather, regulation should focus on ensuring the safety of AIs once they are ready to be put into widespread use - this would allow companies and individuals to research new AIs freely while still ensuring that AI products are properly reviewed.

Altman and his ilk try to claim that aggressive regulation (which will only serve to give them a monopoly over AI) is necessary because an AI could hack its way out of a laboratory. Yet they cannot explain how an AI would accomplish this in practice. I hope you will push back against anyone who fear-mongers about sci-fi inspired AI scenarios.

Congress should focus on the real impacts that AI will have on employment. Congress should also consider the realistic risks which AI poses to the public, such as risks from the use of AI to control national infrastructure (e.g., the electric grid) or to make healthcare decisions.

Thank you, [My name]

◧◩
2. kubota+sN[view] [source] 2023-05-16 15:30:07
>>elil17+xC
You lost me at "While AI regulation is important" - nope, Congress does not need to regulate AI.
◧◩◪
3. wnevet+561[view] [source] 2023-05-16 16:41:57
>>kubota+sN
> nope, congress does not need to regulate AI.

Not regulating the air quality we breathe for decades turned out amazing for millions of Americans. Yes, let's do the same with AI! What could possibly go wrong?

◧◩◪◨
4. pizza+781[view] [source] 2023-05-16 16:49:10
>>wnevet+561
I think this is a great argument in the opposite direction... atoms are matter, information isn't. A small group of people subjected many others to poisonous matter. That matter affected their bodies and a causal link could be made.

Even if you really believe that somewhere in the chain of consequences derived from LLMs there could be grave and material damage or other affronts to human dignity, there is almost always a more direct causal link that makes that damage kinetic and physical. And that's the proper locus for regulation. Otherwise this is all just a bit reminiscent of banning numbers and research into numbers.

Want to protect people's employment? Just do that! Enshrine it in law. Want to improve the safety of critical infrastructure and make sure it's reliable? Again, just do that! Want to prevent mass surveillance? Do that! Want to protect against a lack of oversight in complex systems allowing for subterfuge by bad actors? Well, make regulation about proper standards of oversight and human accountability. AI doesn't obviate human responsibility. When the humans who should have been responsible instead cut corners, the blame doesn't fall on the tool that cut the corners but on the corner-cutters themselves.

◧◩◪◨⬒
5. hkt+xC1[view] [source] 2023-05-16 19:17:02
>>pizza+781
> atoms are matter, information isn't

Algorithmic discrimination already exists, so um, yes, information matters.

Add to that the fact that you're posting on a largely American forum, where access to healthcare is mostly predicated on insurance, and just... imagine AI underwriters. There's no court of appeal for insurance. It matters.

◧◩◪◨⬒⬓
6. pizza+fS1[view] [source] 2023-05-16 20:28:42
>>hkt+xC1
I am literally agreeing with you but in a much more precise way. These are questions of “who gets what stuff”, “who gets which house”, “who gets which heart transplant”, “which human being sits in the big chair at which corporation”, “which file on which server that’s part of the SWIFT network reports that you own how much money”, “which wannabe operator decides their department needs to purchase which fascist predictive policing software”, etc.

Imagine I 1. hooked up a camera feed of a lava lamp to generate some bits and then 2. hooked up the US nuclear first strike network to it. I would be an idiot, but would I be an idiot because of 1. or 2.?

Basically I think it's totally reasonable to hold these two beliefs: 1. there is no reason to fear the LLM; 2. there is every reason to fear the LLM in the hands of those who refuse to think about their actions and the burdens they may impose on others, probably because they will justify the means through some kind of wishy-washy appeal to bad probability theory.

The -p log p that you use to judge the sense of some predicted action is just a model; it's just numbers in RAM. Only when those numbers are converted into destructive social decisions do they become something of consequence.
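A minimal sketch in Python of the -p log p quantity in question (the action labels and probabilities here are toy values of my own invention, not anything from an actual model): until something downstream acts on the result, it's arithmetic on numbers in memory and nothing more.

    import math

    # Toy predicted-action distribution from some model: just numbers in RAM.
    probs = {"approve": 0.7, "deny": 0.2, "escalate": 0.1}

    # Shannon entropy, the sum of -p*log(p), scores how uncertain the
    # predicted action is. Nothing here touches the world by itself.
    entropy = -sum(p * math.log(p) for p in probs.values())
    print(f"entropy = {entropy:.3f} nats")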

I agree that society is beginning to design all kinds of ornate algorithmic beating sticks to use against the people. The blame lies with the ones choosing to read tea leaves and then using the tea leaves to justify application of whatever Kafkaesque policies they design.
