Dear Senator [X],
I am an engineer working for [major employer in the state]. I am extremely concerned about the message that Sam Altman is sharing with the Judiciary Committee today.
Altman wants to create regulatory roadblocks to developing AI. My company produces AI-enabled products. If these roadblocks had been in place two years ago, my company would not have been able to invest in AI. Now, because we had the freedom to innovate, AI will be bringing new, high-paying jobs to our factories in this state.
While AI regulation is important, it is crucial that there are no roadblocks stopping companies and individuals from even trying to build AIs. Rather, regulation should focus on ensuring the safety of AIs once they are ready to be put into widespread use. This would allow companies and individuals to research new AIs freely while still ensuring that AI products are properly reviewed.
Altman and his ilk try to claim that aggressive regulation (which will only serve to give them a monopoly over AI) is necessary because an AI could hack its way out of a laboratory. Yet, they cannot explain how an AI would accomplish this in practice. I hope you will push back against anyone who fear-mongers about sci-fi inspired AI scenarios.
Congress should focus on the real impacts that AI will have on employment. Congress should also consider the realistic risks which AI poses to the public, such as risks from the use of AI to control national infrastructure (e.g., the electric grid) or to make healthcare decisions.
Thank you, [My name]
Not regulating the quality of the air we breathe for decades turned out amazing for millions of Americans. Yes, let's do the same with AI! What could possibly go wrong?
Even if you really believe that somewhere in the chain of consequences derived from LLMs there could be grave and material damage or other affronts to human dignity, there is almost always a more direct causal link that makes that damage kinetic and physical. And that is the proper locus for regulation. Otherwise this is all just a bit reminiscent of banning numbers and research into numbers.
Want to protect people's employment? Just do that! Enshrine it in law. Want to improve the safety of critical infrastructure and make sure it's reliable? Again, just do that! Want to prevent mass surveillance? Do that! Want to protect against a lack of oversight in complex systems allowing for subterfuge by bad actors? Then make regulation about proper standards of oversight and human accountability. AI doesn't obviate human responsibility, and a lack of responsibility on the part of humans who should have been responsible, and who instead cut corners, doesn't mean that the blame falls on the tool that cut the corners; it falls on the corner-cutters themselves.
These policies are purely imaginary. Only when they are implemented into human law do they gain a grain of substance, and even then they remain imaginary. Failure to comply can be kinetic, but that is a contingency, not the object (matter :D).
Personally, I see good reasons for regulations on privacy, intellectual property, filming people in my house's bathroom, NDAs, etc. These subjects are central to the way society works today. At least Western society would be severely affected if these subjects suddenly became a free-for-all.
I am not convinced we need such regulation for AI at this point of technology readiness, but if the social implications create unacceptable imbalances, we can start by regulating in detail. If detailed caveats still do not work, then broader law can come. Which leads to my own theory:
All this turbulence about regulation reflects a mismatch between technological, political, and legal knowledge. Tech people don't know the law or how it flows from policy. Politicians don't know the tech and have not seen its impacts on society. Naturally there is a pressure gradient from both sides that generates turbulence. The pressure gradient is high because the stakes are high: for technologists, the killing of a promising new field; for politicians, a large majority of their constituency being rendered useless.
Final point: if one sees AI as a means of production which can be monopolised by a few capital-rich actors, we may see a remake of 19th-century inequality. That inequality produced one of the most powerful ideologies known: Communism.
We're more likely to see a theocratic movement centered on the struggle of human souls vs the soulless simulacra of AI.
Exactly! A friend of mine who is into communist ideology thinks that whichever society taps AI for productive efficiency, and even for policy, will become the new hegemon. I have no immediate counterpoint besides the technology not being there yet.
I can definitely imagine an LLM based on political manifestos. A personal conversation with your senator at any time, about any subject! That is just the basic part, though: the politician being augmented by the LLM.
The bad part is a party driven by an LLM or similar political model, where the human you see and elect is just a mouthpiece, like in "The Moon Is a Harsh Mistress". Policy would all be algorithmic, and the LLM would provide the interface between the fundamental processing and the mouthpiece.
These tensions will likely lead to the conflicts you mention. I am pretty sure there will be a new -ism.