Dear Senator [X],
I am an engineer working for [major employer in the state]. I am extremely concerned about the message that Sam Altman is sharing with the Judiciary Committee today.
Altman wants to create regulatory roadblocks to developing AI. My company produces AI-enabled products. If these roadblocks had been in place two years ago, my company would not have been able to invest in AI. Now, because we had the freedom to innovate, AI will be bringing new, high-paying jobs to our factories in our state.
While AI regulation is important, it is crucial that there are no roadblocks stopping companies and individuals from even trying to build AIs. Rather, regulation should focus on ensuring the safety of AIs once they are ready to be put into widespread use - this would allow companies and individuals to research new AIs freely while still ensuring that AI products are properly reviewed.
Altman and his ilk try to claim that aggressive regulation (which will only serve to give them a monopoly over AI) is necessary because an AI could hack its way out of a laboratory. Yet, they cannot explain how an AI would accomplish this in practice. I hope you will push back against anyone who fear-mongers about sci-fi inspired AI scenarios.
Congress should focus on the real impacts that AI will have on employment. Congress should also consider the realistic risks which AI poses to the public, such as risks from the use of AI to control national infrastructure (e.g., the electric grid) or to make healthcare decisions.
Thank you, [My name]
IANA senator, but if I were one, you'd have lost me there. The personal insults make it seem petty and completely overshadow the otherwise professional-sounding message.
Still, there are probably a lot of people like me who have heard it used (incorrectly, it seems) as an insult so many times that it's an automatic response :-(
https://www.telecomtv.com/content/digital-platforms-services...
https://writingillini.com/2023/05/16/illinois-basketball-ill...
Which one do you think is more important to convince?
And, if there's one thing politicians are known for, it's got to be ad hominem.
I probably live quite distally to you and am exposed to parts of western culture you aren't, and I almost never hear or read ilk as a derogation or used to associate in a derogatory manner.
https://grammarist.com/words/ilk/#:~:text=It's%20neutral.,a%....
Also, note that all of the negative examples are politics-related. If a politician reads the word 'ilk', it is going to be interpreted negatively. It might be the case that ilk really does "always" carry a negative connotation in politics.
You could change 'ilk' to 'friends', and keep the same meaning with very little negative connotation. There is still a slight negative connotation here, in the political arena, but it's a very vague shade, and I like it here.
"Altman and his ilk try to claim that..." is a negative phrase because "ilk" is negative, but also because "try to claim" is invalidating and dismissive. So this has elements or notes of an emotional attack, rather than a purely rational argument. If someone is already leaning towards Altman's side, then this will feel like an attack and like you are the enemy.
"Altman claims that..." removes all connotation and sticks to just the facts.
Not regulating the air quality we breathe for decades turned out amazing for millions of Americans. Yes, let's do the same with AI! What could possibly go wrong?
While I generally share the view that _research_ should be unencumbered but deployment should be regulated, I do take issue with your view that safety only matters once AIs are ready for "widespread use". A tool which is made available in a limited beta can still be harmful, misleading, or too easily support irresponsible or malicious purposes, and in some cases the harms could be _enabled_ by the fact that the release is limited.
For example, suppose next month you developed a model that could produce extremely high quality video clips from text and reference images, you did a small, gated beta release with no PR, and one of your beta testers immediately used it to make, e.g., highly realistic revenge porn. Because almost no one is aware of the stunning new quality of outputs produced by your model, most people don't believe the victim when they assert that the footage is fake.
I would suggest that the first non-private (e.g. non-employee) release of a tool should make it subject to regulation. If I open a restaurant, on my first night I'm expected to be in compliance with basic health and safety regulations, no matter how few customers I have. If I design and sell a widget that does X, even for the first one I sell, my understanding is there's a concept of an implied requirement that my widgets must actually be "fit for purpose" for X; I cannot sell a "rain coat" made of gauze which offers no protection from rain, and I cannot sell a "smoke detector" which doesn't effectively detect smoke. Why should low-volume AI/ML products get a pass?
Even if you really believe that somewhere in the chain of consequences derived from LLMs there could be grave and material damage or other affronts to human dignity, there is almost always a more direct causal link that acts as the thing which makes that damage kinetic and physical. And that’s the proper locus for regulation. Otherwise this is all just a bit reminiscent of banning numbers and research into numbers.
Want to protect people’s employment? Just do that! Enshrine it in law. Want to improve the safety of critical infrastructure and make sure they’re reliable? Again, just do that! Want to prevent mass surveillance? Do that! Want to protect against a lack of oversight in complex systems allowing for subterfuge via bad actors? Well, make regulation about proper standards of oversight and human accountability. AI doesn’t obviate human responsibility, and a lack of responsibility on the part of humans who should’ve been responsible, and who instead cut corners, doesn’t mean that the blame falls on the tool that cut the corners, but rather the corner-cutters themselves.
What would you say to a simple registration requirement? You give a point of contact and a description of training data, model, and perhaps intended use (could be binary: civilian or dual use). One page, publicly visible.
This gives groundwork for future rulemaking and oversight if necessary.
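For a sense of how lightweight that could be, here is a minimal sketch of what such a one-page filing might look like as a single structured record. All field names and values below are invented for illustration; this is not from any actual or proposed rule:

```python
from dataclasses import dataclass

# Hypothetical one-page AI registration record; every field name is invented.
@dataclass
class ModelRegistration:
    point_of_contact: str       # who regulators (and the public) can reach
    training_data_summary: str  # plain-language description of data sources
    model_description: str      # architecture and rough scale
    intended_use: str           # binary per the proposal: "civilian" or "dual-use"

# Example filing, publicly visible:
registration = ModelRegistration(
    point_of_contact="compliance@example-lab.com",
    training_data_summary="Public web text crawled through 2023; no private user data.",
    model_description="Decoder-only transformer, roughly 7B parameters.",
    intended_use="civilian",
)
```

The appeal of something this small is that it imposes near-zero compliance cost while still creating a public paper trail.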
You are already arguing from a position of strength.
When you add petty jibes, it weakens your perceived position, because it suggests that you think you need them, rather than relying solely on your argument.
(As a corollary, you should never use petty jibes. When you feel like you need to, shore up your argument instead.)
> The American Heritage® Dictionary of the English Language, 5th Edition.
Yes, it is only sometimes used to indicate disapproval, but such ambiguity does not work in your favor here. It is better to remove that ambiguity.
People are coalescing into “for” and “against” camps, which makes very little sense given the broad spectrum of technologies and problems summarized in statements like “AI regulation”.
I think it’s a bit like saying “software (should|shouldn't) be regulated”. It’s a position that cannot be defended because the term software is too broad.
See also https://grammarist.com/words/ilk/#:~:text=It's%20neutral.,a%....
Do we really have to play this game?
If what you’re arguing for is not going to specifically advantage your state over others, and the thing you’re arguing against isn’t going to create an advantage for other states over yours, why make this about ‘your state’ in the first place?
The point of elected representatives is to represent the views of their constituents, not to obtain special advantages for their constituents.
That is painfully naive; the history of pork-barrel projects speaks otherwise.
https://www.eff.org/deeplinks/2015/04/remembering-case-estab...
If AI will be used by public institutions, especially law enforcement, we need it regulated in the same manner. A bad AI trained on biased data has the potential to be extremely dangerous in the hands of a cop who is already predisposed to racist behavior.
These policies are purely imaginary. Only when they get implemented into human law do they gain a grain of substance, but they remain imaginary. Failure to comply can be kinetic, but that is a contingency, not the object (matter :D).
Personally, I see good reasons for having regulations on privacy, intellectual property, filming people in my house's bathroom, NDAs, etc. These subjects are central to the way society works today. At least western society would be severely affected if these subjects were suddenly a free-for-all.
I am not convinced we need such regulation for AI at this point of technology readiness, but if social implications create unacceptable imbalances we can start by regulating in detail. If detailed caveats still do not work, then broader law can come. Which leads to my own theory:
All this turbulence about regulation reflects a mismatch between technological, political, and legal knowledge. Tech people don’t know law nor how it flows from policy. Politicians do not know the tech and have not seen its impacts on society. Naturally there is a pressure gradient from both sides that generates turbulence. The pressure gradient is high because the stakes are high: for technologists, the killing of a promising new field; for politicians, the prospect of a big majority of their constituency being rendered useless.
Final point: if one sees AI as a means of production which can be monopolised by a few capital-rich actors, we may see a 19th-century inequality remake. That inequality created one of the most powerful ideologies known: Communism.
Algorithmic discrimination already exists, so um, yes, information matters.
Add to that the fact that you're posting on a largely American forum where access to healthcare is largely predicated on insurance; just imagine AI underwriters. There's no court of appeal for insurance. It matters.
- Scamming via impersonation
- Misinformation
- Usage of AI in a way that could have serious legal ramifications for incorrect responses
- Severe economic displacement
Congress can and should examine these issues. Just because OP works at an AI company doesn't mean that company can't exist in a regulated industry.
I too work in the AI space and welcome thoughtful regulation.
Altman can have an ulterior motive, but that doesn't mean we shouldn't strive for having some sort of handle on AI safety.
It could be that Altman and OpenAI know exactly how this will look and the backlash that will ensue, such that we get ZERO oversight and rush headlong into doom.
Short term we need to focus on the structural unemployment that is about to hit us. As the AI labs use AI to make better AI, it will eat all the jobs until we have a relative handful of AI whisperers.
If you sent it by e-mail or web contact form, chances are you wasted your time.
If you really want attention, you'll send it as a real letter. People who take the time to actually send real mail are taken more seriously.
Ilk almost always has a negative connotation regardless of what the dictionary says.
I'll give you one: National Public Lands Day: Let’s Help Elk … and Their Ilk - https://pressroom.toyota.com/npld-2016-elk/ (it's a play on words)
Dear Senator [X],
It's painfully obvious that Sam Altman's testimony before the Judiciary Committee is an attempt to set up rent-seeking conditions for OpenAI, and to snuff out competition from the flourishing open source AI community.
We will be carefully monitoring your campaign finances for evidence of bribery.
Hugs and Kisses,
[My Name]
This is not a rebuttal to regulatory capture. It is in fact built into the model.
These "small companies" are feeder systems for the large company: a place for companies to rise to the level where they would come under the burden of regulations and be prevented from growing larger, thereby making them very easy for the large company to acquire.
The small company has to sell or raise massive amounts of capital just to piss away on compliance costs. Most will just sell.
Imagine I 1. hooked up a camera feed of a lava lamp to generate some bits and then 2. hooked up the US nuclear first strike network to it. I would be an idiot, but would I be an idiot because of 1. or 2.?
Basically I think it’s totally reasonable to hold these two beliefs: 1. there is no reason to fear the LLM 2. there is every reason to fear the LLM in the hands of those who refuse to think about their actions and the burdens they may impose on others, probably because they will justify the means through some kind of wishy washy appeal to bad probability theory.
The -plogp that you use to judge the sense of some predicted action you take is just a model, it’s just numbers in RAM. Only when those numbers are converted into destructive social decisions does it convert into something of consequence.
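(For readers who don't recognize the shorthand: the -plogp above is presumably a reference to Shannon entropy, i.e. the expected surprisal a model assigns to outcomes:)

```latex
% Shannon entropy of a distribution p: each outcome contributes -p(x) log p(x)
H(X) = -\sum_{i} p(x_i) \log p(x_i)
```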
I agree that society is beginning to design all kinds of ornate algorithmic beating sticks to use against the people. The blame lies with the ones choosing to read tea leaves and then using the tea leaves to justify application of whatever Kafkaesque policies they design.
I agree with being skeptical of proposals from those with vested interests, but are you just arguing against what you imagine Altman will say, or did I miss some important news?
You make a great point here. This is why we need as much open source and as much wide adoption as possible. Wide adoption = public education in the most effective way.
The reason we are having this discussion at all is precisely because OpenAI, Stability.ai, FAIR/Llama, and Midjourney have had their products widely adopted and their capabilities have shocked and educated the whole world, technologists and laymen alike.
The benefit of adoption is education. The world is already adapting.
Doing anything that limits adoption or encourages the underground development of AI tech is a mistake. Regulating it in this way will push it underground and make it harder to track and harder for the public to understand and prepare for.
These resources should be spent lessening the impact rather than trying to completely control it.
The denotation may not be negative, but if you use ilk in what you see as a neutral way, people will get a different message than you're trying to send.
I’m sympathetic to your position in general, but I can’t believe you wrote that with a straight face. “I don’t know how it would do it, therefore we should completely ignore the risk that it could be done.”
I’m no security expert, but I’ve been following the field incidentally and dabbling since writing login prompt simulators for the Prime terminals at college to harvest user account passwords. When I was a Unix admin I used to have fun figuring out how to hack my own systems. Security is unbelievably hard. An AI eventually jailbreaking is a near certainty we need to prepare for.
Consider this: "Firefighters and their ilk." It's not a word that nicely describes a group, even though that's what it's supposed to do. I think the language has moved to where we just say "firefighters" when it's positive, and use "ilk" or "et al." when there's a negative connotation.
Just my experience.
And it isn't a strong argument here for the same reason it isn't a good argument for allowing human cloning: there, too, one could propose just regulating the more direct causal links, like non-clone employment loss from mass-produced hyper-intelligent clones, ensuring clones have legal rights, and requiring proper oversight and non-clone human accountability.
Maybe those things could all make ethical human cloning viable. But I think the world coming together and being like "holy shit this is happening too fast. Our institutions aren't ready at all nor will they adapt fast enough. Global ban" was the right call.
It is not impossible that a similar call is also appropriate here with AI. I personally dunno what the right call is, but I'm pretty skeptical of any strong claim that it could never be the right call to outright ban some forms of advanced AI research just like we did with some forms of advanced genetic engineering research.
This isn't like banning numbers at all. The blame falling on the corner-cutters doesn't mean the right call is always to just tell the blamed not to cut corners. In some cases the right call is instead taking away their corner-cutting tool.
At least until our institutions can catch up.
A lot of said constituents' views are, in practice, that they should receive special advantages.
Why do so many Americans think universal health care means there is no private insurance? In most countries, insurance is compulsory and tightly regulated. Some like the Netherlands and France have public insurance offered by the government. In other places like Germany, your options are all private, but underprivileged people have access to government subsidies for insurance (Americans do too, to be fair).

Get sick in one of these places as an American, you will be handed a bill and it will still make your head spin. Most places in Europe work like this. Of course, even in places with nationalized healthcare like the UK, non-residents would still have to pay.

What makes Germany and NL and most other European countries different from that system is if you're a resident without an insurance policy, you will also have to pay a hefty fine. You are basically auto-enrolled in an invisible "NHS" insurance system as a UK resident. Of course, most who can afford it in the UK still pay for private insurance. The public stuff blends being not quite good with generally poor availability.
Americans are actually pretty close to Germany with their healthcare. What makes the US system shitty can be boiled down to two main factors:
- Healthcare networks (and state incorporation laws) making insurance basically useless outside of a small collection of doctors and hospitals, and especially your state
- Very little regulation on insurance companies, pharmaceutical companies or healthcare providers in price-setting
The latter is especially bad. My experience with American health insurance has been that I pay more for much less. $300/month premiums and still even seeing a bill is outrageous. AI underwriters won't fix this, yeah, but they aren't going to make it any worse because the problem is in the legislative system.
> There's no court of appeal for insurance.
No, but you can of course always sue your insurance company for breach of contract if they're wrongfully withholding payment. AI doesn't change this, but AI can make this a viable option for small people by acting as a lawyer. Well, in an ideal world anyways. The bar association cartels have been very quick to raise their hackles and hiss at the prospect of AI lawyers. Not that they'll do anything to stop AI from replacing most duties of a paralegal of course. Can't have the average person wielding the power of virtually free, world class legal services.
(I would just emphasize, before anyone complains, that the Federal Republic of Germany is very much a federal republic.)
Imagine if, e.g., drug testing and manufacture were subject to no regulations. As a consumer, you can be aware that some chemicals are very powerful and useful, but you can't be sure that any specific product has the chemicals it says it has, that it was produced in a way that ensures a consistent product, that it was tested for safety, or what the evidence is that it's effective against a particular condition. Even if wide adoption of drugs from a range of producers occurs, does the public really understand what they're taking, and whether it's safe? Should the burden be on them to vet every medication on the market? Or is it appropriate to have some regulation to ensure medications have their active ingredients in the amounts stated, are produced with high quality assurance, and are actually shown to be effective?

Oh, no, says a pharma industry PR person. "Doing anything that limits the adoption or encourages the underground development of bioactive chemicals is a mistake. Regulating it in this way will push it underground and make it harder to track and harder for the public to understand and prepare for."
If a team of PhDs can spend weeks trying to explain "why did the model do Y in response to X?" or figure out "can we stop it from doing Z?", expecting "wide adoption" to force "public education" to be sufficient to defuse all harms such that no regulation whatsoever is necessary is ... beyond optimistic.
The views of their constituents are probably in favor of special advantages for their constituents, so the one may imply the other.
I mean, some elected representatives may represent constituencies consisting primarily of altruistic angels, but that is…not the norm.
It can be, but it's often the project of a substantial subset of the people creating institutions, so it's misleading and romanticizing the past to view it as “decay”.
The debate is really about how much and what type of regulation. It is of strategic importance that we do not let bad actors get the upper hand, but we also know that bad actors will rarely follow any of this regulation anyway. There is something to be said for regulating the application rather than the technology, as well as for realizing that large corporations have historically used regulatory capture to increase their moat.
Given it seems quite unlikely we will be able to stop prompt injections, what are we to do?
Provenance seems like a good option, but difficult to implement. It allows us to track who created what, so when someone does something bad, we can find and punish them.
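As a rough illustration of the idea (a minimal sketch only; this uses keyed hashing, not a real provenance standard like C2PA, and all names here are invented), a registered generator could tag its outputs so they can later be traced back to it:

```python
import hashlib
import hmac

# Hypothetical: each registered generator holds a secret signing key.
GENERATOR_KEY = b"secret-key-held-by-the-model-operator"

def sign_output(content: bytes) -> str:
    """Attach a provenance tag: a keyed hash over the generated content."""
    return hmac.new(GENERATOR_KEY, content, hashlib.sha256).hexdigest()

def verify_output(content: bytes, tag: str) -> bool:
    """Whoever holds (or escrows) the key can check which generator made this."""
    expected = hmac.new(GENERATOR_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

clip = b"...generated video bytes..."
tag = sign_output(clip)
assert verify_output(clip, tag)
```

A real scheme would use public-key signatures so verifiers never touch the signing secret, plus watermarking that survives re-encoding; that's a big part of why it's difficult to implement.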
There are analogies to be made with the Bill of Rights and gun laws. The gun analogy seems interesting because guns have to be registered, but criminals often won't register theirs, and the debate is quite polarized.
You absolutely can. Maybe you can't effectively enforce that regulation but you can regulate and you can take measures that make violating the regulation impractical or risky for most people. By the way, the "crypto-wars" never ended and are ongoing all around the world (UK, EU, India, US...)
I can sell a webserver that gets used to host illegal content all day long. Should that be included? Where does the regulation end? I hate that the answer to any question seems to be to just add more government.
I have sent correspondence about ten times to my Congressmen and Senators. I have always received a good reply (although often just saying there is nothing they can do), except for the one time I contacted Jon Kyl and unfortunately mentioned data about his campaign donations from Monsanto. I was writing about a bill he sponsored that I thought would have made it difficult for small farmers to survive economically and would make community gardens difficult because of regulations. No response on that correspondence.
Great, how does that apply to China or Europe in general? Or a group in Russia or somewhere else? Are you assuming every governing body on the surface of the earth is going to agree on the terms used to regulate AI? I think it's a fool's errand.
AI is a very different problem space. With AI, even the big models easily fit on a micro SD card. You can carry around all of GPT4 and its supporting code on a thumb drive. You can transfer it wirelessly in under 5 minutes. It's quite different than drugs or conventional weapons or most other things from a practicality perspective when you really think about enforcing developmental regulation.
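Back-of-envelope on the transfer claim (a sketch only; the checkpoint size and link speed below are assumptions, since GPT-4's actual size is not public):

```python
# Assumed numbers: a large model checkpoint and a fast local wireless link.
model_size_gb = 200   # hypothetical checkpoint size; GPT-4's real size is unpublished
link_gbps = 9.6       # Wi-Fi 6 theoretical peak, in Gbit/s

seconds = model_size_gb * 8 / link_gbps
print(f"~{seconds / 60:.1f} minutes")  # ~2.8 minutes at the theoretical peak
```

Real-world throughput is well below the theoretical peak, so "under 5 minutes" is optimistic, but it's in the right ballpark.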
Also consider that criminals and other bad actors don't care about laws. The RIAA and MPAA have tried hard for 20+ years to stop piracy, and the DMCA and other laws have been built to support that, yet anyone reading this can easily download the latest blockbuster movie, even one still in theaters.
Even still, I'm not saying don't make laws or regulations on AI. I'm just saying we need to carefully consider what we're really trying to protect or prevent.
Also, I certainly believe that in this case, the widespread public adoption of AI tech has already driven education and adaptation that could not have been achieved otherwise. My mom understands that those pictures of Trump being chased by the cops are fake. Why? Because Stable Diffusion is on my home computer so I can make them too. I think this needs to continue.
Those interns have a pile of form letters they send for about 99% of the (e)mail they get, and if you happen to catch their attention you might get more than the usual tick mark in a spreadsheet (for/against X). At best that might be a sentence or two in a weekly correspondence summary, which may or may not be read by your representative depending on how seriously they take their job.
Everything has become so my team vs your team... you are bad because you think differently...
Likewise for activities that aren't nefarious. Whatever fears could be placed on blobs of code like "AI" are far more meritedly placed on humans.
You think voice actors and writers are not saying the same?
When do we accept capitalism as we know it is just a bullshit hallucination we grew up with? It’s no more an immutable feature of reality than a religion?
I don’t owe propping up some rich person’s figurative identity, or yours for that matter.
In places like the USA I don't think politicians should expect privacy or peace. They have so much power compared to the citizen, and they so rarely further the interests of the general population in good faith.
Given how they treat you, it's best to abandon politeness (which only helps them further belittle your meaninglessness in their decision making) and put a crowd in front of their house, accost them at restaurants, and find other ways of reminding them how accessible and functionally answerable they are to the people they're supposed to serve.
https://www.smh.com.au/national/nsw/maximise-profits-facial-...
AI is being used by law enforcement and public institutions. In fact, so much so that perhaps this is a good link:
https://www.monster.com/jobs/search?q=artificial+intelligenc...
In both cases it's too late to do anything about it. AI is "loose". Oh and I don't know if you noticed, governments have collectively decided law doesn't apply to them, only to their citizens, and only in a negative way. For instance, just about every country has laws on the books guaranteeing timely emergency care at hospitals, with timely defined as within 1 or 2 hours.
Waiting times are 8-10 hours (going up to days) and this is the normal situation now; it's not a New Year's Eve or even Friday-evening thing anymore. You have the "right" to less waiting time, which can only mean the government (the worst hospitals are public ones) should be forced to fix this, spending whatever it needs to fix it. And it can be fixed; at this point you'd have to give physicians and nurses a 50% raise, double the number employed, and 10x the number in training.
Government is just outright not doing this, and if one thing's guaranteed, this will keep getting worse (a direct violation of your rights in most states) for the next 10 years minimum, but probably longer.
I met some biohackers at DEF CON that took this perspective, a sort of "open source but for medicine" ideology. I see the dangers of a massively uneducated population trying to 3D print aspirin and poisoning themselves, but they already do that with horse paste, so I'm not sure it's a new issue.
Similarly it's not really good faith to assume everyone opposed to regulation in this field is seeking a lawless libertarian (or anarchist perhaps) utopia.
Politicians just know that it's better to be nice to people who seem to like you or are engaged with the system, since they want to keep getting your vote. If not, then the person isn't worth their time.
There are already laws against false advertising, misrepresentation etc. We don’t need extra laws specifically for AI that doesn’t perform well.
What most people are concerned about is AI that performs too well.
But even then, that’s a linear diffusion: one person, one body mod. I guess you could say that their descendants would proliferate and multiply, so the alteration slowly grows exponentially over the generations... but the FUD I hear from AI decelerationists is that it would be an explosive diffusion of harms, like, as soon as the day after tomorrow. One architect, up to billions of victims, allegedly. Not that I think it’s unwise to be compelled to precaution with new and mighty technologies, but what is it that some people are so worried about that they’re willing to ban all research, and choke off all the good that has already come from it? Maybe it’s just a symptom of the underlying growing mistrust in the social contract...
I would assert that just as I have the right to pull out a sheet of paper and write the most vile, libelous thing on it I can imagine, I have the right to use AI to put anyone's face on any body, naked or not. The crime comes from using it for fraud. Take gasoline for another example. Gasoline is powerful stuff. You can use it to immolate yourself or burn down your neighbor's house. You can make Molotov cocktails and throw them at nuns. But we don't ban it, or saturate it with fire retardants, because it has a ton of other utility, and we can just make those outlying things illegal. Besides, five years from now, nobody's going to believe a damned thing they watch, listen to, or read.
We're more likely to see a theocratic movement centered on the struggle of human souls vs the soulless simulacra of AI.
The right one is to grant people rights over their likeness, so you could use something more like copyright law.
Even if it's a real recording, you should still have control over it.
I think you should continue to have the right to use whatever program to generate whatever video clip you like on your computer. That is a distinct matter from whether a commercially available video generative AI service has some obligations to guard against abusive uses. Personal freedoms are not the same as corporate freedom from regulatory burdens, no matter how hard some people will work to conflate them.
Does scikit-learn count or we are just not going to bother defining what we mean by "AI"?
"AI" is whatever congress says it is? That is an absolutely terrible idea.
Totally agree we could be witnessing a growing mistrust in the social contract.
Again, it sounds extreme, but in an extreme situation it could happen; it's not impossible.
As I understand it, revenge porn is seen as being problematic because it can lead to ostracization in certain social groups. Would it not be better to regulate such discrimination? The concept of discrimination is already recognized in law. This would equally solve for revenge porn created with a camera. The use of AI is ultimately immaterial here. It is the human behaviour as a product of witnessing material that is the concern.
Exactly! A friend of mine who is into the communist ideology thinks that whichever society taps AI for productivity efficiency, and even policy, will become the new hegemon. I have no immediate counterpoint besides the technology not being there yet.
I can definitely imagine an LLM based on political manifestos. A personal conversation with your senator at any time about any subject! That is the basic part, though: the politician being augmented by the LLM.
The bad part is a party driven by an LLM or similar political model, where the human you see and elect is just a mouthpiece, like in "The Moon Is a Harsh Mistress". Policy would all be algorithmic, and the LLM would provide the interface between the fundamental processing and the mouthpiece.
These tensions will likely lead to the conflicts you mention. I am pretty sure there will be a new -ism.
I highly recommend Rob Miles' channel on YouTube. Here’s a good one, but they’re all fascinating. It turns out training an AI to have the actual goals we want it to have is fiendishly difficult.