Sam Altman goes before US Congress to propose licenses for building AI
1. elil17+xC 2023-05-16 14:39:28
>>vforgi+(OP)
This is the message I shared with my senator (edited to remove information which could identify me). I hope others will send similar messages.

Dear Senator [X],

I am an engineer working for [major employer in the state]. I am extremely concerned about the message that Sam Altman is sharing with the Judiciary Committee today.

Altman wants to create regulatory roadblocks to developing AI. My company produces AI-enabled products. If these roadblocks had been in place two years ago, my company would not have been able to invest in AI. Now, because we had the freedom to innovate, AI will be bringing new, high-paying jobs to factories in our state.

While AI regulation is important, it is crucial that there are no roadblocks stopping companies and individuals from even trying to build AIs. Rather, regulation should focus on ensuring the safety of AIs once they are ready to be put into widespread use - this would allow companies and individuals to research new AIs freely while still ensuring that AI products are properly reviewed.

Altman and his ilk try to claim that aggressive regulation (which will only serve to give them a monopoly over AI) is necessary because an AI could hack its way out of a laboratory. Yet they cannot explain how an AI would accomplish this in practice. I hope you will push back against anyone who fear-mongers about sci-fi-inspired AI scenarios.

Congress should focus on the real impacts that AI will have on employment. Congress should also consider the realistic risks which AI poses to the public, such as risks from the use of AI to control national infrastructure (e.g., the electric grid) or to make healthcare decisions.

Thank you, [My name]

2. abeppu+T61 2023-05-16 16:44:48
>>elil17+xC
> regulation should focus on ensuring the safety of AIs once they are ready to be put into widespread use - this would allow companies and individuals to research new AIs freely while still ensuring that AI products are properly reviewed.

While in general I share the view that _research_ should be unencumbered but deployment should be regulated, I do take issue with your view that safety only matters once AIs are ready for "widespread use". A tool which is made available in a limited beta can still be harmful or misleading, or can too easily support irresponsible or malicious purposes, and in some cases the harms could be _enabled_ by the fact that the release is limited.

For example, suppose that next month you develop a model that can produce extremely high-quality video clips from text and reference images, you do a small, gated beta release with no PR, and one of your beta testers immediately uses it to make, e.g., highly realistic revenge porn. Because almost no one is aware of the stunning new quality of outputs produced by your model, most people don't believe the victim when they assert that the footage is fake.

I would suggest that the first non-private (e.g. non-employee) release of a tool should make it subject to regulation. If I open a restaurant, on my first night I'm expected to be in compliance with basic health and safety regulations, no matter how few customers I have. If I design and sell a widget that does X, even for the first one I sell, my understanding is there's a concept of an implied requirement that my widgets must actually be "fit for purpose" for X; I cannot sell a "raincoat" made of gauze which offers no protection from rain, and I cannot sell a "smoke detector" which doesn't effectively detect smoke. Why should low-volume AI/ML products get a pass?

3. shon+AV1 2023-05-16 20:47:15
>>abeppu+T61
> For example, suppose that next month you develop a model that can produce extremely high-quality video clips from text and reference images, you do a small, gated beta release with no PR, and one of your beta testers immediately uses it to make, e.g., highly realistic revenge porn.

You make a great point here. This is why we need as much open source and as wide adoption as possible. Wide adoption = public education in the most effective way.

The reason we are having this discussion at all is precisely that OpenAI, Stability.ai, FAIR/Llama, and Midjourney have had their products widely adopted, and their capabilities have shocked and educated the whole world, technologists and laymen alike.

The benefit of adoption is education. The world is already adapting.

Doing anything that limits adoption or encourages the underground development of AI tech is a mistake. Regulating it in this way will push it underground and make it harder to track and harder for the public to understand and prepare for.
