It is a tool. If I use a tool for illegal purposes I have broken the law. I can be held accountable for having broken the law. If the laws are deficient, make the laws stronger and punish people for wrong deeds, regardless of the tool at hand.
This is a naked attempt to build a regulatory moat while capitalizing on fear of the unknown and ignorance. It’s attempting to regulate research into something that has no external ability to cause harm without a principal directing it.
I can see a day (perhaps) when AIs have some form of independent autonomy, or even display agency and sentience, when we can revisit this. Other issues come into play as well, such as the morality of owning a sentience and what that entails. But that is way down the road, and even further off if Microsoft’s proxy closes the doors to everyone but Microsoft, Google, Amazon, and Facebook.
It is a tool which gives any individual not only nearly instant access to all the world's public data, but also the ability to correlate and research that data to synthesize new information quickly.
Without guardrails, anyone can have a completely amoral LLM with the ability to write persuasive manifestos for any kind of extremist movement, a task that previously required an intelligent author.
A person will be able to ask the model how best to commit various crimes with the lowest chance of being caught.
It will enable a level of pattern matching and surveillance never seen before.
I know the genie is out of the bottle, but there are absolutely monumental shifts in technology happening that can and will be used for everything from outright evil to mere dishonesty.
And those are just the ways LLMs and "AI" will fuck with us without guardrails. Even in a walled garden, we honestly won't be able to trust any online interaction in the near future: your comment and mine could both be LLM-generated. Webs of trust will become more necessary.
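To make "web of trust" concrete: the primitive underneath is a signature that proves a message came from a particular keyholder, with trust in that key then established socially. A minimal sketch in Python, assuming the `cryptography` package; the message and variable names here are hypothetical:

```python
# Minimal sketch: signed comments as a web-of-trust building block.
# Assumes `pip install cryptography`.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Each participant holds a long-lived keypair; the public key is what
# others in the web of trust vouch for.
author_key = Ed25519PrivateKey.generate()
author_pub = author_key.public_key()

comment = b"This comment was written by a human (or at least by my key)."
signature = author_key.sign(comment)

# A reader who trusts author_pub can check the comment wasn't forged.
try:
    author_pub.verify(signature, comment)
    print("Signature valid: comment is from the keyholder.")
except InvalidSignature:
    print("Signature invalid: do not trust the attribution.")
```

Note this only proves which key signed the text, not that a human wrote it; the "web" part is people cross-certifying each other's keys so that attribution means something.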
Anyone who can't think of about five ways AI is going to radically shake up society isn't thinking hard enough.
In an earlier time, we called these "books" and there was some similar backlash. But I digress.
Then write laws and regulations about the actions of humans using the tools. The tools have no agency. The humans using them toward bad ends do.
By the way, writing things the state considers immoral is an enshrined right.
How do you draw the line between AI writing assistance and the predictive text autocompletion and spell check in popular document editors today? I would note that predictive text is completely amoral and will happily suggest all sorts of stuff the state considers immoral.
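To see why, here is a toy bigram predictor in Python, a deliberately minimal sketch and not how any real editor implements autocomplete; the training string is a hypothetical stand-in for whatever corpus a real model was fit on:

```python
# Toy predictive text: suggest the next word from raw co-occurrence counts.
import random
from collections import defaultdict

def train_bigrams(text: str) -> dict:
    """Map each word to the list of words observed following it."""
    words = text.lower().split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def predict_next(model: dict, word: str) -> str | None:
    """Suggest a next word, weighted by how often it was observed."""
    followers = model.get(word.lower())
    return random.choice(followers) if followers else None

corpus = "the state bans books the state bans speech the people object"
model = train_bigrams(corpus)
print(predict_next(model, "state"))  # "bans" -- no morals involved,
                                     # only the statistics of the corpus
```

The model has no concept of morality at all; it reproduces the statistics of its training text. An LLM is the same machinery scaled up, which is why a bright line between "spell check" and "AI writing assistance" is so hard to draw.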
Who decides what’s immoral? The licensing folks in the government? What right do they have to tell me my morality is immoral? I can hold and espouse any morality I desire so long as I break no law.
I’d note that as a nation we have really loose phrasing in the Bill of Rights for gun rights, but very clear phrasing about freedom of speech. We generally say today that guns are fair game unless used for illegal actions. These proposals say tools that take our thoughts, opinions, and use of language to another level are more dangerous than devices designed for no purpose other than killing things.
Ben Franklin must be spinning so fast in his grave he’s formed an accretion disc.
Let me be clear: everyone in the world is about to have a Jarvis/Enterprise ship's computer/Data/name-your-assistant available to them, one that is ready and willing to use its power for nefarious purposes. It is not just a matter of reading books. It significantly lowers the barrier on a lot of things, good and bad.
What if we required licenses to create a website? After all, some unscrupulous individuals create websites that sell drugs and other illicit things!
I am not endorsing restrictions. I was merely stating the fact that this shit is coming down the pipe, and it /will/ be destabilizing, and just because society survived the printing press doesn't mean the age of AI will be safe or easy.