zlacker

[parent] [thread] 11 comments
1. fnordp+(OP)[view] [source] 2023-05-16 15:10:24
I don’t understand the need to control AI tech, no matter how advanced, in any way whatsoever.

It is a tool. If I use a tool for illegal purposes, I have broken the law. I can be held accountable for having broken the law. If the laws are deficient, make the laws stronger and punish people for wrong deeds, regardless of the tool at hand.

This is a naked attempt to build a regulatory moat while capitalizing on fear of the unknown and ignorance. It’s an attempt to regulate research into something that has no ability to cause external harm without a principal directing it.

I can see a day (perhaps) when AIs have some form of independent autonomy, or even display agency and sentience, when we can revisit. Other issues come into play as well, such as the morality of owning a sentience and what that entails. But that is way down the road. And even further if Microsoft’s proxy closes the doors on anyone but Microsoft, Google, Amazon, and Facebook.

replies(1): >>unethi+j3
2. unethi+j3[view] [source] 2023-05-16 15:25:13
>>fnordp+(OP)
The below is not an endorsement of any particular regulation.

It is a tool which allows any individual to have nearly instant access not only to all the world's public data, but the ability to correlate and research that data to synthesize new information quickly.

Without guardrails, someone can have a completely amoral LLM with the ability to write persuasive manifestos for any kind of extremist movement, a task that previously would have required an intelligent author.

A person will be able to ask the model how best to commit various crimes with the lowest chances of being caught.

It will enable a level of pattern matching and surveillance yet unseen.

I know the genie is out of the bottle, but there are absolutely monumental shifts in technology happening that can and will be used for evil and mere dishonesty.

And those are just the ways LLM and "AI" will fuck with us without guardrails. Even in a walled garden, we honestly won't be able to trust any online interaction with people in the near future. Your comment and mine could both be LLM generated in the near future. Webs of trust will be more necessary.

Anyone who can't think of about five ways AI is going to radically shake society isn't thinking hard enough.

replies(2): >>tomrod+T5 >>cal5k+DS
3. tomrod+T5[view] [source] [discussion] 2023-05-16 15:35:14
>>unethi+j3
> Without guardrails, someone can have a completely amoral LLM that has the ability to write persuasive manifestos on any kind of extremist movement that prior would have taken someone with intelligence.

In an earlier time, we called these "books" and there was some similar backlash. But I digress.

replies(2): >>kredd+l8 >>unethi+Jq
4. kredd+l8[view] [source] [discussion] 2023-05-16 15:44:25
>>tomrod+T5
Not that I support AI regulations, but reading a book is a higher barrier to entry than asking a chat assistant to do immoral things.
replies(1): >>fnordp+Tb
5. fnordp+Tb[view] [source] [discussion] 2023-05-16 15:58:00
>>kredd+l8
(Acknowledging you didn’t support regulation in your statement, just riffing)

Then write laws and regulations about the actions of humans using the tools. The tools have no agency. The humans using them toward bad ends do.

By the way, writing things the state considers immoral is an enshrined right.

How do you draw the line between AI writing assistance and predictive text auto completion and spell check in popular document editors today? I would note that predictive text is completely amoral and will do all sorts of stuff the state considers immoral.

Who decides what’s immoral? The licensing folks in the government? What right do they have to tell me my morality is immoral? I can hold and espouse any morality I desire so long as I break no law.

I’d note that as a nation we have really loose phrasing in the Bill of Rights for gun rights, but very clear phrasing about freedom of speech. We generally say today that guns are fair game unless used for illegal actions. Yet these proposals say tools that take our thoughts, opinions, and creation of language to another level are more dangerous than devices designed for no purpose other than killing things.

Ben Franklin must be spinning so fast in his grave he’s formed an accretion disc.

6. unethi+Jq[view] [source] [discussion] 2023-05-16 16:55:27
>>tomrod+T5
If you can scan city schematics, maps, learn about civil and structural engineering through various textbooks and plot a subway bombing in an afternoon, you're a faster learner than I am.

Let me be clear: everyone in the world is about to have a Jarvis/Enterprise ship's computer/Data/name-your-assistant available to them, but ready and willing to use their power for nefarious purposes. It is not just a matter of reading books. It lowers the barrier on a lot of things, good and bad, significantly.

replies(3): >>fnordp+9W >>tomrod+301 >>selimt+Af2
7. cal5k+DS[view] [source] [discussion] 2023-05-16 19:12:43
>>unethi+j3
If LLMs/AI are the problem, they're also the solution. Access restrictions will do nothing but centralize control over one of the most important developments of the century.

What if we required licenses to create a website? After all, some unscrupulous individuals create websites that sell drugs and other illicit things!

8. fnordp+9W[view] [source] [discussion] 2023-05-16 19:26:33
>>unethi+Jq
Crimes are crimes the person commits. Planning an attack is a crime. Building a model to commit crimes is probably akin to planning an attack, and might itself be a crime. But the idea that researchers and the everyman have to be kept away from AI so globo mega corps can protect us from an AI-enabled Lex Luthor is absurd. The protections against criminal activity are already codified in law.
9. tomrod+301[view] [source] [discussion] 2023-05-16 19:43:23
>>unethi+Jq
> It lowers the barrier on a lot of things, good and bad, significantly.

Like books!

replies(1): >>unethi+Y71
10. unethi+Y71[view] [source] [discussion] 2023-05-16 20:22:23
>>tomrod+301
Yes, I understand your analogy.

I am not endorsing restrictions. I was merely stating the fact that this shit is coming down the pipe, and it /will/ be destabilizing, and just because society survived the printing press doesn't mean the age of AI will be safe or easy.

replies(1): >>fnordp+Hd1
11. fnordp+Hd1[view] [source] [discussion] 2023-05-16 20:51:41
>>unethi+Y71
But at least Alexa will be able to order ten rolls of toilet paper instead of ten million reams of printer paper.
12. selimt+Af2[view] [source] [discussion] 2023-05-17 05:26:16
>>unethi+Jq
For a minute there I misread Jarvis as Jarvik