zlacker

[parent] [thread] 8 comments
1. tomrod+(OP)[view] [source] 2023-05-16 15:35:14
> Without guardrails, someone can have a completely amoral LLM that has the ability to write persuasive manifestos for any kind of extremist movement, something that previously would have required someone with intelligence.

In an earlier time, we called these "books" and there was some similar backlash. But I digress.

replies(2): >>kredd+s2 >>unethi+Qk
2. kredd+s2[view] [source] 2023-05-16 15:44:25
>>tomrod+(OP)
Not that I support AI regulations, but reading a book is a higher barrier to entry than asking a chat assistant to do immoral things.
replies(1): >>fnordp+06
3. fnordp+06[view] [source] [discussion] 2023-05-16 15:58:00
>>kredd+s2
(Acknowledging you didn’t support regulation in your statement, just riffing)

Then write laws and regulations about the actions of humans using the tools. The tools have no agency. The humans using them toward bad ends do.

By the way, writing things the state considers immoral is an enshrined right.

How do you draw the line between AI writing assistance and predictive text auto completion and spell check in popular document editors today? I would note that predictive text is completely amoral and will do all sorts of stuff the state considers immoral.

Who decides what’s immoral? The licensing folks in the government? What right do they have to tell me my morality is immoral? I can hold and espouse any morality I desire so long as I break no law.

I’d note that as a nation we have really loose phrasing in the Bill of Rights for gun rights, but very clear phrasing about freedom of speech. We generally say today that guns are fair game unless used for illegal actions. These proposals say that tools which take our thoughts, opinions, and creation of language to another level are more dangerous than devices designed for no purpose other than killing things.

Ben Franklin must be spinning so fast in his grave he’s formed an accretion disc.

4. unethi+Qk[view] [source] 2023-05-16 16:55:27
>>tomrod+(OP)
If you can scan city schematics, maps, learn about civil and structural engineering through various textbooks and plot a subway bombing in an afternoon, you're a faster learner than I am.

Let me be clear: everyone in the world is about to have a Jarvis/Enterprise ship's computer/Data/name-your-assistant available to them, one ready and willing to use its power for nefarious purposes. It is not just a matter of reading books. It lowers the barrier on a lot of things, good and bad, significantly.

replies(3): >>fnordp+gQ >>tomrod+aU >>selimt+H92
5. fnordp+gQ[view] [source] [discussion] 2023-05-16 19:26:33
>>unethi+Qk
Crimes are crimes the person commits. Planning an attack is a crime. Building a model to commit crimes is probably akin to planning an attack, and might itself be a crime. But the thought that researchers and the everyman have to be kept away from AI so globo mega corps can protect us from the AI-enabled Lex Luthor is absurd. The protections against criminal activity are already codified in law.
6. tomrod+aU[view] [source] [discussion] 2023-05-16 19:43:23
>>unethi+Qk
> It lowers the barrier on a lot of things, good and bad, significantly.

Like books!

replies(1): >>unethi+521
7. unethi+521[view] [source] [discussion] 2023-05-16 20:22:23
>>tomrod+aU
Yes, I understand your analogy.

I am not endorsing restrictions. I was merely stating the fact that this stuff is coming down the pipe, and it /will/ be destabilizing, and just because society survived the printing press doesn't mean the age of AI will be safe or easy.

replies(1): >>fnordp+O71
8. fnordp+O71[view] [source] [discussion] 2023-05-16 20:51:41
>>unethi+521
But at least Alexa will be able to order ten rolls of toilet paper instead of ten million reams of printer paper.
9. selimt+H92[view] [source] [discussion] 2023-05-17 05:26:16
>>unethi+Qk
For a minute there I misread Jarvis as Jarvik