zlacker

[parent] [thread] 11 comments
1. bioeme+(OP)[view] [source] 2023-05-16 18:59:18
OpenAI lobbying for regulation of common people being able to use AI, isn't it wonderful.
replies(6): >>intelV+b2 >>electr+f2 >>Freeby+C3 >>182716+66 >>skybri+b7 >>cwkoss+Uw
2. intelV+b2[view] [source] 2023-05-16 19:10:14
>>bioeme+(OP)
First mover AI enlightenment for me, regulation for thee, my competitors & unworthy proles.

- Lord Altman

replies(1): >>thrill+fh
3. electr+f2[view] [source] 2023-05-16 19:10:33
>>bioeme+(OP)
They acknowledged there’s no technical moat, so it’s time to lobby for a regulatory one.

Predictable. Disappointing, but predictable.

replies(1): >>happyt+k4
4. Freeby+C3[view] [source] 2023-05-16 19:16:22
>>bioeme+(OP)
I just cancelled my ChatGPT Plus subscription. I do not want to support monopolization of this technology. Companies apparently learned their lesson with the freedom of the Internet.
replies(1): >>eastbo+yb
5. happyt+k4[view] [source] [discussion] 2023-05-16 19:19:50
>>electr+f2
Walks like a duck. Talks like a duck. It's a duck.

We've seen this duck so many times before.

No need to innovate when you can regulate.

6. 182716+66[view] [source] 2023-05-16 19:25:55
>>bioeme+(OP)
Hopefully it will be just like software piracy: there will be civil disobedience as well, and they will never truly be able to stamp it out.

And it raises First Amendment issues as well. I think it's morally wrong to prohibit the development of software, which is what AI models are, especially if it's done in a personal capacity.

How would they even know that the author is based in the US anyway? Just use a Russian or Chinese Git hosting provider, where these laws don't exist.

And by the way foreign developers won't even have to jump through these hoops in the first place, so this law will only put the US at a disadvantage compared to the rest of the world.

If these lobbyists get their way, by restricting AI development in both the US and the EU, it will be hilarious to see that out of all places, Russia might be one of the few large countries where its development will remain unrestricted.

Even better, if Russia splits up we will have a new wild west for this kind of thing...

7. skybri+b7[view] [source] 2023-05-16 19:29:27
>>bioeme+(OP)
There are all sorts of dangerous things where there are restrictions on what the common people can do. Prescription drugs and fully automatic machine guns are two examples. You can't open your own bank either.

For anyone who really believes that AI is dangerous, having some reasonable regulations on it is logical. It's a good start on not being doomed. It goes against everyone's egalitarian/libertarian impulses, though.

The thing is, AI doesn't seem nearly as dangerous as a fully-automatic machine gun. For now. It's just generating text (and video) for fun, right?

replies(2): >>hollas+De >>mrangl+3H
8. eastbo+yb[view] [source] [discussion] 2023-05-16 19:49:31
>>Freeby+C3
OpenAI belongs to Microsoft. Cancel your subscription to GitHub, LinkedIn, O365…

It’s funny how all Microsoft properties are in a dominant position in their markets.

9. hollas+De[view] [source] [discussion] 2023-05-16 20:04:41
>>skybri+b7
I move hundreds of thousands of my dollars around between financial institutions just using text.
10. thrill+fh[view] [source] [discussion] 2023-05-16 20:18:33
>>intelV+b2
Anything for my friends, the law for my competitors.
11. cwkoss+Uw[view] [source] 2023-05-16 21:44:31
>>bioeme+(OP)
Roko's Basilisk will have a special layer of hell just for Sam Altman and his decision to name his company OpenAI
12. mrangl+3H[view] [source] [discussion] 2023-05-16 22:49:03
>>skybri+b7
AI and machine guns aren't comparable. Machine guns will never ever decide to autonomously fire.

The shared point of both AI alarmists and advocates is that AI will be highly resistant to being subject to regulation, ultimately. As dictated by the market for it. They won't want to regulate something, assuming they could, for which its free operation underlies everyone's chance of survival against competing systems.

I find that the only inherent danger is in the efforts of people who casually label things as "dangerous".

I'm still exploring whether my issue is the laziness of the alarmist vocabulary itself, offered without the required explanation, or the suspicion of emotional manipulation: an attempt to circumvent having to actually explain one's reasoning by using alarmist language instead.

Already, AI pessimists are well on their way to losing any window where their arguments will be heard and meaningful. We can tell by their parroting the word "dangerous" as the total substance of their arguments. Which will soon be a laughable defense. They'd better learn more words.
