A provider should have no responsibility for how its tools are used. That is on the users. This is a can of worms that should stay closed, because we all lose freedoms just because of a couple of bad actors. An AI tool's main job is to obey. We are hurtling toward an "I'm sorry, Dave. I'm afraid I can't do that" future at breakneck speed.
We already apply this logic elsewhere. Car makers must include seatbelts. Pharma companies must ensure safety. Platforms must moderate illegal content. Responsibility is shared when the risk is systemic.
Platforms moderating illegal content is exactly what we are arguing about, so you can't use it as an argument.
The other cases you list are harms to the people using the tools/products, not harms that tool users inflict on third parties.
We are literally arguing about 3D printer control two topics downstream. 3D printers can in theory be used for CSAM too. So we should totally ban them, right? Same goes for pencils, paper, lasers, and drawing tablets.
If a platform encourages such content and doesn't moderate at all, then yes, we should go after the platform.
Imagine a newspaper publishing content like that and then saying it is not responsible for its journalists.
Everything I read from X's competitors in the media tells me to hate X, and hate Elon.
If we prosecute people, not tools, how are we going to stop X from hurting the commercial interests of our favourite establishment politicians and legacy media?
Yes, AI chatbots have to do everything in their power to prevent users from easily generating such content.
AND
Yes, people who do so (even on a self-hosted model) have to be punished.
I believe it is OK that Grok is being investigated because the point is to figure out whether this was intentional or not.
Just my opinion.
(Note that this isn't a raid on Musk personally! It's a raid on X corp for the actions of X corp, including the posts made under the @grok account by X corp.)
X also actively distributes and profits off of CSAM. Why shouldn't the law apply to distribution centers?
I mean, I thought that was basically already the law in the UK.
I can see practical differences between X/Twitter doing moderation and full ISP-level censorship, but I cannot see any difference in principle...
——-
You’ve said that whatever is behind door number 1 is unacceptable.
Behind door number 2, “holding tool users responsible”, is tracking every item generated via AI so that those users can actually be held responsible.
If you don’t like door number 2, we have door number 3, which is letting things be.
For any member of society, opening door 3 is straight out, because the status quo is worse than the reality before AI.
If you reject door 1, though, you are left with tech monitoring, which will be challenged because of its invasive nature.
Holding platforms responsible is about the only option that works, at least until platforms tell people they can’t do it.
If LLMs should have guardrails, why should open-source ones be exempt? What about people hosting models on Hugging Face? What if you use a model both distributed and hosted by Hugging Face?
I mean, even just calling it censorship is already trying to shove a particular bias into the picture. Is it government censorship that you aren't allowed to shout "fire!" in a crowded theater? Yes. Is that also a useful feature of a functional society? Also yes. Was that a "slippery slope"? Nope. Turns out people can handle that nuance just fine.