Liability is unlimited and there's no provision in law for being a single person or small group of volunteers. You'll be held to the same standards as a behemoth with full-time lawyers (the stated target of the law, but the least likely to be affected by it).
http://www.antipope.org/charlie/blog-static/2024/12/storm-cl...
The entire law is weaponised unintended consequences.
https://www.ofcom.org.uk/online-safety/illegal-and-harmful-c...
"We’ve heard concerns from some smaller services that the new rules will be too burdensome for them. Some of them believe they don’t have the resources to dedicate to assessing risk on their platforms, and to making sure they have measures in place to help them comply with the rules. As a result, some smaller services feel they might need to shut down completely.
So, we wanted to reassure those smaller services that this is unlikely to be the case."
Individuals and small groups are not held directly liable for comments on their blog unless it's proven they're responsible for inculcating that environment.
"Safe harbour" - if someone threatens legal action, the host can pass on liability to the poster of the comment. They can (temporarily) hide/remove the comment until a court decides on its legality.
Political winds shift, and if someone is saying something the new government doesn't like, the legislation is there to utterly ruin someone's life.
The least likely to be negatively affected. This will absolutely be good for them, in that it just adds another item to the list of things that prevent new entrants from competing with them.
It’s clear the UK wants big monopolistic tech platforms to fully dominate their local market so they only have a few throats to choke when trying to control the narrative…just like “the good old days” of centralized media.
I wouldn’t stand in the way of authoritarians if I valued my freedom (or the ability to have a bank account).
The risk just isn't worth it. You write a blog post that rubs someone power-adjacent the wrong way and suddenly you're getting the classic "...nice little blog you have there...would be a shame to find something that could be interpreted as violating 1 of our 17 problem areas..."
Unless Ofcom actively say "we will NOT enforce the Online Safety Act against small blogs", the chilling effect is still there. Ofcom need to own this. Either they enforce the bad law, or loudly reject their masters' bidding. None of this "oh, I don't want to, but I've had to prosecute this crippled blind orphan support forum because one of them insulted Islam, but my hands are tied..."
This is the flimsiest, paper-thin reassurance. They've built a gun with which they can destroy the lives of individuals hosting user-generated content, but they've said they're unlikely to use it.
A minister tweeted that it didn’t apply to shotguns, as if that’s legally binding, as opposed to, you know, the law as written.
I think an interesting alternate angle here would be to require unmoderated community admins to keep a record of real identity info for participants, so if something bad shows up the person who posted it is trivially identifiable and can easily be reprimanded. This has other problems, of course, but is interesting to consider.
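Not that I'd endorse it, but as a thought experiment the record-keeping involved is basically just two lookup tables: verified identities, and a mapping from each post to the identity behind it. A toy Python sketch (every name and field here is hypothetical, and the hard part, deciding what counts as "real identity info" and how to verify it, is exactly what it glosses over):

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class VerifiedIdentity:
        # Hypothetical fields; what counts as "real identity info" is the open question.
        legal_name: str
        verification_method: str   # e.g. "photo ID", "bank-verified name"
        verified_at: datetime

    class CommentRegistry:
        """Toy sketch: every post is tied to a verified identity, so a complaint
        about a given post can be traced back to a real person."""

        def __init__(self) -> None:
            self._identities: dict[str, VerifiedIdentity] = {}  # user_id -> identity
            self._posts: dict[str, str] = {}                    # post_id -> user_id

        def register_user(self, user_id: str, identity: VerifiedIdentity) -> None:
            self._identities[user_id] = identity

        def record_post(self, post_id: str, user_id: str) -> None:
            if user_id not in self._identities:
                raise PermissionError("posting requires a verified identity")
            self._posts[post_id] = user_id

        def identify_poster(self, post_id: str) -> VerifiedIdentity:
            # What an admin would hand over when "something bad shows up".
            return self._identities[self._posts[post_id]]

The "other problems" are visible in the sketch itself: the admin is now sitting on a database of identity documents, which is its own liability and its own honeypot.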
1) Law enforcement enforces the law. People posting CSAM are investigated by the police, who have warrants and resources and so on, so each time they post something is another chance to get caught. When they get caught they go to jail and can't harm any more children.
2) Private parties try to enforce the law. The people posting CSAM get banned, but the site has no ability to incarcerate them, so they just make a new account and do it again. Since they can keep trying and the penalty is only having to create a new account, which they don't really care about, it becomes a cat and mouse game, except that even if the cat catches the mouse, the mouse just reappears under a different name with the new knowledge of how to avoid getting caught next time. Since being detected carries minimal risk, they get to try lots of strategies until they learn how to evade the cat, instead of getting eaten (i.e. going to prison) the first time they get caught. So they get better at evading detection, which makes it harder for law enforcement to catch them too.

Meanwhile the site is under increasing pressure to "do something" because the problem has been made worse rather than better, so they turn up the false positives and cause more collateral damage to innocent people. But that doesn't change the dynamic, it only causes the criminals to evolve their tactics, which they can try an unlimited number of times until they learn how to evade detection again. And as soon as they do, the site, despite its best efforts, is hosting the material again.

The combined costs of the heroic efforts to try and the liability from inevitably failing destroy smaller sites and cause market consolidation. The megacorps then become a choke point for other censorship, some by various governments, some by the corporations themselves. That is an evil in itself, but if you prefer to take it from the other side: that evil causes ordinary people to chafe, so they start to develop and use anti-censorship technology. As that technology becomes more widespread with greater public support, the perpetrators of the crimes you're trying to prevent find it easier to avoid detection.
You want the police to arrest the pedos. You don't want a dystopian megacorp police state.
That's a criminal offence in the UK (two year prison sentence in some circumstances). Do you have a good feeling for what might count as incitement in those circumstances?
The problem is the dishonesty, saying the intent is one thing but being unwilling to codify the stated intent.
No, they don't. My blog is not all that popular. It has got some programming puzzles, Linux HOW-TOs and stuff. Most of my audience is just my friends.
So what's the best course of action? Remove comments feature entirely? Maybe that's what I should do. I wonder what everyone else's doing.
Of course that is the cynical version of it. But as others have pointed out, some people don't like this sort of risk.
That would assume no malice from the government? Isn't the default assumption at this stage that every government wants to exert control over its population, even in "democracies"? There's nothing unintended here.
From Ofcom:
> this exemption would cover online services where the only content users can upload or share is comments on media articles you have published
The point is simply that even picking 1% or 0.1% of people completely at random to audit keeps 99% of normal people in line, which is far more valuable to society (not just in immediate dollars) than the cost of those few actual audits, regardless of what those audits "earn" by collecting a few, or zero, or indeed negative dollars that might otherwise have gone uncollected from a random individual. An audit could perfectly well show that there was an error and the government owes the taxpayer, never mind collecting nothing or collecting less than the cost of the audit.
The police's job is not to recover your stolen lawnmower, it's to maintain order in general. They expend many thousands of dollars in resources to track down a lawnmower thief not to recover your $400 possession, but to inhibit the activity of theft in general.
Tax audits are, or should be imo, like that.
The actual details of what should be written in the IRS manual are this: Something.
It's a meaningless question since we're not at that level. I'm only talking about the fallacy of treating tax audits as nothing more than a direct and immediate source of income instead of a means to maintain order and a far greater but indirect source of income.
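To put rough numbers on that (every figure below is made up purely for illustration, not real IRS data), the deterrence value can dwarf the direct take even when the audits themselves run at a loss:

    # Back-of-the-envelope sketch of audits as deterrence rather than revenue.
    # Every figure below is an assumed, illustrative number, not real data.
    population = 10_000_000            # taxpayers
    audit_rate = 0.001                 # 0.1% picked completely at random
    cost_per_audit = 5_000             # assumed cost of performing one audit
    recovery_per_audit = 2_000         # assumed average directly recovered

    # Deterrence: assume random audits stop some fraction of filers from
    # shaving an average amount off their tax bill.
    deterred_fraction = 0.02           # 2% of filers would otherwise under-report
    avg_underreport = 1_500            # assumed average amount shaved off

    audits = population * audit_rate
    direct_net = audits * (recovery_per_audit - cost_per_audit)
    deterrence_value = population * deterred_fraction * avg_underreport

    print(f"audits performed:       {audits:,.0f}")             # 10,000
    print(f"direct net from audits: ${direct_net:,.0f}")        # $-30,000,000
    print(f"value of deterrence:    ${deterrence_value:,.0f}")  # $300,000,000

With those invented numbers the audit programme loses money directly and is still worth an order of magnitude more than it costs, which is the whole "maintain order in general" argument.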
> 1.17 A U2U service is exempt if the only way users can communicate on it is by posting comments or reviews on the service provider’s own content (as distinct from another user’s content).
A blog is only exempt if users communicate with the blogpost author, on the topic of the blogpost. If they comment on each other's comments, or go off-topic, then the blog is not exempt.
That's why that exemption is basically useless. Anyone can write "hey commenter number 3 i agree commenter number 1's behaviour is shocking" and your exemption is out the window.
But here's the thing: it's often the case that the theft rate in an area is down to a handful of prolific thieves... who act with impunity because they reckon that any one act of theft won't be followed up.
I'd hope that in most jurisdictions, police keep track of who the prolific thieves/shoplifters/burglars/muggers are, and are also willing to look into individual thefts, etc., because even when it's the thief's first crime, there can often be an organised crime link - the newbie thief's drug dealer has asked them to do a "favour" to clear a debt, or such.
So it can be really useful to track down your lawnmower. Sometimes. And the police don't know if it's worth it or not until they do the work. I can see the parallels in this analogy to tax audits.
I'd like to say we could trust the implementation and enforcement of this law to make sense and follow the spirit of existing blog comment sections, rather than the letter of a law that could be twisted against almost anyone accepting comments (for most people, GDPR compliance enforcement has been a light touch, with warnings rather than immediate fines), but that's not really how laws should work.