Much of this boils down to doing a risk assessment and deciding on mitigations.
Unfortunately we live in a world where if you allow users to upload and share images, with zero checks, you are disturbingly likely to end up hosting CSAM.
Ofcom have guides, risk assessment tools and more; if you think any of this is relevant to you, that's a good place to start.
https://www.ofcom.org.uk/online-safety/illegal-and-harmful-c...
It's like local US news websites blocking European users over GDPR concerns.
If I ran a small forum in the UK I would shut it down - not worth the risk of jail time for getting it wrong.
terrorism
child sexual exploitation and abuse (CSEA) offences, including
grooming
image-based child sexual abuse material (CSAM)
CSAM URLs
hate
harassment, stalking, threats and abuse
controlling or coercive behaviour
intimate image abuse
extreme pornography
sexual exploitation of adults
human trafficking
unlawful immigration
fraud and financial offences
proceeds of crime
drugs and psychoactive substances
firearms, knives and other weapons
encouraging or assisting suicide
foreign interference
animal cruelty

Liability is unlimited and there's no provision in law for being a single person or a small group of volunteers. You'll be held to the same standards as a behemoth with full-time lawyers (the stated target of the law, but the least likely to be affected by it).
http://www.antipope.org/charlie/blog-static/2024/12/storm-cl...
The entire law is weaponised unintended consequences.
I don't know if you said this sarcastically, but I have a friend in Switzerland who reads U.S. news websites via Web Archive or Archive IS exactly because of that.
Accessing some of these news sites returns Cloudflare's "not available in your region" message or similar.
Is it Discord's responsibility to comply, the admins/moderators', or all of the above?
Being silly to ridicule overreaching laws is top-trolling! Love it.
Hell, we do that ourselves, but only for our own infrastructure that isn't expected to be used outside the county. Whitelisting your own country and blocking everything else cuts out >99% of scrapers and script kiddies.
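For anyone wondering what that looks like in practice, here's a minimal sketch of a country allowlist check, assuming the MaxMind GeoLite2 country database and the Python geoip2 package (the country code and file path are illustrative, not a description of any particular setup):

    # Minimal sketch: reject any request whose source IP doesn't geolocate
    # to the allowed country. Assumes GeoLite2-Country.mmdb is downloaded
    # locally and `pip install geoip2` has been run.
    import geoip2.database
    import geoip2.errors

    ALLOWED_COUNTRY = "GB"  # illustrative: whatever your own country is

    reader = geoip2.database.Reader("GeoLite2-Country.mmdb")

    def is_allowed(client_ip: str) -> bool:
        try:
            iso_code = reader.country(client_ip).country.iso_code
        except geoip2.errors.AddressNotFoundError:
            return False  # IPs not in the database get blocked too
        return iso_code == ALLOWED_COUNTRY

In practice the same allowlist usually lives at the firewall or CDN layer rather than in application code, but the effect is the same.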
https://www.ofcom.org.uk/online-safety/illegal-and-harmful-c...
"We’ve heard concerns from some smaller services that the new rules will be too burdensome for them. Some of them believe they don’t have the resources to dedicate to assessing risk on their platforms, and to making sure they have measures in place to help them comply with the rules. As a result, some smaller services feel they might need to shut down completely.
So, we wanted to reassure those smaller services that this is unlikely to be the case."
> Something is a hate incident if the victim or anyone else think it was motivated by hostility or prejudice based on: disability, race, religion, gender identity or sexual orientation.
This probably worries platforms that need to moderate content. Sure, perhaps 80% of the cases are clear cut, but it’s the 20% that get missed and turn into criminal liability that would be the most concerning. Not to mention a post from one year ago can become criminal if someone suddenly decides it was motivated by one of these factors.
Further, the language used to express prejudice changes often. As bad actors get censored for using certain language, they will evolve to use other words and phrases that mean the same thing. The government is far more likely to be aware of these (and be able to prosecute them) than some random forum owner.
One of the exemptions is for "Services provided by persons providing education or childcare."
Fact is that it's very unlikely they would ever face any issues for not having it blocked.
Individuals and small groups are not held directly liable for comments on their blog unless it's proven they're responsible for inculcating that environment.
"Safe harbour" - if someone threatens legal action, the host can pass on liability to the poster of the comment. They can (temporarily) hide/remove the comment until a court decides on its legality.
Political winds shift, and if someone is saying something the new government doesn't like, the legislation is there to utterly ruin someone's life.
To do any good, you don't want to cause grief for the victims of the crazy law; you want to cause grief to its perpetrators.
The least likely to be negatively affected. This will absolutely be good for them in that it just adds another item to the list of things that prevents new entrants from competing with them.
Not doubting it, but if you have a reference to hand it will save me having to search.
If it's just something you remember but don't have a reference then that's OK, I'll go hunting based on your clue.
It’s clear the UK wants big monopolistic tech platforms to fully dominate their local market so they only have a few throats to choke when trying to control the narrative…just like “the good old days” of centralized media.
I wouldn’t stand in the way of authoritarians if you value your freedom (or the ability to have a bank account).
The risk just isn't worth it. You write a blog post that rubs someone power-adjacent the wrong way and suddenly you're getting the classic "...nice little blog you have there...would be a shame to find something that could be interpreted as violating 1 of our 17 problem areas..."
And of course, it will turn into yet another game of cat and mouse, as bad actors find new creative ways to bypass automatic censors.
Recourse doesn't matter for a sole proprietorship. If they have to engage with a lawyer whatsoever, the site is dead or blocked because they don't have the resources for that.
Unless Ofcom actively say "we will NOT enforce the Online Safety Act against small blogs", the chilling effect is still there. Ofcom need to own this. Either they enforce the bad law, or loudly reject their masters' bidding. None of this "oh i don't want to but i've had to prosecute this crippled blind orphan support forum because one of them insulted islam but my hands are tied..."
This is the flimsiest paper thin reassurance. They've built a gun with which they can destroy the lives of individuals hosting user generated content, but they've said they're unlikely to use it.
A minister tweeted that it didn’t apply to shotguns, as if that’s legally binding as opposed to, you know, the law as written.
So... paperwork, with no real effect, use, or results. And you're trying to defend it?
I do agree we need something, but this is most definitely not the solution.
These closures are acts of protest, essentially.
I agree with @teymour's description of the law. It is totally normal legislation.
https://mastodon.neilzone.co.uk/@neil
http://3kj5hg5j2qxm7hgwrymerh7xerzn3bowmfflfjovm6hycbyfuhe6l...
I think an interesting alternate angle here would be to require admins of unmoderated communities to keep a record of participants' real identities, so that if something bad shows up, the person who posted it is trivially identifiable and can easily be reprimanded. This has other problems, of course, but is interesting to consider.
https://medium.com/@rviragh/ofcom-and-the-online-safety-act-...
... unlike the issue of what size of service is covered, this isn't a pinky swear by Ofcom.
If you've never considered what the risks are to your users, you're doing them a disservice.
I've also not defended it, I've tried to correct misunderstandings about what it is and point to a reliable primary source with helpful information.
1) Law enforcement enforces the law. People posting CSAM are investigated by the police, who have warrants and resources and so on, so each time they post something is another chance to get caught. When they get caught they go to jail and can't harm any more children.
2) Private parties try to enforce the law. The people posting CSAM get banned, but the site has no ability to incarcerate them, so they just make a new account and do it again. Since they can keep trying and the penalty is only having to create a new account, which they don't really care about, it becomes a cat and mouse game, except that even if the cat catches the mouse, the mouse just reappears under a different name with new knowledge of how to avoid getting caught next time. Since being detected carries minimal risk, they get to try lots of strategies until they learn how to evade the cat, instead of getting eaten (i.e. going to prison) the first time they get caught. So they get better at evading detection, which makes it harder for law enforcement to catch them as well.

Meanwhile the site is under increasing pressure to "do something" because the problem has been made worse rather than better, so it turns up the false positives and causes more collateral damage to innocent people. That doesn't change the dynamic; it only causes the criminals to evolve their tactics, which they can try an unlimited number of times until they learn how to evade detection again. And as soon as they do, the site, despite its best efforts, is hosting the material again. The combined cost of the heroic efforts to try and the liability from inevitably failing destroys smaller sites and causes market consolidation.

The megacorps then become a choke point for other censorship, some by various governments, some by the corporations themselves. That is an evil in itself, but if you prefer to take it from the other side: that evil causes ordinary people to chafe, so they start to develop and use anti-censorship technology. As that technology becomes more widespread, with greater public support, the perpetrators of the crimes you're trying to prevent find it easier to avoid detection.
You want the police to arrest the pedos. You don't want a dystopian megacorp police state.
What are you going to do Ofcom?
HN/YC could just tell them to go pound sand, no? (Assuming YC doesn't have any operations in the UK; I have no idea.)
On my single-user Fedi server, the only person who can directly upload and share images is me. But because my profile is public, it's entirely possible that someone I'm following posts something objectionable (either intentionally or via exploitation) and it would be visible via my server (albeit fetched from the remote site.) Does that come under "moderation"? Ofcom haven't been clear. And if someone can post pornography, your site needs age verification. Does my single-user Fedi instance now need age verification because a random child might look at my profile and see a remotely-hosted pornographic image that someone (not on my instance) has posted? Ofcom, again, have not been clear.
It's a crapshoot with high stakes and only one side knows the rules.
The "stated purpose" is irrelevant. Even if they are being honest about their stated purpose (questionable), the only thing that matters is how it ends up playing out in reality.
That's a criminal offence in the UK (two year prison sentence in some circumstances). Do you have a good feeling for what might count as incitement in those circumstances?
The problem is the dishonesty, saying the intent is one thing but being unwilling to codify the stated intent.
People saying criticism is politically motivated (ignoring the fact that this law was drafted by the Tories and passed by Labour...so I am not exactly clear what the imagined motivation might be) also ignore the fact that the UK has had this trend in law for a long time and the outcome has generally been negative (or, at best, a massive waste of resources).
Legislation has a context: if we lived in a country where police behaved sensibly, I could reasonably see how someone could believe this was sensible...that isn't reality, though. Police have a maximalist interpretation of their powers (for example, non-crime hate incidents: there is no legislation governing their use, they are used regularly to "question the thinking" of people who write critical things about politicians, usually local ones, or the police...no appointed authority gave them this power, their usage has been questioned by ministers...yet they still record hundreds of thousands a year).
Btw, if you want to know how the sausage is made: security services/police want these laws, some event happens, and then there is a coordinated campaign with the media (the favour is usually repaid with leaks later) to build up "public support" (not actual support, just the appearance of support), and meetings with ministers are arranged ("look at the headlines")...this Act wasn't some organic act of legislative genius, it was the outcome of a targeted media campaign around an incident that, in factual terms, is unrelated to what the Act eventually became (if this sounds implausible, remember that May gave Nissan £30m on the back of the SMMT organising about a week's worth of negative headlines, and that Johnson brought in about 4m migrants off the back of about two days of briefing against him by a six-month-old lobbying group from hotels and poultry slaughterhouses...this is actually how the govt works...no-one reads the papers apart from politicians).
Giving Ofcom this power, if you are familiar with their operations, is an act of literal insanity. Their budget has exploded (I believe it is near a quarter of a billion now). If you think tech companies are actually going to enforce our laws for us, you are wrong. And suggesting that Ofcom, with their new legions of civil servants, is supposed to be the watchdog of online content...it makes no sense, and it cannot be described as "totally normal" in any country other than China.
Is it right that a country should snuff out all communities, large and small, and drive them to hosting in another country, or "under the wing" of a behemoth with a fully-funded legal department?
It's a blatantly destructive law.
No, they don't. My blog is not all that popular. It has got some programming puzzles, Linux HOW-TOs and stuff. Most of my audience is just my friends.
So what's the best course of action? Remove comments feature entirely? Maybe that's what I should do. I wonder what everyone else's doing.
Of course, that is the cynical version of it. But as others have pointed out, some people don't like this sort of risk.
In fact, if you have had a place where people can report abuse and it's just not really happening much, then you can say you're low risk for that. That's covered in some of the examples.
> Not to mention a post from one year ago can become criminal if someone suddenly decides it was motivated by one of these factors.
That would impact the poster, not the site.
I don't think you need a report button, but a known way for your users to report things is likely going to be required if you have a load of user-generated content that's not moderated by default.
Then you don't have a user to user service you're running, right?
> And if someone can post pornography, your site needs age verification.
That's an entirely separate law, isn't it?
That would assume no malice from the government. Isn't the default assumption at this stage that every government wants to exert control over its population, even in "democracies"? There's nothing unintended here.
which is an umbrella term for everything that the government does not like right now, and does not mind jailing you for. In other words, it's their way of killing freedom of expression.
"The Act’s duties apply to search services and services that allow users to post content online or to interact with each other."[0]
My instance does allow users (me) to post content online and, technically, depending on how you define "user", it does allow me to interact with other "users". Problem is that the act and Ofcom haven't clearly defined what "other users of that service" means - a bare reading would interpret it as "users who have accounts/whatever on the same system", yes, and that's what I'm going with but it's a risk if they then say "actually, it means anyone who can interact with your content from other systems"[2] (although I believe they do have a carve out for news sites, etc., re: "people can only interact with content posted by the service" which may also cover a small single-user Fedi instance. But who knows? I certainly can't afford a lawyer or solicitor to give me guidance for each of my servers that could fall under OSA - that's into double digits right now.)
> That's an entirely separate law, isn't it?
No, OSA covers that[1]
[0] https://www.gov.uk/government/publications/online-safety-act...
[1] https://www.ofcom.org.uk/online-safety/protecting-children/i...
[2] "To be considered a user of a user-to-user service for a month, a person doesn’t need to post anything. Just viewing content on a user-to-user service is enough to count as using that service." from https://www.ofcom.org.uk/online-safety/illegal-and-harmful-c...
So bullshit jobs that do nothing productive but are there for "compliance". I think we have enough of that, thanks.
From Ofcom:
> this exemption would cover online services where the only content users can upload or share is comments on media articles you have published
The point is simply that even merely picking 1% or 0.1% of people completely at random to audit keeps 99% of normal people in line, which is far more valuable to society (not just in immediate dollars) than the cost of those few actual audits, regardless of what those audits "earn" by collecting a few, or zero, or indeed negative dollars that might have gone uncollected from a random individual. There is no reason an audit should not show that there was an error and the government owes the taxpayer, let alone collect nothing or collect less than the cost of the audit.
The police's job is not to recover your stolen lawnmower; it's to maintain order in general. They expend many thousands of dollars in resources to track down a lawnmower thief not to recover your $400 possession, but to inhibit the activity of theft in general.
Tax audits are, or should be imo, like that.
The actual details of what should be written in the IRS manual are this: Something.
It's a meaningless question since we're not at that level. I'm only talking about the fallacy of treating tax audits as nothing more than a direct and immediate source of income instead of a means to maintain order and a far greater but indirect source of income.
> 1.17 A U2U service is exempt if the only way users can communicate on it is by posting comments or reviews on the service provider’s own content (as distinct from another user’s content).
A blog is only exempt if users communicate with the blogpost author, on the topic of the blogpost. If they comment on each other's comments, or go off-topic, then the blog is not exempt.
That's why that exemption is basically useless. Anyone can write "hey commenter number 3 i agree commenter number 1's behaviour is shocking" and your exemption is out the window.
But here's the thing: it's often the case that the theft rate in an area is down to a handful of prolific thieves... who act with impunity because they reckon that any one act of theft won't be followed up.
I'd hope that in most jurisdictions, police keep track of who the prolific thieves/shoplifters/burglars/muggers are, and are also willing to look into individual thefts, etc., because even when it's the thief's first crime, there can often be an organised crime link - the newbie thief's drug dealer has asked them to do a "favour" to clear a debt, or such.
So it can be really useful to track down your lawnmower. Sometimes. And the police don't know if it's worth it or not until they do the work. I can see the parallels in this analogy to tax audits.
I'd like to say we could trust the implementation and enforcement of this law to make sense and follow the spirit of existing blog comment sections, rather than the letter of a law that could be twisted against almost anyone accepting comments (for most people, GDPR compliance enforcement has been a light touch, with warnings rather than immediate fines), but that's not really how laws should work.