Much of this boils down to doing a risk assessment and deciding on mitigations.
Unfortunately we live in a world where if you allow users to upload and share images, with zero checks, you are disturbingly likely to end up hosting CSAM.
Ofcom have guides, risk assessment tools and more; if you think any of this is relevant to you, that's a good place to start.
https://www.ofcom.org.uk/online-safety/illegal-and-harmful-c...
If I ran a small forum in the UK I would shut it down - not worth the risk of jail time for getting it wrong.
terrorism
child sexual exploitation and abuse (CSEA) offences, including
grooming
image-based child sexual abuse material (CSAM)
CSAM URLs
hate
harassment, stalking, threats and abuse
controlling or coercive behaviour
intimate image abuse
extreme pornography
sexual exploitation of adults
human trafficking
unlawful immigration
fraud and financial offences
proceeds of crime
drugs and psychoactive substances
firearms, knives and other weapons
encouraging or assisting suicide
foreign interference
animal cruelty

> Something is a hate incident if the victim or anyone else thinks it was motivated by hostility or prejudice based on: disability, race, religion, gender identity or sexual orientation.
This probably worries platforms that need to moderate content. Sure, perhaps 80% of the cases are clear cut, but it’s the 20% that get missed and turn into criminal liability that would be the most concerning. Not to mention a post from one year ago can become criminal if someone suddenly decides it was motivated by one of these factors.
Further, the language used to express prejudice changes often. As bad actors get censored for using certain language, they will evolve to use other words/phrases to mean the same thing. The government is far more likely to be aware of these (and be able to prosecute them) than some random forum owner.
So... paperwork, with no real effect, use, or results. And you're trying to defend it?
I do agree we need something, but this is most definitely not the solution.
If you've never considered what the risks are to your users, you're doing them a disservice.
I've also not defended it, I've tried to correct misunderstandings about what it is and point to a reliable primary source with helpful information.
On my single-user Fedi server, the only person who can directly upload and share images is me. But because my profile is public, it's entirely possible that someone I'm following posts something objectionable (either intentionally or via exploitation) and it would be visible via my server (albeit fetched from the remote site.) Does that come under "moderation"? Ofcom haven't been clear. And if someone can post pornography, your site needs age verification. Does my single-user Fedi instance now need age verification because a random child might look at my profile and see a remotely-hosted pornographic image that someone (not on my instance) has posted? Ofcom, again, have not been clear.
It's a crapshoot with high stakes and only one side knows the rules.
In fact, if you have had a place where people can report abuse and it's just not really happening much, then you can say you're low risk for that. That's covered in some of the examples.
> Not to mention a post from one year ago can become criminal if someone suddenly decides it was motivated by one of these factors.
That would impact the poster, not the site.
I don't think you need a report button, but a known way for your users to report things is likely going to be required if you have a load of user-generated content that's not moderated by default.
Then you don't have a user to user service you're running, right?
> And if someone can post pornography, your site needs age verification.
That's an entirely separate law, isn't it?
which is an umbrella term for everything the government does not like right now, and does not mind jailing you for. In other words, it's their way to kill freedom of expression.
"The Act’s duties apply to search services and services that allow users to post content online or to interact with each other."[0]
My instance does allow users (me) to post content online and, technically, depending on how you define "user", it does allow me to interact with other "users". The problem is that the Act and Ofcom haven't clearly defined what "other users of that service" means - a bare reading would interpret it as "users who have accounts/whatever on the same system", yes, and that's what I'm going with, but it's a risk if they then say "actually, it means anyone who can interact with your content from other systems"[2] (although I believe they do have a carve-out for sites where "people can only interact with content posted by the service", e.g. news sites, which may also cover a small single-user Fedi instance. But who knows? I certainly can't afford a lawyer or solicitor to give me guidance for each of my servers that could fall under the OSA - that's into double digits right now.)
> That's an entirely separate law, isn't it?
No, OSA covers that[1]
[0] https://www.gov.uk/government/publications/online-safety-act...
[1] https://www.ofcom.org.uk/online-safety/protecting-children/i...
[2] "To be considered a user of a user-to-user service for a month, a person doesn’t need to post anything. Just viewing content on a user-to-user service is enough to count as using that service." from https://www.ofcom.org.uk/online-safety/illegal-and-harmful-c...