> The act creates a new duty of care of online platforms, requiring them to take action against illegal, or legal but "harmful", content from their users. Platforms failing this duty would be liable to fines of up to £18 million or 10% of their annual turnover, whichever is higher.
The act is intentionally very vague and broad.
Generally, the gist is that it's up to the platforms themselves to assess and identify risks of "harm", implement safety measures, keep records and run audits. The guidance on what that means is very loose, but in practice it could include stringent age verification, proactive and effective moderation, and thorough assessment of all your algorithms.
If you are ever investigated, it will be up to someone else to decide whether your measures were adequate or whether you fell short.
This means you might need to spend significant time making sure that your platform can't allow "harm" to happen, and maybe you'll need to spend money on lawyers to review your "audits".
The repercussions of being found wanting can be harsh, so one has to ask: is it still worth risking it all to run that online community?
In their CSAM discussion they specifically note that, in practice, a lot of that material ends up being distributed by smallish operators, whether by intention or by negligence, so if your policy goal is to deter it, you have to be able to spank those operators too. [0]
> In response to feedback, we have expanded the scope of our CSAM hash-matching measure to capture smaller file hosting and file storage services, which are at particularly high risk of being used to distribute CSAM.
Surely we can all think of web properties that have gone to seed (and spam) once they've outlived their usefulness to their creators.
I wonder how much actual “turnover” something like 4chan turns over, and how they would respond to the threat of a 10% fine vs an £18mm one…
[0] https://www.ofcom.org.uk/online-safety/illegal-and-harmful-c...
This basically ensures that the only people allowed to host online services for other people in the UK will be large corporations, as they are the only ones that can afford the automation and moderation requirements imposed by this act.
You should still be able to self-host your own content, but you can't do something like operate a forum or other smaller social media platform unless you can afford to hire lawyers and spend thousands of dollars a month on moderators and/or a bulletproof moderation system.
Otherwise you risk simply getting shut down by Ofcom. Or you can do everything you are supposed to do and get shut down anyway. Good luck navigating their appeals processes.
But surely no right-minded judge would do such a thing, right?
This is exactly the complaint everyone on here made about GDPR: that the sky would fall in. If you read UK law like an American lawyer, you will find it very scary.
But we don't have political prosecutors out to make a name for themselves, so it works OK for us.
HOWEVER: I'm not sure how you would get access to the CSAM hash database if you were starting a new online image hosting service.
The requirements to sign up for IWF (the de facto UK CSAM database) membership are:
- be legally registered organisations trading for more than 12 months;
- be publicly listed on their country registration database;
- have more than 2 full-time unrelated employees;
- and demonstrate they have appropriate data security systems and processes in place.
Cloudflare have a free one[1], but you have to be a Cloudflare customer.
Am I missing something, or does this make it very difficult to start up a public-facing service from scratch?
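For what it's worth, the matching step itself is the easy part once you have a list; the hard part is exactly the access problem above. A minimal sketch, assuming you somehow had a plain list of hex digests (the function names and blocked_hashes.txt are made up, and real deployments typically use perceptual hashes such as PhotoDNA rather than exact SHA-256, so that re-encoded or resized copies still match):

    import hashlib
    from pathlib import Path

    def load_hash_list(path: str) -> set[str]:
        # One lowercase hex digest per line; the real IWF/industry lists are not public.
        return {line.strip().lower()
                for line in Path(path).read_text().splitlines()
                if line.strip()}

    def is_flagged(upload: bytes, hash_list: set[str]) -> bool:
        # Exact cryptographic match only; a one-pixel edit defeats it,
        # which is why real services use perceptual hashing instead.
        return hashlib.sha256(upload).hexdigest() in hash_list

    # Hypothetical use in an upload handler:
    # if is_flagged(file_bytes, load_hash_list("blocked_hashes.txt")):
    #     reject the upload and file a report instead of storing it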
This is the problem with many European (and I guess also UK) laws.
GDPR is one notable example. Very few people actually comply with it properly. Hidden "disagree" options in cookie pop-ups and unauthorized data transfers to the US are almost everywhere, not to mention the "see personalized ads or pay" business model.
Unlike most American laws, the GDPR is enforced through a regulator rather than a privately initiated discovery process where the suing party has an incentive to dig up as much dirt as possible. So in effect, you only get punished if you either really go overboard or are a company the EU dislikes (which is honestly mostly just Meta at this point).
If it's the company that gets fined, the shareholders etc. are not personally liable.
It's by design. Politicians have fallen for big tech lobbyists once again.
Also who says that the hashes provided by your CSAM database of choice are actually flagging illegal data and not also data that whoever runs the database wants to track down? You have no idea. You are just complicit in the surveillance state, really.
You need to do a risk assessment and keep a copy. Depending on how risky things are, you need to put more mitigations in place.
If you have a neighbourhood events thing that people can post to, and you haven't had complaints and generally keep an eye out for misuse, that's it.
If you run a large-scale chat room for kids with suicidal thoughts, where unvetted adults can talk to them in DMs, you're going to need a much bigger set of mitigations and processes in place.
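To make "do a risk assessment and keep a copy" concrete: Ofcom doesn't prescribe any particular format, so for the low-risk neighbourhood-events case it could be as simple as a dated record of what you considered and what you do about it. A toy sketch, with all fields and values invented for illustration:

    import json, datetime

    risk_assessment = {
        "service": "neighbourhood-events-board",   # invented example
        "date": datetime.date.today().isoformat(),
        "user_base": "small, local, adults",
        "risks_considered": {
            "CSAM": "negligible - no image uploads",
            "grooming": "negligible - no DMs, not aimed at children",
            "encouraging_suicide": "low - text posts, reports reviewed",
        },
        "mitigations": ["report button", "manual review of reports", "ban repeat offenders"],
        "next_review": "12 months, or sooner if the service changes significantly",
    }

    # Keep a copy alongside your other records.
    with open(f"risk-assessment-{risk_assessment['date']}.json", "w") as f:
        json.dump(risk_assessment, f, indent=2)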
Scale is important, but it's not the only determining factor. An example of low risk for suicide harm is:
> A large vertical search service specialised in travel searches, including for flights and hotels. It has around 10 million monthly UK users. It uses recommender systems, including for suggesting destinations. It has a basic user reporting system. There has never been any evidence or suggestion of illegal suicide content appearing in search results, and the provider can see no way in which this could ever happen. Even though it is a large service, the provider concludes it has negligible or no risk for the encouraging or assisting suicide offence
An example of high risk for grooming is:
> A social media site has over 10 million monthly UK users. It allows direct messaging and has network expansion prompts. The terms of service say the service is only for people aged 16 and over. As well as a content reporting system, the service allows users to report and block other users. While in theory only those aged 16 and over are allowed to use the service, it does not use highly effective age assurance and it is known to be used by younger children. While the service has received few reports from users of grooming, external expert organisations have highlighted that it is known to be used for grooming. It has been named in various police cases and in a prominent newspaper investigation about grooming. The provider concludes the service is high risk for grooming