zlacker

[parent] [thread] 4 comments
1. Noboru+(OP)[view] [source] 2025-08-15 09:55:28
Doesn't this just mean that it is about "protecting children" and influence over public discourse? The fact remains that the Category 1 rules impose onerous duties on websites that have a significant influence over public discourse, with the effect that many of them will see their influence significantly reduced and may have to fold altogether if they cannot afford to comply.

In fact it is pretty obvious from the OSA itself that the definition of Category 1 is not primarily about capturing porn sites.

replies(2): >>exaspe+xa >>roenxi+kb
2. exaspe+xa[view] [source] 2025-08-15 11:32:59
>>Noboru+(OP)
> In fact it is pretty obvious from the OSA itself that the definition of Category 1 is not primarily about capturing porn sites.

Indeed it is not.

The main focus of the Category 1 stuff is evidently whether big sites are actually doing enough to allow children (and parents) to report threats and dangers, and to avoid seeing content they don't want to see.

It is for example about trying to reduce harms to children from pro-suicide and pro-anorexia content as well, and about compelling the Category 1 services to provide mechanisms so children can report bullying, grooming and online sexual exploitation from other users.

And also to provide some access to oversight of, and reporting on, those mechanisms.

That is to say: if a Category 1 service is open to children, it needs to have workable mechanisms to allow children to report threatening and disturbing content and messaging from other users; it needs at least to provide context/warnings around (and probably filter) pro-suicide and pro-anorexia content; and it is required to be able to present evidence of how those tools are being used and whether they are effective.

If you've ever tried to get Facebook to take down a scam ad (like, for example, the plethora of ads now using an AI-generated Martin Lewis) you will understand that there are genuine concerns about whether the tools available to non-adult users are effective for anything at all.

Category 1 regulations have not yet been finalised and they are not merely being imposed; the likely Category 1 services are being consulted.

replies(1): >>spwa4+9v1
3. roenxi+kb[view] [source] 2025-08-15 11:41:00
>>Noboru+(OP)
I think the original paraphrase is actually pretty reasonable even with the full context - what are the Category 1 rules doing in the Act if it is primarily aimed at protecting children? There is a lot more public discourse going on than there are unsafe children. If the act deals with both, it is, practically, an act aimed primarily at influencing public discourse with some child-related rules tacked on. Something like 80% of a person's life on the internet is spent engaging with public discourse and 20% is as a child.
replies(1): >>Jensso+ih1
4. Jensso+ih1[view] [source] [discussion] 2025-08-15 17:47:12
>>roenxi+kb
That just makes it even worse: they sell this as a child protection act, but as you say, most of what it affects has nothing to do with child safety.
5. spwa4+9v1[view] [source] [discussion] 2025-08-15 18:57:39
>>exaspe+xa
> It is for example about trying to reduce harms to children from pro-suicide and pro-anorexia content as well, and about compelling the Category 1 services to provide mechanisms so children can report bullying, grooming and online sexual exploitation from other users.

Except ... if the government really wanted to improve children's and teenagers' mental health in the UK, they could easily increase the budget for treatment and youth services. That would be the first thing to do, and instead they're doing the opposite.

Which shows that they just don't care. Taking needed money away from the most vulnerable children with one hand while claiming that new legislation protects them with the other ... does not add up to protecting children.
