In fact it is pretty obvious from the OSA itself that the definition of Category 1 is not primarily about capturing porn sites.
Indeed it is not.
The main focus of the Category 1 stuff is evidently whether big sites are actually doing enough to allow children (and parents) to report threats and danger and not see content they don't want to see.
It is, for example, about trying to reduce harms to children from pro-suicide and pro-anorexia content, and about compelling Category 1 services to provide mechanisms so children can report bullying, grooming and online sexual exploitation by other users.
And also to provide some oversight of, and reporting on, those mechanisms.
That is to say: if a Category 1 service is open to children, it needs workable mechanisms for children to report threatening and disturbing content and messaging from other users; it needs to at least provide context and warnings around pro-suicide and pro-anorexia content, and probably to filter it; and it is required to present evidence of how those tools are being used and whether they are effective.
If you've ever tried to get Facebook to take down a scam ad (like, for example, the plethora of ads now using an AI-generated Martin Lewis) you will understand that there are genuine concerns about whether the tools available to non-adult users are effective for anything at all.
Category 1 regulations have not yet been finalised and they are not merely being imposed; the likely Category 1 services are being consulted.
Except ... if the government really wanted to improve children's and teenagers' mental health in the UK, they could simply increase the budget for mental-health treatment and youth services. That would be the obvious first step, and they are doing the opposite.
Which suggests that they just don't care. Taking needed money away from the most vulnerable children with one hand while claiming, with the other, that new legislation protects children ... does not sum to protecting children.