
[return to "Lfgss shutting down 16th March 2025 (day before Online Safety Act is enforced)"]
1. owisd+uv 2024-12-16 20:23:17
>>buro9+(OP)
The actual Ofcom code of practice is here: https://www.ofcom.org.uk/siteassets/resources/documents/onli...

A cycling site with 275k MAU would be in the very lowest category, where compliance means things like 'having a content moderation function to review and assess suspected illegal content'. In practice, that means having a report button.
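
For a concrete (and hedged) picture of what that lowest-tier 'content moderation function' could amount to, here is a minimal sketch in Python: a report-button handler that queues suspected illegal content for human review. The storage layout and function names are illustrative assumptions, not anything taken from LFGSS or the Ofcom code.

```python
# Sketch of a "content moderation function": a report button that queues
# suspected illegal content for a moderator to review. Table and function
# names are illustrative only.
import sqlite3

def init_db(conn: sqlite3.Connection) -> None:
    conn.execute("""
        CREATE TABLE IF NOT EXISTS reports (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            post_id INTEGER NOT NULL,
            reporter_id INTEGER NOT NULL,
            reason TEXT NOT NULL,
            created_at TEXT DEFAULT CURRENT_TIMESTAMP,
            reviewed INTEGER DEFAULT 0
        )
    """)

def report_post(conn, post_id: int, reporter_id: int, reason: str) -> None:
    # Called when a user presses the report button on a post.
    conn.execute(
        "INSERT INTO reports (post_id, reporter_id, reason) VALUES (?, ?, ?)",
        (post_id, reporter_id, reason),
    )
    conn.commit()

def pending_reports(conn):
    # Moderation queue: everything not yet reviewed, oldest first.
    return conn.execute(
        "SELECT id, post_id, reason, created_at FROM reports "
        "WHERE reviewed = 0 ORDER BY created_at"
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    init_db(conn)
    report_post(conn, post_id=42, reporter_id=7, reason="suspected illegal content")
    print(pending_reports(conn))
```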

2. orf+Dw 2024-12-16 20:29:35
>>owisd+uv
This: OP seems to be throwing the baby out with the bathwater.

I'm surprised they don't already have some form of report/flag button.

3. codazo+5G 2024-12-16 21:29:01
>>orf+Dw
I’m not so sure. It’s a layman’s interpretation, but I think any “forum” would be multi-risk.

That means you need to do CSAM scanning if you accept images, CSAM URL scanning if you accept links, and there’s a lot more than that to parse here.
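
To make the URL-scanning point concrete, here is a rough sketch of how a forum might check user-posted links against a blocklist supplied by a trusted body such as the IWF. Everything here, the file format, the normalisation rules and the helper names, is an assumption for illustration; real lists come with their own distribution and matching requirements.

```python
# Hedged sketch of CSAM URL scanning for user-submitted links: normalise each
# URL and compare a hash of it against a locally held blocklist. The blocklist
# format (one SHA-256 hex digest per line) is assumed for illustration.
import hashlib
from urllib.parse import urlsplit

def normalise(url: str) -> str:
    # Lowercase scheme and host, drop the fragment, strip a trailing slash.
    parts = urlsplit(url.strip())
    path = parts.path.rstrip("/") or "/"
    return f"{parts.scheme.lower()}://{parts.netloc.lower()}{path}"

def load_blocklist(path: str) -> set[str]:
    with open(path) as fh:
        return {line.strip() for line in fh if line.strip()}

def is_blocked(url: str, blocklist: set[str]) -> bool:
    digest = hashlib.sha256(normalise(url).encode("utf-8")).hexdigest()
    return digest in blocklist

def scan_post(text: str, blocklist: set[str]) -> list[str]:
    # Flag any whitespace-separated token that looks like a link and matches.
    return [tok for tok in text.split()
            if tok.startswith(("http://", "https://")) and is_blocked(tok, blocklist)]
```

Image scanning works along similar lines, but against perceptual-hash lists (e.g. PhotoDNA-style hashes) rather than URL hashes.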

4. IanCal+LM1 2024-12-17 09:47:09
>>codazo+5G
I doubt it. While it's always a bit of a grey area, the example given for "medium risk" is a site with 8M monthly users where images are shared, which has no proactive scanning and has been warned by multiple major organisations that it has been used a few times to share CSAM.

The cases where they expect you to assess yourself as "medium risk" even without evidence of it happening are when you have several of the major risk factors:

> (a) child users; (b) social media services; (c) messaging services; (d) discussion forums and chat rooms; (e) user groups; (f) direct messaging; (g) encrypted messaging.

Also, before someone picks out a specific subset of these and argues that those particular factors are benign:

> This is intended as an overall guide, but rather than focusing purely on the number of risk factors, you should consider the combined effect of the risk factors to make an overall judgement about the level of risk on your service

And frankly, if you have image sharing, groups, direct messaging, encrypted messaging, child users, a decent volume of traffic and no automated processes for checking content, you probably do have CSAM and grooming on your service, or there is clearly a risk of it happening.
