Sometimes it's explicitly mentioned, but oftentimes it's hidden behind "appropriate and proportionate measures".
We should make the laws for our digital spaces for human person use cases first, not corporate person use cases. Even if it's in the sense of trying to protect humans from corporations.
https://www.ofcom.org.uk/siteassets/resources/documents/onli...
It amounts to your basic terms of service. It means that you'll need to moderate your forums and prove that you have a policy for moderation (basically what all decent forums do anyway). The crucial thing is that you need to record that you've done it and reassessed it, and prove that "you understand the 17 priority areas".
It's similar to what a trustee of a small charity is supposed to do each year for its due diligence.
https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A...
2. This Regulation does not apply to the processing of personal data: (c) by a natural person in the course of a purely personal or household activity;
You just have to make individual value judgements every day on thousands of pieces of content for SEVENTEEN highly specific priority areas.
Then keep detailed records on each value judgement such that it can hold up to legal scrutiny from an activist court official.
> Any competent forum operator is already doing all of this

What is your evidence that the record keeping described by the parent is routine among competent forum operators?
What can we do about this creeping totalitarian surveillance plutocracy?
Sweet were the 1990s, with their dream of information access for all.
Little did we know we were the information being accessed.
Sorry, very un-HN-y... maybe it's just the time of year, but this really pulls me down at the moment.
If your CCTV system captures images of people outside the boundary of your private domestic property...
Most European countries have laws about recording spaces that are not your own. They typically predate the GDPR by decades. AFAIK they are not harmonised, except for a tiny bit by the GDPR.

If I understand it correctly, this is a big difference from the USA, where you can mostly record the public space and create databases of what everyone does in public. In Europe (even outside the EU), there is a basic expectation of privacy even in public spaces. You are allowed to make short-term recordings, do journalism, and have random people accidentally wander in and out of your recording. Explicitly targeting specific people, or long-term recording, is somewhere between frowned upon and flat-out illegal.
GDPR, safeguarding, liability for the building you operate in, money laundering: there are lots of laws you are liable under.
This is similar to running a cricket club or a scout club.
When running a scout association, each session could technically require an individual risk assessment for every piece of equipment and every activity. The hall needs to be safe, and you need to prove that it's safe. There's also GDPR, safeguarding, background checks, and money laundering.
> hold up to legal scrutiny from an activist court official
It's not the USA. Activist court officials require a functioning court system. Plus, common law has the concept of reasonableness. A moderated forum will have a much higher standard of moderation than Facebook/Twitter/TikTok.
Plenty of things in UK law attract "an unlimited fine", but even that doesn't lead to people actually being fined amounts greater than all the money that's ever existed.
From the linked document above: "You need to keep a record of each illegal content risk assessment you carry out", "service providers may fully review their risk assessment (for example, as a matter of course every year)"
And links to a guidance document on reviewing the risk assessment[1] which says: "As a minimum, we consider that service providers should undertake a compliance review once a year".
[1] https://www.ofcom.org.uk/siteassets/resources/documents/onli...
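To make that record keeping concrete, here is a minimal sketch of what a stored risk-assessment record might look like for a small forum. The field names, the priority-area label, and the JSON-lines file are my own assumptions for illustration; Ofcom doesn't prescribe any particular format.

```python
# Hypothetical sketch: persisting an annual illegal-content risk assessment
# as dated JSON records. Field names and structure are illustrative only;
# nothing here is a mandated format.
import json
from dataclasses import dataclass, field, asdict
from datetime import date

@dataclass
class RiskAssessmentRecord:
    service_name: str
    assessed_on: str                      # ISO date of this assessment
    priority_area: str                    # e.g. "fraud" (labels assumed for the example)
    risk_level: str                       # "low" / "medium" / "high"
    reasoning: str                        # why you judged the risk at that level
    mitigations: list[str] = field(default_factory=list)

def save_assessments(records: list[RiskAssessmentRecord], path: str) -> None:
    """Append this year's assessment to a simple on-disk log for later review."""
    with open(path, "a", encoding="utf-8") as f:
        for record in records:
            f.write(json.dumps(asdict(record)) + "\n")

# Example usage for a small hobby forum (values invented):
save_assessments([
    RiskAssessmentRecord(
        service_name="example-woodworking-forum",
        assessed_on=date.today().isoformat(),
        priority_area="fraud and financial offences",
        risk_level="low",
        reasoning="No marketplace features; links in first posts are held for review.",
        mitigations=["new-account link filter", "report button on every post"],
    ),
], "risk_assessments.jsonl")
```

A dated file like this is also easy to revisit at the annual review mentioned in the guidance.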
I work for the latter kind of merchant, and "complexity" is not a word I would associate with VATMOSS. Here is what we've had to do to deal with VATMOSS:
• Register with the tax authority in a country that was part of VATMOSS. We registered with Ireland. We did this online via the Irish tax authority's web site. It took something like 15-30 minutes.
• Collect VAT. VAT rates are country-wide and don't change very often, so it is easy to simply have a rate table in our database. No need to integrate any third-party paid tax-processing API into our checkout process.
Once a month I run a script that uses a free API from apilayer.com to get the rates for each country and tell me if any do not match the rates in our database, but that's just because I'm lazy. :-) It's not much work to just manually search to find news of upcoming VAT rate changes.
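For illustration, that monthly rate check could look something like the sketch below. The feed URL and the shape of the JSON response are placeholders for a generic rate API (not the actual apilayer contract), and load_local_rates stands in for however you query your own rate table.

```python
# Sketch of a monthly VAT-rate sanity check: compare a remote rate feed
# against the rates stored in our own database and report mismatches.
# The URL and response shape are assumed for illustration.
import requests

RATE_FEED_URL = "https://example.com/api/vat_rates"  # placeholder endpoint

def load_local_rates() -> dict[str, float]:
    # Stand-in for a query against our own rate table, e.g.
    # SELECT country_code, standard_rate FROM vat_rates;
    return {"IE": 23.0, "DE": 19.0, "FR": 20.0}

def fetch_remote_rates() -> dict[str, float]:
    resp = requests.get(RATE_FEED_URL, timeout=30)
    resp.raise_for_status()
    # Assumed response shape: {"rates": {"IE": 23.0, "DE": 19.0, ...}}
    return resp.json()["rates"]

def report_mismatches() -> None:
    local, remote = load_local_rates(), fetch_remote_rates()
    for country, local_rate in sorted(local.items()):
        remote_rate = remote.get(country)
        if remote_rate is not None and remote_rate != local_rate:
            print(f"{country}: database says {local_rate}%, feed says {remote_rate}%")

if __name__ == "__main__":
    report_mismatches()
```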
• At the start of each quarter we have to report how much we sold and how much VAT we collected for each country. I run a script I wrote that generates a CSV file with that data from our database. We upload it to the Irish tax authority's web site and send them the total VAT. They deal with distributing the data and money to the other countries.
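The quarterly report script can be equally small. Here's a sketch assuming a hypothetical orders table with country_code, net_amount, vat_amount and order_date columns; the real schema will obviously differ.

```python
# Sketch of the quarterly VAT MOSS/OSS export: sum sales and VAT collected
# per country for the quarter and write them to a CSV for upload.
# Table and column names are hypothetical.
import csv
import sqlite3

def export_quarterly_report(db_path: str, quarter_start: str, quarter_end: str,
                            out_path: str) -> None:
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        """
        SELECT country_code, SUM(net_amount), SUM(vat_amount)
        FROM orders
        WHERE order_date >= ? AND order_date < ?
        GROUP BY country_code
        ORDER BY country_code
        """,
        (quarter_start, quarter_end),
    ).fetchall()
    conn.close()

    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["country", "net_sales", "vat_collected"])
        writer.writerows(rows)

# e.g. export_quarterly_report("shop.db", "2024-01-01", "2024-04-01", "q1_vat.csv")
```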
It was a bit more complicated before Brexit. Back then we made the mistake of picking the UK as our country to register with. Instead of providing a web-based way to do things like Ireland did, the UK made OpenOffice versions of its paper forms available for download. You could download those, edit them to contain your information, and then upload them.
1) "That's simply not true."
2) "Any competent forum operator is already doing all of this [this = record keeping requirements described by the parent]".
These two assertions seem to conflict (unless good forum OPs are doing wrong record keeping). Are you willing to take another stab at it? What does good forum op record keeping look like?
https://russ.garrett.co.uk/2024/12/17/online-safety-act-guid... has a more comprehensive translation into more normal English.
You will need to assess the risk of people seeing something from one of those categories (for speciality forums, mostly low) and think about algorithms showing it to users (again, for forums that's pretty simple). Then have a mechanism to allow people to report offending content (a rough sketch of such a mechanism follows below).
Take proportionate steps to stop people posting that stuff in the first place (pretty much the same as spam controls), and then ban offenders.
The perhaps harder part is allowing people to complain about takedowns, but adding a subforum for that is almost certainly proportionate[1].
[1] untested law, so not a guarantee
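As a rough sketch of the "report offending content" mechanism mentioned above, a minimal handler only needs to record who reported what and why, so a moderator can review it later. The framework (Flask), the route, and the field names are all assumptions for illustration, not anything the Act prescribes.

```python
# Minimal sketch of a content-report endpoint: store the report so a
# moderator can review it later. Flask, the route, and the field names
# are illustrative choices only.
from datetime import datetime, timezone

from flask import Flask, request, jsonify

app = Flask(__name__)
reports = []  # in a real forum this would be a database table

@app.post("/report")
def report_content():
    data = request.get_json(force=True)
    reports.append({
        "post_id": data["post_id"],
        "reporter_id": data.get("reporter_id"),   # may be anonymous
        "reason": data["reason"],                 # free text or a category
        "reported_at": datetime.now(timezone.utc).isoformat(),
        "status": "open",                         # a moderator closes it after review
    })
    return jsonify({"ok": True}), 201
```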
For comparison imagine there was a new law against SQL Injection. Competent forum operators are already guarding against SQL Injection because they don't want to be owned by hackers. But they likely are not writing down a document explaining how they guard against it. If they were required to make a document which writes down "all SQL data updates are handled by Django's ORM" they might then think "would OfCom think this was enough? Maybe we should add that we keep Django up to date ... actually we're running an 18 months old version, let's sign up to Django's release mailing list, decide to stay within 3-6 months of stable version, and add a git commit hook which greps for imports of SQL libraries so we can check that we don't update data any other way". They are already acting against SQL injection but this imaginary law requires them to make it a proper formal procedure not an ad-hoc thing.
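To make that imaginary commit hook concrete, a pre-commit check along these lines would flag staged Python files that import raw SQL drivers instead of going through the ORM. The module list and the hook mechanics are assumptions for the sake of the example.

```python
#!/usr/bin/env python3
# Hypothetical pre-commit hook for the imaginary SQL-injection policy above:
# block commits that import raw SQL drivers instead of using the ORM.
# Save as .git/hooks/pre-commit and make it executable.
import re
import subprocess
import sys

# Modules we assume should never be imported directly (illustrative list).
FORBIDDEN = re.compile(r"^\s*(import|from)\s+(sqlite3|psycopg2|MySQLdb|pymysql)\b")

def staged_python_files() -> list[str]:
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [f for f in out.splitlines() if f.endswith(".py")]

def main() -> int:
    bad = []
    for path in staged_python_files():
        with open(path, encoding="utf-8") as f:
            for lineno, line in enumerate(f, 1):
                if FORBIDDEN.match(line):
                    bad.append(f"{path}:{lineno}: {line.strip()}")
    if bad:
        print("Raw SQL driver imports found; use the ORM instead:")
        print("\n".join(bad))
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```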
> "What does good forum op record keeping look like?"
Good forum operators already don't want their forums to become crime cesspits, because that will ruin the experience for the target users and will add work and risk for themselves. So they will already have guards against bot signups, guards against free open image hosting, and guards against leaking users' private and personal information. They will have guards against bad behaviour, such as passive moderation where users can flag and report objectionable content, or active moderation where mods read along and intervene. If they want to guard against moderators power-tripping, they will have logs of moderation activities such as editing post content and banning accounts.

There will be web server logs and CMS / admin tool logs, which will show signups, views, and edits. They will likely have activity graphs and alerts if something suddenly becomes highly popular or spikes bandwidth use, so they can look at what's going on. If they contact the authorities there may be email or call logs of that contact, and there will be records of mod messages from users, likely not all in one place. If a forum is for people dealing with debt and bankruptcy, they might have guards against financial scams targeting users of their service, such as a sticky post warning users or a banned-words list for common scam terms - the second-hand sales site https://www.gumtree.com has a box of 'safety tips' prominently on the right warning about common scams.
Larger competent forums with multiple paid (or volunteer) employees would likely already have some of this formalised and centralised just to make it possible to work with as a team, and for employment purposes (training, firing, guarding against rogue employees, complying with existing privacy and safety regulations).
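For the moderation logs mentioned above, even a small forum can keep an append-only record of moderator actions. This is a sketch with invented names and schema; mature forum software generally ships its own moderation log already.

```python
# Sketch of an append-only moderation audit log, so "who edited/banned what,
# when, and why" can be answered later. Names and schema are illustrative.
import sqlite3
from datetime import datetime, timezone

def init_log(conn: sqlite3.Connection) -> None:
    conn.execute(
        """
        CREATE TABLE IF NOT EXISTS moderation_log (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            at TEXT NOT NULL,          -- UTC timestamp
            moderator TEXT NOT NULL,   -- who acted
            action TEXT NOT NULL,      -- e.g. 'edit_post', 'ban_account'
            target TEXT NOT NULL,      -- post id or account name
            reason TEXT                -- free-text justification
        )
        """
    )

def log_action(conn, moderator, action, target, reason=""):
    conn.execute(
        "INSERT INTO moderation_log (at, moderator, action, target, reason) "
        "VALUES (?, ?, ?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(), moderator, action, target, reason),
    )
    conn.commit()

# Example usage (values invented):
conn = sqlite3.connect("forum.db")
init_log(conn)
log_action(conn, "mod_alice", "ban_account", "spammer123", "repeated scam links")
```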
Yes, I think the new law will require forum operators to do more. I don't think it's unreasonable to require forum operators once a year to consider: is your forum at particular risk of people grooming children, inciting terrorism, scamming users, etc.? If your site is at risk, what are you doing to lower the chance of it happening and increase the chance of it being detected? And can you show OfCom that you actually are considering these things and putting relevant guards in place?
(Whether the potential fines and the vagueness/clarity are appropriate is a separate thing.)