Tough pill to swallow for some, but there is no difference between irrational demands made by the government of the UK and those made by, say, North Korea. It's everyone's choice which side of history they'd like to be on.
Oh well, I'll stick to HN.
> A statement from the US Department of State that it does not believe the law applies to American entities and a commitment to defend them against it.
We need legislation that tackles the various issues this legislation aims to tackle but in much more targeted ways. It needs to specifically target the biggest social media companies (Meta, X, Reddit, etc). Smaller forums are irrelevant. If your algorithm recommends self-harm content to kids who then go on to kill themselves it is right that you should be held responsible. "We're so big we can't be expected to police the content we host" should not be an acceptable argument.
For the record, I only host it for myself, so I'm pretty sure I wouldn't have received any of the legal protections that the OSA is now stripping away, and thus geoblocking the UK wouldn't matter. But if there's something else I'm missing, please let me know.
I also recognize the potential for legal action or criminal charges. Those are great opportunities to fight these tyrants in one of the few battlefields available to us.
As Martin Luther King, Jr. said:
"I submit that an individual who breaks a law that conscience tells him is unjust, and willingly accepts the penalty by staying in jail to arouse the conscience of the community over its injustice, is in reality expressing the very highest respect for law."
Now, it may be that Lobsters simply doesn't have the highest respect for the law, or perhaps they value their community's survival over leading by principle and securing a bright future for posterity. Short-sighted thinking, but I understand the motivation.
My comment is a call for people to consider their priorities with respect to the world we leave behind us. Taking the easy way out and avoiding conflict with an encroaching global authoritarian movement is not going to fix anything, and contributes toward an ever-darker future.
This is not a jab at Lobsters, and I am glad to see them at least trying something before simply geoblocking the UK. But what if geoblocking isn't enough? What if knowing a VPN-enabled user is from the UK but not banning them is still grounds for a lawsuit or criminal charges?[0] What if more countries join in?
[0] This leads to the ironic outcome that the UK becomes less represented in the next generation of online discourse and slides further into backwater obscurity, despite being home to such rich culture and academia.
A lot of people don't really like the toxic discussions that crypto usually tends to devolve into. So it makes sense to block the browser if you don't want those people on your server.
A little while back there was the story [0] of a Mastodon admin who hated CloudFlare and its centralized protection, but found that he had no choice but to sign up for it anyway because a disgruntled user kept launching DDoS attacks and he had no other way to keep his instance online. A bunch of people here and elsewhere kept unhelpfully replying that, “you don’t need CloudFlare, you could just do [incredibly convoluted and time-consuming solution] instead”, and all of those people were missing the point: CloudFlare is “set it and forget it”, which is a non-negotiable requirement for anything which is run as a hobby instead of a full-time job.
It’s the same with this UK law: yes, you could spend weeks of your life learning the intricacies of laws in some other country, or you could just block them and be done with it. Businesses which might need revenue from UK users will do the former, but if I’m running a site out of my own time and money, I’ll do the latter. And I don’t want hobby sites to have to disappear: the Internet is commercialized enough as it is, and regulating passion projects out of existence would kill the last remaining independent scraps.
[0]: >>21719793
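For what it's worth, the "just block them and be done with it" option really is only a few lines of code. A minimal sketch, assuming a Flask app sitting behind Cloudflare with IP Geolocation enabled (so the CF-IPCountry header is set on each request); if you aren't behind a proxy you'd do the same check against a local GeoIP database instead:

    # Minimal UK geoblock sketch for a Flask app behind Cloudflare.
    # Assumes Cloudflare's IP Geolocation feature is on, which adds a
    # CF-IPCountry header to every proxied request.
    from flask import Flask, request

    app = Flask(__name__)

    BLOCKED_COUNTRIES = {"GB"}  # ISO 3166-1 alpha-2 code for the UK

    @app.before_request
    def geoblock():
        country = request.headers.get("CF-IPCountry", "").upper()
        if country in BLOCKED_COUNTRIES:
            # 451 Unavailable For Legal Reasons
            return ("This site is not available in your country.", 451)

    @app.route("/")
    def index():
        return "Hello, rest of the world."

It obviously does nothing about VPN users, which is exactly the open question raised upthread.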
Ah, yes, my American brain made the association too easily. Regardless, it doesn't alter my point much, in that the US/UK relationship is still more friendly than North Korea. Of course, it may not stay that way.
> Now, it may be that Lobsters simply doesn't have the highest respect for the law, or perhaps they value their community's survival over leading by principle and securing a bright future for posterity. Short-sighted thinking, but I understand the motivation.
This argument seems to dismiss the fact that the original thread started with a statement that they do not have the financial stability to challenge such a law. Sure, they could go bankrupt on principle, but what effect would such a small fry really have in the global geopolitical environment? I just don't think they have a stage that's anything close to the size of MLK's.
https://www.ofcom.org.uk/siteassets/resources/documents/onli...
It defines “large service” as “a service that has more than 7 million monthly active United Kingdom users”. That is roughly 10% of the UK population. If your service isn’t a household name, it mostly doesn’t apply to you, but the language they use makes it seem like it applies far more broadly.
As far as I can tell it's just another browser that blocks a lot of internet crap.
Extradition treaties, such as the UK–US Extradition Treaty of 2003, allow for extradition requests between the two countries.
They generally require that the alleged offense be a crime in both jurisdictions ("dual criminality") which ensures that individuals are not extradited for actions that are not considered crimes in their home country.
Also incidents like this:
https://lobste.rs/s/zp4ofg/lobster_burntsushi_has_left_site
This kind of stuff gives me hugbox vibes; I would not feel safe there. I'm somewhat sure some of the moderators use the website as personal political leverage.
It's a connected world, so online activities are probably pretty grey areas legally. If you defraud someone (for example) but they're in another country, where did the crime happen?
They can ask ISPs to do the censorship if they really want to keep us “safe”.
Then again, I actively go out of my way to be toxic on the internet, so maybe they have a point
> 1. Individual accountable for illegal content safety duties and reporting and complaints duties
> 2. Written statements of responsibilities
> 3. Internal monitoring and assurance
> 4. Tracking evidence of new and increasing illegal harm
> 5. Code of conduct regarding protection of users from illegal harm
> 6. Compliance training
> 7. Having a content moderation function to review and assess suspected illegal content
> 8. Having a content moderation function that allows for the swift take down of illegal content
> 9. Setting internal content policies
> 10. Provision of materials to volunteers
> 11. (Probably this because of file attachments) Using hash matching to detect and remove CSAM
> 12. (Probably this, but could implement Google Safe Browsing) Detecting and removing content matching listed CSAM URLs
A lot of those sound scary to deal with but upon closer look don't actually seem like much of a burden. Here's what I concluded when I looked into this back then.
First, #2, #4, #5, #6, #9, and #10 only apply to sites that have more than 7 000 000 monthly active UK users or are "multi-risk". Multi-risk means being at medium to high risk in at least two different categories of illegal/harmful content. The categories of illegal/harmful content are terrorism, child sexual exploitation or abuse, child sex abuse images, child sex abuse URLs, grooming, encouraging or assisting suicide, and hate.
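To make that concrete, here is roughly how I read the applicability rule, written out as code; the thresholds and category names are my paraphrase of the Ofcom document, not official terminology:

    # Rough sketch of when the extra duties (#2, #4, #5, #6, #9, #10) kick in:
    # the service is "large" or it is "multi-risk". My paraphrase of the
    # guidance, not official terminology.
    RISK_CATEGORIES = [
        "terrorism",
        "child sexual exploitation or abuse",
        "child sex abuse images",
        "child sex abuse URLs",
        "grooming",
        "encouraging or assisting suicide",
        "hate",
    ]

    LARGE_SERVICE_THRESHOLD = 7_000_000  # monthly active UK users

    def is_large(monthly_uk_users: int) -> bool:
        return monthly_uk_users > LARGE_SERVICE_THRESHOLD

    def is_multi_risk(risk_levels: dict[str, str]) -> bool:
        """risk_levels maps a category to 'low', 'medium', or 'high'."""
        elevated = [c for c in RISK_CATEGORIES
                    if risk_levels.get(c, "low") in ("medium", "high")]
        return len(elevated) >= 2

    def extra_duties_apply(monthly_uk_users: int, risk_levels: dict[str, str]) -> bool:
        return is_large(monthly_uk_users) or is_multi_risk(risk_levels)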
Most smaller forums that are targeting particular subjects or interests probably won't be multi-risk. But for the sake of argument let's assume a smaller forum that is multi-risk and consider what is required of them.
#1 means having someone who has to explain and justify to top management what the site is doing to comply.
#2 means written statements saying which senior managers are responsible for the various things needed for compliance.
#3 is not applicable. It only applies to services that are large (more than 7 000 000 active monthly UK users) and are multi-risk.
#4 means keeping track of evidence of new or increasing illegal content and informing top management. Evidence can come from your normal processing, like dealing with complaints, moderation, and referrals from law enforcement.
Basically, keep some logs and stats and look for trends, and if any are spotted bring it up with top management. This doesn't sound hard.
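As a sketch of how lightweight #4 can be in practice, assuming you already log moderation reports with a date and a category (the field names and the doubling heuristic are mine, not from the guidance):

    # Toy sketch of #4: count reports per category per month and flag
    # anything clearly trending up, to be passed on to "top management".
    from collections import defaultdict

    def monthly_counts(reports):
        """reports: iterable of (report_date: datetime.date, category: str)."""
        counts = defaultdict(lambda: defaultdict(int))
        for report_date, category in reports:
            counts[category][(report_date.year, report_date.month)] += 1
        return counts

    def flag_increases(counts, this_month, last_month, factor=2):
        """Return categories whose report count grew by `factor` or more month over month."""
        flagged = []
        for category, by_month in counts.items():
            prev = by_month.get(last_month, 0)
            curr = by_month.get(this_month, 0)
            if curr > 0 and curr >= factor * max(prev, 1):
                flagged.append((category, prev, curr))
        return flagged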
#5 You have to have something that sets the standards and expectations for the people who will be dealing with all this. This shouldn't be difficult to produce.
#6 When you hire people to work on or run your service you need to train them to do it in accord with your approach to complying with the law. This does not apply to people who are volunteers.
#7 and #8 These cover what you should do when you become aware of suspected illegal content. For the most part I'd expect sites could handle it like they handle legal content that violates the site's rules (e.g., spam or off-topic posts).
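In code terms that is just an ordinary report queue with a priority lane; a toy sketch with entirely hypothetical names:

    # Toy sketch of #7/#8: suspected illegal content goes through the same
    # review-and-remove flow as spam reports, but is triaged first and the
    # decision is logged. All names are hypothetical.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class Report:
        post_id: int
        reason: str              # e.g. "spam", "off-topic", "suspected-illegal"
        taken_down: bool = False
        log: list = field(default_factory=list)

    def review(report: Report, moderator: str, remove: bool) -> None:
        if remove:
            report.taken_down = True  # a real forum would hide the post here
        report.log.append((datetime.now(timezone.utc).isoformat(), moderator, remove))

    def triage(queue: list[Report]) -> list[Report]:
        """Put suspected illegal content at the front so takedown is swift."""
        return sorted(queue, key=lambda r: r.reason != "suspected-illegal")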
#9 You need a policy that states what is allowed on the service and what is not. This does not seem to be a difficult requirement.
#10 You have to give volunteer moderators access to materials that let them actually do the job.
#11 This only applies to (1) services with more than 7 000 000 monthly active UK users that have at least a medium risk of image-based CSAM, or (2) services with a high risk of image-based CSAM that either have at least 700 000 monthly active UK users or are a "file-storage and file-sharing service".
A "file-storage and file-sharing service" is:
> A service whose primary functionalities involve enabling users to:
> a) store digital content, including images and videos, on the cloud or dedicated server(s); and
> b) share access to that content through the provision of links (such as unique URLs or hyperlinks) that lead directly to the content for the purpose of enabling other users to encounter or interact with the content.
#12 Similar to #11, but without the "file-storage and file-sharing service" part, so only applicable if you have at least 700 000 monthly active UK users and are at a high risk of CSAM URLs or have at least 7 000 000 monthly active UK users and at least a medium risk of CSAM URLs.
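For the sites where #11/#12 do apply, the mechanics are conceptually simple. A deliberately naive sketch using exact SHA-256 hashes and in-memory blocklists; real deployments use perceptual hash lists and URL lists supplied by bodies like the IWF/NCMEC (or a service such as Google Safe Browsing), none of which are shown here:

    # Naive sketch of #11 (hash matching on uploads) and #12 (URL list
    # matching). The two sets stand in for hash/URL lists that in reality
    # come from organisations like the IWF and are matched with perceptual
    # hashing rather than plain SHA-256.
    import hashlib
    import re

    KNOWN_BAD_HASHES: set[str] = set()   # hex SHA-256 digests (placeholder)
    KNOWN_BAD_URLS: set[str] = set()     # normalised URLs (placeholder)

    URL_RE = re.compile(r"https?://\S+")

    def upload_is_blocked(file_bytes: bytes) -> bool:
        """#11: reject an attachment whose hash is on the known-bad list."""
        return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_HASHES

    def post_has_blocked_url(text: str) -> bool:
        """#12: reject a post containing a URL on the known-bad list."""
        return any(url.rstrip(".,)") in KNOWN_BAD_URLS
                   for url in URL_RE.findall(text))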
The UK–US extradition treaties have always been one-way.
I thought extradition laws usually had some kind of clause like "only applies if it'd be a crime in both jurisdictions", but now that I think about it I can't remember where I heard that or why I believe it, so I will admit complete ignorance.
[1.] https://github.com/lobsters/lobsters-ansible/issues/45#issue...
You can also come back under the same account, which is what I did. I emailed dang, and promised to respect the site's guidelines this time.
Lobste.rs on the other hand has terrible moderation, with moderators arguably being from one political clique. You can find details about the ridiculous ban they issued on my account in this X thread: https://x.com/sridca/status/1751586241110519837
pushcx then tried to argue with me here after 3 months, lying about the ban: https://news.ycombinator.com/item?id=40166912#40176320
I’m not entirely sure I agree with this sentiment. It certainly isn’t true from a legal standpoint in America, where Section 230 explicitly absolves you of any such responsibility. I also don’t think most of the objections to the OSA center around a need to remove child pornography, but rather around the fact that you are forced to employ technological measures that don’t currently exist in order to remove it. All of this is a little beside the point though because…
> If you can't deal with those responsibilities you can't run the website.
I absolutely can. If the law is unreasonable, I can block all users from the host country, and keep running my website. Which is exactly what Lobsters and a lot of other people are choosing to do.
Extraditing someone because their website uses encryption would force them to prosecute so many people and organizations it would be a joke. You would have to really piss them off.
[1] diverse in the only way which really matters, this being diversity of opinion. On the internet nobody knows you're a dog after all.
The requirements will be modified to include a larger number of sites whenever the government finds it convenient. The MAU limit will be reduced, and/or the scope of "illegal/harmful content" will be expanded.
The scanning for CSAM only applies to (1) large sites that are at medium or high risk for image-based CSAM and (2) services that are at high risk of image-based CSAM and either have more than 700k monthly UK users or are file-storage or file-sharing services.
I might have missed it, but the only age verification requirement I'm seeing is this: if the site is a large service with a medium risk of grooming, or it has a high risk of grooming and already has a means to determine users' ages, then it has to do things like not recommending connections between adults and children when suggesting new contacts, and not allowing adults to send unsolicited DMs to children.
A small to medium sized forum that is already doing the kind of monitoring and moderation you have to do to keep your site from being completely overrun with spam and off-topic material shouldn't really have to make many changes. Mostly it will just be writing some documentation.