Edit:
> Most of them are product managers, software developers. … They work with the policy teams with an internal set of tools to forward links and explanations about why they need to be removed.
How far people are actually influenced and in which direction... that's anybody's guess.
Yes, all social media have open channels with law enforcement. That's because social media companies have legal obligations, and when someone contacts a moderator claiming to be a law enforcement officer working on a kidnapping or preventing a terrorist attack, needing time-sensitive help to save lives, you don't want the moderator to have to guess whether that's a real emergency or a hoax.
It's... not a secret. If you live in a democracy, you can quickly find out the name of these channels, they have websites.
Source: I've been part of a moderation team. Not on something that large, though.
You work at one of these companies for enough years and someone will accuse you of supporting terrorists eventually.
What you learn working for a multinational corporation is that as an international community, people don't agree on much. Including definitions of "terrorism," fairness, geopolitical borders, or the law.
It's a weird feeling. If you ever wonder how companies can stray so far from "obvious" morality... that's how. Things get a lot less obvious when you're in a position where everyone has an opinion and the opinions often conflict.
So to answer your question more directly... It doesn't take long for outsiders accusing you of supporting terrorism to be met (if only in your own internal filters) with "Oh you have a problem with my approach? Get in line."
(On the flip side, a lot of the training for people acting in that capacity in a big corp is how not to get phished. When you are on the front line of moderation / customer interaction / etc., bad actors will attempt to use you to compromise third parties. There's a reason there are formal processes for dealing with law enforcement, for example.)
One incident stands out because we received far more messages than I'd ever seen: it was the time we posted a news story about Netanyahu blaming a Palestinian for the Holocaust. We got several messages about what horrible lying racists we were; that much was common to all of them, but they split along one main line. About half the messages claimed Netanyahu never said what he said. The other half claimed he did say it, but that he was right.
Yes, of course. Content moderation is the expected standard when dealing with crowdsourced content, including any data coming from social media, and those decisions are essentially private, subjective calls.
I don't generally disagree with your point, but I suspect the relentless pursuit of profit above all other values figures more prominently in this narrative than cultural drift does.
There was no such thing in the Twitter files.
It was clearly shown in the Twitter files that there are many relationships reaching deep into social media companies, and that is very likely true for every larger platform.
It would be surprising if there weren't backchannels, because they have become relevant, sadly.
Sadly, if the Undertale soundtrack were aggressively Content ID'd/DMCA'd, that would have been a way to take it down. But that would penalize everyone who uploads footage of that game, so obviously that's not done.
"LERS is a system in which a verified law enforcement agent can securely submit a legal request for user data, view the status of the submitted request, and download the response submitted by Google.
If you are a sworn law enforcement agent or other government official who is authorized to issue legal process in connection with an official investigation, you may submit your request through this system."
--is not the height of transparency, though.
At this point, I think enough of us have lived through being vaccinated that this canard no longer holds water.
(Nothing is 100% safe, and the deaths due to vaccine reactions were tragic. But the vaccine was orders of magnitude safer than COVID ripping through the population unchecked.)