Much of this boils down to doing a risk assessment and deciding on mitigations.
Unfortunately we live in a world where if you allow users to upload and share images, with zero checks, you are disturbingly likely to end up hosting CSAM.
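To give an idea of what even a basic check looks like: larger platforms hash every upload and compare it against shared lists of known CSAM (lists maintained by organisations like the IWF and NCMEC, usually matched with perceptual hashes such as PhotoDNA or PDQ so re-encoded copies still hit). A minimal sketch of the idea, assuming you have obtained such a hash list - the file name and the use of SHA-256 instead of a perceptual hash are simplifications for illustration:

    import hashlib

    # Hypothetical block list: hashes of known illegal images from a
    # hash-sharing programme. Real deployments use perceptual hashes
    # (PhotoDNA, PDQ) rather than SHA-256, so near-duplicates also match;
    # SHA-256 keeps this sketch self-contained.
    with open("known_hashes.txt") as f:  # assumed file, one hex hash per line
        KNOWN_BAD_HASHES = {line.strip() for line in f if line.strip()}

    def is_known_bad(image_bytes: bytes) -> bool:
        """Return True if the uploaded bytes exactly match the block list."""
        return hashlib.sha256(image_bytes).hexdigest() in KNOWN_BAD_HASHES

    def handle_upload(image_bytes: bytes) -> str:
        # Reject (and, in a real service, report) a match before the file
        # is ever stored or made shareable.
        if is_known_bad(image_bytes):
            return "rejected"
        return "accepted"

This only catches known material, of course - it doesn't remove the need for the risk assessment and ongoing moderation the Ofcom guidance describes.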
Ofcom have guides, risk assessment tools and more; if you think any of this is relevant to you, that's a good place to start.
https://www.ofcom.org.uk/online-safety/illegal-and-harmful-c...
If I ran a small forum in the UK I would shut it down - not worth the risk of jail time for getting it wrong.
terrorism
child sexual exploitation and abuse (CSEA) offences, including
grooming
image-based child sexual abuse material (CSAM)
CSAM URLs
hate
harassment, stalking, threats and abuse
controlling or coercive behaviour
intimate image abuse
extreme pornography
sexual exploitation of adults
human trafficking
unlawful immigration
fraud and financial offences
proceeds of crime
drugs and psychoactive substances
firearms, knives and other weapons
encouraging or assisting suicide
foreign interference
animal cruelty

> Something is a hate incident if the victim or anyone else think it was motivated by hostility or prejudice based on: disability, race, religion, gender identity or sexual orientation.
This probably worries platforms that need to moderate content. Sure, perhaps 80% of the cases are clear cut, but it's the 20% that get missed and turn into criminal liability that would be the most concerning. Not to mention that a post from a year ago can become criminal if someone suddenly decides it was motivated by one of these factors.
Further, the language of prejudice changes often. As bad actors get censored for using certain language, they evolve to use other words and phrases to mean the same thing. The government is far more likely to be aware of these (and to be able to prosecute them) than some random forum owner.