The act is intentionally very vague and broad.
Generally, the gist is that it's up to the platforms themselves to assess and identify risks of "harm", implement safety measures, keep records and run audits. The guidance on what that actually requires is very loose, but in practice it could mean stringent age verification, proactive and effective moderation, and thorough assessment of all your algorithms.
If you're ever investigated, it will be up to a regulator to decide whether your measures were adequate or you've been found lacking.
This means you might need to spend significant time making sure your platform can't allow "harm" to happen, and perhaps money on lawyers to review your "audits".
The repercussions of being found wanting can be harsh, so one has to ask: is it still worth risking it all to run that online community?
This is the problem with many European (and, I guess, also UK) laws.
GDPR is one notable example. Very few organisations actually comply with it properly. Hidden "disagree" options in cookie pop-ups and unauthorized data transfers to the US are almost everywhere, not to mention the "see personalized ads or pay" business model.
Unlike most American laws, GDPR is enforced by regulators rather than through privately initiated lawsuits, where the suing party has an incentive to dig up as much dirt as possible. In effect, you only get punished if you either really go overboard or are a company the EU dislikes (which, at this point, is honestly mostly just Meta).