The act is intentionally very vague and broad.
Generally, the gist is that it's up to the platforms themselves to assess and identify risks of "harm", implement safety measures, keep records and run audits. The guidance on what that actually requires is very loose, but examples might include stringent age verification, proactive and effective moderation, and thorough assessment of all algorithms.
If you were ever investigated, it would be up to someone else to decide whether your measures were adequate or whether you fell short.
This means you might need to spend significant time making sure your platform can't allow "harm" to happen, and perhaps money on lawyers to review your "audits".
The repercussions of being found wanting can be harsh, so one has to ask: is it still worth risking it all to run that online community?