That's definitely not true.
Under some circumstances, LLMs can spit out large chunks of the original content verbatim. That means this can actively leak the contents of a confidential discussion into a completely different context, a risk that does not exist with spam scanning.