Literally several times a week, I have to close PRs with docs that clearly no one read, because they're blatantly wrong. This only started after LLMs. If what you're claiming is happening, I'm not seeing it anywhere.
AI is incapable of capturing the human context that, 99.999% of the time, lives in people's heads, not in the code or anything else the model can see. That's why it's crucial that humans write for humans, instead of an LLM putting out docs that merely have the aesthetics of looking acceptable.