The LLM has one job: producing code that looks plausible. That's it. No actual reasoning went into writing that bit of code, so its bugs often won't be like those a programmer makes. Instead, it can introduce a whole new class of bug that's way harder to debug.
LLMs are way faster than me at writing tests: just prompt for the kind of test you want.
I can and do use AI to help with test coverage, but coverage is pointless if you don't catch the interesting edge cases.
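To make that concrete, here's a minimal Python sketch (the `average` function and its tests are hypothetical, just for illustration) of how 100% line coverage can still miss the bug that matters:

```python
def average(nums):
    # Bug: crashes with ZeroDivisionError on an empty list.
    return sum(nums) / len(nums)

# This single test executes every line of average(),
# so a coverage tool reports 100%...
assert average([2, 4, 6]) == 4

# ...but only an edge-case test actually exposes the bug.
try:
    average([])
    print("no crash on empty input")
except ZeroDivisionError:
    print("edge case found: empty input crashes")
```

An LLM will happily generate dozens of tests like the first one; the second kind is the one worth prompting (and thinking) for.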