Today's LLMs are clumsy, leaky interns: errors slip through everywhere. But we know human experts can be effectively leak-proof, so why can't LLMs get there too, becoming better at coding, at understanding your intentions, at automatically reviewing their work for deviations, and so on?
Thought experiment: if you could work well with a team of human experts just below your level, then you should be able to work well with future LLMs.