It is us, the users of the LLMs, who need to learn from those mistakes.
If you prompt an LLM and it makes a mistake, you have to learn not to prompt it in the same way in the future.
It takes a lot of time and experimentation to find the prompting patterns that work.
My current favorite tactic is to dump sizable amounts of example code into the model every time I use it. I find this works extremely well: I'll take code I wrote previously that accomplishes a similar task, drop it in, and describe what I want it to build next.
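In practice that can be as simple as concatenating the old code with the new request before sending it off. Here is a minimal sketch of that pattern; the `build_prompt` helper, the file path, and the `call_llm` stand-in are all hypothetical, not part of any particular API.

```python
from pathlib import Path

def build_prompt(example_paths, task_description):
    # Hypothetical helper: lead with previously written code as examples,
    # then state the new task in plain language.
    parts = []
    for path in example_paths:
        code = Path(path).read_text()
        parts.append(f"Example code that solves a similar problem ({path}):\n{code}")
    parts.append(
        f"Using the same style and conventions as the examples above, {task_description}"
    )
    return "\n\n".join(parts)

prompt = build_prompt(
    ["scrapers/fetch_weather.py"],  # hypothetical path to earlier, working code
    "write a scraper that fetches tide times and saves them as JSON.",
)

# call_llm() stands in for whatever model API or chat window you use (hypothetical):
# response = call_llm(prompt)
```

The point is less the code than the ordering: the model sees concrete, working examples of my style and conventions first, and only then the description of what to build.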