I use LLMs for coding every day. There have been significant improvements over the years, but mostly along a single dimension: mapping human language to code. That capability is now robust, but you still have to manage context carefully to keep them focused, and I still have to direct them explicitly toward concerns like performance and architecture.
I'm not convinced that they can reason effectively (see the ARC-AGI-2 benchmarks). That doesn't mean they aren't useful, but they have their limits. I suspect we still need to discover technology distinct from LLMs to get closer to what a human brain does.