Thanks for this.
Even this is questionable, because what we're seeing is it filling out forms and solving leetcode problems, but no LLM has yet created a new approach, reduced existing unnecessary complexity (of which we've created mountains), or made anything truly new in general. All they seem to do is rehash millions of "mainstream" works, and AAA isn't mainstream. Cranking up the parameter count or the time spent beating around the bush (aka CoT) doesn't magically substitute for the lack of a knowledge graph with thick enough edges, so creating a next-gen AAA video game is far out of scope of an LLM's abilities. Programming-wise, they're stuck in 2020 office jobs and weekend open-source tech.
I've not spent too long thinking about the following, so I'm prepared for someone to say I'm totally wrong, but:
I feel like the services economy can be broadly broken down into: pleasure, progress, and chores. Pleasure being poetry/literature, movies, hospitality, etc.; progress being the examples you gave like science/engineering and mathematics; and chores being things humans do to coordinate or satisfy an obligation (accountants, lawyers, salesmen).
In this case, if we assume AI can deal with things not in the grey zone, then it can deal with 'progress' and many 'chores', which are massive chunks of human output. There's not much grey zone to them. (Well, there is, but there are many correct solutions: equivalent pieces of code that are all acceptable, multiple versions of a tax return each claiming different deductions that would all get past the IRS, etc.)
At the current rate of progress, I really do think that in another 6 months they'll be pretty good at tackling technical debt and overcomplication, at least in codebases that have good unit/integration test coverage or are written in very strongly typed languages with a type-friendly structure. (Of course, those usually aren't the codebases that need significant refactoring, but I think AIs are decent at writing unit tests against existing code too.)
You say that like it's nothing special! Honestly I'm still in awe at the ability of modern LLMs to do any kind of programming. It's weird how something that would have been science fiction 5 years ago is now normalised.
HOWEVER, there is a case to be made that software is an insanely powerful lever for many industries, especially AI. And if current AI gets good enough at software problems that it can improve its own infrastructure or even ideate new model architectures, then we would (in this hypothetical case) potentially reach an "intelligence explosion," which might _actually_ yield a true, generalized intelligence.
So, as a cynic, while I think the intermediary goal of many of these so-called AGI companies is just your usual SaaS automation slop, because that's the easiest industry to disrupt and extract money from (and the people at these companies only really know how software works, as opposed to having knowledge of other fields like chemistry, biology, etc.), I also think that, in theory, being a very fast and low-cost programming agent is a bit more powerful than you think.
I guess it's the age-old question of whether we really know what we are doing ("experience") or we just tumble through life and it works out because the overall system of humans interacting with each other is big enough. The current state of world politics makes me think it's the latter.
AI progress depends not just on ideation speed, but on validation speed. And validation in some fields needs to pass through the physical world, which makes it expensive, slow, and rate-limited. Hence I don't think AI can reach a singularity. That would only be possible if validation were as easy to scale as ideation.
I agree with you on construction and physical work.