The easy part is done, but the hard part is so hard that progress takes years.
There is also no guarantee of continued progress to a breakthrough.
We have been through several "AI Winters" before: a promising new technology was discovered, people in the field were convinced the breakthrough was just around the corner, and it never came.
LLMs aren't quite the same situation, since they have some undeniable utility to a wide variety of people even without AGI springing out of them. But the blind optimism that progress will surely continue at a rapid pace until the assumed breakthrough is realized feels a lot like the hype cycles that preceded past AI "Winters".
Yeah, remember when we spent 15 years (~2000 to ~2015) calling it “machine learning” because AI was a bad word?
We use so much AI in production every day, but nobody notices, because as soon as a technology becomes useful we stop calling it AI. Then it's suddenly "just face recognition" or "just product recommendations" or "just [plane] autopilot" or "just adaptive cruise control" etc.
You know a technology isn’t practical yet because it’s still being called AI.
So I'd argue any algorithm that comes from control theory is not AI; those are just basic old dumb machines. You can't build planes without control theory, since humans can't keep a plane steady unaided, and the Wright brothers adding this kind of control to their plane is why they succeeded in making a flying machine.
So if autopilots are AI, then the Wright brothers developed an AI to control their plane. I don't think anyone sees that as AI, and nobody did at the time of the first flight either.
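To make "basic old dumb machine" concrete, here's a minimal sketch of a PID controller, the classic control-theory feedback loop behind things like autopilots and cruise control. The gains and the toy system are illustrative values I've picked for the example, not from any real aircraft:

```python
# A minimal PID (proportional-integral-derivative) controller: pure
# feedback arithmetic, no learning and no model of the world. kp, ki,
# kd, and the toy system below are illustrative, not from any real
# autopilot.

def make_pid(kp, ki, kd, dt):
    state = {"integral": 0.0, "prev_error": 0.0}

    def step(setpoint, measurement):
        error = setpoint - measurement
        state["integral"] += error * dt  # accumulated error (I term)
        derivative = (error - state["prev_error"]) / dt  # rate of change (D term)
        state["prev_error"] = error
        return kp * error + ki * state["integral"] + kd * derivative

    return step

# Drive a toy system (its value nudged by the controller output each
# step) toward a setpoint of 1.0; it settles near the setpoint.
pid = make_pid(kp=0.6, ki=0.1, kd=0.05, dt=0.1)
value = 0.0
for _ in range(500):
    value += pid(1.0, value) * 0.1
```

That's the whole trick: measure the error, react proportionally, and damp the response. It keeps a plane level without anything you'd be tempted to call intelligence.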