The easy part is done, but the hard part is so hard that it takes years to make progress.
There is also no guarantee of continued progress toward a breakthrough.
We have been through several "AI winters" before, where a promising new technology was discovered, people in the field were convinced the breakthrough was just around the corner, and it never came.
LLMs aren't quite the same situation, since they do have some undeniable utility for a wide variety of people even without AGI springing out of them. But the blind optimism that progress will surely continue at a rapid pace until the assumed breakthrough is realized feels a lot like the hype cycles that preceded past AI winters.
Yeah, remember when we spent 15 years (~2000 to ~2015) calling it “machine learning” because AI was a bad word?
We use so much AI in production every day, but nobody notices, because as soon as a technology becomes useful, we stop calling it AI. Then it’s suddenly “just face recognition” or “just product recommendations” or “just [plane] autopilot” or “just adaptive cruise control”, etc.
You know a technology isn’t practical yet because it’s still being called AI.
One of the first humanoid robots was an 18th-century clockwork mechanism inside a porcelain doll that autonomously wrote out "Cogito Ergo Sum" in cursive with a pen. It was considered thought-provoking at the time because it implied that someday machines could think.
BBC video posted to reddit 10 years ago: https://www.reddit.com/r/history/s/d6xTeqfKCv
> It was considered thought provoking at the time because it implied that some day machines could think.
What constitutes "thinking"? That's approximately the same question as what qualifies as AGI. LLMs and RL seem to be the first time humanity has achieved anything that begins to resemble it, but clearly both of those come up short ... at least so far.
Meanwhile, I'm quite certain that a glorified PID loop (i.e. autopilot) does not qualify as machine learning (AI, if you'd prefer). If someone wants to claim that it does, then he's going to need to explain how his definition excludes mechanical clockwork.
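For reference, the entire controller is a few lines of arithmetic with hand-tuned constants. A minimal sketch in Python (the gains and altitude numbers are made up for illustration, not taken from any real autopilot):

```python
# Minimal PID controller sketch. The gains kp, ki, kd are fixed
# constants chosen by a human up front; nothing here adapts or "learns".
class PID:
    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint: float, measured: float, dt: float) -> float:
        error = setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        # Output is a fixed linear combination of the error terms.
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# e.g. a toy "altitude hold" loop: correct toward 10,000 ft every 0.1 s
pid = PID(kp=0.5, ki=0.05, kd=0.1)
command = pid.update(setpoint=10_000.0, measured=9_950.0, dt=0.1)
```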
And I think the point is that the definition doesn’t exclude purely mechanical devices, since that’s exactly what a computer is.
> It isn’t thinking about your conversation while you go take a poo.
The commercial offerings for "reasoning" models can easily run for 10 to 15 minutes before spitting out an answer. As to whether or not what they're doing counts as "thinking" ...
> the definition doesn’t exclude pure mechanical devices since that’s exactly what a computer is.
By the same logic a songbird or even a human is also a mechanical device. What's your point?
I never said anything about excluding mechanical devices. I referred to "mechanical clockwork", meaning a mechanical pocket watch or similar. If the claim is that autopilot qualifies as AI, then I want to know how that gets squared with a literal pocket watch not being AI.
Tell me you don’t know how AI works without telling me you don’t know how AI works. After it sends you an output, the AI stops doing anything. Your conversation sits resident in RAM for a bit, but there is no more processing happening.
It is waiting until you give it feedback ... some might say it is a loop ... a feedback loop ... that continues until the output has reached the desired state ... kinda sounds familiar ... like a PID loop where the human is the controller ...
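Spelled out, the analogy looks something like this (a hypothetical sketch; `model` stands in for whatever chat-completion call you like, not a real API):

```python
# The conversation as a feedback loop: the human plays the controller,
# judging the "error" and issuing corrections until the output converges
# on the desired state.
def converse(model, prompt: str) -> str:
    output = model(prompt)
    while True:
        feedback = input(f"{output}\nFeedback (empty = done): ")
        if not feedback:                   # desired state reached
            return output
        prompt = prompt + "\n" + feedback  # correction fed back in
        output = model(prompt)             # next iteration of the loop
```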
> To claim that an LLM is equivalent to a PID loop is utterly ridiculous.
Is it? It looks like one to me.
> By that logic a 747 is "basically a glorified lawn mower".
I don’t think a 747 can mow lawns, but I assume it has the horsepower to do it with some modifications.