Now we're just relying on 'I'll know it when I see it'.
LLMs as AGI isn’t about looking at the mechanics and trying to see if we think that could cause AGI - it’s looking at the tremendous results and success.
It's not hard if you can actually reason your way through a problem instead of just randomly dumping words and facts into a coherent sentence structure.
LLMs are not AIs, but they could be a core component of one.