So it's not so much about his incorrect predictions as the fact that those predictions were derived from a core belief. And when the predictions turned out to be false, he adjusted the predictions rather than the core belief.
So it's natural to ask: if none of the predictions you derive from your core belief come true, maybe the core belief itself isn't true.
if the "core belief" is that the LLM architecture cannot be the way to AGI, that is more of an "educated bet", which does not get falsified when LLMs improve but still suggest their initial faults. If seeing that LLMs seem constrained in the "reactive system" as opposed to a sought "deliberative system" (or others would say "intuitive" vs "procedural" etc.) was an implicit part of the original "core belief", then it still stands in spite of other improvements.
Rinse and repeat.
After a while you question whether LLMs are actually a dead end.
As I said, it will depend on whether the examples in question were actually a substantial part of the "core belief".
For example: "But can they perform procedures?" // "Look at that now" // "But can they do it structurally? Consistently? Reliably?" // "Look at that now" // "But is that reasoning integrated or external?" // "Look at that now" // "But is their reasoning fully procedurally vetted?" (etc.)
I.e.: is the "progress" (which would count as the "anomaly" in scientific prediction) part of the "substance" or part of the "form"?