And there's a fact here that's very hard to dispute: this method works. I can give a computer instructions and it "understands" them in a way that wasn't possible before LLMs. The main debate now is over the semantics of words like "understanding" and whether an LLM is conscious in the same way a human being is (it isn't).
The problem is that there is a whole range of "smart" activities humans perform without being conscious of them.
- Walking, riding a bike, or typing on a keyboard happen fluidly without conscious planning of each muscle movement.
- You can finish someone's sentence, or detect that a sentence is grammatically wrong, often without being able to explain the rule.
- When you enter a room, your brain rapidly identifies faces, furniture, and objects without you consciously thinking, "That is a table," or "That is John."