But we have no idea how many "essential" differences there are (if any!).
Dismissing LLMs as avenues toward intelligence merely because they are simpler and easier to understand than our minds is a bit like a 19th-century observer examining a modern phone and dismissing the idea that it could be "just a Turing machine": sure, the phone is vastly more complex, but at its core the two are computationally the same.