zlacker

1. myrmid+(OP)[view] [source] 2026-01-20 13:42:52
> If you had enough paper and ink and the patience to go through it, you could take all the training data and manually step through and train the same model.

But you could make the exact same argument for a human mind? (could just simulate all those neural interactions with pen and paper)

The only way to get out of it is to basically admit magic (or some other metaphysical construct with a different name).

replies(2): >>encycl+72 >>cess11+g3
2. encycl+72[view] [source] 2026-01-20 13:56:38
>>myrmid+(OP)
> But you could make the exact same argument for a human mind?

It would be an argument, and you are free to make it. What the human mind is remains an open scientific and philosophical problem that many are working on.

The point is that LLMs are NOT the same, because we DO know what LLMs are. Please see the myriad of 'write an LLM from scratch' tutorials.
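(For illustration, not from the thread: a minimal Python/NumPy sketch of the kind of single training step those tutorials build up to, here for a toy bigram model. The vocabulary size, tokens, and learning rate are made up; the point is only that every operation is plain arithmetic you could, in principle, grind through on paper.)

    # Toy bigram "language model": one deterministic training step.
    # (Hypothetical sketch, not any particular tutorial's code.)
    import numpy as np

    vocab_size = 5
    rng = np.random.default_rng(0)
    W = rng.normal(size=(vocab_size, vocab_size))  # logits for P(next | current)

    def train_step(W, context, target, lr=0.1):
        logits = W[context]                    # scores for the next token
        probs = np.exp(logits - logits.max())  # softmax, numerically stable
        probs /= probs.sum()
        loss = -np.log(probs[target])          # cross-entropy for this pair
        grad = probs.copy()
        grad[target] -= 1.0                    # d(loss)/d(logits)
        W[context] -= lr * grad                # plain gradient-descent update
        return loss

    print(train_step(W, context=2, target=4))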

replies(1): >>myrmid+K5
3. cess11+g3[view] [source] 2026-01-20 14:05:59
>>myrmid+(OP)
I'm not so sure "a human mind" is the kind of Newtonian clockwork thingamabob you "could just simulate" within the same degree of complexity as the thing you're simulating, at least not without some sacrifices.
4. myrmid+K5[view] [source] [discussion] 2026-01-20 14:22:16
>>encycl+72
We do know that they are different, and that there are some systematic shortcomings in LLMs for now (e.g. no mechanism for online learning).

But we have no idea how many "essential" differences there are (if any!).

Dismissing LLMs as avenues toward intelligence just because they are simpler and easier to understand than our minds is a bit like looking at a modern phone from a 19th-century point of view and dismissing the notion that it could be "just a Turing machine": sure, the phone is infinitely more complex, but at its core the two are the same regardless.
