zlacker

[return to "A non-anthropomorphized view of LLMs"]
1. dtj112+AR[view] [source] 2025-07-07 08:31:42
>>zdw+(OP)
It's possible to construct a similar description of whatever it is that the human brain is doing that clearly fails to capture the fact that we're conscious. Take a cross section of every nerve feeding into the human brain at a given time T; the action potentials across those cross sections can be embedded in R^n. Take the history of those action potentials across the lifetime of the brain and you get a continuous path through R^n that maps roughly onto your subjectively experienced personal history, since your brain necessarily builds your experienced reality from this signal data moment to moment. Now take the cross sections of every nerve feeding OUT of your brain at time T: you get another set of action potentials, embeddable in R^m, which partially determines the state of the R^n embedding at time T + delta.

This is not meaningfully different from the higher-dimensional game of snake described in the article. It more or less reduces the experience of being a human to 'next nerve impulse prediction', but it obviously fails to capture the significance of the computation that determines what the next output should be.
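To make the loop concrete, here's a minimal sketch of the framing above (mine, not anything from the article): inputs embedded in R^n, outputs in R^m, a black-box "brain" mapping input history to the next output, and an environment in which that output only partially determines the next input. The dimensions and the names brain_step / environment_step are arbitrary placeholders, and the linear readout stands in for whatever computation a real brain does.

```python
import numpy as np

n, m = 1_000, 200                 # sizes of the input / output cross sections (arbitrary)
rng = np.random.default_rng(0)
W = rng.standard_normal((m, n)) * 0.01   # fixed, arbitrary "brain" parameters

def brain_step(input_history):
    # "Next nerve impulse prediction": map the history of R^n inputs to the
    # next R^m output. Here it's just a linear readout of the most recent
    # input -- a placeholder, not a claim about how brains compute.
    return W @ input_history[-1]

def environment_step(x, y):
    # The R^m output only partially determines the next R^n input; the rest
    # comes from the outside world, modeled here as decay plus noise.
    feedback = np.zeros(n)
    feedback[:m] = y
    return 0.5 * x + feedback + 0.1 * rng.standard_normal(n)

history = [rng.standard_normal(n)]        # x(T) at some starting time
for _ in range(100):                      # discrete stand-in for the continuous path through R^n
    y = brain_step(history)               # output embedded in R^m at time T
    history.append(environment_step(history[-1], y))   # x(T + delta)
```

The point of the sketch is that this input/output description is formally complete yet says nothing about what, if anything, it is like to be the function in the middle.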
2. sailin+HU[view] [source] 2025-07-07 08:58:50
>>dtj112+AR
I don’t see how your description “clearly fails to capture the fact that we're conscious”, though. There are many examples in nature of emergent phenomena that would be very hard to predict just by looking at their components.

This is the crux of the disagreement between those who believe AGI is possible and those who don’t. Some are convinced that we are “obviously” more than the sum of our parts, and thus that an LLM can’t achieve consciousness because it’s missing this magic ingredient; others believe that consciousness is just an emergent behaviour of a complex device (the brain), and thus that we might be able to recreate it simply by scaling up the complexity of another system.
