zlacker

[parent] [thread] 3 comments
1. ben_w+(OP)[view] [source] 2024-01-07 09:51:22
I read such statements as being claims that "intuition" is part of consciousness etc.

It's still too strong a claim given that matrix multiplication also describes quantum mechanics and by extension chemistry and by extension biology and by extension our own brains… but I frequently encounter examples of mistaking two related concepts for synonyms, and I assume in this case it is meant to be a weaker claim about LLMs not being conscious.

Me, I think the word "intuition" is fine, just like I'd say that a tree falling in a forest with no one to hear it does produce a sound, because sound is the vibration of the air rather than the qualia.

replies(2): >>golol+n >>edgyqu+SO
2. golol+n[view] [source] 2024-01-07 09:58:08
>>ben_w+(OP)
Funnily enough, for me intuition is the part of intelligence I can most easily imagine being done by a neural network. When my intuition says a person is not to be trusted, I can easily imagine that being something like a simple hyperplane classification in situation space.

It's the active, iterative thinking and planning that is more critical for AGI and, while obviously possible in theory, much harder to imagine a neural network performing.
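The hyperplane picture above can be sketched in a few lines: a linear classifier scores a feature vector against a weight vector and thresholds the result. The "situation space" features and weights here are purely illustrative assumptions, not a real model.

```python
# Minimal hyperplane classifier sketch: "trust or don't trust" as a
# sign test against a weight vector. All numbers are made up for
# illustration only.

def hyperplane_classify(features, weights, bias):
    """Return True if the point lies on the positive side of the hyperplane."""
    score = sum(f * w for f, w in zip(features, weights)) + bias
    return score > 0

# Hypothetical features: [eye_contact, consistency, pressure_tactics]
weights = [0.8, 1.2, -2.0]   # pressure tactics weigh against trust
bias = -0.5

print(hyperplane_classify([0.9, 0.8, 0.1], weights, bias))  # True
print(hyperplane_classify([0.2, 0.1, 0.9], weights, bias))  # False
```

A trained network's last layer does essentially this, just with learned features instead of hand-picked ones.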

3. edgyqu+SO[view] [source] 2024-01-07 17:22:09
>>ben_w+(OP)
No, matrix multiplication is the formalism humans use to make predictions about those things, but it doesn't describe their fundamental structure, and there's no reason to imply it does.
replies(1): >>ben_w+Ny1
4. ben_w+Ny1[view] [source] [discussion] 2024-01-07 22:35:00
>>edgyqu+SO
> describe their fundamental structure

That is literally, literally, what it does.

One may argue that it does so wrongly, but that's a different claim entirely.

> there’s no reason to imply they do

The predictions matching reality to the best of our collective abilities to test them is such a reason.

The saying "all models are wrong, but some are useful" is, admittedly, a reason against that.
