zlacker

[parent] [thread] 4 comments
1. pllbnk+(OP)[view] [source] 2026-01-31 11:42:59
A human, not a statistical model. I can insert any random words of my own volition if I wanted to, not because I have been pre-programmed (pre-trained) to output tokens based on a limited 200k (tiny) context for one particular conversation, forgetting it all by the time a new session starts.

That’s why AI models, as they currently are, won’t ever be able to come up with anything even remotely novel.

replies(1): >>rhubar+LW
2. rhubar+LW[view] [source] 2026-01-31 18:40:56
>>pllbnk+(OP)
Well, if you believe you’re powered by physical neurons and not spooky magic, that doesn’t seem very different from being a neural net.

I see no evidence for your magical ability to behave outside of being a function of context and memory.

You don’t think diffusion models are capable of novelty?

replies(2): >>turtle+U11 >>pllbnk+w81
3. turtle+U11[view] [source] [discussion] 2026-01-31 19:06:27
>>rhubar+LW
lol, I love the irrational confidence of the Dunning-Kruger effect
4. pllbnk+w81[view] [source] [discussion] 2026-01-31 19:48:28
>>rhubar+LW
Neural networks are an extremely loose and simplified approximation of how actual biological neural pathways work. They're simplified to the point that there's basically nothing in common.
replies(1): >>rhubar+3j8
5. rhubar+3j8[view] [source] [discussion] 2026-02-03 07:20:06
>>pllbnk+w81
Whilst the substrates may be different, that does not mean the general principles are.

The visual cortex and computer vision show striking similarities, as does language processing.
