zlacker

5 comments
1. rhubar+(OP) 2026-01-31 07:48:34
And what exactly do you think you are, sir?
replies(1): >>pllbnk+pl
2. pllbnk+pl 2026-01-31 11:42:59
>>rhubar+(OP)
A human, not a statistical model. I can insert any random words of my own volition if I want to, not because I have been pre-programmed (pre-trained) to output tokens based on a limited 200k-token (tiny) context for one particular conversation and forget about it by the time a new session starts.

That’s why AI models, as they currently are, won’t ever be able to come up with anything even remotely novel.

replies(1): >>rhubar+ai1
3. rhubar+ai1 2026-01-31 18:40:56
>>pllbnk+pl
Well, if you believe you’re powered by physical neurons and not spooky magic, that doesn’t seem very different from being a neural net.

I see no evidence for your magical ability to behave outside of being a function of context and memory.

You don’t think diffusion models are capable of novelty?

replies(2): >>turtle+jn1 >>pllbnk+Vt1
4. turtle+jn1 2026-01-31 19:06:27
>>rhubar+ai1
lol, I love the irrational confidence of the Dunning-Kruger effect
5. pllbnk+Vt1 2026-01-31 19:48:28
>>rhubar+ai1
Neural networks are an extremely loose and simplified approximation of how actual biological neural pathways in the brain work. They're simplified to the point that there's basically nothing in common.
replies(1): >>rhubar+sE8
6. rhubar+sE8 2026-02-03 07:20:06
>>pllbnk+Vt1
Whilst the substrates may be different, that does not mean the general principles are.

The visual cortex and computer vision show striking similarities, as does language processing.
