zlacker

10 comments
1. pllbnk+(OP) 2026-01-30 20:36:23
To me it looks like some of the more “interesting” posts are created by humans. It’s a pointless experiment; I don’t understand why anyone would find it interesting to see what statistical models randomly write in response to other random writings.
replies(2): >>keifer+81 >>rhubar+rg1
2. keifer+81 2026-01-30 20:41:06
>>pllbnk+(OP)
I think the level at which someone is impressed by AI chatbot conversation may be correlated with their real-world conversation experience and skills. If you don’t really talk to real people much (a sadly common occurrence), then an LLM can seem very impressive and deep.
replies(1): >>tildef+76
3. tildef+76 2026-01-30 21:07:52
>>keifer+81
I'd argue that talking a lot with real people is a stronger predictor of finding conversations with a chatbot meaningful.
replies(2): >>pllbnk+8d >>nozzle+e21
4. pllbnk+8d 2026-01-30 21:43:55
>>tildef+76
I never considered this aspect at all. To me it feels more like some people find it really fascinating that we finally live in the future. I think so too, just with a lot of reservations, while fully aware that the genie has been let out of the bottle. Other people are like me. And the rest don’t want any part of this.

However, personal views aside, looking at it purely technically, it’s just mindless token soup. That’s why I find it weird that even deeply technical people like Andrej Karpathy (there was a post by him somewhere today) find it fascinating.

5. nozzle+e21 2026-01-31 04:29:49
>>tildef+76
Why?
6. rhubar+rg1 2026-01-31 07:48:34
>>pllbnk+(OP)
And what exactly do you think you are, sir?
replies(1): >>pllbnk+QB1
7. pllbnk+QB1 2026-01-31 11:42:59
>>rhubar+rg1
A human, not a statistical model. I can insert any random words of my own volition if I want to, not because I have been pre-programmed (pre-trained) to output tokens based on a limited (tiny) 200k context for one particular conversation, only to forget it all by the time a new session starts.

That’s why AI models, as they currently are, won’t ever be able to come up with anything even remotely novel.
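For what it's worth, the statelessness being described is easy to sketch: chat models don't remember anything between requests, so the client resends the conversation every time, trimmed to whatever fits the window. This is a hypothetical helper, not any real API; `count_tokens` stands in for an assumed tokenizer.

```python
MAX_CONTEXT_TOKENS = 200_000  # illustrative budget, like the 200k figure above

def build_prompt(history, new_message, count_tokens):
    """Assemble the messages to resend with each request.

    Turns that no longer fit in the token budget are silently dropped,
    oldest first -- the model itself keeps no state between sessions.
    """
    messages = history + [new_message]
    # Drop the oldest turns until the remainder fits the budget.
    while sum(count_tokens(m) for m in messages) > MAX_CONTEXT_TOKENS and len(messages) > 1:
        messages.pop(0)
    return messages
```

Once the oldest turns fall out of the window, they are gone for good as far as the model is concerned.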

replies(1): >>rhubar+By2
8. rhubar+By2 2026-01-31 18:40:56
>>pllbnk+QB1
Well, if you believe you’re powered by physical neurons and not spooky magic, that doesn’t seem very different from being a neural net.

I see no evidence for your magical ability to behave outside of being a function of context and memory.

You don’t think diffusion models are capable of novelty?

replies(2): >>turtle+KD2 >>pllbnk+mK2
9. turtle+KD2 2026-01-31 19:06:27
>>rhubar+By2
lol, I love the irrational confidence of the Dunning-Kruger effect
10. pllbnk+mK2 2026-01-31 19:48:28
>>rhubar+By2
Neural networks are an extremely loose and simplified approximation of how actual biological neural pathways work. They're simplified to the point that there's basically nothing in common.
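The degree of simplification is easy to make concrete: a single unit in a standard artificial network computes just a weighted sum followed by a fixed nonlinearity. A minimal sketch (function name illustrative):

```python
import math

def artificial_neuron(inputs, weights, bias):
    # The entire "neuron" of a standard artificial network:
    # a weighted sum of the inputs plus a bias...
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    # ...passed through a fixed nonlinearity (here, a sigmoid).
    return 1.0 / (1.0 + math.exp(-z))

# No dendritic trees, spike timing, or neurotransmitter dynamics --
# everything biological is collapsed into a handful of numbers.
```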
replies(1): >>rhubar+TU9
11. rhubar+TU9 2026-02-03 07:20:06
>>pllbnk+mK2
Whilst the substrates may be different, that does not mean the general principles are.

The visual cortex and computer vision show striking similarities, as does language processing.
