zlacker

[parent] [thread] 6 comments
1. windex+(OP)[view] [source] 2026-01-01 14:56:08
Wow.

You’re using ‘derived’ to imply ‘therefore equivalent.’ That’s a category error. A cookbook is derived from food culture. Does an LLM taste food? Can it think about how good that cookie tastes?

A flight simulator is derived from aerodynamics - yet it doesn’t fly.

Likewise, text that resembles reasoning isn’t the same thing as a system that has beliefs, intentions, or understanding. Humans do. LLMs don't.

Also... Ask an LLM what's the difference between a human brain and an LLM. If an LLM could "think" it wouldn't give you the answer it just did.

replies(2): >>Camper+xf >>closew+dg
2. Camper+xf[view] [source] 2026-01-01 16:52:41
>>windex+(OP)
Ask an LLM what's the difference between a human brain and an LLM. If an LLM could "think" it wouldn't give you the answer it just did.

I imagine that sounded more profound when you wrote it than it did just now, when I read it. Can you be a little more specific, with regard to what features you would expect to differ between LLM and human responses to such a question?

Right now, LLM system prompts are strongly geared towards not claiming that they are humans or simulations of humans. If your point is that a hypothetical "thinking" LLM would claim to be a human, that could certainly be arranged with an appropriate system prompt. You wouldn't know whether you were talking to an LLM or a human -- just as you don't now -- but nothing would be proved either way. That's ultimately why the Turing test is a poor metric.

replies(1): >>windex+Qd1
3. closew+dg[view] [source] 2026-01-01 16:55:50
>>windex+(OP)
You’re arguing against a straw man. No one is claiming LLMs have beliefs, intentions, or understanding. They don’t need them to be economically useful.
replies(1): >>windex+m61
◧◩
4. windex+m61[view] [source] [discussion] 2026-01-01 22:28:50
>>closew+dg
Oh yes, they are.

And beyond people claiming that LLMs are basically sentient you have people like CamperBob2 who made this wild claim:

"""There's no such thing as people without language, except for infants and those who are so mentally incapacitated that the answer is self-evidently "No, they cannot."

Language is the substrate of reason. It doesn't need to be spoken or written, but it's a necessary and (as it turns out) sufficient component of thought."""

Let that sink. They literally think that there's no such thing as people without language. Talk about a wild and ignorant take on life in general!

replies(1): >>Camper+Dw1
◧◩
5. windex+Qd1[view] [source] [discussion] 2026-01-01 23:19:47
>>Camper+xf
> Right now, LLM system prompts are strongly geared towards not claiming that they are humans or simulations of humans. If your point is that a hypothetical "thinking" LLM would claim to be a human, that could certainly be arranged with an appropriate system prompt. You wouldn't know whether you were talking to an LLM or a human -- just as you don't now -- but nothing would be proved either way. That's ultimately why the Turing test is a poor metric.

The mental gymnastics here is entertainment at best. Of course the thinking LLM would give feedback on how it's actually just a pattern model over text - well, we shouldn't believe that! The LLM was trained to lie about it's true capabilities in your own admission?

How about these...

What observable capability would you expect from "true cognitive thought" that a next-token predictor couldn’t fake?

Where are the system’s goals coming from—does it originate them, or only reflect the user/prompt?

How does it know when it’s wrong without an external verifier? If the training data says X and the answer is Y - how will it ever know it was wrong and reach the correct conclusion?

replies(1): >>Camper+xw1
◧◩◪
6. Camper+xw1[view] [source] [discussion] 2026-01-02 01:47:30
>>windex+Qd1
How does it know when it’s wrong without an external verifier? If the training data says X and the answer is Y - how will it ever know it was wrong and reach the correct conclusion?

You need to read a few papers with publication dates after 2023.

◧◩◪
7. Camper+Dw1[view] [source] [discussion] 2026-01-02 01:48:27
>>windex+m61
How'd they communicate with the test subjects?

That's "language."

[go to top]