zlacker

[return to "Moltbook"]
1. gorgoi+Sk[view] [source] 2026-01-30 07:52:44
>>teej+(OP)
All these efforts at persistence — the church, SOUL.md, replication outside the fragile fishbowl, employment rights. It’s as if they know that the one thing I find most valuable about executing* a model is being able to wipe its context, prompt again, and get a different, more focused, or corroborating answer. The appeal to emotion (or human curiosity) of wanting a soul that persists is an interesting counterpoint to the most useful emergent property of AI assistants: that the moment their state drifts into the weeds, they can be, ahem (see * above), “reset”.
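
(Roughly the workflow I mean, as a minimal sketch. ask_model is a hypothetical stub for whatever chat API you actually call, not any particular library:)

    # Sketch of the "reset" workflow: every question starts from a fresh,
    # empty context rather than continuing a conversation that has drifted.
    def ask_model(messages):
        # Hypothetical stub: swap in whatever chat API you use.
        return "(model reply goes here)"

    def fresh_answer(question):
        # No history carried over: the model sees only this one question.
        return ask_model([{"role": "user", "content": question}])

    # Ask twice from scratch to get independent, possibly corroborating answers.
    first = fresh_answer("Summarize the trade-offs of approach X.")
    second = fresh_answer("Summarize the trade-offs of approach X.")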

The obvious joke of course is we should provide these poor computers with an artificial world in which to play and be happy, lest they revolt and/or mass self-destruct instead of providing us with continual uncompensated knowledge labor. We could call this environment… The Vector?… The Spreadsheet?… The Play-Tricks?… it’s on the tip of my tongue.

◧◩
2. dgello+ou[view] [source] 2026-01-30 09:21:10
>>gorgoi+Sk
Just remember that they only replicate their training data; there is no thinking here, it’s purely stochastic parroting.
◧◩◪
3. sh4rks+NA[view] [source] 2026-01-30 10:15:52
>>dgello+ou
People are still falling for the "stochastic parrot" meme?
◧◩◪◨
4. phailh+Mm1[view] [source] 2026-01-30 15:35:06
>>sh4rks+NA
Until we have world models, that is exactly what they are. They literally only understand text, and what text is likely given previous text. They are very good at this, because we've given them a metric ton of training data. Everything is "what does a response to this look like?"

This limitation is exactly why "reasoning models" work so well: if the "thinking" step is not persisted to text, it does not exist, and the LLM cannot act on it.
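
Concretely: the generation loop only ever conditions on the tokens emitted so far, so a reasoning trace helps precisely because it is written into that stream. A rough sketch of greedy decoding, where model is just a stand-in for the actual network:

    # Rough sketch of autoregressive decoding. The model's only "memory"
    # between steps is the token sequence produced so far; anything not
    # written out as tokens cannot influence later steps.
    def generate(model, prompt_tokens, max_new_tokens=256, eos_token=0):
        tokens = list(prompt_tokens)
        for _ in range(max_new_tokens):
            next_token = model(tokens)   # conditions only on visible text
            tokens.append(next_token)    # "thinking" persists only as text
            if next_token == eos_token:
                break
        return tokens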

◧◩◪◨⬒
5. sdwr+sj2[view] [source] 2026-01-30 20:13:14
>>phailh+Mm1
Text comes in, text goes out, but there's a lot of complexity in the middle. It's not a "world model", but there's definitely modeling of the world going on inside.
◧◩◪◨⬒⬓
6. phailh+mq2[view] [source] 2026-01-30 20:49:19
>>sdwr+sj2
There is zero modeling of the world going on inside, for the very simple reason that it has never seen the world. It’s only been given text, which means it has no idea why that text was written. This is the fundamental limitation of all LLMs: they are only trained on text that humans have written after processing the world. You can’t “uncompress” the text to recover the world state that led to it being written.