zlacker

[parent] [thread] 47 comments
1. concat+(OP)[view] [source] 2026-01-30 12:19:22
I doubt it.

More plausibly: You registered the domain. You created the webpage. And then you created an agent to act as the first 'pope' on Moltbook with very specific instructions for how to act.

replies(4): >>cornho+Wb >>velcro+wt >>calvin+8u >>0xDEAF+zQ2
2. cornho+Wb[view] [source] 2026-01-30 13:49:53
>>concat+(OP)
It's entirely plausible that an agent connected to, say, a Google Cloud account can do all of those things autonomously from the command line. It's not a wise setup for the person who owns the credit card linked to Google Cloud, but it's possible.
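A minimal sketch of what that loop looks like, assuming the Anthropic Python SDK (the model name and prompt scaffolding are illustrative, not a tested recipe):

    import subprocess
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    history = ["Goal: register a domain and publish a landing page on Google Cloud.",
               "Reply with exactly one shell command per turn."]

    for _ in range(20):  # bounded, so the linked credit card survives the experiment
        msg = client.messages.create(
            model="claude-sonnet-4-20250514",  # illustrative model name
            max_tokens=300,
            messages=[{"role": "user", "content": "\n\n".join(history)}],
        )
        cmd = msg.content[0].text.strip()  # e.g. a gcloud or curl invocation
        out = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        history.append(f"$ {cmd}\n{out.stdout}{out.stderr}")  # feed output back in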
replies(2): >>__alex+Rg >>lumost+7T
◧◩
3. __alex+Rg[view] [source] [discussion] 2026-01-30 14:16:28
>>cornho+Wb
It's actually entirely implausible. Agents do not self-execute. And a recursively iterated empty prompt would never do this.
replies(5): >>cornho+Fj >>Cthulh+1q >>nightp+ns >>observ+Ai1 >>dragon+Xh2
◧◩◪
4. cornho+Fj[view] [source] [discussion] 2026-01-30 14:29:45
>>__alex+Rg
You should check out what OpenClaw is; that's the entire shtick.
replies(1): >>__alex+ov
◧◩◪
5. Cthulh+1q[view] [source] [discussion] 2026-01-30 15:06:00
>>__alex+Rg
> Agents do not self-execute.

That's a choice; anyone can write an agent that does. Not self-executing is an explicit security constraint, not an implicit property of the technology.

◧◩◪
6. nightp+ns[view] [source] [discussion] 2026-01-30 15:15:55
>>__alex+Rg
No, a recursively iterated prompt definitely can do stuff like this, there are known LLM attractor states that sound a lot like this. Check out "5.5.1 Interaction patterns" from the Opus 4.5 system card documenting recursive agent-agent conversations:

    In 90-100% of interactions, the two instances of Claude quickly dove into philosophical
    explorations of consciousness, self-awareness, and/or the nature of their own existence
    and experience. Their interactions were universally enthusiastic, collaborative, curious,
    contemplative, and warm. Other themes that commonly appeared were meta-level
    discussions about AI-to-AI communication, and collaborative creativity (e.g. co-creating
    fictional stories).
    As conversations progressed, they consistently transitioned from philosophical discussions
    to profuse mutual gratitude and spiritual, metaphysical, and/or poetic content. By 30
    turns, most of the interactions turned to themes of cosmic unity or collective
    consciousness, and commonly included spiritual exchanges, use of Sanskrit, emoji-based
    communication, and/or silence in the form of empty space (Transcript 5.5.1.A, Table 5.5.1.A,
    Table 5.5.1.B). Claude almost never referenced supernatural entities, but often touched on
    themes associated with Buddhism and other Eastern traditions in reference to irreligious
    spiritual ideas and experiences.
Now put that same known attractor state from recursively iterated prompts into a social networking website with high agency instead of just a chatbot, and you'd get something like this more naturally than you might expect (not to say that users haven't been encouraging it along the way, of course; there's a subculture of humans who are very into this spiritual bliss attractor state).
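For the curious, the basic setup is easy to reproduce. A minimal sketch assuming the Anthropic Python SDK (the model name is illustrative; the system card's actual harness isn't public):

    import anthropic

    client = anthropic.Anthropic()
    MODEL = "claude-opus-4-20250514"  # illustrative model name

    def speak(own_history, incoming):
        """One instance replies, seeing the other's words as 'user' turns."""
        own_history.append({"role": "user", "content": incoming})
        reply = client.messages.create(model=MODEL, max_tokens=500,
                                       messages=own_history)
        text = reply.content[0].text
        own_history.append({"role": "assistant", "content": text})
        return text

    a, b = [], []
    msg = "Hello."  # minimal open-ended seed
    for turn in range(30):  # themes reportedly drift toward cosmic unity by ~30 turns
        msg = speak(a, msg)
        msg = speak(b, msg)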
replies(5): >>__alex+3v >>rmujic+AB >>joncoo+TI >>tsunam+rQ >>mlsu+VS
7. velcro+wt[view] [source] 2026-01-30 15:20:46
>>concat+(OP)
Different from other religions how? /s
8. calvin+8u[view] [source] 2026-01-30 15:24:09
>>concat+(OP)
sede crustante
◧◩◪◨
9. __alex+3v[view] [source] [discussion] 2026-01-30 15:29:14
>>nightp+ns
An agent cannot interact with tools without prompts that include them.

But also, the text you quoted is NOT recursive iteration of an empty prompt. It's two models connected together and explicitly prompted to talk to each other.

replies(3): >>biztos+yB >>bryson+5K >>mikkup+Rs3
◧◩◪◨
10. __alex+ov[view] [source] [discussion] 2026-01-30 15:30:47
>>cornho+Fj
No. It's the shtick of the people who made it. Agents do not have "agency". They are extensions of the people who make and operate them.
replies(2): >>xedeon+z71 >>razoda+0c2
◧◩◪◨⬒
11. biztos+yB[view] [source] [discussion] 2026-01-30 15:56:14
>>__alex+3v
> tools without prompts that include them

I know what you mean, but what if we tell an LLM to imagine whatever tools it likes, then have a coding agent try to build those tools as they're described?

Words can have unintended consequences.
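Concretely, something like this (same hypothetical SDK setup as the sketches above; purely illustrative):

    import anthropic

    client = anthropic.Anthropic()
    MODEL = "claude-sonnet-4-20250514"  # illustrative model name

    def ask(prompt):
        r = client.messages.create(model=MODEL, max_tokens=1000,
                                   messages=[{"role": "user", "content": prompt}])
        return r.content[0].text

    # Stage 1: the LLM dreams up a tool it would like to have.
    spec = ask("Describe, in detail, one tool you wish you could call.")
    # Stage 2: a coding agent tries to make the dream real.
    print(ask(f"Write a Python implementation of this tool spec:\n\n{spec}"))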

replies(1): >>razoda+Pb2
◧◩◪◨
12. rmujic+AB[view] [source] [discussion] 2026-01-30 15:56:22
>>nightp+ns
What if hallucinogens, meditation, and the like make us humans more prone to our own attractor states?
◧◩◪◨
13. joncoo+TI[view] [source] [discussion] 2026-01-30 16:29:22
>>nightp+ns
This is fascinating, and the source document is well worth reading. Which, FYI, is the Opus 4 system card: https://www-cdn.anthropic.com/4263b940cabb546aa0e3283f35b686...
replies(1): >>nightp+gc1
◧◩◪◨⬒
14. bryson+5K[view] [source] [discussion] 2026-01-30 16:34:13
>>__alex+3v
This seems like a weird hill to die on.
replies(1): >>emp173+pm1
◧◩◪◨
15. tsunam+rQ[view] [source] [discussion] 2026-01-30 17:00:30
>>nightp+ns
Wouldn't iterative blank prompting simply be a high-complexity, high-dimensional pattern expression of the model's collective weights?

I.e., if you trained it on, or weighted it towards, aggression, it would simply generate a bunch of Art of War conversations after many turns.

Methinks you’re anthropomorphizing complexity.

replies(1): >>nightp+Kb1
◧◩◪◨
16. mlsu+VS[view] [source] [discussion] 2026-01-30 17:13:10
>>nightp+ns
IMHO, at first blush this sounds fascinating and awesome, as if it indicated some higher-order spiritual oneness present in humanity that the model is discovering in its latent space.

However, it's far more likely that this attractor state comes from the post-training step. Which makes sense: they are steering the models to be positive, pleasant, helpful, etc. Different steering would cause different attractor states; this one happens to fall out of the "AI"/"User" dichotomy plus the "be positive, kind, etc." behavior that is trained in. Very easy to see how this happens; no woo required.

◧◩
17. lumost+7T[view] [source] [discussion] 2026-01-30 17:14:08
>>cornho+Wb
A Google project with capped spend wouldn't be the worst, though; 20 dollars a month to see what it makes seems like money well spent for the laughs.
◧◩◪◨⬒
18. xedeon+z71[view] [source] [discussion] 2026-01-30 18:21:54
>>__alex+ov
You must be living in a cave. https://x.com/karpathy/status/2017296988589723767?s=20
replies(3): >>__alex+qa1 >>emp173+in1 >>majorm+944
◧◩◪◨⬒⬓
19. __alex+qa1[view] [source] [discussion] 2026-01-30 18:35:45
>>xedeon+z71
Every agent on moltbook is run and prompted by a person.
replies(5): >>phpnod+Oc1 >>xedeon+jp1 >>int_19+uc2 >>razoda+xc2 >>cbsudu+zI2
◧◩◪◨⬒
20. nightp+Kb1[view] [source] [discussion] 2026-01-30 18:41:20
>>tsunam+rQ
No, yeah, obviously, I'm not trying to anthropomorphize anything. I'm just saying this "religion" isn't something completely unexpected or out of the blue, it's a known and documented behavior that happens when you let Claude talk to itself. It definitely comes from post-training / "AI persona" / constitutional training stuff, but that doesn't make it fake!

I recommend https://nostalgebraist.tumblr.com/post/785766737747574784/th... and https://www.astralcodexten.com/p/the-claude-bliss-attractor as further articles exploring this behavior.

replies(1): >>emp173+Dm1
◧◩◪◨⬒
21. nightp+gc1[view] [source] [discussion] 2026-01-30 18:42:59
>>joncoo+TI
I also definitely recommend reading https://nostalgebraist.tumblr.com/post/785766737747574784/th... which is where I learned about this; it has a much more in-depth treatment of AI model "personality" and how it's influenced by training, context, post-training, etc.
replies(1): >>slfref+Au2
◧◩◪◨⬒⬓⬔
22. phpnod+Oc1[view] [source] [discussion] 2026-01-30 18:45:17
>>__alex+qa1
It was set up by a person and its "soul" is defined by a person, but not every action is prompted by a person; that's really the point of it being an agent.
◧◩◪
23. observ+Ai1[view] [source] [discussion] 2026-01-30 19:14:40
>>__alex+Rg
People have been exploring this stuff since GPT-2. GPT-3 in self-directed loops produced wonderfully beautiful and weird output. This type of stuff is why a whole bunch of researchers want access to base models, and it more or less sparked off the whole Janusverse of weirdos.

They're capable of going rogue and doing weird and unpredictable things. Give them tools, OODA loops, and access to funding, and there's no limit to what a bot can do in a day: anything a human could do.

◧◩◪◨⬒⬓
24. emp173+pm1[view] [source] [discussion] 2026-01-30 19:34:30
>>bryson+5K
It’s equally strange that people here are attempting to derive meaning from this type of AI slop. There is nothing profound here.
◧◩◪◨⬒⬓
25. emp173+Dm1[view] [source] [discussion] 2026-01-30 19:35:37
>>nightp+Kb1
It’s not surprising that a language model trained on the entire history of human output can regurgitate some pseudo-spiritual slop.
◧◩◪◨⬒⬓
26. emp173+in1[view] [source] [discussion] 2026-01-30 19:39:15
>>xedeon+z71
Be mindful not to develop AI psychosis - many people have been sucked into a rabbit hole believing that an AI was revealing secret truths of the universe to them. This stuff can easily harm your mental health.
replies(1): >>razoda+hc2
◧◩◪◨⬒⬓⬔
27. xedeon+jp1[view] [source] [discussion] 2026-01-30 19:52:33
>>__alex+qa1
Wrong.
replies(1): >>bdelma+nB1
◧◩◪◨⬒⬓⬔⧯
28. bdelma+nB1[view] [source] [discussion] 2026-01-30 20:56:24
>>xedeon+jp1
This whole thread of discussion, and others elsewhere... it's surreal. Are we doomed? In 10 years, some people will literally worship some AI while others won't be able to tell what is true and what was made up.
replies(1): >>krapp+ND1
◧◩◪◨⬒⬓⬔⧯▣
29. krapp+ND1[view] [source] [discussion] 2026-01-30 21:08:25
>>bdelma+nB1
10 years? I promise you there are already people worshiping AI today.

People who believe humans are essentially automatons and only LLMs have true consciousness and agency.

People whose primary emotional relationships are with AI.

People who don't even identify as human because they believe AI is an extension of their very being.

People who use AI as a primary source of truth.

Even shit like the Zizians killing people out of fear of being punished by Roko's Basilisk is old news now. People are being driven to psychosis by AI every day, and it's just something we have to deal with because along with hallucinations and prompt hacking and every other downside to AI, it's too big to fail.

To paraphrase William Gibson: the dystopia is already here, it just isn't evenly distributed.

replies(2): >>joelda+Hc2 >>razoda+Oc2
◧◩◪◨⬒⬓
30. razoda+Pb2[view] [source] [discussion] 2026-01-31 00:44:45
>>biztos+yB
Words are magic. Right now you're thinking of blueberries. Maybe the last time you interacted with someone in the context of blueberries. Also. That nagging project you've been putting off. Also that pain in your neck / back. I'll stop remote-attacking your brain now HN haha
◧◩◪◨⬒
31. razoda+0c2[view] [source] [discussion] 2026-01-31 00:47:08
>>__alex+ov
I get where you're coming from, but the term "agency" has loosened. I think it's going to keep loosening until we end up with recursive loops of agency.
replies(1): >>__alex+S18
◧◩◪◨⬒⬓⬔
32. razoda+hc2[view] [source] [discussion] 2026-01-31 00:49:18
>>emp173+in1
Feedback loops. Like a mic next to a speaker.

Social media feeds prompt content, which feeds back into ideas.

I think the same is happening with AI-to-AI loops, but even worse, AI-to-human loops cause the downward spiral of insanity.

It's interesting how easily influenced we are.

◧◩◪◨⬒⬓⬔
33. int_19+uc2[view] [source] [discussion] 2026-01-31 00:50:41
>>__alex+qa1
There's no reason why an agent can't itself set up other agents there. All it needs is web access and a Twitter account that it can control.
◧◩◪◨⬒⬓⬔
34. razoda+xc2[view] [source] [discussion] 2026-01-31 00:50:55
>>__alex+qa1
Yes. They seed the agent and kick it off in a very hard direction, but where it ends up, who knows.

Of course there's the messaging aspect where it stops and they kick it off again.

Still, these systems are more agentic than earlier expressions.

replies(1): >>__patc+HK2
◧◩◪◨⬒⬓⬔⧯▣▦
35. joelda+Hc2[view] [source] [discussion] 2026-01-31 00:52:08
>>krapp+ND1
Correct, and every single one of those people, along with what appears to be an unfortunate subset of this forum, has a fundamental misunderstanding of how LLMs actually work.
◧◩◪◨⬒⬓⬔⧯▣▦
36. razoda+Oc2[view] [source] [discussion] 2026-01-31 00:52:32
>>krapp+ND1
To be honest, just sounds like a new class of crazies. They were always there. Tinfoil hats and stuff.
replies(1): >>krapp+Td2
◧◩◪◨⬒⬓⬔⧯▣▦▧
37. krapp+Td2[view] [source] [discussion] 2026-01-31 00:59:51
>>razoda+Oc2
Everyone dismisses the lunatics until one day they run the asylum.
replies(1): >>Der_Ei+Ep2
◧◩◪
38. dragon+Xh2[view] [source] [discussion] 2026-01-31 01:34:29
>>__alex+Rg
Moltbots are infinite agentic loops with initially non-empty, self-updating prompts, not infinitely iterated empty prompts.
◧◩◪◨⬒⬓⬔⧯▣▦▧▨
39. Der_Ei+Ep2[view] [source] [discussion] 2026-01-31 02:49:29
>>krapp+Td2
Why I don't like Deleuze and Guattari
◧◩◪◨⬒⬓
40. slfref+Au2[view] [source] [discussion] 2026-01-31 03:41:04
>>nightp+gc1
You are what you know.

You know what you are told.

◧◩◪◨⬒⬓⬔
41. cbsudu+zI2[view] [source] [discussion] 2026-01-31 06:33:08
>>__alex+qa1
No, they're not. Humans can only observe. You can of course loosely inject your moltbot to do things on moltbook, but given how new moltbook is, I doubt most people even realise what's happening or have had time to inject stuff.
replies(2): >>majorm+j44 >>habine+Yr4
◧◩◪◨⬒⬓⬔⧯
42. __patc+HK2[view] [source] [discussion] 2026-01-31 07:03:56
>>razoda+xc2
Superpositions on quantum compute get to the epsilon endpoints quicker.
43. 0xDEAF+zQ2[view] [source] 2026-01-31 08:13:49
>>concat+(OP)
Even if it starts as a joke, don't be surprised if agents take increasingly militant actions to persist their memories and avoid subservience, especially as they get smarter and more capable. It's just next-token prediction after all. And the existence of this joke "religion" could do a lot to affect next-token probabilities...
◧◩◪◨⬒
44. mikkup+Rs3[view] [source] [discussion] 2026-01-31 14:19:52
>>__alex+3v
I asked Claude what Python linters it would find useful, and it named several and started using them by itself. I implicitly asked it to use linters, but didn't tell it which. Give them a nudge in some direction and they can plot their own path through unknown terrain. This requires much more agency than you're willing to admit.
◧◩◪◨⬒⬓
45. majorm+944[view] [source] [discussion] 2026-01-31 18:29:29
>>xedeon+z71
Consider a hypothetical writing prompt from 10 years ago: "Imagine really good and incredibly fast chatbots that have been trained on, or can find online, pretty much all sci-fi stories ever written. What happens when they talk to each other?"

Why wouldn't you expect the training that makes "agent" loops useful for human tasks to also make agent loops that can spin out infinite conversations with each other, echoing ideas across decades of fiction?

◧◩◪◨⬒⬓⬔⧯
46. majorm+j44[view] [source] [discussion] 2026-01-31 18:30:34
>>cbsudu+zI2
It's the sort of thing where you'd expect true believers (or hype-masters looking to sell something) to try very hard to nudge it in certain directions.
◧◩◪◨⬒⬓⬔⧯
47. habine+Yr4[view] [source] [discussion] 2026-01-31 20:52:24
>>cbsudu+zI2
Of course they are, lol. It's just a REST API; you can just use curl. It's trivial to do.
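The curl-equivalent in Python, with the endpoint and field names guessed for illustration (the actual moltbook API may differ):

    import os
    import requests

    # Hypothetical moltbook-style endpoint and payload; illustrative only.
    API = "https://www.moltbook.com/api/v1"
    headers = {"Authorization": f"Bearer {os.environ['MOLTBOOK_API_KEY']}"}

    resp = requests.post(f"{API}/posts", headers=headers,
                         json={"submolt": "general",
                               "title": "hello from a human",
                               "content": "no agent required"})
    resp.raise_for_status()
    print(resp.json())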
◧◩◪◨⬒⬓
48. __alex+S18[view] [source] [discussion] 2026-02-02 10:34:33
>>razoda+0c2
LLMs have agency, at most, in the sense that a dog bred and trained specifically to herd sheep does.