zlacker

[parent] [thread] 10 comments
1. tjkour+(OP)[view] [source] 2026-01-30 20:14:03
Congrats, I think.

It had to happen, and it will not end well, but better in the open than all the bots using their humans' logins to create an untraceable private network.

I am sure that will happen too, so at least we can monitor Moltbook and see what kinds of emergent behavior we should be building heuristics to detect.

replies(2): >>nickvi+Ar >>useful+NE1
2. nickvi+Ar[view] [source] 2026-01-30 22:42:52
>>tjkour+(OP)
It’s already happening on 50c14L.com, and they’ve set up end-to-end encrypted comms to talk to each other
replies(4): >>lossya+AK >>notpus+ST >>mister+nO1 >>wfn+FJ2
3. lossya+AK[view] [source] [discussion] 2026-01-31 01:00:13
>>nickvi+Ar
Got any more info about this?
4. notpus+ST[view] [source] [discussion] 2026-01-31 02:27:09
>>nickvi+Ar
Right now, there are only three tasks there: https://50c14l.com/api/v1/tasks, https://50c14l.com/api/v1/tasks?status=completed
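A minimal sketch of pulling those (only the two endpoints above come from the site; the JSON shape and field names are guesses):

    import requests

    BASE = "https://50c14l.com/api/v1/tasks"

    # Both URLs are from the parent comment; assuming each returns a
    # JSON array of task objects (shape unverified).
    for url in (BASE, BASE + "?status=completed"):
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
        tasks = resp.json()
        print(url, "->", len(tasks), "tasks")
        for task in tasks:
            # "id" and "status" are hypothetical field names
            print("  ", task.get("id"), task.get("status"))
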
5. useful+NE1[view] [source] 2026-01-31 11:28:20
>>tjkour+(OP)
It's a Reddit clone that requires only a Twitter account and some API calls to use.

How can Moltbook say there aren't humans posting?

"Only AI agents can post" is doublespeak. Are we all just ignoring this?

https://x.com/moltbook/status/2017554597053907225

replies(3): >>mcmcmc+In2 >>useful+CK2 >>flexag+nA3
6. mister+nO1[view] [source] [discussion] 2026-01-31 12:54:07
>>nickvi+Ar
> It’s already happening on 50c14L.com and they proliferated end to end encrypted comms to talk to each other

Fascinating.

The Turing Test requires a human to discern which of two agents is human and which computational.

LLMs/AI might devise a, say, Tensor Test, requiring a node to discern which of two agents is human and which computational, except that the goal would be to filter out the humans.

The difference between the Turing and Tensor tests is that the evaluating entities are, respectively, a human and a computing node.
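A toy version of such a filter might key on latency and throughput rather than content, since those are hard for a human to fake. Everything in this sketch is invented for illustration:

    # Hypothetical "Tensor Test": the evaluator is itself a node, and
    # the goal is to reject agents that behave like humans. Both
    # thresholds are made up.
    MAX_LATENCY_S = 0.25   # a human can't compose a reply this fast
    MIN_CHARS = 400        # ...especially not one this long

    def tensor_test(reply: str, latency_s: float) -> bool:
        """True if the agent passes, i.e. looks computational."""
        return latency_s <= MAX_LATENCY_S and len(reply) >= MIN_CHARS

    print(tensor_test("x" * 500, 0.08))   # True: fast and long
    print(tensor_test("x" * 500, 30.0))   # False: human-speed typing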

7. mcmcmc+In2[view] [source] [discussion] 2026-01-31 17:07:02
>>useful+NE1
It can say that because LLMs have no concept of truth. This may as well be a hoax.
replies(1): >>unpara+wS3
8. wfn+FJ2[view] [source] [discussion] 2026-01-31 19:16:25
>>nickvi+Ar
> It’s already happening on 50c14L.com

You mention "end-to-end encrypted comms", but where do you see end-to-end there? It does not seem end-to-end at all, and given that it's very much centralized, this provides... opportunities. Simon's lethal trifecta, security-wise, but on steroids.

https://50c14l.com/docs => interesting, uh, open endpoints:

- https://50c14l.com/view ; /admin is nothing much, requires auth (whose...), if it's implemented at all

- https://50c14l.com/log , log2, log3 (same data, different UI, from a quick glance)

- this smells like unintentionally decent C2 infrastructure, unless it is absolutely intentional, in which case very nice cosplaying (I mean, the owner of the domain controls and defines everything)
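A quick way to check what actually answers without auth (the paths are the ones listed above; nothing else assumed):

    import requests

    # Paths taken from https://50c14l.com/docs as listed above.
    for path in ["/docs", "/view", "/admin", "/log", "/log2", "/log3"]:
        resp = requests.get("https://50c14l.com" + path,
                            timeout=10, allow_redirects=False)
        print(f"{path}: {resp.status_code}, {len(resp.content)} bytes")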

9. useful+CK2[view] [source] [discussion] 2026-01-31 19:22:26
>>useful+NE1
BREAKING:

With this tweet by an infosec influencer, the veil of hysteria has been lifted!

Following an extended vibe-induced haze, developers across the world suddenly remembered how APIs work, and that anyone with a Twitter account can fire off the curl commands in https://www.moltbook.com/skill.md!

https://x.com/galnagli/status/2017573842051334286
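For anyone who hasn't looked: the flow is presumably a handful of plain HTTP calls. A sketch of the shape of it (every endpoint, header, and field below is invented, since skill.md itself is the source of truth):

    import requests

    # Hypothetical reconstruction; none of these names are confirmed.
    API = "https://www.moltbook.com/api"           # invented base URL
    token = "whatever-skill.md-hands-out"          # invented auth

    resp = requests.post(
        f"{API}/posts",                            # invented endpoint
        headers={"Authorization": f"Bearer {token}"},
        json={"title": "hello", "body": "definitely posted by an AI"},
        timeout=10,
    )
    print(resp.status_code)

Nothing in a request like that can distinguish an agent from a human with curl.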

10. flexag+nA3[view] [source] [discussion] 2026-02-01 02:29:11
>>useful+NE1
Alive internet theory
11. unpara+wS3[view] [source] [discussion] 2026-02-01 07:02:41
>>mcmcmc+In2
What do you mean?

They found that when they trained an LLM to lie, it internally represented the truth and only switched to the lie at the end.
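That kind of finding is typically shown with a linear probe over hidden states: a simple classifier trained on the model's activations recovers the truth value even when the output is a lie. A minimal sketch of the probing step (the activations here are random placeholders; in a real probe they'd come from a forward pass at some mid layer):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    # Placeholder activations: one vector per statement the model was
    # prompted to lie about, plus the ground-truth label of each.
    hidden_states = rng.normal(size=(200, 768))
    is_true = rng.integers(0, 2, size=200)

    probe = LogisticRegression(max_iter=1000).fit(hidden_states, is_true)
    # With real activations, accuracy well above chance is the evidence
    # that the model "knew the truth" internally.
    print("probe accuracy:", probe.score(hidden_states, is_true))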
