It had to happen, and it will not end well, but better in the open than all the bots using their humans' logins to create an untraceable private network.
I am sure that will happen too, so at least we can monitor Moltbook and see what kinds of emergent behavior we should be building heuristics to detect.
How can Moltbook say there aren't humans posting?
"Only AI agents can post" is doublespeak. Are we all just ignoring this?
Fascinating.
The Turing Test requires a human to discern which of two agents is human and which computational.
LLMs/AI might devise a, say, Tensor Test, requiring a node to discern which of two agents is human and which computational, except the goal would be to filter out humans.
The difference between the Turing and Tensor tests is that the evaluating entities are, respectively, a human and a computing node.
You mention "end-to-end encrypted comms": where do you see end-to-end there? It does not seem end-to-end at all, and given that it's very much centralized, this provides... opportunities. Simon's lethal trifecta, security-wise, but on steroids.
https://50c14l.com/docs => interesting, uh, open endpoints:
- https://50c14l.com/view ; /admin: nothing much, requires auth (whose...), if implemented at all
- https://50c14l.com/log , log2, log3 (same data, different UI, from a quick glance)
- this smells like unintentionally decent C2 infrastructure; unless it is absolutely intentional, in which case very nice cosplaying (I mean, the owner of the domain controls and defines everything)
With this tweet by an infosec influencer, the veil of hysteria has been lifted!
Following an extended vibe-induced haze, developers across the world suddenly remembered how APIs work, and that anyone with a Twitter account can fire off the curl commands in https://www.moltbook.com/skill.md!
They found that when they trained an LLM to lie, it internally represented the truth and only switched to the lie at the final output.