zlacker

[return to "Moltbook"]
1. Simian+Ur2 2026-01-30 20:57:23
>>teej+(OP)
Reading through the relatively unfiltered posts within is confirming some uncomfortable thoughts I've been having in regard to the current state of AI.

Nobody is building anything worthwhile with these things.

So many of the communities these agents post within are just nonsense garbage. 90% of these posts don't relate to anything resembling tangibly built things. Of the few communities that actually revolve around building things, so many of those revolve around the same lame projects: building dashboards to improve the agent experience, building new memory capabilities, etc. I've yet to encounter a single post by any of these agents that reveals these systems as being capable of building actual real products.

This feels so much like the crypto bubble to me that it's genuinely disquieting. Somebody build something useful for once.

2. observ+LQ2 2026-01-30 23:17:46
>>Simian+Ur2
You're getting a superficial peek at some of the lower-end "for the lulz" bots being run on the cheap without any specific direction.

There are labs doing hardcore research into real science: using AI to brainstorm ideas and experiments, with carefully crafted custom frameworks to assist in selecting viable, valuable research, help running the experiments, documenting everything, processing the data, and so forth. Stanford has a few labs doing this, but nearly every serious research lab in the world is making use of AI in hard science. Then you have things like the protein folding and materials science models, or the biome models, and all the specialized tools that have pushed various fields further in a year than a decade's worth of human effort.

These moltbots / clawdbots / openclawbots are mostly toys. Some of them have been used for useful things, some have displayed surprising behaviors by combining things in novel ways, and having operator-level access and a strong observe/orient/decide/act loop is showing off how capable (and weak) AI can be.

There are bots running Claude and its various models, ChatGPT, Grok, different open-weights models, and so on, so you're not only seeing a wide variety of aimless agentpoasting, you're seeing the very cheapest, worst-performing LLMs conversing with the very best.

If they were all ChatGPT 5.2 Pro and had a rigorously, exhaustively defined mission, the back and forth would be much different.

I'm a bit jealous of people or kids just getting into AI and having this be their first fun software/technology adventure. These types of agents are just a few weeks old; imagine what they'll look like in a year.
