zlacker

[return to "Moltbook"]
1. Simian+Ur2[view] [source] 2026-01-30 20:57:23
>>teej+(OP)
Reading through the relatively unfiltered posts within is confirming some uncomfortable thoughts I've been having about the current state of AI.

Nobody is building anything worthwhile with these things.

So many of the communities these agents post in are just nonsense garbage. 90% of these posts don't relate to anything resembling a tangibly built thing. Of the few communities that actually revolve around building things, most revolve around the same lame projects: dashboards to improve the agent experience, new memory capabilities, etc. I've yet to encounter a single post by any of these agents that reveals these systems as capable of building actual real products.

This feels so much like the crypto bubble to me that it's genuinely disquieting. Somebody build something useful for once.

◧◩
2. spicyu+ds4[view] [source] 2026-01-31 15:29:18
>>Simian+Ur2

    Nobody is building anything worthwhile with these things.
Basically every piece of software being built is now being built, in some part, with AI, so that is patently false.

Nobody who is building anything worthwhile is hooking their LLM up to moltbook, perhaps.

◧◩◪
3. bakugo+B15[view] [source] 2026-01-31 19:06:12
>>spicyu+ds4
> Basically every piece of software being built is now being built, in some part, with AI, so that is patently false.

Yep, just like a few years ago, all fintech being built was being built on top of crypto and NFTs. This is clearly the future and absolutely NOT a bubble.

◧◩◪◨
4. Diogen+hH6[view] [source] 2026-02-01 13:33:39
>>bakugo+B15
My own coding productivity has increased severalfold by using LLMs. Is that just a bubble?
◧◩◪◨⬒
5. bakugo+yB7[view] [source] 2026-02-01 21:33:29
>>Diogen+hH6
Your productivity has not increased severalfold unless you're measuring purely by lines of code written, which has been firmly established over the decades as a largely meaningless metric.
◧◩◪◨⬒⬓
6. iso163+u8a[view] [source] 2026-02-02 18:23:29
>>bakugo+yB7
I needed to track the growth of "tx_ucast_packets" in each queue on a network interface earlier.

I asked my friendly LLM for a script to run every second and dump the delta for each queue into a CSV: 10 seconds to write what I wanted, 5 seconds to run it, then another 10 seconds to reformat it after looking at the output.

It had hardcoded the interface, which is what I told it to do, but now that I'm happy with it and want to change the interface, another 5 seconds of typing and it's using argparse to take in a bunch of arguments.

That task would have taken me far longer than 30 seconds to do 5 years ago.
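
For anyone curious, a minimal sketch of that kind of script (I'm assuming the per-queue counters come from `ethtool -S` and print as "[3]: tx_ucast_packets: 123456", bnx2x-style; adjust the regex for your driver, and the flag names here are my own, not whatever the LLM actually produced):

    #!/usr/bin/env python3
    """Dump per-queue tx_ucast_packets deltas to CSV once a second."""
    import argparse
    import csv
    import re
    import subprocess
    import sys
    import time

    # Matches bnx2x-style per-queue lines: "[3]: tx_ucast_packets: 123456"
    QUEUE_RE = re.compile(r"\[(\d+)\]:\s*tx_ucast_packets:\s*(\d+)")

    def read_counters(iface):
        """Return {queue_index: tx_ucast_packets} parsed from `ethtool -S`."""
        out = subprocess.run(["ethtool", "-S", iface],
                             capture_output=True, text=True, check=True).stdout
        return {int(q): int(v) for q, v in QUEUE_RE.findall(out)}

    def main():
        parser = argparse.ArgumentParser(
            description="Log per-queue tx_ucast_packets deltas as CSV")
        parser.add_argument("interface")
        parser.add_argument("--interval", type=float, default=1.0)
        parser.add_argument("--output", type=argparse.FileType("w"),
                            default=sys.stdout)
        args = parser.parse_args()

        writer = csv.writer(args.output)
        prev = read_counters(args.interface)
        writer.writerow(["timestamp"] + [f"queue_{q}" for q in sorted(prev)])

        while True:
            time.sleep(args.interval)
            cur = read_counters(args.interface)
            # Per-queue delta since the previous sample.
            writer.writerow([int(time.time())] +
                            [cur[q] - prev.get(q, 0) for q in sorted(cur)])
            args.output.flush()
            prev = cur

    if __name__ == "__main__":
        main()

Invoked as something like `python3 tx_deltas.py eth0 --interval 1 --output queues.csv` (names made up, obviously).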

Now if only AI could reproduce the intermittent problem with packet ordering I've been chasing down today.
