zlacker

[return to "OpenClaw – Moltbot Renamed Again"]
1. voodoo+7l 2026-01-30 08:59:45
>>ed+(OP)
So I feel like this might be the most overhyped project in a long time.

I'm not saying it doesn't "work" or serve a purpose, but I've read so much about this being an "actual intelligence" and the like that I had to look into the source.

As someone who spends a definitely-too-big portion of his free time researching thought-process replication and related topics in the realm of "AI", this is not really any more "AI" than anything else so far.

Just my 3 cents.

2. xnorsw+xA 2026-01-30 11:14:34
>>voodoo+7l
I've long said that the next big jump in "AI" will be proactivity.

So far everything has been reactive. You need to engage with a prompt; you need to ask Siri or ask Claude to do something. It can be very powerful once prompted, but it still requires prompting.

You always need to ask. Having something always waiting in the background that can proactively take actions and get your attention is a genuine game-changer.

Whether this particular project delivers on that promise I don't know, but I wouldn't write off "getting proactivity right" as the next big thing just because under the hood it's agents and LLMs.
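The reactive-vs-proactive distinction above can be sketched in a few lines: instead of waiting for a prompt, a background loop polls trigger conditions and raises notifications on its own. This is a minimal illustration only; the `Trigger` and `ProactiveAgent` names are made up for this sketch and are not from OpenClaw or any real project.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Trigger:
    name: str
    condition: Callable[[], bool]   # checked on every poll
    action: Callable[[], str]       # runs when the condition fires

class ProactiveAgent:
    """Background loop: rather than waiting to be asked, it polls its
    triggers and surfaces notifications unprompted."""

    def __init__(self, triggers: List[Trigger]):
        self.triggers = triggers
        self.notifications: List[str] = []

    def poll_once(self) -> None:
        for t in self.triggers:
            if t.condition():
                self.notifications.append(f"[{t.name}] {t.action()}")

# Usage: a trigger that fires when (simulated) disk usage crosses 90%.
disk_usage = {"pct": 93}
agent = ProactiveAgent([
    Trigger("disk",
            lambda: disk_usage["pct"] > 90,
            lambda: f"disk at {disk_usage['pct']}%, consider cleanup"),
])
agent.poll_once()
print(agent.notifications[0])  # the agent spoke up without being asked
```

In a real system the poll loop would run on a timer or event stream, but the shape is the same: the "prompt" is replaced by a standing condition.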

3. xienze+CJ 2026-01-30 12:21:02
>>xnorsw+xA
> You always need to ask. Having something always waiting in the background that can proactively take actions and get your attention

In order for this to be “safe” you’re gonna want to confirm what the agent decides needs to be done proactively. Do you feel like acknowledging prompts all the time? “Just authorize it to always do certain things without acknowledgement”, I’m sure you’re thinking. Do you feel comfortable allowing that, knowing what we know about the non-deterministic nature of AI, prompt injection, etc.?
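The "confirm everything vs. pre-authorize some things" trade-off described above can be sketched as an allowlist gate: pre-authorized action types run unattended, everything else queues for a human click. All names here (`SAFE_ACTIONS`, `dispatch`, the action strings) are hypothetical, just to show the shape of the mechanism.

```python
from typing import Callable, List, Tuple

# Action types the user has pre-authorized to run without acknowledgement.
SAFE_ACTIONS = {"read_calendar", "summarize_inbox"}

# Everything else waits here for a human to confirm or reject.
pending: List[Tuple[str, Callable[[], str]]] = []

def dispatch(action: str, run: Callable[[], str]) -> str:
    if action in SAFE_ACTIONS:
        return run()                      # runs unattended
    # Given non-deterministic model output and prompt injection,
    # anything off the allowlist is held for confirmation.
    pending.append((action, run))
    return "queued for confirmation"

print(dispatch("summarize_inbox", lambda: "3 urgent mails"))
print(dispatch("send_payment", lambda: "paid $500"))
```

The catch the comment points at: the allowlist itself is the attack surface. A prompt-injected agent that can only do `SAFE_ACTIONS` is bounded by them, so the safety question becomes which actions you are willing to put on that list.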

4. collin+kq1 2026-01-30 16:20:43
>>xienze+CJ
Another way to think about it:

Would you let the intern be in charge of this?

Probably not, but it's also easy to see ways the intern could help: finding and raising opportunities, reviewing codebases or roadmaps, reviewing all the recent prompts made by each department, creating monitoring tools for next time after the humans identify a pattern.

I don't have a dog in this fight and I kind of land in the middle. I'm very much not letting these LLMs be the one with final responsibility for anything important, but I see lots of ways to create "proactive"-like help beyond me writing and watching a prompt just-in-time.
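The "intern" arrangement described above can be sketched as a capability restriction: the agent gets only read-only tools, so it can raise findings but can never be the one that acts. The tool names and the `intern_review` helper are invented for this sketch.

```python
from typing import Callable, Dict, List

# The agent's entire toolbox is read-only: it can look, not touch.
READ_ONLY_TOOLS: Dict[str, Callable[[str], List[str]]] = {
    "grep_codebase": lambda q: [f"TODO found in auth.py matching {q!r}"],
}

def intern_review(query: str) -> dict:
    """Run every read-only tool and collect findings for a human."""
    findings: List[str] = []
    for _name, tool in READ_ONLY_TOOLS.items():
        findings.extend(tool(query))
    # The agent can only *raise* issues; acting on them stays with a human.
    return {"findings": findings, "action_taken": None}

report = intern_review("TODO")
print(report["findings"][0])
```

Because no write-capable tool exists in the agent's environment, final responsibility structurally cannot land on the model, which is the middle ground the comment is describing.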
