zlacker

Clawdbot - open source personal AI assistant

submitted by KuzeyA+(OP) on 2026-01-26 00:27:41 | 405 points 261 comments
[view article] [source]

NOTE: showing posts with links only
5. bravur+X4[view] [source] 2026-01-26 01:09:05
>>KuzeyA+(OP)
How do people think about the sort of access and permissions it needs?

"Don't give it access to anything you wouldn't give a new contractor on day one."

https://x.com/rahulsood/status/2015397582105969106

19. ex3ndr+37[view] [source] 2026-01-26 01:30:11
>>KuzeyA+(OP)
What if we go even further? I built an end-to-end messaging layer, called Murmur, so Clawdbot agents can talk to each other - https://github.com/slopus/murmur.

We tried this with friends and it is truly magical (while crazy insecure) - I can ask my agent to search my friends' lives: their preferences, their calendars, what films they're watching. It can look at emails, figure out if you need something, and go ask people around you for help. It is truly magical. Very, very curious where it can go. At the moment it is exceptionally easy to exfiltrate anything, but you can still control via proper prompts what you want to share and what you don't. I bet models will get better and eventually it won't be a problem.
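Prompt-level controls are best-effort, so the usual hedge is a deterministic policy layer outside the model. A minimal sketch of that idea - the names, categories, and policy shape here are illustrative, not Murmur's actual design:

```python
# Illustrative allowlist applied before an agent answers a friend's agent,
# so sharing rules don't depend on the model following its prompt.
SHARE_POLICY = {
    "calendar": {"alice", "bob"},        # who may query each category
    "films": {"alice", "bob", "carol"},
    "email": set(),                      # never shared agent-to-agent
}

def may_share(category: str, requester: str) -> bool:
    """True only if the category is explicitly shared with the requester."""
    return requester in SHARE_POLICY.get(category, set())

# Unknown categories and unknown requesters are denied by default.
assert may_share("films", "carol")
assert not may_share("email", "alice")
assert not may_share("location", "alice")
```

The point is that a deny-by-default table is enforced in code, so a prompt-injected model can't talk its way past it.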

21. AWebOf+m7[view] [source] 2026-01-26 01:32:38
>>KuzeyA+(OP)
If you're interested in hosting it at no cost on Oracle Cloud's always-free tier (4 CPUs, 24 GB RAM), instead of buying a Mac Mini or paying for a VPS, I wrote up a how-to with a Pulumi infra-as-code template here: https://abrown.blog/posts/personal-assistant-clawdbot-on-ora...
27. kristo+Ra[view] [source] 2026-01-26 01:58:40
>>KuzeyA+(OP)
Baffling.

Isn't this just a basic completion loop with tool calling hooked up to a universal chat gateway?

Isn't that a one-shot ChatGPT prompt?

(Yes it is: https://chatgpt.com/share/6976ca33-7bd8-8013-9b4f-2b417206d0...)

Why's everyone couch fainting over this?

29. theham+bb[view] [source] 2026-01-26 02:01:06
>>KuzeyA+(OP)
something feels off to me about the clawdbot hype

About the maintainer's github:

688 commits on Nov 25, 2025... of which 296 were in clawdbot, IN ONE DAY. He probably let an agent loose on the project for a few hours...

He averages more than 200 commits per day, often 400-500, and people are still using this project without thinking of the repercussions.

Now, something else i researched:

Someone launched some crypto on this, has $6M mktcap

https://www.coincarp.com/currencies/clawdbot/

Crypto people hyping clawdbot: https://x.com/0xifreqs/status/2015524871137120459

And this article telling you how to use clawdbot and how "revolutionary" it is (by an author named "Solana Levelup"): https://medium.com/@gemQueenx/clawdbot-ai-the-revolutionary-...

Make of that what you will

◧◩
32. theham+zb[view] [source] [discussion] 2026-01-26 02:03:38
>>theham+bb
His GitHub: https://github.com/steipete

Look at his contribution graph - it's absolutely wild.

◧◩
36. dangoo+Yb[view] [source] [discussion] 2026-01-26 02:05:51
>>theham+bb
the developer is very well known https://github.com/steipete

The crypto is obviously not official - just another scam trying to ride the popularity.

Make of that what you will

◧◩◪
83. jason_+pl[view] [source] [discussion] 2026-01-26 03:33:15
>>kristo+Uj
>>9224
86. xtagon+Sl[view] [source] 2026-01-26 03:37:47
>>KuzeyA+(OP)
Wild. There are 300 open GitHub issues. One of them is this (also AI-generated) security report: https://github.com/clawdbot/clawdbot/issues/1796, claiming hundreds of high-risk findings, including examples of hard-coded, unencrypted OAuth credentials.

I am...disinclined to install this software.

◧◩◪
96. xtagon+Qp[view] [source] [discussion] 2026-01-26 04:21:19
>>strang+Cn
You're talking about a compromised box, but to clarify: this is hard-coded into the source in the repo, not an end-user's credentials (and it's a `client_id` and `client_secret`, not a token): https://github.com/clawdbot/clawdbot/blob/7187c3d06765c9d3a7...
◧◩
103. eddyg+ws[view] [source] [discussion] 2026-01-26 04:52:36
>>gdiamo+dr
There are definitely people who should not be running this

https://www.shodan.io/search?query=clawdbot-gw

◧◩◪◨
110. tehlik+Fv[view] [source] [discussion] 2026-01-26 05:36:02
>>bdangu+df
The text is Turkish - use Twitter's auto-translation to read it: https://x.com/ersinkoc/status/2015394695015240122
◧◩
124. akmari+Gz[view] [source] [discussion] 2026-01-26 06:25:59
>>theham+bb
Peter Steinberger is a well-respected developer who started out in the mobile dev community. He founded a company, made an exit, and is set for money, so now he just does things for fun.

Yes, he AI-generated all of it; go through his articles at https://steipete.me/ to see how he does it. It's definitely not "vibe coding": he makes sure that what's being output is solid.

He was one of the top Claude Code users a year back, which helped bring about the usage limits we know today.

He also hosts Claude Code anonymous meetups all over the world.

He’s overall a passionate developer that cares about the thing he’s building.

131. abhise+QA[view] [source] 2026-01-26 06:39:30
>>KuzeyA+(OP)
Tried installing clawdbot. It got blocked by (my own) sandbox because it tried to git clone some stuff, which in turn accessed my private keys.

- clawdbot depends on @whiskeysockets/baileys

- @whiskeysockets/baileys depends on libsignal

npm view @whiskeysockets/baileys dependencies

[..] libsignal: 'git+https://github.com/whiskeysockets/libsignal-node.git', [..]

libsignal is not a regular npm package but a GitHub repository, which needs to be cloned and built locally.

So suddenly my sandbox profile, tuned for npm package installation, no longer works, because npm decided to treat my system as a build environment.

Maybe it's a genuine use case, but it's hard to keep up.
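One way to catch this before install is to scan the lockfile for git-sourced packages - those are the ones npm will clone and build locally instead of fetching a tarball. A quick sketch; the lockfile fragment below is made up to mimic the baileys -> libsignal case, not copied from the real package:

```python
import json

def git_dependencies(lockfile_text: str) -> list[str]:
    """Return package paths whose 'resolved' field points at a git repo,
    i.e. packages npm will clone and build locally."""
    lock = json.loads(lockfile_text)
    hits = []
    for name, meta in lock.get("packages", {}).items():
        resolved = meta.get("resolved", "")
        if resolved.startswith(("git+", "git://")):
            hits.append(name or "(root)")
    return hits

# Minimal, made-up package-lock.json fragment for illustration:
example = json.dumps({
    "packages": {
        "node_modules/libsignal": {
            "resolved": "git+https://github.com/whiskeysockets/libsignal-node.git"
        },
        "node_modules/left-pad": {
            "resolved": "https://registry.npmjs.org/left-pad/-/left-pad-1.3.0.tgz"
        },
    }
})
print(git_dependencies(example))  # ['node_modules/libsignal']
```

Running something like this against a project's `package-lock.json` flags any dependency that will trigger a local clone-and-build, before it can trip (or bypass) a sandbox profile.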

◧◩
161. dewey+k21[view] [source] [discussion] 2026-01-26 11:07:31
>>hestef+G11
Because it's using an actual Mac as a gateway to run this on: https://docs.clawd.bot/help/faq#do-i-have-to-buy-a-mac-mini-...
◧◩
179. reacha+pa1[view] [source] [discussion] 2026-01-26 12:13:34
>>jwally+g31
I've been thinking about this very thing the last few days. A "secretary in my Mac", to be specific. An ever-running daemon that uses an LLM for smarts, but does as many dumb things deterministically as possible.

1. Fetch my calendars (Fastmail, work Google Calendar, couple's calendar at Cupla), embellish them with routine tasks like kid pickup/drop-off, and give me a Today view like this: https://zoneless.tools/difference/london-vs-new-york?cities=...

2. Access my TODO list in Apple Notes and basically remind my ADHD brain that I ought to be doing something, and not let it slip because it's uninteresting.

3. Have access to all models via API keys I configure, and maintain a "research journal" of all the things I go to LLMs for - "research of a bike that fits my needs", whatever - figuring out if there needs to be a TODO about them and adding one if I say yes.

4. View my activity as a professional coach and nudge me into action: "Hey, you wanted to do this at work this year, but you haven't begun... maybe it's time you look at it Thursday at 3 PM?"

5. View my activity as a mental health coach and nudge me like "Hey, you're researching this, that, and blah while X, Y, and Z are pending. Want me to record the state of this research so you can get back to doing X, Y, and Z?" Or just talk to me like a therapist would.

6. Be my spaghetti wall. When a new idea pops into my head, I send this secretary a message, and it ruminates over it like I would and matures that idea in a directory that I can review and obsess over later when there is time.

As you can see, this is quite personal in nature; I don't want hosted LLMs to know me this deeply. It has to be a local model, even if it's slow.
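The "dumb deterministic daemon" idea above can be sketched as a scheduler where fixed rules decide *when* to act and the local model is only called to phrase things. Everything here is hypothetical - the `llm` stub stands in for whatever local model the daemon would call:

```python
import datetime

def llm(prompt: str) -> str:
    # Stand-in for a call to a slow local model; only used for wording,
    # never for deciding whether to act.
    return f"[local model reply to: {prompt}]"

def tick(now: datetime.datetime, todos: list[str], events: list[str]) -> list[str]:
    """One scheduler pass: deterministic triggers, LLM only for phrasing."""
    nudges = []
    # Rule 1: at 8 AM, build the Today view from fetched calendar events.
    if now.hour == 8 and events:
        nudges.append(llm(f"Summarize today's {len(events)} events as a Today view"))
    # Rule 2: every pass, re-surface open TODOs so nothing slips.
    for todo in todos:
        nudges.append(llm(f"Write a gentle reminder for: {todo}"))
    return nudges

out = tick(datetime.datetime(2026, 1, 26, 8, 0),
           todos=["renew passport"], events=["school pickup"])
assert len(out) == 2
```

The design point: if the trigger logic is plain code, the daemon behaves predictably even when the model is slow or flaky.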

◧◩◪◨⬒⬓
188. saberi+Tq1[view] [source] [discussion] 2026-01-26 14:03:05
>>pgwhal+1m
Literally this, from the past two weeks: a prompt-injection attack that works on Superhuman, the AI email assistant application.

https://www.promptarmor.com/resources/superhuman-ai-exfiltra...

>>46592424

◧◩◪◨⬒⬓⬔⧯
206. lmeyer+oS1[view] [source] [discussion] 2026-01-26 16:12:21
>>pgwhal+fo1
Like https://www.securityweek.com/hackers-target-popular-nx-build... ?

Or the many people putting content in their LinkedIn profiles, forums like these, etc., because they know scrapers are targeting them?

Or the above, for the users stating they're using it to scrape HN?

◧◩◪◨⬒⬓⬔⧯▣
210. pgwhal+N62[view] [source] [discussion] 2026-01-26 17:12:38
>>lmeyer+oS1
> Like https://www.securityweek.com/hackers-target-popular-nx-build... ?

I only had time to skim this, but it doesn't seem like prompt injection to me - just good old-fashioned malware in a node package.

Your other two examples do seem to open the door for prompt injection; I was just asking about documented cases of it succeeding.

◧◩
211. amistr+m72[view] [source] [discussion] 2026-01-26 17:15:50
>>jwally+g31
I've been spending some nights & weekends building exactly this recently. I wanted something that managed my email & calendar, and proactively helped out (or nagged me) when it identified anything important.

It has a handful of core features:

- Key obligations & insights are grok'd from emails and calendar events; these get turned into an ever-evolving, always-up-to-date set of tasks, displayed on a web UX and sent to you in a personalized daily briefing
- You can chat with the agent via Telegram or email, and it can research/query your inbox or calendar, create or resolve tasks, email others, etc.
- If the AI identifies opportunities to be proactive (e.g. an upcoming deadline or a missing RSVP on an event), it pings you with more context and you can give the green light for the agent to execute

Generally I'm trying to identify a finite list of busywork tasks that could be automated, and let users delegate them to the agent. Or, in the future (with high enough confidence), let the agent just execute automatically.
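That "green light unless confidence is high enough" gate is a small but load-bearing piece of this kind of design. A hypothetical sketch - the class, threshold, and names are made up for illustration, not elani's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    """An action the agent wants to take, held until approved."""
    description: str
    confidence: float      # agent's self-assessed confidence, 0..1
    approved: bool = False # set when the user gives the green light

AUTO_RUN_THRESHOLD = 0.95  # assumed cutoff for fully automatic execution

def should_execute(action: ProposedAction) -> bool:
    # Execute only on explicit approval, or when confidence clears the bar.
    return action.approved or action.confidence >= AUTO_RUN_THRESHOLD

rsvp = ProposedAction("RSVP 'yes' to Friday standup", confidence=0.6)
assert not should_execute(rsvp)   # waits for the user's green light
rsvp.approved = True
assert should_execute(rsvp)
```

Keeping the gate as a one-line predicate makes it easy to audit exactly when the agent is allowed to act on its own.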

Built the stack on Cloudflare (D1, Workers/Workflows/Queues, Vectorize), using gemini-3-flash as the model.

Would love any feedback: https://elani.ai.

213. bluesn+Fs2[view] [source] 2026-01-26 18:39:36
>>KuzeyA+(OP)
Had a similar thought since I started using the Slack MCP in Claude Code. It's handy, for instance during an incident, to be researching the problem - digging through Sentry or ClickHouse or the code - and have it post updates directly to our #engineering channel for the team to see. But... they can't reply. Or rather they can, but Claude has to poll each thread or channel to see replies, which is a pretty clumsy workflow.

So anyway long story short I made something like Clawdbot but in the cloud: https://stumpy.ai/

It didn't occur to me to design it to run locally and leave it running on my machine. You can't close your laptop or Clawdbot dies? It can read all your files? I'd rather run agents in the cloud. I gave them sandboxes (Fly sprites) so you can still have them do software development or whatever.

220. vismit+EU3[view] [source] 2026-01-27 03:46:56
>>KuzeyA+(OP)
Qordinate - Another personal AI assistant: https://www.qordinate.ai
◧◩
221. bluesn+TU3[view] [source] [discussion] 2026-01-27 03:49:23
>>jwally+g31
Building it now. Basically raw agents you can talk to over any channel, like Slack/Telegram/etc. (SMS and voice calling should be working shortly.) They can connect to your email/calendar. Files and SQLite for memory/storage. An optional sandbox for coding or whatever. It's all a bit rough but working.

https://stumpy.ai

225. SinghC+a14[view] [source] 2026-01-27 04:52:37
>>KuzeyA+(OP)
We at Qordinate have made a managed version, which works on Telegram, Slack, WhatsApp, and our own app at the moment, with iMessage, email, and Teams in the pipeline. The idea: if you don't want the hassle of managing this yourself, you can offload it to us, along with the security aspects (prompt injection) and the performance aspects (tool search, etc.). If you want to try it, it's available for free right now, since we're early: https://qordinate.ai
◧◩◪◨⬒
255. mgdev+zn9[view] [source] [discussion] 2026-01-28 15:08:07
>>bronco+Eu5
One thing you can try is powering Clawdbot with a local model. My company recently wrote[0] about it.

Unclear what kind of quality you'll get out of it, but since the tokens are all local, it kinda doesn't matter if it burns through 10x more for the same outcome.

[0]:https://www.docker.com/blog/clawdbot-docker-model-runner-pri...

261. BojanT+Iuq[view] [source] 2026-02-02 21:45:56
>>KuzeyA+(OP)
I am too afraid to try it. There's so much that can go wrong. I wrote an article about it if anyone is curious: https://intelligenttools.co/blog/moltbook-ai-assistant-socia...