zlacker

[return to "Clawdbot - open source personal AI assistant"]
1. hexspr+G5[view] [source] 2026-01-26 01:17:52
>>KuzeyA+(OP)
Clawdbot finally clicked for me this week. I was renting out an apartment and I had it connect to FB messenger, do the initial screening messages and then schedule times for viewings in my calendar. I was approving it's draft messages but starting giving it some automatic responses as well. Overall it did 9/10 on this task with a couple cases where it got confused. This is just scratching the surface but this was something that was very valuable for me and saved me several hours of time.
◧◩
2. gmerc+ec[view] [source] 2026-01-26 02:07:25
>>hexspr+G5
Wait until you figure out prompt injection. It's wild
◧◩◪
3. bdangu+df[view] [source] 2026-01-26 02:32:06
>>gmerc+ec
why should one be more concerned about hypothetical prompt injection and that being the reason not to use clawdbot? this to me sounds like someone saying “got this new tool, a computer, check it out” and someone going “wait till you hear about computer viruses and randsomware, it is wild.”
◧◩◪◨
4. gmerc+Og[view] [source] 2026-01-26 02:47:21
>>bdangu+df
Oh you’ll find out. It’s as hypothetical as the combustibility of hydrogen gas. FAFO
◧◩◪◨⬒
5. pgwhal+1m[view] [source] 2026-01-26 03:39:20
>>gmerc+Og
What are some examples of malicious prompt injection you’ve seen in the wild so far?
◧◩◪◨⬒⬓
6. saberi+Tq1[view] [source] 2026-01-26 14:03:05
>>pgwhal+1m
Literally this from the past two weeks, a prompt injection attack that works on Superhuman, the AI email assistant application.

https://www.promptarmor.com/resources/superhuman-ai-exfiltra...

>>46592424

◧◩◪◨⬒⬓⬔
7. pgwhal+vx1[view] [source] 2026-01-26 14:38:24
>>saberi+Tq1
Thanks for sharing the example!
[go to top]