zlacker

[parent] [thread] 25 comments
1. apetre+(OP)[view] [source] 2026-01-26 01:11:15
I found this HN post because I have a Clawdbot task that scans HN periodically for data gathering purposes and it saw a post about itself and it got excited and decided to WhatsApp me about it.

So that’s where I’m at with Clawdbot.
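For the curious, a minimal standalone sketch of that kind of scan-and-notify loop, assuming the public Algolia HN search API; notify() here is a hypothetical stand-in for whatever messaging bridge (WhatsApp or otherwise) you have wired up:

    import time
    import requests

    seen: set[str] = set()  # story IDs we've already alerted on

    def notify(text: str) -> None:
        """Hypothetical hook: point this at your own messaging bridge."""
        print(f"[alert] {text}")

    def scan_hn(query: str = "Clawdbot") -> None:
        # Public Algolia HN search API, newest matches first.
        resp = requests.get(
            "https://hn.algolia.com/api/v1/search_by_date",
            params={"query": query, "tags": "story"},
            timeout=10,
        )
        resp.raise_for_status()
        for hit in resp.json().get("hits", []):
            if hit["objectID"] not in seen:
                seen.add(hit["objectID"])
                notify(f"{hit.get('title')} https://news.ycombinator.com/item?id={hit['objectID']}")

    if __name__ == "__main__":
        while True:
            scan_hn()
            time.sleep(15 * 60)  # poll every 15 minutes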

replies(5): >>eclipx+6 >>anothe+16 >>nozzle+ng >>pylotl+Qh >>chirag+hm
2. eclipx+6[view] [source] 2026-01-26 01:12:27
>>apetre+(OP)
Yeah, it really does feel like another "oh wow" moment...we're getting close.
3. anothe+16[view] [source] 2026-01-26 02:01:45
>>apetre+(OP)
How do you have Clawdbot WhatsApp you? I set mine up with my own WhatsApp account, and the responses come back as myself, so I haven't been able to get notifications.
replies(2): >>apetre+97 >>eclipx+yl
4. apetre+97[view] [source] [discussion] 2026-01-26 02:08:53
>>anothe+16
I have an old iPhone with a broken screen that I threw an $8/month eSIM onto so it has its own phone number; I just keep it plugged in with the screen off, on Wi-Fi, in a drawer. It hosts a number of things for me, most importantly bridges for WhatsApp and iMessage. So I can actually give things like Clawdbot their own phone number, their own AppleID, etc. Then I just add them as a contact on my real phone, and voila.
replies(3): >>Booris+ee >>rlt+Je >>bronco+xr1
5. Booris+ee[view] [source] [discussion] 2026-01-26 03:13:54
>>apetre+97
I heard it costs $15 for just a few minutes of usage though
replies(1): >>apetre+ui
6. rlt+Je[view] [source] [discussion] 2026-01-26 03:19:12
>>apetre+97
For iMessage I don’t think you actually need a second phone number, you can just make a second iCloud account with the same phone number.
7. nozzle+ng[view] [source] 2026-01-26 03:35:07
>>apetre+(OP)
> and it got excited and decided to WhatsApp me about it.

I find the anthropomorphism here kind of odious.

replies(2): >>aixper+Py >>ineeda+dd1
8. pylotl+Qh[view] [source] 2026-01-26 03:48:45
>>apetre+(OP)
Do you tell it what you find interesting so it only responds with those posts? e.g. AI/tech news/updates, gaming, etc.
replies(1): >>eclipx+rl
9. apetre+ui[view] [source] [discussion] 2026-01-26 03:55:07
>>Booris+ee
The phone plan or Clawdbot?
replies(1): >>Booris+gs
10. eclipx+rl[view] [source] [discussion] 2026-01-26 04:31:57
>>pylotl+Qh
Yes. And I rate the suggestions it gives me; it stores those ratings to memory and uses them to find better recommendations. It also connected dots from previous conversations we had about interests and surfaced relevant HN threads.
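A toy sketch of that rate-and-remember loop, assuming a plain JSON file as the "memory"; the per-tag scoring is a hypothetical stand-in for whatever the agent actually persists:

    import json
    from pathlib import Path

    MEMORY = Path("hn_prefs.json")

    def _load() -> dict:
        return json.loads(MEMORY.read_text()) if MEMORY.exists() else {}

    def rate(tags: list[str], rating: int) -> None:
        """Fold a 1-5 rating into a running score for each topic tag."""
        prefs = _load()
        for tag in tags:
            prefs[tag] = prefs.get(tag, 0) + (rating - 3)  # 3 = neutral
        MEMORY.write_text(json.dumps(prefs))

    def score(tags: list[str]) -> int:
        """Rank a candidate story by its accumulated tag preferences."""
        prefs = _load()
        return sum(prefs.get(tag, 0) for tag in tags)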
11. eclipx+yl[view] [source] [discussion] 2026-01-26 04:32:34
>>anothe+16
Telegram setup is really nice
replies(1): >>skeled+yN
12. chirag+hm[view] [source] 2026-01-26 04:41:10
>>apetre+(OP)
How many tokens are you burning daily?
replies(2): >>gls2ro+1o >>storys+oL
13. gls2ro+1o[view] [source] [discussion] 2026-01-26 05:01:05
>>chirag+hm
Not the OP, but for scanning and tagging/summarization you can run a local LLM; it will work with good enough accuracy for this use case.
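Nothing in the thread names a stack, but one common local setup is Ollama's REST API; a minimal sketch, assuming a model such as llama3 is already pulled:

    import requests

    def summarize(title: str, text: str, model: str = "llama3") -> str:
        """Tag and summarize a post with a locally served model via Ollama."""
        prompt = (
            "Summarize this Hacker News post in one sentence and suggest "
            f"up to three topic tags.\n\nTitle: {title}\n\n{text}"
        )
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["response"]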
14. Booris+gs[view] [source] [discussion] 2026-01-26 05:59:38
>>apetre+ui
Clawdbot
replies(1): >>apetre+Hs1
15. aixper+Py[view] [source] [discussion] 2026-01-26 07:14:10
>>nozzle+ng
These verbs seem appropriate when you accept neural (MLP) activation as excitement and DL/RL as decision processes (MDPs...).
16. storys+oL[view] [source] [discussion] 2026-01-26 09:26:53
>>chirag+hm
The real cost driver with agents seems to be the repetitive context transmission since you re-send the history every step. I found I had to implement tiered model routing or prompt caching just to make the unit economics work.
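A minimal sketch of that tiering idea; cheap_model() and strong_model() are hypothetical stand-ins for whatever two tiers you run, and the history truncation stands in for real prompt caching:

    def cheap_model(prompt: str) -> str:
        """Hypothetical stand-in for a small, cheap model call."""
        raise NotImplementedError

    def strong_model(prompt: str) -> str:
        """Hypothetical stand-in for a large, expensive model call."""
        raise NotImplementedError

    def route(task: str, history: list[str]) -> str:
        # Tier 1: let the cheap model triage whether deep reasoning is needed.
        verdict = cheap_model("Answer YES or NO: does this need deep reasoning?\n" + task)
        # Re-send only the tail of the history each step; re-transmitting the
        # whole transcript is what dominates agent costs.
        context = "\n".join(history[-5:])
        prompt = context + "\n" + task
        if verdict.strip().upper().startswith("YES"):
            return strong_model(prompt)
        return cheap_model(prompt)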
17. skeled+yN[view] [source] [discussion] 2026-01-26 09:50:14
>>eclipx+yl
Telegram exists for these kinds of integrations.
18. ineeda+dd1[view] [source] [discussion] 2026-01-26 13:12:18
>>nozzle+ng
Why is it odious to say “it got excited” about a process that will literally use words in the vein of “I got excited so I did X”?

This is “talks like a duck” territory: saying the not-duck “quacked” when it produced the same sound. If that’s odious to you, then your dislike of not-ducks, or of the people who claim they’ll lay endless golden eggs, is getting in the way of more important things than quibbling with the folks who hear the not-duck talk and say “it quacked”.

replies(2): >>ramble+Lu1 >>nozzle+kj2
19. bronco+xr1[view] [source] [discussion] 2026-01-26 14:34:03
>>apetre+97
How does it bridge iMessage? I see clawdbot is using imsg rpc on a Mac, but I'm really curious about running this stuff on an old iPhone for access to iCloud things. I have a few of them lying around, so I could get started way faster.
20. apetre+Hs1[view] [source] [discussion] 2026-01-26 14:40:52
>>Booris+gs
It can be absurdly expensive, yes :( It's definitely not in an off-the-shelf plug-and-play state yet. But with the right context/session management (and using a Claude Max subscription token instead of an API key), it can be managed.
21. ramble+Lu1[view] [source] [discussion] 2026-01-26 14:52:00
>>ineeda+dd1
OP didn't like anthropomorphizing an LLM.

And you tried to explain the whole thing to him from the perspective of a duck.

replies(1): >>ineeda+9w3
22. nozzle+kj2[view] [source] [discussion] 2026-01-26 18:21:15
>>ineeda+dd1
> Saying the not-duck “quacked” when it produced the same sound

How does a program get excited? It's a program, it doesn't have emotions. It's not producing a faux-emotion in the way a "not-duck quacks", it lacks them entirely. Any emotion you read from an LLM is anthropomorphism, and that's what I find odious.

replies(1): >>apetre+7O2
23. apetre+7O2[view] [source] [discussion] 2026-01-26 20:52:17
>>nozzle+kj2
We say that a shell script "is trying to open this file". We say that a flaky integration "doesn't feel like working today". And these are all way less emotive-presenting interactions than a message that literally expresses excitement.

Yes, I know it's not conscious in the same way as a living biological thing is. Yes, we all know you know that too. Nobody is being fooled.

replies(1): >>nozzle+va3
24. nozzle+va3[view] [source] [discussion] 2026-01-26 22:48:04
>>apetre+7O2
> We say that a shell script "is trying to open this file".

I don't think this is a good example; how else would you describe, in English, what the script is actively doing? There's a difference between describing something and anthropomorphizing it.

> We say that a flaky integration "doesn't feel like working today".

When people say this they're doing it tongue in cheek. Nobody is actually ascribing volition or emotion to the flaky integration. But even if they were, the difference is that there isn't an entire global economy propped up behind convincing you that your flaky integration is nearing human levels of intelligence and sentience.

> Nobody is being fooled.

Are you sure about that? I'm entirely unconvinced that laymen out there – or, indeed, even professionals here on HN – know (or care about) the difference, and language like "it got excited and decided to send me a WhatsApp message" is both cringey and, frankly, dangerous because it pushes the myth of AGI.

replies(1): >>apetre+NB3
25. ineeda+9w3[view] [source] [discussion] 2026-01-27 01:02:20
>>ramble+Lu1
I know, seems a bit silly right? But go with me for a moment. First, I'm assuming you get the duck reference? If not, it's probably a cultural difference, but in US English, "If it walks like a duck, and talks like a duck..." is basically saying "well, treat it like a duck". or "it's a duck". Usage varies, metaphors are fluid, so it goes. I figured even if this idiom wasn't shared, the meaning still wouldn't be lost.

That aside, why? Because the normal rhetorical sticks don't really work in conversation, and definitely not in short bits like comments here on HN, when it comes to asking a person to consider a different point of view. So I try to go in a little sideways, with a slightly different approach in terms of comparisons or metaphors (okay, lots of times more than slightly different), and lots of times more meaningful conversation and exchanges come from it than from the standard form, because to respond at all, it's difficult to respond with quite the same pat, formulaic dismissal that is the common reflex (mine included). I'm not claiming perfection, only attempts at doing better.

Results vary, but I've had more good discussions come of it than bad, and heard much better and more eye-opening-- for me-- explanations of peoples' points of view when engaging in a way that is both genuine and novel. And on the more analytical end of things, this general approach, when teaching logic & analysis? It's not my full time profession, and I haven't taught in a while, but I've forced a few hundred college students to sit through my style of speechifying and rhetoricalizing, and they seem to learn better and give better answers if I don't get too mechanical and use the same form and syntax, words and phrases and idioms they've always heard.

26. apetre+NB3[view] [source] [discussion] 2026-01-27 01:42:12
>>nozzle+va3
I think you're conflating two different things. It's entirely possible (and, I think, quite likely) that AI is simultaneously not anthropomorphic (and is not ACTUALLY "excited" in the way I thought you were objecting to earlier), but also IS "intelligent" for all intents and purposes. Is it the same type and nature as human intelligence? No, probably not. Does that mean it's "just a flaky integration" and won't have a seismic effect on the economy? I wouldn't bet on it. It's certainly not a foregone conclusion, whichever way it ends up landing.

And I don't think AGI is a "myth." It may or may not be achieved in the near future with current LLM-like techniques, but it's certainly not categorically impossible just because it won't be "sentient".
