zlacker

OpenClaw is what Apple intelligence should have been

submitted by jakequ+(OP) on 2026-02-05 00:28:06 | 306 points 240 comments
[view article] [source] [go to bottom]

NOTE: showing posts with links only show all posts
◧◩
11. calvin+T5[view] [source] [discussion] 2026-02-05 01:14:53
>>crazyg+w5
Apple literally lives on the "Cutting Edge" a-la XKCD [1]. My wife is an iPerson and she always tells me about these new features (my phone has had them since $today-5 years). But for her, these are brand new exciting things!

https://xkcd.com/606/

33. IcyWin+G8[view] [source] 2026-02-05 01:38:43
>>jakequ+(OP)
According to https://1password.com/blog/from-magic-to-malware-how-opencla..., The top skill is/was malware.

It's obviously broken, so no, Apple Intelligence should not have been this.

◧◩
57. huwser+na[view] [source] [discussion] 2026-02-05 01:52:10
>>crazyg+w5
I don’t believe this was ever confirmed by Apple, but there was widespread speculation at the time[1] that the delay was due to the very prompt injection attacks OpenClaw users are now discovering. It would be genuinely catastrophic to ship an insecure system with this kind of data access, even with an ‘unsafe mode’.

These kinds of risks can only be _consented to_ by technical people who correctly understand them, let alone borne by them, but if this shipped there would be thousands of Facebook videos explaining to the elderly how to disable the safety features and open themselves up to identity theft.

The article also confuses me because Apple _are_ shipping this, it’s pretty much exactly the demo they gave at WWDC24, it’s just delayed while they iron this out (if that is at all possible). By all accounts it might ship as early as next week in the iOS 26.4 beta.

[1]: https://simonwillison.net/2025/Mar/8/delaying-personalized-s...

◧◩◪◨
63. lanake+gb[view] [source] [discussion] 2026-02-05 01:58:34
>>neuman+La
Probably the Mac Mini. A few OpenClaw users are buying the agent a dedicated device so that it can integrate with their Apple account.

For example: https://x.com/michael_chomsky/status/2017686846910959668.

◧◩◪◨
123. dmix+Zh[view] [source] [discussion] 2026-02-05 02:58:19
>>DrewAD+qd
> What people are talking about doing with OpenClaw I find absolutely insane.

Based on their homepage the project is two months old and the guy described it as something he "hacked together over a weekend project" [1] and published it on github. So this is very much the Raspberry Pi crowd coming up with crazy ideas and most of them probably don't work well, but the potential excites them enough to dabble in risky areas.

[1] https://openclaw.ai/blog/introducing-openclaw

◧◩◪◨⬒⬓
141. whatsu+hk[view] [source] [discussion] 2026-02-05 03:18:22
>>koolal+mi
There are few open source projects coming along that let you sell your compute power in a decentralized way. I don't know how genuine some of these are [0] but it could be the reason: people are just trying to make money.

0. https://www.daifi.ai/

144. EGreg+zk[view] [source] 2026-02-05 03:21:33
>>jakequ+(OP)
No. Emphatically NOT. Apple has done a great job safeguarding people's devices and privacy from this crap. And no, AI slop and local automation is scarcely better than giving up your passwords to see pictures of cats, which is an old meme about the gullibility of the general public.

OpenClaw is a symbol of everything that's wrong with AI, the same way that shitty memecoins with teams that rugpull you, or blockchain-adjacent centralized "give us your money and we pinky swear we are responsible" are a symbol of everything wrong with Web3.

Giving everyone GPU compute power and open source models to use it is like giving everyone their own Wuhan Gain of Function Lab and hoping it'll be fine. Um, the probability of NO ONE developing bad things with AI goes to 0 as more people have it. Here's the problem: with distributed unstoppable compute, even ONE virus or bacterium escaping will be bad (as we've seen with the coronavirus for instance, smallpox or the black plague, etc.) And here we're talking about far more active and adaptable swarms of viruses that coordinate and can wreak havoc at unlimited scale.

As long as countries operate on the principle of competition instead of cooperation, we will race towards disaster. The horse will have left the barn very shortly, as open source models running on dark compute will begin to power swarms of bots to be unstoppable advanced persistent threats (as I've been warning for years).

Gain-of-function research on viruses is the closest thing I can think of that's as reckless. And at least there, the labs were super isolated and locked down. This is like giving everyone their own lab to make designer viruses, and hoping that we'll have thousands of vaccines out in time to prevent a worldwide catastrophe from thousands of global persistent viruses. We're simply headed towards a nearly 100% likely disaster if we don't stop this.

If I had my way, AI would only run in locked-down environments and we'd just use inert artifacts it produces. This is good enough for just about all the innovations we need, including for medical breakthroughs and much more. We know where the compute is. We can see it from space. Lawmakers still have a brief window to keep it that way before the genie cannot be put back into the bottle.

A decade ago, I really thought AI would be responsible developed like this: https://nautil.us/the-last-invention-of-man-236814/ I still remember the quaint time when OpenAI and other companies promised they'd vet models really strongly before releasing them or letting them use the internet. That was... 2 years ago. It was considered an existential risk. No one is talking about that now. MCP just recently was the new hotness.

I wasn't going to get too involved with building AI platforms but I'm diving in and a month from now I will release an alternative to OpenClaw that actually shows the way how things are supposed to go. It involves completely locked-down environments, with reproducible TEE bases and hashes of all models, and even deterministic AI so we can prove to each other the provenance of each output all the way down to the history of the prompts and input images. I've already filed two provisional patents on both of these and I'm going to implement it myself (not an NPE). But even if it does everything as well as OpenClaw and even better and 100% safely, some people will still want to run local models on general purpose computing environments. The only way to contain the runaway explosion now is to come together the same way countries have come together to ban chemical weapons, CFCs (in the Montreal protocol), let the hole in the ozone layer heal, etc. It is still possible...

This is how I feel:

https://www.instagram.com/reels/DIUCiGOTZ8J/

PS: Historically, for the last 15 years, I've been a huge proponent of open source and an opponent of patents. When it comes to existential threats of proliferation, though, I am willing to make an exception on both.

172. joeygu+8p[view] [source] 2026-02-05 04:00:28
>>jakequ+(OP)
Just to add more credence to this thesis. Here’s the knowledge navigator. https://m.youtube.com/watch?v=umJsITGzXd0

It’s a 1987 ad like video showing a professor interacting with what looks like the Dynabook as an essentially AI personal assistant. Apple had this vision a long time ago. I guess they just lost the path somewhere along the way.

◧◩◪◨
182. mrkstu+Hr[view] [source] [discussion] 2026-02-05 04:28:17
>>mrkstu+wr
https://www.youtube.com/watch?v=welKoeoK6zI
◧◩
214. ed_mer+1A[view] [source] [discussion] 2026-02-05 05:58:10
>>ronces+dj
Confirmed BS: https://resellcalendar.com/news/news/mac-mini-shortage-clawd...
◧◩◪
237. SCdF+nM[view] [source] [discussion] 2026-02-05 07:59:09
>>yoyohe+Vd
You need to take every comment about AI and mentally put a little bracketed note beside each one noting technical competence.

AI is basically an software development eternal september: it is by definition allowing a bunch of people who are not competent enough to build software without AI to build it. This is, in many ways, a good thing!

The bad thing is that there are a lot of comments and hype that superficially sound like they are coming from your experienced peers being turned to the light, but are actually from people who are not historically your peers, who are now coming into your spaces with enthusiasm for how they got here.

Like on the topic of this article[0], it would be deranged for Apple (or any company with a registered entity that could be sued) to ship an OpenClaw equivalent. It is, and forever will be[1] a massive footgun that you would not want to be legally responsible for people using safely. Apple especially: a company who proudly cares about your privacy and data safety? Anyone with the kind of technical knowledge you'd expect around HN would know that them moving first on this would be bonkers.

But here we are :-)

[0] OP's article is written by someone who wrote code for a few years nearly 20 years ago.

[1] while LLMs are the underlying technology https://simonwillison.net/tags/lethal-trifecta/

[go to top]