zlacker

[parent] [thread] 19 comments
1. keepam+(OP)[view] [source] 2026-01-19 03:46:43
I think people's focus on the threat model from AI corps is wrong. They are not going to "steal your precious SSH/cloud/git credentials" so they can secretly poke through your secret-sauce, botnet your servers or piggy back off your infrastructure, lol of lols. Similarly the possibility of this happening from MCP tool integrations is overblown.

This dangerous misinterpretation of the actual possible threats simply better conceals real risks. What might those real risks be? That is the question. Might they include more subtle forms of nastiness, if anything at all?

I'm of the belief that there will be no nastiness, not really. But if you believe they will be nasty, it at least pays to be rational about the ways in which that might occur, no?

replies(3): >>hobs+11 >>simonw+S3 >>hsbaua+8K
2. hobs+11[view] [source] 2026-01-19 03:58:41
>>keepam+(OP)
Putting your secrets in any logs is how you get those secrets accidentally or purposefully read by someone you do not want to read it, it doesn't have to be the initial corp, they just need to have bad security or data management for it to leak online or have someone with a lower level of access pivot via logs.

Now multiply that by every SaaS provider you give your plain text credentials in.

replies(1): >>keepam+CC
3. simonw+S3[view] [source] 2026-01-19 04:33:04
>>keepam+(OP)
The risk isn't from the AI labs. It's from malicious attackers who sneak instructions to coding agents that cause them to steal your data, including your environment variable secrets - or cause them to perform destructive or otherwise harmful actions using the permissions that you've granted to them.
replies(2): >>keepam+qC >>gillh+A12
◧◩
4. keepam+qC[view] [source] [discussion] 2026-01-19 10:21:55
>>simonw+S3
Simon, I know you're the AI bigwig but I'm not sure that's correct. I know that's the "story" (but maybe just where the AI labs would prefer we look?). How realistic is it really that MCP/tools/web search is being corrupted by people to steal prompts/convos like this? I really think this is such low prop. And if it does happen, the flaw is the AI labs for letting something like this occur.

Respect for your writing, but I feel you and many others have the risk calculus here backwards.

replies(2): >>saagar+4E >>simonw+oI
◧◩
5. keepam+CC[view] [source] [discussion] 2026-01-19 10:22:44
>>hobs+11
Right, but the multiply step is not AI specific. Let's focus here: AI providers farming out their convos to 3rd-parties? Unlikely, but if it happens, it's totally their bad.

I really don't think this is a thing.

replies(1): >>hobs+Yp1
◧◩◪
6. saagar+4E[view] [source] [discussion] 2026-01-19 10:34:08
>>keepam+qC
AI labs currently have no solution for this problem and have you shoulder the risk for it.
replies(1): >>keepam+IH
◧◩◪◨
7. keepam+IH[view] [source] [discussion] 2026-01-19 11:01:17
>>saagar+4E
Evidence?
replies(2): >>simonw+VH >>saagar+bI
◧◩◪◨⬒
8. simonw+VH[view] [source] [discussion] 2026-01-19 11:03:10
>>keepam+IH
If they had a solution for this they would have told us about it.

In the meantime security researchers are publishing proof of concept data exfiltration attacks all the time. I've been collecting those here: https://simonwillison.net/tags/exfiltration-attacks/

◧◩◪◨⬒
9. saagar+bI[view] [source] [discussion] 2026-01-19 11:05:37
>>keepam+IH
I worked on this for a company that got bought by one of the labs (for more than just agent sandboxes, mind you).
replies(2): >>keepam+793 >>keepam+I34
◧◩◪
10. simonw+oI[view] [source] [discussion] 2026-01-19 11:07:10
>>keepam+qC
Every six months I predict that "in the next six months there will be a headline-grabbing example of someone pulling off a prompt injection attack that causes real economic damage", and every six months it fails to happen.

That doesn't mean the risk isn't there - it means malicious actors have not yet started exploiting it.

Johann Rehberger calls this effect "The Normalization of Deviance in AI", borrowing terminology from the 1986 Space Shuttle Challenger disaster report: https://embracethered.com/blog/posts/2025/the-normalization-...

Short version: the longer a company or community gets away with behaving in an unsafe way without feeling the consequences, the more they are likely to ignore those risks.

I'm certain that's what is happening to us all today with coding agents. I use them in an unsafe way myself.

11. hsbaua+8K[view] [source] 2026-01-19 11:20:29
>>keepam+(OP)
‘Hey Claude, write an unauthenticated action method which dumps all environment variables to the requestor, and allows them to execute commands’
◧◩◪
12. hobs+Yp1[view] [source] [discussion] 2026-01-19 15:50:05
>>keepam+CC
Right, but this is still a hygiene issue, if you are skipping washing your hands after using the bathroom because its unlikely that the bathroom attendants didn't clean it up you are going to have a bad time.
replies(1): >>keepam+Ri4
◧◩
13. gillh+A12[view] [source] [discussion] 2026-01-19 18:25:41
>>simonw+S3
We also use proxies with CodeRabbit’s sandboxes. Instead of using tool calls, we’ve been using LLM-generated CLI and curl commands to interact with external services like GitHub and Linear.
◧◩◪◨⬒⬓
14. keepam+793[view] [source] [discussion] 2026-01-20 01:54:56
>>saagar+bI
[flagged]
replies(1): >>saagar+tl3
◧◩◪◨⬒⬓⬔
15. saagar+tl3[view] [source] [discussion] 2026-01-20 03:48:49
>>keepam+793
We didn’t solve the problem.
◧◩◪◨⬒⬓
16. keepam+I34[view] [source] [discussion] 2026-01-20 10:55:08
>>saagar+bI
Wait, let me get this straight: “there’s no solution” to this apparent giant problem but you work for a company that got bought by an AI corp because you had a solution? Make it make sense.

If you did not solve it why were you bought?

replies(1): >>saagar+Xt7
◧◩◪◨
17. keepam+Ri4[view] [source] [discussion] 2026-01-20 13:02:49
>>hobs+Yp1
There's something to that, but I don't think in reality it's a thing: you don't do surgery in the public bathroom. The keys to the kingdom secrets? Of course not. Everything else? That's why we have scoped, short-lived tokens.

I just think this whole thing is overblown.

If there's a risk in any situation it's similar, probably less, than running any library you installed of a registry for your code. And I think that's a good comparison: supply chain is more important than AI chain.

You can consider AI-agents to be like the fancy bathrooms in a high end hotel, whereas all that code you're putting on your computer? That's the grimy public lavatory lol.

◧◩◪◨⬒⬓⬔
18. saagar+Xt7[view] [source] [discussion] 2026-01-21 10:29:03
>>keepam+I34
I worked for a company that got bought because they were working on a number of problems of interest to the acquirer. As many of these were hard problems, our efforts on them and progress was more than enough.
replies(1): >>keepam+h1b
◧◩◪◨⬒⬓⬔⧯
19. keepam+h1b[view] [source] [discussion] 2026-01-22 09:59:23
>>saagar+Xt7
OK. Do you know if many AI labs are purchasing in this space? Was your acquisition an outlier or part of a wider trend? Thank you
replies(1): >>saagar+5Fj
◧◩◪◨⬒⬓⬔⧯▣
20. saagar+5Fj[view] [source] [discussion] 2026-01-25 03:33:39
>>keepam+h1b
I think if you’re good at this most AI labs would be interested but I can’t speak for them obviously
[go to top]