zlacker

[parent] [thread] 11 comments
1. keepam+(OP)[view] [source] 2026-01-19 10:21:55
Simon, I know you're the AI bigwig but I'm not sure that's correct. I know that's the "story" (but maybe just where the AI labs would prefer we look?). How realistic is it really that MCP/tools/web search is being corrupted by people to steal prompts/convos like this? I really think this is such low prop. And if it does happen, the flaw is the AI labs for letting something like this occur.

Respect for your writing, but I feel you and many others have the risk calculus here backwards.

replies(2): >>saagar+E1 >>simonw+Y5
2. saagar+E1[view] [source] 2026-01-19 10:34:08
>>keepam+(OP)
AI labs currently have no solution for this problem and have you shoulder the risk for it.
replies(1): >>keepam+i5
◧◩
3. keepam+i5[view] [source] [discussion] 2026-01-19 11:01:17
>>saagar+E1
Evidence?
replies(2): >>simonw+v5 >>saagar+L5
◧◩◪
4. simonw+v5[view] [source] [discussion] 2026-01-19 11:03:10
>>keepam+i5
If they had a solution for this they would have told us about it.

In the meantime security researchers are publishing proof of concept data exfiltration attacks all the time. I've been collecting those here: https://simonwillison.net/tags/exfiltration-attacks/

◧◩◪
5. saagar+L5[view] [source] [discussion] 2026-01-19 11:05:37
>>keepam+i5
I worked on this for a company that got bought by one of the labs (for more than just agent sandboxes, mind you).
replies(2): >>keepam+Hw2 >>keepam+ir3
6. simonw+Y5[view] [source] 2026-01-19 11:07:10
>>keepam+(OP)
Every six months I predict that "in the next six months there will be a headline-grabbing example of someone pulling off a prompt injection attack that causes real economic damage", and every six months it fails to happen.

That doesn't mean the risk isn't there - it means malicious actors have not yet started exploiting it.

Johann Rehberger calls this effect "The Normalization of Deviance in AI", borrowing terminology from the 1986 Space Shuttle Challenger disaster report: https://embracethered.com/blog/posts/2025/the-normalization-...

Short version: the longer a company or community gets away with behaving in an unsafe way without feeling the consequences, the more they are likely to ignore those risks.

I'm certain that's what is happening to us all today with coding agents. I use them in an unsafe way myself.

◧◩◪◨
7. keepam+Hw2[view] [source] [discussion] 2026-01-20 01:54:56
>>saagar+L5
[flagged]
replies(1): >>saagar+3J2
◧◩◪◨⬒
8. saagar+3J2[view] [source] [discussion] 2026-01-20 03:48:49
>>keepam+Hw2
We didn’t solve the problem.
◧◩◪◨
9. keepam+ir3[view] [source] [discussion] 2026-01-20 10:55:08
>>saagar+L5
Wait, let me get this straight: “there’s no solution” to this apparent giant problem but you work for a company that got bought by an AI corp because you had a solution? Make it make sense.

If you did not solve it why were you bought?

replies(1): >>saagar+xR6
◧◩◪◨⬒
10. saagar+xR6[view] [source] [discussion] 2026-01-21 10:29:03
>>keepam+ir3
I worked for a company that got bought because they were working on a number of problems of interest to the acquirer. As many of these were hard problems, our efforts on them and progress was more than enough.
replies(1): >>keepam+Roa
◧◩◪◨⬒⬓
11. keepam+Roa[view] [source] [discussion] 2026-01-22 09:59:23
>>saagar+xR6
OK. Do you know if many AI labs are purchasing in this space? Was your acquisition an outlier or part of a wider trend? Thank you
replies(1): >>saagar+F2j
◧◩◪◨⬒⬓⬔
12. saagar+F2j[view] [source] [discussion] 2026-01-25 03:33:39
>>keepam+Roa
I think if you’re good at this most AI labs would be interested but I can’t speak for them obviously
[go to top]