zlacker

[parent] [thread] 7 comments
1. jackfr+(OP)[view] [source] 2026-01-13 18:17:55
The proxy pattern here is clever - essentially treating the LLM context window as an untrusted execution environment and doing credential injection at a layer it can't touch.

One thing I've noticed building with Claude Code is that it's pretty aggressive about reading .env files and config when it has access. The proxy approach sidesteps that entirely since there's nothing sensitive to find in the first place.

Wonder if the Anthropic team has considered building something like this into the sandbox itself - a secrets store that the model can "use" but never "read".
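
Roughly the shape I'm imagining, sketched as a localhost forwarder running outside the sandbox (upstream URL, env var name, and port are all made up): the agent talks to 127.0.0.1 and never sees the key.

    # Hypothetical injection layer: the sandboxed agent calls
    # http://127.0.0.1:8080/..., and this process, which runs outside the
    # sandbox, forwards the request upstream with the credential attached.
    import os
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    UPSTREAM = "https://api.example.com"        # assumed upstream API
    API_KEY = os.environ["UPSTREAM_API_KEY"]    # only ever set in this process

    class InjectingProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            req = urllib.request.Request(UPSTREAM + self.path)
            req.add_header("Authorization", f"Bearer {API_KEY}")
            with urllib.request.urlopen(req) as resp:
                body, status = resp.read(), resp.status
            self.send_response(status)
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("127.0.0.1", 8080), InjectingProxy).serve_forever()

The agent can call the API all day through that, but there's nothing in its filesystem or environment worth exfiltrating.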

replies(6): >>iterat+GWh >>Joshua+mei >>mike-c+v5j >>ironbo+C5j >>ipytho+Auj >>edstar+pRj
2. iterat+GWh[view] [source] 2026-01-19 01:56:57
>>jackfr+(OP)
It could even hash individual keys and scan the context locally before sending it, to check whether it accidentally contains any of them.
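
Rough sketch of what that check could look like (hypothetical; assumes the scanner holds only SHA-256 digests of the keys, and that keys never contain whitespace):

    import hashlib

    # Only digests of the real keys live here; the plaintext never does.
    SECRET_HASHES = {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # placeholder digest
    }

    def leaks_secret(outgoing_context: str) -> bool:
        """True if any whitespace-delimited token in the outgoing context
        hashes to a known key."""
        return any(
            hashlib.sha256(token.encode()).hexdigest() in SECRET_HASHES
            for token in outgoing_context.split()
        )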
3. Joshua+mei[view] [source] 2026-01-19 05:22:55
>>jackfr+(OP)
That's how they did "build an AI app" back when the claude.ai coding tool was JavaScript running in a web worker on the client machine.
4. mike-c+v5j[view] [source] 2026-01-19 13:14:21
>>jackfr+(OP)
> a secrets store that the model can "use" but never "read".

How would that work? If the AI can use it, it can read it. E.g.:

    secret-store "foo" > file
    cat file
You'd have to be very specific about how the secret can be used for the AI not to be able to figure out what it is. When the secret is for accessing a website, for example, you could provide an HTTP proxy in the sandbox that injects the secret as an HTTP header, and tell the AI to use that proxy. But you'd also have to scope down which URLs the proxy can access with that secret, otherwise it could just visit a page like this to read back the headers that were sent:

https://www.whatismybrowser.com/detect/what-http-headers-is-...

Basically, for every "use" of a secret, you'd have to write a dedicated application which performs that task in a secure manner. It's not just a case of adding a special secret store.
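
To make the URL scoping concrete, something like this (hypothetical names, made-up scope):

    from urllib.parse import urlparse

    # Hypothetical per-secret allowlist: the proxy attaches a secret only to
    # requests whose URL starts with one of its allowed prefixes.
    ALLOWED_PREFIXES = {
        "github_token": ("https://api.github.com/repos/myorg/",),
    }

    def may_inject(secret_name: str, target_url: str) -> bool:
        if urlparse(target_url).scheme != "https":
            return False                       # never attach secrets over plaintext
        return target_url.startswith(ALLOWED_PREFIXES.get(secret_name, ()))

Requests that fail the check get forwarded without the header (or refused outright), so the "echo my headers back" trick above returns nothing useful.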

replies(1): >>ashwin+0Ao
5. ironbo+C5j[view] [source] 2026-01-19 13:15:13
>>jackfr+(OP)
Sounds like an attacker could hack Anthropic and get access to a bunch of companies via the credentials Claude Code ingested?
6. ipytho+Auj[view] [source] 2026-01-19 15:41:01
>>jackfr+(OP)
I guess I don't understand why anyone thinks giving an LLM access to credentials is a good idea in the first place? For several years now it's been demonstrated best practice to keep authentication/authorization separate from the LLM's context window and anything it can influence.

We spent the last 50 years of computer security getting to a point where we keep sensitive credentials out of the hands of humans. I guess now we have to take the next 50 years to learn the lesson that we should keep those same credentials out of the hands of LLMs as well?

I'll be sitting on the sideline eating popcorn in that case.

7. edstar+pRj[view] [source] 2026-01-19 17:13:24
>>jackfr+(OP)
While sandboxing is definitely more secure... Why not put a global deny on .env-like filename patterns as a first measure?
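
If I'm remembering the settings syntax right (worth double-checking against the current docs), something like this in .claude/settings.json would do it:

    {
      "permissions": {
        "deny": [
          "Read(./.env)",
          "Read(./.env.*)",
          "Read(./**/.env)",
          "Read(./secrets/**)"
        ]
      }
    }

As I understand it that only governs the Read tool, not an arbitrary `cat` through Bash, which is why sandboxing is still the stronger measure.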
8. ashwin+0Ao[view] [source] [discussion] 2026-01-21 00:40:10
>>mike-c+v5j
This seems like an underrated comment. You're right: this is a vulnerability, and the blog post doesn't talk about it.