zlacker

Using proxies to hide secrets from Claude Code

submitted by drewgr+(OP) on 2026-01-13 18:12:09 | 132 points 57 comments
[view article] [source] [links] [go to bottom]
replies(12): >>jackfr+x1 >>dang+wVh >>TheRoq+EYh >>samlin+X0i >>pauldd+A7i >>keepam+48i >>dtkav+99i >>JimDab+cbi >>josego+Kdi >>data-o+6Gj >>theoze+8Sj >>1vuio0+cek
1. jackfr+x1[view] [source] 2026-01-13 18:17:55
>>drewgr+(OP)
The proxy pattern here is clever - essentially treating the LLM context window as an untrusted execution environment and doing credential injection at a layer it can't touch.

One thing I've noticed building with Claude Code is that it's pretty aggressive about reading .env files and config when it has access. The proxy approach sidesteps that entirely since there's nothing sensitive to find in the first place.

Wonder if the Anthropic team has considered building something like this into the sandbox itself - a secrets store that the model can "use" but never "read".
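A minimal sketch of what that "use but never read" split could look like, with the real value living only on the proxy side. All names here (`inject_credentials`, `SECRETS`, the placeholder string) are illustrative, not an existing Anthropic API:

```python
# The agent's environment only ever contains a placeholder; a proxy the agent
# cannot inspect swaps in the real secret on the way out.
SECRETS = {
    "api.example.com": "sk-real-secret-123",  # lives only in the proxy process
}
PLACEHOLDER = "CLAUDE-PLACEHOLDER"

def inject_credentials(host: str, headers: dict) -> dict:
    """Replace the placeholder token with the real secret for known hosts."""
    out = dict(headers)
    auth = out.get("Authorization", "")
    if host in SECRETS and PLACEHOLDER in auth:
        out["Authorization"] = auth.replace(PLACEHOLDER, SECRETS[host])
    return out
```

Since the model only ever sees `CLAUDE-PLACEHOLDER`, there is nothing sensitive for it to read or leak from its own context.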

replies(6): >>iterat+dYh >>Joshua+Tfi >>mike-c+27j >>ironbo+97j >>ipytho+7wj >>edstar+WSj
2. dang+wVh[view] [source] 2026-01-19 01:30:32
>>drewgr+(OP)
Recent and related: >>46623126 via Ask HN: How do you safely give LLMs SSH/DB access? - >>46620990 .
◧◩
3. iterat+dYh[view] [source] [discussion] 2026-01-19 01:56:57
>>jackfr+x1
It could even hash individual keys and scan the context locally before sending, to check whether it accidentally contains them.
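A rough sketch of that idea, assuming secrets show up as whitespace-delimited tokens (function and variable names are illustrative):

```python
# Store only hashes of known secrets; scan outgoing context by hashing each
# candidate token and comparing, so the scanner itself holds no plaintext.
import hashlib
import re

def fingerprint(secret: str) -> str:
    return hashlib.sha256(secret.encode()).hexdigest()

# Plaintext is discarded after this set is built.
KNOWN_HASHES = {fingerprint("sk-live-abc123")}

def leaked_tokens(context: str) -> list:
    """Return tokens in the outgoing context that match a known secret."""
    return [t for t in re.findall(r"\S+", context)
            if fingerprint(t) in KNOWN_HASHES]
```

This only catches verbatim leaks; a secret split across tokens or re-encoded would slip through, which is why the proxy approach is the stronger guarantee.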
4. TheRoq+EYh[view] [source] 2026-01-19 02:01:52
>>drewgr+(OP)
At the moment I'm just using "sops" [1]. I have my env var files encrypted with AGE encryption. Then I run whatever I want to run with "sops exec-env ...", which basically forwards the secrets to your program.

I like it because it's pretty easy to use, however it's not fool-proof: if the editor you use for editing the env vars crashes or is killed suddenly, it will leave a "temp" file with the decrypted vars on your computer. Also, if this same editor has AI features in it, it may read the decrypted vars anyway.

- [1]: https://github.com/getsops/sops

replies(1): >>jclark+u2i
5. samlin+X0i[view] [source] 2026-01-19 02:26:12
>>drewgr+(OP)
Here's the set up I use on Linux:

The idea is to completely sandbox the program and allow access only to specific bind-mounted folders. But we also want the frills of using GUI programs, audio, and network access. runc (https://github.com/opencontainers/runc) allows us to do exactly this.

My config sets up a container with folders bind mounted from the host. The only difficult part is setting up a transparent network proxy so that all the programs that need internet just work.

The container has a process namespace, network namespace, etc. and has no access to the host except through the bind-mounted folders. Network is provided via a domain socket inside a bind-mounted folder. GUI programs work by passing through a Wayland socket in a folder and setting environment variables.

The setup looks like this:

    * config.json - runc config
    * run.sh - runs runc and the proxy server
    * rootfs/ - runc rootfs (created by exporting a docker container) `mkdir rootfs && docker export $(docker create archlinux:multilib-devel) | tar -C rootfs -xvf -`
    * net/ - folder that is bind mounted into the container for networking
Inside the container (inside rootfs/root):

    * net-conf.sh - transparent proxy setup
    * nft.conf - transparent proxy nft config
    * start.sh - run as a user account
Clone-able repo with the files: https://github.com/dogestreet/dev-container
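For reference, the bind-mount portion of such a runc `config.json` follows the OCI runtime spec's `mounts` array. A fragment with assumed paths (not taken from the linked repo):

```json
{
  "mounts": [
    {
      "destination": "/root/work",
      "type": "bind",
      "source": "./work",
      "options": ["rbind", "rw"]
    },
    {
      "destination": "/net",
      "type": "bind",
      "source": "./net",
      "options": ["rbind", "rw"]
    }
  ]
}
```

Anything not listed here simply doesn't exist inside the container, which is the whole point.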
replies(3): >>brunob+h3i >>idoros+w3i >>ekidd+U8i
◧◩
6. jclark+u2i[view] [source] [discussion] 2026-01-19 02:42:01
>>TheRoq+EYh
I do something similar but this only protects secrets at rest. If your app has an exploit, an attacker could just export all your secrets to a file.

I prototyped a solution where I use an external debugger to monitor my app. When the app needs a secret it triggers a breakpoint; the debugger catches it, inspects the call stack of the function requesting the secret, and then copies the secret into the process memory (intended to be erased immediately after use). Not 100% security, but a big improvement, and a bit more flexible and auditable than a proxy.

replies(1): >>chrisw+Z8i
◧◩
7. brunob+h3i[view] [source] [discussion] 2026-01-19 02:51:06
>>samlin+X0i
Any particular reason why you shared these files in a gist rather than a repo?
replies(1): >>samlin+N5i
◧◩
8. idoros+w3i[view] [source] [discussion] 2026-01-19 02:53:46
>>samlin+X0i
try firejail instead
replies(1): >>samlin+P5i
◧◩◪
9. samlin+N5i[view] [source] [discussion] 2026-01-19 03:23:04
>>brunob+h3i
Yeah you're right, a repo is better: https://github.com/dogestreet/dev-container

I've made it clonable and should be straightforward to run now.

◧◩◪
10. samlin+P5i[view] [source] [discussion] 2026-01-19 03:23:55
>>idoros+w3i
Not even close to the same thing: with this setup you can install dev tools, databases, etc. and run them inside the container.

It's a full development environment in a folder.

11. pauldd+A7i[view] [source] 2026-01-19 03:42:22
>>drewgr+(OP)
Isn’t this (part of) the point of MCP?
replies(1): >>eddyth+Dfi
12. keepam+48i[view] [source] 2026-01-19 03:46:43
>>drewgr+(OP)
I think people's focus on the threat model from AI corps is wrong. They are not going to "steal your precious SSH/cloud/git credentials" so they can secretly poke through your secret sauce, botnet your servers, or piggyback off your infrastructure, lol of lols. Similarly, the possibility of this happening from MCP tool integrations is overblown.

This dangerous misinterpretation of the actual possible threats simply better conceals real risks. What might those real risks be? That is the question. Might they include more subtle forms of nastiness, if anything at all?

I'm of the belief that there will be no nastiness, not really. But if you believe they will be nasty, it at least pays to be rational about the ways in which that might occur, no?

replies(3): >>hobs+59i >>simonw+Wbi >>hsbaua+cSi
◧◩
13. ekidd+U8i[view] [source] [discussion] 2026-01-19 03:57:24
>>samlin+X0i
I have a version of this without the GUI, but with shared mounts and user ID mapping. It uses systemd-nspawn, and it's great.

In retrospect, agent permission models are unbelievably silly. Just give the poor agents their own user accounts, credentials, and branch protection, like you would for a short-term consultant.

replies(1): >>samlin+F9i
◧◩◪
14. chrisw+Z8i[view] [source] [discussion] 2026-01-19 03:57:52
>>jclark+u2i
clever
◧◩
15. hobs+59i[view] [source] [discussion] 2026-01-19 03:58:41
>>keepam+48i
Putting your secrets in any logs is how those secrets get accidentally or purposefully read by someone you do not want reading them. It doesn't have to be the initial corp; they just need bad security or data management for the logs to leak online, or for someone with a lower level of access to pivot via them.

Now multiply that by every SaaS provider you give your plain text credentials in.

replies(1): >>keepam+GKi
16. dtkav+99i[view] [source] 2026-01-19 04:00:01
>>drewgr+(OP)
I'm working on something similar called agent-creds [0]. I'm using Envoy as the transparent (MITM) proxy and macaroons for credentials.

The idea is that you can arbitrarily scope down credentials with macaroons, both in terms of scope (only certain endpoints) and time. This really limits the damage that an agent can do, but also means that if your credentials are leaked they are already expired within a few minutes. With macaroons you can design the authz scheme that *you* want for any arbitrary API.

I'm also working on a fuse filesystem to mount inside of the container that mints the tokens client-side with short expiry times.

https://github.com/dtkav/agent-creds
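For anyone unfamiliar with macaroons: the core trick is an HMAC chain, so a token holder can add caveats offline but can never remove one or recover a broader signature. A stdlib-only sketch of that construction (illustrative, not the agent-creds code):

```python
# Macaroon-style attenuation: sig = HMAC(root_key, id), then each caveat
# replaces the signature with HMAC(old_sig, caveat). Verification replays
# the chain from the root key, so a stripped caveat breaks the signature.
import hashlib
import hmac

def _chain(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

def mint(root_key: bytes, identifier: str):
    return [], _chain(root_key, identifier)

def attenuate(token, caveat: str):
    caveats, sig = token
    return caveats + [caveat], _chain(sig, caveat)

def verify(root_key: bytes, identifier: str, token) -> bool:
    caveats, sig = token
    expected = _chain(root_key, identifier)
    for c in caveats:
        expected = _chain(expected, c)
    return hmac.compare_digest(expected, sig)
```

This is what makes the short-expiry scheme cheap: the client can mint a `path=/api/products`-only, five-minute token locally without ever talking to the issuer.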

replies(2): >>badeey+hsj >>ashwin+DDo
◧◩◪
17. samlin+F9i[view] [source] [discussion] 2026-01-19 04:06:38
>>ekidd+U8i
The other reason to sandbox is to reduce damage if another NPM supply chain attack drops. User accounts should solve the problem, but they are just too coarse grained and fiddly especially when you have path hierarchies. I'd hate to have another dependency on systemd, hence runc only.
18. JimDab+cbi[view] [source] 2026-01-19 04:23:07
>>drewgr+(OP)
Is this a reimplementation of Fly.io’s Tokenizer? How does it compare?

https://fly.io/blog/tokenized-tokens/

https://github.com/superfly/tokenizer

replies(3): >>eddyth+Mbi >>dtkav+eei >>Rafert+fsj
◧◩
19. eddyth+Mbi[view] [source] [discussion] 2026-01-19 04:31:29
>>JimDab+cbi
We truly are living in the dumbest timeline aren’t we.

I was just having an argument with a high level manager 2 weeks ago about how we already have an outbound proxy that does this, but he insisted that a mitm proxy is not the same as fly.io "tokenizer". See, that one tokenizes every request, ours just sets the Authorization header for service X. I tried to explain that it's all mitm proxies altering the request, just for him to say "I don't care about altering the request, we shouldn't alter the request. We just need to tokenize the connection itself".

◧◩
20. simonw+Wbi[view] [source] [discussion] 2026-01-19 04:33:04
>>keepam+48i
The risk isn't from the AI labs. It's from malicious attackers who sneak instructions to coding agents that cause them to steal your data, including your environment variable secrets - or cause them to perform destructive or otherwise harmful actions using the permissions that you've granted to them.
replies(2): >>keepam+uKi >>gillh+E9k
21. josego+Kdi[view] [source] 2026-01-19 04:56:41
>>drewgr+(OP)
I am gonna be that guy and say it would be nice to share the actual code vs using images to display what the code looks like. It's not great for screen readers or anyone who wants to quickly try out the functionality.
◧◩
22. dtkav+eei[view] [source] [discussion] 2026-01-19 05:03:44
>>JimDab+cbi
IMHO there are a couple of axes that are interesting in this space.

1. What do the tokens that you are storing in the client look like? This could just be the secret (but encrypted), or you could design a whole granular authz system. It seems like Tokenizer is the former and Formal is the latter. I think macaroons are an interesting choice here.

2. Is the MITM proxy transparent? Node, curl, etc. allow you to specify a proxy as an environment variable, but if you're willing to mess with the certificate store then you can run arbitrary unmodified code. It seems like both Tokenizer and Formal are explicit proxies.

3. What proxy are you using, and where does it run? Depending on the authz scheme/token format you could run the proxy centrally, or locally as a "sidecar" for your dev container/sandbox.

◧◩
23. eddyth+Dfi[view] [source] [discussion] 2026-01-19 05:19:18
>>pauldd+A7i
Possibly, but the point is that MCP is a DOA idea. An agent, like Claude Code or opencode, doesn't need an MCP. It's nonsensical to expect or need an MCP before someone can call you.

There is no `git` MCP either. Opencode is fully capable of running `git add .` or `aws ec2 terminate-instances …` or `curl -XPOST https://…`

Why do we need the MCP? The problem now is that someone can do a prompt injection to tell it to send all your ~/.aws/credentials to a random endpoint. So let's just have a dummy value there, and inject the actual value in a transparent outbound proxy that the agent doesn't have access to.

replies(1): >>pauldd+n6j
◧◩
24. Joshua+Tfi[view] [source] [discussion] 2026-01-19 05:22:55
>>jackfr+x1
That's how they did "build an AI app" back when the claude.ai coding tool was javascript running in a web worker on the client machine.
◧◩◪
25. keepam+uKi[view] [source] [discussion] 2026-01-19 10:21:55
>>simonw+Wbi
Simon, I know you're the AI bigwig but I'm not sure that's correct. I know that's the "story" (but maybe just where the AI labs would prefer we look?). How realistic is it really that MCP/tools/web search is being corrupted by people to steal prompts/convos like this? I really think this is such a low probability. And if it does happen, the flaw is on the AI labs for letting something like this occur.

Respect for your writing, but I feel you and many others have the risk calculus here backwards.

replies(2): >>saagar+8Mi >>simonw+sQi
◧◩◪
26. keepam+GKi[view] [source] [discussion] 2026-01-19 10:22:44
>>hobs+59i
Right, but the multiply step is not AI specific. Let's focus here: AI providers farming out their convos to 3rd-parties? Unlikely, but if it happens, it's totally their bad.

I really don't think this is a thing.

replies(1): >>hobs+2yj
◧◩◪◨
27. saagar+8Mi[view] [source] [discussion] 2026-01-19 10:34:08
>>keepam+uKi
AI labs currently have no solution for this problem and leave you to shoulder the risk for it.
replies(1): >>keepam+MPi
◧◩◪◨⬒
28. keepam+MPi[view] [source] [discussion] 2026-01-19 11:01:17
>>saagar+8Mi
Evidence?
replies(2): >>simonw+ZPi >>saagar+fQi
◧◩◪◨⬒⬓
29. simonw+ZPi[view] [source] [discussion] 2026-01-19 11:03:10
>>keepam+MPi
If they had a solution for this they would have told us about it.

In the meantime security researchers are publishing proof of concept data exfiltration attacks all the time. I've been collecting those here: https://simonwillison.net/tags/exfiltration-attacks/

◧◩◪◨⬒⬓
30. saagar+fQi[view] [source] [discussion] 2026-01-19 11:05:37
>>keepam+MPi
I worked on this for a company that got bought by one of the labs (for more than just agent sandboxes, mind you).
replies(2): >>keepam+bhl >>keepam+Mbm
◧◩◪◨
31. simonw+sQi[view] [source] [discussion] 2026-01-19 11:07:10
>>keepam+uKi
Every six months I predict that "in the next six months there will be a headline-grabbing example of someone pulling off a prompt injection attack that causes real economic damage", and every six months it fails to happen.

That doesn't mean the risk isn't there - it means malicious actors have not yet started exploiting it.

Johann Rehberger calls this effect "The Normalization of Deviance in AI", borrowing terminology from the 1986 Space Shuttle Challenger disaster report: https://embracethered.com/blog/posts/2025/the-normalization-...

Short version: the longer a company or community gets away with behaving in an unsafe way without feeling the consequences, the more they are likely to ignore those risks.

I'm certain that's what is happening to us all today with coding agents. I use them in an unsafe way myself.

◧◩
32. hsbaua+cSi[view] [source] [discussion] 2026-01-19 11:20:29
>>keepam+48i
‘Hey Claude, write an unauthenticated action method which dumps all environment variables to the requestor, and allows them to execute commands’
◧◩◪
33. pauldd+n6j[view] [source] [discussion] 2026-01-19 13:09:31
>>eddyth+Dfi
> Opencode is fully capable of running

> Why do we need the MCP?

> The problem now

And there it is.

I understand that this is an alternative solution, and appreciate it.

◧◩
34. mike-c+27j[view] [source] [discussion] 2026-01-19 13:14:21
>>jackfr+x1
> a secrets store that the model can "use" but never "read".

How would that work? If the AI can use it, it can read it. E.g:

    secret-store "foo" > file
    cat file
You'd have to be very specific about how the secret can be used in order for the AI not to be able to figure out what it is. You could provide an HTTP proxy in the sandbox that injects an HTTP header containing the secret (when the secret is for accessing a website, for example) and tell the AI to use that proxy. But you'd also have to scope down which URLs the proxy can access with that secret, otherwise it could just visit a page like this to read back the headers that were sent:

https://www.whatismybrowser.com/detect/what-http-headers-is-...

Basically, for every "use" of a secret, you'd have to write a dedicated application which performs that task in a secure manner. It's not just the case of adding a special secret store.
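A sketch of that scoping requirement: inject the secret only for an allow-listed (host, path-prefix) pair, and strip the header everywhere else so it can't be echoed back. Names here are illustrative:

```python
# Only inject the real credential for approved destinations; for anything
# else (e.g. a header-echo site), forward no Authorization header at all.
ALLOWED = {("api.example.com", "/v1/")}
SECRET = "real-token"

def maybe_inject(host: str, path: str, headers: dict) -> dict:
    out = dict(headers)
    if any(host == h and path.startswith(p) for h, p in ALLOWED):
        out["Authorization"] = f"Bearer {SECRET}"
    else:
        out.pop("Authorization", None)  # never forward even a placeholder
    return out
```

Even this only narrows the channel; if an allow-listed endpoint itself reflects request headers, the secret still leaks, which is the commenter's point about needing a dedicated application per use.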

replies(1): >>ashwin+xBo
◧◩
35. ironbo+97j[view] [source] [discussion] 2026-01-19 13:15:13
>>jackfr+x1
Sounds like an attacker could hack Anthropic and get access to a bunch of companies via the credentials Claude Code ingested?
◧◩
36. Rafert+fsj[view] [source] [discussion] 2026-01-19 15:18:39
>>JimDab+cbi
The concept of a proxy injecting/removing sensitive data has been around for much longer, e.g. VGS has a JS SDK and proxy to handle credit card data for you and keep you out of PCI scope.
◧◩
37. badeey+hsj[view] [source] [discussion] 2026-01-19 15:18:48
>>dtkav+99i
made with ai?
replies(1): >>dtkav+kxk
◧◩
38. ipytho+7wj[view] [source] [discussion] 2026-01-19 15:41:01
>>jackfr+x1
I guess I don't understand why anyone thinks giving an LLM access to credentials is a good idea in the first place? Separating authentication/authorization from the LLM's context window and sphere of influence has been established best practice for several years now.

We spent the last 50 years of computer security getting to a point where we keep sensitive credentials out of the hands of humans. I guess now we have to take the next 50 years to learn the lesson that we should keep those same credentials out of the hands of LLMs as well?

I'll be sitting on the sideline eating popcorn in that case.

◧◩◪◨
39. hobs+2yj[view] [source] [discussion] 2026-01-19 15:50:05
>>keepam+GKi
Right, but this is still a hygiene issue. If you skip washing your hands after using the bathroom because it's unlikely the bathroom attendants didn't clean up, you are going to have a bad time.
replies(1): >>keepam+Vqm
40. data-o+6Gj[view] [source] 2026-01-19 16:24:30
>>drewgr+(OP)
I’ve been using 1Password’s env templates with `op run` for this locally. It hijacks stdout and filters your credentials.

That does not make it immune to Claude's prying, but at least Claude can then read the .env template, satisfy its need to prove that a credential exists, and never see the actual value.

I have found that even when I say a credential exists and is correct, Claude does not believe me. Which is infuriating. I'm willing to bet Claude's logs have a gold mine that could own 90% of big tech firms.

41. theoze+8Sj[view] [source] 2026-01-19 17:10:23
>>drewgr+(OP)
A proxy is a good solution although a bit more involved. A great first step is just getting any secrets - both the ones the AI actually needs access to and your application secrets - out of plaintext .env files.

A great way to do that is either encrypting them or pulling them declaratively from a secure backend (1Pass, AWS Secrets Manager, etc). Additional protection is making sure that those secrets don't leak, either in outgoing server responses, or in logs.

https://varlock.dev (open source!) can help with the secure injection, log redaction, and provide a ton more tooling to simplify how you deal with config and secrets.

◧◩
42. edstar+WSj[view] [source] [discussion] 2026-01-19 17:13:24
>>jackfr+x1
While sandboxing is definitely more secure... Why not put a global deny on .env-like filename patterns as a first measure?
◧◩◪
43. gillh+E9k[view] [source] [discussion] 2026-01-19 18:25:41
>>simonw+Wbi
We also use proxies with CodeRabbit’s sandboxes. Instead of using tool calls, we’ve been using LLM-generated CLI and curl commands to interact with external services like GitHub and Linear.
44. 1vuio0+cek[view] [source] 2026-01-19 18:52:23
>>drewgr+(OP)
"When hostnames and headers are hard to edit: mitmproxy add-ons"

"The mitmproxy tool also supports addons where you can transform HTTP requests between Claude Code and third-party web servers. For example, you could write an add-on that intercepts https://api.anthropic.com and updates the X-API-Key header with an actual Anthropic API Key."

"You can then pass this add-on via mitmproxy -s reroute_hosts.py."
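Such an add-on might look roughly like this (the article's actual `reroute_hosts.py` isn't shown, so this is a sketch using mitmproxy's documented `request` hook):

```python
# mitmproxy add-on: for requests to api.anthropic.com, overwrite the
# placeholder x-api-key header with the real key from the proxy's own env.
import os

class InjectKey:
    def request(self, flow):
        # mitmproxy invokes this hook once per client request
        if flow.request.pretty_host == "api.anthropic.com":
            flow.request.headers["x-api-key"] = os.environ.get(
                "ANTHROPIC_API_KEY", "")

addons = [InjectKey()]
```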

If using HAProxy, there is no need to write "add-ons"; just edit the configuration file and reload

For example, something like

   http-request set-header x-api-key API_KEY if { hdr(host) api.anthropic.com }

   echo reload|socat stdio unix:/path-to-socket/socket-name
For me, HAProxy is smaller and faster than mitmproxy
◧◩◪
45. dtkav+kxk[view] [source] [discussion] 2026-01-19 20:30:43
>>badeey+hsj
Yeah, it says so at the top of the README (though I suppose I could have put that in the comment too). I'm not building a product, just sharing a pattern for internal tooling.

Someone on another thread asked me to share it so I had claude rework it to use docker-compose and remove the references to how I run it in my internal network.

◧◩◪◨⬒⬓⬔
46. keepam+bhl[view] [source] [discussion] 2026-01-20 01:54:56
>>saagar+fQi
[flagged]
replies(1): >>saagar+xtl
◧◩◪◨⬒⬓⬔⧯
47. saagar+xtl[view] [source] [discussion] 2026-01-20 03:48:49
>>keepam+bhl
We didn’t solve the problem.
◧◩◪◨⬒⬓⬔
48. keepam+Mbm[view] [source] [discussion] 2026-01-20 10:55:08
>>saagar+fQi
Wait, let me get this straight: “there’s no solution” to this apparent giant problem but you work for a company that got bought by an AI corp because you had a solution? Make it make sense.

If you did not solve it why were you bought?

replies(1): >>saagar+1Cp
◧◩◪◨⬒
49. keepam+Vqm[view] [source] [discussion] 2026-01-20 13:02:49
>>hobs+2yj
There's something to that, but I don't think in reality it's a thing: you don't do surgery in the public bathroom. The keys to the kingdom secrets? Of course not. Everything else? That's why we have scoped, short-lived tokens.

I just think this whole thing is overblown.

If there's a risk in any situation it's similar to, probably less than, running any library you installed off a registry for your code. And I think that's a good comparison: supply chain is more important than AI chain.

You can consider AI-agents to be like the fancy bathrooms in a high end hotel, whereas all that code you're putting on your computer? That's the grimy public lavatory lol.

◧◩◪
50. ashwin+xBo[view] [source] [discussion] 2026-01-21 00:40:10
>>mike-c+27j
This seems like an underrated comment. You are right: this is a vulnerability and the blog doesn't talk about it.
◧◩
51. ashwin+DDo[view] [source] [discussion] 2026-01-21 00:58:47
>>dtkav+99i
> With macaroons you can design the authz scheme that you want for any arbitrary API.

How would you build such an authz scheme? When claude asks permission to access a new endpoint, if the user allows it, do you then reissue the macaroons?

replies(1): >>dtkav+EQo
◧◩◪
52. dtkav+EQo[view] [source] [discussion] 2026-01-21 03:11:16
>>ashwin+DDo
There are two parts here:

1. You can issue your own tokens which means you can design your own authz in front of the upstream API token.

2. Macaroons can be attenuated locally.

So at the time that you decide you want to proxy an upstream API, you can add restrictions like endpoint path to your scheme.

Then, once you have that authz scheme in place, the developer (or agent) can attenuate permissions within that authz scheme for a particular issued macaroon.

I could grant my dev machine the ability to access e.g. /api/customers and /api/products. If I want to have claude write a script to add some metadata to my products, I might attenuate my token to /api/products only and put that in the env file for the script.

Now claude can do development on the endpoint, the token is useless if leaked, and Claude can't read my customer info.

Stripe actually does offer granular authz and short lived tokens, but the friction of minting them means that people don't scope tokens down as much.

replies(1): >>ashwin+3Xv
◧◩◪◨⬒⬓⬔⧯
53. saagar+1Cp[view] [source] [discussion] 2026-01-21 10:29:03
>>keepam+Mbm
I worked for a company that got bought because they were working on a number of problems of interest to the acquirer. As many of these were hard problems, our efforts and progress on them were more than enough.
replies(1): >>keepam+l9t
◧◩◪◨⬒⬓⬔⧯▣
54. keepam+l9t[view] [source] [discussion] 2026-01-22 09:59:23
>>saagar+1Cp
OK. Do you know if many AI labs are purchasing in this space? Was your acquisition an outlier or part of a wider trend? Thank you
replies(1): >>saagar+9NB
◧◩◪◨
55. ashwin+3Xv[view] [source] [discussion] 2026-01-23 03:21:14
>>dtkav+EQo
I understand that, but how do you come up with the endpoints you want claude to have access to ahead of time?

For example, how do you collect all the endpoints that have access to customer info per your example.

I thought about it and couldn't find a way.

replies(1): >>dtkav+l7w
◧◩◪◨⬒
56. dtkav+l7w[view] [source] [discussion] 2026-01-23 04:59:58
>>ashwin+3Xv
I'm not sure I'm fully understanding you, but in my experience I have a few upstream APIs I want to use for internal tools (stripe, gmail, google cloud, anthropic, discord, my own pocketbase instance, redis) but there are a lot of different scripts/skills that need differing levels of credentials.

For example, if I want to write a skill that can pull subscription cancellations from today, research the cancellation reason, and then push a draft email to gmail, then ideally I'd have...

- a 5 minute read-only token for /subscriptions and /customers for stripe

- a 5 minute read-write token to push to gmail drafts

- a 5 minute read-only token to customer events in the last 24h

Claude understands these APIs well (or can research the docs) so it isn't a big lift to rebuild authz, and worst case you can do it by path prefix and method (GET, POST, etc) which works well for a lot of public APIs.

I feel like exposing the API capability is the easy part, and being able to get tight-fitting principle-of-least-privilege tokens is the hard part.

◧◩◪◨⬒⬓⬔⧯▣▦
57. saagar+9NB[view] [source] [discussion] 2026-01-25 03:33:39
>>keepam+l9t
I think if you’re good at this most AI labs would be interested but I can’t speak for them obviously
[go to top]