Look at this one:
> Ask Claude to remove the "backup" encryption key. Clearly it is still important to security-review Claude's code!
> prompt: I noticed you are storing a "backup" of the encryption key as `encryptionKeyJwk`. Doesn't this backup defeat the end-to-end encryption, because the key is available in the grant record without needing any token to unwrap it?
I don’t think a non-expert would even know what this means, let alone spot the issue and direct the model to fix it.
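For anyone curious, the flaw is roughly this shape. Here's a minimal Python sketch; everything other than the `encryptionKeyJwk` field name is my own hypothetical reconstruction, not the library's actual code:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# The intended design: the data key is stored only in *wrapped*
# (encrypted) form, so reading the grant record alone isn't enough;
# you still need the token to unwrap the key.
data_key = Fernet.generate_key()
token_key = Fernet.generate_key()  # stand-in for a key derived from the token
wrapped_key = Fernet(token_key).encrypt(data_key)

grant_record = {
    "wrappedKey": wrapped_key,
    # The "backup": a plaintext copy of the key sitting in the record
    # itself. Anyone who can read the record can decrypt the data with
    # no token at all, which is exactly what defeats the E2E property.
    "encryptionKeyJwk": data_key,
}
```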
No it doesn't. Typing speed is never the bottleneck for an expert.
As an offline database of Google-tier knowledge, LLMs are useful. But current LLM tech is half-baked; we still need:
a) Cheap commodity hardware for running your own models locally. (And by "locally" I mean separate dedicated devices, not something that competes with your desktop or laptop for resources.)
b) Standard, bulletproof ways to fine-tune models on your own data. (Inference is mostly there already with things like llama.cpp, as the sketch below shows; fine-tuning isn't.)
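To make the inference point concrete: with the llama-cpp-python bindings it's already roughly this simple. The model path and prompt below are placeholders:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Runs entirely on local hardware from a GGUF file; no server, no API key.
llm = Llama(model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf")

out = llm("Q: What does llama.cpp do? A:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])
```

There is no equivalently standard, turnkey path for fine-tuning on your own data, which is the gap.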
I remember hearing somewhere that humans have a limited daily capacity for making decisions (decision fatigue), and it seems to fit here: if I'm writing the code myself, I have to make several decisions on every line, which is mentally tiring, so I tend to stop and procrastinate frequently.
If an LLM is handling a lot of the details, then I'm just making higher-level decisions, allowing me to make more progress.
Of course this is pure speculation, and theories like this tend to be wrong, but it is at least consistent with how I feel.