zlacker

[return to "Cloudflare builds OAuth with Claude and publishes all the prompts"]
1. rienbd+s22[view] [source] 2025-06-03 06:30:13
>>gregor+(OP)
The commits are revealing.

Look at this one:

> Ask Claude to remove the "backup" encryption key. Clearly it is still important to security-review Claude's code!

> prompt: I noticed you are storing a "backup" of the encryption key as `encryptionKeyJwk`. Doesn't this backup defeat the end-to-end encryption, because the key is available in the grant record without needing any token to unwrap it?

I don’t think a non-expert would even know what this means, let alone spot the issue and direct the model to fix it.
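For anyone who hasn't seen this class of bug before, here is a toy sketch (not Cloudflare's actual code; the record layout and names are made up for illustration) of why storing a plaintext copy of the key next to the ciphertext defeats key wrapping: anyone who can read the stored record can decrypt it directly, no token required.

```python
import os

def xor(data: bytes, key: bytes) -> bytes:
    # Toy "encryption" for illustration only; real code would use AES-GCM.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = os.urandom(16)
grant = {
    "ciphertext": xor(b"secret grant data", key),
    # The flaw: a plaintext "backup" of the key in the same record.
    "encryptionKeyJwk": key,
}

# An attacker with read access to the record decrypts without ever
# unwrapping the key from a token:
plaintext = xor(grant["ciphertext"], grant["encryptionKeyJwk"])
```

The whole point of wrapping is that the stored record alone is useless; the backup field quietly undoes that.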

2. victor+Ng2[view] [source] 2025-06-03 08:58:34
>>rienbd+s22
That is how LLMs should be used today: an expert prompts it and checks the code. Still saves a lot of time vs typing everything from scratch. Just the other day I was working on a prototype and let Claude write code for an auth flow. Everything was good until the last step, where it was just sending the user id as a string alongside the valid token. So if you had any valid token, you could pass in an arbitrary user id and become that user.
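A minimal sketch of the bug described above (hypothetical names, not the actual prototype's code): the broken handler checks that the token is valid but trusts the caller-supplied user id, while the fix derives the identity from the token itself.

```python
# token -> the user it was actually issued to
TOKENS = {"tok-abc": "alice"}

def get_profile_broken(token: str, user_id: str) -> str:
    # BUG: only checks that the token is valid, not whose it is.
    if token not in TOKENS:
        raise PermissionError("invalid token")
    return f"profile of {user_id}"  # any valid token reads any profile

def get_profile_fixed(token: str) -> str:
    # Fix: the user id comes from the token, never from the request body.
    user_id = TOKENS.get(token)
    if user_id is None:
        raise PermissionError("invalid token")
    return f"profile of {user_id}"
```

With the broken version, Alice's token happily returns Bob's profile; with the fixed one, the token can only ever act as Alice.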
3. otabde+uj2[view] [source] 2025-06-03 09:30:10
>>victor+Ng2
> Still saves a lot of time vs typing everything from scratch

No it doesn't. Typing speed is never the bottleneck for an expert.

As an offline database of Google-tier knowledge, LLMs are useful. But current LLM tech is half-baked; we still need:

a) Cheap commodity hardware for running your own models locally. (And by "locally" I mean separate dedicated devices, not something that fights over your desktop's or laptop's resources.)

b) Standard, bulletproof ways to fine-tune models on your own data. (Inference is mostly there already with things like llama.cpp; fine-tuning isn't.)

4. boruto+An2[view] [source] 2025-06-03 10:12:54
>>otabde+uj2
I've realized I procrastinate less when using an LLM to write code I know I could write myself.
5. kenton+EP2[view] [source] 2025-06-03 13:42:42
>>boruto+An2
I've noticed this too.

I remember hearing somewhere that humans have a limited capacity in terms of number of decisions made in a day, and it seems to fit here: If I'm writing the code myself, I have to make several decisions on every line of code, and that's mentally tiring, so I tend to stop and procrastinate frequently.

If an LLM is handling a lot of the details, then I'm just making higher-level decisions, allowing me to make more progress.

Of course this is pure speculation, and theories like this tend to be wrong, but it is at least consistent with how I feel.

6. autoex+sn3[view] [source] 2025-06-03 17:01:30
>>kenton+EP2
I have a feeling it's something that helps today but that you might pay for later. When you have to maintain or bug-fix that same code down the line, the fact that you were the one who made all those decisions and thought through the details gives you an advantage. Just having everything structured and named in ways that make the most sense to you seems like it'd be helpful the next time you have to deal with the code.

While it's often a luxury, I'd much rather work on code I wrote than code somebody else wrote.
