zlacker

[return to "Cloudlflare builds OAuth with Claude and publishes all the prompts"]
1. rienbd+s22[view] [source] 2025-06-03 06:30:13
>>gregor+(OP)
The commits are revealing.

Look at this one:

> Ask Claude to remove the "backup" encryption key. Clearly it is still important to security-review Claude's code!

> prompt: I noticed you are storing a "backup" of the encryption key as `encryptionKeyJwk`. Doesn't this backup defeat the end-to-end encryption, because the key is available in the grant record without needing any token to unwrap it?

I don’t think a non-expert would even know what this means, let alone spot the issue and direct the model to fix it.
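
To make the concern concrete, here is a rough sketch of the pattern being flagged (hypothetical field names throughout, except `encryptionKeyJwk`, which comes from the commit): the grant payload is encrypted with a per-grant key that should only be recoverable by unwrapping it with the token, but a plaintext copy of that same key sits in the record, so the token is no longer needed to decrypt.

    // Hypothetical TypeScript sketch of the flaw, not Cloudflare's actual code.
    // The grant payload is encrypted with a per-grant key; that key is meant to be
    // recoverable only by unwrapping `wrappedKey` with material derived from the token.
    interface GrantRecord {
      encryptedPayload: ArrayBuffer;  // AES-GCM ciphertext of the grant data
      iv: Uint8Array;                 // nonce used for the payload
      wrappedKey: ArrayBuffer;        // per-grant key, wrapped with a token-derived key
      encryptionKeyJwk?: JsonWebKey;  // "backup" copy of the same key, stored in the clear
    }

    // With the backup present, anyone who can read the record can skip the token entirely.
    async function decryptWithoutToken(record: GrantRecord): Promise<ArrayBuffer> {
      if (!record.encryptionKeyJwk) throw new Error("no backup key; the token is required");
      const key = await crypto.subtle.importKey(
        "jwk", record.encryptionKeyJwk, { name: "AES-GCM" }, false, ["decrypt"]);
      return crypto.subtle.decrypt(
        { name: "AES-GCM", iv: record.iv }, key, record.encryptedPayload);
    }

Removing the plaintext copy is what restores the end-to-end property the prompt asks about: the stored record alone is useless, and the key can only be recovered by someone holding a valid token.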

2. victor+Ng2[view] [source] 2025-06-03 08:58:34
>>rienbd+s22
That is how LLMs should be used today: an expert prompts the model and reviews the code. It still saves a lot of time versus typing everything from scratch. Just the other day I was working on a prototype and let Claude write the code for an auth flow. Everything was good until the last step, where it just sent the user ID as a plain string alongside the valid token, so anyone with a valid token could pass in any user ID and become that user. Even so, it saved me a lot of time compared to doing it from scratch.
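
For anyone who hasn't run into this class of bug, here is a minimal sketch of the shape it usually takes (all names hypothetical, not the actual prototype code): the handler checks that a token is valid but then trusts a caller-supplied user ID, when the identity should be read from the verified token itself.

    // Hypothetical sketch of the bug described above; verifyToken() stands in for
    // whatever JWT/session validation the flow actually used.
    interface Claims { sub: string }  // "sub" = ID of the user the token was issued to

    declare function verifyToken(token: string): Claims | null;
    declare function loadProfile(userId: string): unknown;

    // Flawed: any caller with *a* valid token can act as *any* user they name.
    function getProfileInsecure(token: string, requestedUserId: string) {
      if (!verifyToken(token)) throw new Error("invalid token");
      return loadProfile(requestedUserId);  // identity taken from the request, not the token
    }

    // Fixed: the user ID comes out of the verified token's claims.
    function getProfileSecure(token: string) {
      const claims = verifyToken(token);
      if (!claims) throw new Error("invalid token");
      return loadProfile(claims.sub);
    }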
3. dismal+rl3[view] [source] 2025-06-03 16:51:14
>>victor+Ng2
> Still saves a lot of time vs typing everything from scratch

Probably very language specific. I use a lot of Ruby; it's so terse that the typing takes no time. Instead I get to spend 95% of my time pondering my problems (or prompting the LLM)...

4. deepsu+Pm3[view] [source] 2025-06-03 16:58:29
>>dismal+rl3
With a proper IDE you don't type much even in Java/.NET; it's mostly autocomplete anyway. The "too verbose" complaints come mainly from Notepad lovers and from people who never had to read somebody else's code.