zlacker

[return to "Cloudlflare builds OAuth with Claude and publishes all the prompts"]
1. rienbd+s22 2025-06-03 06:30:13
>>gregor+(OP)
The commits are revealing.

Look at this one:

> Ask Claude to remove the "backup" encryption key. Clearly it is still important to security-review Claude's code!

> prompt: I noticed you are storing a "backup" of the encryption key as `encryptionKeyJwk`. Doesn't this backup defeat the end-to-end encryption, because the key is available in the grant record without needing any token to unwrap it?

I don’t think a non-expert would even know what this means, let alone spot the issue and direct the model to fix it.
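
For anyone who doesn't follow: the grant record was carrying a plaintext copy of the key right next to the wrapped one, roughly like this (an illustrative TypeScript sketch; apart from `encryptionKeyJwk`, the names here are mine, not Cloudflare's):

    // Rough shape of a grant record (illustrative, not the real code).
    interface GrantRecord {
      grantId: string;
      // Intended design: the key is wrapped so that only the holder of a
      // valid token can unwrap it.
      wrappedEncryptionKey: ArrayBuffer;
      // The flaw: a plaintext "backup" of the same key, readable by anyone
      // who can read the grant record, no token required.
      encryptionKeyJwk?: JsonWebKey;
    }

    // The fix is to drop `encryptionKeyJwk` entirely and always recover the
    // key by unwrapping `wrappedEncryptionKey` with material derived from
    // the presented token.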

2. victor+Ng2 2025-06-03 08:58:34
>>rienbd+s22
That is how LLMs should be used today: an expert prompts the model and reviews the code. Still saves a lot of time vs typing everything from scratch. Just the other day I was working on a prototype and let Claude write the code for an auth flow. Everything was fine until the last step, where it simply sent the user id as a string alongside the valid token, so anyone with a valid token could pass in any user id and become that user. Even so, it saved me a lot of time over doing it from scratch.
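
Roughly the shape of the bug, as a hypothetical TypeScript/Express sketch (none of these names are from my actual code):

    import express from "express";

    // Assumed helpers for the sketch: verifyToken validates the bearer
    // token and returns its claims (or null); db looks up users by id.
    declare function verifyToken(header?: string): { sub: string } | null;
    declare const db: { getUser(id: string): unknown };

    const app = express();

    app.get("/me", (req, res) => {
      const claims = verifyToken(req.headers.authorization);
      if (!claims) return res.sendStatus(401);

      // BUG (what the generated code did): trust a client-supplied id, so
      // any valid token lets you become any user:
      //   res.json(db.getUser(String(req.query.userId)));

      // Fix: derive the identity from the verified token's own claims.
      res.json(db.getUser(claims.sub));
    });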
3. signa1+az2 2025-06-03 11:59:44
>>victor+Ng2
> ... Still saves a lot of time vs typing everything from scratch ...

How? The prompts still have to be typed, right? And then the output examined in earnest.

4. victor+Wm3 2025-06-03 16:59:09
>>signa1+az2
On the latest project I've been working on, the prompts are a few sentences (and technically I dictate them instead of typing), and the LLM generates a few hundred lines of code.