zlacker

[return to "Cloudflare builds OAuth with Claude and publishes all the prompts"]
1. rienbd+s22[view] [source] 2025-06-03 06:30:13
>>gregor+(OP)
The commits are revealing.

Look at this one:

> Ask Claude to remove the "backup" encryption key. Clearly it is still important to security-review Claude's code!

> prompt: I noticed you are storing a "backup" of the encryption key as `encryptionKeyJwk`. Doesn't this backup defeat the end-to-end encryption, because the key is available in the grant record without needing any token to unwrap it?

I don’t think a non-expert would even know what this means, let alone spot the issue and direct the model to fix it.
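To illustrate the issue being flagged, here is a minimal sketch with hypothetical record shapes (not Cloudflare's actual schema): if the grant record carries the key as a plaintext JWK alongside the token-wrapped copy, anyone who can read the record can decrypt it without ever holding a token.

```typescript
// Hypothetical grant record. Field names are illustrative, not
// Cloudflare's real schema; `encryptionKeyJwk` is the "backup" copy
// the prompt asked Claude to remove.
type GrantRecord = {
  encryptedData: string;     // ciphertext of the grant payload
  wrappedKey: string;        // key encrypted so only a token holder can unwrap it
  encryptionKeyJwk?: string; // plaintext "backup" of the same key
};

// If the key sits in the record in the clear, no token is needed to
// decrypt -- the end-to-end property is gone.
function canDecryptWithoutToken(grant: GrantRecord): boolean {
  return grant.encryptionKeyJwk !== undefined;
}
```

The point of the wrapped-key design is that the storage layer alone never suffices to read a grant; the plaintext backup silently collapses that guarantee.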

◧◩
2. victor+Ng2[view] [source] 2025-06-03 08:58:34
>>rienbd+s22
That's how LLMs should be used today: an expert prompts it and checks the code. It still saves a lot of time vs typing everything from scratch. Just the other day I was working on a prototype and let Claude write the code for an auth flow. Everything was good until the last step, where it was just sending the user id as a plain string alongside the valid token. So if you had a valid token, you could pass in any user id and become that user. Still saved me a lot of time vs doing it from scratch.
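A minimal sketch of the bug described above (hypothetical names and session store, not the commenter's actual code): the flawed handler accepts a client-supplied user id as long as the token is valid, while the fix derives the identity from the token itself.

```typescript
// Hypothetical token -> user mapping standing in for a real session store.
const sessions = new Map<string, string>([["token-abc", "alice"]]);

// Flawed: the token is only checked for validity; the identity comes
// from the request, so any valid token can impersonate any user.
function whoAmIFlawed(token: string, claimedUserId: string): string | null {
  return sessions.has(token) ? claimedUserId : null;
}

// Fixed: the identity is derived from the token; the client never
// gets to assert who it is.
function whoAmIFixed(token: string): string | null {
  return sessions.get(token) ?? null;
}
```

With the flawed version, `whoAmIFlawed("token-abc", "bob")` returns `"bob"` even though the token belongs to alice; the fixed version ignores any claimed id entirely.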
◧◩◪
3. signa1+az2[view] [source] 2025-06-03 11:59:44
>>victor+Ng2
> ... Still saves a lot of time vs typing everything from scratch ...

How? The prompts still have to be typed, right? And then the output examined in earnest.

◧◩◪◨
4. fastba+dC2[view] [source] 2025-06-03 12:25:35
>>signa1+az2
A prompt can be as little as a sentence to write hundreds of lines of code.
◧◩◪◨⬒
5. shaky-+bh3[view] [source] 2025-06-03 16:26:06
>>fastba+dC2
Hundreds of lines that you have to carefully read and understand.
◧◩◪◨⬒⬓
6. ImPost+7J3[view] [source] 2025-06-03 19:09:11
>>shaky-+bh3
You also have to do that with code you write without LLM assistance.