zlacker

[return to "Cloudflare builds OAuth with Claude and publishes all the prompts"]
1. rienbd+s22 2025-06-03 06:30:13
>>gregor+(OP)
The commits are revealing.

Look at this one:

> Ask Claude to remove the "backup" encryption key. Clearly it is still important to security-review Claude's code!

> prompt: I noticed you are storing a "backup" of the encryption key as `encryptionKeyJwk`. Doesn't this backup defeat the end-to-end encryption, because the key is available in the grant record without needing any token to unwrap it?

I don’t think a non-expert would even know what this means, let alone spot the issue and direct the model to fix it.
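To make the issue concrete: a toy sketch (not Cloudflare's actual code; `wrap_key` and the `grant` record are illustrative) of why keeping a plaintext copy of the key next to the wrapped one defeats the end-to-end property. The `encryptionKeyJwk` name is the one from the commit; everything else is assumed.

```python
# Toy illustration of the flaw Claude was asked to fix: the grant record
# stores both a token-wrapped key AND a plaintext copy (encryptionKeyJwk),
# so anyone who can read the record can recover the key without a token.
import os
from hashlib import sha256

def wrap_key(key: bytes, token: bytes) -> bytes:
    # Toy "wrap": XOR with a token-derived pad. Real systems would use an
    # authenticated KDF + AEAD; this just shows the token-gating idea.
    pad = sha256(token).digest()
    return bytes(a ^ b for a, b in zip(key, pad))

token = os.urandom(16)   # held by the client, not stored server-side
key = os.urandom(32)     # the encryption key being protected

grant = {
    "wrappedKey": wrap_key(key, token),  # safe: needs the token to unwrap
    "encryptionKeyJwk": key,             # the "backup": a plaintext copy
}

# An attacker with only database access (no token) reads the key directly:
stolen = grant["encryptionKeyJwk"]
assert stolen == key  # end-to-end encryption defeated
```

Removing the `encryptionKeyJwk` field forces every reader of the grant to present the token, which is exactly what the prompt asked for.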

2. victor+Ng2 2025-06-03 08:58:34
>>rienbd+s22
That is how LLMs should be used today: an expert prompts it and checks the code. It still saves a lot of time vs. typing everything from scratch. Just the other day I was working on a prototype and let Claude write the code for an auth flow. Everything was good until the last step, where it sent the user id as a plain string alongside the valid token, so anyone holding a valid token could pass in any user id and become that user. It still saved me a lot of time vs. doing it from scratch.
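The bug described above follows a common pattern: validate the token, then trust a client-supplied identity anyway. A minimal sketch (names like `SESSIONS` and `get_profile_*` are hypothetical, not from any real codebase):

```python
# Sketch of the auth-flow bug: the token is checked, but the user id comes
# from the client, so any valid token can impersonate any user.

SESSIONS = {"tok-abc": "alice"}  # token -> the user it was issued to

def get_profile_broken(token: str, user_id: str) -> str:
    if token not in SESSIONS:           # token is valid...
        raise PermissionError("bad token")
    return f"profile of {user_id}"      # ...but user_id is attacker-chosen

def get_profile_fixed(token: str) -> str:
    user_id = SESSIONS.get(token)       # derive identity from the token itself
    if user_id is None:
        raise PermissionError("bad token")
    return f"profile of {user_id}"

# With a valid token, the broken version lets the caller become anyone:
print(get_profile_broken("tok-abc", "bob"))  # → profile of bob
print(get_profile_fixed("tok-abc"))          # → profile of alice
```

The fix is always to resolve the principal from the credential server-side and never accept it as a separate parameter.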
3. otabde+uj2 2025-06-03 09:30:10
>>victor+Ng2
> Still saves a lot of time vs typing everything from scratch

No it doesn't. Typing speed is never the bottleneck for an expert.

As an offline database of Google-tier knowledge, LLMs are useful. But current LLM tech is half-baked; we still need:

a) Cheap commodity hardware for running your own models locally. (And by "locally" I mean separate dedicated devices, not something that fights over your desktop's or laptop's resources.)

b) Standard bulletproof ways to fine-tune models on your own data. (Inference is already there mostly with things like llama.cpp, finetuning isn't.)

4. brails+jv2 2025-06-03 11:26:45
>>otabde+uj2
> No it doesn't. Typing speed is never the bottleneck for an expert

How could that possibly be true!? It seems like suggesting that being constrained to analog writing utensils wouldn't bottleneck the process of publishing a book or research paper. At the very least, such a statement implies that people with ADHD can't be experts.

5. otabde+WD2 2025-06-03 12:37:37
>>brails+jv2
> How could that possibly be true!?

(I'll assume you're not joking, because your post is ridiculous enough to look like sarcasm.)

The answer is because programmers read code 10 times more (and think about code 100 times more) than they write it.

6. brails+hA3 2025-06-03 18:16:22
>>otabde+WD2
I wasn't joking; it's a bottleneck sometimes, that's it. It's a bottleneck the way comfort or any good tool is, the way a slow computer is. It's silly to suggest that your ability to rapidly use a fundamental tool is never a bottleneck, no matter what else comes into play during the course of your day.

My ability to review and understand the intent behind code isn't the primary bottleneck to my efficiently reviewing code when it's requested of me; the primary bottleneck is being notified at the right time that I have a request waiting for review.

If compilers were never a bottleneck, why would we ever try to make them faster? If build tools were never a bottleneck, why would we ever optimize those? These are all just some of the things that can stand between the identification of a problem and producing a solution for it.
