zlacker

[return to "Cloudflare builds OAuth with Claude and publishes all the prompts"]
1. rienbd+s22[view] [source] 2025-06-03 06:30:13
>>gregor+(OP)
The commits are revealing.

Look at this one:

> Ask Claude to remove the "backup" encryption key. Clearly it is still important to security-review Claude's code!

> prompt: I noticed you are storing a "backup" of the encryption key as `encryptionKeyJwk`. Doesn't this backup defeat the end-to-end encryption, because the key is available in the grant record without needing any token to unwrap it?

I don’t think a non-expert would even know what this means, let alone spot the issue and direct the model to fix it.
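
For anyone who doesn't, here's a minimal sketch of the bug class being described (not Cloudflare's actual code; the record shape and field names other than `encryptionKeyJwk` are invented):

```typescript
// The intended scheme: the grant's encryption key is wrapped (encrypted)
// using the access token, so the server can only decrypt grant data while
// a client presents a valid token.
interface GrantRecord {
  grantId: string;
  // Key wrapped with material derived from the token -- unreadable
  // without a token to unwrap it.
  wrappedEncryptionKey: ArrayBuffer;
  // BUG: a plaintext "backup" of the same key stored right in the record.
  // Anyone who can read the grant record (e.g. via the database) can
  // decrypt the grant without ever holding a token, defeating the
  // end-to-end property. This is the field Claude was asked to remove.
  encryptionKeyJwk?: JsonWebKey;
}
```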

2. victor+Ng2[view] [source] 2025-06-03 08:58:34
>>rienbd+s22
That is how LLMs should be used today: an expert prompts it and checks the code. It still saves a lot of time versus typing everything from scratch. Just the other day I was working on a prototype and let Claude write the code for an auth flow. Everything was good until the last step, where it simply sent the user id as a string alongside the valid token. So if you had a valid token, you could pass in any user id and become that user. It still saved me a lot of time compared to doing it from scratch.
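
A quick sketch of that bug class (Express-style TypeScript; the routes and helper names are invented for illustration, not my actual prototype):

```typescript
import express, { Request, Response, NextFunction } from "express";
import jwt from "jsonwebtoken";

const SECRET = process.env.JWT_SECRET ?? "dev-only-secret";

// Verifies the bearer token and stashes its claims for the handler.
function requireValidToken(req: Request, res: Response, next: NextFunction) {
  const token = (req.headers.authorization ?? "").replace(/^Bearer /, "");
  try {
    res.locals.claims = jwt.verify(token, SECRET) as { sub: string };
    next();
  } catch {
    res.status(401).end();
  }
}

const app = express();
app.use(express.json());

// BROKEN (the bug described above): the user id travels as a plain string
// next to the token, so anyone holding *any* valid token can impersonate
// any user by swapping the id.
app.post("/profile", requireValidToken, (req, res) => {
  res.json({ profileFor: req.body.userId }); // attacker-controlled
});

// FIXED: the identity comes from the verified token's own claims.
app.post("/profile/me", requireValidToken, (_req, res) => {
  res.json({ profileFor: res.locals.claims.sub });
});

app.listen(3000);
```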
3. otabde+uj2[view] [source] 2025-06-03 09:30:10
>>victor+Ng2
> Still saves a lot of time vs typing everything from scratch

No it doesn't. Typing speed is never the bottleneck for an expert.

As an offline database of Google-tier knowledge, LLMs are useful. Current LLM tech is half-baked, though; we still need:

a) Cheap commodity hardware for running your own models locally. (And by "locally" I mean separate dedicated devices, not something that fights over your desktop's or laptop's resources.)

b) Standard, bulletproof ways to fine-tune models on your own data. (Inference is mostly already there with things like llama.cpp; fine-tuning isn't.)
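
For the inference side, here's a minimal sketch of what "already there" means: hitting llama.cpp's bundled HTTP server from TypeScript, assuming llama-server is already running locally with a model loaded (port and parameters are whatever you configure):

```typescript
// Assumes something like: llama-server -m ./models/model.gguf --port 8080
async function complete(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:8080/completion", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt, n_predict: 128, temperature: 0.2 }),
  });
  const data = (await res.json()) as { content: string };
  return data.content;
}

complete("Summarize what a JWK is in one sentence:").then(console.log);
```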

4. brails+jv2[view] [source] 2025-06-03 11:26:45
>>otabde+uj2
> No it doesn't. Typing speed is never the bottleneck for an expert

How could that possibly be true!? It seems like suggesting that being constrained to analog writing utensils wouldn't bottleneck the process of publishing a book or research paper. At the very least, such a statement implies that people with ADHD can't be experts.

5. thisis+eE2[view] [source] 2025-06-03 12:40:13
>>brails+jv2
Completely agree with you. I was working on the front-end of an application and gave Claude the following prompt: "The endpoint /foo/bar is returning the json below ##json goes here##, show this as cards inside the component FooBaz following the existing design system".

In less than 5 minutes Claude created code that:

- encapsulated the API call
- modeled the API response using TypeScript
- created a reusable, responsive UI component for the card (including a loading state)
- included it in the right part of the page
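
The result was shaped roughly like this (an illustrative sketch, not the actual output; the response fields are invented, since I elided the real JSON above):

```tsx
import { useEffect, useState } from "react";

// Hypothetical shape modeled from the API response.
interface FooBarItem {
  id: string;
  title: string;
  description: string;
}

// Encapsulated API call.
async function fetchFooBar(): Promise<FooBarItem[]> {
  const res = await fetch("/foo/bar");
  if (!res.ok) throw new Error(`GET /foo/bar failed: ${res.status}`);
  return res.json();
}

// Reusable card component with a loading state, rendered inside FooBaz.
export function FooBazCards() {
  const [items, setItems] = useState<FooBarItem[] | null>(null);

  useEffect(() => {
    fetchFooBar().then(setItems).catch(() => setItems([]));
  }, []);

  if (items === null) return <p>Loading…</p>;

  return (
    <div className="card-grid">
      {items.map((item) => (
        <article key={item.id} className="card">
          <h3>{item.title}</h3>
          <p>{item.description}</p>
        </article>
      ))}
    </div>
  );
}
```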

Even if I typed at 200wpm I couldn't produce that much code from such a simple prompt.

I also had similar experiences/gains refactoring back-end code.

That being said, there are cases in which writing the code yourself is faster than writing a detailed enough prompt, BUT those cases are becoming the exception with each new LLM iteration. I noticed that after the jump from Claude 3.7 to Claude 4, my prompts can be far less technical.

6. oblio+Y53[view] [source] 2025-06-03 15:24:57
>>thisis+eE2
The thing is... does your code end there? Would you put that code in production without a deep analysis of what Claude did?
7. s900mh+rD4[view] [source] 2025-06-04 03:33:25
>>oblio+Y53
I’m not who you replied to, but I keep functions small and testable, paired with unit tests covering a healthy mix of happy/sad paths.

Afterwards I make sure the LLM's code passes all the tests before I spend my time reviewing it.

I find this process keeps the iteration count low for the review -> prompt -> review loop.
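
Concretely, it looks something like this (Jest-style; the function under test is a made-up example, not code from a real project):

```typescript
// Kept small and pure so it's easy to verify by eye and by test.
export function parseUserId(input: string): number {
  const id = Number(input);
  if (!Number.isInteger(id) || id <= 0) {
    throw new Error(`invalid user id: ${input}`);
  }
  return id;
}

describe("parseUserId", () => {
  // Happy path.
  it("parses a positive integer id", () => {
    expect(parseUserId("42")).toBe(42);
  });

  // Sad paths -- the LLM's code has to pass these before I review it.
  it("rejects non-numeric input", () => {
    expect(() => parseUserId("abc")).toThrow();
  });

  it("rejects zero and negative ids", () => {
    expect(() => parseUserId("0")).toThrow();
  });
});
```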

I personally love writing code with an LLM. I’m a sloppy typist but love programming, and I find it’s great burnout prevention.

For context: Node.js/React development (a very LLM-friendly stack).
