zlacker

[parent] [thread] 105 comments
1. rienbd+(OP)[view] [source] 2025-06-03 06:30:13
The commits are revealing.

Look at this one:

> Ask Claude to remove the "backup" encryption key. Clearly it is still important to security-review Claude's code!

> prompt: I noticed you are storing a "backup" of the encryption key as `encryptionKeyJwk`. Doesn't this backup defeat the end-to-end encryption, because the key is available in the grant record without needing any token to unwrap it?

I don’t think a non-expert would even know what this means, let alone spot the issue and direct the model to fix it.
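For readers unfamiliar with the issue: the problem is that the grant record held both a token-wrapped copy of the key and a plaintext "backup" of it. A minimal TypeScript sketch of the shape of the bug — only `encryptionKeyJwk` comes from the actual commit; every other name here is invented for illustration:

```typescript
// A JWK is just a JSON object describing a key; a loose type keeps this self-contained.
type JsonWebKeyLike = Record<string, string>;

// Hypothetical shape of the grant record described in the commit.
interface GrantRecord {
  grantId: string;
  // Key wrapped with material derived from the token: only a caller
  // presenting a valid token can unwrap it and decrypt the grant data.
  wrappedEncryptionKey: string;
  // The "backup" Claude added: the same key as a plaintext JWK. Anyone
  // who can read the grant record can now decrypt everything without
  // any token, which defeats the end-to-end encryption.
  encryptionKeyJwk?: JsonWebKeyLike;
}

// The reviewer's fix is simply to never persist the unwrapped key.
function stripBackupKey(record: GrantRecord): GrantRecord {
  const { encryptionKeyJwk: _omitted, ...rest } = record;
  return rest;
}
```

The subtlety is exactly why this needed an expert eye: the record still *looks* encrypted, and nothing functional breaks.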

replies(10): >>bootsm+H1 >>throwa+e4 >>octobe+i5 >>i5heu+u5 >>PeterS+y5 >>victor+le >>banana+ew >>kenton+lL >>Action+o11 >>jofzar+tec
2. bootsm+H1[view] [source] 2025-06-03 06:45:58
>>rienbd+(OP)
There is also one quite early in the repo where the dev has to tell Claude to store only the hashes of secrets
3. throwa+e4[view] [source] 2025-06-03 07:14:14
>>rienbd+(OP)
While I think this is a cool (public) experiment with Claude, asking an LLM to write security-sensitive code seems crazy at this point. Ad absurdum: can you imagine asking Claude to implement new functionality in the OpenSSL libs!?
4. octobe+i5[view] [source] 2025-06-03 07:24:38
>>rienbd+(OP)
It's like a Jr Developer whose code you have to check over. To some people that is useful. But you're still going to have to train Jr Developers so they can turn into Sr Developers.
replies(2): >>PeterS+16 >>Cthulh+Nz
5. i5heu+u5[view] [source] 2025-06-03 07:26:36
>>rienbd+(OP)
Revealing against what?

If you look at the README it is completely revealed... so I would argue there is nothing to "reveal" in the first place.

> I started this project on a lark, fully expecting the AI to produce terrible code for me to laugh at. And then, uh... the code actually looked pretty good. Not perfect, but I just told the AI to fix things, and it did. I was shocked.

> To emphasize, this is not "vibe coded". Every line was thoroughly reviewed and cross-referenced with relevant RFCs, by security experts with previous experience with those RFCs.

replies(4): >>risyac+59 >>rienbd+Hd >>JW_000+De >>kortil+tX
6. PeterS+y5[view] [source] 2025-06-03 07:26:57
>>rienbd+(OP)
Which is exactly why AI coding assistants work with your expertise rather than replace it. Most people I see fail at AI assisted development are either non-technical people expecting the AI will solve it all, or technical people playing gotcha with the machine rather than collaborating with it.
◧◩
7. PeterS+16[view] [source] [discussion] 2025-06-03 07:32:27
>>octobe+i5
I don't like the jr dev analogy. It has neither the same weaknesses nor the same strengths.

It's more like the genius coworker who has an overassertive ego and sometimes shows up drunk, but who, if you know how to work with them and see past their flaws, can be a real asset.

replies(1): >>hn_thr+721
◧◩
8. risyac+59[view] [source] [discussion] 2025-06-03 08:02:55
>>i5heu+u5
If the guy already knew how to properly implement OAuth, did he actually save any time by prompting, or did he just prove a point: that if you already know every detail of the implementation, you can guide an LLM to do it?

That's the biggest issue I see. In most cases I don't use an LLM because DIYing it takes less time than prompting/waiting/checking every line.

replies(2): >>JimDab+mb >>theshr+vc
◧◩◪
9. JimDab+mb[view] [source] [discussion] 2025-06-03 08:24:44
>>risyac+59
> did he save any time though

Yes:

> It took me a few days to build the library with AI.

> I estimate it would have taken a few weeks, maybe months to write by hand.

>>44160208

> or just tried to prove a point that if you actually already know all details of impl you can guide llm to do it?

No:

> I was an AI skeptic. I thought LLMs were glorified Markov chain generators that didn't actually understand code and couldn't produce anything novel. I started this project on a lark, fully expecting the AI to produce terrible code for me to laugh at. And then, uh... the code actually looked pretty good. Not perfect, but I just told the AI to fix things, and it did. I was shocked.

https://github.com/cloudflare/workers-oauth-provider/?tab=re...

replies(1): >>autoex+pm1
◧◩◪
10. theshr+vc[view] [source] [discussion] 2025-06-03 08:38:25
>>risyac+59
Do people save time by learning to write code at 420WPM? By optimising their vi(m) layouts and using languages with lots of fancy operators that make things quicker to write?

Using an LLM to write code you already know how to write is just like using intellisense or any other smart autocomplete, but at a larger scale.

◧◩
11. rienbd+Hd[view] [source] [discussion] 2025-06-03 08:52:09
>>i5heu+u5
> Revealing against what?

Revealing of what it is like working with an LLM in this way.

12. victor+le[view] [source] 2025-06-03 08:58:34
>>rienbd+(OP)
That is how LLMs should be used today. An expert prompts it and checks the code. Still saves a lot of time vs typing everything from scratch. Just the other day I was working on a prototype and let Claude write code for an auth flow. Everything was good until the last step, where it was just sending the user id as a string along with the valid token. So if you had a valid token, you could pass in any user id and become that user. Still saved me a lot of time vs doing it from scratch.
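The bug victor describes can be sketched in a few lines of TypeScript (all names here — `verifyToken`, `whoAmI`, the `tok-` token scheme — are invented stand-ins for whatever the real auth flow used): the user identity must come from the verified token's claims, never from a separate caller-supplied field.

```typescript
interface TokenClaims {
  sub: string; // the user id baked into the token at issuance
}

// Toy verifier standing in for a real check (e.g. JWT signature validation):
// "tok-alice" is a valid token for user "alice"; anything else is rejected.
function verifyToken(token: string): TokenClaims | null {
  return token.startsWith("tok-") ? { sub: token.slice(4) } : null;
}

// Buggy version: verifies the token, then trusts a caller-supplied user id.
// Any holder of ANY valid token can impersonate any user.
function whoAmIBuggy(token: string, claimedUserId: string): string | null {
  return verifyToken(token) ? claimedUserId : null;
}

// Fixed version: the user id is whatever the verified token says it is.
function whoAmI(token: string): string | null {
  const claims = verifyToken(token);
  return claims ? claims.sub : null;
}
```

With the buggy version, `whoAmIBuggy("tok-alice", "bob")` happily returns `"bob"` — exactly the privilege-escalation hole described above.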
replies(10): >>otabde+2h >>0point+rm >>Vinnl+Ss >>XCSme+0v >>signa1+Iw >>zx8080+Ex >>827a+ZL >>dismal+Zi1 >>blinde+0n2 >>noone_+sp8
◧◩
13. JW_000+De[view] [source] [discussion] 2025-06-03 09:02:53
>>i5heu+u5
I think OP meant "revealing" as in "enlightening", not as "uncovering something that was hidden intentionally".
◧◩
14. otabde+2h[view] [source] [discussion] 2025-06-03 09:30:10
>>victor+le
> Still saves a lot of time vs typing everything from scratch

No it doesn't. Typing speed is never the bottleneck for an expert.

As an offline database of Google-tier knowledge, LLMs are useful. But current LLM tech is half-baked; we need:

a) Cheap commodity hardware for running your own models locally. (And by "locally" I mean separate dedicated devices, not something that fights over your desktop's or laptop's resources.)

b) Standard bulletproof ways to fine-tune models on your own data. (Inference is already there mostly with things like llama.cpp, finetuning isn't.)

replies(3): >>boruto+8l >>brails+Rs >>victor+Qk1
◧◩◪
15. boruto+8l[view] [source] [discussion] 2025-06-03 10:12:54
>>otabde+2h
I realize I procrastinate less when using LLM to write code which I know I could write.
replies(2): >>kenton+cN >>nipah+Azn
◧◩
16. 0point+rm[view] [source] [discussion] 2025-06-03 10:30:13
>>victor+le
I really don't agree with the idea that expert time would just be spent typing, and I'd be really surprised if that's the common sentiment around here.

An expert reasons, plans ahead, thinks and reasons a little bit more before even thinking about writing code.

If you are measuring productivity by lines of code per hour then you don't understand what being a dev is.

replies(2): >>brails+6s >>victor+ak1
◧◩◪
17. brails+6s[view] [source] [discussion] 2025-06-03 11:20:13
>>0point+rm
> I really don't agree with the idea that expert time would just be spent typing, and I'd be really surprised if that's the common sentiment around here.

They didn't suggest that at all, they merely suggested that the component of the expert's work that would otherwise be spent typing can be saved, while the rest of their utility comes from intense scrutiny, problem solving, decision making about what to build and why, and everything else that comes from experience and domain understanding.

replies(2): >>fc417f+BS >>kiitos+Cl2
◧◩◪
18. brails+Rs[view] [source] [discussion] 2025-06-03 11:26:45
>>otabde+2h
> No it doesn't. Typing speed is never the bottleneck for an expert

How could that possibly be true!? Seems like it'd be the same as suggesting being constrained to analog writing utensils wouldn't bottleneck the process of publishing a book or research paper. At the very least such a statement implies that people with ADHD can't be experts.

replies(4): >>otabde+uB >>thisis+MB >>throwa+7G >>nipah+FAn
◧◩
19. Vinnl+Ss[view] [source] [discussion] 2025-06-03 11:26:47
>>victor+le
At least for me, I'm fairly sure that I'm better at not adding security flaws to my code (which I'm already not perfect at!) than I am at spotting them in code that I didn't write, unfortunately.
replies(1): >>bryant+MK
◧◩
20. XCSme+0v[view] [source] [discussion] 2025-06-03 11:45:41
>>victor+le
> Still saves a lot of time vs typing everything from scratch.

In my experience, it takes longer to debug/instruct the LLM than to write it from scratch.

replies(1): >>Culona+9y
21. banana+ew[view] [source] 2025-06-03 11:56:38
>>rienbd+(OP)
this seems like a true but pointless observation? if you're producing security-sensitive code then experts need to be involved, whether that's me unwisely getting a junior to do something, or receiving a PR from my cat, or using an LLM.

removing expert humans from the loop is the deeply stupid thing the Tech Elite Who Want To Crush Their Own Workforces / former-NFT fanboys keep pushing; just letting an LLM generate code for a human to review, then sending it out for more review, is really pretty boring and already very effective for simple to medium-hard things.

replies(2): >>hn_thr+z11 >>toofy+lm2
◧◩
22. signa1+Iw[view] [source] [discussion] 2025-06-03 11:59:44
>>victor+le
> ... Still saves a lot of time vs typing everything from scratch ...

How? The prompts still have to be typed, right? And then the output examined in earnest.

replies(3): >>fastba+Lz >>fragme+ba1 >>victor+uk1
◧◩
23. zx8080+Ex[view] [source] [discussion] 2025-06-03 12:09:28
>>victor+le
> An expert prompts it and checks the code. Still saves a lot of time vs typing everything from scratch.

It's a lie. The marketing one, to be specific, which makes it even worse.

replies(1): >>victor+4k1
◧◩◪
24. Culona+9y[view] [source] [discussion] 2025-06-03 12:13:38
>>XCSme+0v
Depends on what you're doing. For example when you're writing something like React components and using something like Tailwind for styling, I find the speedup is close to 10X.
replies(4): >>azemet+AA >>nijave+551 >>ksenze+wM1 >>XCSme+es3
◧◩◪
25. fastba+Lz[view] [source] [discussion] 2025-06-03 12:25:35
>>signa1+Iw
A prompt can be as little as a sentence to write hundreds of lines of code.
replies(1): >>shaky-+Je1
◧◩
26. Cthulh+Nz[view] [source] [discussion] 2025-06-03 12:25:44
>>octobe+i5
I don't really agree; a junior developer, if they're curious enough, wouldn't just write insecure code, they would do self-study and find out best practices etc before writing code, including not storing plaintext passwords and the like.
replies(1): >>hn_thr+e21
◧◩◪◨
27. azemet+AA[view] [source] [discussion] 2025-06-03 12:30:20
>>Culona+9y
Isn’t this because the LLMs had like a million+ react tutorials/articles/books/repos to train on?

I mean I try to use them for svelte or vue and it still recommends react snippets sometimes.

replies(5): >>Culona+3Q >>ambica+vW >>lovich+Yj1 >>trilli+ks1 >>lengla+Ns3
◧◩◪◨
28. otabde+uB[view] [source] [discussion] 2025-06-03 12:37:37
>>brails+Rs
> How could that possibly be true!?

(I'll assume you're not joking, because your post is ridiculous enough to look like sarcasm.)

The answer is because programmers read code 10 times more (and think about code 100 times more) than they write it.

replies(2): >>thisis+zC >>brails+Px1
◧◩◪◨
29. thisis+MB[view] [source] [discussion] 2025-06-03 12:40:13
>>brails+Rs
Completely agree with you. I was working on the front-end of an application and I prompted Claude the following: "The endpoint /foo/bar is returning the json below ##json goes here##, show this as cards inside the component FooBaz following the existing design system".

In less than 5 minutes Claude created code that:

- encapsulated the API call
- modeled the API response using TypeScript
- created a re-usable and responsive UI component for the card (including a load state)
- included it in the right part of the page

Even if I typed at 200wpm I couldn't produce that much code from such a simple prompt.

I also had similar experiences/gains refactoring back-end code.

This being said, there are cases in which writing the code yourself is faster than writing a detailed enough prompt, BUT those cases are becoming the exception with each new LLM iteration. I noticed that after the jump from Claude 3.7 to Claude 4 my prompts can be way less technical.

replies(1): >>oblio+w31
◧◩◪◨⬒
30. thisis+zC[view] [source] [discussion] 2025-06-03 12:46:10
>>otabde+uB
Yeah, but how fast can you write compared to how fast you think?

How many times have you read a story card and by the time you finished reading it you thought "It's an easy task, should take me 1 hour of work to write the code and tests"?

In my experience, in most of those cases the AI can do the same amount of code writing in under 10 minutes, leaving me the other 50 minutes to review the code, make/ask for any necessary adjustments, and move on to another task.

replies(1): >>dns_sn+7E
◧◩◪◨⬒⬓
31. dns_sn+7E[view] [source] [discussion] 2025-06-03 12:55:28
>>thisis+zC
I don't know anyone who can think faster than they can type (on average), they would have to have an IQ over 150 or something. For mere mortals like myself, reasoning through edge cases and failure conditions and error handling and state invariants takes time. Time that I spend looking at a blinking cursor while the gears spin, or reading code. I've never finished a day where I thought to myself "gosh darn, if only I could type faster this would be done already".
replies(1): >>skydha+Ui1
◧◩◪◨
32. throwa+7G[view] [source] [discussion] 2025-06-03 13:07:10
>>brails+Rs
It seems fair to say that it is ~never the overall bottleneck? Maybe once you figure out what you want, typing speed briefly becomes the bottleneck, but does any expert finish a day thinking "If only I could type twice as fast, I'd have gotten twice as much work done?" That said, I don't think "faster typing" is the only benefit that AI assistance provides.
◧◩◪
33. bryant+MK[view] [source] [discussion] 2025-06-03 13:31:13
>>Vinnl+Ss
They're different mindsets. Some folks are better editors, inspectors, auditors, etc, whereas some are better builders, creators, and drafters.

So what you're saying makes sense. And I'm definitely on the other side of that fence.

replies(2): >>bluefl+oN >>namblo+9E8
34. kenton+lL[view] [source] 2025-06-03 13:34:46
>>rienbd+(OP)
Yeah I was disappointed in that one.

I hate to say, though, but I have reviewed a lot of human code in my time, and I've definitely caught many humans making similar-magnitude mistakes. :/

replies(2): >>hn_thr+O01 >>jjcm+nA1
◧◩
35. 827a+ZL[view] [source] [discussion] 2025-06-03 13:37:51
>>victor+le
I tend to disagree, but I don't know what my disagreement means for the future of being able to use AI when writing software. This workers-oauth-provider project is 1200 lines of code. An expert should be able to write that on the scale of an hour.

The main value I've gotten out of AI writing software comes from the two extremes; not from the middle-ground you present. Vibe coding can be great and seriously productive; but if I have to check it or manually maintain it in nearly any capacity more complicated than changing one string, productivity plummets. Conversely; delegating highly complex, isolated function writing to an AI can also be super productive, because it can (sometimes) showcase intelligence beyond mine and arrive at solutions which would take me 10x longer; but definitionally I am not the right person to check its code output; outside of maybe writing some unit tests for it (a third thing AI tends to be quite good at)

replies(2): >>fc417f+VS >>kenton+QR1
◧◩◪◨
36. kenton+cN[view] [source] [discussion] 2025-06-03 13:42:42
>>boruto+8l
I've noticed this too.

I remember hearing somewhere that humans have a limited capacity in terms of number of decisions made in a day, and it seems to fit here: If I'm writing the code myself, I have to make several decisions on every line of code, and that's mentally tiring, so I tend to stop and procrastinate frequently.

If an LLM is handling a lot of the details, then I'm just making higher-level decisions, allowing me to make more progress.

Of course this is totally speculation and theories like this tend to be wrong, but it is at least consistent with how I feel.

replies(1): >>autoex+0l1
◧◩◪◨
37. bluefl+oN[view] [source] [discussion] 2025-06-03 13:43:29
>>bryant+MK
When you form a mental model and then write code from it, that's a very lossy transformation. You can write comments and documentation to make it less lossy, but there will be information that is lost to a reviewer, who has to spend great effort to recreate it. If it is unknown how code is supposed to behave, then it becomes practically impossible to verify it for correctness.

This is less a matter of "mindset", but more a general problem of information.

replies(1): >>bbarne+6S
◧◩◪◨⬒
38. Culona+3Q[view] [source] [discussion] 2025-06-03 13:57:16
>>azemet+AA
Generally speaking, "LLMs" that I use are always the latest thinking versions of the flagship models (Grok 3/Gemini 2.5/...). GPT4o (and equivalent) are a mess.

But you're correct, when you use more exotic and/or quite new libraries, the outputs can be of mixed quality. For my current stack (Typescript, Node, Express, React 19, React Router 7, Drizzle and Tailwind 4) both Grok 3 (the paid one with 100k+ context) and Gemini 2.5 are pretty damn good. But I use them for prototyping, i.e. quickly putting together new stuff, for types, refactorings... I would never trust their output verbatim. (YET.) "Build an app that ..." would be a nightmare, but React-like UI code at sufficiently granular level is pretty much the best case scenario for LLMs as your components should be relatively isolated from the rest of the app and not too big anyways.

◧◩◪◨⬒
39. bbarne+6S[view] [source] [discussion] 2025-06-03 14:09:43
>>bluefl+oN
Whether reviewer or creator, if the start conditions / problem is known, both start with the same info.

"code base must do X with Y conditions"

The reviewer is at no disadvantage, other than not having walked through the problem by coding it.

replies(1): >>bluefl+cW
◧◩◪◨
40. fc417f+BS[view] [source] [discussion] 2025-06-03 14:13:21
>>brails+6s
It's not just time spent typing. Figuring out what needs to be typed can be both draining and time consuming. It's often (but not always) much easier to review someone else's solution to the problem than it is to solve it from scratch on your own.

Oddly enough security critical flows are likely to be one of the few exceptions because catching subtle reasoning errors that won't trip any unit tests when reviewing code that you didn't write is extremely difficult.

replies(2): >>oblio+W11 >>nipah+uBn
◧◩◪
41. fc417f+VS[view] [source] [discussion] 2025-06-03 14:15:51
>>827a+ZL
> An expert should be able to write that on the scale of an hour.

An expert in oauth, perhaps. Not your typical expert dev who doesn't specialize in auth but rather in whatever he's using the auth for. Navigating those sorts of standards is extremely time consuming.

replies(1): >>827a+rI1
◧◩◪◨⬒⬓
42. bluefl+cW[view] [source] [discussion] 2025-06-03 14:38:04
>>bbarne+6S
This is the ideal case where the produced code is well readable and commented so its intent is obvious.

The worst case is an intern or LLM having generated some code where the intent is not obvious and them not being able to explain the intent behind it. "How is that even related to the ticket"-style code.

◧◩◪◨⬒
43. ambica+vW[view] [source] [discussion] 2025-06-03 14:39:40
>>azemet+AA
Yes, definitely. Act accordingly.
◧◩
44. kortil+tX[view] [source] [discussion] 2025-06-03 14:45:42
>>i5heu+u5
Revealing the types of critical mistakes LLMs make. In particular, someone who didn't already understand OAuth likely would not have caught this and would have ended up with a vulnerable system.
◧◩
45. hn_thr+O01[view] [source] [discussion] 2025-06-03 15:08:04
>>kenton+lL
I just wanted to say thanks so much for publishing this, and especially your comments here - I found them really helpful and insightful. I think it's interesting (though not unexpected) that many of the comments here show what a Rorschach test this is. I think that's kind of unfortunate, because your experience clearly showed some of the benefits and limitations/pitfalls of coding like this in an objective manner.

I am curious, did you find the work of reviewing Claude's output more mentally tiring/draining than writing it yourself? Like some other folks mentioned, I generally find reviewing code more mentally tiring than writing it, but I get a lot of personal satisfaction by mentoring junior developers and collaborating with my (human) colleagues (most of them anyway...) Since I don't get that feeling when reviewing AI code, I find it more draining. I'm curious how you felt reviewing this code.

replies(1): >>kenton+R81
46. Action+o11[view] [source] 2025-06-03 15:11:51
>>rienbd+(OP)
But AIbros will be running around telling everyone that Claude invented OAuth for Cloudflare all on its own and then opensourced it.
◧◩
47. hn_thr+z11[view] [source] [discussion] 2025-06-03 15:12:37
>>banana+ew
I think it's a critically important observation.

I thought this experience was so helpful as it gave an objective, evidence-based sample on both the pros and cons of AI-assisted coding, where so many of the loudest voices on this topic are so one-sided ("AI is useless" or "developers will be obsolete in a year"). You say "removing expert humans from the loop is the deeply stupid thing the Tech Elite Who Want To Crush Their Own Workforces / former-NFT fanboys keep pushing", but the fact is many people with the power to push AI onto their workers are going to be more receptive to actual data and evidence than developers just complaining that AI is stupid.

◧◩◪◨⬒
48. oblio+W11[view] [source] [discussion] 2025-06-03 15:15:13
>>fc417f+BS
The problem is, building something IS the destination. At least the first 5-10 times. Building and fixing along the way is what builds lasting knowledge for most people.
◧◩◪
49. hn_thr+721[view] [source] [discussion] 2025-06-03 15:16:01
>>PeterS+16
I also like your analogy, but it also explains why I find working with AI-assisted coding so mentally tiresome.

It's like with some auto-driving systems - I describe it as having a slightly inebriated teenager at the wheel. I can't just relax and read a book, because then I'd die. So I have to be more mentally alert than if I were just driving myself: everything could be going smoothly and relaxed, but at any moment the driving system could decide to drive into a tree.

◧◩◪
50. hn_thr+e21[view] [source] [discussion] 2025-06-03 15:16:40
>>Cthulh+Nz
You have clearly only ever worked with the creme de la creme of junior developers.
◧◩◪◨⬒
51. oblio+w31[view] [source] [discussion] 2025-06-03 15:24:57
>>thisis+MB
The thing is... does your code end there? Would you put that code in production without a deep analysis of what Claude did?
replies(2): >>brails+Ww1 >>s900mh+ZA2
◧◩◪◨
52. nijave+551[view] [source] [discussion] 2025-06-03 15:35:25
>>Culona+9y
Isn't there some way to speed up with codegen besides using LLMs?
replies(2): >>frank_+DG1 >>rienbd+VM2
◧◩◪
53. kenton+R81[view] [source] [discussion] 2025-06-03 15:56:56
>>hn_thr+O01
I find reviewing AI code less mentally tiring than reviewing human code.

This was a surprise to me! Until I tried it, I dreaded the idea.

I think it is because of the shorter feedback loop. I look at what the AI writes as it is writing it, and can ask for changes which it applies immediately. Reviewing human code typically has hours or days of round-trip time.

Also with the AI code I can just take over if it's not doing the right thing. Humans don't like it when I start pushing commits directly to their PR.

There's also the fact that the AI I'm prompting is, obviously, working on my priorities, whereas humans are often working on other priorities, but I can't just decline to review someone's code because it's not what I'm personally interested in at that moment.

When things go well, reviewing the AI's work is less draining than writing it myself, because it's basically doing the busy work while I'm still in control of high-level direction and architecture. I like that. But things don't always go well. Sometimes the AI goes in totally the wrong direction, and I have to prompt it too many times to do what I want, in which case it's not saving me time. But again, I can always just cancel the session and start doing it myself... humans don't like it when I tell them to drop a PR and let me do it.

Personally, I don't generally get excited about mentoring and collaborating. I wish I did, and I recognize it's an important part of my job which I have to do either way, but I just don't. I get excited primarily about ideas and architecture and not so much about people.

replies(1): >>hn_thr+9A1
◧◩◪
54. fragme+ba1[view] [source] [discussion] 2025-06-03 16:03:55
>>signa1+Iw
not if you don't want to. speech to text is pretty good these days, and even eg aider has a /voice command thanks to OpenAI's whisper.
◧◩◪◨
55. shaky-+Je1[view] [source] [discussion] 2025-06-03 16:26:06
>>fastba+Lz
Hundreds of lines that you have to carefully read and understand.
replies(3): >>victor+Gk1 >>ImPost+FG1 >>fastba+Np2
◧◩◪◨⬒⬓⬔
56. skydha+Ui1[view] [source] [discussion] 2025-06-03 16:50:46
>>dns_sn+7E
You could be fast if you were coding only the happy path, like a lot of juniors do, instead of thinking about trivial things like malformed input, library semantics, framework gotchas and what not.
◧◩
57. dismal+Zi1[view] [source] [discussion] 2025-06-03 16:51:14
>>victor+le
> Still saves a lot of time vs typing everything from scratch

Probably very language specific. I use a lot of Ruby, typing things takes no time it's so terse. Instead I get to spend 95% of my time pondering my problems (or prompting the LLM)...

replies(2): >>victor+1k1 >>deepsu+nk1
◧◩◪◨⬒
58. lovich+Yj1[view] [source] [discussion] 2025-06-03 16:56:07
>>azemet+AA
I use https://visualstudio.microsoft.com/services/intellicode/ for my IDE which learns on your codebase, so it does end up saving me a ton of time after its learned my patterns and starts suggesting entire classes hooked up to the correct properties in my EF models.

It lets me still have my own style preferences with the benefit of AI code generation. Bridged the barrier I had with code coming from Claude/ChatGPT/etc where its style preferences were based on the wider internets standards. This is probably a preference on the level of tabs vs spaces, but ¯\_(ツ)_/¯

◧◩◪
59. victor+1k1[view] [source] [discussion] 2025-06-03 16:56:13
>>dismal+Zi1
It can create a whole dashboard view in Elixir in a few seconds that is 100 lines long. No way I can type that in the same time.
replies(2): >>dismal+wn1 >>Quadma+us1
◧◩◪
60. victor+4k1[view] [source] [discussion] 2025-06-03 16:56:31
>>zx8080+Ex
huh?
◧◩◪
61. victor+ak1[view] [source] [discussion] 2025-06-03 16:57:28
>>0point+rm
Yea, and you still do that now. Let's say you spend 30% of your time coding and the rest planning. Well, now you've got even more time for planning.
◧◩◪
62. deepsu+nk1[view] [source] [discussion] 2025-06-03 16:58:29
>>dismal+Zi1
With a proper IDE you don't type much even in Java/.Net, it's all autocomplete anyway. "Too verbose" complaints are mostly from Notepad lovers, and those who never needed to read somebody else's code.
◧◩◪
63. victor+uk1[view] [source] [discussion] 2025-06-03 16:59:09
>>signa1+Iw
On the latest project I've been working on, prompts are a few sentences (and technically I dictate them instead of typing) and the LLM generates a few hundred lines of code.
◧◩◪◨⬒
64. victor+Gk1[view] [source] [discussion] 2025-06-03 17:00:10
>>shaky-+Je1
Depends on what it is doing. A html template without JS? Enough to just check if it looks right and works.
◧◩◪
65. victor+Qk1[view] [source] [discussion] 2025-06-03 17:00:45
>>otabde+2h
Maybe you type faster than me then :) I for sure type slower than Claude code. :)
◧◩◪◨⬒
66. autoex+0l1[view] [source] [discussion] 2025-06-03 17:01:30
>>kenton+cN
I have a feeling that it's something that might help today but also something you might pay for later. When you have to maintain or bug fix that same code down the line the fact that you were the one making all those higher-level decisions and thinking through the details gives you an advantage. Just having everything structured and named in ways that make the most sense to you seems like it'd be helpful the next time you have to deal with the code.

While it's often a luxury, I'd much rather work on code I wrote than code somebody else wrote.

◧◩◪◨
67. autoex+pm1[view] [source] [discussion] 2025-06-03 17:08:57
>>JimDab+mb
> I thought LLMs were glorified Markov chain generators that didn't actually understand code and couldn't produce anything novel.

How novel is an OAuth provider library for Cloudflare Workers? I wouldn't be surprised if it'd been trained on multiple examples.

replies(1): >>kenton+sn1
◧◩◪◨⬒
68. kenton+sn1[view] [source] [discussion] 2025-06-03 17:15:09
>>autoex+pm1
I'm not aware of any other OAuth provider libraries for Workers. Plenty of clients, but not providers -- implementing the provider side is not that common, historically. See my other comment:

>>44164204

replies(1): >>nipah+EGn
◧◩◪◨
69. dismal+wn1[view] [source] [discussion] 2025-06-03 17:15:32
>>victor+1k1
In my experience the problem is never creating the dashboard view (there's a million examples of it out there anyway to copy/paste), but making sense of the data. Especially if you're doing anything even remotely novel.
◧◩◪◨⬒
70. trilli+ks1[view] [source] [discussion] 2025-06-03 17:45:12
>>azemet+AA
I put these in the Gemini Pro 2.5 system prompt and it's golden for Svelte.

https://svelte.dev/docs/llms

replies(1): >>azemet+Iw1
◧◩◪◨
71. Quadma+us1[view] [source] [discussion] 2025-06-03 17:46:13
>>victor+1k1
If you're making a dashboard view your productivity is zero, making it faster just multiplies zero by a bigger number.

Edit: this comment was more a result of me being in a terrible mood than a true claim. Sorry.

◧◩◪◨⬒⬓
72. azemet+Iw1[view] [source] [discussion] 2025-06-03 18:09:30
>>trilli+ks1
I do this and it still spits out react snippets regardless like 40% of the time... I feel like unless you are doing something extremely basic this is fine but once you introduce state or animations all these systems death spiral.
◧◩◪◨⬒⬓
73. brails+Ww1[view] [source] [discussion] 2025-06-03 18:10:57
>>oblio+w31
(GP) I wouldn't, but it would get me close enough that I can do the work that's more intellectually stimulating. Sometimes you need the people to do the concrete for a driveway, and sometimes you need to be signing off on the way the concrete was done, perhaps making some tweaks during the early stages.
◧◩◪◨⬒
74. brails+Px1[view] [source] [discussion] 2025-06-03 18:16:22
>>otabde+uB
I wasn't joking; it's a bottleneck sometimes, that's it. It's a bottleneck the way comfort and any good tool can be, the way a slow computer is a bottleneck. It's silly to suggest that your ability to rapidly use a fundamental tool is never a bottleneck, no matter what other bits need to come into play during the course of your day.

My ability to review and understand the intent behind code isn't the primary bottleneck to my efficiently reviewing code when it's requested of me; the primary bottleneck is being notified at the right time that I have a waiting request to review code.

If compilers were never a bottleneck, why would we ever try to make them faster? If build tools were never a bottleneck, why would we ever optimize those? These are all just some of the things that can stand between the identification of a problem and producing a solution for it.

◧◩◪◨
75. hn_thr+9A1[view] [source] [discussion] 2025-06-03 18:29:12
>>kenton+R81
Thank you so much for your detailed, honest, and insightful response! I've done a bunch of AI-assisted coding to varying degrees of success, but your comment here helped me think about it in new ways so that I can take the most advantage of it.

Again, I think your posting of this is probably the best actual, real world evidence that shows both the pros and cons of AI-assisted coding, dispassionately. Awesome work!

◧◩
76. jjcm+nA1[view] [source] [discussion] 2025-06-03 18:30:39
>>kenton+lL
Most interesting aspect of this is it likely learned this pattern from human-written code!
replies(1): >>kenton+LP3
◧◩◪◨⬒
77. frank_+DG1[view] [source] [discussion] 2025-06-03 19:09:10
>>nijave+551
Some may have a better answer, but I often compare with tools like OpenAPI and AsyncAPI generators where HTTP/AMQP/etc code can be generated for servers, clients and extended documentation viewers.

The trade-off here would be that you must create the spec file (and customize the template files where needed) which drives the codegen, in exchange for explicit control over deterministic output. So there’s more typing, but potentially less cognitive overhead than reviewing a bunch of LLM output.

For this use case I find the explicit codegen UX preferable to inspecting what the LLM decided to do with my human-language prompt, if attempting to have the LLM directly code the library/executable source (as opposed to asking it to create the generator, template or API spec).
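A toy sketch of the explicit-codegen trade-off described above (the spec shape, the endpoint names, and the `generateClient` function are all made up for illustration — this is not any real generator's API):

```typescript
// A hand-written spec drives deterministic code generation:
// same spec in, same code out. Review the generator once,
// rerun it forever -- no LLM output to re-inspect each time.
type Endpoint = { name: string; method: "GET" | "POST"; path: string };

const spec: Endpoint[] = [
  { name: "getUser", method: "GET", path: "/users/{id}" },
  { name: "createUser", method: "POST", path: "/users" },
];

// Emit one client function per endpoint in the spec.
const generateClient = (eps: Endpoint[]): string =>
  eps
    .map(
      (e) =>
        `export const ${e.name} = (base: string) => fetch(base + "${e.path}", { method: "${e.method}" });`
    )
    .join("\n");

console.log(generateClient(spec));
```

The output is boring on purpose: every line of generated code traces back to one line of spec, which is the reviewability property being argued for.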

◧◩◪◨⬒
78. ImPost+FG1[view] [source] [discussion] 2025-06-03 19:09:11
>>shaky-+Je1
You also have to do that with code you write without LLM assistance.
◧◩◪◨
79. 827a+rI1[view] [source] [discussion] 2025-06-03 19:19:37
>>fc417f+VS
Maybe, but also: Cloudflare is one of like fifteen organizations on the planet writing code like this. The vast majority of The Rest Of Us will just consume code like this, which companies like Cloudflare, Auth0, etc write. That tends to be the nature of highly-specialized highly-domain-specific code. Cloudflare employs those mythical Oauth experts you talk about.
replies(1): >>kenton+lV1
◧◩◪◨
80. ksenze+wM1[view] [source] [discussion] 2025-06-03 19:43:48
>>Culona+9y
This can’t be stressed enough: it depends on what you’re doing. Developers talking about whether LLMs are useful are just talking past each other unless they say “useful for React” or “useful for Rust.” I mostly write Drupal code, and the JetBrains LLM autocomplete saves me a few keystrokes, maybe. It’s not amazing. My theory is that there just isn’t much boilerplate Drupal code out there to train on: everything possible gets pushed out of code and into configuration + UI. If I were writing React components I’d be having an entirely different experience.
◧◩◪
81. kenton+QR1[view] [source] [discussion] 2025-06-03 20:12:20
>>827a+ZL
> This workers-oauth-provider project is 1200 lines of code. An expert should be able to write that on the scale of an hour.

Are you being serious here?

Let's do the math.

1200 lines in an hour would be one line every three seconds, with no breaks.

And your figure of 1200 lines is apparently omitting whitespace and comments. The actual code is 2626 lines. Let's say we ignore blank lines, then it's 2251 lines. So one line per ~1.6 seconds.

The best typists type like 2 words per second, so unless the average line of code has 3 words on it, a human literally couldn't type that fast -- even if they knew exactly what to type.

Of course, people writing code don't just type non-stop. We spend most of our time thinking. Also time testing and debugging. (The test is 2195 lines BTW, not included in above figures.) Literal typing of code is a tiny fraction of a developer's time.

I'd say your estimate is wrong by at least one, and more realistically two, orders of magnitude.
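The arithmetic above can be sanity-checked in a few lines (all figures are taken from this comment):

```typescript
// Seconds-per-line implied by the claimed rate vs. the real file size.
const secondsPerHour = 3600;
const claimedLines = 1200;  // the "expert writes it in an hour" figure
const nonBlankLines = 2251; // actual non-blank lines in the library

const secondsPerLineClaimed = secondsPerHour / claimedLines;  // 3 s/line
const secondsPerLineActual = secondsPerHour / nonBlankLines;  // ~1.6 s/line

console.log(secondsPerLineClaimed, secondsPerLineActual.toFixed(1));
```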

replies(1): >>827a+sd2
◧◩◪◨⬒
82. kenton+lV1[view] [source] [discussion] 2025-06-03 20:33:24
>>827a+rI1
That's me. I'm the expert.

On my very most productive days of my entire career I've managed to produce ~1000 lines of code. This library is ~5000 (including comments, tests, and documentation, which you omitted for some reason). I managed to prompt it out of the AI over the course of about five days. But they were five days when I also had a lot of other things going on -- meetings, chats, code reviews, etc. Not my most productive.

So I estimate it would have taken me 2x-5x longer to write this library by hand.

replies(1): >>ncruce+n35
◧◩◪◨
83. 827a+sd2[view] [source] [discussion] 2025-06-03 22:37:50
>>kenton+QR1
"On the scale of an hour" means "within an order of magnitude of one hour", or either "10 minutes to 10 hours" or "0.1 hours to 10 hours" depending on your interpretation, either is fine.
replies(1): >>chipsr+PJ5
◧◩◪◨
84. kiitos+Cl2[view] [source] [discussion] 2025-06-04 00:02:04
>>brails+6s
Time spent typing is statistically 0% of overall time spent in developing/implementing/shipping a feature or product or whatever. There's literally no reason to try to optimize that irrelevant detail.
replies(1): >>chipsr+kJ5
◧◩
85. toofy+lm2[view] [source] [discussion] 2025-06-04 00:09:49
>>banana+ew
> …removing expert humans from the loop is the deeply stupid thing the Tech Elite Who Want To Crush Their Own Workforce…

this is completely expected behavior by them. departments with well paid experts will be one of the first they’ll want to cut. in every field. experts cost money.

we’re a long, long, long way off from a bot that can go into random houses and fix under the sink plumbing, or diagnose and then fix an electrical socket. however, those who do most of their work on a computer, they’re pretty close to a point where they can cut these departments.

in every industry in every field, those will be jobs cut first. move fast and break things.

◧◩
86. blinde+0n2[view] [source] [discussion] 2025-06-04 00:18:01
>>victor+le
Sure! But over half the fun of coding is writing and learning.
◧◩◪◨⬒
87. fastba+Np2[view] [source] [discussion] 2025-06-04 00:55:33
>>shaky-+Je1
Are you not doing that already?

I go line-by-line through the code that I wrote (in my git client) before I stage+commit it.

replies(2): >>shaky-+fF6 >>nipah+3Cn
◧◩◪◨⬒⬓
88. s900mh+ZA2[view] [source] [discussion] 2025-06-04 03:33:25
>>oblio+w31
I’m not who you replied to, but I keep functions small and testable, paired with unit tests with a healthy mix of happy/sad paths.

Afterwards I make sure the LLM passes all the tests before I spend my time reviewing the code.

I find this process keeps the iteration count low for the review -> prompt -> review loop.

I personally love writing code with an LLM. I’m a sloppy typist but love programming. I find it’s a great burnout prevention.

For context: node.js development/React (a very LLM friendly stack.)
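A minimal sketch of that workflow (the `parsePort` helper is made up for illustration): one small, testable function with a happy-path and a sad-path check, all passing before any human review time is spent:

```typescript
// Small, single-purpose function: easy for an LLM to fill in,
// and easy to gate behind tests before reviewing.
function parsePort(s: string): number {
  const n = Number(s);
  if (!Number.isInteger(n) || n < 1 || n > 65535) {
    throw new RangeError(`invalid port: ${s}`);
  }
  return n;
}

// Happy path:
console.log(parsePort("8080")); // 8080
// Sad path:
try {
  parsePort("-1");
} catch (e) {
  console.log((e as Error).message); // "invalid port: -1"
}
```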

◧◩◪◨⬒
89. rienbd+VM2[view] [source] [discussion] 2025-06-04 06:24:57
>>nijave+551
You can require less code by using a more expressive programming language.
◧◩◪◨
90. XCSme+es3[view] [source] [discussion] 2025-06-04 12:59:45
>>Culona+9y
Scaffolding works fine, for things that are common, and you already have 100x examples on the web. Once you need something more specific, it falls apart and leads to hours of prompting and debugging for something that takes 30 minutes to write from scratch.

Some basic things it fails at:

  * Upgrading the React code-base from Material-UI V4 → V5
  * Implementing a simple header navigation dropdown in HTML/CSS that looks decent and is usable (it kept having bugs with hovering, wrong sizes, padding, responsiveness, duplicated code etc.)
  * Changing anything. About half of the time, it keeps saying "I made those changes", but no changes were made (it happens with all of them, Windsurf, Copilot, etc.).
◧◩◪◨⬒
91. lengla+Ns3[view] [source] [discussion] 2025-06-04 13:04:04
>>azemet+AA
I have had no issues with LLMs trying to force a language on me. I tried the whole snake game test with ChatGPT, but instead of using Python I asked it to use the nodejs bindings for raylib, which is rather unusual.

It did it in no time and no complaints.

replies(1): >>azemet+pW4
◧◩◪
92. kenton+LP3[view] [source] [discussion] 2025-06-04 15:24:10
>>jjcm+nA1
It's not a 100% bad idea. If you lose the encryption key, you lose the data. Data loss is bad! So better keep a backup of the key somewhere. I can see how it got there.

Defeats the purpose in this case though.
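A hypothetical, much-simplified model of why that "backup" defeats the scheme (the toy XOR cipher and these field names are illustrative only — this is not the library's actual crypto):

```typescript
// The intended design: the key is stored only "wrapped" (encrypted under
// the token), so reading the grant record alone is useless. Adding the raw
// key JWK as a "backup" means anyone who can read the record can decrypt.
type GrantRecord = {
  ciphertext: string;        // data encrypted under `key`
  wrappedKey: string;        // key encrypted under the token (safe to store)
  encryptionKeyJwk?: string; // the raw key itself -- the "backup" in question
};

// Toy XOR "cipher", just to make the key dependency explicit.
const enc = (data: string, key: string) =>
  [...data]
    .map((c, i) => String.fromCharCode(c.charCodeAt(0) ^ key.charCodeAt(i % key.length)))
    .join("");
const dec = enc; // XOR is its own inverse

const key = "k3y";
const token = "t0k";
const record: GrantRecord = {
  ciphertext: enc("secret", key),
  wrappedKey: enc(key, token),
  encryptionKeyJwk: key, // the backup
};

// With the backup present, an attacker holding only the record wins:
const stolen = dec(record.ciphertext, record.encryptionKeyJwk!);
// Without it, the token is required to unwrap the key first:
const legit = dec(record.ciphertext, dec(record.wrappedKey, token));
```

Dropping `encryptionKeyJwk` (as the prompt in the top comment asked) restores the property that the record alone is not enough to decrypt.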

◧◩◪◨⬒⬓
93. azemet+pW4[view] [source] [discussion] 2025-06-04 21:59:51
>>lengla+Ns3
To be more honest, it did feel like if I just stuck with the standard library it was okay at generating a higher ratio of useful snippets. Once I introduced a library, things fell apart.
◧◩◪◨⬒⬓
94. ncruce+n35[view] [source] [discussion] 2025-06-04 22:53:45
>>kenton+lV1
Would you have gotten to where you are now if AI had always been there writing this code for you?

Would you have been able to review it for its faults, had you not experienced the pain of committing some of them yourself, or been warned by your peers of the same?

Would you have become the senior expert that you are if that had been your learning process?

◧◩◪◨⬒
95. chipsr+kJ5[view] [source] [discussion] 2025-06-05 06:52:47
>>kiitos+Cl2
No it's not. It's close to 50%.
replies(1): >>kiitos+WNb
◧◩◪◨⬒
96. chipsr+PJ5[view] [source] [discussion] 2025-06-05 06:56:48
>>827a+sd2
It means "less than one hour".
◧◩◪◨⬒⬓
97. shaky-+fF6[view] [source] [discussion] 2025-06-05 15:29:14
>>fastba+Np2
Yes, but you know the kind of code you write. When you re-check it, you are looking for minor typos, not major logic flaws affecting half the committed code.
◧◩
98. noone_+sp8[view] [source] [discussion] 2025-06-06 10:04:16
>>victor+le
For me, it’s not the typing - it’s the understanding. If I’m typing code, I have a mental model already or am building one as I type, whereas if I have an LLM generate the code then it’s “somebody else’s code” and I have to take the time to understand it anyway in order to usefully review it. Given that’s the case, I find it’s often quicker for me to just key the code myself, and come away with a better intuition for how it works at the end.
◧◩◪◨
99. namblo+9E8[view] [source] [discussion] 2025-06-06 12:38:44
>>bryant+MK
You don't become a good editor, inspector, what have you, by having other people/machines write all the code for you. To become and, perhaps more relevant, to stay a good reviewer, you need to regularly write code from scratch to see how it works. On top of that, languages, frameworks and libraries change constantly and you need to write and execute and experiment with new code to see exactly how it behaves so that you can eventually review the code that uses these features. Good reviewers are not born good reviewers!
◧◩◪◨⬒⬓
100. kiitos+WNb[view] [source] [discussion] 2025-06-07 20:23:15
>>chipsr+kJ5
If that's the case for you, then let me tell you, you're doing something wrong. It might not be you, it might be your team, or organization, but this is definitely not a normal experience.
101. jofzar+tec[view] [source] 2025-06-08 02:01:58
>>rienbd+(OP)
I know I'm preaching to the masses here, but isn't this why PRs are so important?
◧◩◪◨
102. nipah+Azn[view] [source] [discussion] 2025-06-12 13:34:27
>>boruto+8l
Amazing, because I realized I procrastinate MORE when using an LLM to write code which I know I could write. And not only that, I feel I'm losing the ability to do the coding myself, and to solve the problems myself, when delegating this to the AI. Which is why no one should base their life decisions, like whether or not to use an LLM, on some random story from the internet.
◧◩◪◨
103. nipah+FAn[view] [source] [discussion] 2025-06-12 13:39:56
>>brails+Rs
Now you are just being silly with your comparisons. There is no analogy between those things:

  * the difference between handwriting a book and typing is the extreme pain you would feel in your hands, versus being able to write more in the same time without it
  * the difference between typing and using your voice could be of a similar magnitude for someone with problems in their hands
  * the difference between any of those writing methods and using an AI to do it for you is that you are abstracting YOURSELF from the equation, not the method of writing

It's not analogous, not even from a mountain of distance. You are not less "bottlenecked" because you don't need to write the thing yourself; you are just not producing it at all. It's more analogous to guiding the hands of another person with vague instructions, using their own expressivity to make your book for you, then claiming it was you who wrote it. It's not a bottleneck question, it was never a bottleneck question, and this is because code IS the writing. It IS the problem-solving area where you need to put your mind to work: not writing a prompt, but coding in a specific and well-defined formal syntax.
◧◩◪◨⬒
104. nipah+uBn[view] [source] [discussion] 2025-06-12 13:44:58
>>fc417f+BS
> It's not just time spent typing. Figuring out what needs to be typed can be both draining and time consuming. It's often (but not always) much easier to review someone else's solution to the problem than it is to solve it from scratch on your own.

This is EXTREMELY false. When you write the code you *remember* it, it's fresh in your head, you *know* what it is doing and exactly what it's supposed to do. This is why debugging a codebase you didn't write is harder than one you wrote: if a bug happens, you know exactly the spots it could be happening at and you can easily go and check them.

◧◩◪◨⬒⬓
105. nipah+3Cn[view] [source] [discussion] 2025-06-12 13:47:31
>>fastba+Np2
You read your own code line-by-line at the same speed in your git client?

You are doing something wrong. I go line-by-line through my code like 7x faster than I would through someone else's code, because I know what I wrote, my own intentions, my flow of coding, and all of those details. I can just look at it en passant, while with AI code I need to carefully review every single detail and the connections between them to approve it.

◧◩◪◨⬒⬓
106. nipah+EGn[view] [source] [discussion] 2025-06-12 14:13:05
>>kenton+sn1
Novelty is not a characteristic of interpolation, though; it's about extrapolation. If you have plenty of clients and plenty of related material on the provider side, even if not on auth specifically, then it could be fairly trivial for the LLM to interpolate in that field.