zlacker

Cloudflare builds OAuth with Claude and publishes all the prompts

submitted by gregor+(OP) on 2025-06-02 14:24:54 | 889 points 519 comments

See also https://github.com/cloudflare/workers-oauth-provider/commits... (via https://news.ycombinator.com/item?id=44161672)


5. abroad+L5[view] [source] 2025-06-02 15:01:04
>>gregor+(OP)
Oh hey, looks like it's mostly Kenton Varda, who you may recognize from his LAN party house: >>42156977
10. mtlync+57[view] [source] 2025-06-02 15:07:09
>>gregor+(OP)
>In all seriousness, two months ago (January 2025), I (@kentonv) would have agreed.

I'm confused by what "I (@kentonv)" means here, because kentonv is a different user.[0] Are you saying this is your alt? Or is this a typo/misunderstanding?

Edit: Figured out that most of your post is quoting the README. Consider using > and * characters to clarify.

[0] https://news.ycombinator.com/user?id=kentonv

15. diggan+v7[view] [source] [discussion] 2025-06-02 15:09:21
>>mtlync+57
It's a literal copy-paste from the README, I think it was supposed to be quoted but parent messed it up somehow.

https://github.com/cloudflare/workers-oauth-provider/blob/fe...

18. infini+X7[view] [source] 2025-06-02 15:11:10
>>gregor+(OP)
From this commit: https://github.com/cloudflare/workers-oauth-provider/commit/...

===

"Fix Claude's bug manually. Claude had a bug in the previous commit. I prompted it multiple times to fix the bug but it kept doing the wrong thing.

So this change is manually written by a human.

I also extended the README to discuss the OAuth 2.1 spec problem."

===

This is super relatable to my experience trying to use these AI tools. They can get halfway there and then struggle immensely.

29. diggan+i9[view] [source] [discussion] 2025-06-02 15:18:39
>>kenton+O8
> I think we are in desperate need of safe vibe coding environments where code runs in a sandbox with security policies that make it impossible to screw up.

OpenAI's new Rust version of Codex might be of interest, haven't dived deeper into the codebase but seems they're thinking about sandboxing from the get-go: https://github.com/openai/codex/blob/7896b1089dbf702dd079299...

51. jaunty+kc[view] [source] 2025-06-02 15:35:51
>>gregor+(OP)
> Again, please check out the commit history -- especially early commits -- to understand how this went.

Direct link to earliest page of history: https://github.com/cloudflare/workers-oauth-provider/commits...

A lot of very explicit & clear prompting, with direct instructions about the approach to take. Some examples on the first page: https://github.com/cloudflare/workers-oauth-provider/commit/... https://github.com/cloudflare/workers-oauth-provider/commit/...

63. ZiiS+cd[view] [source] 2025-06-02 15:40:01
>>gregor+(OP)
Shouldn't they really have asked it to read https://developers.cloudflare.com/workers/examples/protect-a...
75. kenton+Xe[view] [source] 2025-06-02 15:51:05
>>gregor+(OP)
I'm the author of this library! Or uhhh... the AI prompter, I guess...

I'm also the lead engineer and initial creator of the Cloudflare Workers platform.

--------------

Plug: This library is used as part of the Workers MCP framework. MCP is a protocol that allows you to make APIs available directly to AI agents, so that you can ask the AI to do stuff and it'll call the APIs. If you want to build a remote MCP server, Workers is a great way to do it! See:

https://blog.cloudflare.com/remote-model-context-protocol-se...

https://developers.cloudflare.com/agents/guides/remote-mcp-s...

--------------

OK, personal commentary.

As mentioned in the readme, I was a huge AI skeptic until this project. This changed my mind.

I had also long been rather afraid of the coming future where I mostly review AI-written code. As the lead engineer on Cloudflare Workers since its inception, I do a LOT of code reviews of regular old human-generated code, and it's a slog. Writing code has always been the fun part of the job for me, and so delegating that to AI did not sound like what I wanted.

But after actually trying it, I find it's quite different from reviewing human code. The biggest difference is that the feedback loop is much shorter. I prompt the AI and it produces a result within seconds.

My experience is that this actually makes it feel more like I am authoring the code. It feels similarly fun to writing code by hand, except that the AI is exceptionally good at boilerplate and test-writing, which are exactly the parts I find boring. So... I actually like it.

With that said, there's definitely limits on what it can do. This OAuth library was a pretty perfect use case because it's a well-known standard implemented in a well-known language on a well-known platform, so I could pretty much just give it an API spec and it could do what a generative AI does: generate. On the other hand, I've so far found that AI is not very good at refactoring complex code. And a lot of my work on the Workers Runtime ends up being refactoring: any new feature requires a bunch of upfront refactoring to prepare the right abstractions. So I am still writing a lot of code by hand.

I do have to say though: The LLM understands code. I can't deny it. It is not a "stochastic parrot", it is not just repeating things it has seen elsewhere. It looks at the code, understands what it means, explains it to me mostly correctly, and then applies my directions to change it.

127. kenton+Js[view] [source] [discussion] 2025-06-02 17:07:38
>>ZiiS+cd
The secret token is hashed first, and it's the hash that is looked up in storage. In this arrangement, an attacker cannot use timing to determine the correct value byte-by-byte, because any change to the secret token is expected to randomize the whole hash. So, timing-safe equality is not needed.
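
In code terms, the pattern is roughly this (a generic sketch assuming a simple key-value store; a hypothetical illustration, not the library's actual implementation):

    // Hypothetical sketch, not the library's actual code: tokens are
    // looked up by their hash rather than by the raw secret.
    async function hashToken(token: string): Promise<string> {
      const digest = await crypto.subtle.digest(
        "SHA-256", new TextEncoder().encode(token));
      // Hex-encode the digest for use as a storage key.
      return [...new Uint8Array(digest)]
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("");
    }

    // A one-byte change to `token` randomizes the entire hash, so a
    // byte-by-byte timing attack on the lookup learns nothing useful.
    async function lookupGrant(storage: Map<string, unknown>, token: string) {
      return storage.get(await hashToken(token)) ?? null;
    }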

That said, if you have spotted a place in the code where you believe there is such a vulnerability, please do report it. Disclosure guidelines are at: https://github.com/cloudflare/workers-oauth-provider/blob/ma...

130. Powder+Dt[view] [source] [discussion] 2025-06-02 17:14:25
>>dingnu+Bs
https://openreview.net/pdf?id=GTHD2UnDIb
136. diggan+Hw[view] [source] [discussion] 2025-06-02 17:36:05
>>dingnu+is
> Can you imagine if Excel worked like this?

I mean, why would I imagine that? Who would want that? It's like the argument against legal marijuana, where someone replies "But would you like your pilot to be high when flying?!". Right tool for the right job; clearly, when you want 100% certainty, LLMs aren't the tool for that. Just because they're useful for some things doesn't mean we have to replace everything with them.

> Also, each try costs money!

I guess you're using some paid API? Try a different way then. I mostly use the web UI from OpenAI, or Codex lately, or run locally with my own agent using local weights; none of those "cost money per try" any more than writing data to my SSD costs me money.

It's not the holy grail some people paint it as, and I'm not sure we're across the "productivity threshold" (>>44160664 ) yet, but it's probably worth trying out before jumping to conclusions. No one is forcing you either, YMMV and all that.

158. simonw+qE[view] [source] 2025-06-02 18:34:25
>>gregor+(OP)
The most clearly Claude-written commits are on the first page, this link should get you to them: https://github.com/cloudflare/workers-oauth-provider/commits...
159. smalln+wE[view] [source] [discussion] 2025-06-02 18:34:55
>>weinzi+tD
> humans likely are nothing more than that

Relevant post: >>44089156

169. jsnell+fI[view] [source] [discussion] 2025-06-02 19:00:29
>>Denzel+0D
Sure. Here's something I'd written on the subject that I'd left lying in my drafts folder for a month, but I've now published just for you :)

https://www.snellman.net/blog/archive/2025-06-02-llms-are-ch...

It has links to public sources on the pricing of both LLMs and search, and explains why the low inference prices can't be due to the inference being subsidized. (And while there are other possible explanations, it includes a calculator for what the compound impact of all of those possible explanations could be.)

185. 9dev+fP[view] [source] [discussion] 2025-06-02 19:50:24
>>kenton+Og
Funny thing. I have built something similar recently, that is, an OAuth 2.1-compliant authorisation server in TypeScript[0]. I did it by hand, with some LLM help on the documentation. I think it took me about two weeks full time, give or take, and there’s still work to do, especially on the testing side of things, so I would agree with your estimate.

I’m going to take a very close look at your code base :)

[0] https://github.com/colibri-hq/colibri/blob/next/packages/oau...

196. stevek+7V[view] [source] [discussion] 2025-06-02 20:30:24
>>baq+bH
I mean at the most extreme: that it can NEVER do so. Someone who holds this position would point to commits like >>44159659
203. whilen+yX[view] [source] [discussion] 2025-06-02 20:45:35
>>jsnell+fI
Just had a quick glance, but I think I found something to add to the Objection!-section of your post:

Brave's Search API is $3 CPM and includes Web search, Images, Videos, News, Goggles[0]. Anthropic's API is $10 CPM for Web search (and text only?), excluding any input/output tokens from your model of choice[1]; that'd be an additional $15 CPM, assuming 1K tokens per request and Claude Sonnet 4 as a good model, so ~$25 CPM.

So your default "Ratio (Search cost / LLM cost): 25.0x" seems like it should be more on the 0.12x side of things. Mind you, I just skimmed everything in 10 minutes and have no experience using either API.

[0]: https://brave.com/search/api/

[1]: https://www.anthropic.com/pricing#anthropic-api
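
Spelled out, the arithmetic above is (all prices are the assumptions stated in this comment, not vendor-confirmed figures):

    // Back-of-envelope from the figures above; CPM = $ per 1,000 requests.
    const braveSearchCPM = 3;        // Brave Search API (assumed price)
    const anthropicSearchCPM = 10;   // Anthropic web search surcharge (assumed)
    const sonnetTokenCPM = 15;       // ~1K tokens/request at Sonnet 4 rates (assumed)
    const llmCPM = anthropicSearchCPM + sonnetTokenCPM; // ≈ $25 CPM
    console.log(braveSearchCPM / llmCPM);               // 0.12 -> ~0.12x, not 25x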

207. srhtft+hZ[view] [source] [discussion] 2025-06-02 20:56:30
>>kenton+Og
> It took me a few days to build the library with AI. ...

> I estimate it would have taken a few weeks, maybe months to write by hand.

I don't think this is a fair assessment, given that the summary of the commit history https://pastebin.com/bG0j2ube shows your work started on 2025-02-27 and began trailing off around 2025-03-20 as others joined in. Minor changes continue to the present.

> That said, this is a pretty ideal use case: implementing a well-known standard on a well-known platform with a clear API spec.

Still, this allowed you to complete in a month what may have taken two. That's a remarkable feat considering the time and value of someone of your caliber.

210. giantr+a11[view] [source] [discussion] 2025-06-02 21:13:25
>>philip+Qh
> Like when you see a media report on a subject you know about and you see it's inaccurate but then somehow still trust the media on a subject you're a non-expert on.

Gell-Mann Amnesia https://en.m.wikipedia.org/wiki/Gell-Mann_amnesia_effect

225. Denzel+H71[view] [source] [discussion] 2025-06-02 21:49:42
>>jsnell+fI
Thanks for sharing!

It's worthwhile to note that https://github.com/deepseek-ai/open-infra-index/blob/main/20... shows cost vs. theoretical income. They don't show 80% gross margins and there's probably a reason they don't share their actual gross margin.

OpenAI is the easiest counterexample that proves inference is subsidized right now. They've taken $50B in investment; surpassed 400M WAUs (https://www.reuters.com/technology/artificial-intelligence/o...); lost $5B on $4B in revenue for 2024 (https://finance.yahoo.com/news/openai-thinks-revenue-more-tr...); and project they won't be cash-flow positive until 2029.

Prices would be significantly higher if OpenAI was priced for unit profitability right now.

As for the mega-conglomerates (Google, Meta, Microsoft), GenAI is a loss leader to build platform power. GenAI doesn't need to be unit profitable, it just needs to attract and retain people on their platform, i.e. you need a Google Cloud account to use the Gemini API.

227. keeda+B81[view] [source] 2025-06-02 21:55:35
>>gregor+(OP)
A number of comments point out that OAuth is a well known standard and wonder how AI would perform on less explored problem spaces. As it happens I have some experience there, which I wrote about in this long-ass post nobody ever read: https://www.linkedin.com/pulse/adventures-coding-ai-kunal-ka...

It’s now a year+ old and models have advanced radically, but most of the key points still hold; I've summarized them here. The post has way more details if you need them. Many of these points have also been echoed by others like @simonw.

Background:

* The main project is specialized and "researchy" enough that there is no direct reference on the Internet. The core idea has been explored in academic literature, and a couple of relevant proprietary products exist, but nobody is doing it the way I am.

* It has the advantage of being greenfield, but the drawback of being highly “prototype-y”, so some gnarly, hacky code and a ton of exploratory / one-off programs.

* Caveat: my usage of AI is actually very limited compared to power users (not even on agents yet!), and the true potential is likely far greater than what I've described.

Highlights:

* At least 30% and maybe > 50% of the code is AI-generated. Not only are autocompletes frequent, I do a lot of "chat-oriented" and interactive "pair programming", so precise attribution is hard. It has written large, decently complicated chunks of code.

* It does boilerplate extremely easily, but it also handles novel use-cases very well.

* It can refactor existing code decently well, but probably because I've worked to keep my code highly modular and functional, which greatly limits what needs to be in the context (which I often manage manually). Errors for even pretty complicated requests are rare, especially with newer models.

Thoughts:

* AI has let me be productive – and even innovate! – despite having limited prior background in the domains involved. The vast majority of all innovation comes from combining and applying well-known concepts in new ways. My workflow is basically a "try an approach -> analyze results -> synthesize new approach" loop, which generates a lot of such unique combinations, and the AI handles those just fine. As @kentonv says in the comments, there is no doubt in my mind that these models “understand” code, as opposed to being stochastic parrots. Arguments about what constitutes "reasoning" are essentially philosophical at this point.

* While the technical ideas so far have come from me, AI now shows the potential to be inventive by itself. In a recent conversation ChatGPT reasoned out a novel algorithm and code for an atypical, vaguely-defined problem. (I could find no reference to either the problem or the solution online.) Unfortunately, it didn't work too well :-) I suspect, however, that if I go full agentic by giving it full access to the underlying data and letting it iterate, it might actually refine its idea until it works. The main hurdles right now are logistics and cost.

* It took me months to become productive with AI, having to find a workflow AND code structure that works well for me. I don’t think enough people have put in the effort to find out what works for them, and so you get these polarized discussions online. I implore everyone, find a sufficiently interesting personal project and spend a few weekends coding with AI. You owe it to yourself, because 1) it's free and 2)...

* Jobs are absolutely going to be impacted. Mostly entry-level and junior ones, but maybe even mid-level ones. Without AI, I would have needed a team of 3+ (including a domain expert) to do this work in the same time. All knowledge jobs rely on a mountain of donkey work, and the donkey is going the way of the dodo. The future will require people who uplevel themselves to the state of the art and push the envelope using these tools.

* How we create AI-capable senior professionals without junior apprentices is going to be a critical question for many industries. My preliminary take is that motivated apprentices should voluntarily eschew all AI use until they achieve a reasonable level of proficiency.

230. abalon+x91[view] [source] [discussion] 2025-06-02 22:01:33
>>stego-+6b
I like to make a rough analogy with autonomous vehicles. There's a leveling system from 1 (old school cruise control) to 5 (full automation):

* We achieved Level 2 autonomy first, which requires you to fully supervise and retain control of the vehicle and expect mistakes at any moment. So kind of neat but also can get you in big trouble if you don't supervise properly. Some people like it, some people don't see it as a net gain given the oversight required.

^ This is where Tesla "FSD beta" is at, and probably where LLM codegen tools are at today.

* After many years we have achieved a degree of Level 4 autonomy on well-trained routes albeit with occasional human intervention. This is where Waymo is at in certain cities. Level 4 means autonomy within specific but broad circumstances like a given area and weather conditions. While it is still somewhat early days it looks like we can generally trust these to operate safely and ask for help when they are not confident. Humans are not out of the loop.[1]

^ This is probably where we can expect codegen to get to after many more years of training and refinement in specific domains. I.e. a lot of what the Cloudflare engineers did with their prompt-engineering tweaks was of this nature. Think of them as the employees driving the training vehicles around San Francisco for the past decade. And similarly, "L4 codegen" needs to prioritize code safety, which in part means ensuring humans can understand situations and step in to guide and debug when the tool gets stuck.

* We are still nowhere close to Level 5 "drive anywhere and under any conditions a human can." And IMHO it's not clear we ever will based purely on the technology and methods that got us to L4. There are other brain mechanisms at work that need to be modeled.

[1] https://www.cnbc.com/2023/11/06/cruise-confirms-robotaxis-re...

237. tveita+oe1[view] [source] 2025-06-02 22:32:05
>>gregor+(OP)
Some examples of prompt exchanges that seem representative:

https://claude-workerd-transcript.pages.dev/oauth-provider-t... ("Total cost: $6.45")!

https://github.com/cloudflare/workers-oauth-provider/commit/...

https://github.com/cloudflare/workers-oauth-provider/commit/...

The first transcript includes the cost; it would be interesting to know the ballpark of the total Claude spend on this library so far.

--

This is opportune for me, as I've been looking for a description of AI workflows from people of some presumed competency. You'd think there would be many, but it's hard to find anything reliable amidst all the hype. Is anyone live coding anything but todo lists?

antirez: https://antirez.com/news/144#:~:text=Yesterday%20I%20needed%...

tptacek: >>44163292

249. kenton+ik1[view] [source] [discussion] 2025-06-02 23:08:12
>>tkiolp+Lh1
As it happens, if this had been released a month later, it would have been a huge loss for us.

This OAuth library is a core component of the Workers Remote MCP framework, which we managed to ship the day before the Remote MCP standard dropped.

And because we were there and ready for customers right at the beginning, a whole lot of people ended up building their MCP servers on us, including some big names:

https://blog.cloudflare.com/mcp-demo-day/

(Also if I had spent a month on this instead of a few days, that would be a month I wasn't spending on other things, and I have kind of a lot to do...)

257. srhtft+os1[view] [source] [discussion] 2025-06-03 00:07:29
>>kenton+Gf
> It took a few days to produce this library -- it would almost certainly have taken me weeks to write it myself.

As mentioned in another comment >>44162965 I think this "few days" is unrealistic given the commit history. I think it would be more accurate to say it allowed you to do something in under one month that may have taken two. A definite improvement, but not a reduction of weeks to days.

Or is that history inaccurate?

270. kenton+7H1[view] [source] [discussion] 2025-06-03 02:29:54
>>srhtft+os1
I replied there: >>44165668
271. scherl+iH1[view] [source] 2025-06-03 02:31:04
>>gregor+(OP)
Is it really good form in TypeScript to make all functions async, even when they don't use await? Like this: https://github.com/cloudflare/workers-oauth-provider/blob/fe...
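
For context, a minimal (hypothetical) example of the pattern being asked about:

    // The pattern in question: `async` on a function with no `await`.
    // `async` still wraps the return value in a Promise and turns a
    // synchronous `throw` into a rejection, keeping the signature
    // stable if the body later becomes genuinely asynchronous.
    async function parseBearerToken(header: string): Promise<string | null> {
      const match = /^Bearer (.+)$/.exec(header);
      return match ? match[1] : null; // no await anywhere
    }

Whether that future-proofing justifies the extra promise wrapping is the style question being raised.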
285. dang+QM1[view] [source] 2025-06-03 03:39:34
>>gregor+(OP)
We changed the URL from https://github.com/cloudflare/workers-oauth-provider/commits... to the project page.
286. dang+pN1[view] [source] [discussion] 2025-06-03 03:44:29
>>mtlync+57
(this comment was originally a reply to >>44159167 , which summarized the readme in a confusing way.)
307. animex+uV1[view] [source] [discussion] 2025-06-03 05:20:46
>>ab_tes+4V1
https://github.com/cloudflare/workers-oauth-provider/commits...

Start at the bottom... they are in the commit messages, or sometimes the .md file.

324. aeneas+622[view] [source] 2025-06-03 06:26:37
>>gregor+(OP)
Very impressive, and at the same time very scary because who knows what security issues are hidden beneath the surface. Not even Claude knows! There is very reliable tooling like https://github.com/ory/hydra readily available that has gone through years of iteration and pentests. There are also lots of libraries - even for NodeJS - that have gone through certification.

In my view this is an antipattern of AI usage and „roll your own crypto“ reborn.

344. JimDab+Od2[view] [source] [discussion] 2025-06-03 08:24:44
>>risyac+xb2
> did he save any time though

Yes:

> It took me a few days to build the library with AI.

> I estimate it would have taken a few weeks, maybe months to write by hand.

>>44160208

> or just tried to prove a point that if you actually already know all details of impl you can guide llm to do it?

No:

> I was an AI skeptic. I thought LLMs were glorified Markov chain generators that didn't actually understand code and couldn't produce anything novel. I started this project on a lark, fully expecting the AI to produce terrible code for me to laugh at. And then, uh... the code actually looked pretty good. Not perfect, but I just told the AI to fix things, and it did. I was shocked.

https://github.com/cloudflare/workers-oauth-provider/?tab=re...

366. Etienn+1x2[view] [source] [discussion] 2025-06-03 11:41:01
>>kenton+aE
> Most people understand these two things to be, collectively, the "provider" side of OAuth

Citation needed. As another commenter already noted, the term "Provider" is rarely used in OAuth itself. When it is mentioned, it's typically in the context of OpenID Connect, where it refers specifically to the Authorization Server - not the Resource Server.

> the service provider, who is providing an API that requires authorization

That’s actually the Resource Server.

I understand that the current MCP spec [1] merges the Authorization Server and Resource Server roles, similar to what your library does. However, there are strong reasons to keep these roles separate [2].

In fact, the MCP spec authors acknowledge this [3], and the latest draft [4] makes implementing an Authorization Server optional for MCP services.

That’s why I’m being particular about clearly naming the roles your library supports in the OAuth flow. Going forward, MCP servers will always act as OAuth Resource Servers, but will only optionally act as Authorization Servers. Your library should make that distinction explicit.

[1]: https://modelcontextprotocol.io/specification/2025-03-26/bas...

[2]: https://aaronparecki.com/2025/04/03/15/oauth-for-model-conte...

[3]: https://github.com/modelcontextprotocol/modelcontextprotocol...

[4]: https://modelcontextprotocol.io/specification/draft/basic/au...
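
To illustrate the requested distinction, a hedged sketch: an MCP server acting only as a Resource Server can advertise an external Authorization Server via OAuth 2.0 Protected Resource Metadata (RFC 9728, which the MCP draft builds on). Field names follow that RFC; the URLs are placeholders:

    // Hypothetical sketch: the MCP server is only a Resource Server;
    // token issuance is delegated to a separate Authorization Server.
    const protectedResourceMetadata = {
      resource: "https://mcp.example.com",
      authorization_servers: ["https://auth.example.com"], // separate AS
      scopes_supported: ["mcp:read", "mcp:write"],
    };
    // Served at /.well-known/oauth-protected-resource so clients can
    // discover where to obtain tokens without the MCP server ever
    // implementing the Authorization Server role itself.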

381. impure+EF2[view] [source] [discussion] 2025-06-03 12:50:10
>>int_19+bx
Certainly my version of LM Studio (0.3.15) has a branch button at the end of every message [0].

[0] https://i.imgur.com/xZ2Fkn7.png

429. lovich+qm3[view] [source] [discussion] 2025-06-03 16:56:07
>>azemet+2D2
I use https://visualstudio.microsoft.com/services/intellicode/ in my IDE, which learns from your codebase, so it ends up saving me a ton of time once it's learned my patterns and starts suggesting entire classes hooked up to the correct properties in my EF models.

It still lets me have my own style preferences with the benefit of AI code generation. It bridged the barrier I had with code coming from Claude/ChatGPT/etc., where the style preferences were based on the wider internet's standards. This is probably a preference on the level of tabs vs spaces, but ¯\_(ツ)_/¯

439. kenton+Up3[view] [source] [discussion] 2025-06-03 17:15:09
>>autoex+Ro3
I'm not aware of any other OAuth provider libraries for Workers. Plenty of clients, but not providers -- implementing the provider side is not that common, historically. See my other comment:

>>44164204

442. trilli+Mu3[view] [source] [discussion] 2025-06-03 17:45:12
>>azemet+2D2
I put these in the Gemini Pro 2.5 system prompt and it's golden for Svelte.

https://svelte.dev/docs/llms

466. dang+DR3[view] [source] [discussion] 2025-06-03 19:58:18
>>aerhar+HG3
Can you please make your substantive points without name-calling like "This is hogwash"? Your comment would be just fine without that bit.

This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.

"When disagreeing, please reply to the argument instead of calling names. 'That is idiotic; 1 + 1 is 2, not 3' can be shortened to '1 + 1 is 2, not 3."

479. gcr+uH4[view] [source] 2025-06-04 04:29:07
>>gregor+(OP)
This library has some pretty bad security bugs. For example, the author forgot to check that redirect_uri matches one of the URLs listed during client registration.
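
The missing check amounts to something like this (a hypothetical sketch of the validation, not the actual patch):

    // Hypothetical sketch: an authorization request's redirect_uri must
    // exactly match one of the URIs registered by the client; otherwise
    // codes/tokens could be sent to an attacker-chosen URL.
    function assertRegisteredRedirectUri(
      client: { redirectUris: string[] },
      redirectUri: string,
    ): void {
      if (!client.redirectUris.includes(redirectUri)) {
        throw new Error("invalid redirect_uri"); // reject the request
      }
    }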

The CVE is uncharacteristically scornful: https://nvd.nist.gov/vuln/detail/cve-2025-4143

I’m glad this was patched, but it is a bit worrying for something “not vibe coded” tbh

492. jplehm+h37[view] [source] 2025-06-04 22:35:20
>>gregor+(OP)
Fascinating share and discussion. I read many of the comments, but extracted key takeaways using GPT here: https://chatgpt.com/share/6840c9e8-a498-8005-971b-3b91e09b9d... for anyone interested.
497. jdbohr+yY8[view] [source] [discussion] 2025-06-05 17:16:48
>>kenton+Og
YES!!!! I've actually been thinking about starting a studio specifically geared to turning complex RFPs and protocols into usable tools with AI-assisted coding. I built these using Cursor just to test how far it could go. I think the potential of doing that as a service is huge:

https://github.com/jdbohrman-tech/hermetic-mls https://github.com/jdbohrman-tech/roselite

I think it's funny that Roselite caused a huge meltdown among the Veilid team simply because they are weirdly adamant about no AI assistance. They even called it "plagiarism".

513. nipah+tKp[view] [source] [discussion] 2025-06-12 14:20:15
>>kenton+Og
Your estimate may be right, but maybe there is also a reason why it is right: https://neilmadden.blog/2025/06/06/a-look-at-cloudflares-ai-...

Maybe it is because (and I'm quoting that article) the library is still lacking things it should have that you managed to accomplish this task in "a few days" instead of "a few weeks, maybe months".

Maybe the bottleneck was not your typing speed, but the [specific knowledge] needed to build that system. If you know something well enough, you can build it much faster: as when rebuilding something from scratch, you are faster because you already know the paths. In which case, my question would be: would you not have written this just as fast, or at least more securely and soundly, if you had complete knowledge of the system first?

Because contrary to LLMs, humans can actually improve and learn when they do things, and they don't when they don't do things. Is not knowing the code to the full extent worth the time "gained" by using the LLM to write it?

I think it's very hard to estimate those other aspects of the thing.
