zlacker

[return to "Cloudflare builds OAuth with Claude and publishes all the prompts"]
1. stego-+6b[view] [source] 2025-06-02 15:27:21
>>gregor+(OP)
On the one hand, I would expect LLMs to be able to crank out such code when prompted by skilled engineers who also understand prompting these tools correctly. OAuth isn’t new, has tons of working examples to steal as training data from public projects, and in a variety of existing languages to suit most use cases or needs.

On the other hand, where I remain a skeptic is this constant banging-on that somehow this will translate into entirely new things - research, materials science, economies, inventions, etc - because that requires learning “in real time” from information sources you’re literally generating in that moment, not decades of Stack Overflow responses without context. That has been bandied about for years, with no evidence to show for it beyond specifically cherry-picked examples, often from highly-controlled environments.

I never doubted that, with competent engineers, these tools could be used to generate “new” code from past datasets. What I continue to doubt is the utility of these tools given their immense costs, both environmentally and socially.

◧◩
2. diggan+ao[view] [source] 2025-06-02 16:44:37
>>stego-+6b
> On the other hand, where I remain a skeptic is this constant banging-on that somehow this will translate into entirely new things - research, materials science, economies, inventions, etc

Does it even have to be able to do so? Just the ability to speed up exploration and validation based on what a human tells it to do is already enormously useful, depending on how much you can speed up those things, and how accurate it can be.

Too slow or too inaccurate and it'll have a strong slowdown factor. But once some threshold has been reached where it makes either of those things faster, I'd probably consider the whole thing "overall useful". But of course that isn't the full picture, and ignoring all the tradeoffs is kind of cheating; there are more things to consider too, as you mention.

I'm guessing we aren't quite over the threshold yet because it is still very young, all things considered, although the ecosystem is already pretty big. I feel like things generally tend to grow beyond their usefulness at first, and we're at that stage right now, with people shooting it in all kinds of directions to see what works and what doesn't.

◧◩◪
3. dingnu+hr[view] [source] 2025-06-02 16:59:14
>>diggan+ao
> Just the ability to speed up exploration and validation based on what a human tells it to do is already enormously useful, depending on how much you can speed up those things, and how accurate it can be.

The big question is: is it useful enough to justify the cost when the VC subsidies go away?

My phone recently offered me Gemini "now for free" and I thought "free for now, you mean. I better not get used to that. They should be required to call it a free trial."

◧◩◪◨
4. jsnell+Wx[view] [source] 2025-06-02 17:44:22
>>dingnu+hr
Inference is actually quite cheap. Like, a highly competitive LLM can cost 1/25th of a search query. And it is not due to inference being subsidized by VC money.

It's also getting cheaper all the time. Something like 1000x cheaper in the last two years at the same quality level, and there's not yet any sign of a plateau.

So it'd be quite surprising if the only long-term business model turned out to be subscriptions.
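As a rough illustration of what the "1000x cheaper in two years" figure quoted above would imply (taking that claim at face value, not verifying it), the compounded monthly price decline can be sketched as:

```python
# Implied monthly price decline if inference got ~1000x cheaper over 24 months.
# The 1000x figure is the commenter's claim, used here only for illustration.
total_factor = 1000.0
months = 24

# Compounded rate: the per-month factor that multiplies to 1000x over 24 months.
monthly_factor = total_factor ** (1 / months)

print(f"~{monthly_factor:.2f}x cheaper per month")
```

That works out to roughly 1.33x cheaper each month at the same quality level, which is why a plateau would be easy to spot if it arrived.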

◧◩◪◨⬒
5. Denzel+0D[view] [source] 2025-06-02 18:23:37
>>jsnell+Wx
Can you link to any sources that support your claim?
◧◩◪◨⬒⬓
6. jsnell+fI[view] [source] 2025-06-02 19:00:29
>>Denzel+0D
Sure. Here's something I'd written on the subject that I'd left lying in my drafts folder for a month, but I've now published just for you :)

https://www.snellman.net/blog/archive/2025-06-02-llms-are-ch...

It has links to public sources on the pricing of both LLMs and search, and explains why the low inference prices can't be due to the inference being subsidized. (And while there are other possible explanations, it includes a calculator for what the compound impact of all of those possible explanations could be.)

◧◩◪◨⬒⬓⬔
7. whilen+yX[view] [source] 2025-06-02 20:45:35
>>jsnell+fI
Just had a quick glance, but I think I found something to add to the Objection!-section of your post:

Brave's Search API is $3 CPM and includes Web search, Images, Videos, News, Goggles[0]. Anthropic's API is $10 CPM for Web search (and text only?), excluding any input/output tokens from your model of choice[1]; that'd be an additional $15 CPM, assuming 1KTok per request and Claude Sonnet 4 as a good model, so ~$25 CPM total.

So your default "Ratio (Search cost / LLM cost): 25.0x" seems to be more on the 0.12x side of things (Search cost / LLM cost). Mind you, I just skimmed everything in 10 minutes and have no experience using either API.

[0]: https://brave.com/search/api/

[1]: https://www.anthropic.com/pricing#anthropic-api
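The arithmetic in the comment above can be checked with a quick sketch (all figures are the per-1000-requests prices the commenter quotes, not independently verified):

```python
# Back-of-the-envelope check of the search-vs-LLM cost ratio quoted above.
# All CPM ($ per 1000 requests) figures come from the comment, not verified here.

brave_search_cpm = 3.00       # Brave Search API, quoted at $3 CPM
anthropic_search_cpm = 10.00  # Anthropic web search, quoted at $10 CPM
claude_token_cpm = 15.00      # assumed ~1K tokens/request on Claude Sonnet 4

llm_total_cpm = anthropic_search_cpm + claude_token_cpm  # ~$25 CPM
ratio = brave_search_cpm / llm_total_cpm                 # search cost / LLM cost

print(f"Search/LLM cost ratio: {ratio:.2f}x")  # → 0.12x
```

Under those quoted prices, the 0.12x figure checks out: $3 of search buys what ~$25 of LLM-plus-search spend costs, the inverse of the post's default 25.0x.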

[go to top]