zlacker

[return to "My AI skeptic friends are all nuts"]
1. davidc+K8 2025-06-02 22:01:46
>>tablet+(OP)
>If you were trying and failing to use an LLM for code 6 months ago †, you’re not doing what most serious LLM-assisted coders are doing.

Here’s the thing from the skeptic’s perspective: this statement keeps getting made on a rolling basis. Six months ago, if I wasn’t using the newest, life-changing LLM of the moment, I was also doing it wrong and being a Luddite.

It creates a never-ending treadmill of boy-who-cried-LLM. Why should I believe anything outlined in the article is transformative now, when all the same vague claims about productivity increases were being made about the LLMs of six months ago, which we now all agree are bad?

I don’t really know what, at this point, would actually unseat this epistemic prior for me.

In six months, I predict the author will again think that the LLM products of six months ago (i.e., today’s) were actually not very useful and didn’t live up to the hype.

2. idlewo+gr 2025-06-03 00:04:45
>>davidc+K8
An exponential curve looks locally the same at all points in time. For a very long period of time, computers were always vastly better than they were a year ago, and that wasn't because the computer you'd bought the year before was junk.

Consider that what you're reacting to is a symptom of genuine, rapid progress.
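
(To make the “locally the same” point concrete with a toy model, assuming purely for illustration that capability grows as a pure exponential f(t) = e^{kt}: then f(t + d) / f(t) = e^{k(t + d) - kt} = e^{kd} for every t, so the relative jump over any fixed window d is identical no matter when you look. On that assumption, the model from d months back always looks equally obsolete, even though the trend itself never changed.)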

3. Retr0i+9s 2025-06-03 00:14:03
>>idlewo+gr
I don't think anyone's contesting that LLMs are better now than they were previously.

4. Neverm+Ms 2025-06-03 00:19:04
>>Retr0i+9s
> It creates a never-ending treadmill of boy-who-cried-LLM.

The crying-wolf reference only makes sense as a soft claim that LLMs, better or not, are not getting better in important ways.

Not a view I hold.

5. Retr0i+sv 2025-06-03 00:43:11
>>Neverm+Ms
The implicit claim is just that they’re still not good enough (for whatever use cases the claimant had in mind).