zlacker

[return to "My AI skeptic friends are all nuts"]
1. davidc+K8[view] [source] 2025-06-02 22:01:46
>>tablet+(OP)
>If you were trying and failing to use an LLM for code 6 months ago †, you’re not doing what most serious LLM-assisted coders are doing.

Here’s the thing from the skeptic perspective: this statement keeps getting made on a rolling basis. Six months ago, if I wasn’t using the newest, life-changing LLM of the moment, I was also doing it wrong and being a luddite.

It creates a never-ending treadmill of boy-who-cried-LLM. Why should I believe anything outlined in the article is transformative now, when all the same vague claims about productivity increases were being made about the LLMs from 6 months ago, which we now all agree are bad?

I don’t really know what would actually unseat this epistemic prior for me at this point.

In six months, I predict the author will again think the LLM products of six months ago (i.e., today’s) were actually not very useful and didn’t live up to the hype.

2. killer+sw[view] [source] 2025-06-03 00:52:32
>>davidc+K8
Bullshit. We have absolute numbers, not just vibes.

The top of SWE-bench Verified leaderboard was at around 20% in mid-2024, i.e. AI was failing at most tasks.

Now it's at 70%.

Clearly it's objectively better at tackling typical development tasks.

And it's not like it went from 2% to 7%.

3. lexand+Oy[view] [source] 2025-06-03 01:13:27
>>killer+sw
Isn't SWE-bench based on public GitHub issues? Wouldn't the increase in performance also be explained by continuing to train on newer scraped GitHub data, aka training on the test set?

The pressure on AI companies to release a new SOTA model is real, as the technology rapidly becomes commoditised. I think people have good reason to be skeptical of these benchmark results.
