zlacker

[return to "My AI skeptic friends are all nuts"]
1. grey-a+ba[view] [source] 2025-06-02 22:10:44
>>tablet+(OP)
I’d love to see the authors of effusive praise for generative AI, like this piece, prove the supposedly unlimited power of their tools in code. If GAI (or agents, or whatever comes next …) is so effective, it should be quite simple to prove it: create an AI-only company and in short order produce huge amounts of serviceable code that does useful things. So far I’ve seen no sign of this, and the best use case seems to be generating text or artwork which fools humans into thinking it has coherent meaning, as our minds love to fill gaps and spot patterns even where there are none. It’s also pretty good at reproducing things it has seen, with variations - that can be useful.

So far, in my experience watching small to medium-sized companies try to use it for real work, it has been occasionally useful for exploring APIs, surfacing odd bits of knowledge, etc., but overall it has wasted more time than it has saved. I see very few signs of progress.

The time has come for LLM users to put up or shut up: if it’s so great, stop telling us about it - show us the code it generated on its own, and put that code to use.

◧◩
2. marxis+ne[view] [source] 2025-06-02 22:36:28
>>grey-a+ba
I think we're talking past each other. There's always been a threshold: above it, code changes are worth the effort; below it, they sit in backlog purgatory. AI tools so far seem to lower implementation costs, moving the threshold down so more backlog items become viable. The "5x productivity" crowd is excited about this expanded scope, while skeptics correctly note the highest value work hasn't fundamentally changed.

I think what's happening is two groups using "productivity" to mean completely different things: "I can implement 5x more code changes" vs "I generate 5x more business value." Both experiences are real, but they're not the same thing.

https://peoplesgrocers.com/en/writing/ai-productivity-parado...

◧◩◪
3. yencab+sf[view] [source] 2025-06-02 22:42:11
>>marxis+ne
You seem to think (in the left column) that generating 5x more code results in better code. I highly doubt this.
◧◩◪◨
4. sbarre+Sm[view] [source] 2025-06-02 23:31:17
>>yencab+sf
It depends?

There's certainly a lot of code that needs to be written in companies that is simple and straightforward, and where LLMs are absolutely capable of generating code as good as what your average junior or intermediate developer would have written.

And of course there are higher complexity tasks where the LLM will completely face plant.

So the smart company chooses carefully where to apply the LLM, and possibly does get 5x more code that is "better" in the sense that 5x more straightforward tickets get closed and shipped, which is better than having fewer tickets closed and shipped.

◧◩◪◨⬒
5. yencab+xn[view] [source] 2025-06-02 23:35:06
>>sbarre+Sm
That wasn't the argument. The argument is that someone using an LLM to create 5x more code will achieve things like "Adding robust error handling" and "Cleaner abstractions".
◧◩◪◨⬒⬓
6. marxis+ty[view] [source] 2025-06-03 01:10:16
>>yencab+xn
Just to clarify: when I said 5x more code changes, I was thinking of "edit" operations.

My intuition is that the long tail of low-value changes/edits will skew fairly code-size neutral.

A concrete example from this week of "adding robust error handling" in TypeScript:

I ask the LLM to look at these files: see how there is one big try/catch? Now that I have the code working, it's clear there are two pretty different failure domains inside it. Can you split up the try/catch (which means hoisting some variable declarations outside the block scope)?

This is a Cursor rule for me (`@split-failure-domain.mdc`) because of how often this comes up: make some RPCs, then validate the desired state transition.

Then I update the placeholder comment with my prediction of the failure rate.
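Roughly, the shape of the result looks like this (a minimal sketch only; the helper names and the `RemoteState` type are hypothetical stand-ins, not the actual codebase):

    // Hypothetical stand-ins for the real RPC and validation code
    interface RemoteState { version: number }
    declare function fetchRemoteState(id: string): Promise<RemoteState>;
    declare function computeTransition(s: RemoteState): RemoteState;
    declare function validateTransition(prev: RemoteState, next: RemoteState): void;
    declare function commit(next: RemoteState): Promise<void>;

    async function updateState(id: string): Promise<void> {
      // Failure domain 1: the RPC. `remote` is hoisted out of the
      // block scope so the second try/catch can see it.
      let remote: RemoteState;
      try {
        remote = await fetchRemoteState(id);
      } catch (err) {
        console.error("RPC failed", err); // placeholder: predicted failure rate goes here
        return;
      }

      // Failure domain 2: validating and committing the state transition.
      const next = computeTransition(remote);
      try {
        validateTransition(remote, next);
        await commit(next);
      } catch (err) {
        console.error("state transition rejected", err);
      }
    }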

I "changed" the code, but the diff is +9/-6.

When I'm working on higher-complexity problems, I tend to be closer to the edge of my understanding. Once I get a solution, very often I can simplify the code. There are many, many ways to write the exact same program, and fewer of them make the essential complexity obvious. And when you shift things around in exactly the kind of mechanical-transformation way that LLMs can speed up... then your diff is not that big. It might even be negative.
