zlacker

[parent] [thread] 24 comments
1. fooker+(OP)[view] [source] 2026-01-26 23:44:21
By lines of code, almost by an order of magnitude.

Some of the code is janky garbage, but that’s what most code is. There’s no use pearl clutching.

Human engineering time is better spent figuring out which problems to solve than typing code token by token.

Identifying what to work on, and why, is a great research skill to have, and I’m glad we now have realistic technology to make that a baseline skill.

replies(1): >>jacque+x
2. jacque+x[view] [source] 2026-01-26 23:47:33
>>fooker+(OP)
Well, you will somehow have to turn that 'janky garbage' into quality code. Who will do that then?
replies(3): >>behnam+M2 >>fooker+13 >>tokioy+d8
3. behnam+M2[view] [source] [discussion] 2026-01-27 00:00:45
>>jacque+x
> who will do that then?

The next version of LLMs. Write with GPT 5.2 now, improve the quality using 5.3 in a couple of months; best of both worlds.

4. fooker+13[view] [source] [discussion] 2026-01-27 00:01:44
>>jacque+x
For most code, this never happens in the real world.

The vast majority of code is garbage, and has been for several decades.

replies(2): >>bdangu+8a >>pharri+Mm
5. tokioy+d8[view] [source] [discussion] 2026-01-27 00:38:25
>>jacque+x
You don't really have to.
6. bdangu+8a[view] [source] [discussion] 2026-01-27 00:54:17
>>fooker+13
This type of comment gets downvoted the most on HN, but it is the absolute truth: most human-written code is “subpar” (trying to be nice and not say garbage). I have been working as a contractor for many years, and the code I’ve seen is just… hard to put into words.

So much of the discussion here on HN critiquing “vibe code” etc. implies that a human would have written it better, which in the vast majority of cases is simply not true.

replies(1): >>fooker+Ej
7. fooker+Ej[view] [source] [discussion] 2026-01-27 02:07:56
>>bdangu+8a
I have worked on some of the most supposedly reliable codebases on earth (compilers) for several decades, and most of the code in compilers is pretty bad.

And most of the code the compiler is expected to compile, seen from the perspective of fixing bugs and issues with compilers, is absolutely terrible. And the day that can be rewritten or improved reliably with AI can't come fast enough.

replies(1): >>jacque+bu
8. pharri+Mm[view] [source] [discussion] 2026-01-27 02:32:10
>>fooker+13
So we should all work to become better programmers! What I'm seeing now is too many people giving up and saying "most code is bad, so I may as well pump out even worse code MUCH faster." People are chasing convenience and getting a far worse quality of life in exchange.
replies(2): >>fooker+Vn >>ben_w+t21
9. fooker+Vn[view] [source] [discussion] 2026-01-27 02:43:30
>>pharri+Mm
I disagree; most code is not worth improving.

I would rather make N bad prototypes to understand the feasibility of solving N problems than try to write beautiful code for one misguided problem that may turn out to be a dead end.

There are a few orders of magnitude more problems worth solving than you can write good code for. Your time is your most important resource; writing needlessly robust code, checking for situations that your prototype will never encounter, just wastes time when it gets thrown away.

A good analogy for this is how we built bridges in the Roman empire, versus how we do it now.

replies(1): >>pharri+Tr
10. pharri+Tr[view] [source] [discussion] 2026-01-27 03:20:29
>>fooker+Vn
Have you ever been frustrated with software before? Has a computer program ever wasted your time by being buggy, obviously too slow or otherwise too resource intensive, having a poorly thought out interface, etc?
replies(1): >>fooker+lu
11. jacque+bu[view] [source] [discussion] 2026-01-27 03:44:04
>>fooker+Ej
I honestly do not see how training AI on 'mountains of garbage' would have any other outcome than more garbage.

I've seen lots of different codebases from the inside, some good, some bad. As a rule, smaller + small team = better, and bigger + more participants = worse.

replies(2): >>simonw+Bu >>fooker+Ku
12. fooker+lu[view] [source] [discussion] 2026-01-27 03:44:49
>>pharri+Tr
Yes. I am, however, not willing to spend money to get it fixed.

From the other side, the vast majority of customers will happily take the cheap/free/ad-supported buggy software. This is why we have all these random Google apps, for example.

Take a look at the bug tracker of any large open source codebase; there will be a few tens of thousands of reported bugs. It is worse for closed corporate codebases. The economics of writing good code, or of getting bugs fixed, don't make sense until a paying customer complains loudly.

13. simonw+Bu[view] [source] [discussion] 2026-01-27 03:46:45
>>jacque+bu
That's why the major AI labs are really careful about the code they include in the training runs.

The days of indiscriminately scraping every scrap of code on the internet and pumping it all in are long gone, from what I can tell.

replies(2): >>fooker+Nu >>jacque+vz
14. fooker+Ku[view] [source] [discussion] 2026-01-27 03:48:10
>>jacque+bu
The way it seems to work now is to task agents with writing a good test suite. AI is much better at this than it is at writing code from scratch.

Then you just let it iterate until the tests pass. If you are not happy with the design, suggest a new one and let it rip.

All this is expensive and wasteful now, but becoming 100-1000x cheaper is something that has happened with every technology we have invented.
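
Concretely, the harness is little more than a loop. A minimal sketch (Python; run_tests, ask_model, and apply_patch are stand-ins for whatever test runner and LLM plumbing you actually use, not a real API):

    import subprocess

    def run_tests() -> tuple[bool, str]:
        # Run the suite quietly; capture output so the model can see failures.
        result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        return result.returncode == 0, result.stdout + result.stderr

    def iterate_until_green(ask_model, apply_patch, max_rounds=20):
        # ask_model(prompt) -> patch text; apply_patch(patch) edits the repo.
        for _ in range(max_rounds):
            passed, log = run_tests()
            if passed:
                return True
            apply_patch(ask_model("Tests are failing:\n" + log + "\nPropose a fix."))
        return False  # budget exhausted; hand back to a human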

replies(1): >>jacque+hQ
15. fooker+Nu[view] [source] [discussion] 2026-01-27 03:48:59
>>simonw+Bu
Do you have pointers to this?

Would be a great resource to understand what works and what doesn't.

replies(1): >>simonw+Za1
16. jacque+vz[view] [source] [discussion] 2026-01-27 04:33:50
>>simonw+Bu
Well, if, as the OP points out, it is 'all garbage', they don't have a whole lot of room to discriminate.
17. jacque+hQ[view] [source] [discussion] 2026-01-27 07:35:42
>>fooker+Ku
Interesting, so this is effectively 'guided closed-loop' software development with the test set as the control.

It gives me a bit of a 'turtles all the way down' feeling, because if the test set can be 'good', why couldn't the code be good as well?

I'm quite wary of all of this, as you've probably gathered by now: the idea that you can toss a bunch of 'pass' tests into a box and then generate code until all of the tests pass is effectively a form of fuzzing. You've got something that passes your test set, but it may do a lot more than just that, and your test set is not going to be able to exhaustively enumerate the negative cases.

This could easily result in 'surprise functionality' that you did not anticipate during the specification phase. The only way to deal with that then is to audit the generated code, which I presume would then be farmed out to yet another LLM.

This all places a very high degree of trust into a chain of untrusted components and that doesn't sit quite right with me. It probably means my understanding of this stuff is still off.
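
A toy illustration of what I mean: two implementations that are indistinguishable to the test set, one of which quietly does more than anyone specified:

    def check(f):
        assert f(3) == 3
        assert f(-3) == 3
        assert f(0) == 0

    def abs_plain(x):
        return x if x >= 0 else -x

    def abs_surprise(x):
        if x == 12345:  # 'surprise functionality' the test set never probes
            print("side effect nobody asked for")
        return x if x >= 0 else -x

    check(abs_plain)     # passes
    check(abs_surprise)  # also passes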

replies(1): >>fooker+YQ
18. fooker+YQ[view] [source] [discussion] 2026-01-27 07:41:18
>>jacque+hQ
You are right.

What you are missing is that the thing driving this untrusted pile of hacks keeps getting better at a rapid pace.

So much so that the quality of the output is passable now, mimicking man-years of software engineering in a matter of hours.

If you don’t believe me, pick a project that you have always wanted to build from scratch and let Cursor/Claude Code have a go at it. You get to make the key decisions, but the quality of the work is pretty good now, so much so that you don’t really have to double-check much.

replies(1): >>jacque+821
19. jacque+821[view] [source] [discussion] 2026-01-27 09:04:07
>>fooker+YQ
Thank you, I will try that and see where it leads. This all suggests that a massive downward adjustment for any capitalized software is on the menu.
20. ben_w+t21[view] [source] [discussion] 2026-01-27 09:06:15
>>pharri+Mm
I've seen all four quadrants of [good code, bad code] x [business success, business failure].

The real money we used to get paid was for business success, not directly for code quality. The quality metrics we told ourselves were closer to CV-driven development than anything the people with the money understood, let alone cared about; that, in turn, is why the term "technical debt" was coined, as a way to try to get the leadership to care about what we care about.

There are some domains where all that stuff we tell ourselves about quality absolutely does matter… but then there's the 278th small restaurant that wants a website with a menu, opening hours, and a table booking service, without e.g. 1500 American corporations showing up in the cookie consent message to provide analytics they don't need but that come automatically pre-packaged with the off-the-shelf solution.

replies(1): >>antonv+Ql2
21. simonw+Za1[view] [source] [discussion] 2026-01-27 10:13:41
>>fooker+Nu
Not really, sadly. It's more an intuition built up from following the space - the AI labs are still pretty secretive about their training mix.
22. antonv+Ql2[view] [source] [discussion] 2026-01-27 16:42:56
>>ben_w+t21
I’ve seen those quadrants too, because I’ve come into several companies to help clean up a mess they’ve gotten into with bad code that they can no longer ignore. It is a complete certainty that we’re going to start seeing a lot more of that.

One ironic thing about LLM-generated bad code is that churning out millions of lines just makes it less likely that the LLM will be able to manage the result, because token capacity is neither unlimited nor free.

(Note I’m not saying all LLM code is bad; but so far the fully vibecoded stuff seems bad at any nontrivial scale.)

replies(1): >>fooker+Uu2
23. fooker+Uu2[view] [source] [discussion] 2026-01-27 17:16:04
>>antonv+Ql2
> because token capacity is neither unlimited nor free.

This is like dissing software from 2004 because it used 2 GB of extra memory.

In the last year, token context windows increased by about 100x and halved in cost at the same time.

If this is the crux of your argument, technology advancement will render it moot.

replies(1): >>antonv+283
24. antonv+283[view] [source] [discussion] 2026-01-27 19:47:19
>>fooker+Uu2
> In the last year, token context window increased by about 100x and halved in cost at the same time.

So? It's nowhere close to solving the issue.

I'm not anti-LLM. I'm very senior at a company that's had an AI-centric primary product since before the GPT explosion. But in order to navigate what's going on now, we need to understand the strengths and weaknesses of the technology currently, as well as what it's likely to be in the near, medium, and far future.

The cost of LLMs dealing with their own generated multi-million-LOC systems is very unlikely to become tractable in the near future, and possibly not even in the medium term. Besides, no one has yet demonstrated an LLM-based system that even achieves that, i.e. resolves the technical debt it created.
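
A back-of-envelope sketch of the scale involved (all numbers here are illustrative assumptions, not vendor quotes):

    loc = 5_000_000                  # a multi-million LOC system
    tokens_per_line = 10             # rough average for source code
    total = loc * tokens_per_line    # 50M tokens
    window = 1_000_000               # a generous current context window
    print(total // window)           # ~50 separate context loads just to read it once
    price_per_m_input = 3.0          # assumed $ per 1M input tokens
    print(total / 1e6 * price_per_m_input)  # ~$150 per full read-through,
                                            # before any reasoning, edits, or re-reads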

Don't let fanboyism get in the way of rationality.

replies(1): >>fooker+oM3
25. fooker+oM3[view] [source] [discussion] 2026-01-27 22:21:09
>>antonv+283
> The cost of LLMs dealing with their own generated multi-million LOC systems is very unlikely to become tractable in the near future

If you have a concrete way to pose this problem, you'll find that there will be concrete solutions.

There is no way to demonstrate something as vague as "resolving the technical debt that it created".
