zlacker

[return to "ChatGPT Containers can now run bash, pip/npm install packages and download files"]
1. behnam+sj[view] [source] 2026-01-26 20:58:52
>>simonw+(OP)
I wonder if the era of dynamic programming languages is over. Python/JS/Ruby/etc. were good tradeoffs when developer time mattered. But now that most code is written by LLMs, it's as "hard" for the LLM to write Python as it is to write Rust/Go (assuming enough training data on the language ofc; LLMs still can't write Gleam/Janet/CommonLisp/etc.).

Esp. with Go's quick compile time, I can see myself using it more and more even in my one-off scripts that would have used Python/Bash otherwise. Plus, I get a binary that I can port to other systems w/o problem.

Compiled is back?

◧◩
2. jacque+uK[view] [source] 2026-01-26 23:17:55
>>behnam+sj
> But now that most code is written by LLMs

Is this true? It seems to be a massive assumption.

◧◩◪
3. fooker+ZO[view] [source] 2026-01-26 23:44:21
>>jacque+uK
By lines of code, almost by an order of magnitude.

Some of the code is janky garbage, but that's what most code is. There's no use pearl-clutching.

Human engineering time is better spent at figuring out which problems to solve than typing code token by token.

Identifying what to work on, and why, is a great research skill to have, and I'm glad we're getting realistic technology that makes it a baseline skill.

◧◩◪◨
4. jacque+wP[view] [source] 2026-01-26 23:47:33
>>fooker+ZO
Well, you will somehow have to turn that 'janky garbage' into quality code. Who will do that, then?
◧◩◪◨⬒
5. fooker+0S[view] [source] 2026-01-27 00:01:44
>>jacque+wP
For most code, this never happens in the real world.

The vast majority of code is garbage, and has been for several decades.

◧◩◪◨⬒⬓
6. pharri+Lb1[view] [source] 2026-01-27 02:32:10
>>fooker+0S
So we should all work to become better programmers! What I'm seeing now is too many people giving up and saying "most code is bad, so I may as well pump out even worse code MUCH faster." People are chasing convenience and getting a far worse quality of life in exchange.
◧◩◪◨⬒⬓⬔
7. ben_w+sR1[view] [source] 2026-01-27 09:06:15
>>pharri+Lb1
I've seen all four quadrants of [good code, bad code] x [business success, business failure].

The real money we used to get paid was for business success, not directly for code quality. The quality metrics we told ourselves were closer to CV-driven development than anything the people with the money understood, let alone cared about, which in turn is why the term "technical debt" was coined: a way to try to get leadership to care about what we care about.

There are some domains where all that stuff we tell ourselves about quality absolutely does matter… but then there's the 278th small restaurant that wants a website with a menu, opening hours, and a table-booking service, without e.g. 1500 American corporations showing up in the cookie consent message to provide analytics it doesn't need but that come automatically pre-packaged with the off-the-shelf solution.

◧◩◪◨⬒⬓⬔⧯
8. antonv+Pa3[view] [source] 2026-01-27 16:42:56
>>ben_w+sR1
I’ve seen those quadrants too, because I’ve come into several companies to help clean up a mess they’ve gotten into with bad code that they can no longer ignore. It is a complete certainty that we’re going to start seeing a lot more of that.

One ironic thing about LLM-generated bad code is that churning out millions of lines just makes it less likely the LLM is going to be able to manage the results, because token capacity is neither unlimited nor free.

(Note I’m not saying all LLM code is bad; but so far the fully vibecoded stuff seems bad at any nontrivial scale.)

◧◩◪◨⬒⬓⬔⧯▣
9. fooker+Tj3[view] [source] 2026-01-27 17:16:04
>>antonv+Pa3
> because token capacity is neither unlimited nor free.

This is like dissing software from 2004 because it used 2gb extra memory.

In the last year, token context window increased by about 100x and halved in cost at the same time.

If this is the crux of your argument, technology advancement will render it moot.

◧◩◪◨⬒⬓⬔⧯▣▦
10. antonv+1X3[view] [source] 2026-01-27 19:47:19
>>fooker+Tj3
> In the last year, token context window increased by about 100x and halved in cost at the same time.

So? It's nowhere close to solving the issue.

I'm not anti-LLM. I'm very senior at a company that's had an AI-centric primary product since before the GPT explosion. But in order to navigate what's going on now, we need to understand the strengths and weaknesses of the technology currently, as well as what it's likely to be in the near, medium, and far future.

The cost of LLMs dealing with their own generated multi-million LOC systems is very unlikely to become tractable in the near future, and possibly not even medium-term. Besides, no-one has yet demonstrated an LLM-based system for even achieving that, i.e. resolving the technical debt that it created.

Don't let fanboyism get in the way of rationality.

◧◩◪◨⬒⬓⬔⧯▣▦▧
11. fooker+nB4[view] [source] 2026-01-27 22:21:09
>>antonv+1X3
> The cost of LLMs dealing with their own generated multi-million LOC systems is very unlikely to become tractable in the near future

If you have a concrete way to pose this problem, you'll find that there will be concrete solutions.

There is no way to demonstrate something as vague as "resolving the technical debt that it created".
