zlacker

[return to "ChatGPT Containers can now run bash, pip/npm install packages and download files"]
1. behnam+sj[view] [source] 2026-01-26 20:58:52
>>simonw+(OP)
I wonder if the era of dynamic programming languages is over. Python/JS/Ruby/etc. were good tradeoffs when developer time mattered. But now that most code is written by LLMs, it's as "hard" for the LLM to write Python as it is to write Rust/Go (assuming enough training data on the language ofc; LLMs still can't write Gleam/Janet/CommonLisp/etc.).

Esp. with Go's quick compile time, I can see myself using it more and more even in my one-off scripts that would have used Python/Bash otherwise. Plus, I get a binary that I can port to other systems w/o problem.

Compiled is back?

◧◩
2. jacque+uK[view] [source] 2026-01-26 23:17:55
>>behnam+sj
> But now that most code is written by LLMs

Is this true? It seems to be a massive assumption.

◧◩◪
3. fooker+ZO[view] [source] 2026-01-26 23:44:21
>>jacque+uK
By lines of code, almost by an order of magnitude.

Some of the code is janky garbage, but that’s what most code it. There’s no use pearl clutching.

Human engineering time is better spent at figuring out which problems to solve than typing code token by token.

Identifying what to work on, and why, is a great research skill to have and I’m glad we are getting to realistic technology to make that a baseline skill.

◧◩◪◨
4. jacque+wP[view] [source] 2026-01-26 23:47:33
>>fooker+ZO
Well, you will somehow have to turn that 'janky garbage' into quality code, who will do that then?
◧◩◪◨⬒
5. fooker+0S[view] [source] 2026-01-27 00:01:44
>>jacque+wP
For most code, this never happens in the real world.

The vast majority of code is garbage, and has been for several decades.

◧◩◪◨⬒⬓
6. bdangu+7Z[view] [source] 2026-01-27 00:54:17
>>fooker+0S
This type of comments get downvoted the most on HN but it is absolute truth, most human-written code is “subpar” (trying to be nice and not say garbage). I have been working as a contractor for many years and code I’ve seen is just… hard to put it into words.

so much discussion here on HN which critiques “vibe codes” etc implies that human would have written it better which is vast vast majority is simply not the case

◧◩◪◨⬒⬓⬔
7. fooker+D81[view] [source] 2026-01-27 02:07:56
>>bdangu+7Z
I have worked on some of the most supposedly reliable codebases on earth (compilers) for several decades, and most of the code in compilers is pretty bad.

And most of the code the compiler is expected to compile, seen from the perspective of fixing bugs and issues with compilers, is absolutely terrible. And the day that can be rewritten or improved reliably with AI can't come fast enough.

◧◩◪◨⬒⬓⬔⧯
8. jacque+aj1[view] [source] 2026-01-27 03:44:04
>>fooker+D81
I honestly do not see how training AI on 'mountains of garbage' would have any other outcome than more garbage.

I've seen lots of different codebases from the inside, some good some bad. As a rule smaller + small team = better and bigger + more participants = worse.

◧◩◪◨⬒⬓⬔⧯▣
9. fooker+Jj1[view] [source] 2026-01-27 03:48:10
>>jacque+aj1
The way it seems to work now is to task agents to write a good test suite. AI is much better at this than it is at writing code from scratch.

Then you just let it iterate until tests pass. If you are not happy with the design, suggest a newer design and let it rip.

All this is expensive and wasteful now, but stuff becoming 100-1000x cheaper has happened for every technology we have invented.

◧◩◪◨⬒⬓⬔⧯▣▦
10. jacque+gF1[view] [source] 2026-01-27 07:35:42
>>fooker+Jj1
Interesting, so this is effectively 'guided closed loop' software development with the testset as the control.

It gives me a bit of a 'turtles all the way down' feeling because if the test set can be 'good' why couldn't the code be good as well?

I'm quite wary of all of this, as you've probably gathered by now: the idea that you can toss a bunch of 'pass' tests into a box and then generate code until all of the tests pass is effectively a form of fuzzing, you've got some thing that passes your test set, but it may do a lot more than just that and your test set is not going to be able to exhaustively enumerate the negative cases.

This could easily result in 'surprise functionality' that you did not anticipate during the specification phase. The only way to deal with that then is to audit the generated code, which I presume would then be farmed out to yet another LLM.

This all places a very high degree of trust into a chain of untrusted components and that doesn't sit quite right with me. It probably means my understanding of this stuff is still off.

◧◩◪◨⬒⬓⬔⧯▣▦▧
11. fooker+XF1[view] [source] 2026-01-27 07:41:18
>>jacque+gF1
You are right.

What you are missing is that the thing driving this untrusted pile of hacks keep getting better at a rapid pace.

So much that the quality of the output is passable now, mimicking man-years of software engineering in a matter of hours.

If you don’t believe me, pick a project that you have always wanted to build from scratch and let cursor/claude code have a go at it. You get to make the key decisions, but the quality of work is pretty good now, so much that you don’t really have to double check much.

◧◩◪◨⬒⬓⬔⧯▣▦▧▨
12. jacque+7R1[view] [source] 2026-01-27 09:04:07
>>fooker+XF1
Thank you, I will try that and see where it leads. This all suggests a massive downward adjustment for any capitalized software is on the menu.
[go to top]