zlacker

[return to "ChatGPT Containers can now run bash, pip/npm install packages and download files"]
1. behnam+sj[view] [source] 2026-01-26 20:58:52
>>simonw+(OP)
I wonder if the era of dynamic programming languages is over. Python/JS/Ruby/etc. were good tradeoffs when saving developer time was the main concern. But now that most code is written by LLMs, it's as "hard" for an LLM to write Python as it is to write Rust or Go (assuming enough training data on the language, ofc; LLMs still can't write Gleam/Janet/Common Lisp/etc.).

Esp. with Go's quick compile times, I can see myself using it more and more, even for one-off scripts that would otherwise have been Python or Bash. Plus, I get a single binary that I can copy to other systems without problems.
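
E.g. a throwaway fetcher like this (just a sketch; the fetch.go name is made up, but `go run` and GOOS/GOARCH cross-compiling are standard Go tooling):

    // fetch.go: a one-off script; `go run fetch.go <url>` compiles and runs it in one step.
    // Cross-compile with e.g. `GOOS=linux GOARCH=amd64 go build fetch.go` for a portable binary.
    package main

    import (
        "fmt"
        "io"
        "net/http"
        "os"
    )

    func main() {
        if len(os.Args) < 2 {
            fmt.Fprintln(os.Stderr, "usage: fetch <url>")
            os.Exit(1)
        }
        resp, err := http.Get(os.Args[1])
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer resp.Body.Close()
        io.Copy(os.Stdout, resp.Body) // dump the response body, like a tiny curl
    }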

Compiled is back?

◧◩
2. bopbop+jV[view] [source] 2026-01-27 00:22:14
>>behnam+sj
> But now that most code is written by LLMs

Got anything to back up this wild statement?

◧◩◪
3. dankwi+AY[view] [source] 2026-01-27 00:50:29
>>bopbop+jV
My team, my colleagues elsewhere in software dev, and I are all vibe coding. It's so much faster.
◧◩◪◨
4. manish+Y41[view] [source] 2026-01-27 01:34:52
>>dankwi+AY
If I may ask, does the code produced by the LLM follow best practices or established patterns? And what mental model do you use to understand your codebase?

Please know that I am asking out of curiosity and do not intend any disrespect.

◧◩◪◨⬒
5. mjevan+C71[view] [source] 2026-01-27 01:57:38
>>manish+Y41
Think of the LLM as a slightly lossy compression algorithm fed by various pattern classifiers that weight and bin inputs and outputs.

The user of the LLM provides a new input, which may or may not closely match the smudged-together inputs it was trained on, and the model produces an output that follows the same general pattern as the outputs that would be expected across the training dataset.

We aren't anywhere near general intelligence yet.
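
To make the caricature concrete, here's a toy sketch of the analogy only (not of how a transformer actually works): store (input, output) pairs and blend the stored outputs, weighting each by how closely its input matches the query.

    // caricature.go: a deliberately crude model of "lossy pattern lookup".
    package main

    import (
        "fmt"
        "math"
    )

    type example struct{ input, output float64 }

    // predict blends the memorized outputs, giving more weight to examples
    // whose inputs sit closer to the query (smudged-together recall).
    func predict(memory []example, query float64) float64 {
        var num, den float64
        for _, ex := range memory {
            w := math.Exp(-math.Abs(query - ex.input)) // closer inputs weigh more
            num += w * ex.output
            den += w
        }
        return num / den
    }

    func main() {
        memory := []example{{1, 2}, {2, 4}, {3, 6}} // "training data": output = 2*input
        fmt.Println(predict(memory, 2.5))           // in-pattern query: lands near the expected 5
        fmt.Println(predict(memory, 10))            // far outside the data: stays stuck near the memorized outputs, nowhere near 20
    }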
