zlacker

[return to "Which programming languages are most token-efficient?"]
1. solomo+sk 2026-01-12 04:09:59
>>tehnub+(OP)
I'm biased by my preferred style of programming languages but I think that pure statically typed functional languages are incredibly well suited for LLMs. The purity gives you referential transparency and static analysis powers that the LLM can leverage to stay correctly on task.

The high level declarative nature and type driven development style of languages like Haskell also make it really easy for an experienced developer to review and validate the output of the LLM.

Early on in the GPT era I had really bad experiences generating Haskell code with LLMs but I think that the combination of improved models, increased context size, and agentic tooling has allowed LLMs to really take advantage of functional programming.

2. eru+bl 2026-01-12 04:16:49
>>solomo+sk
I'm inclined to agree with you in principle, but there are far, far fewer Haskell examples in the training corpus than there are for JavaScript or Python.
3. kstrau+Ll 2026-01-12 04:21:40
>>eru+bl
And yet the models I've used have been great with Rust, which has a tiny fraction of the lines of code in the wild compared to JavaScript (or Python, PHP, Perl, C, or C++).
4. eru+Fm 2026-01-12 04:31:23
>>kstrau+Ll
I've also had decent experiences with Rust recently. I haven't done enough Haskell programming in the AI era to really say.

But it could be that different programming languages are a bit like different human languages for these models: when they have more than some threshold of training data, they can express their general problem solving skills in any of them? And then it's down to how much the compiler and linters can yell at them.

For Rust, I regularly tell them to make `clippy::pedantic` happy (and to tell me explicitly when they think the best fix is an ignore annotation in the code that disables a specific warning for a specific line).

Pedantic clippy is usually too... pedantic for humans, but it seems to work reasonably well with the agents. You can also add `clippy::cargo`, which isn't included in `clippy::pedantic`.

5. solomo+Xm 2026-01-12 04:34:10
>>eru+Fm
> But it could be that different programming languages are a bit like different human languages for these models: when they have more than some threshold of training data, they can express their general problem solving skills in any of them? And then it's down to how much the compiler and linters can yell at them.

I think this is exactly right.

6. jagged+uz 2026-01-12 06:44:58
>>solomo+Xm
Exactly my opinion - I think the more you lock down the "search space" with strong, opinionated tooling, the better LLMs perform. I think of it as the difference between running simulated annealing blind in search of a correct solution, versus running the same simulated annealing with heuristics and bounds that narrow the solution space.