zlacker

[parent] [thread] 10 comments
1. adastr+(OP)[view] [source] 2026-01-20 01:12:16
I don’t think that assumption holds. For example, only recently have agents started getting Rust code right on the first try, but that hasn’t mattered in the past because the Rust compiler and linters give such good feedback that the agent immediately fixes whatever goof it made.

This does fill up context a little faster, but (1) not as much as debugging the problem would have in a dynamic language, and (2) better agentic frameworks are coming that “rewrite” context history for dynamic, on-the-fly context compression.
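A minimal sketch of that feedback loop (illustrative code, not from the thread): a classic first-attempt goof is rejected at compile time with a precise diagnostic, so no runtime debugging session ever enters the agent's context.

```rust
// Sketch of the compile-time feedback loop described above.
// A common first-try goof: returning a reference to a local value.
//
//     fn greeting() -> &String {
//         &String::from("hello")
//     }
//
// rustc rejects this before anything runs, with error E0106
// ("missing lifetime specifier") and a hint to return an owned value.
// The one-line fix the diagnostic points toward:
fn greeting() -> String {
    String::from("hello")
}

fn main() {
    // No debugging needed; the compiler already did the work.
    assert_eq!(greeting(), "hello");
}
```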

replies(4): >>root_a+U3 >>Punchy+PD >>bevr13+tw1 >>Growin+xH1
2. root_a+U3[view] [source] 2026-01-20 01:54:04
>>adastr+(OP)
> that hasn’t mattered in the past because the rust compiler and linters give such good feedback that it immediately fixes whatever goof it made.

This isn't even true today. Source: I've been a heavy user of Claude Code and Gemini with Rust for almost two years now.

replies(2): >>adastr+If >>ekidd+Ik
3. adastr+If[view] [source] [discussion] 2026-01-20 03:41:05
>>root_a+U3
I have no problems with Rust and Claude Code, and I use it on a daily basis.
4. ekidd+Ik[view] [source] [discussion] 2026-01-20 04:39:57
>>root_a+U3
Yeah, I have zero problem getting Opus 4.5 to write high-quality Rust code. And I'm picky.
5. Punchy+PD[view] [source] 2026-01-20 08:09:07
>>adastr+(OP)
so you're saying... the assumption actually holds
replies(1): >>adastr+6H1
6. bevr13+tw1[view] [source] 2026-01-20 15:08:15
>>adastr+(OP)
> because the rust compiler and linters give such good feedback that it immediately fixes whatever goof it made.

I still experience agents slipping in a `todo!` and other hacks to get code to compile, lint, and pass tests.

The loop with tests and doc tests is really nice, agreed, but it'll still shit out bad code.
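The failure mode being described can be sketched like this (illustrative code, not from the thread): `todo!()` evaluates to the never type `!`, which coerces to any return type, so a stubbed function compiles and lints cleanly and only blows up when it's actually called.

```rust
// `todo!()` returns `!`, which coerces to any type, so this stub
// satisfies the type checker and clippy alike.
fn parse_port(s: &str) -> Result<u16, String> {
    let _ = s;
    todo!("stub left in to get the build green")
}

fn main() {
    // The build was green; the panic only shows up at call time.
    let outcome = std::panic::catch_unwind(|| parse_port("8080"));
    assert!(outcome.is_err());
}
```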

replies(1): >>adastr+YB2
7. adastr+6H1[view] [source] [discussion] 2026-01-20 15:56:23
>>Punchy+PD
No, it’s the exact opposite of the assumption. It doesn’t matter how well represented the language is in the training data, as long as the surrounding infrastructure is good.
8. Growin+xH1[view] [source] 2026-01-20 15:58:10
>>adastr+(OP)
> only recently have agents started getting Rust code right on the first try

This is such a silly thing to say. Either you set the bar so low that "hello world" qualifies, or you expect LLMs to be able to reason about lifetimes, which they clearly cannot. But LLMs were never very good at whole-program reasoning in any language.

I don't see this language fixing that, but it's not trying to; it just seems to be removing cruft.

replies(1): >>adastr+Gy2
9. adastr+Gy2[view] [source] [discussion] 2026-01-20 19:20:54
>>Growin+xH1
I have had no issue with Claude writing code that uses lifetimes. It seems to be able to reason about them just fine.

I don't know what to say. My experience does not match yours.
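For what it's worth, the kind of lifetime code under discussion is also exactly where the compiler backstops the model (illustrative sketch, not from the thread): any annotation that could let a reference dangle is rejected outright.

```rust
// A routine lifetime annotation: `'a` ties the returned reference to
// both inputs, so the borrow checker rejects any caller that drops
// either argument while the result is still in use.
fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() >= y.len() { x } else { y }
}

fn main() {
    let a = String::from("borrow checker");
    let b = String::from("agent");
    assert_eq!(longest(&a, &b), "borrow checker");
}
```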

10. adastr+YB2[view] [source] [discussion] 2026-01-20 19:37:53
>>bevr13+tw1
What agents, using what models?
replies(1): >>bevr13+HY4
11. bevr13+HY4[view] [source] [discussion] 2026-01-21 14:01:04
>>adastr+YB2
Whatever work is paying for on a given day. We've rotated through a few offerings. For me, it's a work truck, not a personal vehicle.

I manage a team of interns, and I don't have the energy to babysit an agent too. For me, GPT and Gemini yield the best talk-it-through approach: for example, dropping a research paper into the chat and describing details until the implementation is clarified.

We also used Claude and Cursor, and that was an exceptionally disruptive experience. Huge, sweeping, wrong changes all over. Gyah! If I bitch about `todo!` macros, this is where they came from.

For hobby projects, I sometimes use whatever free agent Microsoft is shilling via VS Code (in exchange for my data) that day. It's relatively productive, but reaches profoundly wrong conclusions.

Writing for the CLR in Visual Studio is the smoothest smart-complete experience today.

I have not touched Grok and likely won't.

/ two pennies

Hope that answers your questions.
