This does fill up context a little faster, (1) not as much as debugging the problem would have in a dynamic language, and (2) better agentic frameworks are coming that “rewrite” context history for dynamic on the fly context compression.
This isn't even true today. Source: heavy user of claude code and gemini with rust for almost 2 years now.
I still experience agents slipping in a `todo!` and other hacks to get code to compile, lint, and pass tests.
The loop with tests and doc tests are really nice, agreed, but it'll still shit out bad code.
This is such a silly thing to say. Either you set the bar so low that "hello world" qualifies or you expect LLMs to be able to reason about lifetimes, which they clearly cannot. But LLMs were never very good at full-program reasoning in any language.
I don't see this language fixing this, but it's not trying to—it just seems to be removing cruft
I don't know what to say. May experience does not match yours.
I manage a team of interns and I don't have the energy to babysit an agent too. For me, gpt and gemini yield the best talk-it-through approach. For example, dropping a research paper into the chat and describing details until the implementation is clarified.
We also use Claude and Cursor, and that was an exceptionally disruptive experience. Huge, sweeping, wrong changes all over. Gyah! If I bitch about todo! macros, this is where they came from.
For hobby projects, I sometimes use whatever free agent microsoft is shilling via VS Code (and me selling my data) that day. This is relatively productive, but reaches profoundly wrong conclusions.
Writing for CLR in visual studio is the smoothest smart-complete experience today.
I have not touched Grok and likely won't.
/ two pennies
Hope that answers your questions.