E.g. when it comes to authoring code, C, which comes last here, is by far one of the languages that LLMs excel most at.
Claude Code makes some efforts to reduce context size, but at the end of the day it is loading entire source files into context (and keeping them there until told to remove them, or until the context is compacted). One of the major wins is running subagents for some tasks, which use their own context rather than loading more into CC's own context.
Cursor makes more efficient use of context by building a vector database of code chunks, then only loading matching chunks into context (I believe it does this for Composer/agentic use as well as for tab/autocomplete).
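A rough sketch of that chunk-and-retrieve idea is below. Cursor's actual internals aren't public; the chunk size, embedding model, and scoring here are just assumptions for illustration.

```python
# Rough sketch of the "embed chunks, load only the matches" idea.
# Cursor's real internals aren't public; the chunk size, embedding model,
# and scoring below are assumptions for illustration only.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def chunk(source: str, lines_per_chunk: int = 40) -> list[str]:
    lines = source.splitlines()
    return ["\n".join(lines[i:i + lines_per_chunk])
            for i in range(0, len(lines), lines_per_chunk)]

def top_chunks(query: str, chunks: list[str], k: int = 3) -> list[str]:
    vectors = model.encode(chunks + [query])           # one vector per text
    chunk_vecs, query_vec = vectors[:-1], vectors[-1]
    scores = chunk_vecs @ query_vec / (
        np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(query_vec))
    best = np.argsort(scores)[::-1][:k]                # highest cosine similarity
    return [chunks[i] for i in best]
```

Only the top-scoring chunks ever reach the model's context; the rest of the codebase stays on disk.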
One of the more obvious ways to reduce context use in a larger multi-module codebase would be to take advantage of the split between small module definition (e.g. C++ .h files) and large module implementations (.cpp files). Generally you'd only need to load module interfaces/definitions into context if you are working on code that uses the module, and Cursor's chunked approach can reduce that further.
For a whole-codebase overview, a language server can help locate things, and one could have the AI generate short summaries/overviews of source files and the overall codebase structure (similar to what a human developer keeps in their head), rather than repeatedly reading entire source files for code that isn't actually being modified.
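A minimal sketch of that summarize-once approach, assuming an OpenAI-style chat client; the model name and prompt are placeholders, and in practice you'd cache the summaries and refresh them only when files change:

```python
# Minimal sketch of the "summarize once, reuse instead of rereading" idea.
# Model name and prompt are illustrative placeholders.
from pathlib import Path
from openai import OpenAI  # any chat-completion client would do

client = OpenAI()

def summarize_file(path: Path) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice
        messages=[{
            "role": "user",
            "content": "Summarize this source file in three lines for a codebase index:\n\n"
                       + path.read_text(),
        }],
    )
    return resp.choices[0].message.content

# A compact "what lives where" index the agent can consult instead of
# reloading whole files that aren't being modified.
index = {str(p): summarize_file(p) for p in Path("src").rglob("*.py")}
```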
It seems we're really in the early days of agentic coding tools, and they have a lot of room to get better and more efficient.
I would love for more expressive and compact languages to do better, selfish as I am. But I think training data size is more of a factor, and we won't all be moving to Clojure any time soon.
If you're interested in learning more, https://github.com/sibyllinesoft/scribe
So I'm not convinced this is either the right metric, or even if you got the right metric that it's a metric you want to minimize.
Because that’s what happened in the real world when generating a bunch of untyped Python code.
I am not sure token efficiency is an interesting problem in the long term, though.
And in the short term I wonder if prompts could be pre-compiled to “compressed tokens”; the idea would be to use a smaller number of tokens to represent a frequently needed concept; kind of like LZ compression. Or maybe token compression becomes a feature of future models optimized for specific tasks.
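As a toy illustration of that idea, here is a dictionary-substitution sketch; the macro dictionary is entirely made up, and no current model exposes anything like this:

```python
# Toy illustration of "pre-compiled prompts": swap frequently repeated
# instructions for short placeholder markers, LZ-style.
# The dictionary below is invented purely for illustration.
MACROS = {
    "<STYLE>": "Write idiomatic, well-commented code and prefer small pure functions.",
    "<TESTS>": "Add unit tests covering edge cases and error paths.",
}

def compress(prompt: str) -> str:
    # Sender side: replace long boilerplate with short markers.
    for marker, text in MACROS.items():
        prompt = prompt.replace(text, marker)
    return prompt

def expand(prompt: str) -> str:
    # Receiver side: a preprocessor (or a model trained on the dictionary)
    # expands the markers back before generation.
    for marker, text in MACROS.items():
        prompt = prompt.replace(marker, text)
    return prompt
```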
I was wondering last year if it would be worthwhile to try to create a language that was especially LLM-friendly, e.g. one that embedded more context in the language structure. The idea is to make more of the program, and the thinking behind it, explicit to the LLM, but in a programming-language style that eliminates the ambiguity of natural language (otherwise one could just use comments).
Then it occurred to me that with current LLM training methodology that there’s a chicken-and-egg problem; it doesn’t start to show rewards until there is a critical mass of good code in the language for LLMs to train on.
But I had never considered that a programming language might be created that's less human-readable/auditable in order to better suit LLMs.
Scares me a bit.
The high level declarative nature and type driven development style of languages like Haskell also make it really easy for an experienced developer to review and validate the output of the LLM.
Early on in the GPT era I had really bad experiences generating Haskell code with LLMs but I think that the combination of improved models, increased context size, and agentic tooling has allowed LLMs to really take advantage of functional programming.
`public` might have a token by itself, even though you can have `pub` occurring in other contexts, too.
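This is easy to poke at with an open tokenizer. A quick check using OpenAI's tiktoken encodings as a stand-in (other vendors' tokenizers aren't public and may split things differently):

```python
# Quick check of how common keywords tokenize; common keywords usually map
# to a single token id, but the exact splits depend on the encoding.
import tiktoken

enc = tiktoken.get_encoding("o200k_base")
for word in ["public", "pub", "fn", "function", "defn"]:
    ids = enc.encode(word)
    print(f"{word!r}: {len(ids)} token(s) -> {ids}")
```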
But it could be that different programming languages are a bit like different human languages for these models: when they have more than some threshold of training data, they can express their general problem solving skills in any of them? And then it's down to how much the compiler and linters can yell at them.
For Rust, I regularly tell them to make `clippy::pedantic` happy (and tell me explicitly when they think that the best way to do that is via an explicit ignore annotation in the code to disable a certain warning for a specific line).
Pedantic clippy is usually too... pedantic for humans, but it seems to work reasonably well with the agents. You can also add `clippy::cargo`, which isn't included in `clippy::pedantic`.
I think this is exactly right.
For example I shared some Model code with Claude and Gemini (both via web interfaces) and they both tried to put Controller code into the Model, despite me multiple times telling them that the code wasn't wanted nor needed in there.
I had to (eventually) share the entire project with the models (despite them having been working with the code all along) before they would comply with my request (whilst also congratulating me on my far superior architecture..)
That costs more tokens for each problem than just saying "here, look at this section and work toward this goal".
Seeing all the C-family languages and JavaScript at the bottom like this makes me wonder if it's not just that curly brackets take a lot of tokens.
We're not building a language for LLMs just yet.
For a very imperfect human analogy, it feels like saying "a student can spend as much time thinking about the text as they want, so the textbook can be extremely terse".
Definitely just gut feelings though - not well tested or anything. I could be wrong.
> Smart code bundler that turns repositories into optimized code bundles meeting a token budget in milliseconds
Ok. So it's a tool: do I use it on my repo once? Then what? Do I use it as I go? Does it sit somewhere accessible to something like Claude Code, with the onus on me to direct Claude to use this to search files instead of his out-of-the-box workflow? I can see some CLI examples; what should I do with those, and where does this fit into what people are using with Cursor / Claude / Gemini, etc.?
This is the part I've been trying to hammer home about LLM-created stuff. It leaves us with vague, not-well-understood outcomes that might do something. People are shipping/delivering things they don't even understand now, and they often can't speak to what their thing does with an acceptable level of authority. I'm not against creating tools with LLMs, but I'm pretty against people creating the basic readme with LLMs. Wanna make a tool with an LLM? More power to you. But make sure you understand what was made, because we need humans in here telling other humans how to use it. LLMs flat out lose the plot over the course of a large project, and I think a big issue is that LLMs can sometimes be more eloquent at writing than a lot of people, so people opt for the LLM-generated readme.
But as someone who would maybe consider using something like this, I see that readme and it just looks like every Claude Code thing I've put together to date. Which is to say: I've done some seemingly impossible things with Claude, only to find that his ability to recap the entirety of it ended up as a whole lot of seemingly meaningful words, phrases, and sentences that actually paint a super disjointed picture of what exactly a repo is about.
Well, you can adapt your PHP-producing pipeline to produce Haskell code that is correct in the sense of solving the problem at hand, but getting it to produce idiomatic code is probably a lot harder.
re: tokens and session length, there are other ways to manage this than language choice. Summarization is one; something I do is to not put read_file content in the messages, but rather in the system prompt. This means that when the agent tries to reread a file after an edit, we don't end up with two copies of the file in context.
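Roughly, the trick looks like this (names are illustrative, not a real agent framework API):

```python
# Sketch of keeping file contents in the system prompt rather than the message
# log: rereading a file after an edit overwrites the cached copy instead of
# appending a second one.
files_in_context: dict[str, str] = {}

def read_file(path: str) -> str:
    with open(path) as f:
        files_in_context[path] = f.read()  # overwrite, never append
    return files_in_context[path]

def build_system_prompt(base_instructions: str) -> str:
    sections = [f"--- {p} ---\n{body}" for p, body in files_in_context.items()]
    return base_instructions + "\n\n" + "\n\n".join(sections)
```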
Going to 10M-token sessions, keeping per-turn context under 100k, working on Golang... choosing a language for the sake of tokens does not seem like a good basis for the decision.
I think the underlying reason is that functional programming is very conducive to keeping the context tight and focused. For instance, most logic relevant to a task tends to be concentrated in a few functions and data structures across a smallish set of files. That's all you need to feed into the context.
Contrast that with, say, Java, where the logic is often spread across a deep inheritance hierarchy located in a bunch of separate files. Add to that large frameworks that encapsulate a whole lot of boilerplate and bespoke logic, with magic being injected from arbitrary places via e.g. annotations. You'd need to load all of those files (or, more likely, simply the whole codebase) and the relevant documentation to get accurate results. And even then the additional context is not just extraneous and expensive, but also polluted with irrelevant data that actually reduces accuracy.
A common refrain of mine is that for the best results, you have to invest a lot of time experimenting AND adapt yourself to figure out what works best with AI. In my case, it was gradually shifting to a functional style after spending my whole career writing OO code.
Working on it, actually! I think it's a really interesting problem space - being efficient on tokens, readable by humans for review, strongly typed and static for reasoning purposes, and having extremely regular syntax. One of the biggest issues with symbols is that, to a human, matching parentheses is relatively easy, but the models struggle with it.
I expect a language like the one I'm playing with will mature enough over the next couple years that models with a knowledge cutoff around 1/2027 will probably know how to program it well enough for it to start being more viable.
One of the things I plan to do is build evals so that I can validate the performance of various models on my as yet only partially baked language. I'm also using only LLMs to build out the entire infrastructure, mostly to see if it's possible.
Plus, they will strongly "pull" the context when the LLM parses it back, to the point of overriding your instructions (true story).
> One of the biggest issues with symbols is that, to a human, matching parentheses is relatively easy, but the models struggle with it.
Great point. I find it near trivial to close parens, but LLMs seem to struggle with the Lisps I've played with because of this counting issue, to the point where I've not been working with them as much. TypeScript and functional JS, as other commenters note, are usually smooth sailing.
for (int index = 0; index < size; ++index)
instead of for index in 0...size
eats up a lot of tokens, especially in C where you also need this construct for iterating over arrays.

Both, essentially. I expect the code examples to grow organically, but I expect most of them to come from LLMs; after all, that's the point of the language. I basically expect there to be a step function in effectiveness once the language has been ingested by the models, but they're already plenty decent-ish at it right now.
The most fascinating thing to me, in generating the whole thing, has been that the LLMs are really, really good at iterating in a tight loop: updating the interpreter with new syntax, updating the stdlib to use that new syntax, building some small extension to try it out, and then surfacing the need for a new builtin or primitive to start the cycle over.
I'm also leaning heavily on Chatgpt-5.2's insanely good math skills, and the language I'm building is very math heavy - it's essentially a distant cousin to Idris or any of the other dependently-typed theorem proving languages.
Actually, Haskell was a bit too hard for me on my own for real projects. Now with AI assistants, I think it could be a great pick.
Those are pretty terse.
I've also seen multiple startups that have had some pretty impressive performance with Lean and Rocq.
My current theory is that as long as the LLM has sufficiently good baseline performance in a language, the kind of scaffolding and tooling you can build around the pure code generation will have an outsize effect, and languages with expressive type systems have a pretty direct advantage there: types can constrain and give immediate feedback to your system, letting you iterate the LLM generation faster and at a higher level than you could otherwise.
I recently saw a paper[1] about using types to directly constrain LLM output. The paper used TypeScript, but it seems like the same approach would work well with other typed languages as well. Approaches like that make generating typed code with LLMs even more promising.
Abstract:
> Language models (LMs) can generate code but cannot guarantee its correctness, often producing outputs that violate type safety, program invariants, or other semantic properties. Constrained decoding offers a solution by restricting generation to only produce programs that satisfy user-defined properties. However, existing methods are either limited to syntactic constraints or rely on brittle, ad hoc encodings of semantic properties over token sequences rather than program structure.
> We present ChopChop, the first programmable framework for constraining the output of LMs with respect to semantic properties. ChopChop introduces a principled way to construct constrained decoders based on analyzing the space of programs a prefix represents. It formulates this analysis as a realizability problem which is solved via coinduction, connecting token-level generation with structural reasoning over programs. We demonstrate ChopChop's generality by using it to enforce (1) equivalence to a reference program and (2) type safety. Across a range of models and tasks, ChopChop improves success rates while maintaining practical decoding latency.
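Separate from constrained decoding, the simpler "types as immediate feedback" loop described a couple of comments up can be sketched like this; `llm_complete` is a hypothetical stand-in for whatever generation call you use, and this is not ChopChop's approach:

```python
# Sketch of a generate / type-check / retry loop. `tsc --noEmit` type-checks
# a TypeScript file without emitting output; llm_complete is hypothetical.
import pathlib
import subprocess
import tempfile

def llm_complete(prompt: str) -> str:
    raise NotImplementedError  # hypothetical: call your model of choice here

def generate_typed(prompt: str, max_rounds: int = 3) -> str:
    feedback = ""
    code = ""
    for _ in range(max_rounds):
        code = llm_complete(prompt + feedback)
        src = pathlib.Path(tempfile.mkdtemp()) / "candidate.ts"
        src.write_text(code)
        result = subprocess.run(["tsc", "--noEmit", str(src)],
                                capture_output=True, text=True)
        if result.returncode == 0:
            return code  # type checker is happy
        feedback = "\n\nThe type checker reported:\n" + result.stdout + result.stderr
    return code  # last attempt, possibly still failing
```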
C is surprisingly efficient as well. Minimal keywords, terse syntax, single-character operators. Not much boilerplate, and the core logic is dense.
I think the worst languages are Java, C#, and Rust (lifetime annotations, verbose generics).
In my opinion, C or Go for imperative code, Factor / Forth if the model knows them well.
On https://danuker.go.ro/programming-languages.html you can find charts of popularity (TIOBE) vs. code density for various programming languages, along with which languages are Pareto-optimal on these two criteria.
If you're going to write an article, at least do the basic research yourself, man.
Nowadays, I write C# and TS at work, and it's absolutely crazy how much better the LLM is at TS, with almost all code being decent the first try, but with C# I need to do a lot of massaging.
C# often has a 'nice' way and a 'performant' way of doing things (for example, strings are nice, but they allocate and are UTF-16, while ReadOnlySpan<byte> is faster for UTF-8 and can reuse buffers). The performant syntax often ends up being very verbose, and the nice syntax is barely shorter than Go's. Go also does the right thing by default, and its strings are basically array slices into UTF-8 byte arrays.
So: C tokenizes efficiently for equivalent logic, but stdlib poverty makes it expensive for typical benchmark tasks. Same applies to Factor/Forth, arguably worse.
Update: I noticed that the author mentions that "APL's famous terseness isn't a plus for LLMs." Isn't that just a design limitation of the LLM tokenizers?
[1]: https://github.com/ETHproductions/japt
I cannot speak much for C#, but you may be right. Claude's Opus is really good.
I'm not sure if you're being intentionally obtuse or you just don't have much of an attention span, but I'm not making any money off this so if you want to use 10x more tokens to get stuff done, by all means brother.
The idea would seem to be to give instructions to your agent (Claude Code, etc) to use this tool to discover the chunks of code (not entire source files) it needs to look at to modify a particular function. You could put these instructions on how/when to use scribe someplace like .claude/rules/scribe.md
I assume this is meant to work as an override to Claude Code's normal operation where it reads entire source files into context (I'm not sure of the details of how CC decides which files are relevant if the developer hasn't explicitly told it), so if you asked CC to do something that matches the instructions you'd put in scribe.md, it would run scribe and send the output (code chunks and file locations) to Claude, which would then base its edit requests on that.
It's not obvious if this --covering-set command is the only one scribe currently supports, or if it has other ones to output code chunks relevant for other use cases.
I don't think it is capable of writing galaxy-brain Haskell libraries (it absolutely misses the forest for the trees), but if you have an existing code base with consistent patterns it can emulate, then it can do a surprisingly good job.
Here is an example side project I have done extremely heavily with Claude: https://github.com/solomon-b/kpbj.fm
I built a library (without Claude) that wraps Servant and a handful of other common libraries used to build Haskell web apps in an opinionated way, and then I let Claude use that to build this site. There is absolutely some hairy code, and I have done a ton of manual refactors on what Claude produces, but Claude has been highly effective for me here.
Has anyone tried scribe for larger scale projects, and green field development?
It's simply not the case for the real world; you can't simulate the world perfectly and see what happens when you do things.
If your model is struggling with parentheses, that means it's not even the level of GPT-3 for a mainstream language.
It's not completely impossible with in-context learning, I guess, but it will still be much weaker than what models learned from, e.g., all of GitHub and more for Python.
Easy to test from a technical perspective is all I'm saying, and not a bad idea.