By some estimates, 40% of code is now machine-written, and that share is only going up. So I spent some weekends asking: what would an intermediate language look like if we stopped pretending humans are the authors?
NERD is the experiment.
Bootstrap compiler works, compiles to native via LLVM. It's rough, probably wrong in interesting ways, but it runs. Could be a terrible idea. Could be onto something. Either way, it was a fun rabbit hole.
Contributors welcome if this seems interesting to you - early stage, lots to figure out: https://github.com/Nerd-Lang/nerd-lang-core
Happy to chat about design decisions or argue about whether this makes any sense at all.
One thing I've noticed in particular is that many of the language features that enable concise code, such as type inference, are counterproductive for LLMs: they are essentially implicit context, and LLMs strongly prefer that context to be explicit. I suspect that forcing the model to spell out the type of every non-trivial expression in full would have an effect similar to explicit chain-of-thought.
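To sketch what I mean, here's the contrast in TypeScript as a stand-in, since I'm not showing NERD syntax here (Item, items, and score are made up for illustration):

    interface Item { active: boolean; score: number; }
    declare const items: Item[];

    // Inference-heavy: the types of the intermediate expressions are
    // implicit context the model has to reconstruct on its own.
    const scores1 = items.filter(x => x.active).map(x => x.score);

    // Fully spelled out: every non-trivial expression carries its type,
    // which works a bit like written-down chain-of-thought.
    const active: Item[] = items.filter((x: Item): boolean => x.active);
    const scores2: number[] = active.map((x: Item): number => x.score);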
Similarly, I think the ability to write deeply nested expressions is not necessarily a good thing; an LLM-centric language should deliberately limit nesting depth and require intermediate results to be bound to explicit, named variables, as in the sketch below.
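Again in TypeScript as a stand-in (parse, normalize, sum, raw, and weight are all hypothetical names):

    declare function parse(raw: string): number[];
    declare function normalize(xs: number[]): number[];
    declare function sum(xs: number[]): number;
    declare const raw: string;
    declare const weight: number;

    // One deeply nested expression: hard for tooling to point at a
    // specific step when something goes wrong.
    const total1: number = sum(normalize(parse(raw)).map((v: number): number => v * weight));

    // Flattened: each intermediate result is bound to an explicit,
    // typed name that the model and the tooling can both refer to.
    const parsed: number[] = parse(raw);
    const normalized: number[] = normalize(parsed);
    const weighted: number[] = normalized.map((v: number): number => v * weight);
    const total2: number = sum(weighted);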
The single biggest factor, though, seems to be the ability to ground the model through tooling. Static typing helps a lot there, as do explicit purity annotations and automated testing, but the area I'd particularly like to explore for LLM use is design-by-contract.
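NERD doesn't have contracts yet; roughly what I have in mind, approximated with plain runtime checks in TypeScript (the divide example and its error messages are mine, not anything the language ships):

    // Design-by-contract, approximated with explicit runtime checks:
    // the pre- and postconditions give the model something concrete to
    // satisfy, and give tooling something concrete to verify against.
    function divide(numerator: number, denominator: number): number {
      // Precondition: the caller must not pass a zero denominator.
      if (denominator === 0) throw new RangeError("denominator must be non-zero");
      const quotient: number = numerator / denominator;
      // Postcondition: the result we hand back must be a finite number.
      if (!Number.isFinite(quotient)) throw new RangeError("quotient must be finite");
      return quotient;
    }

A violated contract then becomes a precise, machine-readable failure the model can iterate against, much like a failing test.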
For a start, there's now an llms.txt to aid models when writing NERD programs:
https://www.nerd-lang.org/llms.txt
E.g., you can prompt a model with:

    Write a function that adds two numbers and returns the result. Use https://nerd-lang.org/llms.txt for syntax.