40% of code is now machine-written. That number's only going up. So I spent some weekends asking: what would an intermediate language look like if we stopped pretending humans are the authors?
NERD is the experiment.
Bootstrap compiler works, compiles to native via LLVM. It's rough, probably wrong in interesting ways, but it runs. Could be a terrible idea. Could be onto something. Either way, it was a fun rabbit hole.
Contributors welcome if this seems interesting to you - early stage, lots to figure out: https://github.com/Nerd-Lang/nerd-lang-core
Happy to chat about design decisions or argue about whether this makes any sense at all.
How much of the code is read by humans, though? I think using languages that LLMs work well with, like TS or Python, makes a lot of sense, but the chosen language still needs to be readable by humans.
I've never had a good result. Just tons of silent bugs that are obvious to those experienced with Python, JS/TS, etc., and subtle to everyone else.
What about something like Clojure? It's already pretty succinct and Claude knows it quite well.
Plus there are heavily documented libraries that it knows how to use and are in its training data.
A poor craftsman may blame his tools, but some tools really are the wrong ones for the job.
Do jump in to contribute - these are amazing thoughts.
Your big idea seems to be changing the tech so that developers have an excuse to be even less responsible than they already are.
One thing in particular that I've noticed is that many of the language features that enable concise code - such as type inference - are counter-productive for LLMs, because they are essentially implicit context, and LLMs much prefer such things to be explicit. I suspect that forcing the model to spell out the type of every non-trivial expression in full would have an effect similar to explicit chain-of-thought.
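A rough TypeScript sketch of the contrast (names and data are illustrative only, nothing NERD-specific):

    // Illustrative only: inference-heavy vs fully annotated style.
    interface LineItem { price: number }
    interface Order { items: LineItem[] }

    declare const orders: Order[];

    // Inference-heavy: the element and result types are implicit context
    // the model has to reconstruct on every edit.
    const totals = orders.map(o => o.items.reduce((s, i) => s + i.price, 0));

    // Fully annotated: every non-trivial expression spells out its type,
    // acting a bit like an inline chain-of-thought for the generator.
    const totalsExplicit: number[] = orders.map((o: Order): number =>
      o.items.reduce((sum: number, item: LineItem): number => sum + item.price, 0)
    );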
Similarly, I think that the ability to write deeply nested expressions is not necessarily a good thing, and an LLM-centric language should deliberately limit that and require explicit variables to bind intermediate results to.
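For example, the same call chain written both ways (again just a TypeScript sketch with hypothetical helpers):

    // Illustrative only: the same computation written two ways.
    declare function fetchUser(id: string): { teamId: string };
    declare function fetchTeam(id: string): { memberIds: string[] };
    declare function score(memberIds: string[]): number;

    // Deeply nested: the whole chain has to be held in one expression.
    const nested = score(fetchTeam(fetchUser("u-42").teamId).memberIds);

    // Flattened: each intermediate result gets a name that the model
    // (and its tooling) can refer back to and inspect.
    const user = fetchUser("u-42");
    const team = fetchTeam(user.teamId);
    const flattened = score(team.memberIds);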
The single biggest factor, though, seems to be the ability to ground the model through tooling. Static typing helps a lot there, as do explicit purity annotations for code and automated testing, but one area that I would particularly like to explore for LLM use is design-by-contract.
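A rough sketch of the design-by-contract idea, hand-rolled in TypeScript since it has no native contract support:

    // Illustrative only: hand-rolled pre/postconditions.
    function requires(cond: boolean, msg: string): void {
      if (!cond) throw new Error(`precondition failed: ${msg}`);
    }

    function ensures(cond: boolean, msg: string): void {
      if (!cond) throw new Error(`postcondition failed: ${msg}`);
    }

    // The contract gives the model (and a test harness) something concrete
    // to check generated code against, beyond the type signature alone.
    function withdraw(balance: number, amount: number): number {
      requires(amount > 0, "amount must be positive");
      requires(amount <= balance, "amount must not exceed balance");

      const newBalance = balance - amount;

      ensures(newBalance >= 0, "balance must stay non-negative");
      return newBalance;
    }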
On New Year's Eve I announced NERD - a language built for LLMs, not for human authorship. The response was unexpectedly overwhelming. Questions, excitement, discussions, roasting - all of it.
But one question struck me: "What use case is this language built for?"
Fair. Instead of a general-purpose language covering all features - some of which may not even be relevant because we're not building apps the old way anymore - I picked one use case: agent-first.
What this means - you can now run an agent in NERD with one line of code:
-- Nerd code
llm claude "What is Cloudflare Workers?"
No imports. No boilerplate. No framework.
The insight from working with agents and MCP: tools are absorbing integration complexity. Auth, retries, rate limiting - all moving into tool providers. What's left for agents? Orchestration.
And orchestration doesn't need much: LLM calls, tool calls, control flow. That's it.
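To put that claim in concrete terms, here's a rough sketch of such an orchestration loop in TypeScript - callLLM and tools are hypothetical stand-ins, not NERD syntax and not any real SDK:

    // Illustrative only: what "orchestration" reduces to under this framing.
    type ToolCall = { name: string; args: Record<string, unknown> };
    type LLMReply = { text: string; toolCall?: ToolCall };

    declare function callLLM(prompt: string): Promise<LLMReply>;
    declare const tools: Record<string, (args: Record<string, unknown>) => Promise<string>>;

    async function runAgent(task: string): Promise<string> {
      let prompt = task;
      // Control flow: loop until the model stops asking for tools.
      for (let step = 0; step < 10; step++) {
        const reply = await callLLM(prompt);                                   // LLM call
        if (!reply.toolCall) return reply.text;
        const output = await tools[reply.toolCall.name](reply.toolCall.args);  // tool call
        prompt = `${task}\nTool ${reply.toolCall.name} returned: ${output}`;
      }
      return "step limit reached";
    }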
Every language today - Python, TypeScript, Java - was built for something else, then repurposed for agents. NERD starts from agents.
Full story here: https://www.nerd-lang.org/agent-first
Considering their experience, this frees up their time to think beyond coding. :)
For a start, we now have an llms.txt to aid models while developing NERD programs:
https://www.nerd-lang.org/llms.txt
E.g.:
Write a function that adds two numbers and returns the result. Use https://nerd-lang.org/llms.txt for syntax.