zlacker

[return to "Nanolang: A tiny experimental language designed to be targeted by coding LLMs"]
1. thorum+ci[view] [source] 2026-01-19 23:35:27
>>Scramb+(OP)
Developed by Jordan Hubbard of NVIDIA (and FreeBSD).

My understanding/experience is that LLM performance in a language scales with how well the language is represented in the training data.

From that assumption, we might expect LLMs to actually do better with an existing language for which more training code is available, even if that language is more complex and seems like it should be “harder” to understand.

2. nl+yL[view] [source] 2026-01-20 04:26:55
>>thorum+ci
> My understanding/experience is that LLM performance in a language scales with how well the language is represented in the training data.

This isn't really true. LLMs understand formal grammars very well. If you give the LLM a grammar for your language, it can often one-shot correct code.
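
To make the "give the LLM a grammar" idea concrete, here is a hedged sketch: a toy EBNF grammar of the kind one might paste into a prompt, plus a minimal recursive-descent checker showing how little machinery such a grammar needs. All of this is hypothetical illustration, not the actual Nanolang grammar.

```python
# Toy example, NOT the real Nanolang spec: an expression grammar
# small enough to paste into a prompt, with a checker to match.
#
# Grammar (EBNF):
#   expr   := term (("+" | "-") term)*
#   term   := factor (("*" | "/") factor)*
#   factor := NUMBER | "(" expr ")"

import re

TOKEN = re.compile(r"\s*(\d+|[+\-*/()])")

def tokenize(src):
    """Split src into numbers and operator/paren tokens."""
    tokens, pos = [], 0
    while pos < len(src):
        m = TOKEN.match(src, pos)
        if not m:
            raise SyntaxError(f"bad character at {pos}")
        tokens.append(m.group(1))
        pos = m.end()
    return tokens

def expr(toks, i):
    i = term(toks, i)
    while i < len(toks) and toks[i] in "+-":
        i = term(toks, i + 1)
    return i

def term(toks, i):
    i = factor(toks, i)
    while i < len(toks) and toks[i] in "*/":
        i = factor(toks, i + 1)
    return i

def factor(toks, i):
    if toks[i].isdigit():
        return i + 1
    if toks[i] == "(":
        i = expr(toks, i + 1)
        if toks[i] != ")":
            raise SyntaxError("expected )")
        return i + 1
    raise SyntaxError(f"unexpected {toks[i]!r}")

def valid(src):
    """Return True iff src matches the toy grammar above."""
    try:
        toks = tokenize(src)
        return expr(toks, 0) == len(toks)
    except (SyntaxError, IndexError):
        return False
```

A grammar this size fits comfortably in a prompt, which is the point: the model doesn't need the language in its training data, only the rules.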

What they don't know is the tooling around the language. But that is also easily fixed: they are good at exploring CLI tools.
