zlacker

[return to "Nanolang: A tiny experimental language designed to be targeted by coding LLMs"]
1. thorum+ci[view] [source] 2026-01-19 23:35:27
>>Scramb+(OP)
Developed by Jordan Hubbard of NVIDIA (and FreeBSD).

My understanding/experience is that LLM performance in a language scales with how well the language is represented in the training data.

From that assumption, we might expect LLMs to actually do better with an existing language for which more training code is available, even if that language is more complex and seems like it should be “harder” to understand.

2. whimsi+Gi[view] [source] 2026-01-19 23:38:36
>>thorum+ci
easy enough to solve with RL probably
3. measur+Oj[view] [source] 2026-01-19 23:48:00
>>whimsi+Gi
There is no RL for programming languages, especially ones with no significant amount of existing code.
4. nl+HN[view] [source] 2026-01-20 04:53:23
>>measur+Oj
I guess the OP was implying that this is something fixable fairly easily?

(Which is true - it's easy to prompt your LLM with the language grammar, have it generate code, and then run RL on that.)

"Easy" in the sense of "it only takes having enough GPUs to run RL on a coding-capable LLM", anyway.
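
Roughly, the loop would look something like this (a sketch only - the grammar file, the `nanoc` checker, and the sampling call are all hypothetical stand-ins, and the actual policy update is left out):

    import os
    import subprocess
    import tempfile

    # Hypothetical: the Nanolang grammar/spec, fed to the model in-context.
    GRAMMAR = open("nanolang_grammar.ebnf").read()

    def sample_code(prompt: str) -> str:
        """Stand-in for whatever model/decoding API generates a candidate program."""
        raise NotImplementedError

    def reward(program: str) -> float:
        """Binary reward: 1.0 if the candidate passes a (hypothetical) `nanoc --check`
        toolchain, 0.0 otherwise. Swap in tests/lints for a denser signal."""
        with tempfile.NamedTemporaryFile("w", suffix=".nano", delete=False) as f:
            f.write(program)
            path = f.name
        try:
            result = subprocess.run(["nanoc", "--check", path], capture_output=True)
            return 1.0 if result.returncode == 0 else 0.0
        finally:
            os.unlink(path)

    def collect_rollouts(tasks, n_samples=4):
        """Sample several candidates per task and score them; the resulting
        (prompt, completion, reward) triples are what a policy-gradient-style
        update (PPO/GRPO/etc.) would then consume."""
        rollouts = []
        for task in tasks:
            prompt = f"Grammar:\n{GRAMMAR}\n\nTask: {task}\nWrite a Nanolang program."
            for _ in range(n_samples):
                candidate = sample_code(prompt)
                rollouts.append((prompt, candidate, reward(candidate)))
        return rollouts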
