Nanolang: A tiny experimental language designed to be targeted by coding LLMs
1. thorum+ci 2026-01-19 23:35:27
>>Scramb+(OP)
Developed by Jordan Hubbard of NVIDIA (and FreeBSD).

My understanding/experience is that LLM performance in a language scales with how well the language is represented in the training data.

From that assumption, we might expect LLMs to actually do better with an existing language for which more training code is available, even if that language is more complex and seems like it should be “harder” to understand.

2. whimsi+Gi 2026-01-19 23:38:36
>>thorum+ci
easy enough to solve with RL probably

3. measur+Oj 2026-01-19 23:48:00
>>whimsi+Gi
There is no RL for programming languages. Especially ones w/ no significant amount of code.

4. thorum+S41 2026-01-20 07:59:42
>>measur+Oj
Go read the DeepSeek R1 paper

5. measur+W61 2026-01-20 08:21:13
>>thorum+S41
Why would I do that? If you know something then quote the relevant passage & equation that says you can train code generators w/ RL on a novel language w/ little to no code to train on. More generally, don't ask random people on the internet to do work for you for free.

6. thorum+6j1 2026-01-20 09:50:02
>>measur+W61
Your other comment sounded like you were interested in learning about how AI labs are applying RL to improve programming capability. If so, the DeepSeek R1 paper is a good introduction to the topic (maybe a bit out of date at this point, but very approachable). RL training works fine for low resource languages as long as you have tooling to verify outputs and enough compute to throw at the problem.
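To make the "tooling to verify outputs" point concrete, here is a minimal sketch of the kind of verifiable-reward check RL-for-code setups use. The `nanolang` interpreter command, the `.nano` extension, and the stdin/stdout test format are illustrative assumptions for a hypothetical low-resource language, not Nanolang's actual tooling:

```python
import os
import subprocess
import tempfile

def verify_output(source_code: str, test_input: str, expected_output: str,
                  interpreter: str = "nanolang", timeout_s: float = 5.0) -> float:
    """Binary reward: 1.0 if the program produces the expected output, else 0.0."""
    # Write the model's completion to a temporary source file.
    with tempfile.NamedTemporaryFile("w", suffix=".nano", delete=False) as f:
        f.write(source_code)
        path = f.name
    try:
        result = subprocess.run(
            [interpreter, path],   # run the (hypothetical) interpreter
            input=test_input,      # feed the test case on stdin
            capture_output=True,
            text=True,
            timeout=timeout_s,
        )
        if result.returncode != 0:
            return 0.0             # compile/runtime error: no reward
        return 1.0 if result.stdout.strip() == expected_output.strip() else 0.0
    except subprocess.TimeoutExpired:
        return 0.0                 # non-terminating program: no reward
    finally:
        os.unlink(path)

# In an RL loop (GRPO/PPO-style), this scalar is the reward: sample several
# completions per prompt, score each with verify_output, and push the policy
# toward the higher-scoring samples. No pre-existing corpus in the target
# language is needed, only prompts paired with checkable test cases.
```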