zlacker

[return to "Nanolang: A tiny experimental language designed to be targeted by coding LLMs"]
1. thorum+ci[view] [source] 2026-01-19 23:35:27
>>Scramb+(OP)
Developed by Jordan Hubbard of NVIDIA (and FreeBSD).

My understanding/experience is that LLM performance in a language scales with how well the language is represented in the training data.

From that assumption, we might expect LLMs to actually do better with an existing language for which more training code is available, even if that language is more complex and seems like it should be “harder” to understand.

2. whimsi+Gi[view] [source] 2026-01-19 23:38:36
>>thorum+ci
easy enough to solve with RL probably
3. measur+Oj[view] [source] 2026-01-19 23:48:00
>>whimsi+Gi
There is no RL for programming languages, especially ones w/ no significant amount of code.
4. thorum+S41[view] [source] 2026-01-20 07:59:42
>>measur+Oj
Go read the DeepSeek R1 paper
5. measur+W61[view] [source] 2026-01-20 08:21:13
>>thorum+S41
Why would I do that? If you know something then quote the relevant passage & equation that says you can train code generators w/ RL on a novel language w/ little to no code to train on. More generally, don't ask random people on the internet to do work for you for free.
6. whimsi+E42[view] [source] 2026-01-20 15:38:56
>>measur+W61
well, that’s one way to react to being provided with interesting reading material.
7. measur+o93[view] [source] 2026-01-20 20:05:41
>>whimsi+E42
Bring up the passage that supports your claim. I'll wait.
8. nl+bm4[view] [source] 2026-01-21 06:03:10
>>measur+o93
Not exactly sure what you are looking for here.

That GRPO works?

> Group Relative Policy Optimization (GRPO), a variant reinforcement learning (RL) algorithm of Proximal Policy Optimization (PPO) (Schulman et al., 2017). GRPO foregoes the critic model, instead estimating the baseline from group scores, significantly reducing training resources. By solely using a subset of English instruction tuning data, GRPO obtains a substantial improvement over the strong DeepSeekMath-Instruct, including both in-domain (GSM8K: 82.9% → 88.2%, MATH: 46.8% → 51.7%) and out-of-domain mathematical tasks (e.g., CMATH: 84.6% → 88.8%) during the reinforcement learning phase

Page 2 of https://arxiv.org/pdf/2402.03300
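
The "estimating the baseline from group scores" part is simple enough to sketch. A toy Python illustration of just the group-relative advantage idea (my own sketch, not the paper's code):

    # Toy sketch of GRPO's group-relative baseline: sample a group of
    # completions for one prompt, score each, and normalize by the group's
    # mean/std instead of using a learned critic.
    import statistics

    def group_relative_advantages(rewards):
        mean = statistics.mean(rewards)
        std = statistics.pstdev(rewards) or 1.0  # guard against zero std
        return [(r - mean) / std for r in rewards]

    # e.g. rewards for 4 sampled completions of the same prompt
    print(group_relative_advantages([1.0, 0.0, 1.0, 0.0]))  # [1.0, -1.0, 1.0, -1.0]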

That GRPO on code works?

> Similarly, for code competition prompts, a compiler can be utilized to evaluate the model’s responses against a suite of predefined test cases, thereby generating objective feedback on correctness

Page 4 of https://arxiv.org/pdf/2501.12948
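
In other words, the reward doesn't come from a corpus, it comes from a checker. For a language that ships nothing but a grammar and a compiler, the reward function looks roughly like this (hypothetical `nanoc` binary and test cases; I'm not claiming this is what Nanolang's tooling actually provides):

    # Sketch of a verifiable reward for generated code in a low-resource
    # language: compile it, run it against test cases, reward = pass rate.
    import os, subprocess, tempfile

    def code_reward(source: str, test_cases) -> float:
        with tempfile.TemporaryDirectory() as d:
            src, exe = os.path.join(d, "prog.nano"), os.path.join(d, "prog")
            with open(src, "w") as f:
                f.write(source)
            # Doesn't compile -> zero reward.
            if subprocess.run(["nanoc", src, "-o", exe]).returncode != 0:
                return 0.0
            passed = 0
            for stdin, expected in test_cases:
                run = subprocess.run([exe], input=stdin, text=True,
                                     capture_output=True)
                passed += (run.returncode == 0
                           and run.stdout.strip() == expected.strip())
            return passed / len(test_cases)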

9. measur+Vr4[view] [source] 2026-01-21 07:03:38
>>nl+bm4
None of those are novel domains w/ their own novel syntax & semantic validators, not to mention the dearth of readily available sources of examples for sampling the baselines. So again, where does it say it works for a programming language with nothing but a grammar & a compiler?
10. nl+F75[view] [source] 2026-01-21 12:21:53
>>measur+Vr4
To quote you:

> There is no RL for programming languages.

and

> Either RL works & you have evidence

This is just so completely wrong, and here is the evidence.

I think everyone in this thread is just surprised you don't seem to know this.

Haven't you seen the hundreds of job ads for people to write code for LLMs to train on?
