zlacker

[return to "Nanolang: A tiny experimental language designed to be targeted by coding LLMs"]
1. thorum+ci[view] [source] 2026-01-19 23:35:27
>>Scramb+(OP)
Developed by Jordan Hubbard of NVIDIA (and FreeBSD).

My understanding/experience is that LLM performance in a language scales with how well the language is represented in the training data.

From that assumption, we might expect LLMs to actually do better with an existing language for which more training code is available, even if that language is more complex and seems like it should be “harder” to understand.

◧◩
2. whimsi+Gi[view] [source] 2026-01-19 23:38:36
>>thorum+ci
easy enough to solve with RL probably
◧◩◪
3. measur+Oj[view] [source] 2026-01-19 23:48:00
>>whimsi+Gi
There is no RL for programming languages. Especially ones w/ no significant amount of code.
◧◩◪◨
4. nl+HN[view] [source] 2026-01-20 04:53:23
>>measur+Oj
I guess the OP was implying that this is something fixable fairly easily?

(Which is true - it's easy to prompt your LLM with the language grammar, have it generate code, and then RL on that; rough sketch below.)

Easy in the sense of "it is only a matter of having enough GPUs to RL a coding-capable LLM", anyway.
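
Very rough sketch of the loop, to be concrete. Every name here is a stand-in: the grammar file, the "nanoc --check" syntax checker, and the policy.generate()/policy.update() methods are hypothetical, not any real library's API.

    import os
    import subprocess
    import tempfile

    GRAMMAR = open("nanolang.ebnf").read()  # assumed: the grammar as plain text

    def make_prompt(task):
        # Whole grammar in context, then ask for a program.
        return "Grammar:\n" + GRAMMAR + "\n\nWrite a Nanolang program that " + task + "\n"

    def reward(source):
        # Cheapest possible verifier: does the generated program even parse?
        # ("nanoc --check" is a placeholder; use whatever the real toolchain
        # provides, or run unit tests for a richer signal.)
        with tempfile.NamedTemporaryFile("w", suffix=".nano", delete=False) as f:
            f.write(source)
            path = f.name
        try:
            ok = subprocess.run(["nanoc", "--check", path],
                                capture_output=True).returncode == 0
            return 1.0 if ok else -1.0
        finally:
            os.unlink(path)

    def rl_step(policy, tasks):
        # One pass: sample a completion per task, score it, and push the
        # reward back into the policy (PPO/GRPO/whatever your trainer does).
        for task in tasks:
            prompt = make_prompt(task)
            completion = policy.generate(prompt)
            policy.update(prompt=prompt, completion=completion,
                          reward=reward(completion))

The only external thing you need is a verifier; a parser alone already gives a usable reward, and tests give a better one.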

◧◩◪◨⬒
5. measur+tQ[view] [source] 2026-01-20 05:24:15
>>nl+HN
If you can generate code from the grammar then what exactly are you RLing? The point was to generate code in the first place so what does backpropagation get you here?
◧◩◪◨⬒⬓
6. nl+2s1[view] [source] 2026-01-20 11:07:42
>>measur+tQ
Post RL you won't need to put the grammar in the prompt anymore.
◧◩◪◨⬒⬓⬔
7. measur+B93[view] [source] 2026-01-20 20:06:52
>>nl+2s1
The grammar of this language is no more than a few hundred tokens (thousands at worst) & current LLMs support context windows in the millions of tokens.
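
For scale, counting it is a couple of lines with tiktoken; the encoding choice and the grammar file name below are just placeholders.

    import tiktoken  # pip install tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # any modern tokenizer lands in the same ballpark
    grammar = open("nanolang.ebnf").read()      # placeholder file name
    print(len(enc.encode(grammar)))             # expect hundreds to low thousands of tokens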
◧◩◪◨⬒⬓⬔⧯
8. nl+9s3[view] [source] 2026-01-20 21:53:22
>>measur+B93
Sure.

The point is that your statement about the ability to do RL is wrong.

Additionally your response to the Deepseek paper in the other subthread shows profound and deliberate ignorance.

◧◩◪◨⬒⬓⬔⧯▣
9. measur+BH3[view] [source] 2026-01-20 23:37:52
>>nl+9s3
Theorycrafting is very easy. Not a single person in this thread has shown any code to do what they're suggesting. You have access to the best models & yet you still haven't managed to prompt it to give you the code to prove your point so spare me any further theoretical responses. Either show the code to do exactly what you're saying is possible or admit you lack the relevant understanding to back up your claims.
◧◩◪◨⬒⬓⬔⧯▣▦
10. nl+Dk4[view] [source] 2026-01-21 05:50:09
>>measur+BH3
> You have access to the best models & yet you still haven't managed to prompt it to give you the code to prove your point so spare me any further theoretical responses. Either show the code to do exactly what you're saying is possible

GPU poor here though...

To quote someone (you...) on the internet:

> More generally, don't ask random people on the internet to do work for you for free.

>>46689232

◧◩◪◨⬒⬓⬔⧯▣▦▧
11. measur+Dr4[view] [source] 2026-01-21 07:00:43
>>nl+Dk4
Claims require evidence, & if you are unwilling to present it then admit you do not have any evidence to support your claims. It's not complicated. Either RL works & you have evidence, or you do not know & cannot claim that it works w/o first doing the required due diligence, which (shockingly) actually requires work instead of empty theorycrafting & hand waving.