zlacker

[parent] [thread] 22 comments
1. whimsi+(OP)[view] [source] 2026-01-19 23:38:36
easy enough to solve with RL probably
replies(1): >>measur+81
2. measur+81[view] [source] 2026-01-19 23:48:00
>>whimsi+(OP)
There is no RL for programming languages. Especially ones w/ no significant amount of code.
replies(3): >>whimsi+n1 >>nl+1v >>thorum+cM
◧◩
3. whimsi+n1[view] [source] [discussion] 2026-01-19 23:51:01
>>measur+81
not even wrong
replies(1): >>measur+T1
◧◩◪
4. measur+T1[view] [source] [discussion] 2026-01-19 23:55:16
>>whimsi+n1
Exactly.
◧◩
5. nl+1v[view] [source] [discussion] 2026-01-20 04:53:23
>>measur+81
I guess the OP was implying that this is something fixable fairly easily?

(Which is true - it's easy to prompt your LLM with the language grammar, have it generate code, and then RL on that)

Easy in the sense of "it only requires having enough GPUs to RL a coding-capable LLM", anyway.

replies(1): >>measur+Nx
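A minimal sketch of the loop nl+1v describes, assuming a toy one-rule grammar and a toy "compiler" standing in for the real verifier (the grammar, function names, and reward scheme here are all hypothetical illustrations, not any lab's actual pipeline):

```python
import re

# Toy stand-in for a novel language: a program is one or more lines
# of the form "let <name> = <int>". A real setup would embed the
# actual grammar in the prompt and invoke the actual compiler.
GRAMMAR_PROMPT = "program := { 'let' IDENT '=' INT }"

STMT = re.compile(r"^let [a-z]+ = \d+$")

def compiles(program: str) -> bool:
    """Toy 'compiler': accept iff every line matches the grammar."""
    lines = program.strip().splitlines()
    return bool(lines) and all(STMT.match(ln) for ln in lines)

def reward(program: str) -> float:
    """Verifiable reward: 1.0 if the program compiles, else 0.0."""
    return 1.0 if compiles(program) else 0.0

def rl_step(sample_fn, prompt: str, group_size: int = 4):
    """One RL-style step: sample candidate programs for the
    grammar-bearing prompt, score each with the compiler, and return
    (program, reward) pairs for a trainer to update the policy on."""
    candidates = [sample_fn(prompt) for _ in range(group_size)]
    return [(c, reward(c)) for c in candidates]
```

The point of the sketch is only that the reward comes from the compiler, not from a pre-existing corpus in the language.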
◧◩◪
6. measur+Nx[view] [source] [discussion] 2026-01-20 05:24:15
>>nl+1v
If you can generate code from the grammar then what exactly are you RLing? The point was to generate code in the first place so what does backpropagation get you here?
replies(1): >>nl+m91
◧◩
7. thorum+cM[view] [source] [discussion] 2026-01-20 07:59:42
>>measur+81
Go read the DeepSeek R1 paper
replies(1): >>measur+gO
◧◩◪
8. measur+gO[view] [source] [discussion] 2026-01-20 08:21:13
>>thorum+cM
Why would I do that? If you know something then quote the relevant passage & equation that says you can train code generators w/ RL on a novel language w/ little to no code to train on. More generally, don't ask random people on the internet to do work for you for free.
replies(2): >>thorum+q01 >>whimsi+YL1
◧◩◪◨
9. thorum+q01[view] [source] [discussion] 2026-01-20 09:50:02
>>measur+gO
Your other comment sounded like you were interested in learning about how AI labs are applying RL to improve programming capability. If so, the DeepSeek R1 paper is a good introduction to the topic (maybe a bit out of date at this point, but very approachable). RL training works fine for low resource languages as long as you have tooling to verify outputs and enough compute to throw at the problem.
replies(2): >>whimsi+6M1 >>measur+yQ2
◧◩◪◨
10. nl+m91[view] [source] [discussion] 2026-01-20 11:07:42
>>measur+Nx
Post RL you won't need to put the grammar in the prompt anymore.
replies(1): >>measur+VQ2
◧◩◪◨
11. whimsi+YL1[view] [source] [discussion] 2026-01-20 15:38:56
>>measur+gO
well, that’s one way to react to being provided with interesting reading material.
replies(1): >>measur+IQ2
◧◩◪◨⬒
12. whimsi+6M1[view] [source] [discussion] 2026-01-20 15:39:36
>>thorum+q01
imo generally not worth it to keep going when you encounter this sort of HN archetype
◧◩◪◨⬒
13. measur+yQ2[view] [source] [discussion] 2026-01-20 20:04:57
>>thorum+q01
So you should have no problem bringing up the exact passages & equations they use for their policies.
◧◩◪◨⬒
14. measur+IQ2[view] [source] [discussion] 2026-01-20 20:05:41
>>whimsi+YL1
Bring up the passage that supports your claim. I'll wait.
replies(1): >>nl+v34
◧◩◪◨⬒
15. measur+VQ2[view] [source] [discussion] 2026-01-20 20:06:52
>>nl+m91
The grammar of this language is no more than a few hundred tokens (thousands at worst) & current LLMs support context windows in the millions of tokens.
replies(1): >>nl+t93
◧◩◪◨⬒⬓
16. nl+t93[view] [source] [discussion] 2026-01-20 21:53:22
>>measur+VQ2
Sure.

The point is that your statement about the ability to do RL is wrong.

Additionally, your response to the DeepSeek paper in the other subthread shows profound and deliberate ignorance.

replies(1): >>measur+Vo3
◧◩◪◨⬒⬓⬔
17. measur+Vo3[view] [source] [discussion] 2026-01-20 23:37:52
>>nl+t93
Theorycrafting is very easy. Not a single person in this thread has shown any code to do what they're suggesting. You have access to the best models & yet you still haven't managed to prompt it to give you the code to prove your point so spare me any further theoretical responses. Either show the code to do exactly what you're saying is possible or admit you lack the relevant understanding to back up your claims.
replies(1): >>nl+X14
◧◩◪◨⬒⬓⬔⧯
18. nl+X14[view] [source] [discussion] 2026-01-21 05:50:09
>>measur+Vo3
> You have access to the best models & yet you still haven't managed to prompt it to give you the code to prove your point so spare me any further theoretical responses. Either show the code to do exactly what you're saying is possible

GPU poor here though...

To quote someone (you...) on the internet:

> More generally, don't ask random people on the internet to do work for you for free.

>>measur+gO

replies(1): >>measur+X84
◧◩◪◨⬒⬓
19. nl+v34[view] [source] [discussion] 2026-01-21 06:03:10
>>measur+IQ2
Not exactly sure what you are looking for here.

That GRPO works?

> Group Relative Policy Optimization (GRPO), a variant reinforcement learning (RL) algorithm of Proximal Policy Optimization (PPO) (Schulman et al., 2017). GRPO foregoes the critic model, instead estimating the baseline from group scores, significantly reducing training resources. By solely using a subset of English instruction tuning data, GRPO obtains a substantial improvement over the strong DeepSeekMath-Instruct, including both in-domain (GSM8K: 82.9% → 88.2%, MATH: 46.8% → 51.7%) and out-of-domain mathematical tasks (e.g., CMATH: 84.6% → 88.8%) during the reinforcement learning phase

Page 2 of https://arxiv.org/pdf/2402.03300

That GRPO on code works?

> Similarly, for code competition prompts, a compiler can be utilized to evaluate the model’s responses against a suite of predefined test cases, thereby generating objective feedback on correctness

Page 4 of https://arxiv.org/pdf/2501.12948

replies(1): >>measur+f94
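The group-relative baseline from the quoted GRPO passage fits in a few lines (a toy illustration of the idea, not the paper's implementation; the 0/1 rewards stand in for compiler pass/fail on sampled programs):

```python
from statistics import mean, pstdev

def grpo_advantages(rewards, eps=1e-8):
    """GRPO-style advantage: normalize each reward by the sampled
    group's mean and std instead of using a learned critic model."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# E.g. a group of 4 sampled programs where the compiler accepted two:
advs = grpo_advantages([1.0, 0.0, 1.0, 0.0])
```

Programs the compiler accepts get positive advantage, rejected ones negative, which is all the policy update needs.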
◧◩◪◨⬒⬓⬔⧯▣
20. measur+X84[view] [source] [discussion] 2026-01-21 07:00:43
>>nl+X14
Claims require evidence, & if you are unwilling to present it then admit you do not have any evidence to support your claims. It's not complicated. Either RL works & you have evidence, or you do not know & can not claim that it works w/o first doing the required due diligence, which (shockingly) actually requires work instead of empty theorycrafting & hand-waving.
◧◩◪◨⬒⬓⬔
21. measur+f94[view] [source] [discussion] 2026-01-21 07:03:38
>>nl+v34
None of those are novel domains w/ their own novel syntax & semantic validators, not to mention the dearth of readily available sources of examples for sampling the baselines. So again, where does it say it works for a programming language with nothing but a grammar & a compiler?
replies(1): >>nl+ZO4
◧◩◪◨⬒⬓⬔⧯
22. nl+ZO4[view] [source] [discussion] 2026-01-21 12:21:53
>>measur+f94
To quote you:

> There is no RL for programming languages.

and

> Either RL works & you have evidence

This is just so completely wrong, and here is the evidence.

I think everyone in this thread is just surprised you don't seem to know this.

Haven't you seen the hundreds of job ads for people to write code for LLMs to train on?

replies(1): >>measur+DJ5
◧◩◪◨⬒⬓⬔⧯▣
23. measur+DJ5[view] [source] [discussion] 2026-01-21 16:48:55
>>nl+ZO4
You're not going to get less confused by doubling down. None of your claims are valid & this is because you haven't actually tried to do what you're suggesting. Taking a grammar & compiler & RLing will get you nowhere.