zlacker

[parent] [thread] 4 comments
1. jagged+(OP)[view] [source] 2026-01-12 09:18:11
Absolutely agreed. My theory is that the more tools you give the agent to lock down the possible output, the better it will be at producing correct output. My analogy is something like starting a simulated annealing run with bounds and heuristics to eliminate categorical false positives, or perhaps like starting the sieve of Eratosthenes with a prime wheel to lessen the busywork.

I also think opinionated tooling is important. For example, in the toy language I'm working on, there are no warnings and no ignore pragmas, so the LLM has to confront error messages before it can continue.

replies(1): >>Within+NK
2. Within+NK[view] [source] 2026-01-12 14:22:40
>>jagged+(OP)
It should be impossible for an LLM to generate invalid code, as long as you force it to only generate tokens that the language allows.
replies(1): >>measur+fPh
3. measur+fPh[view] [source] [discussion] 2026-01-17 01:49:10
>>Within+NK
Tokens do not encode semantics.
replies(1): >>Within+qni
4. Within+qni[view] [source] [discussion] 2026-01-17 09:32:44
>>measur+fPh
You can choose which token to sample based on language semantics. You simply don't sample invalid ones. So the language should be restrictive enough about which tokens it allows that invalid code is impossible.
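To illustrate the idea: a minimal sketch of constrained decoding, where tokens the grammar forbids in the current state are masked out before sampling, so the model can only ever emit syntactically valid output. The transition table, state names, and fake logits below are all hypothetical toys; real systems derive the allowed set from a parser or grammar state machine.

```python
# Toy "grammar" as a transition table: for each state, the set of tokens
# that may legally come next. A real implementation would compute this
# from a parser state rather than hard-code it.
ALLOWED = {
    "start": {"(", "1"},
    "(":     {"(", "1"},
    "1":     {"1", ")"},
    ")":     {")", "<eos>"},
}

def constrained_sample(logits: dict, state: str) -> str:
    """Pick the highest-scoring token the grammar allows in this state.

    Invalid tokens are masked out entirely (greedy selection here for
    simplicity; the same masking works with temperature sampling).
    """
    allowed = ALLOWED[state]
    candidates = {tok: score for tok, score in logits.items() if tok in allowed}
    if not candidates:
        raise ValueError("grammar dead end: no valid token to sample")
    return max(candidates, key=candidates.get)

# The model "prefers" ')' right away, but ')' is invalid in the start
# state, so it is masked and the next-best legal token wins.
fake_logits = {"(": 0.1, "1": 0.3, ")": 2.0, "<eos>": 0.0}
print(constrained_sample(fake_logits, "start"))  # -> "1"
```

This is exactly the mechanism the follow-up comment pokes at: the mask guarantees syntactic validity, but nothing here checks what the program means.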
replies(1): >>SkiFir+kBi
5. SkiFir+kBi[view] [source] [discussion] 2026-01-17 12:25:54
>>Within+qni
> You can choose which token to sample based on language semantics

Can you though?

> the language should be restrictive on what tokens it allows

This is a restriction on the language syntax, not its semantics.