zlacker

[return to "Nerd: A language for LLMs, not humans"]
1. nrhrjr+M2[view] [source] 2026-01-01 01:36:46
>>gnanag+(OP)
Feels like a dead-end optimisation, à la the bitter lesson.

No LLM has seen anywhere near as much of this language as Python, and context is now going to be mostly wordy, not codey (e.g. docs, specs, etc.).

2. norir+y4[view] [source] 2026-01-01 01:54:50
>>nrhrjr+M2
I suspect this is wrong. If you are correct, that implies to me that LLMs are not intelligent and are just exceptionally well tuned to echo back their training data. It makes no sense to me that a superior intelligence would be unable to trivially learn a new language syntax and apply its semantic knowledge to that syntax. So I believe that either LLMs will improve to the point that they easily pick up a new language, or we will realize that LLMs themselves are the dead end.
3. tyushk+k9[view] [source] 2026-01-01 02:45:41
>>norir+y4
I don't think your dichotomy holds. Even assuming LLMs are capable of learning beyond their training data, that just leads back to the purpose of practice in education. Even if you provide a full, unambiguous language spec to a model, and the model were capable of intelligently understanding it, should you expect its performance in your new language to match the petabytes of Python "practice" a model comes with?
4. lovidi+qe[view] [source] 2026-01-01 03:43:13
>>tyushk+k9
Further to this, you can trivially observe two more LLM weaknesses: 1. LLMs are bad at unusual syntax even with a complete description, e.g. writing Standard ML and similar languages, or any esolang. 2. Even with lots of training data, LLMs cannot generalise their output to a shape that doesn't resemble their training, e.g. ask an LLM to write any nontrivial assembly, like an OS bootstrap.

LLMs aren’t a “superior intelligence”, because every abstract concept they “learn” is learned emergently. They understand programming concepts within the scope of languages and tasks that easily map back to those concepts, and due to finite quantisation they can’t generalise those concepts from first principles. That is, they can map Python to programming concepts, but they can’t map programming concepts to an esoteric language with any reliability. Try doing some prompting and this becomes agonisingly apparent!
