zlacker

[return to "Nerd: A language for LLMs, not humans"]
1. nrhrjr+M2 2026-01-01 01:36:46
>>gnanag+(OP)
Feels like a dead-end optimisation, à la the bitter lesson.

No LLM has seen anywhere near as much of this language as it has of Python, and context is now going to be mostly wordy, not codey (e.g. docs, specs, etc.)

2. norir+y4 2026-01-01 01:54:50
>>nrhrjr+M2
I suspect this is wrong. If you are correct, that implies to me that LLMs are not intelligent and are just exceptionally well tuned to echo back their training data. It makes no sense to me that a superior intelligence would be unable to trivially learn a new language syntax and apply its semantic knowledge to it. So I believe that either LLMs will improve to the point that they easily pick up a new language, or we will realize that LLMs themselves are the dead end.
3. tyushk+k9 2026-01-01 02:45:41
>>norir+y4
I don't think your ultimatum holds. Even assuming LLMs are capable of learning beyond their training data, that just leads back to the purpose of practice in education. Even if you provided a full, unambiguous language spec to a model, and the model were capable of intelligently understanding it, should you expect its performance in your new language to match the petabytes of Python "practice" the model comes with?