zlacker

[return to "Nerd: A language for LLMs, not humans"]
1. joegib+53[view] [source] 2026-01-01 01:40:34
>>gnanag+(OP)
Would it make more sense to instead train a model that tokenises language syntax differently, so that whitespace isn't counted, keywords are each a single token, and so on?
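A minimal sketch of what that could look like (purely hypothetical scheme: keyword ids and the byte fallback are made up for illustration) — a tokeniser that discards whitespace entirely and emits one token per keyword, so two formattings of the same program produce identical token streams:

```python
import re

# Hypothetical vocabulary: each keyword gets a single dedicated token id.
KEYWORDS = {"def": 1000, "return": 1001, "if": 1002, "else": 1003}

def tokenise(source: str) -> list[int]:
    tokens = []
    # Match identifiers/keywords, numbers, or single punctuation chars;
    # whitespace never matches, so indentation is invisible to the model.
    for match in re.finditer(r"[A-Za-z_]\w*|\d+|[^\s\w]", source):
        lexeme = match.group()
        if lexeme in KEYWORDS:
            tokens.append(KEYWORDS[lexeme])  # one token per keyword
        else:
            # Toy fallback: raw byte values for everything else.
            tokens.extend(b for b in lexeme.encode())
    return tokens

# Logically identical programs with different whitespace tokenise the same.
a = tokenise("def f(x):\n    return x")
b = tokenise("def f(x): return x")
```

Here `a == b`, which is the point: the model never spends capacity learning that four spaces and a newline mean the same thing as one space.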
2. __Matr+I3[view] [source] 2026-01-01 01:46:38
>>joegib+53
After watching models struggle with string replacement in files, I've started to wonder if they'd be better off making those alterations in a Lisp, where it's normal to manipulate code not as a string but as a syntax tree.
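For contrast, the same idea is available outside Lisp too — a sketch using Python's standard `ast` module (the variable names and snippet are invented for illustration): rename an identifier by transforming the syntax tree rather than doing a textual find-and-replace, so an unrelated occurrence of the word inside a string literal is left alone.

```python
import ast

class Rename(ast.NodeTransformer):
    """Rewrite every Name node matching `old` to `new`."""
    def __init__(self, old: str, new: str):
        self.old, self.new = old, new

    def visit_Name(self, node: ast.Name) -> ast.Name:
        if node.id == self.old:
            node.id = self.new
        return node

source = 'count = 1\nprint("count is", count)'
tree = ast.parse(source)
new_source = ast.unparse(Rename("count", "total").visit(tree))
# The identifiers change, but the string literal "count is" survives intact —
# a naive str.replace("count", "total") would have mangled it.
```

In a Lisp the code already *is* the tree (s-expressions), so this kind of edit needs no parse/unparse round trip at all.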