zlacker

[parent] [thread] 4 comments
1. gianca+(OP)[view] [source] 2026-01-12 11:26:27
The real question is: what existing language is perfect for LLMs? Is it Lisp? ASM? We know some LLMs are better at some languages, but which existing language are they best at? Would be interesting to see. One spot I know they all fail at is niche programming libraries. They have to pull down docs or review the raw source of the dependency; the issue is that in some languages, like C# and Java, those dependencies are precompiled to bytecode.
replies(4): >>nzach+Eh >>ImJaso+1q >>quaton+6Bh >>wfn+GEk
2. nzach+Eh[view] [source] 2026-01-12 13:19:40
>>gianca+(OP)
> The real question is what existing language is perfect for LLMs?

I think verbosity in the language is even more important for LLMs than it is for humans. We can see a line like 'if x > y * 1.1 then ...' and relate the 1.1 to the 10% overbooking margin our company uses as a business metric. But for the LLM it would be way easier if it were 'if x > base * overbook_limit then ...'.
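A toy sketch of the contrast (the names `OVERBOOK_LIMIT` and `exceeds_overbooking` are made up for illustration, not from any real codebase):

```python
# Opaque version: the bare 1.1 carries no meaning the LLM can anchor to.
def should_flag(x, y):
    return x > y * 1.1

# Domain-named version: the business rule is visible in the code itself,
# so both humans and LLMs can connect it to the overbooking metric.
OVERBOOK_LIMIT = 1.1  # the company allows 10% overbooking

def exceeds_overbooking(bookings, base_capacity):
    return bookings > base_capacity * OVERBOOK_LIMIT
```

Both behave identically; only the second one tells the model (and the next maintainer) why the threshold exists.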

For me, it doesn't make too much sense to focus on the token limit as a hard constraint. I know that current SOTA LLMs still have pretty small context windows, and for that reason it seems reasonable to try to find a solution that optimizes the amount of information we can put into our contexts.

Besides that, we have the problem of 'context priming'. We rarely create abstract software; what we generally create is a piece of software that interacts with the real world. Sometimes directly through a set of APIs, and sometimes through a human that reads data from one system and uses it as input to another. So by using real-world terminology we improve the odds that the LLM does the right thing when we ask for a new feature.

And lastly, there is the advantage of having source code that can be audited when we need it.

3. ImJaso+1q[view] [source] 2026-01-12 14:00:25
>>gianca+(OP)
Early in one of the conversations Gemini actually proposed a Lisp-like language with S-expressions. I don't remember why it didn't follow that path, but I suspect it would have been happy there.
4. quaton+6Bh[view] [source] 2026-01-17 02:16:09
>>gianca+(OP)
I have been having a crack at it in my spare time. A kind of intentional LISP where functions get compiled to WASM in the cloud.

The functions are optionally checked with formal verification. I plan to enable this by default soon, as time allows.

The functions that get written can then be composed, and 'enzymes' that run in the cloud actively look for functions to fuse.
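Roughly what fusion means, sketched in Python rather than the actual Lisp (just an illustration of collapsing a composition chain into one function, not the system's real mechanism):

```python
from functools import reduce

def compose(*fns):
    """Right-to-left composition: compose(f, g)(x) == f(g(x))."""
    return reduce(lambda f, g: lambda x: f(g(x)), fns)

# Two small functions written and tested separately...
inc = lambda x: x + 1
double = lambda x: x * 2

# ...composed into a pipeline. A fusion pass rewrites the chain into a
# single function so the intermediate calls disappear at runtime.
pipeline = compose(double, inc)   # double(inc(x))
fused = lambda x: (x + 1) * 2     # the hand-fused equivalent
```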

Also, the more people use it, the faster the compiler gets via network scaling laws.

It's very much research at the moment, but kinda works.

It has a Jupyter-notebook-style interface with the beginnings of some image and media support.

https://prometheus.entrained.ai

You can try looking at some of the examples or try something yourself.

Would love some feedback.

5. wfn+GEk[view] [source] 2026-01-18 09:59:03
>>gianca+(OP)
I've been thinking about this; take a look at this:

> From Tool Calling to Symbolic Thinking: LLMs in a Persistent Lisp Metaprogramming Loop

https://arxiv.org/abs/2506.10021

Edit: but also see the cons[3] - maybe viable for very constrained domains, with strict namespace management and handling for dropping into the debugger. Also, after thinking more, it likely only sounds nice (Python vs. Lisp training corpora and library ecosystems; and there's mcp-py3repl (no reflection, but otherwise more viable), PAL, etc.). Still - curious.

In theory (I've seen people discuss similar things before, though), homoiconicity and a persistent REPL could provide benefits: code introspection (code is a traversable AST), a wider persistent context in a tree structure where the model can choose breadth vs. depth of context loading, progressive tool building, DSL building for a given domain, and (I know this is a bit hype-vibe) overall building up a toolkit of augmented, self-expanding symbolic reasoning tools for a given domain / problem / etc. (starting with "build up a toolkit for answering basic math questions, including long sequences of small digits where you would normally trip up due to your token-prediction-based LLM mechanism"[2]). Worth running some quick experiments maybe, hm :)
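As a rough analogy in Python (a Lisp makes this far more natural since code already is the data structure; the standard-library `ast` module is just the closest Python equivalent, and the snippet below is my own illustration, not from the paper):

```python
import ast

# In a homoiconic language the program *is* this tree; in Python we
# have to round-trip through the ast module to get at it.
src = "result = base * overbook_limit"
tree = ast.parse(src)

# The code is a traversable AST: an LLM-driven loop could walk it,
# collect the names it depends on, and decide which definitions to
# load into context next (breadth vs. depth).
names = [node.id for node in ast.walk(tree) if isinstance(node, ast.Name)]
```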

P.S. And thinking of agentic loops (a very, uh, contemporary topic these days), exposing ways for the system to manage and construct agent trees and loops itself is (while very possibly a recipe for disaster; either way the namespaces would need not to clash) certainly captivating to me (again, given effective code/data traversal and modification options; ideally with memoization / caching / etc.)

[1] https://arxiv.org/abs/2506.10021

[2] https://www.youtube.com/watch?v=AWqvBdqCAAE on need for hybrid systems

[3] cons (heh): hallucination in the metaprogramming layer and LLMs being fundamentally statistical models and not well trained for Lisp-like langs, and inevitable state pollution (unless some kind of clever additional harness applied) likely removes much of the hype...

[go to top]