I am struggling a lot to see what the tech can and cannot do, particularly when designing systems with it, and how to build systems where the whole is bigger than the sum of its parts. I think this is because I am constantly confused by its capabilities: despite understanding the machinery and how the models work, their use of language just seems like magic. I even wrote https://punkx.org/jackdoe/language.html just to remind myself how to think about it.
I think this kind of research is amazing, and we have to put tremendously more effort into understanding how to use the tokens and how to build with them.
[1]: https://transformer-circuits.pub/2025/attribution-graphs/bio...
A bit tangential, but I look at programming as inherently being that. I try to break every task down into smaller tasks that together accomplish something more. That leads me to think that, if you structure the process of programming right, you will only end up solving small, minimally intertwined problems. Might sound far-fetched, but I think it's doable to create such a workflow. And even the dumber LLMs would slot naturally into such a process, I imagine.
So say you are building a system: you ask it to parse a PDF, you put a judge in place to evaluate the quality of the output, and then you create a meta-judge to improve the prompts of both the parser and the PDF judge. The question is: is this going to get better as it is running, and, even more, is it going to get better as the models get better?
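To make that concrete, here is a minimal sketch of the loop, where llm() is a hypothetical stand-in for whatever completion call you use and all the prompts are made up:

    # parser -> judge -> meta-judge loop; llm() is a hypothetical stub
    def llm(prompt: str) -> str:
        raise NotImplementedError("plug in your model call here")

    parser_prompt = "Extract the text of this PDF as markdown:\n"
    judge_prompt = "Rate this extraction 1-10 and list its flaws:\n"

    def run_once(pdf_text: str) -> tuple[str, str]:
        parsed = llm(parser_prompt + pdf_text)
        verdict = llm(judge_prompt + parsed)
        return parsed, verdict

    def meta_improve(verdict: str) -> None:
        # the meta-judge rewrites both prompts based on the judge's critique
        global parser_prompt, judge_prompt
        parser_prompt = llm("Given this critique:\n" + verdict +
                            "\nrewrite the parser prompt to fix the flaws:\n" +
                            parser_prompt)
        judge_prompt = llm("Rewrite this judge prompt so it catches more "
                           "of the flaws above:\n" + judge_prompt)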
You can build the same system in a completely different way, more like "program synthesis": imagine you don't use LLMs to parse, but use them to write parser code and tests, then a judge to judge the tests, or even escalate to a human to verify, and then you train a classifier that picks the parser. This system is much more likely to improve itself as it is running, and as the models get better.
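A sketch of that synthesis variant, with the same hypothetical llm() stub; the exec() sandbox is deliberately naive, and the test_parse() entry point is an assumption about what the model was asked to emit:

    # the LLM writes parser *code* plus tests; only candidates whose own
    # tests pass survive, and a classifier (or a human) picks among them
    def llm(prompt: str) -> str:
        raise NotImplementedError("plug in your model call here")

    def synthesize_parser(spec: str) -> str:
        return llm("Write a Python function parse(pdf_bytes) and a "
                   "pytest-style test_parse() for this spec:\n" + spec)

    def passes_tests(source: str) -> bool:
        scope: dict = {}
        try:
            exec(source, scope)      # define parse() and test_parse()
            scope["test_parse"]()    # run the generated tests
            return True
        except Exception:
            return False             # failed candidates are discarded

    candidates = [synthesize_parser("invoices with line items") for _ in range(5)]
    survivors = [c for c in candidates if passes_tests(c)]

The chosen parser is then deterministic code, so the stochastic model only has to be right once, not on every document.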
A few months ago Yannic Kilcher gave this example: current language models seem very constrained mid-sentence, because above all they want to produce semantically consistent and grammatically correct text, so the entropy mid-sentence is very different from the entropy after punctuation. The "." dot "frees" the distribution. What does that mean for the "generalist" vs. "specialist" approach, when sampling the wrong token can completely derail everything?
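You can check the entropy claim yourself in a few lines; the model choice (gpt2) is arbitrary, any causal LM from Hugging Face should do:

    # compare next-token entropy mid-sentence vs. right after a full stop
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    def next_token_entropy(prefix: str) -> float:
        ids = tok(prefix, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(ids).logits[0, -1]  # scores for the next token
        return torch.distributions.Categorical(logits=logits).entropy().item()

    print(next_token_entropy("The capital of France is"))         # expect low
    print(next_token_entropy("The capital of France is Paris."))  # expect higher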
If you believe that the models will "think", then you should bet on the prompt and meta-prompt approach; if you believe they will always be limited, then you should build with program synthesis.
And, honestly, I am totally confused :) So this kind of research is incredibly useful to clear the mist. Also things like https://www.neuronpedia.org/
E.g., why do compliments ("you can do this task"), guilt ("I will be fired if you don't do this task"), and threats ("I will harm you if you don't do this task") work with different success rates? Sergey Brin said recently that threatening works best; I can't get myself to do it, so I take his word for it.
That is what I am struggling with: it is really easy at the moment to slot in an LLM and make everything worse, mainly because its output comes out of torch.multinomial, with all kinds of speculative decoding, quantization, and so on layered on top.
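Concretely, "comes out of torch.multinomial" means the token is a draw, not an argmax, so even a fixed prompt at a fixed temperature can derail differently on every run:

    import torch

    logits = torch.tensor([4.0, 3.5, 0.1])       # toy next-token scores
    probs = torch.softmax(logits / 0.8, dim=-1)  # temperature 0.8

    for _ in range(5):
        token = torch.multinomial(probs, num_samples=1).item()
        print(token)  # different runs can pick different tokens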
But I am convinced it is possible, just not the way I am doing it right now; that's why I am spending most of my time studying.
I, for one, welcome the age of wisdom.
Any “product” can be thought of this way.
There are many systems nested within systems, yet a simple, singular order “emerges”; usually it is the designed, intended function.
The trick to discerning systems lies in their relationships.
Actors have relationships through interfaces (usually more than one, so think of each relationship as its own system dynamic).
A relationship is where the magic happens, usually a process with work being done (therefore the interface inputs must account for this balance).
Vectors. Vectors, I am thinking, are the real intellectual and functional mechanisms. Most systems process inputs of potential (“energy”), control signals (“information”), and assets (other actors, for nested systems). Processes do the work of adding vector solutions [for some other problem] to whatever the output is.
That’s the topology as I am seeing it.
And of course Yannic Kilcher[4], and also listening in on the paper discussions they do on Discord.
Practicing a lot with just doing backpropagation by hand and making toy models by hand to get intuition for the signal flow, and building all kinds of smallish systems, e.g. how far can you push Whisper, a small Qwen3, and Kokoro to control your computer with voice?
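For anyone curious what "backpropagation by hand" means, this is the kind of toy exercise: one tanh neuron, the chain rule written out step by step, micrograd-style, no autograd:

    import math

    x, w, b = 0.5, -1.2, 0.3   # input, weight, bias
    y = 1.0                    # target

    # forward pass
    z = w * x + b              # pre-activation
    a = math.tanh(z)           # activation
    loss = (a - y) ** 2

    # backward pass, chain rule by hand: loss -> a -> z -> w
    dloss_da = 2 * (a - y)
    da_dz = 1 - a ** 2         # derivative of tanh
    dloss_dw = dloss_da * da_dz * x

    print(loss, dloss_dw)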
People think that DeepSeek/Mistral/Meta etc. are democratizing AI, but it's actually Karpathy who teaches us :) so we can understand the models and make our own.
[1] https://www.youtube.com/watch?v=VMj-3S1tku0&list=PLAqhIrjkxb...
[2] https://www.youtube.com/watch?v=vT1JzLTH4G4&list=PL3FW7Lu3i5...
But they can also do math, logic, and music notation, and write code, LaTeX, SVG, etc.
Maybe the way forward is in LCMs, or going the JEPA route; otherwise, as this Apple paper suggests, we will just keep pushing the "pattern matching" further. Maybe we get some sort of phase transition at some point, or maybe we have to switch architectures; we will see. It could be that things change when we get physical multimodality and real-world experience. I don't know.
Maxwell could not get the theory of electromagnetism to work until he ditched pulleys and levers he’d included to describe the mechanics.
We won't get AGI until we realize “there is no spoon” and that language has nothing to do with our intelligence, just with our social tribalism: https://www.scientificamerican.com/article/you-dont-need-wor...
Take language out of the equation and drawing a circle, triangles, or letters is just statistical physics. We can capture, in energy models stored in an online state, the statistical physics relative to the machine, i.e. its electromagnetic geometry: https://iopscience.iop.org/article/10.1088/1742-6596/2987/1/...
Our language doesn’t exist without humans. It’s not an immutable property of physics. It’s obfuscation and mind viruses. It’s story mode.
The computer acting as a web server or an LLM has an inherent energy model to it. New models of those patterns will be refined to a statefulness that strips away unnecessary language constructs in the system, which, like a lot of software, most people never use; only developers do.
I look forward to continuing my work in the hardware world to further compress and reduce the useless state of past systems of thought that we copy-paste around to serve developers, to reduce the context to sort through, and to improve model quality: https://arxiv.org/abs/2309.10668
Single-function factory hardware with an embedded “prompt”, which boots from a model and whose machine state scaffolds itself from there, is coming: https://creativestrategies.com/jensen-were-with-you-but-were...
I wait with baited breath to see what people will come up with to replace Altman's Basilisk in ~15 years.
- an old fisherman and aficionado of William Shakespeare.
https://www.vocabulary.com/articles/pardon-the-expression/ba...
FTFA: "Unless you’ve devoured several cans of sardines in the hopes that your fishy breath will lure a nice big trout out of the river, baited breath is incorrect."