zlacker

[return to "Exploring the limits of large language models as quant traders"]
1. lordna+Te[view] [source] 2025-11-19 10:01:47
>>rzk+(OP)
I was chatting with a friend in the space. He's experienced in both trading and LLMs, and has gone all-in on using LLMs to get his day-to-day coding done. Now he's working on the model to end all models, which is a fairly ambitious way to put it, but it throws off some interesting conversations.

You need domain knowledge to get this to work. Things like "we fed the model the market data" are actually non-obvious. There's more than one way to pre-process the data, and what the model sees will greatly affect what actions it comes up with. You also have to think about corner cases, eg when DeepMind applied AlphaStar to StarCraft, they had to restrict its action rate (APM), that kind of thing. Otherwise the model gets stuck in an imaginary money fountain.
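To make the action-rate point concrete, here's a minimal sketch of the kind of cap you'd bolt onto an agent's step loop. The class name and parameters are hypothetical, not from any real trading system: the idea is just a sliding window over recent action steps, analogous to the APM limits imposed on AlphaStar.

```python
from collections import deque


class ActionRateLimiter:
    """Cap how many actions an agent may take within a sliding
    window of environment steps (hypothetical illustration)."""

    def __init__(self, max_actions: int, window: int):
        self.max_actions = max_actions
        self.window = window
        self.history = deque()  # step indices of recently allowed actions

    def allow(self, step: int) -> bool:
        # Evict actions that have fallen out of the window.
        while self.history and step - self.history[0] >= self.window:
            self.history.popleft()
        # Permit the action only if the agent is under its budget;
        # otherwise force it to hold (a no-op) this step.
        if len(self.history) < self.max_actions:
            self.history.append(step)
            return True
        return False
```

In an RL loop you'd call `allow(step)` before executing the agent's chosen trade and substitute a no-op when it returns False, so the policy can't exploit unrealistically fast execution during training.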

But yeah, the AI thing hasn't passed the quant trading community by. There's a lot going on, with AI trading teams being hired at various shops.

2. JumpCr+og[view] [source] 2025-11-19 10:17:58
>>lordna+Te
> There might be more than one way to pre-process the data

I'm honestly more hopeful about AI replacing this process than the core algorithmic component, at least directly. (AI could help write the latter. But it's immediately useful for the former.)
