zlacker

[return to "We gave 5 LLMs $100K to trade stocks for 8 months"]
1. sethop+W[view] [source] 2025-12-04 23:13:11
>>cheese+(OP)
> Testing GPT-5, Claude, Gemini, Grok, and DeepSeek with $100K each over 8 months of backtested trading

So the results are meaningless - these LLMs have the advantage of foresight over historical data.

◧◩
2. PTRFRL+f1[view] [source] 2025-12-04 23:14:57
>>sethop+W
> We were careful to only run after each model’s training cutoff date. That way we could be sure the models couldn’t have memorized market outcomes.
◧◩◪
3. stusma+W3[view] [source] 2025-12-04 23:28:34
>>PTRFRL+f1
Even if it is after the cutoff date, wouldn't the models be able to query external sources to get data that could positively impact them? If the returns were smaller I could reasonably believe it, but beating the S&P 500 returns by 4x+ strains credulity.
◧◩◪◨
4. cheese+p6[view] [source] 2025-12-04 23:43:30
>>stusma+W3
We used the LLM APIs and provided custom tools, like a stock ticker tool that only returned price information up to the backtest date for the model. We did the same for the news APIs, technical indicator APIs, etc. It took quite a long time to make sure there wasn't any data leakage; the whole process took us about a month or two to build out.
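For anyone curious, the core of a leak-safe tool is just clamping every data request to the simulated clock. A minimal sketch of that idea (hypothetical names and data, not their actual code):

```python
from datetime import date

# Hypothetical price history: ticker -> {date: closing price}
PRICES = {
    "AAPL": {
        date(2025, 3, 3): 180.0,
        date(2025, 3, 4): 182.5,
        date(2025, 3, 5): 179.8,
    }
}

def get_price(ticker: str, query_date: date, backtest_date: date) -> float:
    """Return the close for query_date, but refuse any date past the
    simulated 'today' so the model can't peek at future prices."""
    if query_date > backtest_date:
        raise ValueError("data leakage: requested a date after the backtest clock")
    return PRICES[ticker][query_date]
```

Every tool (news, indicators, etc.) would need the same guard, with `backtest_date` injected by the harness rather than chosen by the model.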
◧◩◪◨⬒
5. alchem+9b[view] [source] 2025-12-05 00:14:58
>>cheese+p6
I have a hunch Grok's model cutoff is not accurate and it somehow has updated weights: they still call it the same Grok model since the params and size are unchanged, but they are incrementally training it in the background. Of course I don't know this, but it's what I would do in their situation, since ongoing incremental training could be a neat trick to improve their results against competitors, even if marginal. I also wouldn't trust the models to honestly disclose their decision process either.

That said, this is a fascinating area of research, and I do think LLM-driven fundamental investing and trading has a future.
