zlacker

[return to "We gave 5 LLMs $100K to trade stocks for 8 months"]
1. sethop+W[view] [source] 2025-12-04 23:13:11
>>cheese+(OP)
> Testing GPT-5, Claude, Gemini, Grok, and DeepSeek with $100K each over 8 months of backtested trading

So the results are meaningless - these LLMs have the advantage of foresight over historical data.

2. PTRFRL+f1[view] [source] 2025-12-04 23:14:57
>>sethop+W
> We were careful to only run the backtest after each model's training cutoff date. That way we could be sure the models couldn't have memorized market outcomes.
3. plufz+b2[view] [source] 2025-12-04 23:20:28
>>PTRFRL+f1
I know very little about the environment these models run in, but surely they have access to tools like vector embeddings / retrieval with more current data on various topics?
4. discon+E6[view] [source] 2025-12-04 23:44:53
>>plufz+b2
you can (via the API, or to a lesser degree through the settings in the web client) determine which tools, if any, a model can use
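
To make that concrete, here is a minimal sketch assuming the official openai and anthropic Python SDKs and API keys in the environment (model names are placeholders). Chat-style APIs only expose the tools you explicitly declare in the request, so omitting the `tools` parameter leaves the model with nothing but the prompt and its training data:

```python
# Minimal sketch: no `tools` parameter is passed in either request, so
# neither model can call out to web search, code execution, or retrieval.
from openai import OpenAI
from anthropic import Anthropic

prompt = "Given this price history, decide whether to buy, sell, or hold."

# OpenAI: no tools declared -> the model answers only from the prompt
# and its own training data.
openai_resp = OpenAI().chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(openai_resp.choices[0].message.content)

# Anthropic: same idea -- the Messages API only uses tools listed in
# `tools`, and none are listed here.
anthropic_resp = Anthropic().messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model name
    max_tokens=512,
    messages=[{"role": "user", "content": prompt}],
)
print(anthropic_resp.content[0].text)
```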
5. discon+u8[view] [source] 2025-12-04 23:55:25
>>discon+E6
with the exception that it doesn't seem possible to fully disable tool use for Grok 4