zlacker

[return to "Exploring the limits of large language models as quant traders"]
1. vita77+F8 2025-11-19 09:05:54
>>rzk+(OP)
This is very thoughtful and interesting. It's worth noting that this is just a start: in future iterations they're planning to give the LLMs much more to work with (e.g. news feeds). It's somewhat predictable that LLMs did poorly with only quantitative data (prices), but I'm very curious to see how they perform once they can read the news and Twitter sentiment.
2. rob_c+y9 2025-11-19 09:12:10
>>vita77+F8
Not only can I guarantee the models are bad with numbers; unless it's a highly tuned and modified version, they're also too slow for this arena. Stick to small attention transformers in better-suited model designs, which have much lower latencies than pre-trained LLMs...
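To make that concrete, here is a minimal sketch (assuming PyTorch) of the kind of small, purpose-built attention model this suggests: a tiny transformer encoder over a fixed window of returns that predicts the next return. The name and every size here (TinyPriceTransformer, window=64, d_model=32) are illustrative assumptions, not anything from the benchmark; the point is only that a model this small runs a forward pass in well under a millisecond, where an LLM call takes seconds.

    # Illustrative sketch only: a tiny attention model over price returns.
    import torch
    import torch.nn as nn

    class TinyPriceTransformer(nn.Module):
        def __init__(self, window=64, d_model=32, nhead=4, nlayers=2):
            super().__init__()
            self.embed = nn.Linear(1, d_model)  # one feature per step: the return
            self.pos = nn.Parameter(torch.zeros(window, d_model))  # learned positions
            layer = nn.TransformerEncoderLayer(
                d_model, nhead, dim_feedforward=64, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, nlayers)
            self.head = nn.Linear(d_model, 1)  # predict the next-step return

        def forward(self, x):  # x: (batch, window, 1)
            h = self.embed(x) + self.pos
            h = self.encoder(h)
            return self.head(h[:, -1])  # read off the last position's state

    model = TinyPriceTransformer()
    returns = torch.randn(1, 64, 1)  # dummy window of returns
    with torch.no_grad():
        pred = model(returns)  # cheap forward pass at this scale
    print(pred.shape)  # torch.Size([1, 1])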