zlacker

[return to "Exploring the limits of large language models as quant traders"]
1. callam+S8[view] [source] 2025-11-19 09:07:32
>>rzk+(OP)
The limits of LLMs for systematic trading were and are extremely obvious to anyone with a basic understanding of either field. You may as well be flipping a coin.
2. falcor+Od[view] [source] 2025-11-19 09:48:24
>>callam+S8
20 years ago NNs were considered toys, and it was "extremely obvious" to CS professors that AI couldn't be made to reliably distinguish between arbitrary photos of cats and dogs. But then in 2007 Microsoft released Asirra as a CAPTCHA problem [0], which prompted research, and we had an AI solving it not long after.

Edit - additional detail: The original Asirra paper from October 2007 claimed "Barring a major advance in machine vision, we expect computers will have no better than a 1/54,000 chance of solving it" [0]. It took Philippe Golle from Palo Alto a bit under a year to get "a classifier which is 82.7% accurate in telling apart the images of cats and dogs used in Asirra" and "solve a 12-image Asirra challenge automatically with probability 10.3%" [1].
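(Aside: the two reported numbers are roughly consistent with each other. A 12-image challenge is solved only when every image is classified correctly, so an 82.7% per-image accuracy compounds to about 0.827^12 ≈ 10.2%, close to the paper's 10.3%; Golle's actual attack does slightly better than this naive model. A back-of-the-envelope check, assuming independent per-image errors:)

```python
# Sanity check on the figures from Golle's paper [1], assuming each of
# the 12 classifications succeeds independently with probability p.
p = 0.827
challenge_success = p ** 12  # all 12 images must be classified correctly
print(f"{challenge_success:.1%}")  # ~10.2%, near the reported 10.3%
```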

Edit 2: History is chock-full of examples of human ingenuity solving problems for very little external gain. And here we have a problem where the incentive is almost literally a money printing machine. I expect progress to be very rapid.

[0] https://www.microsoft.com/en-us/research/publication/asirra-...

[1] https://xenon.stanford.edu/~pgolle/papers/dogcat.pdf

3. lambda+5k[view] [source] 2025-11-19 10:52:03
>>falcor+Od
What makes trading such a special case is that as you use new technology to increase the capability of your trading system, other market participants you are trading against will be doing the same; it's a never-ending arms race.
4. jstanl+5n[view] [source] 2025-11-19 11:23:38
>>lambda+5k
That doesn't mean it doesn't work. That means it does work!

If other market participants chose not to use something, that would be the sign that it doesn't work.
