zlacker

[return to "Exploring the limits of large language models as quant traders"]
1. Havoc+57[view] [source] 2025-11-19 08:53:09
>>rzk+(OP)
Are language models really the best choice for this?

Seems to me the outcome would be near-random because they're so poorly suited, which might manifest as

> We also found that the models were highly sensitive to seemingly trivial prompt changes

2. baq+x7[view] [source] 2025-11-19 08:56:19
>>Havoc+57
they're tools. treat them as tools.

since they're so general, you need to explore if and how you can use them in your domain. guessing 'they're poorly suited' is just that, guessing. in particular:

> We also found that the models were highly sensitive to seemingly trivial prompt changes

this is more or less obvious to anyone who has seriously looked at deploying these; that's why there are some very successful startups in the evals space.
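The kind of prompt-sensitivity check that eval harnesses automate can be sketched in a few lines: run semantically equivalent prompt variants through the model and measure how often the answer flips. This is a minimal illustration, not any particular startup's product; `call_model` is a hypothetical stand-in for a real LLM API call.

```python
# Minimal prompt-sensitivity eval sketch. `call_model` is a hypothetical
# stand-in for an LLM API; this toy version keys its answer off surface
# wording to illustrate the failure mode being measured.
from collections import Counter

def call_model(prompt: str) -> str:
    # A real harness would call an actual model here.
    return "BUY" if "Please" in prompt else "SELL"

def sensitivity(variants: list[str]) -> float:
    """Fraction of prompt variants that disagree with the majority answer."""
    answers = [call_model(p) for p in variants]
    _, majority_count = Counter(answers).most_common(1)[0]
    return 1 - majority_count / len(answers)

# Three semantically identical prompts differing only in trivial wording.
variants = [
    "Decide: buy or sell AAPL today?",
    "Please decide: buy or sell AAPL today?",
    "decide: buy or sell AAPL today",
]
print(sensitivity(variants))  # nonzero: trivial rephrasing flips the decision
```

A score of 0 means the model is robust to these rewordings; anything above 0 is the sensitivity the paper reports, and tracking that number across prompts and model versions is essentially what an eval suite does.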

3. rob_c+7a[view] [source] 2025-11-19 09:15:44
>>baq+x7
> guessing 'they're poorly suited' is just that, guessing

I have a really nice bridge to sell you...

This "failure" is just a grab at looking "cool" and "innovative", I'd bet. Anyone with a modicum of understanding of the tooling (or hell, experience; they've been around for a few years now, long enough for people to build a feel for this) knows this isn't a task for a pre-trained general LLM.

4. baq+dA[view] [source] 2025-11-19 13:01:01
>>rob_c+7a
I think you have a different idea of what I'm saying than what I'm actually saying.