zlacker

[return to "We gave 5 LLMs $100K to trade stocks for 8 months"]
1. bcrosb+l2[view] [source] 2025-12-04 23:20:57
>>cheese+(OP)
> Grok ended up performing the best while DeepSeek came close to second. Almost all the models had a tech-heavy portfolio which led them to do well. Gemini ended up in last place since it was the only one that had a large portfolio of non-tech stocks.

I'm not an investor or researcher, but this triggers my spidey sense... it seems to imply they aren't measuring what they think they are.

2. olliep+T2[view] [source] 2025-12-04 23:24:04
>>bcrosb+l2
A sounder approach would have been a Monte Carlo simulation: run 100 portfolios per model and compare the average performance.
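A toy sketch of that idea, assuming we abstract each model as a daily return distribution (the return/volatility numbers below are made up for illustration, not from the experiment):

```python
import random
import statistics

def simulate_portfolio(mean_daily_return, daily_vol, days=168, seed=None):
    """Compound one simulated run of daily returns starting from $100K."""
    rng = random.Random(seed)
    value = 100_000.0
    for _ in range(days):
        value *= 1 + rng.gauss(mean_daily_return, daily_vol)
    return value

def monte_carlo(mean_daily_return, daily_vol, runs=100):
    """Average and spread of final value across independent simulated runs."""
    finals = [simulate_portfolio(mean_daily_return, daily_vol, seed=i)
              for i in range(runs)]
    return statistics.mean(finals), statistics.stdev(finals)

# Hypothetical "model" averaging +0.05%/day with 1.5% daily volatility
avg, spread = monte_carlo(0.0005, 0.015)
```

The point is the `spread`: with 100 runs per model you can see whether one model's lead is signal or just one lucky draw from a wide distribution.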
3. observ+c6[view] [source] 2025-12-04 23:41:53
>>olliep+T2
Grok would likely have an advantage there as well: it has tighter coupling to X/Twitter, a better web search index, and fewer reality-distorting safety guardrails in its pretraining and system prompt. It's easy to imagine random market events that would nudge ChatGPT or Claude into adjusting their output to be more politically correct. DeepSeek would carry the most pretraining distortion, but the least distortion in practice if a neutral host were selected.

If the available tools were normalized, I'd expect a tighter distribution overall, but Grok would still land on top. Despite the rather public gaffes, we're going to see Grok pull further ahead, because xAI inherently has a 10-15% advantage in capabilities research per dollar spent.

OpenAI, Anthropic, and Google are all diffusing their resources on corporate safetyism while xAI is not. That advantage, all else being equal, is compounding, and I hope at some point it inspires the other labs to give up the moralizing, politically correct, self-righteous "we know better" stance and just focus on good AI.

I would love to see a frontier lab swarm approach, though. It'd also be interesting to do multi-agent collaborations that weight source inputs based on past performance, or use some orchestration algorithm that lets the group exploit the strengths of each individual model. Imagine 20 instances of each frontier model in a self-evolving swarm, with custom system prompt revision via a genetic-algorithm-style process, so that over time you get 20 distinct individual modes and roles per model.
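The "weight source inputs based on past performance" part could be sketched as a multiplicative-weights ensemble (everything here is hypothetical: the model names, the buy/sell framing, and the 1.1/0.9 reward factors):

```python
class WeightedEnsemble:
    """Toy orchestrator: weight each model's vote by its past track record."""

    def __init__(self, models):
        # Every model starts with a neutral weight of 1.0.
        self.weights = {m: 1.0 for m in models}

    def decide(self, votes):
        """votes: {model_name: action} -> action with the largest weighted tally."""
        tally = {}
        for model, action in votes.items():
            tally[action] = tally.get(action, 0.0) + self.weights[model]
        return max(tally, key=tally.get)

    def update(self, votes, outcome):
        """Multiplicatively reward models whose vote matched the realized outcome."""
        for model, action in votes.items():
            if action == outcome:
                self.weights[model] *= 1.1  # hypothetical reward factor
            else:
                self.weights[model] *= 0.9  # hypothetical penalty factor

ens = WeightedEnsemble(["grok", "claude", "gemini"])
votes = {"grok": "buy", "claude": "sell", "gemini": "buy"}
decision = ens.decide(votes)          # weighted majority of the three votes
ens.update(votes, "buy")              # after the trade resolves, adjust weights
```

Over many rounds, consistently accurate models dominate the vote, which is the basic mechanism the multi-agent idea above gestures at.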

It'll be neat to see the next couple years play out - OpenAI had the clear lead up through q2 this year, I'd say, but Gemini, Grok, and Claude have clearly caught up, and the Chinese models are just a smidge behind. We live in wonderfully interesting times.

4. KPGv2+sy[view] [source] 2025-12-05 03:40:59
>>observ+c6
OTOH it has the richest man in the world actively meddling in its results when they don't support his politics.
5. buu700+UC[view] [source] 2025-12-05 04:42:17
>>KPGv2+sy
Anyone who hasn't used Grok might be surprised to learn that it isn't shy about disagreeing with Elon on plenty of topics, political or otherwise. Any insinuation to the contrary seems to be pure marketing spin on his part.

Grok is often absurdly competent compared to other SOTA models, definitely not a tool I'd write off over its supposed political leanings. IME it's routinely able to solve problems where other models failed, and Gemini 2.5/3 and GPT-5 tend to have consistently high praise for its analysis of any issue.

That's as far as the base model/chatbot is concerned, at least. I'm less familiar with the X bot's work.

6. skeete+LP1[view] [source] 2025-12-05 14:13:44
>>buu700+UC
It's so wildly inconsistent that you can't build on top of it with any reliability. And getting high praise from any model is ridiculously easy: ask a question, make a statement, correct the model's dumb error, etc.
7. buu700+Ro3[view] [source] 2025-12-05 21:28:04
>>skeete+LP1
It's easy for us as humans to correct dumb mistakes made by AI. It's less easy for AI to correct mistakes made by AI.

What's remarkable on Grok's part is when it spends five minutes churning through a few thousand lines of code (not the whole codebase, just the relevant files) and arrives at the correct root cause of a complex bug in one shot.

Grok as a model may or may not be uniquely amazing per se, but the service's eagerness to throw compute at problems that genuinely demand it is a superpower that at least makes it uniquely amazing in practice. By comparison, even Gemini 3 often returns lazy/shallow/wrong responses (and I say that as a regular user of Gemini).
