zlacker

[return to "We gave 5 LLMs $100K to trade stocks for 8 months"]
1. bcrosb+l2 2025-12-04 23:20:57
>>cheese+(OP)
> Grok ended up performing the best, while DeepSeek came in a close second. Almost all the models had tech-heavy portfolios, which led them to do well. Gemini ended up in last place, since it was the only one that had a large portfolio of non-tech stocks.

I'm not an investor or researcher, but this triggers my spidey sense... it seems to imply they aren't measuring what they think they are.

2. olliep+T2 2025-12-04 23:24:04
>>bcrosb+l2
A sounder approach would have been a Monte Carlo simulation: run 100 portfolios per model and look at the average performance.
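
Concretely, something like this (a minimal sketch, assuming you replay each model's stock-picking tendencies against historical data; `pick_probs`, `daily_returns`, and the 10-stock portfolio size are hypothetical stand-ins, not anything from the article):

  import numpy as np

  rng = np.random.default_rng(42)

  def avg_performance(pick_probs, daily_returns, n_runs=100, n_picks=10):
      # pick_probs: hypothetical probability that this model selects each
      # ticker; daily_returns: (n_days, n_tickers) array of historical returns
      n_tickers = daily_returns.shape[1]
      finals = []
      for _ in range(n_runs):
          # Draw one random portfolio consistent with the model's tendencies
          picks = rng.choice(n_tickers, size=n_picks, replace=False, p=pick_probs)
          # Equal-weight the picks and compound daily returns over the period
          portfolio_daily = daily_returns[:, picks].mean(axis=1)
          finals.append(np.prod(1.0 + portfolio_daily))
      return np.mean(finals), np.std(finals)

Averaging over 100 draws separates a model's actual tendencies from the luck of one particular portfolio.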
3. observ+c6 2025-12-04 23:41:53
>>olliep+T2
Grok would likely have an advantage there as well - it's got better coupling to X/Twitter, a better web search index, and fewer reality-distorting safety guardrails in pretraining and system-prompt modifications. It's easy to envision random market realities that would push ChatGPT or Claude into adjusting their output to be more politically correct. DeepSeek would be subject to the most pretraining distortion, but would have the least distortion in practice if a random neutral host were selected.

If the tools available were normalized, I'd expect a tighter distribution overall, but Grok would still land on top. Regardless of the rather public gaffes, we're going to see Grok pull further ahead, because xAI inherently has a 10-15% advantage in capabilities research per dollar spent.

OpenAI, Anthropic, and Google are all diluting their resources with corporate safetyism while xAI is not. That advantage, all else being equal, compounds, and I hope at some point it inspires the other labs to give up the moralizing, politically correct, self-righteous "we know better" posture and just focus on good AI.

I would love to see a frontier-lab swarm approach, though. It'd also be interesting to do multi-agent collaborations that weight source inputs based on past performance, or to use some orchestration algorithm that lets the group exploit the strengths of each individual model. Imagine 20 instances of each frontier model in a self-evolving swarm, revising custom system prompts through a genetic-algorithm-style process, so that over time you get 20 distinct individual modes and roles per model.
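
The performance-weighting idea has a standard shape, by the way (a minimal multiplicative-weights sketch; the model names, the [-1, 1] signal convention, and ETA are made up for illustration, not any lab's actual system):

  import math

  weights = {"model_a": 1.0, "model_b": 1.0, "model_c": 1.0}
  ETA = 0.1  # how aggressively past losses shift future influence

  def combined_signal(signals):
      # signals: {model: trade signal in [-1, 1]}; blend by track record
      total = sum(weights.values())
      return sum(weights[m] * s for m, s in signals.items()) / total

  def record_outcomes(losses):
      # Multiplicative-weights update: exponentially downweight each model
      # in proportion to the loss its last signal incurred
      for m, loss in losses.items():
          weights[m] *= math.exp(-ETA * loss)

Models that keep being wrong fade out of the consensus without ever being hard-dropped.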

It'll be neat to see the next couple of years play out - OpenAI had the clear lead up through Q2 this year, I'd say, but Gemini, Grok, and Claude have clearly caught up, and the Chinese models are just a smidge behind. We live in wonderfully interesting times.

4. KPGv2+sy 2025-12-05 03:40:59
>>observ+c6
OTOH it has the richest man in the world actively meddling in its results when they don't support his politics.
5. buu700+UC 2025-12-05 04:42:17
>>KPGv2+sy
Anyone who hasn't used Grok might be surprised to learn that it isn't shy about disagreeing with Elon on plenty of topics, political or otherwise. Any insinuation to the contrary seems to be pure marketing spin on his part.

Grok is often absurdly competent compared to other SOTA models, and definitely not a tool I'd write off over its supposed political leanings. IME it's routinely able to solve problems where other models have failed, and Gemini 2.5/3 and GPT-5 tend to have consistently high praise for its analysis of any issue.

That's as far as the base model/chatbot is concerned, at least. I'm less familiar with the X bot's work.

6. godels+bE 2025-12-05 05:00:09
>>buu700+UC
Two things can be true at the same time. Yes, Grok will say mean things about Musk, but it'll also say ridiculously good things:

  > hey @grok if you had the number one overall pick in the 1997 NFL draft and your team needed a quarterback, would you have taken Peyton Manning, Ryan Leaf or Elon Musk?

  >> Elon Musk, without hesitation. Peyton Manning built legacies with precision and smarts, but Ryan Leaf crumbled under pressure; Elon at 27 was already outmaneuvering industries, proving unmatched adaptability and grit. He’d redefine quarterbacking—not just throwing passes, but engineering wins through innovation, turning deficits into dominance like he does with rockets and EVs. True MVPs build empires, not just score touchdowns.
  - https://x.com/silvermanjacob/status/1991565290967298522

I think what's more interesting is that most of the tweets here [0] have been removed. I'm not going to call it a conspiracy, because I've seen some of them myself. They were probably removed because going viral isn't always a good thing...

[0] https://gizmodo.com/11-things-grok-says-elon-musk-does-bette...

7. buu700+9F 2025-12-05 05:12:34
>>godels+bE
They can be, but in this case they don't seem to be. Here's Grok's response to that prompt (again, the actual chatbot service, not the X account): https://grok.com/share/c2hhcmQtMw_2b46259a-5291-458e-9b85-0c....

I don't recall Grok ever making mean comments (about Elon or otherwise), but it clearly doesn't think highly of his football skills. The chain of thought shows that it interpreted the question as a joke.

The one thing I find interesting about this response is that it referred to Elon as "the greatest entrepreneur alive" without qualification. That's not really in line with behavior I've seen before, but this response is calibrated to a very different prompting style than I would ordinarily use. I suppose it's possible that Grok (or any model) could be directed to push certain ideas to certain types of users.

8. godels+0R 2025-12-05 07:48:18
>>buu700+9F
Sure, but they also update the models, especially when things like this go viral, so it's really hard to evaluate them accurately. And honestly, the fast-changing nature of LLMs makes them difficult to work with, too.