zlacker

Two kinds of AI users are emerging
1. defros+4b 2026-02-02 01:23:49
>>martin+(OP)
The "upside" description:

  On the other you have a non-technical executive who's got his head round Claude Code and can run e.g. Python locally.

  I helped one recently almost one-shot converting a 30 sheet mind numbingly complicated Excel financial model to Python with Claude Code.

  Once the model is in Python, you effectively have a data science team in your pocket with Claude Code. You can easily run Monte Carlo simulations, pull external data sources as inputs, build web dashboards and have Claude Code work with you to really interrogate weaknesses in your model (or business). It's a pretty magical experience watching someone realise they have so much power at their fingertips, without having to grind away for hours/days in Excel.
almost makes me physically sick.

I've a reasonably intense math background, corrupted by applying it to geophysics and to real-world numerical software.

To be fair, this statement alone:

* 30-sheet, mind-numbingly complicated Excel financial model

makes my skin crawl and invokes a flight reflex.

Still, I'll concede that a Claude Code conversion of a 30-sheet Excel financial model to Python is unlikely to be significantly worse than the original.
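
For anyone wondering what the quoted "Monte Carlo in your pocket" bit actually amounts to once the model is plain Python, here's a minimal sketch. The `project_npv` function is a made-up stand-in for whatever module the conversion produced, not anything from the actual exercise:

  import random

  # Made-up stand-in for the converted spreadsheet; the real thing
  # would be the Python module the conversion produced.
  def project_npv(revenue_growth: float, margin: float) -> float:
      cash_flows = [100 * (1 + revenue_growth) ** yr * margin
                    for yr in range(1, 6)]
      # Discount five years of cash flows at a flat 8%.
      return sum(cf / 1.08 ** yr
                 for yr, cf in enumerate(cash_flows, start=1))

  # The Monte Carlo part: sample the uncertain inputs, run the model,
  # read off the distribution. Painful in Excel, a dozen lines here.
  random.seed(42)
  samples = sorted(
      project_npv(random.gauss(0.05, 0.02), random.gauss(0.20, 0.05))
      for _ in range(100_000)
  )
  print(f"median NPV: {samples[len(samples) // 2]:.1f}")
  print(f"5th percentile NPV: {samples[int(0.05 * len(samples))]:.1f}")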

2. majorm+cd 2026-02-02 01:41:53
>>defros+4b
One of the dirty secrets of a lot of these "code adjacent" areas is that they have very little testing.

If a data science team modeled something incorrectly in their simulation, who's gonna catch it? Usually nobody. At least not until it's too late. Will you say "this doesn't look plausible" about the output? Or maybe you'll be too worried about getting chided for "not being data driven" enough.

If an exec tells an intern or temp to vibecode that thing instead, then you definitely won't have any checkpoints in the process to make sure the human-language prompt describing the process was properly turned into the right simulation. But unlike in coding, you don't have a user-facing product that someone can click around in, or send requests to, and verify. Is there a test suite for the giant Excel doc? I'm assuming not, but maybe I'm wrong.
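
If you did want a checkpoint, the cheapest one is a regression test pinning the port to values read straight off the original sheet. A minimal sketch, where `model.py` and the expected numbers are placeholders for the hypothetical converted module and hand-copied cell values:

  # test_model.py -- run with pytest. Note this only proves the Python
  # port reproduces the spreadsheet, not that the spreadsheet was right.
  import pytest

  from model import project_npv  # hypothetical converted module

  # (inputs) -> output: placeholder values standing in for numbers
  # copied by hand out of the original workbook's cells.
  SPREADSHEET_CASES = [
      ((0.05, 0.20), 83.96),
      ((0.00, 0.20), 79.85),
      ((0.10, 0.10), 44.13),
  ]

  @pytest.mark.parametrize("inputs,expected", SPREADSHEET_CASES)
  def test_port_matches_workbook(inputs, expected):
      # Loose relative tolerance to absorb spreadsheet display rounding.
      assert project_npv(*inputs) == pytest.approx(expected, rel=1e-2)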

It feels like this is going to be very hard for anyone working in areas with less black-and-white verifiability or correctness, like that sort of financial modeling.

3. benjij+241 2026-02-02 11:11:10
>>majorm+cd
> If a data science team modeled something incorrectly in their simulation, who's gonna catch it? Usually nobody. At least not until it's too late. Will you say "this doesn't look plausible" about the output?

I recently watched a demo from a data science guy about the impending proliferation of AI in just about all related fields; his position was highly sceptical, but with a "let's make the most of it while we can" attitude.

The part that stood out to me, and which I have repeated to colleagues since, was a demo where he fed his tame robot a .csv of price trends for apples and bananas and asked it to visualise the data. Sure enough, out comes a nice-looking graph with two jagged lines. Pack it, ship it, move on...

But then he reveals that, since he wrote the data himself, he knows both lines should just be an upward trend. He expands the axis labels: the LLM has alphabetized the months, and said nothing about it in any of the outputs.
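
For what it's worth, that failure mode is trivial to reproduce by hand; a minimal sketch with made-up prices (not his data), showing the alphabetical sort and the ordered-categorical fix:

  import pandas as pd

  months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
  df = pd.DataFrame({"month": months,
                     "apples": [1.0, 1.1, 1.2, 1.3, 1.4, 1.5]})

  # The trap: sorting or grouping on the month strings orders them
  # alphabetically, turning a clean upward trend into a jagged line
  # while every individual value stays "correct".
  print(df.sort_values("month")["month"].tolist())
  # ['Apr', 'Feb', 'Jan', 'Jun', 'Mar', 'May']

  # The fix: an ordered categorical, so calendar order wins the sort.
  df["month"] = pd.Categorical(df["month"], categories=months,
                               ordered=True)
  print(df.sort_values("month")["month"].tolist())
  # ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun']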

4. senord+jo1 2026-02-02 13:42:20
>>benjij+241
Like every anecdote out there where an LLM makes a basic mistake, this one is worthless without knowing the model and prompt.
5. benjij+8P1 2026-02-02 16:07:47
>>senord+jo1
I don't recall the bot he was using; it was a rushed portion of the presentation, there to make the point that "yes, these tools exist, but be mindful of the output: they're not a magic wand".