If a data science team modeled something incorrectly in their simulation, who's gonna catch it? Usually nobody. At least not until it's too late. Will you say "this doesn't look plausible" about the output? Or maybe you'll be too worried about getting chided for "not being data driven" enough.
If an exec tells an intern or temp to vibecode that thing instead, then you definitely won't have any checkpoints in the process to make sure the human-language prompt describing the process was properly turned into the right simulation. But unlike in coding, you don't have a user-facing product that someone can click around in, or send requests to, and verify. Is there a test suite for the giant Excel doc? I'm assuming no, but maybe I'm wrong.
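For what it's worth, a test suite for a spreadsheet model doesn't have to be exotic. One low-tech sketch (the formula and the numbers here are entirely made up, just to show the shape of it): re-implement the key cell formulas in plain code and pin them against hand-worked cases.

```python
# Hypothetical sketch: the "test suite" is the model's key formulas
# re-implemented in code and checked against cases worked out by hand,
# independently of the spreadsheet.
def projected_revenue(units, price, churn_rate):
    # stands in for a cell formula like =B2*C2*(1-D2); invented for illustration
    return units * price * (1 - churn_rate)

# fixtures computed by hand
assert projected_revenue(100, 10.0, 0.0) == 1000.0
assert projected_revenue(100, 10.0, 0.2) == 800.0
# a plausibility bound: total churn can never make revenue negative
assert projected_revenue(100, 10.0, 1.0) == 0.0
```

Even a handful of checks like this catches the "someone dragged a formula one row too far" class of bug that otherwise ships in the press release.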
It feels like it's going to be very hard for anyone working in areas with less black-and-white verifiability or correctness like that sort of financial modeling.
Any, and I mean any, statistic someone throws at me I will try to dig into. And if I'm able to, I will usually find that something is very wrong somewhere. As in: the underlying data is usually just wrong, invalidating the whole thing, or the data is reasonably sound but the person doing the analysis is making incorrect assumptions about parts of the data and then drawing incorrect conclusions.
Can't tell you how many times I've seen product managers making decisions based on a few hundred analytics events, trying to glean insight where there is none.
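To make the "few hundred events" problem concrete, here's a rough sketch using the normal-approximation confidence interval (the event counts are invented): with 150 events per variant, a 36% vs 32% "winner" is statistical noise.

```python
import math

def conversion_ci(successes, n, z=1.96):
    """Rough 95% normal-approximation CI for a conversion rate."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

# Hypothetical numbers: variant A converts 54/150, variant B 48/150,
# so A "wins" on the dashboard.
lo_a, hi_a = conversion_ci(54, 150)
lo_b, hi_b = conversion_ci(48, 150)
print(f"A: {lo_a:.3f}-{hi_a:.3f}")  # roughly 0.28-0.44
print(f"B: {lo_b:.3f}-{hi_b:.3f}")  # roughly 0.25-0.39, heavily overlapping
```

The intervals overlap almost entirely, so "A beat B" is exactly the kind of insight-where-there-is-none the parent comment is describing.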
There are often more errors. Sometimes the actual results in reality are wildly different from what a model expects... but the data treatment has been bug-hunted until it does what was expected... and then attention fades away.
A huge test for me was to have people review my analyses and poke holes. You feel good when your last 50 reports didn’t have a single thing anyone could point out.
I’ve been seeing a lot of people try to build analyses with AI who haven’t been burned by the “just because it sounds correct doesn’t mean it’s right” dilemma, and who haven’t realized what it takes before you can stamp your name on an analysis.
The local statistics office here recently presented salary statistics claiming that teachers' salaries had unexpectedly increased by 50%. All the press releases went out, and it was only questions raised by the public that forced the statistics office to review and correct the data.
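A cheap pre-release sanity check would have flagged that before the press releases went out. A sketch (the threshold, categories, and salary figures are all made up):

```python
def flag_implausible_changes(prev, curr, threshold=0.15):
    """Flag any series whose year-over-year change exceeds a plausibility
    threshold. The 15% default is an arbitrary assumption for illustration."""
    flags = {}
    for key in curr:
        change = (curr[key] - prev[key]) / prev[key]
        if abs(change) > threshold:
            flags[key] = change
    return flags

# Hypothetical figures mirroring the anecdote: a +50% jump for teachers.
salaries_prev = {"teachers": 48_000, "nurses": 45_000}
salaries_curr = {"teachers": 72_000, "nurses": 46_500}
print(flag_implausible_changes(salaries_prev, salaries_curr))
# {'teachers': 0.5}
```

Anything flagged goes to a human before publication. It doesn't prove the number is wrong, but "teacher pay rose 50% in one year" should at minimum trigger a review before the public has to do the statistics office's job for it.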
I think 1) holds (as my experience matches your cynicism :), but I have a feeling that data minded people tend to overestimate the importance of 2)...
https://www.newscientist.com/article/dn23448-how-to-stop-exc...
In my experience, many of the statistics these people use don't matter to the success of a business --- they are vanity metrics. But people use statistics, and especially the wrong statistics, to push their agenda. Regardless, it's important to get the statistics right.
What also helps in entrepreneurship is having a bias for action. So even if your insights are wrong, if you act and keep acting, you will partially shape reality and bend it to your will.
So there are certain forces where you can compensate for your lack of rigor.
The best companies have both on their side: rigor and a bias for action.
Back in my data scientist days I used to push for testing and verification of models. Got told off for reducing the team's speed. If the model works well enough to get money in, and the managers who make the final calls don't understand the implications of being wrong, there's no pressure to verify. And that would be the majority of cases.
What are you optimizing all that code for, it works doesn't it? Don't let perfect be the enemy of good. If it works 80% that's enough, just push it. What is technical debt?
I recently watched a demo from a data science guy about the impending proliferation of AI in just about all related fields. His position was highly sceptical, but with a "let's make the most of it while we can" attitude.
The part that stood out to me, which I have repeated to colleagues since, was a demo where the guy fed his tame robot a .csv of price trends for apples and bananas and asked it to visualise this. Sure enough, out comes a nice-looking graph with two jagged lines. Pack it, ship it, move on...
But then he reveals that, as he wrote the data himself, he knows that both lines should just be an upward trend. He expands the axis labels: the LLM has alphabetized the months but said nothing about it in any of the outputs.
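You can reproduce that failure mode without any LLM at all. Sorting month labels alphabetically scrambles a perfectly monotonic series, and nothing errors out anywhere:

```python
# Miniature of the anecdote: strictly rising prices keyed by month name.
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
price = {m: i + 1.0 for i, m in enumerate(months)}  # strictly increasing

chronological = [price[m] for m in months]
alphabetized = [price[m] for m in sorted(months)]   # Apr, Feb, Jan, Jun, Mar, May

print(chronological)  # [1.0, 2.0, 3.0, 4.0, 5.0, 6.0] -- a clean upward line
print(alphabetized)   # [4.0, 2.0, 1.0, 6.0, 3.0, 5.0] -- jagged, plots without complaint
```

The jagged version is a perfectly valid chart of perfectly valid numbers, which is exactly why nobody who didn't author the data would catch it.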
I bet you do this only 75% of the time.