This is scientific malpractice! The most ridiculous confidence interval I've ever seen! 1.02 to 9.7, reported as "tripled", seriously? And of course the data is non-blinded, self-reported survey responses recalling events that occurred many years ago, and the analysis is not preregistered and splits the cohort in an arbitrary way to eke out so-called "statistical significance" (by the slimmest imaginable margin, 1.02 > 1.00, just barely).
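For concreteness, here's roughly how an interval like that comes about. These 2x2 counts are hypothetical (the paper's actual table isn't reproduced here); they're just numbers that happen to produce a similarly wide interval:

```python
import math

# Hypothetical exposed/unexposed x case/non-case counts -- NOT the
# paper's data, just values chosen to give a similarly wide interval.
a, b = 12, 88   # exposed:   cases, non-cases
c, d = 4, 96    # unexposed: cases, non-cases

odds_ratio = (a * d) / (b * c)

# Woolf (log) method: log(OR) is approximately normal with
# standard error sqrt(1/a + 1/b + 1/c + 1/d).
se = math.sqrt(1/a + 1/b + 1/c + 1/d)
z = 1.96  # two-sided 95% confidence
lower = math.exp(math.log(odds_ratio) - z * se)
upper = math.exp(math.log(odds_ratio) + z * se)

print(f"OR = {odds_ratio:.2f}, 95% CI = ({lower:.2f}, {upper:.2f})")
# -> OR = 3.27, 95% CI = (1.02, 10.52)
```

Small cell counts blow the interval wide open: a point estimate near 3 ("tripled!") is consistent with anything from essentially no effect to a tenfold increase.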
How can this dreck be published? Everyone involved should be sanctioned. And everyone who took this headline at face value should seriously reconsider their approach to consuming science news.
https://old.reddit.com/r/science/comments/16t4eyg/drinking_d...
Assuming the data is valid and unbiased, of course.
Not a statistician, just curious.
The parent comment's point is that although the reported effect is significant at $\alpha = 0.05$ (the usual "95% CI" you mentioned), there are other problems that render their test of this hypothesis less than valid.
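To spell out the duality (a standard fact about how such intervals and tests relate, not anything specific to this paper): for an odds ratio the null value is $\mathrm{OR} = 1$, and

$$\text{reject } H_0\colon \mathrm{OR} = 1 \text{ at } \alpha = 0.05 \iff 1 \notin 95\%\ \text{CI for } \mathrm{OR},$$

so a lower bound of $1.02$ means the interval clears the null value by $0.02$: significance by the thinnest margin the procedure allows.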
Edit: for those curious about odds ratios: https://www.ncbi.nlm.nih.gov/books/NBK431098/#:~:text=The%20....
https://en.wikipedia.org/wiki/MDPI#Resignations_of_editors
> In August 2018, 10 senior editors (including the editor-in-chief) of the journal Nutrients resigned, alleging that MDPI forced the replacement of the editor-in-chief because of his high editorial standards and for resisting pressure to "accept manuscripts of mediocre quality and importance."
Normally when I see a bad study there are like one or two serious problems with the methodology, but reading this one through, almost every couple of paragraphs the authors say something or describe methodology that should give the reader pause. From literally the first paragraph of the intro:
> Changes in diagnostic definitions and guidelines and increased testing availability and funding have made major contributions to this increase in diagnosed cases; under the added impacts of changes in dietary, environmental, and other exposures affecting the intrauterine environment, ASD prevalence has reached unprecedented proportions.
Those two sentences contradict each other! You can't just tie them together with a semicolon like one thought implies the other. I'm not even saying that autism cases aren't actually rising at all, but you can't just go “our diagnostic criteria have changed; therefore environmental and dietary exposures are the cause.” You have to put in the bare minimum amount of work to explain why you think diagnostic criteria and social awareness aren't the primary causes; you can't just claim that changing diagnostic criteria itself implies diets are to blame.
It's unsurprising that somebody who would write this way would do bad statistical analysis.
That’s explained here: https://xkcd.com/882/
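If you want to see that effect numerically, here's a minimal sketch: pure null data (no real effect anywhere), tested across a made-up number of subgroups, nothing taken from the paper:

```python
import random

random.seed(882)  # the xkcd number, for flavor

def subgroup_fishing(n_subgroups=20, n_per_group=100, trials=500):
    """How often does *at least one* subgroup look 'significant'
    at alpha = 0.05 when no real effect exists in any of them?"""
    hits = 0
    for _ in range(trials):
        found = False
        for _ in range(n_subgroups):
            # Two samples with the SAME true rate 0.5 -- the null is true.
            x = sum(random.random() < 0.5 for _ in range(n_per_group))
            y = sum(random.random() < 0.5 for _ in range(n_per_group))
            p1, p2 = x / n_per_group, y / n_per_group
            pooled = (x + y) / (2 * n_per_group)
            se = (2 * pooled * (1 - pooled) / n_per_group) ** 0.5
            if se > 0 and abs(p1 - p2) / se > 1.96:  # |z| > 1.96 ~ p < 0.05
                found = True
                break
        hits += found
    return hits / trials

# Expect roughly 1 - 0.95**20 ~ 0.64: about a two-in-three chance of
# a "significant" subgroup from noise alone.
print(subgroup_fishing())
```

Split the cohort enough ways and "green jelly beans cause acne" is nearly guaranteed.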
> Those two sentences contradict each other!
I'd say that it's a non sequitur: the part of the sentence before the semicolon states something completely different from what follows, so the ‘impacts’ can't be ‘added’ - they don't have the same units.
To be fair, an upvote doesn't necessarily mean "I agree with this"; it often just means "this is the topic I would like to discuss".
I agree that the article is crap though.
I understood enough to see that this study found a correlation and that it was based on surveys. I thought it was an interesting finding, and concluded that the correlation should be examined more closely with more rigorous studies.
I did not go into the details of methodology and statistics and did not conclude, like you did, that this study has dubious value.
This is a trap the public finds itself in with science reporting. Many people on HN have enough technical training to grasp these concepts, but not to truly understand them. I myself program, and use many of the same intellectual building blocks scientists use in the execution of my job. But I am not a scientist.
“I am not a scientist” is the key point, because it means that I do not understand science. I know the “process” of science. I’ve read scientific papers. I’ve done toy experiments in school and in university. But I don’t understand it. To draw an analogy: programming is a perception of reality. There are things I do that I can never explain to management, because they have no direct experience of it. The “identify a bird in the park” XKCD comic is a meme of this concept [1], notwithstanding advances in AI research.
Like programming, science is a perception of reality. Like my management, I may have taken statistics, and I know what confidence intervals are. But I have not lived the experience of building an experiment: getting results, analyzing them, and constructing the distinctions necessary to reach an interesting and valid conclusion. If you’ve gone through that process, you know where to look for problems in a study. I and many people don’t. We will at best say that further study is needed, and at worst say that diet soda causes autism.
The public depends on experts to enter the conversation and share why things are wrong. This of course runs into the problem of “lies will make it halfway around the world while the truth is still putting on its shoes”. Retractions may be made, but never perceived. This makes us vulnerable to bad-faith actors employing the Gish gallop, and there’s not a general-purpose solution to that.
I was "around" science for a good chunk of my life (both mom and that used to be academics, and I spent 8 years in academia myself doing a phd and postdoc).
The amount of crap studies, politics, and bullshit that goes on in those circles will make you realise how sad the state of the "advancement of science" is. And my experience spans 3 very different countries. We desperately need something like AI to be able to synthesize and filter scientific publications.