https://fivethirtyeight.com/features/science-isnt-broken/
Another study with the same goal of comparing the results from different research teams found similar disparities, though the graphs aren’t quite as pretty.
What do you mean, all different? Most are exactly the same. The first 4 are a bit low and the last 3 a bit high, but the last 2 and the first are also extremely wide, so they’re irrelevant anyway. Everything else agrees, most with >99% confidence, with only slight differences in the absolute values.
9 of the teams reaching a different conclusion is a pretty large group. Nearly a third of the teams, using what I assume are legitimate methods, disagree with the findings of the other 20 teams.
Sure, not all teams disagree, but a lot do. So the issue is whether or not the current research paradigm correctly answers “subjective” questions such as these.
If we only look at those with p < 0.05 (green) and a 95% confidence interval, then there are 17 teams left. And they all(!) agree with more than 95% confidence.
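For what it’s worth, that agreement check is mechanical: a set of intervals all contain a common value exactly when the largest lower bound is at or below the smallest upper bound. A quick Python sketch, with made-up interval values standing in for the ones on the chart:

    # Hypothetical 95% confidence intervals (as odds ratios) for a few of
    # the filtered teams; real values would be read off the linked chart.
    intervals = [(1.10, 1.60), (1.20, 1.75), (1.05, 1.50), (1.15, 1.80)]

    # All intervals share a common value iff
    # max(lower bounds) <= min(upper bounds).
    lo = max(low for low, _ in intervals)
    hi = min(high for _, high in intervals)

    if lo <= hi:
        print(f"all intervals overlap on [{lo:.2f}, {hi:.2f}]")
    else:
        print("at least two intervals are disjoint")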
And you missed the point in the very article about how the p-value isn’t really as useful as it’s been touted.
The chart sure makes it look like there was an overall consensus that refs are about 1.5x as likely, though.
Obligatory link to Statistics Done Wrong: The Woefully Complete Guide, a book on how statistics can be, and has been, abused in subtle and insidious ways, sometimes recklessly. See in particular the chapters on the consequences of underpowered statistics and on comparing statistical significance between studies.
I’m no expert on statistics, but I know enough to know that repeated experiments should not yield wildly different results unless: 1) the phenomenon under observation is extremely subtle, so results are getting lost in noise, 2) the experiments were performed incorrectly, or 3) the results aren’t wildly divergent after all.
- The whole point of statistics is to extract subtle signals from noise; if you’re getting wildly different results, the problem is that you’re underpowered.
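To make the underpowered case concrete, here’s a toy Monte Carlo in Python: the same small true effect, estimated with n = 20 vs n = 2000 per group. The effect size and sample sizes are invented for illustration, not taken from the study in the article.

    import math
    import random
    import statistics

    def one_study(n, true_effect=0.2):
        """Estimate Cohen's d from one simulated two-group study."""
        control = [random.gauss(0.0, 1.0) for _ in range(n)]
        treated = [random.gauss(true_effect, 1.0) for _ in range(n)]
        pooled_sd = math.sqrt((statistics.variance(control)
                               + statistics.variance(treated)) / 2)
        return (statistics.mean(treated) - statistics.mean(control)) / pooled_sd

    random.seed(42)
    for n in (20, 2000):
        estimates = [one_study(n) for _ in range(1000)]
        print(f"n={n:4d}: estimates span {min(estimates):+.2f} to {max(estimates):+.2f}")

With n = 20 the sign of the estimate isn’t even stable across replications; with n = 2000 every run lands near the true 0.2. Same phenomenon, same analysis, wildly different results, purely from power.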
Thanks for taking the time to post these links. Just letting you know your efforts have benefited at least one person, who’s gonna enjoy reading this.
Just eyeballing the linked image… it looks like most of them agree?
The bias almost certainly exists, according to nearly all of the analyses here. They just disagree on its magnitude, and for the most part they don’t disagree by much.
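If you wanted one number out of all those analyses, the textbook move is inverse-variance (fixed-effect) pooling on the log-odds scale. A minimal sketch, with made-up per-team estimates and standard errors standing in for the real ones:

    import math

    # Hypothetical per-team odds ratios and their standard errors on the
    # log scale; real inputs would come from each team's reported analysis.
    odds_ratios = [1.3, 1.4, 1.2, 1.7]
    std_errs = [0.10, 0.15, 0.12, 0.25]

    log_ors = [math.log(o) for o in odds_ratios]
    weights = [1 / se ** 2 for se in std_errs]

    pooled = sum(w * e for w, e in zip(weights, log_ors)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))

    print(f"pooled OR = {math.exp(pooled):.2f}, 95% CI "
          f"[{math.exp(pooled - 1.96 * pooled_se):.2f}, "
          f"{math.exp(pooled + 1.96 * pooled_se):.2f}]")

The catch is that a fixed-effect pool assumes every team is estimating the same quantity, which is exactly what’s in dispute here, so treat the output as a summary of the chart, not a verdict.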
I really found this out while writing my essay. If I wanted to, I could have interpreted it slightly differently and ended up with totally different results.