In the ongoing Fat Knowledge look into what percentage of things we believe are actually true, we get this nice article from the Economist.
John Ioannidis, a Greek epidemiologist, believes 50% is a fair estimate of the proportion of scientific papers that eventually turn out to be wrong.

Well, that confirms my belief that any study I see on TV is contradicted a month later by another study. At 50% you have just as good odds flipping a coin to get the right answer, which means a single study has essentially no value in making a decision.
He examined 49 research articles printed in widely read medical journals between 1990 and 2003. Each of these articles had been cited by other scientists in their own papers 1,000 times or more. However, 14 of them (almost a third) were later refuted by other work. Some of the refuted studies looked into whether hormone-replacement therapy was safe for women (it was, then it wasn't), whether vitamin E increased coronary health (it did, then it didn't), and whether stents are more effective than balloon angioplasty for coronary-artery disease (they are, but not nearly as much as was thought).
When Dr Ioannidis ran the numbers through his model, he concluded that even a large, well-designed study with little researcher bias has only an 85% chance of being right. An underpowered, poorly performed drug trial with researcher bias has but a 17% chance of producing true conclusions. Overall, more than half of all published research is probably wrong.
The one thing this article doesn't say, though, is that studies looking at the same issue over time should get better. While the first one is no better than 50%, you should be able to get to 90% certainty or more after a few such studies reach the same conclusion.
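That intuition can be sketched with a simple Bayesian calculation. Assume (hypothetically) that each study is an independent test that reaches the correct conclusion 85% of the time, the figure Dr Ioannidis gives for a large, well-designed study, and start from a 50/50 prior. Then the probability that a conclusion is true when several studies all agree on it climbs quickly:

```python
def posterior_agree(n, accuracy=0.85, prior=0.5):
    """Probability the shared conclusion is true, given that n
    independent studies (each correct with probability `accuracy`)
    all reached that same conclusion, starting from `prior`."""
    p_true = prior * accuracy**n            # all n studies correct
    p_false = (1 - prior) * (1 - accuracy)**n  # all n studies wrong
    return p_true / (p_true + p_false)

for n in range(1, 4):
    print(n, round(posterior_agree(n), 3))
# 1 study:  0.85
# 2 studies: 0.97
# 3 studies: 0.995
```

The independence assumption is generous (studies often share biases and methods), but it illustrates why agreement across a few studies is far more informative than any single result.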
So, how much should we take away from this article? The author of the article ends with this valid point:
Which leaves just one question: is there a less than even chance that Dr Ioannidis's paper itself is wrong?

Via Economist.com