Multiple-comparisons dishonesty in science is easy, tempting, and probably rife. Let me explain. When you start any experiment, you have one or more null hypotheses about the data, which you intend to reject at a significance level of 0.05 (that is to say, 1 in 20 times you will get a false positive by chance, but we consider that a small enough risk for the result to be worth reporting). Once the data are in, what often happens is that you don't find what you wanted, and you go back to the drawing board so as not to have wasted a quarter of a million dollars. You run other regressions with plausible stories. One of them comes out significant, and you write your paper.
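To see how fast the odds stack up, here's a back-of-the-envelope simulation (my own sketch, not anyone's actual analysis; the test counts and sample sizes are made up): run ten tests on pure noise, and the chance that at least one comes out "significant" at 0.05 is roughly 1 - 0.95^10, about 40%, not 5%.

```python
# A minimal sketch, assuming made-up parameters: simulate many "experiments",
# each running several tests on pure noise, and count how often at least one
# test comes out "significant" at alpha = 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
n_datasets = 5_000   # simulated "experiments" (illustrative)
n_tests = 10         # plausible stories tried per experiment (illustrative)
n_obs = 30           # observations per test (illustrative)

found_something = 0
for _ in range(n_datasets):
    # Every null hypothesis is true here: the data are pure noise.
    data = rng.normal(size=(n_tests, n_obs))
    pvals = stats.ttest_1samp(data, popmean=0.0, axis=1).pvalue
    if (pvals < alpha).any():
        found_something += 1

print(f"P(at least one 'finding'): {found_something / n_datasets:.3f}")
print(f"Theoretical 1 - 0.95**{n_tests}: {1 - (1 - alpha) ** n_tests:.3f}")
```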
Now, the more tests you run on a single dataset, the higher your chance of coming up with a false positive. There are ways of correcting for this (the standard fixes are sketched below, after the footnote; and if you report all the comparisons you run in a paper, you have to report how you did the corrections as well), but if it's just you, sitting in the lab playing with the spreadsheet, no one has to know what you're doing. And of course, if the model is reasonable, you get published, post hoc ergo propter hoc* be damned. Now, I'm not leveling accusations at anyone in particular, but this is so easy to perpetrate that surely a great number of findings (that have not been reproduced) are just plain wrong. 1 in 10? 1 in 8? The investigation continues.
* admittedly, one should get these things right before conducting public displays of idiocy. however, see #6
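For the curious, the corrections I mean mostly amount to shrinking the per-test threshold. A quick sketch with made-up p-values (the numbers are purely illustrative), showing plain Bonferroni and its slightly less brutal step-down cousin, Holm:

```python
# A sketch of standard multiple-comparison corrections, assuming the
# hypothetical p-values below came from ten tests on one dataset.
import numpy as np

pvals = np.array([0.003, 0.021, 0.049, 0.11, 0.24,
                  0.31, 0.47, 0.58, 0.72, 0.91])  # illustrative only
alpha, m = 0.05, len(pvals)

# Bonferroni: test each p-value against alpha / m.
bonferroni_reject = pvals < alpha / m

# Holm: sort p-values; compare the k-th smallest (k = 0, 1, ...) against
# alpha / (m - k), and stop at the first failure.
order = np.argsort(pvals)
holm_reject = np.zeros(m, dtype=bool)
for k, idx in enumerate(order):
    if pvals[idx] < alpha / (m - k):
        holm_reject[idx] = True
    else:
        break

print("uncorrected:", (pvals < alpha).sum(), "rejections")
print("Bonferroni: ", bonferroni_reject.sum(), "rejections")
print("Holm:       ", holm_reject.sum(), "rejections")
```

With these numbers, three "findings" survive uncorrected and only one survives either fix, which is rather the point.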
Thursday, December 28, 2006
1 comment:
That's what academics, especially economists, call "publication bias". You can essentially run some regression to get whatever result you would like, which is a weakness of retrospective, non-experimental studies. For that reason, many disciplines have started to adopt randomized experiments à la clinical trials, which are ostensibly free of such biases.