Get a drink into any academic – any at all – and they’ll likely start talking about the flaws of the peer review process (myself included). Everyone has stories of the “amazing” research they did which failed to be recognized by the most prestigious journals and then got accepted by lesser journals (although they’ll still brag about that).
This is why I chuckled when I saw this piece last night in WaPo:
A startling new study that shows a big spike in the death rate for a large group of middle-aged whites in the United States was rejected by two prestigious medical journals, the study’s co-author, Nobel laureate Angus Deaton, said Tuesday.
Deaton and Anne Case, both Princeton economists, received international media attention for the paper published Monday in the Proceedings of the National Academy of Sciences (PNAS). But before they submitted it there, they tried to get it published in the Journal of the American Medical Association (JAMA), Deaton said.
“We got it back almost instantaneously. It was almost like the e-mail had bounced. We got it back within hours,” said Deaton, who was interviewed in Dublin, where he was attending a conference on the Ebola crisis and global public health sponsored by Princeton University.
The bitter academic in me wasn’t surprised. This happens all of the time. After all, there’s a ton of good science out there, and only so much prime journal real estate.
This rejection was noteworthy (unlike mine) because (1) the study made international news and (2) one of the authors just won the Nobel Prize. That’s a perfect storm: researchers just acknowledged to be the best in the world were rejected for work that has just been deemed “game changing”. I was amused enough to tweet:
Prestigious med journals rejected stunning study on deaths among middle-aged whites https://t.co/BMlgWzrnh5 Peer review… boy, I don't know
— Aaron E. Carroll (@aaronecarroll) November 4, 2015
The context for my quip, for those of you who aren’t West Wing nerds, is that fictional Gov. Ritchie – when confronted with a problem like crime that seems too big for his little brain to handle – utters “Crime… boy, I don’t know”. It’s at that moment that fictional President Bartlet decides to “kick his ass” in the election.
I was reflecting on the point that even Nobel laureates get bitter about rejection. I was also thinking that peer review, while not perfect, is still the best option we’ve got. And, that the problem was probably too big for my little brain to handle.
But while I slept, in a different time zone than normal, others took this much more seriously:
— Trish Groves (@trished) November 4, 2015
In my Twitter timeline, there’s a pretty fierce debate going on about whether peer review actually worked here. People are talking about the methods section, which is, at least to my more “medical” journal standards, pretty thin. And that got me thinking about the last few days in general.
You see, when the paper first came out, I wrote to Austin (in one of our gazillion daily exchanges) that I was surprised that the finding was getting so much attention. After all, we had written at TIE many times in the last few years about the fact that life expectancy had been dropping for women who lived in certain (poorer) counties, for white women with less education, and for white men and women with less education. (Note we didn’t DO the research; we just wrote about it.)
I was therefore somewhat baffled that this was such news. Why were people so shocked? Was this fact now being accepted because the authors were Nobel Prize winners? Was it because the media hyped the results to the point where everyone, and not just TIE readers, would know?
Austin even wrote a post trying to tease out the difference. Was the new stuff the news about suicide and substance abuse?
I’m fascinated – so much so that I’m writing hundreds of words on this at 5 in the morning in Seattle because I can’t sleep. Here, in no particular order, are my thoughts. I’ll be mulling on these all day:
- Was this really a blockbuster finding? Was everyone shocked by this (as the NYT story detailed)? Because, if the results aren’t as huge as many think, then it might not have warranted JAMA/NEJM treatment.
- Is the media over-hyping it now? Or were they under-hyping it before? Both? Neither?
- Was the paper rejected because the methods were too thin, or because the manuscript was lacking in some way? And if that’s the case, should peer review have allowed for that to be fixed? Methods sections can be edited and improved, after all.
- If the methods section is still as anemic as some think, could the paper even be properly judged? How are we sure the results are robust? What if they’re not reproducible?
- Is this an indictment of the media? Of peer review? Of our continued acceptance of irreproducible research?
Or am I just king of the nerds? That seems most likely.