Noam Scheiber’s NYT piece today is devastating. On selecting papers to be most prominently featured at a top economics conference, David Card is quoted, “‘I choose papers that are going to be written up’ in the mainstream press. […] ‘It’s what the people want.'”
[T]he benefits to academics of generating media attention may be subtly skewing their research. “The pressure is tremendous,” said James Heckman, an economist at the University of Chicago and the winner of a Nobel Memorial Prize in Economic Science. “Many young economists realize that they win a MacArthur or the Clark prize, or both, by being featured in The Times.” […]
[P]opular media attention increasingly works in a candidate’s favor […]. For tenure decisions, “I’ve gotten letters,” Dr. Heckman said, “that ask me to assess the impact and visibility of a person’s work.”
Often the effect is indirect but no less pronounced. Many scholars said, for example, that a growing number of colleagues relied on nonprofit foundations to finance their research and that foundation administrators tended to be most excited when the work found its way into the news media.
“The grant-giver looks at this and says, ‘O.K., let’s fund this guy or this woman because we’re not just going to generate results that are read by 10 people,'” said Daniel Drezner, a political scientist at Tufts University’s Fletcher School of Law and Diplomacy. “It’s actually going to be talked about.” […]
All of this has led to a new model of disseminating social science research through the media.
The piece mentions—and is no doubt motivated by—the recent retraction by Science of the Michael LaCour study. It ends by reminding us of the Reinhart-Rogoff kerfuffle in 2013.
When I talk about promoting research via social and conventional media, I mention the problems Scheiber is getting at. Maybe I don’t emphasize them enough.
There is danger in the allure of attention and the rewards it can bring. There are incentives to cheat a little, if not a lot. But there are huge penalties too. The consequences of making a mistake, and the personal damage from outright fraud, are much higher when one leverages up one’s work and message through, say, New York Times reporting or column writing (or similar).
It’s tempting to say these are all financial incentives, of a type. A bigger name can command a better academic post, more book sales, higher speaking fees, and the like. But these are not financial incentives in the same way we perceive those of, say, industry-sponsored clinical trials. They’re not incentives to produce a specific result. They’re incentives to do something—anything—perceived as provocative, important, and timely (though, perhaps, still consistent with one’s tribal affiliations).
We most typically call these non-financial conflicts of interest. Scheiber has reminded us that they are strong. And they are dangerous. Yes, science can be self-correcting, but in the interim, we should be humble and cautious. We should guard against being fooled by a blockbuster new study that reverses previous conventional wisdom. We should be skeptical—and express caution, include caveats—until findings are vetted and replicated.
We should also guard against fooling ourselves. When there’s no direct money on the line, we are all still at risk of promoting ideas that science cannot and will not ultimately support.