Progress Toward Open Science

    It’s hard to find someone who’ll argue against the notion that science should be open and transparent. In practice, however, much of what scientists do is not available for public scrutiny and debate. What can we do? A recent issue of Science provides perspectives on how to move toward open science (see also Austin commenting on Brendan Nyhan).

    Brian Nosek and colleagues report on a project to define standards for open science. The standards include:

    • Standards for transparency about study designs, study materials, data sharing, and analytic methods (that is, standards for sharing the code comprising statistical models or simulations).
    • Standards for preregistration of studies and data analysis plans.
    • Replication standards that recognize the value of replication for independent verification of research results and identify the conditions under which replication studies will be published in the journal.
    • Citation standards that extend current article citation norms to data, code, and research materials.

    Over 100 journals have signed on to these standards, including Science. Relatively few medical journals are signatories. Readers with connections to journals should raise these issues with editorial boards and professional societies.

    Getting these standards adopted across science will be a struggle. Some fields have huge incentives for priority of discovery. This encourages labs to conceal methods and even details about results for as long as possible. For scientists in any field, open science requires great attention to accurate documentation of methods and curation of data. These are time-intensive tasks. Therefore, practicing open science competes against the expectations that scientists achieve high research productivity.

    For this reason, Bruce Alberts and colleagues argue that scientists’ career incentives should be changed so that scholars are rewarded for publishing well rather than often. In tenure cases at universities, as in grant submissions, candidates should be evaluated on the importance of a select set of their work, rather than on the number of publications or a journal’s impact factor as a surrogate for quality.

    The challenge in implementing evaluation on quality rather than quantity will be finding valid and reliable ways to measure importance.

    What’s at stake in open science? Transparency is part of what defines science, so open science is just better science. Period.

    But open science is also critical for the applications of science. As Austin and I discussed recently, part of the solution to the troubles concerning conflicts of interest in medical research is to make science more transparent. The more confidence we have in the data and methods of a study, the less it matters that an author has a financial tie to industry. To the degree that we can verify, we don’t have to trust.

    The same applies to using science in policy. Getting to empirically driven policy requires that we gather data and evaluate social programs against benchmarks. Credibility of evidence is everything here. Most of us have strong prior beliefs about the effectiveness of social programs, and we are inclined to distrust the research of those with whom we disagree. We will never completely counter those priors. But what we can do is make the data, designs, methods, and analyses in policy-oriented science as reproducible as possible.
