• More on peer review: what is credible evidence?

    Austin sent me this interesting article in the New York Times discussing a variety of interrelated issues: speed of peer review, open access journals, the role of scientists/journalists/bloggers, and who owns research, to name a few. Science and peer review are slow and methodical; the questions, problems and answers are many.

    I learned from the NY Times article that the 6th Annual ScienceOnline conference is being held down the road from me at N.C. State University, a gathering that will confront many of these questions. And in the small-world department, Anton Zuiker, co-founder of ScienceOnline, works at Duke University in communications for the Department of Medicine. As Zuiker (aka @mistersugar on Twitter) puts it:

    The conference focuses on “finding creative ways to facilitate connections that lead to conversations, conversations that lead to networks, networks that support communities, all in the name of promoting science and our understanding of the worlds around us.”

    I wrote a bit about suggestions for peer review, and got so many thoughtful responses and questions via email and in hallway conversations that I haven’t been able to distill them all into something more to say; I realize I have more questions than answers. I am a consumer of and participant in peer review, not an expert in its conduct. However, nothing could be more important to this blog than a discussion of how to determine what constitutes credible evidence on which to base policy decisions.

    I am going to attend some of this week’s ScienceOnline conference and report back. I plan to live-tweet some of it; if you want to follow along, my Twitter handle is @donaldhtaylorjr and the conference hashtag is #scio12. My frame for the conversation will remain: what is credible evidence?

    DT

    • Credibility = Reproducibility
      Credibility != Seniority
      Credibility = Openly published methods
      Credibility != Eminence of an idea

      And finally: Credibility = Clarity.

      If something is so convoluted that the intended audience cannot grasp it, it is probably double-speak. It is either nothing masquerading as something, or an unsettling concept shrouded in comforting prose.

      In the 21st century, Wikipedia is credible. It tells you how it got to where it is (methods), and the information is constantly being updated (reproducible). Its primary failing is that the information published is often at the mercy of what is commonly accepted (eminence). I don’t have a work-around for this problem, but I thought it was worth mentioning.

      P.S. Notice that I did not say credible = no conflict of interest. While conflicts are often corrupting, they are not in themselves disqualifying. Anyone who is OPEN as to how they got somewhere can be credible, even if it leads them to a financially favorable conclusion.

      • Heads up to those without Comp Sci knowledge:

        != is the same as “not equal to”
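
        A quick illustration in Python, just to make the notation concrete (the values here are arbitrary):

            # "!=" asks whether two values differ
            print(1 != 2)   # True: 1 is not equal to 2
            print(2 != 2)   # False: the two values are equal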

        • This is interesting stuff and I am really intrigued to see how this conference will conclude. BTW, thanks for the heads up, Will; it sure helped me understand the equation. Lol.

    • @Will
      thanks for the comment. By openly published methods do you mean a description of how a study was done, with a goal of allowing someone to reproduce what an author has done?

      • Pretty much. Although I don’t think the goal is necessarily allowing someone to reproduce it.

        Example: If a five-person panel is formed to establish the Best Standard Practice for treating hypertension, it is not enough to just list what practices the panel recommends. First, any statement released by that panel should explain why those five people were chosen (presumably some would be MDs, maybe a patient advocate, maybe an ethicist and a statistician…). From there, the panel should explain whether each recommendation required a unanimous vote or a simple majority. Then the panel has to explain where it got its scientific data and how those data affected its decisions (maybe decisions were based on meta-analyses of clinical outcomes of different drug therapies, maybe on expert testimony, maybe on the common practices of leading regional care providers). These are important “methods” to disclose, but they may not be reproducible.

        Disclosing them gives the audience the ability to understand how the views propagated by the committee were derived. Did they ad lib it? Was it careful? Did it take into account all the different types of treatment (pharmacotherapy vs diet changes vs surgical interventions vs exercise)? If all you are allowed to see is the OUTPUT without any understanding of how the INPUT was processed, you can’t really be sure what you’re looking at.

        Imagine someone ran a study to examine the difference in CVD outcomes after treatment with aspirin versus placebo, and the study reported that the odds of a cardiovascular event after treatment with aspirin were cut in half. Great finding. Exciting finding. Then you find out the sample size was 4. Being open about methods allows readers to discern the strength of an argument, and to notice the errors the authors may have made (intentionally or not).
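
        To make the sample-size point concrete, here is a minimal sketch in Python with hypothetical counts (not from any real trial), using a standard Wald confidence interval on the log odds ratio. With a handful of subjects, even a dramatic-looking odds ratio comes with an interval so wide it is uninformative; with a few thousand subjects, a similar point estimate becomes meaningful.

            import math

            def odds_ratio_ci(a, b, c, d, z=1.96):
                """Wald 95% confidence interval for the odds ratio of a 2x2 table:
                                event   no event
                    treatment     a        b
                    control       c        d
                A 0.5 continuity correction is applied when any cell is zero."""
                if 0 in (a, b, c, d):
                    a, b, c, d = (x + 0.5 for x in (a, b, c, d))
                or_ = (a * d) / (b * c)
                se = math.sqrt(1/a + 1/b + 1/c + 1/d)
                lo = math.exp(math.log(or_) - z * se)
                hi = math.exp(math.log(or_) + z * se)
                return or_, lo, hi

            # n = 4: one event out of two on aspirin, two out of two on placebo.
            # The interval runs from roughly 0.005 to 9, which tells you nothing.
            print(odds_ratio_ci(1, 1, 2, 0))

            # n = 4,000: a similar point estimate (~0.5), but the interval
            # (roughly 0.39 to 0.64) now clearly excludes 1.
            print(odds_ratio_ci(100, 1900, 190, 1810))

        The exact interval method matters less than the pattern: the width of the interval tracks the cell counts, which is exactly the kind of thing open methods let a reader check.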