Beyond disclosure: How to think about conflicts of interest and the regulation of medical science

This post is co-authored by Bill Gardner and Austin Frakt.

The recent controversy about disclosure of conflicts of interest (see Bill here and here; Austin here and here) has called renewed attention to pervasive quality control problems in the scientific literature. We agree with Ian Roberts that

the challenge is not to describe the flaws in the current system but to create a better one, where decisions about healthcare are informed by valid and reliable evidence.

We also agree with Mick Watson that

Open science describes the practice of carrying out scientific research in a completely transparent manner, and making the results of that research available to everyone. Isn’t that just ‘science’?

How can we get serious about creating an open, valid, and reliable scientific literature?

We recommend starting by acknowledging our moral response to the problem, and then putting it aside. It’s impeding our thinking. We’re struck by how often we hear that the problem of bias stems from the “corruption” of some researchers or the “perversion” of the research process. There are many contexts in which it’s important to view science in moral terms. But we doubt that focusing on the virtues or vices of researchers will get us much closer to a solution. Instead, we should think about what institutions and policies will advance scientific learning.

In an ideal world, peer review of science should concern the evidence—data and methods—and the interpretation of findings in the light of existing knowledge. Facts about the authors ought to be extraneous. Aaron Kesselheim found that reviewers downgraded their ratings of the methodological rigor of clinical trials when they believed that the trials were funded by industry. That seems wrong: Consider how you would react if a study showed that reviewers downgraded their ratings of articles written by women, for example.

But this is the real world, and you can also make a case that the reviewers in Kesselheim’s study were behaving rationally. We may want reviewers to evaluate a research report based on the data and methods, but authors can only document so much in a paper. Given those limits, reviewers face real uncertainty about the quality of the evidence. Bayesian inference suggests that the more uncertain you are about the evidence, the more weight you should give to your prior beliefs about the credibility of a report’s authors. The evidence that studies funded by pharmaceutical companies are biased toward the companies’ products would therefore seem to justify placing some weight on a prior of distrust toward their research.
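To make that intuition concrete, here is a minimal sketch of the standard normal-normal Bayesian update (our notation, not anything from Kesselheim’s study): a reviewer’s judgment of a study’s quality combines a prior based on the authors with the evidence documented in the paper, each weighted by how noisy it is.

```latex
% Prior about study quality \theta, based on the authors' track record:
%   \theta \sim N(\mu_0, \tau^2)
% The paper's documented evidence is a noisy signal of that quality:
%   y \mid \theta \sim N(\theta, \sigma^2)
% The posterior mean is a precision-weighted average of prior and evidence:
\[
  E[\theta \mid y]
    = \frac{\sigma^{2}}{\sigma^{2} + \tau^{2}}\,\mu_{0}
    + \frac{\tau^{2}}{\sigma^{2} + \tau^{2}}\, y .
\]
% As \sigma^2 grows (thin documentation, noisier evidence), the weight on
% the prior \mu_0 approaches 1; as \sigma^2 shrinks (data and code in hand),
% the evidence y dominates and beliefs about the authors matter little.
```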

In practice, though, humans may not compute like perfect Bayesians. We may use real or perceived conflicts of interest to over- or under-correct. So the better response, in the long term, is to reduce our uncertainty about the data and methods. With less uncertainty about the evidence, priors about the authors would matter less and would be applied (or misapplied) less.

There are several strategies for reducing this uncertainty that the scientific community has applied (though not uniformly) or could apply going forward (perhaps with some infrastructure development). These strategies include:

  1. Registration of trials and reporting of all registered analyses (or clear metrics of the extent to which they are not reported);
  2. Archiving of trials’ analytical data files (see BMJ’s Open Data campaign and GlaxoSmithKline’s commitment to provide access to anonymized patient-level data);
  3. Archiving of statistical programming (reproducible research; see the sketch after this list);
  4. Expert evaluation of study methods by an individual or individuals without conflicts of interest;
  5. As a possible future extension of these strategies, archiving of the transformations required to generate the analytic data files.
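As an illustration of item 3, here is a minimal sketch of what an archived, re-runnable analysis script might look like. The file name, column names, and the simple difference-in-means contrast are hypothetical choices of ours, not any journal’s or trial’s actual requirements; the point is only that the exact computation behind a published estimate travels with the analytic data file, so anyone can re-run it.

```python
"""A minimal sketch of an archived, re-runnable analysis (strategy 3 above).

The file name and column names ("arm", "outcome") are hypothetical; any
archived analytic data file with those columns would work.

Usage:
    python reanalyze.py trial_analytic_file.csv
"""
import csv
import statistics
import sys


def primary_outcome_difference(path):
    """Recompute the primary contrast: mean outcome in the treatment arm
    minus mean outcome in the control arm."""
    arms = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            arms.setdefault(row["arm"], []).append(float(row["outcome"]))
    return statistics.mean(arms["treatment"]) - statistics.mean(arms["control"])


if __name__ == "__main__":
    # Print the estimate so readers can check it against the published figure.
    print(f"Estimated treatment effect: {primary_outcome_difference(sys.argv[1]):.3f}")
```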

Pursuing these strategies would likely increase the transparency and reproducibility of research and the quality of scientific practice, and reduce uncertainty about the credibility and validity of the resulting evidence. To our knowledge, there are no scientific reasons not to pursue them.

But there are economic, psychological, and ethical reasons. For example, we can’t make data sets public unless we can make sure that research participants can’t be identified from them. We should also consider the costs, in researchers’ time, attention, and resources, of complying with more rigorous standards of documentation, with parallel costs to society from possible delays of projects. It is true that science requires meticulous attention to detail. Nevertheless, humans have finite attention and limited capacities for decision making. More time and attention spent on documentation might mean less time spent thinking and reading.

We should not take the existence of these potential costs as an excuse to do nothing about improving science and its credibility. We should act, while taking those costs reasonably into account.

There are reasons to believe that improvements in technology and in the self-regulation* of research will make it possible to do better science without unduly burdening researchers or endangering research participants. We are more likely to develop those technologies and self-regulations if we frame the discussion in terms of how to improve the validity, reliability, and transparency of science, and the rate of scientific progress, rather than in terms of the moral virtues of researchers.

Disclosure of financial conflicts of interest should be retained as a necessary, though insufficient, tool of scientific integrity. But we must get beyond disclosure, and beyond our outrage over what we think it signals, to tighten up the process of science directly. In a world of competing interests, humans, unfortunately, do not always do good science by accident or because it’s the “right thing to do.” Science is important. We need to treat it as such, and tighten up our regulation of it.


* “Self-regulation” means regulation by scientists, not the government. The scholarly community must find ways to adequately regulate itself, e.g., through a consensus about the requirements of publication in top (or all) medical journals. Having said this, we acknowledge that NIH requirements on grantees—which we support—are an interesting and important case in which a governmental body can advance open science.

@Bill_Gardner and @afrakt
