• More instrumental variables studies of cancer treatment

    The study I wrote about earlier this week by Hadley et al. is just one of many to apply instrumental variables (IV) to the analysis of cancer treatment (prostate cancer in that case). Zeliadt and colleagues do so as well (also for prostate cancer) and cite several others. Both the Hadley and Zeliadt studies exploit practice pattern variation, specifically differences in prior year(s)' rates of treatment across areas, to define IVs.

    For you to buy the results, you have to believe that lagged treatment rates strongly predict actual treatment (this can be shown) and, crucially, are not otherwise correlated with outcomes, controlling for observable factors (this mostly requires faith). I would not believe the IVs valid if there were clear, accepted standards about whether and what treatment is best. If that were so, then treatment rates could be correlated with quality, broadly defined: higher quality care might be expected in areas that follow the accepted standard more closely. Better outcomes could be due to broadly better care, not just to the particular treatment choice.

    However, in prostate cancer, there is no standard about what treatment is best. I accept the IVs as valid in this case.
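The two conditions above can be made concrete with a small simulation. Below is a minimal two-stage least squares (2SLS) sketch, with invented numbers purely for illustration: an area-level practice-pattern instrument shifts treatment, unobserved severity confounds the naive comparison, and the IV estimate recovers the true effect while ordinary regression does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# z: the area's lagged treatment rate (the instrument)
# u: unobserved severity, which confounds treatment and outcome
z = rng.uniform(0.2, 0.8, n)
u = rng.normal(0.0, 1.0, n)

# Treatment follows the local practice pattern AND unobserved severity
d = (z + 0.3 * u + rng.normal(0.0, 0.5, n) > 0.5).astype(float)

# True treatment effect is 1.0; severity independently worsens outcomes
y = 1.0 * d - 2.0 * u + rng.normal(0.0, 1.0, n)

def slope(x, target):
    """OLS slope of target on x (with an intercept)."""
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(X, target, rcond=None)[0][1]

beta_ols = slope(d, y)  # naive regression: badly biased by u

# 2SLS: first stage fits d on z; second stage regresses y on fitted d
first = np.linalg.lstsq(np.column_stack([np.ones(n), z]), d, rcond=None)[0]
d_hat = np.column_stack([np.ones(n), z]) @ first
beta_iv = slope(d_hat, y)  # lands close to the true effect of 1.0

print(f"OLS: {beta_ols:.2f}   IV: {beta_iv:.2f}")
```

The first condition (relevance) corresponds to the first-stage slope being strong and is testable in real data; the second (exclusion) is the part that mostly requires faith. In the simulation it holds by construction, because z is drawn independently of u.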

    Among the other cancer treatment IV studies I found, some of which Zeliadt cites, several also exploit practice pattern variations:

    • Yu-Lao et al.: Again, prostate cancer, and, notably, appearing in JAMA. Yes, JAMA published an IV study based on practice pattern variation. More on why I am excited about that below.
    • Brooks et al.: Breast cancer
    • Earle et al.: Lung cancer

    I cannot say whether practice patterns made for valid IVs for breast and lung cancer at the time the Brooks and Earle studies were published. I’d have to think about it, and I have not. I merely note that exploiting practice pattern variation for IV studies is not novel, though it is not widely accepted either, particularly in medical journals. I think it should be, though only for cases for which a good argument about validity can be made, as I believe it can be for prostate cancer and, I am sure, some other conditions.

    Of course I would prefer to see more randomized controlled trials (RCTs) on all the areas of medicine in need of additional evidence. But those areas are, collectively, a massive territory. We neither have the time nor have we demonstrated a willingness to spend the money required to conduct RCTs in all areas. We have to prioritize. For cases for which IV studies are likely to be reasonably valid, we ought to apply the technique, not necessarily instead of an RCT — though with resource constraints, such an argument could be made — but certainly in advance of one.

    IV studies are cheaper, faster, and offer other advantages. They don’t require enrollment of patients. They can exploit the large, secondary data sets already in existence (Medicare, Medicaid, VA, commercial payers, hospital systems, and the like). As such, they permit stratification by key patient demographics that RCTs are often underpowered to support. Even when an RCT is warranted, a good IV study done in advance can help to refine questions and guide hypotheses.

    Given the vast need for evidence that overwhelms our capacity to provide it via RCTs, there isn’t a good argument for not doing IV studies in cases for which they are justifiably valid. However, part of the package of scaling up an IV research agenda is publishing the findings in top journals — not just health economics journals, but also top medical journals like JAMA. This will require more clinical reviewers of manuscripts to gain comfort with the IV approach (start here). It will also require medical journals to solicit reviews by those who can vouch for instruments’ validity or point out when they are unlikely to be so.

    It’s hard and expensive to create purposeful randomness, as is required in an RCT. Yet, there is so much natural randomness around. We should be exploiting it. Good quasi-randomness is a terrible thing to waste.

    @afrakt

    Comments closed
    • IV methods are great, but too often researchers are not careful to fully describe the limitations of inference from these methods. Estimation with valid IVs identifies a treatment effect for those patients whose treatment choice was influenced by variation in the instrument. This is not an unbiased treatment effect estimate for everyone in the population, or even for the full sample of treated individuals, and it cannot be generalized without further strong assumptions about the true distribution of treatment effects in the population and about how patients and physicians may be making treatment decisions based on the theorized unmeasured patient clinical characteristics or other confounders.

    That’s not to say the estimates aren’t useful: they can tell a great story about possible effects of expanding treatment rates or shed light on potential impacts of policy if policy-relevant instruments are used. But many don’t understand that even if a truly valid instrument is available, IV is not simply a silver bullet to “fix” confounding. There could be some concern that the audiences of the bigger medical journals, while an ideal target for the information IV could bring to bear, may be more apt to misjudge the inferences that could be gleaned from these IV estimates.
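The commenter’s generalizability point can be shown in a small simulation (all numbers invented for the sketch): with heterogeneous treatment effects, a valid IV recovers the average effect among compliers — the local average treatment effect (LATE) — not the population average effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Compliance types: always-takers are treated regardless of the instrument,
# never-takers never are, and compliers follow it (the practice pattern)
types = rng.choice(["always", "never", "complier"], size=n, p=[0.3, 0.2, 0.5])

# Heterogeneous effects, correlated with type: suppose physicians treat the
# patients who benefit most no matter what local practice says
tau = np.where(types == "always", 3.0,
      np.where(types == "complier", 1.0, 0.0))

z = rng.integers(0, 2, n)                      # binary instrument
d = np.where(types == "always", 1,
    np.where(types == "never", 0, z))          # treatment received
y = tau * d + rng.normal(0.0, 1.0, n)

# Wald/IV estimate: reduced-form difference over first-stage difference
wald = (y[z == 1].mean() - y[z == 0].mean()) / \
       (d[z == 1].mean() - d[z == 0].mean())

print(f"Population average effect: {tau.mean():.2f}")                       # ~1.4
print(f"Complier average effect:   {tau[types == 'complier'].mean():.2f}")  # 1.0
print(f"IV (Wald) estimate:        {wald:.2f}")                             # ~1.0
```

The IV estimate tracks the complier average, not the population average; extrapolating beyond compliers requires exactly the extra assumptions the comment describes.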