Last week, the American Hospital Association (AHA) posted a critique of a recent NEJM-published study by Sunita Desai and Michael McWilliams examining the effects of the 340B Drug Pricing Program, which has the goal of enhancing care for low-income patients. The AHA critique links to a review of the study’s methods by economist Partha Deb. (Deb was kind enough to speak with me and disclosed that the AHA compensated him for the time he spent preparing his review. At the time we spoke, that financial relationship was not disclosed in his online review, something he said he would try to correct.)
The study used a regression discontinuity design, exploiting a threshold in the program’s eligibility rules for general acute care hospitals: those with disproportionate share hospital (DSH) adjustment percentages greater than 11.75% are eligible for the program. The findings suggest that the 340B Program has increased hospital-physician consolidation and hospital outpatient administration of intravenous and injectable drugs in oncology and ophthalmology, without clear evidence of benefits for low-income patients.
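For readers unfamiliar with the design, here is a minimal sketch of how a sharp regression discontinuity estimate works, using simulated data. The variable names, bandwidth, functional form, and effect size are illustrative assumptions for exposition only; they are not taken from the Desai–McWilliams study.

```python
# Illustrative sketch of a sharp regression discontinuity (RD) estimate.
# All data are simulated; bandwidth, outcome, and jump size are assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
THRESHOLD = 11.75  # 340B eligibility cutoff for the DSH adjustment percentage

# Simulate hospitals with DSH percentages spanning the cutoff.
n = 2000
dsh = rng.uniform(5, 20, n)
eligible = (dsh > THRESHOLD).astype(float)

# Hypothetical outcome with a smooth trend in DSH percentage plus a
# jump of 2.0 at the threshold (the "treatment effect" we hope to recover).
outcome = 10 + 0.3 * dsh + 2.0 * eligible + rng.normal(0, 1.5, n)

# Local linear RD: keep hospitals within a bandwidth of the cutoff and
# fit separate slopes on each side via an interaction term.
bandwidth = 3.0
in_bw = np.abs(dsh - THRESHOLD) <= bandwidth
centered = dsh[in_bw] - THRESHOLD
X = sm.add_constant(np.column_stack([
    eligible[in_bw],             # treatment indicator: its coefficient is the RD estimate
    centered,                    # running variable, centered at the cutoff
    eligible[in_bw] * centered,  # lets the slope differ above the cutoff
]))
fit = sm.OLS(outcome[in_bw], X).fit()
print(f"Estimated discontinuity at {THRESHOLD}%: {fit.params[1]:.2f}")
```

The key idea is that hospitals just below and just above the cutoff are assumed to be comparable, so any jump in the outcome right at 11.75% is attributed to program eligibility rather than to underlying differences among hospitals.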
As Desai and McWilliams note in their paper, the study had several limitations. One, inherent to all regression discontinuity studies, is that the estimates pertain only to hospitals near the eligibility threshold. Another is that the study relies on data from Medicare and the Healthcare Cost Report Information System. The AHA claims that these limitations and other issues raised by Deb constitute “major methodological flaws” that “negate” the study’s findings.
In a subsequent post, the AHA suggests that the study was unnecessary because the authors could simply have asked hospitals how they were using resources generated by the 340B Program.
The authors sent me a response to these critiques, which I have agreed to host here at TIE. I will let that response speak for itself. Read it.
But I do want to make two additional comments. First, the notion that the only way we should learn how a program works is by “just asking” those who participate in it or benefit from it is absurd. To be sure, much insight can be gained from such qualitative work. But independent, objective, quantitative work is also essential to unbiased assessment of programs. I reject the AHA’s dismissal of research on these grounds.
Second, there is something troubling to me about advocacy organizations hiring top academics to critique specific studies. The potential for conflicts of interest is obvious. That is not to say there is anything wrong with Deb’s critique, but it is hard to know, in general, what role the financial relationship plays in these kinds of situations. At a minimum, that financial relationship should be disclosed.