• Ten impressions of big data: Claims, aspirations, hardly any causal inference

    “Big data” is all the rage. I am curious what people think big data can do, and what some claim it will do, for health and health care. I’m curious how people think causal connections will arise from (or using) big data. In large part this consideration seems overlooked, as if credible causal inferences will just emerge from the data, announcing themselves, dripping wet with self-evident validity. I am concerned.

    I’ve been collecting excerpts of articles on big data, many sent to me by Darius Tahir, whom I thank. What I’ve compiled to date is below and in no particular order. For each piece, the author (with link to original) is indicated, followed by a quote. In many cases, what’s quoted is not an expression of the author’s views, but a characterization of the views of individuals about whom the author is reporting. I encourage you to click through for details before jumping to conclusions about who holds what view.

    Also, do not interpret these as suggesting I do not see promise in big data. I do! I just think how we use data matters just as much as, if not more than, how much data we have. We should marry “big data” with “smart analysis” not just “big claims.”

    1. Bill Gardner has not overlooked causal inference:

    Here’s where the ‘big data’ movement comes in. We can assemble data sets with large numbers of patients from electronic health records (EHRs). Moreover, EHRs contain myriad demographic and clinical facts about these patients. It is proposed that with these large and rich data sets, we can match drug X and drug Y patients on clinically relevant variables sufficiently closely that the causal estimate of the difference between the effects of drug X and drug Y in the matched observational cohort would be similar to the estimate we would get if we had run an RCT.
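    The matching idea in this quote can be sketched with a toy example. The data below are entirely hypothetical, and real analyses use propensity scores and far richer EHR covariates; this just shows the mechanics of comparing outcomes within matched strata:

```python
# Toy sketch of exact matching: pair drug-X and drug-Y patients on clinical
# covariates, then compare outcomes within matched strata.
# Hypothetical data; real analyses use propensity scores and many more variables.
from collections import defaultdict

patients = [
    # (drug, (age_band, diabetic), bad_outcome)
    ("X", ("60s", True), 1),
    ("Y", ("60s", True), 0),
    ("X", ("50s", False), 0),
    ("Y", ("50s", False), 0),
    ("X", ("70s", True), 1),  # no matching drug-Y patient: excluded
]

strata = defaultdict(lambda: {"X": [], "Y": []})
for drug, covariates, outcome in patients:
    strata[covariates][drug].append(outcome)

# Mean X-minus-Y outcome difference, averaged over strata containing both drugs.
diffs = [
    sum(g["X"]) / len(g["X"]) - sum(g["Y"]) / len(g["Y"])
    for g in strata.values()
    if g["X"] and g["Y"]
]
print(sum(diffs) / len(diffs))  # 0.5: drug X looks worse in this toy cohort
```

    Whether such an estimate approximates an RCT depends entirely on having matched on the covariates that actually matter, which is the causal-inference worry this post is about.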

    2. David Shaywitz echoes Bill and also notes the views of others that begin to shade toward the magical or mystical (“something will emerge”):*

    Clinical utility, as Haddow and Palomaki write, “defines the risks and benefits associated with a test’s introduction into practice.” In other words, what’s the impact of using a particular assessment – how does it benefit patients, how might it adversely impact them? This may be easiest to think about in the context of consumer genetic tests suggesting you may be at slightly elevated risk for condition A, or slightly reduced risk for condition B: is this information (even if accurate) of any real value? [...]

    The other extreme, which Stanford geneticist Atul Butte is perhaps best known for advocating, is what might be called the data volume perspective; collect as much data as you possibly can, the reasoning goes, and even if any individual aspect of it is sketchy or unreliable, these issues can be overcome with volume. If you examine enough parameters, interesting relationships are likely to emerge, and the goal is to not let the perfect be the enemy of the good enough. Create a database with all the information you can find, the logic goes, and something will emerge.

    3. Darius Tahir reminds us that we’re most readily going to find correlations (implication: not causation) in a hypothesis-free space:

    Supplementing medical data with consumer data might lead to better predictions, he, and the alliance, reasoned.

    In the pilot program, the network will send its health data to a modeler, which will pair that information with consumer data, such as credit card and Google usage. The modeler doesn’t necessarily have a hypothesis going in, Cantor said.

    “They’re identifying correlations between the consumer data and healthcare outcomes,” he said.

    4. Amy Standen really frightens me with the scientific-method-is-dead idea:

    “The idea here is, the scientific method itself is growing obsolete,” [...]

    [S]o much information will be available at our fingertips in the future that there will be almost no need for experiments. The answers are already out there. [...]

    Now, Butte says, “you can connect pre-term births from the medical records and birth census data to weather patterns, pollution monitors and EPA data to see is there a correlation there or not.” [...]

    Analyzing data is complicated and requires specific expertise. What if the search engine has bugs, or the records are transcribed incorrectly? There’s just too much room for error, she says.

    “It’s going to take a system to interpret the data,” she says. “And that’s what we don’t have yet. We don’t have that system. We will, I mean for sure, the data is there, right? Now we have to develop the system to use it in a thoughtful, safe way.”

    5. Chris Anderson says that numbers can speak for themselves:

    Today companies like Google, which have grown up in an era of massively abundant data, don’t have to settle for wrong models. Indeed, they don’t have to settle for models at all. [...]

    With enough data, the numbers speak for themselves. [...]

    “Correlation is enough.” We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot. [...]

    Correlation supersedes causation, and science can advance even without coherent models, unified theories, or really any mechanistic explanation at all.

    6. Bernie Monegain writes about Partners HealthCare’s chief information officer James Noga’s dream of moving beyond prediction (for which correlations that aren’t causation can be useful) and designing interventions (for which causality is crucial):

    He likes to employ a travel analogy. Drivers once got maps to travel from one point to another — they basically figured it out themselves — then they went to predictive analytics to find the best route to get from point A to point B.

    “Then as you get into prescriptive analytics, it actually tells you on the way real time, an accident has happened and reroutes you,” said Noga.

    “With big data you’re really talking about data that’s fast moving and perpetually occurring, actually able to intercede rather than merely advise in terms of the care of patients,” he said. “On the discovery side with genetics and genomics using external data sources, I think the possibilities of what I would call evidence-based medicine, and being able to drive that to drive better protocols on the clinical side is endless in terms of the possibilities.”

    7. Veronique Greenwood offers concrete examples and a warning:

    Back in her office, [Jennifer Frankovich] found that the scientific literature had no studies on patients like this to guide her. So she did something unusual: She searched a database of all the lupus patients the hospital had seen over the previous five years, singling out those whose symptoms matched her patient’s, and ran an analysis to see whether they had developed blood clots. “I did some very simple statistics and brought the data to everybody that I had met with that morning,” she says. The change in attitude was striking. “It was very clear, based on the database, that she could be at an increased risk for a clot.” [...]

    For his doctoral thesis, [Nicholas Tatonetti] mined the F.D.A.’s records of adverse drug reactions to identify pairs of medications that seemed to cause problems when taken together. He found an interaction between two very commonly prescribed drugs: The antidepressant paroxetine (marketed as Paxil) and the cholesterol-lowering medication pravastatin were connected to higher blood-sugar levels. Taken individually, the drugs didn’t affect glucose levels. But taken together, the side-effect was impossible to ignore. “Nobody had ever thought to look for it,” Tatonetti says, “and so nobody had ever found it.” [...]

    There are numerous correlations like this, and the reasons for them are still foggy — a problem Tatonetti and a graduate assistant, Mary Boland, hope to solve by parsing the data on a vast array of outside factors. Tatonetti describes it as a quest to figure out “how these diseases could be dependent on birth month in a way that’s not just astrology.” Other researchers think data-mining might also be particularly beneficial for cancer patients, because so few types of cancer are represented in clinical trials. [...]

    In the lab, ensuring that the data-mining conclusions hold water can also be tricky. By definition, a medical-records database contains information only on sick people who sought help, so it is inherently incomplete. Such databases also lack the controls of a clinical study and are full of other confounding factors that might trip up unwary researchers. Daniel Rubin, a professor of bioinformatics at Stanford, also warns that there have been no studies of data-driven medicine to determine whether it leads to positive outcomes more often than not. Because historical evidence is of “inferior quality,” he says, it has the potential to lead care astray.

    Yet despite the pitfalls, developing a “learning health system” — one that can incorporate lessons from its own activities in real time — remains tantalizing to researchers.

    8. Vinod Khosla expresses some ambitions:

    Technology will reinvent healthcare. Healthcare will become more scientific, holistic and consistent; delivering better-quality care with inexpensive data-gathering techniques and devices; continual monitoring and ubiquitous information leading to personalized, precise and consistent insights. New medical discoveries will be commonplace, and the practices we follow will be validated by more rigorous scientific methods. Although medical textbooks won’t be “wrong,” the current knowledge in them will be replaced by more precise and advanced methods, techniques and understandings.

    Hundreds of thousands or even millions of data points will go into diagnosing a condition and, equally important, the continual monitoring of a therapy or prescription. [...]

    Over time, we will see a 5×5 improvement across healthcare: 5x reduction in doctors’ work (shifted to data-driven systems), 5x increase in research (due to the transformation to the “science of medicine”), 5x lower error rate (particularly in diagnostics), 5x faster diagnosis (through software apps) and 5x cost reduction.

    9. Larry Page thinks government regulation is slowing the promise of big data:

    I am really excited about the possibility of data also, to improve health. But that’s– I think what Sergey’s saying, it’s so heavily regulated. It’s a difficult area. I can give you an example. Imagine you had the ability to search people’s medical records in the U.S. Any medical researcher can do it. Maybe they have the names removed. Maybe when the medical researcher searches your data, you get to see which researcher searched it and why. I imagine that would save 10,000 lives in the first year. Just that. That’s almost impossible to do because of HIPAA. I do worry that we regulate ourselves out of some really great possibilities that are certainly on the data-mining end.

    10. Lindsey Cook writes about some of the barriers to big data (legal issues, physicians’ concerns, patients’ misunderstandings, technological barriers, misplaced research funding), though not about causal inference. Her piece includes a primer on what “big data” means (“an incredibly large amount of information”).

    Big data is already producing research that has helped patients. For example, a data network for children with Crohn’s disease and ulcerative colitis called ImproveCareNow helped increase remission rates for sick children, according to Dr. Christopher Forrest and his colleagues, who are creating a national network of big data for children in the U.S.

    * Via Twitter, David points to his other work in this area, which I have not read at the time of this writing: here, here, and here.


    Comments closed
  • What happens when hospitals flip to for-profit status?

    When for-profit Vanguard Healthcare bought the Detroit Medical Center in late 2010, worried murmurs rippled through the community. DMC is the largest provider of charity care in the state, and it wasn’t clear whether Vanguard would feel obliged to honor the hospital’s charitable mission—would they keep serving the uninsured, or would access and quality of care suffer? These concerns were echoed by patient advocates when Tenet Healthcare—another for-profit—acquired Vanguard in 2013, absorbing DMC in the process.

    Were people right to worry? Maybe not, according to a new study published today in JAMA by Karen Joynt, John Orav, and Ashish Jha.

    Conclusions and Relevance: Hospital conversion to for-profit status was associated with improvements in financial margins but not associated with differences in quality or mortality rates or with the proportion of poor or minority patients receiving care.

    Existing literature on what happens when a hospital turns for-profit is dated:

    Most of the data on conversions are from the 1990s, and those data generally suggest that conversions were associated with higher margins but also higher mortality rates. However, these transitions took place during an era in which national efforts such as the Hospital Compare program designed to monitor hospital quality were not yet in existence and prior to the emergence of powerful consumer advocate groups focused on quality and safety, such as the Leapfrog Group. Thus, whether prior findings on conversions would hold today is unclear.

    The JAMA study compares 237 conversion hospitals against controls matched on size, teaching status, and geographic market (hospital referral region). The authors used a difference-in-differences model and Medicare data to compare changes in financial performance, quality of care, and patient populations.

    Conversion hospitals started with lower margins at baseline relative to controls; those margins increased by 2.2% on average, compared with only 0.4% growth among matched controls. Quality—performance on process measures and mortality—did not meaningfully change for conversion hospitals, nor did patient volume. The proportions of poor, disabled, and minority patients also appear to be unaffected by the transition to for-profit.
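    The difference-in-differences logic behind that comparison can be sketched in a few lines. The numbers below are hypothetical, chosen only to mirror the 2.2% versus 0.4% changes described above:

```python
# Minimal difference-in-differences sketch (hypothetical margins, chosen only
# to mirror the 2.2% vs. 0.4% changes described in the study).

def diff_in_diff(treat_pre, treat_post, control_pre, control_post):
    """Change among converting hospitals minus change among matched controls."""
    return (treat_post - treat_pre) - (control_post - control_pre)

effect = diff_in_diff(treat_pre=1.0, treat_post=3.2,      # margins rise 2.2 points
                      control_pre=2.0, control_post=2.4)  # controls rise 0.4 points
print(round(effect, 2))  # 1.8: the net change attributable to conversion
```

    The subtraction of the controls’ trend is what lets the design net out changes that would have happened to these hospitals anyway.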

    It’s not clear where added revenue came from—the study did not find higher Medicare patient volumes or evidence of increased Medicare payments. The authors suggest two possible mechanisms: the hospitals found ways to cut costs or extracted better payments from private insurers.

    Although we cannot test this directly, it is possible that the corporations purchasing these hospitals brought experienced management to struggling institutions, which allowed them to improve their efficiency. For a hospital with persistently negative margins, for-profit status may also bring access to capital and other financial resources that can lead to changes in the hospital’s economic viability.

    We want to clarify that mechanism—some of the revenue probably is efficiency gains, but I imagine a lot of conversions also represent consolidation, and there’s plenty of concern about consolidation increasing prices to go around.

    And there are other knots left to untangle. The authors looked at Medicare data, so it’s not clear whether results generalize to the uninsured, privately insured, or those on Medicaid. The study finds good signals that hospitals didn’t reduce care provided to vulnerable populations: the authors looked specifically at the proportion of Medicare enrollees who were poor (dual-eligible with Medicaid) or disabled, and found no changes associated with conversion. The administrative data doesn’t paint a full picture of the population’s risk profile, though, which could have changed in ways that weren’t picked up. There also wasn’t a reliable measure for charity care provided by the hospitals, so the authors “cannot rule out a decrease in this specific type of care provision.”

    This is important research, and I hope we see more of it.

    Adrianna (@onceuponA)

  • On an Antibiotic? You May Be Getting Only a False Sense of Security

    The following originally appeared on The Upshot (copyright 2014, The New York Times Company).

    The best way to prevent transmission of Ebola in the United States is to identify and quarantine those with the disease as soon as possible. However, the first Ebola patient in this country was, unfortunately, released after going to an emergency room, even though he had symptoms indicative of the disease. He was sent home on antibiotics.

    The antibiotics were, of course, of no use in treating Ebola. They’d be of no use for any viral infection, for that matter. Even if the patient had actually had a sinus infection, as his doctors initially believed, antibiotics probably wouldn’t have done much for that either.

    Yet antibiotics are regularly prescribed in this manner. Cases like this highlight a real, but often ignored, danger from their overuse: a false sense of security.

    As a pediatrician and a parent, I’ve seen many protocols and procedures that require the use of antibiotics for a number of illnesses that may not necessitate them. These plans are in place, ostensibly, to protect other children from getting sick. They rest on the idea that someone on antibiotics is no longer contagious.

    This is, tragically, often not the case. If you’ve had a small child with pinkeye, you know that few diseases can get your toddler banned from preschool faster. Most of the time, he will not be able to return to school until he has been on antibiotic drops for 24 hours.

    This assumes, of course, that the pinkeye is caused by bacteria. Often, it is not. Up to 20 percent of conjunctivitis can be caused by adenovirus alone. Pinkeye caused by a virus will be completely unaffected by any antibiotic drops; children will be infectious long after receiving them. Moreover, physicians are pretty much unable to distinguish between bacterial and viral conjunctivitis.

    Even if we were, there’s little evidence that 24 hours of antibiotic drops do much of anything to render a child noncontagious. Most of the outcomes studied include things like “early microbiological remission,” which means eradicating the infection by Day 2 to 5 of therapy. However, some children still haven’t achieved this outcome even by Day 6 to 10. Drugs simply work differently in different people.

    Strep throat isn’t much better. Resistance in group A streptococcus, the cause of strep throat, is negligible. Yet even with proper therapy, it can be very difficult to eradicate the pathogen from carriers. This has led to outbreaks among family members and closed communities even when people are properly treated.

    Even in the best-case scenario, being “on an antibiotic” isn’t much protection for others. And often, antibiotics offer no protection at all.

    Only about a quarter of children who have acute respiratory tract infections (including sinus, ear, throat and bronchial problems) have an illness caused by bacteria. But about twice that number are prescribed antibiotics for their symptoms. These extra drugs provide no useful benefit. Although they might make the patients feel better through a version of the placebo effect, they certainly don’t prevent transmission of nonbacterial pathogens from one person to another. So if they give people a false sense of reassurance that they are no longer contagious, leading them to relax their usual precautions, the antibiotics are most likely doing harm. (This is separate from the other kind of harm usually associated with antibiotic overuse: stimulating resistance in either the bacteria you’re trying to treat, or in other bacteria that happen to be present in the body.)

    Every time a patient comes to the office with an upper respiratory infection, and we prescribe an antibiotic, we imply that we’ve taken care of the problem. We give patients an incorrect impression that the drug will make them better, and will begin to kill off the germs affecting them. We also give the impression that they will be less of a risk to their friends, family and close contacts. After all, they’re “on an antibiotic.”

    Confronted with this information, physicians will often fall back on the excuse that their patients “demand” it. But too often, it’s physicians, not patients or parents, who are the problem.

    A study published in 1999 in the journal Pediatrics examined expectations and outcomes regarding visits to the pediatrician for a child’s cold symptoms. The only significant predictor for an antibiotic prescription was if a physician thought a parent wanted one. They wrote one 62 percent of the time when they assumed a parent expected a prescription, but only 7 percent of the time when they thought parents didn’t. However, it turned out that the doctors often guessed wrong as to what parents actually desired.

    Another study, published in 2003 in the Annals of Emergency Medicine, had similar findings. Doctors were more likely to prescribe an antibiotic for diarrhea when they assumed that patients expected it, but they correctly guessed patients’ expectations only a third of the time. Physicians were also more likely to prescribe antibiotics for patients with bronchitis and other respiratory infections if they believed patients wanted them, but correctly identified those expectations only about a quarter of the time. In yet another study, physicians even prescribed antibiotics to 29 percent of patients who didn’t want them.

    It’s time that we stopped viewing the overuse of antibiotics as a victimless crime. According to reports, Thomas Eric Duncan showed up in the emergency room with a 103-degree fever, a headache and abdominal pain. He rated that pain an 8 on a scale of 1 to 10. After receiving tests, he was thought, perhaps, to have sinusitis, and was given an antibiotic. I cannot guess what was in the physicians’ heads that day, but I think it’s likely they thought the antibiotics would do little harm, and potentially some good.

    We physicians may believe that antibiotic prescriptions are what patients want, but it may be time to recognize that they are sometimes more for us than for them. Moreover, the false sense of security they provide may do more harm than good.


  • Supplement ingredients are a real problem

    In last week’s Healthcare Triage News, I covered the fact that many supplements have switched from 1,3-dimethylamylamine (banned) to 1,3-dimethylbutylamine (not banned yet).

    Yesterday, in JAMA, a new study told an even worse story:

    The US Food and Drug Administration (FDA) initiates class I drug recalls when products have the reasonable possibility of causing serious adverse health consequences or death. Recently, the FDA has used class I drug recalls in an effort to remove dietary supplements adulterated with pharmaceutical ingredients from US markets. Approximately half of all FDA class I drug recalls since 2004 have involved dietary supplements adulterated with banned pharmaceutical ingredients.

    Prior research has found that even after FDA recalls, dietary supplements remain available on store shelves. However, it is not known if the supplements on sale after FDA recalls are free of the adulterants. In the present study, dietary supplements purchased at least 6 months after FDA recalls were analyzed to determine if banned drugs were still present.

    This study looked at supplements still being sold 6 months after recalls went into effect. Between 2009 and 2012, the FDA recalled 274 supplements; about 10% of those, recalled because of adulteration with pharmaceutical ingredients, were still available for purchase in mid-2013.

    At least one banned adulterant was still identified in two-thirds of the products. About 85% of sports enhancement supplements were still adulterated. So were 67% of weight loss supplements, and 20% of sexual enhancement supplements. And lest you think this is a foreign problem, 65% of the products made by US manufacturers were still adulterated.

    And get this: while 63% of the supplements still contained the same banned ingredient found before the recall, more than 20% contained at least one additional banned ingredient.

    Supplement ingredients are not a problem to be ignored. We don’t regulate them nearly as carefully as drugs and other substances.


  • What’s in a name: Medicaid “beneficiaries” edition

    Maybe I should have known this. Maybe I did know it and forgot. Maybe there’s a good reason for that.

    Surely the ACA’s implementers knew what they were doing when they began a campaign to convert all relevant Code of Federal Regulations language from Medicaid enrollees to Medicaid beneficiaries. Medicaid enrollees have always just been that—unlike Medicare beneficiaries—a naming convention emphasizing the provisional, conditional nature of the Medicaid entitlement. And the announcement accompanying the change acknowledged as much.

    The Code of Federal Regulations was revised on 15 and 16 July 2012 to change the word “recipient” to “beneficiary.” The following is excerpted from 77 FR 29002-01, which appeared on May 16, 2012 in the Federal Register:

    Removal of the Term “Recipient” for Medicaid: We have removed the term “recipient” from current CMS regulations and made a nomenclature change to replace “recipient” with “beneficiary” throughout the CFR. In response to comments from the public to discontinue our use of the unflattering term “recipient” under Medicaid, we have been using the term “beneficiary” to mean all individuals who are eligible for Medicare or Medicaid services.

    Just what is unflattering about the term “recipient” may be understood only in context; similarly, what is empowering about “beneficiary” may also only be understood in context. Medicare and Medicaid beneficiaries now stand on equal dignatorial ground.

    That’s from Ann Marie Marciarille’s “The Medicaid Gamble.”

    By and large, I tend to call people on Medicaid “enrollees,” and probably still will. There are two problems with “beneficiary.” First, it’s considered jargon and, apparently, is confusing or foreign to readers not steeped in health policy. Still, I do use the term, typically for people on Medicare, for which no active effort is required to receive the benefit.* Second, despite what they’re called, that’s just not the case for people on Medicaid. They really do have to enroll to benefit. So, I take issue with “beneficiary” even if it’s the regulation.

    * UPDATE: This is only true for age-based Medicare eligibility.


  • The relationship between consolidation and prices

    One of my biggest concerns about ACOs is that the push towards provider consolidation may lead to increased spending that might overwhelm the savings they’re supposed to generate. Austin has a nice piece on this here. A just-published paper in JAMA is on point. “Physician Practice Competition and Prices Paid by Private Insurers for Office Visits“:

    IMPORTANCE Physician practice consolidation could promote higher-quality care but may also create greater economic market power that could lead to higher prices for physician services.

    OBJECTIVE To assess the relationship between physician competition and prices paid by private preferred provider organizations (PPOs) for 10 types of office visits in 10 prominent specialties.

    DESIGN AND SETTING Retrospective study in 1058 US counties in urbanized areas, representing all 50 states, examining the relationship between measured physician competition and prices paid for office visits in 2010 and the relationship between changes in competition and prices between 2003 and 2010, using regression analysis to control for possible confounding factors.

    EXPOSURES Variation in the mean Herfindahl-Hirschman Index (HHI) of physician practices within a county by specialty (HHIs range from 0, representing maximally competitive markets, to 10 000 in markets served by a single [monopoly] practice).

    MAIN OUTCOMES AND MEASURES Mean price paid by county to physicians in each specialty by private PPOs for intermediate office visits with established patients (Current Procedural Terminology [CPT] code 99213) and a price index measuring the county-weighted mean price for 10 types of office visits with new and established patients (CPT codes 99201-99205, 99211-99215) relative to national mean prices.

    The authors looked at more than 1050 counties in the US to see if changes in physician competition were associated with prices between 2003 and 2010. They used the HHI (also well described on the blog) as a measure of competition. The main outcomes of interest were (1) the mean price paid to physicians in each specialty by PPOs and (2) a price index of the mean price for 10 types of office visits relative to national prices.

    Competition varied across counties: HHIs at the 90th percentile were 3-4 times higher than at the 10th percentile. The authors also found that prices were $5.85 – $11.67 higher in counties in the highest decile of HHI than in the lowest decile, and price indexes in the same comparison were 8%-16% higher as well. Over the seven years of the study, prices rose more in areas with less competition than in areas with more competition.
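    For readers unfamiliar with the HHI, here is a minimal sketch of how it is computed from market shares (the shares below are illustrative, not the study’s data):

```python
# Sketch of the Herfindahl-Hirschman Index: the sum of squared market shares
# (in percent). Illustrative shares, not the study's data.

def hhi(shares_pct):
    """HHI from a list of market shares expressed in percent."""
    return sum(s ** 2 for s in shares_pct)

print(hhi([25, 25, 25, 25]))  # 2500: four equal-sized practices
print(hhi([10] * 10))         # 1000: ten equal-sized practices (more competitive)
print(hhi([100]))             # 10000: a single monopoly practice
```

    Squaring the shares is what makes the index sensitive to concentration: a market dominated by one large practice scores far higher than one with the same number of practices of equal size.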

    I’ve been generally concerned about ACOs and consolidation based on theory. After all, more market power means better negotiating power means higher prices. But this adds evidence and data to my worries.


  • Sudden global epidemics, basic research, and NIH funding

    The CDC is the US public health agency primarily tasked with protecting the nation from epidemics. But the NIH matters too, because it develops vaccines to prevent the spread of epidemics and therapies to treat the infected. So, in light of Ebola, we should reflect on the value of NIH research and on what it costs.

    Francis Collins, the Director of the NIH, made a controversial claim that

    “…if we had not gone through our 10-year slide in research support, we probably would have had a vaccine in time for this that would’ve gone through clinical trials and would have been ready.”

    But as Michael Eisen and Sarah Kliff argue, Collins can’t claim that there would have been a vaccine if only we had had more funding. Kliff:

    Collins’ statement… irks scientists [because it conveys] certainty, the idea that if only more money had been spent, we’d likely have a vaccine by now. They know that’s not how vaccine development works. Scientists don’t get to name a price for the development of a vaccine — the science is just too uncertain.

    Put another way: if it’s so easy, why is there still no effective HIV vaccine?

    Eisen believes that Collins could have made a stronger case for funding the NIH: Basic research is essential for coping with novel pathogens.

    Collins should be out there pointing out that the reason we’re even in a position to develop an Ebola vaccine is because of our long-standing investment in basic research, and… by having slashed the NIH budget…, we’re making it less and less likely that we’ll be equipped to handle… future challenges to public health. (emphasis added)

    For example, when AIDS appeared we didn’t know that HIV existed, let alone how to treat it. Scientists were able to identify the pathogen and develop treatments because the cancer research community had achieved a deep understanding of retrovirus biology. Retroviruses were, at the time, a relatively recondite topic with no evident application to a highly lethal and communicable disease. My friend David States (@statesdj) recalls that:

    Gallo and Montagnier [the co-discoverers of HIV] were both retrovirologists funded for work on cancer viruses. If their labs were not already highly skilled in identifying and culturing retroviruses, and the lentivirus family had not already been characterized, it would have taken at least a year to develop the necessary skills and technology to detect HIV, and diagnostic tests would have been similarly delayed.

    Let’s accept David’s guess that funding of research on retroviruses sped up the development of a medical treatment for HIV by a year. What was the benefit of saving a year in drug development?

    There is no easy answer, because the dynamics of the AIDS epidemic depend on many factors. However, I think we can find a plausible lower bound on the benefits of accelerating treatment research by a year as follows.

    The rate of new HIV infections in the US has stabilized at about 50,000 cases per year, so the principal benefit of getting treatment a year earlier is that effective drugs were available to an additional 50,000 HIV+ people. (Unfortunately, not all of them receive the drugs, but that’s a discussion for another time.)

    Moreover, Goldman and his colleagues estimate that early treatment of HIV using highly active antiretroviral drugs has prevented 13,500 new infections per year since 1996. So, another benefit of getting an effective treatment a year earlier is an additional 13,500 HIV- people.

    AIDS is expensive, so preventing HIV infections also saves a lot of money. The CDC estimates that the costs of the new infections that occur in the US in just one year, summed over the lifetimes of all the newly infected patients, will be $16.6 billion. If so, the lifetime health care costs of a single infection will be about $330,000, and just one additional year of effective treatments will save the US about $4.5 billion by preventing 13,500 infections. That $4.5 billion is 15% of the $30 billion annual NIH budget.
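The back-of-the-envelope arithmetic above can be checked with a short script. The figures are those quoted in the post (the CDC and Goldman et al. estimates); everything else is simple division and multiplication:

```python
# Check the post's savings estimate from its stated inputs.
total_lifetime_cost = 16.6e9      # lifetime cost of one year's new US infections (CDC)
new_infections_per_year = 50_000  # annual new HIV infections in the US
infections_prevented = 13_500     # infections prevented per year by early treatment
nih_budget = 30e9                 # approximate annual NIH budget

# Lifetime cost per infection: $16.6B spread over 50,000 infections
cost_per_infection = total_lifetime_cost / new_infections_per_year

# Savings from one extra year of effective treatment
savings = infections_prevented * cost_per_infection

print(f"Cost per infection: ${cost_per_infection:,.0f}")          # ~$332,000
print(f"Savings: ${savings / 1e9:.1f} billion")                   # ~$4.5 billion
print(f"Share of NIH budget: {savings / nih_budget:.0%}")         # ~15%
```

The per-infection figure comes out to about $332,000, which the post rounds to $330,000; the $4.5 billion and 15% figures follow directly.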

    Funding basic research at the NIH helps protect Americans and others against the risk of emerging global epidemics. Even obscure health research saves lives. The 10-year slide in the NIH budget has been penny wise but pound foolish.

  • Placebos, deception, and cost-effectiveness

    In “A Duty to Deceive: Placebos in Clinical Practice,” Bennett Foddy argues that physicians should use placebos in certain circumstances. He makes some fine points, but I didn’t buy every step of his logic.

    According to the American Medical Association’s (AMA’s) code of medical ethics, a placebo may be used “only if the patient is informed of and agrees to its use.” Since a placebo’s effectiveness hinges on believing it will work, which for most people means believing it includes some active agent or procedural step (in the case of surgery), disclosing that a pill or a procedure is a placebo is likely to render it useless.

    But the AMA’s guidance offers a back door. It continues,

    A placebo may still be effective if the patient knows it will be used but cannot identify it and does not know the precise timing of its use. A physician should enlist the patient’s cooperation by explaining that a better understanding of the medical condition could be achieved by evaluating the effects of different medications, including the placebo. The physician need neither identify the placebo nor seek specific consent before its administration.

    Perhaps whether or not you think this is ethical depends on whether you think it qualifies as informed consent and whether you think informed consent is necessary anyway. (Though, if not, an even greater level of subterfuge may be acceptable to you.)

    Foddy makes an argument for a greater degree of deception in use of placebos than the AMA recommends, but only when no other therapy is superior.

    [W]hen placebos are used in good faith by a doctor who believes no better treatment to be available, it becomes possible to alleviate symptoms that cannot otherwise be alleviated. Since deceptive placebos are sometimes the best treatment, it is possible that they may be prescribed in a manner that is ethically defensible.

    Further, Foddy thinks the AMA has gone too far.

    If patient expectations can control the magnitude of the placebo effect, then the AMA is just wrong when it suggests that doctors can reveal the inert nature of placebo treatments without compromising their efficacy. If placebos are to be used in the clinic at all, they ought to be used deceptively.

    It’s hard for us to become comfortable with deception, as it challenges autonomy. And yet, if a (deceptive) placebo is used only when it’s the best available treatment, perhaps we should welcome it. Foddy has some advice for patients.

    I am not sure that patients have an expectation of complete honesty from their physicians (at least in the absence of AMA provisions that explicitly mandate it). But even if patients hold this expectation, they ought not to hold it with regard to the deceptive use of placebos. Responsible placebo use, being unlike other forms of clinical deception, bears no risk of making patients worse off, beyond the baseline risks incurred by any other effective therapy. It is a type of deception that patients ought to be thankful for, just as we are thankful when we receive a mendacious compliment from a friend.

    I disagree a bit here. Some placebos come with risks that could make patients worse off. Consider sham surgery that would require anesthesia (which carries risks) and perhaps some penetration of the skin. Even sham acupuncture penetrates the skin. Surely there’s some risk of infection. But there’s yet another consideration, which is suggested by the following passage in which Foddy argues that informed consent is not as vital in clinical practice as in clinical research.

    Research and clinical practice are not alike. In research, the subjects provide a service (data) to the researchers. If the researchers do anything to the subjects without their express consent, then they are ignoring the wishes of the subjects and treating them as mere means to obtaining the data. By contrast, in the case of clinical practice, a patient can always refuse treatment, even if the treatment is a deceptive placebo. It is thus impossible for a doctor to instrumentalize a patient who presents of her own free will to the clinic, as long as some beneficial treatment is offered in exchange for payment.

    First of all, this is illogical because a subject in a study can also refuse treatment or, in many cases, switch from control to treatment groups. Refusal and trading information for participation are two different things. In addition, though Foddy mentions the fact that in clinical practice doctors exchange provision of treatments for money (as well as, in many cases, information, though not always formally), he doesn’t connect the dots. Payment to the doctor is a cost to the patient, society, or both, depending on financing arrangements. In other words, receipt of a placebo is never completely costless. Even the time spent receiving it is a cost.

    One could argue that patients can and should make a cost-effectiveness evaluation, that we need not interfere with a placebo-for-hire transaction on their behalf. Foddy did not make this argument. But even if he had, it’s problematic because patients rarely pay the full cost at time of treatment. Placebos, therefore, might be over-provided relative to their value.

    In addition, the employment of deception necessitates—indeed is—an information asymmetry, a type of market failure. Is the doctor to tell the patient not only that a placebo will help but how much it will do so (e.g., 50% reduction in pain or similar)? To maximize effectiveness, shouldn’t the doctor overstate its likely amount? In effect, isn’t all placebo prescribing just this? Doesn’t this interfere with the patient’s ability to make a rational cost-effectiveness assessment?

    None of this necessarily means it’s wrong to provide placebos or use deceit in doing so. My point is that the bar over which one must pass to justify these is never at ground level. There is both an ethical bar, which is Foddy’s concern, and a cost-effectiveness one, which he ignores but shouldn’t.

  • Healthcare Triage: Doctors, Money, and Conflicts of Interest

    I’m a doctor. My father is a doctor. My colleagues are doctors, the people I train are doctors, lots and lots of my friends are doctors. But that doesn’t mean that doctors aren’t sometimes blind to certain issues, like their own financial conflicts of interest. Sometimes we have to poke doctors with a stick. That’s how we show our love. Conflicts of interest are the topic of this week’s Healthcare Triage:

    This episode is adapted from my NYT piece on the topic. References can be found in the links there.

  • What is and is not “wellness”?

    From “What is wellness now?” by Anna Kirkland:

    So why, then, has accident prevention not been part of wellness discourse? It is personal, nonexpert-driven, and part of individual responsibility for health. (In the workplace context, preventing industrial accidents predates any wellness concerns and would be considered already addressed with other policies.) Storing firearms separately from ammunition and in a locked gun safe is not part of wellness, for example, nor is learning to swim or taking a defensive driving course. Accident prevention is too distant from the body and too isolated and abruptly temporal to count as wellness. Wearing a life jacket while boating or refraining from texting while driving are not lifestyles. A cynic might say that it is not wellness unless its achievement also advances one’s ability to appear physically as an elite member of our society—thin, toned, and energetic at any age. [...]

    Wellness in the United States has become more focused on the attainment of specific biometric goals at the same time as it has become highly managerialized within the business world as employers seek to lower their health care costs.

    Perhaps another way to ask the question is, where do wellness and public health intersect, if at all?
