Research fails to show that green coffee extract works. It also fails to show travel bans are a good idea for Ebola. This is Healthcare Triage News.
For those of you who came here for references and more information:
- Randomized, double-blind, placebo-controlled, linear dose, crossover study to evaluate the efficacy and safety of a green coffee bean extract in overweight subjects (Retracted)
- Authors retract green coffee bean diet paper touted by Dr. Oz
- Green Coffee Bean Manufacturer Settles FTC Charges of Pushing its Product Based on Results of “Seriously Flawed” Weight-Loss Study
- The evidence on travel bans for diseases like Ebola is clear: they don’t work
- Mitigation strategies for pandemic influenza in the United States
- Human Mobility Networks, Travel Restrictions, and the Global Spread of 2009 H1N1 Pandemic
Should a state Medicaid program be required to cover every drug, no matter how expensive? Chris Conover says it should have the freedom not to do so:
Last week, an advisory board recommended that Arkansas’s Medicaid program cover Kalydeco, a cystic fibrosis drug [which would cost the program] $239,000 per patient year. [... B]ecause “Arkansas appears to be the only state preventing patients who meet the eligibility criteria established by the U.S. Food and Drug Administration” the state is being sued on grounds that its policy violates a federal statute requiring state Medicaid programs to pay for all medically necessary treatments. [...]
[P]art of the reason the Arkansas lawsuit is getting leverage is because of evidence that cost appeared to be a factor underlying the decision to deny coverage for Kalydeco. [...]
The WHO considers a medical intervention to be “not cost-effective” if it costs more than three times a nation’s per capita GDP per year of life saved. With U.S. GDP per capita currently at $51,749, it is pretty obvious that $239,000 lies pretty far outside the bounds of what WHO would deem cost-effective. [...]
[T]his 7-page National Health Law Program summary of medical necessity under Medicaid highlights the complexity of the problem. The upshot is that “medical necessity” is never defined explicitly either in the Medicaid statute or regulations. It has been fleshed out in case law and administrative rulings. The Stanford definition of medical necessity which has been adopted by a number of state Medicaid programs [has] a very restrictive definition: “An intervention is considered cost effective if the benefits and harms relative to costs represents an economically efficient use of resources for patients with this condition.”
Such a definition does not permit administrators to do what the Oregon Medicaid program did many years ago: rank order all treatments by their cost-effectiveness and eliminate from coverage all treatments above a certain cost per added year of life threshold. [Here's one,* of many, papers on Oregon's experience with cost-effectiveness ranking.] So how did Oregon get away with adopting cost-effectiveness rankings? By getting a waiver. [...]
Chris goes on to argue that Arkansas, and all states, should be able to apply cost-effectiveness criteria without waivers. More at the link.
* Apart from the link in brackets, all others are in Chris’s original. They are not mine.
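As a quick check of the WHO threshold arithmetic in the quoted passage, here is a minimal sketch using the figures cited in the post:

```python
# Quick check of the WHO cost-effectiveness threshold cited above.
# WHO deems an intervention "not cost-effective" if it costs more than
# 3x GDP per capita per year of life saved. Figures are from the post.

gdp_per_capita = 51_749        # US GDP per capita (USD), as quoted
who_threshold = 3 * gdp_per_capita
kalydeco_cost = 239_000        # Kalydeco cost per patient-year (USD)

print(f"WHO threshold: ${who_threshold:,}")                              # $155,247
print(f"Kalydeco exceeds it by: ${kalydeco_cost - who_threshold:,}")     # $83,753
```

So Kalydeco's price comes in at roughly $84,000 per year above the WHO's "not cost-effective" line, even before considering whether it saves a full year of life per patient-year of treatment.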
Whenever international comparisons are made on quality metrics, inevitably someone complains that the results are invalid because it’s impossible to standardize quality across different systems. Those people are not entirely wrong. It is hard to make sure that you are measuring the exact same thing in different countries. That’s why it’s important to use a variety of metrics, and to acknowledge the limitations in any study and any comparison.
That said, it’s sometimes easier to standardize metrics of access. I talk about this over at the AcademyHealth blog in my latest post. Go read!
JAMA is chock full of great papers this week. “Total Expenditures per Patient in Hospital-Owned and Physician-Owned Physician Organizations in California”:
Importance Hospitals are rapidly acquiring medical groups and physician practices. This consolidation may foster cooperation and thereby reduce expenditures, but also may lead to higher expenditures through greater use of hospital-based ambulatory services and through greater hospital pricing leverage against health insurers.
Objective To determine whether total expenditures per patient were higher in physician organizations (integrated medical groups and independent practice associations) owned by local hospitals or multihospital systems compared with groups owned by participating physicians.
Design and Setting Data were obtained on total expenditures for the care provided to 4.5 million patients treated by integrated medical groups and independent practice associations in California between 2009 and 2012. The patients were covered by commercial health maintenance organization (HMO) insurance and the data did not include patients covered by commercial preferred provider organization (PPO) insurance, Medicare, or Medicaid.
Main Outcomes and Measures Total expenditures per patient annually, measured in terms of what insurers paid to the physician organizations for professional services, to hospitals for inpatient and outpatient procedures, to clinical laboratories for diagnostic tests, and to pharmaceutical manufacturers for drugs and biologics.
Exposures Annual expenditures per patient were compared after adjusting for patient illness burden, geographic input costs, and organizational characteristics.
The gist of this was that researchers wanted to see what expenditures were per patient in physician-owned groups versus hospital-owned groups. They adjusted for patient health, geography, and other organizational characteristics.
Physician-owned groups had expenditures of $3066 per patient, versus $4312 in hospital-owned groups. Groups owned by multihospital systems had expenditures of $4776. Even after adjusting for other factors, the differences in expenditures remained significant ($435 more for hospital-owned groups and $704 more for groups owned by multihospital systems).
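The gap between the raw means and the adjusted differences is worth making explicit. A quick sketch using the numbers reported above:

```python
# Reproducing the expenditure gaps quoted above (USD per patient per year).
# The adjusted differences come from the study's regression; the unadjusted
# gaps are simple subtraction of the reported means.

physician_owned = 3066
hospital_owned = 4312
multihospital = 4776

print(f"Unadjusted gap, hospital-owned:      ${hospital_owned - physician_owned}")  # $1246
print(f"Unadjusted gap, multihospital-owned: ${multihospital - physician_owned}")   # $1710
# After adjusting for illness burden, geographic input costs, and
# organizational characteristics, the gaps shrink to $435 and $704,
# but remain statistically significant.
```

In other words, case mix and geography explain most, but not all, of the raw difference.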
Consolidation seems to be the trend of the day. Do with this what you will.
This analysis suggests a future for ‘‘wellness’’ initiatives: a great deal of negative wellness [basically, underwriting, cost shifting, or back-door risk-rating] in the form of crude insurance rate or hiring discrimination, diverse and visible but largely cosmetic wellness programs used as cheap recruitment and retention, or corporate public relations. There will be some scope for positive wellness [investment in health promotion] in the same high-skill or low-turnover firms that have incentive to train employees. Just as American employers routinely demand skills from employees and then complain to governments when the requisite skilled employees do not appear in the open market, we should expect that American employers will be much more likely to demand health from prospective employees than they are likely to actually invest in it. Negative wellness policies and continued underprovision of population health is consequently the likely future.
That’s from Scott Greer and Robert Fannion. The basic insight, which the paper spells out multiple times, is that there’s little incentive for employers to make wellness investments in workers who aren’t likely to stay long enough to generate a positive return. However, shifting the cost of health care onto workers who, all else being equal, cost employers more in health care costs (e.g., smokers) is in employers’ financial interests.
“Big data” is all the rage. I am curious what people think big data can do, and what some claim it will do, for health and health care. I’m curious how people think causal connections will arise from (or using) big data. In large part this consideration seems overlooked, as if credible causal inferences will just emerge from the data, announcing themselves, dripping wet with self-evident validity. I am concerned.
I’ve been collecting excerpts of articles on big data, many sent to me by Darius Tahir, whom I thank. What I’ve compiled to date is below and in no particular order. For each piece, the author (with link to original) is indicated, followed by a quote. In many cases, what’s quoted is not an expression of the author’s views, but a characterization of the views of individuals about whom the author is reporting. I encourage you to click through for details before jumping to conclusions about who holds what view.
Also, do not interpret these as suggesting I do not see promise in big data. I do! I just think how we use data matters just as much as, if not more than, how much data we have. We should marry “big data” with “smart analysis” not just “big claims.”
1. Bill Gardner has not overlooked causal inference:
Here’s where the ‘big data’ movement comes in. We can assemble data sets with large numbers of patients from electronic health records (EHRs). Moreover, EHRs contain myriad demographic and clinical facts about these patients. It is proposed that with these large and rich data sets, we can match drug X and drug Y patients on clinically relevant variables sufficiently closely that the causal estimate of the difference between the effects of drug X and drug Y in the matched observational cohort would be similar to the estimate we would get if we had run an RCT.
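The matching idea Gardner describes can be illustrated with a toy sketch: pair each drug-X patient with the drug-Y patient closest on a clinically relevant covariate, then compare outcomes within matched pairs. All data below are invented for illustration; real applications match on many covariates (often via propensity scores), not one.

```python
# Toy sketch of covariate matching in an observational cohort.
# Each tuple is (severity score, outcome); the data are invented.

drug_x = [(2.0, 0.60), (3.5, 0.50), (5.0, 0.40)]
drug_y = [(2.1, 0.55), (3.4, 0.48), (5.2, 0.30), (7.0, 0.20)]

diffs = []
for sev_x, out_x in drug_x:
    # Nearest-neighbor match on the severity covariate
    sev_y, out_y = min(drug_y, key=lambda p: abs(p[0] - sev_x))
    diffs.append(out_x - out_y)

effect = sum(diffs) / len(diffs)
print(f"Matched estimate of X vs Y effect: {effect:.3f}")  # ≈ 0.057
```

The catch, of course, is that matching can only balance the variables you observe; whether that gets you close to the RCT answer depends entirely on whether the unobserved ones matter.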
2. David Shaywitz echoes Bill and also notes the views of others that begin to shade toward the magical or mystical (“something will emerge”):*
Clinical utility, as Haddow and Palomaki write, “defines the risks and benefits associated with a test’s introduction into practice.” In other words, what’s the impact of using a particular assessment – how does it benefit patients, how might it adversely impact them? This may be easiest to think about in the context of consumer genetic tests suggesting you may be at slightly elevated risk for condition A, or slightly reduced risk for condition B: is this information (even if accurate) of any real value? [...]
The other extreme, which Stanford geneticist Atul Butte is perhaps best known for advocating, is what might be called the data volume perspective; collect as much data as you possibly can, the reasoning goes, and even if any individual aspect of it is sketchy or unreliable, these issues can be overcome with volume. If you examine enough parameters, interesting relationships are likely to emerge, and the goal is to not let the perfect be the enemy of the good enough. Create a database with all the information you can find, the logic goes, and something will emerge.
3. Darius Tahir reminds us that we’re most readily going to find correlations (implication: not causation) in a hypothesis-free space:
Supplementing medical data with consumer data might lead to better predictions, he, and the alliance, reasoned.
In the pilot program, the network will send its health data to a modeler, which will pair that information with consumer data, such as credit card and Google usage. The modeler doesn’t necessarily have a hypothesis going in, Cantor said.
“They’re identifying correlations between the consumer data and healthcare outcomes,” he said.
4. Amy Standen really frightens me with the scientific-method-is-dead idea:
“The idea here is, the scientific method itself is growing obsolete,” [...]
[S]o much information will be available at our fingertips in the future that there will be almost no need for experiments. The answers are already out there. [...]
Now, Butte says, “you can connect pre-term births from the medical records and birth census data to weather patterns, pollution monitors and EPA data to see is there a correlation there or not.” [...]
Analyzing data is complicated and requires specific expertise. What if the search engine has bugs, or the records are transcribed incorrectly? There’s just too much room for error, she says.
“It’s going to take a system to interpret the data,” she says. “And that’s what we don’t have yet. We don’t have that system. We will, I mean for sure, the data is there, right? Now we have to develop the system to use it in a thoughtful, safe way.”
5. Chris Anderson says that numbers can speak for themselves:
Today companies like Google, which have grown up in an era of massively abundant data, don’t have to settle for wrong models. Indeed, they don’t have to settle for models at all. [...]
With enough data, the numbers speak for themselves. [...]
“Correlation is enough.” We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot. [...]
Correlation supersedes causation, and science can advance even without coherent models, unified theories, or really any mechanistic explanation at all.
6. Bernie Monegain writes about Partners HealthCare’s chief information officer James Noga’s dream of moving beyond prediction (for which correlations that aren’t causation can be useful) and designing interventions (for which causality is crucial):
He likes to employ a travel analogy. Drivers once got maps to travel from one point to another — they basically figured it out themselves — then they went to predictive analytics to find the best route to get from point A to point B.
“Then as you get into prescriptive analytics, it actually tells you on the way real time, an accident has happened and reroutes you,” said Noga.
“With big data you’re really talking about data that’s fast moving and perpetually occurring, actually able to intercede rather than merely advise in terms of the care of patients,” he said. “On the discovery side with genetics and genomics using external data sources, I think the possibilities of what I would call evidence-based medicine, and being able to drive that to drive better protocols on the clinical side is endless in terms of the possibilities.”
7. Veronique Greenwood offers concrete examples and a warning:
Back in her office, [Jennifer Frankovich] found that the scientific literature had no studies on patients like this to guide her. So she did something unusual: She searched a database of all the lupus patients the hospital had seen over the previous five years, singling out those whose symptoms matched her patient’s, and ran an analysis to see whether they had developed blood clots. “I did some very simple statistics and brought the data to everybody that I had met with that morning,” she says. The change in attitude was striking. “It was very clear, based on the database, that she could be at an increased risk for a clot.” [...]
For his doctoral thesis, [Nicholas Tatonetti] mined the F.D.A.’s records of adverse drug reactions to identify pairs of medications that seemed to cause problems when taken together. He found an interaction between two very commonly prescribed drugs: The antidepressant paroxetine (marketed as Paxil) and the cholesterol-lowering medication pravastatin were connected to higher blood-sugar levels. Taken individually, the drugs didn’t affect glucose levels. But taken together, the side-effect was impossible to ignore. “Nobody had ever thought to look for it,” Tatonetti says, “and so nobody had ever found it.” [...]
There are numerous correlations like this, and the reasons for them are still foggy — a problem Tatonetti and a graduate assistant, Mary Boland, hope to solve by parsing the data on a vast array of outside factors. Tatonetti describes it as a quest to figure out “how these diseases could be dependent on birth month in a way that’s not just astrology.” Other researchers think data-mining might also be particularly beneficial for cancer patients, because so few types of cancer are represented in clinical trials. [...]
In the lab, ensuring that the data-mining conclusions hold water can also be tricky. By definition, a medical-records database contains information only on sick people who sought help, so it is inherently incomplete. Also, they lack the controls of a clinical study and are full of other confounding factors that might trip up unwary researchers. Daniel Rubin, a professor of bioinformatics at Stanford, also warns that there have been no studies of data-driven medicine to determine whether it leads to positive outcomes more often than not. Because historical evidence is of “inferior quality,” he says, it has the potential to lead care astray.
Yet despite the pitfalls, developing a “learning health system” — one that can incorporate lessons from its own activities in real time — remains tantalizing to researchers.
8. Vinod Khosla expresses some ambitions:
Technology will reinvent healthcare. Healthcare will become more scientific, holistic and consistent; delivering better-quality care with inexpensive data-gathering techniques and devices; continual monitoring and ubiquitous information leading to personalized, precise and consistent insights. New medical discoveries will be commonplace, and the practices we follow will be validated by more rigorous scientific methods. Although medical textbooks won’t be “wrong,” the current knowledge in them will be replaced by more precise and advanced methods, techniques and understandings.
Hundreds of thousands or even millions of data points will go into diagnosing a condition and, equally important, the continual monitoring of a therapy or prescription. [...]
Over time, we will see a 5×5 improvement across healthcare: 5x reduction in doctors’ work (shifted to data-driven systems), 5x increase in research (due to the transformation to the “science of medicine”), 5x lower error rate (particularly in diagnostics), 5x faster diagnosis (through software apps) and 5x cost reduction.
9. Larry Page thinks government regulation is slowing the promise of big data:
I am really excited about the possibility of data also, to improve health. But that’s– I think what Sergey’s saying, it’s so heavily regulated. It’s a difficult area. I can give you an example. Imagine you had the ability to search people’s medical records in the U.S. Any medical researcher can do it. Maybe they have the names removed. Maybe when the medical researcher searches your data, you get to see which researcher searched it and why. I imagine that would save 10,000 lives in the first year. Just that. That’s almost impossible to do because of HIPAA. I do worry that we regulate ourselves out of some really great possibilities that are certainly on the data-mining end.
10. Lindsey Cook writes about some of the barriers to big data (legal issues, physicians’ concerns, patients’ misunderstandings, technological barriers, misplaced research funding), though not about causal inference. Her piece includes a primer on what “big data” means (“an incredibly large amount of information”).
Big data is already producing research that has helped patients. For example, a data network for children with Crohn’s disease and ulcerative colitis called ImproveCareNow helped increase remission rates for sick children, according to Dr. Christopher Forrest and his colleagues, who are creating a national network of big data for children in the U.S.
When for-profit Vanguard Healthcare bought the Detroit Medical Center in late 2010, worried murmurs rippled through the community. DMC is the largest provider of charity care in the state, and it wasn’t clear whether Vanguard would feel obliged to honor the hospital’s charitable mission—would they keep serving the uninsured, or would access and quality of care suffer? These concerns were echoed by patient advocates when Tenet Healthcare—another for-profit—acquired Vanguard in 2013, absorbing DMC in the process.
Were people right to worry? Maybe not, according to a new study published today in JAMA by Karen Joynt, John Orav, and Ashish Jha.
Conclusions and Relevance: Hospital conversion to for-profit status was associated with improvements in financial margins but not associated with differences in quality or mortality rates or with the proportion of poor or minority patients receiving care.
Existing literature on what happens when a hospital turns for-profit is dated:
Most of the data on conversions are from the 1990s, and those data generally suggest that conversions were associated with higher margins but also higher mortality rates. However, these transitions took place during an era in which national efforts such as the Hospital Compare program designed to monitor hospital quality were not yet in existence and prior to the emergence of powerful consumer advocate groups focused on quality and safety, such as the Leapfrog Group. Thus, whether prior findings on conversions would hold today is unclear.
The JAMA study compares 237 conversion hospitals against controls matched on size, teaching status, and geographic market (hospital referral region). The authors used a difference-in-differences model and Medicare data to compare changes in financial performance, quality of care, and patient populations.
Conversion hospitals started with lower margins at baseline relative to controls; those margins increased by 2.2% on average compared to only 0.4% growth among matched controls. Quality—performance on process measures and mortality—did not meaningfully change for conversion hospitals, nor did patient volume. The proportions of poor, disabled, and minority patients also appeared to be unaffected by the transition to for-profit.
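The core difference-in-differences logic behind those margin numbers is simple enough to sketch (the study's actual model also handles matching and covariates; this is just the arithmetic):

```python
# Minimal difference-in-differences sketch using the margin changes
# reported in the study (in percentage points). The estimate is the
# change among conversion hospitals minus the change among controls,
# which nets out trends common to both groups.

conversion_change = 2.2   # margin change, hospitals that converted
control_change = 0.4      # margin change, matched control hospitals

did_estimate = conversion_change - control_change
print(f"DiD estimate: {did_estimate:.1f} percentage points")  # 1.8
```

The matched controls are what make this credible: without them, a 2.2-point margin improvement could just reflect industry-wide trends over the period.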
It’s not clear where added revenue came from—the study did not find higher Medicare patient volumes or evidence of increased Medicare payments. The authors suggest two possible mechanisms: the hospitals found ways to cut costs or extracted better payments from private insurers.
Although we cannot test this directly, it is possible that the corporations purchasing these hospitals brought experienced management to struggling institutions, which allowed them to improve their efficiency. For a hospital with persistently negative margins, for-profit status may also bring access to capital and other financial resources that can lead to changes in the hospital’s economic viability.
That mechanism deserves some qualification—some of the revenue probably does reflect efficiency gains, but I imagine a lot of conversions also represent consolidation, and there’s plenty of concern about consolidation increasing prices to go around.
And there are other knots left to untangle. The authors looked at Medicare data, so it’s not clear whether results generalize to the uninsured, privately insured, or those on Medicaid. The study finds good signals that hospitals didn’t reduce care provided to vulnerable populations: the authors looked specifically at the proportion of Medicare enrollees who were poor (dual-eligible with Medicaid) or disabled, and found no changes associated with conversion. The administrative data doesn’t paint a full picture of the population’s risk profile, though, which could have changed in ways that weren’t picked up. There also wasn’t a reliable measure for charity care provided by the hospitals, so the authors “cannot rule out a decrease in this specific type of care provision.”
This is important research, and I hope we see more of it.
The following originally appeared on The Upshot (copyright 2014, The New York Times Company).
The best way to prevent transmission of Ebola in the United States is to identify and quarantine those with the disease as soon as possible. However, the first Ebola patient in this country was, unfortunately, released after going to an emergency room, even though he had symptoms indicative of the disease. He was sent home on antibiotics.
The antibiotics were, of course, of no use in treating Ebola. They’d be of no use for any viral infection, for that matter. Even if the patient had actually had a sinus infection, as his doctors initially believed, antibiotics probably wouldn’t have done much for that either.
Yet antibiotics are regularly prescribed in this manner. Cases like this highlight a real, but often ignored, danger from their overuse: a false sense of security.
As a pediatrician and a parent, I’ve seen many protocols and procedures that require the use of antibiotics for a number of illnesses that may not necessitate them. These plans are in place, ostensibly, to protect other children from getting sick. They rest on the idea that someone on antibiotics is no longer contagious.
This is, tragically, often not the case. If you’ve had a small child with pinkeye, you know that few diseases can get your toddler banned from preschool faster. Most of the time, he will not be able to return to school until he has been on antibiotic drops for 24 hours.
This assumes, of course, that the pinkeye is caused by bacteria. Often, it is not. Up to 20 percent of conjunctivitis can be caused by adenovirus alone. Pinkeye caused by a virus will be completely unaffected by any antibiotic drops; children will be infectious long after receiving them. Moreover, physicians are pretty much unable to distinguish between bacterial and viral conjunctivitis.
Even if we could, there’s little evidence that 24 hours of antibiotic drops do much of anything to render a child noncontagious. Most of the outcomes studied include things like “early microbiological remission,” which means eradicating the infection by Day 2 to 5 of therapy. However, some children still haven’t achieved this outcome even by Day 6 to 10. Drugs simply work differently in different people.
Strep throat isn’t much better. Resistance in group A streptococcus, the cause of strep throat, is negligible. Yet even with proper therapy, it can be very difficult to eradicate the pathogen from carriers. This has led to outbreaks among family members and closed communities even when people are properly treated.
Even in the best-case scenario, being “on an antibiotic” isn’t much protection for others. And often, antibiotics offer no protection at all.
Only about a quarter of children who have acute respiratory tract infections (including sinus, ear, throat and bronchial problems) have an illness caused by bacteria. But about twice that number are prescribed antibiotics for their symptoms. These extra drugs provide no useful benefit. Although they might make the patients feel better through a version of the placebo effect, they certainly don’t prevent transmission of nonbacterial pathogens from one person to another. So if they give people a false sense of reassurance that they are no longer contagious, leading them to relax their usual precautions, the antibiotics are most likely doing harm. (This is separate from the other kind of harm usually associated with antibiotic overuse: stimulating resistance in either the bacteria you’re trying to treat, or in other bacteria that happen to be present in the body.)
Every time a patient comes to the office with an upper respiratory infection, and we prescribe an antibiotic, we imply that we’ve taken care of the problem. We give patients an incorrect impression that the drug will make them better, and will begin to kill off the germs affecting them. We also give the impression that they will be less of a risk to their friends, family and close contacts. After all, they’re “on an antibiotic.”
Confronted with this information, physicians will often fall back on the excuse that their patients “demand” it. But too often, it’s physicians, not patients or parents, who are the problem.
A study published in 1999 in the journal Pediatrics examined expectations and outcomes regarding visits to the pediatrician for a child’s cold symptoms. The only significant predictor for an antibiotic prescription was if a physician thought a parent wanted one. They wrote one 62 percent of the time when they assumed a parent expected a prescription, but only 7 percent of the time when they thought parents didn’t. However, it turned out that the doctors often guessed wrong as to what parents actually desired.
Another study, published in 2003 in the Annals of Emergency Medicine, had similar findings. Doctors were more likely to prescribe an antibiotic for diarrhea when they assumed that patients expected it, but they correctly guessed patients’ expectations only a third of the time. Physicians were also more likely to prescribe antibiotics for patients with bronchitis and other respiratory infections if they believed patients wanted them, but correctly identified those expectations only about a quarter of the time. In yet another study, physicians even prescribed antibiotics to 29 percent of patients who didn’t want them.
It’s time that we stopped viewing the overuse of antibiotics as a victimless crime. According to reports, Thomas Eric Duncan showed up in the emergency room with a 103-degree fever, a headache and abdominal pain. He rated that pain an 8 on a scale of 1 to 10. After receiving tests, he was thought, perhaps, to have sinusitis, and was given an antibiotic. I cannot guess what was in the physicians’ heads that day, but I think it’s likely they thought the antibiotics would do little harm, and potentially some good.
We physicians may believe that antibiotic prescriptions are what patients want, but it may be time to recognize that they are sometimes more for us than for them. Moreover, the false sense of security they provide may do more harm than good.
In last week’s Healthcare Triage News, I covered the fact that many supplements have switched from 1,3-dimethylamylamine (banned) to 1,3-dimethylbutylamine (not banned yet).
Yesterday, in JAMA, a new study told an even worse story:
The US Food and Drug Administration (FDA) initiates class I drug recalls when products have the reasonable possibility of causing serious adverse health consequences or death. Recently, the FDA has used class I drug recalls in an effort to remove dietary supplements adulterated with pharmaceutical ingredients from US markets. Approximately half of all FDA class I drug recalls since 2004 have involved dietary supplements adulterated with banned pharmaceutical ingredients.
Prior research has found that even after FDA recalls, dietary supplements remain available on store shelves. However, it is not known if the supplements on sale after FDA recalls are free of the adulterants. In the present study, dietary supplements purchased at least 6 months after FDA recalls were analyzed to determine if banned drugs were still present.
This study looked at supplements still being sold at least 6 months after recalls went into place. Between 2009 and 2012, 274 supplements were recalled by the FDA because of adulteration with banned pharmaceutical ingredients; about 10% of those were still available for purchase in mid-2013.
At least one banned adulterant was still identified in two-thirds of the products. About 85% of sports enhancement supplements were still adulterated. So were 67% of weight loss supplements, and 20% of sexual enhancement supplements. And lest you think this is a foreign problem, 65% of the products made by US manufacturers were still adulterated.
And get this: while 63% of the supplements contained the same banned ingredient as before the recall, more than 20% contained at least one additional banned ingredient.
Supplement adulteration is not a problem to be ignored. We don’t regulate supplements nearly as carefully as drugs and other substances.