David Frakt, Associate Professor of Law at Western State University College of Law and Lieutenant Colonel in the U.S. Air Force Reserve JAG Corps, isn’t completely satisfied with the Defense Department’s new Manual for Military Commissions. Having served as lead defense counsel with the Office of Military Commissions, he knows what he’s talking about.
The Manual is the primary implementing regulation for the Military Commissions Act of 2009, containing detailed procedural guidance, rules of evidence, and a penal code with explanations of the offenses which may be prosecuted in these military tribunals.
On the whole, the 2009 MCA is substantially fairer than the 2006 version of the law, and the new Manual also contains some significant improvements over the previous version. The standards for admissibility of coerced statements and hearsay evidence, for example, are now much closer to the standards that apply in general courts-martial and federal court. There is, however, some very troubling language in the new Manual relating to the proof required to convict for certain offenses, which undermines the Obama Administration’s claims of respect for the law of war and adherence to the rule of law.
Maybe you’re a Google Reader user (if not, why not?) and you want to subscribe to the same feeds I do with just a few clicks. Now you can. I’ve created Google Reader Bundles for each of the six main categories of blogs to which I subscribe. Click on the following to view the feed and subscribe:
Health Care (Alan Katz, Kaiser Health News, Healthcare Economist, Health Affairs, Health Policy & Marketplace Review, Rational Arguments, O’Neill Health Reform, Robert Pear, Daily Dose, Merrill Goozner, Health Policy and Communications Blog, Health Reform GPS, Shots, The Health Care Blog).
Economics (Follow the Money, Cheap Talk, Brad Delong, Donald Marron, Economist’s View, Economix, Econbrowser, Baseline Scenario, Greg Mankiw, Paul Krugman, Less Wrong, Woodward & Hall, Marginal Revolution, Planet Money, Overcoming Bias, TheMoneyIllusion, self-evident).
Issues (The Agenda, Matthew Yglesias, Ezra Klein, Wonk Room, FiveThirtyEight.com, Kevin Drum, Jonathan Chait).
Government-Politics (Political Insider, CBO Blog, Talking Points Memo, Center on Budget and Policy Priorities, Political Wire, Tax Policy Center, The Monkey Cage, OMB Blog).
Finance (Bad Money Advice, The Finance Buff, Get Rich Slowly, MoneyEnergy, The Oblivious Investor, Vanguard Blog).
Science (Dot Earth, RealClimate, Skeptical Science, The Oil Drum, The Vine).
Other Blogs I Like (xkcd blog and comics, Blunt Object, Newsless.org, Opinionator, Organon).
See also my News & Links feed (same as my Google Reader Shared Items).
I head out in the morning to attend the annual meeting of the Pediatric Academic Societies. It’s in Vancouver this year, so I will be travelling much of tomorrow.
Blogging may be sporadic. But I’ll do my best to keep up. I’ll also let you know if I learn anything interesting.
If you happen to be there, please come say hello. I’ll be presenting some of my research Saturday morning.
I hate to say it, but the more I experience the health care system the more I recognize how much health care is not worth its price. I’m not saying all medical care is useless. Far from it. Some things are well understood, and some cures are effective and life-saving. I’m just saying the limitations of medical science and medical practice are larger than most realize or admit, including my younger self.
A surprising number of medical practices have never been rigorously tested to find out if they really work. Even where evidence points to the most effective treatment for a particular condition, the information is not always put into practice. “The First National Report Card on Quality of Health Care in America,” published by the Rand Corporation in 2006, found that, overall, Americans received only about half of the care recommended by national guidelines.
Certainly we can learn more with research of the right type. And we should do more about aligning financial incentives with good practice and the following of effective guidelines. Naturally, there is the potential (but not a certainty) that more funding for comparative effectiveness research can help. Aschwanden:
A $1.1 billion provision in the federal stimulus package [will provide] funds for comparative effectiveness research to find the most effective treatments for common conditions. But these efforts are bound to face resistance when they challenge existing beliefs. … [N]ew evidence often meets with dismay or even outrage when it shifts recommendations away from popular practices or debunks widely held beliefs.
Aschwanden’s piece goes on to describe how to present evidence to convince practitioners and the public to change firmly held but incorrect beliefs. There’s a mistaken idea that the truth will simply be accepted, when in fact people are generally unable to shed their false mental models. “How do you convince doctors and patients to dump established, well-loved interventions when evidence shows they don’t actually improve health?” she asks.
Aschwanden’s solution is to emphasize the narrative, even the argument by analogy, not the cold, hard facts. This gets to the issue of what most people take as evidence. People like stories, not numbers. It isn’t the facts they need updated so much as their mental model. Shifting belief is less about marshaling the latest research and more about appealing to intuition.
Proponents of comparative effectiveness research look for answers in large-scale trials, but these studies hinge on statistics about large groups of people. Such number crunching rarely has the power of personal anecdote.
That’s all fine, as far as it goes, but it misses a key point. We do need the studies and evidence first. Only then can we figure out how to present empirical findings in a convincing way. But what constitutes scientific evidence? Must comparative effectiveness research necessarily be conducted by clinical trial? A randomized trial can produce the most convincing evidence, but it isn’t always practical or possible (recall my debate with Robin Hanson on this point).
Observational studies using sound methods that exploit non-experimental randomness can provide high quality evidence. This fact and the associated technique (instrumental variables) are understood by many economists, some physicians, and too few epidemiologists (I eagerly await the day when I’m convinced otherwise). Results from such studies can influence thinking and practice. The American Heart Association and American Stroke Association consider a collection of nonrandomized studies to be as convincing a source of evidence as a single randomized trial, though less convincing than data from multiple randomized trials, which is a sensible position (see figure below from their guidelines).
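The instrumental-variables idea can be illustrated with a small simulation. This is a sketch in plain numpy, and every name and coefficient here is invented for illustration; it is not drawn from any study. The point is that when an unobserved confounder drives both treatment and outcome, a naive regression is biased, while the instrument recovers the true effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Simulated data: an unobserved confounder affects both treatment and outcome,
# so the naive treatment-outcome association is biased.
confounder = rng.normal(size=n)
instrument = rng.normal(size=n)  # shifts treatment but has no direct effect on outcome
treatment = 0.8 * instrument + confounder + rng.normal(size=n)
outcome = 2.0 * treatment - 3.0 * confounder + rng.normal(size=n)

# Naive OLS slope of outcome on treatment: contaminated by the confounder.
naive = np.cov(treatment, outcome)[0, 1] / np.var(treatment)

# IV (Wald) estimate: ratio of the instrument's covariance with outcome
# to its covariance with treatment.
iv = np.cov(instrument, outcome)[0, 1] / np.cov(instrument, treatment)[0, 1]

print(f"true effect: 2.0, naive OLS: {naive:.2f}, IV: {iv:.2f}")
```

With these made-up numbers the naive estimate lands well below the true effect of 2.0 (the confounder pulls it down), while the IV estimate lands close to 2.0. Lotteries, policy discontinuities, and other sources of non-experimental randomness play the role of the instrument in real studies.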
Unfortunately, far too much thinking in medicine and in the rest of our lives is based on the lowest form of evidence, if any. The consensus opinion of experts may be better than nothing but not necessarily. The history of medicine (and most human endeavors) has shown time and again that opinion, even consensus opinion, is often wrong, sometimes tragically so. Moreover, far too often consensus or even one’s own opinion is allowed to override more objective forms of evidence. That’s the psychological problem Aschwanden addressed and may be the most important fact of all.
You may remember that a few months ago, I – along with some other people – was a bit upset about an article Megan McArdle wrote in the Atlantic. Austin Frakt got a few of us together, and we wrote a letter to the editor. Unfortunately, they didn’t publish it. More unfortunately, none of the other letters they did publish accomplished the same goals as ours. So we’re posting it anyway. I’m still not inclined to re-start my subscription.
To The Atlantic Editor:
Megan McArdle’s March 2010 article, “Myth Diagnosis,” distorts the scientific record in asserting that, “Quite possibly, lack of health insurance has no more impact on your health than lack of flood insurance.” Citing a tiny fraction of the literature on this topic, she concludes that we should know far more about the relationship between health insurance and mortality before considering major reforms to the health care system. But we already know vastly more than McArdle lets on.
For example, she characterized one study, which did not find a decrease in mortality risk due to insurance, as “what may be the largest and most comprehensive analysis yet done of the effect of insurance on mortality.” That sounds as if this single study is determinative. Yet no study in a social science could be. In truth, that insurance and the access to care it facilitates improves health and reduces mortality risk is as close to an incontrovertible truth as one can find in social science.
Viewed as a whole, the body of evidence shows that this relationship is well established. Last year, comprehensive literature reviews conducted by the Institute of Medicine and published in the Milbank Quarterly concluded that the overwhelming majority of well-conducted studies have found important health benefits of insurance, including lower risk of mortality. In addition to quasi-experimental research, several observational studies by leading researchers that controlled for a robust set of characteristics have demonstrated a 35-43% greater risk of death within 8-10 years for adults who were uninsured at baseline and even higher relative risks for older uninsured adults with treatable chronic conditions, such as diabetes and hypertension. These and other relevant studies are described in three online summaries posted in response to McArdle’s article—by Stan Dorn on Ezra Klein’s blog at the Washington Post (tinyurl.com/StanDorn), Harold Pollack on The New Republic’s The Treatment blog (tinyurl.com/HPollack), and by J. Michael McWilliams on Austin Frakt’s blog The Incidental Economist (tinyurl.com/JMMcWill).
But McArdle did not make her readers aware of this body of evidence. Instead, she cherry-picked work that supported her conclusion, ignoring every study published since 1994 that is inconsistent with her argument. It is one thing to argue that we should reassess proposed approaches to health reform. It is quite another to misrepresent a body of work in support of that conclusion and further mislead readers that such work does not exist.
No one could object to The Atlantic’s support for a wide range of opinion columns. But The Atlantic is a respected, widely read home to intellectually honest and rigorous journalism. One hopes that, before publishing an article like McArdle’s at a key juncture of the national debate over health reform, the magazine’s editors would have made sure that the article fairly reflected the available evidence. Sadly, McArdle’s article did not come close to meeting that standard.
Austin Frakt, PhD Assistant Professor of Health Policy and Management School of Public Health Boston University
Stan Dorn, JD Senior Fellow Urban Institute
Jack Hadley, PhD Professor and Senior Health Services Researcher Dept. of Health Policy and Management George Mason University
Aaron E. Carroll, MD, MS Associate Professor of Pediatrics Director, Center for Health Policy and Professionalism Research Indiana University School of Medicine
Lisa I. Iezzoni, MD, MSc Professor of Medicine, Harvard Medical School Director, Mongan Institute for Health Policy Massachusetts General Hospital
Unfortunately, the letter I drafted with colleagues in response to Megan McArdle’s March 2010 The Atlantic article “Myth Diagnosis” – the text of which appears above – was not published in the magazine. Those letters that were published did not make the same points we did (letters available online).
A number of you have asked me to explain in more detail (after listening to me on Sound Medicine) why nutrient based regulations for school lunches can be a bad idea. Specifically, you want to know why I have a problem with limiting the percentage of calories that can come from fat.
First of all, I have no problem with the idea of keeping the number of fat calories low. That makes intuitive sense. The problem is that, sometimes, by focusing on the nutrients and not the food, you can lead schools to do bad things.
Let’s say we mandate that no more than 25% of a meal’s calories can come from fat. Then, let’s say that on the abysmally low amount we give schools to make lunches, they come up with a burger and fries that are passable. The lunch has 700 calories. One problem: 200 of those calories are from fat. That’s about 29%, which is too much.
Now, the school could try to start over. But it’s easy to make fries, and they already bought the burgers and buns. They can’t remove fat. But what they can do is increase the calories in the lunch! If they give the kids a couple of cheap candies – 100 calories of pure sugar – the lunch now has 800 calories. Because only 200 calories still come from fat, they’ve now met the 25% rule.
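The arithmetic above can be sketched in a few lines (the function name and numbers are just for illustration):

```python
def fat_fraction(fat_calories, total_calories):
    """Share of a meal's calories that come from fat."""
    return fat_calories / total_calories

# Original lunch: 700 calories, 200 from fat -> over the 25% cap.
print(f"{fat_fraction(200, 700):.1%}")  # 28.6%

# Add 100 calories of pure sugar: same 200 fat calories out of 800 total.
print(f"{fat_fraction(200, 800):.1%}")  # 25.0% -- now "compliant"
```

The school made the lunch worse (more sugar, more calories) but the fat percentage now passes, which is exactly the perverse incentive a nutrient-based rule creates.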
Is this good for kids? Absolutely not. It would be much better to have food based (not nutrient based) guidelines that would require the school to make a healthier lunch without resorting to candy. That’s part of what the new law hopes to do.
Those of you who have been reading the blog for a while should recognize the Wyden-Bennett bill. The Wyden amendment was somewhat based on this proposal.
Look, there is no way you could call the Wyden-Bennett bill liberal. It had real bipartisan support – and for good reason. It was a market-based, voucher approach to insurance that would have eliminated Medicaid, decoupled insurance from employment, and made the country into a massive exchange.
It’s not even close to single-payer. And, while it is reform, it has enjoyed plenty of support in the past from people on both sides of the aisle.
Bennett isn’t a liberal. He’s not even a moderate. But he’s a legislator: He’s willing to work with the other side to get things done. And he’s paying for it now.
The result of this isn’t just that Bob Bennett might lose his seat. It’s that other legislators will stop legislating. It’s that all Bennett’s friends will see what happened to their old colleague and go pale. It’s that compromise will become too dangerous to seriously contemplate, and so the possibility for compromise will become even more remote.
At some point, maybe this is a good thing. If compromise is impossible, better that we just get some loons into the Senate and admit the institution’s modern composition and lift the strictures on majority action. But let’s at least call this what it is: Bennett is not in trouble because he is a liberal. He’s in trouble because he’s a legislator.
I’ve said it again and again. There is still much work to be done. This bodes poorly for any of it happening.
When I nap, there’s at least a 50-50 chance that I’ll wake up feeling groggy and awful. Whatever cognitive benefits naps offer, they’re vastly outweighed by the period of time in which I’m useless and unhappy and desperate to go back to sleep. And it’s not as if I’m bad at waking up in general: So far as the morning goes, my experience is that I’m better and quicker at waking up than most. So what gives?
What gives is that Klein’s naps are not the right length. And that doesn’t mean they’re too short. They are more than likely too long. For most people optimal nap length is less than 30 minutes (for me it is 20). The trick is to enter the first few lighter stages of sleep and then exit before experiencing the deeper ones. Going deep risks sleep inertia, that horrible, groggy feeling to which Klein refers.
Now, I’m not a sleep scientist so take all of the above with a grain of salt. (One might do better to consult the folks over at NY Times’ All-Nighters.) However, from 9th grade biology (when I wrote a report on the subject) to fatherhood (when I read many books on it) I’ve had a decades-long interest in sleep. I trained myself to power nap in high school thinking it’d be handy in college and beyond (correct I was). I’ve fought occasional battles with insomnia which have motivated me to contemplate sleep and why it is necessary, yet sometimes elusive.
One thing I learned in my amateur study of the subject is that the body has several different systems that regulate sleep. They’re quasi-independent and can get out of phase. They’re particularly apt to do so when you wake up at the “wrong time,” like from a deep sleep phase. Part of you is still asleep even though you seem awake. Your brain remains in a zombie-like netherworld until the systems re-sync hours later. That’s why you can be a morning person who fails at naps. Consider a shorter one. Sleep on it, literally.
I’m well-known in a tiny (now broader) circle as a good napper. As few as five minutes of shut-eye and I’m refreshed and recharged. When the need arises I can nap briefly just about anywhere. I’ve napped at rock concerts and on all manner of surfaces and in contorted positions. But I’ve wondered, does nap location/position matter? A new paper in Biological Psychology, summarized at BPS Research Digest, suggests it does.
Even naps as short as ten minutes have been shown to provide psychological benefits in terms of reduced fatigue and improved concentration (pdf). But would-be nappers face some strategic decisions, most obviously – does it matter whether I nap in my chair or ought I try to find somewhere to lie down? And then … if remaining seated, is it okay to lean forwards and rest my head on a desk?
When it comes to napping while leaning back in a chair or car seat, past research has shown that the further you can lean back, the better, at least in terms of subjective fatigue and reaction times. Now Dayong Zhao and colleagues have addressed the leaning forward issue, comparing lying-down napping and leaning-forward napping, and they’ve found that the former is the most effective, but that the leaning-forward variety still has clear benefits compared with no nap at all.
Nowadays most of my napping–which isn’t much–occurs on my train ride home from work. The most important consideration for me is neck alignment. Leaning my head back may find a stable surface for it, but it can irritate my neck. Best results seem to occur with folded arms and head slightly slumped forward. But the new research suggests I’m giving something up with a forward-tilt.
One thing to note is the studies referenced are all very small. Why can’t researchers get more people to participate in napping studies?