This may be the only sentence you remember. Still, I’m going to write a bunch more. They’ll explain how I write.
I always start with something that interests me. In this case, I’m interested in showing you how I write a column-length piece, about 800 words, give or take. I’m going to show you how I do that by doing it. It’ll take several days, perhaps a week, because that’s how I write them—in short sessions each morning, sometimes only 15 minutes long, rarely more than 45.
On the first morning—this morning—I only try to figure out how the piece starts—the lede—and rough out what the rest might look like. The lede is up there at the top, the first 1-3 sentences. If I only read those, they should make me want to read more. They almost tell me what the next paragraph should be. When the writing is going well, the next sentence is always obvious. It practically writes itself. I can feel it.
If I can’t feel the next sentence, it means the previous one isn’t quite right. It’s time to stop and think and rewrite. It may be time to stop for the day and come back tomorrow.
Possible sketch of the rest:
Macro editing: ordering stuff
Micro editing: getting words right
After you’ve made your point, what else to say? Or are you done?
Find a reviewer/editor
The last few sentences are hard. Here’s a trick. Look at your first few sentences. They may be the only ones readers remember.
No matter how hard I try, I can’t make this myth go away. Until I do, I will keep on posting this every Thanksgiving.
While not everyone stoops to the level of Seinfeld’s Jerry and George, who used the tryptophan in turkey to lull a girlfriend to sleep so that they could play with her toys, the supposed sleep-inducing effects of tryptophan in turkey are commonly recounted at American Thanksgiving feasts and in the popular media around the holidays.
Scientific evidence does support a connection between tryptophan and sleep. L-tryptophan has been marketed as a dietary supplement to aid with sleep. Tryptophan also may have an effect on the immune system, with possible benefits for autoimmune disorders such as multiple sclerosis.
The truth is, turkey is not to blame for your sleepiness. Chicken and ground beef contain almost the same amount of tryptophan as turkey — about 350 milligrams per 4-ounce serving. While you might have heard someone claim that turkey made them drowsy, you have probably never heard anyone say that chicken, ground beef, or any other meat made them sleepy. Swiss cheese and pork actually contain more tryptophan per gram than turkey, and yet the American classic, the ham and cheese sandwich, somehow escapes blame.
The amount of tryptophan in a single 4-ounce serving of turkey (350 milligrams) is also lower than the amount typically used to induce sleep. Recommendations for tryptophan supplements to help you sleep run from 500 to 1,000 milligrams. Many scientists also think the limited amount of tryptophan in turkey would be offset by the fact that it is generally eaten in combination with other foods, not on an empty stomach. While one clinical trial found comparable results for tryptophan from a food protein source and pharmaceutical-grade tryptophan, that study used an extremely rich source of tryptophan, deoiled gourd seeds, which have twice the tryptophan content of turkey. In that trial, as in general use of supplements, tryptophan was taken on an empty stomach to aid absorption. Although we did not locate any experimental evidence to support this claim, many believe that the presence of other proteins and food in the stomach during the feasts generally associated with turkey consumption would limit the absorption of the turkey’s tryptophan.
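For a sense of scale, the dose comparison above can be tallied in a few lines. This is a hedged illustration using only the figures quoted in the post:

```python
# Tryptophan figures quoted above, in milligrams. The serving is the post's
# 4-ounce serving of turkey; the dose range is the supplement recommendation
# for inducing sleep.
turkey_per_serving = 350
dose_low, dose_high = 500, 1000

# It would take more than one serving of turkey just to reach the low end of
# the sleep-aid dose range, and nearly three servings to reach the high end.
servings_for_low_dose = dose_low / turkey_per_serving    # ~1.4 servings
servings_for_high_dose = dose_high / turkey_per_serving  # ~2.9 servings
```

And that is before accounting for the full stomach, which likely limits absorption further.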
There are other elements of holiday feasts that can induce drowsiness. Large meals have been shown to cause sleepiness regardless of what is eaten, because the body increases blood flow to the stomach and decreases blood flow and oxygenation to the brain. Meals high in either protein or carbohydrates may cause drowsiness. And don’t forget about the booze. One or two glasses of wine, especially for people who drink only occasionally, can increase drowsiness.
Have a happy Thanksgiving, everyone. Stop blaming the turkey for your sleepiness.
If you prefer your debunking in video form, enjoy this Healthcare Triage:
In the 1990s, Oregon’s Medicaid program began using a system in which 688 procedures were ranked according to their cost effectiveness, and only the first 568 were covered. Doing so freed up enough money to cover many more people who were previously uninsured.
But the plan hit a snag in 2008 when a woman with recurrent lung cancer was denied a drug that cost $4,000 a month because the proven benefits were not enough to warrant the costs. The national backlash to this illuminated our collective difficulty in discussing the fact that some treatments might not be worth the money. The Oregon health plan made things worse in this case, however, by offering to cover drugs for the woman’s physician-assisted suicide, if she wanted it. Even supporters of the plan found the optics of this decision difficult to accept.
[A cost-effectiveness] threshold need not be hard and fast across treatments. The clinical needs of particular subgroups, together with other ethical considerations—such as whether the treatment is for an underserved population or in an emerging, high-need area—might counsel for higher or lower thresholds in particular cases.
Through a process of community meetings, public opinion surveys on quality of life preferences, cost–benefit analyses and medical outcomes research, the commission then ranked these condition/treatment pairs according to their “net benefit.” These rankings were intended to reflect community priorities regarding different medical conditions and services, physicians’ opinions on the value of clinical procedures and objective data on the effectiveness of various treatment outcomes. The list itself was meant to create an objective and scientific vehicle for setting priorities for medical spending. The initial incarnation of the rankings was generated by a mathematical formula that integrated the data from clinicians, the public and outcomes research. Future reorderings and additions of services were to be incorporated into the list on the basis of that formula. The Oregon approach to rationing, which simultaneously drew on public preferences and cost–benefit analyses, thus represented an unusual marriage of health services research and deliberative democracy.
(More about Oregon’s approach and its evolution here.)
So, yes, the idea was to come up with a list and to draw a line, covering only more highly valued services “above the line” and not covering those “below the line.” This application of a “mathematical formula” that “integrated data” sounds very cold and bureaucratic. But the process included pathways for other criteria to influence coverage decisions too: public input that solicited community priorities and physicians’ opinions, for example.
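As a rough sketch of the ranking-and-line-drawing just described (the condition/treatment pairs and scores below are invented for illustration, not Oregon’s actual list):

```python
# Sketch of Oregon-style prioritization: rank condition/treatment pairs by a
# "net benefit" score and cover only those above a budget-determined line.
# All names and scores here are hypothetical.

pairs = [
    ("preventive screening", 9.0),
    ("appendectomy", 8.5),
    ("marginal drug X", 1.2),
    ("elective procedure Y", 3.4),
]

# Rank from highest to lowest net benefit.
ranked = sorted(pairs, key=lambda p: p[1], reverse=True)

# The budget determines where the line falls; here it covers the top two.
line = 2
covered = {name for name, _ in ranked[:line]}
not_covered = {name for name, _ in ranked[line:]}
```

Of course, as the post explains, Oregon’s actual line was “fuzzy”: some services were moved above it by hand for ethical or political reasons the formula could not capture.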
Guess what? Ultimately, every coverage decision in America works this way. And every coverage decision ends up in the same place: either something is covered or it is not. Every process by which an organization arrives at a coverage decision can be, in hindsight, harshly critiqued for arriving at the “wrong” one in this case or that. It always seems cold and bureaucratic in the end. Every process, even the warmest, most patient-centered, and least bureaucratic, has flaws and limitations. Mistakes, like the one Aaron wrote about, always arise.
Oberlander et al. wrote that, in fact, Oregon Medicaid ended up excluding very few services. It covered more under its new system than it did previously, and it saved very little (2%). Even though Oregon did draw a line, of sorts, it was a “fuzzy” one. Lots of things got covered that, by the formula, shouldn’t have. To avoid or resolve controversies and ethical issues, some services were moved over the line “by hand.” What started as objective and formula-driven ended up with a large, subjective component.
This is as it should be. Mature calls for more consideration of cost-effectiveness in coverage decisions are purposefully not calls for cost-effectiveness to be the only consideration. Those who make them understand the limitations of cost-effectiveness analysis. Apart from the obvious fact that the public would, with good reason, reject pure, data-driven coverage determinations, it’s clear that such a process cannot and does not accommodate fairness and other ethical considerations. These must, somehow, be added to the mix, and organizations like ICER and NICE and Oregon’s Health Services Commission do so.
Oregon’s experience, though different from what many may think, is still a cautionary tale. But it cautions against a trap that I think we’re unlikely to stumble into. Cost-effectiveness is absolutely worth bringing to bear on coverage decisions, but not to the exclusion of other criteria. Few think otherwise, or few enough not to matter much anyway. If any public or private payer in the US ever makes coverage decisions entirely on the basis of cost-effectiveness analysis, I’ll freely admit I was wrong.
On Monday, The Upshot ran the following piece (copyright 2015, The New York Times Company) as part of an interactive in which they republished many of my columns on food and nutrition. It’s awesome, and it’s gorgeous. Go check it out!
Thanksgiving is one of our favorite holidays in large part because of the big meal. Few celebrations center so completely on a feast. Given our collective concern over health and nutrition, it is inevitable that many people worry about how much they should eat in one sitting.
Of course, it’s not healthy to eat yourself sick — consuming too much, too fast, can lead to indigestion and other problems. People at high risk for heart disease, blood clots or diabetes shouldn’t throw out their doctor’s recommendations. But for most people, this isn’t the day to worry about food. As I have frequently written, one of the keys to healthful eating, and a good life, is everything in moderation — including moderation.
Tara Parker-Pope did the math a few years ago at the Well blog and found you’re probably eating around 1,000 calories at Thanksgiving. That’s not bad at all, as feasts go. Even with a big piece of pumpkin pie with whipped cream (400 calories) and two glasses of wine (250 calories), you’ll be hard-pressed to get to 2,000 calories. A moderately active adult man should consume, on average, 2,400 to 2,800 calories a day, and a woman about 2,000, so as long as you take it easy the rest of the day, there’s nothing offensively gluttonous about eating a big Thanksgiving meal.
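The arithmetic above can be tallied in a few lines, using only the figures quoted in the paragraph:

```python
# Back-of-the-envelope Thanksgiving calorie tally, per the figures above.
meal = 1000              # Parker-Pope's estimate for the Thanksgiving plate
pie = 400                # a big piece of pumpkin pie with whipped cream
wine = 250               # two glasses of wine
total = meal + pie + wine

daily_woman = 2000                            # average daily recommendation for a woman
daily_man_low, daily_man_high = 2400, 2800    # moderately active adult man

# Even the full feast (1,650 calories) stays under the lowest of these
# daily figures, leaving room for a light breakfast and snacks.
```

In other words, the feast itself is not the problem; the rest of the day (and year) is what matters.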
Enjoy the big dinner and enjoy a second helping of advice on eating and drinking that we have collected here.
In a 2009 Value in Health paper, Joseph Lipscomb and colleagues concisely summarized some common critiques of the use of quality-adjusted life years (QALYs) in economic and policy analysis. Below is an even more concise summary, informed by their paper.
QALYs are summations of terms involving four kinds of measures: (1) a health state (e.g., having a stroke or not, though probably more detailed than that), (2) the probability of being in that health state, (3) the value of that health state (e.g., how much better or worse it makes one’s life, in some precise sense), and (4) a discount factor (e.g., how much less you care about being in that state next year vs. this year).
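A minimal sketch of that summation follows. The health states, probabilities, and values here are hypothetical illustrations; none of these numbers come from the paper:

```python
# Sketch of a QALY calculation: a summation of terms, each the product of a
# probability of being in a health state, the value of that state, and a
# discount factor for how far in the future it occurs.

def qalys(states, discount_rate=0.03):
    """Sum probability * value * discount over (year, probability, value) tuples."""
    return sum(p * v / (1 + discount_rate) ** year for year, p, v in states)

# Hypothetical two-year projection: each year, a 90% chance of full health
# (value 1.0) and a 10% chance of a post-stroke state valued at 0.6;
# year-1 terms are discounted at 3%.
projection = [(0, 0.9, 1.0), (0, 0.1, 0.6),
              (1, 0.9, 1.0), (1, 0.1, 0.6)]
expected_qalys = qalys(projection)
```

Nearly every debate the authors survey maps onto one of these inputs: how the states are defined, where the values come from, and what the discount rate should be.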
Most debates about QALYs focus on defining health states, assessing their valuation, or how to weigh QALYs against other ethical or distributional concerns not included in their calculation. (I’m also aware debates have arisen over the discount rate.)
The authors raise these main concerns, among others:
1. Value of health states can be assessed in many ways. There are several common measurement systems for health-related quality of life. They provide “similar but not identical trends,” so “they will yield different QALY scores and thus possibly different conclusions about the cost-utility of interventions of interest.”
2. QALYs ignore or assume away some issues, like fairness and distributional concerns. If a QALY-based approach suggested that health interventions for men yield higher QALYs than for women, should we fund health care accordingly? Few would find that fair.
3. A related area of concern is motivated by the question: Whose preferences are embodied in QALYs, and whose should be? Should QALYs reflect the preferences of individuals experiencing various health states or those of individuals imagining them? Or should QALYs reflect the societal value of health?
4. In the US, QALYs aren’t that relevant to policy-making. Are they not relevant because of methodological issues (like those in #1, above) or cultural issues (which arise in #2)? A related concern is that QALYs might be biased by financial conflicts of interest or applied in a biased way.
The authors go on to address these concerns, suggesting ways to modify QALYs or adapt them into a process that is sensitive to some of their limitations. If nothing else, the paper serves as a handy reference to a lot of QALY and QALY limitations literature. This post is not intended to be anything close to a complete enumeration of those limitations.
I am fortunate to have medical practitioners as friends willing to provide some care (advice, really) by phone (when appropriate). I like it not because it explicitly saves anyone money. I like it because it saves me tons of time. If it’s good enough for me, why not others?
If there is something fundamentally different about telemedicine, it is that many of the costs it increases or decreases have been off the books. We almost never internalize the costs patients face when they travel to appointments and wait. We sometimes recognize the costs of building waiting rooms and the time it takes for clinicians to get through a single patient encounter. We feel most palpably the charges that are recorded in insurance claims. We are often blind to the costs that result from needed care that was too hard to access. […]
The innovation that telemedicine promises is not just doing the same thing remotely that used to be done face to face but awakening us to the many things that we thought required face-to-face contact but actually do not.
People with psychiatric disorders are excluded from medical research to an unknown degree with unknown effects. We examined the prevalence of reported psychiatric exclusion criteria using a sample of 400 highly-cited randomized trials (2002-2010) across 20 common chronic disorders (6 psychiatric and 14 other medical disorders). […] Non-psychiatric conditions with high rates of reported psychiatric exclusion criteria included low back pain (75%), osteoarthritis (57%), COPD (55%), and diabetes (55%). The most commonly reported type of psychiatric exclusion criteria were those related to substance use disorders (reported in 48% of trials reporting at least one psychiatric exclusion criteria). General psychiatric exclusions (e.g., “any serious psychiatric disorder”) were also prevalent (38% of trials). Psychiatric disorder trials were more likely than other medical disorder trials to report each specific type of psychiatric exclusion (p’s < .001). […] Clinical trials greatly influence state-of-the-art medical care, yet individuals with psychiatric disorders are often actively excluded from these trials. This pattern of exclusion represents an under-recognized and worrisome cause of health inequity. Further attention should be paid to how individuals with psychiatric disorders can be safely included in medical research to address this important clinical and social justice issue.
That’s from the abstract of a paper by Keith Humphreys, Janet Blodgett, and Laura Roberts. I have not read the paper, so perhaps it makes the following point. If patients with these conditions, including substance use disorder, cannot be included in some trials for justifiable reasons (which is plausible, but I have not thought it through), one approach to studying their treatment outcomes is to rely on observational studies (yes, they have their limitations).
But there’s a massive problem with that as well, and regular readers can guess what it is. Today, the key Medicare and Medicaid data researchers typically rely on for observational work are available only after being scrubbed of all substance use disorder-related records: both records with a principal diagnosis of such a disorder and those with a secondary diagnosis of one. That means even observational studies of low back pain, osteoarthritis, COPD, and diabetes using Medicare and Medicaid data cannot include the very patients who are also frequently excluded from randomized trials.
I agree with the authors that this, in total, is “a worrisome cause of health inequity.”
Not to dismiss the political salience of rising deductibles, but in his talk preceding the presentation of Hamilton Project papers in October, Jason Furman shared some important context. First, deductibles are rising, but their rate of growth has moderated slightly since 2010, as shown from two different data sources in Furman’s figure just below.
Still, growth is growth, and the figure shows deductibles going up. However, it is worth knowing how quickly total out-of-pocket costs, not just deductibles, are going up. Relative to total spending in employer-sponsored plans, out-of-pocket spending has actually trended down, as shown in another of Furman’s charts, just below.
(Source: Medical Expenditure Panel Survey)
This is consistent with data from the Health Care Cost Institute, but only for the most recent year of its data. In its chart, just below, the bars correspond to dollars (left-hand vertical axis) and the dashed lines to percent changes (right-hand vertical axis). Both payers (i.e., employers) and the insured (i.e., workers) are paying more in absolute terms each year, but growth in out-of-pocket spending moderated, falling below payer spending growth in 2014.
In 2014, for 55-64 year olds, growth in out-of-pocket spending halted, as shown in the next chart.
Some of the Twitter discussion centered on details of the specific data sources used in the charts above, in an attempt to explain differences. For instance, the Medical Expenditure Panel Survey relies on self-reported spending, the HCCI sample doesn’t capture some insurers, and so forth. Nevertheless, I see the findings above as largely pointing in the same direction in one respect: growth in out-of-pocket spending, deductibles included, has slowed, and has come down relative to total spending at least in 2014, if not over a longer period.
If deductibles are rising but out-of-pocket costs are falling (at least in relative terms), are people bearing more or less risk? There’s likely heterogeneity. It’s possible some are bearing more, others less. How well can we specify and quantify this?
If people hate deductibles so much, perhaps they’ll start coming down. But, in exchange, what if out-of-pocket limits go up? Is that a good trade-off?
To what extent are employers making workers pay more without offsetting changes in wages? Again, there’s likely heterogeneity in this, as well as short- and long-term changes.
Austin and Aaron are participants in the Amazon Services LLC Associates Program, an affiliate advertising program designed to provide a means for sites to earn advertising fees by advertising and linking to amazon.com.