I have a new post up on the JAMA Forum. What’s going on here?
In one experiment, family practitioners were presented with 1 of 2 scenarios involving a man with chronic hip pain. In one scenario, respondents were asked to select between referring the man to surgery only or to do so in combination with initiating ibuprofen. In the other scenario, a third option was added: surgical referral and initiation of piroxicam. When this third option was added, more respondents elected surgical referral only, relative to the 2-option scenario (72% elected surgical referral vs 53% elected surgical referral plus ibuprofen). Somehow, offering the option of another drug decreased the appeal of prescribing any drug.
Also, here’s a bit of bonus material that I cut for length:
Consumers more readily buy items—including big ticket items like cars—when they’re labeled “on sale,” even if the price hasn’t changed. After the Texas electricity market was deregulated in 2002, the incumbent provider retained a dominant market share, even though customers could have saved nearly $150 per year by switching. Other work shows that the order and manner in which options are presented to consumers affects their choices.
Thorough searching and comparing isn’t easy. One study estimated that, on average, consumers would have to save $200 for the effort of searching for and switching to a new auto insurance policy to be worth it. Another posits that consumers are rational in their lack of thorough consideration of durable goods’ energy efficiency (like that of cars and large home appliances), because doing so takes considerable time and effort.
Of course, it would not be rational (I assert) for you not to invest the time and effort to read the post.
The following originally appeared on The Upshot (copyright 2016, The New York Times Company). It also appeared on page A3 of the print edition on January 5, 2016. I thank Jennifer Gilbert for provision of research assistance for this post.
After one of her operations, my sister-in-law left the hospital so quickly that she couldn’t eat for days; after other stays, she wasn’t discharged until she felt physically and mentally prepared. Five days after his triple heart bypass surgery, my stepfather felt well enough to go home, but the hospital didn’t discharge him for several more days.
You undoubtedly have similar stories. Patients are often left wondering whether they have been discharged from the hospital too soon or too late. They also wonder what criteria doctors use to assess whether a patient is ready to leave.
“It’s complicated and depends on more than clinical factors,” said Dr. Ashish Jha, a Harvard physician who sees patients at a Boston Veterans Affairs hospital. “Sometimes doctors overestimate how much support is available at home and discharge a patient too soon; sometimes we underestimate and discharge too late.”
Changing economic incentives — which are not always evident in individual cases — have also played a role in how long patients tend to stay. Recent changes to how hospitals are paid appear to be affecting which patients are admitted and how frequently they are readmitted.
What is clear is that hospital stays used to be a lot longer. In 1980, the average in the United States was 7.3 days. Today it’s closer to 4.5. The difference isn’t because hospitalized patients are becoming younger and healthier; by and large, today’s patients are older and sicker. Yet they’re being discharged earlier.
One big reason for the change came in the early 1980s. Medicare stopped paying hospitals whatever they claimed their costs were and phased in a payment system that paid them a predetermined rate tied to each patient’s diagnosis. This “prospective payment system,” as it is called, shifted the financial risk of patients’ hospitalization from Medicare to the hospital, encouraging the institutions to economize.
One way to economize is to get patients out of the hospital sooner. The prospective payment system pays a hospital the same amount whether a Medicare patient stays five days or four. But that extra day adds costs that hit the hospital’s bottom line.
So it’s in a hospital’s financial interest to encourage doctors to discharge patients sooner. A physician who practices at a Boston-area teaching hospital told me that hospital administrators exert social pressure on doctors by informing them that their patients’ stays are longer than those of their peers’ patients. It’s now easier for doctors to discharge patients sooner to a skilled nursing facility — where they’ll be monitored and professionally cared for — because so many more of them have been built in recent years.
Almost since the prospective payment system started, experts have raised concerns that it would lead to higher rates of readmissions. After all, patients discharged more quickly may be sicker, more prone to complications, or in need of a level of care that’s harder to provide outside the hospital. It seems logical, therefore, that more of them would need to return to the hospital. Evidence backs this logic. In the United States and other nations, when lengths of stay decline, readmissions rise.
Until recently, hospitals did not suffer financially when a patient was readmitted, so long as it was more than 24 hours after discharge. Indeed, readmission represented only additional revenue. If reducing lengths of stay increased readmissions while decreasing costs of each stay, hospitals benefited financially on both ends of the equation.
But Medicare and private insurance companies picking up the tab lose money when a patient is readmitted. In some cases, a longer initial hospital stay that avoids a readmission is worth the additional upfront investment.
The federal government has created several new programs that penalize hospitals for readmissions. Under Medicare’s Hospital Readmissions Reduction Program, hospitals now lose up to 3 percent of their total Medicare payments for high rates of patients readmitted within 30 days of discharge. This fiscal year — the fourth one of the program — Medicare will collect $420 million from 2,592 hospitals that had readmission rates higher than deemed appropriate.
Since 2010, when almost one in five Medicare hospital patients returned within 30 days, hospital readmissions have fallen considerably. Though this fact was highlighted by the Obama administration, some people are seeing evidence that hospitals are gaming the metric. For instance, patients who are placed under “observation status” are not counted in the readmissions metric even though they may receive the same care as patients formally admitted to the hospital. Likewise, patients treated in the emergency room and not admitted to the hospital do not affect the readmissions metric either. As readmissions have fallen, observation status stays and returns to the emergency department after a discharge have risen.
“When asked by hospital administrators to keep patients in observation status, many physicians comply,” Dr. Jha told me. “Some hospitals’ electronic medical systems will alert emergency physicians when a patient has been recently discharged, and they’re encouraged to keep them in the emergency department and not readmit them.”
The influence of hospital financing is hardly perceptible to an individual patient. But the record is clear: Financing matters, and it affects both hospital admission and discharge decisions.
This past fall, I participated in a series of discussions hosted by the American Journal of Managed Care about health reform and the changing health insurance and delivery landscape. The video below is one exchange from the series, focused on the medical loss ratio regulation as well as price and quality transparency and the ability of patients to use such information.
I was joined by
I’ll post other videos from the discussion series, but if you can’t wait, you’ll find a couple more here.
The following originally appeared on The Upshot (copyright 2015, The New York Times Company). It began with this flattering note from the editors: “We have always been impressed by how much Austin Frakt gets done. In September last year, he wrote a memorable article for his blog, The Incidental Economist, about how he does it. (We highly recommend it.) So we asked him for advice on sticking to a resolution. As we expected, he came up with a method that others may find useful.”
Do you have trouble sticking to a New Year’s resolution? You should do whatever works for you, but in case it’s helpful, I think I have found a way to increase the chances you reach a goal.
Contemplating a resolution, I start with two questions: “Why don’t I do this already?” and “Why do I feel the need to do this now?”
The first question is practical; it seeks the barrier. The second is emotional; it seeks the motivation necessary to sustain an effort to remove the barrier. I might as well not initiate a resolution unless I can target the right obstacle and have sufficient desire to overcome it. Without those, the resolution is doomed from the start.
Last summer, I felt scattered and unable to focus. I wasn’t working as efficiently as before, whether writing a research article or an Upshot article. Feeling less productive made me unhappy. That was my emotional motivation to change, but what was the barrier?
Answers like “It’s the nature of the information age” or “I’m over 40” would not do. Those excuses don’t provide a modifiable contributor to my loss of focus, so they’re the wrong targets. The right answer was that I felt like I was bouncing from task to task all day — because I was. My work days had become cycles of: type a few sentences, check email, check Twitter, check the news, repeat. This process was itself interrupted by sporadic meetings and phone calls.
I couldn’t focus because I had spent years training my mind not to do so.
Because this was a problem of my own making, I could change it. I devised a schedule with several hourslong blocks per day for uninterrupted work. I dedicated other periods of time for meetings and phone calls. The plan allowed for checking email, Twitter and news only a few times per day (morning, noon, evening). Phone and desktop alerts were to be turned off. Each morning would begin with about 45 minutes of blog post writing, precisely the time of day when my brain is best suited for it. No longer would I squeeze writing around other things — five minutes here, 10 there.
Plan in hand, next came the test of whether it was sound. For one month, last August, I fully dedicated myself to the schedule. Apart from meetings or phone calls I was not at liberty to reschedule, I did not cheat. Testing a change with a time-limited commitment is a trick I’ve used before, including curing myself of insomnia.
I’m not the only one. Over coffee at a Boston cafe, a medical student, Karan Chhabra, related a similar approach. Inspired by a suggestion in The Huffington Post, he and some friends resolved to make specific, personal changes like meditating, flossing regularly and not complaining. Each resolution was for a month at a time. For accountability, they entered their resolutions into a shared Google document. Not every one stuck, but Karan credits his routine flossing to this effort.
This approach has two benefits. First, fully committing to a change is the only way to know if it is a helpful one. If Karan or I made only a partial effort and failed, we wouldn’t know if that was because it was a bad idea or that we just didn’t give it a solid try. Second, a monthlong commitment provides a concrete time for assessment. When we attempt a change, neither Karan nor I presuppose we’ve got the right approach. We’ll know better at the end of the month. (You may find a month is not long enough for you. Change it. The idea is to specify a period of full commitment, as a test.)
The test of my new schedule was successful. A month into it, my productivity increased, and I felt more focused. Just as with my insomnia cure, months later I retain the habits I started to develop that first month of commitment. (It doesn’t always work out this way. After a monthlong test, I abandoned running down and up the stairs at work every hour. It was a strategy to move more, but I found it too disruptive.)
This January I’ll test another resolution — to improve my memory. Asking colleagues to remind me what we discussed last week is embarrassing and wastes time. This is my motivation to change. I suspect a reason my memory isn’t as good as it used to be is that I’ve left no time in my day to reflect on and review past events or decisions. Life has become a constant blur of information and commitments. That’s my obstacle to overcome.
My plan: In January, I will dedicate a portion of my commute to reflection, letting my mind mull over whatever seems important. Will this resolution work? After a monthlong, committed test, I’ll know.
Perhaps you’ve seen the trailer for Superhuman, the forthcoming Fox show that will “test the abilities of 12 ordinary people to use their extraordinary skills,” including “nearly super-human” memory. The trailer shows a man adding ten two-digit numbers, each of which he saw for only 100 milliseconds. Another contestant can remember a long sequence of chess pieces. Another can remember the names of 200 people he met briefly. And another can remember 15 phone numbers of people he just met.
Does it take a different kind of memory to do these things? Or are they the work of ordinary minds after extraordinary practice? K. Anders Ericsson and William Chase say it’s the latter.
In short-term memory, most people can store, in order, about seven pieces of information, like the ordered digits in a phone number. It takes effort to transfer them to long-term memory — so much so that few think they could ever memorize more than a single phone number in the time it takes Superhuman contestants to do vastly more.
A simple example shows this is a false assumption. Shown the state of a mid-game chess board for five or ten seconds, a chess novice can recall the position of only a handful of pieces — those she can stuff into short-term memory. A chess master, on the other hand, can recall the position of nearly all the pieces. But if the pieces are arranged randomly — i.e., in a manner that could not (or is very unlikely to) arise in an actual game — a master’s recall is no better than a novice’s.
The difference between the novice and the master is that the latter has studied the game far longer. In doing so, he has infused spatial relationships among pieces with meaning that allows him to encode and store them in memory more efficiently. As the random-configuration results show, a master’s memory isn’t actually better. It’s the meaningful, higher-level abstraction with which he processes actual game configurations that gives his quite ordinary memory a boost. He’s not remembering more. He’s remembering differently, not piece by piece but in larger structures.
In one-hour sessions, several times per week for nearly two years, Ericsson and Chase showed an undergraduate, called “SF”, sequences of random digits at a rate of one per second. At first, he could remember no more than the rest of us: seven digits. By the end of the experiment, he could recall 80, and without any instruction on how to boost his memory.
The approach SF developed on his own, over time, was to encode sequences of digits at a higher level of abstraction. One of his techniques was to associate them with running times. He was a runner, and typical times of various-length races were meaningful to him. To SF, a sequence of eight random digits would turn into two memorable racing times. When a given sequence of numbers was not amenable to his encoding techniques (e.g., because it could not be parsed as racing times), his performance dropped back to beginner level, just like the chess masters looking at random arrangements of chess pieces.
You can probably relate to the fact that study subjects can recall only about seven random words but can recall twice that many when the words are organized into a meaningful sentence. Well, duh! To those who have spent a lifetime “practicing” making sense of and remembering sequences of words that mean something — that is, all of us — words that have meaning when strung together are more memorable. Just like the chess masters with chess positions and SF with numbers that resembled running times, when a list of words has meaning, it’s more easily transferred to long-term memory.
In all instances, that which is memorized has been compressed. The chess master sees higher-level structure among the individual pieces. SF turns multiple digits into single race times. We turn sequences of words into sentences that mean something — a single thing. The meaning-adding compression seems to facilitate short- to long-term memory conversion.
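Purely as an illustration (this code and the sample digits are mine, not from the study), the compression idea is easy to sketch: grouping raw digits into larger meaningful units, the way SF read four digits as one race time, cuts the number of items that must be held in memory.

```python
# Illustrative sketch of SF-style chunking (my example, not from the
# study): read four digits at a time as a single "race time."

def chunk_as_race_times(digits: str) -> list[str]:
    """Group a digit string into 4-digit chunks, read as m:ss.s times."""
    chunks = [digits[i:i + 4] for i in range(0, len(digits), 4)]
    return [f"{c[0]}:{c[1:3]}.{c[3]}" for c in chunks if len(c) == 4]

raw = "34921047"                    # eight separate items to remember
times = chunk_as_race_times(raw)    # two items instead of eight
print(times)
```

Eight digits strain most people’s short-term span; two familiar race times do not. The load shrinks because each chunk carries meaning.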
Whatever you see on Superhuman, don’t be fooled. Almost nobody has innate, superhuman memory. We all remember roughly the same amount. Some people have just practiced special techniques to rapidly encode into meaningful stories/images/structures/whatever what to us is a bunch of random, unmemorable stuff. The techniques and what they encode are matched. The chess master can remember configurations of chess pieces well, but not random words. SF developed techniques for random digits, but that’s not applicable to chess configurations.
The Superhuman contestants have done an extraordinary amount of work to make the most of their ordinary memories, but just for specific tasks. What’s inspiring, perhaps, is that, with practice, you and I could enhance our memories too. My question is, can we do so in a generalized way, improving our memories for anything, not just one specific kind of thing?
The following originally appeared on The Upshot (copyright 2015, The New York Times Company).
Most people would agree that it would be better to prevent cancer, if we could, than to treat it once it developed. Yet economic incentives encourage researchers to focus on treatment rather than prevention.
The way the patent system interacts with the Food and Drug Administration’s drug approval process skews what kinds of cancer clinical trials are run. There’s more money to be made investing in drugs that will extend cancer patients’ lives by a few months than in drugs that would prevent cancer in the first place.
That’s one of the findings from the work of Heidi Williams, an M.I.T. economics professor and recent MacArthur Foundation “genius” grant winner, who studied the problem along with Eric Budish, a University of Chicago economics professor, and Ben Roin, assistant professor of technological innovation, entrepreneurship and strategic management at M.I.T.
“R & D on cancer prevention and treatment of early-stage cancer is very socially valuable,” the authors told me in an email, “yet our work shows that society provides private firms — perhaps inadvertently — with surprisingly few incentives to conduct this kind of research.”
To secure F.D.A. approval, after patenting a drug, drug companies race the clock to show that their product is safe and effective. The more quickly they can complete those studies, the longer they have until the patent runs out, which is the period of time during which profit margins are highest. Developing drugs to treat late-stage disease is usually much faster than developing drugs to treat early-stage disease or to prevent disease in the first place, because late-stage disease is aggressive and progresses rapidly. This allows companies to see results in clinical trials more quickly, even if those results are only small improvements in survival.
This very lesson is taught in some medicinal chemistry textbooks. For instance, one notes that “some compounds are never developed [into drugs] because the patent protected production time available to recoup the cost of development is too short.”
The clinical trials necessary for the F.D.A. to approve drugs for commercialization take years. Though a patent lasts 20 years (before any extensions), a typical drug comes to market with about 12.5 years of patent life remaining. But would-be innovators have some control over the length of time between receipt of a patent and F.D.A. approval — the “commercialization lag.” By studying patients in whom safety and efficacy can be demonstrated more quickly, innovators can reduce this lag. (Recent studies suggest that commercialization lag times may be decreasing for some types of drugs.)
Many more cancer trials focused on treatments for patients with late-stage cancers than on those for patients with early-stage cancers, according to the study. Between 1973 and 2011, there were about 12,000 trials for relatively later-stage patients with a 90 percent chance of dying in five years, but only about 6,000 focused on earlier-stage patients with a 30 percent chance of dying. And there were over 17,000 trials of patients with the lowest chance of survival (those with recurrent cancers) but only 500 for cancer prevention, which would confer the longest survival gains. The bias toward studies of patients with shorter survival is more prevalent among privately financed trials than among publicly financed ones.
Ms. Williams’s study estimated that the commercialization lag’s incentive to invest in drugs with shorter-duration benefits led to 890,000 lost life-years among American patients found to have cancer in 2003 alone.
There are several possible ways to address the commercialization lag. One idea, included in legislation working its way through Congress, is to more routinely confer F.D.A. approval based on indications of improved health that can be measured more quickly than survival — so-called surrogate endpoints, like cancerous white blood cell counts and bone marrow characteristics in leukemia studies. These measures are highly correlated with survival, so they are a reliable way to speed up leukemia drug trials.
According to the study’s analysis, this approach can work. For cancer drugs approved based on some types of validated surrogate endpoints, the researchers found no difference in the number of clinical trials by survival rate. This suggests that surrogate endpoints can undo the bias that arises from the commercialization lag. To date, the only privately financed drugs to prevent cancer — the survival benefits of which would not be apparent for many years — have been F.D.A.-approved based on surrogate endpoints.
Use of surrogate endpoints with no known or strong relationship to survival is controversial. For example, the prostate-specific antigen test level — assessed with a blood test — is correlated with the amount of cancer in the prostate but has limited value in predicting prostate cancer survival. So, though they may be lucrative to drug companies, one would have little confidence that drugs approved based on P.S.A. test results would confer survival benefits. A recent systematic review found that most surrogate endpoints examined in cancer drug trials are weakly related to survival. Though most cancer drugs in recent years have been approved on the basis of surrogate endpoints, a majority of them have unknown or no beneficial survival effects.
Another approach is to extend the period of a drug’s market exclusivity to compensate for the commercialization lag. The Hatch-Waxman Act of 1984 already permits a partial extension — a half year for every year in clinical trial, up to a maximum of five additional years. Ms. Williams’s analysis suggests this is the right idea, but that there are still many potential drugs that receive only very short periods of market exclusivity. The Affordable Care Act includes a provision that grants 12 years of market exclusivity beginning from F.D.A. approval — a half year less than the typical 12.5 years remaining on a patent — but it applies only to biologic drugs.
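As a rough sketch of the arithmetic described above (the figures and the function are my own illustration; actual patent-term restoration rules have additional conditions), here is how the Hatch-Waxman extension of half a year per year in clinical trials, capped at five years, interacts with the commercialization lag:

```python
# Rough sketch of the patent-life arithmetic in the text (illustrative
# only; real term-restoration rules have more conditions).

PATENT_TERM_YEARS = 20.0

def remaining_exclusivity(lag_years: float, trial_years: float) -> float:
    """Patent life left at F.D.A. approval, after the Hatch-Waxman
    extension: half a year per year in clinical trials, up to 5 years."""
    extension = min(0.5 * trial_years, 5.0)
    return PATENT_TERM_YEARS - lag_years + extension

# A 7.5-year commercialization lag leaves the typical 12.5 years;
# if 6 of those years were spent in trials, restoration adds 3 more.
print(remaining_exclusivity(7.5, 6.0))   # 15.5

# A very long lag is only partly offset, since the extension caps at 5.
print(remaining_exclusivity(15.0, 12.0)) # 10.0
```

The cap is why, as Ms. Williams’s analysis suggests, some potential drugs would still face very short periods of market exclusivity.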
Drug patents incentivize innovation, and F.D.A. approval is a check regarding drug safety and efficacy. The way they work together affects the incentives for research and could reduce something many would view as highly valuable: cancer prevention.
TIE Note: Vinay Prasad wrote a rebuttal to the paper by Ms. Williams I discussed above.
The conventional view is that we develop skills up to our innate limits. K. Anders Ericsson makes a strong claim that this is wrong. Instead, our skills plateau because we lose attentiveness to how we perform tasks.
Initially, when beginning to learn something — chess, typing, tennis, driving, …, anything — we work hard to reach a minimum acceptable level of proficiency. It’s the one at which we won’t badly embarrass or hurt ourselves or others. Typically, after 50 hours or so of practice, according to Ericsson, the frequency of very bad mistakes falls to a very low level, even without sustained attention. That’s when we begin to switch off that attention and go on auto-pilot. It’s the transition to mindless automation that signals the beginning of the end of our ability to improve. By tuning out, we’re no longer aware of how to get better.
We compound our error by assuming we don’t improve because we’ve hit some innate barrier to doing so. We’re not smart enough or fit enough, we tell ourselves. But efforts to identify innate, binding barriers have largely failed. Though there certainly are physical and mental limits to performance, those are not what stops us from improving. Instead, we usually stop ourselves.
What’s needed to improve is to retain a focus on doing so. This is the difference between an amateur attitude and a truly professional one.
But it’s not mere will. Wanting to improve is not enough. We have to deliberately apply the right techniques. It helps to define concrete goals, and focus on “well-defined tasks,” wrote Ericsson. It’s necessary to obtain performance feedback. We cannot improve without knowing if we’re heading in the right direction. Best of all, we need not devote every waking hour to getting better. Slow and steady wins the race. Regular practice, up to one hour per session or per day, is often enough.
A key element to getting better is to put yourself in the same position you were in as a novice. When starting any new activity, you made mistakes with regularity, you noticed them, and you worked out ways to stop making them. That’s the same way to improve, even after achieving higher levels of proficiency. You have to push yourself to the point of making mistakes, have a means of identifying those mistakes, and then work to find ways to not make them.
Extensive research on typing provides some of the best insights into how speed of performance can be increased through deliberate practice that refines the representations mediating anticipation. The key finding is that individuals can systematically increase their typing speed by exerting themselves as long as they can maintain full concentration, which is typically only 15–30 minutes per day for untrained typists. While straining themselves to type at a faster rate—typically around 10–20% faster than their normal speed—typists seem to strive to anticipate better, possibly by extending their gaze further ahead. The faster tempo also serves to uncover keystroke combinations in which the experts are comparatively slow and less efficient. By successively eliminating weaknesses, typists can increase their average speed and practice at a rate that is still 10–20% faster than the new average typing speed.
What works in typing works in many other domains: spend regular time getting out of the comfortable, automatic zone and pushing performance into the error-making zone. Then, eliminate the errors. Easier said than done, of course. Coaches and teachers can help.
The practice of medicine poses some challenges to this paradigm, however. Ericsson notes two difficulties:
Finding ways to improve medical performance is likely worth a great deal more than finding ways to improve athletic performance. The latter is clearly simpler, and we routinely witness records fall. Yet it’s enormously uplifting to recognize that today’s top runners and swimmers, say, are not physiologically different from those of decades ago. They’re just better at getting better.
In many things, we all could be so. We accept performance plateaus too readily. They are not as inevitable and necessary as they seem.
One of the small, but important, decisions I have to make when writing Upshot posts is how to reference an academic paper. What do I call this thing to which I’m linking?
Here’s the deal: People really like to see their work in The New York Times. I get it. I like it too! It’s a sign that the work is important. The organizations with which one is affiliated care about that. It’s good publicity, especially if they’re named.
Readers want to know what kind of person the researcher is: Is he a law professor? Is she an economist? Who are you talking about, Austin?
I want everyone to be happy (why not?), but getting all of that information for all authors — names, affiliations, titles, etc. — into a piece is not so simple. Imagine:
A study by Marie Curie, a physicist and chemist at the University of Paris; Albert Einstein, a physicist at the Institute for Advanced Study in Princeton, N.J.; Paul Erdős, an itinerant mathematician; Jonas Salk, a physician at the University of Pittsburgh; Adam Smith, an economist at Glasgow University; Isaac Newton, a physicist at Cambridge University; and Alan Turing, a mathematician at Manchester University found that …
Even if that very long handle for a paper works upon first introduction — and I don’t think it would — it will absolutely not work for subsequent references.
I need short handles. “Curie et al.” — prominently featuring the first author — would be the typical academic citation. (Side note: I see that the Journal of Health Politics, Policy and Law now uses full names: “Marie Curie”.) Consequently, I often write things like, “University of Paris physicist and chemist Marie Curie and colleagues” for the first reference and “Ms. Curie’s study” for subsequent ones. (Over at JAMA Forum, they require titles to be listed, so you see things like “Marie Curie, M.A.”*)
This throws a lot of recognition to Marie Curie and the University of Paris but to no other individuals or institutions. Is that fair?
No! Things are never fair. However, if Ms. Curie is first author because it is her study, on which she did most of the work, and those other guys helped, but not much, then it’s arguably as close to fair as I can make it, given the constraints.
But, what if the order of authors does not reflect whose study it is or how much work they did? In this case, for instance, they’re ordered alphabetically. It’s possible contributions were about equal across collaborators.
This is the situation for the paper I highlighted in my Upshot post that appears today. My original draft referenced it as a study by “University of Chicago economics professor Eric Budish and colleagues.” In a phone conversation with the authors, I learned that he’s first author only due to his good fortune of working with colleagues with names further along in the alphabet. In the revision, all the authors get a mention. However, it turned into “Ms. Williams’s study” because I thought that readers would like to know that she won a MacArthur Foundation “genius” grant. The authors felt that was fine. (I’d not have done it if they hadn’t said so.)
All of this came to mind when I saw Justin Wolfers’ post about coverage of the recent Case-Deaton paper, for which Anne Case is the lead author.
Slate’s David Plotz described the research as having been written by “Nobel Prize-winning economist Angus Deaton and Anne Case, who is his wife, and also a researcher.” Likewise, Ross Douthat, writing in The New York Times Sunday Review, described this as research by “Nobel Laureate Angus Deaton and his wife, Anne Case.”
Did I fall into the same trap? I highlighted the award-winning author, so I’m guilty, maybe (?). I didn’t highlight the apparent lead author (for good reason, though). I didn’t bias my highlighting toward a man over a woman, so good for me (?).
Apart from judging my own performance, this just further illustrates how tricky a thing short-handle citation can be. It’s never fair!
Dan Diamond had a very good take on this matter.
I reached out to both Anne Case and Angus Deaton. (Starting with a note that began “Professors Case and Deaton – Good morning!”) I even remember deliberately typing out their email addresses in that order, to make sure I was reflecting authorship. [Side note: I also take care to list email addresses in status or ownership order, when I can. Either Dan and I are both weird in this way or we’re both geniuses.]
But only Deaton responded. And because only his comments found their way into my article, I focused more on Deaton’s role as a result.
This is a useful consideration. Very often one individual (other than me) provides a lot more input into my Upshot pieces than any others — either tipping me off to research papers or providing expertise as a reviewer of a draft. Thanks to Dan, when that happens, I now make sure to cite them by name (if they’ve done some relevant work I can use), even include a quote if it makes sense (and they wish to participate in that way). I worry less about naming other, less instrumental authors in work I cite, especially that which I cite in passing (that isn’t the centerpiece of a post).
I’d like to throw credit to everyone and every organization, but it’s just not feasible. Faced with constraints, I should at least mention those who have been most helpful and responsive. There’s not always a right way to do this, even if there are some clear wrong ones.
* Searching, I could only find reference to Marie Curie’s master’s degree in physics. If she holds other degrees, someone point me to a reference please!