• Paying people to quit smoking

    It’s one of those things that I think we consider too rarely: paying patients to be healthier. After all, we have no problem penalizing them for being unhealthy (i.e., wellness programs). Penalizing is totally accepted, while paying is often considered ridiculous. But here’s the NEJM with a study to change your mind. “Randomized Trial of Four Financial-Incentive Programs for Smoking Cessation”:

    BACKGROUND: Financial incentives promote many health behaviors, but effective ways to deliver health incentives remain uncertain.

    METHODS: We randomly assigned CVS Caremark employees and their relatives and friends to one of four incentive programs or to usual care for smoking cessation. Two of the incentive programs targeted individuals, and two targeted groups of six participants. One of the individual-oriented programs and one of the group-oriented programs entailed rewards of approximately $800 for smoking cessation; the others entailed refundable deposits of $150 plus $650 in reward payments for successful participants. Usual care included informational resources and free smoking-cessation aids.

    So here’s the deal. Researchers randomly assigned employees, along with their relatives and friends, to one of four programs to help them quit, or to “usual care”. The randomization was stratified over two variables: whether participants had full health care benefits through the employer and whether their annual household income was at least $60,000. This was to keep recruitment balanced across those groups.

    Two of the programs involved an individual incentive. The first was a straight payment system, with participants getting $200 at 14 days, 30 days, and 6 months, plus a potential $200 bonus at the end of their enrollment if they were still not smoking. So they could potentially get $800 total. Laboratory testing confirmed whether they were smoke free. The second individual program was the same, but required participants to pony up a refundable $150 deposit at the start of the trial. They’d get that back if they didn’t smoke.

    The other two groups were collective. The first was collaborative. Participants were enrolled in groups of six. At each time point, they all received $100 for each member who was still smoke free. In this way, they could earn up to $600 per check, with the $200 bonus still available. Thus, there was potentially $2,000 available in total, depending on how many in the group stuck with it. This arm was designed to test whether incentivizing people to work together might help.

    The last group was competitive, and also involved deposits. Everyone had to pony up $150. People were paid more if others failed: participants could receive between $200 and $1,200 at each time point, with the $200 bonus at the end, for a potential total of $3,800. Again, though, they’d get more money if fewer people in their group quit. Group members were kept anonymous to one another, though, so people couldn’t sabotage each other.
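
    If you want to keep the four payment schedules straight, here’s a quick back-of-the-envelope sketch of the maximum payouts. It assumes three checkpoints (14 days, 30 days, and 6 months) plus the final bonus, as described above; the arithmetic is mine, not the paper’s.

        # Back-of-the-envelope maximum payouts for the four incentive arms,
        # assuming three checkpoints (14 days, 30 days, 6 months) plus a
        # final $200 bonus, per the descriptions above.
        CHECKPOINTS = 3
        BONUS = 200

        # Individual rewards: $200 per checkpoint, plus the bonus.
        individual_rewards = 200 * CHECKPOINTS + BONUS          # $800

        # Individual deposits: same $800 total, but $150 of it is the
        # participant's own refunded deposit ($150 deposit + $650 rewards).
        individual_deposits = 150 + 650                         # $800

        # Collaborative rewards: $100 per smoke-free member of a 6-person
        # group at each checkpoint, plus the bonus.
        collaborative_rewards = 100 * 6 * CHECKPOINTS + BONUS   # $2,000

        # Competitive deposits: between $200 (all six quit) and $1,200
        # (sole quitter) per checkpoint, plus the bonus.
        competitive_max = 1200 * CHECKPOINTS + BONUS            # $3,800

        print(individual_rewards, individual_deposits,
              collaborative_rewards, competitive_max)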

    They got more than 1,000 people to participate. Overall, people liked the reward-based programs much more than the deposit-based ones: about 90% agreed to participate in a rewards program, versus only about 14% for the deposit programs. In other words, they didn’t like the idea of risking their own money. But what we really care about is the quit rates. In an intention-to-treat analysis, the quit rates were significantly higher with all of the incentive programs than with usual care, which had a quit rate of 6%.

    At 6 months, the individual deposit program had a quit rate of 9.4%, and the competitive deposit program had a quit rate of 11.1%. The individual rewards program had a quit rate of 15.4%, and the collaborative rewards program had a quit rate of 16%. All much better than the 6% in usual care.

    The sad news is that almost all of these pretty much halved at 12 months, but still – the programs were generally better than usual care.

    And let’s not forget, more quitting is better. So how much did it cost for each 6-month quit?  It was $122 in usual care, $1,058 in individual rewards, $1,193 in collaborative rewards, $542 in individual deposits, and $858 in competitive deposits. Is that worth it? Might be. We pay a lot more for things that do us a lot less good than quitting smoking would.
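
    Another way to slice those numbers: multiply each program’s cost per quit by its quit rate to get an expected cost per enrollee, then compute the extra cost per extra quit relative to usual care. This is my own back-of-the-envelope arithmetic on the figures quoted above, nothing more:

        # Rough incremental cost per additional 6-month quit vs. usual care,
        # computed from the cost-per-quit and quit-rate figures quoted above.
        programs = {
            "usual care":            (122,  0.060),
            "individual rewards":    (1058, 0.154),
            "collaborative rewards": (1193, 0.160),
            "individual deposits":   (542,  0.094),
            "competitive deposits":  (858,  0.111),
        }

        uc_cost_per_quit, uc_rate = programs["usual care"]
        uc_per_enrollee = uc_cost_per_quit * uc_rate  # expected spend per person

        for name, (cost_per_quit, rate) in programs.items():
            if name == "usual care":
                continue
            per_enrollee = cost_per_quit * rate
            extra = (per_enrollee - uc_per_enrollee) / (rate - uc_rate)
            print(f"{name}: ~${extra:,.0f} per additional quit")

    By that rough measure, the deposit programs look like the better buy per quit, though they were also the ones most people refused to join.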

    @aaronecarroll

     

  • Obamacare’s big gamble on hospital productivity

    The following originally appeared on The Upshot (copyright 2015, The New York Times Company).

    Can hospitals provide better care for less money? The assumption that they can is baked into the Affordable Care Act.

    Historically, hospital productivity has grown much more slowly than the overall economy, if at all. That’s true of health care in general. Productivity — in this case the provision of care per dollar and the improvements in health to which it leads — has never grown as quickly as would be required for hospitals to keep pace with scheduled cuts to reimbursements from Medicare.

    But to finance coverage expansion, the Affordable Care Act made a big bet that hospitals could provide better care for less money from Medicare. Hospitals that cannot become more productive quickly enough will be forced to cut back. If the past is any guide, they may do so in ways that harm patients.

    The Obamacare gamble that hospitals can become much more productive conflicts with a famous theory of why health care costs rise. William Baumol, a New York University economist, called it the “cost disease.” (He wrote a book about it by that title; I blogged on it as I read it if you’d like to quickly get the gist.)

    This theory asserts that productivity growth in health care is inherently low for the same reason it is in education: Productivity-enhancing technologies cannot easily replace human doctors or teachers. In contrast with, say, manufacturing — a sector in which machines have rapidly taken over functions that workers used to do, and have done them better and more cheaply — there are, at least for the time being, far fewer machines that can step in and outperform doctors, nurses or other health sector workers.

    But a new study casts doubt on that theory and suggests Obamacare’s bet may indeed pay off. The study, published in Health Affairs by John Romley, Dana Goldman and Neeraj Sood, found that hospitals’ productivity has grown more rapidly in recent years than in prior ones. Hospitals are providing better care at a faster rate than growth in the payments they receive from Medicare, according to the study.

    [Figure: cumulative growth in hospital productivity, from Romley, Goldman, and Sood, Health Affairs.]

    [Note: y-axis is cumulative percent increase in productivity, as defined in the chart’s footnote.]

    This is both good news for patients and good news for the financing of the health reform law, which assumes hospitals will become significantly more productive. This bet is built into a schedule of reductions in the rate of growth in Medicare payments to hospitals. According to the law, those rates are to be reduced commensurate with the productivity growth of the overall economy. The only way for hospitals to keep up is if their productivity rises just as quickly.

    The cost disease theory says it can’t be done. This, according to the theory, is what causes health care spending growth to outpace that of the overall economy.

    Computers, cellphones, televisions — over the years they’ve all gotten better and cheaper. High productivity growth in such sectors — not mirrored in health care — leads to wage growth in those sectors. Higher wages provide more resources to spend on goods and services. Because health care is valuable, we use those resources to pay health care workers more, too, to keep them from doing something else. This helps explain why health care spending outpaces economic growth: We keep paying more for health care (through growing wages) without getting more (because of low productivity growth).
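
    The arithmetic behind that story is simple enough to put in a few lines: if wages everywhere track economy-wide productivity, a sector’s relative price grows at roughly the wage growth rate minus its own productivity growth rate. A toy illustration, with made-up growth rates:

        # Toy Baumol cost-disease arithmetic. If wages track economy-wide
        # productivity growth, a sector's unit cost grows at roughly
        # (wage growth - its own productivity growth). Rates are invented.
        YEARS = 20
        wage_growth = 0.02          # economy-wide wage growth
        sectors = {
            "manufacturing": 0.02,  # productivity keeps pace with wages
            "hospitals":     0.00,  # stagnant productivity (the cost disease)
        }

        for name, productivity_growth in sectors.items():
            # Cumulative relative price change over YEARS years.
            price_change = (1 + wage_growth - productivity_growth) ** YEARS - 1
            print(f"{name}: {price_change:+.0%} relative price after {YEARS} years")

    Zero productivity growth plus 2 percent wage growth compounds to roughly a 49 percent relative price increase over two decades — the cost disease in miniature.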

    Not all economists find every detail of the cost disease theory compelling. Some have argued, for example, that it gives short shrift to ways in which the quality of care changes, along with its price. Heart attack treatment certainly costs more today than a decade ago. Perhaps it’s also better. The acceptance of inevitably low health care productivity growth also troubles some economists.

    Amitabh Chandra, a Harvard economist, is one of them: “In Baumol’s view, as long as there is a steady stream of innovation in sectors other than health care — from cars to computers to everything on Amazon — we’ll be able to spend even more on health care, despite its jaundiced productivity growth. But if productivity in health care improves, too, then think about how much more health care we’ll be able to afford.”

    If the cost disease theory’s premise of low health care productivity growth holds, then the idea of tying reductions in the growth of Medicare payments to hospitals to economic growth — as the Affordable Care Act does — spells trouble.

    The findings by Mr. Romley and colleagues from the Schaeffer Center for Health Policy and Economics at the University of Southern California are a hopeful sign this need not happen. A strength of the study is that it incorporated an aspect of the quality of care into its measure of productivity: whether the care received kept more patients alive and out of the hospital for at least 30 days. The findings were qualitatively similar for shorter (two-week) or longer (one-year) windows. This distinguishes it from other approaches that measure productivity according to how many procedures a hospital can do per dollar, but not how well it does them.
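
    In spirit (this is a sketch of the general idea, not the authors’ exact specification), the measure divides quality-adjusted output, patients treated who are alive and out of the hospital at 30 days, by inflation-adjusted payments:

        # Sketch of a quality-adjusted productivity measure in the spirit of
        # Romley et al.; their exact specification differs. Output credits a
        # treated patient only if alive and out of the hospital at 30 days.
        def productivity(good_outcomes, payments):
            """Quality-adjusted output per (inflation-adjusted) dollar."""
            return good_outcomes / payments

        # Hypothetical hospital-years: same payments, better outcomes.
        p_2002 = productivity(good_outcomes=820, payments=10_000_000)
        p_2011 = productivity(good_outcomes=940, payments=10_000_000)
        print(f"cumulative productivity growth: {p_2011 / p_2002 - 1:.1%}")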

    According to the analysis, productivity fell for heart attack and heart failure patients between 2002 and 2005, after which it began to rise. For hospital care for all three conditions examined — heart attacks, heart failure and pneumonia — productivity growth accelerated after 2007. By 2011 it was more than 14 percent over the level it had been in 2002.

    The source of the broadest optimism from the study: Hospital productivity increased in the most recent years faster than that of the overall economy.

    Though the study is an important one, we should interpret it with some caution. It examined only one measure of productivity; it examined only three conditions in Medicare patients; and it examined data only through 2011. More studies like this one — but using different methods and more recent data — could confirm or refute these findings.

    Nevertheless, for decades the conventional wisdom has been that hospitals — and the health care sector in general — could not become more productive, explaining its growing expense. This new study suggests that such a cost disease may not be as inherent as once believed — and that the health care law’s cuts to Medicare are not as risky a bet as they once seemed.

    @afrakt

  • On Timing and King

    Since I’ve received a number of questions recently about the timing of King v. Burwell and its aftermath, I thought it was worth addressing them all in one place.

    When will we get a decision?

    The Court is likely to release its opinion in the last week of June. A decision could come sooner, but it probably won’t. The case was only argued in March, which is fairly late in the term, and it’s going to take time for the justices to write their opinions and to work out language with colleagues who wish to sign onto those opinions. Plus, the justices have a bunch of other opinions to write before they skip town for the summer. It’s a busy time.

    When will the decision take effect?

    If the government loses in King, there’s a small chance that the Court will stay its decision. If it doesn’t, however, the administration will have little choice but to comply within 25 days.

    Here’s why. Per Rule 45 of the Supreme Court’s rules, an opinion takes effect 25 days after its release in any case that was appealed from a state court. King wasn’t an appeal from a state court, though. The case came from the Fourth Circuit. And Rule 45 doesn’t exactly say when a decision will take effect—in legal jargon, when the Court’s mandate will issue—with respect to the lower federal courts.

    But don’t get too hung up on precisely when the mandate will issue. The executive branch’s compliance with a Supreme Court judgment is more about respecting the decision of a co-equal branch than it is about adhering to a formal judicial order. After King is decided, the Obama administration will have 25 days to consider asking the Court to rethink its decision. The administration probably won’t bother; doing so would be pointless. But after it throws in the towel, the administration couldn’t flout the Supreme Court’s decision without provoking a minor constitutional crisis.

    When will people start losing coverage?

    Once the administration complies with the Court’s decision, the IRS will no longer have the authority to cut subsidy checks—called “advance payment tax credits”—to insurers in 34 states. When residents in those states go on HealthCare.gov to pay their monthly premiums, perhaps on August 1, they’ll be asked to pay the full cost of their coverage.

    If they don’t—and most won’t—their insurers will terminate their coverage. Those terminations will, in most states, become effective 30 days after nonpayment. Millions of people are thus likely to lose coverage by Labor Day.
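
    To see how the pieces stack up, here’s the timeline in code, with a hypothetical decision date (the 25-day window and the roughly 30-day termination window come from the rules described above):

        # Hypothetical King v. Burwell timeline. The decision date is a
        # guess; the 25-day and ~30-day windows follow the rules above.
        from datetime import date, timedelta

        decision = date(2015, 6, 29)                  # last week of June (assumed)
        compliance = decision + timedelta(days=25)    # administration throws in the towel
        missed_premium = date(2015, 8, 1)             # first unsubsidized premium due
        termination = missed_premium + timedelta(days=30)
        labor_day = date(2015, 9, 7)

        print(f"compliance by:     {compliance}")     # 2015-07-24
        print(f"coverage ends:     {termination}")    # 2015-08-31
        print(f"before Labor Day?  {termination <= labor_day}")  # True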

    There’s been some suggestion that, under the ACA, insurers must wait 90 days before terminating coverage for non-payment. But that’s wrong. The ACA does require insurers to give notice 90 days before ending a “particular type” of plan. The provision does not govern, however, where an individual’s coverage is canceled for failure to pay.

    Do the states have time to transition to state-based exchanges for 2016?

    Under HHS’s current rules, states that wish to operate state-based exchanges for 2016 have to secure conditional approval by mid-June—which is to say, about two weeks from now. Needless to say, no state (with the possible exception of Pennsylvania) will hit that deadline. HHS could adjust its rules, but even if it does, open enrollment is set to begin on November 1. States will thus have a scant four months to get new exchanges up and running. In most if not all states, that won’t be nearly enough time.

    @nicholas_bagley

  • What one does with journal article meta data

    Why do authors need to report conflicts of interest when they publish a medical study? Austin had a thoughtful post last week in response to Lisa Rosenbaum’s NEJM essays (here, here, and here) on researchers and conflicts of interest.

    ‘Conflict of Interest’ refers to financial relationships between the authors of a research article and the manufacturer of the intervention being studied. Rosenbaum argued that there is an unfair and unwarranted prejudice against researchers who have such relationships, because the existence of a conflict of interest does not necessarily imply that the researcher is biased. The prejudice against researchers working with industry impedes the progress of research.

    Austin took her reasoning to a practical conclusion. He imagines himself reading an article and trying to evaluate its credibility. Medical journal articles typically have a footnote reporting meta data on the conflicts of interest reported by authors (e.g., “Dr. Jones was a paid consultant to the medication’s manufacturer, Big Pharma Inc.”). Austin questions whether he should even read that footnote, because

    Once I gather the meta data [about the authors’ conflicts of interest], what should I do with it?

    Austin’s right. Just knowing that Jones consults to Big Pharma doesn’t help you evaluate whether Jones’ study is valid. I don’t think there is a fair or even effective way for an individual reader to use meta data about authors to evaluate an individual article. I don’t read those footnotes either.

    Nevertheless, it is vital that those footnotes are there. Meta data are essential for meta-analyses, which are systematic, quantitative reviews of research on treatment effectiveness. Meta-analyses statistically combine the results of many studies to summarize their data into a single estimate of the effect of a treatment. Moreover, they explore the heterogeneity of treatment effects, looking for differences between studies that may explain why a treatment seemed to work better in one study than another.

    Meta-analyses frequently find that treatments work better in industry-funded studies than in non-industry-funded studies. A recent Cochrane review of the effects of industry sponsorship on research findings reported that:

    We found that drug and device studies sponsored by the manufacturing company more often had favorable results (e.g. those with significant P values) and conclusions than those that were sponsored by other sources. The findings were consistent across a wide range of diseases and treatments.

    We can only see this pattern by looking across many studies using journal article meta data. The Cochrane reviewers’ conclusions can, of course, be disputed on empirical grounds. But that is the great thing about having the meta data: with it, we’re not limited to our moral intuitions in evaluating the validity of the empirical literature, taken as a whole.
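
    To make that concrete, here is a minimal sketch of the kind of computation a meta-analysis performs: an inverse-variance pooled effect, plus the funding-source split that is only possible because the COI footnotes exist. The studies and numbers are invented:

        # Minimal fixed-effect (inverse-variance) meta-analysis, with a
        # subgroup split by funding source. Studies and numbers are invented.
        studies = [
            # (effect estimate, standard error, industry_funded)
            (0.45, 0.10, True),
            (0.38, 0.12, True),
            (0.20, 0.11, False),
            (0.15, 0.09, False),
        ]

        def pooled(subset):
            """Inverse-variance weighted average of effect estimates."""
            weights = [1 / se ** 2 for _, se, _ in subset]
            total = sum(w * est for w, (est, _, _) in zip(weights, subset))
            return total / sum(weights)

        overall = pooled(studies)
        industry = pooled([s for s in studies if s[2]])
        other = pooled([s for s in studies if not s[2]])
        print(f"overall: {overall:.2f}, industry: {industry:.2f}, "
              f"non-industry: {other:.2f}")

    Strip the funding flag from the data and the last two lines become impossible; that is the footnotes’ real job.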

    So here is one reason why reporting of conflicts of interest is essential: there is a substantial risk (not certainty) of industry bias in research reports. We need to track it and understand it, and we can’t do this without required disclosures of conflicts of interest. I expect that both Austin and Lisa Rosenbaum agree with me on this point.

    There remains an important question about what we should do to correct for industry bias in research results, if and when it’s confirmed. Just briefly:

    1. Suppose a meta-analysis of a treatment for a specific drug, say, finds (a) that the treatment effect averaged across studies is greater than zero (i.e., the treatment works), but (b) that industry-funded studies tended to report bigger treatment effects. Then I’d conclude that the average treatment effect is likely an overestimate. I’d be cautious in using it. I’d also conclude that we need more studies of the topic.*
    2. Suppose that many meta-analyses find an association between larger treatment effects and industry funding, which, I believe, exists.* Then I’d conclude that we need to improve our research methodology. Such effort is already underway: many current reforms in the conduct and reporting of medical research—for example, the clinical trials registry—have been motivated in part by concerns about bias associated with industry funding.

    What I wouldn’t conclude is that we should ban industry-funded clinical trials or ignore their findings entirely. Nor, without specific evidence of wrongdoing, would I assume that an industry-funded researcher is a shill or a fraud.

    Let me add that Rosenbaum has raised many important questions about our moral attitudes toward researchers and their relationships with industry. We should continue to require conflict of interest reporting, but we should also have the discussion about moral attitudes that Rosenbaum calls for.


    * Note that, as always, a simple correlation does not clarify the causal mechanisms that underlie it. An association between industry funding and larger treatment effects needn’t imply that industry is cheating. For example, suppose that industry researchers more accurately target populations in which a treatment is likely to work or work better. Such targeting could either be viewed as “gaming” or as a means of providing useful, population-specific information. So a finding that industry-sponsored trials work better should open a question about what industry does differently. But—one more time—we can’t have that discussion without the meta data.

    @Bill_Gardner

  • Stay calm and update priors as warranted

    I want to flag something meta about my Upshot post today, in which I describe a study that suggests hospital productivity has increased in recent years (through 2011). The study findings surprised me. Based on prior work and history, I am highly skeptical hospitals can maintain the high productivity growth it suggests.

    Put another way, writing about the study by John Romley, Dana Goldman and Neeraj Sood the way I did was counter to confirmation bias. I’ve posted about the hospital—or health care—productivity problem many times on TIE, as I linked to in the piece. It would have been easy to cling fast to the view that hospitals can never become substantially more productive (the cost disease) and to discount the Romley et al. study for any number of reasons. (I mention caveats at the end of the piece; more have been suggested to me on Twitter.) I find it more interesting and rewarding to take the study at face value—to challenge and update my own priors, even if provisionally.

    I suspect some will read the piece as Obamacare boosterism. That’s a mistake. I don’t do that. The ACA really did make a big and risky bet that hospitals could increase productivity. I’ve worried about it for years. I hope it’ll pay off, as the study suggests. We should be prepared for the possibility it won’t. While we wait, we should be brave enough to assimilate new evidence independent of what it implies about the ACA.

    Stay calm and update priors as warranted.

    @afrakt

  • Question of the day

    Via StuffJournalistsLike:

    [Image: “question of the day”]

    @afrakt

  • Healthcare Triage News: David Sackett

    David Sackett passed away recently. We don’t usually do obituaries here, but this one seems appropriate. This is Healthcare Triage News.

    This was adapted from a piece I wrote for the AcademyHealth blog. All the references and links are there.

    @aaronecarroll

  • Christopher Ingraham is making my life easier

    I’m giving him full credit right up there in the title. Twice in the last two weeks I was all riled up and feeling the need to blast out posts on how everyone needed to stop freaking out and pay attention to real risks and not the scream du jour. But before I could even get to it, there was Christopher Ingraham in the Washington Post, doing it for me.

    First up was the horrific train accident on the East Coast. Let’s acknowledge that it’s a horrible tragedy, ok? It’s also totally reasonable that it captured our attention. I can’t even fault people for being concerned that our rail infrastructure might need some updating, although I don’t think it’s clear yet that this was the cause of the crash.

    But then I started hearing from people complaining that rail travel was unsafe, period. Or at least unsafe compared to other forms of travel. You hear the same sort of thing whenever there’s a plane crash, even though flying is about the safest way to travel. And you all know that I hate when people ignore that car travel is pretty much the least safe way to go, especially since accidents are the number one killer of children.

    So I planned to make a chart on how all of these things compared to each other, but there was Christopher Ingraham, on the case already:

    [Chart: fatalities by mode of travel, via Christopher Ingraham, the Washington Post.]

    Yes, trains are less safe than planes, buses, or subways, but still WAY safer than driving. So deciding to cancel that 150-mile train trip and drive instead would not be rational. Thanks, Chris!

    And then, this week, he took on laundry pods. Those are the little prepackaged detergent packets for the dishwasher or laundry. There were news stories in the fall about how kids were going to the ER in droves because they were eating them. The usual panic buttons got pushed. But, again, I wanted more information. How many is “droves”? How does this compare to other panics?

    I was reminded of a bit I wrote about Plan B not too long ago, when people “worried” that it would be taken inappropriately and people would overdose:

    All drugs, when improperly used, carry significant effects. In 2009, there were over 70,000 calls to poison control centers for concerns about acetaminophen and more than 88,000 for ibuprofen. More than 30,000 calls were made for diphenhydramine, and 4 of those cases resulted in deaths. Just looking at kids 5 years of age and under, there were more than 130,000 calls for analgesics, 53,000 for vitamins, 48,000 for antihistamines, and 45,000 for cough and cold preparations. And yet, no one seems to be too concerned that these medications could be purchased “alongside bubble gum and batteries”. And, for the record, battery ingestions killed 4 kids in that age group that year.

    It’s all about context. So I planned to write a post on how calls to poison control for laundry pods compared to other things. But there was Christopher Ingraham, on the case already:

    [Chart: poison control center calls for laundry pods compared with other substances, via the Washington Post.]

    And, of the 11,000 laundry pod calls in 2013, only 54 resulted in a major injury and only 2 resulted in death. In fact, only 29 kids aged 1-4 died of ALL accidental poisonings in 2013. Guns and assaults killed way more. Car accidents killed 454 (see above).

    We need to keep these things in perspective. Chris is helping.

    @aaronecarroll

  • Wreck the RUC

    Yesterday, the Government Accountability Office (GAO) released a withering report on how Medicare sets the fee schedule for paying physicians.

    The American Medical Association/Specialty Society Relative Value Scale Update Committee (RUC) has a process in place to regularly review Medicare physicians’ services’ work relative values (which reflect the time and intensity needed to perform a service). Its recommendations to [CMS], though, may not be accurate due to process and data-related weaknesses. First, the RUC’s process for developing relative value recommendations relies on the input of physicians who may have potential conflicts of interest with respect to the outcomes of CMS’s process. . . . . Second, GAO found weaknesses with the RUC’s survey data, including that some of the RUC’s survey data had low response rates, low total number of responses, and large ranges in responses, all of which may undermine the accuracy of the RUC’s recommendations. For example, while GAO found that the median number of responses to surveys for payment year 2015 was 52, the median response rate was only 2.2 percent, and 23 of the 231 surveys had under 30 respondents.

    . . . [T]he evidence suggests—and CMS officials acknowledge—that the agency relies heavily on RUC recommendations when establishing relative values. For example, GAO found that, in the majority of cases, CMS accepts the RUC’s recommendations and participation by other stakeholders is limited. Given the process and data-related weaknesses associated with the RUC’s recommendations, such heavy reliance on the RUC could result in inaccurate Medicare payment rates.

    This isn’t the first time the RUC has come in for serious criticism. Nor will it be the last. Rife with conflicts of interest and not especially transparent, the RUC is a specialist-dominated committee that “donates” more than $8 million of its own services each year to Medicare, presumably out of the goodness of its heart.

    The RUC’s job is to tell CMS how much time and effort it takes to provide medical services in the hopes of influencing how Medicare pays physicians. Since CMS has been starved of the resources necessary to independently review physician services, the agency has little choice but to rubber-stamp most of the RUC’s recommendations.

    In recent years, Congress has taken modest steps to fix the problem. The Protecting Access to Medicare Act of 2014, for example, appropriates $2 million each year to enable CMS to collect information directly from physicians about the relative value of their services. But CMS doesn’t have a plan about how it will spend that money, and in any event $2 million won’t go far when it comes to reviewing thousands of physician services.

    Doing the job right would cost real money, but it’d be a pittance when compared to the $70 billion spent on physician payments in 2013. If we insist on running Medicare on a shoestring, we shouldn’t be surprised when it doesn’t work very well. Sometimes you get what you pay for.

    @nicholas_bagley

  • My moral struggles with journal article meta data

    I recommend Lisa Rosenbaum’s three-part NEJM series on financial conflicts of interest (links: part 1, part 2, and part 3). Though it is thought-provoking throughout, this single sentence was enough to occupy my mind for several hours:

    Once moral intuitions enter the picture, the need to rationally weigh trade-offs is often eclipsed by unexamined convictions about right and wrong.

    It is now commonplace for authors to disclose potential financial conflicts of interest (COI) to journals and institutional review boards (IRBs) before paper publication and initiation of research, respectively. You can most easily find COI statements at the end of many published papers, or accompanying them online. Here’s just a part of one COI disclosure for a paper I pulled at random from the NEJM archives:

    [Image: excerpt from a published COI disclosure.]

    The paper is about a drug (bevacizumab) manufactured by Genentech (as Avastin), so this particular COI disclosure for this particular author is relevant. (This author is one of 18 or so on the paper. Most of the others have no such disclosed COI, though some do.)

    If I’ve ever read any COI disclosures as part of reading or evaluating a published study, it’s only been a few times. I have purposefully avoided them for many years. Why?

    I worry about bias: my own. I simply don’t know what to make of COI disclosures. It’s easy to detect a potential or appearance of a COI. It’s much harder to decide how to weigh that when evaluating a study. Sure, it’s a data point that could be meaningful. So could a myriad of “irregularities” that might show up in a full body MRI on a patient with no symptoms of disease. I worry about false positives and emotional harm. How does this author’s prior financial relationship with Genentech affect the published research? Does it affect my head even more?

    I do not want to worry about COI (or worry about my worry about it) when evaluating a paper’s methods.

    Several years ago I received an email encouraging me to consider the work of a certain author. The work was relevant to whatever I was blogging about at the time. But I knew that author had substantial industry funding for his work, and decided I wasn’t going to read or consider his work on that basis. I emailed back as much.

    I regret that decision and that email. I should have considered the work on its merits. My assessment that it could not have been worthwhile was a biased one. I don’t read COI disclosures because I want to protect myself from that bias, acknowledging that I might be blinding myself to the authors’ own biases. There’s no way to win here.

    For the same reason, for years I didn’t read authors’ bios. With respect to the quality of the work, why should their institution, titles, or other credentials matter? Either their study is sound or it isn’t. If I can’t assess that from a paper’s text and figures alone (as a blinded reviewer would), then that’s a problem, but it’s not one that can be resolved by knowing an author’s pedigree any more than it can be resolved by knowing her skin color.

    In fact, for years I didn’t even read authors’ names on papers. I barely knew who wrote what, until it came time to cite stuff. Then I had to know names. Over the years I came to recognize some, got to know scholars across the country.

    Now I’m friends with and colleagues of many. I know where they work. I know their credentials. I consider bylines along with article titles when deciding what to read. There are some authors whose work I never want to miss. Is this a bias? Time being finite, it certainly crowds out reading others’ work.

    All this meta data—names, affiliations, degrees, potential COI—can bias. Once it enters my head, I cannot tell the extent to which it does. I could argue that I’m merely being Bayesian when I use prior knowledge of the authors’ work or their institutions. (This one has a well-earned reputation for good work; this other one is from a “lesser” institution widely thought to have an ideological perspective.) And maybe that’s right. But I could also argue that I’m using—even subconsciously—this meta data to unfairly evaluate the work.
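
    For what it’s worth, the Bayesian version of this can be made explicit, and doing so shows why it makes me uneasy: the update is only as good as priors I can’t verify. A toy calculation, with entirely invented numbers:

        # Toy Bayesian update on "this study is sound" given a disclosed COI.
        # Every number here is invented, which is rather the point.
        p_sound = 0.70                # prior: most published studies are sound
        p_coi_given_sound = 0.30      # COI disclosures among sound studies
        p_coi_given_unsound = 0.50    # ...and among unsound ones

        p_coi = (p_coi_given_sound * p_sound
                 + p_coi_given_unsound * (1 - p_sound))
        posterior = p_coi_given_sound * p_sound / p_coi
        print(f"P(sound | COI) = {posterior:.2f}")  # ~0.58, down from 0.70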

    Lisa is right that once intuitions—moral and otherwise—like these enter the picture, we’re already in difficult terrain. Problems arise from unexamined convictions, she wrote. But, for me, problems arise from examined ones as well. I do think money influences, as do relationships and beliefs. But when I examine my own feelings about these, I’m no closer to understanding the extent to which I use them in my own biased way, if at all.

    Once I gather the meta data, what should I do with it? What have I already done?

    @afrakt
