• Alan Monheit’s litmus test for credibility

    If the use and abuse of evidence in policy debates concerns you, you might find Alan Monheit’s editorial in Inquiry stimulating. There’s a lot in it, including his litmus test for credibility.

    My own litmus test for the credibility of such institutions involves the following questions: Are think tanks on the right ever willing to acknowledge instances in which the market may be inadequate in allocating resources or that government intervention in private markets may be a necessity? Are there circumstances in which taxes should be increased beyond supporting national defense and having the resources to respond to catastrophes? For more progressive research institutions, my test is just the opposite: Would they be willing to acknowledge the advantages of some market-based outcomes or to concede that a reduction in entitlement spending or tax subsidies may be required to enhance public welfare? From my perspective, partisan institutions that exhibit knee-jerk reactions to particular policy initiatives lose credibility as legitimate contributors to policy debates.

    I have my own, related test. What’s yours? If you don’t have one, why not?

    Monheit’s piece is ungated. I encourage you to read it and bring your thoughts back here for a discussion. All views welcome.

    @afrakt

    • As alluded to in the opinion piece, the problem is not symmetric. The right wing is dominated by revanchists who demand fealty to notional ideas of what true conservatism entails. Until this changes, it will be difficult to have evidence-based discussions.

    • Among the more informative (and enjoyable) reads I have come across in the last year on this subject are two NYT columns, which I reference below.

      Thomas Edsall asked thought leaders on the right and left what they thought the other side got right and wrong. Fascinating stuff. I have gone back to these columns many times, and with each pass I learn something new.

      The opinions offered form, in my mind, some of the foundational output of think tanks, the focus of Monheit’s piece. I can’t recommend these enough.

      http://campaignstops.blogs.nytimes.com/2012/01/22/what-the-left-gets-right/
      http://campaignstops.blogs.nytimes.com/2012/01/15/what-the-right-gets-right/

    • Monheit’s litmus test is a poor one because it is fundamentally dogmatic. Any think tank that advances views inconsistent with Monheit’s pre-existing “middle-of-the-road” policy views is, by his test, not to be trusted. As a consequence, Monheit’s test for evaluating think tanks will inevitably perpetuate that “middle-of-the-road” view, whether or not it deserves to be perpetuated.

      In truth, in evaluating think tanks’ output, there is no avoiding the hard work of engaging directly with their analysis. This means evaluating whether they: (1) present evidence honestly; (2) argue in good faith; and (3) offer conclusions that follow from the evidence presented, the arguments made, and the authors’ clearly stated moral and ethical commitments. Leaping ahead to look at the conclusions, as Monheit proposes, is not a valid shortcut.

    • I think the question is whether they have analyzed the problem in a rigorous and unbiased way, and whether they have addressed research from non-affiliated sources.

      Take, for example, the Ryan plan’s proposal to replace Medicare with vouchers, on the theory that competition would drive down prices. However, employer-provided insurance is a market three times the size of Medicare. Why isn’t it driving down prices? Without a rigorous analysis of that question, the idea that vouchers will contain costs is more faith in dogma than a rational expectation of how our market will work.

    • Think tanks and other research institutions are likely to have biases of some sort, whether dictated from the top down by funders or resulting from the personal ideology of the staff involved. That said, an organization with biases can still produce quality (and credible) policy analysis. As a reader, the things that would give me confidence in the quality of an organization’s policy analysis would be:

      1. A thorough review of literature that doesn’t ignore papers with different viewpoints.

      2. A consideration of opposing arguments and literature with different findings (if available), and a fair analysis of their limitations.

      3. Humility and accountability. Is the organization capable of looking back at its previous work to analyze where it went wrong? Can its ideology fail, or only be failed? Can it acknowledge limitations to its ideology?

      4. Making empirical rather than political/ideological arguments.

      I guess #3 and #4 are pretty analogous to Monheit’s litmus test. The problem with a lot of “evidence-based” policy analysis is that there is often evidence to support multiple viewpoints, so I think keeping an eye out for cherry-picking of studies is my #1 concern.

    • Monheit’s article raises lots of issues with which I am very sympathetic. But if he were doing a peer review of an article, he’d probably be more systematic in what he’d demand:

      Literature review, or some other demonstration of adequate understanding of the problem to be addressed. Use of review articles. Use of authorities with whom the writer might not agree but whom the writer does not misrepresent. A context. An explanation of why the problem arises.

      Reliable methods, with documentation of the reliability of the methods. Or if the evaluator is not capable of judging that directly, methods used by other researchers in the field, especially top researchers.

      Reliable evidence, with documented processes by which it was obtained. (Such as social statistics from the Census Bureau, IRS, BLS, etc.) This is so basic, yet often forgotten. For instance, why cite a CBO study, rather than data from the IRS? The IRS data process is documented. CBO is making inferences from it.

      Arguments that are based on reliable evidence, made clearly enough that the ideological, moral, and theoretical assumptions are apparent. If this is economics or health policy, there will be ideological, moral, and theoretical assumptions.

      If I can *rationally* trust the writer or publishing institution to do evaluation for me, that’s good.

      But I agree with Matt that sometimes one just has to do the work of checking for oneself. When Mitt Romney said in one presidential debate that most employees of private companies work for companies that are taxed as individuals, I was shocked not to find any fact-checking of a very significant claim about the structure of US capitalism. The source of the claim: http://www.s-corp.org/2012/10/05/s-corporations-featured-in-presidential-debate/ (the second of two links to studies). I’m not an economist, but I had to do the fact-checking myself.

      As a librarian who teaches information literacy, I think this evaluation process is becoming increasingly difficult. There is a declining social expectation that it is possible to evaluate information. Information production is an industry with a huge variety of products, little of which meets quality-control standards. Individuals are often on their own.

      • “but I had to do the fact-checking myself.”

        Yes. I did, too.

        http://www.bloomberg.com/news/2012-10-04/time-to-debunk-the-myth-of-small-business-as-job-engine.html

        “Two recent reports by Ernst & Young LLP …

        Such findings, though, are at odds with those of several other studies, including one in 2011 by the U.S. Treasury Department. It attempted to better define small business by looking at criteria such as income, labor, and other business expenses and tax deductions. Its conclusion: “Many filers are not engaged in business activity as it is traditionally understood.” Just 20 million of the 34.7 million filers reporting pass-through income qualified as a small business, Treasury said. Of those, about one-fifth qualified as an employer.”

    • Maybe I should add that I’d be happy to cite Monheit’s work any time.

      Also, maybe I should add that I concluded that the lobbying group Romney cited was probably correct about the number of Americans working for “pass through” businesses – as far as I can tell. It appears to be a development of 1980s tax policies. This does not, however, imply that such policies are good, as Romney seemed to assume.

    • 1) I am suspicious of anyone who refuses to look at data when it is available. Someone with an agenda will claim that theory or logic is enough to decide an issue.

      2) An unwillingness to consider the other side’s best arguments suggests a lack of credibility. Going after straw men is too easy.

      3) Citing unknown or marginal thinkers with whom they wish to argue. Choose arguments posed by your opposition’s best thinkers.

      4) Never conceding that errors have been made in the past.

      Steve

    • My test for Republicans:

      Can you tell me how much defense spending is enough? Can you give a good reason why we need to spend so much more than the rest of the world?

      My test for Democrats:

      Can you tell me how much spending on schooling is enough? Can you give a good reason why we need to spend so much more than the rest of the world?

    • A partial list, in no particular order:

      -Assuming that correlation equals causation.

      -Assuming that proven statistical significance is sufficient proof of practical or clinical significance (see the sketch after this list).

      -Reasoning from intentions and motives rather than consequences and outcomes when evaluating policy choices and the people who champion or oppose them. Prohibition is one of many disastrous policy initiatives championed by well-intentioned campaigners.

      -Incapacity or unwillingness to make appropriate map/territory distinctions when evaluating the predictive capacity or utility of a model or body of literature. “Value at Risk” modelling was a wonderful tool for mitigating risk in computational simulations of financial markets, but was rather less successful when applied to real markets.

      -Forgetting that garbage in equals garbage out. The greater the level of statistical aggregation in a data set, or the larger the number of variables in a multivariate regression, the greater the methodological caution necessary to avoid the GIGO problem.

      -Zero awareness of the fact that only a relatively small fraction of the consequential knowledge or information that humans and societies operate on can be adequately formalized, centralized, or abstracted into a numerical value.

      -Assuming that the associations between variables in complex social processes are necessarily constant over time. Crime and unemployment, GDP growth and employment growth, etc.
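
      On the statistical-versus-practical-significance point above, here is a minimal Python sketch (assuming NumPy and SciPy are available) with entirely made-up numbers: a hypothetical 0.1-unit shift on an outcome scale centered at 100. With a large enough sample, the difference clears any conventional significance threshold while the effect size remains practically negligible.

        # Sketch of statistical vs. practical significance.
        # All numbers are hypothetical, chosen only for illustration.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n = 1_000_000                                        # very large sample
        control = rng.normal(loc=100.0, scale=15.0, size=n)  # baseline group
        treated = rng.normal(loc=100.1, scale=15.0, size=n)  # tiny 0.1-unit effect

        t_stat, p_value = stats.ttest_ind(treated, control)  # two-sample t-test
        effect = treated.mean() - control.mean()             # raw mean difference
        cohens_d = effect / np.sqrt((treated.var() + control.var()) / 2)

        print(f"p-value:   {p_value:.1e}")   # far below 0.05: "significant"
        print(f"effect:    {effect:.3f}")    # ~0.1 on a scale of ~100
        print(f"Cohen's d: {cohens_d:.4f}")  # ~0.007: practically negligible

      A reviewer who stops at the p-value calls this a finding; one who looks at the effect size does not.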