Detecting bias

I wrote the following almost a week ago. Since then, a lot has happened and this post seems even more relevant. In addition to reading this, or instead, consider listening to the recent EconTalk podcast, which is an in-depth interview with the journalist who exposed Andrew Wakefield’s autism-MMR research as a fraud. Wakefield’s story exemplifies the inability to update one’s thinking and rhetoric in the face of conflicting evidence. His original sins were bad enough, but this sin against science is the most damaging. Mistakes, bad research, and incorrect interpretations happen all the time, mostly for honest reasons, though not always. The hallmark of a good scientist, an honest journalist, a trustworthy publisher, and a responsible leader or citizen is admitting as much when confronted with more credible, contradictory evidence.

Like just about everyone else who has opinions, I’d like to claim I’m not biased. But I know better. Nobody is unbiased. The only sure way to be unbiased is either to decide everything randomly or to come to no conclusion about anything, outside, perhaps, of mathematics.

The problem is, we have to live. And life involves choices. And choices require decisions. Even not deciding is a decision. Few really want to decide randomly, which is impractical anyway.* So, we must do the best we can with the evidence at hand, mixed with our own views, desires, and assumptions. Evidence rarely points in one direction only and is never complete. Our views, desires, and assumptions–which are necessary to live–drive a considerable amount of what we do and what we believe.

Worse, none of us has time to learn everything known about everything. We must rely on experts to make our way in the world. Nobody is truly an independent thinker.

What I really want to get to is: how does one pick experts? That’s a very hard question. Over the years, I’ve developed some personal guidelines. I thought I’d share them briefly and then note an interesting paper related to all this.

I prefer experts who attack and defend the science and substance of an issue, not the people behind it. This is hard to do. It’s so easy to vilify others as those who just don’t get it. When I see experts doing that I worry. Are they in it for the truth or in it for the game? Do they just get a rush out of being right or playing to their crowd? Do they get paid for it? I really worry.

Within reason, I want my expert to consider all (or the preponderance) of the information on a topic. More importantly, I want to see that my expert changes his thinking in the face of contradictory, but credible, evidence. If one doesn’t update one’s views and rhetoric on a subject when new information is presented, one is likely biased. It is important to take on new information. The world is dynamic, and static thinking quickly goes out of date.

When my expert is shown to be wrong–i.e., the evidence isn’t what he thinks it is–he should acknowledge that fact, thank his critics, and not feel ashamed about it. Nor should he lash out and show how his critics are also wrong, though they may be. When he does lash out, it’s a warning sign that he really isn’t comfortable with his own fallibility. Honesty is not afraid of fallibility.

I want my expert to acknowledge uncertainty. He can (and should) still make a decision about what he thinks is the right, overall interpretation, but he should note when the data don’t all point the same way. I don’t think he needs to be overly dramatic about this. He need not hedge in every sentence or piece of writing. Just a nod or two now and then is sufficient. Fundamentally, I still need a decision. Being wishy-washy isn’t helpful.

I trust humble experts. Sure, I’m entertained (if not enraged) by the brash loudmouths who think they know it all and aren’t afraid to tell other people so. But I don’t trust them. They have too much ego at stake. Acknowledging uncertainty is good, to a point (as noted).

About uncertainty in the area of policy analysis, Charles Manski wrote an interesting paper last summer, Policy Analysis with Incredible Certitude. It’s worth a read and is not gated. Here’s the abstract:

Analyses of public policy regularly express certitude about the consequences of alternative policy choices. Yet policy predictions often are fragile, with conclusions resting on critical unsupported assumptions or leaps of logic. Then the certitude of policy analysis is not credible. I develop a typology of incredible analytical practices and gives [sic] illustrative cases. I call these practices conventional certitude, dueling certitudes, conflating science and advocacy, wishful extrapolation, illogical certitude, and media overreach.

There’s a lot of wisdom in the Manski article. I don’t agree with all of it, but I did find the discussion thought-provoking. If you’re thinking this is relevant to the CBO, you’re right. Manski has some concerns about the office and thinks it (or its staff, really) should express less certainty about budgetary scores of legislation.

Actually, I think they did a fine job doing just that in analyzing the ACA, for example. Though they produced one official score for formal legislative use, they also produced others with different assumptions. That’s good. That’s about as far as they need go. At the end of the day, we have to make a decision. One can take uncertainty too far.

The biggest problem, as I see it, is that it is so darn hard to tweak legislation without opening it up to a major overhaul. That’s a problem in our government’s structure and procedures, including the filibuster. If we could tweak things more easily, mistakes would be less costly. Throwing additional uncertainty into the mix doesn’t strike me as helpful. It probably would only contribute to the paralysis.

* It’s impractical for many reasons, one of which is that it is biased to decide a yes/no choice with 50/50 odds if that does not reflect the underlying probability distribution (assuming we know it).
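
To make that concrete, here’s a minimal simulation sketch (the 80% figure is my own hypothetical, not anything from the post): if “yes” happens to be the right answer 80% of the time, a fair coin says “yes” only about half the time, so it is systematically tilted toward “no” and is right no more often than chance.

```python
import random

# Purely illustrative sketch. The 80% figure is a hypothetical assumption.
TRUE_PROB_YES = 0.8   # suppose "yes" is the correct answer 80% of the time
N = 100_000

random.seed(0)
truth = [random.random() < TRUE_PROB_YES for _ in range(N)]
coin = [random.random() < 0.5 for _ in range(N)]  # decide every question by a fair coin flip

print(f"truth is 'yes':  {sum(truth) / N:.2f}")   # ~0.80
print(f"coin says 'yes': {sum(coin) / N:.2f}")    # ~0.50, systematically skewed toward "no"
print(f"coin accuracy:   {sum(c == t for c, t in zip(coin, truth)) / N:.2f}")  # ~0.50, no better than chance
```

Only a coin weighted to match the underlying distribution would avoid that tilt, and that presumes we know the distribution in the first place.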
