Self-driving cars are almost here, aren’t they?

I think that artificial intelligence will revolutionize medicine (and everything else). Smarter people than me think so too.

But how soon will this happen? One reason to think that the revolution is nigh is the recent success of other AI-driven technologies, like drones, and the apparent progress in developing self-driving cars. However, I recently had a conversation with Ari Allyn-Feuer,* a smart bioinformatics grad student, that led me to question how much we know about the safety of robotic cars. And if it is hard to prove that robotic cars are safe, it’s going to be even harder to prove that artificially intelligent medical technologies are safe.

Before looking into this, I had a mental model that said that robots were safer than human drivers. The implicit argument was something like this:

  1. Human drivers are dangerous. Tens of thousands of people die on US roads every year, and the great majority of these crashes involve impaired drivers or driver error.
  2. Robotic cars are safe. Google’s cars have been driving on our roads with no fatalities. We have decades of experience with marine and aviation autopilots. Someone died recently in a Tesla, but that’s just one.
  3. Conclusion: Get humans out of the driver’s seats. Do it now.

Notice what’s missing from this argument. There are no data. Bad sign.

How dangerous are human drivers? There were 32,675 deaths from vehicular accidents in 2014. Yet we are not frightened when we drive. Are we crazy? Well, no. The denominators in traffic safety are huge: American vehicles drove 3,026 billion miles in 2014. Thus there were only 1.08 deaths per 100 million miles driven. We have a 1 in 10,000 chance of dying in a car crash in any given year.
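
If you want to check that arithmetic, here’s a quick sketch in Python. The deaths and mileage figures are the ones above; the 2014 US population is my own round number, so the per-person risk is approximate.

```python
# Back-of-the-envelope check on the human-driver numbers.
deaths_2014 = 32_675          # US traffic deaths, 2014
miles_2014 = 3.026e12         # 3,026 billion vehicle miles, 2014
population = 319e6            # assumed 2014 US population (my round number)

deaths_per_100m_miles = deaths_2014 / (miles_2014 / 1e8)
annual_odds = population / deaths_2014

print(f"{deaths_per_100m_miles:.2f} deaths per 100 million miles")       # ~1.08
print(f"about 1 in {annual_odds:,.0f} yearly risk of dying in a crash")  # ~1 in 9,800
```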

How safe are robot cars? Chris Urmson, the director of Google’s self-driving car program, reported that as of May 2015, their robot cars had driven more than 1.7 million miles. This is, literally, an astronomical number: it’s driving to the Moon and back more than three times. Having no fatalities in that many miles is impressive. Except… you would expect only about 0.02 fatalities in 1.7 million miles on US roads with human drivers. Unfortunately, we do not have nearly enough miles on Google cars to compare their fatality rate with that of human drivers.
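
Here’s that expectation as a sketch, along with the statistical “rule of three,” which I’m adding for illustration: after observing zero events, an approximate one-sided 95% upper bound on the event rate is 3 divided by the exposure. Zero deaths in 1.7 million miles is consistent with a fatality rate more than a hundred times the human one.

```python
# Expected fatalities if Google's miles had been driven at the human
# rate, plus the "rule of three" upper bound after zero observed events.
human_rate = 1.08 / 1e8        # deaths per mile
google_miles = 1.7e6

expected = human_rate * google_miles
upper_bound = (3 / google_miles) * 1e8   # per 100 million miles

print(f"expected fatalities at the human rate: {expected:.3f}")               # ~0.018
print(f"95% upper bound from zero deaths: {upper_bound:.0f} per 100M miles")  # ~176
```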

Perhaps we should compare safety by counting crashes, which are far more common than fatalities. Urmson reports 11 Google crashes during those 1.7 million miles, which is 6.5 crashes / million miles driven. Compare this to the National Highway Traffic Safety Administration’s (NHTSA’s) report of 6,064,000 police-reported crashes in 2014 across 3,026 billion vehicle miles, which is only 2.0 crashes / million miles. The relative risk of a crash in a Google car would then be 3.2 (a relative risk of 1.0 means that Google cars and human drivers are equally risky; a relative risk greater than 1 means that Google cars are more dangerous).
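
The comparison is just division, but it’s worth spelling out:

```python
# Naive crash-rate comparison, before any underreporting correction.
google_crashes, google_miles = 11, 1.7e6
human_crashes, human_miles = 6_064_000, 3.026e12

google_rate = google_crashes / (google_miles / 1e6)   # crashes per million miles
human_rate = human_crashes / (human_miles / 1e6)

print(f"Google: {google_rate:.1f} crashes / million miles")   # ~6.5
print(f"humans: {human_rate:.1f} crashes / million miles")    # ~2.0
print(f"relative risk: {google_rate / human_rate:.1f}")       # ~3.2
```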

However, these crash rates are not truly comparable. On the one hand, Google cars may be operating on better roads and in better weather than the typical human driver faces. On the other hand, and more importantly, human crashes are badly underreported. But by how much? Using a telephone survey, the NHTSA estimated that 30% of crashes go unreported. If we correct the human crash rate for that underreporting, we get about 2.9 crashes / million miles, which gives Google cars a relative risk of about 2.3 compared to humans. Worse, the rate of underreporting is itself uncertain. Another NHTSA report quotes an estimate that 55% of crashes go unreported, which implies about 4.5 crashes / million miles. Even then, Google cars are more crash prone (relative risk ≈ 1.5).
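
Here is that correction as code, applied under both NHTSA underreporting estimates:

```python
# Inflate the human crash rate for underreporting, then recompute the
# relative risk of the Google cars under each NHTSA estimate.
google_rate = 11 / 1.7                   # ~6.5 crashes per million miles
reported_rate = 6_064_000 / 3_026_000    # ~2.0 crashes per million miles

for unreported in (0.30, 0.55):
    true_rate = reported_rate / (1 - unreported)
    print(f"{unreported:.0%} unreported: {true_rate:.1f} crashes / million miles,"
          f" relative risk {google_rate / true_rate:.1f}")
# 30% unreported: 2.9 crashes / million miles, relative risk 2.3
# 55% unreported: 4.5 crashes / million miles, relative risk 1.5
```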

So the risk estimates for both robots and humans carry a great deal of uncertainty: the robots’ because of small denominators, the humans’ because we hide our mistakes to an unknown degree. The upshot is that we don’t know whether robotic drivers are as safe as humans. My back-of-the-envelope calculations suggest that they might be a bit less safe. We shouldn’t assume that self-driving cars on the highway are imminent (Tesla agrees).
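
One way to see how much the small denominator hurts is to put a confidence interval around the Google crash rate. This exact Poisson interval is my own illustration, not a calculation from the conversation with Ari; it assumes SciPy is available:

```python
# Exact (chi-square based) 95% Poisson confidence interval for the
# Google crash rate: 11 crashes observed in 1.7 million miles.
from scipy.stats import chi2

crashes, million_miles = 11, 1.7
lower = chi2.ppf(0.025, 2 * crashes) / 2 / million_miles
upper = chi2.ppf(0.975, 2 * (crashes + 1)) / 2 / million_miles

print(f"95% CI: {lower:.1f} to {upper:.1f} crashes per million miles")
# ~3.2 to ~11.6: wide enough to overlap the corrected human rates
```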

So, should we get Google cars off the roads? That’s not my view. Even if Google cars aren’t as safe as human drivers, they are very safe: None of the 11 Google accidents has injured a human. Robot drivers are machine learners, so they will get better at driving as they accumulate miles on the road. Robots will eventually be safer than humans, making a marginally higher risk now worth it in the long run. A more effective way to reduce traffic fatalities is to develop better ways to take drunk drivers off the road.

All this has implications for the automation of medicine. To me, the measurement problems in medical safety look harder than those in auto safety. The denominators for many potentially automatable medical procedures are much smaller than road-safety denominators. Many medical errors do not have mandated reporting, so underreporting is common and the rate of underreporting is highly uncertain. It’s therefore going to be extremely hard to evaluate the safety of medical robots from observational data. It will be even harder to measure from RCTs comparing humans and robots, because the denominators will be tiny and human performance in the human arms of the trials will likely be better than routine clinical practice. So for safety reasons alone, the automation of medicine will be even slower than the automation of driving.
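
To make the tiny-denominator point concrete, here’s a standard two-proportion sample-size calculation. The complication rates are hypothetical numbers I picked purely for illustration:

```python
# Rough per-arm sample size to detect a difference between two rare
# complication rates at 80% power, two-sided alpha = 0.05.
from scipy.stats import norm

p_human, p_robot = 0.010, 0.012            # hypothetical complication rates
z_alpha, z_beta = norm.ppf(0.975), norm.ppf(0.80)

n_per_arm = ((z_alpha + z_beta) ** 2
             * (p_human * (1 - p_human) + p_robot * (1 - p_robot))
             / (p_human - p_robot) ** 2)
print(f"about {n_per_arm:,.0f} patients per arm")   # ~43,000
```

Tens of thousands of patients per arm, for a single procedure, is far beyond the size of most clinical trials.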

Finally, I’m amused that I let myself be fooled by the hype about robotic cars. Distrust non-data-based journalism: you can get hosed if you accept it without checking.


*Ari deserves credit for any good ideas in this post. However, I come to slightly different conclusions than he does, so I am responsible for any errors.

@Bill_Gardner
