While I embrace many (most?) things with a technology flair, I have to admit to being a bit amused by all the recent breathless excitement over the idea of self-driving cars. To be sure, technology is making cars ever safer and more fuel efficient. It is also augmenting the driving experience so that paper maps are going the way of LP records and being “lost” is now something of a personal choice rather than a state of being. But I suspect that the rush to sell ad space may have people overlooking a few practical realities that could lead to surprising, if not dire, unanticipated consequences.
I’m quick to point out, by the way, that driving my vintage BMW is a completely different experience from driving a new one. No airbags, crash avoidance alarms, proximity radar or backup cameras. Driving it requires real and significant concentration, and the consequences of an error can be real and immediate. While I’d never want to retrofit a blind spot warning system, I can appreciate the value. Heck, hailing from the dark ages back in 1984, the vintage ride doesn’t even have a single cup holder.
I can appreciate ABS brakes and traction control too. These technology-dependent devices can make drivers feel invincible … or at least support the idea that training and engagement are less important than they once were, since mistakes can much more easily be recovered from. Further, I can also appreciate that under “normal” conditions while cruising down the Interstate and experiencing the commuter equivalent of the “Talladega Draft” — where any open space on the road is an invitation for someone to dive in for advantage — advanced computer control can stay on top of following distances and emergency braking better than the average distracted commuter trying to manage the car, the coffee cup, the kids and the cell phone concurrently.
No, my real concern will come about when we get to the point of the auto-driver being a real possibility. At that point, under normal conditions, the onboard systems could handle all the easy stuff with a minimal amount of drama or trauma. Parking between two stationary objects? No sweat. Maintaining following distances at 70 mph? Again, not much of an issue. The concern will emerge when bad or unexpected or unusual things happen and the computer control gives up and hands it back to the now even more woefully unprepared occupant, under the tag line of “I don’t know what to do, you take it!” A failed sensor, unexpected road conditions, and a wide range of other factors could create scenarios where the onboard systems decide that they have reached max capacity. Or there’s just Help Desk Rule #1 for electronic devices: when all else fails, reboot and start clean.
In other forms of transport, such as high-speed trains and airliners, there is significant control automation even for such dicey maneuvers as station stops and landings. In the main, it works great. But when it goes wrong, it can go spectacularly wrong.
As a backup, these vehicles have alternative systems, called engineers or pilots, who are well trained and capable of taking over navigation in mid-transaction. They have a full training and testing regimen that they need to follow in order to maintain their certifications. When the training kicks in, the autopilot comes off, and the results are generally good. Even at that, however, they aren’t perfect, as some recent plane crashes have suggested. Training really does matter. A lot.
Which gets back to the driverless car concept. If the occupants are going to be expected to “take over” at any point in the journey, where is the training and experience going to come from? How will they practice dicey moments to build an experience base rather than becoming unwitting guidance systems for land-locked missiles that run amok?
Renting a car today can provide an interesting view of the future. Mastering such simple tasks as turning cruise control on and off varies so much between brands and model years that the first few miles out of the lot are like a training mission of their own.
So, one consequence of increasingly automated vehicles could be fewer, but sadly more spectacular, crashes that are hard to pinpoint “blame” for. The conversation around who is liable in such circumstances could be both long and full of rich legal entanglements. Breathlessly talking about self-driving cars and the end of accidents as we know them may be both significantly premature and a preview of a different, more nuanced and complex dialogue.
Of course, on a weekend that required a surprisingly large number of reboots of both my real-world laptop and tablet devices at unfortunate moments, I find myself a little less concerned. If the technology crashes on an Excel problem, how can it possibly handle a Jersey jug-handle first time, every time? Or maybe I should be more concerned. Time will tell. And that could be the actuarial nightmare scenario.