Machine Learning and Self-Driving Cars
I never feel like I can stay caught up with IPAM and Simons Institute workshop videos. For better or for worse, I get most of my current ML knowledge from those sources. Recently I watched an excellent talk by Richard Murray in IPAM’s workshop on Intersections between Control, Learning and Optimization. If you know me, you know how much I like that topic.
Murray’s talk is “Can We Really Use Machine Learning in Safety Critical Systems” (slides). Right now, his answer, specifically for self-driving cars, is an emphatic NO, though he sees ML as a powerful and inevitable enabler for those platforms. He’s an authority on this topic and I’m not going to summarize his excellent talk.
But I made a mental note during the talk: his discussion of SAE Level 3 “Conditional Automation” reminded me of Lisanne Bainbridge’s Ironies of Automation. The paper, first published in 1983, has 2143 citations according to Google Scholar, yet I rarely see it mentioned in non-specialist discussions. In Bainbridge’s terms, SAE Level 3 is a totally ironic specification:
The driving mode-specific performance by an automated driving system of all aspects of the dynamic driving task with the expectation that the human driver will respond appropriately to a request to intervene.
To Bainbridge, automation proceeds by extracting from the problem the subset of tasks realizable on the controller, effectively stripping out all of the “easy” parts first. What’s left, i.e. what’s too difficult to model and control, is the leftover mess that gets handed to the human operator in moments of failure, to the surprise of the operator, who was told they could put their attention elsewhere. Disaster often ensues. One need look no further than the recent grounding of the Boeing 737 MAX for a real-world example. For Bainbridge, this was the irony: we have effectively replaced a human system with a potentially more failure-prone human-machine one.
It’s also important to understand that in the process of reconfiguring the problem to be amenable to automation, much of the environment that the operator traditionally uses to reason about the system is also now missing: lights that used to flash in a certain pattern don’t anymore; levers that used to control the system no longer do so, since they are now under the domain of a (failed) controller. This means that not only does the operator have to solve a problem in a very short time, but they must solve one for which both their training and their heuristics are ineffectual.
It has always amazed me that SAE defined Level 3 at all, since I thought that everyone in controls and safety knew this. As it stands, and as you can gather from Murray’s talk, it’s just a wide moat that needs to be crossed if you want to get to actual (as opposed to marketing-defined) self-driving. Whether ML is part of how that moat gets crossed remains to be seen, but it’s hard to imagine at this point that it wouldn’t be.
Murray’s talk also emphasized to me that (1) it is unlikely that self-driving cars will ever be as safe as buses and trains; and (2) even approaching that level of safety will not happen unless we make the same kind of structural accommodations for self-driving cars that we’ve made for buses and trains. It has often struck me as another irony that, in order to make it possible for Waymo cars to drive fully autonomously on Alma Street in Palo Alto, we would need to do for them the same thing we have to do for the Caltrain that runs next to the street: removing cars, removing people, and enclosing it in concrete barriers. Can someone explain to me why we don’t just invest more in public transportation?