Mobileye has developed a mathematical model that spares an autonomous vehicle from the blame for an accident as long as it’s following a pre-determined set of “clear rules for fault in advance.”
This week, Intel released a “newsbyte” summarizing a talk given by Mobileye CEO/Intel Senior Vice President Amnon Shashua at a conference in Seoul, where he explained his “Formula to Prove Safety of Autonomous Vehicles.”
An academic paper and layman’s summary paper co-authored by Professors Shashua and Shai Shalev-Shwartz (Mobileye’s VP of Technology) details “a formal, mathematical formula” to ensure that a self-driving vehicle “operates in a responsible manner and doesn’t cause accidents.” Perhaps more important is that this presentation appears to signal the desire of Intel/Mobileye to collaborate with the industry on standards that “definitively assign accident fault” when human-driven and self-driving vehicles inevitably collide. The objective is a mathematical model that spares an autonomous vehicle (AV) from the blame for an accident as long as it’s following a pre-determined set of “clear rules for fault in advance.”
At first blush, this proposal struck me as callous and incredibly self-serving. When Intel described Mobileye’s efforts as “offering the autonomous driving industry a way to prove the safety of autonomous vehicles,” I had to stop and think.
Mike Demler, a senior analyst at the Linley Group, put the issue succinctly: “In what universe is it a good idea for an industry to define for itself what the definition of safety is for the products it produces?”
But as I waded through the Shashua paper, I conceded that Mobileye is among the world’s foremost technology developers on issues related to self-driving cars. Mobileye has been doing this for a while and giving it a lot of thought.
‘Winter of autonomous driving’
Given that, I find it interesting that Mobileye acknowledges that: a) “Complete avoidance of every accident scenario is impossible;” b) without a “formalized model for fault,” the auto industry can’t develop driving policy software that avoids accidents caused by autonomous vehicles; and c) the industry needs a solution that avoids “the data-intensive validation process that most AV developers seem to be planning.” Mobileye sees the latter as “not feasible (whether performed on-road or in a simulated environment).”
In other words, Mobileye’s technical paper exposes what many observers have suspected all along. Autonomous cars will go nowhere unless the auto industry develops a solid safety validation process, a set of rules that allows the creation of decision-making software and offers the public “provable safety assurances,” as Mobileye put it.
As Mobileye’s authors themselves noted, they have every reason to worry that the lack of standardized safety assurance and scalability would push the industry’s current interest in AVs into “a niche academic corner, and drive the entire field into a ‘winter of autonomous driving.’”
These are thorny issues that nobody in the industry has thus far been willing to bring up and discuss at length.
Intel/Mobileye deserves a few props for initiating an effort to clarify the unresolved issues. Some industry analysts suggest that Mobileye’s proposal has “good potential for improving autonomous driving.”
According to Mobileye’s logic, if you have a formalized model to assign fault for an accident, engineers can more easily develop driving policy algorithms that avoid AV-caused accidents. Further, such a model can enable engineers to develop “an efficient validation process” for autonomous cars, without resorting to exhaustive on-the-road and simulation testing.
Noting that autonomous vehicles have 360-degree vision and very fast reaction times without any distractions, the authors boasted that by combining these advantages with a formalized model to determine fault, “AV developers can design a system where the software can evaluate every command against this model.”
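The per-command check Shashua describes can be illustrated with the safe longitudinal following distance defined in the Shalev-Shwartz/Shashua paper. The sketch below is a simplified reading of that formula, not Mobileye’s implementation; the function name, parameter names, and the numeric values in the example are my own assumptions for illustration:

```python
def min_safe_distance(v_rear, v_front, rho, a_accel_max, a_brake_min, a_brake_max):
    """Minimum longitudinal gap (meters) the rear car must keep to avoid
    blame under the paper's safe-distance rule.

    v_rear, v_front : speeds of rear and front cars (m/s)
    rho             : rear car's response time (s)
    a_accel_max     : max acceleration the rear car might apply during rho (m/s^2)
    a_brake_min     : minimum braking the rear car is guaranteed to apply (m/s^2)
    a_brake_max     : maximum braking the front car might apply (m/s^2)
    """
    # Worst case: rear car accelerates for rho seconds, then brakes gently,
    # while the front car brakes as hard as possible.
    v_rear_after = v_rear + rho * a_accel_max
    d = (v_rear * rho
         + 0.5 * a_accel_max * rho ** 2
         + v_rear_after ** 2 / (2 * a_brake_min)
         - v_front ** 2 / (2 * a_brake_max))
    return max(d, 0.0)  # a negative result means any gap is safe

# Example: both cars at 20 m/s (~45 mph), 1 s response time.
gap = min_safe_distance(20.0, 20.0, rho=1.0,
                        a_accel_max=2.0, a_brake_min=4.0, a_brake_max=8.0)
```

A driving-policy layer could then evaluate each candidate command and reject any that would shrink the actual gap below this threshold, which is what “evaluating every command against the model” amounts to in practice.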
From the engineering viewpoint, Mobileye’s proposal seems to make the once-insurmountable challenge of developing accident-free AVs almost attainable.
But here’s the rub. Mobileye’s logic unfortunately falls apart in a real world where humans maintain certain social norms, customs and behaviors.