Making Sense of the Vehicle Automation Levels

By Colin Barnden

The sequential numbering standard has been widely misinterpreted to mean one level leads to the next. It does not.

At the beginning of May, SAE International once again updated its levels of driving automation as described in its J3016 standard (shown below). No taxonomy is better known or more widely referenced than J3016 in mapping the journey from human- to machine-driven vehicles.

[Figure: SAE J3016 levels of driving automation, Level 0 through Level 5]

I see two distinctly different roles for technology to improve safety on public roads: to make humans safer drivers, and to replace humans as drivers. However, these are two entirely independent development tracks that do not intersect.

They are parallel, not convergent.

The use of a sequential numbering system (from 0 to 5) has been widely misinterpreted to mean one level of J3016 leads on to the next. It does not. Broadly speaking, using technology to make human drivers into safer drivers roughly covers developments from Level 0 to Level 2; using technology to replace humans as drivers covers Levels 4 and 5.

The illusion of a continuum occurs at Level 3, where the machine drives until it doesn’t, then the human is expected to assume both the driving task and legal liability. Practical experience tells us this is a nonsensical idea, backed up by countless videos of Tesla drivers seeking to trick Autopilot and decades of human factors research proving humans cannot fulfil this responsibility safely.

Intentionally or not, J3016 has been misappropriated by startups and disruptors into a “race to Level 5,” resulting in a widely held belief that technology for “low levels” of driving automation becomes obsolete with the development of “high levels” of driving automation.

This conclusion is erroneous.

We can see evidence of this narrative in the absence of even a single company valued at more than $1 billion that is developing proven safety technology such as automatic emergency braking (AEB) or vision-based driver monitoring systems (DMS). Meanwhile, Waymo recently raised another $2.5 billion, and investors salivate over almost any lidar announcement.

In reality, Level 5 makes no practical sense and Level 3 is unsafe at speeds over 25 mph, meaning the haughty promises to “save lives” using high levels of driving automation now rest squarely on the shoulders of Level 4 developers such as Waymo.

However, Waymo has recently been making news for all the wrong reasons, with the departures of CEO John Krafcik and CFO Ger Dwyer, followed by a video of a driverless Waymo unable to navigate traffic cones in a construction zone.

Which raises the question: behind the confident façade of the main suppliers, are the real-world challenges of developing autonomous driving technology to safely transport humans proving much harder to address than we are being led to believe? Let’s take a closer look.

Introducing Waymore

Waymo doesn’t really have a problem recognizing traffic cones, but it does appear to have a problem recognizing that public roads are an example of a complex, not a complicated, system. I have written previously about the unpredictability described by complexity theory, and it remains unclear how machine learning software trained using brute force road testing can realistically lead to a machine that navigates uncertainty on public roads more safely than an attentive and engaged human.

Unknown, extreme, and rare events are known as “edge cases.” Complexity theory teaches us there are an infinite number of possible combinations of events. The machine-learned perception system in an autonomous vehicle must be trained to understand all possible scenarios within its operational design domain, but against an infinity of combinations no finite training set can cover them all. It will fail in unfamiliar situations, just as the Waymo failed with the cones.

Phil Koopman, co-founder of Edge Case Research and an engineering professor at Carnegie Mellon University, discusses these issues in a video on heavy-tail distributions in the real world (see below). Note Koopman’s conclusion that “Humans are good at heavy tail.”

[Video: Phil Koopman on heavy-tail distributions in the real world]
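To make the heavy-tail point concrete, here is a toy simulation in Python. It is my own construction, not Koopman’s analysis: it simply assumes edge-case types follow a Zipf-like (heavy-tailed) distribution, with one “event” per simulated mile, and counts how many never-before-seen event types turn up in each batch of miles.

```python
import numpy as np

# Toy model: event types drawn from a Zipf (heavy-tail) distribution --
# a few types are very common, and there is an endless supply of rare ones.
rng = np.random.default_rng(seed=0)
seen_types = set()

for batch_miles in (10_000, 100_000, 1_000_000, 10_000_000):
    draws = rng.zipf(a=1.5, size=batch_miles)   # one event per simulated mile
    new_types = set(draws.tolist()) - seen_types
    seen_types |= new_types
    print(f"after ~{batch_miles:>10,} more miles: "
          f"{len(new_types):,} never-before-seen event types")
```

Run it and the count of novel event types shrinks with each batch but never reaches zero, which is the intuition behind the claim that brute-force mileage alone struggles to exhaust the tail.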

In another video, Koopman estimates the quantity of brute-force road testing necessary to validate the safety case for autonomous driving technology, of which the Waymo Driver is one example. See below for Koopman’s analysis, which suggests the answer might be around 2 billion miles.

[Video: Phil Koopman’s estimate of the road-testing miles needed to validate an AV safety case]

If the Waymo Driver has accumulated “over 20 million miles on real-world roads since 2009,” then that equates to only 1 percent of the necessary distance estimated by Koopman. For all the hype over Level 4 autonomous driving, these calculations imply the technology requires way more money, time and testing before the suppliers are anywhere close to proving the safety case.
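As a quick sanity check on that 1 percent figure, the arithmetic fits in a few lines of Python, using only the round numbers quoted above:

```python
# Figures quoted in the article: Koopman's rough estimate vs. Waymo's total.
miles_needed = 2_000_000_000   # ~2 billion miles to validate the safety case
miles_driven = 20_000_000      # "over 20 million miles ... since 2009"

print(f"completed: {miles_driven / miles_needed:.1%}")      # completed: 1.0%
print(f"remaining: {miles_needed - miles_driven:,} miles")  # remaining: 1,980,000,000 miles
```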

In which case, please allow me to introduce Waymore.

This analysis suggests there are many more testing miles to go, prompting me to question whether investors in the Level 4 suppliers truly grasp the scale of the challenge ahead, and whether they have the money, patience and nerve to complete the journey.

Or, did the comfort blanket of J3016 and the promises of autonomous driving technology inadvertently create the longest, most expensive cul-de-sac in history?

Scooters, NHTSA and NCAP

In February, Waymo announced expanded testing to include San Francisco. In the accompanying blog post, the company observed:

“When asked to name factors making it hard to get around the city, 63 percent of respondents pointed to dangerous drivers, 74 percent to parking and 57 percent to stressful commutes. Worryingly, nearly a quarter didn’t feel safe on San Francisco’s roads at all.”

San Franciscans’ worries probably weren’t soothed when a Waymo test vehicle then promptly collided with a pedestrian riding a scooter. In a statement, Waymo said: “The autonomous specialist had recently disengaged the vehicle from autonomous mode and was driving in manual mode when the vehicle entered the intersection and made the left turn. After turning, and while still in manual mode, the vehicle came into contact with an individual on a motorized scooter.”

Pause and consider that every Waymo test vehicle is outfitted with more sensors and processors than nearly all privately owned passenger vehicles, and yet when operating in manual (human-driven) mode the collision avoidance technology was insufficient even to prevent a collision with a scooter.

How worried are San Franciscans now, I wonder?

AEB Vulnerable Road User (AEB-VRU) has for several years been specified by the European New Car Assessment Programme (Euro NCAP) for passenger vehicles. However, the San Francisco incident implies Waymo is deploying test vehicles on public roads fitted with collision avoidance technology for manual mode that does not even meet historical Euro NCAP standards, let alone exceed the current ones.

Why are there no guidelines published by the U.S. National Highway Traffic Safety Administration (NHTSA) specifying minimum AEB performance standards for test-level AVs when operating in manual mode on public roads? And why are there no guidelines specifying minimum performance standards for driver monitoring systems to assess distraction and fatigue in human backup drivers?

These are two obvious safety cases which have been overlooked.

While little can be expected of AV foxes guarding the public hen house, the regulatory environment shifted dramatically this week with the publication of NHTSA’s Standing General Order, which requires operators of test-level AVs to report all crashes.

This is likely to be the first of several significant changes to the regulatory environment for AV testing and development. Lawmakers are now increasingly questioning the promises of the AV industry while listening carefully to safety advocacy groups such as Consumer Reports and the Center for Auto Safety, which are pushing instead for proven vehicle safety technology such as AEB, DMS and lane-departure warning systems to become mandatory.

Keeping the human driver attentive and engaged is the primary role of DMS, while AEB and lane-keeping systems provide longitudinal speed assist and lateral lane support, respectively. These proven vehicle safety technologies look set to save many more lives in the decades ahead than anything “self-driving” at Levels 3, 4 or 5.

Although well-known and widely referenced, perhaps the J3016 spec just isn’t that helpful after all.

This article was originally published on EE Times.

Colin Barnden is principal analyst at Semicast Research and has over 25 years of experience as an industry analyst. He is considered a world expert on market trends for automotive vision-based driver monitoring systems (DMS). He holds a B.Eng. (Hons) in Electrical & Electronic Engineering from Aston University in England and has covered the automotive electronics market since 1999.

