Bumpy Road Ahead for AV

Article By : Junko Yoshida

AV tech companies are approaching safety in virtually the least safe way possible. We identified 5 things still missing in AV development practices.

Few industry insiders were surprised last week when Cruise, General Motors Co.’s self-driving unit, dealt autonomous driving a major setback by delaying deployment of the first GM robotaxi, which it had promised to introduce in late 2019.

The announcement was an admission of the fast-growing realization that getting a highly automated vehicle on the road might not be rocket science, but it’s not far from it either.

The lingering mystery is whether there’s a practical way for AV companies to demonstrate that their vehicles are safe. “Historically the autonomous vehicle industry has operated under a cloak of secrecy,” observed Phil Koopman, CTO of Edge Case Research.

Winning the trust race
But if AV companies are serious about winning the trust race, more honesty and transparency about their self-driving cars is essential.

Jack Weast

Last week, we caught up with Jack Weast, vice president, autonomous vehicle standards, and senior principal engineer at Intel. He said, “I think both industry and media have been complicit in hyping this and not being open and honest enough about the realities of the technology.”

Indeed, we the media share some blame. Many reports tend to frame the topic as an AV horse race. Weast said, “Yeah, the horse race has encouraged people to declare crazy dates [for AV launch].”

The AV industry “aspires to have a zero-accident future, but as long as there’s human drivers mixed in on the roads with automated vehicles, there’s going to be accidents,” noted Weast. “I think the trick is figuring out this question of how safe is safe enough, and how do you accept that?”

The key questions are: How does any AV company know when its AVs are safe enough for commercial launch? And who gets to decide that?

Explaining Cruise’s decision to delay robotaxi deployment, GM Cruise CEO Dan Ammann blogged that “in order to reach the level of performance and safety validation required to deploy a fully driverless service in San Francisco,” GM will significantly increase “testing and validation miles.”

Unfortunately, if there were a set quantity of “testing and validation miles” Cruise must travel before the commercial robotaxi launch, Ammann neglected to mention it. Nor did he discuss the specific “thresholds” or “requirements” he believes his AVs must clear.

Phil Koopman

Thus far, few AV companies have disclosed their test goals. Nor have they articulated how they are measuring the safety of their AVs before, during or after testing.

Edge Case Research’s Koopman would like to see, at minimum, AV companies “publishing safety metrics to demonstrate they are operating safely,” before test cars hit the streets.

But local governments today are demanding very little from companies seeking permission to operate public road tests. The public is kept in the dark until the next accident, or until another AV company admits its robotaxis are not road-ready.

Koopman wrote in his blog, “In fairness, if there is no information about public testing risk other than hype about an accident-free far-flung future, what is the public supposed to think? Self-driving cars won’t be perfect. The goal is to make them better than the current situation. One hopes that along the way things won’t actually get worse.”

5 Things Still Missing in AV Development Practices

EE Times talked to several safety experts and automotive analysts to find out what paths the AV industry might take to win public trust. Combining what we’ve heard from multiple sources, we identified five things still missing in AV suppliers’ vehicle development practices:

  1. Establishing metrics for testing,
  2. Adopting “safety by design” AV development,
  3. Sharing, among AV developers, of data collected during testing,
  4. Building a feedback loop, and
  5. More sophisticated system-level simulations.

1. Making the safety case before public road testing
First, let’s talk about the safety of public-road testing.

Koopman insists that AV testers must ensure safety during street tests. If an AV causes another Uber-like fatality, that one accident could kill the AV market altogether. He listed three common misconceptions about public AV testing.

a) A magical deadline.
This is the belief that there will come a day when AV companies wrap up public road testing. Then automakers will launch perfect, accident-free, highly automated vehicles and they’ll never have to test them again.

“Not so,” said Koopman. “Public road testing will be with us for a long, long time.” Every time a new sensor is added to an AV model, or when a robotaxi starts driving in a new ODD (Operational Design Domain), a new set of road tests is required. “Testing will never go away.”

b) Safety drivers.
Even after the Uber fatality, the standard argument for public road testing is, “Don’t worry. We have a safety driver.”

In Koopman’s view, safety drivers are not so all-fired safe. Indeed, the probability of a supervisor failure appears to increase as the autonomy failure rate decreases, he explained. But what if we were to install not one but two safety drivers? This is not much better, said Koopman. There have been incidents of both pilots in a passenger aircraft falling asleep. 

Well then, how about a driver monitoring system? “That’s not a guarantee either,” Koopman said. “There are reports that some truck drivers have learned to sleep without closing their eyes.” 

Safety drivers, pilots and truck drivers are all human. They get bored and they get tired. “You can’t bargain with human nature,” said Koopman.

c) Disengagement
By law, people actively testing self-driving cars on public roads in California are required to disclose the number of miles driven and how often human drivers had to take control, a moment of crisis known as a “disengagement.”

Koopman doesn’t believe this is the right metric. Publicizing disengagement counts tends to incentivize test operators to minimize interventions, which could lead to unsafe testing.
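To see why, here is a minimal sketch, with made-up numbers, of how the headline “miles per disengagement” figure is computed and how the same software can post a better number simply by discouraging interventions:

```python
# Minimal sketch (hypothetical numbers): how a California DMV-style
# "miles per disengagement" headline figure is computed, and why a better
# number is not automatically a safer test program.

def miles_per_disengagement(miles_driven: float, disengagements: int) -> float:
    """Headline metric reported annually by AV testers in California."""
    if disengagements == 0:
        return float("inf")
    return miles_driven / disengagements

# Fleet A: safety drivers intervene early and often.
print(miles_per_disengagement(100_000, 80))    # 1,250 miles per disengagement

# Fleet B: same software, but drivers are coached to intervene only when a
# crash looks imminent. The metric improves while the actual risk may not.
print(miles_per_disengagement(100_000, 10))    # 10,000 miles per disengagement
```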

Any serious effort to build a safer AV must use disengagement data to improve the technology, not to tout victory in a safety contest. AV testing operators should see every incident, mishap and near-miss as a failure in the test-program safety process.

Koopman told EE Times, “What scares me is the ones safety drivers didn’t notice. We don’t know if we just got lucky or we should do something before we get unlucky.”

Given that public trust in autonomous vehicle technology has already eroded, he told EE Times, “Each new mishap has the unfortunate potential to make that situation worse, regardless of the technical root cause.”

Koopman wants AV testing operators to make the safety case first before hitting the road. Citing his paper entitled “Safety Argument Considerations for Public Road Testing of Autonomous Vehicles,” co-authored with Beth Osyk, Koopman said, “We wrote that paper to provide a public starting point.” He added, “I’m sure other approaches would work as well, but ultimately all the things in that paper must be dealt with one way or another.”

High-Level On-Road Testing Safety Argument (Source: Technical paper by Phil Koopman)

2. Safety by Design
Second, let us go back to the basic design methodologies used by AV developers.

The aerospace and rail industries have faithfully followed a “safety by design” concept, observed Intel’s Weast. “We believe the automated vehicle should also have safety by design from the start.”

Wait! You mean, the AV industry doesn’t start with safety by design? And what exactly does “safety by design” mean?

Weast explained, “You formally define the vehicle architecture and design of the algorithms [used in the automated driving system] on paper first. You formally verify them.” Take the AV’s decision-making capabilities. “You try to prove its correctness using mathematics, logic and other formal proof and verification techniques.”

In airplane design, for example, “You know, it’s going to fly from a physics standpoint because you’ve proven that it will fly…You can prove that on paper.”

AV companies — especially those who have grown up in the culture of “move fast and break things” — have often opted for “alternate approaches.” They skip the part about building rigorous mathematical models first and applying formal verification to validate that the design matches the original specs. 

Weast explained how some AV designers tend to work: “Hey, I’ve built an AV. I just started writing code but didn’t do any formal designs or any design verification. I’ve got a pile of code and I’m just going to test it.”

Their typical approach is to “test and iterate, test iterate, test iterate,” noted Weast. Repeating the process, developers hope to gather statistical evidence to convince themselves that the AV they designed is safe. Weast explained, “They would say, ‘I’ve driven 10 million miles… I’ve driven a hundred million miles without an accident. Okay, that means it’s safe.’”

Is it?

Weast isn’t sure. Because “you haven’t actually verified that the design is safe. What you’ve done is just gathered statistical evidence … and [you are saying] that this pile of code you’ve got seems to work.”
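A back-of-the-envelope calculation illustrates why accident-free mileage alone is weak statistical evidence. The sketch below uses the “rule of three,” which says that after N trials with zero observed events, the approximate 95% upper confidence bound on the event rate is 3/N; the mileage figures and the rough human baseline of one fatality per 100 million miles are illustrative orders of magnitude, not anyone’s official numbers.

```python
# Back-of-the-envelope sketch (illustrative numbers): what do N fatality-free
# miles actually prove? With zero observed events, the "rule of three" gives
# an approximate 95% upper confidence bound of 3/N on the event rate.

def upper_bound_rate_per_mile(event_free_miles: float) -> float:
    """Approximate 95% upper bound on events per mile after zero events."""
    return 3.0 / event_free_miles

# Rough order of magnitude for human-driven fatalities in the US.
HUMAN_FATALITY_RATE = 1.0 / 100_000_000   # about one fatality per 100M miles

for miles in (10_000_000, 100_000_000, 1_000_000_000):
    bound = upper_bound_rate_per_mile(miles)
    print(f"{miles:>13,} fatality-free miles -> rate could still be "
          f"{bound / HUMAN_FATALITY_RATE:.1f}x the human baseline")
```

Even a hundred million fatality-free miles leaves open the possibility of a rate several times worse than human drivers, which is why statistical mileage alone cannot close the safety argument.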

Calling such an approach “not sound,” Weast said that at Intel/Mobileye, “we believe in formally verified safety models like RSS (Responsibility-Sensitive Safety — a mathematical model for autonomous vehicle safety), and the safety by design approach.”

Formal verification is not just safer. It’s the right thing to do. Weast added, “It also makes the test and verification process actually much simpler and easier because now you’re just checking if it matches the specification.”
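For a flavor of what a formally specified safety model looks like, here is a simplified sketch of one rule from the published RSS model: the minimum safe longitudinal gap a rear vehicle must keep behind a lead vehicle travelling in the same direction. The parameter values below are illustrative defaults for the sketch, not Mobileye’s production settings.

```python
# Simplified sketch of one RSS rule: the minimum safe longitudinal distance
# a rear vehicle must keep behind a lead vehicle moving the same way.
# Formula follows the published RSS model; parameter values are illustrative.

def rss_min_following_distance(v_rear: float, v_front: float,
                               response_time: float = 0.5,   # s, rear vehicle's reaction delay
                               a_max_accel: float = 2.0,     # m/s^2, worst-case rear acceleration during the delay
                               a_min_brake: float = 4.0,     # m/s^2, braking the rear vehicle is guaranteed to apply
                               a_max_brake: float = 8.0) -> float:  # m/s^2, hardest braking the front vehicle might apply
    """Minimum safe gap in meters (speeds in m/s)."""
    v_rear_worst = v_rear + response_time * a_max_accel
    d = (v_rear * response_time
         + 0.5 * a_max_accel * response_time ** 2
         + v_rear_worst ** 2 / (2 * a_min_brake)
         - v_front ** 2 / (2 * a_max_brake))
    return max(d, 0.0)

# Two cars at highway speed (~30 m/s): the model yields a concrete, checkable
# gap that a planner's behavior can be verified against.
print(f"{rss_min_following_distance(30.0, 30.0):.1f} m")
```

The point is that the rule is explicit: checking the planner against it becomes a matter of comparing behavior to a specification rather than accumulating mileage.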

Koopman believes that if AV developers can’t explain to themselves why their test AVs are acceptably safe in a methodical, rigorous way, the vehicles are probably unsafe. “That is bound to catch up with them as operations scale up.”

Responsibility-sensitive safety (RSS) explained in the context of autonomous vehicles.

3. Data sharing among AV testers
Safety experts are hoping to see more “transparency backed with technical credibility” from AV testers. They hope to see AV testers start sharing data before, during and after road tests.

This defies common practice within the AV industry.

“Sharing data is difficult,” said Joe Barkai, industry consultant/analyst for automotive, IoT and other emerging technologies. “Especially when it’s difficult to differentiate between those elements that can serve everyone without infringing on proprietary and strategic information, and those that should not be shared.” Barkai acknowledged, “The highly competitive culture of technology protection stands in the way of developing platforms and data models that promote collaboration at the platform level.”

But to establish best practices, Koopman said, “Almost nothing about testing safety has anything to do with the autonomy ‘secret sauce.’”

Some very competitive OEMs and tech suppliers are beginning to soften their attitude.

“Let’s not compete on safety. Let’s collaborate on safety,” said Weast. Intel/Mobileye is walking the walk on that. It recently published — together with ten automotive and mobility industry leaders — a white paper, “Safety First for Automated Driving.”

Weast explained, “Let’s be more transparent and open about what we think is the right way to design a safety-by-design vehicle. Let’s publish it for the world to see and comment on,” so that “we can ask, hey, do you agree, or did we miss something here?”

The white paper, described as a “Framework for Safe Automated Passenger Vehicles,” is not a standard. But Weast said it could lead to an industry standard, or at least parts of it could be used in a future standard.


Summary of the Test Strategy (Source: Framework for Safe Automated Passenger Vehicles)

4. Building a feedback loop to design test scenarios
Industry analysts also find it critical to build a feedback loop into the testing of self-driving cars.

Venki Agaram, director of the reliability engineering practice at CIMdata, suggested to EE Times, “It may not be pure data sharing. But AV testers should be jointly designing certification tests.”

Agaram explained, “AV developers have a good common understanding of the technology needed to get an AV going. They are all individually searching for extreme cases that will make the AVs unsafe, ineffective, or annoying.” Agaram believes that eventually, “They will have to pool that data to come up with standards and regulations.”

Agaram urges AV testers to start “designing tests for certification (which is validation) and explain to each other as to what data they have seen to support the choice of those test scenarios.”

Intel’s Weast acknowledged that adopting something like the German accident database — GIDAS (German In-Depth Accident Study) — would be useful. Sharing accident scenarios should happen not just among AV companies, but also on a government-to-government basis, said Weast.

As critical as data sharing is, Koopman said that “even a suggestion of publishing safety metrics for AV public road testing could be controversial among AV companies.”

He added, “If organizations really don’t want to release information publicly, another option is to create a lightweight standard practice and have some interested neutral third party evaluate them under an NDA then publish a summary report card.”

Koopman suggested, “That might take some time, but road testing is going to be with us for a long while to come… What matters is getting some collaboration across the industry on road testing safety right now.”

5. Advanced simulation
While AV companies love touting the miles they’ve driven their AVs on public roads, they rarely talk about simulation miles. Waymo is an exception. Earlier this month, it announced that it has driven “10 billion autonomous miles in simulation.”

David Fritz

But AV companies don’t appear to be so enthusiastic about simulation, partly because they find the simulation tool market fragmented, with each tool responsible for mimicking only certain elements of a system, and doing so in isolation. “Thus far, there had been no system-level, single AV simulation platforms available,” explained David Fritz, Global Technology Manager, Autonomous and ADAS at Siemens. That is, until Siemens launched PAVE 360 a few months ago, he claimed.

PAVE360 is a unified platform — developed by multiple vendors — that plugs in simulation tools to provide a high-fidelity model of an entire “virtual” car, Fritz explained. The platform can do “full, closed-loop validation” of the sensing, decision-making and actuation of automated driving systems.

PAVE360 brings together highly independent “physics, actuation, sensors, embedded software and SoCs” into a single system-level vehicle platform. (Source: Siemens/Mentor)

Car OEMs and chip designers developing AV solutions need a tool to determine that their software and hardware do indeed operate correctly “in the context of the entire vehicle and the environment” within which they operate, explained Fritz. Further, simulation allows them to explore different architectures and configurations based on empirical data, while shortening time to market, he added.

And let’s not kid ourselves. Random physical testing cannot guarantee coverage of corner cases, many of which are not practical to reproduce on physical platforms. OEMs need to see correlation between virtual and physical models.

As expressed in the whitepaper entitled “Framework for Safe Automated Passenger Vehicles,”  the AV industry remains skeptical of the accuracy of simulation.

Simulation introduces models to represent the behavior of the system of interest. Models are abstractions from the physical reality and rely on simplifications of the true complexity in the real world.

For example, a vehicle dynamics model may capture the forces acting on the vehicle as a result of actuation, friction and the earth’s gravity, but exclude the effect of electromagnetic forces or lunar gravity on the vehicle.

Consequently, simulations can be accurate only to some degree. Understanding the accuracy offered by a simulation is key to determining and arguing its use during development and validation activities.

Fritz asserts that PAVE360 will eliminate such traditional concerns over simulation.

But here’s the thing. Asked about PAVE 360, Intel/Mobileye’s Weast told EE Times he isn’t familiar with it. He added, “I agree simulation is a valuable tool during development phases of an AV. However, it is not an appropriate tool for safety assurance.”

Given that PAVE 360 was announced only a few months ago, Agaram said, “I am not surprised if Mobileye’s Weast hasn’t heard about it.” Barkai suspects that Weast isn’t against simulations. Rather, he’s arguing “not to forgo physical testing altogether.” He explained, “Simulation provides additional assurance and should reduce, but not eliminate, the need for physical testing. The culture of driving early production on test track, crash testing, etc. is very strong.”

Siemens concurred. “Simulation alone is indeed not an appropriate tool for safety assurance.” However, “simulation used in conjunction with physical tests (e.g. laboratory tests, test-track tests, field tests) is an appropriate tool for safety assurance, provided that it is used within a proper methodological framework (e.g. correlation/validation, amount of scenarios, amount of variations, etc.),” the company stressed.

The power of the PAVE360 methodology is in its ability to correlate simulation with the physical platform. Siemens’s Fritz noted that PAVE360 “supports SOTIF (Safety of the intended functionality) scenarios.” He is convinced it can be used for testing corner cases not easily covered with a physical platform.
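To make the corner-case argument concrete, here is an illustrative sketch of scenario-based simulation sweeping parameter combinations for a single cut-in maneuver. It is not the PAVE360 API; the scenario parameters, the `run_scenario` placeholder and its pass/fail rule are hypothetical stand-ins for calls into a real simulator and a real safety criterion.

```python
# Illustrative sketch (not the PAVE360 API): scenario-based simulation can
# sweep parameter combinations -- e.g., a cut-in maneuver -- that would be
# impractical or unsafe to reproduce on a physical test track.
from itertools import product

# Hypothetical parameters for a cut-in scenario.
cut_in_gaps_m     = [5, 10, 15, 20]      # gap left by the cutting-in car, meters
cut_in_speeds_mps = [15, 20, 25, 30]     # speed of the cutting-in car, m/s
road_frictions    = [0.3, 0.6, 0.9]      # wet, damp, dry

def run_scenario(gap_m: float, speed_mps: float, friction: float) -> bool:
    """Placeholder for a simulator run; returns True if the ego car passes."""
    # A real setup would configure and run the simulation, then evaluate the
    # ego vehicle's response against a formal safety criterion.
    return gap_m / max(speed_mps, 1e-6) > 0.5 / friction

results = {(g, v, mu): run_scenario(g, v, mu)
           for g, v, mu in product(cut_in_gaps_m, cut_in_speeds_mps, road_frictions)}
failures = [combo for combo, passed in results.items() if not passed]
print(f"{len(results)} scenario variations, {len(failures)} flagged for review")
```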

If an advanced unified simulation platform like PAVE360 could indeed reduce the scope of physical testing to a level more practical than simply driving billions of miles, as claimed by Siemens, simulations can potentially strengthen a now shaky case for testing robocars on the same streets where school buses go.
