An industry campaign to limit meaningful regulation relies on myths about AVs that are easily debunked.
Autonomous vehicles (AVs) have the potential to improve road safety, provide mobility for those unable to drive, and yield other benefits. Today these goals remain aspirational. However, AV testing on public roads poses serious risks to vulnerable road users. Despite these risks, the AV industry campaigns for favorable regulatory treatment for both current testing and future general deployment. This campaign to limit meaningful regulation relies on a number of myths about AVs that are easily debunked.
Safely operating immature, developmental technology on public roads, even with safety drivers, requires transparency and cooperation. Yet the industry’s stance towards regulatory authorities is all too often one of stonewalling, empty rhetoric, and opacity. To make matters worse, few regulatory agencies have deep expertise in the area of AVs, and so can struggle to counter unreasonable AV company statements.
To help level the playing field, here is a “Dirty Dozen” list of myths commonly used by the industry in discussions regarding regulatory oversight for AVs.
One common myth is that 94 percent of crashes are caused by human driver error. The informal version is that humans drive drunk, so computer drivers will be safer. Either way, the implication is that AVs will be safer by not making those same mistakes.
To be sure, many crashes are caused by impaired drivers. However, the “94 percent” estimate misrepresents the original source. What the study data actually show is that 94 percent of the time, a human driver might have helped avoid a bad outcome. That is not the same as causing the crash. The source explicitly says that this is not 94 percent “blame” on the human driver.
Beyond the 94 percent figure being more complex than just “driver error,” AVs will make different mistakes. This should be abundantly clear to anyone watching automation road test videos. Yes, the technology will improve, but it remains to be seen how long it will take for AVs to be “net safer” than human drivers in complex driving situations when AV shortcomings, as well as benefits, are factored into the analysis.
The claim that regulation stifles innovation presents a false dilemma. You can easily have both innovation and regulation—if the regulation is designed to permit innovation by simply requiring the industry to follow its own standards.
For example, road testing safety can be regulated by requiring conformance to SAE J3018. That standard helps ensure that human safety drivers are properly qualified and trained. It also covers conducting testing in a responsible manner consistent with good engineering validation and road safety practices. It places no constraints on the autonomy technology being tested.
Specific standards relevant to deployed AVs are discussed below. None of those standards stifle innovation. Rather, they promote a level playing field so companies can’t skimp on safety to gain competitive advantage while putting other road users at undue risk.
If a company states that safety is their highest priority, how can that possibly be incompatible with regulatory requirements? After all, those requirements represent an industry consensus on safety standards written and approved by the industry itself – often with representatives from these very companies participating in the standards process. The AV industry ought to compete on features other than safety—with safety held to a uniformly very high standard (as in aviation).
Existing regulations (with the exception of the New York City Department of Transportation recently requiring J3018) do not require conformance to any safety standard covering autonomy, and do not set any level of required safety. At worst, it is the Wild West. At best, there are requirements for driver licensing, insurance and reporting.
Regulatory assurance of safety, if any, is little more than taking the manufacturer’s word for it. More is required.
Current Federal Motor Vehicle Safety Standards (FMVSS) do not cover computer-based system safety. They are primarily about testing headlights, seat belts, air bags and other basic vehicle safety functions. Merely passing FMVSS, while useful and important, is not enough to ensure conventional vehicle safety, let alone AV safety.
The National Highway Traffic Safety Administration (NHTSA) generally operates reactively to bad events. If car companies do not voluntarily disclose issues, it typically takes many injury and fatality loss events before NHTSA forces action. For example, it took 11 crashes involving the Tesla “Autopilot” driver assistance feature over 3.5 years to prompt NHTSA action.
Safety regulators should think hard about an approach in which “safety” means requiring insurance to compensate the next of kin after a fatality. With multibillion-dollar development war chests, a few million dollars of payout after a mishap seems scant deterrent to safety shortcuts in the race to autonomy.
Claims that safety standards cannot accommodate novel AV technology misrepresent how actual safety standards work. ISO 26262, ISO 21448, and ANSI/UL 4600 all permit significant flexibility in support of safety. Together, the three can accommodate any safe AV.
ISO 26262 ensures safe operation of conventional computer-based vehicle functions. ISO 21448 deals with the inherent limitations of sensors and surprises in an open external environment by covering the so-called Safety of the Intended Functionality (SOTIF) for AVs. ANSI/UL 4600 works with ISO 21448 and ISO 26262 to cover AV system-level safety, encompassing the vehicle and its support infrastructure.
The U.S. Department of Transportation has already proposed this set of standards in an Advance Notice of Proposed Rulemaking (ANPRM). All these standards allow developers to do more than required. All are sufficiently flexible to accommodate any AV. None of them force a company to be less safe (a truly laughable argument). None of the standards constrain the technical autonomy approach beyond requiring safety.
A significant reason that local and state regulations are a “patchwork” is that in each jurisdiction the AV companies play hardball, negotiating to minimize regulation. Typically this involves warnings that if regulation is too intense, companies will take their jobs and spending elsewhere, and that the area will earn a reputation for being hostile to innovation and technology.
The outcome of each negotiation is different, resulting in somewhat different regulations or voluntary guidance. The patchwork is largely self-inflicted by the AV companies themselves.
Moving to regulation based on industry standards would help the situation. A federal regulation that prevents states from acting but does not itself ensure safety would make things worse.
Typically, the “spirit” statements made by AV developers rely on the notion that there might be a need to deviate from the standard. Yet no concrete example of such a need is given, nor is it explained what conforming to the “spirit” would actually mean.
The standards are all flexible enough that if you actually conform to the “spirit” and intent, you can conform to the standard. However, if you’re in a hurry or want to cut costs, a strategy of saying you follow the “spirit” might come in handy. A better practice would involve consultation with regulators to either confirm a practice or obtain a limited exemption properly structured to preserve safety – if such a need can indeed be identified.
Companies that truly value safety should support transparent conformance to industry consensus standards to raise the bar for competitors. If they don’t, they provide protective cover for actors who practice “safety theater.”
Consider whether you would ride in an autonomous airplane in which the manufacturer said: “We conform to the spirit of the aviation safety standards, but we’re very smart and our airplane is very special so we took liberties. Trust us – everything will be fine.”
The U.S. DOT proposal mentioned above to invoke industry standards makes sense because it addresses precisely this concern. The industry has already created relevant safety standards. Regulators simply say: “Follow your own industry safety standards, just like all the other safety-critical industries do.”
If we could trust any industry to self-police safety in the face of short-term profit incentive and inevitable organizational dysfunction, then we wouldn’t need regulators. But that isn’t the real world. Achieving a healthy balance between industry responsibility for safety and regulatory oversight is important.
Road testing safety is all about whether a human safety driver can effectively keep a test vehicle from creating elevated risk for other road users. That has nothing to do with autonomy intellectual property, the point of which is to dispense with the human safety driver.
Testing safety data need not include anything about the autonomy design or functional performance. For example, consider reporting how often test drivers fall asleep while testing. A non-zero result might be embarrassing, but how does that divulge secret autonomy technology data?
The safety benefits of AVs are aspirational, promised at some ever-receding point in the future. Moreover, there is no real proof that AVs will be substantially safer than human-driven vehicles, especially when compared against human-driven vehicles equipped with active safety features such as automatic emergency braking.
Ignoring industry best practices to put vulnerable road users at risk today should not be permitted in a bid to maybe, perhaps, someday, eventually save potential later victims if the technology proves economically viable.
Bad press from a high-profile mishap can easily set the whole industry back. No company should be rolling the safety-shortcut dice to hit a near-term funding milestone while risking both people’s lives and the reputation of the entire industry.
Pointing to a clean track record so far amounts to saying: “We’ve gotten lucky so far, so we plan to get lucky in the future.” Absent evidence of robust, systematic safety engineering and operational safety practices, this is a gambler on a winning streak claiming they will keep winning indefinitely.
We should not be giving developers a free pass on safety until someone is killed. This is especially true for testing practices that in effect use safety drivers as a moral crumple zone.
Many know what happened in the 2018 Tempe, Arizona, Uber road testing fatality. The most recent version of SAE J3018 for road testing safety incorporates lessons learned from that tragic fatality. If testers won’t follow that consensus industry standard, they haven’t really taken those lessons to heart.
Regulators should pause to consider the consequences of putting vulnerable road users at potentially increased risk for the benefit of for-profit companies. Those companies are using public roads as an experimental testbed in their high-stakes race to autonomy. While road testing brings with it jobs and prestige for being tech-friendly, even a single testing fatality can draw worldwide negative attention to a region.
Regulators charged with ensuring safety should not feel inhibited from merely asking developers to follow the industry safety standards that, in most cases, the companies themselves helped write. Vulnerable road users should not serve as unwitting test subjects for AV testers whose actions show they are not truly putting safety first.
(A more detailed treatment that encompasses still more myths can be found at Philip Koopman’s blog, Safe Autonomy.)
This article was originally published on EE Times.
Philip Koopman is an associate professor at Carnegie Mellon University specializing in autonomous vehicle safety. He is on the voting committees for the industry standards mentioned. Regulators are welcome to contact him for support.