Teaching a car to anticipate a pedestrian suddenly stepping onto the roadway remains one of the hardest problems in autonomous driving.
Would highly automated vehicles make driving safer? In principle, the answer is yes. But pitching autonomous vehicles (AVs) as guarantors of an accident-free future?
As Jack Weast, vice president, autonomous vehicle standards, and senior principal engineer at Intel recently told us, “Someday, we'll get there [accident-free society] … probably when we remove all the human drivers off the road.”
“Driving is an inherently risky activity,” he added. That reality applies to robotic cars and human drivers alike: accidents will happen whether a car drives itself or a human drives it.
In theory, with a host of sensors ranging from vision to radars and lidars, self-driving cars should be as good as or even better than human drivers at detecting and avoiding obstacles, provided there are no sensor failures or bugs in the software stack. But teaching a car to anticipate a pedestrian stepping suddenly onto the roadway remains a challenge.
Human intuition slows us down when driving in a residential neighborhood with no sidewalks. Some drivers go slower than 25 miles per hour — just to be on the safe side. It isn’t regulations but common sense that prompts this sort of defensive driving.
But for a machine, what does “drive cautiously” mean? That's a puzzler for a programmer, Weast explained. “How do you make [being] cautious ‘machine interpretable,’ and how do you explicitly define it?”
Human drivers can intuit these sorts of things, but to make human intuition comprehensible to a machine we need to formalize it mathematically, define it, and pick the number behind the definition — whether the result is deemed a “safe speed” in a certain neighborhood or a safe following distance for AVs.
Surprisingly enough, these are questions nobody is asking. It follows that nobody seems to have an answer — at least not yet.
Bumpy Road Ahead for AVs
Safety vs. Utility
In theory, we could make autonomous vehicles that drive at a steady speed of 20 miles per hour. These AVs would drive cautiously, vigilant to avoid any accident at all costs. While such a behavioral model might be considered safe driving, an AV that drives like an old lady would stifle the appeal and “usefulness” of autonomous vehicles. An overly cautious, slow-moving AV would create new traffic jams wherever it traveled, stacking lines of human-driven cars behind it. Road rage would inevitably ensue. Worse, the supposedly safe AV would likely trigger accidents by driving so unnaturally.
We want self-driving cars to drive cautiously, but we also expect them to be assertive when necessary. Above all, we want AVs to drive like people. The Rand Corp. calls this “roadmanship,” defined as AVs that don’t create hazards and “play well with others.”
Making an AV both safe and “useful” is neither a theoretical nor philosophical discussion. This is a technical and practical conundrum with which today’s AV designers are wrestling. Intel’s Weast frames this issue as “Safety vs. Utility.”
What’s a safe following distance for an AV?
Driving at a safe following distance seems like an elemental practice, even for student drivers.
But for a machine to follow safely, an AV must be taught to balance many factors. It must consider its velocity, road friction (is it driving on wet pavement?) and its reaction time. The most important parameter in this calculation, according to Weast, is “What is my assumption of the reasonable worst-case braking of the vehicle I am following?”
Here, the problem gets tricky because not all cars are created equal. As the table below shows, the maximum braking capability of a 2018 Porsche 911 GT3 is 12.57 meters per second squared (m/s²). In contrast, a 1996 Honda Civic brakes at 8.19 m/s². This is a big difference.
If you are the automated vehicle following another car on the road, how do you figure a safe following distance? If the vehicle up ahead has a higher braking capability, you need a bigger cushion to prevent a rear-end collision.
Speaking about the determination of following distance for AVs, Weast said, “Let’s say we pick a number, such as 9.8 meters per second [squared].” That falls into a typical automatic emergency braking range. As long as the lead vehicle doesn’t decelerate at more than 9.8 m/s², the distance seems safe. But what if the AV is following a 2018 Porsche 911 GT3 and the Porsche brakes hard? The AV “may not be able to stop in time without crashing into it.” Was it OK for the AV to assume that 9.8 m/s² would work?
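The calculation at stake can be made concrete. Below is a minimal Python sketch of the RSS-style minimum safe following distance, built from the factors Weast lists: the follower’s speed, its response time, and the assumed worst-case braking of the lead vehicle. The response time, acceleration and braking parameter values are illustrative assumptions, not numbers Intel/Mobileye has endorsed.

```python
def rss_min_following_distance(v_rear, v_front, rho,
                               a_accel_max, a_brake_min, a_brake_max):
    """Minimum safe longitudinal gap (meters), RSS-style.

    v_rear, v_front -- speeds of following and lead vehicle, m/s
    rho             -- response time of the following vehicle, seconds
    a_accel_max     -- worst-case acceleration of the follower during rho, m/s^2
    a_brake_min     -- braking the follower is guaranteed to achieve, m/s^2
    a_brake_max     -- assumed worst-case braking of the lead vehicle, m/s^2
    """
    # Worst case: the follower accelerates for rho seconds before braking,
    # while the lead car brakes as hard as we assume it can.
    v_after_rho = v_rear + rho * a_accel_max
    d = (v_rear * rho
         + 0.5 * a_accel_max * rho ** 2
         + v_after_rho ** 2 / (2 * a_brake_min)
         - v_front ** 2 / (2 * a_brake_max))
    return max(d, 0.0)

# Both cars at 20 m/s (~45 mph). The only change between the two calls is
# the assumed lead-car braking: the "picked number" 9.8 m/s^2 vs. the
# Porsche-like 12.57 m/s^2 from the table above.
gap_moderate = rss_min_following_distance(20.0, 20.0, 1.0, 3.0, 4.0, 9.8)
gap_sports   = rss_min_following_distance(20.0, 20.0, 1.0, 3.0, 4.0, 12.57)
```

With these assumed parameters, the required gap grows by several meters the moment the lead car is credited with stronger brakes, which is exactly why “picking the number” matters.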
Meanwhile, it is certainly possible to design an AV that drives naturally, like a human, but that AV will not be able to provide a perfect safety assurance. Is that acceptable?
So has the industry determined the ideal following distance for AVs? Intel/Mobileye has been asking, but Weast said, “I haven’t met anybody who’s got the answer.” In fact, it turns out that nobody wants to pick a number. Apparently, according to Weast, most regulators told the Intel/Mobileye team, “We're not going to pick the number. That's not what we do.”
But that’s not exactly accurate. Given that regulators decide on a speed limit for every new road, Weast told them, “You actually do pick a number. You balance the usefulness and throughput of traffic on that road with safety.”
In fact, if regulators fixed the maximum speed on every road in the world at 20 miles an hour, “you know, we can probably get rid of a lot of accidents…maybe millions,” Weast suspected. “But we don’t want to do that because we lose the efficiency of the transportation system.”
Responsibility Sensitive Safety (RSS)
Of all the players in the AV industry, Intel/Mobileye has probably given the most thought to safety vs. utility. This is largely because its team pioneered the development of Responsibility-Sensitive Safety (RSS), a mathematical model for autonomous vehicle safety. Weast called RSS “a valuable tool,” providing a mathematical model and parameterized scenarios. Intel/Mobileye has used RSS to consult with regulators and industry players on AV safety.
Weast offered a couple of examples to demonstrate that teaching AVs to do the right thing could get complicated even in seemingly run-of-the-mill driving: a residential neighborhood with no sidewalks and a 25-mph speed limit.
There, “it’s reasonable for the automated vehicle to assume that a pedestrian could move in any direction because we're both sharing the road,” said Weast. “As human drivers, we intuitively know that, and we usually slow down even though the speed limit is 25.”
Weast offered another scenario. “It's a 45-mile an hour road. Let's say it's four lanes, two in each direction with kind of a suicide lane in the middle… and let's say there are sidewalks.”
A human driving in this situation can comfortably go 45 miles an hour, even with pedestrians present, because it’s reasonable to assume that a pedestrian won’t jump into the road … “because there is a sidewalk.”
But could someone jump into the road? “Absolutely, they could,” Weast said.
To build the safest automated vehicle, though, he asked, “Should we assume that the pedestrian will stay on the sidewalk or do we assume the pedestrian can move in any of 360 degrees like we would assume in that residential neighborhood where there is no sidewalk?”
From a safety standpoint, it might be better for the AV to assume that anything could happen. The result, however, is an AV that never gets up to 45 miles per hour.
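A back-of-the-envelope stopping-distance calculation shows why. The sketch below (a hedged illustration; the reaction time and braking values are assumptions, not anyone’s specification) solves for the highest speed at which a car can still stop within a given clear distance, i.e. the distance at which a pedestrian might conceivably enter the road.

```python
import math

def max_safe_speed(clear_distance, rho, a_brake):
    """Highest speed (m/s) at which the car can stop within clear_distance,
    given reaction time rho (s) and braking a_brake (m/s^2).

    Solves v*rho + v**2 / (2*a_brake) = clear_distance for v
    (reaction-time travel plus braking distance equals the clear distance).
    """
    a = 1.0 / (2 * a_brake)          # quadratic coefficient of v**2
    disc = rho ** 2 + 4 * a * clear_distance
    return (-rho + math.sqrt(disc)) / (2 * a)

# Assume a pedestrian could step out from the curb only 3 m ahead
# (the residential-street, move-in-any-direction assumption):
v_near = max_safe_speed(3.0, 1.0, 4.0)    # roughly 2.3 m/s, about 5 mph

# Assume the sidewalk guarantees 30 m of clearance:
v_far = max_safe_speed(30.0, 1.0, 4.0)    # 12 m/s, about 27 mph
```

Under the stricter assumption the car crawls; even the generous assumption still caps speed well short of 45 mph (about 20 m/s), which is the tradeoff Weast describes.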
In Weast’s opinion, this is why the industry, government and society “need to have an honest conversation.” In Weast’s two simple scenarios, it’s not clear what the automated vehicle can consider “reasonable” behavior.
Obviously, we want AVs that are useful. But will regulators, industry and the public accept AVs that might not be able to offer perfect safety?
Weast asked: Where do you draw the line? What’s the number (of a safe following distance, for example)? What’s reasonable for the AV to assume?
These are “unspoken things that a lot of folks don't want to address because it's really hard to answer,” Weast said.
Driving data sets from human drivers
To start a dialogue on questions nobody yet seems able to answer, Intel/Mobileye believes more data might help. It is funding research into naturalistic driving data from human drivers. The idea is to take the RSS safety model, analyze human driving, and extract the parameters drivers actually use. “We can plug those numbers into the RSS model and to see if we can use the same numbers that human drivers are using.” Repeating this research across different geographies can give the industry a starting point for a comparative discussion.
Presumably, safety could be dialed up, or down, in increments. Running the resulting numbers through a traffic simulator could help to figure out various impacts on the road. Weast acknowledged that this will not produce the magical answer. But he added, “Getting more data would help us get our minds wrapped around this challenge.”
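One hedged sketch of what “extracting the parameters” could look like: invert the RSS-style gap formula so that, given a gap a human driver actually maintained, we solve for the worst-case lead-car braking that gap implicitly assumes. This function and its parameter values are illustrative assumptions, not the actual Intel/Mobileye or partner methodology.

```python
def implied_lead_braking(gap, v_rear, v_front, rho, a_accel_max, a_brake_min):
    """Invert the RSS-style gap formula: given a gap (m) a human driver
    actually kept, solve for the worst-case lead-car braking (m/s^2) that
    gap implicitly assumes.

    Returns None when the gap is large enough to be safe even if the lead
    car could stop instantly (no finite braking assumption is implied).
    """
    # Distance the follower needs independent of the lead car's braking:
    # travel during the response time plus its own braking distance.
    buffer = (v_rear * rho
              + 0.5 * a_accel_max * rho ** 2
              + (v_rear + rho * a_accel_max) ** 2 / (2 * a_brake_min))
    if gap >= buffer:
        return None
    # Solve buffer - v_front^2 / (2*a) = gap for the implied braking a.
    return v_front ** 2 / (2 * (buffer - gap))

# A driver holding a ~67 m gap at 20 m/s (with assumed rho = 1 s,
# a_accel_max = 3 m/s^2, a_brake_min = 4 m/s^2) is implicitly assuming
# the lead car won't brake harder than about 9.8 m/s^2:
implied = implied_lead_braking(67.217, 20.0, 20.0, 1.0, 3.0, 4.0)
```

Run over a whole naturalistic dataset, the distribution of such implied values is the kind of “human parameter” that could then be fed back into the RSS model and a traffic simulator.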
Intel/Mobileye’s research partner is the Karlsruhe Institute of Technology. In 2012, Professor Christoph Stiller’s team presented a new dataset, now known as the KITTI Vision Benchmark Suite, originally captured from a Volkswagen station wagon for use in mobile robotics and autonomous driving research.
Along with KITTI, Intel/Mobileye also plans to leverage data from the Virginia Tech Transportation Institute (VTTI), which conducted the 100-Car Naturalistic Driving Study.
Weast explained, “We have tools and algorithms, which we could then apply to any naturalistic driving data set from anywhere in the world.” The goal is to “come up with different human parameters for different regions.” Indeed, driving is cultural: people in Germany drive differently from those in China. In Weast’s view, the opportunity here is “to understand the human parameters” his team could plug into the RSS model.
By comparing what automated vehicles are implementing with how humans are driving, Weast hopes society might create more formal rules, even determining reasonable AV speed limits.
Asked if anyone else is doing anything similar, Weast said, “Not that we've seen, honestly.”
Weast wants regulators and the industry to explore:
- How do we want autonomous vehicles to operate?
- What can autonomous vehicles assume about the behavior of others on the road?
- And what happens if an AV crashes while doing exactly what it was supposed to do, following its proper mathematical assumptions, only to encounter someone who violated those assumptions?
For sure, Intel/Mobileye is going places where no car has gone before. Weast said, “We have work to do but we're happy to be leading this.”