Why is self-driving so hard and so complex? Humans have walked on the moon, split the atom and flown faster than the speed of sound. Yet despite the best efforts of our smartest engineers, backed with many billions of dollars from our wisest VCs and promoted with the passion of our most enthusiastic optimists, self-driving continues to elude us. Why?
2020 will go into the record books not only as the Covid-19 year, but also as the year when the invincibility of the Masters of the AV Universe started to wane. Karl Iagnemma, founder of nuTonomy and currently president and CEO of a joint venture between South Korea’s Hyundai and Aptiv, says he didn’t expect AV to be this hard.
“Vehicles are these massively complex systems, and to [build self-driving cars], we need to integrate them with another very complex system and do it in a way that’s reliable and cost-optimized. It’s really, really hard,” says Iagnemma. “I think that’s one of the things that most players in the industry underappreciated, myself included.”
Let’s see if we can help Karl understand why the self-driving problem is really, really hard and what the challenge is for the autonomous vehicle (AV) industry. Let’s start with Swiss watches. Take a look at the picture below and answer this question: Is a watch mechanism complex or complicated?
Did you zoom right in to take a really good look at the precision engineering? Beautiful, isn’t it? Just remember, that watch mechanism fits on your wrist and weighs a couple of ounces. With maintenance, the mechanism could last hundreds of years and the operation of every moving part will never change. There is no uncertainty and it works like, well, clockwork. In other words, it’s complicated.
The AV industry seems not to have paid attention to the well-understood and well-documented differences between complex and complicated systems. This isn’t a new field of study and this piece in Harvard Business Review explains the differences succinctly.
In particular, complicated systems “have many moving parts, but they operate in patterned ways. The electrical grid that powers the light is complicated: There are many possible interactions within it, but they usually follow a pattern. It’s possible to make accurate predictions about how a complicated system will behave. For instance, flying a commercial airplane involves complicated but predictable steps, and as a result it’s astonishingly safe. Implementing a Six Sigma process can be complicated, but the inputs, practices, and outputs are relatively easy to predict.”
Rail and air travel can be thought of as complicated systems, just as a Swiss watch or an individual car is complicated. Conversely, road travel — encompassing the ever-changing interrelationships of cars, trucks, bicyclists and pedestrians — is complex.
Financial writer Jim Rickards explains complexity theory in detail in his book Currency Wars, and the characteristics of complexity are excellently summarized here.
Rickards was general counsel for hedge fund Long-Term Capital Management (LTCM) in 1998 when LTCM almost caused a catastrophic collapse of the global financial system. Among other subjects, Rickards’ books document in detail his understanding of complexity theory and how it relates to financial markets.
What piqued my interest in that story was that the techniques used by LTCM to optimize automated trading algorithms and the methods used by AV tech companies to train self-driving AI are essentially the same. LTCM used huge amounts of computing power, coupled with vast quantities of historical bond price data to build a probabilistic predictive trading model.
The model succeeded spectacularly, outperformed humans and generated stellar trading returns for several years, right up to the point at which it didn’t, and then it collapsed catastrophically. Twenty-two years after the demise of LTCM, and despite the remarkable improvements in computing power and advances in AI and deep learning since then, it remains impossible to perfectly model complex systems — a lesson the AV industry appears to have overlooked.
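LTCM’s actual models are not public, but the underlying failure mode is easy to sketch: a model fitted to historical data can look excellent within the regime it was trained on, and still fail badly the moment the regime changes. The toy example below (invented numbers and a simple least-squares fit, standing in for any probabilistic predictive model) illustrates the point.

```python
import random

random.seed(0)

# "Historical" regime: a stable linear relationship plus small noise.
x_hist = [random.uniform(0, 10) for _ in range(500)]
y_hist = [2.0 * x + random.gauss(0, 0.5) for x in x_hist]

# Ordinary least-squares fit of y = a*x + b (stdlib only, no libraries).
n = len(x_hist)
mx = sum(x_hist) / n
my = sum(y_hist) / n
a = sum((x - mx) * (y - my) for x, y in zip(x_hist, y_hist)) / \
    sum((x - mx) ** 2 for x in x_hist)
b = my - a * mx

def predict(x):
    return a * x + b

def mean_abs_err(xs, ys):
    return sum(abs(predict(x) - y) for x, y in zip(xs, ys)) / len(xs)

# Fresh data from the same regime: the model generalizes well.
x_new = [random.uniform(0, 10) for _ in range(200)]
y_new = [2.0 * x + random.gauss(0, 0.5) for x in x_new]
err_in = mean_abs_err(x_new, y_new)

# Regime change: the underlying relationship flips, as in a market crisis.
y_shift = [-2.0 * x + 30.0 + random.gauss(0, 0.5) for x in x_new]
err_out = mean_abs_err(x_new, y_shift)

print(f"in-regime error:  {err_in:.2f}")
print(f"post-shift error: {err_out:.2f}")  # error explodes after the shift
```

The same dynamic applies to a driving policy trained on historical road data: it can perform superbly on the situations it has seen, right up until the complex system serves up one it hasn’t.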
What might this understanding of complex and complicated systems tell us about the nature of the problems faced by the AV developers, and how might it explain their pivot away from robotaxi development toward other forms of AV?
Let’s have a look at some developments:
What do these developments have in common? They all focus operation into a domain that is much less complex and much more predictable. Equally, proposals to alter transportation policy, for example by introducing AV-only lanes or banning humans from driving altogether, merely seek to change public roads and highways from complex to complicated systems. That’s not so much creating solutions to real-world problems as trying to fix the outcome to suit yourself.
The best of both worlds
What does experience teach us about the relative capabilities of humans and machines? That machines are vastly better than humans at highly predictable, repetitive and monotonous tasks. If you want an airplane to maintain level flight at 33,000 feet, at a speed of 500 knots and on a heading of 270 degrees for four hours, a machine is superior to a human pilot at that task. Let’s call that skills and rules.
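That “skills and rules” layer is exactly the kind of task a simple control loop handles well. Here is a toy proportional-integral altitude hold as a sketch of the idea — the gains, time step and idealized climb dynamics are all invented for illustration, and a real autopilot is vastly more sophisticated:

```python
def hold_altitude(target_ft, initial_ft, steps=2000, dt=0.1, kp=0.5, ki=0.05):
    """Toy PI controller: repeatedly measure error, command a climb rate."""
    altitude = initial_ft
    integral = 0.0
    for _ in range(steps):
        error = target_ft - altitude          # how far off are we?
        integral += error * dt                # accumulate persistent error
        climb_rate = kp * error + ki * integral  # controller output (ft/s)
        altitude += climb_rate * dt           # idealized aircraft response
    return altitude

# Start 500 ft below the assigned altitude and let the loop converge.
final = hold_altitude(target_ft=33000, initial_ft=32500)
print(f"altitude after hold: {final:.1f} ft")
```

The loop never tires, never gets distracted, and will repeat the same correction a million times without complaint — which is precisely why machines dominate this class of task.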
In comparison, in circumstances which are unpredictable, uncertain, or unknown, humans are much more adept than machines. Never has this been better demonstrated than by Capt. Chesley “Sully” Sullenberger landing Flight 1549 on the Hudson River following a dual engine failure caused by a bird strike. No probabilistic deep learning algorithm would have come up with that solution. Let’s call that knowledge and expertise.
Missy Cummings discusses skills, rules, knowledge and expertise in her paper “Rethinking the maturity of artificial intelligence in safety-critical settings”.
One of the conclusions reads:
While AI augmentation of humans in safety-critical systems is well within reach, this success should not be mistaken for the ability of AI to replace humans in such systems. Such a step is exponential in difficulty and with the inability of machine learning, or really any form of AI reasoning, to replicate top-down reasoning to resolve uncertainty, AI-enabled systems should not be operating in safety critical systems without significant human oversight.
Road networks are complex, and we are now witnessing the reality of the AV industry failing to develop AI and deep learning that can navigate the uncertainty inherent in complex systems. Just as hedge fund LTCM was defeated by complexity in financial markets in 1998, so the outcome for the AV industry is likely to be similarly unfortunate.
The long-term safety-first future that I see for the auto industry is a human/machine collaboration: Humans responsible for the driving task, with machines backing them up firstly by monitoring for distraction and fatigue; and secondly by providing longitudinal and lateral assistance to correct for minor control errors, in the form of automated braking and lane-keeping systems.
Thus, driver monitoring systems and ADAS look to be the future, as legislation already passed in Europe and currently passing through Congress would suggest. Expect to see AV technology adopted in slower, simpler, closed-campus applications many years, and probably even decades, before it is tested and validated for safe use in privately-owned passenger vehicles on public roads.
To sum up, cars are complicated, but roads are complex. It’s as regular as clockwork. Perhaps someone could tell Elon?