For the foreseeable future, AI developers must proceed with caution in building safe, robust automated systems.
First, a few perspectives on AI: the label is a misnomer. AI is neither artificial nor intelligent, yet the name implies it is analogous to human intelligence. AI cannot recognize things without extensive human training, and it follows completely different logic from humans in recognizing, understanding and classifying objects or scenes.
AI often lacks any semblance of common sense, can be easily fooled or corrupted and can fail in unexpected and unpredictable ways. In other words—proceed with caution.
This column looks at how AI technologies are affecting the automotive industry, from AI's development phases and drawbacks to safety, regulation and its current, emerging and future automotive uses.
AI development has three phases: build AI models, train the models using relevant data and, lastly, use the trained model to solve problems (the inferencing stage).
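The three phases can be sketched with a toy example. Everything below (a tiny logistic-regression "model," synthetic data, the learning rate) is a hypothetical illustration of the build/train/infer lifecycle, not any production AV pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Phase 1 - build: define a tiny logistic-regression "model" (weights + bias).
w, b = np.zeros(2), 0.0

# Phase 2 - train: fit the model on labeled data (a linearly separable toy set).
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # labels, standing in for human labeling
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))       # predicted probabilities
    grad_w, grad_b = X.T @ (p - y) / len(y), np.mean(p - y)
    w, b = w - 0.5 * grad_w, b - 0.5 * grad_b

# Phase 3 - infer: use the trained model to classify new inputs.
X_new = np.array([[2.0, 2.0], [-2.0, -2.0]])
preds = (1 / (1 + np.exp(-(X_new @ w + b))) > 0.5).astype(int)
print(preds)  # [1 0]
```

The point of the sketch is the separation of concerns: building and training happen once, with heavy data and compute demands, while inferencing runs repeatedly in the vehicle.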
Most AI models are based on multiple versions of neural networks and learning networks. Examples include convolutional neural networks, generative adversarial networks, deep reinforcement learning, federated learning, transfer learning and others. Each brings different advantages and drawbacks. All are evolving rapidly.
The table below summarizes the advantages and drawbacks of AI technologies along with safety considerations and proposed regulations.
AI is primarily used to solve complex problems. Since the auto industry has plenty of difficult problems, AI is playing a growing role in advancing auto technology. The promise of deploying AVs is primarily dependent on new AI technology. There seems to be near consensus that neural network advances are the leading approach for reaching future AV deployment success.
The good news is that AI, and especially neural network technology, is early in its R&D phase. That implies future advances are ahead, with breakthrough innovations expected. With extensive AI investments continuing across the globe, it is a good bet that AI and neural networks will solve many more complex problems, including challenges in the automotive industry.
Among the challenges in developing and deploying AI technologies is adequate training of neural networks. In general, the more complex the problem, the more complex the neural network model must be, which implies large models. Designing and testing such models requires vast resources and expertise, along with large data sets to verify that the models work as advertised.
AI models require extensive training, which means acquiring large databases. Larger sets of training data are becoming available, but training remains a time-consuming and expensive task. Most training data also must be labeled by humans for the AI models to learn and become proficient. There is growing concern that biases are creeping into training data.
Then there is the black-box problem: It remains difficult to determine how AI models make decisions. Such obscurity remains a big problem for autonomous systems. Better solutions are needed.
Another issue involves a model’s sensitivity to minor data changes. That vulnerability creates security concerns, including the potential to hack autonomous systems and the resulting threat to AV safety.
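Why tiny input changes matter can be shown with a minimal, hypothetical sketch: for a linear classifier, a small perturbation chosen along the sign of the weights (the idea behind the well-known fast gradient sign method) is enough to flip the decision. The weights, input and perturbation budget below are invented for illustration:

```python
import numpy as np

# Assumed "trained" weights of a linear classifier: positive if w @ x + b > 0.
w, b = np.array([1.0, 1.0]), 0.0

x = np.array([0.10, 0.05])     # original input, classified as positive
eps = 0.2                      # small perturbation budget
x_adv = x - eps * np.sign(w)   # FGSM-style step against the predicted class

print(w @ x + b > 0)           # True  - original input: positive class
print(w @ x_adv + b > 0)       # False - perturbed input: decision flips
```

In image terms, eps corresponds to a change a human would barely notice, which is exactly the security concern for AV vision systems.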
A lack of AI expertise is another big drawback in the auto and other industries, a skills gap that is not likely to be remedied anytime soon.
The problem-solving inference phase also has drawbacks. Large models, especially for AVs, require tremendous computing resources to crunch sensor data and support complex software. Those resources also require power, which is always limited in auto applications.
Emerging technologies will improve capabilities and reduce inferencing costs, including emerging AI chip technology, declining lidar prices and increased sensor performance.
The biggest drawback in the inferencing phase is the black-box problem, or AI explainability. AI systems remain unable to explain how they arrive at decisions, creating a host of AI trust issues. For automotive applications, that’s a non-starter. (I’ll explore issues around AI explainability in a future column.)
Automotive AI requires much greater safety than other consumer segments. Hence, greater emphasis on AI safety R&D is a must. To that end, Georgetown University’s Center for Security and Emerging Technology (CSET) has released a pioneering report examining the unintended consequences of AI and their potential impact.
The CSET report identifies three basic types of AI failures: robustness, specification and assurance failures. A robustness failure means an AI system receives abnormal or unexpected inputs that cause it to malfunction. In a specification failure, the AI system is trying to achieve something subtly different from what the designer intended, leading to unexpected behaviors or side effects. An assurance failure means the AI system cannot be adequately monitored or controlled during operation.
The report released in July includes examples of what unintended AI crashes could look like (the authors prefer the term “accident”), and recommends actions to reduce the risks while making AI tools more trustworthy.
Explainable AI, or XAI, is a method for mitigating the black-box effect, allowing better understanding of which data is required to enhance model accuracy. XAI research sponsored by the Defense Advanced Research Projects Agency seeks to develop machine learning technologies that produce more explainable models while retaining a high level of learning performance and accuracy. XAI would also enable human users to understand, trust and manage AI models. An XAI model could also characterize its own abilities and provide insights into its future behavior.
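A hedged sketch of what explainability can look like in the simplest case: for a linear model, each feature's contribution to a decision is just its weight times its value, so the model can report why it decided as it did. The feature names, weights and input values below are invented for illustration and are not from any real XAI system:

```python
import numpy as np

w = np.array([0.8, -0.3, 0.1])   # assumed trained weights (hypothetical)
x = np.array([1.0, 2.0, 5.0])    # one input sample (hypothetical)

contrib = w * x                  # per-feature contribution to the decision
score = contrib.sum()            # model output before thresholding

# Report the attribution alongside the prediction.
for name, c in zip(["speed", "distance", "light"], contrib):
    print(f"{name}: {c:+.2f}")
print("decision score:", round(score, 2))
```

Real deep networks do not decompose this cleanly, which is precisely why XAI remains an open research area rather than a solved bookkeeping exercise.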
AI and the General Data Protection Regulation are closely tied. GDPR affects AI development in Europe and other regions. The regulation explicitly covers automated, individual decision-making and profiling. The rule protects consumers from the legal consequences of both. Automated, individual decision-making in this case includes decisions made by AI platforms without any human intervention. Profiling means the automated processing of personal data to evaluate individuals.
For automotive applications, this primarily affects content delivery systems and user interfaces.
The European Union is preparing an AI regulation similar to GDPR, a new rule that is likely to have as broad an impact as GDPR. A draft proposal representing a legal framework for regulating AI was released in April.
The EU proposal seeks to identify high-risk AI technology and applications in critical infrastructure, such as transportation, that could endanger citizens. This means autonomous vehicles will be a target of AI regulation.
Fines under the proposed EU AI legislation could run as high as €30 million, or 6 percent of a company’s global revenue, whichever is higher. Maximum fines under GDPR are €20 million, or 4 percent of global revenue.
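The "whichever is higher" rule works out as a simple maximum of the fixed cap and the revenue share. A quick illustrative calculation (the €10 billion revenue figure is hypothetical):

```python
# Maximum fine: the higher of a fixed cap or a percentage of global revenue.
def max_fine(revenue_eur: float, cap_eur: float, pct: float) -> float:
    return max(cap_eur, pct * revenue_eur)

revenue = 10_000_000_000  # hypothetical company with €10B global revenue

print(max_fine(revenue, 30_000_000, 0.06))  # proposed EU AI rule: 600000000.0
print(max_fine(revenue, 20_000_000, 0.04))  # GDPR:                400000000.0
```

For large automakers the percentage term dominates, which is why the 6 percent figure matters far more than the €30 million cap.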
The table below summarizes AI technology integrated with auto electronics. Not included are AI used in auto manufacturing, supply chain management, quality control, marketing and similar functions where AI is making significant contributions.
Decisions generated by neural networks must be understandable. If not, it is hard to comprehend how they work and correct errors or bias.
Neural network decisions also must be stable, that is, remain unchanged despite minor differences in visual data. This is especially important for AVs. Small strips of black and white tape on stop signs can make them invisible to AI-based vision systems. That’s an example of unacceptable neural network performance.
AV applications require better technology for understanding edge cases, the new driving events not covered in a software driver’s previous training. This remains a key limiting factor for deploying AV systems in volume.
Current AI use
Speech recognition and user interfaces have been the most successful AI-based applications in automotive. These applications leverage AI technology used in smartphones and consumer electronics for deployment in infotainment and human-machine interfaces. Alexa, CarPlay, Android Auto and similar products have been introduced in most new models and model updates.
Remote diagnostics is a leading telematics application. The addition of AI technology can help predict future device failures, for example.
AI-based vision systems are used in driver monitoring systems for ADAS-equipped cars. DMS is expected to see rapid growth with improved AI technology.
Many ADAS functions also use AI technology, from adaptive cruise control to multiple versions of parking assist. L1 and L2 ADAS vehicles will use increasing amounts of AI technology in new models.
Emerging AI use
Limited driving pilots are emerging from multiple OEMs. They are often called L2+, but that terminology is not included in current standards. Calling them autopilots is a mistake since it confuses consumers and implies more capability than exists. And they have caused crashes.
L3 vehicles have been available for several years, but deployment has been limited due to regulatory restrictions. Regulations allowing L3 AVs are emerging, and L3 vehicles use much AI technology.
Both OTA software and cybersecurity functions are adding AI technology via embedded software clients along with cloud-based services and analytics software.
An emerging AI application is autonomous vehicle development and testing for multiple AV use cases. About 5,000 AVs are in testing or pilot mode, mostly in China and the U.S. They include goods AVs, autonomous trucks, robo-taxis and fixed-route AVs.
Future AI use
AV use cases are the most valuable and difficult applications for AI technology. The goal is a software driver that is better than the best human drivers with none of the drawbacks of human behavior.
Software development is ripe for AI-based technology improvements. Identifying and fixing software bugs via innovative AI technology is likely to happen in the next decade.
Cybersecurity advances derived from AI technology are perhaps the most pressing need for the automotive and other industries. The requirements are attracting large, ongoing investments.
AI technology has become a major driving force in the automotive industry (pun intended). So far, two companies have led in adopting AI technology in automotive—Nvidia and Tesla. Nvidia is the clear leader in providing chips and software standards for creating and using AI models. Tesla is steadily deploying AI, in particular in its overly ambitious Autopilot.
A future column will address the results of Tesla’s recent AI Day, including groundbreaking efforts aimed at the future of neural network training.
Meanwhile, many more companies are focused on automotive AI: Mobileye is the leader in ADAS advances with AVs on its drawing board; Google-Waymo has pioneered development of software drivers.
As safety concerns grow, AI developers must heed caution signs lest unintended consequences stifle innovation. Topping the list is unlocking the AI black boxes that limit deployment of trusted systems. Bias in training data is also a growing problem, one that is difficult to assess and consequently hard to solve.
AI regulation is on the way from the EU, and other regions will follow.
For the foreseeable future, AI developers must proceed with caution in building safe, robust automated systems.
Egil Juliussen has over 35 years’ experience in the high-tech and automotive industries. Most recently he was director of research at the automotive technology group of IHS Markit. His latest research was focused on autonomous vehicles and mobility-as-a-service. He was co-founder of Telematics Research Group, which was acquired by iSuppli (IHS acquired iSuppli in 2010); before that he co-founded Future Computing and Computer Industry Almanac. Previously, Dr. Juliussen was with Texas Instruments where he was a strategic and product planner for microprocessors and PCs. He is the author of over 700 papers, reports and conference presentations. He received B.S., M.S., and Ph.D. degrees in electrical engineering from Purdue University, and is a member of SAE and IEEE.