AI-Based Xavier tech gets an early introduction to the market
Nvidia is in Munich this week to declare that it is coming after the advanced driver assistance system (ADAS) market. The GPU company is now pushing its AI-based Nvidia Drive AGX Xavier System — originally designed for Level 4 autonomous vehicles — down to Level 2+ cars.
In a competitive landscape already crowded with ADAS solutions provided by rival chip vendors such as NXP, Renesas, and Intel/Mobileye, Nvidia is boasting that its GPU-based automotive SoC isn’t just a “development platform” for OEMs to prototype their self-driving vehicles.
At the company’s own GPU Technology Conference (GTC) in Europe, Nvidia announced that Volvo Cars will be using the Nvidia Drive AGX Xavier for its next generation of ADAS vehicles, with production starting in the early 2020s.
NVIDIA's Drive AGX Xavier will be designed into Volvo's ADAS L2+ vehicles. Henrik Green (left), head of R&D of Volvo Cars, with Nvidia CEO Jensen Huang on stage at GTC Europe in Munich. (Photo: Nvidia)
Danny Shapiro, senior director of automotive at Nvidia, told us, “Volvo isn’t doing just traditional ADAS. They will be delivering wide-ranging features of ‘Level 2+’ automated driving.”
By Level 2+, Shapiro means that Volvo will be integrating “360° surround perception and a driver monitoring system” in addition to a conventional adaptive cruise control (ACC) system and automated emergency braking (AEB) system.
Nvidia added that its platform will enable Volvo to “implement new connectivity services, energy management technology, in-car personalization options, and autonomous drive technology.”
It remains unclear whether car OEMs designing ADAS vehicles are all that eager for the AI-based Drive AGX Xavier, which is hardly cheap. Shapiro said that if any car OEMs or Tier Ones are serious about developing autonomous vehicles, taking an approach that “unifies ADAS and autonomous vehicle development” makes sense. The move allows carmakers to develop software algorithms on a single platform. “They will end up saving cost,” he said.
Phil Magney, founder and principal at VSI Labs, agreed. “The key here is that this is the architecture that can be applied to any level of automation.” He said, “The processes involved in L2 and L4 applications are largely the same. The difference is that L4 would require more sensors, more redundancy, and more software to assure that the system is safe enough even for robo-taxis, where you don’t have a driver to pass control to when the vehicle encounters a scenario that it cannot handle.”
Better than discrete ECUs
Another argument for the use of AGX for L2+ is that the alternative requires the use of multiple discrete ECUs. Magney said, “An active ADAS system (such as lane keeping, adaptive cruise, or automatic emergency braking) requires a number of cores fundamental to automation. Each of these tasks requires a pretty sophisticated hardware/software stack.” He asked, “Why not consolidate them instead of having discrete ECUs for each function?”
Scalability is another factor. Magney rationalized, “A developer could choose AGX Xavier to handle all these applications. On the other hand, if you want to develop a robo-taxi, you need more sensors, more software, more redundancy, and higher processor performance … so you could choose AGX Pegasus for this.”
Is AGX Xavier safer?
Shapiro also brought up safety issues.
He told us, “Recent safety reports show that many L2 systems aren’t doing what they say they would do.” Indeed, in August, the Insurance Institute for Highway Safety (IIHS) exposed “a large variability of Level 2 vehicle performance under a host of different scenarios.” An EE Times story entitled “Not All ADAS Vehicles Created Equal” reported that some [L2] systems can fail under any number of circumstances. In some cases, certain models equipped with ADAS are apparently blind to stopped vehicles and could even steer directly into a crash.
Nvidia’s Shapiro suggested that by “integrating more sensors and adding more computing power” to run robust AI algorithms, Volvo can make its L2+ cars “safer.”
On the topic of safety, Magney didn’t necessarily agree. “More computing power doesn’t necessarily mean that it is safer.” He noted, “It all depends on how it is designed.”
Lane keeping, adaptive cruise, and emergency braking for L2 could rely on a few sensors and associated algorithms while a driver at the wheel manages events beyond the system’s capabilities.
However, the story is different with a robo-taxi, explained Magney. “You are going to need a lot more … more sensors, more algorithms, some lock-step processing, and localization against a precision map.” He said, “For example, if you go from a 16-channel LiDAR to a 128-channel LiDAR for localization, you are working with eight times the amount of data for both your localization layer as well as your environmental model.”
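Magney’s 8x figure follows directly from the channel-count ratio. The back-of-the-envelope arithmetic can be sketched as follows; the points-per-revolution and rotation-rate figures here are hypothetical round numbers chosen for illustration, not specifications of any particular sensor:

```python
# Illustrative LiDAR data-rate scaling: holding the scan pattern fixed,
# point throughput grows linearly with the number of channels.
# All figures are hypothetical round numbers, not real sensor specs.

def lidar_points_per_second(channels, points_per_channel_per_rev, revs_per_second):
    """Total points per second produced by a spinning LiDAR."""
    return channels * points_per_channel_per_rev * revs_per_second

# Same assumed scan pattern; only the channel count changes.
low = lidar_points_per_second(channels=16, points_per_channel_per_rev=1800,
                              revs_per_second=10)
high = lidar_points_per_second(channels=128, points_per_channel_per_rev=1800,
                               revs_per_second=10)

print(low)         # 288000 points/s
print(high)        # 2304000 points/s
print(high / low)  # 8.0 -- the 8x factor Magney cites
```

Whatever the absolute rates, the ratio 128/16 = 8 holds, and that multiplier hits both the localization layer and the environmental model, as Magney notes.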
But really, what does Nvidia have that competing automotive SoC chip suppliers don’t?
Magney, speaking from his firm VSI Labs’ own experience, said, “The Nvidia Drive development package has the most comprehensive tools for developing AV applications.”
He added, “This is not to suggest that Nvidia is complete and a developer could just plug and play. To the contrary, there is a ton of organic codework necessary to program, tune, and optimize the performance of AV applications.”
However, he concluded that, in the end, “you are going to be able to develop faster with Nvidia’s hardware/software stack because you don’t have to start from scratch. Furthermore, you have DRIVE Constellation for your hardware-in-loop simulations where you can vastly accelerate your simulation testing, and this is vital for testing and validation.”
— Junko Yoshida, Global Co-Editor-In-Chief, AspenCore Media, Chief International Correspondent, EE Times