Bosch and Daimler intend to mass-produce autonomous taxis early next decade
A German tag team of tier-one supplier Bosch and OEM Daimler announced Tuesday that they have chosen Nvidia as the AI platform for development of a robotaxi scheduled for mass production in the early 2020s.
The announcement connects previously revealed separate partnerships and collaborations, solidifying the three-way alliance.
Danny Shapiro, Nvidia’s senior director of automotive, told EE Times in a phone interview, “This isn’t a transactional deal… such as Nvidia supplies chips to Bosch, and Bosch provides modules to Daimler. This is a strategic partnership, in which each company has a distinct role to play.”
Specifically, the two German companies will jointly deploy Nvidia’s Drive Pegasus platform for “machine-learning methods in generating vehicle-driving algorithms,” according to Bosch. Nvidia will provide its Drive Pegasus-based platform, including “high-performance AI automotive processors along with system software,” Bosch added.
The alliance is focused on bringing highly automated vehicles to urban streets, seeking to populate cities with robotaxis.
Phil Magney, founder and principal at VSI Labs, told EE Times, “We have known about the Bosch, Daimler, and Nvidia stuff for a while now. Now it is official that the architecture will be Nvidia Pegasus and Bosch will be the tier one.”
He noted that a handful of robo-taxi companies have announced plans to use the Pegasus. However, this Bosch-Daimler-Nvidia announcement is “the first OEM/Tier-one partnership with hardened plans to design their robo-taxi around the Drive Pegasus architecture,” he explained.
What about the Daimler-Xilinx deal?
As reported previously, Xilinx and Daimler are working on an AI-based solution, but the two companies have provided no details on it.
Magney suspects that these are two separate programs at Daimler. He explained, “Like many OEMs, when it comes to automated driving there are at least two different programs going on. One being automated driving (L2/L3) for series production, and the other for robo-taxis (L4+). It is possible that Nvidia is being used for the robo-taxi program while Xilinx is being used for the ADAS or incremental (L2+) programs.”
Further, he added, “Also, Xilinx and Nvidia are not necessarily mutually exclusive. An automated driving stack has multiple threads going on at once for different tasks within the AV stack. Some things are better suited for the GPU architecture while others might be better suited for FPGAs.”
Bosch, a leading supplier of sensors and automotive parts, started using a prototype of Nvidia’s Xavier to develop AI-based AV modules last year. The deal between Bosch and Nvidia was announced in March 2017.
Based on its own experience in developing sensor processing units for AVs, Bosch estimated the ECU network for automated urban driving must be able to handle “hundreds of trillions of operations per second.”
In fusing the sensory data collected and transmitted by radar, video, lidar and ultrasonic sensors, the ECU network is expected to do everything from assessing information (including object detection and map localization) to planning the trajectory of a vehicle “within just 20 milliseconds,” according to Bosch.
Of course, the amount of sensor data such an ECU must deal with is enormous. One video sensor alone, such as Bosch’s stereo video camera, generates 100 gigabytes of data over just one kilometer of driving, according to Bosch. The pressure on the ECU network to quickly process such combined data is also intense, because safety depends on that processing speed. This is part of the reason why Bosch and Daimler chose Nvidia: they believe the Drive Pegasus platform can keep pace with their specified computing requirements.
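To put Bosch’s figures in perspective, here is a back-of-envelope sketch of the sustained data load. The 100-GB-per-kilometer camera output and the 20-millisecond planning budget come from Bosch; the 50 km/h average urban speed is an assumption added for illustration.

```python
# Rough sensor-data-load estimate from the figures Bosch cites.
GB_PER_KM = 100      # stereo video camera output per kilometer, per Bosch
SPEED_KMH = 50       # assumed average urban speed (not from the article)
CYCLE_S = 0.020      # 20 ms sensing-to-planning budget, per Bosch

# Convert speed to km/s, then scale the per-kilometer data volume.
speed_km_per_s = SPEED_KMH / 3600            # ~0.0139 km/s
data_rate_gb_s = GB_PER_KM * speed_km_per_s  # sustained camera data rate
data_per_cycle_mb = data_rate_gb_s * CYCLE_S * 1000

print(f"Sustained camera data rate: {data_rate_gb_s:.2f} GB/s")
print(f"Data arriving per 20 ms cycle: {data_per_cycle_mb:.1f} MB")
```

Under these assumptions, one camera alone streams roughly 1.4 GB of data per second, of which about 28 MB lands within each 20-millisecond decision cycle, before radar, lidar, and ultrasonic inputs are even counted.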
What’s inside the Pegasus platform?
Assuming that massive computing performance is necessary to run multiple complex algorithms in parallel and execute them within milliseconds, what kind of hardware is necessary?
Meet Drive Pegasus. Nvidia boasts that Pegasus, consisting of two Xavier SoCs and two yet-to-be-announced “next-generation” GPUs, delivers 320 TOPS (trillions of operations per second) to “handle diverse and redundant algorithms.” More importantly, Shapiro stressed, “Pegasus offers the most energy efficient solution at one TOPS per watt.”
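The two published figures imply a couple of numbers worth spelling out. This is simple arithmetic on Nvidia’s quoted specs (320 TOPS, one TOPS per watt) combined with Bosch’s 20-millisecond budget; nothing here beyond those three figures is from the companies.

```python
# Arithmetic on Nvidia's quoted Pegasus figures.
TOPS = 320           # trillions of operations per second, per Nvidia
TOPS_PER_WATT = 1.0  # efficiency figure quoted by Shapiro
CYCLE_S = 0.020      # Bosch's 20 ms decision budget

# Dividing throughput by efficiency gives the implied power envelope.
power_w = TOPS / TOPS_PER_WATT

# Scaling throughput by the cycle time gives the compute available
# to the whole AV stack within one decision window.
ops_per_cycle = TOPS * 1e12 * CYCLE_S

print(f"Implied power envelope: {power_w:.0f} W")
print(f"Operations available per 20 ms cycle: {ops_per_cycle:.1e}")
```

In other words, the quoted efficiency implies a power envelope on the order of 320 W for the board, with roughly 6.4 trillion operations available to the stack in each 20-millisecond window.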
Asked about the new GPU inside Pegasus, Shapiro declined to provide details, noting that the company has not disclosed it yet.
Shapiro also emphasized that Nvidia designed safety into its Xavier SoC “from the ground up.” The company designed safety technology throughout the AV computing platform from hardware to software stack, concentrating on tools and methods to “create software that can perform as intended, reliably and with backups.”
Further, Shapiro pointed out that Nvidia invited TÜV SÜD, a German company that assesses compliance to standards, to perform a safety concept assessment of Nvidia’s Xavier SoC.
Nvidia quoted TÜV SÜD’s lead assessor stating, “Our in-depth technical assessment confirms the Xavier SoC architecture is suitable for use in autonomous driving applications and highlights Nvidia’s commitment to enable safe autonomous driving.”
Bugaboo for AI-enabled vehicles
Nevertheless, there is this lingering question. Can AI be trusted to do the right thing when driving in a real-world environment? The difficulty of verification and validation on AI-enabled vehicles remains a bugaboo for numerous safety experts.
To that end, Mobileye is proposing what it calls a “responsibility-sensitive safety (RSS)” model. Under the RSS model, Mobileye is introducing a deterministic system to compensate for “a probabilistic AI system.”
Asked if Nvidia is planning to embrace RSS, Shapiro said, “That requires a much longer talk.” For now, he said, “we are planning to incorporate checks and balances in our system” – most likely, similar to a “checker/doer” approach.
— Junko Yoshida, Chief International Correspondent, EE Times