Xilinx and Daimler Team Up

Article By : Junko Yoshida, EE Times

Where exactly do FPGAs fit in the ever-electrifying automotive landscape?

SAN FRANCISCO — While AI processors and AI-ready SoCs are getting all the attention from the investment community because they are deemed critical to emerging highly automated vehicles, what about FPGAs? Where do they stand in the AI silicon race?

In a move to reaffirm its long-standing contributions to the automotive industry and future roles that FPGAs are expected to play inside autonomous vehicles, Xilinx disclosed a collaboration with Daimler AG to develop “ultra-efficient AI solutions” for future Mercedes-Benz models.

According to the two companies, Daimler is building an “in-car system” using Xilinx technology for AI processing in automotive applications.

Details of the partnership, however, remain sketchy.

Asked about the timing for the system launch or exclusivity of the Xilinx-Daimler deal (“Did Xilinx unseat competing chip vendors like Mobileye or Nvidia?”), Willard Tu, senior director for automotive at Xilinx, declined to comment.

Mike Demler, senior analyst at the Linley Group, pointed out, “Daimler has ‘selected’ Mobileye and Nvidia, too.” He sees Xilinx’s announcement on Daimler as more or less “boiler-plate PR.” But he also added, “It will be interesting to see what [Daimler’s] ‘in-car system’ turns out to be.”

FPGAs, though, are more popular in automotive than most people realize, according to Phil Magney, founder and principal of VSI Labs.

“While everyone would love to claim ownership of an ASIC, the majority of cutting-edge processing is done with FPGAs that give you the chance to apply your proprietary instruction sets on a compute-efficient platform,” Magney told EE Times. “ASICs are nice, but before you lock down your instruction sets, you are going to try lots of variations. FPGAs accommodate changes on the fly, so you can tweak your instruction sets and try new things.”

Automotive heritage
Speaking of the company’s heritage in the automotive market, Tu explained that Xilinx FPGAs originally got into automotive infotainment systems as glue logic before the company found its sweet spot in the ADAS market. Tu stressed that FPGAs are best suited for handling increasingly complex ADAS functions and automated driving.

Where FPGAs are designed into a vehicle. (Source: Xilinx)

In 2014, Xilinx chips were adopted by 14 car makers and designed into 29 models. By 2018, Xilinx chip solutions extended their reach to 29 makes in 111 models, according to the company.

Unlike a host of AI processor startups that are just getting into the autonomous vehicle market, Tu said, “At Xilinx, we do understand what it takes to deliver automotive quality. We’ve been shipping our products for the automotive market for a long time.”

In the ADAS market, Xilinx’s FPGAs have proven instrumental in processing complex sensory data from a variety of sensors, including image sensors, lidar, and radar.

Xilinx, in fact, has the second-largest share in the automotive computer vision processing market, next to Mobileye, said Tu. However, he quickly acknowledged that there is a huge gap between No. 1 (Mobileye) and No. 2 (Xilinx). Noting that Xilinx works with a host of tier ones, including Bosch, Magna, and Continental, on image processing, Tu explained, “There are five reasons why they want to work with Xilinx on this.”

First, Xilinx FPGAs allow differentiation for car OEMs who want to run their own proprietary image processing algorithms. In contrast, Mobileye offers a “one-size-fits-all” solution, said Tu.

Second, Xilinx is an “open box” for tier ones, who must deliver and guarantee functional safety compliant with ISO 26262. Mobileye, on the other hand, provides a “black box” — keeping tier ones and OEMs in the dark as to what exactly is going on with Mobileye’s software inside their box.

But if Mobileye guarantees that its black box is compliant with ISO 26262, where is the problem? Tu said, “Mobileye is a chip vendor. Ultimately, if safety issues crop up at the system level, it is the car OEMs who will have to take responsibility, not the chip supplier.”

Third, Xilinx’s image processing solution provides flexibility in terms of where in a vehicle it should be installed, said Tu. It can be put in the front camera, on the windshield, or even in the central module.

Fourth, “We offer scalability,” said Tu. Combined with Arm subsystems including Cortex-A53 and Cortex-R5 cores, Xilinx’s ZU2 through ZU5 devices can add more programmable fabric as applications demand. As new car assessment programs add new requirements every 12 to 16 months, “we can offer more flexibility in our solutions compared to SoCs,” stressed Tu.

Fifth, Xilinx is proud of the FPGA’s adaptability, which goes a long way as the automotive industry’s functional requirements keep changing.

An example of this adaptability is perhaps best described by what Xilinx calls Dynamic Function eXchange (DFX). Suppose a Xilinx chip is used in a Level 3 automated vehicle for driver monitoring. That same chip can be reprogrammed to handle automated valet parking, for example. The programmability allows the personality of the chip to change, explained Tu.
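The idea can be illustrated with a toy sketch: one reconfigurable region of the fabric whose loaded function is swapped at runtime. All names here are hypothetical stand-ins — this is not the actual Xilinx DFX API, which works by loading partial bitstreams into the device.

```python
# Toy model of the Dynamic Function eXchange concept: one device region
# whose "personality" (the function it implements) is swapped at runtime.
# Hypothetical names throughout -- not the real Xilinx DFX API.

class ReconfigurableRegion:
    """Stands in for one partial-reconfiguration region of FPGA fabric."""
    def __init__(self):
        self._personality = None

    def load(self, name, fn):
        # Real DFX would load a partial bitstream here; we just swap
        # the Python callable standing in for the configured logic.
        self._personality = (name, fn)

    def run(self, data):
        name, fn = self._personality
        return name, fn(data)

region = ReconfigurableRegion()

# While driving at Level 3, the region runs driver monitoring.
region.load("driver_monitoring", lambda frame: f"gaze:{frame}")
print(region.run("cam0")[0])  # driver_monitoring

# At the garage, the same silicon is repurposed for valet parking.
region.load("valet_parking", lambda frame: f"path:{frame}")
print(region.run("cam0")[0])  # valet_parking
```

The point of the sketch is that the hardware resource is constant while the function it performs changes, which is what distinguishes the approach from a fixed-function ASIC.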

90% market share in lidars
Xilinx’s other claim to fame is its dominance in the lidar market. Xilinx chips are not only in several major tier ones’ lidars but also in lidars designed by most startups, according to Tu. Considering how many different technologies are implemented across lidar designs, and that the churn will continue, it is understandable that many lidar vendors are turning to Xilinx’s programmable solutions.

“Apparently, lots of lidars use FPGAs, especially ones that are intelligent and have some unique processing requirements,” said Magney. “FPGAs allow the lidar maker to update and tweak their processing requirements. And because most lidar vendors are new companies, an ASIC is not a practical solution at this stage.”

But in the bigger scheme of things, Demler questioned how significant that 90% market share in lidars really is. He noted, “It’s a pretty small market … it has to be less than 1 million units per year.” He added, “I’m speculating that Xilinx probably has a relationship with Velodyne, and they’re extrapolating.”

In Demler’s view, the lidar market today is “predominantly mapping and industrial applications, not automotive.”

Processing at the edge?
While the lidar market may still be minuscule, Xilinx might have a bigger role to play in sensory data processing at the edge in general.

“We are also seeing some movement of processing at the edge where sensors will handle some processing of the data,” said Magney. “Lidar and radar, for example, generate massive amounts of data, so there is some movement to put some of the processing in the sensor module, particularly for ADAS.”

Magney noted, “Xilinx FPGAs claim an important part of the value chain for ADAS solutions and may be well-suited for processing rich imaging radar, which is pretty cutting-edge.”

From ADAS to AV
While the FPGA is doing well in the ADAS market, Tu believes that FPGA’s fundamental advantage — “low latency and higher throughput” — will truly shine in the highly automated vehicle market.

When GPUs carry out deep-learning inference, they must gather data into large batches to keep their single-instruction, multiple-data (SIMD) units busy. In hopes of doing more computing and less fetching, the industry has tried developing ever-wider SIMD architectures. But register files can only be made so wide.

In contrast, FPGAs do “batch-less” inference, said Tu, which results in “low and deterministic latency, higher throughput regardless of batch size, and consistent compute efficiency.”
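A back-of-envelope calculation shows why batching hurts latency. The numbers below are illustrative only, not vendor benchmarks: a batching accelerator must wait for a batch to fill before any sample in it completes, while a batch-less pipeline starts on each sample as soon as it arrives.

```python
# Illustrative latency arithmetic (made-up numbers, not benchmarks).
# Batched: the first sample to arrive waits for (batch - 1) more
# arrivals before compute can even start. Batch-less: compute starts
# immediately, so worst-case latency is just the compute time.

def batched_latency_ms(arrival_ms, batch_size, compute_ms):
    """Worst-case latency of the earliest sample in a full batch."""
    fill_wait = (batch_size - 1) * arrival_ms
    return fill_wait + compute_ms

def batchless_latency_ms(compute_ms):
    """Per-sample latency when no batching is required."""
    return compute_ms

# Camera frames arriving every 33 ms, 10 ms of compute:
print(batched_latency_ms(33, 8, 10))   # 241 -- first frame waits 231 ms
print(batchless_latency_ms(10))        # 10  -- flat, regardless of load
```

The batch-less figure is also deterministic, which matters for safety cases: it does not depend on how quickly the rest of a batch happens to arrive.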

FPGAs offer “batch-less” inference. (Source: Xilinx)

While that may be true in principle, industry observers are reserving judgment on how Xilinx will ultimately stack up against the likes of Nvidia or Mobileye in the AV market.

A few months ago, Xilinx talked up what it calls the adaptive compute acceleration platform (ACAP), which the company claims will “exceed traditional CPU and GPU in performance.”

However, the company is holding back the details, at least for now. Tu told us that the tape-out of ACAP will happen this year, but product shipments to customers won’t start until 2019.

When asked to compare the Xilinx solution against Nvidia or Mobileye, Magney noted, “It is too early to tell. Mobileye is best in class with vision-based algorithms that are hardened against a tightly integrated instruction set (ASIC). Tough to beat from a vision standpoint at present day, but Xilinx will offer an open solution rather than a closed solution, which is attractive.”

Regarding Nvidia, Magney added that “it would be hard to know if Xilinx is any more efficient than Nvidia” without some serious benchmarking. “Nvidia offers a full stack and the most complete SDK, so for developers, the Nvidia stack is more complete at this time.”

But in general, Demler noted, “We see FPGAs being used for AI acceleration in data centers, so I’m not surprised that Xilinx would build on that to develop a more specialized architecture. With their DSPs and parallel architectures, FPGAs offer the kind of computing capability that is well-suited for neural-network acceleration.”

At the end of the day, though, “the challenge versus other solutions is cost and power,” said Demler.

While noting that the Daimler deal is “a significant win for Xilinx,” Magney said that it is hard to know when the end application is going to be made available.

In the press release, Daimler said, “As part of the strategic collaboration, deep-learning experts from the Mercedes-Benz Research and Development centers in Sindelfingen, Germany, and Bangalore, India, are implementing their AI algorithms on a highly adaptable automotive platform from Xilinx.” The company said that Mercedes-Benz “will productize Xilinx’s AI processor technology, enabling the most efficient execution of their neural networks.”

— Junko Yoshida, Chief International Correspondent, EE Times
