Autonomous Driving Development Stuck in the Slow Lane

Article By : Lauro Rizzatti

L3 vehicles were expected to hit the road by the end of the last decade, with early L4 prototypes available this year. So far, only limited rollouts of either have been announced. This article explores the reasons for the delay and envisions what is needed to achieve the objective.

Remember that satisfying exclamation, “Look Ma, no hands!” when you were a child learning to ride a bicycle with no hands on the handlebars? Who wouldn’t want to experience that feeling again, only this time behind the wheel of an automobile?

It could happen. The Society of Automotive Engineers (SAE) formalized the path to self-driving vehicles in Standard J3016, first published in 2014. The standard defines a progression of automation through six levels, from no automation at Level 0 (L0) to full automation with no human intervention at Level 5 (L5) — specifically, no hands or feet (Figure 1).

Figure 1: Autonomous driving progresses through six levels of increasing automation, from “Everything On” to “Mind Off,” for a “Look Ma, no hands or feet” experience. (Source: Society of Automotive Engineers)


The brain behind autonomous-driving vehicles

As you might expect, as the degree of automation moves up the ladder, the complexity of the task grows exponentially. A powerful electronic brain is needed, assisted by a comprehensive set of sensors tasked with collecting massive quantities of data of diverse types. The data must capture the static and dynamic objects surrounding the moving vehicle, environmental conditions, and geographic coordinates that localize the vehicle, identify its surroundings, and flag both visible and hidden obstacles.

The industry settled on a brain architecture consisting of three stages — perception, motion planning, and motion execution — operating in sequence (Figure 2).

Figure 2: Autonomous-driving vehicles sit on an architecture of three stages known as the automated driving control loop. (Source: Lauro Rizzatti)

In the Perception stage, the autonomous-driving brain perceives the environment around the vehicle by collecting raw data from several types of sensors and processing that data with complex algorithms. Once the Perception stage completes, the Motion Planning stage takes over to make informed decisions and plan the route ahead. Finally, the Motion Execution stage steers the vehicle along the planned route.
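The three-stage loop can be sketched in a few lines of code. This is a minimal illustration of the pipeline structure only; every function name and value below is hypothetical, not drawn from any real autonomous-driving stack.

```python
# Minimal sketch of the three-stage automated driving control loop.
# All names and numeric values here are illustrative assumptions.

def perceive(raw_sensor_data):
    """Perception: turn raw sensor readings into a model of the environment."""
    return {"obstacles": raw_sensor_data.get("radar", []),
            "lanes": raw_sensor_data.get("camera", [])}

def plan_motion(world_model):
    """Motion planning: decide an action and speed given the perceived world."""
    if world_model["obstacles"]:
        return {"action": "brake", "target_speed": 0.0}
    return {"action": "cruise", "target_speed": 25.0}  # m/s

def execute_motion(plan):
    """Motion execution: translate the plan into actuator commands."""
    return {"throttle": 0.3 if plan["action"] == "cruise" else 0.0,
            "brake": 1.0 if plan["action"] == "brake" else 0.0}

# One iteration of the loop: perception -> planning -> execution
sensor_frame = {"radar": [], "camera": ["lane_left", "lane_right"]}
commands = execute_motion(plan_motion(perceive(sensor_frame)))
```

In a real vehicle, this loop runs continuously at a fixed rate, with each stage consuming the previous stage's output from the latest sensor frame.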

At Level 3 and above, the types and number of sensors expand dramatically and include cameras, radar, LiDAR, sonar, infrared, inertial measurement units, and global positioning systems (GPS). At L4, it is estimated that up to 60 sensors will be necessary (Figure 3).

Figure 3: Up to 60 sensors may be necessary for L4 autonomous driving. (Source: TSMC)

Perception is the key stage for achieving Level 3 and above. Advanced data-processing techniques such as sensor fusion combine the massive volumes of data collected by the multitude of sensors in real time to improve the system’s perception of the environment. Failure to accurately understand the environment surrounding the vehicle may compromise the outcome and lead to disaster.
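One simple form of sensor fusion is inverse-variance weighting, where estimates from independent sensors are averaged with weights proportional to each sensor's reliability. The sketch below is a textbook illustration of that one technique, not a description of any production perception stack; the sensor figures are invented for the example.

```python
# Illustrative sensor fusion via inverse-variance weighting:
# the lower a sensor's variance (noise), the more weight its estimate gets.

def fuse(est_a, var_a, est_b, var_b):
    """Combine two independent noisy estimates of the same quantity."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)  # fused estimate is less noisy than either input
    return fused, fused_var

# Hypothetical readings: camera sees an obstacle at 20.0 m (variance 4.0),
# radar sees it at 22.0 m (variance 1.0) -- radar is trusted more here.
distance, variance = fuse(20.0, 4.0, 22.0, 1.0)
```

The fused distance lands closer to the more reliable radar reading, and its variance is lower than either sensor's alone, which is the essential payoff of fusing complementary sensors.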

Autonomous-driving algorithms play a critical role in the Perception stage. Algorithms for processing sensory data are still evolving, and new algorithms are regularly released.

Architectural requirements to achieve L4/L5 autonomous driving

The road to autonomous driving is paved with challenges that impose rigorous, inflexible, and hard-to-meet design requirements. No existing CPU, GPU, or FPGA architecture can meet them all. Instead, a completely new design, conceived from the ground up with an innovative approach, is vital.

Seven requirements stand out:

  1. Massive compute power, efficiently delivered
  2. Very low latency
  3. Minimal energy consumption
  4. Combination of artificial intelligence/machine learning and digital signal processing (DSP) capabilities
  5. Deterministic processing
  6. Reprogrammability
  7. Affordable pricing

All seven are necessary (Figure 4).

Figure 4: Seven fundamental requirements are essential to implementing L4/L5 autonomous-driving vehicles. (Source: Lauro Rizzatti)

Massive compute power, efficiently delivered

Moving up the autonomous-driving ladder, the processing power requirements increase exponentially, from hundreds of gigaFLOPS (GFLOPS) at L1, to tens of teraFLOPS (TFLOPS) at L2, to hundreds of TFLOPS at L3. At L4/L5, the required processing power reaches 1 petaFLOPS (PFLOPS) or more.

More critical still is the ability to deliver that compute power as throughput that is actually usable at any given instant. Said differently, the efficiency of an autonomous-driving processor — sustained throughput as a percentage of theoretical peak — must exceed 80%.
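A quick back-of-the-envelope check makes the efficiency requirement concrete. The throughput figures below are invented for illustration, not measurements of any specific processor.

```python
# Efficiency = sustained throughput / theoretical peak, as described above.
# Both figures below are hypothetical example values, in TFLOPS.

def sustained_efficiency(delivered_tflops, theoretical_tflops):
    """Fraction of theoretical peak compute actually usable by the workload."""
    return delivered_tflops / theoretical_tflops

peak = 1000.0       # 1 PFLOPS theoretical peak, expressed in TFLOPS
sustained = 850.0   # assumed sustained throughput on a perception workload

meets_requirement = sustained_efficiency(sustained, peak) >= 0.80
```

A processor with a 1-PFLOPS datasheet number but only 50% realizable throughput would fall well short, which is why the requirement targets delivered rather than theoretical performance.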

Very low latency

The Perception stage must process the massive input data as quickly as possible — with a latency of less than 30 ms — to avoid catastrophic consequences under unpredictable circumstances, such as when a pedestrian suddenly crosses the road in front of the vehicle.
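To see why tens of milliseconds matter, consider how far the vehicle travels while the Perception stage is still working on a frame. The speed below is an example value chosen for round numbers.

```python
# Distance a vehicle covers while the brain is still processing a sensor frame.

def distance_during_latency(speed_kmh, latency_ms):
    """Meters traveled during the given processing latency."""
    speed_ms = speed_kmh / 3.6           # convert km/h to m/s
    return speed_ms * (latency_ms / 1000.0)

# At 108 km/h (30 m/s), a 30 ms perception latency means the car moves
# about 0.9 m before the scene has even been understood.
blind_distance = distance_during_latency(108.0, 30.0)
```

Every additional 10 ms of latency at that speed adds another 0.3 m of effectively blind travel, which is why the budget is so tight.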

Minimal energy consumption

Both low average and peak power consumption are critical to avoid draining the autonomous-driving vehicle battery and to prevent overheating the electronics. Limiting the power consumption to less than 100 W is reasonable.

Combination of AI/ML and DSP capabilities

While machine learning and deep neural network computing are necessary for advanced autonomous-driving algorithm processing, they are not sufficient.

The latest state-of-the-art algorithms require AI/ML and DSP capabilities that are tightly coupled to limit latency and reduce power consumption.

Deterministic processing

Safety and security play a key role in an autonomous-driving scenario. AI algorithms are intrinsically probabilistic, producing responses with less than 100% accuracy and thus falling short of fully deterministic behavior. Deterministic DSP processing can help close the gap.

Reprogrammability

State-of-the-art algorithms will continue to evolve in the foreseeable future. The ability to reprogram the brain of the autonomous-driving vehicle in the field is mandatory.

Affordable pricing

All consumer products, even in the luxury vehicle category, are cost-sensitive. To ensure the success of a brain architecture for an autonomous-driving vehicle, its pricing ought to be less than US$100 in volume.

Conclusion

It may be a while before you exclaim, “Look Ma, no hands!” while behind the wheel of an automobile cruising down the highway, but it will happen. The technology simply needs to catch up with our imaginations.

Designing an L4/L5 autonomous-driving brain mandates a cutting-edge architecture that can deliver petaFLOPS-class processing power at 80% or higher efficiency with latency below 30 ms, consume less than 100 W, and sell for less than US$100 in volume. Only custom processors can meet all seven requirements.

 

This article was originally published on EE Times Europe.
