NCAP roadmaps include high-resolution object detection technology for a path toward L2+ class ADAS and pedestrian safety.
Global New Car Assessment Program (NCAP) roadmaps continue to evolve in lockstep with growing consumer demand for sophisticated safety features once reserved for high-end vehicles. Autonomous emergency braking (AEB) is a prime example. Formerly exclusive to luxury-class vehicles, AEB has since become a required feature for automotive OEMs seeking five-star safety ratings for their mass-market vehicle models; in some jurisdictions, regulators now mandate it outright.
In parallel with this market trend, AEB technology itself is evolving to meet stringent safety and reliability standards, along with consumer expectations. With continued advances in high-precision automotive radar sensors, AEB object detection capabilities are expanding to handle complex urban environments with obstructed pedestrians, cyclists, pets, and more. Previously, AEB could only detect braking vehicles and other large, unobstructed objects directly ahead.
Today, much of the development activity in automated driving technology like AEB is focused on L1- and L2-class vehicle automation. But automotive OEMs have their sights set on L3 vehicles, and L2+ has emerged as a key stepping stone along the way.
L2+ enables a significant improvement in safety and comfort over L2, with advanced features like autonomous highway pilot. It effectively bridges the gap to L3 by offering ‘L3-esque’ features and driver experiences while still requiring the driver’s attention and supervision, in conformance with L2. In principle, an L2+ vehicle can drive itself in defined situations such as highway driving, but the driver may not let go of the steering wheel entirely and must be ready to take over the controls at all times.
The L2+ designation will serve OEMs and consumers well as countries and regulatory bodies worldwide begin to adapt and amend their legislation to pave the way for L3 vehicles, which require less driver interaction and intervention than L2/L2+. Much of the debate associated with the L3 roll-out will focus on the liability issues in vehicle accidents where the driver fully deferred monitoring and control functions to the vehicle.
Higher integration, lower cost structures
In parallel, cost efficiency remains a paramount concern for OEMs and Tier 1 suppliers. As radar-supported features like AEB transition from passive warnings to active safety functions, they must be delivered at attractive price points and at mainstream commercial scale while fulfilling ever-increasing safety and security requirements. Ultimately, OEMs want to deliver a higher-quality experience and better sensing capabilities for their customers in a cost-competitive marketplace.
At the third-party technology supplier level, continued innovation in RFCMOS process technology will provide additional cost efficiencies. The move to RFCMOS allows for smaller process nodes and hence higher integration, yielding transceiver modules with significantly smaller footprints, improved cost structures, and very low power dissipation. This is essential for building small sensors that can be easily placed around the car.
Whereas legacy radar transceivers were built with discrete, dedicated ICs (Rx, Tx, VCO etc.), RFCMOS processes have enabled radar transceivers to evolve into fully integrated transceiver chips. The volume ramp of RFCMOS-based transceiver chips will ultimately help accelerate the flow of advanced AEB technology into OEMs’ midrange to entry vehicle fleets.
Automotive radar processors (MCUs) are evolving in parallel, moving to deeper sub-micron process nodes and integrating more processing power, dedicated signal-processing accelerators, higher levels of security, and functional safety features (up to ASIL-D level). These combined technological gains at the transceiver and processor layers allow for the development of much smaller, more power-efficient modules that can further reduce overall costs.
The 77 GHz radar resolution advantage
While NCAPs don’t mandate one specific sensor technology to achieve a safety requirement, radar has been heavily adopted for AEB because the physics of the technology enables very accurate range and Doppler measurements, and it functions in all weather conditions, which is not the case for camera or lidar. Today’s radar systems can sense large radar cross-sections (vehicles and other large objects) and trigger automatic braking. But the cross-section of a pedestrian is quite different, demanding exacting resolution, sensitivity, and performance levels from the onboard MCU and radar transceiver.
The industry transition from 24 GHz to 77 GHz will enable the considerably higher resolutions needed for high-precision radar-based pedestrian and cyclist detection. 77 GHz radar solutions operate in the 76 GHz to 81 GHz frequency band, whose much wider available bandwidth improves a radar sensor’s range resolution, supporting object detection down to cross-sections of approximately 5 cm. This translates to potentially up to a 25x improvement in range resolution, which determines how far apart objects must be before they are distinguishable as separate objects. The improvement yields better detection and tracking among tightly clustered, adjacent objects.
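The resolution gain from the wider band can be sanity-checked with the standard FMCW range-resolution formula ΔR = c/2B, where B is the chirp sweep bandwidth. The bandwidth figures below are common approximations for each band, not vendor specifications:

```python
# Back-of-the-envelope FMCW range resolution: delta_R = c / (2 * B)
C_LIGHT = 299_792_458  # speed of light in m/s

def range_resolution_m(bandwidth_hz: float) -> float:
    """Minimum range separation at which two objects are resolvable."""
    return C_LIGHT / (2 * bandwidth_hz)

# 24 GHz ISM band: roughly 200 MHz of usable sweep bandwidth
res_24 = range_resolution_m(200e6)
# 76-81 GHz band: up to ~5 GHz of sweep bandwidth
res_77 = range_resolution_m(5e9)

print(f"24 GHz: {res_24 * 100:.1f} cm")        # ~75 cm
print(f"77 GHz: {res_77 * 100:.1f} cm")        # ~3 cm
print(f"Improvement: {res_24 / res_77:.0f}x")  # 25x
```

The 25x figure in the text corresponds to using the full 5 GHz of available sweep bandwidth; narrower chirps yield proportionally coarser resolution.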
The ability to discern between multiple objects (e.g., a vehicle and a pedestrian) becomes paramount for higher-order, automated decision-making based on the known movement patterns and attributes of these two “object” classes. With this 77 GHz performance boost, typical front radar applications like AEB have been vastly improved, and radar ranges have expanded much farther than what could have been achieved at 24 GHz.
The AEB challenge is significantly compounded in densely populated urban settings, where a much wider field of view is required. Vehicle sensors need to detect pedestrians and cyclists at long distances, as well as pedestrians suddenly crossing into a vehicle’s field of view from behind parked vehicles and other visibility obstructions.
For the foreseeable future, radar sensors will remain the established, optimal sensor option for vehicle safety applications like AEB, providing the affordability, reliability, and functionality needed to enable the next generation of safer, smarter cars. And while vision sensors are essential for most object recognition and classification, radar is immune to the deficiencies that limit vision sensors in low light, glaring light, and inclement weather, while providing the high-precision vehicle-to-object distance and velocity ranging capabilities that vision sensors cannot.
Radar is well established as the most robust of the competing sensor technologies (vision and lidar), making it an obvious candidate to fulfill NCAP requirements with minimal design risk and development friction. Radar’s ability to operate very reliably in almost any environmental condition – from bright light to fog to rain to darkness – is unmatched among competing sensor technologies like vision or lidar targeting AEB and other ADAS applications.
Tuning AEB performance
AEB systems based solely on radar technology can introduce some challenges, however. Driving scenarios may occur where these systems apply the brakes when there is no actual obstruction in front of the vehicle. These false positives are typically caused by false reflections, or ghost images, in the radar field of view. Radar is prone to detecting such ghost images because of the different harmonics at play, and the effect hinges on many factors, particularly the distance between the vehicle and the object.
This challenge is addressed by significantly improved radar processors combined with intelligent chirp patterns and the ever-improving RF performance of RFCMOS radar transceivers with multiple transmit and receive channels. To a great extent, ghost images can be neutralized with fine-tuned processing algorithms, enabling “pure” detections with few to no false positives. A faster sampling rate also helps, providing a larger overall dataset for automated decision-making. In this scenario, the onboard radar sensors can tell whether an object looks consistent across multiple frames, and the braking decision is made accordingly. Vastly improved radar MCUs with accelerated processing can handle this volume of data concurrently, enhancing the overall performance of the radar system.
Future trends and the safety imperative
As automotive radar technology advances, vehicle sensor configurations will likely evolve as well. System designers are already exploring the use of radar to precisely map the entire environment around the vehicle, improving overall scene perception. The advent of imaging radar will further accelerate this trend: cascading multiple transceivers together yields tremendous improvements in angular resolution and object separation, while adding a new dimension of elevation sensing that is proving to be a very important radar capability for enabling automated driving. In turn, these initiatives will fuel the proliferation and eventual ubiquity of corner radar sensors.
To further improve detection and resolution for AEB applications, OEMs are also evaluating the use of radar and camera sensors leveraged together in a manner that enables occluded/obstructed object detection. The fusion of these sensor technologies could introduce a compelling option for automated ‘round robin’ decision making. If the sensors don’t agree on what they are seeing, the resulting decision tree will minimize false positives – while significantly improving overall perception capabilities.
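One way to picture the radar/camera agreement logic described above is a cross-check that requests hard braking only when both modalities confirm an obstruction. This is purely an illustrative sketch with invented class labels and an invented time-to-collision threshold, not an actual OEM decision tree:

```python
# Hypothetical radar+camera cross-check for AEB: brake only when both
# sensors agree an obstruction exists, suppressing single-sensor ghosts.

def brake_decision(radar_range_m, camera_class, ttc_s, ttc_threshold_s=1.5):
    """radar_range_m: radar distance to nearest object (None if no detection).
    camera_class: camera classification label, or None if nothing seen.
    ttc_s: estimated time-to-collision in seconds."""
    radar_sees_object = radar_range_m is not None
    camera_confirms = camera_class in {"vehicle", "pedestrian", "cyclist"}
    if radar_sees_object and camera_confirms and ttc_s < ttc_threshold_s:
        return "brake"
    if radar_sees_object != camera_confirms:
        # Sensors disagree: fall back to a warning rather than hard braking
        return "warn"
    return "no_action"

print(brake_decision(8.0, "pedestrian", 0.9))  # -> brake
print(brake_decision(8.0, None, 0.9))          # -> warn (radar-only ghost?)
```

Real fusion systems weigh confidence scores rather than taking a binary vote, but the sketch captures the core benefit: a disagreement between sensors downgrades the response instead of triggering a false brake event.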
At the MCU data-processing layer, these trends require third-party technology suppliers like NXP to provide OEMs and Tier 1s with a scalable platform to develop on, with the MIPS performance and memory headroom to accommodate their ambitious radar technology initiatives. Hardware acceleration, performance-per-watt, and code portability, with the ability to scale performance to the needs of the use case, will remain critical factors at the processing layer, alongside development environments that don’t disrupt established workflows or introduce redundant R&D paths.
A continuous integration of functionalities at the transceiver level is every bit as critical. The ideal module will integrate signal generation, amplification, receiving, mixing, conditioning, and digitization on one piece of silicon. This will ease the development of integrated radar-based safety systems for mainstream vehicle fleets servicing commercially attractive markets while paving the way for standard fitment. NXP has achieved this transceiver module integration via our RFCMOS process.
There are numerous advantages to sourcing both the transceiver and the MCU from the same vendor, one with established experience in both domains and deep insight into device integration. Most importantly, OEMs and Tier 1s need absolute assurance that the devices they source from third-party technology suppliers are designed from the ground up to conform to stringent automotive safety standards, from system design to the supporting collateral. This requires a strong, trusted knowledge base spanning the entire methodology for qualifying functionally safe devices while also meeting next-generation cybersecurity requirements.
This article was originally published on EE Times.
Karthik Ramesh is the director of product marketing at NXP Semiconductors responsible for automotive radar solutions, essential for next-generation automated driving. He has more than 11 years of industry experience, having worked for NXP and Bosch.