What Does It Take to Make Cars See Better?

Article by: Junko Yoshida

What does it take to improve vehicle senses — to make cars see better? Multiple sensing modalities, a better in-vehicle network, a common sense layer, and a lot more...

What does it take to make cars see — to improve vehicle senses? The short answer: it’s complicated.

The market has lots of CMOS image sensors that can capture stunning pictures that dazzle the human eye. What about sensors that not only generate images but place them into a context that enables machines to easily and accurately digest the data?

Industry experts know that just a single sensing modality can’t do the job (though Mobileye might disagree).

By mixing and matching different sensors — vision, radar, LiDAR, and ultrasound — autonomous vehicle (AV) developers are looking for ways to orchestrate data generated by more than one sensor. They believe fused sensory data can get closer to human perception.

But there’s a complication: whatever sensors perceive at the edge won’t stay at the edge. Captured sensory data must be processed inside the vehicle before machines can interpret it. That requires massive processing power in the vehicle’s brain, and it demands updated in-vehicle networks fed by a fatter pipe with very little latency. In the end, it takes a whole village of sensors to enable machines to make safe and sound decisions.

Vehicle senses are manifestly complicated.

To make some sense of it, we wrote the book on the subject. The just-published volume, entitled “AspenCore Guide to Sensors in Automotive – Making Cars See and Think Ahead,” is available at the EE Times Store.

But these are not laurels we’re resting upon. Technology continues to advance. To better understand AV system designers’ challenges, we turned to AutoSens.

AutoSens is a forum where experienced engineers and professionals gather to compare notes on driver-assist (ADAS) and AV development. Produced by Sense Media, AutoSens has hosted a series of conferences over the last several years. This year’s event was, of course, virtual. We sent AspenCore’s best editors to the just-concluded AutoSens Brussels Edition, and they came back with a range of technology and product stories. Their reporting is the basis of this Special Project, offering readers a snapshot of the current state of the automotive sensing world.

What we learned

Of all the subjects, “sensor mix” is an eternally popular topic that spawns disparate opinions; no single right answer exists. What exactly is the right sensor mix, and how best can it be optimized? Anne-Françoise Pelé, editor-in-chief of EE Times Europe, captured the debate at the conference. Pelé also covered the broadly diverse LiDAR landscape, where demand, technology, and prices are continuously shifting.

Gina Roos, editor-in-chief of Electronic Products, cut to the chase and discussed the biggest pain points for automotive image sensors. Can your vehicle’s lane-keeping feature perform even when the paint on the lane marker is faded or obscured by rain? Does your car recognize a red light when the traffic signals are flickering LEDs?

EE Times also caught up with Ross Jatou, vice president and general manager of the automotive solutions division at ON Semiconductor, during the virtual AutoSens. In our chat, we discussed topics ranging from driver monitoring systems and LiDAR to edge processing, NCAP (the New Car Assessment Program), and the changing relationship between car and driver.

Sensor degradation, an issue little covered yet fraught with potentially huge consequences, also popped up. As cars rack up miles, cameras and radar will inevitably be obscured by mud, leaves, and other real-world messiness. How do you keep sensors clean? Further, sensor quality will eventually deteriorate with age, weather, and wear and tear. How will robocars know it’s time to get a new pair of glasses? Majeed Ahmad, EDN’s editor-in-chief, explored those issues with Rob Stead, an organizer of AutoSens.

AutoSens is all about perception. But the next phase for the conference, I suspect, is how best to add a “common sense” layer to robocar perception. EE Times recently explored how “driving policies” such as Responsibility-Sensitive Safety connect to perception.

Highlighting this Special Project is Rob Stead’s essay, “Safety, Not Autonomy, Is the Objective.” Stead wrote:

With all the hype about robotaxis and the utopic future of mobility over the past five years or so, we have lost sight of what autonomy was all about in the first place. Right now, the elastic motion of the sine wave is bringing us back around to focus on what the original objective was, namely, safety.

We couldn’t agree more.
