Aside from the human factor, automakers also face the business model dilemma. Many carmakers—not limited to Toyota—may find it hard to justify a business that focuses singularly on the roadmap for autonomous cars.
Even among automakers with aggressive plans for autonomous cars (e.g. the rollout of Level 4 autonomous cars in 2021), we’ve detected a more nuanced tone.
Last year, deep learning was pitched as the answer to the most complex problems in highly automated driving. That still holds true, but it’s clear that deep learning is no longer a panacea. The non-deterministic nature of neural networks has become a cause for concern for anyone who must test and verify the safety of autonomous cars.
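That verification worry is easiest to see in miniature. The sketch below (the network, the function names, and the toy “merge/wait” data set are all invented for illustration) trains the same tiny network twice on identical data and identical code, changing only the random initialisation. The two runs end up with different weights, so the learned behaviour is not uniquely determined by anything a safety assessor can inspect on paper:

```python
import math
import random

def train_tiny_net(seed, steps=300, lr=0.3):
    """Train a 2-2-1 network (tanh hidden layer, sigmoid output) by plain
    gradient descent. Hypothetical toy data: (gap_size, closing_speed)
    mapped to a merge (1.0) / wait (0.0) decision."""
    rng = random.Random(seed)
    data = [((0.9, 0.1), 1.0), ((0.8, 0.2), 1.0), ((0.7, 0.3), 1.0),
            ((0.2, 0.9), 0.0), ((0.3, 0.8), 0.0), ((0.1, 0.7), 0.0)]
    # Only the initial weights depend on the seed; data and code are fixed.
    w1 = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
    b1 = [rng.uniform(-1, 1) for _ in range(2)]
    w2 = [rng.uniform(-1, 1) for _ in range(2)]
    b2 = rng.uniform(-1, 1)

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    for _ in range(steps):
        for (x0, x1), y in data:
            # Forward pass.
            h = [math.tanh(w1[j][0] * x0 + w1[j][1] * x1 + b1[j])
                 for j in range(2)]
            o = sigmoid(w2[0] * h[0] + w2[1] * h[1] + b2)
            # Backpropagate squared-error loss.
            do = (o - y) * o * (1.0 - o)
            for j in range(2):
                dh = do * w2[j] * (1.0 - h[j] ** 2)  # uses pre-update w2[j]
                w2[j] -= lr * do * h[j]
                w1[j][0] -= lr * dh * x0
                w1[j][1] -= lr * dh * x1
                b1[j] -= lr * dh
            b2 -= lr * do

    def predict(x0, x1):
        h = [math.tanh(w1[j][0] * x0 + w1[j][1] * x1 + b1[j])
             for j in range(2)]
        return sigmoid(w2[0] * h[0] + w2[1] * h[1] + b2)

    return w1, predict

# Same data, same code -- only the random initialisation differs.
w_a, predict_a = train_tiny_net(seed=0)
w_b, predict_b = train_tiny_net(seed=1)
print("weights differ:", w_a != w_b)
print("borderline (0.5, 0.5):", predict_a(0.5, 0.5), predict_b(0.5, 0.5))
```

Both models may classify the clear-cut training points the same way, yet their internal parameters, and potentially their decisions on borderline inputs, differ between runs. Scaled up to millions of parameters, that is the black box regulators and test engineers are worried about.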
Last piece of puzzle
In short, the days are gone when tech companies could simply declare that autonomous cars are a piece of cake already eaten.
Mobileye co-founder, CTO and chairman Amnon Shashua, for instance, pointed out that among all the technology developments (e.g. sensing, mapping) in which the company is engaged, “We find the development of ‘Driving Policy’ technology as the last piece of the autonomous driving puzzle.”
By “Driving Policy,” he is referring to the use of artificial intelligence to teach autonomous vehicles, for example, how to merge into traffic at roundabouts.
Driving Policy is “behaviour,” and “this is a hard problem to solve,” explained Vision Systems Intelligence founder Phil Magney.
“After all, we humans all drive differently,” added Roger Lanctot, associate director in the global automotive practice at Strategy Analytics. Driving Policy is about “building human behaviour into software,” he said, which creates a black box whose safety nobody currently has any means to test or verify. This is a problem that could take “10 years” to solve, Lanctot added. Alternatively, the auto industry might have to bend some ASIL-D safety standards to accommodate the testing of deep learning-based autonomous cars, he said.
In the following pages, EE Times breaks down the new automotive trends spotted at CES, and we try to explain how robo-car conversations are being altered by adding the human factor to the design process.
The topics covered in the following pages include:

- Two different types of AI applications (“Chauffeur” and “Guardian,” as defined by Toyota; “Auto-Pilot” and “Co-Pilot,” as pitched by Nvidia CEO Jen-Hsun Huang)
- Chip vendors’ dual strategy, HOG and CNN, in designing next-generation fusion chips
- Cars that understand drivers’ needs
- The need to teach cars how to negotiate traffic
- Do autonomous cars need a watchdog chip?