Coherent sensor takes LiDAR to a new level of performance by providing dual-polarization intensity information while enabling immunity to multi-user and environmental interference.
Machine vision is an essential feature in many real-world applications, enabling machines to sense and perceive the world around them. SiLC Technologies, Inc. (SiLC) launched its Eyeonic Vision Sensor to bring coherent vision and chip-scale integration to the broader market. SiLC’s latest vision sensor takes LiDAR to a new level of performance by providing dual-polarization intensity information while enabling immunity to multi-user and environmental interference.
Carmakers are introducing ever more advanced ADAS solutions on their latest-generation vehicles, with the aim of one day reaching Level 4 autonomous driving, in which cars perform all driving functions without driver intervention. Another challenging application is autonomous robots, where machine vision guides robots through warehouses to improve logistics chains and steer them around obstacles in their path.
Machine vision solutions require advanced sensors able to acquire data in real time, process it at the firmware or hardware level, and provide high-level information to decision-making algorithms (possibly AI-based). Examples of sensor technologies suitable for automotive and robotic machine vision applications are radar and LiDAR. In this article, we will introduce a novel LiDAR technology, developed by the California-based startup SiLC, that relies on a coherent sensor to enable 4D vision for automotive, robotics, and industrial applications.
The SiLC Eyeonic Vision System
Launched in December 2021 and demonstrated at CES 2022 in January, the Eyeonic Vision Sensor is a frequency-modulated continuous-wave (FMCW) LiDAR that provides not only depth information but also velocity and polarization intensity data.
In an interview with EE Times, Ralf Muenster, vice president of business development and marketing at SiLC, said that “the innovation is that, for the first time, someone actually integrated on a single chip all the photonics functions needed to enable a coherent vision sensor.”
The “Eyeonic” sensor (visible in Figure 1) is based on the FMCW approach, which is more technically complex than a conventional LiDAR but offers additional functionality and the ability to shrink systems to chip scale. Eyeonic is the first commercially available chip-integrated FMCW LiDAR sensor, with a small footprint that meets even the most stringent low-cost and low-power criteria.
Figure 1 shows the internal part of the sensor, which integrates an ultra-low-linewidth laser, a semiconductor optical amplifier, germanium detectors, and meters of optical waveguides onto a single silicon photonics chip.
SiLC’s Eyeonic sensor differs from competing solutions, which need two, three, or even four chips and require them to be connected with some sort of coupling optics or fiber. “Moreover, every time you do that, you lose anywhere from 3 to 10 dB, which you really can’t afford, because photons are precious and you don’t want to waste them,” Muenster said.
Another significant distinction from traditional LiDAR sensors lies in how the technology is deployed at the system level. Current 3D vision systems rely on a time-of-flight (ToF) technique, using high-power lasers with a wavelength of 905 nanometers and highly sensitive detectors. Early versions of these technologies have worked well enough to allow early deployments in autonomous vehicle experiments. However, expensive manufacturing procedures have limited their resolution and cost-effective scaling. Moreover, eye-safety constraints have limited their range, while multi-user crosstalk is likely to limit their wider adoption.
Basically, ToF-based sensors transmit one or more laser pulses, wait for their return at the detector, and then compute the round-trip time; the resulting distance is accurate to a centimeter or, more typically, a few centimeters.
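The ToF computation described above reduces to a single formula: the pulse travels out and back, so the distance is half the round-trip path. A minimal sketch (the 1 ns figure is an illustrative timing error, not a spec from the article):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_s: float) -> float:
    """Distance from a time-of-flight round trip: the pulse travels
    out and back, so the one-way distance is half the total path."""
    return C * round_trip_s / 2
```

Note that a timing error of just 1 ns maps to roughly 15 cm of range error, which is why ToF precision ends up on the order of centimeters.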
“Some people are intrigued by time of flight because a 905 nanometer wavelength requires easily manufacturable and low-cost CMOS detectors. However, this solution needs multiple chips, which have to be aligned very carefully,” Muenster said.
To address eye-safety regulatory concerns and permit volume deployment with little multi-user interference, a move to FMCW technology at a 1550 nanometer wavelength is widely recognized as the way forward. Because of the expense and the number of components required, however, this method has not been widely used in the past.
According to SiLC, its silicon photonics integration platform is a cost-effective solution that combines all of the required high-performance components into a single silicon chip using existing semiconductor fabrication processes, resulting in a low-cost, compact, and low-power solution. Silicon manufacturing enables complex devices and technologies to be scaled up in large quantities at low cost.
FMCW is a technique widely used in coherent Doppler radar. A continuous signal whose frequency is swept (chirped) over time is transmitted. When the return signal comes back, it is mixed with the transmitted one, and the frequency difference (offset) between them is measured. That offset is a function of the round-trip delay, and hence the distance, of the reflecting object, while the Doppler effect adds a component proportional to its velocity. This is the principle on which a coherent LiDAR operates.
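With a triangular chirp, the range and Doppler contributions to the beat frequency can be separated by comparing the up-sweep and down-sweep. A minimal numeric sketch of that math follows; the chirp bandwidth and duration are illustrative assumptions, not SiLC's actual parameters:

```python
C = 299_792_458.0      # speed of light in vacuum, m/s
WAVELENGTH = 1.55e-6   # 1550 nm carrier, per the article
CHIRP_BW = 1e9         # chirp bandwidth, Hz (illustrative assumption)
CHIRP_T = 10e-6        # chirp duration, s (illustrative assumption)

def range_and_velocity(f_beat_up: float, f_beat_down: float):
    """Recover range and radial velocity from the beat frequencies
    measured on the up- and down-sweep of a triangular FMCW chirp.

    The range delay shifts both beats the same way; Doppler shifts
    them in opposite directions, so sum and difference separate them.
    """
    f_range = (f_beat_up + f_beat_down) / 2    # range-induced component
    f_doppler = (f_beat_down - f_beat_up) / 2  # Doppler component
    distance = C * CHIRP_T * f_range / (2 * CHIRP_BW)
    velocity = WAVELENGTH * f_doppler / 2      # positive = approaching
    return distance, velocity
```

The same mixing step is what gives the sensor its instantaneous velocity measurement: no frame-to-frame differencing is needed, because Doppler is read out directly from a single chirp pair.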
FMCW has a slew of advantages. The first is that it works in any lighting condition and is immune to environmental interference and crosstalk; it cannot, for example, be spoofed by sunlight reflected with a mirror directly into the sensor. Precision is extremely high (millimeter-level), and detection is possible at long range.
Another key advantage of FMCW is that it requires much less power than ToF to achieve the same range. Thanks to its coherence, the Eyeonic sensor adds the capability to provide an instantaneous measurement of the target’s velocity, becoming in effect a true 4D sensor. Moreover, SiLC offers dual-polarization intensity in addition to chip integration, which enables material identification and surface analysis (see Figure 2).
“Velocity allows the machine vision system to draw outlines around the detected object, and polarization intensity can help determine what the object is. Since you have a velocity vector you already know where the object is going to be next. So, you don’t have to infer that information by using machine learning and neural network training,” Muenster said.
SiLC’s solution is scanner-agnostic as well: it can operate with any type of scanner, and frame rate and resolution are completely configurable, depending on the specific application.
The Eyeonic sensor is offered in both fiber-pigtailed and fiberless configurations. The former allows for design flexibility by supporting configurations in which the FMCW LiDAR transceiver and the scanning unit are at different locations, while the latter offers the lowest cost in a compact package.
Starting in Q2 2022, SiLC’s Eyeonic Vision System will be made available to strategic customers, offering a complete vision system for easy and quick evaluation by system integrators and end users. The compact, powerful FMCW LiDAR system will feature a broad range of accessories to suit any customer application.
This article was originally published on EE Times.
Maurizio Di Paolo Emilio holds a Ph.D. in Physics and is a telecommunication engineer and journalist. He has worked on various international projects in the field of gravitational wave research. He collaborates with research institutions to design data acquisition and control systems for space applications. He is the author of several books published by Springer, as well as numerous scientific and technical publications on electronics design.