There are different types of LiDAR. Those involved in the auto industry ought to be aware of the relative strengths and weaknesses of each.
It is widely recognized that advanced driver assistance systems (ADAS) and autonomous driving (AD) depend on effective sensing of the environment surrounding the vehicle, which feeds into the algorithms enabling autonomous navigation. Given the absolute reliance on sensing in life-critical situations, multiple sensor modalities are used, with their data fused to augment one another and provide redundancy. This allows each technology to play to its strengths and deliver a better combined solution.
The three modalities that will be prominent for the sensors used in vehicles for ADAS and AD going forward are image sensors, radar, and LiDAR (Light Detection and Ranging). Each of these sensors has its own strengths, and together they can comprise a complete sensor suite delivering data that enables the autonomous perception algorithms to make decisions with sensor fusion: the ability to provide color, intensity, velocity, and depth for every point or kernel in the scene.
Of these three principal modalities, LiDAR is the most nascent technology to be commercialized for mass-market use, even though the concept of using light to measure distance goes back decades. The market for automotive LiDAR is set to show spectacular growth, rising from $39 million in 2020 to a projected $1.75 billion in 2025, according to Yole Développement, driven by the proliferation of autonomous systems requiring the complete sensor suite. The opportunity is so large that well over 100 companies are working on LiDAR technology, with cumulative investments into these companies exceeding $1.5 billion by 2020, prior to the deluge of SPAC-driven initial public offerings by more than a handful of LiDAR companies that began in late 2020. But when so many companies are working on a single technology, some taking fundamentally different approaches such as the wavelength of light being used (prominent examples being 905nm and 1550nm), it is inevitable that there will be a winning technology and consolidation, as has been seen time and time again, whether it was Ethernet for networking or VHS for video.
When one looks at the users of LiDAR technology (the automotive vehicle manufacturers, along with the companies that design and build autonomous robotic vehicles for transporting people and goods), what matters most to them is meeting their requirements. Ultimately, these companies want suppliers to provide LiDAR sensors that are low-cost and highly reliable while meeting the performance specifications for ranging and for detection of low-reflectivity objects. Though all engineers have strong viewpoints, these companies are likely to be agnostic to the implementation of the technology if the supplier can meet the performance and reliability requirements at the right cost. And that leads to the fundamental debate that this article aims to help settle: which wavelength will prevail for automotive LiDAR applications?
To begin to address this question, it is necessary to understand the anatomy of a LiDAR system, of which there are different architectures. Coherent LiDAR, a type of which is referred to as frequency-modulated continuous wave (FMCW), mixes a transmitted laser signal with reflected light to compute the range and velocity of objects. FMCW offers some advantages but it remains relatively uncommon when compared to the most common LiDAR approach, direct time-of-flight (dToF) LiDAR. This implementation measures distance to an object by timing how long it takes for a very short pulse of light sent out from an illumination source to be reflected off an object and returned to be detected by the sensor. It uses the speed of light to directly calculate the distance to the object using the simple mathematical formula relating time, speed, and distance. A typical dToF LiDAR system has six major hardware functions, although the choice of wavelength mostly impacts the transmit and receive functions.
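The dToF calculation described above can be sketched in a few lines. This is an illustrative example only; the pulse timing value is hypothetical, and a real system must also account for timing jitter, detector response, and multiple returns.

```python
# Direct time-of-flight (dToF): distance from the round-trip time of a light pulse.
C = 299_792_458.0  # speed of light in m/s

def dtof_distance_m(round_trip_s: float) -> float:
    """Distance to target: the pulse covers the path twice (out and back)."""
    return C * round_trip_s / 2.0

# A pulse returning after ~667 ns corresponds to a target roughly 100 m away.
print(round(dtof_distance_m(667e-9), 2))  # → 99.98
```

The divide-by-two is the whole trick: the measured time covers the outbound and return legs, so the one-way distance is half the round-trip path.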
Table 1 shows a list of the various LiDAR manufacturers, ranging from known automotive Tier-1s to startups across all regions of the globe. Based on market reports and public information, the vast majority of these companies operate their LiDARs at near-infrared (NIR) wavelengths, as opposed to short-wave infrared (SWIR) wavelengths. Furthermore, while the SWIR-focused suppliers working on FMCW are restricted to those wavelengths, most of those with a direct time-of-flight implementation have a path to making a system with NIR wavelengths, should they choose, while leveraging much of their existing IP in functions such as beam steering and signal processing.
Given that the majority, but not all, of these manufacturers have chosen NIR wavelengths, it is worth considering how they came to this decision and what the implications are. At the heart of the discussion is some basic physics related to the properties of light and of the semiconductor materials making up the components used in LiDAR.
Photons fired by the laser in a LiDAR system, which are intended to be bounced off objects and received by the detector, have to compete with ambient photons coming from the sun. Looking at the spectrum of solar radiation and taking into account atmospheric absorption, there are "dips" in the irradiance at certain wavelengths that reduce the number of photons present as noise for the system. At 905nm, there is about 3x the solar irradiance found at 1550nm, meaning a NIR system has to contend with more noise that can interfere with the sensor. But this is just one of the factors to take into account when choosing a wavelength for a LiDAR system.
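It is worth noting that, under a shot-noise-limited assumption, the noise penalty grows with the square root of the background count rather than linearly. A minimal sketch, using the approximate 3x irradiance ratio from the text:

```python
import math

# Solar background photons add shot noise, which scales as the square root
# of the background count (shot-noise-limited assumption; illustrative only).
irradiance_ratio = 3.0  # ~3x more solar irradiance at 905nm than at 1550nm
noise_penalty = math.sqrt(irradiance_ratio)
print(f"{noise_penalty:.2f}")  # → 1.73
```

So a 3x higher background translates to roughly a 1.7x increase in background shot noise, not a 3x increase, which is one reason NIR systems remain competitive despite the extra ambient light.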
The components responsible for sensing the photons in a LiDAR system are different types of photodetectors, so it is important to explain why they may be made up of different semiconductor materials depending on the wavelength to be detected. In a semiconductor, a band gap separates the valence and conduction bands. Photons provide the energy to help electrons overcome that band gap and make the semiconductor conductive, thus creating a photocurrent. Every photon's energy is inversely related to its wavelength, and a semiconductor's band gap sets the longest wavelength it can absorb; this is why different semiconductor materials are needed depending on the wavelength of light to be detected. Silicon, which is the most common and cheapest semiconductor to manufacture, is responsive to visible and NIR wavelengths up to about 1000nm. To detect wavelengths beyond that in the SWIR range, more exotic group III/V semiconductors can be alloyed to make materials like InGaAs that are capable of detecting those wavelengths of light, from 1000nm to 2500nm.
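The band gap to cutoff-wavelength relation above follows from the photon energy formula E = hc/λ, which in convenient units gives λ_cutoff(nm) ≈ 1239.84 / E_g(eV). A short sketch, using commonly cited band gap values (the practical responsivity of silicon falls off before the theoretical cutoff, which is why the text quotes ~1000nm):

```python
# A photon can bridge the band gap only if its energy exceeds E_g.
# E(eV) = h*c / (lambda * e)  =>  cutoff wavelength (nm) ≈ 1239.84 / E_g(eV)
HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV·nm

def cutoff_wavelength_nm(band_gap_ev: float) -> float:
    """Longest wavelength a semiconductor with this band gap can absorb."""
    return HC_EV_NM / band_gap_ev

print(round(cutoff_wavelength_nm(1.12)))  # silicon (~1.12 eV): → 1107
print(round(cutoff_wavelength_nm(0.75)))  # a common InGaAs composition (~0.75 eV): → 1653
```

This makes the material split concrete: a 1550nm photon (~0.8 eV) simply cannot excite an electron across silicon's 1.12 eV gap, so SWIR detection requires a narrower-gap material such as InGaAs.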
Early LiDARs used PIN photodiodes as sensors. PIN photodiodes have no inherent gain and, as a result, cannot easily detect weak signals. Avalanche photodiodes (APDs) are the most prominent type of sensor used in LiDAR today and provide a moderate amount of gain. However, like PIN photodiodes, APDs operate in linear mode to integrate signal from photon arrivals; they also suffer from poor part-to-part uniformity and require very high bias voltages. The newest types of sensors increasingly being used in LiDARs are built on single photon avalanche diodes (SPADs), which have a very large gain and are able to produce a measurable current output from every single photon detected. Silicon photomultipliers (SiPMs) are arrays of silicon-based SPADs that come with the added advantage of being able to distinguish single photons from multiple photons by looking at the amplitude of the generated signal.
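The SiPM amplitude-discrimination idea can be sketched simply: because each fired SPAD microcell contributes a roughly identical pulse, the summed output amplitude is approximately quantized in single-photon units. The pulse-height value below is purely illustrative, not from any device datasheet:

```python
# A SiPM sums the near-identical pulses of its fired SPAD microcells, so the
# output amplitude is approximately quantized in units of one detected photon.
def estimate_photon_count(amplitude_mv: float, single_photon_mv: float = 10.0) -> int:
    """Round the summed amplitude to the nearest whole number of fired cells.
    single_photon_mv is a hypothetical single-photon pulse height."""
    return round(amplitude_mv / single_photon_mv)

print(estimate_photon_count(9.6))   # → 1 (one photon)
print(estimate_photon_count(31.2))  # → 3 (three near-simultaneous photons)
```

This photon-number resolution is what lets a SiPM-based receiver separate a genuine multi-photon return from a single stray ambient photon.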
Circling back to the topic of wavelengths, all of these types of photodetectors can be built on silicon (for NIR detection) or III/V semiconductors (for SWIR detection). Manufacturability and cost, however, are key to the technology's viability, and CMOS silicon foundries allow for high-volume, low-cost manufacturing of such sensors. This is a primary reason why SiPMs are being increasingly adopted for LiDAR, in addition to their higher performance. While APDs and SPADs for SWIR exist, integrating them with readout logic is difficult because the processes are not silicon-based. Lastly, III/V-based SPAD arrays and photomultipliers (analogous to SiPMs) for SWIR have not yet been commercialized, so ecosystem availability favors the NIR wavelengths.
Generating photons involves an entirely different process. A laser can be made using a semiconductor p-n junction as the gain medium: pumping a current through the junction causes the stimulated emission of photons as electrons drop to lower energy levels, resulting in a coherent laser beam output. Semiconductor lasers are based on direct band gap materials like GaAs and InP, which are efficient at generating photons from these electron transitions, unlike indirect band gap materials such as silicon.
There are two main types of lasers used in LiDAR: the edge-emitting laser (EEL) and the vertical cavity surface-emitting laser (VCSEL). EELs are more widely used today, owing to their lower cost and higher output efficiency compared to VCSELs. However, they are more difficult to package and build into arrays, and they suffer from a wavelength shift across temperature, which forces the detectors to look for a wider band of photon wavelengths and thus admit more ambient photons as noise. Despite its higher cost and lower power efficiency, the newer VCSEL technology has the advantage of easy and efficient packaging, since the beam is generated from the top surface. Market adoption of VCSELs is increasing as their costs continue to decrease significantly and their power efficiency improves. EELs and VCSELs exist for both NIR and SWIR wavelength generation, with a key difference: NIR wavelengths can be generated with GaAs, while SWIR wavelengths require the use of InGaAsP. GaAs lasers are able to use larger-wafer-size foundries, leading to lower cost and again pointing to an ecosystem advantage for NIR LiDAR manufacturers from both a cost and supply chain security perspective.
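The wavelength-drift penalty for EELs can be made concrete with a back-of-the-envelope calculation. The drift coefficients below are illustrative figures of the kind often cited for NIR EELs and VCSELs, not values from any specific device, and the temperature range is a typical automotive-grade span:

```python
# The receiver's optical bandpass filter must cover the laser's full
# wavelength drift across temperature; a wider filter admits more solar
# background. Drift coefficients here are illustrative assumptions.
def filter_width_nm(drift_nm_per_c: float, t_min_c: float, t_max_c: float) -> float:
    """Minimum filter width needed to capture the laser across temperature."""
    return drift_nm_per_c * (t_max_c - t_min_c)

print(round(filter_width_nm(0.3, -40, 105), 2))   # EEL-like drift: → 43.5 (nm)
print(round(filter_width_nm(0.07, -40, 105), 2))  # VCSEL-like drift: → 10.15 (nm)
```

Under these assumptions, an EEL-based receiver needs a filter several times wider than a VCSEL-based one, directly increasing the ambient-photon noise floor described above.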
Laser Power and Eye Safety
In the wavelength debate, it is imperative to consider the eye safety implications of a LiDAR system. The concept of dToF LiDAR involves emitting short laser pulses with high peak power over a certain angle of view into the scene. A pedestrian standing in a LiDAR's emission path needs to be assured that their eyes will not be damaged by a laser being fired in their direction, and IEC-60825 is the specification that dictates the maximum permissible exposure across the different wavelengths of light. While NIR light, like visible light, is able to pass through the cornea and reach the retina in the human eye, SWIR light is mostly absorbed within the cornea and, as a result, can be emitted at higher exposure levels.
Being able to output multiple orders of magnitude higher laser power is an advantage for a 1550nm-based system from a performance perspective, as it allows more photons to be sent out and thus returned for detection. Higher laser powers also come with a thermal tradeoff, though. It should be noted that proper eye-safe design has to be done regardless of wavelength, explicitly taking into account the energy per pulse and the size of the laser aperture. With a 905nm-based LiDAR, the peak power can be increased by adjusting either of these factors, as shown in Figure 7.
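The aperture lever mentioned above follows from simple geometry: what matters for exposure at the eye is roughly the pulse energy divided by the beam's effective area. A minimal sketch of that scaling, with illustrative numbers; real IEC-60825 compliance involves wavelength-dependent limits and standardized measurement conditions that this does not model:

```python
import math

# Radiant exposure at the aperture: pulse energy divided by beam area.
# Enlarging the aperture spreads the same energy over more area, so a
# larger aperture permits a higher peak power at the same exposure level.
def radiant_exposure_j_per_cm2(pulse_energy_j: float, aperture_diameter_cm: float) -> float:
    area_cm2 = math.pi * (aperture_diameter_cm / 2) ** 2
    return pulse_energy_j / area_cm2

base = radiant_exposure_j_per_cm2(1e-6, 1.0)    # 1 µJ pulse, 1 cm aperture
bigger = radiant_exposure_j_per_cm2(1e-6, 2.0)  # same pulse, 2 cm aperture
print(round(base / bigger, 6))  # → 4.0
```

Doubling the aperture diameter quadruples the area and therefore cuts the radiant exposure by 4x, which is why aperture size appears alongside pulse energy as a design lever for eye-safe 905nm systems.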