Welcome back to our series on different sensing modalities! In our first installment, we looked at 3D sensors. In this week's installment, we'll be exploring Light Detection and Ranging...aka LiDAR. Why is LiDAR cool? Well, LiDAR takes pictures of time!
We'll be focusing on the longer-range LiDAR systems that are often specified as part of the sensing package for robots, drones, and autonomous vehicles. We won't cover the miniaturized LiDAR sensors used in Apple's latest iOS devices and a number of Android devices to enable features like augmented reality and face unlock. Those are also cool, but irrelevant if you're a roboticist or perception engineer trying to determine which LiDAR sensors to test for your device.
LiDAR has a few inherent advantages that make it a great choice for robot, autonomous vehicle (AV), and drone sensor arrays. To start with, many LiDAR sensors have exceptional ranges extending into the hundreds of meters. For fast-moving autonomous vehicles that require obstacle awareness at long distances, it's a clear choice. LiDAR sensors also often have very wide horizontal fields of view, and in the case of scanning LiDAR units (more on those below), they provide a 360° horizontal field of view. Again, this can be useful for autonomous vehicles that work in low-structured or unstructured environments where obstacles or paths of travel can appear in front of, beside, and behind the AV. LiDAR captures data at an extremely fast rate, on the order of millions of measurements per second, which again reinforces its suitability for high-speed applications like autonomous vehicles. Finally, LiDAR sensors offer increasingly fine resolution across a wide range, which allows them to provide data streams that support classification tasks and real-time obstacle avoidance.
LiDAR is an optical sensing medium. Therefore, like other optical sensors, it can be challenged by environmental conditions like fog that can occlude its laser pulses and reflections, thus making it blind. LiDAR sensors tend to be more costly than other perception sensors, with prices that can reach into the tens of thousands of dollars per unit. Many LiDAR units have mechanical components that can fail and render the sensor useless, either temporarily or permanently. LiDAR data streams can be massive, which makes transmitting complete data sets from the edge a challenge for devices that rely on cellular communication. Unfortunately, LiDAR data compression is much more complex and computationally costly than compression techniques for other sensor types, which limits the ability to minimize data transfer. Finally, LiDAR can only capture structure, not color, which limits the types of data it can capture.
While LiDAR sensors can deliver large swaths of environmental data, they often require supplementation with other sensors to deliver the complete information set required by a mobile robot, drone, or autonomous vehicle to operate. As noted above, LiDAR units can only capture structure, not color. For certain object detection and classification tasks, color information is required. Therefore, a CMOS sensor like an RGB or HDR camera may be specified to work in unison with the LiDAR sensor. Similarly, for autonomous vehicle obstacle avoidance, LiDAR alone can't be fully relied on in conditions that compromise its operation (fog, for instance). In a scenario such as this, pairing it with a radar sensor ensures safe operation in most scenarios.
Like any other multi-sensor array, using a LiDAR sensor along with other sensors requires that sensor fusion tasks like time synchronization, spatial registration, and individual sensor calibration be performed continuously to guarantee that all data generated is usable. Of course, the Tangram Vision SDK can manage these complex multi-sensor tasks to maximize sensor array uptime and data quality.
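As a simple illustration of what spatial registration involves, here's a minimal sketch of transforming LiDAR points into a camera's coordinate frame with a rigid-body extrinsic transform. The function name and mounting values are hypothetical, purely for illustration (this is not the Tangram Vision SDK API):

```python
# Minimal spatial-registration sketch: apply a rigid-body extrinsic
# transform (rotation R, translation t) to LiDAR points so they land
# in a camera's coordinate frame. All values here are illustrative.

def apply_extrinsics(points, R, t):
    """Transform 3D points: p_cam = R @ p_lidar + t."""
    out = []
    for (x, y, z) in points:
        px = R[0][0] * x + R[0][1] * y + R[0][2] * z + t[0]
        py = R[1][0] * x + R[1][1] * y + R[1][2] * z + t[1]
        pz = R[2][0] * x + R[2][1] * y + R[2][2] * z + t[2]
        out.append((px, py, pz))
    return out

# Example: camera mounted 0.1 m to the right of the LiDAR, no rotation.
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = (-0.1, 0.0, 0.0)
print(apply_extrinsics([(1.0, 2.0, 3.0)], R, t))  # → [(0.9, 2.0, 3.0)]
```

In practice the rotation and translation come from calibration, and they drift over time as the platform vibrates and heats up, which is why continuous recalibration matters.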
The sheer amount of data that LiDAR sensors can generate makes them heavy users of compute, as the host must process data streams that are both high frequency and uncompressed. Likewise, if data is to be streamed to a central server, these massive datasets can quickly take up available bandwidth, particularly in scenarios where a device can't connect to a local WiFi network and instead must use cellular or satellite data links. Lastly, LiDAR units use one or more laser emitters, and scanning LiDARs add an electric motor. This adds up to higher power draw than other sensors.
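To put the bandwidth problem in concrete terms, here's a back-of-envelope calculation. The point rate and per-point size are illustrative assumptions, not any specific sensor's spec:

```python
# Back-of-envelope LiDAR bandwidth estimate. Assumes an illustrative
# 1,000,000 points/s and 16 bytes per point (x, y, z, intensity as
# 32-bit floats); real sensors and packet formats vary.

POINTS_PER_SECOND = 1_000_000
BYTES_PER_POINT = 16  # 4 floats * 4 bytes each

bytes_per_second = POINTS_PER_SECOND * BYTES_PER_POINT
mb_per_second = bytes_per_second / 1e6
gb_per_hour = bytes_per_second * 3600 / 1e9

print(f"{mb_per_second:.0f} MB/s, {gb_per_hour:.1f} GB/hour")
# → 16 MB/s, 57.6 GB/hour
```

Tens of gigabytes per hour is far beyond a typical cellular uplink budget, which is why edge processing or aggressive downsampling is usually required.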
Until recently, most scanning LiDAR units used a 905nm wavelength. If stationary and concentrated on the human eye, this wavelength is capable of causing damage to the retina. Because the LiDAR unit scans so rapidly, its beams only reach the retina for a minimal amount of time, which minimizes eye safety risk. Furthermore, regulations limit the amount of laser light emitted at these wavelengths to ensure eye safety.
Yet those regulations also limit the effective range of these LiDAR units to around 100m. It is for this reason that 1550nm lasers have now become popular for LiDAR units. Over 1400nm, laser light no longer passes through the eye to the retina. Rather, it is stopped by the cornea and the lens. However, the cornea and lens themselves can be damaged by a sufficiently powerful pulse of 1550nm laser light. While most 1550nm LiDAR units aren't using sufficient power to cause this kind of damage, concerns remain that fiber amplification could create hazardous conditions and potential corneal burns.
LiDAR comes in two basic flavors: scanning (mechanical) and solid state. Each has its strengths and weaknesses, and they can be used independently or together.
Solid state LiDAR typically uses a single laser beam to illuminate the scene in front of it, and a time-of-flight (ToF) sensor array to capture the 3D data that is returned. These solid-state sensors have few to no moving parts, which makes them less expensive and more reliable than a typical scanning LiDAR sensor. However, they also cannot capture 360° data like a scanning LiDAR sensor can. More recent examples of solid state LiDAR incorporate technologies like MEMS mirrors or optical beam steering that can sweep the laser beam across a much wider field of view than a typical ToF sensor. In some cases, these LiDAR sensors can capture up to a 270° horizontal field of view.
Scanning LiDAR uses a single laser beam on a rotating spindle, or multiple laser beams in a 360° array, to capture a 360° 3D image of the environment around the sensor. Much like flash LiDAR units, scanning LiDAR units use time-of-flight measurement to record the time taken for the laser pulses they emit to reflect off objects and return to the sensor. These sensors tend to be significantly more complex than flash LiDAR sensors, and therefore more expensive and prone to mechanical failure. However, they are able to capture greater amounts of data (this can also be seen as a negative, as datasets can become very large). Because they capture 360° of 3D imagery, scanning LiDAR units are excellent for tasks like mapping environments and providing enhanced object detection and obstacle avoidance for navigation.
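The time-of-flight principle underlying both flash and scanning units reduces to one formula: range is half the round-trip time multiplied by the speed of light. A minimal sketch:

```python
# Time-of-flight ranging: distance = (speed of light * round-trip time) / 2.
# The divide-by-two accounts for the pulse traveling out AND back.

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance_m(round_trip_s):
    """Range in meters for a given round-trip pulse time in seconds."""
    return SPEED_OF_LIGHT * round_trip_s / 2.0

# A target 100 m away returns the pulse in roughly 667 nanoseconds.
round_trip = 2.0 * 100.0 / SPEED_OF_LIGHT
print(f"{round_trip * 1e9:.0f} ns -> {tof_distance_m(round_trip):.1f} m")
# → 667 ns -> 100.0 m
```

Those sub-microsecond round trips are why LiDAR receivers need such fast, precise timing electronics.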
One of the "OGs" of scanning LiDAR, Velodyne has been a market leader since their origins as a subwoofer company that just happened to be participating in the DARPA Grand Challenge for self-driving vehicles. Since those early days, Velodyne has developed a comprehensive line of both scanning and flash LiDAR sensors. If you live in the Bay Area, you've no doubt seen many self-driving car and truck prototypes with Velodyne's 16- and 32-channel scanning LiDAR units placed at every corner of the vehicle.
Velodyne's Puck line of scanning LiDAR sensors is their most ubiquitous offering. The most popular model, simply called "Puck", was once known as the VLP-16. These sensors offer a 100m range with a 360° horizontal field of view, yet are priced in the low thousands of dollars per unit. This makes them an obvious choice for prototyping and production across a wide range of markets, including autonomous vehicles, drones, and service robots. Velodyne also produces the Puck LITE, which is specifically designed for applications like drones where weight is a key consideration.
Velodyne recently added the Alpha Prime scanning LiDAR sensor to their range. This sensor has been specifically designed for autonomous vehicle applications where sensing is subject to a wide range of environmental conditions that would otherwise flummox non-specialized sensors. It should be noted that the Alpha Prime is effectively a reference design for prototyping.
Velodyne's older scanning LiDAR units, the HDL-32E and HDL-64E, still enjoy popularity in industrial settings like heavy equipment and marine. However, for engineers looking for sensors to prototype with, the Puck series is the most likely choice.
Ouster's 360° LiDAR units take a traditional scanning mechanical approach and modify it with a set of proprietary digital emitting and receiving elements. A single column of VCSEL emitters rotate on a spindle, with a matched set of Ouster's proprietary SoC digital receivers that use SPADs to read environmental feedback. This architecture replaces more complex analog systems with a much simpler design, and under certain conditions can deliver greater sensitivity as well. It can be easily modified to add or subtract channels, and it can be tuned with different lenses to widen or narrow field of view, and increase or decrease effective range.
Ouster's OS0, OS1, and OS2 spinning LiDAR units are available in 32-, 64-, and high-resolution 128-channel designs. The differences between the models come down to how the vertical field of view and beam angles are configured to optimize performance for near-, mid-, and long-field sensing needs. The OS0 features a wide 90° vertical field of view (VFOV). However, to achieve this, range is limited to 50m. The OS1 cuts VFOV in half to 45°, but in turn more than doubles effective range to 120m. The OS2 narrows VFOV to 22.5°, but range doubles again to 240m.
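One way to think about these configurations is vertical angular resolution: spreading the same channel count over a narrower VFOV yields finer beam spacing. A rough sketch, assuming uniformly spaced beams (not every model distributes beams uniformly, so treat this as an approximation):

```python
# Approximate vertical beam spacing for a spinning LiDAR, assuming
# channels are spaced uniformly across the vertical field of view.

def beam_spacing_deg(vfov_deg, channels):
    """Degrees between adjacent beams for a given VFOV and channel count."""
    return vfov_deg / (channels - 1)

# Illustrative comparison at 128 channels, using the VFOV figures above:
for name, vfov in [("OS0", 90.0), ("OS1", 45.0), ("OS2", 22.5)]:
    print(f"{name}: {beam_spacing_deg(vfov, 128):.2f} deg between beams")
```

The narrow-VFOV, long-range unit ends up with roughly four times the vertical point density of the wide-VFOV unit, which is exactly the trade these model lines express.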
While Velodyne, Ouster, and Waymo have been making headlines with their scanning LiDAR units, Hokuyo has been silently capturing significant swaths of service robot and drone business with their broad lineup of different LiDAR units. As we've spoken to dozens of robotics companies, we've been shocked at the number that have chosen Hokuyo as their preferred LiDAR vendor. While traditionally a vendor into industrial and automation markets, Hokuyo's inroads are in part the result of a comprehensive lineup of over 30 distinct 2D and 3D LiDAR SKUs.
While you may be familiar with Waymo's self-driving vehicles, did you know that Waymo's self-developed LiDAR sensors are now available to purchase separately? Granted, from what we can tell, you need to be a large OEM with budgets in the many millions of dollars to even get a sample, but...it is possible. Unless you're making a self-driving taxi. In that case, you're entirely out of luck.
Waymo's Laser Bear Honeycomb is a compact spinning LiDAR unit with a wide 95° VFOV and an effective range that starts at 0m. Waymo also claims that their design stays cleaner longer than other LiDAR designs, for more robust deployments. Beyond these facts and figures, however, specifications are scarce.
Like Hokuyo, SICK has been developing and selling 2D and 3D LiDAR units into industrial and automation markets for quite some time. In the past, SICK's core strength in LiDAR were 2D sensors that were used for near- and mid-field obstacle detection. With the increase in the availability of autonomous mobile robots, SICK has been pulled into the robotics market where it now offers sensors that aid in navigation, obstacle avoidance, and obstacle detection.
The M Series of mechanical spinning LiDAR sensors includes four models, each tuned to different use cases. For roboticists and autonomous vehicle developers, the M8 is the most relevant. Available in three different ranges (from 100m to 200m), the M8 features very high point cloud density, with up to 1.3M points generated per second.
After much success with their scanning LiDAR sensors, Velodyne has expanded their lineup to include a trio of solid state flash LiDAR sensors. Each of these is tuned for a different use case.
The Velarray M1600 is designed primarily for mobile service robots operating at low speeds. The detecting range of 0.1 to 30 meters is well suited for delivery robots, warehouse robots and hospitality robots, but is not suited to autonomous vehicles that need to sense at greater distances. An added benefit of this eye-safe sensor is day/night operation with robust resistance to infrared washout from direct sunlight.
The Velarray H800 uses similar architecture to the M1600, but is tuned for highway speed applications like those used for advanced driver assistance systems (ADAS) and autonomous vehicles. The detecting range for this flash LiDAR sensor stretches out to 200m, which is in line with the outer ranges of some of the more powerful scanning LiDAR sensors made by Velodyne.
The newest member of the Velodyne flash LiDAR family is the Velabit. This tiny sensor made a splash at its launch with a promised per unit price of $99. With a range of up to 100m, it is just at the edge of what is acceptable for vehicular applications. However, for robotic and drone applications, it could be a game changer. The Velabit is not yet available for purchase.
The L515 is Intel's first foray into LiDAR for their RealSense line, but judging by the fast proliferation of models in their 3D sensing lineup, it won't be their last. The L515's 9m range won't find it added to any vehicles, but it could certainly be used for slower moving service robots for obstacle avoidance and navigation. As an added benefit, its low power consumption design makes it a potential pick for platforms like drones with power-constrained designs.
Intel's own marketing suggests that the L515 is best suited for relatively static tasks like object measurement and inventory counts. However, we've been speaking to a number of roboticists that are excited at the prospect of using the L515 for robotic navigation tasks.
At $4,000 a sensor, the Sense One isn't cheap, but the ruggedized build quality is impressive, and the quality of its data is phenomenal.
Sense Photonics uses a global shutter design for its imaging chip, meaning every pixel captures data simultaneously. This is very important for high-speed applications, where a traditional rolling shutter introduces motion blur that can compromise data quality. The Sense One can capture up to a million points per second, providing point cloud output that is among the finest we've ever seen.
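To see why a global shutter matters at speed, consider the skew a rolling shutter introduces: anything moving relative to the sensor shifts between the first and last rows of the readout. A quick illustrative calculation (the speed and readout time are hypothetical, not Sense One specs):

```python
# Rolling-shutter skew: an object moving relative to the sensor shifts
# by (relative speed * readout time) between the first and last rows
# of a frame. Numbers are illustrative, not from any specific sensor.

def rolling_shutter_skew_m(relative_speed_mps, readout_time_s):
    """Apparent displacement across one frame, in meters."""
    return relative_speed_mps * readout_time_s

# A vehicle passing at 20 m/s, captured with a 30 ms rolling readout:
skew = rolling_shutter_skew_m(20.0, 0.030)
print(f"{skew * 100:.0f} cm of apparent displacement across the frame")
# → 60 cm of apparent displacement across the frame
```

A global shutter exposes every pixel at once, so this skew disappears, which is why it's prized for fast-moving platforms.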
Finally, the Sense One is built for heavy duty use cases. With no moving parts, GigE data transfer and a sealed, ruggedized power connector, it checks many of the boxes sought out by roboticists who have been burned in the past by flaky USB connections and non-industrial grade components.
Luminar has taken a full-stack approach to the design of their LiDAR system. From developing their own ASIC to their own machine learning perception software, Luminar is building a fully integrated hardware and software system designed specifically for the needs of highway speed autonomous vehicles.
Luminar's two current sensors — the Iris and the Hydra — claim massive 500m maximum range sensing, while still using eye safe lasers. The Iris is designed for near-term integration into ready to produce autonomous vehicles, while the Hydra is an R&D unit designed for teams creating autonomous prototypes.
Like Velodyne with its Velabit, Luminar is also focused on bringing the cost of LiDAR sensors down significantly. Luminar's first design win is with Volvo, which intends to equip its passenger vehicles with Luminar LiDAR sensors as soon as 2022.
The 4Sight range from AEye is designed for highway speed vehicle applications. The flagship 4Sight M integrates AEye's AI libraries to extract more information from LiDAR datasets while simultaneously reducing sensor energy consumption. Classifier sets include vehicles, pedestrians, foliage, and weather events like rain. The simplified 4Sight A is focused specifically on ADAS applications.
The S Series is Quanergy's entry into flash LiDAR. Like many of the other flash LiDAR sensors listed here, the solid state design minimizes moving parts, and Quanergy claims an impressive 100,000 hour mean time before failure (MTBF) rating, making it an option for industrial applications where sensor uptime is important.
To quickly compare some of the key specs of the LiDAR units described above, we've created a quick reference chart.
LiDAR remains a popular choice for mobile robots, drones, and autonomous vehicles. As it gets adopted into mainstream products like passenger vehicles, the per unit costs should continue to drop, and the variety of available sensors and configurations should continue to grow. And, if you're designing a product that uses LiDAR and other sensors, the Tangram Vision SDK will assist in ensuring your entire sensor array works optimally and continually. Have any suggestions to make this article better? Tweet at us to let us know!