
Debunking Sensor Range Claims: Is It Detection, or Recognition?

August 2, 2023 | Sensors


Introduction

The field of robotics and autonomous systems has grown rapidly alongside the rise of artificial intelligence (AI). Because sensors are an integral part of this ecosystem, a slew of manufacturers have entered these markets, each touting wide operational ranges for their sensors. Those range claims might seem transparent at first, but as AI shifts the task from simple detection to sophisticated recognition, they require a more careful reading.

Sensor Range Specifications: An In-depth Look

Traditionally, a sensor's range is the maximum distance at which it can detect a target of a given size under specified conditions. For instance, a LiDAR sensor might specify that it can detect a 1 m² target with 10% reflectivity at a distance of 100 m during daylight (reflectivity governs how much of a LiDAR pulse bounces back off an object, and thus affects the maximum detection distance). While these specs provide useful benchmarks, their limitations become evident when the focus moves from mere detection to recognition – identifying what the detected object is.

Technical Limitations and Their Impact on Range and Recognition

Different sensor technologies have inherent limitations that affect performance at varying ranges. Taking LiDAR as an example, even if a unit has a maximum range of 100 m, the angular spread between its scan lines and beams means point density drops as range increases. This translates to reduced capability in recognizing smaller objects at larger distances.
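To make that drop-off concrete, here is a minimal back-of-the-envelope sketch in Python. It assumes a flat, sensor-facing target, evenly spaced beams, and hypothetical angular resolutions (0.2° horizontal, 1.0° vertical); it ignores beam divergence, occlusion, and reflectivity, so treat it as a rough estimate rather than a vendor-grade model.

```python
import math

def points_on_target(target_w_m, target_h_m, range_m, h_res_deg, v_res_deg):
    """Rough estimate of how many LiDAR returns land on a flat,
    sensor-facing target at a given range, assuming evenly spaced
    beams and ignoring beam divergence and occlusion."""
    # Linear spacing between neighboring returns at this range
    h_spacing = range_m * math.radians(h_res_deg)
    v_spacing = range_m * math.radians(v_res_deg)
    return (target_w_m / h_spacing) * (target_h_m / v_spacing)

# Hypothetical spinning LiDAR: 0.2° horizontal, 1.0° vertical resolution
for r in (25, 50, 100):
    n = points_on_target(1.0, 1.0, r, 0.2, 1.0)
    print(f"{r:>4} m: ~{n:.0f} points on a 1 m x 1 m target")
```

Under these assumptions, a 1 m × 1 m target that collects roughly 26 points at 25 m collects fewer than 2 at 100 m – the inverse-square behavior revisited later in this post.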

Depth sensors, such as stereo cameras or Time-of-Flight (ToF) sensors, face similar challenges. Depth fill, a crucial metric, typically drops off rapidly as distance increases, limiting the sensor's ability to extract the accurate depth information needed to recognize a specific object type in a scene. Depth accuracy indicates how precisely a sensor can measure an object's distance at a given point within its range, and most depth sensors are tuned for maximum accuracy only within a specific band of their total sensing range.
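For stereo depth in particular, a first-order error model shows why accuracy degrades so quickly: depth uncertainty grows with the square of distance for a fixed baseline and focal length. The sketch below uses hypothetical values (a 50 mm baseline, 640 px focal length, quarter-pixel disparity error) purely for illustration.

```python
def stereo_depth_error(z_m, baseline_m, focal_px, disparity_err_px=0.25):
    """First-order stereo depth uncertainty: error grows with the
    square of distance for a fixed baseline and focal length."""
    return (z_m ** 2) * disparity_err_px / (focal_px * baseline_m)

# Hypothetical depth camera: 50 mm baseline, 640 px focal length
for z in (1, 3, 6, 10):
    err = stereo_depth_error(z, baseline_m=0.05, focal_px=640.0)
    print(f"{z:>2} m: ±{err * 100:.1f} cm depth error")
```

With those numbers, a measurement that is accurate to under a centimeter at 1 m degrades to tens of centimeters by 6 m, which is why vendors quote accuracy over only a band of the total range.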

💡 Need a quick check on depth sensor ranges? Check out our interactive depth sensor visualizer (as seen in the cover image for this post)

Beyond Traditional Specifications: Recognizing Data Requirements

When moving from detection to recognition tasks, it is crucial to understand the data requirements for AI-based object recognition. For successful object recognition using machine learning (ML), the sensor must gather enough data points on an object for the ML model to match it against the objects it was trained on.

For instance, in many cases a LiDAR sensor might need to gather at least 50 data points on an object for the ML algorithm to recognize it accurately. At 25 meters of range, this may be easily achievable for a typical LiDAR unit. At 100 meters, however, it may prove extremely difficult. Add in the closing speed between an autonomous vehicle and an object that must be detected quickly, and limitations arise fast, as the sketch below illustrates. There are exceptions to this rule, of course: some classifiers may be able to recognize an obstacle to be avoided by calculating differences in range-rate. In other cases, though, these exceptions won't be reliable enough for robust performance.
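Continuing with the simple angular-resolution model from the earlier sketch, you can invert it to estimate the farthest range at which a hypothetical 50-point threshold is still met, and how much reaction time that leaves at a given closing speed. All of the numbers here (resolution, target size, threshold, speed) are assumptions for illustration, not figures from any particular sensor or classifier.

```python
import math

def max_recognition_range(target_w_m, target_h_m, min_points, h_res_deg, v_res_deg):
    """Largest range at which the simple angular-resolution model still
    places `min_points` returns on the target (points fall off as 1/r^2)."""
    h_rad = math.radians(h_res_deg)
    v_rad = math.radians(v_res_deg)
    # points(r) = (w * h) / (r^2 * h_rad * v_rad)  ->  solve for r
    return math.sqrt(target_w_m * target_h_m / (min_points * h_rad * v_rad))

r_max = max_recognition_range(1.0, 1.0, min_points=50, h_res_deg=0.2, v_res_deg=1.0)
closing_speed_mps = 15.0  # hypothetical closing speed (~54 km/h)
print(f"Recognition range: ~{r_max:.0f} m")
print(f"Time to react at {closing_speed_mps} m/s: {r_max / closing_speed_mps:.1f} s")
```

Under these assumptions the 50-point threshold is lost well short of a 100 m spec-sheet range, leaving only about a second to react at highway-adjacent speeds.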

The Inverse-Square law. S represents a light source, while each r represents a measured point. Image courtesy of Wikipedia, and used with the Creative Commons CC BY-SA 3.0 license.

Don't hesitate to challenge range claims by doing your own calculations and field testing. Consider the volume of data required for object recognition by your specific ML algorithm. For a LiDAR sensor, you might use an equation based on the inverse-square law. A similar approach can be adopted for depth sensors by considering depth accuracy and the required volume of data.
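One simple way to do that calculation, if you have already field tested a sensor, is to scale a measured point count by the inverse-square law shown in the figure above. The values below (800 returns on an object at 25 m) are hypothetical and assume the object stays fully within the field of view.

```python
def scale_point_count(measured_points, measured_range_m, target_range_m):
    """Scale a point count observed in a field test at one range to
    another range, using the inverse-square relationship."""
    return measured_points * (measured_range_m / target_range_m) ** 2

# Hypothetical field test: 800 returns on an object at 25 m
for r in (50, 75, 100):
    est = scale_point_count(800, 25, r)
    print(f"{r:>4} m: ~{est:.0f} points expected")
```

If the scaled estimate falls below what your ML algorithm needs, the sensor's quoted range is not your recognition range.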

The Multi-modal Approach: A Comprehensive Solution for Recognition at Range

In light of the limitations of individual sensor technologies, the future of robotics and autonomous systems is increasingly adopting a multi-modal approach. This approach combines multiple sensor modalities, such as LiDAR, depth cameras, and RGB cameras, to provide a more comprehensive perception of the environment.

Each sensor modality has strengths and weaknesses, and a multi-modal system leverages these traits to ensure accurate object recognition at different ranges. For instance, while an off-the-shelf depth camera might offer superior depth perception at close range, a LiDAR sensor, or a depth sensor with a longer stereo baseline, may be better at recognizing objects at longer distances.

The Crucial Role of Sensor Fusion and Calibration

Effective sensor fusion and calibration are paramount to the success of the multi-modal approach. Sensor fusion involves integrating data from various sensors to create a cohesive understanding of the environment. Proper sensor calibration is crucial to ensure precise sensor readings and accurate alignment of data from different sensors.
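As one concrete (and deliberately simplified) illustration of why calibration matters for fusion, the sketch below transforms LiDAR points into a camera frame using calibrated extrinsics and projects them through a pinhole model. The rotation, translation, intrinsics, and frame conventions are all made-up placeholders, not values from any real rig or from the Tangram Vision platform.

```python
import numpy as np

def project_lidar_to_image(points_lidar, R, t, K):
    """Transform LiDAR points into the camera frame with calibrated
    extrinsics (R, t), then project them through pinhole intrinsics K.
    Returns pixel coordinates for points in front of the camera."""
    pts_cam = points_lidar @ R.T + t           # rigid-body transform
    in_front = pts_cam[:, 2] > 0.0             # keep points with positive depth
    pts_cam = pts_cam[in_front]
    pixels_h = pts_cam @ K.T                   # homogeneous pixel coordinates
    return pixels_h[:, :2] / pixels_h[:, 2:3]  # normalize by depth

# Hypothetical calibration: identity rotation, small lateral offset,
# and a 640x480 pinhole camera
R = np.eye(3)
t = np.array([0.05, 0.0, 0.0])
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
points = np.array([[1.0, 0.5, 10.0], [-2.0, 0.0, 20.0]])
print(project_lidar_to_image(points, R, t, K))
```

If the extrinsics are even slightly off, the projected points land on the wrong pixels, and detections from different modalities no longer agree with one another.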

Tangram Vision aids in both sensor fusion and calibration, helping to optimize the performance of your robotic and autonomous systems across a wide range of distances.

Conclusion

Making sense of perception sensor range claims in the AI era requires a close look at both the specifications themselves and the underlying data requirements of AI-based object recognition. With the shift from detection to recognition, we need to consider not only traditional range specifications but also parameters like LiDAR reflectivity, depth accuracy, and the volume of data required for object recognition. The future of robotics and autonomous applications relies on a multi-modal approach, backed by effective sensor fusion and calibration. A handle on these considerations will help you make informed decisions when choosing the appropriate sensors for your applications.
