
HDR Cameras for Robotics & Autonomous Vehicles

October 13, 2021


The past few years have seen an acceleration of robots that work outdoors: agricultural robots, lawn care robots, security robots, and — we can't forget — autonomous vehicles. Like any other robot, these autonomous platforms rely on sensors to understand where they are in the world and what is around them. However, more so than their indoor counterparts, these outdoor robots face a particularly tricky adversary: the sun. The sun can shine so brightly that it blinds cameras and depth sensors. Or it can disappear entirely at night, leaving almost no ambient light for a robot's visual sensors to capture.

In some cases, modalities like LiDAR and radar can fill in sensory gaps left by a blinded camera or depth sensor. But their ability to do so is limited to a few specific tasks that may not always allow a robot to fulfill its mission. For instance, the point cloud generated by a LiDAR unit may provide the general shape and location of an object, but it may be hard-pressed to deliver the detail needed to identify what that object actually is. And, practically speaking, LiDAR tends to be power hungry, less reliable (in the case of mechanical units), and much more expensive than a simple CMOS camera. We must also acknowledge that some of the world's leading mobility companies (Tesla and Ford among them) have stated that they will rely on cameras instead of LiDAR as the primary perception data sources for their ADAS and autonomy programs.

What Is An HDR Camera?

Enter high dynamic range (HDR) CMOS cameras. These cameras operate like any other CMOS camera does, with the added benefit of delivering usable data in the low-light and bright-light conditions that would otherwise cause a standard CMOS camera to fail.

HDR cameras aren't necessarily distinct hardware from standard CMOS cameras. Most of the differentiation comes from computational approaches that expand their dynamic range. For the cameras most likely to be used on robots and autonomous vehicles, the HDR camera creates a composite image from multiple frames captured with different settings. We'll refer to this as the composite exposure approach.

Composite Exposure

With a composite exposure approach, an HDR camera processes two or more frames captured in quick succession with different exposure and gain settings. Typically, one frame is a low-exposure image that preserves detail in bright regions, another is a high-exposure image that captures detail in shadows, and any others fall somewhere in between. These frames are then combined into a single composite that keeps only the best data from each, producing one frame with a much wider dynamic range than the sensor could deliver in a single capture.

On the left are the low-exposure and high-exposure frames that are composited together to create the HDR image on the right. The bright patch of sunlit grass that was washed out in the high-exposure image now benefits from the detail captured by the low-exposure image. The hydrangeas at the back of the picture that were hidden in shadow in the low-exposure image now gain detail captured by the high-exposure image. It's the best of both worlds...almost.
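If you'd like to experiment with this technique yourself, the sketch below shows one way to fuse a bracketed set of frames using OpenCV's Mertens exposure fusion. This is a minimal illustration, not any camera vendor's pipeline; the file names are placeholders for frames you'd capture yourself.

```python
# A minimal sketch of composite exposure (exposure fusion) using OpenCV's
# Mertens fusion, which blends bracketed frames without needing exposure times.
import cv2
import numpy as np

# Bracketed frames of the same scene: short, middle, and long exposures.
# These file names are placeholders for your own captures.
frames = [
    cv2.imread("low_exposure.png"),   # preserves bright-region detail
    cv2.imread("mid_exposure.png"),
    cv2.imread("high_exposure.png"),  # preserves shadow detail
]

# Mertens fusion weights each pixel by contrast, saturation, and
# well-exposedness, then blends the frames into one composite.
merge = cv2.createMergeMertens()
fused = merge.process(frames)  # float32 output, roughly in [0, 1]

# Clip and convert back to 8-bit for display or downstream processing.
composite = np.clip(fused * 255, 0, 255).astype(np.uint8)
cv2.imwrite("hdr_composite.png", composite)
```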

Composite exposure has downsides, however. Because two or more frames must be captured for every output image, the effective frame rate can be cut in half or worse, making these cameras less capable in fast-moving scenarios. If anything in the scene moves between captures, the composite can contain artifacts such as motion blur or ghosted, doubled objects, which introduce faulty data into a robotic vision system. In addition, the need to process two or more frames can add latency to a vision processing pipeline. For high-speed needs, this isn't optimal.
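One cheap mitigation is to check how much the scene moved between the bracketed captures before fusing them. The sketch below is a rough heuristic of our own devising, not a standard algorithm; the thresholds are illustrative and would need tuning for your platform.

```python
# Rough check for motion between two bracketed frames before fusing them;
# large differences after brightness normalization hint at regions that
# will ghost in the composite. File names and thresholds are placeholders.
import cv2
import numpy as np

low = cv2.imread("low_exposure.png", cv2.IMREAD_GRAYSCALE)
high = cv2.imread("high_exposure.png", cv2.IMREAD_GRAYSCALE)

# Histogram equalization crudely normalizes the two exposures so the
# diff reflects scene motion more than brightness differences.
diff = cv2.absdiff(cv2.equalizeHist(low), cv2.equalizeHist(high))

# Flag the frame pair if too many pixels changed between captures.
moved = np.count_nonzero(diff > 60) / diff.size
if moved > 0.02:  # 2% of pixels; tune for your platform
    print(f"Warning: {moved:.1%} of pixels differ; expect ghosting.")
```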

Deploying HDR Cameras on Robotic and Autonomous Platforms

Practically speaking, what is the best way to employ an HDR camera for computer vision? In many ways, the considerations aren't materially different than they would be with a LiDAR sensor.

First, we recommend that you consider that most available HDR CMOS cameras will operate at a lower frame rate than a standard dynamic range CMOS camera, since multiple frames must be captured and processed in succession to create each composite HDR image. Some HDR cameras can also operate in a non-HDR mode, which lets them achieve a higher frame rate.
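The arithmetic is straightforward: if each composite needs N raw captures, the output rate is the sensor rate divided by N. A tiny sketch, where the 60 fps sensor rate is an assumption rather than any particular camera's spec:

```python
# Back-of-the-envelope check of how an N-frame bracket cuts output rate.
def hdr_output_fps(sensor_fps: float, frames_per_composite: int) -> float:
    """Effective composite rate when each HDR frame needs N raw captures."""
    return sensor_fps / frames_per_composite

print(hdr_output_fps(60.0, 2))  # 30.0 composites per second
print(hdr_output_fps(60.0, 3))  # 20.0 composites per second
```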

Next, we suggest performance testing an HDR camera to understand how much compute its processing will require. HDR processing can occur onboard the camera, on a dedicated image signal processor (ISP), or on your host CPU or GPU. For system designs where processing is done on the host, the compute load can become unwieldy when multiple HDR cameras run simultaneously. In some cases, entire cores can be consumed by HDR processing, even with an optimized vision pipeline.
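A simple way to get a first-order estimate is to time the merge on representative frames and extrapolate to your camera count. The sketch below uses synthetic 1080p frames and OpenCV's Mertens fusion as a stand-in for whatever merge your pipeline actually runs; your real numbers will differ.

```python
# Rough benchmark of host-side HDR merge cost, to estimate how many
# camera streams one core can sustain. Frame size, bracket depth, and
# the 30 fps target are assumptions; substitute your own captures.
import time
import cv2
import numpy as np

# Synthetic 1080p two-frame bracket standing in for real captures.
frames = [np.random.randint(0, 256, (1080, 1920, 3), np.uint8) for _ in range(2)]
merge = cv2.createMergeMertens()

iterations = 50
start = time.perf_counter()
for _ in range(iterations):
    merge.process(frames)
elapsed = (time.perf_counter() - start) / iterations

print(f"{elapsed * 1000:.1f} ms per merge")
print(f"~{1 / elapsed:.1f} merges/s per core; "
      f"{30 * elapsed:.2f} cores per 30 fps camera stream")
```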

Finally, we would caution that HDR cameras can suffer from latency in ways that other CMOS cameras won't. This is particularly true if an HDR CMOS camera with onboard processing is chosen. Careful system tuning must be applied to account for this. Depending on your system's tolerance for latency, an HDR camera may not be an option. Then again, if your robotic platform must operate in high dynamic range conditions, you may not have a choice!
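One way to make that tolerance concrete is to give each pipeline stage a timing budget and check the sum. The sketch below is purely illustrative: the stage names, stand-in sleep times, and 100 ms budget are assumptions, not measurements from any camera or SDK.

```python
# A hedged sketch of a per-stage latency budget check for an HDR pipeline.
import time
from contextlib import contextmanager

timings = {}

@contextmanager
def stage(name):
    t0 = time.perf_counter()
    yield
    timings[name] = (time.perf_counter() - t0) * 1000  # milliseconds

with stage("capture"):
    time.sleep(0.033)   # stand-in for grabbing a 2-frame bracket at 60 fps
with stage("hdr_merge"):
    time.sleep(0.020)   # stand-in for on-host fusion
with stage("detect"):
    time.sleep(0.025)   # stand-in for downstream perception

total = sum(timings.values())
print(timings, f"total {total:.0f} ms")
if total > 100:  # example budget for a slow-moving platform
    print("Over budget: consider non-HDR mode or onboard processing.")
```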

Choosing an HDR Camera

We've created a chart that shows a few of the more popular and readily available HDR CMOS cameras — click here to access an interactive version. We've included dynamic range, shutter speed, resolution, and other criteria to help you understand what is available. You can find datasheets for many of these cameras in the Tangram Vision sensor datasheet library.


Calibrating HDR Cameras

Like any other sensor used on a robotic or autonomous vehicle platform, HDR CMOS cameras need periodic intrinsic and extrinsic calibration, both individually and in the context of the broader sensor array with which they work.

There's nothing particularly distinctive about calibrating HDR CMOS cameras. They can be addressed like any other CMOS camera, which means they are already supported by the Tangram Vision SDK's calibration suite.
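For a sense of what that looks like in practice, here's a generic intrinsic calibration sketch using OpenCV rather than the Tangram Vision SDK (whose API we won't guess at here). The checkerboard dimensions, square size, and file glob are assumptions for illustration; HDR composites go through the same path as any other CMOS frame.

```python
# Generic intrinsic calibration of a camera from checkerboard images,
# using OpenCV. Board geometry and file names are illustrative.
import glob
import cv2
import numpy as np

pattern = (9, 6)  # inner corners of an assumed 9x6 checkerboard
square = 0.025    # assumed 25 mm squares, in meters

# Template of the board's 3D points for a single view.
obj = np.zeros((pattern[0] * pattern[1], 3), np.float32)
obj[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts, size = [], [], None
for path in glob.glob("hdr_calib_*.png"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(obj)
        img_pts.append(corners)

# Solve for the camera matrix and distortion coefficients.
rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
print(f"RMS reprojection error: {rms:.3f} px\nIntrinsics:\n{K}")
```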

If you found this article useful, send us a tweet to let us know. If you're building a robot or autonomous vehicle and deploying HDR CMOS cameras, get in touch and we'll help you optimize your sensor arrays!

