
Sensing Breakdown: Locomation Autonomous Relay Convoy

November 30, 2021
Autonomous Vehicles


Introduction

In our last sensor breakdown, we looked at the Farmwise FT35, an autonomous agricultural robot deployed to weed crops automatically. Whereas the FT35's perception stack is largely focused on its weeding tasks, our latest subject uses its perception stack for a vastly different purpose: the safe operation of a 35,000-pound autonomous semi truck at highway speeds.

The autonomous truck in question is engineered by Locomation. More precisely, the autonomous trucks in question are engineered by Locomation. Why the distinction? Read on to find out — and don't forget to subscribe to Tangram Vision's monthly newsletter for more content like this, plus perception tutorials, sensor insights, and industry analyses.

A Locomation Autonomous Relay Convoy™ (Image: Locomation)

The Locomation Autonomous Relay Convoy™

Over the past five years, a number of companies have emerged with the goal of deploying autonomy into commercial markets. One of the key value propositions for doing so has been the promise of increased efficiency. In the case of autonomous trucks, this can be achieved by removing a key limiting factor in keeping trucks operational 24/7: a driver's need to sleep. If you think this means the end of truck driving as a career, however, you'd be wrong. Enter Locomation.

Locomation pairs two trucks into a closely coupled convoy. While both trucks are capable of autonomous operation, one truck has an alert safety driver at the wheel, while the other (operating at L4 autonomy) follows autonomously. The following truck uses data from the lead truck to inform its own actions, allowing its driver to rest. With this model, Locomation can realize the promise of keeping two trucks operational for extended periods of time. There are other benefits as well. Both trucks use less fuel (5% less for the lead truck and 12% less for the following truck, according to Locomation). And the safety driver's actions in the lead vehicle are fed to the fully autonomous following vehicle to ensure that there are no discrepancies. Check out their video below, which explains the system in detail:

The Locomation Truck's Sensors

Like other autonomous vehicles designed to operate at highway speeds, Locomation's trucks must be able to perform a variety of autonomy functions in a wide range of operating conditions. The functions to be performed include, but are not limited to:

  • Maintaining proper centering within lanes, whether marked or not
  • Localizing the truck against a relative coordinate frame
  • Sensing and avoiding unknown obstacles, both dynamic and static
  • Reacting to dynamic information in its operational domain, including traffic controls, emergency vehicles, and other common roadway features
  • Maintaining distance between itself and other vehicles, which takes on particular importance as a design factor given Locomation's Autonomous Relay Convoy model

The environments in which the Locomation trucks must perform these functions are complex. All of these functions must be achieved:

  • In direct sunlight, with full saturation across the visible and non-visible light spectra
  • In pitch black, with almost no ambient lighting
  • In heavy rain, dense fog, and other extreme weather conditions that can occlude some sensors
  • In locations where data connectivity is lost, leaving the truck to rely entirely on its own onboard capabilities
  • In featureless, self-similar environments, such as a blizzard

With the above in mind, we can think of many sensors that might be used to provide the required inputs to successfully achieve these functions in the environments noted.

Looking at Locomation's truck, a few of these are immediately apparent. Let's explore what we have!

Locomation's sensor array is primarily located on the rearview mirrors (Image: Locomation)

Mechanical spinning LiDARs are mounted on the truck's external rearview mirrors at roughly a 30° angle. These look to be either Velodyne or Hesai units, and we'd guess that the chosen models have at least 64 channels for higher-resolution mapping of the surrounding environment. Collectively, these four LiDAR units provide a three-dimensional view of objects around a majority of the truck and trailer. The areas covered include the left and right flanks of both tractor and trailer, as well as overlapping views in front of the truck. It is possible that these overlapping views are stitched or registered to allow for a seamless sweep of objects in the near-to-mid field ahead of the truck, as sketched below.
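
To illustrate what that registration might look like geometrically, here's a minimal numpy sketch that maps two overlapping mirror-mounted LiDAR sweeps into a shared truck body frame using known extrinsics. The mounting angles and offsets below are hypothetical, and a production system would refine these transforms with a registration algorithm such as ICP rather than trusting static extrinsics alone.

```python
import numpy as np

def make_transform(yaw_deg, translation_xyz):
    """Build a 4x4 homogeneous transform from a yaw angle (degrees)
    and a translation (meters)."""
    theta = np.radians(yaw_deg)
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = translation_xyz
    return T

# Hypothetical extrinsics: each mirror LiDAR's pose in the truck body frame.
T_truck_from_left = make_transform(30.0, [1.5, 1.2, 2.5])
T_truck_from_right = make_transform(-30.0, [1.5, -1.2, 2.5])

def to_truck_frame(points, T_truck_from_sensor):
    """Map an (N, 3) point cloud from a sensor frame into the truck frame."""
    homog = np.hstack([points, np.ones((len(points), 1))])
    return (T_truck_from_sensor @ homog.T).T[:, :3]

# Fuse the two forward-overlapping views into a single cloud.
left_scan = np.random.rand(1000, 3) * 50    # stand-ins for real sweeps
right_scan = np.random.rand(1000, 3) * 50
fused = np.vstack([
    to_truck_frame(left_scan, T_truck_from_left),
    to_truck_frame(right_scan, T_truck_from_right),
])
```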

There are also cameras mounted below the LiDAR sensors in the mirror pods. We would guess that these are high dynamic range units to cope with the wide range of lighting conditions that the trucks will encounter on the open road. They are most likely made by OmniVision or Sony, given these manufacturers' growing popularity with automotive ADAS systems. There are two cameras facing forwards and two facing rearwards. The latter provide higher resolution imagery down the sides of the truck and trailer to assist in lane-keeping tasks, as well as object avoidance when near other vehicles and roadside obstacles.

💡 One neat feature: the following truck can dynamically stitch its LiDAR and camera views together with those of the leading truck, thereby creating a shared world model and providing the leading truck with full sensing coverage around its entire perimeter.
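
We don't know the specifics of Locomation's vehicle-to-vehicle data format, but the core geometry of a shared world model is straightforward: given each truck's pose in a common frame (from GNSS, for example), the follower can re-express the lead truck's point cloud in its own frame. A minimal numpy sketch with hypothetical poses:

```python
import numpy as np

def pose_to_transform(yaw_rad, x_m, y_m):
    """Express a planar pose (yaw + 2D position) as a 4x4 homogeneous transform."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    T[:3, 3] = [x_m, y_m, 0.0]
    return T

# Hypothetical GNSS-derived poses of each truck in a shared world frame.
T_world_from_lead = pose_to_transform(0.02, 120.0, 3.5)
T_world_from_follower = pose_to_transform(0.00, 65.0, 3.4)

# Relative transform that expresses lead-truck data in the follower's frame.
T_follower_from_lead = np.linalg.inv(T_world_from_follower) @ T_world_from_lead

# Re-express a transmitted lead-truck point cloud in the follower's frame.
lead_cloud = np.random.rand(500, 3) * 40    # stand-in for a shared sweep
homog = np.hstack([lead_cloud, np.ones((len(lead_cloud), 1))])
lead_cloud_in_follower = (T_follower_from_lead @ homog.T).T[:, :3]
```
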
A radar unit can be seen mounted at the front of the truck, below the grille (Image: Locomation)

At the very front of the truck, mounted centrally in the bumper, a radar sensor provides long-range detection of obstacles in a non-visual modality. This is particularly important given the range of lighting conditions and environmental occlusions (e.g., fog) that the Locomation trucks may encounter. Given the importance of this sensor's data in scenes where visual data is sparse, we'd guess that this is a higher-quality radar unit than what is typically used in automotive ADAS systems for adaptive cruise control and forward collision warnings. Many automotive suppliers like Valeo and Continental are making higher-quality radar units with increased resolution and high signal-to-noise capabilities, so it is likely that this sensor is sourced from one of these vendors.
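
One reason radar earns this role is that it measures range and range rate directly, so a first-order threat estimate is cheap to compute. Here's a simplified constant-velocity time-to-collision calculation (an illustration, not Locomation's actual logic):

```python
def time_to_collision(range_m: float, range_rate_mps: float) -> float:
    """Constant-velocity time-to-collision from a single radar return.

    range_rate is negative when the obstacle is closing; returns infinity
    when the gap is opening or holding steady.
    """
    if range_rate_mps >= 0.0:
        return float("inf")
    return range_m / -range_rate_mps

# A stopped obstacle 150 m ahead, truck closing at 24.6 m/s (~55 mph):
print(time_to_collision(150.0, -24.6))  # ~6.1 s, roughly the truck's stopping time
```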

While there is no visible evidence from the outside, we are confident that an array of GNSS receivers is mounted behind the roof fairing, allowing the truck to localize itself precisely against a regularly updated map.

We assume that other chassis inputs can be accessed and integrated into the truck's overall sensor array and autonomy logic. These can include the trigger wheels (tone rings) of the truck's anti-lock braking system, which can act as wheel encoders for odometry. Similarly, modern vehicles are often equipped with IMUs to sense sudden accelerations and decelerations for use with traction control systems. This IMU data can also be used to detect events like unexpected camera pose changes, such as when hard braking causes the front of the chassis to dip. Last, but not least, modern vehicles provide a plethora of data streams for vehicle systems like throttle angle, brake actuation, and steering angle, all of which can further inform an autonomy system of the vehicle's current status.
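
To make the odometry idea concrete, here's a minimal sketch of dead reckoning from ABS tone-ring ticks. The vehicle constants are hypothetical, and treating a steered truck axle as a differential drive is a rough approximation, but the structure is the same one a real integration would build on:

```python
import math

# Hypothetical vehicle constants.
WHEEL_CIRCUMFERENCE_M = 3.2   # drive-tire rolling circumference
TEETH_PER_REV = 100           # ABS tone-ring tooth count
TRACK_WIDTH_M = 2.1           # lateral distance between wheel centers

def dead_reckon(x, y, heading, left_ticks, right_ticks):
    """Advance a planar pose estimate from one interval of ABS tick counts,
    using a differential-drive approximation."""
    d_left = left_ticks / TEETH_PER_REV * WHEEL_CIRCUMFERENCE_M
    d_right = right_ticks / TEETH_PER_REV * WHEEL_CIRCUMFERENCE_M
    d_center = (d_left + d_right) / 2.0
    d_heading = (d_right - d_left) / TRACK_WIDTH_M
    x += d_center * math.cos(heading + d_heading / 2.0)
    y += d_center * math.sin(heading + d_heading / 2.0)
    return x, y, heading + d_heading

# One update: 50 ticks on each wheel is about 1.6 m of straight travel.
print(dead_reckon(0.0, 0.0, 0.0, 50, 50))
```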

Locomation's Perception Stack Challenges

Whoa, that's an awful lot of sensing! This comprehensive array provides a robust set of data with which the Locomation truck can understand its environment. But such a sophisticated array demands very careful integration and maintenance to ensure faultless and safe operation over long stretches of time.

First, let's consider data latency. When traveling at 55 mph, a semi truck covers 81 feet per second. To come to a complete stop from 55 mph, the truck needs about 525 feet and about six seconds. Any delay in getting data from sensor to onboard compute can therefore be catastrophic, so Locomation must ensure that all sensor data can be ingested and processed in near real time. Given the four LiDAR sensors and four HDR cameras, we'd guess that a powerful onboard host has a few cores dedicated to this task. Add the task of vehicle-to-vehicle communication from the lead truck to the following truck, and the latency issue becomes an even trickier challenge. Not only does data need to be relayed in real time from one truck to another, but one or both trucks have to process additional data beyond what they would otherwise consume if operated independently. However the Locomation team achieves this real-time data transfer and processing, we applaud them for tackling such a complex problem. It is a clear technological differentiator for their platform.
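
The arithmetic behind that latency budget is easy to make explicit. Here's a small sketch (the latency values are illustrative, not Locomation's actual figures) showing how quickly pipeline delay turns into distance traveled on stale data:

```python
MPH_TO_FTPS = 5280 / 3600      # 1 mph is about 1.467 ft/s

speed_ftps = 55 * MPH_TO_FTPS  # ~80.7 ft/s, matching the figure above
for latency_ms in (10, 50, 100, 250):
    blind_ft = speed_ftps * latency_ms / 1000.0
    print(f"{latency_ms:>3} ms sensor-to-compute delay -> "
          f"{blind_ft:.1f} ft traveled on stale data")
```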

A closeup look at Locomation's rearview mirror-mounted sensor array (Image: Locomation)

Next, let's talk about sensor calibration. Locomation's decision to mount much of the sensor array on the trucks' external rearview mirrors makes a lot of sense for achieving ~270° of coverage around the perimeter of both tractor and trailer. However, the rearview mirrors are also subject to torsional flexing and vibration that can easily throw a sensor array out of calibration. We assume that Locomation's approach is to calibrate the sensors on each individual mirror as a separate array, with the mirror's location relative to the truck chassis as part of a ground truth calculation to ensure proper spatial registration. We also assume that these mirror-based sensor arrays are the autonomy components that require the most regular calibration, both individually and as a pair. Given that online calibration regimens for multimodal systems like these are still a rarity, this regular calibration is likely performed while the vehicle is at rest, which might reduce the full impact of Locomation's enhanced uptime value proposition. Because of Locomation's paired operation model, precise calibration takes on added importance: the lead truck's sensor data informs the following truck's actions as well. If either truck's sensors are out of calibration, the mismatch could create an unsafe operating scenario that would likely trigger an alert for the automated convoy to stop.
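
Even without full online calibration, a perception stack can at least watch for drift: compare the extrinsics from the last calibration against estimates recovered from live data (for example, LiDAR-to-LiDAR registration in the overlap region) and flag the array when they diverge. A minimal sketch, with hypothetical tolerances:

```python
import numpy as np

def rotation_error_deg(T_expected, T_observed):
    """Angle between the rotation components of two 4x4 extrinsic estimates."""
    R_delta = T_expected[:3, :3].T @ T_observed[:3, :3]
    cos_angle = np.clip((np.trace(R_delta) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle))

def translation_error_m(T_expected, T_observed):
    """Distance between the translation components, in meters."""
    return float(np.linalg.norm(T_expected[:3, 3] - T_observed[:3, 3]))

def needs_recalibration(T_expected, T_observed,
                        max_rot_deg=0.5, max_trans_m=0.02):
    """Flag drift beyond hypothetical tolerances (0.5 deg / 2 cm here)."""
    return (rotation_error_deg(T_expected, T_observed) > max_rot_deg
            or translation_error_m(T_expected, T_observed) > max_trans_m)

# Compare the last calibrated extrinsic against one recovered from live data.
T_calibrated = np.eye(4)
T_online = np.eye(4)
T_online[0, 3] = 0.03            # 3 cm of apparent translation drift
print(needs_recalibration(T_calibrated, T_online))   # True
```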

🚀 As you might have guessed, this is the point where we insert a shameless plug for Tangram Vision: we could help Locomation, or any other autonomous vehicle company, optimize multi-sensor, multimodal calibration with a faster, more reliable process with our streamlined calibration system.

Finally, let's review sensor failures. Any autonomous vehicle that operates in a highway environment undergoes exceptional stresses that can lead to sudden failures. Temperature changes, constant vibration, physical impacts, and occlusions by mud, grime, and other contaminants can all temporarily or permanently sideline a sensor. For Locomation, understanding when a sensor is failing is critical for safe operation. Ideally, a failing sensor can be detected early and proactively addressed when a vehicle is not operating at highway speeds. While some sensor failures (occlusion with dirt, for instance) can be easily rectified in the field, others (for instance, a complete failure of a mechanical spinning LiDAR) require a vehicle to be taken out of service for an extended period, with all sensors recalibrated to each other and to the truck chassis before service can resume. Like any other vehicle system, proactive inspection, maintenance, and monitoring can ensure that failures are minimized, and can be addressed as quickly as possible when they do happen.
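
Catching a degrading sensor early doesn't always require sophisticated machinery; simple statistical monitors on raw data streams flag many failure modes. Here's a sketch of one such monitor for LiDAR return counts (the window and threshold are hypothetical):

```python
from collections import deque

class LidarHealthMonitor:
    """Flag a LiDAR whose per-sweep return count drops well below its recent
    baseline, a cheap early warning of occlusion or hardware failure."""

    def __init__(self, window=100, min_ratio=0.6):
        self.history = deque(maxlen=window)
        self.min_ratio = min_ratio

    def update(self, returns_this_sweep):
        """Record one sweep's return count; True means the sweep looks degraded."""
        degraded = False
        if len(self.history) == self.history.maxlen:
            baseline = sum(self.history) / len(self.history)
            degraded = returns_this_sweep < self.min_ratio * baseline
        self.history.append(returns_this_sweep)
        return degraded
```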

Conclusion — Did We Get It Right?

So there you have it; there's our best guess for the sensor array, sensor use cases, and sensor challenges for Locomation's Autonomous Relay Convoys. What do you think — did we nail it, or did we fail miserably?

If you enjoyed this post, stay tuned by subscribing to our newsletter, or following us on Twitter. We'll be doing more sensor array breakdowns like this in the future. And, if you are working on a vision-enabled platform, check out the Tangram Vision Platform — our team of perception engineers is making sensor integration, calibration, and maintenance not only much more robust, but also much less time-consuming.

Author's Note: I would like to thank Locomation's Co-Founder and CTO, Tekin Meriçli, for reviewing this article. Follow Tekin and Locomation on Twitter as they continue to build out their platform!
