
TVCal: Introducing Calibration Metrics

April 27, 2022 | Tangram Vision Platform

We’re excited to announce a new update to TVCal’s functionality: calibration performance details and metrics! Have you ever wondered how precise, accurate, or just plain "good" your calibration is? With most calibration systems, the answer isn’t obvious, and can even be misleading.

With TVCal, you need wonder no more. Now every calibration processed through TVCal on the Tangram Vision Platform outputs a huge amount of useful, verifiable performance data: reprojection errors, world poses, feature locations, and more.

See the demo video here:

(Yes, it’s worthy of a demo video!)

Let’s break down the new calibration detail page that’s generated with every calibration run through TVCal.

Calibration Outputs

We start with the most practical part of the calibration: the output, as shown above. Here, you can find a visualization of the latest calibration results for your system (which we call the Plex, a holistic structure that includes all relevant components: sensors, chassis, and synthesized perception data streams). The full metrics and Plex JSON files that came from this calibration can also be downloaded for easy analysis and manipulation.
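If you’d like to poke at those downloads programmatically, loading them is straightforward. Note that the file names below are placeholders; the actual names and schema come from your own TVCal run and the Platform documentation.

```python
import json

# File names are placeholders; use whatever names your TVCal download produced.
with open("plex.json") as f:
    plex = json.load(f)
with open("metrics.json") as f:
    metrics = json.load(f)

# Survey the top-level structure before digging into specific fields.
print("plex keys:", sorted(plex))
print("metrics keys:", sorted(metrics))
```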

Output Metrics

Summary Statistics

This is where the fun begins! We start with the summary statistics for the entire calibration: object space RMSE and component-wise RMSE. RMSE stands for Root Mean Square Error; here, the error is the set of residuals across all observations related to a single component. For a component that has been appropriately modeled (i.e. there are no un-modeled systematic error sources present), the RMSE represents the typical magnitude of error in observations taken by that component. Ideally, the RMSE should be as small as possible; a small RMSE indicates that the calibration model you chose for your camera fits the input data well.
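To make that concrete, here’s a minimal sketch of computing an RMSE from a component’s reprojection residuals. This is an illustration of the formula, not TVCal’s internal implementation.

```python
import numpy as np

def component_rmse(residuals: np.ndarray) -> float:
    """RMSE over all residual components for one camera.

    residuals: N x 2 array of per-feature reprojection
    residuals (delta_u, delta_v), in pixels.
    """
    # Root Mean Square Error: square, average, then take the root.
    return float(np.sqrt(np.mean(residuals ** 2)))

# Example: three observed features with small residuals.
res = np.array([[0.1, -0.2], [0.05, 0.15], [-0.12, 0.08]])
print(f"RMSE: {component_rmse(res):.4f} px")
```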

This section also documents the posterior variance of the calibration. Sometimes referred to more completely as the "a-posteriori variance factor" or "normalized cost," the posterior variance is a relative measure of the gain/loss of information from the calibration. This term can be helpful, but is often misunderstood. Learn more about its application in the Advanced Topics section of the Tangram Vision Platform Documentation.
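For readers who want the standard definition: in a least-squares adjustment, the a-posteriori variance factor is typically computed as

$$
\hat{\sigma}_0^2 = \frac{\mathbf{v}^\top \mathbf{W} \mathbf{v}}{n - u}
$$

where $\mathbf{v}$ is the vector of residuals, $\mathbf{W}$ is the observation weight matrix, $n$ is the number of observations, and $u$ is the number of estimated parameters (so $n - u$ is the redundancy). A value near 1 suggests the data behaved about as precisely as the weights assumed; values far from 1 hint at over- or under-stated observation precision, or at unmodeled error.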

Clicking on any component in the “Per-Component Results” section expands a detail page for that component (in TVCal parlance, a component is an individual part of a sensor or sensor array, like one of the two cameras in a passive stereo 3D sensor).

Component Detail: Feature Coverage

For cameras, the Per-Component Results include a few great features, including... features (Figure 1 below)! Or more specifically, a map of the feature space as seen from the selected camera component. This is useful as a measure of data coverage in the observable domain of the image. All of calibration is about fitting a model to the available data; the more of the observable domain that is covered, the more confident we can be that our model fit conforms to the whole domain. In other words, full domain coverage prevents overfitting and underfitting of our model to the available data.

In Figure 1 below, the center of the image has good feature coverage, but the edges are sparse. This could prevent a proper fit of key parts of our camera model, e.g. distortion and affinity.

Figure 1: Feature Coverage
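To make "coverage" concrete, here’s a rough sketch of one way to quantify it (an illustration, not TVCal’s internal metric): grid the image and count the fraction of cells that contain at least one detected feature.

```python
import numpy as np

def coverage_fraction(points: np.ndarray, width: int, height: int, cells: int = 8) -> float:
    """Fraction of grid cells containing at least one detected feature.

    points: N x 2 array of (u, v) pixel coordinates.
    """
    u_bins = np.clip((points[:, 0] / width * cells).astype(int), 0, cells - 1)
    v_bins = np.clip((points[:, 1] / height * cells).astype(int), 0, cells - 1)
    occupied = np.zeros((cells, cells), dtype=bool)
    occupied[v_bins, u_bins] = True
    return float(occupied.mean())

# Features clustered near the image center (as in Figure 1) leave the
# edge cells empty, yielding a low coverage fraction.
pts = np.random.normal(loc=[640.0, 360.0], scale=80.0, size=(500, 2))
print(f"coverage: {coverage_fraction(pts, 1280, 720):.0%}")
```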

From here, we can generate a heat map of reprojection errors in radial, tangential, or UV space. This is an easy way to see error trends. For instance, if there’s a red shift or blue shift (positive or negative error) on the edges of the image, that could be an indication of unmodeled affinity. An example of the radial coverage heatmap for a calibration can be seen below (Figure 2).

Figure 2: Radial Error
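For reference, the radial and tangential components plotted in these heatmaps follow from standard geometry: project the Cartesian residual $(\delta u, \delta v)$ onto the radial direction from the principal point $(c_x, c_y)$ and onto its perpendicular.

$$
r = \sqrt{(u - c_x)^2 + (v - c_y)^2}, \quad
\delta r = \frac{\delta u\,(u - c_x) + \delta v\,(v - c_y)}{r}, \quad
\delta t = \frac{\delta v\,(u - c_x) - \delta u\,(v - c_y)}{r}
$$

(Sign conventions for $\delta t$ vary; what matters for reading the heatmaps is the magnitude and the trend.)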

Clicking on any point will highlight that point and show its specific info (the table under the feature space, as shown in Figure 3 below). This also highlights the other features seen in the same image capture. Just like with the larger heatmap, we can use this information to deduce where shortcomings might exist in our calibration model. In Figure 3 below, we can tell that the target captured in this image fit well to the model that we derived... except for that one blue outlier on the left-hand side. In this case, since the outlier sits all by itself, the problem probably lies with the input data or the feature detection algorithm used to extract our object space. There’s a lot to learn from these charts!

Figure 3: Point-Specific Data
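As an aside, a simple way to flag lone outliers like that one in your own analysis (TVCal may use a different criterion) is to compare each feature’s residual magnitude against a multiple of the component RMSE:

```python
import numpy as np

def flag_outliers(residuals: np.ndarray, k: float = 3.0) -> np.ndarray:
    """Return a boolean mask of features with suspiciously large residuals.

    residuals: N x 2 array of (delta_u, delta_v) reprojection residuals.
    A feature is flagged when its residual magnitude exceeds k times the
    overall RMSE, a rough stand-in for a 3-sigma test.
    """
    magnitudes = np.linalg.norm(residuals, axis=1)
    rmse = np.sqrt(np.mean(residuals ** 2))
    return magnitudes > k * rmse
```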

Component Detail: Reprojection Error Analysis

Reprojection errors, more technically referred to as image residuals, are often seen as the only error metric to measure precision within an adjustment. While not the only tool at our disposal, reprojection errors can tell us a lot about the calibration process and provide insight into what image effects were (or were not) properly calibrated for. The charts provided are incredibly useful when comparing calibrations for the same component over time, and likewise can be useful in deciding between different models (e.g. Brown-Conrady vs. Kannala-Brandt distortion, scale affinity vs. no affinity modeling, etc).
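As a refresher on what’s being compared (this is the common OpenCV-style parameterization; exact forms vary by library), Brown-Conrady corrects a normalized image point $(x, y)$ with polynomial radial terms plus decentering (tangential) terms:

$$
\begin{aligned}
x_d &= x\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2 p_1 x y + p_2 (r^2 + 2x^2) \\
y_d &= y\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1 (r^2 + 2y^2) + 2 p_2 x y
\end{aligned}
$$

where $r^2 = x^2 + y^2$. Kannala-Brandt instead models the projection as an odd polynomial in the incidence angle, which tends to suit wide-angle and fisheye lenses better.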

Radial Error - δr vs. r

The δr vs. r graph plots radial reprojection error as a function of radial distance from the principal point. This graph is an excellent way to characterize distortion error, particularly radial distortions.

Figure 4: Radial Reprojection Error

Consider the Expected graph above. This distribution represents a fully calibrated system that has modeled distortion using the Brown-Conrady model. The error is fairly evenly distributed and low, even as one moves away from the principal point of the image.

However, if TVCal had been told not to calibrate for a distortion model, the output would look very different (the Poor Result). Radial error now fluctuates in a sinusoidal pattern, getting worse as we move away from the principal point. Clearly, this camera needs a distortion model of some kind in future calibrations.

Tangential Error - δt vs. t

Like the δr vs. r graph, δt vs. t plots the tangential reprojection error as a function of the tangential (angular) component of the data about the principal point.

This can be a useful plot to determine if any unmodeled tangential (de-centering) distortion exists. The Expected chart below shows an adjustment with tangential distortion correctly calibrated and accounted for. The Poor Result shows the same adjustment without tangential distortion modeling applied.

Figure 5: Tangential Error

Error in U and V

These graphs plot the error along our Cartesian axes (δu or δv) as a function of the distance along that axis (u or v).

Both of these graphs should have their y-axes centered around zero, and should mostly look uniform in nature. The errors at the extreme edges may be larger or more sparse; however, the errors should not have any noticeable trend.

Figure 6: Cartesian Axes Error
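If you want to quantify "no noticeable trend" yourself, one illustrative check (a sketch, not a TVCal feature) is to fit a line to δu as a function of u and look at the slope:

```python
import numpy as np

def residual_trend(u: np.ndarray, du: np.ndarray) -> float:
    """Least-squares slope of the delta-u residuals as a function of u.

    A slope near zero is consistent with the 'no noticeable trend'
    expectation; a clearly non-zero slope hints at an unmodeled
    systematic effect.
    """
    slope, _intercept = np.polyfit(u, du, deg=1)
    return float(slope)

# Example: residuals that grow with u betray a systematic trend.
u = np.linspace(0.0, 1280.0, 200)
du = 0.001 * u + np.random.normal(0.0, 0.1, u.size)
print(f"slope: {residual_trend(u, du):.5f} px of error per px of u")
```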

💡 There are many more indicators of unmodeled intrinsics found in these charts. Visit the TVCal documentation for the Tangram Vision Platform for more insights.

Calibration Inputs

Last but not least, we come to calibration inputs. These are the datasets and component parameters that TVCal uses to generate the calibration output in the first place!

For example, the input Plex visualized below in Figure 7 reflects a calibration before correction via TVCal; after all, camera origins don’t usually blend into one another, right? With TVCal, a user can immediately deduce that the calibration is incorrect, both through the visualization and through the input metrics that correlate to the visualized Plex.

Thanks to TVCal’s robust calibration capabilities, even this forlorn calibration served as a sufficient starting point from which TVCal could derive a correction.

Figure 7: An Uncalibrated Input Plex

💡 Don't have any data ready to process? No worries; take this test dataset for a spin through the User Hub Quickstart.

Try It Yourself

Tangram Vision’s TVCal goes beyond simple single sensor calibration. If you’re looking for a comprehensive calibration solution for your multimodal robotics or AV/ADAS sensor suite, whether in the field or fresh off of the production line, we’re ready for you. Request a demo by signing up at hub.tangramvision.com.

Our goal at Tangram Vision is to fast-track perception system development by providing better tools for the job. The progress shown here with TVCal is just the start; we’re building a comprehensive suite for faultless, plug-and-play operation of even the most complex sensor systems.

If this sounds like something that might save you or your team months to years of time and effort, drop us a line! And if saving perception engineers years of effort sounds like your calling, then I have even better news: we’re hiring! Check out our careers page for open positions.
