
# One-to-many Sensor Trouble, Part 2


## Phase 2: Updating Your Prediction

### From state to sensors

While we were busy predicting, the GPS on our RC car was giving us positional data updates. These sensor measurements $$\textbf{z}_{t}$$ give us valuable information about our world.

However, $$\textbf{z}_{t}$$ and our state vector $$\textbf{x}_{t+1}$$ may not actually correspond; our measurements might be in one space, and our state in another! For instance, what if we're measuring our state in meters, but all of our measurements are in feet? We need some way to remedy this.

Let's handle this situation by converting our state vector into our measurement space using an observation matrix $$\textbf{H}$$:

$$\tag{3.1}\mu_{exp} = H \textbf{x}_{t+1}$$

$$\tag{3.2}\Sigma_{exp} = H \textbf{P}_{t+1} H^T$$

These equations represent the mean $$\mu_{exp}$$ and covariance $$\Sigma_{exp}$$ of our predicted measurements.

For our RC car, we're going from meters to feet, in both position and velocity. 1 meter is around 3.28 ft, so we would shape $$H$$ to reflect this:

$$\tag{3.3}\mu_{exp} =\begin{bmatrix}3.28 & 0 \\ 0 & 3.28\end{bmatrix}\begin{bmatrix}p_{t+1} \\ v_{t+1}\end{bmatrix}$$

$$\tag{3.4}\Sigma_{exp} =\begin{bmatrix}3.28 & 0 \\ 0 & 3.28\end{bmatrix}\textbf{P}_{t+1}\begin{bmatrix}3.28 & 0 \\ 0 & 3.28\end{bmatrix}$$

...leaving us with predicted measurements that we can now compare to our sensor measurements $$\textbf{z}_{t}$$. Note that $$H$$ is entirely dependent on what's in your state and what's being measured, so it can change from problem to problem.

Fig. 1: Moving our state from state space (meters, bottom left PDF) to measurement space (feet, top right PDF).
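Equations 3.1 through 3.4 are easy to try out directly. Here's a minimal NumPy sketch; the values for $$\textbf{x}_{t+1}$$ and $$\textbf{P}_{t+1}$$ are made up for illustration:

```python
import numpy as np

# Predicted state in meters: position and velocity of the RC car.
# (These numbers are invented for the example.)
x = np.array([10.0, 2.0])          # [p, v] in m, m/s
P = np.array([[0.5, 0.1],
              [0.1, 0.3]])         # state covariance

# Observation matrix H: converts meters to feet (1 m ≈ 3.28 ft).
H = np.array([[3.28, 0.0],
              [0.0,  3.28]])

# Eq. 3.1 and 3.2: project the state into measurement space.
mu_exp = H @ x                     # predicted measurement mean, in ft
sigma_exp = H @ P @ H.T            # predicted measurement covariance, in ft²
```

Since this particular $$H$$ is just a scaled identity, `sigma_exp` works out to $$3.28^2\,\textbf{P}_{t+1}$$, which matches the intuition that converting units stretches the covariance too.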

Our RC car example is a little simplistic, but this ability to translate our predicted state into predicted measurements is a big part of what makes Kalman filters so powerful. We can effectively compare our state with any and all sensor measurements, from any sensor. That's powerful stuff!

💡 The behavior of $$H$$ is important. Vanilla Kalman filters use a linear $$F_{t}$$ and $$H$$; that is, there is only one set of equations relating the estimated state to the predicted state ($$F_{t}$$), and predicted state to predicted measurement ($$H$$).

If the system is non-linear, then this assumption doesn't hold. $$F_{t}$$ and $$H$$ might change every time our state does! This is where innovations like the Extended Kalman Filter (EKF) and the Unscented Kalman Filter come into play. EKFs are the de facto standard in sensor fusion for this reason.

Let's add one more term for good measure: $$R_{t}$$, our sensor measurement covariance. This represents the noise from our measurements. Everything is uncertain, right? It never ends.

### The Beauty of PDFs

Since we converted our state space into the measurement space, we now have 2 comparable Gaussian PDFs:

• $$\mu_{exp}$$ and $$\Sigma_{exp}$$, which make up the Gaussian PDF for our predicted measurements
• $$\textbf{z}_{t}$$ and $$R_{t}$$, which make up the PDF for our sensor measurements

The strongest probability for our future state is the overlap between these two PDFs. How do we get this overlap?

Fig. 2: Our predicted state in measurement space is in red. Our measurements z are in blue. Notice how our measurements z have a much smaller covariance; this is going to come in handy.

We multiply them together! The product of two Gaussian functions is just another Gaussian function. Even better, this common Gaussian has a smaller covariance than either the predicted PDF or the sensor PDF, meaning that our state is now much more certain.
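You can see this covariance-shrinking effect with two one-dimensional Gaussians (the means and variances below are invented; $$H = 1$$ so both PDFs live in the same space):

```python
# Two 1-D Gaussian PDFs: the prediction and the measurement.
# (Invented numbers for illustration.)
mu1, var1 = 5.0, 4.0    # predicted measurement: wide, uncertain
mu2, var2 = 6.0, 1.0    # sensor measurement: narrow, confident

# Eqs. 3.5 and 3.6 with H = 1: the product of the two PDFs
# is another Gaussian.
mu_final = mu1 + var1 * (mu2 - mu1) / (var1 + var2)    # ≈ 5.8
var_final = var1 - var1 * var1 / (var1 + var2)         # ≈ 0.8
```

The fused variance (≈ 0.8) is smaller than *either* input variance (4.0 and 1.0), and the fused mean is pulled toward the more certain measurement. That's the whole update step in miniature.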

ISN'T THAT NEAT.

### Update Step, Solved.

We're not out of the woods yet; we still need to derive the math! Suffice it to say... it's a lot. The basic gist is that the product of two Gaussian functions is itself a Gaussian function. Let's do this with two PDFs now: Gaussian functions with means $$\mu_{1}, \mu_{2}$$ and variances $$\sigma_{1}^2, \sigma_{2}^2$$. Multiplying these two PDFs leaves us with our final Gaussian PDF, and thus our final mean and covariance terms:

$$\tag{3.5}\mu_{final} =\mu_{1} +\frac{H\sigma_{1}^2(\mu_{2} - H\mu_{1})}{H^2 \sigma_{1}^2 + \sigma_{2}^2}$$

$$\tag{3.6}\sigma_{final}^2 =\sigma_{1}^2 -\frac{H\sigma_{1}^2}{H^2 \sigma_{1}^2 + \sigma_{2}^2}H\sigma_{1}^2$$

When we substitute in our derived state and covariance matrices, these equations represent the update step of a Kalman filter.

$$\tag{3.7}\Rightarrow\textbf{x}_{final} =\textbf{x}_{t+1} +\frac{H\textbf{P}_{t+1}(\textbf{z}_{t} - H\textbf{x}_{t+1})}{H^2 \textbf{P}_{t+1} + \textbf{R}_{t}}$$

$$\tag{3.8}\Rightarrow\textbf{P}_{final} =\textbf{P}_{t+1} -\frac{H\textbf{P}_{t+1}}{H^2 \textbf{P}_{t+1} + \textbf{R}_{t}}H\textbf{P}_{t+1}$$

💡 See reference (Bromiley, P.A.) for a good explanation on how to multiply two PDFs and derive the above. It takes a good page to write out; you've been warned.

This is admittedly pretty painful to read as-is. We can simplify this by defining the Kalman Gain $$K_{t}$$:

$$\tag{3.9}K_{t} =\frac{H\sigma_{1}^2}{H^2 \sigma_{1}^2 + \sigma_{2}^2}$$

With $$K_{t}$$ in the mix, we find that our equations are much kinder on the eyes:

$$\tag{3.10}\textbf{x}_{final} =\textbf{x}_{t+1} +K_{t}(\textbf{z}_{t} - H\textbf{x}_{t+1})$$

$$\tag{3.11}\textbf{P}_{final} =\textbf{P}_{t+1} -K_{t}H\textbf{P}_{t+1}$$
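In full matrix form, the division above becomes a matrix inverse and $$H^2$$ becomes $$H(\cdot)H^T$$, giving the standard Kalman gain $$K_t = \textbf{P}_{t+1}H^T(H\textbf{P}_{t+1}H^T + \textbf{R}_t)^{-1}$$. A NumPy sketch of the whole update step, reusing the meters-to-feet $$H$$ from earlier (the state, covariance, noise, and measurement values are all made up):

```python
import numpy as np

# Predicted state and covariance from the prediction step (invented values).
x_pred = np.array([10.0, 2.0])
P_pred = np.array([[0.5, 0.1],
                   [0.1, 0.3]])

H = np.array([[3.28, 0.0],        # meters -> feet, as before
              [0.0,  3.28]])
R = np.eye(2) * 0.2               # sensor measurement covariance R_t (assumed)
z = np.array([33.5, 6.4])         # GPS measurement in feet (invented)

# Kalman gain, matrix form of eq. 3.9.
S = H @ P_pred @ H.T + R          # innovation covariance
K = P_pred @ H.T @ np.linalg.inv(S)

# Eqs. 3.10 and 3.11: fold the measurement into the state.
x_final = x_pred + K @ (z - H @ x_pred)
P_final = P_pred - K @ H @ P_pred
```

Note how every diagonal entry of `P_final` comes out smaller than the corresponding entry of `P_pred`: folding in the measurement always makes us more certain.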

We've done it! Combined, $$\textbf{x}_{final}$$ and $$\textbf{P}_{final}$$ create our final Gaussian distribution, the one that crunches all of that data to give us our updated state. See how our updated state spikes in probability (the blue spike in Fig. 3). That's the power of Kalman Filters!

Fig. 3: Original state in green. Predicted state in red. Updated final state in blue. Look at how certain we are now! Such. Wow.

## What now?

Well... do it again! The Kalman filter is recursive, meaning that it uses its output as the next input to the cycle. In other words, $$\textbf{x}_{final}$$ is your new $$\textbf{x}_{t}$$! The cycle continues with the next prediction step.
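The whole recursive loop fits in one small function. Here's a sketch that runs two full predict-then-update cycles; the state transition $$F$$, process noise $$Q$$, and measurements are invented for illustration:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict + update cycle; the outputs feed the next call."""
    # Predict: push the state and covariance forward in time.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: fold in the measurement (eqs. 3.9-3.11, matrix form).
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = P_pred - K @ H @ P_pred
    return x_new, P_new

# Constant-velocity model with dt = 1 s; measurements in feet.
# (All matrix values below are assumptions for the example.)
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.eye(2) * 3.28
Q = np.eye(2) * 0.01
R = np.eye(2) * 0.2

x, P = np.array([0.0, 1.0]), np.eye(2)
for z in [np.array([3.4, 3.2]), np.array([6.7, 3.3])]:
    x, P = kalman_step(x, P, z, F, H, Q, R)   # x_final becomes the next x_t
```

Each trip through the loop is exactly the predict/update pair from this series: the output state and covariance become the inputs to the next cycle.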

Kalman filters are great for all sorts of reasons:

• They extend to anything that can be modeled and measured: automotive systems, touchscreens, econometrics, and so on.
• Your sensor data can come from anywhere. As long as there's a connection between state and measurement, new data can be folded in.
• Kalman filters also allow for the selective use of data. Did your sensor fail? Don't count that sensor for this iteration! Easy.
• They are a good way to model prediction. After completing one pass of the prediction step, just do it again (and again, and again) for an easy temporal model.

Of course, if this is too much and you'd rather do… anything else, Tangram Vision is developing solutions to these very same problems! We're creating tools that help any vision-enabled system operate at its best. If you like this article, you'll love seeing what we're up to.

### Code Examples and Graphics

The code used to render these graphs and figures is hosted on Tangram Vision's public repository for the blog. Head down, check it out, and play around with the math yourself! If you can improve on our implementations, even better; we might put it in here. And be sure to tweet at us with your improvements.

Check it out here: Tangram-Vision/Tangram-Vision-Blog

If this article didn't help, here are some other sources that dive into Kalman Filters in similar ways:

1. Faragher, R. (2012, September). Understanding the Basis of the Kalman Filter. https://drive.google.com/file/d/1nVtDUrfcBN9zwKlGuAclK-F8Gnf2M_to/view
2. Bzarg. How a Kalman filter works, in pictures. https://www.bzarg.com/p/how-a-kalman-filter-works-in-pictures/
3. Teammco, R. Kalman Filter Simulation. https://www.cs.utexas.edu/~teammco/misc/kalman_filter/
4. Bromiley, P.A. Products and Convolutions of Gaussian Probability Density Functions.  https://bit.ly/3nGDewe [PDF]
