Dehann Fourie

Is the IEKF better than Graph-centric Optimization/Inference?

Updated: Nov 4

Pop Quiz


What is the difference between the Iterated Extended Kalman Filter (IEKF) [Zarchan et al. 2005] and augmented-state non-linear optimization routines (e.g. Newton's method)?



Answer

The IEKF (similar to a regular Kalman filter) preserves the usual prediction step:

x_k^- = f( x_{k-1}^+, u_k ),        P_k^- = F_k P_{k-1}^+ F_k^T + Q_k        (1)

but tries to improve on linearization errors (of the observation function) during the measurement updates by iterating the Kalman correction step (dropping the time index k for brevity):

x_0 = x^-
H_i = dh/dx evaluated at x_i
K_i = P^- H_i^T ( H_i P^- H_i^T + R )^{-1}
x_{i+1} = x^- + K_i [ z - h(x_i) - H_i ( x^- - x_i ) ]        (2)

with x^+ = x_N and P^+ = ( I - K_N H_N ) P^- at the final iterate N.

Notice how the Kalman gain depends on whatever the state estimate is at that particular iterate. At first, the Kalman gain and innovation are only a first-order guess from the uncorrected state (x^-). Subsequent cycles are assumed to converge towards the best possible state vector (x^+) along with the linearization estimates (dh/dx). Pay close attention: only the linearization errors are driven down; the same measurement information is not “baked in” more than once.


For the IEKF the measurement correction step continues until some stopping criterion is met, e.g. the state stabilizes or a fixed number of iterations is reached. From there the usual Kalman prediction step continues as normal.
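
For concreteness, here is a minimal numerical sketch of the iterated correction in Eq. (2), assuming a generic observation function h(x) with Jacobian H(x). The function name iekf_update and its arguments are illustrative only, not an excerpt from any particular library:

```python
import numpy as np

def iekf_update(x_pred, P_pred, z, h, H, R, max_iters=10, tol=1e-6):
    """Iterated EKF measurement update (Eq. 2): re-linearize h at each pass."""
    x_pred = np.asarray(x_pred, dtype=float)
    x_i = x_pred.copy()
    for _ in range(max_iters):
        H_i = H(x_i)                              # re-linearize dh/dx at the current iterate
        S = H_i @ P_pred @ H_i.T + R              # innovation covariance
        K = P_pred @ H_i.T @ np.linalg.inv(S)     # Kalman gain for this iterate
        # The innovation is always referenced back to the predicted state x_pred,
        # so the same measurement is not "baked in" twice -- only the
        # linearization point moves between iterations.
        x_next = x_pred + K @ (z - h(x_i) - H_i @ (x_pred - x_i))
        if np.linalg.norm(x_next - x_i) < tol:    # state has stabilized
            x_i = x_next
            break
        x_i = x_next
    P_post = (np.eye(len(x_pred)) - K @ H_i) @ P_pred   # covariance at final linearization
    return x_i, P_post
```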


But Wait!

Why introduce another (deeper) inner optimization loop when the Kalman filter design is already recursive? This seems duplicative.


NavAbility’s Opinion:

The above IEKF design is a “bad implementation” of an augmented-state data-fusion solution. We hope to convince you that there are much better ways to estimate the augmented state vector (x^+), ultimately through compute recycling, marginalization, and fluid re-linearization. This blog introduces one of the many motivations for graph-based data-fusion processing.


Contrasting Example


To simplify this blog, let’s focus on parametric graph optimization (although we have much more to offer in terms of non-Gaussian inference).

Let’s assume an augmented-state vector (x) of positions and orientations needed in some robotics data-fusion problem. We can write the optimal condition of the system as:

x^* = argmin_x  Σ_{i=1..n}  || z_i - h_i(x) ||^2_{Σ_i}        (3)

We have n observations from the data, i.e. the predicted measurement functions h_i(x) with associated measurements z_i and covariances Σ_i.


The widely used workhorse in maximum-likelihood estimation is the iterated numerical sequence:

x_{k+1} = x_k + α_k Δx_k,    where  ( J_k^T Σ^{-1} J_k ) Δx_k = J_k^T Σ^{-1} ( z - h(x_k) )   and   J_k = dh/dx |_{x_k}        (4)

Here the state x is updated repeatedly as though taking steps, where z and h(x) stack all n observations, the step direction comes from the gradients (Jacobians) of the observation functions, and the step size α_k comes from a line search. Notice this is akin to the IEKF measurement cycle above.


The optimum is reached when the sequence achieves some termination criterion, e.g. a fixed number of iterations, or when additional steps make no meaningful change to the state, i.e.

|| x_{k+1} - x_k || < ε        (5)

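As a minimal sketch of Eqs. (4)-(5), here is an unweighted (unit-covariance) Gauss-Newton loop; residual and jacobian are placeholder callables standing in for whatever observation model the problem supplies:

```python
import numpy as np

def gauss_newton(x0, residual, jacobian, max_iters=50, tol=1e-8):
    """Gauss-Newton iteration of Eq. (4) with the stopping test of Eq. (5)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iters):
        r = residual(x)                   # stacked z - h(x) over all n observations
        J = jacobian(x)                   # stacked dh/dx, re-linearized at the current x
        # Solve the (unit-weight) normal equations of Eq. (4) for the step.
        dx = np.linalg.solve(J.T @ J, J.T @ r)
        x = x + dx                        # a line search could scale dx here
        if np.linalg.norm(dx) < tol:      # Eq. (5): no meaningful change in the state
            break
    return x
```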
Conclusion, TL;DR

Eq. (2) and Eq. (4) are basically equivalent, barring some detail on convergence speed & criteria, robustness, etc.
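
One way to make the equivalence concrete (a well-known result; see e.g. Bell and Cathey, 1993, “The iterated Kalman filter update as a Gauss-Newton method”): each pass of the IEKF correction in Eq. (2) is exactly a Gauss-Newton step of the form in Eq. (4) applied to the combined objective

J(x) = ( x - x^- )^T (P^-)^{-1} ( x - x^- )  +  ( z - h(x) )^T R^{-1} ( z - h(x) ),

which fuses the predicted prior with the new measurement. In other words, the filter already runs a small augmented-state optimization at every measurement update; the graph formulation simply makes that structure explicit.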


Graphs provide a natural language for developing, managing, and computing augmented-state optimization. Proper compute and data management (via graphs) is not only good engineering practice; in our opinion it also leads to far superior data-processing algorithms, software stacks, and ultimately better equipment performance and project lifecycle cost.



This and More:


These and many other reasons make graph-centric design a serious contender for data-fusion solutions. We invite you to get in touch with us at WhereWhen.ai to see if NavAbility Accelerator can enable your team to move faster and with more confidence.


Some of the other graph-centric benefits include:

  • Computationally bounded graph optimizations,

  • Beyond Gaussian graph optimization when models and data don’t match,

  • First-class distributed compute and data management built directly into the graph model,

  • Much higher levels of code reuse,

  • Reducing the Kalman filter’s O(n^3) compute complexity to O(n log n) by decomposing the sparse inverse covariance matrix, a.k.a. the bi-adjacency graph (see the sketch below).
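
To illustrate that last point, here is a small self-contained sketch (not NavAbility code): the inverse covariance (information) matrix of a chain-like, odometry-style problem is sparse and mirrors the graph structure, so it can be solved through a sparse factorization instead of ever forming the dense O(n^3) inverse.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000                                    # number of augmented states
main = 2.0 * np.ones(n)                     # each state's own information
off = -1.0 * np.ones(n - 1)                 # odometry-style links between neighbours
Lambda = sp.diags([off, main, off], [-1, 0, 1], format="csc")   # sparse information matrix
eta = np.random.randn(n)                    # information vector (right-hand side)

x_sparse = spla.spsolve(Lambda, eta)        # sparse factorization, exploits graph structure
x_dense = np.linalg.solve(Lambda.toarray(), eta)   # dense equivalent with O(n^3) scaling
assert np.allclose(x_sparse, x_dense)       # same answer, very different cost profile
```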
