Kalman smoothing and block tridiagonal systems: new connections and numerical stability results


Aleksandr Y. Aravkin, Bradley M. Bell, James V. Burke, Gianluigi Pillonetto

arXiv v3 [math.NA], 4 Jul 2013

Abstract—The Rauch-Tung-Striebel (RTS) and the Mayne-Fraser (MF) algorithms are two of the most popular smoothing schemes to reconstruct the state of a dynamic linear system from measurements collected on a fixed interval. Another (less popular) approach is the Mayne (M) algorithm, introduced in his original paper under the name of Algorithm A. In this paper, we analyze these three smoothers from an optimization and algebraic perspective, revealing new insights on their numerical stability properties. In doing this, we re-interpret classic recursions as matrix decomposition methods for block tridiagonal matrices. First, we show that the classic RTS smoother is an implementation of the forward block tridiagonal (FBT) algorithm (also known as the Thomas algorithm) for particular block tridiagonal systems. We study the numerical stability properties of this scheme, connecting the condition number of the full system to properties of the individual blocks encountered during the standard recursion. Second, we study the M smoother, and prove it is equivalent to a backward block tridiagonal (BBT) algorithm with a stronger stability guarantee than RTS. Third, we illustrate how the MF smoother solves a block tridiagonal system, and prove that it has the same numerical stability properties as RTS (but not those of M). Finally, we present a new hybrid RTS/M (FBT/BBT) smoothing scheme, which is faster than MF, and has the same numerical stability guarantees as RTS and MF.

(Aleksandr Y. Aravkin, saravkin@us.ibm.com, is with the IBM T.J. Watson Research Center, Yorktown Heights, NY. Bradley M. Bell, bradbell@apl.washington.edu, is with the Applied Physics Lab, University of Washington, Seattle, WA. James V. Burke, burke@math.washington.edu, is with the Department of Mathematics, University of Washington, Seattle, WA. Gianluigi Pillonetto, giapi@dei.unipd.it, is with the Dipartimento di Ingegneria dell'Informazione, University of Padova, Padova, Italy.)

I. INTRODUCTION

Kalman filtering and smoothing methods form a broad category of computational algorithms used for inference on noisy dynamical systems. Since their invention [18] and early development [19], these algorithms have become a gold standard in a range of applications, including space exploration, missile guidance systems, general tracking and navigation, and weather prediction. Numerous books and papers have been written on these methods and their extensions, addressing modifications for use in nonlinear systems [17], robustification against unknown models [27], and smoothing data over time intervals [1], [7]. Other studies regard Kalman smoothing with unknown parameters [9], constrained Kalman smoothing [8], robustness against bad measurements [13], [26], [12], [29], [24], [21], [20], [6], and sudden changes in state [3], [4], [15].

In this paper, we consider a linear Gaussian model specified as follows:

$$x_1 = x_0 + w_1,$$
$$x_k = G_k x_{k-1} + w_k, \quad k = 2, \ldots, N,$$
$$z_k = H_k x_k + v_k, \quad k = 1, \ldots, N, \qquad \text{(I.1)}$$

where $x_0$ is known, $x_k, w_k \in \mathbb{R}^n$, $z_k, v_k \in \mathbb{R}^{m(k)}$ (measurement dimensions can vary between time points), $G_k \in \mathbb{R}^{n \times n}$, and $H_k \in \mathbb{R}^{m(k) \times n}$. Finally, $w_k$ and $v_k$ are mutually independent zero-mean Gaussian random variables with known positive definite covariance matrices $Q_k$ and $R_k$, respectively. Extensions have been proposed that work with singular $Q_k$ and $R_k$ (see e.g. [10], [2], [11]), but in this paper we confine our attention to the nonsingular case.
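The model (I.1) is easy to experiment with numerically. The following minimal sketch (ours, not from the paper; the function name and list-of-matrices interface are illustrative conventions) samples a trajectory and measurements from (I.1) with NumPy:

```python
import numpy as np

def simulate_model(x0, G, H, Q, R, rng=None):
    """Sample states and measurements from model (I.1).

    G, H, Q, R are length-N lists of matrices; G[0] is unused because
    x_1 = x_0 + w_1.  Returns the states x_1..x_N and measurements z_1..z_N.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(x0)
    xs, zs, x_prev = [], [], x0
    for k in range(len(G)):
        A = np.eye(n) if k == 0 else G[k]          # x_1 = x_0 + w_1
        x = A @ x_prev + rng.multivariate_normal(np.zeros(n), Q[k])
        z = H[k] @ x + rng.multivariate_normal(np.zeros(H[k].shape[0]), R[k])
        xs.append(x); zs.append(z); x_prev = x
    return xs, zs
```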
a) Kalman smoothing algorithms: To obtain the minimum variance estimates of the states given the full measurement sequence $\{z_1, \ldots, z_N\}$, two of the most popular Kalman smoothing schemes are the Rauch-Tung-Striebel (RTS) and the Mayne-Fraser (MF) algorithm based on the two-filter formula. RTS was derived in [28], and computes the state estimates by means of forward-backward recursions which are sequential in nature (first run forward, then run backward). An elegant and simple derivation of RTS using projections onto spaces spanned by suitable random variables can be found in [2]. MF was proposed by [25], [16]. It computes the smoothed estimate as a combination of forward and backward Kalman filtering estimates, which can be run independently (and, in particular, in parallel). A derivation of MF from basic principles, where the backward recursions are related to maximum likelihood state estimates, can be found in [30]. A third algorithm we study (and dub M) was also proposed by Mayne. It is less popular than RTS and MF, and appears in [25] under the name of Algorithm A (while MF corresponds to Algorithm B in the same paper). M is similar to RTS, but the recursion is first run backward in time, and then forward.

A large body of theoretical results can be found in the literature on smoothing, and there are many interpretations of estimation schemes under different perspectives, e.g. see [23]. However, clear insights on the numerical stability properties of RTS, MF and M are missing. In order to fill this gap, we first analyze these three smoothers from an optimization and algebraic perspective, interpreting all of them as different ways of solving block tridiagonal systems. Then, we exploit this re-interpretation, together with the linear algebra of block tridiagonal systems, to obtain numerical stability properties of the smoothers. It turns out that RTS and MF share the same numerical stability properties, whereas M has a stronger stability guarantee.

b) Outline of the paper: The paper proceeds as follows. In Section II we review Kalman smoothing, and formulate it as a least squares problem where the underlying system is block tridiagonal. In Section III we obtain bounds on the eigenvalues of these systems in terms of the behavior of the individual blocks, and show how these bounds are related to the stability of Kalman smoothing formulations. In Section IV we show that the classic RTS smoother implements the forward block tridiagonal (FBT) algorithm, also known as the Thomas algorithm, for the system (II.7). We demonstrate a flaw in the stability analysis for FBT given in [9], and prove a new, more powerful stability result showing that any stable system from Section III can be solved using the forward algorithm. In Section V, we introduce the backward block tridiagonal (BBT) algorithm, and prove its equivalence to M (i.e. Algorithm A in [25]). We show that M has a unique numerical stability guarantee: in contrast to what happens with RTS, the eigenvalues of the individual blocks generated by M during the backward and forward recursions are actually independent of the condition number of the full system. A numerical example illustrating these ideas is given in Section VI. Next, in Section VII, we discuss the MF smoother, elucidating how it solves the block tridiagonal system, and showing that the two-filter formula has the same numerical stability properties as RTS but does not have the numerical stability guarantee of M. Finally, in Section VIII we propose a completely new efficient algorithm for block tridiagonal systems. We conclude with a discussion of these results and their consequences.

II. KALMAN SMOOTHING AND BLOCK TRIDIAGONAL SYSTEMS

Considering model (I.1) and using Bayes' theorem, we have

$$p(\{x_k\} \mid \{z_k\}) \propto p(\{z_k\} \mid \{x_k\})\, p(\{x_k\}) = \prod_{k=1}^N p(v_k)\, p(w_k), \qquad \text{(II.1)}$$

and therefore the likelihood of the entire state sequence $\{x_k\}$ given the entire measurement sequence $\{z_k\}$ is proportional to

$$\prod_{k=1}^N p(v_k)\, p(w_k) \propto \exp\!\Big( -\tfrac12 \sum_{k=1}^N \big[ (z_k - H_k x_k)^T R_k^{-1} (z_k - H_k x_k) + (x_k - G_k x_{k-1})^T Q_k^{-1} (x_k - G_k x_{k-1}) \big] \Big). \qquad \text{(II.2)}$$

An equivalent (and better) formulation of (II.2) is minimizing its negative log posterior:

$$\min_{\{x_k\}} f(\{x_k\}) := \sum_{k=1}^N \tfrac12 (z_k - H_k x_k)^T R_k^{-1} (z_k - H_k x_k) + \tfrac12 (x_k - G_k x_{k-1})^T Q_k^{-1} (x_k - G_k x_{k-1}). \qquad \text{(II.3)}$$

To simplify the problem, we introduce data structures that capture the entire state sequence, measurement sequence, covariance matrices, and initial conditions. Given a sequence of column vectors $\{u_k\}$ and matrices $\{T_k\}$, we use the notation

$$\mathrm{vec}(\{u_k\}) = \begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_N \end{bmatrix}, \qquad \mathrm{diag}(\{T_k\}) = \begin{bmatrix} T_1 & 0 & \cdots & 0 \\ 0 & T_2 & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & T_N \end{bmatrix}. \qquad \text{(II.4)}$$

We make the following definitions:

$$R = \mathrm{diag}(\{R_k\}), \quad Q = \mathrm{diag}(\{Q_k\}), \quad H = \mathrm{diag}(\{H_k\}), \quad x = \mathrm{vec}(\{x_k\}), \quad \zeta = \mathrm{vec}(\{x_0, 0, \ldots, 0\}), \quad z = \mathrm{vec}(\{z_1, z_2, \ldots, z_N\}),$$

$$G = \begin{bmatrix} I & & & 0 \\ -G_2 & I & & \\ & \ddots & \ddots & \\ 0 & & -G_N & I \end{bmatrix}. \qquad \text{(II.5)}$$

With the definitions in (II.4) and (II.5), problem (II.3) can be written as

$$\min_x f(x) = \tfrac12 \|Hx - z\|^2_{R^{-1}} + \tfrac12 \|Gx - \zeta\|^2_{Q^{-1}}, \qquad \text{(II.6)}$$

where $\|a\|^2_M = a^T M a$. It is well known that finding the MAP estimate is equivalent to a least-squares problem, but this derivation makes the structure fully transparent. We now write down the linear system that needs to be solved in order to find the solution to (II.6):

$$(H^T R^{-1} H + G^T Q^{-1} G)\, x = H^T R^{-1} z + G^T Q^{-1} \zeta. \qquad \text{(II.7)}$$

The linear system in (II.7) has a very special structure: it is a symmetric positive definite block tridiagonal matrix. To see that it is positive definite, note that $G$ is nonsingular (for any models $G_k$) and $Q$ is positive definite by assumption. Direct computation shows that

$$H^T R^{-1} H + G^T Q^{-1} G = \begin{bmatrix} D_1 & A_2^T & & 0 \\ A_2 & D_2 & A_3^T & \\ & \ddots & \ddots & \ddots \\ 0 & & A_N & D_N \end{bmatrix}, \qquad \text{(II.8)}$$

with $A_k \in \mathbb{R}^{n \times n}$ and $D_k \in \mathbb{R}^{n \times n}$ defined as follows:

$$A_k = -Q_k^{-1} G_k, \qquad D_k = Q_k^{-1} + G_{k+1}^T Q_{k+1}^{-1} G_{k+1} + H_k^T R_k^{-1} H_k, \qquad \text{(II.9)}$$

with $G_{N+1}^T Q_{N+1}^{-1} G_{N+1}$ defined to be the null $n \times n$ matrix.
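For concreteness, the sketch below (ours, not the authors') assembles the blocks (II.9) and the right-hand side of (II.7) from the Kalman data, in the conventions of the generic block tridiagonal system (IV.1) used later:

```python
import numpy as np

def smoothing_system(G, H, Q, R, z, x0):
    """Assemble the blocks (II.9) and the right-hand side of (II.7).

    Returns (b, c, r): the diagonal blocks D_k, the subdiagonal blocks
    A_k = -Q_k^{-1} G_k (c[0] is a placeholder), and the right-hand-side
    blocks, matching the system (IV.1) of Section IV.
    """
    N = len(G)
    Qi = [np.linalg.inv(Qk) for Qk in Q]
    Ri = [np.linalg.inv(Rk) for Rk in R]
    b, c, r = [], [None], []
    for k in range(N):
        Dk = Qi[k] + H[k].T @ Ri[k] @ H[k]
        if k + 1 < N:                                # G_{N+1}^T Q_{N+1}^{-1} G_{N+1} := 0
            Dk = Dk + G[k + 1].T @ Qi[k + 1] @ G[k + 1]
        b.append(Dk)
        if k >= 1:
            c.append(-Qi[k] @ G[k])                  # A_k = -Q_k^{-1} G_k
        rk = H[k].T @ Ri[k] @ z[k]
        if k == 0:
            rk = rk + Qi[0] @ x0                     # first block of G^T Q^{-1} zeta
        r.append(rk)
    return b, c, r
```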
This block tridiagonal structure was noted early on in [31], [14], [32]. These systems also arise in many recent extended formulations, see e.g. [9], [8], [4].

III. CHARACTERIZING BLOCK TRIDIAGONAL SYSTEMS

Consider systems of the form

$$E = g^T q^{-1} g, \qquad \text{(III.1)}$$

where

$$q = \mathrm{diag}(\{q_1, \ldots, q_N\}), \qquad g = \begin{bmatrix} I & & & 0 \\ g_2 & I & & \\ & \ddots & \ddots & \\ 0 & & g_N & I \end{bmatrix}.$$

Let $\lambda_{\min}$, $\lambda_{\max}$, and $\sigma_{\min}$, $\sigma_{\max}$ denote the minimum and maximum eigenvalues and singular values, respectively. Simple bounds on the lower and upper eigenvalues of $E$ are derived in the following theorem.

Theorem 3.1:

$$\frac{\sigma_{\min}^2(g)}{\lambda_{\max}(q)} \le \lambda_{\min}(E) \le \lambda_{\max}(E) \le \frac{\sigma_{\max}^2(g)}{\lambda_{\min}(q)}. \qquad \text{(III.2)}$$

Proof: For the upper bound, note that for any vector $v$,

$$v^T g^T q^{-1} g v \le \lambda_{\max}(q^{-1}) \|gv\|^2 \le \frac{\sigma_{\max}^2(g)}{\lambda_{\min}(q)} \|v\|^2.$$

Applying this inequality to a unit eigenvector for the maximum eigenvalue of $E$ gives the result. The lower bound is obtained analogously:

$$v^T g^T q^{-1} g v \ge \lambda_{\min}(q^{-1}) \|gv\|^2 \ge \frac{\sigma_{\min}^2(g)}{\lambda_{\max}(q)} \|v\|^2.$$

Applying this inequality to a unit eigenvector for the minimum eigenvalue of $E$ completes the proof.

From this theorem, we get a simple bound on the condition number $\kappa(E)$:

$$\kappa(E) = \frac{\lambda_{\max}(E)}{\lambda_{\min}(E)} \le \frac{\lambda_{\max}(q)\, \sigma_{\max}^2(g)}{\lambda_{\min}(q)\, \sigma_{\min}^2(g)}. \qquad \text{(III.3)}$$

Since we typically have bounds on the eigenvalues of $q$, all that remains is to characterize the singular values of $g$ in terms of the individual $g_k$. This is done in the next result, which uses the relation

$$g^T g = \begin{bmatrix} I + g_2^T g_2 & g_2^T & & 0 \\ g_2 & I + g_3^T g_3 & g_3^T & \\ & \ddots & \ddots & \ddots \\ 0 & & g_N & I + g_{N+1}^T g_{N+1} \end{bmatrix}, \qquad \text{(III.4)}$$

where we define $g_{N+1} := 0$, so that the bottom right entry is the identity matrix.

Theorem 3.2: The following bounds hold for the singular values of $g$:

$$\max\Big( 0,\; \min_k \big\{ 1 + \sigma_{\min}^2(g_{k+1}) - \sigma_{\max}(g_k) - \sigma_{\max}(g_{k+1}) \big\} \Big) \le \lambda_{\min}(g^T g) \le \lambda_{\max}(g^T g) \le \max_k \big\{ 1 + \sigma_{\max}^2(g_{k+1}) + \sigma_{\max}(g_k) + \sigma_{\max}(g_{k+1}) \big\}. \qquad \text{(III.5)}$$

Proof: Let $v = \mathrm{vec}(\{v_1, \ldots, v_N\})$ be any eigenvector of $g^T g$, so that

$$g^T g\, v = \lambda v. \qquad \text{(III.6)}$$

Without loss of generality, suppose that the component $v_k$ has the largest norm out of $v_1, \ldots, v_N$. Then from the $k$th block of (III.6), we get

$$g_k v_{k-1} + (I + g_{k+1}^T g_{k+1}) v_k + g_{k+1}^T v_{k+1} = \lambda v_k. \qquad \text{(III.7)}$$

Let $u_k = v_k / \|v_k\|$. Multiplying (III.7) on the left by $v_k^T$, dividing by $\|v_k\|^2$, and rearranging terms, we get

$$\big| 1 + u_k^T g_{k+1}^T g_{k+1} u_k - \lambda \big| = \frac{\big| v_k^T g_k v_{k-1} + v_k^T g_{k+1}^T v_{k+1} \big|}{\|v_k\|^2} \le \sigma_{\max}(g_k) + \sigma_{\max}(g_{k+1}). \qquad \text{(III.8)}$$

The relationship in (III.8) yields the upper bound

$$\lambda \le 1 + \sigma_{\max}^2(g_{k+1}) + \sigma_{\max}(g_k) + \sigma_{\max}(g_{k+1}) \qquad \text{(III.9)}$$

and the lower bound

$$\lambda \ge 1 + u_k^T g_{k+1}^T g_{k+1} u_k - \sigma_{\max}(g_k) - \sigma_{\max}(g_{k+1}) \ge 1 + \sigma_{\min}^2(g_{k+1}) - \sigma_{\max}(g_k) - \sigma_{\max}(g_{k+1}). \qquad \text{(III.10)}$$

Taking the minimum over $k$ in the lower bound and the maximum over $k$ in the upper bound completes the proof. The expression $\max(0, \cdot)$ in (III.5) arises since the singular values are nonnegative.

Corollary 3.1: Let $v_{\min}$ be the eigenvector corresponding to $\lambda_{\min}(g^T g)$, and suppose that $v_N$ is its component with the largest norm. Then we have the lower bound

$$\lambda_{\min}(g^T g) \ge 1 + \sigma_{\min}^2(g_{N+1}) - \sigma_{\max}(g_N) - \sigma_{\max}(g_{N+1}). \qquad \text{(III.11)}$$

In particular, since $g_{N+1} = 0$,

$$\lambda_{\min}(g^T g) \ge 1 - \sigma_{\max}(g_N). \qquad \text{(III.12)}$$

The bound (III.12) reveals the vulnerability of the system $g^T g$ to the behavior of the last component. For Kalman smoothing problems, the matrix $g^T q^{-1} g$ corresponds to $G^T Q^{-1} G$, and the components $g_k$ correspond to the process models $G_k$. These components are often identical for all $k$, or they are all constructed from ODE discretizations, so their singular values are similarly behaved across $k$. By (III.11), the last component emerges as the weakest link: regardless of how well behaved the $G_k$ are for $k = 2, \ldots, N-1$, the lower bound degenerates, and the condition number can go to infinity, if any singular value of $G_N$ is larger than 1. Therefore, to guard against instability, one must address instability in the final component $G_N$.
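Theorem 3.1 is easy to sanity-check numerically. The sketch below is our own illustration: with $q = I$ we have $E = g^T g$, and the bounds of (III.2) hold with equality, so we can compare the eigenvalues of $E$ against the singular values of $g$ directly.

```python
import numpy as np

def block_bidiagonal_g(gs):
    """Build g of (III.1): identity blocks on the diagonal, g_k below it."""
    n = gs[0].shape[0]
    N = len(gs) + 1
    g = np.eye(N * n)
    for k, gk in enumerate(gs):
        g[(k + 1) * n:(k + 2) * n, k * n:(k + 1) * n] = gk
    return g

rng = np.random.default_rng(0)
gs = [rng.standard_normal((2, 2)) for _ in range(4)]   # g_2, ..., g_5
g = block_bidiagonal_g(gs)
E = g.T @ g                                            # q = I, so E = g^T g
sv = np.linalg.svd(g, compute_uv=False)                # descending
ev = np.linalg.eigvalsh(E)                             # ascending
assert abs(ev[0] - sv[-1] ** 2) < 1e-10                # lambda_min(E) = sigma_min(g)^2
assert abs(ev[-1] - sv[0] ** 2) < 1e-10                # lambda_max(E) = sigma_max(g)^2
```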
IV. FORWARD BLOCK TRIDIAGONAL (FBT) ALGORITHM AND THE RTS SMOOTHER

We now present the FBT algorithm. Suppose for $k = 1, \ldots, N$ we have $b_k \in \mathbb{R}^{n \times n}$, $e_k \in \mathbb{R}^{n \times l}$, $r_k \in \mathbb{R}^{n \times l}$, and for $k = 2, \ldots, N$ we have $c_k \in \mathbb{R}^{n \times n}$. We define the corresponding block tridiagonal system of equations

$$\begin{bmatrix} b_1 & c_2^T & & & 0 \\ c_2 & b_2 & c_3^T & & \\ & \ddots & \ddots & \ddots & \\ & & c_{N-1} & b_{N-1} & c_N^T \\ 0 & & & c_N & b_N \end{bmatrix} \begin{bmatrix} e_1 \\ e_2 \\ \vdots \\ e_{N-1} \\ e_N \end{bmatrix} = \begin{bmatrix} r_1 \\ r_2 \\ \vdots \\ r_{N-1} \\ r_N \end{bmatrix}. \qquad \text{(IV.1)}$$

For positive definite systems, the FBT algorithm is defined as follows [9, Algorithm 4].

Algorithm 4.1 (Forward Block Tridiagonal (FBT)): The inputs to this algorithm are $\{c_k\}_{k=2}^N$, $\{b_k\}_{k=1}^N$, and $\{r_k\}_{k=1}^N$, where each $c_k \in \mathbb{R}^{n \times n}$, $b_k \in \mathbb{R}^{n \times n}$, and $r_k \in \mathbb{R}^{n \times l}$. The output is the sequence $\{e_k\}_{k=1}^N$ that solves equation (IV.1), with each $e_k \in \mathbb{R}^{n \times l}$.

1) Set $d_1^f = b_1$ and $s_1^f = r_1$. For $k = 2$ to $N$:
   Set $d_k^f = b_k - c_k (d_{k-1}^f)^{-1} c_k^T$.
   Set $s_k^f = r_k - c_k (d_{k-1}^f)^{-1} s_{k-1}^f$.
2) Set $e_N = (d_N^f)^{-1} s_N^f$. For $k = N-1$ down to $1$:
   Set $e_k = (d_k^f)^{-1} (s_k^f - c_{k+1}^T e_{k+1})$.
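Algorithm 4.1 transcribes directly into NumPy. The sketch below is ours; for clarity it uses explicit inverses as in the algorithm statement, whereas a production implementation would use Cholesky factorizations instead.

```python
import numpy as np

def fbt_solve(b, c, r):
    """Forward block tridiagonal (Thomas) solve of (IV.1), per Algorithm 4.1.

    b[k] is the k-th diagonal block (0-based, symmetric positive definite),
    c[k] couples blocks k-1 and k (c[0] is unused), and r[k] is the k-th
    right-hand-side block.
    """
    N = len(b)
    d, s = [b[0]], [r[0]]                    # d_1^f = b_1, s_1^f = r_1
    for k in range(1, N):                    # forward elimination
        w = c[k] @ np.linalg.inv(d[k - 1])
        d.append(b[k] - w @ c[k].T)          # d_k^f = b_k - c_k (d_{k-1}^f)^{-1} c_k^T
        s.append(r[k] - w @ s[k - 1])        # s_k^f = r_k - c_k (d_{k-1}^f)^{-1} s_{k-1}^f
    e = [None] * N
    e[-1] = np.linalg.solve(d[-1], s[-1])    # e_N = (d_N^f)^{-1} s_N^f
    for k in range(N - 2, -1, -1):           # back substitution
        e[k] = np.linalg.solve(d[k], s[k] - c[k + 1].T @ e[k + 1])
    return e
```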

Before we discuss stability results for this algorithm (see Theorem 4.2), we prove that the RTS smoother is an implementation of this algorithm for the matrix in (II.8).

Theorem 4.1: When applied to the system (II.8) with $r = H^T R^{-1} z + G^T Q^{-1} \zeta$, Algorithm 4.1 is equivalent to the RTS smoother [28].

Proof: Looking at the very first block, we substitute the Kalman data structures (II.9) into step 1 of Algorithm 4.1. Understanding this step requires introducing some structures which may be familiar to the reader from the Kalman filtering literature:

$$\begin{aligned} P_{1|1}^{-1} &:= Q_1^{-1} + H_1^T R_1^{-1} H_1, \\ P_{2|1}^{-1} &:= (G_2 P_{1|1} G_2^T + Q_2)^{-1} = Q_2^{-1} - Q_2^{-1} G_2 \big( P_{1|1}^{-1} + G_2^T Q_2^{-1} G_2 \big)^{-1} G_2^T Q_2^{-1}, \\ P_{2|2}^{-1} &:= P_{2|1}^{-1} + H_2^T R_2^{-1} H_2, \\ d_2^f &= b_2 - c_2 (d_1^f)^{-1} c_2^T = P_{2|2}^{-1} + G_3^T Q_3^{-1} G_3. \end{aligned} \qquad \text{(IV.2)}$$

These relationships can be seen quickly from [5, Theorem 2.7]. The matrices $P_{k|k}$, $P_{k|k-1}$ often appear in the Kalman literature: they represent the covariance of the state at time $k$ given the measurements $\{z_1, \ldots, z_k\}$, and the covariance of the a priori state estimate at time $k$ given the measurements $\{z_1, \ldots, z_{k-1}\}$, respectively. The key fact from (IV.2) is that $d_2^f = P_{2|2}^{-1} + G_3^T Q_3^{-1} G_3$. Using the same computation for the generic tuple $(k, k+1)$ rather than $(1, 2)$ establishes

$$d_k^f = P_{k|k}^{-1} + G_{k+1}^T Q_{k+1}^{-1} G_{k+1}. \qquad \text{(IV.3)}$$

We now apply this technique to the right hand side of (II.7), $r = H^T R^{-1} z + G^T Q^{-1} \zeta$. We have

$$\begin{aligned} y_{2|1} &:= Q_2^{-1} G_2 \big( P_{1|1}^{-1} + G_2^T Q_2^{-1} G_2 \big)^{-1} \big( H_1^T R_1^{-1} z_1 + Q_1^{-1} x_0 \big), \\ y_2 &:= H_2^T R_2^{-1} z_2 + y_{2|1}, \\ s_2^f &= r_2 - c_2 (d_1^f)^{-1} r_1 = y_2. \end{aligned} \qquad \text{(IV.4)}$$

These relationships also follow from [5, Theorem 2.7]. The quantities $y_{2|1}$ and $y_2$ may be familiar to the reader from the information filtering literature: they are preconditioned estimates $y_k = P_{k|k}^{-1} x_{k|k}$ and $y_{k|k-1} = P_{k|k-1}^{-1} x_{k|k-1}$, where $x_{k|k}$ is the estimate of $x_k$ given $\{z_1, \ldots, z_k\}$ and

$$x_{k|k-1} = G_k x_{k-1|k-1} \qquad \text{(IV.5)}$$

is the best prediction of the state $x_k$ given $\{z_1, \ldots, z_{k-1}\}$. Applying the computation to a generic index $k$, we have $s_k^f = y_k$. From these results, it immediately follows that $e_N$ computed in step 2 of Algorithm 4.1 is the Kalman filter estimate (and the RTS smoother estimate) for time point $N$ (see (IV.3)):

$$e_N = (d_N^f)^{-1} s_N^f = P_{N|N} P_{N|N}^{-1} x_{N|N} = x_{N|N}. \qquad \text{(IV.6)}$$

We now establish the iteration in step 2 of Algorithm 4.1. First, following [28, (3.9)], we define

$$C_k = P_{k|k} G_{k+1}^T P_{k+1|k}^{-1}. \qquad \text{(IV.7)}$$

To save space, we also use the shorthand

$$\hat P_k := P_{k|k}, \qquad \hat x_k := x_{k|k}. \qquad \text{(IV.8)}$$

At the first step, we obtain

$$\begin{aligned} e_{N-1} &= (d_{N-1}^f)^{-1} \big( s_{N-1}^f - c_N^T e_N \big) \\ &= \big( \hat P_{N-1}^{-1} + G_N^T Q_N^{-1} G_N \big)^{-1} \big( \hat P_{N-1}^{-1} \hat x_{N-1} + G_N^T Q_N^{-1} \hat x_N \big) \\ &= \big( \hat P_{N-1}^{-1} + G_N^T Q_N^{-1} G_N \big)^{-1} \hat P_{N-1}^{-1} \hat x_{N-1} + C_{N-1} \hat x_N \\ &= \hat x_{N-1} - C_{N-1} \big( G_N \hat x_{N-1} - \hat x_N \big) \\ &= x_{N-1|N-1} + C_{N-1} \big( x_{N|N} - G_N x_{N-1|N-1} \big), \end{aligned} \qquad \text{(IV.9)}$$

where the Woodbury inversion formula was used to obtain the third and fourth lines. Comparing this to [28, (3.8)], we find that $e_{N-1} = x_{N-1|N}$, i.e. the RTS smoothed estimate. The computations above, when applied to the general tuple $(k, k+1)$ instead of $(N-1, N)$, show that every $e_k$ is equivalent to $x_{k|N}$, which completes the proof.

In 1965, Rauch, Tung and Striebel showed that their smoother solves the maximum likelihood problem for $p(\{x_k\} \mid \{z_k\})$ [28], which is equivalent to (II.3). Theorem 4.1 adds to this understanding, showing that the RTS smoother is precisely the FBT algorithm. Moreover, the proof explicitly shows how the Kalman filter estimates $x_{k|k}$ can be obtained as the FBT proceeds to solve system (II.7). For brevity, we did not include any expressions related to the Kalman gain; the interested reader can find these relationships in [5, Chapter 2].

We now turn our attention to the stability of the forward algorithm for block tridiagonal systems.
One such analysis appears in [9, Lemma 6], and has been used frequently to justify the Thomas algorithm as the method of choice in many Kalman smoothing applications. Below, we review this result, and show that it has a critical flaw precisely for Kalman smoothing systems.

Lemma 4.1 ([9, Lemma 6]): Suppose we are given sequences $\{q_k\}_{k=0}^N$, $\{g_k\}_{k=2}^N$, and $\{u_k\}_{k=1}^N$, where each $q_k \in \mathbb{R}^{n \times n}$ is symmetric positive definite, each $u_k \in \mathbb{R}^{n \times n}$ is symmetric positive semidefinite (in the Kalman context, $u_k$ corresponds to $H_k^T R_k^{-1} H_k$), and each $g_k \in \mathbb{R}^{n \times n}$. Define $b_k \in \mathbb{R}^{n \times n}$ by

$$b_k = u_k + q_k^{-1} + g_{k+1}^T q_{k+1}^{-1} g_{k+1}, \qquad k = 1, \ldots, N,$$

and define $c_k \in \mathbb{R}^{n \times n}$ by $c_k = -q_k^{-1} g_k$, $k = 2, \ldots, N$. Suppose there is an $\alpha > 0$ such that all the eigenvalues of $a_k := g_{k+1}^T q_{k+1}^{-1} g_{k+1}$ are greater than or equal to $\alpha$, $k = 1, \ldots, N$, and suppose there is a $\beta > 0$ such that all the eigenvalues of $b_k$ are less than or equal to $\beta$, $k = 1, \ldots, N$. Further, suppose we execute Algorithm 4.1 with the corresponding input sequences $\{b_k\}_{k=1}^N$ and $\{c_k\}_{k=2}^N$. It follows that each $d_k^f$ generated by the algorithm is symmetric positive definite and has condition number less than or equal to $\beta / \alpha$.

To understand what can go wrong with this analysis, consider $b_N$ in the Kalman smoothing context, where the corresponding matrix entries are given by [9, equations (12) and (13)]. The matrix $b_N$ in Lemma 4.1 is given by $b_N = u_N + q_N^{-1} + g_{N+1}^T q_{N+1}^{-1} g_{N+1}$, where the correspondence to [9, (13)] is as follows: $b_k$ corresponds to $B_k$, $q_k$ corresponds to $Q_k$, $g_k$ corresponds to $G_k$, and $u_k$ corresponds to $H_k^T R_k^{-1} H_k$. In the context of [9], $G_{N+1}$ is the model for the next state vector, and at $k = N$ there is no next state vector. Hence $G_{N+1} = 0$. Thus, in the context of Lemma 4.1, $a_N = 0$, and hence $\alpha = 0$, which contradicts the assumptions of the Lemma.

Does this mean that the FBT algorithm (and hence the Kalman filter and RTS smoother) is inherently unstable, unless there are measurements to stabilize it? This has been a concern in the literature; for example, Bierman [10] suggests improved stability as a secondary motivation for his work. It turns out that this concern is not justified; in fact, we can prove a powerful theorem that relates the stability of the forward block tridiagonal algorithm to the stability of the system (III.1), already characterized in Theorem 3.1.

Theorem 4.2: Consider any block tridiagonal system $A \in \mathbb{R}^{Nn \times Nn}$ of the form (IV.1), and suppose we are given a lower bound $\alpha_L$ and an upper bound $\alpha_U$ on the eigenvalues of this system:

$$0 < \alpha_L \le \lambda_{\min}(A) \le \lambda_{\max}(A) \le \alpha_U. \qquad \text{(IV.10)}$$

If we apply the FBT iteration

$$d_k^f = b_k - c_k (d_{k-1}^f)^{-1} c_k^T,$$

then

$$0 < \alpha_L \le \lambda_{\min}(d_k^f) \le \lambda_{\max}(d_k^f) \le \alpha_U. \qquad \text{(IV.11)}$$

In other words, the FBT iteration preserves eigenvalue bounds (and hence the condition number) for each block, and so will be stable when the full system is well conditioned.

Proof: For simplicity, we focus only on the lower bound, since the same arguments apply to the upper bound. Note that $b_1 = d_1^f$, and the eigenvalues of $d_1^f$ must satisfy $\alpha_L \le \lambda_{\min}(d_1^f)$: otherwise we could produce a unit-norm eigenvector $v_1 \in \mathbb{R}^n$ of $d_1^f$ with $v_1^T d_1^f v_1 < \alpha_L$, and then form the augmented unit vector $\tilde v_1 \in \mathbb{R}^{Nn}$ with $v_1$ in the first block and every other entry $0$; we would then have $\tilde v_1^T A \tilde v_1 < \alpha_L$, which violates (IV.10). Next, note that

$$S_1 A S_1^T = \begin{bmatrix} b_1 & 0 & & & \\ 0 & d_2^f & c_3^T & & \\ & c_3 & b_3 & \ddots & \\ & & \ddots & \ddots & c_N^T \\ & & & c_N & b_N \end{bmatrix},$$

where

$$d_2^f = b_2 - c_2 (d_1^f)^{-1} c_2^T \qquad \text{and} \qquad S_1 = \begin{bmatrix} I & & & \\ -c_2 (d_1^f)^{-1} & I & & \\ & & \ddots & \\ & & & I \end{bmatrix}. \qquad \text{(IV.12)}$$

Suppose now that $d_2^f$ has an eigenvalue that is less than $\alpha_L$. Then we can produce a unit eigenvector $v_2$ of $d_2^f$ with $v_2^T d_2^f v_2 < \alpha_L$, and create an augmented unit vector

$$\tilde v_2 = \begin{bmatrix} 0_{1 \times n} & v_2^T & 0_{1 \times n(N-2)} \end{bmatrix}^T$$

which satisfies $\tilde v_2^T S_1 A S_1^T \tilde v_2 < \alpha_L$. Next, note that

$$\hat v^T := \tilde v_2^T S_1 = \begin{bmatrix} -v_2^T c_2 (d_1^f)^{-1} & v_2^T & 0_{1 \times n(N-2)} \end{bmatrix}, \qquad \text{(IV.13)}$$

so in particular $\|\hat v\| \ge 1$. From (IV.13), we now have

$$\hat v^T A \hat v < \alpha_L \le \alpha_L \|\hat v\|^2,$$

which violates (IV.10). To complete the proof, note that the lower $n(N-1) \times n(N-1)$ block of $S_1 A S_1^T$ is again block tridiagonal of the form (IV.1), with the bounds (IV.10) inherited by the argument above, so the reduction technique can be applied repeatedly.

Note that Theorem 4.2 applies to any block tridiagonal system satisfying (IV.10). When applied to the Kalman smoothing setting, if the system $G^T Q^{-1} G$ is well conditioned, we know that the FBT (and hence the Kalman filter and RTS smoother) will behave well for any measurement models.
Recall that a lower bound on the condition number of the full system, in terms of the behavior of the individual blocks, is given by Theorem 3.2 and Corollary 3.1. Moreover, even if $G^T Q^{-1} G$ has a bad condition number, it is possible that the measurement term $H^T R^{-1} H$ (see (II.7)) will improve the condition number. More general Kalman smoothing applications may not have this advantage. For example, the initialization procedure in [6] requires the inversion of systems analogous to $G^T Q^{-1} G$, without a measurement term.

A. Invertible Measurement Component

We now return to the system (II.7), and briefly consider the case where $H^T R^{-1} H$ is an invertible matrix. Note that this is not true in general, and in fact our stability analysis, as applied to the Kalman smoothing problem, did not use any assumptions on this term. However, if we know that

$$\Lambda := H^T R^{-1} H \qquad \text{(IV.14)}$$

is an invertible matrix, then we can consider an alternative approach to solving (II.7). Applying the Woodbury inversion formula, we obtain

$$(G^T Q^{-1} G + \Lambda)^{-1} = \Lambda^{-1} - \Lambda^{-1} G^T (Q + G \Lambda^{-1} G^T)^{-1} G \Lambda^{-1}. \qquad \text{(IV.15)}$$

Now, the solution to (II.7) can be found by applying this explicit inverse to the right hand side $\mathrm{rhs} := H^T R^{-1} z + G^T Q^{-1} \zeta$, and the key computation becomes

$$(Q + G \Lambda^{-1} G^T)\, x = G \Lambda^{-1}\, \mathrm{rhs}. \qquad \text{(IV.16)}$$

Note that the matrix $Q + G \Lambda^{-1} G^T$ is block tridiagonal, since $Q$ and $\Lambda^{-1}$ are block diagonal and $G$ is lower block bidiagonal. Therefore, we have reduced the problem to a system of the form (IV.1). Moreover, at a glance we can see that the lower eigenvalues of this system are bounded below by the eigenvalues of $Q$, while upper bounds can be constructed from the eigenvalues of $Q$, $G$ and $\Lambda^{-1}$. Under very mild conditions, this system can be solved in a stable way by the forward tridiagonal algorithm, which also gives a modified filter and smoother. This is not surprising, since we have assumed the extra hypothesis that $\Lambda$ is invertible.

V. BACKWARD BLOCK TRIDIAGONAL ALGORITHM AND THE M SMOOTHER

Having abstracted Kalman smoothing problems and algorithms to solutions of block tridiagonal systems, it is natural to consider alternative algorithms in the context of these systems. In this section, we propose a new scheme, namely the backward block tridiagonal (BBT) algorithm, and show it is equivalent to the M smoother [25] when applied to the Kalman smoothing setting. We also prove a stability result for ill-conditioned block tridiagonal systems.

Let us again begin with system (IV.1). If we subtract $c_N^T b_N^{-1}$ times row $N$ from row $N-1$, we obtain the following equivalent system:

$$\begin{bmatrix} b_1 & c_2^T & & & 0 \\ c_2 & b_2 & \ddots & & \\ & \ddots & \ddots & c_{N-1}^T & \\ & & c_{N-1} & b_{N-1} - c_N^T b_N^{-1} c_N & 0 \\ 0 & & & c_N & b_N \end{bmatrix} \begin{bmatrix} e_1 \\ e_2 \\ \vdots \\ e_{N-1} \\ e_N \end{bmatrix} = \begin{bmatrix} r_1 \\ r_2 \\ \vdots \\ r_{N-1} - c_N^T b_N^{-1} r_N \\ r_N \end{bmatrix}.$$

We iterate this procedure until we reach the first row of the matrix, using $d_k^b$ to denote the resulting diagonal blocks, and $s_k^b$ the corresponding right hand sides; i.e.,

$$\begin{aligned} d_N^b &= b_N, & d_k^b &= b_k - c_{k+1}^T (d_{k+1}^b)^{-1} c_{k+1} \quad (k = N-1, \ldots, 1), \\ s_N^b &= r_N, & s_k^b &= r_k - c_{k+1}^T (d_{k+1}^b)^{-1} s_{k+1}^b \quad (k = N-1, \ldots, 1). \end{aligned}$$

We obtain the following lower triangular system:

$$\begin{bmatrix} d_1^b & & & & 0 \\ c_2 & d_2^b & & & \\ & \ddots & \ddots & & \\ & & c_{N-1} & d_{N-1}^b & \\ 0 & & & c_N & d_N^b \end{bmatrix} \begin{bmatrix} e_1 \\ e_2 \\ \vdots \\ e_{N-1} \\ e_N \end{bmatrix} = \begin{bmatrix} s_1^b \\ s_2^b \\ \vdots \\ s_{N-1}^b \\ s_N^b \end{bmatrix}. \qquad \text{(V.1)}$$

Now we can solve for the first block vector and then proceed back down, doing back substitution. We thus obtain the following algorithm.

Algorithm 5.1 (Backward Block Tridiagonal (BBT)): The inputs to this algorithm are $\{c_k\}$, $\{b_k\}$, and $\{r_k\}$. The output is a sequence $\{e_k\}$ that solves equation (IV.1).

1) Set $d_N^b = b_N$ and $s_N^b = r_N$. For $k = N-1, \ldots, 1$:
   Set $d_k^b = b_k - c_{k+1}^T (d_{k+1}^b)^{-1} c_{k+1}$.
   Set $s_k^b = r_k - c_{k+1}^T (d_{k+1}^b)^{-1} s_{k+1}^b$.
2) Set $e_1 = (d_1^b)^{-1} s_1^b$. For $k = 2, \ldots, N$:
   Set $e_k = (d_k^b)^{-1} (s_k^b - c_k e_{k-1})$.
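As with FBT, Algorithm 5.1 transcribes directly into NumPy; the sketch below is ours and follows the same block conventions as fbt_solve above.

```python
import numpy as np

def bbt_solve(b, c, r):
    """Backward block tridiagonal solve of (IV.1), per Algorithm 5.1.

    Same conventions as fbt_solve: c[k] couples blocks k-1 and k (0-based).
    """
    N = len(b)
    d, s = [None] * N, [None] * N
    d[-1], s[-1] = b[-1], r[-1]              # d_N^b = b_N, s_N^b = r_N
    for k in range(N - 2, -1, -1):           # backward elimination
        w = c[k + 1].T @ np.linalg.inv(d[k + 1])
        d[k] = b[k] - w @ c[k + 1]           # d_k^b = b_k - c_{k+1}^T (d_{k+1}^b)^{-1} c_{k+1}
        s[k] = r[k] - w @ s[k + 1]
    e = [np.linalg.solve(d[0], s[0])]        # e_1 = (d_1^b)^{-1} s_1^b
    for k in range(1, N):                    # forward substitution
        e.append(np.linalg.solve(d[k], s[k] - c[k] @ e[k - 1]))
    return e
```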
Before we discuss stability results, we show that this algorithm is equivalent to the M smoother [25]. The recursion in [25, Algorithm A], translated to our notation (we write $\bar K_{k+1}$ for the inverse defined in (V.3)), is

$$P_k = G_{k+1}^T \big[ I - P_{k+1} C_{k+1} \bar K_{k+1} C_{k+1}^T \big] P_{k+1} G_{k+1} + H_k^T R_k^{-1} H_k, \qquad \text{(V.2)}$$

$$\bar K_{k+1} = \big[ I + C_{k+1}^T P_{k+1} C_{k+1} \big]^{-1}, \qquad \text{(V.3)}$$

$$\varphi_k = H_k^T R_k^{-1} z_k + G_{k+1}^T \big[ I - P_{k+1} C_{k+1} \bar K_{k+1} C_{k+1}^T \big] \varphi_{k+1}, \qquad \text{(V.4)}$$

where $Q_k = C_k C_k^T$. The recursion is initialized as follows:

$$P_N = H_N^T R_N^{-1} H_N, \qquad \varphi_N = H_N^T R_N^{-1} z_N. \qquad \text{(V.5)}$$

Before stating a theorem, we prove a useful linear algebraic result.

Lemma 5.1: Let $P$ and $Q$ be any invertible matrices. Then

$$P - P (Q^{-1} + P)^{-1} P = Q^{-1} - Q^{-1} (Q^{-1} + P)^{-1} Q^{-1}. \qquad \text{(V.6)}$$

Proof: Starting with the left hand side, write $P = P + Q^{-1} - Q^{-1}$. Then we have

$$\begin{aligned} P - P (Q^{-1} + P)^{-1} P &= P - P (Q^{-1} + P)^{-1} (P + Q^{-1} - Q^{-1}) \\ &= P (Q^{-1} + P)^{-1} Q^{-1} \\ &= (P + Q^{-1} - Q^{-1}) (Q^{-1} + P)^{-1} Q^{-1} \\ &= Q^{-1} - Q^{-1} (Q^{-1} + P)^{-1} Q^{-1}. \end{aligned}$$

Theorem 5.1: When applied to the system (II.8) with $r = H^T R^{-1} z + G^T Q^{-1} \zeta$, BBT is equivalent to the M smoother, i.e. [25, Algorithm A]. In particular, $d_k^b$ in Algorithm 5.1

corresponds to $P_k + Q_k^{-1}$, while $s_k^b$ in Algorithm 5.1 is precisely $\varphi_k$ in the recursion (V.2)–(V.4).

Proof: Using $c_k$ and $b_k$ in (II.9), the relationships above are immediately seen to hold for $k = N$. As in the other proofs, we show only the next step. From (V.2), we have $P_{N-1} = H_{N-1}^T R_{N-1}^{-1} H_{N-1} + G_N^T \Phi G_N$, where

$$\begin{aligned} \Phi &= P_N - P_N C_N \big( I + C_N^T P_N C_N \big)^{-1} C_N^T P_N \\ &= P_N - P_N \big( Q_N^{-1} + P_N \big)^{-1} P_N \\ &= Q_N^{-1} - Q_N^{-1} (d_N^b)^{-1} Q_N^{-1}, \end{aligned} \qquad \text{(V.7)}$$

where the Woodbury inversion formula (with $Q_N = C_N C_N^T$) gives the second line, and Lemma 5.1 together with the definition $d_N^b = b_N = P_N + Q_N^{-1}$ gives the third. Therefore, we immediately have

$$P_{N-1} = H_{N-1}^T R_{N-1}^{-1} H_{N-1} + G_N^T \big( Q_N^{-1} - Q_N^{-1} (d_N^b)^{-1} Q_N^{-1} \big) G_N = d_{N-1}^b - Q_{N-1}^{-1},$$

as claimed. Next, we have

$$\begin{aligned} \varphi_{N-1} &= H_{N-1}^T R_{N-1}^{-1} z_{N-1} + G_N^T \big[ I - P_N C_N ( I + C_N^T P_N C_N )^{-1} C_N^T \big] s_N^b \\ &= H_{N-1}^T R_{N-1}^{-1} z_{N-1} + G_N^T \big( I + P_N Q_N \big)^{-1} s_N^b \\ &= H_{N-1}^T R_{N-1}^{-1} z_{N-1} + G_N^T Q_N^{-1} \big( Q_N^{-1} + P_N \big)^{-1} s_N^b \\ &= s_{N-1}^b, \end{aligned} \qquad \text{(V.8)}$$

where the second line uses the push-through identity $I - P_N C_N (I + C_N^T P_N C_N)^{-1} C_N^T = (I + P_N Q_N)^{-1}$, again with $Q_N = C_N C_N^T$. Finally, note that the smoothed estimate given in [25, (A.8)] (translated to our notation),

$$\hat x_1 = \big( P_1 + Q_1^{-1} \big)^{-1} \big( \varphi_1 + Q_1^{-1} x_0 \big),$$

is precisely $(d_1^b)^{-1} s_1^b$, which is the estimate $e_1$ computed in step 2 of Algorithm 5.1. The reader can check that the forward recursion in [25, (A.9)] is equivalent to the recursion in step 2 of Algorithm 5.1.

Next, we show that the BBT algorithm has the same stability result as the FBT algorithm.

Theorem 5.2: Consider any block tridiagonal system $A \in \mathbb{R}^{Nn \times Nn}$ of the form (IV.1), and suppose we are given lower and upper bounds $\alpha_L$ and $\alpha_U$ on the eigenvalues of this system:

$$0 < \alpha_L \le \lambda_{\min}(A) \le \lambda_{\max}(A) \le \alpha_U. \qquad \text{(V.9)}$$

If we apply the BBT iteration

$$d_k^b = b_k - c_{k+1}^T (d_{k+1}^b)^{-1} c_{k+1},$$

then we have

$$0 < \alpha_L \le \lambda_{\min}(d_k^b) \le \lambda_{\max}(d_k^b) \le \alpha_U. \qquad \text{(V.10)}$$

Proof: Note first that $d_N^b = b_N$ satisfies (V.10) by the same argument as in the proof of Theorem 4.2. Define

$$S_N = \begin{bmatrix} I & & & \\ & \ddots & & \\ & & I & -c_N^T (d_N^b)^{-1} \\ & & & I \end{bmatrix},$$

and note that

$$S_N A S_N^T = \begin{bmatrix} b_1 & c_2^T & & & \\ c_2 & b_2 & \ddots & & \\ & \ddots & \ddots & c_{N-1}^T & \\ & & c_{N-1} & d_{N-1}^b & 0 \\ & & & 0 & d_N^b \end{bmatrix}.$$

Now a proof analogous to that of Theorem 4.2 shows that the upper $n(N-1) \times n(N-1)$ block of $S_N A S_N^T$ satisfies (V.10). Applying this reduction iteratively completes the proof.

Theorems 4.2 and 5.2 show that both the forward and backward tridiagonal algorithms are stable when the block tridiagonal systems they are applied to are well conditioned. For a lower bound on the condition number in the Kalman smoothing context, see Theorem 3.2 and Corollary 3.1. However, a different analysis can also be done for a particular class of block tridiagonal systems. The following result, which applies to Kalman smoothing systems, shows that the backward algorithm can behave stably even when the tridiagonal system has nearly null singular values. In other words, the eigenvalues of the individual blocks generated by M during the backward and forward recursions are actually independent of the condition number of the full system, which is a stronger result than we have for RTS. For $v \in \mathbb{R}^{n \times n}$ we use the notation $\|v\|$ for the operator norm of the matrix $v$; i.e., $\|v\| = \sup\{ \|vw\| : w \in \mathbb{R}^n, \|w\| = 1 \}$.

Theorem 5.3: Suppose that the matrices $c_k$ and $b_k$ are given by

$$c_k = -q_k^{-1} g_k, \qquad b_k = q_k^{-1} + u_k + g_{k+1}^T q_{k+1}^{-1} g_{k+1},$$

where each $q_k \in \mathbb{R}^{n \times n}$ is positive definite, each $u_k \in \mathbb{R}^{n \times n}$ is positive semidefinite, and $g_{N+1} = 0$. It follows that $d_k^b - q_k^{-1}$ is positive semidefinite for all $k$. Furthermore, if $\alpha$ is a bound for $\|q_k\|$, $\|q_k^{-1}\|$, $\|u_k\|$, and $\|g_k\|$, then the condition number of $d_k^b$ is bounded by $2\alpha^2 + \alpha^6$.

Proof: We note that $d_N^b = q_N^{-1} + u_N$, so the condition number bound holds for $k = N$. Furthermore, $d_N^b - q_N^{-1} = u_N$, so the positive semidefinite assertion holds for $k = N$. We now complete the proof by induction; i.e., suppose $d_{k+1}^b - q_{k+1}^{-1}$ is positive semidefinite. Then

$$\begin{aligned} d_k^b &= b_k - c_{k+1}^T (d_{k+1}^b)^{-1} c_{k+1} \\ &= q_k^{-1} + u_k + g_{k+1}^T q_{k+1}^{-1} g_{k+1} - g_{k+1}^T q_{k+1}^{-1} (d_{k+1}^b)^{-1} q_{k+1}^{-1} g_{k+1} \\ &= q_k^{-1} + u_k + g_{k+1}^T q_{k+1}^{-1} \big[ q_{k+1} - (d_{k+1}^b)^{-1} \big] q_{k+1}^{-1} g_{k+1}. \end{aligned}$$

The
assumption that $d_{k+1}^b - q_{k+1}^{-1}$ is positive semidefinite implies that $q_{k+1} - (d_{k+1}^b)^{-1}$ is positive semidefinite. It now follows that $d_k^b - q_k^{-1}$ is the sum of positive semidefinite matrices, and hence is positive semidefinite, which completes the induction and proves that $d_k^b - q_k^{-1}$ is positive semidefinite for all $k$.

We now complete the proof by showing that the condition number bound holds for index $k$. Using the last equation above, we have

$$\|d_k^b\| \le \|q_k^{-1} + u_k\| + \|g_{k+1}\|^2\, \|q_{k+1}^{-1}\|^2\, \|q_{k+1}\| \le 2\alpha + \alpha^5.$$

Hence the maximum eigenvalue of $d_k^b$ is less than or equal to $2\alpha + \alpha^5$. In addition, since $d_k^b - q_k^{-1}$ is positive semidefinite, the minimum eigenvalue of $d_k^b$ is greater than or equal to the minimum eigenvalue of $q_k^{-1}$, which is equal to the reciprocal of the maximum eigenvalue of $q_k$. Thus the minimum eigenvalue of $d_k^b$ is greater than or equal to $1/\alpha$, and the condition number of $d_k^b$ is bounded by $\alpha(2\alpha + \alpha^5) = 2\alpha^2 + \alpha^6$, which completes the proof.

VI. NUMERICAL EXAMPLE

To put all of these ideas in perspective, we consider a toy numerical example. Let $n = 1$ and $N = 3$, let $q_k = 1$ for all $k$, and let $g_k = -120$. Then the system $g^T q^{-1} g$ in (III.1) is given by

$$\begin{bmatrix} 14401 & -120 & 0 \\ -120 & 14401 & -120 \\ 0 & -120 & 1 \end{bmatrix},$$

and its minimum eigenvalue is approximately $4.8 \times 10^{-9}$. To understand what goes wrong, we first note that the minimum and maximum singular values of all the $g_k$'s coincide in this case, so the general condition of Corollary 3.1 will hold everywhere except at the last coordinate. Now that we suspect the last coordinate, we can check the eigenvector corresponding to the minimum eigenvalue:

$$v_{\min} \approx \begin{bmatrix} 0.0001 & 0.0083 & 1.0000 \end{bmatrix}^T.$$

Indeed, the component of the eigenvector with the largest norm occurs precisely in the last coordinate, so we are in the case described by Corollary 3.1. In order to stabilize the system from a numerical viewpoint, we have the option of making the $(3,2)$ and $(2,3)$ coordinates less than 1 in absolute value; let us take them to be $-0.9$ instead of $-120$. The new system has lowest eigenvalue close to 1. If we expand the system (by increasing the size of $\{g_k\}$, $\{q_k\}$), we find this stabilization technique works regardless of the size, since the last component is always the weakest link when all $g_k$ are equal.

Next, suppose we apply FBT to the original (unstable) system. We get the blocks

$$d_1^f = 14401, \qquad d_2^f \approx 14400, \qquad d_3^f \approx 4.8 \times 10^{-9}. \qquad \text{(VI.1)}$$

This toy example shows Theorem 4.2 in action: the eigenvalues of the blocks are indeed bounded above and below by the eigenvalues of the full system. Unfortunately, in this case this leads to a terrible condition number, since we are working with an ill-conditioned system. Now, suppose we apply the BBT. We get the following blocks:

$$d_3^b = 1, \qquad d_2^b = 1, \qquad d_1^b = 1.$$

Note that even though the blocks are stable, this does not mean that the backward algorithm can accurately solve the system $g^T q^{-1} g\, x = r$. In practice, it may perform better or worse than the forward algorithm for ill-conditioned systems, depending on the problem. Nonetheless, the conditioning of the blocks does not depend on the condition number of the full system as in the forward algorithm, as shown by Theorem 5.2 and this example.
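The computations of this section can be reproduced in a few lines. The script below is our own illustration; it forms the $3 \times 3$ system above and prints the FBT and BBT pivots.

```python
import numpy as np

# Section VI toy system: n = 1, N = 3, q_k = 1, g_k = -120 (scalar blocks).
a = 120.0
A = np.array([[1 + a**2, -a,       0.0],
              [-a,       1 + a**2, -a ],
              [0.0,      -a,       1.0]])
print(np.linalg.eigvalsh(A)[0])          # minimum eigenvalue, about 4.8e-9

# FBT pivots (VI.1): d1 = 14401, d2 ~ 14400, d3 ~ 4.8e-9
d1f = A[0, 0]
d2f = A[1, 1] - A[1, 0] * A[0, 1] / d1f
d3f = A[2, 2] - A[2, 1] * A[1, 2] / d2f
print(d1f, d2f, d3f)

# BBT pivots: all equal to 1, independent of the condition number of A
d3b = A[2, 2]
d2b = A[1, 1] - A[1, 2] * A[2, 1] / d3b
d1b = A[0, 0] - A[0, 1] * A[1, 0] / d2b
print(d3b, d2b, d1b)                     # 1.0 1.0 1.0
```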
VII. TWO FILTER BLOCK TRIDIAGONAL ALGORITHM AND THE MF SMOOTHER

In this section, we explore the MF smoother, and show how it solves the block tridiagonal system (IV.1). Studying the smoother in this way allows us to also characterize its stability using the theoretical results developed in this paper.

Consider a linear system of the form (IV.1). As in the previous sections, let $d_k^f, s_k^f$ denote the forward matrix and vector terms obtained after step 1 of Algorithm 4.1, and $d_k^b, s_k^b$ the terms obtained after step 1 of Algorithm 5.1. Finally, recall that $b_k, r_k$ refer to the diagonal terms and right hand side of system (IV.1). The following algorithm uses elements of both the forward and backward algorithms.

Algorithm 7.1 (Two filter block tridiagonal algorithm): The inputs to this algorithm are $\{c_k\}$, $\{b_k\}$, and $\{r_k\}$. The output is a sequence $\{e_k\}$ that solves equation (IV.1).

1) Set $d_1^f = b_1$ and $s_1^f = r_1$. For $k = 2$ to $N$:
   Set $d_k^f = b_k - c_k (d_{k-1}^f)^{-1} c_k^T$.
   Set $s_k^f = r_k - c_k (d_{k-1}^f)^{-1} s_{k-1}^f$.
2) Set $d_N^b = b_N$ and $s_N^b = r_N$. For $k = N-1, \ldots, 1$:
   Set $d_k^b = b_k - c_{k+1}^T (d_{k+1}^b)^{-1} c_{k+1}$.
   Set $s_k^b = r_k - c_{k+1}^T (d_{k+1}^b)^{-1} s_{k+1}^b$.
3) For $k = 1, \ldots, N$, set $e_k = (d_k^f + d_k^b - b_k)^{-1} (s_k^f + s_k^b - r_k)$.

It is easy to see why the MF smoother is often used in practice. Steps 1 and 2 can be done in parallel on two processors. Step 3 is independent across $k$, and so can be done in parallel with up to $N$ processors. In the next section, we will use the insights into block tridiagonal systems to propose a new and more efficient algorithm. First, we finish our analysis of the MF smoother.

Let $C$ denote the linear system in (IV.1), and let $F$ denote the matrix whose action is equivalent to step 1 of Algorithm 7.1, so that $FC$ is upper block triangular and $Fr$ recovers the blocks $\{s_k^f\}$. Let $B$ denote the matrix whose action is equivalent to step 2 of Algorithm 7.1, so that $BC$ is lower block triangular and $Br$ recovers the blocks $\{s_k^b\}$.

Theorem 7.1: The solution $e$ to (IV.1) is given by Algorithm 7.1.

Proof: The solution $e$ returned by Algorithm 7.1 can be written as follows:

$$e = \big( (F + B - I) C \big)^{-1} (F + B - I)\, r. \qquad \text{(VII.1)}$$

To see this, note that $FC$ has the same blocks above the diagonal as $C$, and zero blocks below the diagonal. Analogously, $BC$ has the same blocks below the diagonal as $C$, and zero blocks above. Then $FC + BC - C$ is block diagonal, with diagonal blocks given by $d_k^f + d_k^b - b_k$. Finally, applying the operator $F + B - I$ to $r$ yields the blocks $s_k^f + s_k^b - r_k$. The fact that $e$ solves (IV.1) follows from the calculation

$$C e = C \big( (F + B - I) C \big)^{-1} (F + B - I)\, r = C C^{-1} (F + B - I)^{-1} (F + B - I)\, r = r.$$
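Algorithm 7.1 combines the two elimination sweeps already shown above; the sketch below (ours) makes the combination step 3 explicit.

```python
import numpy as np

def tf_solve(b, c, r):
    """Two-filter solve of (IV.1), per Algorithm 7.1.

    Steps 1 and 2 repeat the elimination sweeps of fbt_solve and bbt_solve
    and are independent (parallelizable); step 3 combines the two block sets.
    """
    N = len(b)
    df, sf = [b[0]], [r[0]]                              # step 1: forward sweep
    for k in range(1, N):
        w = c[k] @ np.linalg.inv(df[k - 1])
        df.append(b[k] - w @ c[k].T)
        sf.append(r[k] - w @ sf[k - 1])
    db, sb = [None] * N, [None] * N                      # step 2: backward sweep
    db[-1], sb[-1] = b[-1], r[-1]
    for k in range(N - 2, -1, -1):
        w = c[k + 1].T @ np.linalg.inv(db[k + 1])
        db[k] = b[k] - w @ c[k + 1]
        sb[k] = r[k] - w @ sb[k + 1]
    # step 3: e_k = (d_k^f + d_k^b - b_k)^{-1} (s_k^f + s_k^b - r_k)
    return [np.linalg.solve(df[k] + db[k] - b[k], sf[k] + sb[k] - r[k])
            for k in range(N)]
```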

Note that applying $F$ and $B$ to construct $(F + B - I) r$ is an $O(N n^3)$ operation, since both $F$ and $B$ require $N$ recursive inversions of blocks, each of size $n \times n$, as is clear from steps 1 and 2 of Algorithm 7.1. Before we consider stability conditions (i.e. conditions that guarantee invertibility of $d_k^f + d_k^b - b_k$), we demonstrate the equivalence of Algorithm 7.1 to the Mayne-Fraser smoother.

Lemma 7.1: When applied to the Kalman smoothing system (II.7), the Mayne-Fraser smoother is equivalent to Algorithm 7.1. In particular, the MF update can be written

$$\hat x_k = \big( d_k^f + d_k^b - b_k \big)^{-1} \big( s_k^f + s_k^b - r_k \big). \qquad \text{(VII.2)}$$

Proof: The MF smoother solution is given in [25, (B.9)]:

$$\hat x_k = (P_k + F_k)^{-1} (q_k + g_k).$$

From Theorem 5.1, we know $q_k = s_k^b$ and $P_k = d_k^b - Q_k^{-1}$. Next, $F_k$ is the information matrix of the forward prediction, $F_k = P_{k|k-1}^{-1}$ from [25, (B.6)], and we have

$$F_k = P_{k|k-1}^{-1} = P_{k|k}^{-1} - H_k^T R_k^{-1} H_k \quad \text{[5, Chapter 2]} \quad = d_k^f - G_{k+1}^T Q_{k+1}^{-1} G_{k+1} - H_k^T R_k^{-1} H_k \quad \text{by (IV.3)}.$$

Therefore,

$$P_k + F_k = d_k^b + d_k^f - Q_k^{-1} - G_{k+1}^T Q_{k+1}^{-1} G_{k+1} - H_k^T R_k^{-1} H_k = d_k^b + d_k^f - b_k \quad \text{by (II.9)}.$$

Finally, $g_k = P_{k|k-1}^{-1} x_{k|k-1}$ from [25, (B.7)], and we have

$$g_k = P_{k|k-1}^{-1} x_{k|k-1} = y_{k|k-1} \quad \text{by (IV.5)} \quad = s_k^f - H_k^T R_k^{-1} z_k \quad \text{by (IV.4)} \quad = s_k^f - r_k \quad \text{by (II.7)}.$$

This gives $q_k + g_k = s_k^f + s_k^b - r_k$, and the lemma is proved.

Lemma 7.1 characterizes MF through its relationship to the tridiagonal system (IV.1). Our final result concerns the stability of the MF scheme, which can be understood by considering the solution (VII.2). Note that at $k = 1$ we have $d_1^f = b_1$, so the matrix to invert in (VII.2) is simply $d_1^b$. At $k = N$ we have $d_N^b = b_N$, and the matrix to invert is $d_N^f$. Therefore, the MF scheme is vulnerable to numerical instability (at least at time $N$) when the system (IV.1) is ill conditioned. In particular, if we apply MF to the numerical example in Section VI, the block $d_3^f$ of (VI.1) must be used. The following theorem shows that the MF smoother has the same stability guarantees as the RTS smoother for well-conditioned systems.

Theorem 7.2: Consider any block tridiagonal system $A \in \mathbb{R}^{Nn \times Nn}$ of the form (IV.1), and suppose we are given lower and upper bounds $\alpha_L$ and $\alpha_U$ on the eigenvalues of this system:

$$0 < \alpha_L \le \lambda_{\min}(A) \le \lambda_{\max}(A) \le \alpha_U. \qquad \text{(VII.3)}$$

Then we also have

$$0 < \alpha_L \le \lambda_{\min}\big( d_k^f + d_k^b - b_k \big) \le \lambda_{\max}\big( d_k^f + d_k^b - b_k \big) \le \alpha_U. \qquad \text{(VII.4)}$$

Proof: At every intermediate step, it is easy to see that

$$d_k^f + d_k^b - b_k = b_k - c_k (d_{k-1}^f)^{-1} c_k^T - c_{k+1}^T (d_{k+1}^b)^{-1} c_{k+1}.$$

This corresponds exactly to isolating the middle block of the three by three system

$$\begin{bmatrix} d_{k-1}^f & c_k^T & 0 \\ c_k & b_k & c_{k+1}^T \\ 0 & c_{k+1} & d_{k+1}^b \end{bmatrix}.$$

By Theorems 4.2 and 5.2, the eigenvalues of this system are bounded by the eigenvalues of the full system. Applying these theorems to the middle block shows that the matrix in (VII.2) also satisfies such a bound.

Therefore, for well-conditioned systems, the MF scheme shares the stability results of RTS. In the next section we propose an algorithm that entirely eliminates the combination step in (VII.2), and allows a 2-processor parallel scheme.

VIII. NEW TWO FILTER BLOCK TRIDIAGONAL ALGORITHM

In this section, we take advantage of the combined insight from the forward and backward algorithms (FBT and BBT) to propose an efficient parallelized algorithm for block tridiagonal systems. In particular, in view of the analysis regarding the RTS and M algorithms, it is natural to combine them by running them at the same time (using two processors). However, there is no need to run them all the way through the data and then combine (as in the MF scheme). Instead, the algorithms can work on the same matrix, meet in the middle, and exchange information, obtaining the smoothed estimate for the point in the middle of the time series. At this point, each need only back-substitute into its own (independent) linear system, which is half the size of the original.
We illustrate on a $6 \times 6$ system of type (IV.1), where steps 1 and 2 of FBT and BBT have been simultaneously applied (for three time points each). The resulting equivalent linear system has the form

$$\begin{bmatrix} d_1^f & c_2^T & & & & 0 \\ 0 & d_2^f & c_3^T & & & \\ & 0 & d_3^f & c_4^T & & \\ & & c_4 & d_4^b & 0 & \\ & & & c_5 & d_5^b & 0 \\ 0 & & & & c_6 & d_6^b \end{bmatrix} \begin{bmatrix} e_1 \\ e_2 \\ e_3 \\ e_4 \\ e_5 \\ e_6 \end{bmatrix} = \begin{bmatrix} s_1^f \\ s_2^f \\ s_3^f \\ s_4^b \\ s_5^b \\ s_6^b \end{bmatrix}. \qquad \text{(VIII.1)}$$

The superscripts $f$ and $b$ denote whether the variables correspond to steps 1 and 2 of FBT or of BBT, respectively. At this juncture, we have a choice of which algorithm is used to decouple the system. Supposing the BBT takes the lead, the requisite row operations are

$$\text{row 3} \leftarrow \text{row 3} - c_4^T (d_4^b)^{-1}\, \text{row 4}, \qquad \text{row 4} \leftarrow \text{row 4} - c_4 (\hat d_3^f)^{-1}\, \text{row 3},$$

where

$$\hat d_3^f = d_3^f - c_4^T (d_4^b)^{-1} c_4, \qquad \hat s_3^f = s_3^f - c_4^T (d_4^b)^{-1} s_4^b, \qquad \hat s_4^b = s_4^b - c_4 (\hat d_3^f)^{-1} \hat s_3^f. \qquad \text{(VIII.2)}$$

After these operations, the equivalent linear system is given by

$$\begin{bmatrix} d_1^f & c_2^T & & & & 0 \\ 0 & d_2^f & c_3^T & & & \\ & 0 & \hat d_3^f & 0 & & \\ & & 0 & d_4^b & 0 & \\ & & & c_5 & d_5^b & 0 \\ 0 & & & & c_6 & d_6^b \end{bmatrix} \begin{bmatrix} e_1 \\ e_2 \\ e_3 \\ e_4 \\ e_5 \\ e_6 \end{bmatrix} = \begin{bmatrix} s_1^f \\ s_2^f \\ \hat s_3^f \\ \hat s_4^b \\ s_5^b \\ s_6^b \end{bmatrix}. \qquad \text{(VIII.3)}$$

This system has two independent components (note that $e_3$ and $e_4$ are immediately available), and the algorithm can be finished in parallel, using the substitution steps (step 2) of Algorithms 4.1 and 5.1. Now that the idea is clear, we establish the formal algorithm. Note the similarities and differences to the MF smoother in Algorithm 7.1. For $a \in \mathbb{R}_+$, we use the notation $[a]$ to denote the floor function.

Algorithm 8.1 (New two filter block tridiagonal algorithm): The inputs to this algorithm are $\{c_k\}$, $\{b_k\}$, and $\{r_k\}$. The output is a sequence $\{e_k\}$ that solves equation (IV.1).

1) Set $d_1^f = b_1$ and $s_1^f = r_1$. For $k = 2$ to $[N/2] - 1$:
   Set $d_k^f = b_k - c_k (d_{k-1}^f)^{-1} c_k^T$.
   Set $s_k^f = r_k - c_k (d_{k-1}^f)^{-1} s_{k-1}^f$.
   Set $d_N^b = b_N$ and $s_N^b = r_N$. For $k = N-1, \ldots, [N/2]$:
   Set $d_k^b = b_k - c_{k+1}^T (d_{k+1}^b)^{-1} c_{k+1}$.
   Set $s_k^b = r_k - c_{k+1}^T (d_{k+1}^b)^{-1} s_{k+1}^b$.
2) Set
   $\hat d_{[N/2]-1}^f = d_{[N/2]-1}^f - c_{[N/2]}^T (d_{[N/2]}^b)^{-1} c_{[N/2]}$,
   $\hat s_{[N/2]-1}^f = s_{[N/2]-1}^f - c_{[N/2]}^T (d_{[N/2]}^b)^{-1} s_{[N/2]}^b$,
   $\hat s_{[N/2]}^b = s_{[N/2]}^b - c_{[N/2]} (\hat d_{[N/2]-1}^f)^{-1} \hat s_{[N/2]-1}^f$.
3) Set $e_{[N/2]-1} = (\hat d_{[N/2]-1}^f)^{-1} \hat s_{[N/2]-1}^f$. For $k = [N/2] - 2$ down to $1$:
   Set $e_k = (d_k^f)^{-1} (s_k^f - c_{k+1}^T e_{k+1})$.
   Set $e_{[N/2]} = (d_{[N/2]}^b)^{-1} \hat s_{[N/2]}^b$. For $k = [N/2] + 1, \ldots, N$:
   Set $e_k = (d_k^b)^{-1} (s_k^b - c_k e_{k-1})$.

Steps 1 and 3 of this algorithm can be done in parallel on two processors. The middle step is the only time the processors need to exchange information. This algorithm therefore has the desirable parallel features of Algorithm 7.1, but in addition, the combination step 3 of Algorithm 7.1 is not required. We now present a theorem showing that when the overall system is well conditioned, Algorithm 8.1 has the same stability properties as FBT and BBT.

Theorem 8.1: Consider any block tridiagonal system $A \in \mathbb{R}^{Nn \times Nn}$ of the form (IV.1), and suppose we are given lower and upper bounds $\alpha_L$ and $\alpha_U$ on the eigenvalues of this system:

$$0 < \alpha_L \le \lambda_{\min}(A) \le \lambda_{\max}(A) \le \alpha_U. \qquad \text{(VIII.4)}$$

Then the same bounds apply to all blocks $d_k^f$, $d_k^b$ of Algorithm 8.1.

Proof: Going from (VIII.1) to (VIII.3) (steps 1 and 2 of Algorithm 8.1), by the proof technique of Theorems 4.2 and 5.2 the eigenvalues of every block are bounded by the eigenvalues of the full system. In step 2, the matrix $\hat d_{[N/2]-1}^f$ created in operation (VIII.2) has the same property, by Theorem 5.2. The same result holds regardless of whether the FBT or the BBT algorithm performs the exchange in step 2.
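A sketch of Algorithm 8.1 follows (ours; 0-based block indexing with the interface at $m = \lfloor N/2 \rfloor$, assuming $N \ge 2$). In a real two-processor implementation the two sweep loops and the two substitution loops would run concurrently; only the interface update requires communication.

```python
import numpy as np

def tf_meet_solve(b, c, r):
    """Meet-in-the-middle solve of (IV.1), per Algorithm 8.1 (N >= 2)."""
    inv, solve = np.linalg.inv, np.linalg.solve
    N, m = len(b), len(b) // 2
    d, s = [None] * N, [None] * N
    d[0], s[0] = b[0], r[0]                          # forward sweep, blocks 0..m-1
    for k in range(1, m):
        w = c[k] @ inv(d[k - 1])
        d[k], s[k] = b[k] - w @ c[k].T, r[k] - w @ s[k - 1]
    d[-1], s[-1] = b[-1], r[-1]                      # backward sweep, blocks N-1..m
    for k in range(N - 2, m - 1, -1):
        w = c[k + 1].T @ inv(d[k + 1])
        d[k], s[k] = b[k] - w @ c[k + 1], r[k] - w @ s[k + 1]
    # exchange at the interface (VIII.2): decouples the two halves
    w = c[m].T @ inv(d[m])
    d[m - 1], s[m - 1] = d[m - 1] - w @ c[m], s[m - 1] - w @ s[m]
    e = [None] * N
    e[m - 1] = solve(d[m - 1], s[m - 1])             # middle estimates come out first
    s[m] = s[m] - c[m] @ e[m - 1]
    e[m] = solve(d[m], s[m])
    for k in range(m - 2, -1, -1):                   # substitution, forward half
        e[k] = solve(d[k], s[k] - c[k + 1].T @ e[k + 1])
    for k in range(m + 1, N):                        # substitution, backward half
        e[k] = solve(d[k], s[k] - c[k] @ e[k - 1])
    return e
```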
IX. CONCLUSIONS

In this paper, we have characterized the numerical stability of block tridiagonal systems that arise in Kalman smoothing; see Theorems 3.1 and 3.2. The analysis revealed that the last blocks of the system have a strong effect on the system overall, and that the system may be numerically stabilized by changing these blocks. In the Kalman smoothing context, the stability conditions in Theorems 3.1 and 3.2 do not require the blocks $g_k$ in (III.1) to be invertible. In fact, they may be singular, which means the process matrices $G_k$, or the derivatives $g_k^{(1)}$ of nonlinear process functions $g_k$, do not have to be invertible.

We then showed that any well-conditioned symmetric block tridiagonal system can be solved in a stable manner with the FBT, which is equivalent to RTS in the Kalman smoothing context. We also showed that the forgotten M smoother, i.e. Algorithm A in [25], is equivalent to the BBT algorithm. BBT shares the stability properties of FBT for well-conditioned systems, but is unique among the algorithms considered because it can remain stable even when applied to ill-conditioned systems. We also characterized the standard MF scheme based on the two-filter formula, showing how it solves the underlying system, and proving that it has the same numerical properties as RTS but does not have the numerical feature of M.

Finally, we designed a new stable parallel algorithm, which is more efficient than MF and has the same stability guarantee as RTS for well-conditioned systems. The proposed algorithm is parallel, simple to implement, and stable for well-conditioned systems. It is also more efficient than smoothers based on two independent filters, since those approaches require two full parallel passes and then a combination step. Such a combination step is unnecessary here: after the parallel forward-backward pass, we are done.

Taken together, these results provide insight into both numerical stability and algorithms for Kalman smoothing systems. In this regard, it is worth stressing that block tridiagonal systems arise not only in the classic linear Gaussian

setting, but also in many new applications, including nonlinear process/measurement models, robust penalties, and inequality constraints, e.g. see [13], [12], [6], [4], [15]. As a result, the theory and the new solvers for general block tridiagonal systems developed in this paper have immediate application to a wide range of problems.

REFERENCES

[1] B. D. O. Anderson and J. B. Moore. Optimal Filtering. Prentice-Hall, Englewood Cliffs, NJ, USA, 1979.
[2] C. F. Ansley and R. Kohn. A geometrical derivation of the fixed interval smoothing algorithm. Biometrika, 69, 1982.
[3] A. Aravkin, J. Burke, and G. Pillonetto. Robust and trend following Kalman smoothers using Student's t. In Proc. of SYSID 2012, 2012.
[4] A. Y. Aravkin, J. V. Burke, and G. Pillonetto. Sparse/robust estimation and Kalman smoothing with nonsmooth log-concave densities: modeling, computation, and theory, 2013. Submitted to the Journal of Machine Learning Research (JMLR).
[5] A. Y. Aravkin. Robust Methods with Applications to Kalman Smoothing and Bundle Adjustment. PhD thesis, University of Washington, Seattle, WA, June 2010.
[6] A. Y. Aravkin, B. M. Bell, J. V. Burke, and G. Pillonetto. An l1-Laplace robust Kalman smoother. IEEE Transactions on Automatic Control, 56(12), 2011.
[7] Y. Bar-Shalom, X. Rong Li, and T. Kirubarajan. Estimation with Applications to Tracking and Navigation. John Wiley and Sons, 2001.
[8] B. M. Bell, J. V. Burke, and G. Pillonetto. An inequality constrained nonlinear Kalman-Bucy smoother by interior point likelihood maximization. Automatica, 45(1):25-33, 2009.
[9] B. M. Bell. The marginal likelihood for parameters in a discrete Gauss-Markov process. IEEE Transactions on Signal Processing, 48(3), 2000.
[10] G. J. Bierman. A new computationally efficient fixed-interval, discrete-time smoother. Automatica, 19, 1983.
[11] C. Chui and G. Chen. Kalman Filtering. Springer, 2009.
[12] T. Cipra and R. Romera. Kalman filter with outliers and missing observations. Sociedad de Estadistica e Investigacion Operativa, 6(2), 1997.
[13] Z. M. Durovic and B. D. Kovachevic. Robust estimation with unknown noise statistics. IEEE Transactions on Automatic Control, 44(6):1292-1296, June 1999.
[14] L. Fahrmeir and H. Kaufmann. On Kalman filtering, posterior mode estimation, and Fisher scoring in dynamic exponential family regression. Metrika, pages 37-60, 1991.
[15] S. Farahmand, G. B. Giannakis, and D. Angelosante. Doubly robust smoothing of dynamical processes via outlier sparsity constraints. IEEE Transactions on Signal Processing, 59, 2011.
[16] D. C. Fraser and J. E. Potter. The optimum linear smoother as a combination of two optimum linear filters. IEEE Transactions on Automatic Control, 1969.
[17] M. S. Grewal, L. R. Weill, and A. P. Andrews. Global Positioning Systems, Inertial Navigation, and Integration. John Wiley and Sons, 2nd edition, 2007.
[18] R. E. Kalman. A new approach to linear filtering and prediction problems. Transactions of the ASME - Journal of Basic Engineering, 82(D):35-45, 1960.
[19] R. E. Kalman and R. S. Bucy. New results in linear filtering and prediction theory. Trans. ASME J. Basic Eng., 83:95-108, 1961.
[20] S. A. Kassam and H. V. Poor. Robust techniques for signal processing: a survey. Proceedings of the IEEE, 73(3), March 1985.
[21] R. L. Kirlin and A. Moghaddamjoo. Robust adaptive Kalman filtering for systems with unknown step inputs and non-Gaussian measurement errors. IEEE Transactions on Acoustics, Speech, and Signal Processing, ASSP-34(2), March 1986.
[22] G. Kitagawa and W. Gersch. A smoothness priors time-varying AR coefficient modeling of nonstationary covariance time series. IEEE
Transactions on Automatic Control, 30(1):48-56, January 1985.
[23] L. Ljung and T. Kailath. A unified approach to smoothing formulas. Automatica, 12(2), 1976.
[24] C. J. Masreliez and R. D. Martin. Robust Bayesian estimation for the linear model and robustifying the Kalman filter. IEEE Transactions on Automatic Control, AC-22(3), June 1977.
[25] D. Q. Mayne. A solution of the smoothing problem for linear dynamic systems. Automatica, 4:73-92, 1966.
[26] R. J. Meinhold and N. D. Singpurwalla. Robustification of Kalman filter models. Journal of the American Statistical Association, 84(406), June 1989.
[27] I. R. Petersen and A. V. Savkin. Robust Kalman Filtering for Signals and Systems with Large Uncertainties. Control Engineering. Birkhauser, 1999.
[28] H. E. Rauch, F. Tung, and C. T. Striebel. Maximum likelihood estimates of linear dynamic systems. AIAA Journal, 3(8):1445-1450, 1965.
[29] I. C. Schick and S. K. Mitter. Robust recursive estimation in the presence of heavy-tailed observation noise. The Annals of Statistics, 22(2), June 1994.
[30] J. E. Wall, A. S. Willsky, and N. R. Sandell. On the fixed-interval smoothing problem. Stochastics, 5:1-41, 1981.
[31] S. J. Wright. Solution of discrete-time optimal control problems on parallel computers. Parallel Computing, 16, 1990.
[32] S. J. Wright. Interior point methods for optimal control of discrete-time systems. Journal of Optimization Theory and Applications, 77, 1993.


More information

Discrete-Time H Gaussian Filter

Discrete-Time H Gaussian Filter Proceedings of the 17th World Congress The International Federation of Automatic Control Discrete-Time H Gaussian Filter Ali Tahmasebi and Xiang Chen Department of Electrical and Computer Engineering,

More information

An Inequality Constrained Nonlinear Kalman-Bucy Smoother by Interior Point Likelihood Maximization

An Inequality Constrained Nonlinear Kalman-Bucy Smoother by Interior Point Likelihood Maximization An Inequality Constrained Nonlinear Kalman-Bucy Smoother by Interior Point Lielihood Maximization Bradley M. Bell a, James V. Bure b, Gianluigi Pillonetto c a Applied Physics Laboratory, University of

More information

Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016

Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016 Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016 1 Entropy Since this course is about entropy maximization,

More information

Practical Linear Algebra: A Geometry Toolbox

Practical Linear Algebra: A Geometry Toolbox Practical Linear Algebra: A Geometry Toolbox Third edition Chapter 12: Gauss for Linear Systems Gerald Farin & Dianne Hansford CRC Press, Taylor & Francis Group, An A K Peters Book www.farinhansford.com/books/pla

More information

Convergence of the Ensemble Kalman Filter in Hilbert Space

Convergence of the Ensemble Kalman Filter in Hilbert Space Convergence of the Ensemble Kalman Filter in Hilbert Space Jan Mandel Center for Computational Mathematics Department of Mathematical and Statistical Sciences University of Colorado Denver Parts based

More information

Linear Regression and Its Applications

Linear Regression and Its Applications Linear Regression and Its Applications Predrag Radivojac October 13, 2014 Given a data set D = {(x i, y i )} n the objective is to learn the relationship between features and the target. We usually start

More information

Expressions for the covariance matrix of covariance data

Expressions for the covariance matrix of covariance data Expressions for the covariance matrix of covariance data Torsten Söderström Division of Systems and Control, Department of Information Technology, Uppsala University, P O Box 337, SE-7505 Uppsala, Sweden

More information

Physics 202 Laboratory 5. Linear Algebra 1. Laboratory 5. Physics 202 Laboratory

Physics 202 Laboratory 5. Linear Algebra 1. Laboratory 5. Physics 202 Laboratory Physics 202 Laboratory 5 Linear Algebra Laboratory 5 Physics 202 Laboratory We close our whirlwind tour of numerical methods by advertising some elements of (numerical) linear algebra. There are three

More information

Time Series Prediction by Kalman Smoother with Cross-Validated Noise Density

Time Series Prediction by Kalman Smoother with Cross-Validated Noise Density Time Series Prediction by Kalman Smoother with Cross-Validated Noise Density Simo Särkkä E-mail: simo.sarkka@hut.fi Aki Vehtari E-mail: aki.vehtari@hut.fi Jouko Lampinen E-mail: jouko.lampinen@hut.fi Abstract

More information

The Kalman Filter. Data Assimilation & Inverse Problems from Weather Forecasting to Neuroscience. Sarah Dance

The Kalman Filter. Data Assimilation & Inverse Problems from Weather Forecasting to Neuroscience. Sarah Dance The Kalman Filter Data Assimilation & Inverse Problems from Weather Forecasting to Neuroscience Sarah Dance School of Mathematical and Physical Sciences, University of Reading s.l.dance@reading.ac.uk July

More information

Nonparametric Bayesian Methods (Gaussian Processes)

Nonparametric Bayesian Methods (Gaussian Processes) [70240413 Statistical Machine Learning, Spring, 2015] Nonparametric Bayesian Methods (Gaussian Processes) Jun Zhu dcszj@mail.tsinghua.edu.cn http://bigml.cs.tsinghua.edu.cn/~jun State Key Lab of Intelligent

More information

Time-Varying Systems and Computations Lecture 3

Time-Varying Systems and Computations Lecture 3 Time-Varying Systems and Computations Lecture 3 Klaus Diepold November 202 Linear Time-Varying Systems State-Space System Model We aim to derive the matrix containing the time-varying impulse responses

More information

State Estimation using Moving Horizon Estimation and Particle Filtering

State Estimation using Moving Horizon Estimation and Particle Filtering State Estimation using Moving Horizon Estimation and Particle Filtering James B. Rawlings Department of Chemical and Biological Engineering UW Math Probability Seminar Spring 2009 Rawlings MHE & PF 1 /

More information

Fundamentals of Linear Algebra. Marcel B. Finan Arkansas Tech University c All Rights Reserved

Fundamentals of Linear Algebra. Marcel B. Finan Arkansas Tech University c All Rights Reserved Fundamentals of Linear Algebra Marcel B. Finan Arkansas Tech University c All Rights Reserved 2 PREFACE Linear algebra has evolved as a branch of mathematics with wide range of applications to the natural

More information

Numerical Linear Algebra Homework Assignment - Week 2

Numerical Linear Algebra Homework Assignment - Week 2 Numerical Linear Algebra Homework Assignment - Week 2 Đoàn Trần Nguyên Tùng Student ID: 1411352 8th October 2016 Exercise 2.1: Show that if a matrix A is both triangular and unitary, then it is diagonal.

More information

9 Multi-Model State Estimation

9 Multi-Model State Estimation Technion Israel Institute of Technology, Department of Electrical Engineering Estimation and Identification in Dynamical Systems (048825) Lecture Notes, Fall 2009, Prof. N. Shimkin 9 Multi-Model State

More information

Expectation propagation for signal detection in flat-fading channels

Expectation propagation for signal detection in flat-fading channels Expectation propagation for signal detection in flat-fading channels Yuan Qi MIT Media Lab Cambridge, MA, 02139 USA yuanqi@media.mit.edu Thomas Minka CMU Statistics Department Pittsburgh, PA 15213 USA

More information

Fisher Information Matrix-based Nonlinear System Conversion for State Estimation

Fisher Information Matrix-based Nonlinear System Conversion for State Estimation Fisher Information Matrix-based Nonlinear System Conversion for State Estimation Ming Lei Christophe Baehr and Pierre Del Moral Abstract In practical target tracing a number of improved measurement conversion

More information

The Kalman filter is arguably one of the most notable algorithms

The Kalman filter is arguably one of the most notable algorithms LECTURE E NOTES «Kalman Filtering with Newton s Method JEFFREY HUMPHERYS and JEREMY WEST The Kalman filter is arguably one of the most notable algorithms of the 0th century [1]. In this article, we derive

More information

ECEN 615 Methods of Electric Power Systems Analysis Lecture 18: Least Squares, State Estimation

ECEN 615 Methods of Electric Power Systems Analysis Lecture 18: Least Squares, State Estimation ECEN 615 Methods of Electric Power Systems Analysis Lecture 18: Least Squares, State Estimation Prof. om Overbye Dept. of Electrical and Computer Engineering exas A&M University overbye@tamu.edu Announcements

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

FIXED-INTERVAL smoothers (see [1] [10]) can outperform

FIXED-INTERVAL smoothers (see [1] [10]) can outperform IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL 54, NO 3, MARCH 2006 1069 Optimal Robust Noncausal Filter Formulations Garry A Einicke, Senior Member, IEEE Abstract The paper describes an optimal minimum-variance

More information

State Estimation of Linear and Nonlinear Dynamic Systems

State Estimation of Linear and Nonlinear Dynamic Systems State Estimation of Linear and Nonlinear Dynamic Systems Part I: Linear Systems with Gaussian Noise James B. Rawlings and Fernando V. Lima Department of Chemical and Biological Engineering University of

More information

Linear Algebra, Summer 2011, pt. 2

Linear Algebra, Summer 2011, pt. 2 Linear Algebra, Summer 2, pt. 2 June 8, 2 Contents Inverses. 2 Vector Spaces. 3 2. Examples of vector spaces..................... 3 2.2 The column space......................... 6 2.3 The null space...........................

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

Extension of the Sparse Grid Quadrature Filter

Extension of the Sparse Grid Quadrature Filter Extension of the Sparse Grid Quadrature Filter Yang Cheng Mississippi State University Mississippi State, MS 39762 Email: cheng@ae.msstate.edu Yang Tian Harbin Institute of Technology Harbin, Heilongjiang

More information

Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Numerical Analysis Lecture Notes Peter J Olver 8 Numerical Computation of Eigenvalues In this part, we discuss some practical methods for computing eigenvalues and eigenvectors of matrices Needless to

More information

OPTIMAL TIME TRANSFER

OPTIMAL TIME TRANSFER OPTIMAL TIME TRANSFER James R. Wright and James Woodburn Analytical Graphics, Inc. 220 Valley Creek Blvd, Exton, PA 19341, USA Abstract We have designed an optimal time transfer algorithm using GPS carrier-phase

More information

Recursive Generalized Eigendecomposition for Independent Component Analysis

Recursive Generalized Eigendecomposition for Independent Component Analysis Recursive Generalized Eigendecomposition for Independent Component Analysis Umut Ozertem 1, Deniz Erdogmus 1,, ian Lan 1 CSEE Department, OGI, Oregon Health & Science University, Portland, OR, USA. {ozertemu,deniz}@csee.ogi.edu

More information

RECURSIVE ESTIMATION AND KALMAN FILTERING

RECURSIVE ESTIMATION AND KALMAN FILTERING Chapter 3 RECURSIVE ESTIMATION AND KALMAN FILTERING 3. The Discrete Time Kalman Filter Consider the following estimation problem. Given the stochastic system with x k+ = Ax k + Gw k (3.) y k = Cx k + Hv

More information

Search Directions for Unconstrained Optimization

Search Directions for Unconstrained Optimization 8 CHAPTER 8 Search Directions for Unconstrained Optimization In this chapter we study the choice of search directions used in our basic updating scheme x +1 = x + t d. for solving P min f(x). x R n All

More information

Ensemble square-root filters

Ensemble square-root filters Ensemble square-root filters MICHAEL K. TIPPETT International Research Institute for climate prediction, Palisades, New Yor JEFFREY L. ANDERSON GFDL, Princeton, New Jersy CRAIG H. BISHOP Naval Research

More information

Expectation Propagation in Dynamical Systems

Expectation Propagation in Dynamical Systems Expectation Propagation in Dynamical Systems Marc Peter Deisenroth Joint Work with Shakir Mohamed (UBC) August 10, 2012 Marc Deisenroth (TU Darmstadt) EP in Dynamical Systems 1 Motivation Figure : Complex

More information

Least-Squares Performance of Analog Product Codes

Least-Squares Performance of Analog Product Codes Copyright 004 IEEE Published in the Proceedings of the Asilomar Conference on Signals, Systems and Computers, 7-0 ovember 004, Pacific Grove, California, USA Least-Squares Performance of Analog Product

More information

Review Let A, B, and C be matrices of the same size, and let r and s be scalars. Then

Review Let A, B, and C be matrices of the same size, and let r and s be scalars. Then 1 Sec 21 Matrix Operations Review Let A, B, and C be matrices of the same size, and let r and s be scalars Then (i) A + B = B + A (iv) r(a + B) = ra + rb (ii) (A + B) + C = A + (B + C) (v) (r + s)a = ra

More information

Prediction-based adaptive control of a class of discrete-time nonlinear systems with nonlinear growth rate

Prediction-based adaptive control of a class of discrete-time nonlinear systems with nonlinear growth rate www.scichina.com info.scichina.com www.springerlin.com Prediction-based adaptive control of a class of discrete-time nonlinear systems with nonlinear growth rate WEI Chen & CHEN ZongJi School of Automation

More information

SIGMA POINT GAUSSIAN SUM FILTER DESIGN USING SQUARE ROOT UNSCENTED FILTERS

SIGMA POINT GAUSSIAN SUM FILTER DESIGN USING SQUARE ROOT UNSCENTED FILTERS SIGMA POINT GAUSSIAN SUM FILTER DESIGN USING SQUARE ROOT UNSCENTED FILTERS Miroslav Šimandl, Jindřich Duní Department of Cybernetics and Research Centre: Data - Algorithms - Decision University of West

More information

State Estimation and Prediction in a Class of Stochastic Hybrid Systems

State Estimation and Prediction in a Class of Stochastic Hybrid Systems State Estimation and Prediction in a Class of Stochastic Hybrid Systems Eugenio Cinquemani Mario Micheli Giorgio Picci Dipartimento di Ingegneria dell Informazione, Università di Padova, Padova, Italy

More information

Brief Introduction of Machine Learning Techniques for Content Analysis

Brief Introduction of Machine Learning Techniques for Content Analysis 1 Brief Introduction of Machine Learning Techniques for Content Analysis Wei-Ta Chu 2008/11/20 Outline 2 Overview Gaussian Mixture Model (GMM) Hidden Markov Model (HMM) Support Vector Machine (SVM) Overview

More information

Numerical Linear Algebra

Numerical Linear Algebra Chapter 3 Numerical Linear Algebra We review some techniques used to solve Ax = b where A is an n n matrix, and x and b are n 1 vectors (column vectors). We then review eigenvalues and eigenvectors and

More information

Adaptive State Estimation Robert Stengel Optimal Control and Estimation MAE 546 Princeton University, 2018

Adaptive State Estimation Robert Stengel Optimal Control and Estimation MAE 546 Princeton University, 2018 Adaptive State Estimation Robert Stengel Optimal Control and Estimation MAE 546 Princeton University, 218! Nonlinearity of adaptation! Parameter-adaptive filtering! Test for whiteness of the residual!

More information

Super-Resolution. Shai Avidan Tel-Aviv University

Super-Resolution. Shai Avidan Tel-Aviv University Super-Resolution Shai Avidan Tel-Aviv University Slide Credits (partial list) Ric Szelisi Steve Seitz Alyosha Efros Yacov Hel-Or Yossi Rubner Mii Elad Marc Levoy Bill Freeman Fredo Durand Sylvain Paris

More information

Minimum-Phase Property of Nonlinear Systems in Terms of a Dissipation Inequality

Minimum-Phase Property of Nonlinear Systems in Terms of a Dissipation Inequality Minimum-Phase Property of Nonlinear Systems in Terms of a Dissipation Inequality Christian Ebenbauer Institute for Systems Theory in Engineering, University of Stuttgart, 70550 Stuttgart, Germany ce@ist.uni-stuttgart.de

More information

Mini-Course 07 Kalman Particle Filters. Henrique Massard da Fonseca Cesar Cunha Pacheco Wellington Bettencurte Julio Dutra

Mini-Course 07 Kalman Particle Filters. Henrique Massard da Fonseca Cesar Cunha Pacheco Wellington Bettencurte Julio Dutra Mini-Course 07 Kalman Particle Filters Henrique Massard da Fonseca Cesar Cunha Pacheco Wellington Bettencurte Julio Dutra Agenda State Estimation Problems & Kalman Filter Henrique Massard Steady State

More information

Part I State space models

Part I State space models Part I State space models 1 Introduction to state space time series analysis James Durbin Department of Statistics, London School of Economics and Political Science Abstract The paper presents a broad

More information

CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS. W. Erwin Diewert January 31, 2008.

CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS. W. Erwin Diewert January 31, 2008. 1 ECONOMICS 594: LECTURE NOTES CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS W. Erwin Diewert January 31, 2008. 1. Introduction Many economic problems have the following structure: (i) a linear function

More information

Designing Information Devices and Systems I Fall 2018 Lecture Notes Note Introduction to Linear Algebra the EECS Way

Designing Information Devices and Systems I Fall 2018 Lecture Notes Note Introduction to Linear Algebra the EECS Way EECS 16A Designing Information Devices and Systems I Fall 018 Lecture Notes Note 1 1.1 Introduction to Linear Algebra the EECS Way In this note, we will teach the basics of linear algebra and relate it

More information

Multiple-Model Adaptive Estimation for Star Identification with Two Stars

Multiple-Model Adaptive Estimation for Star Identification with Two Stars Multiple-Model Adaptive Estimation for Star Identification with Two Stars Steven A. Szlany and John L. Crassidis University at Buffalo, State University of New Yor, Amherst, NY, 460-4400 In this paper

More information

ADAPTIVE FILTER THEORY

ADAPTIVE FILTER THEORY ADAPTIVE FILTER THEORY Fourth Edition Simon Haykin Communications Research Laboratory McMaster University Hamilton, Ontario, Canada Front ice Hall PRENTICE HALL Upper Saddle River, New Jersey 07458 Preface

More information

Simultaneous state and input estimation with partial information on the inputs

Simultaneous state and input estimation with partial information on the inputs Loughborough University Institutional Repository Simultaneous state and input estimation with partial information on the inputs This item was submitted to Loughborough University's Institutional Repository

More information

UTILIZING PRIOR KNOWLEDGE IN ROBUST OPTIMAL EXPERIMENT DESIGN. EE & CS, The University of Newcastle, Australia EE, Technion, Israel.

UTILIZING PRIOR KNOWLEDGE IN ROBUST OPTIMAL EXPERIMENT DESIGN. EE & CS, The University of Newcastle, Australia EE, Technion, Israel. UTILIZING PRIOR KNOWLEDGE IN ROBUST OPTIMAL EXPERIMENT DESIGN Graham C. Goodwin James S. Welsh Arie Feuer Milan Depich EE & CS, The University of Newcastle, Australia 38. EE, Technion, Israel. Abstract:

More information

ANONSINGULAR tridiagonal linear system of the form

ANONSINGULAR tridiagonal linear system of the form Generalized Diagonal Pivoting Methods for Tridiagonal Systems without Interchanges Jennifer B. Erway, Roummel F. Marcia, and Joseph A. Tyson Abstract It has been shown that a nonsingular symmetric tridiagonal

More information

Computing Maximum Entropy Densities: A Hybrid Approach

Computing Maximum Entropy Densities: A Hybrid Approach Computing Maximum Entropy Densities: A Hybrid Approach Badong Chen Department of Precision Instruments and Mechanology Tsinghua University Beijing, 84, P. R. China Jinchun Hu Department of Precision Instruments

More information

(Extended) Kalman Filter

(Extended) Kalman Filter (Extended) Kalman Filter Brian Hunt 7 June 2013 Goals of Data Assimilation (DA) Estimate the state of a system based on both current and all past observations of the system, using a model for the system

More information

Title without the persistently exciting c. works must be obtained from the IEE

Title without the persistently exciting c.   works must be obtained from the IEE Title Exact convergence analysis of adapt without the persistently exciting c Author(s) Sakai, H; Yang, JM; Oka, T Citation IEEE TRANSACTIONS ON SIGNAL 55(5): 2077-2083 PROCESS Issue Date 2007-05 URL http://hdl.handle.net/2433/50544

More information

Memoryless output feedback nullification and canonical forms, for time varying systems

Memoryless output feedback nullification and canonical forms, for time varying systems Memoryless output feedback nullification and canonical forms, for time varying systems Gera Weiss May 19, 2005 Abstract We study the possibility of nullifying time-varying systems with memoryless output

More information

MIT Final Exam Solutions, Spring 2017

MIT Final Exam Solutions, Spring 2017 MIT 8.6 Final Exam Solutions, Spring 7 Problem : For some real matrix A, the following vectors form a basis for its column space and null space: C(A) = span,, N(A) = span,,. (a) What is the size m n of

More information

Dynamic Data Factorization

Dynamic Data Factorization Dynamic Data Factorization Stefano Soatto Alessro Chiuso Department of Computer Science, UCLA, Los Angeles - CA 90095 Department of Electrical Engineering, Washington University, StLouis - MO 63130 Dipartimento

More information

1 Computing with constraints

1 Computing with constraints Notes for 2017-04-26 1 Computing with constraints Recall that our basic problem is minimize φ(x) s.t. x Ω where the feasible set Ω is defined by equality and inequality conditions Ω = {x R n : c i (x)

More information

ABSTRACT INTRODUCTION

ABSTRACT INTRODUCTION ABSTRACT Presented in this paper is an approach to fault diagnosis based on a unifying review of linear Gaussian models. The unifying review draws together different algorithms such as PCA, factor analysis,

More information

7.4. The Inverse of a Matrix. Introduction. Prerequisites. Learning Outcomes

7.4. The Inverse of a Matrix. Introduction. Prerequisites. Learning Outcomes The Inverse of a Matrix 7.4 Introduction In number arithmetic every number a 0has a reciprocal b written as a or such that a ba = ab =. Similarly a square matrix A may have an inverse B = A where AB =

More information

Ch 4. Linear Models for Classification

Ch 4. Linear Models for Classification Ch 4. Linear Models for Classification Pattern Recognition and Machine Learning, C. M. Bishop, 2006. Department of Computer Science and Engineering Pohang University of Science and echnology 77 Cheongam-ro,

More information

Convergence of Square Root Ensemble Kalman Filters in the Large Ensemble Limit

Convergence of Square Root Ensemble Kalman Filters in the Large Ensemble Limit Convergence of Square Root Ensemble Kalman Filters in the Large Ensemble Limit Evan Kwiatkowski, Jan Mandel University of Colorado Denver December 11, 2014 OUTLINE 2 Data Assimilation Bayesian Estimation

More information

Designing Information Devices and Systems I Spring 2018 Lecture Notes Note Introduction to Linear Algebra the EECS Way

Designing Information Devices and Systems I Spring 2018 Lecture Notes Note Introduction to Linear Algebra the EECS Way EECS 16A Designing Information Devices and Systems I Spring 018 Lecture Notes Note 1 1.1 Introduction to Linear Algebra the EECS Way In this note, we will teach the basics of linear algebra and relate

More information

Estimation Error Bounds for Frame Denoising

Estimation Error Bounds for Frame Denoising Estimation Error Bounds for Frame Denoising Alyson K. Fletcher and Kannan Ramchandran {alyson,kannanr}@eecs.berkeley.edu Berkeley Audio-Visual Signal Processing and Communication Systems group Department

More information