EE226a - Summary of Lectures 13 and 14: Kalman Filter Convergence


Jean Walrand

I. SUMMARY

Here are the key ideas and results of this important topic.

- Section II reviews the Kalman Filter.
- A system is observable if its state can be determined from its outputs (after some delay). A system is reachable if there are inputs that drive it to any state.
- Section IV explores the evolution of the covariance in a linear system.
- The error covariance of a Kalman Filter is bounded if the system is observable; it is nondecreasing if it starts from zero.
- If a system is reachable and observable, then everything converges and the stationary filter is asymptotically optimal.

II. REVIEW: KALMAN FILTER

We consider a linear system with matrices that do not depend on time. That is,

$$X_{n+1} = A X_n + V_n \quad \text{and} \quad Y_n = C X_n + W_n, \qquad n \ge 1,$$

where $\{X_1, V_n, W_n, n \ge 1\}$ are all orthogonal and zero-mean with $\mathrm{cov}(V_n) = K_V$ and $\mathrm{cov}(W_n) = K_W$. We write $Y^n := (Y_1, \dots, Y_n)$.

Theorem 1 (Kalman Filter): $\hat X_n = L[X_n \mid Y_1, \dots, Y_n] = L[X_n \mid Y^n]$ is obtained by the Kalman Filter:

$$\hat X_n = A \hat X_{n-1} + R_n (Y_n - C A \hat X_{n-1}),$$

where

$$R_n = S_n C^T [C S_n C^T + K_W]^{-1}, \qquad S_n = A \Sigma_{n-1} A^T + K_V, \qquad \Sigma_n = (I - R_n C) S_n.$$

Thus,

$$S_{n+1} = K_V + A S_n A^T - A S_n C^T [C S_n C^T + K_W]^{-1} C S_n A^T.$$

Moreover, the matrices $S_n$ and $\Sigma_n$ have the following significance:

$$S_n = \mathrm{cov}(X_n - A \hat X_{n-1}), \qquad \Sigma_n = \mathrm{cov}(X_n - \hat X_n).$$

Proof: Let

$$U_n = Y_n - L[Y_n \mid Y^{n-1}] = Y_n - L[C X_n + W_n \mid Y^{n-1}] = Y_n - C L[X_n \mid Y^{n-1}] = Y_n - C A \hat X_{n-1} = C X_n + W_n - C L[X_n \mid Y^{n-1}].$$

Then, by (1) in Lecture 12,

$$\hat X_n = L[X_n \mid Y^n] = L[X_n \mid Y^{n-1}] + L[X_n \mid U_n] = A \hat X_{n-1} + R_n U_n = A \hat X_{n-1} + R_n (Y_n - C A \hat X_{n-1}),$$

where $R_n = \mathrm{cov}(X_n, U_n)\,\mathrm{cov}(U_n)^{-1}$. Now,

$$\mathrm{cov}(X_n, U_n) = \mathrm{cov}(X_n, C X_n + W_n - C L[X_n \mid Y^{n-1}]) = \mathrm{cov}(X_n - L[X_n \mid Y^{n-1}],\; C X_n + W_n - C L[X_n \mid Y^{n-1}]) = S_n C^T.$$

Also,

$$\mathrm{cov}(U_n) = \mathrm{cov}(C X_n + W_n - C L[X_n \mid Y^{n-1}]) = C S_n C^T + K_W.$$
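To make the recursion concrete, here is a minimal NumPy sketch of one step of the filter in Theorem 1. The system matrices and initial conditions below are illustrative assumptions, not values from the lecture.

```python
import numpy as np

def kalman_step(x_hat, Sigma, y, A, C, K_V, K_W):
    """One step of the Kalman Filter of Theorem 1: given \\hat{X}_{n-1},
    Sigma_{n-1} and the new observation Y_n, return \\hat{X}_n, Sigma_n, S_n."""
    S = A @ Sigma @ A.T + K_V                        # S_n = A Sigma_{n-1} A^T + K_V
    R = S @ C.T @ np.linalg.inv(C @ S @ C.T + K_W)   # gain R_n
    x_pred = A @ x_hat                               # A \hat{X}_{n-1}
    x_hat = x_pred + R @ (y - C @ x_pred)            # innovation correction
    Sigma = (np.eye(len(x_hat)) - R @ C) @ S         # Sigma_n = (I - R_n C) S_n
    return x_hat, Sigma, S

# Illustrative two-dimensional system (all numbers are assumptions).
rng = np.random.default_rng(0)
A = np.array([[0.9, 0.2], [0.0, 0.7]])
C = np.array([[1.0, 0.0]])
K_V, K_W = 0.1 * np.eye(2), np.array([[0.5]])
x = np.zeros(2)
x_hat = np.zeros(2)
Sigma = np.eye(2)
for n in range(50):
    x = A @ x + rng.multivariate_normal(np.zeros(2), K_V)
    y = C @ x + rng.multivariate_normal(np.zeros(1), K_W)
    x_hat, Sigma, S = kalman_step(x_hat, Sigma, y, A, C, K_V, K_W)
```

Note that $S_n$ and $\Sigma_n$ do not depend on the observations, so the covariance recursion can be run offline; the convergence questions studied in the following sections concern exactly this deterministic recursion.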

Thus,

$$R_n = S_n C^T [C S_n C^T + K_W]^{-1}.$$

In addition,

$$S_n = \mathrm{cov}(X_n - L[X_n \mid Y^{n-1}]) = \mathrm{cov}(A X_{n-1} + V_{n-1} - A \hat X_{n-1}) = A \Sigma_{n-1} A^T + K_V.$$

Finally, by (1) in Lecture 12,

$$\Sigma_n = \mathrm{cov}(X_n - L[X_n \mid Y^n]) = \mathrm{cov}(X_n - L[X_n \mid Y^{n-1}]) - \mathrm{cov}(X_n, U_n)\,\mathrm{cov}(U_n)^{-1}\,\mathrm{cov}(U_n, X_n) = S_n - R_n C S_n.$$

In the following sections, we explore what happens as $n \to \infty$. Specifically, we want to understand if the error covariance $\Sigma_n$ blows up or if it converges to a finite value. First, we recall some key notions about linear systems.

III. OBSERVABILITY AND REACHABILITY

Lemma 1 (Cayley-Hamilton): Let $A \in \mathbb{R}^{m \times m}$ and $\det(sI - A) = \alpha_0 + \alpha_1 s + \cdots + \alpha_{m-1} s^{m-1} + s^m$. Then

$$\alpha_0 I + \alpha_1 A + \cdots + \alpha_{m-1} A^{m-1} + A^m = 0.$$

In particular, $\mathrm{span}\{I, A, A^2, \dots\} = \mathrm{span}\{I, A, \dots, A^{m-1}\}$.

Definition 1 (Observability and Reachability): The linear system

$$X_{n+1} = A X_n, \quad Y_n = C X_n, \qquad n \ge 1 \tag{1}$$

is observable if $X_1$ can be determined exactly from $\{Y_n, n \ge 1\}$. We then say that $(A, C)$ is observable. The linear system

$$X_{n+1} = A X_n + C U_n, \qquad n \ge 1 \tag{2}$$

is reachable if, for every state $X$, there is a sequence of inputs $\{U_n, n \ge 1\}$ that drives the system from $X_1 = 0$ to the state $X$. We then say that $(A, C)$ is reachable.

Fact 1:
(a) $(A, C)$ is observable if and only if $[C^T \; A^T C^T \; \cdots \; (A^{m-1})^T C^T]$ is of full rank.
(b) $(A, C)$ is reachable if and only if $(A^T, C^T)$ is observable, i.e., if and only if $[C \; AC \; \cdots \; A^{m-1} C]$ is of full rank. In that case, $\sum_{p=0}^{m-1} A^p C C^T (A^p)^T$ is positive definite.

Proof: To see (a), note that (1) implies that $Y_n = C X_n = C A^{n-1} X_1$. Consequently, observability is equivalent to the null space of the matrix with block rows $C, CA, CA^2, \dots$ being $\{0\}$. Accordingly, this is equivalent to the matrix $[C^T \; A^T C^T \; (A^2)^T C^T \; \cdots]$ being of full rank. The conclusion then follows from Lemma 1.

For (b), observe that (2) implies that

$$X_n = A^{n-1} X_1 + \sum_{k=1}^{n-1} A^{n-k-1} C U_k = \sum_{k=1}^{n-1} A^{n-k-1} C U_k.$$

Therefore, the system is reachable if and only if $[C \; AC \; A^2 C \; \cdots]$ is of full rank. The conclusion then follows again from Lemma 1.

IV. SYSTEM ASYMPTOTICS

Our discussion is borrowed from [2]. First, let us examine the evolution of the unobserved system

$$X_{n+1} = A X_n + V_n, \qquad n \ge 1,$$

where $\{X_1, V_n, n \ge 1\}$ are all orthogonal and zero-mean with $\mathrm{cov}(V_n) = K_V$, and let $K_n := \mathrm{cov}(X_n)$. Note that

$$K_{n+1} = A K_n A^T + K_V. \tag{3}$$

The following theorem describes the evolution of $K_n$. Recall that a matrix $A \in \mathbb{R}^{m \times m}$ is said to be stable if its eigenvalues all have magnitude strictly less than 1.

Theorem 2:
(a) If $A$ is stable, then there is a positive semidefinite matrix $K$ such that $K_n \to K$ as $n \to \infty$. Moreover, $K$ is the unique solution of the equation

$$K = A K A^T + K_V. \tag{4}$$

(b) Let $K_V = Q Q^T$ and assume that $(A, Q)$ is reachable. Then $A$ is stable if and only if the equation $K = A K A^T + Q Q^T$ has a positive definite solution $K$.
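As a numerical sanity check on Theorem 2 (an illustration, not part of the original notes), the following sketch verifies the reachability rank condition of Fact 1(b) for an assumed stable pair $(A, Q)$, iterates the recursion (3), and confirms that the limit solves (4) and is positive definite.

```python
import numpy as np

# Illustrative stable A and noise factor Q (assumed values).
A = np.array([[0.8, 0.3], [0.0, 0.5]])   # eigenvalues 0.8 and 0.5: stable
Q = np.array([[1.0], [0.5]])
K_V = Q @ Q.T
m = A.shape[0]

# Fact 1(b): (A, Q) is reachable iff [Q, AQ, ..., A^{m-1}Q] has rank m.
reach = np.hstack([np.linalg.matrix_power(A, p) @ Q for p in range(m)])
assert np.linalg.matrix_rank(reach) == m

# Iterate K_{n+1} = A K_n A^T + K_V, i.e. equation (3), from K_1 = 0.
K = np.zeros((m, m))
for _ in range(500):
    K = A @ K @ A.T + K_V

# The limit solves the Lyapunov equation (4) and, since (A, Q) is
# reachable, it is positive definite, as Theorem 2(b) predicts.
assert np.allclose(K, A @ K @ A.T + K_V)
assert np.all(np.linalg.eigvalsh(K) > 0)
```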

Proof: (a) By induction, the solution of (3) is

$$K_n = A^{n-1} K_1 (A^{n-1})^T + \sum_{p=0}^{n-2} A^p K_V (A^p)^T. \tag{5}$$

If $A$ is stable, then $|(A^p)_{i,j}| \le C \alpha^p$ for some finite $C$ and some $\alpha \in (0, 1)$, as can be seen by considering the Jordan form of $A$. This implies that the first term in (5) vanishes and that the sum converges to $K = \sum_{p=0}^{\infty} A^p K_V (A^p)^T$, which is clearly positive semidefinite. The convergence also implies (4). It remains to show that (4) has a unique solution. Let $K'$ be another solution. Then $\Delta = K - K'$ satisfies $\Delta = A \Delta A^T$. Recursive substitutions show that $\Delta = A^n \Delta (A^n)^T$. Letting $n \to \infty$ shows that $\Delta = 0$, so the solution of (4) is unique.

(b) If $A$ is stable, we know from (a) that (4) has a unique positive semidefinite solution $K = \sum_{p=0}^{\infty} A^p Q Q^T (A^p)^T$. Since $(A, Q)$ is reachable, the only vector $v$ with $Q^T (A^p)^T v = 0$ for all $p$ is $v = 0$. This implies that $K$ is positive definite. To show the converse, assume that $K$ is a positive definite solution of (4). Then

$$K = A^n K (A^n)^T + \sum_{p=0}^{n-1} A^p Q Q^T (A^p)^T, \qquad n \ge 1.$$

Let $\lambda$ be an eigenvalue of $A^T$ with eigenvector $v$. Then $A^T v = \lambda v$ and

$$v^* K v = |\lambda|^{2n} v^* K v + v^* \Big( \sum_{p=0}^{n-1} A^p Q Q^T (A^p)^T \Big) v.$$

But $\sum_{p=0}^{n-1} A^p Q Q^T (A^p)^T$ is positive definite for $n \ge m$, by the reachability of $(A, Q)$, which implies that the last term in the above identity is positive. Consequently, it must be that $|\lambda| < 1$.

The statements of the theorem should be intuitive. If the system is stable, then the state tries to go to 0 but is constantly pushed by the noise. The noise cannot push the state very far, and one can expect the variance of the state to remain bounded. The convergence is a little more subtle. If the system is reachable, then the noise pushes the state in all directions, and it is not surprising that the variance of the state is positive definite when the system is stable. If the system is not stable, the variance explodes.

V. FILTER ASYMPTOTICS

We now explore the evolution of the Kalman Filter.

Theorem 3: Let $K_V = Q Q^T$. Suppose that $(A, Q)$ is reachable and $(A, C)$ is observable. If $S_1 = 0$, then

$$\Sigma_n \to \Sigma, \qquad R_n \to R, \qquad S_n \to S, \qquad \text{as } n \to \infty.$$

The limiting matrices are the only solutions of the equations

$$\Sigma = (I - RC)S, \qquad R = S C^T (C S C^T + K_W)^{-1}, \qquad S = A \Sigma A^T + K_V.$$

Equivalently, $S$ is the unique positive semidefinite solution of

$$S = A \big( S - S C^T (C S C^T + K_W)^{-1} C S \big) A^T + K_V. \tag{6}$$

Moreover, the time-invariant filter

$$Z_n = A Z_{n-1} + R (Y_n - C A Z_{n-1})$$

satisfies $\mathrm{cov}(X_n - Z_n) \to \Sigma$ as $n \to \infty$.

Comments: Reachability implies that the noise excites all the components of the state. The observability condition guarantees that the observations track all the components of the state, which implies that the estimation error remains bounded. Note that the state could grow unbounded, if $A$ is unstable, but the estimator tracks it even in that case. The time-invariant filter has the same asymptotic error as the time-varying one.

Proof: The proof has the following steps. For two positive semidefinite matrices $S$ and $S'$, we say that $S \le S'$ if $S' - S$ is positive semidefinite. Similarly, we say that the positive semidefinite matrices $\{S_n, n \ge 1\}$ are bounded if $S_n \le \bar S$ for all $n$, for some positive semidefinite matrix $\bar S$.

(a) The matrices $S_n$ are bounded.
(b) $S_n$ is nondecreasing in $S_1$.
(c) If $S_1 = 0$, then $S_n \to S$, where $S$ is a positive semidefinite matrix.
(d) The matrix $A - ARC$ is stable.
(e) For any $S_1$, $S_n \to S$.
(f) Equation (6) has a unique positive semidefinite solution $S$.
(g) The time-invariant filter has the same asymptotic error covariance as the time-varying filter.

We outline these steps.
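Before going through the steps in detail, here is a numerical illustration of the theorem (assumed matrices, not from the notes): iterate the Riccati recursion for $S_n$ from $S_1 = 0$ and read off the limits $R$ and $\Sigma$. Here $A$ is deliberately unstable, to illustrate the comment that the filter error stays bounded even when the state does not; the check at the end previews step (d).

```python
import numpy as np

# Illustrative system; A is deliberately unstable (eigenvalues 1.1 and 0.9).
A = np.array([[1.1, 0.4], [0.0, 0.9]])
C = np.array([[1.0, 0.0]])
Q = np.eye(2)                     # K_V = Q Q^T, so (A, Q) is reachable
K_V, K_W = Q @ Q.T, np.array([[1.0]])

# Riccati recursion S_{n+1} = K_V + A S_n A^T
#                           - A S_n C^T [C S_n C^T + K_W]^{-1} C S_n A^T,
# started from S_1 = 0 as in Theorem 3.
S = np.zeros((2, 2))
for _ in range(1000):
    G = np.linalg.inv(C @ S @ C.T + K_W)
    S = K_V + A @ S @ A.T - A @ S @ C.T @ G @ C @ S @ A.T

R = S @ C.T @ np.linalg.inv(C @ S @ C.T + K_W)   # limiting gain R
Sigma = (np.eye(2) - R @ C) @ S                  # limiting error covariance

# Step (d): A - ARC is stable even though A is not.
assert np.max(np.abs(np.linalg.eigvals(A - A @ R @ C))) < 1
assert np.max(np.abs(np.linalg.eigvals(A))) > 1
```

Step (d) is what makes this work: the filter recursion is internally driven by the stable matrix $A - ARC$, even when $A$ itself is unstable.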

(a) The idea is that, because of observability, $X_{n+m}$ is a linear function of $\{Y_p, V_p, W_p, \; p = n, \dots, n+m-1\}$. The covariance $S_{n+m}$ is bounded by the error covariance of the estimate of $X_{n+m}$ given $\{Y_p, \; p = n, \dots, n+m-1\}$, which is a linear combination of the covariances of $2m$ random variables and is therefore uniformly bounded in $n$.

(b) That is, if $S_1$ is replaced by $S_1' \ge S_1$, then $S_n$ is replaced by $S_n' \ge S_n$. The proof of this fact is pretty neat. One could try to do it by induction, based on the algebra; that turns out to be tricky. Instead, consider the following argument. Since we worry only about covariances, we may assume that all the random variables are jointly Gaussian. In that case, increasing $S_1$ to $S_1' \ge S_1$ can be done by replacing $X_1$ with $X_1' = X_1 + \xi_1$, where $\xi_1 = N(0, S_1' - S_1)$ is independent of $\{X_1, V_n, W_n, n \ge 1\}$. Now, let $X_n'$ be the system state corresponding to $X_1'$ and $Y_n'$ the corresponding observation. Note that $X_{n+1}' = X_{n+1} + A^n \xi_1$. Consequently, since $\xi_1$ is orthogonal to both $X_{n+1}$ and $Y^n$,

$$L[X_{n+1}' \mid Y^n, \xi_1] = L[X_{n+1} \mid Y^n] + L[X_{n+1} \mid \xi_1] + L[A^n \xi_1 \mid Y^n] + L[A^n \xi_1 \mid \xi_1] = L[X_{n+1} \mid Y^n] + A^n \xi_1,$$

so that

$$\mathrm{cov}(X_{n+1}' - L[X_{n+1}' \mid Y^n, \xi_1]) = \mathrm{cov}(X_{n+1} - L[X_{n+1} \mid Y^n]) = S_{n+1},$$

while

$$S_{n+1}' = \mathrm{cov}(X_{n+1}' - L[X_{n+1}' \mid Y'^n]).$$

Since, for each $n \ge 1$, one can express $Y_n'$ as a linear function of $(Y^n, \xi_1)$, it follows that $S_{n+1}' \ge S_{n+1}$.

(c) Note that $S_1 = 0 \le S_2$. From part (b), $S_n \le S_{n+1}$ for all $n$. However, $S_n \le B$ for some positive semidefinite $B$, by part (a). Thus, $S_n(i, i) := e_i^T S_n e_i \le S_{n+1}(i, i) \le B(i, i)$. This implies that $S_n(i, i) \to S(i, i)$ for some nonnegative $S(i, i)$. Similarly,

$$\alpha(n) := S_n(i, i) + 2 S_n(i, j) + S_n(j, j) = (e_i + e_j)^T S_n (e_i + e_j) \le \alpha(n+1) \le (e_i + e_j)^T B (e_i + e_j).$$

This implies that $\alpha(n) \to \alpha$. We conclude that $S_n(i, j)$ converges to some $S(i, j)$. Hence $S_n \to S$.

(d) Simple algebraic manipulations show that

$$S = (A - ARC) S (A - ARC)^T + A R K_W R^T A^T + K_V.$$

Assume that $(A - ARC)^T v = \lambda v$ with $|\lambda| \ge 1$ and $v \ne 0$. Then

$$v^* S v = v^* (A - ARC) S (A - ARC)^T v + v^* A R K_W R^T A^T v + v^* K_V v = |\lambda|^2 v^* S v + v^* A R K_W R^T A^T v + v^* K_V v.$$

Since $|\lambda| \ge 1$, it follows that $v^* A R K_W R^T A^T v = v^* K_V v = v^* Q Q^T v = 0$, which implies $Q^T v = 0$ and (assuming, as throughout, that $K_W$ is positive definite) $R^T A^T v = 0$. But then $A^T v = (A - ARC)^T v + C^T R^T A^T v = \lambda v$, so $Q^T (A^p)^T v = \lambda^p Q^T v = 0$ for all $p$, and $(A, Q)$ is not reachable, a contradiction. Hence $A - ARC$ is stable.

(e) We borrow this proof from [3]. The notation is the same as before. Consider the dual control system

$$\xi_{n+1} = A^T \xi_n + C^T \zeta_n + W_n.$$

The problem is to find the feedback controls $\zeta_k(\xi_k)$, $k = 1, \dots, n-1$, that achieve

$$\Phi_n(\xi) = \min E\Big[ \sum_{k=1}^{n-1} \big\{ \xi_k^T K_V \xi_k + \zeta_k^T K_W \zeta_k \big\} + \xi_n^T S_1 \xi_n \;\Big|\; \xi_1 = \xi \Big].$$

By considering all possible choices of $\zeta_1$ and assuming that the cost is minimized from then on, one sees that

$$\Phi_{n+1}(\xi) = \min_\zeta \big\{ \xi^T K_V \xi + \zeta^T K_W \zeta + E\big( \Phi_n(A^T \xi + C^T \zeta + W_1) \big) \big\}.$$

A direct calculation (see the fact recalled below) allows one to verify that

$$\Phi_n(\xi) = \xi^T S_n \xi + \sum_{k=1}^{n-1} \mathrm{trace}(S_k K_V),$$

and that the minimizing controls are given by

$$\zeta_{n-k} = -[C S_k C^T + K_W]^{-1} C S_k A^T \xi_{n-k}.$$

Instead of using this optimal control, one uses the fixed-gain control

$$\zeta_{n-k} = -[C S C^T + K_W]^{-1} C S A^T \xi_{n-k}.$$

In that case, the cost is $\xi^T G_n \xi$, plus terms that do not depend on $\xi$, where

$$G_{n+1} = \Gamma G_n \Gamma^T + A R K_W R^T A^T + K_V, \qquad \text{with } G_1 = S_1 \text{ and } \Gamma = A - ARC.$$

Since $\Gamma$ is stable by part (d), we see that $G_n$ converges to $S$ (Theorem 2). Now, $\Phi_n(\xi)$ is at most the cost of the fixed-gain control for every $\xi$; comparing the terms that are quadratic in $\xi$ shows that $S_n \le G_n$. Recall that if $S_1 = 0$, then the resulting covariance matrices, call them $S_n^0$, satisfy $S_n^0 \to S$ by part (c). Moreover, by part (b), $S_n^0 \le S_n \le G_n$, with $S_n^0 \to S$ and $G_n \to S$. It follows that $S_n \to S$.
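Step (g) below asserts that the time-invariant filter attains the same asymptotic error covariance $\Sigma$, which can be checked by simulation. A self-contained sketch, reusing the illustrative stable system from the first sketch so that a long run stays numerically well behaved:

```python
import numpy as np

# Same illustrative stable system as in the first sketch (assumed values).
A = np.array([[0.9, 0.2], [0.0, 0.7]])
C = np.array([[1.0, 0.0]])
K_V, K_W = 0.1 * np.eye(2), np.array([[0.5]])

# Limiting S, R, Sigma from the Riccati recursion (6), started from S = 0.
S = np.zeros((2, 2))
for _ in range(1000):
    G = np.linalg.inv(C @ S @ C.T + K_W)
    S = K_V + A @ (S - S @ C.T @ G @ C @ S) @ A.T
R = S @ C.T @ np.linalg.inv(C @ S @ C.T + K_W)
Sigma = (np.eye(2) - R @ C) @ S

# Run the time-invariant filter Z_n = A Z_{n-1} + R (Y_n - C A Z_{n-1}).
rng = np.random.default_rng(1)
N = 20000
x = np.zeros(2)
z = np.zeros(2)
errs = np.zeros((N, 2))
for n in range(N):
    x = A @ x + rng.multivariate_normal(np.zeros(2), K_V)
    y = C @ x + rng.multivariate_normal(np.zeros(1), K_W)
    z = A @ z + R @ (y - C @ (A @ z))
    errs[n] = x - z

# The empirical error covariance should be close to the limiting Sigma.
print(np.cov(errs[N // 2:].T))
print(Sigma)
```

The agreement is the content of the last claim of Theorem 3: a fixed-gain filter designed from the limiting $R$ is asymptotically as good as the optimal time-varying one.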

In the calculations of step (e) above, we used the fact that, if $A_1$ is symmetric and positive definite, then

$$\min_\zeta \big\{ \xi^T A_0 \xi + \zeta^T A_1 \zeta + 2 \xi^T A_2^T \zeta \big\} = \xi^T [A_0 - A_2^T A_1^{-1} A_2] \xi,$$

and the minimum is achieved by

$$\zeta = -A_1^{-1} A_2 \xi,$$

as a direct calculation shows.

(f) Assume that $S'$ is another positive semidefinite solution of (6). If $S_1 = S'$, then $S_n = S'$ for all $n$. However, we know from (e) that $S_n \to S$. Hence, $S' = S$.

(g) This follows because the time-invariant filter is a special case of a time-varying filter, so its error covariance dominates $\Sigma_n$; moreover, its prediction error covariance $\mathrm{cov}(X_n - A Z_{n-1})$ satisfies the recursion for $G_n$ in step (e) and therefore converges to $S$, which implies $\mathrm{cov}(X_n - Z_n) \to \Sigma$.

REFERENCES

[1] R. G. Gallager, Stochastic Processes: A Conceptual Approach.
[2] P. R. Kumar and P. Varaiya, Stochastic Systems: Estimation, Identification, and Adaptive Control. Prentice-Hall, 1986.
[3] H. Kushner, Introduction to Stochastic Control. Holt, Rinehart and Winston, 1971.
