Derivation of the Maximum a Posteriori Estimate for Discrete Time Descriptor Systems


Derivation of the Maximum a Posteriori Estimate for Discrete Time Descriptor Systems

Ali A. Al-Matouq^a

^a Department of Electrical Engineering and Computer Science, ^b Department of Applied Mathematics and Statistics, Colorado School of Mines, 1600 Illinois St, Golden, CO 80401. Corresponding author: aalmatou@mines.edu

arXiv:27336v5 [cs.SY] 6 Mar 2014

Abstract — In this report a derivation of the MAP state estimator objective function for general (possibly non-square) discrete time causal/non-causal descriptor systems is presented. The derivation makes use of the Kronecker canonical transformation to extract the prior distribution on the descriptor state vector so that maximum a posteriori (MAP) point estimation can be used. The analysis indicates that the MAP estimate for index 1 causal descriptor systems does not require any model transformations and can be found recursively. Furthermore, if the descriptor system is of index 2 or higher and the noise free system is causal, then the MAP estimate can also be found recursively without model transformations, provided that model causality is accounted for in designing the stochastic model.

Index Terms — Descriptor Systems, Maximum a Posteriori Estimate, Maximum Likelihood Estimate

I. MAXIMUM A POSTERIORI ESTIMATION FOR DISCRETE TIME DESCRIPTOR SYSTEMS

A. Introduction

The first objective of this chapter is to find the maximum a posteriori (MAP) estimate for the state vector sequence x_k given a stochastic discrete time descriptor system model (SDTDS), noisy measurements and

an informative prior for E x_0, as follows:

    E x_{k+1} = A x_k + B u_k + F w_k      (1)
    y_k = H x_k + v_k                      (2)
    E x_0 ~ N(r̄_0, P_0)                   (3)

where N(r̄_0, P_0) denotes a normally distributed random variable with mean r̄_0 and variance P_0. We assume here that only the sequence u_k is deterministic and all other sequences are random. The input disturbance sequence w_k ∈ R^p and the measurement noise sequence v_k ∈ R^q are i.i.d. normal random sequences, w_k ~ N(0, I_p) and v_k ~ N(0, R), where R ≻ 0. Furthermore, the random variables E x_0, w_k, v_k are assumed uncorrelated with each other. The matrices E, A ∈ R^{n_eq × n}, B ∈ R^{n_eq × j}, y_k ∈ R^m, H ∈ R^{m × n} and F ∈ R^{n_eq × p}. The maximum a posteriori estimate of x is defined as the mode of the posterior distribution, denoted x̂^{map} and given by:

    x̂^{map} := argmax_x p_{x|y}(x|y) = argmax_x [ p_{y|x}(y|x) p_x(x) / p_y(y) ]
             = argmax_x ( log p_{y|x}(y|x) + log p_x(x) )      (4)

where x̂^{map} = {x̂_k^{map}}_{k=0}^T, x = {x_k}_{k=0}^T, y = {y_k}_{k=0}^T, and x̂_k^{map} is the MAP estimate at time k. Linear descriptor systems define x_k implicitly, and hence the prior distribution p_x(x) cannot be found directly from the stochastic descriptor system given in (1). A preceding step is needed to convert the stochastic descriptor system into a format that reveals the prior distribution on x_k. On the other hand, the constrained maximum likelihood estimate x̂^{ml} is found by treating the state sequence x as a parameter, and the estimates are obtained by maximizing the likelihood function:

    x̂^{ml} := argmax_x L(x|y) = argmax_x p_{y|x}(y|x)  subject to  x ∈ C      (5)

where the constraint x ∈ C forms the prior information about the parameter x. In [1], the input sequence B u_k and the prior for E x_0 were reformulated as noisy measurements:

    B u_k = E x_{k+1} − A x_k − F w_k
    r̄_0 = E x_0 + e

where e is a Gaussian zero mean random vector with variance P_0, independent of w_k, v_k, while x_k was viewed as a parameter. The objective was to construct recursively the filtered or predicted estimate

given by the conditional mean. It may be argued, however, that this paradigm shift in viewing B u_k as a measurement is inconsistent with the reality that B u_k is a user defined input that is not random. Also, the study in [2] presented an algorithm for transforming non-causal stochastic descriptor systems into causal systems, but did not analyse how to avoid stochastic non-causality, which is more meaningful for state estimation problems in practice. In [3] and [4], matrix and state variable transformations were used to recast state estimation problems for square causal/non-causal descriptor systems into conventional state space estimation problems. However, the method results in estimating transformed state variables instead of the original model variables and hence adds a requirement for an inverse transformation at every iteration to find the estimates. Moreover, the method was not generalized to non-square descriptor systems. In this chapter, it is shown that model transformations are not necessary if the system is causal and the algebraic equations are modelled properly to avoid stochastic non-causality. The analysis is based on examining the various subsystems that descriptor systems can represent using the Kronecker canonical transformation. This canonical form is suitable for extracting the prior on x_k for the most general case of the system dynamics (1) (i.e. causal or non-causal, square or non-square), and is also capable of revealing the necessary assumptions and restrictions needed on the stochastic model and noisy measurements to define a well posed estimation problem.

B. The Real Kronecker Canonical Form of a Matrix Pencil λE − A

The Kronecker canonical form transformation (KCF) for singular matrix pencils λE − A was developed by the German mathematician Leopold Kronecker in 1890. It is also often called the generalized Schur decomposition of an arbitrary matrix pencil λE − A and is a generalization of the Jordan canonical form for a square matrix.

Definition I.1 ([5]): The matrix pencil λE − A is said to be singular if n_eq ≠ n or det(λE − A) = 0 for all λ ∈ C. Otherwise, if n_eq = n and there exists a λ ∈ C such that det(λE − A) ≠ 0, then the matrix pencil is called regular.

Definition I.2 ([5]): The matrix pencil λẼ − Ã is said to be strictly equivalent to the matrix pencil λE − A for all λ ∈ C if there exist constant non-singular matrices P ∈ C^{n_eq × n_eq} and Q ∈ C^{n × n}, independent of λ, such that:

    Ẽ = PEQ,   Ã = PAQ      (6)

Definition I.3: A nonzero vector x ∈ C^n is a generalized eigenvector of the pair (E, A) if there exists a scalar λ ∈ C, called a generalized eigenvalue, such that (λE − A)x = 0.

Theorem I.4 ([5], [6]): Let E, A ∈ R^{n_eq × n}. Then there exist non-singular matrices P ∈ R^{n_eq × n_eq} and Q ∈ R^{n × n}, for all λ ∈ R, such that:

    P(λE − A)Q = λẼ − Ã = diag( U_{ε_0}, …, U_{ε_p}, J_{ρ_1}, …, J_{ρ_r}, N_{σ_1}, …, N_{σ_o}, O_{η_0}, …, O_{η_q} )      (7)

where the matrix blocks are defined as follows:

1) The block U_{ε_0} corresponds to the existence of scalar dependencies between the columns of λE − A and is a zero matrix of size n_eq × ε_0. Blocks of the type U_{ε_i} for i = 1, …, p are the bidiagonal pencil blocks of size ε_i × (ε_i + 1) and have the form:

    U_{ε_i} = λ E_{U_{ε_i}} − A_{U_{ε_i}} =
    \begin{bmatrix} -1 & \lambda & & \\ & \ddots & \ddots & \\ & & -1 & \lambda \end{bmatrix}      (8)

This subsystem has a right null space polynomial vector of the form [λ^{ε_i}, λ^{ε_i−1}, …, λ, 1]^T for any λ.

2) J_{ρ_i} are the real Jordan blocks of size ρ_i × ρ_i for i = 1, …, r that correspond to the generalized eigenvalues of λE − A of the form (α_{ρ_i} − λ)^{ρ_i}, with:

    J_{ρ_i}(α_i) = λ E_{J_{ρ_i}} − A_{J_{ρ_i}} = λ I_{ρ_i} −
    \begin{bmatrix} \alpha_{\rho_i} & 1 & & \\ & \ddots & \ddots & \\ & & \alpha_{\rho_i} & 1 \\ & & & \alpha_{\rho_i} \end{bmatrix}      (9)

for real generalized eigenvalues α_{ρ_i} ∈ R, and:

    J_{ρ_i}(α_i) = λ E_{J_{ρ_i}} − A_{J_{ρ_i}} = λ I_{2ρ_i} −
    \begin{bmatrix} \Lambda_{\rho_i} & I_2 & & \\ & \ddots & \ddots & \\ & & \Lambda_{\rho_i} & I_2 \\ & & & \Lambda_{\rho_i} \end{bmatrix},
    \qquad \Lambda_{\rho_i} := \begin{bmatrix} \mu_{\rho_i} & \omega_{\rho_i} \\ -\omega_{\rho_i} & \mu_{\rho_i} \end{bmatrix}      (10)

for complex conjugate generalized eigenvalues α_i = μ_i + jω_i, ᾱ_i = μ_i − jω_i ∈ C with ω_i > 0.

3) N_{σ_i} are the nilpotent blocks of size σ_i × σ_i for i = 1, …, o that correspond to the infinite generalized eigenvalues of λE − A with multiplicity σ_i and have the form:

    N_{σ_i} = λ E_{N_{σ_i}} − A_{N_{σ_i}} = λ
    \begin{bmatrix} 0 & 1 & & \\ & \ddots & \ddots & \\ & & 0 & 1 \\ & & & 0 \end{bmatrix} − I_{σ_i}      (11)

4) The block O_{η_0} corresponds to the existence of scalar dependencies between the rows of λE − A and is a zero matrix of size η_0 × n. Blocks O_{η_i} are the bidiagonal blocks of size (η_i + 1) × η_i for i = 1, …, q and have the form:

    O_{η_i} = λ E_{O_{η_i}} − A_{O_{η_i}} =
    \begin{bmatrix} \lambda & & \\ -1 & \ddots & \\ & \ddots & \lambda \\ & & -1 \end{bmatrix}      (12)

This subsystem has a left null space polynomial vector [1, λ, …, λ^{η_i}] for any λ.

Proof: See [5] for the full proof of the existence of P and Q.

Note that the indices ε_i, ρ_i, σ_i and η_i, together with the finite generalized eigenvalues α_i, fully characterize the matrix pencil λE − A. The presence of all these blocks in a pencil reflects the most general case. The matrices E, A in (7) may correspond to a descriptor system model such as the one given in (1). In this case, the descriptor system model may contain multiple subsystems, some connected and some disjoint from each other. Moreover, some of these subsystems may be under-determined while others are over-determined, depending on the existence of the blocks U_{ε_i} and O_{η_i} in the transformed matrix pencil λẼ − Ã. A geometric implementation of the Kronecker canonical decomposition that results in finding real transformation matrices P ∈ R^{n_eq × n_eq} and Q ∈ R^{n × n} can be found in [7]. This is important to avoid transforming the real random variables x_k, w_k, v_k into complex random variables, which would otherwise complicate the analysis.

C. Transformation of Stochastic Descriptor Systems to KCF

In order to find the MAP estimate for the state sequence of a stochastic descriptor system (1), given noisy measurements (2) and the initial condition prior (3), we need to find the prior on {x_k}_{k=0}^T by transforming

(1) into Kronecker canonical form as follows:

    PEQ x̃_{k+1} = PAQ x̃_k + PB u_k + PF w_k      (13)
    y_k = HQ x̃_k + v_k                            (14)

where x̃_k = Q^{−1} x_k. Since the pencil P(λE − A)Q is block diagonal, we can compute the solution for each block separately, as given by [8]. In the sequel, we will present this solution in a form suitable for MAP estimation. We partition the resulting transformed system matrices in (13) as follows:

    Ẽ = PEQ = diag(E_U, E_J, E_N, E_O),   Ã = PAQ = diag(A_U, A_J, A_N, A_O)      (15)

    B̃ = PB = [B_U; B_J; B_N; B_O],   F̃ = PF = [F_U; F_J; F_N; F_O],
    x̃_k = [x̃_k^{(U)}; x̃_k^{(J)}; x̃_k^{(N)}; x̃_k^{(O)}],   H̃ = HQ = [H_U  H_J  H_N  H_O]      (16)

where E_U = diag(E_{U_{ε_1}}, …, E_{U_{ε_p}}), E_J = diag(E_{J_{ρ_1}}, …, E_{J_{ρ_r}}), etc., A_U = diag(A_{U_{ε_1}}, …, A_{U_{ε_p}}), A_J = diag(A_{J_{ρ_1}}, …, A_{J_{ρ_r}}), etc., and diag(·) denotes diagonal concatenation of matrices. Similarly, B_U, …, B_O and F_U, …, F_O are defined conformally with the rows of Ẽ and Ã. Consequently, (1) can be expressed using the previous transformation and definitions as follows:

    Ẽ x̃_{k+1} = Ã x̃_k + B̃ u_k + F̃ w_k      (17)
    y_k = H̃ x̃_k + v_k                        (18)

The solution for each subsystem block will be presented next, to examine the prior for each subsystem and to eventually find the MAP estimate for the untransformed variable x_k. The analysis and solution of

general non-square discrete time descriptor systems was recently conducted in [8]. We conduct a similar analysis for the stochastic descriptor system (1).

1) The Under-Determined Subsystem Block: In the transformed pencil λẼ − Ã, multiple columns of zeros will occur depending on the number of dependent columns in the original matrix pencil λE − A that differ by a scalar factor. This corresponds to the existence of the block U_{ε_0} = 0 with size n_eq × ε_0. This implies that there will be descriptor state variables that are unspecified by the descriptor system (1), and the number of unspecified descriptor states will depend on the number of zero columns in λẼ − Ã. For the general case, when ε_i > 0, there will be dependent columns in λẼ − Ã. Assuming ε_1 > 0, and that only the block U_{ε_1} appears in the KCF, the following difference equation can be formed from (17):

    E_U x̃_{k+1}^{(U)} = A_U x̃_k^{(U)} + B_U u_k + F_U w_k      (19)

Referring to (8), this can be written in expanded form as:

    [x̃_{U,k+1}^{2}; …; x̃_{U,k+1}^{ε_1+1}] = [x̃_{U,k}^{1}; …; x̃_{U,k}^{ε_1}] + [b_{U_1}; …; b_{U_{ε_1}}] u_k + [f_{U_1}; …; f_{U_{ε_1}}] w_k      (20)

where b_{U_i} and f_{U_i} are formed from the rows of B_U and F_U respectively. Since x̃_{U,k}^{1} cannot be specified from the stochastic dynamic equations, the subsystem is called an under-determined subsystem. As a result, we cannot find a prior distribution for x̃_{U,k}^{1} from the transformed stochastic equations. To reflect our lack of knowledge of this variable, we will assume the following uninformative prior distribution on the random variable x̃_{U,k}^{1}:

    x̃_{U,k}^{1} ~ N(μ_{U,k}, q²)      (21)

where q > 0 is chosen to be large to make the prior uninformative. We also assume that x̃_{U,k}^{1} is independent from the random sequences w_k, v_k. However, we cannot estimate x̃_{U,k}^{1} with this type of prior alone. The only way we can estimate this descriptor variable is by having an observation y_k that depends on x̃_{U,k}^{1}. This condition is fulfilled when [E_U^T  H_U^T]^T is full column rank. This is known as the estimableness condition given in [2].

2) The Over-Determined Subsystem Block: In the transformed pencil λẼ − Ã, multiple rows of zeros will occur depending on the number of dependent rows that differ by a scalar factor in the original matrix pencil λE − A. This corresponds to the existence of the block O_{η_0} = 0 with size η_0 × n. If λẼ − Ã happens to have a row of zeros, then this will correspond to the following difference equation in (17):

    0 = 0 + B_{O_{η_i}} u_k + F_{O_{η_i}} w_k      (22)

which imposes constraints on the input and hence is not a well defined stochastic equation, since the assumption that u_k is deterministic is now invalid. In conclusion, for a well defined stochastic model (1), we cannot have any dependency between the rows of the matrix pencil λE − A; i.e. the matrix pencil must be full row rank. Consequently, in order for the stochastic model (1) to be well defined, it cannot have Kronecker blocks of the form O_{η_i}. This is equivalent to having [E  A] full row rank, which is one of the conditions for a well-posed estimation problem mentioned in [2].

3) The Regular Subsystem Block: The regular subsystem is composed of the Jordan blocks J_ρ and the nilpotent blocks N_σ, which correspond to the finite and infinite elementary divisors of λE − A respectively. These two blocks combine to form a square regular descriptor system. Assuming ρ_1 > 0, the corresponding difference equation will be:

    x̃_{k+1}^{(J)} = A_J x̃_k^{(J)} + B_J u_k + F_J w_k      (23)

As a result, we obtain an ordinary state space difference equation and all state variables can be determined from this subsystem. Similarly, if we assume σ_1 > 0, the nilpotent block appearing in (11) corresponds to the following difference equation:

    E_N x̃_{k+1}^{(N)} = A_N x̃_k^{(N)} + B_N u_k + F_N w_k      (24)

which can be expanded using (11) as follows:

    [x̃_{N,k+1}^{2}; x̃_{N,k+1}^{3}; …; x̃_{N,k+1}^{σ_1}; 0] = [x̃_{N,k}^{1}; x̃_{N,k}^{2}; …; x̃_{N,k}^{σ_1−1}; x̃_{N,k}^{σ_1}] + [b_{N_1}; …; b_{N_{σ_1}}] u_k + [f_{N_1}; …; f_{N_{σ_1}}] w_k      (25)

We recognize that the matrix E_N is nilpotent of degree σ_1; i.e. E_N^i ≠ 0 for i < σ_1 and E_N^i = 0 for i ≥ σ_1. As a result, the solution to (25) can be expressed as follows:

    x̃_k^{(N)} = E_N x̃_{k+1}^{(N)} − B_N u_k − F_N w_k
              = E_N^2 x̃_{k+2}^{(N)} − E_N B_N u_{k+1} − E_N F_N w_{k+1} − B_N u_k − F_N w_k
              ⋮
              = − Σ_{i=0}^{σ_1−1} E_N^i B_N u_{k+i} − Σ_{i=0}^{σ_1−1} E_N^i F_N w_{k+i}      (26)

This subsystem forms the non-causal equations that correspond to the infinite elementary divisors of (λE − A). We notice that x̃_k^{(N)} can depend on future values of the input and noise sequences if the system is

non-causal. In order to determine this state, we would need to know the future values of the input u_k and of the disturbance sequence w_k. The nilpotency degree of the matrix E_N determines the index of the descriptor model (1); i.e. ν_d = σ_1 for time invariant square descriptor models. We recognize that a high index model, i.e. ν_d > 1, is not a sufficient condition for having a non-causal model; rather, the matrices B and F must also have certain values such that E_N^i B_N ≠ 0 and E_N^i F_N ≠ 0 for i = 1, 2, …, ν_d. Non-causal systems, however, do not exist in reality, unless the variation is with respect to space rather than time. Techniques for verifying causality and for designing the matrix F so that (1) is causal are discussed later.

D. The MAP Estimate for Index 1 Causal Descriptor Systems

We have seen that the KCF is capable of performing the following tasks simultaneously:
1) Introducing zero column vectors in the transformed matrix pencil that correspond to dependent columns of λE − A that differ by a scalar or polynomial factor. This allows us to determine upfront the descriptor state variables that have no informative prior in the stochastic model. If [E_U^T  H_U^T]^T is full column rank, then any unspecified states can be estimated from measurements only.
2) Introducing zero row vectors in the transformed matrix pencil that correspond to redundant rows of λE − A that differ by a scalar or polynomial factor. This allows us to determine whether the stochastic model (1) is well defined, as redundant rows will constrain the input sequence and render the estimation problem not well defined.
3) Determining the Jordan blocks that correspond to the hidden stochastic state-space subsystems in the descriptor model (1).
4) Determining the nilpotent blocks that correspond to the hidden non-causal subsystems.

To find the MAP estimate for x_k, the MAP estimate for x̃_k will be determined first, and then the inverse transformation will be used (with the real transformation matrices P, Q as given in (15)) to find the corresponding value of x̂_k^{map}. Based on the above discussion, the following assumptions are needed:

Assumption I.5 (Index 1 Causal Descriptor Systems):
1) The matrix [E  A] is full row rank; i.e. there is no dependency between the rows of the matrix pencil λE − A, and hence the stochastic model (1) has a solution for any consistent initial condition. For example, if the matrices are square, this condition will guarantee that det(λE − A) ≠ 0 for some λ ∈ C, which is the condition for system solvability [6]. If the system is rectangular, then the number of

rows must be smaller than the number of columns, which guarantees existence of a solution to the initial value problem [6].
2) The matrix [E^T  H^T]^T is full column rank. This will enable estimating descriptor states that have no informative prior from the stochastic model (1). More precisely, it is required that [E_{U_{ε_i}}^T  H_{U_{ε_i}}^T]^T be full column rank, because only the under-determined subsystems contain the unspecified states, as explained earlier. The two rank conditions are identical, since transformation matrices do not alter the rank of the matrices.
3) The random i.i.d. sequences w_k ~ N(0, I), v_k ~ N(0, I) and the random variable r_0 ~ N(r̄_0, P_0) are uncorrelated.
4) The matrix F is full column rank. This is not a limiting assumption since, if it fails, it is permitted to redefine the random variables w_k using the QR decomposition [2]:

    F = [F̄  0] [Q_1; Q_2] = F̄ Q_1,   w̄_k = Q_1 w_k

where F̄ is full column rank and the w̄_k are i.i.d. zero mean unit covariance Gaussian vectors because Q_1 is orthonormal [2]; then F w_k = F̄ w̄_k.
5) The index of the stochastic descriptor system (1) is 1, which can be verified using index calculation methods for square descriptor systems as given in [6]. This will also ensure that the system is causal.

Consequently, the set of equations that describes the original stochastic descriptor system (1) and the noisy measurements, after the KCF transformation and under the above assumptions, is:

    E_U x̃_{k+1}^{(U)} = A_U x̃_k^{(U)} + B_U u_k + F_U w_k      (27a)
    x̃_{U,k}^{1} = μ_{U,k} + q s_k                               (27b)
    x̃_{k+1}^{(J)} = A_J x̃_k^{(J)} + B_J u_k + F_J w_k          (27c)
    x̃_k^{(N)} = − B_N u_k − F_N w_k                             (27d)
    y_k = H̃ x̃_k + v_k                                          (27e)

where an uninformative prior was specified for the undetermined state, x̃_{U,k}^{1} ~ N(μ_{U,k}, q²), with s_k a normally distributed random sequence with zero mean and unit covariance, independent from the noise sequences w_k, v_k. All of these equations are explicit in the descriptor state vector. Since [E  A] is full row rank, we do not have any over-determined subsystem blocks O_{η_j}. The MAP estimate of {x̃_k}_{k=0}^T is

obtained from the conditional distribution:

    p_{x̃|y}(x̃|y) ∝ p_{y|x̃}(y|x̃) p_{x̃}(x̃)      (28)

where x̃ = {x̃_k}_{k=0}^T. In the following, the subscripts of the prior distributions will be omitted for simplicity of notation; it will be implied that the distributions are with respect to the random variable x̃_k. From (27), and noting the independences between the random variables, we recognize that:

    p(x̃) = p(x̃_0^{(U)}, x̃_0^{(J)}, x̃_0^{(N)}, …, x̃_T^{(U)}, x̃_T^{(J)}, x̃_T^{(N)})
         = p(x̃_0^{(U)}, x̃_0^{(J)}) ∏_{k=0}^{T−1} p(x̃_{k+1}^{(U)}, x̃_{k+1}^{(J)}, x̃_k^{(N)} | x̃_k^{(U)}, x̃_k^{(J)})
         = p(E_U x̃_0^{(U)}, E_J x̃_0^{(J)}) p(x̃_{U,0}^{1}) ∏_{k=0}^{T−1} p(E_U x̃_{k+1}^{(U)}, E_J x̃_{k+1}^{(J)}, A_N x̃_k^{(N)} | A_U x̃_k^{(U)}, A_J x̃_k^{(J)}) p(x̃_{U,k+1}^{1})      (29)

where the matrix multiplications in the last relationship were introduced by examining the relations (20), (23) and (25) presented earlier. More precisely, E_U is a matrix with a zero vector in the first column and an identity matrix in the remaining columns; hence E_U x̃_k^{(U)} is independent from x̃_{U,k}^{1}, and E_U x̃_{k+1}^{(U)} is independent from x̃_{U,k+1}^{1}. Also, we note that both E_J and A_N are identity matrices, so the multiplication with the random variables has no effect. Finally, the value of x̃_{U,k+1}^{ε_1+1} is independent from the value of x̃_{U,k}^{ε_1}, and therefore we may use A_U x̃_k^{(U)} instead of x̃_k^{(U)} in the conditional distribution. As a result, the conditional distribution in (29) can be obtained using the relations given in (27) after variable substitution as follows:

    p(E_U x̃_{k+1}^{(U)}, E_J x̃_{k+1}^{(J)}, A_N x̃_k^{(N)} | A_U x̃_k^{(U)}, A_J x̃_k^{(J)}) = p_{F̃w}(ζ_k)      (30)

where,

    ζ_k = [E_U x̃_{k+1}^{(U)} − A_U x̃_k^{(U)} − B_U u_k;  E_J x̃_{k+1}^{(J)} − A_J x̃_k^{(J)} − B_J u_k;  − x̃_k^{(N)} − B_N u_k],   F̃ = PF = [F_U; F_J; F_N]

Similarly, we may find the distribution of the measurements conditioned on the state as:

    p_{y|x̃}(y|x̃) = ∏_{k=0}^{T} p_v(y_k − H̃ x̃_k)      (31)

Given the prior for E x_0 in (3), we need to obtain the prior after multiplication with P as follows:

    P E x_0 ~ N(P r̄_0, P P_0 P^T)

which can be rewritten as:

    [E_U x̃_0^{(U)}; E_J x̃_0^{(J)}; E_N x̃_0^{(N)}] ~ N( [r̄_0^{(U)}; r̄_0^{(J)}; r̄_0^{(N)}], P̄_0 )

where P̄_0 = P P_0 P^T. Note that E_N = 0 and equation (27d) already determines the prior for x̃_0^{(N)}, with mean −B_N u_0 and variance F_N F_N^T. Consequently, the negative logarithm of the conditional distribution can be written (with ‖v‖²_M := v^T M^{−1} v) as:

    − log p_{y|x̃}(y|x̃) − log p(x̃) ≐ (1/2) ‖ [E_U x̃_0^{(U)} − r̄_0^{(U)};  E_J x̃_0^{(J)} − r̄_0^{(J)};  − r̄_0^{(N)}] ‖²_{P̄_0}
        + (1/2) Σ_{k=0}^{T−1} ‖ [E_U x̃_{k+1}^{(U)} − A_U x̃_k^{(U)} − B_U u_k;  E_J x̃_{k+1}^{(J)} − A_J x̃_k^{(J)} − B_J u_k;  x̃_k^{(N)} + B_N u_k] ‖²_{F̄}
        + (1/2) Σ_{k=0}^{T} ‖ y_k − H̃ x̃_k ‖²_R + (1/(2q²)) Σ_{k=0}^{T} ( x̃_{U,k}^{1} − μ_{U,k} )²      (32)

where F̄ = F̃ F̃^T. Notice that F̃ = PF is full column rank by the assumption that F is full column rank; hence F̃ F̃^T is non-singular and positive definite. Taking the limit as q → ∞ (to reflect our lack of prior for x̃_{U,k}^{1}) and using the relations for PEQ, PAQ and PB in (15)–(16) results in the following objective function:

    x̂^{map} = argmin_x (1/2) ‖PE x_0 − P r̄_0‖²_{P̄_0} + (1/2) Σ_{k=0}^{T} ‖y_k − H x_k‖²_R + (1/2) Σ_{k=0}^{T−1} ‖PE x_{k+1} − PA x_k − PB u_k‖²_{F̄}
            = argmin_x (1/2) ‖E x_0 − r̄_0‖²_{P_0} + (1/2) Σ_{k=0}^{T} ‖y_k − H x_k‖²_R + (1/2) Σ_{k=0}^{T−1} ‖E x_{k+1} − A x_k − B u_k‖²_Q      (33)

where P^T P̄_0^{−1} P = P_0^{−1} and Q = F F^T (so that P^T F̄^{−1} P = Q^{−1}). Hence, the MAP estimate of x_k for causal index 1 descriptor systems of the form (1)–(2) can be found directly from the system matrices, with no need for any transformation. Solving this minimization problem is identical to solving the constrained maximum likelihood objective function derived in [1] and [2] by viewing the initial condition and input sequence as noisy measurements. Hence, this establishes that the MAP and ML estimates are identical for state estimation problems that involve causal descriptor systems of index ν_d = 1.
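Because (33) is an ordinary weighted linear least-squares problem in the stacked state sequence, the batch MAP estimate can be computed directly from the untransformed system matrices. A minimal numerical sketch under stated assumptions (all matrices and data below are illustrative; Q = FF^T and P_0 are taken invertible, with R the measurement covariance):

```python
import numpy as np

def map_estimate(E, A, B, H, F, R, r0, P0, u, y):
    """Batch MAP estimate (33): stack the prior, dynamics and measurement
    residuals, whiten each block by the inverse Cholesky factor of its
    covariance, and solve one least-squares problem for x_0, ..., x_{N-1}."""
    n, N = E.shape[1], len(y)
    def whitener(S):                       # W such that W.T @ W = inv(S)
        return np.linalg.inv(np.linalg.cholesky(S))
    WP, WR, WQ = whitener(P0), whitener(R), whitener(F @ F.T)
    rows, rhs = [], []
    def add(cols, b):                      # cols: {state index k: coefficient}
        row = np.zeros((len(b), n * N))
        for k, M in cols.items():
            row[:, k * n:(k + 1) * n] = M
        rows.append(row)
        rhs.append(b)
    add({0: WP @ E}, WP @ r0)                              # ||E x_0 - r0||_{P0}
    for k in range(N):
        add({k: WR @ H}, WR @ y[k])                        # ||y_k - H x_k||_R
    for k in range(N - 1):
        add({k: -WQ @ A, k + 1: WQ @ E}, WQ @ (B @ u[k]))  # dynamics, Q = F F^T
    x, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)
    return x.reshape(N, n)

# Sanity check on the ordinary state-space special case (E = I, scalar):
E = np.eye(1); H = np.eye(1); F = np.eye(1); B = np.eye(1)
A = np.array([[0.9]]); R = np.array([[0.1]]); P0 = np.array([[1.0]])
x_true = 0.9 ** np.arange(5)               # noise-free trajectory, x_0 = 1
xhat = map_estimate(E, A, B, H, F, R, np.array([1.0]), P0,
                    [np.zeros(1)] * 4, [np.array([v]) for v in x_true])
assert np.allclose(xhat.ravel(), x_true)
```

With noise-free data the stacked residual can be driven to zero, so the least-squares solution recovers the true trajectory exactly; with noisy data it returns the minimizer of (33).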

REFERENCES

[1] R. Nikoukhah, A. Willsky, and B. Levy, "Kalman filtering and Riccati equations for descriptor systems," IEEE Transactions on Automatic Control, vol. 37, no. 9, Sep. 1992.
[2] R. Nikoukhah, S. Campbell, and F. Delebecque, "Kalman filtering for general discrete-time linear systems," IEEE Transactions on Automatic Control, vol. 44, no. 10, Oct. 1999.
[3] M. Gerdin, "Identification and estimation for models described by differential-algebraic equations," PhD dissertation, Department of Electrical Engineering, Linköping University, 2006.
[4] T. B. Schön, "Estimation of nonlinear dynamic systems: Theory and applications," PhD dissertation, Linköping University, 2006.
[5] F. Gantmacher, The Theory of Matrices, vol. 2. Chelsea Publishing Company, 1959.
[6] P. Kunkel and V. Mehrmann, Differential-Algebraic Equations: Analysis and Numerical Solution, ser. EMS Textbooks in Mathematics. European Mathematical Society, 2006.
[7] T. Berger and S. Trenn, "The quasi-Kronecker form for matrix pencils," SIAM Journal on Matrix Analysis and Applications, vol. 33, no. 2, 2012.
[8] T. Brüll, "Linear discrete-time descriptor systems," Master's thesis, Institut für Mathematik, TU Berlin, 2007.


More information

. a m1 a mn. a 1 a 2 a = a n

. a m1 a mn. a 1 a 2 a = a n Biostat 140655, 2008: Matrix Algebra Review 1 Definition: An m n matrix, A m n, is a rectangular array of real numbers with m rows and n columns Element in the i th row and the j th column is denoted by

More information

Tutorial on Principal Component Analysis

Tutorial on Principal Component Analysis Tutorial on Principal Component Analysis Copyright c 1997, 2003 Javier R. Movellan. This is an open source document. Permission is granted to copy, distribute and/or modify this document under the terms

More information

MA 265 FINAL EXAM Fall 2012

MA 265 FINAL EXAM Fall 2012 MA 265 FINAL EXAM Fall 22 NAME: INSTRUCTOR S NAME:. There are a total of 25 problems. You should show work on the exam sheet, and pencil in the correct answer on the scantron. 2. No books, notes, or calculators

More information

Unbiased minimum variance estimation for systems with unknown exogenous inputs

Unbiased minimum variance estimation for systems with unknown exogenous inputs Unbiased minimum variance estimation for systems with unknown exogenous inputs Mohamed Darouach, Michel Zasadzinski To cite this version: Mohamed Darouach, Michel Zasadzinski. Unbiased minimum variance

More information

ECONOMETRIC METHODS II: TIME SERIES LECTURE NOTES ON THE KALMAN FILTER. The Kalman Filter. We will be concerned with state space systems of the form

ECONOMETRIC METHODS II: TIME SERIES LECTURE NOTES ON THE KALMAN FILTER. The Kalman Filter. We will be concerned with state space systems of the form ECONOMETRIC METHODS II: TIME SERIES LECTURE NOTES ON THE KALMAN FILTER KRISTOFFER P. NIMARK The Kalman Filter We will be concerned with state space systems of the form X t = A t X t 1 + C t u t 0.1 Z t

More information

Perturbation theory for eigenvalues of Hermitian pencils. Christian Mehl Institut für Mathematik TU Berlin, Germany. 9th Elgersburg Workshop

Perturbation theory for eigenvalues of Hermitian pencils. Christian Mehl Institut für Mathematik TU Berlin, Germany. 9th Elgersburg Workshop Perturbation theory for eigenvalues of Hermitian pencils Christian Mehl Institut für Mathematik TU Berlin, Germany 9th Elgersburg Workshop Elgersburg, March 3, 2014 joint work with Shreemayee Bora, Michael

More information

Linear Classifiers as Pattern Detectors

Linear Classifiers as Pattern Detectors Intelligent Systems: Reasoning and Recognition James L. Crowley ENSIMAG 2 / MoSIG M1 Second Semester 2014/2015 Lesson 16 8 April 2015 Contents Linear Classifiers as Pattern Detectors Notation...2 Linear

More information

Factor Analysis and Kalman Filtering (11/2/04)

Factor Analysis and Kalman Filtering (11/2/04) CS281A/Stat241A: Statistical Learning Theory Factor Analysis and Kalman Filtering (11/2/04) Lecturer: Michael I. Jordan Scribes: Byung-Gon Chun and Sunghoon Kim 1 Factor Analysis Factor analysis is used

More information

Research Article On Exponential Stability Conditions of Descriptor Systems with Time-Varying Delay

Research Article On Exponential Stability Conditions of Descriptor Systems with Time-Varying Delay Applied Mathematics Volume 2012, Article ID 532912, 12 pages doi:10.1155/2012/532912 Research Article On Exponential Stability Conditions of Descriptor Systems with Time-Varying Delay S. Cong and Z.-B.

More information

Performance assessment of MIMO systems under partial information

Performance assessment of MIMO systems under partial information Performance assessment of MIMO systems under partial information H Xia P Majecki A Ordys M Grimble Abstract Minimum variance (MV) can characterize the most fundamental performance limitation of a system,

More information

New Introduction to Multiple Time Series Analysis

New Introduction to Multiple Time Series Analysis Helmut Lütkepohl New Introduction to Multiple Time Series Analysis With 49 Figures and 36 Tables Springer Contents 1 Introduction 1 1.1 Objectives of Analyzing Multiple Time Series 1 1.2 Some Basics 2

More information

AN IDENTIFICATION ALGORITHM FOR ARMAX SYSTEMS

AN IDENTIFICATION ALGORITHM FOR ARMAX SYSTEMS AN IDENTIFICATION ALGORITHM FOR ARMAX SYSTEMS First the X, then the AR, finally the MA Jan C. Willems, K.U. Leuven Workshop on Observation and Estimation Ben Gurion University, July 3, 2004 p./2 Joint

More information

Cramér-Rao Bounds for Estimation of Linear System Noise Covariances

Cramér-Rao Bounds for Estimation of Linear System Noise Covariances Journal of Mechanical Engineering and Automation (): 6- DOI: 593/jjmea Cramér-Rao Bounds for Estimation of Linear System oise Covariances Peter Matiso * Vladimír Havlena Czech echnical University in Prague

More information

SECTIONS 5.2/5.4 BASIC PROPERTIES OF EIGENVALUES AND EIGENVECTORS / SIMILARITY TRANSFORMATIONS

SECTIONS 5.2/5.4 BASIC PROPERTIES OF EIGENVALUES AND EIGENVECTORS / SIMILARITY TRANSFORMATIONS SECINS 5/54 BSIC PRPERIES F EIGENVUES ND EIGENVECRS / SIMIRIY RNSFRMINS Eigenvalues of an n : there exists a vector x for which x = x Such a vector x is called an eigenvector, and (, x) is called an eigenpair

More information

ELEG 5633 Detection and Estimation Signal Detection: Deterministic Signals

ELEG 5633 Detection and Estimation Signal Detection: Deterministic Signals ELEG 5633 Detection and Estimation Signal Detection: Deterministic Signals Jingxian Wu Department of Electrical Engineering University of Arkansas Outline Matched Filter Generalized Matched Filter Signal

More information

Chapter Two Elements of Linear Algebra

Chapter Two Elements of Linear Algebra Chapter Two Elements of Linear Algebra Previously, in chapter one, we have considered single first order differential equations involving a single unknown function. In the next chapter we will begin to

More information

Total Least Squares Approach in Regression Methods

Total Least Squares Approach in Regression Methods WDS'08 Proceedings of Contributed Papers, Part I, 88 93, 2008. ISBN 978-80-7378-065-4 MATFYZPRESS Total Least Squares Approach in Regression Methods M. Pešta Charles University, Faculty of Mathematics

More information

1 EM algorithm: updating the mixing proportions {π k } ik are the posterior probabilities at the qth iteration of EM.

1 EM algorithm: updating the mixing proportions {π k } ik are the posterior probabilities at the qth iteration of EM. Université du Sud Toulon - Var Master Informatique Probabilistic Learning and Data Analysis TD: Model-based clustering by Faicel CHAMROUKHI Solution The aim of this practical wor is to show how the Classification

More information

Linear Classifiers as Pattern Detectors

Linear Classifiers as Pattern Detectors Intelligent Systems: Reasoning and Recognition James L. Crowley ENSIMAG 2 / MoSIG M1 Second Semester 2013/2014 Lesson 18 23 April 2014 Contents Linear Classifiers as Pattern Detectors Notation...2 Linear

More information

ON THE SINGULAR DECOMPOSITION OF MATRICES

ON THE SINGULAR DECOMPOSITION OF MATRICES An. Şt. Univ. Ovidius Constanţa Vol. 8, 00, 55 6 ON THE SINGULAR DECOMPOSITION OF MATRICES Alina PETRESCU-NIŢǍ Abstract This paper is an original presentation of the algorithm of the singular decomposition

More information

Prediction, filtering and smoothing using LSCR: State estimation algorithms with guaranteed confidence sets

Prediction, filtering and smoothing using LSCR: State estimation algorithms with guaranteed confidence sets 2 5th IEEE Conference on Decision and Control and European Control Conference (CDC-ECC) Orlando, FL, USA, December 2-5, 2 Prediction, filtering and smoothing using LSCR: State estimation algorithms with

More information

Homework 6 Solutions. Solution. Note {e t, te t, t 2 e t, e 2t } is linearly independent. If β = {e t, te t, t 2 e t, e 2t }, then

Homework 6 Solutions. Solution. Note {e t, te t, t 2 e t, e 2t } is linearly independent. If β = {e t, te t, t 2 e t, e 2t }, then Homework 6 Solutions 1 Let V be the real vector space spanned by the functions e t, te t, t 2 e t, e 2t Find a Jordan canonical basis and a Jordan canonical form of T on V dened by T (f) = f Solution Note

More information

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA Kent State University Department of Mathematical Sciences Compiled and Maintained by Donald L. White Version: August 29, 2017 CONTENTS LINEAR ALGEBRA AND

More information

Chap 3. Linear Algebra

Chap 3. Linear Algebra Chap 3. Linear Algebra Outlines 1. Introduction 2. Basis, Representation, and Orthonormalization 3. Linear Algebraic Equations 4. Similarity Transformation 5. Diagonal Form and Jordan Form 6. Functions

More information

Classifying and building DLMs

Classifying and building DLMs Chapter 3 Classifying and building DLMs In this chapter we will consider a special form of DLM for which the transition matrix F k and the observation matrix H k are constant in time, ie, F k = F and H

More information

NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction

NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction NONCOMMUTATIVE POLYNOMIAL EQUATIONS Edward S Letzter Introduction My aim in these notes is twofold: First, to briefly review some linear algebra Second, to provide you with some new tools and techniques

More information

Computational Methods. Eigenvalues and Singular Values

Computational Methods. Eigenvalues and Singular Values Computational Methods Eigenvalues and Singular Values Manfred Huber 2010 1 Eigenvalues and Singular Values Eigenvalues and singular values describe important aspects of transformations and of data relations

More information

AN ITERATION. In part as motivation, we consider an iteration method for solving a system of linear equations which has the form x Ax = b

AN ITERATION. In part as motivation, we consider an iteration method for solving a system of linear equations which has the form x Ax = b AN ITERATION In part as motivation, we consider an iteration method for solving a system of linear equations which has the form x Ax = b In this, A is an n n matrix and b R n.systemsof this form arise

More information

Nonlinear palindromic eigenvalue problems and their numerical solution

Nonlinear palindromic eigenvalue problems and their numerical solution Nonlinear palindromic eigenvalue problems and their numerical solution TU Berlin DFG Research Center Institut für Mathematik MATHEON IN MEMORIAM RALPH BYERS Polynomial eigenvalue problems k P(λ) x = (

More information

ECE 636: Systems identification

ECE 636: Systems identification ECE 636: Systems identification Lectures 3 4 Random variables/signals (continued) Random/stochastic vectors Random signals and linear systems Random signals in the frequency domain υ ε x S z + y Experimental

More information

Lecture Notes 4 Vector Detection and Estimation. Vector Detection Reconstruction Problem Detection for Vector AGN Channel

Lecture Notes 4 Vector Detection and Estimation. Vector Detection Reconstruction Problem Detection for Vector AGN Channel Lecture Notes 4 Vector Detection and Estimation Vector Detection Reconstruction Problem Detection for Vector AGN Channel Vector Linear Estimation Linear Innovation Sequence Kalman Filter EE 278B: Random

More information

Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition

Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Prof. Tesler Math 283 Fall 2016 Also see the separate version of this with Matlab and R commands. Prof. Tesler Diagonalizing

More information

Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition

Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Prof. Tesler Math 283 Fall 2018 Also see the separate version of this with Matlab and R commands. Prof. Tesler Diagonalizing

More information

Linear Algebra- Final Exam Review

Linear Algebra- Final Exam Review Linear Algebra- Final Exam Review. Let A be invertible. Show that, if v, v, v 3 are linearly independent vectors, so are Av, Av, Av 3. NOTE: It should be clear from your answer that you know the definition.

More information

Mini-Course 07 Kalman Particle Filters. Henrique Massard da Fonseca Cesar Cunha Pacheco Wellington Bettencurte Julio Dutra

Mini-Course 07 Kalman Particle Filters. Henrique Massard da Fonseca Cesar Cunha Pacheco Wellington Bettencurte Julio Dutra Mini-Course 07 Kalman Particle Filters Henrique Massard da Fonseca Cesar Cunha Pacheco Wellington Bettencurte Julio Dutra Agenda State Estimation Problems & Kalman Filter Henrique Massard Steady State

More information

Invertibility and stability. Irreducibly diagonally dominant. Invertibility and stability, stronger result. Reducible matrices

Invertibility and stability. Irreducibly diagonally dominant. Invertibility and stability, stronger result. Reducible matrices Geršgorin circles Lecture 8: Outline Chapter 6 + Appendix D: Location and perturbation of eigenvalues Some other results on perturbed eigenvalue problems Chapter 8: Nonnegative matrices Geršgorin s Thm:

More information

FINITE HORIZON ROBUST MODEL PREDICTIVE CONTROL USING LINEAR MATRIX INEQUALITIES. Danlei Chu, Tongwen Chen, Horacio J. Marquez

FINITE HORIZON ROBUST MODEL PREDICTIVE CONTROL USING LINEAR MATRIX INEQUALITIES. Danlei Chu, Tongwen Chen, Horacio J. Marquez FINITE HORIZON ROBUST MODEL PREDICTIVE CONTROL USING LINEAR MATRIX INEQUALITIES Danlei Chu Tongwen Chen Horacio J Marquez Department of Electrical and Computer Engineering University of Alberta Edmonton

More information

Prof. Dr.-Ing. Armin Dekorsy Department of Communications Engineering. Stochastic Processes and Linear Algebra Recap Slides

Prof. Dr.-Ing. Armin Dekorsy Department of Communications Engineering. Stochastic Processes and Linear Algebra Recap Slides Prof. Dr.-Ing. Armin Dekorsy Department of Communications Engineering Stochastic Processes and Linear Algebra Recap Slides Stochastic processes and variables XX tt 0 = XX xx nn (tt) xx 2 (tt) XX tt XX

More information

Econ Slides from Lecture 7

Econ Slides from Lecture 7 Econ 205 Sobel Econ 205 - Slides from Lecture 7 Joel Sobel August 31, 2010 Linear Algebra: Main Theory A linear combination of a collection of vectors {x 1,..., x k } is a vector of the form k λ ix i for

More information

Degenerate Perturbation Theory. 1 General framework and strategy

Degenerate Perturbation Theory. 1 General framework and strategy Physics G6037 Professor Christ 12/22/2015 Degenerate Perturbation Theory The treatment of degenerate perturbation theory presented in class is written out here in detail. The appendix presents the underlying

More information

Quadratic Extended Filtering in Nonlinear Systems with Uncertain Observations

Quadratic Extended Filtering in Nonlinear Systems with Uncertain Observations Applied Mathematical Sciences, Vol. 8, 2014, no. 4, 157-172 HIKARI Ltd, www.m-hiari.com http://dx.doi.org/10.12988/ams.2014.311636 Quadratic Extended Filtering in Nonlinear Systems with Uncertain Observations

More information

2. Matrix Algebra and Random Vectors

2. Matrix Algebra and Random Vectors 2. Matrix Algebra and Random Vectors 2.1 Introduction Multivariate data can be conveniently display as array of numbers. In general, a rectangular array of numbers with, for instance, n rows and p columns

More information

Numerical Methods. Elena loli Piccolomini. Civil Engeneering. piccolom. Metodi Numerici M p. 1/??

Numerical Methods. Elena loli Piccolomini. Civil Engeneering.  piccolom. Metodi Numerici M p. 1/?? Metodi Numerici M p. 1/?? Numerical Methods Elena loli Piccolomini Civil Engeneering http://www.dm.unibo.it/ piccolom elena.loli@unibo.it Metodi Numerici M p. 2/?? Least Squares Data Fitting Measurement

More information

16. Local theory of regular singular points and applications

16. Local theory of regular singular points and applications 16. Local theory of regular singular points and applications 265 16. Local theory of regular singular points and applications In this section we consider linear systems defined by the germs of meromorphic

More information

Finite-Horizon Optimal State-Feedback Control of Nonlinear Stochastic Systems Based on a Minimum Principle

Finite-Horizon Optimal State-Feedback Control of Nonlinear Stochastic Systems Based on a Minimum Principle Finite-Horizon Optimal State-Feedbac Control of Nonlinear Stochastic Systems Based on a Minimum Principle Marc P Deisenroth, Toshiyui Ohtsua, Florian Weissel, Dietrich Brunn, and Uwe D Hanebec Abstract

More information

Lecture notes: Applied linear algebra Part 1. Version 2

Lecture notes: Applied linear algebra Part 1. Version 2 Lecture notes: Applied linear algebra Part 1. Version 2 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 1 Notation, basic notions and facts 1.1 Subspaces, range and

More information

Maximum variance formulation

Maximum variance formulation 12.1. Principal Component Analysis 561 Figure 12.2 Principal component analysis seeks a space of lower dimensionality, known as the principal subspace and denoted by the magenta line, such that the orthogonal

More information

ELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications

ELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications ELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications Professor M. Chiang Electrical Engineering Department, Princeton University March

More information

1 Last time: least-squares problems

1 Last time: least-squares problems MATH Linear algebra (Fall 07) Lecture Last time: least-squares problems Definition. If A is an m n matrix and b R m, then a least-squares solution to the linear system Ax = b is a vector x R n such that

More information

Lecture Notes 1 Probability and Random Variables. Conditional Probability and Independence. Functions of a Random Variable

Lecture Notes 1 Probability and Random Variables. Conditional Probability and Independence. Functions of a Random Variable Lecture Notes 1 Probability and Random Variables Probability Spaces Conditional Probability and Independence Random Variables Functions of a Random Variable Generation of a Random Variable Jointly Distributed

More information

Backward Error Estimation

Backward Error Estimation Backward Error Estimation S. Chandrasekaran E. Gomez Y. Karant K. E. Schubert Abstract Estimation of unknowns in the presence of noise and uncertainty is an active area of study, because no method handles

More information

Probabilistic & Unsupervised Learning

Probabilistic & Unsupervised Learning Probabilistic & Unsupervised Learning Week 2: Latent Variable Models Maneesh Sahani maneesh@gatsby.ucl.ac.uk Gatsby Computational Neuroscience Unit, and MSc ML/CSML, Dept Computer Science University College

More information

Definition (T -invariant subspace) Example. Example

Definition (T -invariant subspace) Example. Example Eigenvalues, Eigenvectors, Similarity, and Diagonalization We now turn our attention to linear transformations of the form T : V V. To better understand the effect of T on the vector space V, we begin

More information

GOLDEN SEQUENCES OF MATRICES WITH APPLICATIONS TO FIBONACCI ALGEBRA

GOLDEN SEQUENCES OF MATRICES WITH APPLICATIONS TO FIBONACCI ALGEBRA GOLDEN SEQUENCES OF MATRICES WITH APPLICATIONS TO FIBONACCI ALGEBRA JOSEPH ERCOLANO Baruch College, CUNY, New York, New York 10010 1. INTRODUCTION As is well known, the problem of finding a sequence of

More information

Statistical Signal Processing Detection, Estimation, and Time Series Analysis

Statistical Signal Processing Detection, Estimation, and Time Series Analysis Statistical Signal Processing Detection, Estimation, and Time Series Analysis Louis L. Scharf University of Colorado at Boulder with Cedric Demeure collaborating on Chapters 10 and 11 A TT ADDISON-WESLEY

More information

Probability Space. J. McNames Portland State University ECE 538/638 Stochastic Signals Ver

Probability Space. J. McNames Portland State University ECE 538/638 Stochastic Signals Ver Stochastic Signals Overview Definitions Second order statistics Stationarity and ergodicity Random signal variability Power spectral density Linear systems with stationary inputs Random signal memory Correlation

More information

Solution of Linear Equations

Solution of Linear Equations Solution of Linear Equations (Com S 477/577 Notes) Yan-Bin Jia Sep 7, 07 We have discussed general methods for solving arbitrary equations, and looked at the special class of polynomial equations A subclass

More information

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column

More information

CS281A/Stat241A Lecture 17

CS281A/Stat241A Lecture 17 CS281A/Stat241A Lecture 17 p. 1/4 CS281A/Stat241A Lecture 17 Factor Analysis and State Space Models Peter Bartlett CS281A/Stat241A Lecture 17 p. 2/4 Key ideas of this lecture Factor Analysis. Recall: Gaussian

More information

Zeros and zero dynamics

Zeros and zero dynamics CHAPTER 4 Zeros and zero dynamics 41 Zero dynamics for SISO systems Consider a linear system defined by a strictly proper scalar transfer function that does not have any common zero and pole: g(s) =α p(s)

More information

Introduction to Stochastic processes

Introduction to Stochastic processes Università di Pavia Introduction to Stochastic processes Eduardo Rossi Stochastic Process Stochastic Process: A stochastic process is an ordered sequence of random variables defined on a probability space

More information

The Jordan forms of AB and BA

The Jordan forms of AB and BA Electronic Journal of Linear Algebra Volume 18 Volume 18 (29) Article 25 29 The Jordan forms of AB and BA Ross A. Lippert ross.lippert@gmail.com Gilbert Strang Follow this and additional works at: http://repository.uwyo.edu/ela

More information

VAR Model. (k-variate) VAR(p) model (in the Reduced Form): Y t-2. Y t-1 = A + B 1. Y t + B 2. Y t-p. + ε t. + + B p. where:

VAR Model. (k-variate) VAR(p) model (in the Reduced Form): Y t-2. Y t-1 = A + B 1. Y t + B 2. Y t-p. + ε t. + + B p. where: VAR Model (k-variate VAR(p model (in the Reduced Form: where: Y t = A + B 1 Y t-1 + B 2 Y t-2 + + B p Y t-p + ε t Y t = (y 1t, y 2t,, y kt : a (k x 1 vector of time series variables A: a (k x 1 vector

More information

ECE 275A Homework 6 Solutions

ECE 275A Homework 6 Solutions ECE 275A Homework 6 Solutions. The notation used in the solutions for the concentration (hyper) ellipsoid problems is defined in the lecture supplement on concentration ellipsoids. Note that θ T Σ θ =

More information

Convex Optimization M2

Convex Optimization M2 Convex Optimization M2 Lecture 8 A. d Aspremont. Convex Optimization M2. 1/57 Applications A. d Aspremont. Convex Optimization M2. 2/57 Outline Geometrical problems Approximation problems Combinatorial

More information

A NEW NONLINEAR FILTER

A NEW NONLINEAR FILTER COMMUNICATIONS IN INFORMATION AND SYSTEMS c 006 International Press Vol 6, No 3, pp 03-0, 006 004 A NEW NONLINEAR FILTER ROBERT J ELLIOTT AND SIMON HAYKIN Abstract A discrete time filter is constructed

More information

Synopsis of Numerical Linear Algebra

Synopsis of Numerical Linear Algebra Synopsis of Numerical Linear Algebra Eric de Sturler Department of Mathematics, Virginia Tech sturler@vt.edu http://www.math.vt.edu/people/sturler Iterative Methods for Linear Systems: Basics to Research

More information

Structured eigenvalue/eigenvector backward errors of matrix pencils arising in optimal control

Structured eigenvalue/eigenvector backward errors of matrix pencils arising in optimal control Electronic Journal of Linear Algebra Volume 34 Volume 34 08) Article 39 08 Structured eigenvalue/eigenvector backward errors of matrix pencils arising in optimal control Christian Mehl Technische Universitaet

More information

Vectors To begin, let us describe an element of the state space as a point with numerical coordinates, that is x 1. x 2. x =

Vectors To begin, let us describe an element of the state space as a point with numerical coordinates, that is x 1. x 2. x = Linear Algebra Review Vectors To begin, let us describe an element of the state space as a point with numerical coordinates, that is x 1 x x = 2. x n Vectors of up to three dimensions are easy to diagram.

More information