Lecture 2. Linear Systems


Lecture 2. Linear Systems
Ivan Papusha, CDS270-2: Mathematical Methods in Control and System Engineering
April 6, 2015

Logistics

- hw1 due this Wed, Apr 8
- hw due every Wed in class, or my mailbox on the 3rd floor of Annenberg
- reading: BV Appendix A; pay attention to linear algebra notation and Schur complements (also in hw1)
- hw2+ will be "choose your own adventure": do an assigned problem or pick and do a problem from the catalog

Autonomous systems

Consider the autonomous linear dynamical system

    ẋ(t) = Ax(t),   x(0) = x_0

The solution is given by the matrix exponential, x(t) = e^{At} x(0), where

    e^{At} = I + At + (1/2!)A²t² + (1/3!)A³t³ + ⋯ = sinh(At) + cosh(At)
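As a numerical sanity check, the truncated power series can be compared against SciPy's matrix exponential, and the hyperbolic identity verified with `coshm`/`sinhm`. The matrix A below is an illustrative choice, not from the lecture.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative 2x2 system matrix (assumption for this sketch)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
t = 0.5

# Truncated power series: e^{At} ~ I + At + (At)^2/2! + ... + (At)^K/K!
K = 30
term = np.eye(2)
series = np.eye(2)
for k in range(1, K + 1):
    term = term @ (A * t) / k    # now holds (At)^k / k!
    series = series + term

exact = expm(A * t)              # SciPy's matrix exponential

# Propagate an initial state forward by time t
x0 = np.array([1.0, 0.0])
x_t = exact @ x0
```

For a 2x2 matrix, 30 series terms are far more than enough for machine precision; in practice `expm` uses scaling-and-squaring rather than the raw series.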

Formal derivation from discrete time

The original continuous equation is approximated by forward Euler for a small timestep δ ≪ 1:

    (x_{k+1} − x_k)/δ ≈ A x_k,   x_k ≈ x(kδ),   k = 0, 1, 2, ...

Classic pattern for discrete-time systems:

    x_0 = x(0)
    x_1 = x_0 + Aδ x_0 = (I + Aδ) x_0
    x_2 = (I + Aδ)² x_0
    ⋮
    x_k = (I + Aδ)^k x_0

In the limit,

    x(t) = lim_{δ→0⁺} (I + Aδ)^{t/δ} x_0 = e^{At} x_0
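The limit can be checked numerically: shrinking δ drives the forward-Euler state toward the matrix-exponential solution. The matrix and horizon below are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative stable matrix and initial state (assumptions for this sketch)
A = np.array([[0.0, 1.0],
              [-1.0, -0.5]])
x0 = np.array([1.0, 0.0])
t = 2.0

def euler_state(delta):
    # x_k = (I + A*delta)^k x0 with k = t/delta Euler steps
    k = int(round(t / delta))
    return np.linalg.matrix_power(np.eye(2) + A * delta, k) @ x0

exact = expm(A * t) @ x0
coarse = euler_state(0.1)     # large timestep: visible error
fine = euler_state(1e-4)      # small timestep: close to e^{At} x0
```

Halving δ roughly halves the error, consistent with forward Euler being first-order accurate.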

State propagation

Propagator. Multiplying by e^{At} propagates the autonomous state forward by time t. For v, w ∈ R^n,

    w = e^{At} v   implies   v = e^{−At} w.

- the point w is v propagated forward by time t
- equivalently: the point v is w propagated backward by time t
- the current state contains all information
- the matrix exponential is a time propagator (a huge deal in physics, e.g., Hamiltonians in quantum mechanics)
- Markov property: the future is independent of the past, given the present

State propagation: forward. [Figure: trajectory from v to w = e^{At} v.]

State propagation: backward. [Figure: trajectory from w back to v = e^{−At} w.]

Example: output prediction from threshold alarms

An autonomous dynamical system evolves according to

    ẋ(t) = Ax(t),   y(t) = Cx(t),   x(0) = x_0,

where
- A ∈ R^{n×n} and C ∈ R^{1×n} are known, n = 6
- x_0 ∈ R^n is not known
- x(t) and y(t) are not directly measurable
- we have threshold information:

      y(t) ≥ 1,   t ∈ [1.12, 1.85] ∪ [4.47, 4.87],
      y(t) ≤ −1,  t ∈ [0.22, 1.00] ∪ [3.38, 3.96].

Question: x(10) = ?? (and is it unique?)

Example: output prediction from threshold alarms. [Figure: autonomous system trace, y(t) versus t for 0 ≤ t ≤ 10, crossing the ±1 thresholds on the stated intervals.]

Example: output prediction from threshold alarms

Method 1: determine x_0, then propagate forward by 10 s. At the interval endpoints the output meets the thresholds:

    1 = C e^{A t_i} x_0,    t_i = 1.12, 1.85, 4.47, 4.87
    −1 = C e^{A t_i} x_0,   t_i = 0.22, 1.00, 3.38, 3.96

8 equations, 6 unknowns ⟹ x(10) = e^{10A} x_0.

Method 2: determine x(10) directly:

    1 = C e^{−A(10 − t_i)} x(10),    t_i = 1.12, 1.85, 4.47, 4.87
    −1 = C e^{−A(10 − t_i)} x(10),   t_i = 0.22, 1.00, 3.38, 3.96

8 equations, 6 unknowns.

State propagation with inputs

For the continuous-time input-output system

    ẋ(t) = Ax(t) + Bu(t)
    y(t) = Cx(t) + Du(t)
    x(0) = x_0,

if u(·) ≢ 0, then the state propagator picks up a convolution term, interpreted elementwise:

    x(t_0 + t) = e^{At} x(t_0) + ∫_{t_0}^{t_0+t} e^{A(t_0+t−τ)} B u(τ) dτ
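For a constant input u(t) = u_0 and invertible A, the convolution integral has the closed form x(t) = e^{At} x_0 + A^{-1}(e^{At} − I) B u_0, which a quadrature check can confirm. The system below is an illustrative assumption.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative invertible A and single input held constant (assumptions)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
x0 = np.array([1.0, 0.0])
u0 = 2.0
t = 1.5

# Closed form: x(t) = e^{At} x0 + A^{-1} (e^{At} - I) B u0
eAt = expm(A * t)
x_closed = eAt @ x0 + np.linalg.solve(A, (eAt - np.eye(2)) @ B).ravel() * u0

# Trapezoid-rule quadrature of the convolution integral (t0 = 0)
taus = np.linspace(0.0, t, 2001)
vals = np.stack([(expm(A * (t - s)) @ B).ravel() * u0 for s in taus])
dt = taus[1] - taus[0]
integral = dt * (0.5 * vals[0] + vals[1:-1].sum(axis=0) + 0.5 * vals[-1])
x_quad = eAt @ x0 + integral
```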

Impulse response

Suppose the input is an impulse, u(t) = δ(t). Then

    y(t) = C e^{At} x_0 + ∫_0^t C e^{A(t−τ)} B u(τ) dτ + D u(t)
         = C e^{At} x_0 + ∫_0^t C e^{A(t−τ)} B δ(τ) dτ + D δ(t)
         = C e^{At} x_0 + C e^{At} B + D δ(t)

- C e^{At} x_0 is due to the initial condition
- h(t) = C e^{At} B + D δ(t) is the impulse response
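A short sketch of h(t) = C e^{At} B for an illustrative SISO system with D = 0 (the matrices are assumptions, not from the lecture); for this particular A, B, C the transfer function is 1/((s+1)(s+2)), so h(t) = e^{−t} − e^{−2t} serves as an analytic check.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative SISO system with D = 0, so h(t) = C e^{At} B
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # eigenvalues -1, -2
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

ts = np.linspace(0.0, 5.0, 200)
h = np.array([(C @ expm(A * tt) @ B).item() for tt in ts])
```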

Linearity

    u_1(t) → [linear system] → y_1(t)
    u_2(t) → [linear system] → y_2(t)
    αu_1(t) + βu_2(t) → [linear system] → αy_1(t) + βy_2(t)

Time invariance

    u(t) → [TI system] → y(t)
    u(t − τ_d) → [TI system] → y(t − τ_d)

Linear + Time-Invariant (LTI) systems

Fact (ÅM 5.3). If a system is LTI, its output is a convolution:

    y(t) = ∫_{−∞}^{∞} h(t−τ) u(τ) dτ

- u(t): input
- y(t): output
- h(t): impulse response; it fully characterizes the system for any input

    y(t) = (h ∗ u)(t) = ∫_{−∞}^{∞} h(t−τ) u(τ) dτ = ∫_{−∞}^{∞} h(τ) u(t−τ) dτ

Graphical interpretation

[Figure: an impulse δ(t) produces the response h(t); a shifted impulse δ(t−τ) produces h(t−τ); a scaled, shifted impulse δ(t−τ)(u(τ)dτ) produces h(t−τ)(u(τ)dτ). Superposing gives u(t) = ∫ δ(t−τ) u(τ) dτ on the input side and y(t) = ∫ h(t−τ) u(τ) dτ on the output side.]

Singular value decomposition

Fact. Every m×n matrix A can be factored as

    A = U Σ V^T = Σ_{i=1}^r σ_i u_i v_i^T

where r = rank(A), U ∈ R^{m×r} with U^T U = I, V ∈ R^{n×r} with V^T V = I, Σ = diag(σ_1, ..., σ_r), and σ_1 ≥ ⋯ ≥ σ_r > 0.

- u_i ∈ R^m are the left singular vectors
- v_i ∈ R^n are the right singular vectors
- σ_i > 0 are the singular values
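The rank-one expansion can be verified directly with NumPy's thin SVD; the random low-rank matrix below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
# 5x4 matrix of rank 3 (generically), built as a product of thin factors
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))

U, s, Vt = np.linalg.svd(A, full_matrices=False)   # thin SVD
r = int(np.sum(s > 1e-10 * s[0]))                  # numerical rank

# A equals the sum of r rank-one terms sigma_i u_i v_i^T
A_rebuilt = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in range(r))
```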

Singular value decomposition

The thin decomposition

    A = [u_1 ⋯ u_r] diag(σ_1, ..., σ_r) [v_1 ⋯ v_r]^T

can be extended to a thick decomposition by completing the bases for R^m and R^n, making U and V square.

- {u_1, ..., u_r} is an orthonormal basis for range(A)
- {v_{r+1}, ..., v_n} is an orthonormal basis for null(A)

Controllability: testing for membership in span

Given a desired y ∈ R^m, we have

    y ∈ range(A) ⟺ rank[A y] = rank(A) ⟺ y ∈ span{u_1, ..., u_r}.

The component of y in range(A) is Σ_{i=1}^r u_i u_i^T y, so

    y ∈ range(A) ⟺ y − Σ_{i=1}^r u_i u_i^T y = 0 ⟺ (I − U U^T) y = 0
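The residual test (I − UU^T)y = 0 is easy to run numerically; the matrix and vectors below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 3))                    # rank 3 (generically)
U, s, Vt = np.linalg.svd(A, full_matrices=False)   # columns of U span range(A)

def in_range(y, tol=1e-8):
    # y is in range(A) iff the residual (I - U U^T) y vanishes
    return np.linalg.norm(y - U @ (U.T @ y)) < tol

y_in = A @ np.array([1.0, -2.0, 0.5])    # in range(A) by construction
y_out = y_in + rng.standard_normal(6)    # a generic perturbation leaves the range
```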

Linear mappings and ellipsoids

An m×n matrix A maps the unit ball in R^n to an ellipsoid in R^m:

    B = {x ∈ R^n : ‖x‖ ≤ 1},   AB = {Ax : x ∈ B}.

The ellipsoid induced by a matrix P ∈ S^m_{++} is the set

    E_P = {x ∈ R^m : x^T P^{−1} x ≤ 1}

where λ_i, v_i are the eigenvalues and eigenvectors of P ∈ R^{m×m}. [Figure: ellipse with semiaxes √λ_1 v_1 and √λ_2 v_2.]

SVD mapping

The SVD is a decomposition of the linear mapping x ↦ Ax such that right singular vectors v_i map to scaled left singular vectors σ_i u_i:

    A = U Σ V^T = Σ_{i=1}^r σ_i u_i v_i^T,   r = rank(A)

The equivalent relations are A v_i = σ_i u_i. [Figure: unit circle in R^n with v_1, v_2 mapped under x ↦ Ax to an ellipse in R^m with semiaxes σ_1 u_1, σ_2 u_2.]

Pseudoinverse

For A = UΣV^T ∈ R^{m×n} full rank, the pseudoinverse is

    A† = V Σ^{−1} U^T
       = lim_{µ→0} (A^T A + µI)^{−1} A^T = (A^T A)^{−1} A^T,   m ≥ n
       = lim_{µ→0} A^T (A A^T + µI)^{−1} = A^T (A A^T)^{−1},   m ≤ n

- least norm (control): if A is fat and full rank,

      x* = argmin_{x ∈ R^n, Ax=y} ‖x‖²₂ = A† y

- least squares (estimation): if A is skinny and full rank,

      x* = argmin_{x ∈ R^n} ‖Ax − y‖²₂ = A† y
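Both closed forms can be checked against `np.linalg.pinv` on illustrative random matrices (assumptions for this sketch):

```python
import numpy as np

rng = np.random.default_rng(2)

# Skinny full-rank A: least squares, A-dagger = (A^T A)^{-1} A^T
A = rng.standard_normal((6, 3))
y = rng.standard_normal(6)
x_ls = np.linalg.pinv(A) @ y
x_ls_formula = np.linalg.solve(A.T @ A, A.T @ y)

# Fat full-rank F: least norm, F-dagger = F^T (F F^T)^{-1}
F = rng.standard_normal((3, 6))
z = rng.standard_normal(3)
x_ln = np.linalg.pinv(F) @ z
x_ln_formula = F.T @ np.linalg.solve(F @ F.T, z)
```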

Discrete convolution

A discrete linear system with impulse response coefficients h_0, ..., h_{n−1}:

    y_k = Σ_{i=0}^k h_{k−i} u_i,   k = 0, ..., n−1

or, written as a matrix equation,

    [ y_0     ]   [ h_0                  ] [ u_0     ]
    [ y_1     ] = [ h_1     h_0          ] [ u_1     ]
    [  ⋮      ]   [  ⋮            ⋱      ] [  ⋮      ]
    [ y_{n−1} ]   [ h_{n−1} h_{n−2} ⋯ h_0 ] [ u_{n−1} ]

Discrete convolution

The matrix structure gives rise to familiar system properties, y = Au:

- A is a matrix: the system is linear
- A is lower triangular: the system is causal
- A is Toeplitz: the system is time-invariant

Open problem: how do you spell Otto Toeplitz's name?
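The lower-triangular Toeplitz picture agrees with ordinary discrete convolution; a quick check with SciPy's `toeplitz` helper (random coefficients are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(3)
n = 8
h = rng.standard_normal(n)    # impulse response coefficients h_0, ..., h_{n-1}
u = rng.standard_normal(n)    # input sequence

# Lower-triangular Toeplitz convolution matrix:
# first column (h_0, ..., h_{n-1}), first row (h_0, 0, ..., 0)
A = toeplitz(h, np.concatenate(([h[0]], np.zeros(n - 1))))

y_mat = A @ u                    # matrix form y = A u
y_conv = np.convolve(h, u)[:n]   # causal convolution, truncated to n samples
```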

Typical filter

Causal, finite (2N+1)-point truncation of the ideal low-pass filter:

    h_i = (Ω_c/π) sinc(Ω_c (i − N)/π),   i = 0, ..., 2N

[Figure: filter coefficients h_i for Ω_c = π/4, N = 40.]
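A sketch of the coefficient formula in NumPy, noting that `np.sinc` is the normalized sinc sin(πx)/(πx), which matches the (Ω_c/π)-scaled form above:

```python
import numpy as np

Omega_c = np.pi / 4
N = 40
i = np.arange(2 * N + 1)

# np.sinc(x) = sin(pi x)/(pi x), so
# h_i = (Omega_c/pi) sinc(Omega_c (i - N)/pi) = sin(Omega_c (i - N)) / (pi (i - N))
h = (Omega_c / np.pi) * np.sinc(Omega_c * (i - N) / np.pi)
```

The peak value h_N = Ω_c/π = 0.25 and the symmetry about i = N match the figure on the slide.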

SVD of filter matrix

[Figure: grayscale images of the factors U, Σ, V of the 81×81 filter matrix A, and a plot of the singular values σ_i of A, which decay from roughly 1 toward 0.]

Closely related: circulant matrices

A circulant matrix is a "folded over" Toeplitz matrix:

    C = [ c_0     c_1     c_2   ⋯  c_{n−1} ]
        [ c_{n−1} c_0     c_1   ⋯  c_{n−2} ]
        [ c_{n−2} c_{n−1} c_0   ⋯  c_{n−3} ]
        [   ⋮                ⋱       ⋮     ]
        [ c_1     c_2     c_3   ⋯  c_0     ]

Its eigenvalues are the DFT of the sequence {c_0, ..., c_{n−1}}:

    λ^{(k)} = Σ_{i=0}^{n−1} c_i e^{2πjki/n},   v^{(k)} = (1/√n) [1; e^{2πjk/n}; ⋯ ; e^{2πjk(n−1)/n}]

For more, see R. M. Gray, Toeplitz and Circulant Matrices.
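A numerical check that DFT vectors are eigenvectors with DFT eigenvalues. Note an assumption: SciPy's `circulant` uses the first-column convention C[i, j] = c[(i − j) mod n], the transpose of the matrix drawn above, so the eigenvalues come out as `np.fft.fft(c)`; the eigenvalue set is the same either way.

```python
import numpy as np
from scipy.linalg import circulant

rng = np.random.default_rng(4)
n = 6
c = rng.standard_normal(n)
C = circulant(c)               # SciPy convention: C[i, j] = c[(i - j) mod n]

lam = np.fft.fft(c)            # eigenvalues: DFT of the sequence
m = np.arange(n)
checks = []
for k in range(n):
    v = np.exp(2j * np.pi * k * m / n)       # k-th DFT vector (unnormalized)
    checks.append(np.allclose(C @ v, lam[k] * v))
```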

Equalizer design

Causal deconvolution problem: pick filter coefficients g such that g ∗ h is as close as possible to the identity,

    [ u_0     ]   [ g_0                  ] [ h_0                  ] [ u_0     ]
    [ u_1     ] ≈ [ g_1     g_0          ] [ h_1     h_0          ] [ u_1     ]
    [  ⋮      ]   [  ⋮            ⋱      ] [  ⋮            ⋱      ] [  ⋮      ]
    [ u_{n−1} ]   [ g_{n−1}   ⋯  g_1 g_0 ] [ h_{n−1}   ⋯  h_1 h_0 ] [ u_{n−1} ]

- G = A† minimizes ‖GA − I‖²_F over all G ∈ R^{n×n}
- often we want G causal, i.e., lower-triangular Toeplitz (ltt):

      minimize   ‖GA − I‖²_F
      subject to G is ltt.

For more, see the Wiener-Hopf filter.

Markov parameters

Consider the discrete-time LTI system

    x_{k+1} = A x_k + B u_k
    y_k = C x_k
    x(0) = x_0.

For each k = 0, 1, 2, ..., we have

    [ y_0 ]   [ 0                               ] [ u_0 ]   [ C    ]
    [ y_1 ]   [ CB        0                     ] [ u_1 ]   [ CA   ]
    [ y_2 ] = [ CAB       CB        0           ] [ u_2 ] + [ CA²  ] x_0
    [  ⋮  ]   [  ⋮                     ⋱        ] [  ⋮  ]   [  ⋮   ]
    [ y_k ]   [ CA^{k−1}B CA^{k−2}B ⋯ CB 0      ] [ u_k ]   [ CA^k ]

Minimum energy control

Given matrices A, B, C, and initial condition x_0, find a minimum energy input sequence u_0, ..., u_k that achieves desired outputs y^des_{k_1}, ..., y^des_{k_l} at times k_1, ..., k_l:

    [ y^des_{k_1} ]   [ CA^{k_1−1}B CA^{k_1−2}B ⋯ 0 ] [ u_0 ]   [ CA^{k_1} ]
    [      ⋮      ] = [           ⋮                  ] [  ⋮  ] + [    ⋮     ] x_0
    [ y^des_{k_l} ]   [ CA^{k_l−1}B CA^{k_l−2}B ⋯ 0 ] [ u_k ]   [ CA^{k_l} ]
          y                         H                    u            G

- if y − G x_0 ∈ range(H), a minimizing sequence is u* = H†(y − G x_0)
- left singular vectors of H with large σ_i are modes with lots of actuator authority
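A small end-to-end sketch: build H and G from the Markov parameters, take u = H†(y − Gx_0), and simulate to confirm the desired outputs are hit. The system matrices, times, and targets below are all illustrative assumptions.

```python
import numpy as np

# Illustrative controllable system and initial condition (assumptions)
A = np.array([[0.9, 0.2],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
x0 = np.array([1.0, -1.0])

ks = [3, 5]                       # output times k_1, k_2
y_des = np.array([0.5, -0.2])     # desired outputs at those times
kmax = max(ks)

# Row for time k: y_k = sum_j C A^{k-1-j} B u_j + C A^k x0
H = np.zeros((len(ks), kmax))
G = np.zeros((len(ks), 2))
for r, k in enumerate(ks):
    for j in range(k):
        H[r, j] = (C @ np.linalg.matrix_power(A, k - 1 - j) @ B).item()
    G[r, :] = (C @ np.linalg.matrix_power(A, k)).ravel()

u = np.linalg.pinv(H) @ (y_des - G @ x0)   # minimum energy input sequence
```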

Least squares estimation

Given matrices A, C (with B = 0) and measured (noisy) outputs y^meas_{k_1}, ..., y^meas_{k_l} at times k_1, ..., k_l, find the best least squares estimate of the initial condition x_0:

    [ y^meas_{k_1} ]   [ CA^{k_1} ]
    [ y^meas_{k_2} ] = [ CA^{k_2} ] x_0 + noise
    [      ⋮       ]   [    ⋮     ]
    [ y^meas_{k_l} ]   [ CA^{k_l} ]
           y                G

- the least squares estimate is x̂_0 = G† y
- right singular vectors of G with large σ_i are modes with lots of sensitivity
- null(G) is the unobservable space
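The dual of the previous sketch: stack rows CA^k into G, add small measurement noise, and recover x_0 with the pseudoinverse. The observable pair and noise level are illustrative assumptions.

```python
import numpy as np

# Illustrative observable pair (A, C); B = 0, so y_k = C A^k x0 + noise
A = np.array([[0.95, 0.1],
              [0.0, 0.9]])
C = np.array([[1.0, 0.0]])
x0_true = np.array([2.0, -1.0])

ks = [1, 2, 4, 7]
G = np.vstack([C @ np.linalg.matrix_power(A, k) for k in ks])

rng = np.random.default_rng(5)
y_meas = G @ x0_true + 1e-3 * rng.standard_normal(len(ks))

x0_hat = np.linalg.pinv(G) @ y_meas     # least squares estimate of x0
```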