Lecture 2. Linear Systems


1 Lecture 2. Linear Systems
Ivan Papusha
CDS270-2: Mathematical Methods in Control and System Engineering
April 6

2 Logistics
- hw1 due this Wed, Apr 8
- hw due every Wed in class, or in my mailbox on the 3rd floor of Annenberg
- reading: BV Appendix A; pay attention to linear algebra notation, Schur complements (also in hw1)
- hw2+ will be "choose your own adventure": do an assigned problem, or pick and do a problem from the catalog

3 Autonomous systems
Consider the autonomous linear dynamical system
    ẋ(t) = Ax(t),   x(0) = x_0
The solution is given by the matrix exponential, x(t) = e^{At} x(0), where
    e^{At} = I + At + (1/2!) A^2 t^2 + (1/3!) A^3 t^3 + ··· = cosh(At) + sinh(At)
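A quick numerical check (not part of the original slides): the truncated power series above can be compared against scipy.linalg.expm. The matrix A here is an arbitrary 2×2 example, assuming NumPy and SciPy are available.

```python
import numpy as np
from scipy.linalg import expm

# arbitrary example matrix, not from the lecture
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
t = 0.7

def expm_series(A, t, K=30):
    """Truncated power series e^{At} ~ sum_{k=0}^{K} (At)^k / k!."""
    n = A.shape[0]
    term = np.eye(n)       # holds (At)^k / k!
    total = np.eye(n)
    for k in range(1, K + 1):
        term = term @ (A * t) / k
        total = total + term
    return total

print(np.allclose(expm_series(A, t), expm(A * t)))   # True, to numerical tolerance
```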

4 Formal derivation from discrete time
The original continuous equation is approximated by forward Euler for a small timestep δ ≪ 1:
    (x_{k+1} − x_k)/δ ≈ A x_k,   x_k = x(kδ),   k = 0, 1, 2, ...
Classic pattern for discrete time systems:
    x_0 = x(0)
    x_1 = x_0 + Aδ x_0 = (I + Aδ) x_0
    x_2 = (I + Aδ)^2 x_0
    ⋮
    x_k = (I + Aδ)^k x_0
so
    x(t) = lim_{δ→0+} (I + Aδ)^{t/δ} x_0 = e^{At} x_0
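A small sketch (same arbitrary example matrix as above, NumPy/SciPy assumed) showing the forward-Euler product (I + Aδ)^{t/δ} x_0 approaching e^{At} x_0 as δ shrinks.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])      # arbitrary example matrix
x0 = np.array([1.0, 0.0])
t = 1.0

for delta in [1e-1, 1e-2, 1e-3, 1e-4]:
    k = int(round(t / delta))
    xk = np.linalg.matrix_power(np.eye(2) + A * delta, k) @ x0   # (I + A delta)^k x0
    err = np.linalg.norm(xk - expm(A * t) @ x0)
    print(f"delta = {delta:7.0e}   error = {err:.2e}")           # error shrinks roughly like delta
```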

5 State propagation
propagator. Multiplying by e^{At} propagates the autonomous state forward by time t. For v, w ∈ R^n, w = e^{At} v implies v = e^{−At} w.
- the point w is v propagated forward by time t
- equivalently: the point v is w propagated backward by time t
- the current state contains all information
- the matrix exponential is a time propagator (a huge deal in physics, e.g., Hamiltonians in quantum mechanics)
Markov property. the future is independent of the past given the present

6 State propagation: forward
(figure: a point v and its forward propagation w = e^{At} v)

7 State propagation: backward
(figure: a point w and its backward propagation v = e^{−At} w)

8 Example: output prediction from threshold alarms
An autonomous dynamical system evolves according to
    ẋ(t) = Ax(t),   y(t) = Cx(t),   x(0) = x_0,
where
- A ∈ R^{n×n} and C ∈ R^{1×n} are known, n = 6
- x_0 ∈ R^n is not known
- x(t) and y(t) are not directly measurable
- we have threshold information:
    y(t) ≥ 1,   t ∈ [1.12, 1.85] ∪ [4.47, 4.87],
    y(t) ≤ −1,  t ∈ [0.22, 1.00] ∪ [3.38, 3.96].
question: x(10) = ?? (and is it unique?)

9 Example: output prediction from threshold alarms
(figure: autonomous system trace y(t) versus t, showing where the trace crosses the ±1 thresholds)

10 Example: output prediction from threshold alarms
method 1. determine x_0, then propagate forward by 10 s:
    1 = C e^{A t_i} x_0,    t_i = 1.12, 1.85, 4.47, ...
    −1 = C e^{A t_i} x_0,   t_i = 0.22, 1.00, 3.38, ...
(6 eqns, 6 unknowns)  ⟹  x(10) = e^{10A} x_0
method 2. determine x(10) directly:
    1 = C e^{−A(10−t_i)} x(10),    t_i = 1.12, 1.85, 4.47, ...
    −1 = C e^{−A(10−t_i)} x(10),   t_i = 0.22, 1.00, 3.38, ...
(6 eqns, 6 unknowns)
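A sketch of how either method reduces to a small linear solve. The matrices A and C below are random placeholders (the lecture's actual system is not reproduced in this transcript), so only the structure of the equations is illustrated, not the plotted trace.

```python
import numpy as np
from scipy.linalg import expm

n = 6
rng = np.random.default_rng(0)
A = 0.3 * rng.standard_normal((n, n))    # placeholder: in the example, A and C are known
C = rng.standard_normal((1, n))

t_hi = [1.12, 1.85, 4.47]   # times where y(t) equals +1
t_lo = [0.22, 1.00, 3.38]   # times where y(t) equals -1
b = np.hstack([np.ones(len(t_hi)), -np.ones(len(t_lo))])

# method 1: stack rows C e^{A t_i}, solve for x0, then propagate forward by 10 s
M1 = np.vstack([C @ expm(A * ti) for ti in t_hi + t_lo])          # 6 x 6
x0_hat, *_ = np.linalg.lstsq(M1, b, rcond=None)
x10 = expm(10 * A) @ x0_hat

# method 2: stack rows C e^{-A(10 - t_i)} and solve for x(10) directly
M2 = np.vstack([C @ expm(-A * (10 - ti)) for ti in t_hi + t_lo])  # 6 x 6
x10_direct, *_ = np.linalg.lstsq(M2, b, rcond=None)

print(np.linalg.norm(x10 - x10_direct) / np.linalg.norm(x10))     # tiny: the two methods agree
```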

11 State propagation with inputs
For the continuous time input-output system
    ẋ(t) = Ax(t) + Bu(t)
    y(t) = Cx(t) + Du(t)
    x(0) = x_0,
if u(·) ≢ 0, then the state propagator picks up a convolution term, interpreted elementwise:
    x(t_0 + t) = e^{At} x(t_0) + ∫_{t_0}^{t_0+t} e^{A(t_0+t−τ)} B u(τ) dτ

12 Impulse response
suppose the input is an impulse, u(t) = δ(t):
    y(t) = C e^{At} x_0 + ∫_0^t C e^{A(t−τ)} B u(τ) dτ + D u(t)
         = C e^{At} x_0 + ∫_0^t C e^{A(t−τ)} B δ(τ) dτ + D δ(t)
         = C e^{At} x_0 + C e^{At} B + D δ(t)
- C e^{At} x_0 is due to the initial condition
- h(t) = C e^{At} B + D δ(t) is the impulse response
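A minimal check of the formula h(t) = C e^{At} B against scipy.signal.impulse, with placeholder state-space matrices and D = 0.

```python
import numpy as np
from scipy.linalg import expm
from scipy.signal import lti, impulse

# placeholder single-input single-output system, not from the lecture
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

ts = np.linspace(0.0, 5.0, 200)
h_formula = np.array([(C @ expm(A * t) @ B).item() for t in ts])   # h(t) = C e^{At} B (D = 0)

_, h_scipy = impulse(lti(A, B, C, D), T=ts)                        # scipy's impulse response
print(np.max(np.abs(h_formula - h_scipy)))                         # near machine precision
```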

13 Linearity
    u_1(t) → [linear system] → y_1(t)
    u_2(t) → [linear system] → y_2(t)
    α u_1(t) + β u_2(t) → [linear system] → α y_1(t) + β y_2(t)

14 Time invariance
    u(t) → [TI system] → y(t)
    u(t − τ_d) → [TI system] → y(t − τ_d)

15 Linear + Time Invariant (LTI) systems
fact. (ÅM 5.3) if a system is LTI, its output is a convolution
    y(t) = ∫ h(t − τ) u(τ) dτ
- u(t): input
- y(t): output
- h(t): impulse response; it fully characterizes the system for any input
    y(t) = (h ∗ u)(t) = ∫ h(t − τ) u(τ) dτ = ∫ h(τ) u(t − τ) dτ

16 Graphical interpretation
(figure: an impulse δ(t) produces h(t); a shifted impulse δ(t − τ) produces h(t − τ); a scaled, shifted impulse δ(t − τ)(u(τ)dτ) produces h(t − τ)(u(τ)dτ); superposing over τ, the input u(t) = ∫ δ(t − τ) u(τ) dτ produces the output y(t) = ∫ h(t − τ) u(τ) dτ)

17 Singular value decomposition
fact. every m × n matrix A can be factored as
    A = U Σ V^T = Σ_{i=1}^r σ_i u_i v_i^T
where r = rank(A), U ∈ R^{m×r}, U^T U = I, V ∈ R^{n×r}, V^T V = I, Σ = diag(σ_1, ..., σ_r), and σ_1 ≥ ··· ≥ σ_r > 0.
- u_i ∈ R^m are the left singular vectors
- v_i ∈ R^n are the right singular vectors
- σ_i > 0 are the singular values
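A thin SVD in NumPy on a synthetic rank-3 matrix, illustrating the factorization A = U Σ V^T and the orthonormality of U.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 8))   # synthetic 5 x 8 matrix of rank 3

U, s, Vt = np.linalg.svd(A, full_matrices=False)                # "economy" SVD
r = int(np.sum(s > 1e-10 * s[0]))                               # numerical rank

# keep only the r nonzero singular values: A = sum_i sigma_i u_i v_i^T
Ur, sr, Vr = U[:, :r], s[:r], Vt[:r, :]
print(r)                                                        # 3
print(np.allclose(A, Ur @ np.diag(sr) @ Vr))                    # True
print(np.allclose(Ur.T @ Ur, np.eye(r)))                        # U^T U = I
```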

18 Singular value decomposition
The thin decomposition
    A = [u_1 ··· u_r] diag(σ_1, ..., σ_r) [v_1^T; ...; v_r^T]
can be extended to a thick decomposition by completing the bases for R^m and R^n and making U, V square.
- {u_1, ..., u_r} is an orthonormal basis for range(A)
- {v_{r+1}, ..., v_n} is an orthonormal basis for null(A)

19 Controllability: testing for membership in span
Given a desired y ∈ R^m, we have
    y ∈ range(A)  ⟺  rank [A  y] = rank(A)  ⟺  y ∈ span{u_1, ..., u_r}.
The component of y in range(A) is Σ_{i=1}^r u_i u_i^T y, so
    y ∈ range(A)  ⟺  y − Σ_{i=1}^r u_i u_i^T y = 0  ⟺  (I − U U^T) y = 0
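A small sketch of the membership test (I − U U^T) y = 0 on a synthetic rank-deficient matrix; the helper name in_range is just for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 4))   # synthetic 6 x 4 matrix of rank 2

U, s, Vt = np.linalg.svd(A, full_matrices=False)
r = int(np.sum(s > 1e-10 * s[0]))
Ur = U[:, :r]                                                   # orthonormal basis of range(A)

def in_range(y, Ur, tol=1e-8):
    """y is in range(A) iff its component outside span{u_1, ..., u_r} vanishes."""
    residual = y - Ur @ (Ur.T @ y)                              # (I - U U^T) y
    return np.linalg.norm(residual) <= tol * max(1.0, np.linalg.norm(y))

y_in = A @ rng.standard_normal(4)      # constructed to lie in range(A)
y_out = rng.standard_normal(6)         # generically not in range(A)
print(in_range(y_in, Ur), in_range(y_out, Ur))                  # True False
```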

20 Linear mappings and ellipsoids
An m × n matrix A maps the unit ball in R^n to an ellipsoid in R^m:
    B = {x ∈ R^n : ||x|| ≤ 1},   AB = {Ax : x ∈ B}.
- an ellipsoid induced by a matrix P ∈ S^m_{++} is the set E_P = {x ∈ R^m : x^T P^{−1} x ≤ 1}
- λ_i, v_i: eigenvalues and eigenvectors of P
(figure: ellipse in R^m with semiaxes √λ_1 v_1 and √λ_2 v_2)

21 SVD Mapping
The SVD is a decomposition of the linear mapping x ↦ Ax such that the right singular vectors v_i map to the scaled left singular vectors σ_i u_i:
    A = U Σ V^T = Σ_{i=1}^r σ_i u_i v_i^T,   r = rank(A)
the equivalent eigenvalue-style relation is A v_i = σ_i u_i
(figure: unit circle in R^n with axes v_1, v_2 mapped under x ↦ Ax to an ellipse in R^m with semiaxes σ_1 u_1, σ_2 u_2)

22 Pseudoinverse
For A = U Σ V^T ∈ R^{m×n} full rank, the pseudoinverse is
    A† = V Σ^{−1} U^T
       = lim_{μ→0} (A^T A + μI)^{−1} A^T = (A^T A)^{−1} A^T,   m ≥ n
       = lim_{μ→0} A^T (A A^T + μI)^{−1} = A^T (A A^T)^{−1},   m ≤ n
- least norm (control): if A is fat and full rank, x* = argmin_{x ∈ R^n, Ax = y} ||x||_2^2 = A† y
- least squares (estimation): if A is skinny and full rank, x* = argmin_{x ∈ R^n} ||Ax − y||_2^2 = A† y
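A numerical check on synthetic data that the pseudoinverse reproduces the least-squares and least-norm formulas above.

```python
import numpy as np

rng = np.random.default_rng(3)

# least squares: skinny, full-rank A (m >= n), minimize ||Ax - y||_2^2
A_tall = rng.standard_normal((8, 3))
y = rng.standard_normal(8)
x_ls = np.linalg.pinv(A_tall) @ y
print(np.allclose(x_ls, np.linalg.solve(A_tall.T @ A_tall, A_tall.T @ y)))   # (A^T A)^{-1} A^T y

# least norm: fat, full-rank A (m <= n), minimize ||x||_2^2 subject to Ax = y
A_fat = rng.standard_normal((3, 8))
y2 = rng.standard_normal(3)
x_ln = np.linalg.pinv(A_fat) @ y2
print(np.allclose(x_ln, A_fat.T @ np.linalg.solve(A_fat @ A_fat.T, y2)))     # A^T (A A^T)^{-1} y
print(np.allclose(A_fat @ x_ln, y2))                                         # constraint Ax = y holds
```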

23 Discrete convolution
Discrete linear system with impulse response coefficients h_0, ..., h_{n−1}:
    y_k = Σ_{i=0}^k h_{k−i} u_i,   k = 0, ..., n−1
or, written as a matrix equation,
    [ y_0     ]   [ h_0                      ] [ u_0     ]
    [ y_1     ]   [ h_1      h_0             ] [ u_1     ]
    [   ⋮     ] = [   ⋮             ⋱        ] [   ⋮     ]
    [ y_{n−1} ]   [ h_{n−1}  h_{n−2}  ·· h_0 ] [ u_{n−1} ]
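A sketch that builds the lower triangular Toeplitz matrix with scipy.linalg.toeplitz and checks it against NumPy's convolution; h and u are random placeholder sequences.

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(4)
n = 8
h = rng.standard_normal(n)    # impulse response coefficients h_0, ..., h_{n-1}
u = rng.standard_normal(n)    # input sequence

# lower triangular Toeplitz matrix: first column h, first row (h_0, 0, ..., 0)
A = toeplitz(h, np.r_[h[0], np.zeros(n - 1)])
y = A @ u

# matches the first n samples of the full discrete convolution
print(np.allclose(y, np.convolve(h, u)[:n]))   # True
```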

24 Discrete convolution
Matrix structure gives rise to familiar system properties:
    [ y_0     ]   [ h_0                    0 ] [ u_0     ]
    [ y_1     ]   [ h_1      h_0           0 ] [ u_1     ]
    [   ⋮     ] = [   ⋮             ⋱        ] [   ⋮     ]
    [ y_{n−1} ]   [ h_{n−1}  h_{n−2}  ·· h_0 ] [ u_{n−1} ]
         y                      A                   u
- A is a matrix: the system is linear
- A is lower triangular: the system is causal
- A is Toeplitz: the system is time-invariant
open problem. how do you spell Otto Toeplitz's name?

25 Typical filter
Causal, finite (2N+1)-point truncation of the ideal low pass filter:
    h_i = (Ω_c/π) sinc( Ω_c (i − N)/π ),   i = 0, ..., 2N
(figure: filter coefficients h_i versus i, for Ω_c = π/4)

26 SVD of filter matrix
(figures: the factors U, Σ, V of the filter matrix A, and a plot of its singular values σ_i versus i)

27 Closely related: circulant matrices
A circulant matrix is a "folded over" Toeplitz matrix:
        [ c_0      c_1      c_2   ···  c_{n−1} ]
        [ c_{n−1}  c_0      c_1   ···  c_{n−2} ]
    C = [   ⋮                 ⋱                ]
        [ c_1      c_2      c_3   ···  c_0     ]
its eigenvalues are the DFT of the sequence {c_0, ..., c_{n−1}}:
    λ^{(k)} = Σ_{i=0}^{n−1} c_i e^{−2πjki/n},   v^{(k)} = (1/√n) [1, e^{−2πjk/n}, ..., e^{−2πjk(n−1)/n}]^T
for more, see R. M. Gray, "Toeplitz and Circulant Matrices"
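A quick verification, for a random sequence c, that the DFT basis vectors diagonalize the circulant matrix built from it.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6
c = rng.standard_normal(n)

# circulant matrix with first row (c_0, ..., c_{n-1}): C[i, j] = c_{(j - i) mod n}
C = np.array([[c[(j - i) % n] for j in range(n)] for i in range(n)])

lam = np.fft.fft(c)                        # lambda_k = sum_i c_i e^{-2 pi j k i / n}
k = np.arange(n)
V = np.exp(-2j * np.pi * np.outer(k, k) / n) / np.sqrt(n)   # column k is the eigenvector v^{(k)}

print(np.allclose(C @ V, V * lam))         # C v^{(k)} = lambda_k v^{(k)} for every k
```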

28 Equalizer design
causal deconvolution problem. pick filter coefficients g such that g ∗ h is as close as possible to the identity:
    [ u_0     ]   [ g_0              ] [ h_0              ] [ u_0     ]
    [ u_1     ]   [ g_1     g_0      ] [ h_1     h_0      ] [ u_1     ]
    [   ⋮     ] ≈ [   ⋮        ⋱     ] [   ⋮        ⋱     ] [   ⋮     ]
    [ u_{n−1} ]   [ g_{n−1} ···  g_0 ] [ h_{n−1} ···  h_0 ] [ u_{n−1} ]
                           G                    A
- G = A† minimizes ||GA − I||_F^2 over all G ∈ R^{n×n}
- often want G causal, i.e., lower triangular Toeplitz (ltt); for more, see the Wiener-Hopf filter:
    minimize ||GA − I||_F^2 subject to G is ltt.
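A minimal sketch of the unconstrained equalizer G = A† for a synthetic channel h with h_0 = 1 (so the channel matrix is invertible); the causal ltt constraint is not imposed here.

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(6)
n = 20
h = np.r_[1.0, 0.3 * rng.standard_normal(n - 1)]     # synthetic channel, h_0 = 1

A = toeplitz(h, np.r_[h[0], np.zeros(n - 1)])        # lower triangular Toeplitz channel matrix

G = np.linalg.pinv(A)                                # unconstrained minimizer of ||G A - I||_F
print(np.linalg.norm(G @ A - np.eye(n)))             # ~0, since this A is invertible

# here the inverse of a lower triangular Toeplitz matrix is itself lower triangular Toeplitz,
# so this G happens to be causal; in general the ltt constraint must be imposed explicitly
```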

29 Markov parameters
Consider the discrete-time LTI system
    x_{k+1} = A x_k + B u_k,   y_k = C x_k,   x(0) = x_0.
For each k = 0, 1, 2, ..., we have
    [ y_0 ]   [ 0                                        ] [ u_0 ]   [ C    ]
    [ y_1 ]   [ CB         0                             ] [ u_1 ]   [ CA   ]
    [ y_2 ] = [ CAB        CB         0                  ] [ u_2 ] + [ CA^2 ] x_0
    [  ⋮  ]   [  ⋮                        ⋱              ] [  ⋮  ]   [  ⋮   ]
    [ y_k ]   [ CA^{k−1}B  CA^{k−2}B  CA^{k−3}B  ···  0  ] [ u_k ]   [ CA^k ]
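A sketch that assembles the block matrices above and checks them against direct simulation; the system matrices are random placeholders, not from the lecture.

```python
import numpy as np

def markov_matrices(A, B, C, k):
    """Return (H, G) such that [y_0; ...; y_k] = H [u_0; ...; u_k] + G x_0."""
    n, m = B.shape
    p = C.shape[0]
    Apow = [np.linalg.matrix_power(A, j) for j in range(k + 1)]
    G = np.vstack([C @ Apow[j] for j in range(k + 1)])            # rows C, CA, ..., CA^k
    H = np.zeros(((k + 1) * p, (k + 1) * m))
    for i in range(k + 1):
        for j in range(i):                                        # block (i, j) is C A^{i-1-j} B
            H[i * p:(i + 1) * p, j * m:(j + 1) * m] = C @ Apow[i - 1 - j] @ B
    return H, G

# consistency check against direct simulation (placeholder matrices)
rng = np.random.default_rng(7)
A = 0.5 * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 1))
C = rng.standard_normal((1, 3))
x0 = rng.standard_normal(3)
k = 6
u = rng.standard_normal((k + 1, 1))

x, ys = x0.copy(), []
for j in range(k + 1):
    ys.append(C @ x)
    x = A @ x + B @ u[j]

H, G = markov_matrices(A, B, C, k)
print(np.allclose(np.concatenate(ys), H @ u.reshape(-1) + G @ x0))   # True
```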

30 Minimum energy control
Given matrices A, B, C, and initial condition x_0, find a minimum energy input sequence u_0, ..., u_k that achieves desired outputs y^des_{k_1}, ..., y^des_{k_l} at times k_1, ..., k_l:
    [ y^des_{k_1} ]   [ CA^{k_1−1}B  CA^{k_1−2}B  ···  0 ] [ u_0 ]   [ CA^{k_1} ]
    [ y^des_{k_2} ]   [ CA^{k_2−1}B  CA^{k_2−2}B  ···  0 ] [ u_1 ]   [ CA^{k_2} ]
    [      ⋮      ] = [               ⋮                  ] [  ⋮  ] + [    ⋮     ] x_0
    [ y^des_{k_l} ]   [ CA^{k_l−1}B  CA^{k_l−2}B  ···  0 ] [ u_k ]   [ CA^{k_l} ]
          y                           H                       u           G
- if y − G x_0 ∈ range(H), a minimizing input sequence is u* = H† (y − G x_0)
- left singular vectors of H with large σ_i are modes with lots of actuator authority
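A sketch of the minimum-energy input via the pseudoinverse, again with placeholder system matrices and arbitrary target outputs.

```python
import numpy as np

rng = np.random.default_rng(9)
n, m = 3, 1
A = 0.5 * rng.standard_normal((n, n))   # placeholder matrices, not from the lecture
B = rng.standard_normal((n, m))
C = rng.standard_normal((1, n))
x0 = rng.standard_normal(n)

ks_des = [4, 7, 10]                     # target times k_1, ..., k_l
y_des = np.array([1.0, -0.5, 2.0])      # arbitrary desired outputs
K = max(ks_des)                         # inputs u_0, ..., u_{K-1} act before the last target time

# row for target time k: [C A^{k-1} B, ..., C B, 0, ..., 0]; the x_0 term enters through C A^k
Apow = [np.linalg.matrix_power(A, j) for j in range(K + 1)]
H = np.vstack([
    np.hstack([C @ Apow[k - 1 - j] @ B if j < k else np.zeros((1, m)) for j in range(K)])
    for k in ks_des
])
G = np.vstack([C @ Apow[k] for k in ks_des])

u_star = np.linalg.pinv(H) @ (y_des - G @ x0)     # minimum-energy input sequence
print(np.allclose(H @ u_star + G @ x0, y_des))    # targets are met since y - G x0 is in range(H)
```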

31 Least squares estimation
Given matrices A, C (B = 0) and measured (noisy) outputs y^meas_{k_1}, ..., y^meas_{k_l} at times k_1, ..., k_l, find the best least squares estimate of the initial condition x_0:
    [ y^meas_{k_1} ]   [ CA^{k_1} ]
    [ y^meas_{k_2} ] = [ CA^{k_2} ] x_0 + noise
    [      ⋮       ]   [    ⋮     ]
    [ y^meas_{k_l} ]   [ CA^{k_l} ]
           y                G
- the least squares estimate is x̂_0 = G† y
- right singular vectors of G with large σ_i are modes with lots of sensitivity
- null(G) is the unobservable space
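A least-squares initial-state estimate on a synthetic system; the matrices, measurement times, and noise level are all placeholders.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 4
A = rng.standard_normal((n, n))
A = A / (1.1 * np.max(np.abs(np.linalg.eigvals(A))))   # placeholder system, scaled to be stable
C = rng.standard_normal((1, n))
x0_true = rng.standard_normal(n)
ks = [1, 2, 4, 6, 8]                                   # measurement times k_1, ..., k_l

G = np.vstack([C @ np.linalg.matrix_power(A, k) for k in ks])
y_meas = G @ x0_true + 0.01 * rng.standard_normal(len(ks))      # noisy outputs

x0_hat = np.linalg.pinv(G) @ y_meas                    # least squares estimate G^dagger y
print(np.linalg.norm(x0_hat - x0_true))                # small when G is well conditioned
print(np.linalg.svd(G, compute_uv=False))              # singular values: sensitivity of each mode
```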
