EE263: Introduction to Linear Dynamical Systems Review Session 9

SVD continued

Singular Value Decomposition

recall that any nonzero matrix $A \in \mathbf{R}^{m \times n}$ with $\mathrm{Rank}(A) = r$ has an SVD given by

$A = U \Sigma V^T$,

where

$U \in \mathbf{R}^{m \times r}$, $U^T U = I$
$V \in \mathbf{R}^{n \times r}$, $V^T V = I$
$\Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_r)$

here, $\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_r > 0$ are the singular values (this is the economy or thin version of the SVD)
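As a quick numerical illustration (not from the original slides), these properties of the thin SVD can be checked in numpy on a random low-rank matrix:

```python
import numpy as np

# a minimal check of the (economy) SVD properties on a random rank-r matrix
m, n, r = 6, 4, 3
A = np.random.randn(m, r) @ np.random.randn(r, n)   # rank r by construction

U, s, Vt = np.linalg.svd(A, full_matrices=False)    # "thin" SVD
U, s, Vt = U[:, :r], s[:r], Vt[:r, :]               # keep only the r nonzero singular values

print(np.allclose(A, U @ np.diag(s) @ Vt))          # A = U Sigma V^T
print(np.allclose(U.T @ U, np.eye(r)))              # U^T U = I
print(np.allclose(Vt @ Vt.T, np.eye(r)))            # V^T V = I
print(np.all(np.diff(s) <= 0) and s[-1] > 1e-10)    # sigma_1 >= ... >= sigma_r > 0
```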

Some basic facts about singular values

recall $\|A\| = \max_{\|x\|=1} \|Ax\| = \sigma_{\max}(A)$

for any $A$ and $B$ of compatible dimensions, we have

1. $\sigma_{\max}(AB) \leq \sigma_{\max}(A)\,\sigma_{\max}(B)$, since
$\sigma_{\max}(AB) = \max_{\|x\|=1} \|A(Bx)\| \leq \sigma_{\max}(A) \max_{\|x\|=1} \|Bx\| = \sigma_{\max}(A)\,\sigma_{\max}(B)$

2. $\sigma_{\min}(AB) \geq \sigma_{\min}(A)\,\sigma_{\min}(B)$, since
$\sigma_{\min}(AB) = \min_{\|x\|=1} \|A(Bx)\| \geq \sigma_{\min}(A) \min_{\|x\|=1} \|Bx\| = \sigma_{\min}(A)\,\sigma_{\min}(B)$

3. $\sigma_{\min}(A+B) \geq \sigma_{\min}(A) - \sigma_{\max}(B)$, since
$\|Ax\| = \|(A+B)x - Bx\| \leq \|(A+B)x\| + \|Bx\|$

rearranging this inequality yields

$\|(A+B)x\| \geq \|Ax\| - \|Bx\| \geq \sigma_{\min}(A)\|x\| - \sigma_{\max}(B)\|x\|$

if $x \neq 0$, then

$\dfrac{\|(A+B)x\|}{\|x\|} \geq \sigma_{\min}(A) - \sigma_{\max}(B)$

since this is true for all $x \neq 0$, it is true for the minimizing $x$:

$\min_{x \neq 0} \dfrac{\|(A+B)x\|}{\|x\|} = \sigma_{\min}(A+B) \geq \sigma_{\min}(A) - \sigma_{\max}(B)$
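A small numpy sanity check of the three facts above (an illustrative sketch; the random matrices are arbitrary):

```python
import numpy as np

# numerically sanity-check the three singular value inequalities above
def smax(M): return np.linalg.norm(M, 2)                    # largest singular value
def smin(M): return np.linalg.svd(M, compute_uv=False)[-1]  # smallest singular value

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((5, 5))

print(smax(A @ B) <= smax(A) * smax(B) + 1e-12)   # fact 1
print(smin(A @ B) >= smin(A) * smin(B) - 1e-12)   # fact 2 (checked here on square A, B)
print(smin(A + B) >= smin(A) - smax(B) - 1e-12)   # fact 3
```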

A Pythagorean inequality for the matrix norm

suppose that $A \in \mathbf{R}^{m \times n}$ and $B \in \mathbf{R}^{p \times n}$; show that
$\left\| \begin{bmatrix} A \\ B \end{bmatrix} \right\|^2 \leq \|A\|^2 + \|B\|^2$.
under what conditions do we have equality?

suppose that $v \in \mathbf{R}^n$ with $\|v\| = 1$ is the maximum gain direction of $\begin{bmatrix} A \\ B \end{bmatrix}$, hence

$\left\| \begin{bmatrix} A \\ B \end{bmatrix} \right\|^2 = \left\| \begin{bmatrix} A \\ B \end{bmatrix} v \right\|^2 = \left\| \begin{bmatrix} Av \\ Bv \end{bmatrix} \right\|^2 = \|Av\|^2 + \|Bv\|^2$

but $\|Av\|^2 \leq \|A\|^2$ and $\|Bv\|^2 \leq \|B\|^2$, so
$\left\| \begin{bmatrix} A \\ B \end{bmatrix} \right\|^2 \leq \|A\|^2 + \|B\|^2$

since in general the maximum gain directions of $A$ and $B$ are different from the maximum gain direction of $\begin{bmatrix} A \\ B \end{bmatrix}$, equality holds when the maximum gain directions of $A$ and $B$ are the same: if $v_1 \in \mathbf{R}^n$ with $\|v_1\| = 1$ is the maximum gain direction of both $A$ and $B$, then

$\left\| \begin{bmatrix} A \\ B \end{bmatrix} v_1 \right\|^2 = \left\| \begin{bmatrix} Av_1 \\ Bv_1 \end{bmatrix} \right\|^2 = \|Av_1\|^2 + \|Bv_1\|^2 = \|A\|^2 + \|B\|^2$,

and since also $\left\| \begin{bmatrix} A \\ B \end{bmatrix} \right\|^2 \leq \|A\|^2 + \|B\|^2$, we must have
$\left\| \begin{bmatrix} A \\ B \end{bmatrix} \right\|^2 = \|A\|^2 + \|B\|^2$

conversely, if $\left\| \begin{bmatrix} A \\ B \end{bmatrix} \right\|^2 = \|A\|^2 + \|B\|^2$, then from the argument above the maximum gain directions of $A$ and $B$ must each be equal to the maximum gain direction of $\begin{bmatrix} A \\ B \end{bmatrix}$, and therefore the maximum gain directions of $A$ and $B$ must be equal to each other
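The inequality and the equality condition can also be verified numerically; the sketch below (not part of the slides) builds an equality case by forcing $A$ and $B$ to share the same maximum gain direction:

```python
import numpy as np

# check ||[A; B]||^2 <= ||A||^2 + ||B||^2 on random data, plus the equality case
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((5, 3))
nrm = lambda M: np.linalg.norm(M, 2)        # spectral norm
print(nrm(np.vstack([A, B]))**2 <= nrm(A)**2 + nrm(B)**2 + 1e-12)

# equality case: rank-one A and B with the same right singular vector v
v = rng.standard_normal(3); v /= np.linalg.norm(v)
A1 = rng.standard_normal((4, 1)) @ v[None, :]
B1 = rng.standard_normal((5, 1)) @ v[None, :]
print(np.isclose(nrm(np.vstack([A1, B1]))**2, nrm(A1)**2 + nrm(B1)**2))
```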

Pseudo-inverse

the pseudo-inverse of $A = U\Sigma V^T$ is given by $A^\dagger = V \Sigma^{-1} U^T$

$A$ full rank, skinny: $A^\dagger = (A^T A)^{-1} A^T$
$A$ full rank, fat: $A^\dagger = A^T (A A^T)^{-1}$
can be used when $A$ is not full rank

take $y \in \mathbf{R}^m$; then $x = A^\dagger y$ has minimum norm among all $w$ that minimize $\|Aw - y\|$

$w$ minimizes $\|Aw - y\|$ iff $A^T(Aw - y) = 0$, i.e., $A^T A w = A^T y$

so $A^\dagger y$ solves: minimize $\|w\|$ subject to $A^T A w = A^T y$
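A brief numpy sketch of these facts (illustrative; the matrix sizes and data are arbitrary): it forms $A^\dagger = V\Sigma^{-1}U^T$ from the thin SVD, compares it with numpy's built-in pinv, and checks the minimum-norm property on a rank-deficient $A$.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 5))   # 6x5, rank 3 (not full rank)
y = rng.standard_normal(6)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
r = int(np.sum(s > 1e-10))
A_dag = Vt[:r].T @ np.diag(1 / s[:r]) @ U[:, :r].T              # A_dag = V Sigma^{-1} U^T
print(np.allclose(A_dag, np.linalg.pinv(A, rcond=1e-10)))

x = A_dag @ y
w = x + 0.1 * Vt[-1]          # another solution: x plus a nullspace direction of A
print(np.allclose(A.T @ A @ x, A.T @ y))                        # x satisfies A^T A x = A^T y
print(np.linalg.norm(x) <= np.linalg.norm(w))                   # and has minimum norm
```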

Range and nullspace

let $u_1, \ldots, u_r$ be the columns of $U$, and let $v_1, \ldots, v_r$ be the columns of $V$

the set $\{u_1, \ldots, u_r\}$ is an orthonormal basis for $R(A)$:

$R(A) \subseteq R(U)$: $Az = U\Sigma V^T z = U(\Sigma V^T z)$, so if $x \in R(A)$ then $x \in R(U)$

$R(U) \subseteq R(A)$: take $x \in R(U)$, so $x = Uy$ for some $y$; $\Sigma$ is diagonal and full rank, and $V^T$ is fat and full rank, so $\Sigma V^T$ is fat and full rank; hence for any $y$ there exists $z$ such that $y = \Sigma V^T z$, and so $x = U\Sigma V^T z = Az$, i.e., $x \in R(A)$

similarly, the set $\{v_1, \ldots, v_r\}$ is an orthonormal basis for $R(A^T) = N(A)^\perp$
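These claims can be illustrated numerically (a sketch with arbitrary random data, not from the slides):

```python
import numpy as np

# columns of U span R(A); columns of V are orthogonal to N(A)
rng = np.random.default_rng(3)
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 5))    # rank 2
U, s, Vt = np.linalg.svd(A, full_matrices=False)
r = int(np.sum(s > 1e-10))
Ur, Vr = U[:, :r], Vt[:r].T

z = rng.standard_normal(5)
x = A @ z
print(np.allclose(Ur @ (Ur.T @ x), x))        # x = Az lies in R(Ur): projection leaves it unchanged

null_vec = Vt[-1]                             # a direction with A @ null_vec ~ 0 (nullspace of A)
print(np.allclose(A @ null_vec, 0), np.allclose(Vr.T @ null_vec, 0))
```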

Example

given two sets of independent vectors $b_1, \ldots, b_r \in \mathbf{R}^m$ and $d_1, \ldots, d_{n-r} \in \mathbf{R}^n$, construct a matrix $A \in \mathbf{R}^{m \times n}$ with $R(A) = \mathrm{span}\{b_1, \ldots, b_r\}$ and $N(A) = \mathrm{span}\{d_1, \ldots, d_{n-r}\}$

find an orthonormal basis $\tilde b_1, \ldots, \tilde b_r$ for $\mathrm{span}\{b_1, \ldots, b_r\}$ (using, say, Gram-Schmidt); let $\tilde b_1, \ldots, \tilde b_r$ be the columns of $\tilde B$

similarly, find an orthonormal basis $\tilde d_1, \ldots, \tilde d_r$ for $\mathrm{span}\{d_1, \ldots, d_{n-r}\}^\perp$; let $\tilde d_1, \ldots, \tilde d_r$ be the columns of $\tilde D$

now we can simply take $A = \tilde B \tilde D^T$

clearly the choice of $A$ is not unique
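Below is a numpy sketch of this construction (the $b_i$ and $d_j$ are arbitrary random data; QR plays the role of Gram-Schmidt, and the full SVD of the matrix of $d_j$'s supplies an orthonormal basis of the orthogonal complement of $\mathrm{span}\{d_j\}$):

```python
import numpy as np

# construct A with R(A) = span{b_i} and N(A) = span{d_j}, following the recipe above
rng = np.random.default_rng(4)
m, n, r = 5, 4, 2
b = rng.standard_normal((m, r))          # b_1, ..., b_r (independent, generically)
d = rng.standard_normal((n, n - r))      # d_1, ..., d_{n-r}

B_tilde, _ = np.linalg.qr(b)             # orthonormal basis of span{b_i} (Gram-Schmidt/QR)
Ud, sd, _ = np.linalg.svd(d, full_matrices=True)
D_tilde = Ud[:, n - r:]                  # last r left singular vectors: basis of span{d_j}-perp

A = B_tilde @ D_tilde.T
print(np.linalg.matrix_rank(A) == r)
print(np.allclose(A @ d, 0))                         # every d_j is in N(A)
print(np.allclose(B_tilde @ (B_tilde.T @ A), A))     # R(A) is contained in span{b_i}
```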

Energy storage efficiency in a linear dynamical system

consider the discrete-time linear dynamical system

$x(t+1) = Ax(t) + Bu(t), \qquad y(t) = Cx(t),$

where $x(t) \in \mathbf{R}^n$ and $u(t), y(t) \in \mathbf{R}$

initial state $x(0) = 0$, input sequence $u(0), \ldots, u(N-1)$

we are interested in the output over the next $N$ samples, i.e., $y(N), \ldots, y(2N-1)$ (with $u(t) = 0$ for $t \geq N$)

define the input and output energies as

$E_{\mathrm{in}} = \sum_{t=0}^{N-1} u(t)^2, \qquad E_{\mathrm{out}} = \sum_{t=N}^{2N-1} y(t)^2$

how would you choose the (nonzero) input sequence $u(0), \ldots, u(N-1)$ to maximize the ratio of output energy to input energy?

Energy storage efficiency in a linear dynamical system

for the discrete-time system $x(t+1) = Ax(t) + Bu(t)$, $y(t) = Cx(t)$, we can write the state at time $t$ in terms of the past inputs as

$x(t) = \begin{bmatrix} B & AB & \cdots & A^{t-1}B \end{bmatrix} \begin{bmatrix} u(t-1) \\ \vdots \\ u(0) \end{bmatrix},$

and the output $y(t)$ as

$y(t) = \begin{bmatrix} CB & CAB & \cdots & CA^{t-1}B \end{bmatrix} \begin{bmatrix} u(t-1) \\ \vdots \\ u(0) \end{bmatrix}$

since the input is zero from time $t = N$ onwards, the state is propagated forward by $A$ at each sample, i.e., $x(t+1) = Ax(t)$, and so for $t \geq N$ we can write the output in terms of the input from $t = 0$ to $t = N-1$ as

$y(t) = \begin{bmatrix} CA^{t-N}B & CA^{t-N+1}B & \cdots & CA^{t-1}B \end{bmatrix} \begin{bmatrix} u(N-1) \\ \vdots \\ u(0) \end{bmatrix}$

stacking up the $y(t)$'s, $t = N, \ldots, 2N-1$, we obtain the equation

$\begin{bmatrix} y(N) \\ y(N+1) \\ \vdots \\ y(2N-1) \end{bmatrix} = \begin{bmatrix} CB & CAB & \cdots & CA^{N-1}B \\ CAB & CA^2B & \cdots & CA^{N}B \\ \vdots & \vdots & & \vdots \\ CA^{N-1}B & CA^{N}B & \cdots & CA^{2N-2}B \end{bmatrix} \begin{bmatrix} u(N-1) \\ \vdots \\ u(0) \end{bmatrix},$

or $Y = \mathcal{C}U$; since $E_{\mathrm{out}}/E_{\mathrm{in}} = \|Y\|^2/\|U\|^2$, we are interested in maximizing $\|Y\|/\|U\|$ over nonzero $U$

this is an SVD problem: the maximum value of $\|Y\|/\|U\|$ is equal to the largest singular value of $\mathcal{C}$, and the optimal input sequence $u$ is precisely the right singular vector corresponding to $\sigma_{\max}(\mathcal{C})$!
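To make the recipe concrete, here is an illustrative numpy sketch (not from the original slides): the system matrices below are arbitrary random data, the stacked matrix $\mathcal{C}$ is built entry by entry, and the optimal input is read off from its SVD.

```python
import numpy as np

# build the stacked matrix (called Cal below) for a small random system and
# recover the maximizing input sequence from its SVD -- an illustrative sketch
rng = np.random.default_rng(5)
n, N = 3, 10
A = 0.95 * rng.standard_normal((n, n)) / np.sqrt(n)   # arbitrary A, scaled so powers stay moderate
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

# row i = t - N of the stacked matrix is [C A^{t-N} B, ..., C A^{t-1} B], for t = N, ..., 2N-1
Cal = np.zeros((N, N))
for i in range(N):
    for j in range(N):                 # column j multiplies u(N-1-j)
        Cal[i, j] = (C @ np.linalg.matrix_power(A, i + j) @ B).item()

sigma_max = np.linalg.norm(Cal, 2)
_, _, Vt = np.linalg.svd(Cal)
u_stacked = Vt[0]                      # optimal [u(N-1), ..., u(0)] (up to scaling and sign)
u_opt = u_stacked[::-1]                # reorder as u(0), ..., u(N-1)
print("max E_out/E_in =", sigma_max**2)
```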