CME 345: MODEL REDUCTION - Projection-Based Model Order Reduction


CME 345: MODEL REDUCTION
Projection-Based Model Order Reduction
Charbel Farhat and David Amsallem, Stanford University
cfarhat@stanford.edu

Outline
1. Solution Approximation
2. Orthogonal and Oblique Projections
3. Galerkin and Petrov-Galerkin Projections
4. Equivalent High-Dimensional Model
5. Error Analysis

Solution Approximation: High-Dimensional Model

Ordinary Differential Equation (ODE):

$\frac{dw}{dt}(t) = f(w(t), t) \quad (1)$

- $w(t) \in \mathbb{R}^N$: state variable
- initial condition: $w(0) = w_0$
- there is no parameter dependence for now

Output equation:

$y(t) = g(w(t), t) \quad (2)$
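For concreteness, here is a minimal sketch of what simulating such an HDM looks like, assuming a toy linear right-hand side and SciPy's general-purpose integrator; the operator $A$, the output map, and all sizes below are illustrative placeholders, not part of the lecture:

```python
import numpy as np
from scipy.integrate import solve_ivp

N = 100
rng = np.random.default_rng(0)
A = -np.eye(N) + 0.01 * rng.standard_normal((N, N))  # hypothetical stable-ish operator

def f(t, w):                   # right-hand side f(w(t), t) of the HDM (1)
    return A @ w               # (solve_ivp passes t first)

def g(w):                      # output equation (2): here, observe the first state
    return w[0]

w0 = rng.standard_normal(N)    # initial condition w(0) = w_0
sol = solve_ivp(f, (0.0, 10.0), w0)
y = g(sol.y)                   # output trajectory y(t) at the solver's time points
```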

Solution Approximation: Low Dimensionality of Trajectories

In many cases, the trajectories of the solutions computed using High-Dimensional Models (HDMs) are contained in a low-dimensional subspace $S$, with $k_S = \dim(S)$.

The state can then be written exactly as a linear combination of vectors spanning $S$:

$w(t) = q_1(t)\, v_1 + \cdots + q_{k_S}(t)\, v_{k_S}$

- $V_S = [v_1, \dots, v_{k_S}] \in \mathbb{R}^{N \times k_S}$ is a basis for $S$
- $(q_1(t), \dots, q_{k_S}(t))$ are the generalized coordinates for $w(t)$ in $S$
- $q(t) = [q_1(t), \dots, q_{k_S}(t)]^T \in \mathbb{R}^{k_S}$ is the reduced-order state vector

In matrix form, the above expansion can be written as $w(t) = V_S\, q(t)$.

Solution Approximation: Low Dimensionality of Trajectories

Often, the exact basis $V_S$ is unknown but can be estimated empirically by a trial basis $V \in \mathbb{R}^{N \times k}$, $k < N$ ($k$ and $k_S$ may be different). The following ansatz is considered:

$w(t) \approx V q(t)$

Substituting the above subspace approximation in Eq. (1) and in Eq. (2) leads to

$\frac{d}{dt}\big(V q(t)\big) = f(V q(t), t) + r(t)$
$y(t) = g(V q(t), t)$

where $r(t)$ is the residual due to the subspace approximation.

Solution Approximation: Low Dimensionality of Trajectories

The residual $r(t) \in \mathbb{R}^N$ accounts for the fact that $V q(t)$ is not, in general, an exact solution of problem (1). Since the basis $V$ is assumed to be time-invariant, $\frac{d}{dt}(V q) = V \frac{dq}{dt}$, and therefore

$V \frac{dq}{dt}(t) = f(V q(t), t) + r(t)$
$y(t) = g(V q(t), t)$

This is a set of $N$ differential equations in the $k$ unknowns $q_1(t), \dots, q_k(t)$: an over-determined system ($k < N$).

Orthogonal and Oblique Projections: Orthogonality

Let $w$ and $y$ be two vectors in $\mathbb{R}^N$:
- $w$ and $y$ are orthogonal to each other with respect to the canonical inner product in $\mathbb{R}^N$ if and only if $w^T y = 0$
- $w$ and $y$ are orthonormal to each other with respect to the canonical inner product in $\mathbb{R}^N$ if and only if $w^T y = 0$, $w^T w = 1$, and $y^T y = 1$

Let $V$ be a matrix in $\mathbb{R}^{N \times k}$:
- $V$ is an orthogonal (orthonormal) matrix if and only if $V^T V = I_k$
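As a quick numerical illustration (a sketch assuming NumPy; the starting matrix is arbitrary), a thin QR factorization produces a matrix that is orthogonal in the slide's sense:

```python
import numpy as np

rng = np.random.default_rng(1)
N, k = 50, 4

# An arbitrary full-column-rank matrix, orthonormalized by thin QR
X = rng.standard_normal((N, k))
V, _ = np.linalg.qr(X)                  # V has orthonormal columns

# V is "orthogonal" in the slide's sense: V^T V = I_k
assert np.allclose(V.T @ V, np.eye(k))

# But V V^T != I_N when k < N: V is rectangular, so it has no two-sided inverse
print(np.allclose(V @ V.T, np.eye(N)))  # False
```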

Orthogonal and Oblique Projections: Projections

Definition: a matrix $\Pi \in \mathbb{R}^{N \times N}$ is a projection matrix if $\Pi^2 = \Pi$.

Some direct consequences:
- $\text{range}(\Pi)$ is invariant under the action of $\Pi$
- $0$ and $1$ are the only possible eigenvalues of $\Pi$
- let $k$ be the rank of $\Pi$: then there exists a basis $X$ such that

$\Pi = X \begin{bmatrix} I_k & 0 \\ 0 & 0_{N-k} \end{bmatrix} X^{-1}$

Orthogonal and Oblique Projections: Projections

Consider

$\Pi = X \begin{bmatrix} I_k & 0 \\ 0 & 0_{N-k} \end{bmatrix} X^{-1}$

- decompose $X$ as $X = [X_1 \; X_2]$, where $X_1 \in \mathbb{R}^{N \times k}$ and $X_2 \in \mathbb{R}^{N \times (N-k)}$
- then, for all $w \in \mathbb{R}^N$:
  - $\Pi w \in \text{range}(X_1) = \text{range}(\Pi) = S_1$
  - $w - \Pi w \in \text{range}(X_2) = \text{Ker}(\Pi) = S_2$
- $\Pi$ defines the projection onto $S_1$ parallel to $S_2$, and $\mathbb{R}^N = S_1 \oplus S_2$
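The following sketch (NumPy; the basis $X$ is an arbitrary invertible matrix) verifies these consequences numerically: idempotence, eigenvalues in $\{0, 1\}$, and the splitting of any $w$ into a part in $S_1$ and a remainder in $S_2$:

```python
import numpy as np

rng = np.random.default_rng(2)
N, k = 6, 2

# Build a projector from an invertible X: Pi = X diag(I_k, 0) X^{-1}
X = rng.standard_normal((N, N))
D = np.diag([1.0] * k + [0.0] * (N - k))
Pi = X @ D @ np.linalg.inv(X)

assert np.allclose(Pi @ Pi, Pi)             # idempotence: Pi^2 = Pi
eigs = np.sort(np.linalg.eigvals(Pi).real)
print(np.round(eigs, 10))                   # only 0s and 1s appear

# Any w splits as Pi w in S1 = range(X1) plus (w - Pi w) in S2 = range(X2)
w = rng.standard_normal(N)
X1, X2 = X[:, :k], X[:, k:]
c, *_ = np.linalg.lstsq(X1, Pi @ w, rcond=None)
assert np.allclose(X1 @ c, Pi @ w)          # Pi w is exactly representable in X1
```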

Orthogonal and Oblique Projections: Orthogonal Projections

Consider the case where $S_2 = S_1^\perp$. Let $V \in \mathbb{R}^{N \times k}$ be an orthogonal matrix whose columns span $S_1$, and let $w \in \mathbb{R}^N$. The orthogonal projection of $w$ onto the subspace $S_1$ is $V V^T w$:
- the equivalent projection matrix is $\Pi_{V,V} = V V^T$
- special case #1: if $w$ belongs to $S_1$, then $\Pi_{V,V}\, w = V V^T w = w$
- special case #2: if $w$ is orthogonal to $S_1$, then $\Pi_{V,V}\, w = V V^T w = 0$

Orthogonal and Oblique Projections: Orthogonal Projections

[Figure: orthogonal projection $\Pi_{V,V}\, w = V V^T w$ of a vector $w$ onto a subspace]

Orthogonal and Oblique Projections: Orthogonal Projections

Example: helix in 3D ($N = 3$). Let $w(t) \in \mathbb{R}^3$ define a curve parameterized by $t \in [0, 6\pi]$ as follows:

$w(t) = \begin{bmatrix} w_1(t) \\ w_2(t) \\ w_3(t) \end{bmatrix} = \begin{bmatrix} \cos(t) \\ \sin(t) \\ t \end{bmatrix}$

[Figure: the helix $w(t)$ in $(w_1, w_2, w_3)$ space]

Orthogonal and Oblique Projections: Orthogonal Projections

Orthogonal projection onto $\text{range}(V) = \text{span}(e_1, e_2)$:

$\Pi_{V,V}\, w(t) = \begin{bmatrix} \cos(t) \\ \sin(t) \\ 0 \end{bmatrix}$

[Figure: $w(t)$ and its projection $\Pi_{V,V}\, w(t)$]

Orthogonal and Oblique Projections: Orthogonal Projections

Orthogonal projection onto $\text{range}(V) = \text{span}(e_2, e_3)$:

$\Pi_{V,V}\, w(t) = \begin{bmatrix} 0 \\ \sin(t) \\ t \end{bmatrix}$

[Figure: $w(t)$ and its projection $\Pi_{V,V}\, w(t)$]

Orthogonal and Oblique Projections: Orthogonal Projections

Orthogonal projection onto $\text{range}(V) = \text{span}(e_1, e_3)$:

$\Pi_{V,V}\, w(t) = \begin{bmatrix} \cos(t) \\ 0 \\ t \end{bmatrix}$

[Figure: $w(t)$ and its projection $\Pi_{V,V}\, w(t)$]
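The three coordinate-plane projections above can be reproduced with a few lines of NumPy (a sketch; the parameter grid is arbitrary):

```python
import numpy as np

t = np.linspace(0.0, 6 * np.pi, 400)
w = np.vstack([np.cos(t), np.sin(t), t])   # helix samples, shape (3, len(t))

e = np.eye(3)
for i, j in [(0, 1), (1, 2), (0, 2)]:      # span(e1,e2), span(e2,e3), span(e1,e3)
    V = e[:, [i, j]]                       # orthonormal basis of the plane
    Pi = V @ V.T                           # orthogonal projector Pi_{V,V}
    w_proj = Pi @ w
    # the component of w orthogonal to range(V) is annihilated
    assert np.allclose(V.T @ (w - w_proj), 0.0)
    print(f"span(e{i+1},e{j+1}): first point ->", np.round(w_proj[:, 0], 3))
```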

Orthogonal and Oblique Projections: Oblique Projections

This is the general case, where $S_2$ may be distinct from $S_1^\perp$. Let $V \in \mathbb{R}^{N \times k}$ and $W \in \mathbb{R}^{N \times k}$ be two full-column-rank matrices whose columns span $S_1$ and $S_2$, respectively. The projection of $w \in \mathbb{R}^N$ onto the subspace $S_1$ parallel to $S_2^\perp$ is $V (W^T V)^{-1} W^T w$:
- the equivalent projection matrix is $\Pi_{V,W} = V (W^T V)^{-1} W^T$
- special case #1: if $w$ belongs to $S_1$, then $w = V q$ with $q \in \mathbb{R}^k$, and $\Pi_{V,W}\, w = V (W^T V)^{-1} W^T V q = V q$
- special case #2: if $w$ is orthogonal to $S_2$, i.e. $w$ belongs to $S_2^\perp$, then $\Pi_{V,W}\, w = V (W^T V)^{-1} W^T w = 0$

Orthogonal and Oblique Projections: Oblique Projections

[Figure: oblique projection $\Pi_{V,W}\, w = V (W^T V)^{-1} W^T w$ of a vector $w$]

Orthogonal and Oblique Projections: Oblique Projections

Example: helix in 3D.
- bases: $V = [e_1, e_2]$, $W = [e_1 + e_3,\; e_2 + e_3]$
- projection matrix:

$\Pi_{V,W} = V (W^T V)^{-1} W^T = \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{bmatrix}$

- projected helix equation:

$\Pi_{V,W}\, w(t) = \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} \cos(t) \\ \sin(t) \\ t \end{bmatrix} = \begin{bmatrix} \cos(t) + t \\ \sin(t) + t \\ 0 \end{bmatrix}$
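A short NumPy check of this example (a sketch): the projector is idempotent but unsymmetric, and it maps the helix to the stated curve:

```python
import numpy as np

e1, e2, e3 = np.eye(3)
V = np.column_stack([e1, e2])             # trial basis
W = np.column_stack([e1 + e3, e2 + e3])   # test basis

Pi = V @ np.linalg.inv(W.T @ V) @ W.T     # oblique projector Pi_{V,W}
assert np.allclose(Pi @ Pi, Pi)           # idempotent
assert not np.allclose(Pi, Pi.T)          # unsymmetric, hence an oblique projection

t = np.linspace(0.0, 6 * np.pi, 5)
w = np.vstack([np.cos(t), np.sin(t), t])
print(np.round(Pi @ w, 3))                # rows: cos(t)+t, sin(t)+t, 0
```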

Orthogonal and Oblique Projections: Oblique Projections

[Figure: the helix $w(t)$ and its oblique projection $\Pi_{V,W}\, w(t)$]

Galerkin and Petrov-Galerkin Projections: Model Order Reduction

Start with an HDM:

$\frac{dw}{dt}(t) = f(w(t), t), \quad y(t) = g(w(t), t), \quad w(0) = w_0$

- $w \in \mathbb{R}^N$: vector of state variables
- $y \in \mathbb{R}^q$: vector of output variables (typically $q \ll N$)
- $f(\cdot, \cdot) \in \mathbb{R}^N$: completes the specification of the ODE of interest

Galerkin and Petrov-Galerkin Projections: Model Order Reduction

The goal is to construct a Reduced-Order Model (ROM)

$\frac{d}{dt} q(t) = f_r(q(t), t), \quad y(t) = g_r(q(t), t)$

where
- $q \in \mathbb{R}^k$: vector of reduced-order state variables, $k \ll N$
- $y \in \mathbb{R}^q$: vector of output variables
- $f_r(\cdot, \cdot) \in \mathbb{R}^k$: completes the definition of the reduced-order equations

Galerkin and Petrov-Galerkin Projections: Model Reduction Requirements

The Model Order Reduction (MOR) method should:
- be computationally tractable
- be applicable to a large class of dynamical systems
- minimize a certain error criterion between the solution computed using the HDM and that computed using the ROM
- preserve as many properties of the underlying HDM as possible

Galerkin and Petrov-Galerkin Projections: Petrov-Galerkin Projection

Recall the residual introduced by approximating $w(t)$ as $V q(t)$:

$V \frac{dq}{dt}(t) = f(V q(t), t) + r(t)$

Constrain this residual to be orthogonal to a subspace $\mathcal{W}$ defined by a test basis $W \in \mathbb{R}^{N \times k}$; that is, define $q(t)$ such that $W^T r(t) = 0$. Left-multiplying the above equations by $W^T$ then leads to the Petrov-Galerkin projection-based equations

$W^T V \frac{d}{dt} q(t) = W^T f(V q(t), t)$

Galerkin and Petrov-Galerkin Projections: Petrov-Galerkin Projection

Assume that $W^T V$ is non-singular. In this case, the reduced-order equations are

$\frac{d}{dt} q(t) = (W^T V)^{-1} W^T f(V q(t), t)$
$y(t) = g(V q(t), t)$

After the above reduced-order equations have been solved, the subspace approximation of the high-dimensional state vector can be reconstructed, if needed, as $\widetilde{w}(t) = V q(t)$.
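A minimal sketch of a Petrov-Galerkin ROM in code, assuming a hypothetical nonlinear right-hand side and random placeholder bases (in practice $V$ and $W$ would be constructed by a MOR method such as POD, not drawn at random):

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(3)
N, k = 200, 10
A = -np.eye(N) + 0.1 * rng.standard_normal((N, N)) / np.sqrt(N)

def f(w, t):                               # hypothetical HDM right-hand side
    return A @ w - 0.01 * w**3

V, _ = np.linalg.qr(rng.standard_normal((N, k)))   # trial basis (placeholder)
W, _ = np.linalg.qr(rng.standard_normal((N, k)))   # test basis (placeholder)
WtV_inv = np.linalg.inv(W.T @ V)           # assumes W^T V is non-singular

def f_r(t, q):                             # reduced RHS: (W^T V)^{-1} W^T f(Vq, t)
    return WtV_inv @ (W.T @ f(V @ q, t))

w0 = rng.standard_normal(N)
q0 = WtV_inv @ (W.T @ w0)                  # reduced initial condition
sol = solve_ivp(f_r, (0.0, 1.0), q0)
w_tilde = V @ sol.y                        # reconstructed high-dimensional state
```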

Galerkin and Petrov-Galerkin Projections: Galerkin Projection

If $W = V$, the projection method is called a Galerkin projection. If in addition $V$ is orthogonal, the reduced-order equations become

$\frac{d}{dt} q(t) = V^T f(V q(t), t)$
$y(t) = g(V q(t), t)$

Galerkin and Petrov-Galerkin Projections: Linear Time-Invariant Systems

Special case: Linear Time-Invariant (LTI) systems

$f(w(t), t) = A w(t) + B u(t)$
$g(w(t), t) = C w(t) + D u(t)$

- $u \in \mathbb{R}^p$: vector of input variables

Corresponding ROM:

$\frac{d}{dt} q(t) = (W^T V)^{-1} W^T \big(A V q(t) + B u(t)\big)$
$y(t) = C V q(t) + D u(t)$

Reduced-order LTI operators:

$A_r = (W^T V)^{-1} W^T A V \in \mathbb{R}^{k \times k}$, with $k \ll N$
$B_r = (W^T V)^{-1} W^T B \in \mathbb{R}^{k \times p}$
$C_r = C V \in \mathbb{R}^{q \times k}$
$D_r = D \in \mathbb{R}^{q \times p}$
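These four formulas translate directly into code; a sketch with placeholder operators and bases (all names and sizes are illustrative):

```python
import numpy as np

def project_lti(A, B, C, D, V, W):
    """Petrov-Galerkin reduction of an LTI system (assumes W^T V invertible)."""
    M = np.linalg.inv(W.T @ V)
    A_r = M @ (W.T @ A @ V)
    B_r = M @ (W.T @ B)
    C_r = C @ V
    D_r = D
    return A_r, B_r, C_r, D_r

# toy usage with random placeholder operators and bases
rng = np.random.default_rng(4)
N, k, p, q = 100, 8, 2, 3
A = -np.eye(N) + rng.standard_normal((N, N)) / N
B = rng.standard_normal((N, p))
C = rng.standard_normal((q, N))
D = np.zeros((q, p))
V, _ = np.linalg.qr(rng.standard_normal((N, k)))
W, _ = np.linalg.qr(rng.standard_normal((N, k)))
A_r, B_r, C_r, D_r = project_lti(A, B, C, D, V, W)
print(A_r.shape, B_r.shape, C_r.shape, D_r.shape)  # (8, 8) (8, 2) (3, 8) (3, 2)
```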

Galerkin and Petrov-Galerkin Projections: Initial Condition

High-dimensional initial condition: $w(0) = w_0 \in \mathbb{R}^N$.

Reduced-order initial condition: $q(0) = (W^T V)^{-1} W^T w_0 \in \mathbb{R}^k$
- in the high-dimensional state space, this gives $V q(0) = V (W^T V)^{-1} W^T w_0 \in \mathbb{R}^N$
- this is an oblique projection of $w_0$ onto $\text{range}(V)$ parallel to $\text{range}(W)^\perp$

Error in the subspace approximation of the initial condition:

$w(0) - V q(0) = \big(I_N - V (W^T V)^{-1} W^T\big)\, w_0$

Alternative: use an affine approximation $w(t) = w(0) + V q(t)$ (see Homework 1).

Equivalent High-Dimensional Model

Question: which HDM would produce the same solution as that given by the ROM?
- recall the reduced-order equations

$\frac{d}{dt} q(t) = (W^T V)^{-1} W^T f(V q(t), t), \quad y(t) = g(V q(t), t)$

- the corresponding reconstructed high-dimensional state solution is $\widetilde{w}(t) = V q(t)$
- pre-multiplying the above reduced-order system by $V$ leads to

$\frac{d}{dt} \widetilde{w}(t) = V (W^T V)^{-1} W^T f(\widetilde{w}(t), t), \quad \widetilde{y}(t) = g(\widetilde{w}(t), t)$

- the associated initial condition is $\widetilde{w}(0) = V q(0) = V (W^T V)^{-1} W^T w(0)$

Equivalent High-Dimensional Model

Recall the projector $\Pi_{V,W} = V (W^T V)^{-1} W^T$.

Definition: the equivalent HDM is

$\frac{d}{dt} \widetilde{w}(t) = \Pi_{V,W}\, f(\widetilde{w}(t), t), \quad \widetilde{y}(t) = g(\widetilde{w}(t), t)$

with the initial condition $\widetilde{w}(0) = \Pi_{V,W}\, w(0)$. The equivalent dynamical function is $\widetilde{f}(\cdot, \cdot) = \Pi_{V,W}\, f(\cdot, \cdot)$.
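A small numerical check of this equivalence (a sketch with a toy linear $f$ and random placeholder bases): integrating the ROM and the equivalent HDM yields the same reconstructed trajectory, up to integration tolerance:

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(5)
N, k = 30, 3
A = -np.eye(N) + 0.1 * rng.standard_normal((N, N))
V, _ = np.linalg.qr(rng.standard_normal((N, k)))
W, _ = np.linalg.qr(rng.standard_normal((N, k)))
M = np.linalg.inv(W.T @ V)
Pi = V @ M @ W.T                                  # Pi_{V,W}
w0 = rng.standard_normal(N)
t_eval = np.linspace(0.0, 1.0, 11)

# ROM: dq/dt = (W^T V)^{-1} W^T A V q,  q(0) = (W^T V)^{-1} W^T w0
rom = solve_ivp(lambda t, q: M @ (W.T @ (A @ (V @ q))), (0.0, 1.0),
                M @ (W.T @ w0), t_eval=t_eval, rtol=1e-10, atol=1e-12)
# Equivalent HDM: dw/dt = Pi A w,  w(0) = Pi w0
hdm = solve_ivp(lambda t, w: Pi @ (A @ w), (0.0, 1.0),
                Pi @ w0, t_eval=t_eval, rtol=1e-10, atol=1e-12)

print(np.max(np.abs(V @ rom.y - hdm.y)))          # ~ 0 up to solver tolerance
```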

Equivalent High-Dimensional Model: Equivalence Between Two Reduced-Order Models

Consider the Petrov-Galerkin-based ROM

$\frac{d}{dt} q(t) = (W^T V)^{-1} W^T f(V q(t), t), \quad y(t) = g(V q(t), t), \quad q(0) = (W^T V)^{-1} W^T w(0)$

Lemma: choosing two different pairs of bases $V$ and $W$ that span the same subspaces $\mathcal{V}$ and $\mathcal{W}$, respectively, results in the same reconstructed solution $\widetilde{w}(t)$.

In other words, subspaces are more important than bases...

Equivalent High-Dimensional Model: Equivalence Between Two Reduced-Order Models

Consequences:
- given an HDM, a corresponding ROM is uniquely defined by its associated Petrov-Galerkin projector $\Pi_{V,W}$
- this projector is itself uniquely defined by the two subspaces $\mathcal{W} = \text{range}(W)$ and $\mathcal{V} = \text{range}(V)$; hence ROM $\leftrightarrow (\mathcal{W}, \mathcal{V})$
- $\mathcal{W}$ and $\mathcal{V}$ belong to the Grassmann manifold $G(k, N)$, which is the set of all subspaces of dimension $k$ in $\mathbb{R}^N$

Error Analysis: Definition

Question: can we characterize the error of the solution computed using the ROM relative to the solution obtained using the HDM?

$E_{ROM}(t) = w(t) - \widetilde{w}(t) = w(t) - V q(t)$

Assume here a Galerkin projection and an associated orthogonal basis ($V^T V = I_k$), with projector $\Pi_{V,V} = V V^T$. The error vector can be decomposed into two orthogonal components:

$E_{ROM}(t) = \big(w(t) - \Pi_{V,V}\, w(t)\big) + \big(\Pi_{V,V}\, w(t) - V q(t)\big) = (I_N - \Pi_{V,V})\, w(t) + V \big(V^T w(t) - q(t)\big) = E_{V^\perp}(t) + E_V(t)$

Error Analysis: Orthogonal Components of the Error Vector

Error component orthogonal to $\mathcal{V}$:

$E_{V^\perp}(t) = (I_N - \Pi_{V,V})\, w(t)$

Interpretation: the exact trajectory does not strictly belong to $\mathcal{V} = \text{range}(V)$.

Error component parallel to $\mathcal{V}$:

$E_V(t) = V \big(V^T w(t) - q(t)\big)$

Interpretation: an equivalent but different dynamical system is solved.

Note that $E_{V^\perp}(t)$ can be computed without executing the ROM and can therefore provide an a priori error estimate.
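Since $E_{V^\perp}$ depends only on the HDM trajectory and the basis, it can be evaluated from snapshots alone; a sketch (random placeholder snapshots, with an SVD-based basis as an example choice):

```python
import numpy as np

rng = np.random.default_rng(6)
N, k, n_t = 50, 5, 100
snapshots = rng.standard_normal((N, n_t))     # placeholder for HDM states w(t_i)

# e.g., an SVD/POD-type basis computed from the snapshots themselves
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
V = U[:, :k]

E_perp = snapshots - V @ (V.T @ snapshots)    # (I - Pi_{V,V}) w(t_i) for all t_i
a_priori = np.linalg.norm(E_perp, axis=0)     # per-snapshot a priori error measure
print(a_priori.max())
```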

Error Analysis: Orthogonal Components of the Error Vector

[Figure: decomposition of the error $w(t) - V q(t)$ into the component $(I_N - \Pi_{V,V})\, w(t)$ orthogonal to $\mathcal{V}$ and the component $V\big(V^T w(t) - q(t)\big)$ parallel to $\mathcal{V}$]

Error Analysis: Orthogonal Components of the Error Vector

Adapted from: M. Rathinam and L. R. Petzold, "A New Look at Proper Orthogonal Decomposition," SIAM Journal on Numerical Analysis, Vol. 41, No. 5, 2003.

Error Analysis: Orthogonal Components of the Error Vector

Again, consider the case of an orthogonal Galerkin projection. One can derive an ODE for the error component lying in $\mathcal{V}$ in terms of the component lying in $\mathcal{V}^\perp$:

$\frac{dE_V}{dt}(t) = \Pi_{V,V} \Big( f(w(t), t) - f\big(w(t) - E_{V^\perp}(t) - E_V(t),\, t\big) \Big)$

with the initial condition $E_V(0) = 0$.

In the case of an autonomous linear system $\frac{dw}{dt}(t) = A w(t)$, the ODE has the simple form

$\frac{dE_V}{dt}(t) = \Pi_{V,V}\, A\, E_V(t) + \Pi_{V,V}\, A\, E_{V^\perp}(t)$

where $E_{V^\perp}(t)$ acts as a forcing term.

Error Analysis: Orthogonal Components of the Error Vector

Theorem: one can then derive an error bound of the form

$\|E_{ROM}\| \le \big(1 + \|F(T, V^T A V)\|\, \|V^T A\|_2\big)\, \|E_{V^\perp}\|$

where $\|\cdot\|$ denotes the $L^2([0,T], \mathbb{R}^N)$ function norm, $\|f\|^2 = \int_0^T \|f(\tau)\|_2^2\, d\tau$, and $F(T, M)$ denotes the linear operator defined by

$F(T, M): L^2([0,T], \mathbb{R}^N) \to L^2([0,T], \mathbb{R}^N), \quad u \mapsto \Big(t \mapsto \int_0^t e^{M(t-\tau)} u(\tau)\, d\tau\Big)$

Error bounds for the nonlinear case can be found in: M. Rathinam and L. R. Petzold, "A New Look at Proper Orthogonal Decomposition," SIAM Journal on Numerical Analysis, Vol. 41, No. 5, 2003.

Error Analysis: Preservation of Model Stability

If $A$ is symmetric and the projection is an orthogonal Galerkin projection, the stability of the HDM is preserved during the reduction process. (Hint: consider the equivalent HDM and analyze the sign of $\frac{d}{dt}(\widetilde{w}^T \widetilde{w})$.)

However, if $A$ is not symmetric, the stability of the HDM is not necessarily preserved. For example, consider a linear HDM characterized by an unsymmetric matrix $A \in \mathbb{R}^{2 \times 2}$ whose eigenvalues are $\{-0.1127, -0.8873\}$ (stable model), and consider next the reduced-order basis

$V = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$

Then $A_r = V^T A V = [1]$, and therefore the ROM obtained using Galerkin projection is not asymptotically stable.
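The slide's matrix entries are garbled in this transcription; the sketch below uses a hypothetical $A$ chosen only to match the stated eigenvalues and the stated $A_r = [1]$:

```python
import numpy as np

# hypothetical 2x2 unsymmetric matrix reproducing the slide's eigenvalues
# {-0.1127, -0.8873} and A_r = [1]; the original entries are not recoverable here
A = np.array([[1.0,  -2.0],
              [1.05, -2.0]])
print(np.sort(np.linalg.eigvals(A).real))  # [-0.8873 -0.1127]: the HDM is stable

V = np.array([[1.0], [0.0]])               # reduced-order basis
A_r = V.T @ A @ V                          # Galerkin projection: A_r = V^T A V
print(A_r)                                 # [[1.]]: positive eigenvalue -> unstable ROM
```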
