CME 345: MODEL REDUCTION


Slide 1: CME 345: MODEL REDUCTION - Proper Orthogonal Decomposition (POD)
Charbel Farhat & David Amsallem, Stanford University

Slide 2: Outline

1. Time-continuous Formulation
2. Method of Snapshots for a Single Parametric Configuration
3. The POD Method in the Frequency Domain
4. Connection with SVD
5. Error Analysis
6. Extension to Multiple Parametric Configurations
7. Applications

Slide 3: Time-continuous Formulation - Nonlinear High-Dimensional Model

$$\frac{d w(t)}{dt} = f(w(t), t), \qquad y(t) = g(w(t), t), \qquad w(0) = w_0$$

- $w \in \mathbb{R}^N$: vector of state variables
- $y \in \mathbb{R}^q$: vector of output variables (typically $q \ll N$)
- $f(\cdot, \cdot) \in \mathbb{R}^N$: together with $\frac{d w(t)}{dt}$, defines the high-dimensional system of equations

Slide 4: Time-continuous Formulation - POD Minimization Problem

Consider a fixed initial condition $w_0 \in \mathbb{R}^N$, and denote the associated state trajectory in the time interval $[0, T]$ by $\mathcal{T}_w = \{w(t)\}_{0 \le t \le T}$.

The Proper Orthogonal Decomposition (POD) method seeks an orthogonal projector $\Pi_{V,V}$ of fixed rank $k$ that minimizes the integrated projection error

$$J(\Pi_{V,V}) = \int_0^T \|w(t) - \Pi_{V,V}\, w(t)\|_2^2 \, dt = \int_0^T \|E_V(t)\|_2^2 \, dt = \|E_V\|^2$$
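For $V$ with orthonormal columns, the projector is $\Pi_{V,V} = VV^T$, and the integral above can be approximated from sampled states. A minimal NumPy sketch, assuming the snapshots $w(t_i)$ are stored columnwise in a hypothetical array `W` on a uniform time grid of spacing `dt`:

```python
import numpy as np

def integrated_projection_error(W, V, dt):
    """Approximate J(Pi_V,V) = int_0^T ||w(t) - V V^T w(t)||_2^2 dt
    by a Riemann sum over uniformly spaced snapshots W[:, i] = w(t_i)."""
    E = W - V @ (V.T @ W)       # pointwise projection errors E_V(t_i)
    return dt * np.sum(E ** 2)  # quadrature with uniform weights dt
```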

Slide 5: Time-continuous Formulation - Solution to the POD Minimization Problem

Theorem. Let $K \in \mathbb{R}^{N \times N}$ be the real, symmetric, positive semi-definite matrix defined as

$$K = \int_0^T w(t)\, w(t)^T \, dt$$

Let $\hat{\lambda}_1 \ge \hat{\lambda}_2 \ge \cdots \ge \hat{\lambda}_N \ge 0$ denote the ordered eigenvalues of $K$, and let $\phi_i \in \mathbb{R}^N$, $i = 1, \dots, N$, denote their associated eigenvectors, which are also referred to as the POD modes:

$$K \phi_i = \hat{\lambda}_i \phi_i, \qquad i = 1, \dots, N$$

The subspace $\mathcal{V} = \operatorname{range}(V)$ of dimension $k$ that minimizes $J(\Pi_{V,V})$ is the invariant subspace of $K$ associated with the eigenvalues $\hat{\lambda}_1 \ge \hat{\lambda}_2 \ge \cdots \ge \hat{\lambda}_k$.

Slide 6: Method of Snapshots for a Single Parametric Configuration - Discretization of POD by the Method of Snapshots

Solving the eigenvalue problem $K \phi_i = \hat{\lambda}_i \phi_i$ is in general computationally intractable because (1) the dimension $N$ of the matrix $K$ is usually large, and (2) this matrix is usually dense.

However, the state data is typically available in the form of discrete snapshot vectors $\{w(t_i)\}_{i=1}^{N_{snap}}$. In this case, $\int_0^T w(t)\, w(t)^T \, dt$ can be approximated using a quadrature rule as follows:

$$K = \sum_{i=1}^{N_{snap}} \alpha_i\, w(t_i)\, w(t_i)^T$$

where $\alpha_i$, $i = 1, \dots, N_{snap}$, are the quadrature weights.

Slide 7: Method of Snapshots for a Single Parametric Configuration - Discretization of POD by the Method of Snapshots

Let $S \in \mathbb{R}^{N \times N_{snap}}$ denote the snapshot matrix defined as

$$S = \left[ \sqrt{\alpha_1}\, w(t_1) \;\; \cdots \;\; \sqrt{\alpha_{N_{snap}}}\, w(t_{N_{snap}}) \right]$$

It follows that $K = S S^T$, where $K$ is still a large-scale ($N \times N$) matrix.
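A minimal sketch of this assembly, assuming the raw snapshots sit columnwise in a hypothetical array `W` and the quadrature weights in `alpha`:

```python
import numpy as np

def snapshot_matrix(W, alpha):
    """Scale column i of W by sqrt(alpha_i) so that S @ S.T reproduces
    the quadrature sum defining K on the previous slide."""
    return W * np.sqrt(np.asarray(alpha))[None, :]

# K = S @ S.T would be N x N; it is written here only conceptually and
# is never formed explicitly in practice.
```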

Slide 8: Method of Snapshots for a Single Parametric Configuration - Discretization of POD by the Method of Snapshots

Note that the non-zero eigenvalues of the matrix $K = S S^T \in \mathbb{R}^{N \times N}$ are the same as those of the matrix $R = S^T S \in \mathbb{R}^{N_{snap} \times N_{snap}}$. Since usually $N_{snap} \ll N$, it is more economical to solve instead the symmetric eigenvalue problem

$$R \psi_i = \lambda_i \psi_i, \qquad i = 1, \dots, N_{snap}$$

However, if $S$ is ill-conditioned, $R$ is even worse conditioned:

$$\kappa_2(R) = \kappa_2(S^T S) = \kappa_2(S)^2$$
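The squaring of the condition number is easy to observe numerically; a small illustration with an arbitrary synthetic $S$:

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic snapshot matrix with singular values spanning four decades
S = rng.standard_normal((500, 20)) @ np.diag(np.logspace(0, -4, 20))
R = S.T @ S
print(np.linalg.cond(S) ** 2, np.linalg.cond(R))  # the two values agree
```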

Slide 9: Method of Snapshots for a Single Parametric Configuration - Discretization of POD by the Method of Snapshots

If $\operatorname{rank}(R) = r$, then the first $r$ POD modes $\phi_i$ are given by

$$\phi_i = \frac{1}{\sqrt{\lambda_i}}\, S \psi_i, \qquad i = 1, \dots, r$$

Let $\Phi = [\phi_1 \cdots \phi_r]$ and $\Psi = [\psi_1 \cdots \psi_r]$ with $\Psi^T \Psi = I_r$; then

$$\Phi = S \Psi \Lambda^{-1/2}, \qquad \Lambda = \operatorname{diag}(\lambda_1, \dots, \lambda_r)$$

From $R \psi_i = \lambda_i \psi_i$, $i = 1, \dots, N_{snap}$, it follows that $\Psi^T R \Psi = \Psi^T S^T S \Psi = \Lambda$. Hence

$$\Phi^T K \Phi = \Lambda^{-1/2}\, \Psi^T S^T S S^T S \Psi\, \Lambda^{-1/2} = \Lambda^{-1/2}\, \Lambda\, \Psi^T \Psi\, \Lambda\, \Lambda^{-1/2} = \Lambda$$

Since the columns of $\Phi$ are the eigenvectors of $K$ ordered by decreasing eigenvalues, the optimal orthogonal basis of size $k \le r$ is

$$V = \left[ \Phi_k \;\; \Phi_{r-k} \right] \begin{bmatrix} I_k \\ 0 \end{bmatrix} = \Phi_k$$
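The whole procedure fits in a few lines of NumPy; a sketch, where the tolerance used to estimate the numerical rank is an assumption:

```python
import numpy as np

def pod_method_of_snapshots(S, tol=1e-12):
    """POD modes via the small eigenproblem R psi = lambda psi,
    then Phi = S Psi Lambda^{-1/2}."""
    R = S.T @ S                                # Nsnap x Nsnap
    lam, Psi = np.linalg.eigh(R)               # eigenvalues in ascending order
    lam, Psi = lam[::-1], Psi[:, ::-1]         # reorder to decreasing
    r = int(np.sum(lam > tol * lam[0]))        # numerical rank of R
    Phi = (S @ Psi[:, :r]) / np.sqrt(lam[:r])  # orthonormal POD modes
    return Phi, lam[:r]
```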

Slide 10: The POD Method in the Frequency Domain - Fourier Analysis

Parseval's theorem¹ (the Fourier transform is unitary):

$$\lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} \|V^T w(t)\|_2^2 \, dt = \lim_{T, \Omega \to \infty} \frac{1}{2\pi T} \int_{-\Omega}^{\Omega} \left\| \mathcal{F}\!\left[ V^T w(t) \right] \right\|_2^2 \, d\omega$$

where $\mathcal{F}[w(t)] = W(\omega)$ is the Fourier transform of $w(t)$.

Consequence:

$$V^T \left( \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} w(t)\, w(t)^T \, dt \right) V = V^T \left( \lim_{T, \Omega \to \infty} \frac{1}{2\pi T} \int_{-\Omega}^{\Omega} W(\omega)\, W(\omega)^* \, d\omega \right) V$$

(Proof: see Homework assignment #2.)

¹ Also known as Rayleigh's energy theorem or Plancherel's theorem.

Slide 11: The POD Method in the Frequency Domain - Snapshots in the Frequency Domain

Let $\tilde{K}$ denote the analog of $K$ in the frequency domain:

$$\tilde{K} = \int_{-\Omega}^{\Omega} W(\omega)\, W(\omega)^* \, d\omega \approx \sum_{i=-N^C_{snap}}^{N^C_{snap}} \alpha_i\, W(\omega_i)\, W(\omega_i)^*$$

where $\omega_{-i} = -\omega_i$. The corresponding snapshot matrix is

$$\tilde{S} = \left[ \sqrt{\alpha_0}\, W(\omega_0) \;\; \sqrt{2\alpha_1}\, \mathrm{Re}\!\left(W(\omega_1)\right) \; \cdots \; \sqrt{2\alpha_{N^C_{snap}}}\, \mathrm{Re}\!\left(W(\omega_{N^C_{snap}})\right) \;\; \sqrt{2\alpha_1}\, \mathrm{Im}\!\left(W(\omega_1)\right) \; \cdots \; \sqrt{2\alpha_{N^C_{snap}}}\, \mathrm{Im}\!\left(W(\omega_{N^C_{snap}})\right) \right]$$

It follows that

$$\tilde{K} = \tilde{S} \tilde{S}^T, \qquad \tilde{R} = \tilde{S}^T \tilde{S} = \tilde{\Psi} \tilde{\Lambda} \tilde{\Psi}^T, \qquad \tilde{\Phi} = \tilde{S} \tilde{\Psi} \tilde{\Lambda}^{-1/2}, \qquad \tilde{V} = \tilde{\Phi} \begin{bmatrix} I_k \\ 0 \end{bmatrix} = \tilde{\Phi}_k$$

Slide 12: The POD Method in the Frequency Domain - Case of Linear Time-Invariant Systems

$$f(w(t), t) = A w(t) + B u(t), \qquad g(w(t), t) = C w(t) + D u(t)$$

Single-input case: $p = 1$, $B \in \mathbb{R}^N$. The time trajectory is

$$w(t) = e^{At} w_0 + \int_0^t e^{A(t - \tau)} B u(\tau) \, d\tau$$

Snapshots in the time domain for an impulse input $u(t) = \delta(t)$ and a zero initial condition:

$$w(t_i) = e^{A t_i} B, \qquad t_i \ge 0$$

In the frequency domain, the LTI system can be written as

$$j \omega_l W = A W + B, \qquad \omega_l \ge 0$$

and the associated snapshots are

$$W(\omega_l) = (j \omega_l I - A)^{-1} B$$
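Each frequency-domain snapshot is one linear solve; a sketch, with the quadrature weights omitted for brevity and `omegas` a hypothetical array of sampled frequencies:

```python
import numpy as np

def frequency_snapshots(A, B, omegas):
    """Snapshots W(omega_l) = (j omega_l I - A)^{-1} B for a single-input
    LTI system, split into real columns as on the previous slide."""
    I = np.eye(A.shape[0])
    W = np.column_stack([np.linalg.solve(1j * w * I - A, B) for w in omegas])
    return np.hstack([W.real, W.imag])
```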

Slide 13: The POD Method in the Frequency Domain - Case of Linear Time-Invariant Systems

How should the frequency domain be sampled? The approximate time trajectory for a zero initial condition is

$$\Pi_{\tilde{V}, \tilde{V}}\, w(t) = \tilde{V} \tilde{V}^T \int_0^t e^{A(t - \tau)} B u(\tau) \, d\tau$$

The low-dimensional solution is accurate if the corresponding error

$$w(t) - \Pi_{\tilde{V}, \tilde{V}}\, w(t) = (I - \tilde{V} \tilde{V}^T) \int_0^t e^{A(t - \tau)} B u(\tau) \, d\tau$$

is small, which depends on the frequency content of $u(\tau)$. Therefore, the sampled frequency band should contain the dominant frequencies of $u(\tau)$.

Slide 14: Connection with SVD - Definition

Given $A \in \mathbb{R}^{N \times M}$, there exist two orthogonal matrices $U \in \mathbb{R}^{N \times N}$ ($U^T U = I_N$) and $Z \in \mathbb{R}^{M \times M}$ ($Z^T Z = I_M$) such that

$$A = U \Sigma Z^T$$

where $\Sigma \in \mathbb{R}^{N \times M}$ has diagonal entries $\Sigma_{ii} = \sigma_i$ satisfying $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_{\min(N,M)} \ge 0$ and zero entries everywhere else. The $\{\sigma_i\}_{i=1}^{\min(N,M)}$ are the singular values of $A$, and the columns of

$$U = [u_1 \cdots u_N], \qquad Z = [z_1 \cdots z_M]$$

are the left and right singular vectors of $A$, respectively.
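In NumPy this factorization is one call; a quick illustration:

```python
import numpy as np

A = np.random.default_rng(1).standard_normal((6, 4))
U, sigma, Zt = np.linalg.svd(A)               # U is 6x6, sigma has 4 entries, Zt is 4x4
assert np.allclose(A, U[:, :4] * sigma @ Zt)  # reconstructs A = U Sigma Z^T
```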

Slide 15: Connection with SVD - Properties

The SVD of a matrix provides much useful information about it (rank, range, null space, norm, ...):

- $\{\sigma_i^2\}_{i=1}^{\min(N,M)}$ are the eigenvalues of the symmetric positive semi-definite matrices $A A^T$ and $A^T A$
- $A z_i = \sigma_i u_i$, $i = 1, \dots, \min(N, M)$
- $\operatorname{rank}(A) = r$, where $r$ is the index of the smallest non-zero singular value
- If $U_r = [u_1 \cdots u_r]$ and $Z_r = [z_1 \cdots z_r]$ denote the singular vectors associated with the non-zero singular values, and $U_{N-r} = [u_{r+1} \cdots u_N]$ and $Z_{M-r} = [z_{r+1} \cdots z_M]$, then

  $$A = \sigma_1 u_1 z_1^T + \cdots + \sigma_r u_r z_r^T = \sum_{i=1}^{r} \sigma_i u_i z_i^T$$

  $$\operatorname{range}(A) = \operatorname{range}(U_r), \qquad \operatorname{range}(A^T) = \operatorname{range}(Z_r)$$

  $$\operatorname{null}(A) = \operatorname{range}(Z_{M-r}), \qquad \operatorname{null}(A^T) = \operatorname{range}(U_{N-r})$$

Slide 16: Connection with SVD - Application of the SVD to Optimality Problems

Given $A \in \mathbb{R}^{N \times M}$ with $N \ge M$, which matrix $X \in \mathbb{R}^{N \times M}$ with $\operatorname{rank}(X) = k < r = \operatorname{rank}(A) \le M$ minimizes $\|A - X\|_2$?

Theorem (Schmidt-Eckart-Young-Mirsky).

$$\min_{X,\ \operatorname{rank}(X) = k} \|A - X\|_2 = \sigma_{k+1}(A), \qquad \text{if } \sigma_k(A) > \sigma_{k+1}(A)$$

Hence $X = \sum_{i=1}^{k} \sigma_i u_i z_i^T$, where $A = U \Sigma Z^T$, minimizes $\|A - X\|_2$. This minimizer is also the unique solution of the related problem (Eckart-Young theorem)

$$\min_{X,\ \operatorname{rank}(X) = k} \|A - X\|_F$$
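The theorem is easy to verify numerically; a small check that the rank-$k$ truncation attains the bound $\sigma_{k+1}(A)$:

```python
import numpy as np

A = np.random.default_rng(2).standard_normal((8, 5))
U, s, Zt = np.linalg.svd(A)
k = 2
Xk = U[:, :k] * s[:k] @ Zt[:k, :]       # best rank-k approximation of A
print(np.linalg.norm(A - Xk, 2), s[k])  # both equal sigma_{k+1}(A)
```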

Slide 17: Connection with SVD - Application to Image Compression

Consider a color image in RGB representation made of $M \times N$ pixels, where $M < N$ (i.e., a landscape image):

- this image can be represented by an $M \times N \times 3$ real array $A_1$
- $A_1$ can be converted to a $3N \times M$ matrix $A_3$ as follows:
  $$A_1 \in \mathbb{R}^{M \times N \times 3} \;\longrightarrow\; A_2 \in \mathbb{R}^{M \times (N \cdot 3)} \;\longrightarrow\; A_3 = A_2^T \in \mathbb{R}^{(3N) \times M}$$
- finally, $A_3$ can be approximated using the SVD as follows (see the sketch after this list):
  $$A_3 = \sigma_1 u_1 z_1^T + \cdots + \sigma_r u_r z_r^T = \sum_{i=1}^{r} \sigma_i u_i z_i^T$$
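A sketch of this pipeline, assuming `img` is a hypothetical $M \times N \times 3$ floating-point array already loaded in memory:

```python
import numpy as np

def compress_image(img, k):
    """Rank-k SVD approximation of an RGB image via the reshaping
    A1 (M x N x 3) -> A2 (M x 3N) -> A3 = A2^T (3N x M)."""
    M, N, _ = img.shape
    A3 = img.reshape(M, N * 3).T
    U, s, Zt = np.linalg.svd(A3, full_matrices=False)
    A3k = U[:, :k] * s[:k] @ Zt[:k, :]  # keep the k leading singular triplets
    return A3k.T.reshape(M, N, 3)
```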

Slide 18: Connection with SVD - Application to Image Compression

Example: low-rank SVD reconstructions of an image $A_3$. [Figure: panels (a)-(f) show the reconstructions of rank 1, 2, 3, 4, 5, and 6.]

Slide 19: Connection with SVD - Application to Image Compression

[Figure: panels (g)-(l) show the reconstructions of rank 10, 20, 50, 75, 100, and 285.]

⇒ the SVD can be used for data compression

Slide 20: Connection with SVD - Discretization of POD by the Method of Snapshots and the SVD

The discretization of POD by the method of snapshots requires computing the part of the eigenspectrum of $K = S S^T$ corresponding to its non-zero eigenvalues: $\Phi^T K \Phi = \Phi^T S S^T \Phi = \Lambda$.

Link with the SVD of $S$:

$$S = U \Sigma Z^T = [U_r \;\; U_{N-r}] \begin{bmatrix} \Sigma_r & 0 \\ 0 & 0 \end{bmatrix} Z^T \;\Longrightarrow\; K = U \Sigma^2 U^T \;\text{ and }\; U^T K U = \Sigma^2 \;\Longrightarrow\; \Phi = U_r \;\text{ and }\; \Lambda^{1/2} = \Sigma_r$$

Here $U_r \in \mathbb{R}^{N \times r}$ is to be identified with $X \in \mathbb{R}^{N \times M}$, $N \ge M \ge r$, in the optimality result of Slide 16.

Computing the SVD of $S$ is usually preferred to computing the eigendecomposition of $R = S^T S$ because, as noted earlier, $\kappa_2(R) = \kappa_2(S)^2$.
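A sketch of this SVD-based route, where the argument `k` selects the ROB size:

```python
import numpy as np

def pod_via_svd(S, k):
    """ROB from a thin SVD of S; avoids forming R = S^T S and thereby
    avoids squaring the condition number."""
    U, sigma, _ = np.linalg.svd(S, full_matrices=False)
    return U[:, :k], sigma ** 2  # POD modes and eigenvalues of K
```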

Slide 21: Error Analysis - Reduction Criterion

How to choose the size $k$ of the Reduced-Order Basis (ROB) $V$ obtained using the POD method:

- start from the property of the Frobenius norm of $S$:
  $$\|S\|_F = \sqrt{\sum_{i=1}^{r} \sigma_i^2(S)}$$
- consider the error, measured in the Frobenius norm, induced by the truncation of the POD basis:
  $$\left\| (I_N - V V^T)\, S \right\|_F = \sqrt{\sum_{i=k+1}^{r} \sigma_i^2(S)}$$
- the square of the relative error gives an indication of the magnitude of the missing information:
  $$E_{POD}(k) = \frac{\sum_{i=1}^{k} \sigma_i^2(S)}{\sum_{i=1}^{r} \sigma_i^2(S)}, \qquad 1 - E_{POD}(k) = \frac{\sum_{i=k+1}^{r} \sigma_i^2(S)}{\sum_{i=1}^{r} \sigma_i^2(S)}$$

Slide 22: Error Analysis - Reduction Criterion

How to choose the size $k$ of the ROB $V$ obtained using the POD method (continued):

$$E_{POD}(k) = \frac{\sum_{i=1}^{k} \sigma_i^2(S)}{\sum_{i=1}^{r} \sigma_i^2(S)}$$

- $E_{POD}(k)$ represents the relative energy of the snapshots captured by the first $k$ POD basis vectors
- $k$ is usually chosen as the minimum integer for which $1 - E_{POD}(k) \le \epsilon$ for a given tolerance $0 < \epsilon < 1$ (for instance, $\epsilon = 0.1\%$), as sketched below
- this criterion originates from turbulence applications
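A sketch of this selection rule, given the singular values of $S$:

```python
import numpy as np

def choose_k(sigma, eps=1e-3):
    """Smallest k such that 1 - E_POD(k) <= eps, i.e. the cumulative
    relative energy of the snapshots reaches 1 - eps."""
    energy = np.cumsum(np.asarray(sigma) ** 2) / np.sum(np.asarray(sigma) ** 2)
    return int(np.searchsorted(energy, 1.0 - eps)) + 1  # E_POD indexed k = 1..r
```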

Slide 23: Error Analysis - Reduction Criterion

Recall the model reduction error components (writing $E_V$ for the projection error and $E_{\bar{V}}$ for the in-subspace component):

$$E_{ROM}(t) = E_V(t) + E_{\bar{V}}(t) = (I_N - \Pi_{V,V})\, w(t) + V \left( V^T w(t) - q(t) \right)$$

Denote $E_{ROM}^{snap} = [E_{ROM}(t_1) \cdots E_{ROM}(t_{N_{snap}})]$; then

$$\left\| [E_V(t_1) \cdots E_V(t_{N_{snap}})] \right\|_F = \sqrt{\sum_{i=k+1}^{r} \sigma_i^2(S)}$$

hence

$$1 - E_{POD}(k) = \frac{\left\| [E_V(t_1) \cdots E_V(t_{N_{snap}})] \right\|_F^2}{\sum_{i=1}^{r} \sigma_i^2(S)}$$

and

$$1 - E_{POD}(k) \le \frac{\left\| E_{ROM}^{snap} \right\|_F^2}{\sum_{i=1}^{r} \sigma_i^2(S)}$$

Note that the energy criterion is valid only for the sampled snapshots.

Slide 24: Extension to Multiple Parametric Configurations - The Steady-State Case

Consider the parametrized steady-state high-dimensional system of equations

$$f(w; \mu) = 0, \qquad \mu \in \mathcal{D} \subset \mathbb{R}^d, \qquad \mu = [\mu_1, \dots, \mu_d]^T$$

Consider the goal of constructing a ROB and the associated projection-based ROM for computing the approximate solution

$$w(\mu) \approx V q(\mu), \qquad \mu \in \mathcal{D}$$

Question: how do we build a global ROB $V$ that can capture the solution in the entire parameter domain $\mathcal{D}$?

Slide 25: Extension to Multiple Parametric Configurations - Choice of Snapshots

Lagrange basis:

$$\mathcal{V} = \operatorname{span}\left\{ w\!\left(\mu^{(1)}\right), \dots, w\!\left(\mu^{(s)}\right) \right\}, \qquad N_{snap} = s$$

Hermite basis:

$$\mathcal{V} = \operatorname{span}\left\{ w\!\left(\mu^{(1)}\right), \frac{\partial w}{\partial \mu_1}\!\left(\mu^{(1)}\right), \dots, \frac{\partial w}{\partial \mu_d}\!\left(\mu^{(1)}\right), \dots, w\!\left(\mu^{(s)}\right), \dots, \frac{\partial w}{\partial \mu_d}\!\left(\mu^{(s)}\right) \right\}, \qquad N_{snap} = s\,(d + 1)$$

Taylor basis:

$$\mathcal{V} = \operatorname{span}\left\{ w\!\left(\mu^{(1)}\right), \frac{\partial w}{\partial \mu_1}\!\left(\mu^{(1)}\right), \frac{\partial^2 w}{\partial \mu_1^2}\!\left(\mu^{(1)}\right), \dots, \frac{\partial^q w}{\partial \mu_1^q}\!\left(\mu^{(1)}\right), \dots, \frac{\partial w}{\partial \mu_d}\!\left(\mu^{(1)}\right), \dots, \frac{\partial^q w}{\partial \mu_d^q}\!\left(\mu^{(1)}\right) \right\}$$

$$N_{snap} = 1 + d + \frac{d(d + 1)}{2} + \cdots + \frac{(d + q - 1)!}{(d - 1)!\, q!} = 1 + \sum_{i=1}^{q} \frac{(d + i - 1)!}{(d - 1)!\, i!}$$

Slide 26: Extension to Multiple Parametric Configurations - Design of Numerical Experiments

How does one choose the $s$ parameter samples $\mu^{(1)}, \dots, \mu^{(s)}$ at which to compute the snapshots $\left\{ w\!\left(\mu^{(1)}\right), \dots, w\!\left(\mu^{(s)}\right) \right\}$? The location of the samples in the parameter space determines the accuracy of the resulting global ROM in the entire parameter domain $\mathcal{D} \subset \mathbb{R}^d$.

Possible approaches:

- uniform sampling, for parameter spaces of moderate dimension ($d \le 5$)
- Latin Hypercube sampling, for higher-dimensional parameter spaces
- goal-oriented greedy sampling that exploits an error indicator to focus on the ROM accuracy

Slide 27: Extension to Multiple Parametric Configurations - A Greedy Approach

Ideally, one can build a ROM progressively and update it (increase its dimension) by considering additional samples $\mu^{(i)}$, and corresponding solution snapshots, at the locations of the parameter space where the current ROM is the most inaccurate, that is,

$$\mu^{(i)} = \arg\max_{\mu \in \mathcal{D}} E_{ROM}(\mu) = \arg\max_{\mu \in \mathcal{D}} \left\| w(\mu) - V q(\mu) \right\|$$

Here $q(\mu)$ can be computed efficiently, but the cost of obtaining $w(\mu)$ can be high, which eventually makes this approach intractable.

Idea: rely on an economical a posteriori error estimator:

- option 1: an error bound $E_{ROM}(\mu) \le \Delta(\mu)$
- option 2: an error indicator based on the norm of the (affordable) residual $r(\mu) = f(V q(\mu); \mu)$

For this purpose, $\mathcal{D}$ is usually replaced by a large discrete set of candidate parameters $\left\{ \mu^{(1)}, \dots, \mu^{(c)} \right\} \subset \mathcal{D}$.

Slide 28: Extension to Multiple Parametric Configurations - A Greedy Approach

Greedy procedure based on the norm of the residual as an error indicator. Algorithm (given a termination criterion; a code sketch follows the list):

1. randomly select a first sample $\mu^{(1)}$
2. solve the HDM-based problem $f\!\left(w(\mu^{(1)}); \mu^{(1)}\right) = 0$
3. build a corresponding ROB $V$
4. for $i = 2, \dots$:
5. solve $\mu^{(i)} = \arg\max_{\mu \in \{\mu^{(1)}, \dots, \mu^{(c)}\}} \|r(\mu)\|$
6. solve the HDM-based problem $f\!\left(w(\mu^{(i)}); \mu^{(i)}\right) = 0$
7. build a ROB $V$ based on the snapshots (or in this case samples) $\left\{ w(\mu^{(1)}), \dots, w(\mu^{(i)}) \right\}$
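A sketch of this loop; `solve_hdm`, `solve_rom`, and `residual_norm` are hypothetical problem-specific callbacks, and `candidates` plays the role of the discrete set $\{\mu^{(1)}, \dots, \mu^{(c)}\}$:

```python
import numpy as np

def greedy_rob(candidates, solve_hdm, solve_rom, residual_norm, n_samples):
    snapshots = [solve_hdm(candidates[0])]             # steps 1-2: first sample
    V = np.linalg.svd(np.column_stack(snapshots),
                      full_matrices=False)[0]          # step 3: initial ROB
    for _ in range(1, n_samples):                      # step 4
        errs = [residual_norm(mu, V, solve_rom(mu, V)) for mu in candidates]
        mu_i = candidates[int(np.argmax(errs))]        # step 5: worst candidate
        snapshots.append(solve_hdm(mu_i))              # step 6: new HDM solve
        V = np.linalg.svd(np.column_stack(snapshots),
                          full_matrices=False)[0]      # step 7: rebuild ROB
    return V
```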

Slide 29: Extension to Multiple Parametric Configurations - The Unsteady Case

Parameterized HDM:

$$\frac{d}{dt}\, w(t; \mu) = f\!\left(w(t; \mu), t; \mu\right)$$

Lagrange basis:

$$\mathcal{V} = \operatorname{span}\left\{ w\!\left(t_1; \mu^{(1)}\right), \dots, w\!\left(t_{N_t}; \mu^{(1)}\right), \dots, w\!\left(t_1; \mu^{(s)}\right), \dots, w\!\left(t_{N_t}; \mu^{(s)}\right) \right\}, \qquad N_{snap} = s\, N_t$$

A posteriori error estimators:

- option 1: error bound
  $$E_{ROM}(\mu) = \left( \int_0^T \|E_{ROM}(t; \mu)\|^2 \, dt \right)^{1/2} \le \Delta(\mu)$$
- option 2: error indicator based on the norm of the (affordable) residual
  $$\|r(\mu)\| = \left( \int_0^T \|r(t; \mu)\|^2 \, dt \right)^{1/2}, \qquad r(t; \mu) = \frac{d}{dt}\, V q(t; \mu) - f\!\left(V q(t; \mu), t; \mu\right)$$

Slide 30: Extension to Multiple Parametric Configurations - The Unsteady Case

Greedy procedure based on the residual norm as an error indicator. Algorithm (given a termination criterion):

1. randomly select a first sample $\mu^{(1)}$
2. solve the HDM-based problem $\frac{d}{dt}\, w(t; \mu^{(1)}) = f\!\left(w(t; \mu^{(1)}), t; \mu^{(1)}\right)$
3. build a ROB $V$ based on the snapshots $\left\{ w(t_1; \mu^{(1)}), \dots, w(t_{N_t}; \mu^{(1)}) \right\}$
4. for $i = 2, \dots$:
5. solve $\mu^{(i)} = \arg\max_{\mu \in \{\mu^{(1)}, \dots, \mu^{(c)}\}} \|r(\mu)\|$
6. solve the HDM-based problem $\frac{d}{dt}\, w(t; \mu^{(i)}) = f\!\left(w(t; \mu^{(i)}), t; \mu^{(i)}\right)$
7. build a ROB $V$ based on the snapshots $\left\{ w(t_1; \mu^{(1)}), \dots, w(t_{N_t}; \mu^{(i)}) \right\}$

Slide 31: Applications - Image Compression

Recall: the smallest rank for which $1 - E_{POD} < 10^{-1}$ is 2; for $10^{-2}$, 47; for $10^{-3}$, 138; for $10^{-4}$, 210; for $10^{-5}$, 249. [Figure: panels (m)-(r) show the corresponding reconstructions, including the one for $1 - E_{POD} < 10^{-6}$.]

Slide 32: Applications - Image Compression

[Figure: plot of $E_{POD}(k)$ versus $k$.]

Slide 33: Applications - Second-Order Dynamical System

- $N_u = 48$ masses
- $N = 96$ degrees of freedom in state-space form
- model reduction by the POD method in the frequency domain

Slide 34: Applications - Second-Order Dynamical System

Nyquist plots; these lead to the choice of the ROM size $k$. [Figure: Nyquist plots.]

Slide 35: Applications - Second-Order Dynamical System

[Figure: Bode diagram for a ROM of the chosen size $k$.]

Slide 36: Applications - Fluid System, Advection-Diffusion

High-dimensional model ($N = 5{,}402$). [Figure.]

Slide 37: Applications - Fluid System, Advection-Diffusion

POD modes. [Figure.]

Slide 38: Applications - Fluid System, Advection-Diffusion

Projection error (singular value decay). [Figure: plot of $\epsilon(k)$ versus $k$.]

Slide 39: Applications - Fluid System, Advection-Diffusion

POD-based ROM ($k = 1$ and $k = 2$). [Figures.]

Slide 40: Applications - Fluid System, Advection-Diffusion

POD-based ROM ($k = 3$ and $k = 4$). [Figures.]

Slide 41: Applications - Fluid System, Advection-Diffusion

POD-based ROM ($k = 5$ and $k = 6$). [Figures.]

Slide 42: Applications - Fluid System, Advection-Diffusion

Model reduction error $E_{ROM}(t)$. [Figure: relative error versus $k$.]

Slide 43: Applications - Fluid System, Advection-Diffusion

Model reduction error $E_{ROM}(t)$ and projection error $E_V(t)$. [Figure: projection error and total error versus $k$.]

For this problem, $E_V(t)$ dominates $E_{\bar{V}}(t)$.
