Proper Orthogonal Decomposition (POD)


1 Proper Orthogonal Decomposition (POD)
Advisor: Dr. Sorensen
CAAM 699, Department of Computational and Applied Mathematics, Rice University
September 5, 2008

2 Outline
1. Introduction/Motivation: Low-Rank Approximation and Model Reduction for ODEs and PDEs
2. Proper Orthogonal Decomposition (POD): Introduction to POD; POD in Euclidean Space; POD in a General Hilbert Space
3. Numerical Results: Solutions from Full and Reduced Systems of a PDE: 1D Burgers Equation
4. Problem: POD for PDEs with Nonlinearities: Nonlinear Approximation; Error and Computational Time

3 Low-Rank Approximation and Model Reduction for ODEs and PDEs
Problem in finite-dimensional space: given y_1, ..., y_n ∈ R^m, let Y = span{y_1, y_2, ..., y_n} ⊂ R^m and r = rank(Y). The vectors y_1, ..., y_n are possibly (almost) linearly dependent, hence NOT a good basis for Y.
Goal: find orthonormal basis vectors {φ_1, ..., φ_k} that best approximate Y, for given k < r.
Solution: use low-rank approximation. Form a matrix of the known data: Y = [y_1 ... y_n] ∈ R^{m×n}.

4 Low-Rank Approximation via the Singular Value Decomposition (SVD)
Let Y ∈ R^{m×n}, r = rank(Y), and k < r.
Problem (low-rank approximation): min_{Ŷ} { ‖Y − Ŷ‖_F² : rank(Ŷ) = k }.
Solution: Ŷ = U_k Σ_k V_k^T, with minimal error ‖Y − Ŷ‖_F² = Σ_{i=k+1}^{r} σ_i²,
where Y = UΣV^T is the SVD of Y and Σ = diag(σ_1, ..., σ_r) ∈ R^{r×r} with σ_1 ≥ σ_2 ≥ ... ≥ σ_r > 0.
Optimal orthonormal basis of rank k (the POD basis): Ŷ = Σ_{i=1}^{k} σ_i u_i v_i^T, so {φ_i}_{i=1}^{k} = {u_i}_{i=1}^{k}.
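A minimal NumPy sketch of this slide, with a random Y standing in for real data: it truncates the SVD and checks that the squared Frobenius error equals the sum of the trailing σ_i².

```python
import numpy as np

m, n, k = 50, 30, 5
Y = np.random.default_rng(1).standard_normal((m, n))      # example data matrix
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
Y_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]               # best rank-k approximation
err = np.linalg.norm(Y - Y_k, 'fro') ** 2
print(np.isclose(err, np.sum(s[k:] ** 2)))                # True: Eckart-Young identity
Phi = U[:, :k]                                            # optimal orthonormal (POD) basis
```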

5 Ex: Image Compression via SVD [figure: 1% low-rank approximation]

6 Ex: Image Compression via SVD [figure: 5% low-rank approximation]
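A hedged sketch of the compression idea behind these two slides; a synthetic pattern stands in for the slides' photograph (which did not survive transcription), and the ranks are chosen to be roughly 1% and 5% of the singular values.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic 256 x 256 grayscale "image"; any 2-D array A works the same way.
xx, yy = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
A = np.sin(8 * xx) * np.cos(6 * yy) + 0.5 * xx * yy
U, s, Vt = np.linalg.svd(A, full_matrices=False)
for k in (3, 13):                        # ~1% and ~5% of the 256 singular values
    A_k = (U[:, :k] * s[:k]) @ Vt[:k, :]  # rank-k reconstruction
    plt.imshow(A_k, cmap='gray')
    plt.title(f'rank {k}')
    plt.show()
```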

7 Model Reduction for ODEs
Full-order system (dim = N):
d/dt y(t) = K y(t) + g(t) + N(y(t))
Reduced-order system (dim = k < N): let y = U_k ỹ, for U_k ∈ R^{N×k} with orthonormal columns, U_k^T U_k = I ∈ R^{k×k}. Then
U_k d/dt ỹ(t) ≈ K U_k ỹ(t) + g(t) + N(U_k ỹ(t))
d/dt ỹ(t) = U_k^T K U_k ỹ(t) + U_k^T g(t) + U_k^T N(U_k ỹ(t))
d/dt ỹ(t) = K̃ ỹ(t) + g̃(t) + U_k^T N(U_k ỹ(t)),
with K̃ = U_k^T K U_k and g̃(t) = U_k^T g(t).
How to construct U_k? ... Use POD!
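A sketch of this projection step in NumPy; K, g, the nonlinearity, and U_k are all placeholders here (on the slides U_k comes from POD). The point is that the linear pieces can be projected once, up front.

```python
import numpy as np

N, k = 200, 5
rng = np.random.default_rng(0)
K = -np.eye(N) + 0.01 * rng.standard_normal((N, N))   # assumed system matrix
g = rng.standard_normal(N)                            # assumed (constant) forcing
Nfun = lambda y: np.sin(y)                            # assumed nonlinearity
U_k = np.linalg.qr(rng.standard_normal((N, k)))[0]    # orthonormal columns

K_red = U_k.T @ K @ U_k        # reduced k x k operator: precompute once
g_red = U_k.T @ g              # reduced forcing: precompute once
y_red = np.zeros(k)
# reduced right-hand side; note the nonlinear term still needs U_k @ y_red:
rhs = K_red @ y_red + g_red + U_k.T @ Nfun(U_k @ y_red)
```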

8 Model Reduction for PDEs
Ex: Unsteady 1D Burgers Equation
∂y/∂t(x,t) − ν ∂²y/∂x²(x,t) + ∂/∂x( y(x,t)²/2 ) = 0, x ∈ [0,1], t ≥ 0,
y(0,t) = y(1,t) = 0, t ≥ 0,
y(x,0) = y_0(x), x ∈ [0,1].
Discretized system (Galerkin):
Full-order system (dim = N), with FE basis {ϕ_i}_{i=1}^{N}:
M_h d/dt y(t) + ν K_h y(t) − N_h(y(t)) = 0, y_h(x,t) = Σ_{i=1}^{N} ϕ_i(x) y_i(t).
Reduced-order system (dim = k < N), with orthonormal POD basis {φ_i}_{i=1}^{k}:
M̃ d/dt ỹ(t) + ν K̃ ỹ(t) − Ñ(ỹ(t)) = 0, ỹ_h(x,t) = Σ_{i=1}^{k} φ_i(x) ỹ_i(t).
How to construct {φ_i}_{i=1}^{k}? ... Use POD!
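Snapshot generation is the first step of POD. A minimal sketch that time-steps this Burgers problem with explicit finite differences (a stand-in for the slides' FE-Galerkin discretization; ν, grid, time step, and initial condition are assumptions) and stacks solutions into a snapshot matrix:

```python
import numpy as np

nu, nsteps = 0.01, 2000
x = np.linspace(0.0, 1.0, 101)
dx, dt = x[1] - x[0], 5e-4            # dt small enough for explicit stability
y = np.sin(np.pi * x)                 # assumed initial condition y_0(x)
snapshots = []
for step in range(nsteps):
    y_new = y.copy()
    # explicit Euler: central diffusion + backward (upwind, y >= 0) flux of y^2/2
    y_new[1:-1] = (y[1:-1]
                   + dt * nu * (y[2:] - 2 * y[1:-1] + y[:-2]) / dx**2
                   - dt * (y[1:-1]**2 - y[:-2]**2) / (2 * dx))
    y_new[0] = y_new[-1] = 0.0        # homogeneous Dirichlet boundary conditions
    y = y_new
    if step % 20 == 0:
        snapshots.append(y.copy())
Y = np.column_stack(snapshots)        # snapshot matrix: one column per saved time
```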

9 Introduction to POD
Proper Orthogonal Decomposition (POD) is a method for finding a low-dimensional approximate representation of:
- large-scale dynamical systems, e.g. signal analysis, turbulent fluid flow;
- large data sets, e.g. image processing.
POD is the SVD in Euclidean space. It extracts basis functions containing characteristics from the system of interest, and generally gives a good approximation with substantially lower dimension.
[Figures: the first four POD basis functions and the solution of the full system (FE).]

10 Definition of POD
Let X be a Hilbert space (i.e., a complete inner product space) with inner product ⟨·,·⟩ and norm ‖·‖ = √⟨·,·⟩.
Given y_1, ..., y_n ∈ X, define y_i =: snapshot i, and let Y := span{y_1, y_2, ..., y_n} ⊂ X.
Given k ≤ n, POD generates a set of orthonormal basis functions of dimension k which minimizes the error from approximating the snapshots.
POD basis: optimal solution of
min_{ {φ_i}_{i=1}^{k} } Σ_{j=1}^{n} ‖ y_j − ŷ_j ‖², s.t. ⟨φ_i, φ_j⟩ = δ_ij,
where ŷ_j(x) = Σ_{i=1}^{k} ⟨y_j, φ_i⟩ φ_i(x) is the approximation of y_j using {φ_i}_{i=1}^{k}. Solved by the SVD.

11 Derivation for POD in Euclidean Space, e.g. X = R^m
min_{ {φ_i}_{i=1}^{k} } E(φ_1, ..., φ_k) := Σ_{j=1}^{n} ‖ y_j − Σ_{i=1}^{k} (y_j^T φ_i) φ_i ‖_2²
s.t. φ_i^T φ_j = δ_ij = { 1 if i = j; 0 if i ≠ j }, for i, j = 1, ..., k.
Lagrange function:
L(φ_1, ..., φ_k, λ_11, ..., λ_ij, ..., λ_kk) = E(φ_1, ..., φ_k) + Σ_{i,j=1}^{k} λ_ij (φ_i^T φ_j − δ_ij).
KKT necessary conditions:
∇_{φ_i} L = 0 ⟹ Σ_{j=1}^{n} y_j (y_j^T φ_i) = λ_ii φ_i, with λ_ij = 0 if i ≠ j;
φ_i^T φ_j = δ_ij.
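A short worked expansion of this step; a sketch, assuming the φ_i orthonormal (as the constraint enforces):

```latex
\begin{align*}
E(\varphi_1,\dots,\varphi_k)
  &= \sum_{j=1}^{n}\Big\| y_j-\sum_{i=1}^{k}(y_j^{T}\varphi_i)\varphi_i \Big\|_2^2
   = \sum_{j=1}^{n}\|y_j\|_2^2-\sum_{i=1}^{k}\varphi_i^{T}\,Y Y^{T}\varphi_i, \\
\nabla_{\varphi_i} L
  &= -2\,Y Y^{T}\varphi_i + 2\lambda_{ii}\varphi_i
     + \sum_{j\neq i}(\lambda_{ij}+\lambda_{ji})\varphi_j = 0.
\end{align*}
```

With the off-diagonal multipliers vanishing at the stationary point, as stated on the slide, the condition reduces to Y Y^T φ_i = λ_ii φ_i.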

12 Optimality condition: symmetric m-by-m eigenvalue problem
Y Y^T φ_i = λ_i φ_i, for i = 1, ..., k,
where λ_i := λ_ii and Y = [y_1 ... y_n], since Y Y^T φ_i = Σ_{j=1}^{n} y_j (y_j^T φ_i) and λ_i = Σ_{j=1}^{n} (y_j^T φ_i)².
Error for the POD basis:
Σ_{j=1}^{n} ‖ y_j − Σ_{i=1}^{k} (y_j^T φ_i) φ_i ‖_2² = Σ_{i=k+1}^{r} λ_i.
Indeed, since φ_i^T φ_j = δ_ij and span(Y) = span{φ_1, ..., φ_r} (from the SVD), we have y_j = Σ_{i=1}^{r} (y_j^T φ_i) φ_i for j = 1, ..., n, and
Σ_{j=1}^{n} ‖ y_j − Σ_{i=1}^{k} (y_j^T φ_i) φ_i ‖_2² = Σ_{j=1}^{n} Σ_{i=k+1}^{r} (y_j^T φ_i)² = Σ_{i=k+1}^{r} λ_i.

13 SOLUTION for the POD basis in R^m
Recall the optimality condition and the error:
Y Y^T φ_i = λ_i φ_i, i = 1, ..., k; Σ_{j=1}^{n} ‖ y_j − Σ_{i=1}^{k} (y_j^T φ_i) φ_i ‖_2² = Σ_{i=k+1}^{r} λ_i.
Optimal solution: POD basis {φ_i}_{i=1}^{k} = {u_i}_{i=1}^{k}, with Lagrange multipliers λ_i = σ_i², i = 1, ..., k, obtained from the SVD of Y ∈ R^{m×n} or the EVD of Y Y^T ∈ R^{m×m}, i.e.
Y = U Σ V^T ⟹ Y Y^T u_i = σ_i² u_i, i = 1, ..., r,
where Σ = diag(σ_1, ..., σ_r) ∈ R^{r×r} with σ_1 ≥ σ_2 ≥ ... ≥ σ_r > 0, and U = [u_1, ..., u_r] ∈ R^{m×r}, V = [v_1, ..., v_r] ∈ R^{n×r} have orthonormal columns.
This is equivalent to the SVD solution of the low-rank approximation problem.
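In code, both routes (SVD of Y, EVD of Y Y^T) give the same basis and multipliers; a sketch with random snapshots:

```python
import numpy as np

m, n, k = 40, 25, 4
Y = np.random.default_rng(2).standard_normal((m, n))     # snapshot matrix
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
Phi = U[:, :k]                                           # POD basis {u_i}
print(np.allclose(Y @ (Y.T @ Phi), Phi * s[:k]**2))      # Y Y^T u_i = sigma_i^2 u_i
lam = np.linalg.eigvalsh(Y @ Y.T)[::-1]                  # EVD route, descending
print(np.allclose(lam[:n], s**2))                        # same lambda_i = sigma_i^2
```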

14 SOLUTION for the POD basis in a Hilbert Space X
Two approaches:
1. Define the linear symmetric operator F(w) = Σ_{j=1}^{n} ⟨w, y_j⟩ y_j, w ∈ X, and find eigenfunctions u_i ∈ X:
EVD: F(u_i) = σ_i² u_i, i.e. Σ_{j=1}^{n} ⟨u_i, y_j⟩ y_j = σ_i² u_i (cf. Y Y^T u_i = σ_i² u_i).
Solution: φ_i = u_i, λ_i = σ_i², i = 1, ..., k.
2. Define the symmetric matrix L = [⟨y_i, y_j⟩] ∈ R^{n×n} and find eigenvectors v_i ∈ R^n:
EVD: L v_i = σ_i² v_i, i.e. [⟨y_i, y_j⟩] v_i = σ_i² v_i (cf. Y^T Y v_i = σ_i² v_i).
Solution: φ_i = (1/σ_i) Σ_{j=1}^{n} (v_i)_j y_j, λ_i = σ_i² (cf. U_k = Y V_k Σ_k^{-1}).
NOTE: v_i ∈ R^n, but u_i, y_j ∈ X.
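The second approach is attractive when n ≪ m; a NumPy sketch of this "method of snapshots" route, with random snapshots standing in for real data:

```python
import numpy as np

m, n, k = 500, 20, 3
Y = np.random.default_rng(3).standard_normal((m, n))
L = Y.T @ Y                                   # n x n Gram matrix of inner products
lam, V = np.linalg.eigh(L)                    # small eigenproblem instead of m x m
lam, V = lam[::-1], V[:, ::-1]                # sort eigenvalues descending
Phi = Y @ V[:, :k] / np.sqrt(lam[:k])         # phi_i = (1/sigma_i) * sum_j (v_i)_j y_j
print(np.allclose(Phi.T @ Phi, np.eye(k)))    # orthonormal POD basis
```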

15 Remark: practical way of computing the POD basis for discretized PDEs
Let {ϕ_i}_{i=1}^{N} ⊂ X = H¹_0(Ω) be a finite element (FE) basis, and let {y_j}_{j=1}^{n} ⊂ X be the FE snapshots (solutions) at times {t_j}_{j=1}^{n}:
y_j = y(·, t_j) = Σ_{i=1}^{N} Ŷ_ij ϕ_i(·).
Then
L = [⟨y_i, y_j⟩] = [ Σ_{k,l=1}^{N} Ŷ_ki Ŷ_lj ⟨ϕ_k, ϕ_l⟩ ] = Ŷ^T M Ŷ ∈ R^{n×n},
where M = [⟨ϕ_i, ϕ_j⟩] ∈ R^{N×N} is the FE mass matrix.
POD basis: φ_i = (1/σ_i) Σ_{j=1}^{n} (v_i)_j y_j,
where L v_i = σ_i² v_i, with v_1, v_2, ..., v_k corresponding to σ_1 ≥ σ_2 ≥ ... ≥ σ_k > 0.
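A sketch of this remark, assuming a 1D piecewise-linear mass matrix in place of a real FE assembly (the boundary rows are approximated with the interior stencil); the computed basis is orthonormal in the M-weighted inner product:

```python
import numpy as np

Nh, n, k = 101, 15, 3
h = 1.0 / (Nh - 1)
# Tridiagonal P1 mass matrix, interior stencil (h/6) * [1 4 1] -- a stand-in
# for a real FE assembly.
M = (h / 6) * (4 * np.eye(Nh) + np.eye(Nh, k=1) + np.eye(Nh, k=-1))
Yhat = np.random.default_rng(4).standard_normal((Nh, n))   # FE snapshot coefficients
L = Yhat.T @ M @ Yhat                                      # L = Yhat^T M Yhat
lam, V = np.linalg.eigh(L)
lam, V = lam[::-1], V[:, ::-1]                             # descending eigenvalues
Phi = Yhat @ V[:, :k] / np.sqrt(lam[:k])                   # POD coefficient vectors
print(np.allclose(Phi.T @ M @ Phi, np.eye(k)))             # M-orthonormal basis
```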

16 Numerical Results for the PDE
Recall the 1D Burgers equation (x ∈ [0,1], t ≥ 0):
∂y/∂t(x,t) − ν ∂²y/∂x²(x,t) + ∂/∂x( y(x,t)²/2 ) = 0,
y(0,t) = y(1,t) = 0, t ≥ 0; y(x,0) = y_0(x), x ∈ [0,1].
[Figures: singular values of the snapshots; solution of the full system (finite element); solutions of the reduced system (Direct POD) for dims 1 through 6, each plotted at three time instants including t = 0.229.]

17 Problem: POD for PDEs with Nonlinearities
If we apply the POD basis directly to construct a discretized system, the original system of order N,
M_h d/dt y(t) + ν K_h y(t) − N_h(y(t)) = 0,
becomes a system of order k ≪ N,
M̃ d/dt ỹ(t) + ν K̃ ỹ(t) − Ñ(ỹ(t)) = 0,
where the nonlinear term is
Ñ(ỹ(t)) = U_k^T N_h(U_k ỹ(t)), with U_k^T of size k × N and N_h(U_k ỹ(t)) of size N × 1.
Computational complexity still depends on N!!
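The bottleneck in a few lines of NumPy (dimensions and the componentwise nonlinearity are assumptions): every evaluation of Ñ lifts ỹ back to dimension N.

```python
import numpy as np

N, k = 10_000, 10
rng = np.random.default_rng(5)
U_k = np.linalg.qr(rng.standard_normal((N, k)))[0]
y_red = rng.standard_normal(k)
Nfun = lambda y: y ** 2                 # assumed componentwise nonlinearity
lifted = U_k @ y_red                    # O(Nk) work: back to full dimension N
N_red = U_k.T @ Nfun(lifted)            # O(N) nonlinear evals + O(Nk) projection
```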

18 Nonlinear Approximation
Recall the problem from Direct POD:
Ñ(ỹ(t)) = U^T N_h(U ỹ(t)), with U^T of size k × N and N_h(U ỹ(t)) of size N × 1.
WANT: Ñ(ỹ(t)) ≈ C N̂(ỹ(t)), with C of size k × n_m and N̂(ỹ(t)) of size n_m × 1, where k, n_m ≪ N: independent of N.
Two approaches:
- Precomputing technique: simple nonlinearities
- Empirical Interpolation Method (EIM): general nonlinearities

19 ACCURACY vs. COMPLEXITY
[Figure, LEFT: average relative error of the reconstructed snapshots, E_avg = (1/n_t) Σ_{i=1}^{n_t} ‖ỹ_h(·, t_i) − y_h(·, t_i)‖_X / ‖y_h(·, t_i)‖_X, versus the number of POD basis functions used, for Direct POD, Precompute, and EIM. RIGHT: CPU time (sec) for solving the reduced-order ODE system, versus the number of POD basis functions used.]

20 QUESTION?

21 Empirical Interpolation Method (EIM) [Patera; 2004]
EIM approximates nonlinear parametrized functions. Let s(x; µ) be a parametrized function with spatial variable x ∈ Ω and parameter µ ∈ D.
Function approximation from EIM:
ŝ(x; µ) = Σ_{m=1}^{n_m} q_m(x) β_m(µ),
where span{q_m}_{m=1}^{n_m} approximates M_s := {s(·; µ) : µ ∈ D}, and β_m(µ) is specified from the interpolation points {z_m}_{m=1}^{n_m} in Ω:
s(z_i; µ) = Σ_{m=1}^{n_m} q_m(z_i) β_m(µ), for i = 1, ..., n_m.
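The slides state EIM in function space. Below is a hedged sketch of its discrete counterpart (DEIM, due to Chaturantabut and Sorensen; the talk's own point selection may differ), showing the greedy choice of interpolation indices and the resulting interpolant. Function names are my own.

```python
import numpy as np

def deim_points(U):
    """Greedy interpolation indices for the columns of U (m x n_m).
    Sketch of the discrete counterpart of EIM, not the talk's exact code."""
    n_m = U.shape[1]
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for l in range(1, n_m):
        # coefficients matching the current basis at the chosen points
        c = np.linalg.solve(U[np.ix_(p, range(l))], U[p, l])
        r = U[:, l] - U[:, :l] @ c          # residual of the next column
        p.append(int(np.argmax(np.abs(r)))) # next point: largest residual entry
    return np.array(p)

def deim_approx(U, p, f_at_p):
    """Interpolant f ~ U (U[p, :])^{-1} f(p), using only the entries of f at p."""
    return U @ np.linalg.solve(U[p, :], f_at_p)
```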

22 EIM: Numerical Example
s(x; µ) = (1 − x) cos(3πµ(x + 1)) e^{−(1+x)µ}, where x ∈ [−1, 1] and µ ∈ [1, π].
[Figure: plots of the approximate functions against the exact functions (black solid line) for four values of µ.]
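A usage sketch on this test function, reusing deim_points and deim_approx from the block above; the µ sample grid, n_m = 10, and the test parameter are assumptions:

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 200)
mus = np.linspace(1.0, np.pi, 51)                    # assumed training samples in D
S = np.column_stack([(1 - x) * np.cos(3 * np.pi * mu * (x + 1)) * np.exp(-(1 + x) * mu)
                     for mu in mus])                 # snapshots of s(x; mu)
U, s, _ = np.linalg.svd(S, full_matrices=False)
Q = U[:, :10]                                        # n_m = 10 basis functions q_m
p = deim_points(Q)                                   # interpolation points z_m
mu = 2.3                                             # assumed test parameter
f = (1 - x) * np.cos(3 * np.pi * mu * (x + 1)) * np.exp(-(1 + x) * mu)
print(np.max(np.abs(f - deim_approx(Q, p, f[p]))))   # interpolation error (small)
```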

23 Plots of Numerical Solutions from the 3 Approaches (Direct POD, Precomputed POD, EIM-POD)
[Figures: singular values of the snapshots (SVD); solution of the full system (finite element); solutions of the reduced system (Direct POD) for dims 1 through 5, each plotted at three time instants including t = 0.229.]

24 [Figures: solution of the full system (finite element) together with solutions of the reduced systems for dims 1 through 5 using Precompute POD (left) and EIM-POD (right), each plotted at three time instants including t = 0.229.]

25 Conclusions
Unique contributions:
- Clear description of the EIM
- Successful implementation of EIM with POD
- EIM with POD is comparable to widely accepted methods, such as the precomputing technique
- The results suggest that EIM with a POD basis is a promising model reduction technique for more general nonlinear PDEs
Future work:
- Extend to higher dimensions
- Extend to PDEs with more general nonlinearities
- Apply to practical problems, e.g. optimal control problems
[Figures: relative error (1/n_t) Σ_{i=1}^{n_t} ( ‖y_FE(t_i) − y(t_i)‖ / ‖y_FE(t_i)‖ )² of the reconstructed snapshots, and CPU time for solving the reduced-order ODE system, versus the number of POD basis functions, for EIM with various dims, Direct POD, and Precompute POD; plots of the first 4 POD basis functions.]

26 Overview
Nonlinear PDE → (FE-Galerkin) → FULL discretized system (ODE), dim = N → solve ODE → SNAPSHOTS {y(·, t_l)}_{l=1}^{n_s} → POD basis {φ_i}_{i=1}^{k} → (POD-Galerkin) → REDUCED discretized system (ODE): linear term dim = k ≪ N, nonlinear term dim ≈ N → nonlinear approximation (POD-EIM or POD-precomputing) → REDUCED discretized system (ODE): linear term dim = k ≪ N, nonlinear term dim = n_m ≪ N.
Benefits: (i) memory saving, (ii) accuracy, (iii) efficiency (time saving).
