PDE Model Reduction Using SVD

http://people.sc.fsu.edu/~jburkardt/presentations/fsu_2006.pdf
John Burkardt (School of Computational Science, Florida State University), Max Gunzburger (Department of Mathematics and School of Computational Science, Florida State University), Hyung-Chun Lee (Mathematics Department, Ajou University, Korea). SCS/FSU, 21 September 2006

Data Flood: Satellites Satellites generate so much data that most is not even sent! When it gets to earth, most is not examined!

Data Flood: Particle Accelerators Physicists run particle accelerators that generate millions of images: Computer programs are needed just to discard most of the data.

Data Flood: Fingerprints Here is a fingerprint found after a kidnapping: Even if a match is on file, how do you find it?

Data Flood: Where's the Information? We've got all this data, why aren't we smarter? Data is not information!

SVD: The Singular Value Decomposition
We introduce the singular value decomposition. The SVD of an M by N (real) matrix A is

A = U Σ V^T

where U is M by M orthogonal, Σ is an M by N nonnegative diagonal matrix, and V is N by N orthogonal.
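
A minimal numpy sketch of this factorization, as a concrete check of the shapes and orthogonality claims (the matrix A below is made up purely for illustration; this is not code from the talk):

```python
import numpy as np

# A made-up 3 by 4 matrix, standing in for any M by N data matrix.
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [2.0, 4.0, 1.0, 3.0],
              [0.0, 1.0, 5.0, 2.0]])

# Full SVD: U is M by M, s holds the diagonal of Sigma, Vt is V transpose (N by N).
U, s, Vt = np.linalg.svd(A, full_matrices=True)

# Rebuild the M by N Sigma and verify that A = U Sigma V^T.
Sigma = np.zeros(A.shape)
Sigma[:len(s), :len(s)] = np.diag(s)
print(np.allclose(A, U @ Sigma @ Vt))               # True
print(np.allclose(U.T @ U, np.eye(U.shape[0])))     # U is orthogonal
print(np.allclose(Vt @ Vt.T, np.eye(Vt.shape[0])))  # V is orthogonal
```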

SVD: What the SVD Tells Us
For a system with patterns, the SVD A = U Σ V^T tells us where the information is concentrated:
- the leading columns of U are the preferred behaviors;
- the diagonal of Σ is an energy or importance weight;
- the entries of Σ are nonnegative and sorted in decreasing order;
- for data with patterns, the entries of Σ decrease rapidly.

SVD: Can Tell Us About A Mapping We think of A as mapping the unit sphere of vectors x to an ellipsoid of result vectors Ax. Image from Muller, Magaia, Herbst

SVD: Summing Rank-1 Matrices
The SVD also represents A as a sum of rank-1 matrices:

A = σ_1 u_1 v_1^T + σ_2 u_2 v_2^T + ... + σ_r u_r v_r^T.

This representation is optimal: truncating the sum after k terms gives the best rank-k approximation of A in the Frobenius norm.
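
A small numpy sketch of the truncation (the matrix is made up; the point is that the Frobenius error of the rank-k sum equals the root-sum-of-squares of the discarded singular values, which is the optimality statement above):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 6))        # made-up data matrix

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Sum of the first k rank-1 matrices sigma_i * u_i * v_i^T.
k = 2
A_k = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(k))

# The truncation is the best rank-k approximation in the Frobenius norm;
# its error is sqrt(sigma_{k+1}^2 + ... + sigma_r^2).
print(np.linalg.norm(A - A_k, 'fro'))
print(np.sqrt(np.sum(s[k:] ** 2)))     # the two printed numbers agree
```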

SVD: From Data, We Get Information
- The singular values tell us how singular a matrix is;
- the ratio of largest to smallest singular value (the condition number) tells us how balanced the matrix is;
- the rightmost V vectors give a basis for the null space;
- the leftmost U vectors give the dominant outputs;
- the SVD can be used to solve underdetermined or overdetermined systems;
- it can find an orthonormal basis for a set of vectors (like Gram-Schmidt);
- it may let us approximate a matrix by one of very low rank.

SVD: Software Sources
- EISPACK, an ancient eigenvalue package: SVD()
- LINPACK, an ancient linear system package: SSVDC()
- LAPACK, a modern linear algebra package: SGESVD()
- SCALAPACK, parallel LAPACK: PSGESVD()
- ESSL, IBM linear algebra package: SGESVF()
- ACM TOMS, Algorithm #581: GRSVD()
- MAPLE, function SingularValues()
- MATLAB, function svd()
- MATHEMATICA, function SingularValues[]
- Numerical Recipes, routine SVDCMP()

The IN/OUT Flow, Governing Equations
Incompressible viscous Navier-Stokes equations, with u(x, y) the velocity vector, p(x, y) the pressure, ρ the known constant density, and µ the known viscosity:

ρ u_t − µ Δu + ρ (u · ∇)u + ∇p = 0
∇ · u = 0

The IN/OUT Flow: region

Finite Elements: The Elements We create a mesh of elements.

Example: Fluid Flow with Varying Input

The IN/OUT Flow: Animation

SVD Model Reduction for IN/OUT: Vector Importance

Index    Value    Relative   Cumulative
  1     26.9107    0.527       0.527
  2      7.0878    0.138       0.666
  3      6.5015    0.127       0.794
  4      3.1420    0.061       0.855
  5      1.6973    0.033       0.889
  6      1.4947    0.029       0.918
  7      0.9253    0.018       0.936
  8      0.7592    0.014       0.951
  9      0.5738    0.011       0.962
 10      0.4570    0.008       0.971
...        ...       ...         ...
 16      0.0994    0.001       0.997
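
The Relative column is each singular value divided by the sum of all of them, and the Cumulative column is the running total. A small sketch of that bookkeeping, with made-up values rather than the table's full list:

```python
import numpy as np

def importance_table(sigma):
    """Relative and cumulative weight of each singular value,
    as in the Relative and Cumulative columns above."""
    rel = sigma / sigma.sum()
    return rel, np.cumsum(rel)

sigma = np.array([10.0, 3.0, 1.0, 0.5, 0.2])   # made-up singular values
rel, cum = importance_table(sigma)
for i, (v, r, c) in enumerate(zip(sigma, rel, cum), start=1):
    print(f"{i:3d} {v:9.4f} {r:8.3f} {c:8.3f}")
```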

SVD Model Reduction for IN/OUT: Vector Importance

SVD Model Reduction for IN/OUT: Vector 1

SVD Model Reduction for IN/OUT: Vector 2

SVD Model Reduction for IN/OUT: Vector 3

SVD Model Reduction for IN/OUT: Vector 4

Finite Elements: Function Approximation
FEM approximates velocity and pressure using basis functions generated by the mesh:

u(x, y) ≈ U(x, y) = Σ_{i=1}^{NV} a_i φ_i(x, y)
p(x, y) ≈ P(x, y) = Σ_{j=1}^{NP} b_j ψ_j(x, y)

ROM: Reduced Order Modeling The SVD applied to the data extracts a small orthogonal basis of important vectors. Using basis functions from the finite element mesh, we had to define 3,362 basis functions, and then solve for the corresponding coefficients. The SVD tells us that every velocity vector we've seen so far could be well approximated by our 16 SVD basis vectors. Why not use them as a new finite element basis?
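
A hedged sketch of the idea for a generic linear system: collect snapshot solutions as columns, take the leading left singular vectors as a reduced basis Phi, and solve the small Galerkin-projected system instead of the full one. (All sizes, names, and the system itself are illustrative stand-ins, not the Navier-Stokes solver from the talk, whose full space had 3,362 basis functions and whose reduced basis had 16 vectors.)

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_snap, k = 500, 40, 16    # stand-in sizes: full dimension, snapshots, reduced dimension

# Made-up full-order system; the snapshot right-hand sides share a hidden low-dimensional
# structure, mimicking solution data that "has patterns".
A = np.eye(n) + 0.01 * rng.standard_normal((n, n))
modes = rng.standard_normal((n, 8))
snapshots = np.linalg.solve(A, modes @ rng.standard_normal((8, n_snap)))

# SVD of the snapshot matrix; the leading k left singular vectors form the reduced basis.
Phi, s, _ = np.linalg.svd(snapshots, full_matrices=False)
Phi = Phi[:, :k]

# A new problem of the same kind: solve a k by k Galerkin system, lift back to the
# full space, and compare with the full-order solution (as in the plots that follow).
b = modes @ rng.standard_normal(8)
u_rom = Phi @ np.linalg.solve(Phi.T @ A @ Phi, Phi.T @ b)
u_full = np.linalg.solve(A, b)
# Tiny relative error here, because the new solution lies in the span of the snapshots.
print(np.linalg.norm(u_full - u_rom) / np.linalg.norm(u_full))
```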

ROM: Solving New IN/OUT Problems

ROM: Solving New IN/OUT Problems L2 difference between ROM and full FEM solutions.

SVD and Data Analysis The Navier-Stokes equations still hide lots of information. The SVD extracts useful data (the singular vectors) and also gives the importance of each vector. The SVD did not need to know the Navier-Stokes equations, so we can apply it to data with no known governing equations.

FACES: (Muller, Magaia, Herbst) Image from Muller, Magaia, Herbst

FACES: Creating Data 200 people were each photographed in 3 poses, for N = 600 images. If each image uses 400 * 800 pixels, that is about 300,000 pixels, each with R, G and B values, so each image holds about M = 1,000,000 numeric values.

FACES: SVD Analysis (Singular Values)

FACES: SVD Analysis (Singular Vectors) Using similar SVD methods on the face data, here are the first 8 modes or eigenfaces. Usually 100 or 150 eigenfaces are enough to store almost all the facial information. Image from Muller, Magaia, Herbst

FACES: Recognizing a New Face Reconstruction using 40, 100, 450 eigenfaces. Image from Muller, Magaia, Herbst

FACES: The Covariance Matrix
The SVD includes information that can be used to recognize or reject new data. If we gently preprocess the data, we get usable covariance information:
- subtract the average from all the data;
- scale, dividing by N;
- the U vectors are then the directions of maximum variation;
- Σ contains the standard deviations.
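
A hedged numpy sketch of that preprocessing, with a made-up data matrix whose columns stand in for the face images. (Dividing the centered columns by the square root of N, which is the same as dividing the covariance matrix by N, is the convention assumed here so that the singular values come out as standard deviations.)

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((1000, 600))       # made-up stand-in: one column per face image

# Subtract the average item from every column, then scale.
mean = X.mean(axis=1, keepdims=True)
Xc = (X - mean) / np.sqrt(X.shape[1])

# The U columns are the directions of maximum variation (the eigenfaces);
# the singular values are the standard deviations along those directions,
# since the covariance (X - mean)(X - mean)^T / N equals U diag(s**2) U^T.
U, s, _ = np.linalg.svd(Xc, full_matrices=False)
print(s[:5])                               # leading standard deviations
```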

FACES: The Covariance Matrix Image from Muller, Magaia, Herbst

FACES: The Covariance Matrix Suppose we have a new item. Is it a face? Subtract the average, and project it onto the U vectors. From the SVD information, we can compute the probability that an item of data would fall this far from the average.
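
One way to make "how far from the average" concrete, in the same hedged spirit (the setup repeats the previous sketch so the snippet stands alone; reading the score against a chi-square distribution assumes a Gaussian model of the data, which is an assumption added here, not a statement from the talk):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((1000, 600))       # made-up data, as in the previous sketch
mean = X.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd((X - mean) / np.sqrt(X.shape[1]), full_matrices=False)

# A new item: subtract the average and project it onto the leading k U vectors.
x_new = rng.standard_normal(1000)
k = 100
z = U[:, :k].T @ (x_new - mean.ravel())

# Measure each coordinate in units of its own standard deviation, then combine.
d2 = np.sum((z / s[:k]) ** 2)
# Under a Gaussian model d2 behaves like a chi-square variable with k degrees of freedom,
# so a very large d2 means the item falls improbably far from the average: likely not a face.
print(d2)
```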

Summary of SVD
SVD is a natural analysis of data:
- knowledge of the governing equations (if any!) is not necessary;
- unknown information and patterns are discovered;
- it gives a compact representation of a huge dataset;
- it provides a natural (secondary) basis for FEM calculations;
- projection can be used to recognize new data.
SVD is one way to turn data into information.

References
Burkardt, Gunzburger, Lee, Centroidal Voronoi Tessellation-Based Reduced-Order Modeling of Complex Systems, SIAM Journal on Scientific Computing, Volume 28, Number 2, 2006.
Muller, Magaia, Herbst, Singular Value Decomposition, Eigenfaces, and 3D Reconstructions, SIAM Review, Volume 46, Number 3, 2004.
Trefethen, Bau, Numerical Linear Algebra, SIAM, 1997.