Mapping Lapack to Matlab, Mathematica and Ada 2005 functions (DRAFT version, may contain errors)

Nasser M. Abbasi, sometime in 2011 (page compiled on July 1, 2015 at 7:19pm)

This list gives some of the LAPACK functions (only the Single Precision REAL routines are shown), together with the Matlab, Mathematica and Ada 2005 calls which most closely provide that functionality.

SGESV: Solves a general system of linear equations Ax = b.
  Matlab:      f = factorize(A); x = f\b
  Mathematica: LinearSolve[A, B], or PseudoInverse[A].B
  Ada:         x := Solve(A, B), or S := Inverse(A); x := S*B

SGBSV: Solves a general banded system of linear equations Ax = b.
  Matlab:      pinv(A)*b
  Mathematica: LinearSolve[A, B]
  Ada:         x := Solve(A, B)

SGTSV: Solves a general tridiagonal system of linear equations Ax = b.
  Mathematica: LinearSolve[A, B]
  Ada:         x := Solve(A, B)

SPOSV: Solves a symmetric positive definite system of linear equations Ax = b.
  Mathematica: LinearSolve[A, B]
  Ada:         x := Solve(A, B)
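
A minimal Matlab sketch (with made-up 3-by-3 data, not from the table above) of what the SGESV row amounts to: a square system solved by LU factorization with partial pivoting, either through backslash or through an explicit lu call.

    % Hypothetical data: solve A*x = b as SGESV does, via LU with partial pivoting.
    A = [4 -2 1; -2 4 -2; 1 -2 4];
    b = [11; -16; 17];
    x1 = A\b;                    % backslash uses an LU solve for a square general A
    [L,U,P] = lu(A);             % explicit factorization, P*A = L*U
    x2 = U\(L\(P*b));            % forward and back substitution
    norm(x1 - x2)                % the two answers agree to rounding error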

SPPSV: Solves a symmetric positive definite system of linear equations Ax = b, where A is held in packed storage.
  Mathematica: LinearSolve[A, B]
  Ada:         x := Solve(A, B)

SPBSV: Solves a symmetric positive definite banded system of linear equations Ax = b.
  Matlab:      see above, or R = chol(A); x = R\(R'\b)
  Mathematica: R = CholeskyDecomposition[A]; LinearSolve[Transpose[R], B]; LinearSolve[R, %]
  Ada:         see above, or x := Solve(A, B)

SPTSV: Solves a symmetric positive definite tridiagonal system of linear equations Ax = b.
  Mathematica: LinearSolve[A, B]
  Ada:         x := Solve(A, B)

SSYSV: Solves a real symmetric indefinite system of linear equations Ax = b.
  Mathematica: LinearSolve[A, B]
  Ada:         x := Solve(A, B)

SSPSV: Solves a real symmetric indefinite system of linear equations Ax = b, where A is held in packed storage.
  Mathematica: LinearSolve[A, B]
  Ada:         x := Solve(A, B)

SGELS: Computes the least squares solution to an overdetermined system of linear equations Ax = b or A^T x = b, or the minimum norm solution of an underdetermined system, where A is a general rectangular matrix of full rank, using a QR or LQ factorization of A.
  Matlab:      for underdetermined: pinv(A)*b or lsqlin(A,b)
  Mathematica: for overdetermined: LinearSolve[A, b]; for underdetermined: PseudoInverse[A].b or LeastSquares[A, b]
  Ada:         x := Solve(A, B)
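
A minimal Matlab sketch (made-up SPD data) of the Cholesky-based solve quoted in the SPBSV row above; chol returns an upper triangular R with A = R'*R, so the solve is two triangular substitutions.

    % Hypothetical SPD data: Cholesky factor R is upper triangular with A = R'*R.
    A = [4 1 1; 1 3 0; 1 0 2];
    b = [6; 4; 3];
    R = chol(A);
    x = R\(R'\b);                % two triangular solves
    norm(A*x - b)                % residual is at rounding level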

SGELSD: Computes the least squares solution to an overdetermined system of linear equations Ax = b or A^T x = b, or the minimum norm solution of an underdetermined system, where A is a general rectangular matrix of full rank, using the singular value decomposition (SVD).
  Matlab:      [U,S,V] = svd(A); x = V*inv(S)*U'*b
  Mathematica: Can also use x = LinearSolve[A, b]. Or {u, w, v} = SingularValueDecomposition[A]; invS = DiagonalMatrix[1/Diagonal[w]]; x = v.invS.Transpose[u].b
  Ada:         No SVD. Can use x := Solve(A, B)

SGGLSE: Solves the LSE (Constrained Linear Least Squares Problem) using the Generalized RQ factorization.
  Matlab:      lsqlin()
  Mathematica: FindMinimum[]
  Ada:         Missing?

SGGGLM: Solves the GLM (Generalized Linear Regression Model) using the GQR (Generalized QR) factorization.
  Matlab:      glmfit() (requires the Statistics Toolbox)
  Mathematica: see GeneralizedLinearModelFit[] and LinearModelFit[]
  Ada:         Missing?

SSYEV: Computes all eigenvalues and, optionally, eigenvectors of a symmetric matrix.

SSYEVD: Computes all eigenvalues and, optionally, eigenvectors of a symmetric matrix. If desired, it uses a divide and conquer algorithm.

SSPEV: Computes all eigenvalues and, optionally, eigenvectors of a symmetric matrix in packed storage.
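
A minimal Matlab sketch (made-up 4-by-2 data) of the SVD-based least squares solve described for SGELSD; the economy-size svd call is an assumption here, and the result is checked against the QR-based backslash solve.

    % Hypothetical overdetermined data: least squares by SVD, as in SGELSD.
    A = [1 1; 1 2; 1 3; 1 4];
    b = [2.1; 3.9; 6.2; 7.8];
    [U,S,V] = svd(A, 'econ');
    x_svd = V*(S\(U'*b));        % same as V*inv(S)*U'*b, written without inv
    x_qr  = A\b;                 % backslash uses a QR-based least squares solve
    norm(x_svd - x_qr)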

SSPEVD: Computes all eigenvalues and, optionally, eigenvectors of a symmetric matrix in packed storage. If desired, it uses a divide and conquer algorithm.

SSBEV: Computes all eigenvalues and, optionally, eigenvectors of a symmetric band matrix.

SSBEVD: Computes all eigenvalues and, optionally, eigenvectors of a symmetric band matrix. If desired, it uses a divide and conquer algorithm.

SSTEV: Computes all eigenvalues and, optionally, eigenvectors of a symmetric tridiagonal matrix.

SSTEVD: Computes all eigenvalues and, optionally, eigenvectors of a symmetric tridiagonal matrix. If desired, it uses a divide and conquer algorithm.
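
Matlab has no packed, band or tridiagonal-specific eigen driver, so a sketch of the SSTEV case (made-up diagonals, not from the table) simply forms the tridiagonal matrix explicitly and calls eig.

    % Hypothetical diagonals: SSTEV-style tridiagonal eigenproblem via a full matrix.
    n = 5;
    d = 2*ones(n,1);                     % main diagonal
    e = -ones(n-1,1);                    % off-diagonals
    T = diag(d) + diag(e,1) + diag(e,-1);
    [V,D] = eig(T);                      % all eigenvalues and eigenvectors
    diag(D).'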

SGEES: Computes the Schur factorization of a general matrix and orders the factorization so that selected eigenvalues are at the top left of the Schur form.
  Matlab:      schur()
  Mathematica: SchurDecomposition[]

SGEEV: Computes the eigenvalues and the left and right eigenvectors of a general matrix.
  Matlab:      For right eigenvectors use [V,D] = eig(A). For left eigenvectors of A use [W,D] = eig(A.'); W = conj(W).
  Mathematica: For right eigenvectors use {D, V} = Eigensystem[A]; V = Transpose[V]. For left eigenvectors of A use {D, W} = Eigensystem[Transpose[A]]; W = ConjugateTranspose[W].
  Ada:         For right eigenvectors use Eigensystem(A, Values, Vectors); for left eigenvectors, apply Transpose to A and call again, then call Conjugate(). See Annex G for the exact calls.

SGESVD: Computes the singular value decomposition (SVD) of a general matrix.
  Matlab:      svd()
  Mathematica: SingularValueDecomposition[]

SGESDD: Computes the singular value decomposition (SVD) of a general matrix using a divide-and-conquer algorithm.
  Matlab:      svd()
  Mathematica: SingularValueDecomposition[]

SSYGV: Computes all eigenvalues and, optionally, eigenvectors of a symmetric-definite eigenproblem.
  Matlab:      [V,D] = eig(A, B, 'chol')
  Mathematica: {D, V} = Eigensystem[{A, B}]
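
A minimal Matlab sketch (made-up 3-by-3 data) of the SGEEV recipe quoted above for right and left eigenvectors; the residual checks are added here only for illustration.

    % Hypothetical data: right and left eigenvectors of a general matrix.
    A = [3 1 0; 0 2 1; 1 0 1];
    [V,D]  = eig(A);             % right eigenvectors: A*V = V*D
    [W,D2] = eig(A.');           % eigenvectors of the transpose ...
    W = conj(W);                 % ... conjugated, give left eigenvectors
    norm(A*V - V*D)              % both residuals are at rounding level
    norm(W'*A - D2*W')
    % Recent Matlab versions can also return left eigenvectors directly:
    % [V,D,W] = eig(A)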

SSYGVD: Computes all eigenvalues and, optionally, eigenvectors of the symmetric-definite eigenproblem Ax = λBx, ABx = λx, or BAx = λx. If desired, it uses a divide and conquer algorithm.
  Matlab:      [V,D] = eig(A, B, 'chol')
  Mathematica: {D, V} = Eigensystem[{A, B}]

SSPGV: Computes all eigenvalues and, optionally, eigenvectors of the symmetric-definite eigenproblem Ax = λBx, ABx = λx, or BAx = λx, where A and B are in packed storage.
  Matlab:      [V,D] = eig(A, B, 'chol')
  Mathematica: {D, V} = Eigensystem[{A, B}]

SSPGVD: Computes all eigenvalues and, optionally, eigenvectors of the symmetric-definite eigenproblem Ax = λBx, ABx = λx, or BAx = λx, where A and B are in packed storage. If desired, it uses a divide and conquer algorithm.
  Matlab:      [V,D] = eig(A, B, 'chol')
  Mathematica: {D, V} = Eigensystem[{A, B}]
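
A minimal Matlab sketch (made-up 2-by-2 data) of the eig(A,B,'chol') call used throughout these rows for the symmetric-definite problem Ax = λBx.

    % Hypothetical data: symmetric-definite problem A*x = lambda*B*x.
    A = [2 1; 1 3];              % symmetric
    B = [4 1; 1 2];              % symmetric positive definite
    [V,D] = eig(A, B, 'chol');
    norm(A*V - B*V*D)            % generalized eigen-residual at rounding level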

SSBGV: Computes all the eigenvalues and, optionally, the eigenvectors of the symmetric-definite banded eigenproblem of the form Ax = λBx. A and B are assumed to be symmetric and banded, and B is also positive definite.
  Matlab:      [V,D] = eig(A, B, 'chol')
  Mathematica: {D, V} = Eigensystem[{A, B}]

SSBGVD: Computes all the eigenvalues and, optionally, the eigenvectors of the symmetric-definite banded eigenproblem of the form Ax = λBx. A and B are assumed to be symmetric and banded, and B is also positive definite. If desired, it uses a divide and conquer algorithm.
  Matlab:      [V,D] = eig(A, B, 'chol')
  Mathematica: {D, V} = Eigensystem[{A, B}]

SGGES: Computes the eigenvalues, Schur form, and left and/or right Schur vectors for a pair of nonsymmetric matrices.
  Matlab:      schur()
  Mathematica: SchurDecomposition[]

SGGEV: Computes the eigenvalues, and left and/or right eigenvectors, for a pair of nonsymmetric matrices.
  Matlab:      [V,D] = eig(A, B, 'qz')
  Mathematica: {D, V} = Eigensystem[{A, B}]
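
A minimal Matlab sketch (made-up 2-by-2 data) for a pair of nonsymmetric matrices; the qz and ordeig calls are assumptions (they are not entries in the table), shown next to the eig(A,B,'qz') call that is.

    % Hypothetical data: generalized Schur (QZ) form and QZ-based eigenvalues.
    A = [1 2; 3 4];
    B = [2 0; 1 3];
    [AA,BB,Q,Z] = qz(A, B);      % Q*A*Z = AA, Q*B*Z = BB
    lam = ordeig(AA, BB);        % generalized eigenvalues of the triangular pair
    [V,D] = eig(A, B, 'qz');     % eigenvalues and eigenvectors via QZ
    norm(sort(lam) - sort(diag(D)))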

SGGSVD: Computes the Generalized Singular Value Decomposition.
  Matlab:      gsvd()
  Mathematica: SingularValueList[]

SGESVX: Solves a general system of linear equations Ax = b, A^T x = b, or A^H x = b, and provides an estimate of the condition number and error bounds on the solution.
  Mathematica: LinearSolve[A, b]
  Ada:         Use transpose or conjugate of A first, then call Solve(), but no condition number estimate or error bounds are provided.

SGBSVX: Solves a general banded system of linear equations Ax = b, A^T x = b, or A^H x = b, and provides an estimate of the condition number and error bounds on the solution.
  Mathematica: LinearSolve[A, b]
  Ada:         Use transpose or conjugate of A first, then call Solve(), but no condition number estimate or error bounds are provided.

SGTSVX: Solves a general tridiagonal system of linear equations Ax = b, A^T x = b, or A^H x = b, and provides an estimate of the condition number and error bounds on the solution.
  Mathematica: LinearSolve[A, b]
  Ada:         Use transpose or conjugate of A first, then call Solve(), but no condition number estimate or error bounds are provided.

SPOSVX: Solves a symmetric positive definite system of linear equations Ax = b, and provides an estimate of the condition number and error bounds on the solution.
  Mathematica: LinearSolve[A, b]
  Ada:         x := Solve(A, B), but no condition number estimate or error bounds are provided.
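
The expert drivers above add a condition estimate and error bounds to the plain solve. A minimal Matlab sketch (made-up data) of the closest everyday equivalent; linsolve and rcond are assumptions, not entries from the table.

    % Hypothetical data: solve, solve with the transpose, and estimate conditioning.
    A = [4 -2 1; -2 5 -2; 1 -2 6];
    b = [1; 2; 3];
    [x, r] = linsolve(A, b);     % r is a reciprocal condition estimate for square A
    opts.TRANSA = true;
    xt = linsolve(A, b, opts);   % solves A'*x = b without forming A' explicitly
    rcond(A)                     % standalone reciprocal condition estimate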

SPPSVX: Solves a symmetric positive definite system of linear equations Ax = b, where A is held in packed storage, and provides an estimate of the condition number and error bounds on the solution.
  Mathematica: LinearSolve[A, b]
  Ada:         x := Solve(A, B), but no condition number estimate or error bounds are provided.

SPBSVX: Solves a symmetric positive definite banded system of linear equations Ax = b, where A is held in packed storage, and provides an estimate of the condition number and error bounds on the solution.
  Mathematica: LinearSolve[A, b]
  Ada:         x := Solve(A, B), but no condition number estimate or error bounds are provided.

SPTSVX: Solves a symmetric positive definite tridiagonal system of linear equations Ax = b, where A is held in packed storage, and provides an estimate of the condition number and error bounds on the solution.
  Mathematica: LinearSolve[A, b]
  Ada:         x := Solve(A, B), but no condition number estimate or error bounds are provided.

SSYSVX: Solves a real symmetric indefinite system of linear equations Ax = b, and provides an estimate of the condition number and error bounds on the solution.
  Mathematica: LinearSolve[A, b]
  Ada:         x := Solve(A, B), but no condition number estimate or error bounds are provided.
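
One ingredient behind the error bounds in these expert drivers is iterative refinement. A generic Matlab sketch (made-up data, not a LAPACK call) of one refinement step and a crude forward error bound.

    % Hypothetical ill-conditioned data: one step of iterative refinement.
    A = hilb(6);
    b = ones(6,1);
    x = A\b;                                     % initial solve
    r = b - A*x;                                 % residual
    x = x + A\r;                                 % one refinement step
    ferr = cond(A,1)*norm(b - A*x,1)/norm(b,1)   % crude forward error bound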

SSPSVX: Solves a real symmetric indefinite system of linear equations Ax = b, where A is held in packed storage, and provides an estimate of the condition number and error bounds on the solution.
  Mathematica: LinearSolve[A, b]
  Ada:         x := Solve(A, B), but no condition number estimate or error bounds are provided.

SGELSY: Computes the minimum norm least squares solution to an over- or under-determined system of linear equations Ax = b, using a complete orthogonal factorization of A.
  Matlab:      for underdetermined: pinv(A)*b or lsqlin(A,b)
  Mathematica: for overdetermined: LinearSolve[A, b]; for underdetermined: PseudoInverse[A].b or LeastSquares[A, b]
  Ada:         x := Solve(A, B)

SGELSS: Computes the minimum norm least squares solution to an over- or under-determined system of linear equations Ax = b, using the singular value decomposition of A.
  Matlab:      for underdetermined: pinv(A)*b or lsqlin(A,b)
  Mathematica: for overdetermined: LinearSolve[A, b]; for underdetermined: PseudoInverse[A].b or LeastSquares[A, b]
  Ada:         x := Solve(A, B)

SSYEVX: Computes selected eigenvalues and, optionally, eigenvectors of a symmetric matrix.
  Matlab:      use eig(), then select the eigenvalues wanted
  Ada:         Eigenvalues(A), then select the eigenvalues wanted
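
A minimal Matlab sketch (made-up 2-by-4 data) of the under-determined case in the SGELSY/SGELSS rows: pinv gives the minimum norm solution, backslash a basic one. The lsqminnorm call is an assumption (recent Matlab releases only).

    % Hypothetical underdetermined data: minimum norm versus basic solution.
    A = [1 2 3 4; 2 0 1 1];
    b = [10; 4];
    x_min = pinv(A)*b;           % minimum 2-norm solution
    x_bas = A\b;                 % basic solution (at most rank(A) nonzeros)
    [norm(x_min), norm(x_bas)]   % the pinv solution never has the larger norm
    norm(A*x_min - b)            % the system is consistent, so the residual is ~0
    % Recent releases also provide x = lsqminnorm(A, b)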

SSYEVR: Computes selected eigenvalues and, optionally, eigenvectors of a symmetric matrix. Eigenvalues are computed by the dqds algorithm, and eigenvectors are computed from various good LDL^T representations (also known as Relatively Robust Representations).
  Matlab:      can use eig(), then select the eigenvalues wanted
  Mathematica: Eigensystem[], then select the eigenvalues wanted

SSYGVX: Computes selected eigenvalues and, optionally, eigenvectors of the symmetric-definite eigenproblem Ax = λBx, ABx = λx, or BAx = λx.
  Matlab:      No direct support; can use [V,D] = eig(A, B, 'chol')
  Mathematica: {D, V} = Eigensystem[{A, B}]

SSPEVX: Computes selected eigenvalues and, optionally, eigenvectors of a symmetric matrix in packed storage.
  Matlab:      can use eig(), then select the eigenvalues wanted
  Mathematica: Eigensystem[], then select the eigenvalues wanted

SSPGVX: Computes selected eigenvalues and, optionally, eigenvectors of the symmetric-definite eigenproblem Ax = λBx, ABx = λx, or BAx = λx, where A and B are in packed storage.
  Matlab:      No direct support; can use [V,D] = eig(A, B, 'chol')
  Mathematica: {D, V} = Eigensystem[{A, B}]
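
A minimal Matlab sketch of the "compute everything, then select" workaround noted above for these selected-eigenvalue drivers; the gallery test matrix and the eigs call are assumptions, not entries from the table.

    % Hypothetical symmetric test matrix: compute all eigenvalues, then select.
    A = gallery('moler', 8);
    lam = eig(A);
    wanted = lam(lam > 1 & lam < 10);    % SSYEVX-style interval selection, by hand
    k3 = eigs(A, 3, 'smallestabs');      % only the 3 eigenvalues of smallest magnitude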

SSBEVX: Computes selected eigenvalues and, optionally, eigenvectors of a symmetric band matrix.
  Matlab:      can use eig(), then select the eigenvalues wanted
  Mathematica: Eigensystem[], then select the eigenvalues wanted

SSBGVX: Computes selected eigenvalues and, optionally, eigenvectors of the symmetric-definite banded eigenproblem of the form Ax = λBx. A and B are assumed to be symmetric and banded, and B is also positive definite.
  Matlab:      No direct support; can use [V,D] = eig(A, B, 'chol')
  Mathematica: {D, V} = Eigensystem[{A, B}]

SSTEVX: Computes selected eigenvalues and, optionally, eigenvectors of a symmetric tridiagonal matrix.
  Matlab:      can use eig(), then select the eigenvalues wanted
  Mathematica: Eigensystem[], then select the eigenvalues wanted

SSTEVR: Computes selected eigenvalues and, optionally, eigenvectors of a symmetric tridiagonal matrix. Eigenvalues are computed by the dqds algorithm, and eigenvectors are computed from various good LDL^T representations (also known as Relatively Robust Representations).
  Matlab:      can use eig(), then select the eigenvalues wanted
  Mathematica: Eigensystem[], then select the eigenvalues wanted
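
For the banded and tridiagonal rows, a sketch (made-up data) that keeps the matrix in sparse storage; the spdiags and eigs calls are assumptions, shown with a dense eig fallback and an SSTEVX-style interval selection.

    % Hypothetical banded (tridiagonal) data kept in sparse storage.
    n = 50;
    e = ones(n,1);
    A = spdiags([e -2*e e], -1:1, n, n);   % sparse symmetric tridiagonal
    lam5 = eigs(A, 5, 'largestabs');       % a few eigenvalues of largest magnitude
    lam  = eig(full(A));                   % dense fallback: all eigenvalues
    inrange = lam(lam > -1 & lam < 0);     % SSTEVX-style interval selection, by hand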

SGEESX: Computes the Schur factorization of a general matrix, orders the factorization so that selected eigenvalues are at the top left of the Schur form, and computes reciprocal condition numbers for the average of the selected eigenvalues and for the associated right invariant subspace.
  Matlab:      can use eig() and schur()
  Mathematica: Eigensystem[] and SchurDecomposition[]

SGGESX: Computes the eigenvalues, the real Schur form, and the left and/or right matrices of Schur vectors for a pair of matrices.
  Matlab:      can use eig() and schur()
  Mathematica: SchurDecomposition[]
  Ada:         No support for these eigenvalues; no Schur decomposition.

SGEEVX: Computes the eigenvalues and left and right eigenvectors of a general matrix, with preliminary balancing of the matrix, and computes reciprocal condition numbers for the eigenvalues and right eigenvectors.
  Matlab:      can use eig() and cond()
  Mathematica / Ada: No direct support; no condition number.

SGGEVX: Computes the eigenvalues, and the left and/or right eigenvectors, for a pair of matrices.
  Matlab:      [V,D] = eig(A, B, 'chol')
  Mathematica: {D, V} = Eigensystem[{A, B}]
  Ada:         No support for generalized eigenvalues.
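
A minimal Matlab sketch (made-up badly scaled data) for the SGEEVX row: balance, eig(A,'nobalance') and condeig expose the balancing step and per-eigenvalue condition numbers; these calls are assumptions, not entries from the table.

    % Hypothetical badly scaled data: balancing and eigenvalue condition numbers.
    A = [1 1e6; 1e-6 2];
    [T,Ab] = balance(A);         % Ab = T\A*T is the balanced matrix
    lam1 = eig(A);               % default: balancing on
    lam2 = eig(A, 'nobalance');  % balancing switched off
    c = condeig(A);              % condition number of each eigenvalue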

References

Ada 2005 Reference Manual.
Mathematica and Matlab help.
