Continuous analogues of matrix factorizations NASC seminar, 9th May 2014


1 Continuous analogues of matrix factorizations. NASC seminar, 9th May 2014. Alex Townsend, DPhil student, Mathematical Institute, University of Oxford (joint work with Nick Trefethen). Many thanks to Gil Strang, MIT. Work supported by EPSRC grant EP/P505666/1.

2–8 Introduction: Discrete vs continuous

    v = column vector       →  f(x)                        chebfun      [Battles & Trefethen, 04]
    A = tall skinny matrix  →  [ f_1(x) | ... | f_n(x) ]   quasimatrix  [Stewart, 98]
    A = square matrix       →  f(x, y)                     chebfun2     [T & Trefethen, 13]
    A v                     →  ∫ f(s, y) v(s) ds           chebop       [Driscoll, Bornemann, & Trefethen, 08]
    SVD, QR, LU, Chol?      →  ?                           cmatrix      [T & Trefethen, 14]

Interested in continuous analogues rather than infinite analogues.
Aside: infinite analogues are Schmidt, Wiener–Hopf, infinite-dimensional QR, etc.
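A minimal sketch in MATLAB/Chebfun of the objects in this table (assuming Chebfun is installed and on the path; the particular functions are illustrative choices):

    x = chebfun('x');                  % the identity chebfun on [-1,1]
    f = sin(5*x) + exp(x);             % a chebfun: continuous analogue of a column vector
    A = [1 + 0*x, x, x.^2, x.^3];      % a quasimatrix: [-1,1] x 4, with columns f_1,...,f_4
    F = chebfun2(@(x,y) cos(x.*y));    % a chebfun2: continuous analogue of a matrix
    [Q, R] = qr(A);                    % Householder triangularization of a quasimatrix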

9 Introduction: Matrices, quasimatrices, cmatrices

    matrix:       m × n
    quasimatrix:  [a, b] × n
    cmatrix:      [a, b] × [c, d]

A cmatrix is a continuous function of (y, x) ∈ [a, b] × [c, d].

10–14 Introduction: Matrices vs cmatrices

An m × n matrix: entries indexed by {1, ..., m} × {1, ..., n}.
An [a, b] × [c, d] cmatrix: entries indexed by [a, b] × [c, d].

    {1, ..., m}      subset of R              Question
    Well-ordered     Not well-ordered by <    What is the 1st column?
    Successor        No successor             What is the next column?
    Null set         Null subsets             What sparsity makes sense?
    Finite           Infinite                 Convergence?

Three heroes: smoothness, pivoting, ε_mach.

15–19 Singular value decomposition: Matrix factorization

A = U Σ V^T,   Σ = diagonal,   U, V = orthonormal columns

Exists: the SVD exists and is (almost) unique.
Application: the best rank-r approximation A_r is the 1st r terms (in the 2- and F-norms).
Separable model: A = σ_1 u_1 v_1^T + ... + σ_n u_n v_n^T is a sum of outer products.
Computation: bidiagonalize, then iterate [Golub & Kahan (1965)].
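A plain MATLAB check of the best rank-r approximation property (the test matrix is an arbitrary illustrative choice):

    rng(0);
    A = gallery('randsvd', 100, 1e8);          % 100 x 100 matrix with decaying singular values
    [U, S, V] = svd(A);
    r = 10;
    Ar = U(:,1:r) * S(1:r,1:r) * V(:,1:r)';    % first r terms of the outer-product sum
    norm(A - Ar)                               % = sigma_{r+1}, up to roundoff (2-norm)
    norm(A - Ar, 'fro')                        % = sqrt(sigma_{r+1}^2 + ... + sigma_100^2) (F-norm)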

20–24 Singular value decomposition: Continuous analogue

A = U Σ V^T,   Σ = diagonal,   U, V = orthonormal columns (at least formally).
(Figure: the quasimatrix picture, with columns u_1, u_2, ... of U and rows σ_1 v_1^T, σ_2 v_2^T, ... of Σ V^T.)

Exists: the SVD exists if A is continuous, and is (almost) unique [Schmidt 1907].
Application: the best rank-r approximation f_r is the 1st r terms (L2-norm) [Weyl 1912].
Separable model: A = σ_1 u_1 v_1^T + σ_2 u_2 v_2^T + ... is a sum of outer products.
Computation: avoid bidiagonalization.

25 Singular value decomposition: Absolute and uniform convergence of the SVD

Theorem. Let A be an [a, b] × [c, d] cmatrix that is (uniformly) Lipschitz continuous in both variables. Then the SVD of A exists, the singular values are unique with σ_j → 0 as j → ∞, and

    A = σ_1 u_1 v_1^T + σ_2 u_2 v_2^T + ... ,

where the series converges uniformly and absolutely to A.

Proof: see [Schmidt 1907], [Hammerstein 1923], and [Smithies 1937].

If A satisfies the assumptions of the theorem, then A = U Σ V^T.
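A hedged Chebfun2 sketch of this decay (assuming Chebfun is installed and that svd applied to a chebfun2 returns its singular values, as in the Chebfun2 documentation; the kernel is an arbitrary smooth example):

    A = chebfun2(@(x,y) 1./(1 + 10*(x.^2 + y.^2)));    % a smooth cmatrix on [-1,1]^2
    s = svd(A);                                        % singular values of the approximant of A
    semilogy(s, '.-'), xlabel('j'), ylabel('\sigma_j') % rapid decay toward zero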

26 Singular value decomposition: Algorithm

1. Compute A = Q_A R_A.
2. Compute the quasimatrix QR factorization R_A^T = Q_R R_R (Householder triangularization of a quasimatrix [Trefethen 08]).
3. Compute the SVD R_R = U Σ V^T. Then

    A = (Q_A V) Σ (Q_R U)^T.

This is a continuous analogue of a discrete algorithm [Ipsen 90].
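A discrete sketch of these three steps in plain MATLAB, with a tall numerically low-rank matrix standing in for the cmatrix (the sizes and test matrix are illustrative assumptions):

    rng(0);
    A = randn(500, 8) * diag(2.^(-(1:8))) * randn(8, 300);   % numerically low-rank test matrix
    [Qa, Ra] = qr(A, 0);               % Step 1: A = Qa*Ra
    [Qr, Rr] = qr(Ra', 0);             % Step 2: Ra' = Qr*Rr (quasimatrix QR in the continuous case)
    [U, S, V] = svd(Rr);               % Step 3: Rr = U*S*V'
    norm(A - (Qa*V) * S * (Qr*U)')     % ~ machine precision

Since A = Qa*Ra = Qa*(Qr*Rr)' = Qa*Rr'*Qr', substituting Rr = U*S*V' gives A = (Qa*V)*S*(Qr*U)', which is the factorization asserted in step 3.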

27 Singular value decomposition: Related work

Erhard Schmidt; James Mercer; Carl Eckart & Gale Young; Autonne, Bateman, Hammerstein, Kellogg, Picard, Smithies, Weyl; Aizerman, Braverman, König, Rozonoer; Golub, Hestenes, Kahan, Kogbetliantz, Reinsch.

28 LU decomposition: Matrix factorization

A = P^{-1} L U,   P = permutation,   L = unit lower-triangular,   U = upper-triangular

P^{-1} L = "psychologically lower-triangular"

Exists: it (almost) exists, and with extra conditions is (almost) unique.
Application: used to solve dense linear systems Ax = b.
Separable model: A = l_1 u_1^T + ... + l_n u_n^T is a sum of outer products [Pan 2000].
Computation: Gaussian elimination with pivoting.
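A plain MATLAB reminder of the two forms (arbitrary test matrix): with two outputs, lu returns the permuted factor P^{-1}L directly.

    A = magic(5);
    [Lp, U1] = lu(A);      % Lp = P'*L: "psychologically lower-triangular"
    [L, U, P] = lu(A);     % P*A = L*U with L genuinely unit lower-triangular
    norm(Lp - P'*L)        % ~ 0: the two forms agree
    norm(A - P'*L*U)       % ~ machine precision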

29 LU decomposition: Continuous analogue

A = L U,   L = unit lower-triangular quasimatrix (columns l_1, l_2, ...),   U = upper-triangular quasimatrix (rows u_1^T, u_2^T, ...)

Exists: it (usually) exists, and with extra conditions is (almost) unique.
Application: can be used to solve integral equations.
Separable model: A = l_1 u_1^T + l_2 u_2^T + ... is a sum of outer products.
Computation: continuous analogue of GECP (GE with complete pivoting).

30–32 LU decomposition: Computation

The standard point of view: A = P^{-1} L U.

A different point of view:

    A ← A − A(:, k) A(j, :) / A(j, k)            (GE step for matrices)
    A ← A − A(:, x_0) A(y_0, :) / A(y_0, x_0)    (GE step for functions)

Each step of GE is a rank-1 update. We use complete pivoting. Pivoting orders the columns and rows.
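A minimal plain-MATLAB sketch of this iteration on a sampled smooth function (the grid size, function, and tolerance are illustrative assumptions): each pass chooses a complete pivot and subtracts a rank-1 term.

    n = 200;
    [X, Y] = meshgrid(linspace(-1, 1, n));
    A = 1 ./ (1 + 25*(X.^2 + Y.^2));            % samples of a smooth function
    E = A;  L = zeros(n, 0);  U = zeros(0, n);  tol = 1e-10;
    while max(abs(E(:))) > tol * max(abs(A(:)))
        [~, idx] = max(abs(E(:)));              % complete pivoting: largest remaining entry
        [j, k] = ind2sub(size(E), idx);
        L = [L, E(:,k) / E(j,k)];               % new column l_j (value 1 at the pivot row)
        U = [U; E(j,:)];                        % new row u_j^T
        E = E - E(:,k) * E(j,:) / E(j,k);       % rank-1 GE update
    end
    size(L, 2)                                  % number of GE steps = rank of the approximant
    norm(A - L*U, 'fro') / norm(A, 'fro')       % small relative error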

33–40 LU decomposition: What is a triangular quasimatrix?

A = L U,   L = unit lower-triangular (columns l_1, l_2, ...),   U = upper-triangular (rows u_1^T, u_2^T, ...).

What is a lower-triangular quasimatrix?

(Figure: the columns of L with prescribed zeros and ones at the points y_1, y_2, y_3, y_4, y_5, ...; red dots = 0's, blue squares = 1's.)

The position of the 0's is determined by the pivoting strategy. Forward substitution has a continuous analogue. More precisely, L is lower-triangular with respect to y_1, y_2, ...

41 LU decomposition: Absolute and uniform convergence of LU

Theorem. Let A be an [a, b] × [c, d] continuous cmatrix. Suppose A(·, x) is analytic in the stadium of radius 2ρ(b − a) about [a, b] for some ρ > 1, where it is bounded in absolute value by M (uniformly in x). Then

    A = l_1 u_1^T + l_2 u_2^T + ... ,

where the series converges uniformly and absolutely to A. Moreover, the error after k terms satisfies

    || A − (l_1 u_1^T + ... + l_k u_k^T) || ≲ M ρ^{−k}.

(Figure: the stadium of radius 2ρ(b − a) about [a, b].)

42 LU decomposition: Chebfun2 application
Low rank function approximation

A = chebfun2(@(x,y) cos(10*(x.^2+y)) + sin(10*(x+y.^2))); contour(A)

(Figure: contour plots of approximants of rank 2, 5, 28, 33, 65, and 125; dots mark the pivot locations (y_k, x_k).)

    A(y, x) ≈ sum_{j=1}^{k} l_j(y) u_j(x),

    ∫_c^d ∫_a^b A(y, x) dy dx ≈ sum_{j=1}^{k} ( ∫_a^b l_j(y) dy ) ( ∫_c^d u_j(x) dx ).
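A hedged usage sketch of this quadrature shortcut (assuming Chebfun is installed, and that the Chebfun2 commands rank and sum2 report the numerical rank and the double integral):

    A = chebfun2(@(x,y) cos(10*(x.^2+y)) + sin(10*(x+y.^2)));
    rank(A)     % numerical rank k of the stored low-rank representation
    sum2(A)     % double integral of A, assembled from 1D integrals of the l_j and u_j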

43 LU decomposition: Chebfun2 application
SVD is optimal, but GE can be faster

(Figure: relative error in L^∞ versus rank of approximant, SVD vs GE, for two families of functions.)

2D Runge function: A_γ(y, x) = 1/(1 + γ(x^2 + y^2)), for γ = 1 and γ = 100.
Wendland's CSRBFs: A_s(y, x) = φ_{3,s}(||x − y||_2) ∈ C^{2s}, shown for φ_{3,0} ∈ C^0, φ_{3,1}, and φ_{3,3} ∈ C^6.

44 LU decomposition: Related work

Eugene Tyrtyshnikov; Mario Bebendorf; Keith Geddes; Petros Drineas; Goreinov, Oseledets, Savostyanov, Zamarashkin; Gesenhues, Griebel, Hackbusch, Rjasanow; Carvajal, Chapman; Candès, Greengard, Mahoney, Martinsson, Rokhlin.

Moral of the story: iterative GE is everywhere, under different guises.
Many others: Halko, Liberty, Martinsson, O'Neil, Tropp, Tygert, Woolfe, etc.

45 Cholesky factorization: Matrix factorization

A = R^T R,   R = upper-triangular

Exists: exists and is unique if A is a positive-definite matrix.
Application: a numerical test for a positive-definite matrix.
Separable model: A = r_1 r_1^T + ... + r_n r_n^T is a sum of outer products.
Computation: the Cholesky algorithm, i.e., GECP on a positive definite matrix.
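A plain MATLAB illustration of chol as a definiteness test (the matrices are arbitrary examples): the second output reports failure without throwing an error.

    A = gallery('lehmer', 6);     % symmetric positive definite
    [R, p] = chol(A);             % p == 0: A is positive definite and A = R'*R
    B = A - 2*eye(6);             % symmetric but indefinite
    [~, q] = chol(B);             % q > 0: B is not positive definite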

46 Cholesky factorization: Continuous analogue

A = R^T R,   R = upper-triangular quasimatrix (rows r_1^T, r_2^T, ...), at least formally.

Pivoting: essential. A continuous analogue of pivoted Cholesky.
Exists: exists and is essentially unique for nonnegative definite functions.

Definition. An [a, b] × [a, b] continuous symmetric cmatrix A is nonnegative definite if

    v^T A v = ∫_a^b ∫_a^b v(y) A(y, x) v(x) dx dy ≥ 0   for all v ∈ C[a, b].
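An illustrative plain-MATLAB discretization of this definition (the kernel, grid, and trapezoidal rule are assumptions made for the example): the double integral becomes a weighted quadratic form, which is nonnegative for every v exactly when the weighted sample matrix is positive semidefinite.

    Afun = @(y, x) exp(-(y - x).^2);            % a symmetric, nonnegative definite kernel
    n = 200;
    t = linspace(-1, 1, n)';                    % grid on [a, b] = [-1, 1]
    w = (2/(n-1)) * [0.5; ones(n-2,1); 0.5];    % trapezoidal quadrature weights
    [Xg, Yg] = meshgrid(t);
    K = Afun(Yg, Xg);                           % K(i,j) = A(t_i, t_j)
    Q = diag(sqrt(w)) * K * diag(sqrt(w));      % congruent to diag(w)*K*diag(w), the discretized form
    min(eig((Q + Q')/2))                        % >= -(small tolerance) indicates nonnegative definiteness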

47 Cholesky factorization: Convergence

Theorem. Let A be an [a, b] × [a, b] continuous, symmetric, and nonnegative definite cmatrix. Suppose that A(·, x) is analytic in the closed Bernstein ellipse E_{2ρ(b−a)} with foci a and b, with ρ > 1, and bounded in absolute value by M, uniformly in x. Then

    A = r_1 r_1^T + r_2 r_2^T + ... ,

where the series converges uniformly and absolutely to A. Moreover, the error after k terms satisfies

    || A − (r_1 r_1^T + ... + r_k r_k^T) || ≤ 32 M k ρ^{−k} / (4ρ − 1).

(Figure: the region E_{2ρ(b−a)} about [a, b].)

48 Cholesky factorization: Computation

Pivoted Cholesky = GECP on a nonnegative definite function.

(Figure: pivot locations in Cholesky, and pivot size versus step.)

Each step is a rank-1 update:

    A ← A − A(:, x_0) A(x_0, :) / A(x_0, x_0).

Always take the absolute maximum on the diagonal, even if there is a tie with an off-diagonal entry.
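A minimal plain-MATLAB sketch of this diagonally pivoted iteration on a sampled nonnegative definite kernel (the kernel, grid, and tolerance are illustrative assumptions):

    n = 300;
    [X, Y] = meshgrid(linspace(-1, 1, n));
    A = exp(-3*(X - Y).^2);                     % samples of a nonnegative definite kernel
    E = A;  R = zeros(0, n);  tol = 1e-10;
    while max(diag(E)) > tol
        [piv, k] = max(diag(E));                % absolute maximum, always taken on the diagonal
        r = E(k, :) / sqrt(piv);                % new row of R
        R = [R; r];
        E = E - r' * r;                         % rank-1 update: E <- E - E(:,k)*E(k,:)/E(k,k)
    end
    norm(A - R'*R, 'fro') / norm(A, 'fro')      % small relative error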

49 Cholesky factorization: Chebfun2 application
A test for symmetric nonnegative definite functions

A = chebfun2(@(x,y) cos(10*x.*y) + y + x.^2 + sin(10*x.*y)); B = A*A'; chol(B)

(Figure: pivot locations for an inverse multiquadric kernel.)

All the pivots are nonnegative and lie on the y = x line, so the function is nonnegative definite.

50 Demo

51 References

Z. Battles & L. N. Trefethen, An extension of MATLAB to continuous functions and operators, SISC, 25 (2004).
T. A. Driscoll, F. Bornemann, & L. N. Trefethen, The chebop system for automatic solution of differential equations, BIT, 48 (2008).
C. Eckart & G. Young, The approximation of one matrix by another of lower rank, Psychometrika, 1 (1936).
N. J. Higham, Accuracy and Stability of Numerical Algorithms, 2nd edition, SIAM, 2002.
E. Schmidt, Zur Theorie der linearen und nichtlinearen Integralgleichungen. I. Teil: Entwicklung willkürlicher Funktionen nach Systemen vorgeschriebener, Math. Ann., 63 (1907).
G. W. Stewart, Afternotes Goes to Graduate School, SIAM, Philadelphia, 1998.
A. Townsend & L. N. Trefethen, Gaussian elimination as an iterative algorithm, SIAM News, March 2013.
A. Townsend & L. N. Trefethen, An extension of Chebfun to two dimensions, to appear in SISC, 2013.
