Lecture 4 INF-MAT 4350 2010: 5. Fast Direct Solution of Large Linear Systems


1 Lecture 4 INF-MAT 4350 2010: 5. Fast Direct Solution of Large Linear Systems. Tom Lyche, Centre of Mathematics for Applications, Department of Informatics, University of Oslo. September 16, 2010

2 Test Matrix T_1 ∈ R^{m,m}

T_1 := \begin{bmatrix} d & a & 0 & \cdots & 0 \\ a & d & a & \ddots & \vdots \\ 0 & a & d & \ddots & 0 \\ \vdots & \ddots & \ddots & \ddots & a \\ 0 & \cdots & 0 & a & d \end{bmatrix},   (1)

where a, d ∈ R. We call this the 1D test matrix.
Second Derivative Matrix T: a = −1, d = 2.
T_1 is symmetric and tridiagonal; diagonally dominant if d ≥ 2|a|; nonsingular if diagonally dominant and a ≠ 0; strictly diagonally dominant if d > 2|a|.

3 Properties of T_2 = T_1 ⊗ I + I ⊗ T_1

For m = 3:

T_2 := \begin{bmatrix}
2d & a & 0 & a & 0 & 0 & 0 & 0 & 0 \\
a & 2d & a & 0 & a & 0 & 0 & 0 & 0 \\
0 & a & 2d & 0 & 0 & a & 0 & 0 & 0 \\
a & 0 & 0 & 2d & a & 0 & a & 0 & 0 \\
0 & a & 0 & a & 2d & a & 0 & a & 0 \\
0 & 0 & a & 0 & a & 2d & 0 & 0 & a \\
0 & 0 & 0 & a & 0 & 0 & 2d & a & 0 \\
0 & 0 & 0 & 0 & a & 0 & a & 2d & a \\
0 & 0 & 0 & 0 & 0 & a & 0 & a & 2d
\end{bmatrix}.

1. It is the Poisson matrix A for a = −1, d = 2.
2. It is symmetric positive definite if d > 0 and d ≥ 2|a|.
3. It is banded.
4. It is block-tridiagonal.
5. We know the eigenvalues and eigenvectors.
6. The eigenvectors are orthogonal.
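
As an illustration (not on the original slide), the relation T_2 = T_1 ⊗ I + I ⊗ T_1 translates directly into Matlab via kron; a minimal sketch:

m = 3; a = -1; d = 2;
e = ones(m,1);
T1 = spdiags([a*e, d*e, a*e], -1:1, m, m);   % 1D test matrix, stored sparse
I  = speye(m);
T2 = kron(T1, I) + kron(I, T1);              % 2D test matrix of order m^2
spy(T2)                                      % shows the block-tridiagonal pattern

For a = −1, d = 2 this produces the Poisson matrix of order m².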

4 Fast Methods. Consider solving Ax = b, where A = T_1 or A = T_2. For T_1 there is an O(n) algorithm (sketched below). What about T_2? Consider d = 2, a = −1, the Poisson matrix. Call it A.
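
For reference, a minimal sketch of the O(n) solver for T_1 x = b: the standard tridiagonal LU, here specialized to constant diagonals. (trisolve is a hypothetical name; no pivoting is done, which is safe when d ≥ 2|a| and a ≠ 0.)

function x = trisolve(a, d, b)
% Solve tridiag(a,d,a)*x = b in O(n) flops, no pivoting.
n = length(b); u = zeros(n,1); x = zeros(n,1);
u(1) = d;
for k = 2:n                       % LU factorization and forward sweep
    l = a/u(k-1);                 % multiplier l_k
    u(k) = d - l*a;               % pivot u_k
    b(k) = b(k) - l*b(k-1);       % forward substitution
end
x(n) = b(n)/u(n);
for k = n-1:-1:1                  % back substitution
    x(k) = (b(k) - a*x(k+1))/u(k);
end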

5 Full Cholesky. Symmetric positive definite system Ax = b of order n = m². A = R^T R with R upper triangular. Full Cholesky: #flops O(n³/3). If m = 10³ then n = 10⁶ and the factorization takes years of computing time. We need to use the matrix structure. [Figure: sparsity pattern of A, nz = 105]

6 Banded Cholesky. System Ax = b of order n = m² and bandwidth d = m = √n. Band Cholesky: #flops O(nd²) = O(n²). But if m = 10³ then it still takes hours of computing time. We get fill-in within the band, so the O(n²) count is realistic. [Figure: sparsity pattern of the banded Cholesky factor, nz = 1009]

7 Use the Block Structure of A. A is block tridiagonal:

A = \begin{bmatrix} T + 2I & -I & 0 \\ -I & T + 2I & -I \\ 0 & -I & T + 2I \end{bmatrix}

(shown here for m = 3), where T = tridiag(−1, 2, −1).

8 LU of a Block Tridiagonal Matrix

\begin{bmatrix} D_1 & C_1 & 0 \\ A_2 & D_2 & C_2 \\ 0 & A_3 & D_3 \end{bmatrix} = \begin{bmatrix} I & 0 & 0 \\ L_2 & I & 0 \\ 0 & L_3 & I \end{bmatrix} \begin{bmatrix} R_1 & C_1 & 0 \\ 0 & R_2 & C_2 \\ 0 & 0 & R_3 \end{bmatrix}.

Given A_i, D_i, C_i with D_i square, find L_i, R_i. Compare (i, j) block entries on both sides:

(1,1): D_1 = R_1 ⟹ R_1 = D_1
(2,1): A_2 = L_2 R_1 ⟹ L_2 = A_2 R_1^{−1}
(2,2): D_2 = L_2 C_1 + R_2 ⟹ R_2 = D_2 − L_2 C_1
(3,2): A_3 = L_3 R_2 ⟹ L_3 = A_3 R_2^{−1}
(3,3): D_3 = L_3 C_2 + R_3 ⟹ R_3 = D_3 − L_3 C_2

In general: R_1 = D_1, and L_k = A_k R_{k−1}^{−1}, R_k = D_k − L_k C_{k−1} for k = 2, 3, ..., m.

9 LU of Block Tridiagonal Matrix. Ax = b, with b = [b_1^T, ..., b_m^T]^T and

\begin{bmatrix} D_1 & C_1 & & \\ A_2 & D_2 & \ddots & \\ & \ddots & \ddots & C_{m-1} \\ & & A_m & D_m \end{bmatrix} = \begin{bmatrix} I & & & \\ L_2 & I & & \\ & \ddots & \ddots & \\ & & L_m & I \end{bmatrix} \begin{bmatrix} R_1 & C_1 & & \\ & R_2 & \ddots & \\ & & \ddots & C_{m-1} \\ & & & R_m \end{bmatrix}.

R_1 = D_1, L_k = A_k R_{k−1}^{−1}, R_k = D_k − L_k C_{k−1}, k = 2, 3, ..., m,
y_1 = b_1, y_k = b_k − L_k y_{k−1}, k = 2, 3, ..., m,
x_m = R_m^{−1} y_m, x_k = R_k^{−1}(y_k − C_k x_{k+1}), k = m−1, ..., 2, 1.   (2)

The solution is then x^T = [x_1^T, ..., x_m^T].
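
A minimal Matlab sketch of (2), with the blocks stored in cell arrays. (blocktrisolve is a hypothetical name; no pivoting, as on the slide.)

function x = blocktrisolve(A, D, C, b)
% Solve a block tridiagonal system by block LU, formulas (2).
% D{1..m}: diagonal blocks, A{2..m}: subdiagonal blocks,
% C{1..m-1}: superdiagonal blocks, b{1..m}: right-hand-side blocks.
m = length(D);
R = cell(m,1); y = cell(m,1); x = cell(m,1);
R{1} = D{1}; y{1} = b{1};
for k = 2:m
    L    = A{k}/R{k-1};          % L_k = A_k R_{k-1}^{-1}
    R{k} = D{k} - L*C{k-1};      % R_k = D_k - L_k C_{k-1}
    y{k} = b{k} - L*y{k-1};      % forward substitution
end
x{m} = R{m}\y{m};
for k = m-1:-1:1
    x{k} = R{k}\(y{k} - C{k}*x{k+1});   % back substitution
end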

10 Block Cholesky factor R. We get fill-in in the R_i blocks. Computing each R_i requires O(m³) flops, so we need m · O(m³) = O(m⁴) = O(n²) flops, and the O(n²) count remains. Block methods can nevertheless be faster in practice because we replace some loops with matrix multiplications. [Figure: sparsity pattern of R, nz = 1018]

11 Other Methods. Iterative methods. Multigrid. Fast solvers based on diagonalization of T = tridiag(−1, 2, −1) and the Fast Fourier Transform. For the latter we solve Ax = b in the form TV + VT = h²F.

12 Eigenpairs of T = tridiag(−1, 2, −1) ∈ R^{m,m}. Let h = 1/(m+1). We know that Ts_j = λ_j s_j for j = 1, ..., m, where

s_j = [sin(jπh), sin(2jπh), ..., sin(mjπh)]^T,
λ_j = 4 sin²(jπh/2),
s_j^T s_k = (1/(2h)) δ_{j,k}, j, k = 1, ..., m.

13 The Sine Matrix S

S := [s_1, ..., s_m], D = diag(λ_1, ..., λ_m),
TS = SD, S^T S = S² = (1/(2h)) I.

S is almost, but not quite, orthogonal.
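
A two-line numerical check of the identity S² = I/(2h) (a sketch, not on the original slide):

m = 5; h = 1/(m+1);
S = sin(pi*h*(1:m)'*(1:m));      % S(j,k) = sin(jk*pi*h)
norm(S*S - eye(m)/(2*h))         % of order 1e-15: confirms S^2 = I/(2h)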

14 Find V Using Diagonalization. We define a matrix X by V = SXS, where V is the solution of TV + VT = h²F. Then

TV + VT = h²F
⟺ TSXS + SXST = h²F             (V = SXS)
⟺ STSXS² + S²XSTS = h²SFS       (multiply by S on both sides)
⟺ S²DXS² + S²XS²D = h²SFS       (TS = SD)
⟺ DX + XD = 4h⁴ SFS =: B        (S² = I/(2h))

Componentwise, λ_j x_{jk} + x_{jk} λ_k = b_{jk}, so x_{jk} = b_{jk}/(λ_j + λ_k) for all j, k.

15 Algorithm (A Simple Fast Poisson Solver)

1. h = 1/(m+1); F = (f(jh, kh))_{j,k=1}^m; S = (sin(jkπh))_{j,k=1}^m; σ = (sin²(jπh/2))_{j=1}^m;
2. G = (g_{j,k}) = SFS;
3. X = (x_{j,k})_{j,k=1}^m, where x_{j,k} = h⁴ g_{j,k}/(σ_j + σ_k);
4. V = SXS.   (3)

The output is the exact solution of the discrete Poisson equation on a square, computed in O(n^{3/2}) operations. Only a couple of m × m matrices are required for storage. Next: use the FFT to reduce the complexity to O(n log₂ n).
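
Algorithm (3) in a few lines of Matlab (a sketch; fastpoisson is a hypothetical name, and f is assumed to be a vectorized function handle for the right-hand side):

function V = fastpoisson(f, m)
% Simple fast Poisson solver, steps 1-4 of (3); O(n^{3/2}) flops, n = m^2.
h = 1/(m+1);
[X1, Y1] = ndgrid((1:m)*h);        % interior grid points (jh, kh)
F = f(X1, Y1);
S = sin(pi*h*(1:m)'*(1:m));        % sine matrix
sigma = sin((1:m)*pi*h/2).^2;      % sigma_j = sin^2(j*pi*h/2)
G = S*F*S;
[SJ, SK] = ndgrid(sigma);          % SJ+SK has entries sigma_j + sigma_k
X = h^4*G./(SJ + SK);              % x_jk = h^4 g_jk/(sigma_j + sigma_k)
V = S*X*S;

For example, V = fastpoisson(@(x,y) ones(size(x)), 100) solves the discrete Poisson problem with f ≡ 1 on a 100-by-100 interior grid.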

16 Discrete Sine Transform (DST). Given v = [v_1, ..., v_m]^T ∈ R^m, we say that the vector w = [w_1, ..., w_m]^T given by

w_j = \sum_{k=1}^m \sin\Big(\frac{jkπ}{m+1}\Big) v_k, j = 1, ..., m,

is the Discrete Sine Transform (DST) of v. In matrix form we can write the DST as the matrix-vector product w = Sv, where S is the sine matrix. We can identify B = SA as the DST of A ∈ R^{m,n}, i.e., as the DST of the columns of A.

17 The Product B = AS. The product B = AS can also be interpreted as a DST. Indeed, since S is symmetric we have B = (SA^T)^T, which means that B is the transpose of the DST of the rows of A. It follows that we can compute the unknowns V in Algorithm 15 by carrying out Discrete Sine Transforms on 4 m-by-m matrices, in addition to the computation of X.

18 The Euler Formula

e^{iφ} = cos φ + i sin φ, e^{−iφ} = cos φ − i sin φ, φ ∈ R, i = √(−1),
cos φ = (e^{iφ} + e^{−iφ})/2, sin φ = (e^{iφ} − e^{−iφ})/(2i).

Consider

ω_N := e^{−2πi/N} = cos(2π/N) − i sin(2π/N), so that ω_N^N = 1.

Note that

ω_{2m+2}^{jk} = e^{−2jkπi/(2m+2)} = e^{−jkπhi} = cos(jkπh) − i sin(jkπh).

19 Discrete Fourier Transform (DFT). If

z_j = \sum_{k=1}^N ω_N^{(j−1)(k−1)} y_k, j = 1, ..., N,

then z = [z_1, ..., z_N]^T is the DFT of y = [y_1, ..., y_N]^T. In matrix form, z = F_N y, where

F_N := (ω_N^{(j−1)(k−1)})_{j,k=1}^N ∈ C^{N,N}.

F_N is called the Fourier matrix.

20 Example

ω_4 = e^{−2πi/4} = cos(π/2) − i sin(π/2) = −i,

F_4 = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & ω_4 & ω_4^2 & ω_4^3 \\ 1 & ω_4^2 & ω_4^4 & ω_4^6 \\ 1 & ω_4^3 & ω_4^6 & ω_4^9 \end{bmatrix} = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -i & -1 & i \\ 1 & -1 & 1 & -1 \\ 1 & i & -1 & -i \end{bmatrix}.

21 Connection Between DST and DFT. The DST of order m can be computed from a DFT of order N = 2m+2 as follows:

Lemma. Given a positive integer m and a vector x ∈ R^m, component k of S_m x equals i/2 times component k+1 of F_{2m+2} z, where

z = (0, x_1, ..., x_m, 0, −x_m, −x_{m−1}, ..., −x_1)^T ∈ R^{2m+2}.

In symbols, (S_m x)_k = (i/2)(F_{2m+2} z)_{k+1}, k = 1, ..., m.
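
The Lemma gives a direct recipe for computing a DST with Matlab's built-in fft, which uses the same convention ω_N = e^{−2πi/N}; a minimal sketch (dst_fft is a hypothetical name):

function w = dst_fft(x)
% DST of a vector via the FFT, using the Lemma:
% (S_m x)_k = (i/2) (F_{2m+2} z)_{k+1}.
x = x(:); m = length(x);
z = [0; x; 0; -x(end:-1:1)];      % the padded, odd-extended vector
u = fft(z);                       % computes F_{2m+2} z
w = real(0.5i*u(2:m+1));          % result is real up to rounding error

A quick check: with S built as on slide 13, norm(dst_fft(x) - S*x) is of the order of rounding error.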

22 Proof. Let ω = ω_{2m+2}. Component k+1 of F_{2m+2} z is given by

(F_{2m+2} z)_{k+1} = \sum_{j=1}^m x_j ω^{jk} − \sum_{j=1}^m x_j ω^{(2m+2−j)k}
= \sum_{j=1}^m x_j (ω^{jk} − ω^{−jk})
= −2i \sum_{j=1}^m x_j \sin(jkπh) = −2i (S_m x)_k.

Dividing both sides by −2i proves the Lemma.

23 Fast Fourier Transform (FFT). Suppose N is even. Express F_N in terms of F_{N/2}. Reorder the columns of F_N so that the odd-indexed columns appear before the even-indexed ones:

P_N := (e_1, e_3, ..., e_{N−1}, e_2, e_4, ..., e_N),

F_4 = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -i & -1 & i \\ 1 & -1 & 1 & -1 \\ 1 & i & -1 & -i \end{bmatrix}, F_4 P_4 = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & -i & i \\ 1 & 1 & -1 & -1 \\ 1 & -1 & i & -i \end{bmatrix},

F_2 = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}, D_2 = diag(1, ω_4) = \begin{bmatrix} 1 & 0 \\ 0 & -i \end{bmatrix},

F_4 P_4 = \begin{bmatrix} F_2 & D_2 F_2 \\ F_2 & -D_2 F_2 \end{bmatrix}.

24 F_{2m}, P_{2m}, F_m and D_m

Theorem. If N = 2m is even then

F_{2m} P_{2m} = \begin{bmatrix} F_m & D_m F_m \\ F_m & -D_m F_m \end{bmatrix},   (4)

where

D_m = diag(1, ω_N, ω_N^2, ..., ω_N^{m−1}).   (5)

25 Proof. Fix integers j, k with 0 ≤ j, k ≤ m−1 and set p = j+1 and q = k+1. Since ω_m^m = 1, ω_N^2 = ω_m, and ω_N^m = −1, we find, by considering the elements in the four sub-blocks in turn,

(F_{2m} P_{2m})_{p,q} = ω_N^{j(2k)} = ω_m^{jk} = (F_m)_{p,q},
(F_{2m} P_{2m})_{p+m,q} = ω_N^{(j+m)(2k)} = ω_m^{(j+m)k} = (F_m)_{p,q},
(F_{2m} P_{2m})_{p,q+m} = ω_N^{j(2k+1)} = ω_N^j ω_m^{jk} = (D_m F_m)_{p,q},
(F_{2m} P_{2m})_{p+m,q+m} = ω_N^{(j+m)(2k+1)} = −ω_N^{j(2k+1)} = −(D_m F_m)_{p,q}.

It follows that the four m-by-m blocks of F_{2m} P_{2m} have the required structure.

26 The Basic Step. Using the factorization (4) we can carry out the DFT as a block multiplication. Let y ∈ R^{2m} and set w = P_{2m}^T y = (w_1^T, w_2^T)^T, where w_1, w_2 ∈ R^m. Then F_{2m} y = F_{2m} P_{2m} P_{2m}^T y = F_{2m} P_{2m} w and

F_{2m} P_{2m} w = \begin{bmatrix} F_m & D_m F_m \\ F_m & -D_m F_m \end{bmatrix} \begin{bmatrix} w_1 \\ w_2 \end{bmatrix} = \begin{bmatrix} q_1 + q_2 \\ q_1 - q_2 \end{bmatrix},

where q_1 = F_m w_1 and q_2 = D_m (F_m w_2). So to compute F_{2m} y we need only the two half-size DFTs F_m w_1 and F_m w_2. Note that w_1^T = [y_1, y_3, ..., y_{N−1}], while w_2^T = [y_2, y_4, ..., y_N]. This follows since w^T = [w_1^T, w_2^T] = y^T P_{2m}, and post-multiplying a vector by P_{2m} moves the odd-indexed components to the left of all the even-indexed components.

27 Recursive FFT

Algorithm (Recursive FFT). A recursive Matlab function for N = 2^k:

function z=fftrec(y)
% Recursive FFT of a row vector y whose length n is a power of 2.
n=length(y);
if n==1
    z=y;
else
    q1=fftrec(y(1:2:n-1));                          % DFT of odd-indexed part
    q2=exp(-2*pi*i/n).^(0:n/2-1).*fftrec(y(2:2:n)); % D_{n/2} times DFT of even part
    z=[q1+q2 q1-q2];                                % combine as in the basic step
end

Such a recursive version of the FFT is useful for testing purposes, but it is much too slow for large problems. A challenge for FFT code writers is to develop nonrecursive versions, and to handle efficiently the case where N is not a power of two.
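
A quick sanity check of fftrec against Matlab's built-in fft (y must be a row vector here):

y = randn(1, 2^6);               % length must be a power of 2
norm(fftrec(y) - fft(y))         % of order 1e-13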

28 FFT Complexity. Let x_k be the complexity (the number of flops) when N = 2^k. Since we need two FFTs of order N/2 = 2^{k−1} and a multiplication with the diagonal matrix D_{N/2}, it is reasonable to assume that x_k = 2x_{k−1} + γ2^k for some constant γ independent of k. Since x_0 = 0, we obtain by induction on k that x_k = γk2^k = γN log₂ N. This estimate also holds when N is not a power of 2. Reasonable implementations of the FFT typically have γ ≈ 5.

29 Improvement Using the FFT. The efficiency improvement from using the FFT to compute the DFT is impressive for large N. The direct multiplication F_N y requires O(8N²) flops since complex arithmetic is involved. Assuming that the FFT uses 5N log₂ N flops, we find for N = 2^20 ≈ 10^6 the ratio

8N² / (5N log₂ N) ≈ 8 × 10^4.

Thus if the FFT takes one second of computing time, and if the computing time is proportional to the number of flops, then the direct multiplication would take something like 84000 seconds, or 23 hours.

30 Poisson Solver Based on the FFT. It requires O(n log₂ n) flops, where n = m² is the size of the linear system Ax = b. Why? We need 4m sine transforms: G_1 = SF, G = G_1 S, V_1 = SX, V = V_1 S, i.e. a total of 4m FFTs of order 2m+2. Since one FFT requires O(γ(2m+2) log₂(2m+2)) flops, the 4m FFTs amount to

8γ m(m+1) log₂(2m+2) ≈ 8γ m² log₂ m = 4γ n log₂ n

flops. This should be compared with the O(8n^{3/2}) flops needed for 4 straightforward matrix multiplications with S. Which is faster depends on the programming of the FFT and on the size of the problem.
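
Putting the pieces together: a sketch of the O(n log₂ n) solver, obtained from Algorithm (3) by replacing each multiplication with S by FFT-based column DSTs. (fftpoisson and dstcols are hypothetical names; B = SA transforms the columns of A, and AS = (SA^T)^T transforms its rows.)

function V = fftpoisson(f, m)
% FFT-based fast Poisson solver; O(n log2 n) flops, n = m^2.
h = 1/(m+1);
[X1, Y1] = ndgrid((1:m)*h);
F = f(X1, Y1);
sigma = sin((1:m)*pi*h/2).^2;
G = dstcols(dstcols(F)')';         % G = SFS
[SJ, SK] = ndgrid(sigma);
X = h^4*G./(SJ + SK);
V = dstcols(dstcols(X)')';         % V = SXS
end

function B = dstcols(A)
% DST of each column of A via FFTs of order 2m+2 (see the Lemma).
[m, n] = size(A);
Z = [zeros(1,n); A; zeros(1,n); -A(end:-1:1,:)];
U = fft(Z);                        % fft acts columnwise on matrices
B = real(0.5i*U(2:m+1,:));
end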

31 Comparison of the Exact Methods. System Ax = b of order n = m². Assume m = 1000, so that n = 10^6, and a computing speed of 10^9 flops per second.

Method              # flops                 Storage          Time at 10^9 flops/s
Full Cholesky       n³/3 ≈ 3·10^17          n² = 10^12       ≈ 10 years
Band Cholesky       2n² = 2·10^12           n^{3/2} = 10^9   ≈ 1/2 hour
Block Tridiagonal   2n² = 2·10^12           n^{3/2} = 10^9   ≈ 1/2 hour
Diagonalization     8n^{3/2} = 8·10^9       n = 10^6         8 sec
FFT                 20n log₂ n ≈ 4·10^8     n = 10^6         ≈ 1/2 sec
