Scientific Computing WS 2018/2019
Lecture 9
Jürgen Fuhrmann, juergen.fuhrmann@wias-berlin.de

Simple iteration with preconditioning

Idea: $A\hat u = b$ ⇒ $\hat u = \hat u - M^{-1}(A\hat u - b)$ ⇒ iterative scheme

$u_{k+1} = u_k - M^{-1}(A u_k - b) \quad (k = 0, 1, \dots)$

1. Choose initial value $u_0$, tolerance $\varepsilon$, set $k = 0$
2. Calculate residual $r_k = A u_k - b$
3. Test convergence: if $\|r_k\| < \varepsilon$, set $u = u_k$, finish
4. Calculate update: solve $M v_k = r_k$
5. Update solution: $u_{k+1} = u_k - v_k$, set $k = k + 1$, repeat with step 2.
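A minimal dense-matrix sketch of steps 1-5 (not from the slides; the function name is illustrative, and $M = D$ — the Jacobi preconditioner of the next slide — is assumed so that step 4 reduces to a division by the diagonal):

#include <algorithm>
#include <cmath>
#include <vector>
using Vec = std::vector<double>;
using Mat = std::vector<Vec>;

Vec simple_iteration(const Mat &A, const Vec &b, Vec u, double eps, int maxit)
{
    const int n = A.size();
    for (int k = 0; k < maxit; ++k) {
        Vec r(n, 0.0);                        // r_k = A u_k - b
        double rnorm = 0.0;
        for (int i = 0; i < n; ++i) {
            for (int j = 0; j < n; ++j) r[i] += A[i][j] * u[j];
            r[i] -= b[i];
            rnorm = std::max(rnorm, std::abs(r[i]));
        }
        if (rnorm < eps) break;               // convergence test
        for (int i = 0; i < n; ++i)           // solve M v_k = r_k (M = D here)
            u[i] -= r[i] / A[i][i];           // u_{k+1} = u_k - v_k
    }
    return u;
}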

The Jacobi method

Let $A = D - E - F$, where $D$ is the main diagonal, $E$ the negative of the strictly lower triangular part, and $F$ the negative of the strictly upper triangular part.

Preconditioner: $M = D$, the main diagonal of $A$:

$u_{k+1,i} = u_{k,i} - \frac{1}{a_{ii}} \Big( \sum_{j=1}^n a_{ij} u_{k,j} - b_i \Big) \quad (i = 1 \dots n)$

Equivalent to the successive (row by row) solution of

$a_{ii} u_{k+1,i} + \sum_{j \ne i} a_{ij} u_{k,j} = b_i \quad (i = 1 \dots n)$

Already calculated results are not taken into account.

Alternative formulation with $A = M - N$:

$u_{k+1} = D^{-1}(E + F) u_k + D^{-1} b = M^{-1} N u_k + M^{-1} b$

Variable ordering does not matter.
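A sketch of one such sweep, under the same dense-matrix assumptions as the snippet above:

// One Jacobi sweep: u_new depends only on u_old, so the order of the
// i-loop is irrelevant (and the loop parallelizes trivially).
Vec jacobi_sweep(const Mat &A, const Vec &b, const Vec &u_old)
{
    const int n = A.size();
    Vec u_new(n);
    for (int i = 0; i < n; ++i) {
        double s = b[i];
        for (int j = 0; j < n; ++j)
            if (j != i) s -= A[i][j] * u_old[j];
        u_new[i] = s / A[i][i];  // solve a_ii u_i = b_i - sum_{j!=i} a_ij u_j
    }
    return u_new;
}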

The Gauss-Seidel method

Solve for the main diagonal element row by row, taking already calculated results into account:

$a_{ii} u_{k+1,i} + \sum_{j<i} a_{ij} u_{k+1,j} + \sum_{j>i} a_{ij} u_{k,j} = b_i \quad (i = 1 \dots n)$

$(D - E) u_{k+1} - F u_k = b$

It may be faster. The variable ordering probably matters.

Preconditioners: forward $M = D - E$, backward $M = D - F$.

Splitting formulation $A = M - N$: forward $N = F$, backward $N = E$.

Forward case: $u_{k+1} = (D - E)^{-1} F u_k + (D - E)^{-1} b = M^{-1} N u_k + M^{-1} b$
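A corresponding sketch of one forward sweep (same assumed types as above); updating $u$ in place is exactly what makes already calculated results enter the current step:

// One forward Gauss-Seidel sweep (M = D - E): u is overwritten, so for
// j < i the loop already sees the new values u_{k+1,j}.
void gauss_seidel_sweep(const Mat &A, const Vec &b, Vec &u)
{
    const int n = A.size();
    for (int i = 0; i < n; ++i) {
        double s = b[i];
        for (int j = 0; j < i; ++j)     s -= A[i][j] * u[j]; // new values
        for (int j = i + 1; j < n; ++j) s -= A[i][j] * u[j]; // old values
        u[i] = s / A[i][i];
    }
}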

Block methods

Jacobi, Gauss-Seidel and (S)SOR methods can also be used block-wise, based on a partition of the system matrix into larger blocks. The blocks on the diagonal should be square and invertible.

This is an interesting variant for systems of partial differential equations where multiple species interact with each other.

Convergence

Let $\hat u$ be the solution of $Au = b$, and let $e_k = u_k - \hat u$ be the error of the $k$-th iteration step. Then

$u_{k+1} = u_k - M^{-1}(A u_k - b) = (I - M^{-1} A) u_k + M^{-1} b$

$u_{k+1} - \hat u = u_k - \hat u - M^{-1}(A u_k - A \hat u) = (I - M^{-1} A)(u_k - \hat u) = (I - M^{-1} A)^{k+1} (u_0 - \hat u)$

resulting in $e_{k+1} = (I - M^{-1} A)^{k+1} e_0$.

So when does $(I - M^{-1} A)^k$ converge to zero for $k \to \infty$?

Spectral radius and convergence

Definition: The spectral radius $\rho(A)$ is the largest absolute value of any eigenvalue of $A$: $\rho(A) = \max_{\lambda \in \sigma(A)} |\lambda|$.

Theorem (Saad, Th. 1.10): $\lim_{k\to\infty} A^k = 0 \Leftrightarrow \rho(A) < 1$.

Proof, ⇒: Let $u_i$ be a unit eigenvector associated with an eigenvalue $\lambda_i$. Then

$A u_i = \lambda_i u_i,\quad A^2 u_i = \lambda_i A u_i = \lambda_i^2 u_i,\quad \dots,\quad A^k u_i = \lambda_i^k u_i,$

therefore $\|A^k u_i\|_2 = |\lambda_i|^k$ and $\lim_{k\to\infty} |\lambda_i|^k = 0$, so we must have $\rho(A) < 1$.

Corollary from proof

Theorem (Saad, Th. 1.12): $\lim_{k\to\infty} \|A^k\|^{1/k} = \rho(A)$
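This limit can be observed numerically. A minimal sketch (the test matrix and the iteration count are arbitrary choices, not from the slides), using the row-sum norm:

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>
using Vec = std::vector<double>;
using Mat = std::vector<Vec>;

double norm_inf(const Mat &A)                 // row-sum norm ||A||_inf
{
    double m = 0.0;
    for (const Vec &row : A) {
        double s = 0.0;
        for (double a : row) s += std::abs(a);
        m = std::max(m, s);
    }
    return m;
}

int main()
{
    Mat A = {{0.5, 0.4}, {0.1, 0.3}};         // arbitrary 2x2 test matrix
    Mat Ak = A;                               // Ak holds A^k
    for (int k = 1; k <= 30; ++k) {
        std::printf("k=%2d ||A^k||^(1/k)=%.6f\n", k, std::pow(norm_inf(Ak), 1.0/k));
        Mat C(2, Vec(2, 0.0));                // C = Ak * A
        for (int i = 0; i < 2; ++i)
            for (int l = 0; l < 2; ++l)
                for (int j = 0; j < 2; ++j) C[i][j] += Ak[i][l] * A[l][j];
        Ak = C;
    }
}

For this matrix $\rho(A) \approx 0.62$; since $\|A^k\| \ge \rho(A)^k$ always holds, the printed values approach $\rho(A)$ from above.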

Back to iterative methods

Sufficient condition for convergence: $\rho(I - M^{-1} A) < 1$.

Eigenvalue analysis for more general matrices

For 1D heat conduction we used a very special regular structure of the matrix which allowed exact eigenvalue calculations. Generalization to tensor product structures is possible. For generalizations to varying coefficients, unstructured grids ... what can be done for general matrices?

The Gershgorin Circle Theorem (Semyon Gershgorin, 1931)

(Everywhere, we assume $n \ge 2$.)

Theorem (Varga, Th. 1.11): Let $A$ be an $n \times n$ (real or complex) matrix and let

$\Lambda_i = \sum_{j \ne i} |a_{ij}|.$

If $\lambda$ is an eigenvalue of $A$, then there exists $r$, $1 \le r \le n$, such that $|\lambda - a_{rr}| \le \Lambda_r$.

Proof: Assume $\lambda$ is an eigenvalue and $x$ a corresponding eigenvector, normalized such that $\max_{i=1\dots n} |x_i| = |x_r| = 1$. From $Ax = \lambda x$ it follows that

$(\lambda - a_{ii}) x_i = \sum_{j \ne i} a_{ij} x_j$

$|\lambda - a_{rr}| = \Big| \sum_{j \ne r} a_{rj} x_j \Big| \le \sum_{j \ne r} |a_{rj}| |x_j| \le \sum_{j \ne r} |a_{rj}| = \Lambda_r$
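A sketch of computing the disks for a dense real matrix (types as in the earlier snippets; the function name is illustrative); every eigenvalue of $A$ then lies in the union of the returned disks:

#include <cmath>
#include <utility>
#include <vector>
using Vec = std::vector<double>;
using Mat = std::vector<Vec>;

// Return (center a_ii, radius Lambda_i) for each row of A.
std::vector<std::pair<double,double>> gershgorin_disks(const Mat &A)
{
    const int n = A.size();
    std::vector<std::pair<double,double>> disks(n);
    for (int i = 0; i < n; ++i) {
        double Lambda = 0.0;
        for (int j = 0; j < n; ++j)
            if (j != i) Lambda += std::abs(A[i][j]);
        disks[i] = {A[i][i], Lambda};
    }
    return disks;
}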

Gershgorin Circle Corollaries

Corollary: Any eigenvalue of $A$ lies in the union of the disks defined by the Gershgorin circles:

$\lambda \in \bigcup_{i=1\dots n} \{\mu \in \mathbb{C} : |\mu - a_{ii}| \le \Lambda_i\}$

Corollary:

$\rho(A) \le \max_{i=1\dots n} \sum_{j=1}^n |a_{ij}| = \|A\|_\infty, \qquad \rho(A) \le \max_{j=1\dots n} \sum_{i=1}^n |a_{ij}| = \|A\|_1$

Proof: $|\mu - a_{ii}| \le \Lambda_i \Rightarrow |\mu| \le \Lambda_i + |a_{ii}| = \sum_{j=1}^n |a_{ij}|$. Furthermore, $\sigma(A) = \sigma(A^T)$.

Gershgorin circles: example

$A = $ [3×3 matrix; entries lost in transcription], $\lambda_1 = 1$, $\lambda_2 = 2$, $\lambda_3 = 3$, $\Lambda_1 = 5.2$, $\Lambda_2 = 0.8$, $\Lambda_3 = $ [value lost in transcription]

[Figure: Gershgorin circles and eigenvalues in the complex plane (Im vs. Re)]

Gershgorin circles: heat example I

$A = \frac{1}{h}\begin{pmatrix} 2 & -1 & & \\ -1 & 2 & -1 & \\ & \ddots & \ddots & \ddots \\ & & -1 & 2 \end{pmatrix}, \qquad B = (I - D^{-1} A) = \begin{pmatrix} 0 & \frac12 & & \\ \frac12 & 0 & \frac12 & \\ & \ddots & \ddots & \ddots \\ & & \frac12 & 0 \end{pmatrix}$

We have $b_{ii} = 0$ and

$\Lambda_i = \begin{cases} \frac12, & i = 1, n \\ 1, & i = 2 \dots n-1 \end{cases}$

⇒ estimate $|\lambda_i| \le 1$

Gershgorin circles: heat example II

Let $n = 11$, $h = 0.1$:

$\lambda_i = \cos\left(\frac{i h \pi}{1 + 2h}\right) \quad (i = 1 \dots n)$

[Figure: Gershgorin circles and the actual eigenvalues in the complex plane (Im vs. Re)]

⇒ the Gershgorin circle theorem is too pessimistic ...

Weighted directed graph representation of matrices

Define a directed graph from the nonzero entries of a matrix $A = (a_{kl})$:

Nodes: $\mathcal{N} = \{N_i\}_{i=1\dots n}$
Directed edges: $\mathcal{E} = \{ N_k \to N_l \mid a_{kl} \ne 0 \}$
Matrix entries ⇒ weights of directed edges

[Example matrix and its graph lost in transcription]

This gives a 1:1 equivalence between matrices and weighted directed graphs. Convenient e.g. for sparse matrices.
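A sketch of this construction as an adjacency list (dense input for simplicity; Edge and matrix_graph are illustrative names, not from the slides):

#include <vector>
using Vec = std::vector<double>;
using Mat = std::vector<Vec>;

struct Edge { int to; double weight; };      // edge N_k -> N_l with weight a_kl

// Build the weighted directed graph of A: node k gets an edge to node l
// for every nonzero entry a_kl.
std::vector<std::vector<Edge>> matrix_graph(const Mat &A)
{
    const int n = A.size();
    std::vector<std::vector<Edge>> adj(n);
    for (int k = 0; k < n; ++k)
        for (int l = 0; l < n; ++l)
            if (A[k][l] != 0.0) adj[k].push_back({l, A[k][l]});
    return adj;
}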

Reducible and irreducible matrices

Definition: $A$ is reducible if there exists a permutation matrix $P$ such that

$P A P^T = \begin{pmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{pmatrix}$

$A$ is irreducible if it is not reducible.

Theorem (Varga, Th. 1.17): $A$ is irreducible ⇔ the matrix graph is connected, i.e. for each ordered pair $(N_i, N_j)$ there is a path consisting of directed edges connecting them.

Equivalently, for each $i, j$ there is a sequence of consecutive nonzero matrix entries $a_{ik_1}, a_{k_1 k_2}, a_{k_2 k_3}, \dots, a_{k_{r-1} k_r}, a_{k_r j}$.
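The graph criterion yields a simple (if naive, $O(n^3)$ for a dense matrix) irreducibility test: a breadth-first search along nonzero entries must reach every node from every start node. A sketch under the same dense-matrix assumptions as before:

#include <queue>
#include <vector>
using Vec = std::vector<double>;
using Mat = std::vector<Vec>;

bool is_irreducible(const Mat &A)
{
    const int n = A.size();
    for (int s = 0; s < n; ++s) {            // BFS from every start node s
        std::vector<bool> seen(n, false);
        std::queue<int> q;
        seen[s] = true; q.push(s);
        while (!q.empty()) {
            int k = q.front(); q.pop();
            for (int l = 0; l < n; ++l)      // follow edges a_kl != 0
                if (!seen[l] && A[k][l] != 0.0) { seen[l] = true; q.push(l); }
        }
        for (bool v : seen)
            if (!v) return false;            // some node unreachable from s
    }
    return true;
}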

Taussky theorem (Olga Taussky, 1948)

Theorem (Varga, Th. 1.18): Let $A$ be irreducible. Assume that the eigenvalue $\lambda$ is a boundary point of the union of all the disks,

$\lambda \in \partial \bigcup_{i=1\dots n} \{\mu \in \mathbb{C} : |\mu - a_{ii}| \le \Lambda_i\}$

Then all $n$ Gershgorin circles pass through $\lambda$, i.e. for $i = 1 \dots n$,

$|\lambda - a_{ii}| = \Lambda_i$

Taussky theorem proof

Proof: Assume $\lambda$ is an eigenvalue and $x$ a corresponding eigenvector, normalized such that $\max_{i=1\dots n} |x_i| = |x_r| = 1$. From $Ax = \lambda x$ it follows that

$(\lambda - a_{rr}) x_r = \sum_{j \ne r} a_{rj} x_j \qquad (1)$

$|\lambda - a_{rr}| \le \sum_{j \ne r} |a_{rj}| |x_j| \le \sum_{j \ne r} |a_{rj}| = \Lambda_r \qquad (2)$

Since $\lambda$ is a boundary point, $|\lambda - a_{rr}| = \Lambda_r$. Then for all $p \ne r$ with $a_{rp} \ne 0$, $|x_p| = 1$. Due to irreducibility there is at least one such $p$. For this $p$, equation (2) is valid with $p$ in place of $r$ ⇒ $|\lambda - a_{pp}| = \Lambda_p$. Due to irreducibility, this is true for all $p = 1 \dots n$.

Consequences for the heat example from the Taussky theorem

$B = I - D^{-1} A$. We had $b_{ii} = 0$ and

$\Lambda_i = \begin{cases} \frac12, & i = 1, n \\ 1, & i = 2 \dots n-1 \end{cases}$

⇒ estimate $|\lambda_i| \le 1$.

Assume $|\lambda_i| = 1$. Then $\lambda_i$ lies on the boundary of the union of the Gershgorin circles. But then it must lie on the boundary of both the circle with radius $\frac12$ and the circle with radius $1$ around $0$. Contradiction! ⇒ $|\lambda_i| < 1$, i.e. $\rho(B) < 1$!

Diagonally dominant matrices

Definition: Let $A = (a_{ij})$ be an $n \times n$ matrix.

$A$ is diagonally dominant if for $i = 1 \dots n$, $|a_{ii}| \ge \sum_{j \ne i} |a_{ij}|$.

$A$ is strictly diagonally dominant (sdd) if for $i = 1 \dots n$, $|a_{ii}| > \sum_{j \ne i} |a_{ij}|$.

$A$ is irreducibly diagonally dominant (idd) if
(i) $A$ is irreducible,
(ii) $A$ is diagonally dominant: for $i = 1 \dots n$, $|a_{ii}| \ge \sum_{j \ne i} |a_{ij}|$,
(iii) for at least one $r$, $1 \le r \le n$, $|a_{rr}| > \sum_{j \ne r} |a_{rj}|$.
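These conditions are mechanical to check. A small sketch (illustrative helper, same assumed types as before):

#include <cmath>
#include <vector>
using Vec = std::vector<double>;
using Mat = std::vector<Vec>;

// Classify diagonal dominance: returns 2 for sdd, 1 for diagonally
// dominant with at least one strictly dominant row, 0 otherwise.
int dominance_type(const Mat &A)
{
    const int n = A.size();
    int strict_rows = 0;
    for (int i = 0; i < n; ++i) {
        double Lambda = 0.0;
        for (int j = 0; j < n; ++j)
            if (j != i) Lambda += std::abs(A[i][j]);
        if (std::abs(A[i][i]) < Lambda) return 0;   // row violates dominance
        if (std::abs(A[i][i]) > Lambda) ++strict_rows;
    }
    if (strict_rows == n) return 2;   // sdd
    if (strict_rows > 0)  return 1;   // idd candidate
    return 0;                         // dominant, but no strict row
}

The idd test additionally requires irreducibility, so the full check combines dominance_type(A) >= 1 with is_irreducible(A) from above.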

A very practical nonsingularity criterion

Theorem (Varga, Th. 1.21): Let $A$ be strictly diagonally dominant or irreducibly diagonally dominant. Then $A$ is nonsingular.

If in addition $a_{ii} > 0$ is real for $i = 1 \dots n$, then all real parts of the eigenvalues of $A$ are positive: $\mathrm{Re}\,\lambda_i > 0$, $i = 1 \dots n$.

A very practical nonsingularity criterion, proof I

Proof: Assume $A$ is strictly diagonally dominant. Then the union of the Gershgorin disks does not contain $0$, so $\lambda = 0$ cannot be an eigenvalue ⇒ $A$ is nonsingular.

As for the real parts, the union of the disks is $\bigcup_{i=1\dots n} \{\mu \in \mathbb{C} : |\mu - a_{ii}| \le \Lambda_i\}$, and $\mathrm{Re}\,\mu$ must be larger than zero for any $\mu$ contained in it.

A very practical nonsingularity criterion, proof II

Assume $A$ is irreducibly diagonally dominant. Then, if $0$ were an eigenvalue, it would sit on the boundary of one of the Gershgorin disks. By the Taussky theorem, we would have $|a_{ii}| = \Lambda_i$ for all $i = 1 \dots n$. This is a contradiction, as by definition there is at least one $i$ such that $|a_{ii}| > \Lambda_i$.

Assume $a_{ii} > 0$, real. All real parts of the eigenvalues must be $\ge 0$. If a real part were $0$, the eigenvalue would lie on the boundary of at least one disk. By the Taussky theorem it would have to be contained at the same time in the boundary of all the disks and in the imaginary axis. This contradicts the fact that there is at least one disk which does not touch the imaginary axis, as by definition there is at least one $i$ such that $|a_{ii}| > \Lambda_i$.

Corollary

Theorem: If $A$ is complex hermitian or real symmetric, sdd or idd, with positive diagonal entries, it is positive definite.

Proof: All eigenvalues of $A$ are real, and due to the nonsingularity criterion they must be positive, so $A$ is positive definite.

Heat conduction matrix

$A = \begin{pmatrix} \alpha + \frac1h & -\frac1h & & & \\ -\frac1h & \frac2h & -\frac1h & & \\ & \ddots & \ddots & \ddots & \\ & & -\frac1h & \frac2h & -\frac1h \\ & & & -\frac1h & \frac1h + \alpha \end{pmatrix}$

$A$ is idd ⇒ $A$ is nonsingular.
$\mathrm{diag}\,A$ is positive and real ⇒ the eigenvalues of $A$ have positive real parts.
$A$ is real and symmetric ⇒ $A$ is positive definite.

Perron-Frobenius Theorem (1912/1907)

Definition: A real $n$-vector $x$ is
positive ($x > 0$) if all entries of $x$ are positive;
nonnegative ($x \ge 0$) if all entries of $x$ are nonnegative.

Definition: A real $n \times n$ matrix $A$ is
positive ($A > 0$) if all entries of $A$ are positive;
nonnegative ($A \ge 0$) if all entries of $A$ are nonnegative.

Theorem (Varga, Th. 2.7): Let $A \ge 0$ be an irreducible $n \times n$ matrix. Then
(i) $A$ has a positive real eigenvalue equal to its spectral radius $\rho(A)$.
(ii) To $\rho(A)$ there corresponds a positive eigenvector $x > 0$.
(iii) $\rho(A)$ increases when any entry of $A$ increases.
(iv) $\rho(A)$ is a simple eigenvalue of $A$.

Proof: See Varga.

Perron-Frobenius for general nonnegative matrices

Each $n \times n$ matrix can be brought to the normal form

$P A P^T = \begin{pmatrix} R_{11} & R_{12} & \cdots & R_{1m} \\ 0 & R_{22} & \cdots & R_{2m} \\ & & \ddots & \vdots \\ 0 & 0 & \cdots & R_{mm} \end{pmatrix}$

where for $j = 1 \dots m$, either $R_{jj}$ is irreducible or $R_{jj} = (0)$.

Theorem (Varga, Th. 2.20): Let $A \ge 0$ be an $n \times n$ matrix. Then
(i) $A$ has a nonnegative eigenvalue equal to its spectral radius $\rho(A)$. This eigenvalue is positive unless $A$ is reducible and its normal form is strictly upper triangular.
(ii) To $\rho(A)$ there corresponds a nonzero eigenvector $x \ge 0$.
(iii) $\rho(A)$ does not decrease when any entry of $A$ increases.

Proof: See Varga; $\sigma(A) = \bigcup_{j=1}^m \sigma(R_{jj})$, apply the irreducible Perron-Frobenius theorem to $R_{jj}$.

Theorem on Jacobi matrix

Theorem: Let $A$ be sdd or idd, and $D$ its diagonal. Then

$\rho(|I - D^{-1} A|) < 1$

Proof: Let $B = (b_{ij}) = I - D^{-1} A$. Then

$b_{ij} = \begin{cases} 0, & i = j \\ -\frac{a_{ij}}{a_{ii}}, & i \ne j \end{cases}$

If $A$ is sdd, then for $i = 1 \dots n$,

$\sum_{j=1\dots n} |b_{ij}| = \sum_{j \ne i} \frac{|a_{ij}|}{|a_{ii}|} = \frac{\Lambda_i}{|a_{ii}|} < 1$

Therefore $\rho(|B|) < 1$.

Theorem on Jacobi matrix II

If $A$ is idd, then for $i = 1 \dots n$,

$\sum_{j=1\dots n} |b_{ij}| = \sum_{j \ne i} \frac{|a_{ij}|}{|a_{ii}|} = \frac{\Lambda_i}{|a_{ii}|} \le 1$

$\sum_{j=1\dots n} |b_{rj}| = \frac{\Lambda_r}{|a_{rr}|} < 1$ for at least one $r$

Therefore $\rho(|B|) \le 1$. Assume $\rho(|B|) = 1$. By Perron-Frobenius, $1$ is an eigenvalue. As it is in the union of the Gershgorin disks, for some $i$,

$|\lambda| = 1 \le \frac{\Lambda_i}{|a_{ii}|} \le 1,$

so it must lie on the boundary of this union. By the Taussky theorem one then has for all $i$

$|\lambda| = 1 = \frac{\Lambda_i}{|a_{ii}|},$

which contradicts the idd condition.

Jacobi method convergence

Corollary: Let $A$ be sdd or idd, and $D$ its diagonal. Assume that $a_{ii} > 0$ and $a_{ij} \le 0$ for $i \ne j$. Then $\rho(I - D^{-1} A) < 1$, i.e. the Jacobi method converges.

Proof: In this case, $|B| = B$.

Regular splittings

$A = M - N$ is a regular splitting if
$M$ is nonsingular, and
$M^{-1}$ and $N$ are nonnegative, i.e. have nonnegative entries.

Regard the iteration $u_{k+1} = M^{-1} N u_k + M^{-1} b$. We have $I - M^{-1} A = M^{-1} N$.

Convergence theorem for regular splittings

Theorem: Assume $A$ is nonsingular, $A^{-1} \ge 0$, and $A = M - N$ is a regular splitting. Then $\rho(M^{-1} N) < 1$.

Proof: Let $G = M^{-1} N$. Then $A = M(I - G)$, therefore $I - G$ is nonsingular. In addition,

$A^{-1} N = (M(I - M^{-1} N))^{-1} N = (I - M^{-1} N)^{-1} M^{-1} N = (I - G)^{-1} G$

By Perron-Frobenius (for general matrices), $\rho(G)$ is an eigenvalue of $G$ with a nonnegative eigenvector $x$. Thus,

$0 \le A^{-1} N x = \frac{\rho(G)}{1 - \rho(G)} x$

Therefore $0 \le \rho(G) \le 1$. As $I - G$ is nonsingular, $\rho(G) < 1$.

Convergence rate comparison

Corollary: $\rho(M^{-1} N) = \frac{\tau}{1 + \tau}$, where $\tau = \rho(A^{-1} N)$.

Proof: Rearrange $\tau = \frac{\rho(G)}{1 - \rho(G)}$.

Corollary: Let $A^{-1} \ge 0$, and let $A = M_1 - N_1$ and $A = M_2 - N_2$ be regular splittings. If $N_2 \ge N_1 \ge 0$, then $1 > \rho(M_2^{-1} N_2) \ge \rho(M_1^{-1} N_1)$.

Proof: $\tau_2 = \rho(A^{-1} N_2) \ge \rho(A^{-1} N_1) = \tau_1$, and $\frac{\tau}{1+\tau}$ is strictly increasing.

M-Matrix definition

Definition: Let $A$ be an $n \times n$ real matrix. $A$ is called an M-Matrix if
(i) $a_{ij} \le 0$ for $i \ne j$,
(ii) $A$ is nonsingular,
(iii) $A^{-1} \ge 0$.

Corollary: If $A$ is an M-Matrix, then $A^{-1} > 0$ ⇔ $A$ is irreducible.

Proof: See Varga.

Main practical M-Matrix criterion

Corollary: Let $A$ be sdd or idd. Assume that $a_{ii} > 0$ and $a_{ij} \le 0$ for $i \ne j$. Then $A$ is an M-Matrix.

Proof: We know that $A$ is nonsingular, but we have to show $A^{-1} \ge 0$. Let $B = I - D^{-1} A$. Then $\rho(B) < 1$, therefore $I - B$ is nonsingular. We have for $k > 0$:

$I - B^{k+1} = (I - B)(I + B + B^2 + \dots + B^k)$

$(I - B)^{-1}(I - B^{k+1}) = I + B + B^2 + \dots + B^k$

The left-hand side converges to $(I - B)^{-1}$ for $k \to \infty$, therefore

$(I - B)^{-1} = \sum_{k=0}^\infty B^k$

As $B \ge 0$, we have $(I - B)^{-1} = A^{-1} D \ge 0$. As $D > 0$, we must have $A^{-1} \ge 0$.

Application

Let $A$ be an M-Matrix. Assume $A = D - E - F$.

Jacobi method: $M = D$ is nonsingular, $M^{-1} \ge 0$. $N = E + F$ is nonnegative ⇒ convergence.

Gauss-Seidel: $M = D - E$ is an M-Matrix, as $A \le M$ and $M$ has non-positive off-diagonal entries. $N = F \ge 0$ ⇒ convergence.

Comparison: $N_J \ge N_{GS}$ ⇒ Gauss-Seidel converges faster.

More general: block Jacobi, block Gauss-Seidel etc.

Intermediate Summary

Given some matrix, we now have some nice recipes to establish nonsingularity and iterative method convergence:

Check if the matrix is irreducible. This is mostly the case for elliptic and parabolic PDEs.

Check if the matrix is strictly or irreducibly diagonally dominant. If yes, it is in addition nonsingular.

Check if the main diagonal entries are positive and the off-diagonal entries are nonpositive. If yes, in addition the matrix is an M-Matrix, its inverse is nonnegative, and elementary iterative methods converge.

These criteria do not depend on the symmetry of the matrix!

Example: 1D finite difference matrix

$\begin{pmatrix} \alpha + \frac1h & -\frac1h & & & & \\ -\frac1h & \frac2h & -\frac1h & & & \\ & \ddots & \ddots & \ddots & & \\ & & -\frac1h & \frac2h & -\frac1h & \\ & & & & -\frac1h & \frac1h + \alpha \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \\ u_3 \\ \vdots \\ u_{N-2} \\ u_{N-1} \\ u_N \end{pmatrix} = \begin{pmatrix} \alpha v_1 \\ h f_2 \\ h f_3 \\ \vdots \\ h f_{N-2} \\ h f_{N-1} \\ \alpha v_N \end{pmatrix}$

idd; positive main diagonal entries, nonpositive off-diagonal entries ⇒ $A$ is nonsingular, has the M-property, and we can e.g. apply the Jacobi or Gauss-Seidel iterative method to solve the system (ok, in 1D we already know this is a bad idea ...).

For $f \ge 0$ and $v \ge 0$ it follows that $u \ge 0$: heating and positive environment temperatures cannot lead to negative temperatures in the interior.
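For experiments, this matrix can be assembled directly. A sketch (dense storage purely for illustration — in practice one stores the three diagonals; the name heat_matrix and the relation h = 1/(N-1) are assumptions):

#include <vector>
using Vec = std::vector<double>;
using Mat = std::vector<Vec>;

// 1D finite difference matrix with boundary parameter alpha.
Mat heat_matrix(int N, double alpha)
{
    const double h = 1.0 / (N - 1);
    Mat A(N, Vec(N, 0.0));
    A[0][0] = alpha + 1.0/h;  A[0][1] = -1.0/h;
    for (int i = 1; i < N - 1; ++i) {
        A[i][i-1] = -1.0/h;
        A[i][i]   =  2.0/h;
        A[i][i+1] = -1.0/h;
    }
    A[N-1][N-2] = -1.0/h;  A[N-1][N-1] = 1.0/h + alpha;
    return A;
}

By the criteria above this matrix is idd with the right sign pattern, so the jacobi_sweep and gauss_seidel_sweep routines sketched earlier converge for it (if slowly).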

Incomplete LU factorizations (ILU)

Idea (Varga, Buleev, 1960):
fix a predefined zero pattern;
apply the standard LU factorization method, but calculate only those elements which do not correspond to the given zero pattern.

Result: incomplete LU factors $L$, $U$ and a remainder $R$: $A = LU - R$.

Problem: with the complete LU factorization procedure, for any nonsingular matrix the method is stable, i.e. zero pivots never occur. Is this true for the incomplete LU factorization as well?

Comparison of M-Matrices

Theorem (Saad, Th. 1.33): Let $A$, $B$ be $n \times n$ matrices such that
(i) $A \le B$,
(ii) $b_{ij} \le 0$ for $i \ne j$.
Then, if $A$ is an M-Matrix, so is $B$.

Proof: For the diagonal parts, one has $D_B \ge D_A > 0$ and $D_A - A \ge D_B - B \ge 0$. Therefore

$I - D_A^{-1} A = D_A^{-1}(D_A - A) \ge D_A^{-1}(D_B - B) \ge D_B^{-1}(D_B - B) = I - D_B^{-1} B =: G \ge 0.$

By Perron-Frobenius, $\rho(G) = \rho(I - D_B^{-1} B) \le \rho(I - D_A^{-1} A) < 1$, so $I - G$ is nonsingular. From the proof of the M-Matrix criterion,

$B^{-1} D_B = (I - G)^{-1} = \sum_{k=0}^\infty G^k \ge 0.$

As $D_B > 0$, we get $B^{-1} \ge 0$.

M-Property propagation in Gaussian elimination

Theorem (Ky Fan; Saad, Th. 1.10): Let $A$ be an M-matrix. Then the matrix $A_1$ obtained from the first step of Gaussian elimination is an M-matrix.

Proof: One has

$a^{(1)}_{ij} = a_{ij} - \frac{a_{i1} a_{1j}}{a_{11}},$

with $a_{ij}, a_{i1}, a_{1j} \le 0$ and $a_{11} > 0$, so $a^{(1)}_{ij} \le 0$ for $i \ne j$. Furthermore, $A_1 = L_1 A$ with

$L_1 = \begin{pmatrix} 1 & & & \\ -\frac{a_{21}}{a_{11}} & 1 & & \\ \vdots & & \ddots & \\ -\frac{a_{n1}}{a_{11}} & & & 1 \end{pmatrix}$

nonsingular and nonnegative, so $A_1$ is nonsingular. Let $e_1 \dots e_n$ be the unit vectors. Then $A_1^{-1} e_1 = \frac{1}{a_{11}} e_1 \ge 0$, and for $j > 1$, $A_1^{-1} e_j = A^{-1} L_1^{-1} e_j = A^{-1} e_j \ge 0$.

Stability of ILU

Theorem (Saad, Th. 10.2): If $A$ is an M-Matrix, then the algorithm to compute the incomplete LU factorization with a given nonzero pattern,

$A = LU - R,$

is stable. Moreover, $A = LU - R$ is a regular splitting.

Stability of ILU decomposition II

Proof: Let $\tilde A_1 = A_1 + R_1 = L_1 A + R_1$, where $R_1$ is a nonnegative matrix which occurs from dropping some off-diagonal entries from $A_1$. Thus $\tilde A_1 \ge A_1$, and $\tilde A_1$ is an M-matrix. We can repeat this recursively:

$\tilde A_k = A_k + R_k = L_k \tilde A_{k-1} + R_k = L_k L_{k-1} \tilde A_{k-2} + L_k R_{k-1} + R_k = L_k L_{k-1} \cdots L_1 A + L_k L_{k-1} \cdots L_2 R_1 + \dots + R_k$

Let $L = (L_{n-1} \cdots L_1)^{-1}$, $U = \tilde A_{n-1}$. Then $U = L^{-1} A + S$ with

$S = L_{n-1} L_{n-2} \cdots L_2 R_1 + \dots + R_{n-1} = L_{n-1} L_{n-2} \cdots L_2 (R_1 + R_2 + \dots + R_{n-1})$

Let $R = R_1 + R_2 + \dots + R_{n-1}$; then $A = LU - R$, where $U^{-1} L^{-1}$ and $R$ are nonnegative.

ILU(0)

Special case of ILU: ignore any fill-in.

Representation: $M = (\tilde D - E) \tilde D^{-1} (\tilde D - F)$

$\tilde D$ is a diagonal matrix (which can be stored in one vector) which is calculated by the incomplete factorization algorithm.

Setup:

// initialize d with the diagonal of A
for(int i=0;i<n;i++)
    d(i)=a(i,i);
// eliminate: afterwards, d(i) holds the inverted pivot 1/d~_i
for(int i=0;i<n;i++)
{
    d(i)=1.0/d(i);
    for (int j=i+1;j<n;j++)
        d(j)=d(j)-a(i,j)*d(i)*a(j,i);
}

ILU(0)

Solve $Mu = v$:

// forward substitution: solve (D~ - E)w = v, storing w in u
for(int i=0;i<n;i++)
{
    double x=0.0;
    for (int j=0;j<i;j++)
        x=x+a(i,j)*u(j);
    u(i)=d(i)*(v(i)-x);
}
// backward substitution: solve (I - D~^{-1} F)u = w
for(int i=n-1;i>=0;i--)
{
    double x=0.0;
    for(int j=i+1;j<n;j++)
        x=x+a(i,j)*u(j);
    u(i)=u(i)-d(i)*x;
}
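Together, the setup and solve loops realize step 4 of the simple iteration from the beginning of the lecture. A hypothetical usage sketch, assuming the loops above are wrapped as ilu0_setup and ilu0_solve, and residual and norm are available helpers (none of these names are from the slides):

// one preconditioned simple iteration with M from ILU(0)
ilu0_setup(a, d);                     // once, before the iteration
for (int k = 0; k < maxit; k++) {
    residual(a, u, b, r);             // r = A u - b
    if (norm(r) < eps) break;
    ilu0_solve(a, d, v, r);           // v = M^{-1} r (the two loops above)
    for (int i = 0; i < n; i++)
        u(i) = u(i) - v(i);           // u_{k+1} = u_k - v_k
}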

ILU(0)

Generally better convergence properties than Jacobi or Gauss-Seidel. One can develop block variants.

Alternatives:
ILUM ("modified"): add ignored off-diagonal entries to $\tilde D$.
ILUT: zero pattern calculated dynamically, based on a drop tolerance.

Dependence on ordering. Can be parallelized using graph coloring. Not much theory: experiment for particular systems. I recommend it as the default initial guess for a sensible preconditioner.

Incomplete Cholesky: symmetric variant of ILU.
