Computational Linear Algebra


Computational Linear Algebra
PD Dr. rer. nat. habil. Ralf-Peter Mundani
Computation in Engineering / BGU · Scientific Computing in Computer Science / INF
Winter Term 2017/18

Part 3: Iterative Methods

overview
- definitions
- splitting methods
- projection and KRYLOV subspace methods
- multigrid methods

Definitions — iteration methods
consider a linear system
Ax = b (3.0.1)
with given right-hand side b ∈ ℝⁿ and regular matrix A ∈ ℝⁿˣⁿ
an iteration method successively computes approximations x_m for the exact solution A⁻¹b via repeated execution of a defined rule with given start vector x_0:
x_{m+1} = Φ(x_m, b) for m = 0, 1, ...

Definitions — iteration methods (cont'd)
Definition 3.1: An iteration method given via the mapping Φ: ℝⁿ × ℝⁿ → ℝⁿ is called linear if matrices M, N ∈ ℝⁿˣⁿ exist such that
Φ(x, b) = Mx + Nb
applies. Matrix M is called iteration matrix of iteration method Φ.
Definition 3.2: A vector x̃ ∈ ℝⁿ is denoted fixed point of iteration method Φ for b ∈ ℝⁿ if
x̃ = Φ(x̃, b)
applies.

Definitions — consistency vs. convergence
Definition 3.3: An iteration method Φ is called consistent w.r.t. matrix A if for all b ∈ ℝⁿ the solution A⁻¹b is a fixed point of Φ for b. An iteration method Φ is called convergent if for all b ∈ ℝⁿ and all start vectors x_0 ∈ ℝⁿ a limit x̃ = lim_{m→∞} x_m independent of the start vector exists.
Note: consistency poses a necessary condition for any iteration method. In case of a linear iteration method, consistency can be directly determined from matrices M and N.

Definitions — consistency vs. convergence (cont'd)
Theorem 3.4: A linear iteration method Φ is consistent w.r.t. matrix A iff
M = I − NA
applies.

Definitions — consistency vs. convergence (cont'd)
Theorem 3.5: A linear iteration method Φ is convergent iff the spectral radius of iteration matrix M fulfils the constraint ρ(M) < 1.
Theorem 3.6: Let Φ be a convergent and w.r.t. A consistent linear iteration method, then the limit x̃ of the sequence x_m = Φ(x_{m−1}, b) for m = 1, 2, ... fulfils for each x_0 ∈ ℝⁿ the linear system (3.0.1).
Both proofs are lengthy.
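Theorem 3.5 is easy to check numerically: a minimal NumPy sketch, using the 2×2 matrix of the later example (3.1.2) together with the JACOBI splitting B = D (both anticipated here purely for illustration):

```python
import numpy as np

# Convergence test via the spectral radius (theorem 3.5): the linear
# iteration with iteration matrix M converges iff rho(M) < 1.
A = np.array([[0.7, -0.4],
              [-0.2, 0.5]])
D = np.diag(np.diag(A))                  # Jacobi-type splitting B = D
M = np.eye(2) - np.linalg.inv(D) @ A     # iteration matrix M = I - B^{-1} A
rho = max(abs(np.linalg.eigvals(M)))     # spectral radius rho(M)
# rho = sqrt(8/35) < 1, so this iteration converges for this A
```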

Splitting Methods

basic concept
based on the partitioning of matrix A ∈ ℝⁿˣⁿ as
A = B + (A − B), B ∈ ℝⁿˣⁿ, (3.1.1)
from Ax = b the equivalent system Bx = (B − A)x + b follows

basic concept (cont'd)
if B is regular, we get
x = B⁻¹(B − A)x + B⁻¹b
and hereby define the linear iteration method
x_{m+1} = Φ(x_m, b) = Mx_m + Nb for m = 0, 1, ...
with M := B⁻¹(B − A) and N := B⁻¹
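The general splitting iteration can be sketched directly from this rule; the helper name and tolerances below are illustrative, and a solve against B replaces the explicit inverse B⁻¹:

```python
import numpy as np

def splitting_iteration(A, B, b, x0, max_iter=100, tol=1e-10):
    """Iterate x <- x + B^{-1}(b - A x), the equivalent form of
    x_{m+1} = B^{-1}(B - A) x_m + B^{-1} b, until the residual is small."""
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        r = b - A @ x                  # residual b - A x_m
        x += np.linalg.solve(B, r)     # correction B^{-1} r
        if np.linalg.norm(r) < tol:
            break
    return x

# sample system (the example (3.1.2) of these slides), Jacobi-type B = D
A = np.array([[0.7, -0.4], [-0.2, 0.5]])
b = np.array([0.3, 0.3])
x = splitting_iteration(A, np.diag(np.diag(A)), b, np.zeros(2))
# x converges to the exact solution A^{-1} b = (1, 1)^T
```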

basic concept (cont'd)
Theorem 3.7: Let B ∈ ℝⁿˣⁿ be regular, then the linear iteration method
x_{m+1} = Φ(x_m, b) = B⁻¹(B − A)x_m + B⁻¹b
is consistent w.r.t. A.

basic concept (cont'd)
Theorem 3.8: Let Φ be a consistent linear iteration method w.r.t. A for whose iteration matrix M a norm ‖·‖ exists such that q := ‖M‖ < 1 applies. Then for given ε > 0 follows
‖x_m − A⁻¹b‖ ≤ ε for all m with m ≥ ln(ε(1 − q) / ‖x_1 − x_0‖) / ln q
and x_1 = Φ(x_0, b) ≠ x_0.


basic concept (cont'd)
observation: if 0 < q_2 < 1 and q_1 = q_2² applies, then ln q_1 = 2 ln q_2 follows
considering two convergent linear iteration methods Φ_1 and Φ_2 whose respective iteration matrices M_1 and M_2 fulfil the property ρ(M_1) = ρ(M_2)², theorem 3.8 delivers an assured accuracy for method Φ_1 normally after half of the iterations required by Φ_2, i.e., the number of required iterations halves if the spectral radius for instance is reduced from 0.9 to 0.81
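The a priori bound of theorem 3.8 makes this halving concrete; a small sketch (the helper `required_iterations` is a hypothetical name, and ‖x_1 − x_0‖ = 1 is an assumed normalization):

```python
import math

def required_iterations(q, eps, first_step_norm):
    """Smallest m guaranteeing ||x_m - A^{-1}b|| <= eps via theorem 3.8:
    the error obeys ||x_m - A^{-1}b|| <= q^m/(1-q) * ||x_1 - x_0||."""
    return math.ceil(math.log(eps * (1 - q) / first_step_norm) / math.log(q))

m1 = required_iterations(0.90, 1e-6, 1.0)   # contraction number q = 0.9
m2 = required_iterations(0.81, 1e-6, 1.0)   # q squared: roughly half as many
```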

JACOBI method
for solving linear system Ax = b with regular matrix A the JACOBI method requires non-disappearing diagonal elements a_ii ≠ 0, i = 1, ..., n, thus the diagonal matrix D = diag(a_11, ..., a_nn) is regular
hence, the linear system can be re-written in equivalent form as
x = D⁻¹(D − A)x + D⁻¹b with M_J := D⁻¹(D − A) and N_J := D⁻¹
due to theorem 3.7, the linear iteration method
x_{m+1} = D⁻¹(D − A)x_m + D⁻¹b for m = 0, 1, 2, ...
herewith is consistent w.r.t. matrix A
as the new iterate x_{m+1} is computed solely based on the old one x_m, this method is also called total-step method; speed of convergence depends on the spectral radius ρ(M_J) only
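A minimal NumPy sketch of this total-step rule (function name and stopping criterion illustrative, not from the slides), applied to the example system (3.1.2):

```python
import numpy as np

def jacobi(A, b, x0, max_iter=200, tol=1e-12):
    """Jacobi iteration x_{m+1} = D^{-1}(D - A) x_m + D^{-1} b."""
    D = np.diag(A)                         # diagonal entries a_ii (nonzero)
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        # each component uses only old values: (b_i - sum_{j!=i} a_ij x_j)/a_ii
        x_new = (b - (A @ x - D * x)) / D
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

x = jacobi(np.array([[0.7, -0.4], [-0.2, 0.5]]),
           np.array([0.3, 0.3]), np.zeros(2))
# x approaches the exact solution (1, 1)^T
```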

JACOBI method (cont'd)
Theorem 3.9: If regular matrix A with a_ii ≠ 0, i = 1, ..., n, fulfils the strong row sum criterion
Σ_{j≠i} |a_ij| < |a_ii| for i = 1, ..., n
or the strong column sum criterion
Σ_{i≠j} |a_ij| < |a_jj| for j = 1, ..., n
or the square sum criterion
Σ_{i=1}^{n} Σ_{j≠i} (a_ij / a_ii)² < 1
then the JACOBI method converges for arbitrary start vector x_0 and for arbitrary right-hand side b towards A⁻¹b.

JACOBI method (cont'd)
remark: matrices fulfilling the strong row sum criterion are called strictly diagonally dominant

JACOBI method (cont'd)
anyhow, many matrices do not fulfil any of the three criteria stated in theorem 3.9 (e.g., an FD discretisation of the POISSON equation)
here, the JACOBI method converges for regular matrix A that is diagonally dominant, i.e. where
Σ_{j≠i} |a_ij| ≤ |a_ii| for all i = 1, ..., n
applies, if some k ∈ {1, ..., n} exists with
Σ_{j≠k} |a_kj| < |a_kk|

JACOBI method (cont'd)
example: the simple problem
Ax = b with A := [[0.7, −0.4], [−0.2, 0.5]], x := (x_1, x_2)^T, b := (0.3, 0.3)^T (3.1.2)
has the solution A⁻¹b = (1, 1)^T
with Σ_{j≠i} |a_ij| < |a_ii| for i = 1, 2 (strict diagonal dominance) convergence of the JACOBI method is proved

JACOBI method (cont'd)
eigenvalues of the iteration matrix M_J = D⁻¹(D − A) are λ_{1,2} = ±√(8/35) ≈ ±0.4781, thus ρ(M_J) = √(8/35) ≈ 0.4781 < 1 follows

JACOBI method (cont'd)
using start vector x_0 = (21, 19)^T leads to an iteration whose error ε_m := ‖x_m − A⁻¹b‖_∞ decreases by roughly the factor ρ(M_J) ≈ 0.478 per step
[table: JACOBI iterates x_{m,1}, x_{m,2} and errors ε_m for m = 0, ..., 15]

GAUSS-SEIDEL method
we again consider a linear system Ax = b with regular matrix A and a regular diagonal matrix D = diag(a_11, ..., a_nn)
we further define the strict lower triangular matrix L = (l_ij)_{i,j=1,...,n} with
l_ij = −a_ij for i > j, 0 otherwise
and the strict upper triangular matrix R = (r_ij)_{i,j=1,...,n} with
r_ij = −a_ij for i < j, 0 otherwise
and hereby define the equivalent linear system
(D − L)x = Rx + b (3.1.3)

GAUSS-SEIDEL method (cont'd)
hence, linear system (3.1.3) can be re-written as
x = (D − L)⁻¹Rx + (D − L)⁻¹b with M_GS := (D − L)⁻¹R and N_GS := (D − L)⁻¹
thus M_GS = (D − L)⁻¹(D − L − A) = I − N_GS·A applies and the linear iteration method
x_{m+1} = (D − L)⁻¹Rx_m + (D − L)⁻¹b for m = 0, 1, 2, ... (3.1.4)
is consistent w.r.t. matrix A

GAUSS-SEIDEL method (cont'd)
to derive a component-based notation we write the i-th row of (3.1.4) according to (3.1.3); let x_{m+1,j} for j = 1, ..., i−1 be known, then x_{m+1,i} can be computed via
x_{m+1,i} = (b_i − Σ_{j<i} a_ij x_{m+1,j} − Σ_{j>i} a_ij x_{m,j}) / a_ii (3.1.5)
for i = 1, ..., n and m = 0, 1, 2, ...
hence, for computation of the i-th component (within iteration m+1) both old components of x_m and the first i−1 new components of x_{m+1} are used
this method is also called single-step method; due to the better approximation of A via (D − L) a smaller spectral radius and faster convergence are to be expected
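The component formula (3.1.5) translates almost literally into code; a minimal sketch (helper name and stopping criterion illustrative):

```python
import numpy as np

def gauss_seidel(A, b, x0, max_iter=200, tol=1e-12):
    """Gauss-Seidel single-step iteration, component form (3.1.5)."""
    n = len(b)
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s_new = A[i, :i] @ x[:i]        # already-updated components j < i
            s_old = A[i, i+1:] @ x[i+1:]    # old components j > i
            x[i] = (b[i] - s_new - s_old) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

x = gauss_seidel(np.array([[0.7, -0.4], [-0.2, 0.5]]),
                 np.array([0.3, 0.3]), np.zeros(2))
```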

GAUSS-SEIDEL method (cont'd)
Theorem 3.10: Let the regular matrix A with a_ii ≠ 0 for i = 1, ..., n be given. If the recursively defined numbers p_1, ..., p_n with
p_i = (Σ_{j=1}^{i−1} |a_ij| p_j + Σ_{j=i+1}^{n} |a_ij|) / |a_ii| for i = 1, ..., n
fulfil the condition
p := max_{i=1,...,n} p_i < 1
then the GAUSS-SEIDEL method converges for arbitrary start vector x_0 and for arbitrary right-hand side b towards A⁻¹b.
Corollary 3.11: Let the regular matrix A be strictly diagonally dominant, then the GAUSS-SEIDEL method converges for arbitrary start vector x_0 and for arbitrary right-hand side b towards A⁻¹b.

GAUSS-SEIDEL method (cont'd)
example: matrix A from the linear system (3.1.2) is strictly diagonally dominant, which ensures convergence of the GAUSS-SEIDEL method
the corresponding iteration matrix M_GS = (D − L)⁻¹R has eigenvalues λ_1 = 0 and λ_2 = 8/35, thus ρ(M_GS) = ρ(M_J)² applies and we can expect twice the speed of convergence compared to the JACOBI method

GAUSS-SEIDEL method (cont'd)
for start vector x_0 = (21, 19)^T and right-hand side b = (0.3, 0.3)^T we get, according to our expectations, roughly twice the speed of convergence
[table: GAUSS-SEIDEL iterates x_{m,1}, x_{m,2} and errors ε_m := ‖x_m − A⁻¹b‖_∞ for m = 0, ..., 15]

Relaxation methods

relaxation methods
re-writing the linear iteration method x_{m+1} = B⁻¹(B − A)x_m + B⁻¹b as
x_{m+1} = x_m + B⁻¹(b − Ax_m), (3.1.6)
x_{m+1} can be interpreted as correction of x_m by using the vector c_m := B⁻¹(b − Ax_m)
when confining further considerations to total-step methods, the objective of relaxation is to improve the speed of convergence of method (3.1.6) via weighting the correction vector c_m; hence, we modify (3.1.6) to
x_{m+1} = x_m + ω·B⁻¹(b − Ax_m) with ω > 0

relaxation methods (cont'd)
based on x_m we search for the optimal x_{m+1} = x_m + ω·c_m in direction of c_m, i.e., the ω for which the spectral radius of the iteration matrix becomes minimal

relaxation methods (cont'd)
with
x_{m+1} = x_m + ω·B⁻¹(b − Ax_m) = (I − ω·B⁻¹A)x_m + ω·B⁻¹b (3.1.7)
and M(ω) := I − ω·B⁻¹A, N(ω) := ω·B⁻¹, the determination of the optimal ω must be such that ρ(M(ω)) becomes minimal:
ω_opt = arg min_ω ρ(M(ω))
the weighting factor ω is called relaxation parameter; method (3.1.7) is denoted under-relaxation method for ω < 1 and over-relaxation method for ω > 1

JACOBI relaxation method
according to (3.1.6) we write the JACOBI method as
x_{m+1} = x_m + D⁻¹(b − Ax_m) for m = 0, 1, 2, ...
hence, the JACOBI relaxation method looks as follows
x_{m+1} = x_m + ω·D⁻¹(b − Ax_m) = (I − ω·D⁻¹A)x_m + ω·D⁻¹b for m = 0, 1, 2, ...
with M_J(ω) := I − ω·D⁻¹A and N_J(ω) := ω·D⁻¹

JACOBI relaxation method (cont'd)
we get in component-based notation
x_{m+1,i} = x_{m,i} + ω·(b_i − Σ_{j=1}^{n} a_ij x_{m,j}) / a_ii
and finally
x_{m+1,i} = (1 − ω)·x_{m,i} + ω·(b_i − Σ_{j≠i} a_ij x_{m,j}) / a_ii
for i = 1, ..., n and m = 0, 1, 2, ...
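The relaxed update is the plain JACOBI step blended with the old iterate; a minimal sketch (helper name illustrative), run here with ω = 1 since the example matrix has a symmetric Jacobi spectrum ±√(8/35):

```python
import numpy as np

def jacobi_relaxed(A, b, x0, w, max_iter=500, tol=1e-12):
    """JOR: x_{m+1,i} = (1-w) x_{m,i} + w (b_i - sum_{j!=i} a_ij x_{m,j})/a_ii."""
    D = np.diag(A)
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        x_jac = (b - (A @ x - D * x)) / D      # plain Jacobi update
        x_new = (1 - w) * x + w * x_jac        # blend with weight w
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

x = jacobi_relaxed(np.array([[0.7, -0.4], [-0.2, 0.5]]),
                   np.array([0.3, 0.3]), np.zeros(2), w=1.0)
```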

JACOBI relaxation method (cont'd)
Theorem 3.12: Let the iteration matrix M_J of the JACOBI method have only real eigenvalues λ_1 ≤ ... ≤ λ_n with respective linearly independent eigenvectors u_1, ..., u_n, and let ρ(M_J) < 1 apply. Then the iteration matrix M_J(ω) of the JACOBI relaxation method has eigenvalues
μ_i(ω) = 1 − ω(1 − λ_i) for i = 1, ..., n,
and
ω_opt = arg min_ω ρ(M_J(ω)) = 2 / (2 − λ_1 − λ_n)
applies.

JACOBI relaxation method (cont'd)
[figure: relaxation functions f_ω(λ) := 1 − ω(1 − λ) plotted for ω = 1/2, 1, 3/2; the values f_ω(λ_1) and f_ω(λ_n) determine ρ(M_J(ω))]

JACOBI relaxation method (cont'd)
the optimal relaxation parameter can be determined via the balancing condition
f_ω(λ_n) = −f_ω(λ_1)
thus, from 1 − ω(1 − λ_n) = −(1 − ω(1 − λ_1)) under the assumption ρ(M_J) < 1 we get
ω_opt = 2 / (2 − λ_1 − λ_n) > 0
linear systems for which M_J has a symmetric distribution of eigenvalues around the origin (λ_1 = −λ_n) yield due to theorem 3.12 an optimal relaxation parameter ω_opt = 1; hence, speed of convergence cannot be accelerated

GAUSS-SEIDEL relaxation method
we consider the component-based notation of the GAUSS-SEIDEL method (3.1.5) with weighted correction vector component r_{m,i}:
x_{m+1,i} = x_{m,i} + ω·r_{m,i} with r_{m,i} = (b_i − Σ_{j<i} a_ij x_{m+1,j} − Σ_{j≥i} a_ij x_{m,j}) / a_ii
for i = 1, ..., n and m = 0, 1, 2, ...
thus, we get
(I − ω·D⁻¹L)x_{m+1} = [(1 − ω)I + ω·D⁻¹R]x_m + ω·D⁻¹b

GAUSS-SEIDEL relaxation method (cont'd)
with D⁻¹(D − ωL)x_{m+1} = D⁻¹[(1 − ω)D + ωR]x_m + ωD⁻¹b the GAUSS-SEIDEL relaxation method in the notation
x_{m+1} = (D − ωL)⁻¹[(1 − ω)D + ωR]x_m + ω(D − ωL)⁻¹b
follows, with M_GS(ω) := (D − ωL)⁻¹[(1 − ω)D + ωR] and N_GS(ω) := ω(D − ωL)⁻¹
Theorem 3.13: Let A ∈ ℝⁿˣⁿ with a_ii ≠ 0 for i = 1, ..., n, then for all ω
ρ(M_GS(ω)) ≥ |ω − 1|
applies. (Proof is lengthy!)
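A minimal sketch of this relaxed single-step (SOR) sweep (helper name and tolerances illustrative), using the ω_opt ≈ 1.0648 derived for the example system later in the slides:

```python
import numpy as np

def sor(A, b, x0, w, max_iter=500, tol=1e-12):
    """SOR sweep: x_{m+1,i} = (1-w) x_{m,i}
       + w (b_i - sum_{j<i} a_ij x_{m+1,j} - sum_{j>i} a_ij x_{m,j}) / a_ii."""
    n = len(b)
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            gs = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
            x[i] = (1 - w) * x[i] + w * gs   # relaxed Gauss-Seidel update
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

w_opt = 2 / (1 + (1 - 8 / 35) ** 0.5)        # optimal parameter for (3.1.2)
x = sor(np.array([[0.7, -0.4], [-0.2, 0.5]]),
        np.array([0.3, 0.3]), np.zeros(2), w_opt)
```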

GAUSS-SEIDEL relaxation method (cont'd)
from theorem 3.13 follows that ω ≤ 0 always leads to a divergent method; hence the initial requirement ω > 0 finds its motivation here
furthermore, the above theorem also implies the requirement ω < 2
Corollary 3.14: The GAUSS-SEIDEL relaxation method converges at most for a relaxation parameter ω ∈ (0, 2).
Theorem 3.15: Let A be hermitian and positive definite, then the GAUSS-SEIDEL relaxation method converges iff ω ∈ (0, 2).
Both proofs are lengthy.

GAUSS-SEIDEL relaxation method (cont'd)
note: it can be shown for ρ := ρ(M_J) < 1 that the spectral radius of the iteration matrix M_GS(ω) becomes minimal for
ω_opt = 2 / (1 + √(1 − ρ²))
which yields
ρ(M_GS(ω_opt)) = ω_opt − 1

GAUSS-SEIDEL relaxation method (cont'd)
example: for the linear system (3.1.2) with A and b as above we know that the eigenvalues of M_J are λ_{1,2} = ±√(8/35) and thus ρ(M_J) ≈ 0.4781 < 1 applies
hence, the optimal relaxation parameter is
ω_opt = 2 / (1 + √(1 − 8/35)) ≈ 1.0648

GAUSS-SEIDEL relaxation method (cont'd)
which yields ρ(M_GS(ω_opt)) = ω_opt − 1 ≈ 0.0648
[for comparison: ρ(M_J) ≈ 0.4781, ρ(M_GS) = ρ(M_J)² ≈ 0.2286]
using start vector x_0 = (21, 19)^T leads to the corresponding iteration
[table: GAUSS-SEIDEL relaxation iterates x_{m,1}, x_{m,2} and errors ε_m := ‖x_m − A⁻¹b‖_∞ for m = 0, ..., 15]

RICHARDSON method
based upon the basic algorithm x_{m+1} = (I − A)x_m + b, with a weighting of the correction vector r_m = b − Ax_m by a number Θ > 0 we get
x_{m+1} = x_m + Θ·r_m = (I − Θ·A)x_m + Θ·b (3.1.8)
with M_R(Θ) := I − Θ·A and N_R(Θ) := Θ·I, or in component-based notation
x_{m+1,i} = x_{m,i} + Θ·(b_i − Σ_{j=1}^{n} a_ij x_{m,j}) for i = 1, ..., n

RICHARDSON method (cont'd)
Theorem 3.16: Let A ∈ ℝⁿˣⁿ with σ(A) ⊂ ℝ⁺ and λ_max := max σ(A), λ_min := min σ(A); then the RICHARDSON method (3.1.8) converges iff
0 < Θ < 2 / λ_max
applies.
note: it can be shown that for σ(A) ⊂ ℝ⁺ with λ_max, λ_min according to above the spectral radius of iteration matrix M_R(Θ) becomes minimal for
Θ_opt = 2 / (λ_max + λ_min) (3.1.9)
which yields
ρ(M_R(Θ_opt)) = (λ_max − λ_min) / (λ_max + λ_min) (3.1.10)
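The RICHARDSON iteration with the optimal weight (3.1.9) can be sketched as follows (helper name and tolerances illustrative), again on the example system (3.1.2):

```python
import numpy as np

def richardson(A, b, x0, theta, max_iter=500, tol=1e-12):
    """Richardson iteration x_{m+1} = x_m + theta*(b - A x_m)."""
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        r = b - A @ x              # residual r_m
        x = x + theta * r          # weighted correction (3.1.8)
        if np.linalg.norm(r, np.inf) < tol:
            break
    return x

A = np.array([[0.7, -0.4], [-0.2, 0.5]])
lam = np.sort(np.linalg.eigvals(A).real)    # eigenvalues 0.3 and 0.9
theta_opt = 2 / (lam[0] + lam[-1])          # (3.1.9): theta_opt = 5/3 here
x = richardson(A, np.array([0.3, 0.3]), np.zeros(2), theta_opt)
```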

RICHARDSON method (cont'd)
given are the functions g_max, g_min: ℝ⁺ → ℝ⁺ with g_max(Θ) = |1 − Θ·λ_max| and g_min(Θ) = |1 − Θ·λ_min|; hence, under consideration of
Θ_opt = arg min_Θ ρ(M_R(Θ)) = arg min_Θ max{g_max(Θ), g_min(Θ)}
we can determine the condition
Θ_opt·λ_max − 1 = 1 − Θ_opt·λ_min
from the figure and, thus, Θ_opt = 2 / (λ_max + λ_min)

RICHARDSON method (cont'd)
example: for the linear system (3.1.2) with A and b as above and
0 = det(A − λI) = (0.7 − λ)(0.5 − λ) − 0.08 = (λ − 0.6)² − 0.09
we obtain eigenvalues λ_1 = 0.9 and λ_2 = 0.3 and, thus, a spectrum σ(A) = {0.3, 0.9} ⊂ ℝ⁺
theorem 3.16 ensures convergence of the RICHARDSON method for all Θ with 0 < Θ < 2/0.9 = 20/9
hence, with (3.1.9) we get Θ_opt = 2 / (0.9 + 0.3) = 5/3

RICHARDSON method (cont'd)
finally, with (3.1.10) we get ρ(M_R(Θ_opt)) = (0.9 − 0.3) / (0.9 + 0.3) = 0.5
using start vector x_0 = (21, 19)^T leads to the corresponding iteration
[table: RICHARDSON iterates x_{m,1}, x_{m,2} and errors ε_m := ‖x_m − A⁻¹b‖_∞ for m = 0, ..., 15]



More information

Chapter 7. Iterative methods for large sparse linear systems. 7.1 Sparse matrix algebra. Large sparse matrices

Chapter 7. Iterative methods for large sparse linear systems. 7.1 Sparse matrix algebra. Large sparse matrices Chapter 7 Iterative methods for large sparse linear systems In this chapter we revisit the problem of solving linear systems of equations, but now in the context of large sparse systems. The price to pay

More information

Lab 1: Iterative Methods for Solving Linear Systems

Lab 1: Iterative Methods for Solving Linear Systems Lab 1: Iterative Methods for Solving Linear Systems January 22, 2017 Introduction Many real world applications require the solution to very large and sparse linear systems where direct methods such as

More information

PowerPoints organized by Dr. Michael R. Gustafson II, Duke University

PowerPoints organized by Dr. Michael R. Gustafson II, Duke University Part 3 Chapter 10 LU Factorization PowerPoints organized by Dr. Michael R. Gustafson II, Duke University All images copyright The McGraw-Hill Companies, Inc. Permission required for reproduction or display.

More information

10.2 ITERATIVE METHODS FOR SOLVING LINEAR SYSTEMS. The Jacobi Method

10.2 ITERATIVE METHODS FOR SOLVING LINEAR SYSTEMS. The Jacobi Method 54 CHAPTER 10 NUMERICAL METHODS 10. ITERATIVE METHODS FOR SOLVING LINEAR SYSTEMS As a numerical technique, Gaussian elimination is rather unusual because it is direct. That is, a solution is obtained after

More information

DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix to upper-triangular

DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix to upper-triangular form) Given: matrix C = (c i,j ) n,m i,j=1 ODE and num math: Linear algebra (N) [lectures] c phabala 2016 DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix

More information

Jordan Journal of Mathematics and Statistics (JJMS) 5(3), 2012, pp A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS

Jordan Journal of Mathematics and Statistics (JJMS) 5(3), 2012, pp A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS Jordan Journal of Mathematics and Statistics JJMS) 53), 2012, pp.169-184 A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS ADEL H. AL-RABTAH Abstract. The Jacobi and Gauss-Seidel iterative

More information

Numerical Programming I (for CSE)

Numerical Programming I (for CSE) Technische Universität München WT 1/13 Fakultät für Mathematik Prof. Dr. M. Mehl B. Gatzhammer January 1, 13 Numerical Programming I (for CSE) Tutorial 1: Iterative Methods 1) Relaxation Methods a) Let

More information

Next topics: Solving systems of linear equations

Next topics: Solving systems of linear equations Next topics: Solving systems of linear equations 1 Gaussian elimination (today) 2 Gaussian elimination with partial pivoting (Week 9) 3 The method of LU-decomposition (Week 10) 4 Iterative techniques:

More information

A Refinement of Gauss-Seidel Method for Solving. of Linear System of Equations

A Refinement of Gauss-Seidel Method for Solving. of Linear System of Equations Int. J. Contemp. Math. Sciences, Vol. 6, 0, no. 3, 7 - A Refinement of Gauss-Seidel Method for Solving of Linear System of Equations V. B. Kumar Vatti and Tesfaye Kebede Eneyew Department of Engineering

More information

Iterative Methods for Solving A x = b

Iterative Methods for Solving A x = b Iterative Methods for Solving A x = b A good (free) online source for iterative methods for solving A x = b is given in the description of a set of iterative solvers called templates found at netlib: http

More information

Splitting Iteration Methods for Positive Definite Linear Systems

Splitting Iteration Methods for Positive Definite Linear Systems Splitting Iteration Methods for Positive Definite Linear Systems Zhong-Zhi Bai a State Key Lab. of Sci./Engrg. Computing Inst. of Comput. Math. & Sci./Engrg. Computing Academy of Mathematics and System

More information

Review of matrices. Let m, n IN. A rectangle of numbers written like A =

Review of matrices. Let m, n IN. A rectangle of numbers written like A = Review of matrices Let m, n IN. A rectangle of numbers written like a 11 a 12... a 1n a 21 a 22... a 2n A =...... a m1 a m2... a mn where each a ij IR is called a matrix with m rows and n columns or an

More information

Math 1080: Numerical Linear Algebra Chapter 4, Iterative Methods

Math 1080: Numerical Linear Algebra Chapter 4, Iterative Methods Math 1080: Numerical Linear Algebra Chapter 4, Iterative Methods M. M. Sussman sussmanm@math.pitt.edu Office Hours: MW 1:45PM-2:45PM, Thack 622 March 2015 1 / 70 Topics Introduction to Iterative Methods

More information

Recall : Eigenvalues and Eigenvectors

Recall : Eigenvalues and Eigenvectors Recall : Eigenvalues and Eigenvectors Let A be an n n matrix. If a nonzero vector x in R n satisfies Ax λx for a scalar λ, then : The scalar λ is called an eigenvalue of A. The vector x is called an eigenvector

More information

Introduction. Math 1080: Numerical Linear Algebra Chapter 4, Iterative Methods. Example: First Order Richardson. Strategy

Introduction. Math 1080: Numerical Linear Algebra Chapter 4, Iterative Methods. Example: First Order Richardson. Strategy Introduction Math 1080: Numerical Linear Algebra Chapter 4, Iterative Methods M. M. Sussman sussmanm@math.pitt.edu Office Hours: MW 1:45PM-2:45PM, Thack 622 Solve system Ax = b by repeatedly computing

More information

OUTLINE ffl CFD: elliptic pde's! Ax = b ffl Basic iterative methods ffl Krylov subspace methods ffl Preconditioning techniques: Iterative methods ILU

OUTLINE ffl CFD: elliptic pde's! Ax = b ffl Basic iterative methods ffl Krylov subspace methods ffl Preconditioning techniques: Iterative methods ILU Preconditioning Techniques for Solving Large Sparse Linear Systems Arnold Reusken Institut für Geometrie und Praktische Mathematik RWTH-Aachen OUTLINE ffl CFD: elliptic pde's! Ax = b ffl Basic iterative

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences)

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) Lecture 19: Computing the SVD; Sparse Linear Systems Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical

More information

CLASSICAL ITERATIVE METHODS

CLASSICAL ITERATIVE METHODS CLASSICAL ITERATIVE METHODS LONG CHEN In this notes we discuss classic iterative methods on solving the linear operator equation (1) Au = f, posed on a finite dimensional Hilbert space V = R N equipped

More information

Parallel Numerics, WT 2016/ Iterative Methods for Sparse Linear Systems of Equations. page 1 of 1

Parallel Numerics, WT 2016/ Iterative Methods for Sparse Linear Systems of Equations. page 1 of 1 Parallel Numerics, WT 2016/2017 5 Iterative Methods for Sparse Linear Systems of Equations page 1 of 1 Contents 1 Introduction 1.1 Computer Science Aspects 1.2 Numerical Problems 1.3 Graphs 1.4 Loop Manipulations

More information

Linear Algebra: Matrix Eigenvalue Problems

Linear Algebra: Matrix Eigenvalue Problems CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given

More information

MAT 1302B Mathematical Methods II

MAT 1302B Mathematical Methods II MAT 1302B Mathematical Methods II Alistair Savage Mathematics and Statistics University of Ottawa Winter 2015 Lecture 19 Alistair Savage (uottawa) MAT 1302B Mathematical Methods II Winter 2015 Lecture

More information

A NEW EFFECTIVE PRECONDITIONED METHOD FOR L-MATRICES

A NEW EFFECTIVE PRECONDITIONED METHOD FOR L-MATRICES Journal of Mathematical Sciences: Advances and Applications Volume, Number 2, 2008, Pages 3-322 A NEW EFFECTIVE PRECONDITIONED METHOD FOR L-MATRICES Department of Mathematics Taiyuan Normal University

More information

MTH501- Linear Algebra MCQS MIDTERM EXAMINATION ~ LIBRIANSMINE ~

MTH501- Linear Algebra MCQS MIDTERM EXAMINATION ~ LIBRIANSMINE ~ MTH501- Linear Algebra MCQS MIDTERM EXAMINATION ~ LIBRIANSMINE ~ Question No: 1 (Marks: 1) If for a linear transformation the equation T(x) =0 has only the trivial solution then T is One-to-one Onto Question

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning

AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 18 Outline

More information

Review of Linear Algebra

Review of Linear Algebra Review of Linear Algebra Definitions An m n (read "m by n") matrix, is a rectangular array of entries, where m is the number of rows and n the number of columns. 2 Definitions (Con t) A is square if m=

More information

Math 5630: Iterative Methods for Systems of Equations Hung Phan, UMass Lowell March 22, 2018

Math 5630: Iterative Methods for Systems of Equations Hung Phan, UMass Lowell March 22, 2018 1 Linear Systems Math 5630: Iterative Methods for Systems of Equations Hung Phan, UMass Lowell March, 018 Consider the system 4x y + z = 7 4x 8y + z = 1 x + y + 5z = 15. We then obtain x = 1 4 (7 + y z)

More information

Background. Background. C. T. Kelley NC State University tim C. T. Kelley Background NCSU, Spring / 58

Background. Background. C. T. Kelley NC State University tim C. T. Kelley Background NCSU, Spring / 58 Background C. T. Kelley NC State University tim kelley@ncsu.edu C. T. Kelley Background NCSU, Spring 2012 1 / 58 Notation vectors, matrices, norms l 1 : max col sum... spectral radius scaled integral norms

More information

Numerical Methods I Non-Square and Sparse Linear Systems

Numerical Methods I Non-Square and Sparse Linear Systems Numerical Methods I Non-Square and Sparse Linear Systems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 25th, 2014 A. Donev (Courant

More information

ANALYTICAL MATHEMATICS FOR APPLICATIONS 2018 LECTURE NOTES 3

ANALYTICAL MATHEMATICS FOR APPLICATIONS 2018 LECTURE NOTES 3 ANALYTICAL MATHEMATICS FOR APPLICATIONS 2018 LECTURE NOTES 3 ISSUED 24 FEBRUARY 2018 1 Gaussian elimination Let A be an (m n)-matrix Consider the following row operations on A (1) Swap the positions any

More information

Lecture 16 Methods for System of Linear Equations (Linear Systems) Songting Luo. Department of Mathematics Iowa State University

Lecture 16 Methods for System of Linear Equations (Linear Systems) Songting Luo. Department of Mathematics Iowa State University Lecture 16 Methods for System of Linear Equations (Linear Systems) Songting Luo Department of Mathematics Iowa State University MATH 481 Numerical Methods for Differential Equations Songting Luo ( Department

More information

Linear Algebra Review. Vectors

Linear Algebra Review. Vectors Linear Algebra Review 9/4/7 Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa (UCSD) Cogsci 8F Linear Algebra review Vectors

More information

c 1995 Society for Industrial and Applied Mathematics Vol. 37, No. 1, pp , March

c 1995 Society for Industrial and Applied Mathematics Vol. 37, No. 1, pp , March SIAM REVIEW. c 1995 Society for Industrial and Applied Mathematics Vol. 37, No. 1, pp. 93 97, March 1995 008 A UNIFIED PROOF FOR THE CONVERGENCE OF JACOBI AND GAUSS-SEIDEL METHODS * ROBERTO BAGNARA Abstract.

More information

Introduction and Stationary Iterative Methods

Introduction and Stationary Iterative Methods Introduction and C. T. Kelley NC State University tim kelley@ncsu.edu Research Supported by NSF, DOE, ARO, USACE DTU ITMAN, 2011 Outline Notation and Preliminaries General References What you Should Know

More information

MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators.

MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators. MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators. Adjoint operator and adjoint matrix Given a linear operator L on an inner product space V, the adjoint of L is a transformation

More information

LINEAR SYSTEMS (11) Intensive Computation

LINEAR SYSTEMS (11) Intensive Computation LINEAR SYSTEMS () Intensive Computation 27-8 prof. Annalisa Massini Viviana Arrigoni EXACT METHODS:. GAUSSIAN ELIMINATION. 2. CHOLESKY DECOMPOSITION. ITERATIVE METHODS:. JACOBI. 2. GAUSS-SEIDEL 2 CHOLESKY

More information

30.5. Iterative Methods for Systems of Equations. Introduction. Prerequisites. Learning Outcomes

30.5. Iterative Methods for Systems of Equations. Introduction. Prerequisites. Learning Outcomes Iterative Methods for Systems of Equations 0.5 Introduction There are occasions when direct methods (like Gaussian elimination or the use of an LU decomposition) are not the best way to solve a system

More information

Quantum Computing Lecture 2. Review of Linear Algebra

Quantum Computing Lecture 2. Review of Linear Algebra Quantum Computing Lecture 2 Review of Linear Algebra Maris Ozols Linear algebra States of a quantum system form a vector space and their transformations are described by linear operators Vector spaces

More information

Course Notes: Week 1

Course Notes: Week 1 Course Notes: Week 1 Math 270C: Applied Numerical Linear Algebra 1 Lecture 1: Introduction (3/28/11) We will focus on iterative methods for solving linear systems of equations (and some discussion of eigenvalues

More information

Math 471 (Numerical methods) Chapter 3 (second half). System of equations

Math 471 (Numerical methods) Chapter 3 (second half). System of equations Math 47 (Numerical methods) Chapter 3 (second half). System of equations Overlap 3.5 3.8 of Bradie 3.5 LU factorization w/o pivoting. Motivation: ( ) A I Gaussian Elimination (U L ) where U is upper triangular

More information

The amount of work to construct each new guess from the previous one should be a small multiple of the number of nonzeros in A.

The amount of work to construct each new guess from the previous one should be a small multiple of the number of nonzeros in A. AMSC/CMSC 661 Scientific Computing II Spring 2005 Solution of Sparse Linear Systems Part 2: Iterative methods Dianne P. O Leary c 2005 Solving Sparse Linear Systems: Iterative methods The plan: Iterative

More information

Computational Linear Algebra

Computational Linear Algebra Computational Linear Algebra PD Dr. rer. nat. habil. Ralf-Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2018/19 Part 6: Some Other Stuff PD Dr.

More information

LINEAR ALGEBRA: NUMERICAL METHODS. Version: August 12,

LINEAR ALGEBRA: NUMERICAL METHODS. Version: August 12, LINEAR ALGEBRA: NUMERICAL METHODS. Version: August 12, 2000 74 6 Summary Here we summarize the most important information about theoretical and numerical linear algebra. MORALS OF THE STORY: I. Theoretically

More information

j=1 u 1jv 1j. 1/ 2 Lemma 1. An orthogonal set of vectors must be linearly independent.

j=1 u 1jv 1j. 1/ 2 Lemma 1. An orthogonal set of vectors must be linearly independent. Lecture Notes: Orthogonal and Symmetric Matrices Yufei Tao Department of Computer Science and Engineering Chinese University of Hong Kong taoyf@cse.cuhk.edu.hk Orthogonal Matrix Definition. Let u = [u

More information

Iterative Methods for Solving Linear Systems

Iterative Methods for Solving Linear Systems Chapter 8 Iterative Methods for Solving Linear Systems 8.1 Convergence of Sequences of Vectors and Matrices In Chapter 4 we have discussed some of the main methods for solving systems of linear equations.

More information

Computational Methods. Systems of Linear Equations

Computational Methods. Systems of Linear Equations Computational Methods Systems of Linear Equations Manfred Huber 2010 1 Systems of Equations Often a system model contains multiple variables (parameters) and contains multiple equations Multiple equations

More information

Kasetsart University Workshop. Multigrid methods: An introduction

Kasetsart University Workshop. Multigrid methods: An introduction Kasetsart University Workshop Multigrid methods: An introduction Dr. Anand Pardhanani Mathematics Department Earlham College Richmond, Indiana USA pardhan@earlham.edu A copy of these slides is available

More information

Adaptive algebraic multigrid methods in lattice computations

Adaptive algebraic multigrid methods in lattice computations Adaptive algebraic multigrid methods in lattice computations Karsten Kahl Bergische Universität Wuppertal January 8, 2009 Acknowledgements Matthias Bolten, University of Wuppertal Achi Brandt, Weizmann

More information

Math 3191 Applied Linear Algebra

Math 3191 Applied Linear Algebra Math 9 Applied Linear Algebra Lecture 9: Diagonalization Stephen Billups University of Colorado at Denver Math 9Applied Linear Algebra p./9 Section. Diagonalization The goal here is to develop a useful

More information

Jacobi s Iterative Method for Solving Linear Equations and the Simulation of Linear CNN

Jacobi s Iterative Method for Solving Linear Equations and the Simulation of Linear CNN Jacobi s Iterative Method for Solving Linear Equations and the Simulation of Linear CNN Vedat Tavsanoglu Yildiz Technical University 9 August 006 1 Paper Outline Raster simulation is an image scanning-processing

More information