Lecture Note 7: Iterative methods for solving linear systems. Xiaoqun Zhang Shanghai Jiao Tong University


Last updated: December 24, 2014

1.1 Review on linear algebra

Norms of vectors and matrices

Vector norm: a vector norm on $\mathbb{R}^n$ is a function $\|\cdot\| : \mathbb{R}^n \to \mathbb{R}$ with the following properties:

- $\|x\| \geq 0$ for all $x \in \mathbb{R}^n$;
- $\|x\| = 0$ if and only if $x = 0$;
- $\|\alpha x\| = |\alpha| \, \|x\|$ for all $\alpha \in \mathbb{R}$ and $x \in \mathbb{R}^n$;
- $\|x + y\| \leq \|x\| + \|y\|$.

Examples:

- $l_2$: $\|x\|_2 = \big( \sum_{i=1}^n x_i^2 \big)^{1/2}$
- $l_\infty$: $\|x\|_\infty = \max_{1 \leq i \leq n} |x_i|$
- $l_1$: $\|x\|_1 = \sum_{1 \leq i \leq n} |x_i|$
- $l_p$: $\|x\|_p = \big( \sum_{1 \leq i \leq n} |x_i|^p \big)^{1/p}$, for $p \geq 1$

The $l_2$ norm is also called the Euclidean norm; it represents the usual notion of distance from the origin when $x$ is in $\mathbb{R}$, $\mathbb{R}^2$, or $\mathbb{R}^3$.

[Figures: the unit balls of the $l_1$, $l_2$, $l_\infty$ norms in $\mathbb{R}^2$ and $\mathbb{R}^3$.]

Distance between vectors: the norm of the difference of two vectors, $\|x - y\|$. It can be used to measure the error between the true solution and an approximate one.

Convergence of a sequence: $x^{(k)} \to x$ means $\|x^{(k)} - x\| \to 0$; for the $l_\infty$ norm this is equivalent to $\lim_{k \to \infty} x_i^{(k)} = x_i$ for each $i$.

Important inequalities:

(Cauchy-Schwarz inequality) For each $x = (x_1, x_2, \dots, x_n)^t$ and $y = (y_1, \dots, y_n)^t$ in $\mathbb{R}^n$,
$$|x^t y| = \Big| \sum_{i=1}^n x_i y_i \Big| \leq \Big( \sum_{i=1}^n x_i^2 \Big)^{1/2} \Big( \sum_{i=1}^n y_i^2 \Big)^{1/2} = \|x\|_2 \, \|y\|_2 .$$

Also, $\|x\|_\infty \leq \|x\|_2 \leq \sqrt{n} \, \|x\|_\infty$.

Matrix norm: a matrix norm on the set of all $n \times n$ matrices is a real-valued function $\|\cdot\|$ defined on this set, satisfying for all $n \times n$ matrices $A$ and $B$ and all real numbers $\alpha$:

- $\|A\| \geq 0$;
- $\|A\| = 0$ if and only if $A$ is $O$, the matrix with all 0 entries;
- $\|\alpha A\| = |\alpha| \, \|A\|$;
- $\|A + B\| \leq \|A\| + \|B\|$;
- $\|AB\| \leq \|A\| \, \|B\|$.

Distance between two matrices: $\|A - B\|$.

Natural/induced norm: if $\|\cdot\|$ is a vector norm on $\mathbb{R}^n$, then
$$\|A\| = \max_{\|x\| = 1} \|Ax\|$$
is a matrix norm, called the natural, or induced, matrix norm. We also have
$$\|A\| = \max_{\|x\| = 1} \|Ax\| = \max_{z \neq 0} \frac{\|Az\|}{\|z\|},$$
and this definition leads to $\|Az\| \leq \|A\| \, \|z\|$.

Examples of natural norms:
$$\|A\|_\infty = \max_{\|x\|_\infty = 1} \|Ax\|_\infty = \max_{1 \leq i \leq n} \sum_{j=1}^n |a_{ij}|.$$

Proof. Show $\|A\|_\infty \leq \max_{1 \leq i \leq n} \sum_{j=1}^n |a_{ij}|$. Take any $x$ with $1 = \|x\|_\infty = \max_{1 \leq j \leq n} |x_j|$. Then
$$\|Ax\|_\infty = \max_{1 \leq i \leq n} |(Ax)_i| = \max_{1 \leq i \leq n} \Big| \sum_{j=1}^n a_{ij} x_j \Big| \leq \max_{1 \leq i \leq n} \sum_{j=1}^n |a_{ij}| \cdot \max_{1 \leq j \leq n} |x_j|.$$
But $\|x\|_\infty = \max_{1 \leq j \leq n} |x_j| = 1$, so the inequality is proved.

Show $\|A\|_\infty \geq \max_{1 \leq i \leq n} \sum_{j=1}^n |a_{ij}|$. Let $p$ be an index with $\sum_{j=1}^n |a_{pj}| = \max_{1 \leq i \leq n} \sum_{j=1}^n |a_{ij}|$, and let $x$ be the vector with $x_j = 1$ if $a_{pj} \geq 0$ and $x_j = -1$ otherwise. Then $\|x\|_\infty = 1$ and $a_{pj} x_j = |a_{pj}|$ for all $j = 1, \dots, n$. So
$$\|Ax\|_\infty = \max_{1 \leq i \leq n} \Big| \sum_{j=1}^n a_{ij} x_j \Big| \geq \Big| \sum_{j=1}^n a_{pj} x_j \Big| = \max_{1 \leq i \leq n} \sum_{j=1}^n |a_{ij}|.$$
This implies
$$\|A\|_\infty = \max_{\|x\|_\infty = 1} \|Ax\|_\infty \geq \max_{1 \leq i \leq n} \sum_{j=1}^n |a_{ij}|. \qquad \square$$
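These formulas are easy to sanity-check numerically. The following Python sketch (the example vector and matrix are ours, chosen arbitrarily) verifies the norm inequality above and the row-sum formula just proved against NumPy's built-in routines:

```python
import numpy as np

x = np.array([1.0, -2.0, 3.0])
n = x.size

l1   = np.sum(np.abs(x))        # ||x||_1
l2   = np.sqrt(np.sum(x**2))    # ||x||_2
linf = np.max(np.abs(x))        # ||x||_inf

# ||x||_inf <= ||x||_2 <= sqrt(n) ||x||_inf
assert linf <= l2 <= np.sqrt(n) * linf

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])
# Row-sum formula for the induced infinity-norm:
row_sum = np.max(np.sum(np.abs(A), axis=1))
assert np.isclose(row_sum, np.linalg.norm(A, np.inf))
print(l1, l2, linf, row_sum)
```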

$$\|A\|_1 = \max_{\|x\|_1 = 1} \|Ax\|_1 = \max_{1 \leq j \leq n} \sum_{i=1}^n |a_{ij}|,$$
$$\|A\|_2 = \max_{\|x\|_2 = 1} \|Ax\|_2 = \sqrt{\rho(A^T A)},$$
where $\rho(A^T A)$ denotes the maximal eigenvalue of $A^T A$. Matlab command: norm(A, p), with p = 1, 2, or Inf.

Non-natural norm: the Frobenius norm
$$\|A\|_F = \Big( \sum_{i=1}^n \sum_{j=1}^n |a_{ij}|^2 \Big)^{1/2}.$$

Some useful inequalities for natural norms:

- $\|A\|_2 \leq \|A\|_F \leq \sqrt{n} \, \|A\|_2$;
- $\|Ax\|_2 \leq \|A\|_F \, \|x\|_2$;
- $\rho(A) \leq \|A\|$ for any natural norm.

1.2 Iterative methods for linear systems

We will introduce the Jacobi, Gauss-Seidel, and SOR methods, classic iterative methods that date to the late eighteenth century. Iterative techniques are seldom used for solving linear systems of small dimension, since the time required for sufficient accuracy exceeds that required for direct techniques such as Gaussian elimination. For large systems with a high percentage of 0 entries, however, these techniques are efficient in terms of both computer storage and computation.

A numerical iterative method to solve $Ax = b$ starts with an initial approximation $x^{(0)}$ to the solution $x$ and generates a sequence of vectors $\{x^{(k)}\}_{k=0}^{\infty}$ that converges to $x$. Iterative techniques involve a process that converts the system $Ax = b$ into an equivalent system of the form $x = Tx + c$ for some fixed matrix $T$ and vector $c$.

Fixed point iteration:
$$x^{(k+1)} = T x^{(k)} + c$$
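All the stationary methods that follow (Jacobi, Gauss-Seidel, SOR) are instances of this fixed-point scheme, differing only in how $T$ and $c$ are built from $A$. A minimal sketch (the function name and stopping rule are ours, not from the notes):

```python
import numpy as np

def fixed_point_iteration(T, c, x0, tol=1e-8, max_iter=1000):
    """Iterate x <- T x + c until the l_inf change drops below tol."""
    x = x0.astype(float)
    for k in range(1, max_iter + 1):
        x_new = T @ x + c
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new, k
        x = x_new
    return x, max_iter
```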

Jacobi iterative method

Transfer the linear system: the Jacobi iterative method is obtained by solving the $i$-th equation in $Ax = b$ for $x_i$ to obtain (provided $a_{ii} \neq 0$)
$$x_i = \frac{1}{a_{ii}} \Big( b_i - \sum_{j \neq i} a_{ij} x_j \Big), \quad i = 1, 2, \dots, n,$$
and generating each $x_i^{(k)}$ from $x^{(k-1)}$, for $k \geq 1$, by
$$x_i^{(k)} = \frac{1}{a_{ii}} \Big( -\sum_{j \neq i} a_{ij} x_j^{(k-1)} + b_i \Big), \quad i = 1, 2, \dots, n. \qquad (1.1)$$

Example: solve
$$\begin{aligned} 10x_1 - x_2 + 2x_3 &= 6 \\ -x_1 + 11x_2 - x_3 + 3x_4 &= 25 \\ 2x_1 - x_2 + 10x_3 - x_4 &= -11 \\ 3x_2 - x_3 + 8x_4 &= 15 \end{aligned}$$
with solution $x = (1, 2, -1, 1)^t$. Starting from $x^{(0)} = (0, 0, 0, 0)^t$,
$$x^{(1)} = (0.6000, 2.2727, -1.1000, 1.8750)^t, \quad \dots, \quad x^{(10)} = (1.0001, 1.9998, -0.9998, 0.9998)^t,$$
and the relative error with respect to $l_\infty$ is less than $10^{-3}$.

Matrix form. Decompose $A = D - L - U$, where $D$ is the diagonal matrix whose diagonal entries are those of $A$, $-L$ is the strictly lower-triangular part of $A$, and $-U$ is the strictly upper-triangular part of $A$. Then $Ax = b$ becomes $Dx = (L + U)x + b$; if $D^{-1}$ exists, that is, $a_{ii} \neq 0$ for each $i$, then
$$x^{(k)} = D^{-1}(L + U) x^{(k-1)} + D^{-1} b, \quad k = 1, 2, \dots$$
Let $T_j = D^{-1}(L + U)$ and $c_j = D^{-1} b$; then $x^{(k)} = T_j x^{(k-1)} + c_j$. In practice, this form is used only for theoretical purposes, while (1.1) is used in computation.

In the previous example:
$$T_j = \begin{pmatrix} 0 & 1/10 & -1/5 & 0 \\ 1/11 & 0 & 1/11 & -3/11 \\ -1/5 & 1/10 & 0 & 1/10 \\ 0 & -3/8 & 1/8 & 0 \end{pmatrix}, \qquad c_j = \begin{pmatrix} 3/5 \\ 25/11 \\ -11/10 \\ 15/8 \end{pmatrix}.$$
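As a quick check of the matrix form, here is a sketch (our code, not part of the original notes) that builds $T_j$ and $c_j$ for this example and iterates ten times:

```python
import numpy as np

A = np.array([[10., -1.,  2.,  0.],
              [-1., 11., -1.,  3.],
              [ 2., -1., 10., -1.],
              [ 0.,  3., -1.,  8.]])
b = np.array([6., 25., -11., 15.])

D = np.diag(np.diag(A))
L = -np.tril(A, -1)   # with the sign convention A = D - L - U
U = -np.triu(A,  1)

Tj = np.linalg.solve(D, L + U)   # T_j = D^{-1}(L + U)
cj = np.linalg.solve(D, b)       # c_j = D^{-1} b

x = np.zeros(4)
for _ in range(10):
    x = Tj @ x + cj
print(x)   # approximately (1.0001, 1.9998, -0.9998, 0.9998)
```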

Jacobi iterative algorithm, to solve $Ax = b$ given an initial approximation $x^{(0)}$:

Input: the number of equations and unknowns $n$; the entries $a_{ij}$ of $A$; the entries $b_i$ of $b$; the initial guess $x^{(0)}$; the tolerance TOL; the maximum number of iterations $N$.
Output: the approximate solution $x$, or a message that the number of iterations was exceeded.

1. Set $k = 1$.
2. While $k \leq N$ do:
   (a) For $i = 1, \dots, n$, set $x_i = \frac{1}{a_{ii}} \big[ -\sum_{j \neq i} a_{ij} x_j^{(0)} + b_i \big]$.
   (b) If $\|x - x^{(0)}\| < \text{TOL}$, then output $x$; stop.
   (c) Set $k = k + 1$.
   (d) For $i = 1, \dots, n$, set $x_i^{(0)} = x_i$.
3. Output "maximum number of iterations exceeded"; stop.

Comments on the algorithm:

- The algorithm requires that $a_{ii} \neq 0$ for each $i = 1, 2, \dots, n$. If one of the $a_{ii}$ entries is 0 and the system is nonsingular, a reordering of the equations can be performed so that no $a_{ii} = 0$. To speed convergence, the equations should be arranged so that $a_{ii}$ is as large as possible.
- Another possible stopping criterion is to iterate until the relative error $\|x^{(k)} - x^{(k-1)}\| / \|x^{(k)}\|$ is smaller than some tolerance. For this purpose, any convenient norm can be used, the usual being the $l_\infty$ norm.

Gauss-Seidel method

In the Jacobi method, only the components of $x^{(k-1)}$ are used to compute the components $x_i^{(k)}$ of $x^{(k)}$. But for $i > 1$, the components $x_1^{(k)}, x_2^{(k)}, \dots, x_{i-1}^{(k)}$ of $x^{(k)}$ have already been computed and are expected to be better approximations to the actual solutions $x_1, \dots, x_{i-1}$ than are $x_1^{(k-1)}, \dots, x_{i-1}^{(k-1)}$. It seems reasonable, then, to compute $x_i^{(k)}$ using these most recently calculated values. This gives the Gauss-Seidel iterative technique (implemented in the sketch below):
$$x_i^{(k)} = \frac{1}{a_{ii}} \Big[ b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} \Big]. \qquad (1.2)$$
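A minimal sketch of (1.2) in code (the function name and stopping rule are ours), which can be run on the example that follows:

```python
import numpy as np

def gauss_seidel(A, b, x0, tol=1e-3, max_iter=100):
    """Element-wise Gauss-Seidel sweeps per formula (1.2)."""
    n = len(b)
    x = x0.astype(float)
    for k in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(n):
            s1 = A[i, :i] @ x[:i]          # updated values x_j^(k), j < i
            s2 = A[i, i+1:] @ x_old[i+1:]  # old values x_j^(k-1), j > i
            x[i] = (b[i] - s1 - s2) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) <= tol * np.linalg.norm(x, np.inf):
            return x, k
    return x, max_iter
```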

Example: consider again
$$\begin{aligned} 10x_1 - x_2 + 2x_3 &= 6 \\ -x_1 + 11x_2 - x_3 + 3x_4 &= 25 \\ 2x_1 - x_2 + 10x_3 - x_4 &= -11 \\ 3x_2 - x_3 + 8x_4 &= 15 \end{aligned}$$
with solution $x = (1, 2, -1, 1)^t$, starting from $x^{(0)} = (0, 0, 0, 0)^t$ and iterating until the relative error satisfies
$$\frac{\|x^{(k)} - x^{(k-1)}\|_\infty}{\|x^{(k)}\|_\infty} \leq 10^{-3}.$$
For the Gauss-Seidel method, with $x^{(0)} = (0, 0, 0, 0)^t$ we obtain $x^{(1)} = (0.6000, 2.3272, -0.9873, 0.8789)^t$, and by $x^{(5)} = (1.0001, 2.0000, -1.0000, 1.0000)^t$ the relative error is below $10^{-3}$, so $x^{(5)}$ is accepted as a reasonable approximation to the solution. Note that, in the earlier example, Jacobi's method required twice as many iterations for the same accuracy.

Matrix form: writing the iteration first in equation form and then using the definitions of $D$, $L$, and $U$ given previously, we have the Gauss-Seidel method represented by
$$(D - L) x^{(k)} = U x^{(k-1)} + b, \quad \text{i.e.,} \quad x^{(k)} = (D - L)^{-1} U x^{(k-1)} + (D - L)^{-1} b.$$
Letting $T_g = (D - L)^{-1} U$ and $c_g = (D - L)^{-1} b$ gives the Gauss-Seidel technique the form
$$x^{(k)} = T_g x^{(k-1)} + c_g.$$
For the lower-triangular matrix $D - L$ to be nonsingular, it is necessary and sufficient that $a_{ii} \neq 0$ for each $i = 1, 2, \dots, n$.

Convergence

Convergent matrix: an $n \times n$ matrix $A$ is convergent if
$$\lim_{k \to \infty} (A^k)_{ij} = 0, \quad \text{for each } i, j = 1, 2, \dots, n.$$

Theorem: the following statements are equivalent:

- $A$ is a convergent matrix;
- $\lim_{n \to \infty} \|A^n\| = 0$ for some natural norm;
- $\lim_{n \to \infty} \|A^n\| = 0$ for all natural norms;
- $\rho(A) < 1$;
- $\lim_{n \to \infty} A^n x = 0$ for every $x$.
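The equivalence between $\rho(A) < 1$ and vanishing powers is easy to observe numerically; a small sketch (the test matrix is ours):

```python
import numpy as np

T = np.array([[0.5, 0.3],
              [0.1, 0.4]])
rho = np.max(np.abs(np.linalg.eigvals(T)))
print("rho(T) =", rho)   # about 0.63 < 1, so T is convergent

P = np.linalg.matrix_power(T, 50)
print("||T^50||_inf =", np.linalg.norm(P, np.inf))   # essentially 0
```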

Convergence results for general iteration methods: consider
$$x^{(k)} = T x^{(k-1)} + c, \quad k = 1, 2, \dots,$$
where $x^{(0)}$ is arbitrary.

Lemma: if the spectral radius satisfies $\rho(T) < 1$, then $(I - T)^{-1}$ exists, and
$$(I - T)^{-1} = I + T + T^2 + \cdots = \sum_{j=0}^{\infty} T^j.$$

Proof. If $Tx = \lambda x$, then $(I - T)x = (1 - \lambda)x$; since $|\lambda| \leq \rho(T) < 1$, $\lambda = 1$ is not an eigenvalue of $T$, so $I - T$ is invertible. Let $S_m = I + T + T^2 + \cdots + T^m$; then $(I - T)S_m = I - T^{m+1}$. Since $\rho(T) < 1$, $T$ is convergent, so
$$\lim_{m \to \infty} (I - T)S_m = \lim_{m \to \infty} (I - T^{m+1}) = I. \qquad \square$$

Theorem: the iteration $x^{(k)} = T x^{(k-1)} + c$, $k = 1, 2, \dots$, converges to the unique solution of $x = Tx + c$ for any $x^{(0)}$ if and only if $\rho(T) < 1$.

Proof. Assume $\rho(T) < 1$. Then
$$x^{(k)} = T^k x^{(0)} + (T^{k-1} + \cdots + T + I)c,$$
so $\lim_{k \to \infty} x^{(k)} = 0 + (I - T)^{-1} c$. Thus $x^{(k)}$ converges to $x = (I - T)^{-1} c$, which satisfies $x = Tx + c$.

To prove the converse, let $x$ be the unique solution to $x = Tx + c$ and let $z$ be an arbitrary vector. Define $x^{(0)} = x - z$ and, for $k \geq 1$, $x^{(k)} = T x^{(k-1)} + c$; then $x^{(k)}$ converges to $x$ and
$$x - x^{(k)} = T(x - x^{(k-1)}) = \cdots = T^k (x - x^{(0)}) = T^k z,$$
thus $\lim_{k \to \infty} T^k z = \lim_{k \to \infty} (x - x^{(k)}) = 0$. Since $z \in \mathbb{R}^n$ is arbitrary, by the theorem on convergent matrices $T$ is convergent and $\rho(T) < 1$. $\square$

Corollary: if $\|T\| < 1$ for any natural matrix norm and $c$ is a given vector, then the sequence $\{x^{(k)}\}$ converges, for any $x^{(0)}$, to $x$ with $x = Tx + c$, and the error bounds
$$\|x - x^{(k)}\| \leq \|T\|^k \|x^{(0)} - x\|, \qquad \|x - x^{(k)}\| \leq \frac{\|T\|^k}{1 - \|T\|} \|x^{(1)} - x^{(0)}\|$$
hold. (The proof is similar to that of the fixed point theorem for a single nonlinear equation.)

Convergence of the Jacobi and Gauss-Seidel iterative methods for special types of matrices: recall
$$\text{Jacobi: } T_j = D^{-1}(L + U), \qquad \text{Gauss-Seidel: } T_g = (D - L)^{-1} U.$$
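In practice one can decide convergence by computing these spectral radii directly; a sketch (the helper function is ours), applied to the 4×4 example used earlier:

```python
import numpy as np

def iteration_matrices(A):
    """Return the Jacobi and Gauss-Seidel iteration matrices for A."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    Tj = np.linalg.solve(D, L + U)   # D^{-1}(L + U)
    Tg = np.linalg.solve(D - L, U)   # (D - L)^{-1} U
    return Tj, Tg

A = np.array([[10., -1.,  2.,  0.],
              [-1., 11., -1.,  3.],
              [ 2., -1., 10., -1.],
              [ 0.,  3., -1.,  8.]])
Tj, Tg = iteration_matrices(A)
rho = lambda M: np.max(np.abs(np.linalg.eigvals(M)))
print(rho(Tj), rho(Tg))   # both < 1: Jacobi and Gauss-Seidel converge
```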

Theorem 1. If $A$ is strictly diagonally dominant, then for any $x^{(0)}$, both the Jacobi and Gauss-Seidel methods converge to the unique solution of $Ax = b$.

Proof. Jacobi: $T_j = D^{-1}(L + U)$, so the $i$-th row of $T_j$ is
$$(-a_{i1}/a_{ii}, \dots, -a_{i,i-1}/a_{ii}, 0, -a_{i,i+1}/a_{ii}, \dots, -a_{in}/a_{ii}),$$
and the sum of the absolute values of the $i$-th row equals
$$\sum_{j \neq i} \frac{|a_{ij}|}{|a_{ii}|} < 1,$$
since $A$ is strictly diagonally dominant. This implies $\|T_j\|_\infty < 1$, hence $\rho(T_j) < 1$. Therefore Jacobi converges for any initial $x^{(0)}$.

Gauss-Seidel: $T_g = (D - L)^{-1} U$. Let $x$ be an eigenvector corresponding to an eigenvalue $\lambda$ of $T_g$, i.e. $T_g x = \lambda x$. Let $x_i$ be the element of $x$ of largest absolute value, and set $\xi = x / x_i$; it is easy to see that $T_g \xi = \lambda \xi$ with $\xi_i = 1$ and $|\xi_j| \leq 1$ for $j \neq i$. This leads to $U\xi = \lambda (D - L)\xi$, and at the $i$-th element we have
$$\lambda = \frac{-\sum_{j > i} a_{ij} \xi_j}{a_{ii} + \sum_{j < i} a_{ij} \xi_j},$$
so that
$$|\lambda| \leq \frac{\sum_{j > i} |a_{ij}| \, |\xi_j|}{|a_{ii}| - \sum_{j < i} |a_{ij}| \, |\xi_j|} \leq \frac{\sum_{j > i} |a_{ij}|}{|a_{ii}| - \sum_{j < i} |a_{ij}|} < 1.$$
This is true for all eigenvalues of $T_g$, thus $\rho(T_g) < 1$. $\square$

Is Gauss-Seidel better than the Jacobi method?

Theorem 2 (Stein-Rosenberg). If $a_{ij} \leq 0$ for each $i \neq j$ and $a_{ii} > 0$ for each $i = 1, 2, \dots, n$, then one and only one of the following statements holds:

- $0 \leq \rho(T_g) < \rho(T_j) < 1$;
- $1 < \rho(T_j) < \rho(T_g)$;
- $\rho(T_j) = \rho(T_g) = 0$;
- $\rho(T_j) = \rho(T_g) = 1$.

Proof. See Young, D. M., Iterative Solution of Large Linear Systems.

Comments on this theorem:

- The first case, $0 \leq \rho(T_g) < \rho(T_j) < 1$, says that when one method gives convergence, then both give convergence, and the Gauss-Seidel method converges faster than the Jacobi method.
- The second case, $1 < \rho(T_j) < \rho(T_g)$, indicates that when one method diverges then both diverge, and the divergence is more pronounced for the Gauss-Seidel method.

1.3 Relaxation method

Successive over-relaxation (SOR) method. Let $A = D - L - U$. The iteration formula for Gauss-Seidel is
$$x_i^{(k)} = \frac{1}{a_{ii}} \Big[ b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} \Big]. \qquad (1.3)$$
If we consider the vector $x^{(k)}$ as a whole, we can write
$$x^{(k)} = (D - L)^{-1} U x^{(k-1)} + (D - L)^{-1} b,$$
thus $T_g = (D - L)^{-1} U$.

For the SOR method, at the $k$-th step, for each $i$-th element, we compute in two steps:
$$z_i^{(k)} = \frac{1}{a_{ii}} \Big[ b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} \Big], \qquad x_i^{(k)} = \omega z_i^{(k)} + (1 - \omega) x_i^{(k-1)}. \qquad (1.4)$$
Or, in another form,
$$a_{ii} x_i^{(k)} + \omega \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} = (1 - \omega) a_{ii} x_i^{(k-1)} - \omega \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} + \omega b_i.$$
In matrix form,
$$x^{(k)} = (D - \omega L)^{-1} \big[ (1 - \omega) D + \omega U \big] x^{(k-1)} + \omega (D - \omega L)^{-1} b.$$
The iteration matrix is $T_\omega = (D - \omega L)^{-1} [(1 - \omega) D + \omega U]$ and $c_\omega = \omega (D - \omega L)^{-1} b$. (A sketch of the element-wise update (1.4) follows.)
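A minimal sketch of the element-wise update (1.4) (the function name and stopping rule are ours):

```python
import numpy as np

def sor(A, b, x0, omega, tol=1e-8, max_iter=1000):
    """SOR sweeps: a Gauss-Seidel value z, then relaxation, per (1.4)."""
    n = len(b)
    x = x0.astype(float)
    for k in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(n):
            z = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x_old[i+1:]) / A[i, i]
            x[i] = omega * z + (1 - omega) * x_old[i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, k
    return x, max_iter
```

With omega = 1 this reduces to Gauss-Seidel, which is a convenient way to compare the two on the example below.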

Example:
$$\begin{aligned} 4x_1 + 3x_2 &= 24 \\ 3x_1 + 4x_2 - x_3 &= 30 \\ -x_2 + 4x_3 &= -24 \end{aligned}$$
has the solution $(3, 4, -5)^t$. Compare SOR with $\omega = 1.25$, using $x^{(0)} = (1, 1, 1)^t$, against Gauss-Seidel: for the iterates to be accurate to 7 decimal places, the Gauss-Seidel method requires 34 iterations, as opposed to 14 iterations for SOR.

Remark: in SOR we apply the Gauss-Seidel formula for one element $i$, then take the relaxation average. This is different from doing Gauss-Seidel for all elements $i = 1, \dots, n$ and then relaxing all elements; i.e.,
$$T_\omega \neq \omega T_g + (1 - \omega) I.$$

Choosing the optimal $\omega$: there is no complete answer for a general linear system, but the following results hold.

(Kahan) If $a_{ii} \neq 0$ for each $i = 1, 2, \dots, n$, then $\rho(T_\omega) \geq |\omega - 1|$. This implies that the SOR method can converge only if $0 < \omega < 2$.

Proof. Let $\lambda_1, \dots, \lambda_n$ be the eigenvalues of $T_\omega$. Then $\det(T_\omega) = \prod_i \lambda_i$. On the other hand,
$$\det(T_\omega) = \det\big((D - \omega L)^{-1}\big) \det\big((1 - \omega)D + \omega U\big) = \frac{\det\big((1 - \omega)D + \omega U\big)}{\det(D - \omega L)} = \frac{\det\big((1 - \omega)D\big)}{\det(D)} = (1 - \omega)^n,$$
where we used that $D - \omega L$ and $(1 - \omega)D + \omega U$ are triangular. Thus
$$\rho(T_\omega) \geq |\det(T_\omega)|^{1/n} = |1 - \omega|,$$
and for $\rho(T_\omega) < 1$ we need $0 < \omega < 2$. $\square$

(Ostrowski-Reich) If $A$ is a positive definite matrix and $0 < \omega < 2$, then the SOR method converges for any choice of initial approximate vector $x^{(0)}$.

Proof. See Ortega, J. M., Numerical Analysis: A Second Course, Academic Press, New York, 1972.

If $A$ is positive definite and tridiagonal, then $\rho(T_g) = [\rho(T_j)]^2 < 1$, and the optimal $\omega$ for SOR is given by
$$\omega = \frac{2}{1 + \sqrt{1 - [\rho(T_j)]^2}},$$
and with this choice of $\omega$, $\rho(T_\omega) = \omega - 1$.

Proof. See Ortega, J. M., Numerical Analysis: A Second Course, Academic Press, New York, 1972.

Example:
$$A = \begin{pmatrix} 4 & 3 & 0 \\ 3 & 4 & -1 \\ 0 & -1 & 4 \end{pmatrix}.$$
We can show $A$ is positive definite and tridiagonal, with $\rho(T_j) = \sqrt{10}/4 \approx 0.7906$. Thus $\omega \approx 1.24$.
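A sketch of this computation (ours):

```python
import numpy as np

A = np.array([[4.,  3.,  0.],
              [3.,  4., -1.],
              [0., -1.,  4.]])
D = np.diag(np.diag(A))
Tj = np.linalg.solve(D, D - A)   # T_j = D^{-1}(L + U) = I - D^{-1}A
rho_j = np.max(np.abs(np.linalg.eigvals(Tj)))
omega = 2.0 / (1.0 + np.sqrt(1.0 - rho_j**2))
print(rho_j, omega)   # about 0.7906 and 1.24
```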

1.4 Error bounds

Let $x$ be the unique solution of $Ax = b$, and let $\tilde{x}$ be an estimate. When $b - A\tilde{x}$ is small, we would like $x - \tilde{x}$ to be small as well. This is often the case, but certain systems, which occur frequently in practice, fail to have this property.

Check the example:
$$A = \begin{pmatrix} 1 & 2 \\ 1.0001 & 2 \end{pmatrix}, \qquad b = \begin{pmatrix} 3 \\ 3.0001 \end{pmatrix}$$
has the unique solution $x = (1, 1)^t$. The poor approximation $\tilde{x} = (3, 0)^t$ has the residual vector $r = b - A\tilde{x} = (0, -0.0002)^t$, so $\|r\|_\infty = 0.0002$, while $\|x - \tilde{x}\|_\infty = 2$!

Suppose that $\tilde{x}$ is an approximation to the solution of $Ax = b$, $A$ is a nonsingular matrix, and $r = b - A\tilde{x}$. Then for any natural norm,
$$\|x - \tilde{x}\| \leq \|r\| \, \|A^{-1}\|,$$
and if $x \neq 0$ and $b \neq 0$,
$$\frac{\|x - \tilde{x}\|}{\|x\|} \leq \|A\| \, \|A^{-1}\| \, \frac{\|r\|}{\|b\|}.$$

Definition: the condition number of the nonsingular matrix $A$ relative to a norm $\|\cdot\|$ is defined as
$$\kappa(A) = \|A\| \, \|A^{-1}\|,$$
thus
$$\|x - \tilde{x}\| \leq \kappa(A) \frac{\|r\|}{\|A\|} \qquad \text{and} \qquad \frac{\|x - \tilde{x}\|}{\|x\|} \leq \kappa(A) \frac{\|r\|}{\|b\|}.$$
It is easy to see that $\kappa(A) \geq 1$. A matrix $A$ is said to be well-conditioned if $\kappa(A)$ is close to 1, and ill-conditioned when $\kappa(A)$ is significantly greater than 1. If $\|\cdot\|$ is the 2-norm, then $\kappa(A) = \max_i \sigma_i / \min_i \sigma_i$, where the $\sigma_i$ are the singular values of $A$.

For the matrix $A$ above, $\|A\|_\infty = 3.0001$ and
$$A^{-1} = \begin{pmatrix} -10000 & 10000 \\ 5000.5 & -5000 \end{pmatrix},$$
thus $\|A^{-1}\|_\infty = 20000$ and $\kappa_\infty(A) = 3.0001 \times 20000 = 60002$! If the 2-norm is used, $\kappa_2(A)$ can be computed with the Matlab command cond(A).
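The same numbers in a short sketch (ours):

```python
import numpy as np

A = np.array([[1.0,    2.0],
              [1.0001, 2.0]])
b = np.array([3.0, 3.0001])
x_true = np.array([1.0, 1.0])
x_bad  = np.array([3.0, 0.0])

r = b - A @ x_bad
print(np.linalg.norm(r, np.inf))               # 0.0002: tiny residual
print(np.linalg.norm(x_true - x_bad, np.inf))  # 2: large error

kappa_inf = np.linalg.norm(A, np.inf) * np.linalg.norm(np.linalg.inv(A), np.inf)
print(kappa_inf)            # 60002
print(np.linalg.cond(A))    # 2-norm condition number
```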

1.5 The conjugate gradient method

The conjugate gradient method was originally proposed by Hestenes and Stiefel in 1952 as a direct method for solving an $n \times n$ positive definite system: in exact arithmetic it requires at most $n$ steps, and for large sparse positive definite systems it often outperforms Gaussian elimination and the previously discussed iterative methods.

Recall the inner product: define the inner product of $x, y \in \mathbb{R}^n$ as $\langle x, y \rangle = x^t y$. The inner product satisfies the following properties:

1. $\langle x, y \rangle = \langle y, x \rangle$;
2. $\langle \alpha x, y \rangle = \langle x, \alpha y \rangle = \alpha \langle x, y \rangle$;
3. $\langle x + z, y \rangle = \langle x, y \rangle + \langle z, y \rangle$;
4. $\langle x, x \rangle \geq 0$;
5. $\langle x, x \rangle = 0$ if and only if $x = 0$.

When $A$ is positive definite, $\langle x, Ax \rangle > 0$ unless $x = 0$. We say that two non-zero vectors $x$ and $y$ are conjugate with respect to $A$ (or $A$-orthogonal) if $\langle x, Ay \rangle = 0$. It is easy to show that a set of conjugate vectors with respect to an SPD matrix $A$ is linearly independent.

Conjugate direction idea: look for a set of conjugate directions $v^{(k)}$ and represent $x = \sum_k t_k v^{(k)}$; then
$$Ax = b \implies \sum_k t_k A v^{(k)} = b \implies t_k \langle v^{(k)}, A v^{(k)} \rangle = \langle v^{(k)}, b \rangle \implies t_k = \frac{\langle v^{(k)}, b \rangle}{\langle v^{(k)}, A v^{(k)} \rangle}.$$

Theorem: let $A$ be SPD. Then $x^*$ is a solution to $Ax = b$ if and only if $x^*$ minimizes
$$g(x) = \langle x, Ax \rangle - 2 \langle x, b \rangle.$$

Let $v^{(1)} = r^{(0)} = b - Ax^{(0)}$, and find the set of new directions as follows.

Theorem: let $\{v^{(1)}, \dots, v^{(n)}\}$ be a set of non-zero conjugate directions with respect to an SPD matrix $A$, and let $x^{(0)}$ be arbitrary. Define
$$t_k = \frac{\langle v^{(k)}, b - Ax^{(k-1)} \rangle}{\langle v^{(k)}, A v^{(k)} \rangle}, \qquad x^{(k)} = x^{(k-1)} + t_k v^{(k)}, \qquad k = 1, \dots, n.$$
Then, assuming exact arithmetic, $Ax^{(n)} = b$.

Theorem: the residual vectors $r^{(k)}$, $k = 1, 2, \dots, n$, for a conjugate direction method satisfy
$$\langle r^{(k)}, v^{(j)} \rangle = 0, \quad j = 1, 2, \dots, k.$$

Conjugate gradient method: let $r^{(0)} = b - Ax^{(0)}$ and $v^{(1)} = r^{(0)}$; for $k = 1, 2, \dots, n$:
$$\begin{aligned} t_k &= \frac{\langle r^{(k-1)}, r^{(k-1)} \rangle}{\langle v^{(k)}, A v^{(k)} \rangle}, \\ x^{(k)} &= x^{(k-1)} + t_k v^{(k)}, \\ r^{(k)} &= r^{(k-1)} - t_k A v^{(k)}, \\ s_k &= \frac{\langle r^{(k)}, r^{(k)} \rangle}{\langle r^{(k-1)}, r^{(k-1)} \rangle}, \\ v^{(k+1)} &= r^{(k)} + s_k v^{(k)}. \end{aligned}$$

Preconditioned conjugate gradient (PCG) method: consider $\tilde{A} \tilde{x} = \tilde{b}$, where $\tilde{A} = C^{-1} A (C^{-1})^t$, $\tilde{x} = C^t x$, and $\tilde{b} = C^{-1} b$, with $C$ chosen so that $\tilde{A}$ is better conditioned. The PCG method often achieves an acceptable solution in far fewer than $n$ steps and is often used for the solution of large linear systems with sparse, positive definite matrices. The preconditioning matrix $C$ is approximately equal to $L$ in the Cholesky factorization $A = LL^t$.
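A minimal sketch of these recurrences (our code; assumes $A$ is SPD), tried on the small SPD system from the SOR example:

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10):
    """CG per the recurrences above; at most n steps in exact arithmetic."""
    x = x0.astype(float)
    r = b - A @ x
    v = r.copy()
    rr = r @ r
    for _ in range(len(b)):
        Av = A @ v
        t = rr / (v @ Av)
        x = x + t * v
        r = r - t * Av
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        v = r + (rr_new / rr) * v
        rr = rr_new
    return x

A = np.array([[4., 3., 0.], [3., 4., -1.], [0., -1., 4.]])
b = np.array([24., 30., -24.])
print(conjugate_gradient(A, b, np.zeros(3)))   # approximately (3, 4, -5)
```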
