Iterative techniques in matrix algebra
1 Iterative techniques in matrix algebra Tsung-Ming Huang Department of Mathematics National Taiwan Normal University, Taiwan September 12, 2015
2 Outline 1 Norms of vectors and matrices 2 Eigenvalues and eigenvectors 3 The Jacobi and Gauss-Seidel iterative techniques 4 Relaxation techniques for solving linear systems 5 Error bounds and iterative refinement 6 The conjugate gradient method
3 Definition 1: A function $\|\cdot\| : \mathbb{R}^n \to \mathbb{R}$ is a vector norm if (i) $\|x\| \ge 0$ for all $x \in \mathbb{R}^n$; (ii) $\|x\| = 0$ if and only if $x = 0$; (iii) $\|\alpha x\| = |\alpha|\,\|x\|$ for all $\alpha \in \mathbb{R}$ and $x \in \mathbb{R}^n$; (iv) $\|x + y\| \le \|x\| + \|y\|$ for all $x, y \in \mathbb{R}^n$. Definition 2: The $l_2$ and $l_\infty$ norms for $x = [x_1, x_2, \dots, x_n]^T$ are defined by
$$\|x\|_2 = (x^T x)^{1/2} = \Big\{\sum_{i=1}^n x_i^2\Big\}^{1/2} \quad \text{and} \quad \|x\|_\infty = \max_{1 \le i \le n} |x_i|.$$
The $l_2$ norm is also called the Euclidean norm.
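As a quick check of Definition 2, a minimal MATLAB sketch (ours, not part of the slides) evaluates both norms directly from the formulas and compares them with the built-in norm function:

% Sketch: compute the l2 and l-infinity norms of a vector from Definition 2.
x = [3; -4; 1];
l2   = sqrt(sum(x.^2));   % ||x||_2 = (x^T x)^(1/2)
linf = max(abs(x));       % ||x||_inf = max_i |x_i|
fprintf('l2 = %g (norm: %g), linf = %g (norm: %g)\n', l2, norm(x,2), linf, norm(x,inf));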
4 Theorem 3 (Cauchy-Bunyakovsky-Schwarz inequality): For each $x = [x_1, x_2, \dots, x_n]^T$ and $y = [y_1, y_2, \dots, y_n]^T$ in $\mathbb{R}^n$,
$$x^T y = \sum_{i=1}^n x_i y_i \le \Big\{\sum_{i=1}^n x_i^2\Big\}^{1/2}\Big\{\sum_{i=1}^n y_i^2\Big\}^{1/2} = \|x\|_2 \|y\|_2.$$
Proof: If $x = 0$ or $y = 0$, the result is immediate. Suppose $x \ne 0$ and $y \ne 0$. For each $\alpha \in \mathbb{R}$,
$$0 \le \|x - \alpha y\|_2^2 = \sum_{i=1}^n (x_i - \alpha y_i)^2 = \sum_{i=1}^n x_i^2 - 2\alpha\sum_{i=1}^n x_i y_i + \alpha^2\sum_{i=1}^n y_i^2 = \|x\|_2^2 - 2\alpha\sum_{i=1}^n x_i y_i + \alpha^2 \|y\|_2^2.$$
5 Since $\|x\|_2 > 0$ and $\|y\|_2 > 0$, we can let $\alpha = \|x\|_2 / \|y\|_2$ to give
$$2\Big(\frac{\|x\|_2}{\|y\|_2}\Big)\sum_{i=1}^n x_i y_i \le \|x\|_2^2 + \frac{\|x\|_2^2}{\|y\|_2^2}\|y\|_2^2 = 2\|x\|_2^2.$$
Thus
$$x^T y = \sum_{i=1}^n x_i y_i \le \|x\|_2 \|y\|_2.$$
6 For each $x, y \in \mathbb{R}^n$,
$$\|x + y\|_\infty = \max_{1\le i\le n}|x_i + y_i| \le \max_{1\le i\le n}(|x_i| + |y_i|) \le \max_{1\le i\le n}|x_i| + \max_{1\le i\le n}|y_i| = \|x\|_\infty + \|y\|_\infty,$$
and
$$\|x + y\|_2^2 = \sum_{i=1}^n (x_i + y_i)^2 = \sum_{i=1}^n x_i^2 + 2\sum_{i=1}^n x_i y_i + \sum_{i=1}^n y_i^2 \le \|x\|_2^2 + 2\|x\|_2\|y\|_2 + \|y\|_2^2 = (\|x\|_2 + \|y\|_2)^2,$$
which gives $\|x + y\|_2 \le \|x\|_2 + \|y\|_2$.
7 Definition 4: A sequence $\{x^{(k)}\}_{k=1}^\infty \subset \mathbb{R}^n$ is convergent to $x$ with respect to the norm $\|\cdot\|$ if for every $\varepsilon > 0$ there exists an integer $N(\varepsilon)$ such that $\|x^{(k)} - x\| < \varepsilon$ for all $k \ge N(\varepsilon)$. Theorem 5: $\{x^{(k)}\}_{k=1}^\infty$ converges to $x$ with respect to $\|\cdot\|_\infty$ if and only if $\lim_{k\to\infty} x_i^{(k)} = x_i$ for each $i = 1, 2, \dots, n$. Proof: ($\Rightarrow$) Given any $\varepsilon > 0$, there exists an integer $N(\varepsilon)$ such that
$$\max_{1\le i\le n}\big|x_i^{(k)} - x_i\big| = \|x^{(k)} - x\|_\infty < \varepsilon, \quad k \ge N(\varepsilon).$$
8 This result implies that $|x_i^{(k)} - x_i| < \varepsilon$ for each $i = 1, 2, \dots, n$; hence $\lim_{k\to\infty} x_i^{(k)} = x_i$ for all $i$. ($\Leftarrow$) For a given $\varepsilon > 0$, let $N_i(\varepsilon)$ be an integer with $|x_i^{(k)} - x_i| < \varepsilon$ whenever $k \ge N_i(\varepsilon)$. Define $N(\varepsilon) = \max_{1\le i\le n} N_i(\varepsilon)$. If $k \ge N(\varepsilon)$, then
$$\max_{1\le i\le n}\big|x_i^{(k)} - x_i\big| = \|x^{(k)} - x\|_\infty < \varepsilon.$$
This implies that $\{x^{(k)}\}$ converges to $x$ with respect to $\|\cdot\|_\infty$.
9 Theorem 6: For each $x \in \mathbb{R}^n$,
$$\|x\|_\infty \le \|x\|_2 \le \sqrt{n}\,\|x\|_\infty.$$
Proof: Let $x_j$ be a coordinate of $x$ such that $\|x\|_\infty = |x_j|$. Then
$$\|x\|_\infty^2 = x_j^2 \le \sum_{i=1}^n x_i^2 = \|x\|_2^2,$$
so $\|x\|_\infty \le \|x\|_2$, and
$$\|x\|_2^2 = \sum_{i=1}^n x_i^2 \le \sum_{i=1}^n x_j^2 = n x_j^2 = n\|x\|_\infty^2,$$
so $\|x\|_2 \le \sqrt{n}\,\|x\|_\infty$.
10 Definition 7: A matrix norm on the set of all $n \times n$ matrices is a real-valued function $\|\cdot\|$ satisfying, for all $n \times n$ matrices $A$ and $B$ and all real numbers $\alpha$: (i) $\|A\| \ge 0$; (ii) $\|A\| = 0$ if and only if $A = 0$; (iii) $\|\alpha A\| = |\alpha|\,\|A\|$; (iv) $\|A + B\| \le \|A\| + \|B\|$; (v) $\|AB\| \le \|A\|\,\|B\|$. Theorem 8: If $\|\cdot\|$ is a vector norm on $\mathbb{R}^n$, then
$$\|A\| = \max_{\|x\|=1} \|Ax\|$$
is a matrix norm.
11 For any $z \ne 0$, $x = z/\|z\|$ is a unit vector. Hence
$$\|A\| = \max_{\|x\|=1}\|Ax\| = \max_{z\ne 0}\Big\|A\frac{z}{\|z\|}\Big\| = \max_{z\ne 0}\frac{\|Az\|}{\|z\|}.$$
Corollary 9: $\|Az\| \le \|A\|\,\|z\|$. Theorem 10: If $A = [a_{ij}]$ is an $n \times n$ matrix, then
$$\|A\|_\infty = \max_{1\le i\le n}\sum_{j=1}^n |a_{ij}|.$$
12 Proof: Let $x$ be an $n$-dimensional vector with $1 = \|x\|_\infty = \max_{1\le i\le n}|x_i|$. Then
$$\|Ax\|_\infty = \max_{1\le i\le n}\Big|\sum_{j=1}^n a_{ij}x_j\Big| \le \max_{1\le i\le n}\sum_{j=1}^n |a_{ij}|\,\max_{1\le j\le n}|x_j| = \max_{1\le i\le n}\sum_{j=1}^n |a_{ij}|.$$
Consequently,
$$\|A\|_\infty = \max_{\|x\|_\infty = 1}\|Ax\|_\infty \le \max_{1\le i\le n}\sum_{j=1}^n |a_{ij}|.$$
On the other hand, let $p$ be an integer with
$$\sum_{j=1}^n |a_{pj}| = \max_{1\le i\le n}\sum_{j=1}^n |a_{ij}|,$$
13 and let $x$ be the vector with components
$$x_j = \begin{cases} 1, & \text{if } a_{pj} \ge 0, \\ -1, & \text{if } a_{pj} < 0. \end{cases}$$
Then $\|x\|_\infty = 1$ and $a_{pj}x_j = |a_{pj}|$ for $j = 1, 2, \dots, n$, so
$$\|Ax\|_\infty = \max_{1\le i\le n}\Big|\sum_{j=1}^n a_{ij}x_j\Big| \ge \Big|\sum_{j=1}^n a_{pj}x_j\Big| = \sum_{j=1}^n |a_{pj}| = \max_{1\le i\le n}\sum_{j=1}^n |a_{ij}|.$$
This result implies that
$$\|A\|_\infty = \max_{\|x\|_\infty=1}\|Ax\|_\infty \ge \max_{1\le i\le n}\sum_{j=1}^n |a_{ij}|,$$
which gives $\|A\|_\infty = \max_{1\le i\le n}\sum_{j=1}^n |a_{ij}|$.
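Theorem 10 is easy to verify numerically; the following MATLAB sketch (ours, with an arbitrary test matrix) computes the maximum absolute row sum and compares it with the built-in infinity norm:

% Sketch: the matrix infinity norm is the maximum absolute row sum (Theorem 10).
A = [2 -1 0; -1 2 -1; 0 -1 2];
rowsums = sum(abs(A), 2);          % sum_j |a_ij| for each row i
fprintf('max row sum = %g, norm(A,inf) = %g\n', max(rowsums), norm(A,inf));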
14 Exercise Page 441: 5, 9, 10, 11
15 Eigenvalues and eigenvectors. Definition 11 (Characteristic polynomial): If $A$ is a square matrix, the characteristic polynomial of $A$ is defined by $p(\lambda) = \det(A - \lambda I)$. Definition 12 (Eigenvalue and eigenvector): If $p$ is the characteristic polynomial of the matrix $A$, the zeros of $p$ are the eigenvalues of $A$. If $\lambda$ is an eigenvalue of $A$ and $x \ne 0$ satisfies $(A - \lambda I)x = 0$, then $x$ is an eigenvector of $A$ corresponding to the eigenvalue $\lambda$. Definition 13 (Spectrum and spectral radius): The set of all eigenvalues of a matrix $A$ is called the spectrum of $A$. The spectral radius of $A$ is
$$\rho(A) = \max\{|\lambda| : \lambda \text{ is an eigenvalue of } A\}.$$
16 Theorem 14: If $A$ is an $n \times n$ matrix, then (i) $\|A\|_2 = [\rho(A^T A)]^{1/2}$; (ii) $\rho(A) \le \|A\|$ for any natural matrix norm $\|\cdot\|$. Proof (of the second part): Suppose $\lambda$ is an eigenvalue of $A$ and $x \ne 0$ is a corresponding eigenvector with $\|x\| = 1$, so $Ax = \lambda x$. Then
$$|\lambda| = |\lambda|\,\|x\| = \|\lambda x\| = \|Ax\| \le \|A\|\,\|x\| = \|A\|,$$
that is, $|\lambda| \le \|A\|$. Since $\lambda$ is arbitrary, this implies that $\rho(A) = \max|\lambda| \le \|A\|$. Theorem 15: For any matrix $A$ and any $\varepsilon > 0$, there exists a matrix norm $\|\cdot\|$ such that
$$\rho(A) \le \|A\| \le \rho(A) + \varepsilon.$$
17 Definition 16: We call an $n \times n$ matrix $A$ convergent if
$$\lim_{k\to\infty}(A^k)_{ij} = 0 \quad \text{for each } i = 1, 2, \dots, n \text{ and } j = 1, 2, \dots, n.$$
Theorem 17: The following statements are equivalent: (1) $A$ is a convergent matrix; (2) $\lim_{k\to\infty}\|A^k\| = 0$ for some matrix norm; (3) $\lim_{k\to\infty}\|A^k\| = 0$ for all matrix norms; (4) $\rho(A) < 1$; (5) $\lim_{k\to\infty} A^k x = 0$ for every $x$.
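Statement (4) of Theorem 17 gives a practical test for convergence; the short MATLAB sketch below (ours, with a made-up matrix) checks the spectral radius and watches the powers $A^k$ decay:

% Sketch: A is convergent iff rho(A) < 1 (Theorem 17, statement 4).
A = [0.5 0.25; 0 0.5];
fprintf('rho(A) = %g\n', max(abs(eig(A))));          % spectral radius < 1
for k = [1 5 20 50]
    fprintf('||A^%d||_inf = %g\n', k, norm(A^k, inf));   % powers tend to 0
end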
18 Exercise Page 449: 11, 12, 18, 19
19 Jacobi and Gauss-Seidel Iterative Techniques. For linear systems of small dimension, direct techniques are appropriate. For large systems, iterative techniques are efficient in terms of both computer storage and computation. The basic idea of iterative techniques is to split the coefficient matrix $A$ as
$$A = M - (M - A)$$
for some matrix $M$, which is called the splitting matrix. Here we assume that $A$ and $M$ are both nonsingular. Then the original problem is rewritten in the equivalent form
$$Mx = (M - A)x + b.$$
20 This suggests an iterative process
$$x^{(k)} = (I - M^{-1}A)x^{(k-1)} + M^{-1}b \equiv T x^{(k-1)} + c,$$
where $T$ is usually called the iteration matrix. The initial vector $x^{(0)}$ can be arbitrary or chosen according to certain conditions. Two criteria for choosing the splitting matrix $M$ are: (i) $x^{(k)}$ is easily computed; more precisely, the system $Mx^{(k)} = y$ is easy to solve; (ii) the sequence $\{x^{(k)}\}$ converges rapidly to the exact solution. Note that one way to achieve the second goal is to choose $M$ so that $M^{-1}$ approximates $A^{-1}$. In the following subsections, we introduce some of the most commonly used classical iterative methods.
21 Jacobi Method. If we decompose the coefficient matrix $A$ as $A = L + D + U$, where $D$ is the diagonal part, $L$ is the strictly lower triangular part, and $U$ is the strictly upper triangular part of $A$, and choose $M = D$, then we derive the iterative formulation for the Jacobi method:
$$x^{(k)} = -D^{-1}(L + U)x^{(k-1)} + D^{-1}b.$$
With this method, the iteration matrix is $T_J = -D^{-1}(L + U)$ and $c = D^{-1}b$. Each component $x_i^{(k)}$ can be computed by
$$x_i^{(k)} = \Big(b_i - \sum_{j=1}^{i-1} a_{ij}x_j^{(k-1)} - \sum_{j=i+1}^{n} a_{ij}x_j^{(k-1)}\Big)\Big/ a_{ii}.$$
22 In the Jacobi method, each component of the new iterate uses only components of the previous iterate:
$$\begin{aligned} a_{11}x_1^{(k)} + a_{12}x_2^{(k-1)} + a_{13}x_3^{(k-1)} + \cdots + a_{1n}x_n^{(k-1)} &= b_1 \\ a_{21}x_1^{(k-1)} + a_{22}x_2^{(k)} + a_{23}x_3^{(k-1)} + \cdots + a_{2n}x_n^{(k-1)} &= b_2 \\ &\ \,\vdots \\ a_{n1}x_1^{(k-1)} + a_{n2}x_2^{(k-1)} + a_{n3}x_3^{(k-1)} + \cdots + a_{nn}x_n^{(k)} &= b_n. \end{aligned}$$
Algorithm 1 (Jacobi method): Given $x^{(0)}$, tolerance $TOL$, maximum number of iterations $M$.
For $i = 1, 2, \dots, n$: $x_i = \big(b_i - \sum_{j=1}^{i-1} a_{ij}x_j^{(0)} - \sum_{j=i+1}^{n} a_{ij}x_j^{(0)}\big)/a_{ii}$.
Set $k = 1$.
While $k \le M$ and $\|x - x^{(0)}\|_2 \ge TOL$:
Set $k = k + 1$, $x^{(0)} = x$.
For $i = 1, 2, \dots, n$: $x_i = \big(b_i - \sum_{j=1}^{i-1} a_{ij}x_j^{(0)} - \sum_{j=i+1}^{n} a_{ij}x_j^{(0)}\big)/a_{ii}$.
End While.
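A general componentwise implementation of Algorithm 1 might look as follows; this MATLAB sketch is ours (the function name and its interface are not from the slides):

function [x, k] = jacobi(A, b, x0, TOL, M)
% Sketch of Algorithm 1 (Jacobi): every component of the new iterate is
% computed from the previous iterate only.
  n = length(b); x = x0; k = 0;
  while k < M
      xold = x;
      for i = 1:n
          s = A(i,1:i-1)*xold(1:i-1) + A(i,i+1:n)*xold(i+1:n);
          x(i) = (b(i) - s) / A(i,i);
      end
      k = k + 1;
      if norm(x - xold, 2) < TOL, return; end
  end
end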
23 Example 18: Consider the linear system $Ax = b$ given by
$$\begin{aligned} E_1:&\quad 10x_1 - x_2 + 2x_3 = 6, \\ E_2:&\quad -x_1 + 11x_2 - x_3 + 3x_4 = 25, \\ E_3:&\quad 2x_1 - x_2 + 10x_3 - x_4 = -11, \\ E_4:&\quad 3x_2 - x_3 + 8x_4 = 15, \end{aligned}$$
which has the unique solution $x = [1, 2, -1, 1]^T$. Solving equation $E_i$ for $x_i$, for $i = 1, 2, 3, 4$, we obtain
$$\begin{aligned} x_1 &= \tfrac{1}{10}x_2 - \tfrac{1}{5}x_3 + \tfrac{3}{5}, \\ x_2 &= \tfrac{1}{11}x_1 + \tfrac{1}{11}x_3 - \tfrac{3}{11}x_4 + \tfrac{25}{11}, \\ x_3 &= -\tfrac{1}{5}x_1 + \tfrac{1}{10}x_2 + \tfrac{1}{10}x_4 - \tfrac{11}{10}, \\ x_4 &= -\tfrac{3}{8}x_2 + \tfrac{1}{8}x_3 + \tfrac{15}{8}. \end{aligned}$$
24 Then $Ax = b$ can be rewritten in the form $x = Tx + c$ with
$$T = \begin{bmatrix} 0 & 1/10 & -1/5 & 0 \\ 1/11 & 0 & 1/11 & -3/11 \\ -1/5 & 1/10 & 0 & 1/10 \\ 0 & -3/8 & 1/8 & 0 \end{bmatrix} \quad \text{and} \quad c = \begin{bmatrix} 3/5 \\ 25/11 \\ -11/10 \\ 15/8 \end{bmatrix},$$
and the iterative formulation for the Jacobi method is $x^{(k)} = Tx^{(k-1)} + c$ for $k = 1, 2, \dots$. The numerical results of this iteration are listed as follows:
25 (Table of Jacobi iterates $x_1, x_2, x_3, x_4$ for $k = 0, 1, 2, \dots$; the numerical values were lost in transcription. The iterates converge to the exact solution $[1, 2, -1, 1]^T$.)
26 MATLAB code of Example 18:

clear all; delete rslt.dat; diary rslt.dat; diary on;
n = 4; xold = zeros(n,1); xnew = zeros(n,1); T = zeros(n,n);
% entries of the Jacobi iteration matrix T and of the vector c
T(1,2) = 1/10;  T(1,3) = -1/5;
T(2,1) = 1/11;  T(2,3) = 1/11;  T(2,4) = -3/11;
T(3,1) = -1/5;  T(3,2) = 1/10;  T(3,4) = 1/10;
T(4,2) = -3/8;  T(4,3) = 1/8;
c(1,1) = 3/5; c(2,1) = 25/11; c(3,1) = -11/10; c(4,1) = 15/8;
xnew = T * xold + c; k = 0;
fprintf('  k    x1      x2      x3      x4\n');
while ( k <= 100 & norm(xnew-xold) > 1.0e-14 )
    xold = xnew; xnew = T * xold + c; k = k + 1;
    fprintf('%3.0f', k);
    for jj = 1:n
        fprintf(' %5.4f', xold(jj));
    end
    fprintf('\n');
end
27 Gauss-Seidel Method. When computing $x_i^{(k)}$ for $i > 1$, the components $x_1^{(k)}, \dots, x_{i-1}^{(k)}$ have already been computed and are likely to be better approximations to the exact $x_1, \dots, x_{i-1}$ than $x_1^{(k-1)}, \dots, x_{i-1}^{(k-1)}$. It seems reasonable to compute $x_i^{(k)}$ using these most recently computed values. That is,
$$\begin{aligned} a_{11}x_1^{(k)} + a_{12}x_2^{(k-1)} + a_{13}x_3^{(k-1)} + \cdots + a_{1n}x_n^{(k-1)} &= b_1 \\ a_{21}x_1^{(k)} + a_{22}x_2^{(k)} + a_{23}x_3^{(k-1)} + \cdots + a_{2n}x_n^{(k-1)} &= b_2 \\ a_{31}x_1^{(k)} + a_{32}x_2^{(k)} + a_{33}x_3^{(k)} + \cdots + a_{3n}x_n^{(k-1)} &= b_3 \\ &\ \,\vdots \\ a_{n1}x_1^{(k)} + a_{n2}x_2^{(k)} + a_{n3}x_3^{(k)} + \cdots + a_{nn}x_n^{(k)} &= b_n. \end{aligned}$$
This improvement gives the Gauss-Seidel method. The Gauss-Seidel method sets $M = D + L$ and defines the iteration
$$x^{(k)} = -(D + L)^{-1}Ux^{(k-1)} + (D + L)^{-1}b.$$
28 That is, the Gauss-Seidel method uses $T_G = -(D + L)^{-1}U$ as the iteration matrix. The formulation above can be rewritten as
$$x^{(k)} = D^{-1}\big({-L}x^{(k)} - Ux^{(k-1)} + b\big).$$
Hence each component $x_i^{(k)}$ can be computed by
$$x_i^{(k)} = \Big(b_i - \sum_{j=1}^{i-1} a_{ij}x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij}x_j^{(k-1)}\Big)\Big/ a_{ii}.$$
For the Jacobi method, only the components of $x^{(k-1)}$ are used to compute $x^{(k)}$; hence the components $x_i^{(k)}$, $i = 1, \dots, n$, can be computed in parallel at each iteration $k$. At each iteration of the Gauss-Seidel method, $x_i^{(k)}$ cannot be computed until $x_1^{(k)}, \dots, x_{i-1}^{(k)}$ are available, so the method is not inherently parallel.
29 Algorithm 2 (Gauss-Seidel method): Given $x^{(0)}$, tolerance $TOL$, maximum number of iterations $M$.
Set $k = 1$.
For $i = 1, 2, \dots, n$: $x_i = \big(b_i - \sum_{j=1}^{i-1} a_{ij}x_j - \sum_{j=i+1}^{n} a_{ij}x_j^{(0)}\big)/a_{ii}$.
While $k \le M$ and $\|x - x^{(0)}\|_2 \ge TOL$:
Set $k = k + 1$, $x^{(0)} = x$.
For $i = 1, 2, \dots, n$: $x_i = \big(b_i - \sum_{j=1}^{i-1} a_{ij}x_j - \sum_{j=i+1}^{n} a_{ij}x_j^{(0)}\big)/a_{ii}$.
End While.
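An analogous componentwise MATLAB sketch of Algorithm 2 (again ours, not from the slides) differs from the Jacobi version only in using the already updated components inside the sweep:

function [x, k] = gauss_seidel(A, b, x0, TOL, M)
% Sketch of Algorithm 2 (Gauss-Seidel): component i of the current sweep
% uses the freshly computed x(1..i-1) and the old x(i+1..n).
  n = length(b); x = x0; k = 0;
  while k < M
      xold = x;
      for i = 1:n
          s = A(i,1:i-1)*x(1:i-1) + A(i,i+1:n)*xold(i+1:n);
          x(i) = (b(i) - s) / A(i,i);
      end
      k = k + 1;
      if norm(x - xold, 2) < TOL, return; end
  end
end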
30 Example 19: Consider the linear system $Ax = b$ of Example 18,
$$\begin{aligned} E_1:&\quad 10x_1 - x_2 + 2x_3 = 6, \\ E_2:&\quad -x_1 + 11x_2 - x_3 + 3x_4 = 25, \\ E_3:&\quad 2x_1 - x_2 + 10x_3 - x_4 = -11, \\ E_4:&\quad 3x_2 - x_3 + 8x_4 = 15, \end{aligned}$$
which has the unique solution $x = [1, 2, -1, 1]^T$. The Gauss-Seidel method gives the equations
$$\begin{aligned} x_1^{(k)} &= \tfrac{1}{10}x_2^{(k-1)} - \tfrac{1}{5}x_3^{(k-1)} + \tfrac{3}{5}, \\ x_2^{(k)} &= \tfrac{1}{11}x_1^{(k)} + \tfrac{1}{11}x_3^{(k-1)} - \tfrac{3}{11}x_4^{(k-1)} + \tfrac{25}{11}, \\ x_3^{(k)} &= -\tfrac{1}{5}x_1^{(k)} + \tfrac{1}{10}x_2^{(k)} + \tfrac{1}{10}x_4^{(k-1)} - \tfrac{11}{10}, \\ x_4^{(k)} &= -\tfrac{3}{8}x_2^{(k)} + \tfrac{1}{8}x_3^{(k)} + \tfrac{15}{8}. \end{aligned}$$
31 The numerical results of this iteration are listed as follows: (table of Gauss-Seidel iterates $x_1, x_2, x_3, x_4$ versus $k$; the numerical values were lost in transcription). The results of this example appear to imply that the Gauss-Seidel method is superior to the Jacobi method. This is almost always true, but there are linear systems for which the Jacobi method converges and the Gauss-Seidel method does not. See Exercises 17 and 18 (8th edition).
32 MATLAB code of Example 19:

clear all; delete rslt.dat; diary rslt.dat; diary on;
n = 4; xold = zeros(n,1); xnew = zeros(n,1); A = zeros(n,n);
% coefficient matrix and right-hand side of Example 19
A(1,1)=10; A(1,2)=-1; A(1,3)=2;
A(2,1)=-1; A(2,2)=11; A(2,3)=-1; A(2,4)=3;
A(3,1)=2;  A(3,2)=-1; A(3,3)=10; A(3,4)=-1;
A(4,2)=3;  A(4,3)=-1; A(4,4)=8;
b(1)=6; b(2)=25; b(3)=-11; b(4)=15;
% one Gauss-Seidel sweep: new components are used as soon as available
for ii = 1:n
    xnew(ii) = b(ii);
    for jj = 1:ii-1
        xnew(ii) = xnew(ii) - A(ii,jj) * xnew(jj);
    end
    for jj = ii+1:n
        xnew(ii) = xnew(ii) - A(ii,jj) * xold(jj);
    end
    xnew(ii) = xnew(ii) / A(ii,ii);
end
k = 0;
fprintf('  k    x1      x2      x3      x4\n');
while ( k <= 100 & norm(xnew-xold) > 1.0e-14 )
    xold = xnew; k = k + 1;
    for ii = 1:n
        xnew(ii) = b(ii);
        for jj = 1:ii-1
            xnew(ii) = xnew(ii) - A(ii,jj) * xnew(jj);
        end
        for jj = ii+1:n
            xnew(ii) = xnew(ii) - A(ii,jj) * xold(jj);
        end
        xnew(ii) = xnew(ii) / A(ii,ii);
    end
    fprintf('%3.0f', k);
    for jj = 1:n
        fprintf(' %5.4f', xold(jj));
    end
    fprintf('\n');
end
33 Lemma 20: If $\rho(T) < 1$, then $(I - T)^{-1}$ exists and
$$(I - T)^{-1} = \sum_{i=0}^{\infty} T^i = I + T + T^2 + \cdots.$$
Proof: Let $\lambda$ be an eigenvalue of $T$; then $1 - \lambda$ is an eigenvalue of $I - T$. But $|\lambda| \le \rho(T) < 1$, so $1 - \lambda \ne 0$ and $0$ is not an eigenvalue of $I - T$, which means $I - T$ is nonsingular. Next we show that $(I - T)^{-1} = I + T + T^2 + \cdots$. Since
$$(I - T)\Big(\sum_{i=0}^{m} T^i\Big) = I - T^{m+1},$$
and $\rho(T) < 1$ implies $T^m \to 0$ as $m \to \infty$, we have
$$(I - T)\Big(\sum_{i=0}^{\infty} T^i\Big) = \lim_{m\to\infty}(I - T)\Big(\sum_{i=0}^{m} T^i\Big) = I.$$
34 Theorem 21: For any $x^{(0)} \in \mathbb{R}^n$, the sequence produced by
$$x^{(k)} = Tx^{(k-1)} + c, \quad k = 1, 2, \dots,$$
converges to the unique solution of $x = Tx + c$ if and only if $\rho(T) < 1$. Proof: Suppose $\rho(T) < 1$. The vectors $x^{(k)}$ produced by the iterative formulation are
$$\begin{aligned} x^{(1)} &= Tx^{(0)} + c, \\ x^{(2)} &= Tx^{(1)} + c = T^2 x^{(0)} + (T + I)c, \\ x^{(3)} &= Tx^{(2)} + c = T^3 x^{(0)} + (T^2 + T + I)c, \end{aligned}$$
and in general
$$x^{(k)} = T^k x^{(0)} + (T^{k-1} + T^{k-2} + \cdots + T + I)c.$$
35 Since ρ(t ) < 1, lim k T k x (0) = 0 for any x (0) R n. By Lemma 20, (T k 1 + T k T + I)c (I T ) 1 c, as k. Therefore lim k x(k) = lim T k x (0) + k j=0 T j c = (I T ) 1 c. Conversely, suppose {x (k) } x = (I T ) 1 c. Since x x (k) = T x + c T x (k 1) c = T (x x (k 1) ) = T 2 (x x (k 2) ) = = T k (x x (0) ). Let z = x x (0). Then lim T k z = lim (x k k x(k) ) = 0. It follows from theorem ρ(t ) < 1.
36 Theorem 22: If $\|T\| < 1$, then the sequence $\{x^{(k)}\}$ converges to $x$ for any initial $x^{(0)}$, and
$$\text{(1)}\ \ \|x - x^{(k)}\| \le \|T\|^k\,\|x - x^{(0)}\|, \qquad \text{(2)}\ \ \|x - x^{(k)}\| \le \frac{\|T\|^k}{1 - \|T\|}\,\|x^{(1)} - x^{(0)}\|.$$
Proof: Since $x = Tx + c$ and $x^{(k)} = Tx^{(k-1)} + c$,
$$x - x^{(k)} = T(x - x^{(k-1)}) = T^2(x - x^{(k-2)}) = \cdots = T^k(x - x^{(0)}).$$
The first statement follows:
$$\|x - x^{(k)}\| = \|T^k(x - x^{(0)})\| \le \|T\|^k\,\|x - x^{(0)}\|.$$
For the second result, we first show that $\|x^{(n)} - x^{(n-1)}\| \le \|T\|^{n-1}\|x^{(1)} - x^{(0)}\|$ for any $n \ge 1$.
37 Since
$$x^{(n)} - x^{(n-1)} = Tx^{(n-1)} + c - Tx^{(n-2)} - c = T(x^{(n-1)} - x^{(n-2)}) = T^2(x^{(n-2)} - x^{(n-3)}) = \cdots = T^{n-1}(x^{(1)} - x^{(0)}),$$
we have
$$\|x^{(n)} - x^{(n-1)}\| \le \|T\|^{n-1}\,\|x^{(1)} - x^{(0)}\|.$$
Let $m \ge k$. Then
$$\begin{aligned} x^{(m)} - x^{(k)} &= \big(x^{(m)} - x^{(m-1)}\big) + \big(x^{(m-1)} - x^{(m-2)}\big) + \cdots + \big(x^{(k+1)} - x^{(k)}\big) \\ &= T^{m-1}\big(x^{(1)} - x^{(0)}\big) + T^{m-2}\big(x^{(1)} - x^{(0)}\big) + \cdots + T^{k}\big(x^{(1)} - x^{(0)}\big) \\ &= \big(T^{m-1} + T^{m-2} + \cdots + T^{k}\big)\big(x^{(1)} - x^{(0)}\big), \end{aligned}$$
38 hence
$$\|x^{(m)} - x^{(k)}\| \le \big(\|T\|^{m-1} + \|T\|^{m-2} + \cdots + \|T\|^{k}\big)\|x^{(1)} - x^{(0)}\| = \|T\|^{k}\big(\|T\|^{m-k-1} + \cdots + \|T\| + 1\big)\|x^{(1)} - x^{(0)}\|.$$
Since $\lim_{m\to\infty} x^{(m)} = x$,
$$\|x - x^{(k)}\| = \lim_{m\to\infty}\|x^{(m)} - x^{(k)}\| \le \|T\|^{k}\,\|x^{(1)} - x^{(0)}\|\lim_{m\to\infty}\big(1 + \|T\| + \cdots + \|T\|^{m-k-1}\big) = \frac{\|T\|^{k}}{1 - \|T\|}\|x^{(1)} - x^{(0)}\|.$$
This proves the second result.
39 Theorem 23: If $A$ is strictly diagonally dominant, then both the Jacobi and Gauss-Seidel methods converge for any initial vector $x^{(0)}$. Proof: By assumption, $A$ is strictly diagonally dominant, hence $a_{ii} \ne 0$ (otherwise $A$ would be singular) and
$$|a_{ii}| > \sum_{j=1, j\ne i}^{n} |a_{ij}|, \quad i = 1, 2, \dots, n.$$
For the Jacobi method, the iteration matrix $T_J = -D^{-1}(L + U)$ has entries
$$[T_J]_{ij} = \begin{cases} -\dfrac{a_{ij}}{a_{ii}}, & i \ne j, \\ 0, & i = j. \end{cases}$$
Hence
$$\|T_J\|_\infty = \max_{1\le i\le n}\sum_{j=1, j\ne i}^{n}\Big|\frac{a_{ij}}{a_{ii}}\Big| = \max_{1\le i\le n}\frac{1}{|a_{ii}|}\sum_{j=1, j\ne i}^{n}|a_{ij}| < 1,$$
and this implies that the Jacobi method converges.
40 For the Gauss-Seidel method, the iteration matrix is $T_G = -(D + L)^{-1}U$. Let $\lambda$ be any eigenvalue of $T_G$ and let $y$, $\|y\|_\infty = 1$, be a corresponding eigenvector. Thus
$$T_G y = \lambda y \quad \Longleftrightarrow \quad -Uy = \lambda(D + L)y.$$
Hence for $i = 1, \dots, n$,
$$-\sum_{j=i+1}^{n} a_{ij}y_j = \lambda a_{ii}y_i + \lambda\sum_{j=1}^{i-1} a_{ij}y_j.$$
This gives
$$\lambda a_{ii}y_i = -\lambda\sum_{j=1}^{i-1} a_{ij}y_j - \sum_{j=i+1}^{n} a_{ij}y_j$$
and
$$|\lambda|\,|a_{ii}|\,|y_i| \le |\lambda|\sum_{j=1}^{i-1}|a_{ij}|\,|y_j| + \sum_{j=i+1}^{n}|a_{ij}|\,|y_j|.$$
41 Choose the index $k$ such that $|y_k| = 1 \ge |y_j|$ (this index can always be found since $\|y\|_\infty = 1$). Then
$$|\lambda|\,|a_{kk}| \le |\lambda|\sum_{j=1}^{k-1}|a_{kj}| + \sum_{j=k+1}^{n}|a_{kj}|,$$
which gives
$$|\lambda| \le \frac{\sum_{j=k+1}^{n}|a_{kj}|}{|a_{kk}| - \sum_{j=1}^{k-1}|a_{kj}|} < \frac{\sum_{j=k+1}^{n}|a_{kj}|}{\sum_{j=k+1}^{n}|a_{kj}|} = 1.$$
Since $\lambda$ is arbitrary, $\rho(T_G) < 1$. This means the Gauss-Seidel method converges. The rate of convergence depends on the spectral radius of the matrix associated with the method. One way to select a procedure to accelerate convergence is to choose a method whose associated matrix has minimal spectral radius.
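The hypothesis of Theorem 23 is cheap to verify; a MATLAB sketch (ours), applied here to the matrix of Example 18:

% Sketch: check strict diagonal dominance |a_ii| > sum_{j~=i} |a_ij| row by row.
A = [10 -1 2 0; -1 11 -1 3; 2 -1 10 -1; 0 3 -1 8];
d = abs(diag(A));
offdiag = sum(abs(A), 2) - d;     % row sums excluding the diagonal entry
if all(d > offdiag)
    disp('A is strictly diagonally dominant; Jacobi and Gauss-Seidel converge.');
end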
42 Exercise Page 459: 9, 10, 11
43 Relaxation Techniques for Solving Linear Systems. Definition 24: Suppose $\tilde{x} \in \mathbb{R}^n$ is an approximate solution of $Ax = b$. The residual vector $r$ for $\tilde{x}$ is $r = b - A\tilde{x}$. Let the approximate solution $x^{(k,i)}$ produced by the Gauss-Seidel method be defined by
$$x^{(k,i)} = \big[x_1^{(k)}, \dots, x_{i-1}^{(k)}, x_i^{(k-1)}, \dots, x_n^{(k-1)}\big]^T,$$
and let
$$r_i^{(k)} = \big[r_{1i}^{(k)}, r_{2i}^{(k)}, \dots, r_{ni}^{(k)}\big]^T = b - Ax^{(k,i)}$$
be the corresponding residual vector. Then the $m$th component of $r_i^{(k)}$ is
$$r_{mi}^{(k)} = b_m - \sum_{j=1}^{i-1} a_{mj}x_j^{(k)} - \sum_{j=i}^{n} a_{mj}x_j^{(k-1)},$$
44 or, equivalently,
$$r_{mi}^{(k)} = b_m - \sum_{j=1}^{i-1} a_{mj}x_j^{(k)} - \sum_{j=i+1}^{n} a_{mj}x_j^{(k-1)} - a_{mi}x_i^{(k-1)},$$
for each $m = 1, 2, \dots, n$. In particular, the $i$th component of $r_i^{(k)}$ is
$$r_{ii}^{(k)} = b_i - \sum_{j=1}^{i-1} a_{ij}x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij}x_j^{(k-1)} - a_{ii}x_i^{(k-1)},$$
so
$$a_{ii}x_i^{(k-1)} + r_{ii}^{(k)} = b_i - \sum_{j=1}^{i-1} a_{ij}x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij}x_j^{(k-1)} = a_{ii}x_i^{(k)}.$$
45 Consequently, the Gauss-Seidel method can be characterized as choosing $x_i^{(k)}$ to satisfy
$$x_i^{(k)} = x_i^{(k-1)} + \frac{r_{ii}^{(k)}}{a_{ii}}.$$
The relaxation method modifies the Gauss-Seidel procedure to
$$x_i^{(k)} = x_i^{(k-1)} + \omega\frac{r_{ii}^{(k)}}{a_{ii}} = (1 - \omega)x_i^{(k-1)} + \frac{\omega}{a_{ii}}\Big(b_i - \sum_{j=1}^{i-1} a_{ij}x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij}x_j^{(k-1)}\Big) \tag{1}$$
for certain choices of positive $\omega$ such that the norm of the residual vector is reduced and the convergence is significantly faster.
46 These methods are called: for $\omega < 1$, under-relaxation; for $\omega = 1$, the Gauss-Seidel method; for $\omega > 1$, over-relaxation. Over-relaxation methods are called SOR (successive over-relaxation) methods. To determine the matrix form of the SOR method, we rewrite (1) as
$$a_{ii}x_i^{(k)} + \omega\sum_{j=1}^{i-1} a_{ij}x_j^{(k)} = (1 - \omega)a_{ii}x_i^{(k-1)} - \omega\sum_{j=i+1}^{n} a_{ij}x_j^{(k-1)} + \omega b_i,$$
so that if $A = L + D + U$, then we have
$$(D + \omega L)x^{(k)} = \big[(1 - \omega)D - \omega U\big]x^{(k-1)} + \omega b,$$
or
$$x^{(k)} = (D + \omega L)^{-1}\big[(1 - \omega)D - \omega U\big]x^{(k-1)} + \omega(D + \omega L)^{-1}b \equiv T_\omega x^{(k-1)} + c_\omega.$$
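A componentwise implementation of the relaxation update (1) might look as follows (a MATLAB sketch of ours; for $\omega = 1$ it reduces to Gauss-Seidel):

function [x, k] = sor(A, b, x0, omega, TOL, M)
% Sketch of the SOR update (1): blend the previous component with the
% Gauss-Seidel value using the relaxation parameter omega.
  n = length(b); x = x0; k = 0;
  while k < M
      xold = x;
      for i = 1:n
          s = A(i,1:i-1)*x(1:i-1) + A(i,i+1:n)*xold(i+1:n);
          x(i) = (1 - omega)*xold(i) + omega*(b(i) - s)/A(i,i);
      end
      k = k + 1;
      if norm(x - xold, 2) < TOL, return; end
  end
end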
47 Example 25: The linear system $Ax = b$ given by
$$\begin{aligned} 4x_1 + 3x_2 &= 24, \\ 3x_1 + 4x_2 - x_3 &= 30, \\ -x_2 + 4x_3 &= -24, \end{aligned}$$
has the solution $[3, 4, -5]^T$. Numerical results of the Gauss-Seidel method with $x^{(0)} = [1, 1, 1]^T$: (table of iterates $x_1, x_2, x_3$ versus $k$; values lost in transcription).
48 Numerical results of the SOR method with $\omega = 1.25$ and $x^{(0)} = [1, 1, 1]^T$: (table of iterates $x_1, x_2, x_3$ versus $k$; values lost in transcription).
49 Numerical results of the SOR method with $\omega = 1.6$ and $x^{(0)} = [1, 1, 1]^T$: (table of iterates $x_1, x_2, x_3$ versus $k$; values lost in transcription).
50 MATLAB code of SOR:

clear all; delete rslt.dat; diary rslt.dat; diary on;
n = 3; xold = zeros(n,1); xnew = zeros(n,1);
A = zeros(n,n); DL = zeros(n,n); DU = zeros(n,n);
A(1,1)=4; A(1,2)=3; A(2,1)=3; A(2,2)=4; A(2,3)=-1; A(3,2)=-1; A(3,3)=4;
b(1,1)=24; b(2,1)=30; b(3,1)=-24;
omega = 1.25;
% build the SOR splitting: DL = D + omega*L, DU = (1-omega)*D - omega*U
for ii = 1:n
    DL(ii,ii) = A(ii,ii);
    for jj = 1:ii-1
        DL(ii,jj) = omega * A(ii,jj);
    end
    DU(ii,ii) = (1-omega)*A(ii,ii);
    for jj = ii+1:n
        DU(ii,jj) = - omega * A(ii,jj);
    end
end
c = omega * (DL \ b);
xnew = DL \ ( DU * xold ) + c;
k = 0;
fprintf('  k    x1      x2      x3\n');
while ( k <= 100 & norm(xnew-xold) > 1.0e-14 )
    xold = xnew; k = k + 1;
    xnew = DL \ ( DU * xold ) + c;
    fprintf('%3.0f', k);
    for jj = 1:n
        fprintf(' %5.4f', xold(jj));
    end
    fprintf('\n');
end
diary off
51 Theorem 26 (Kahan): If $a_{ii} \ne 0$ for each $i = 1, 2, \dots, n$, then $\rho(T_\omega) \ge |\omega - 1|$. This implies that the SOR method can converge only if $0 < \omega < 2$. Theorem 27 (Ostrowski-Reich): If $A$ is positive definite and the relaxation parameter $\omega$ satisfies $0 < \omega < 2$, then the SOR iteration converges for any initial vector $x^{(0)}$. Theorem 28: If $A$ is positive definite and tridiagonal, then $\rho(T_G) = [\rho(T_J)]^2 < 1$, and the optimal choice of $\omega$ for the SOR iteration is
$$\omega = \frac{2}{1 + \sqrt{1 - [\rho(T_J)]^2}}.$$
With this choice of $\omega$, $\rho(T_\omega) = \omega - 1$.
52 Example 29: The matrix
$$A = \begin{bmatrix} 4 & 3 & 0 \\ 3 & 4 & -1 \\ 0 & -1 & 4 \end{bmatrix},$$
given in the previous example, is positive definite and tridiagonal. Since
$$T_J = -D^{-1}(L + U) = -\begin{bmatrix} 1/4 & 0 & 0 \\ 0 & 1/4 & 0 \\ 0 & 0 & 1/4 \end{bmatrix}\begin{bmatrix} 0 & 3 & 0 \\ 3 & 0 & -1 \\ 0 & -1 & 0 \end{bmatrix} = \begin{bmatrix} 0 & -3/4 & 0 \\ -3/4 & 0 & 1/4 \\ 0 & 1/4 & 0 \end{bmatrix},$$
53 we have
$$T_J - \lambda I = \begin{bmatrix} -\lambda & -3/4 & 0 \\ -3/4 & -\lambda & 1/4 \\ 0 & 1/4 & -\lambda \end{bmatrix},$$
so
$$\det(T_J - \lambda I) = -\lambda(\lambda^2 - 0.625).$$
Thus $\rho(T_J) = \sqrt{0.625} \approx 0.7906$ and
$$\omega = \frac{2}{1 + \sqrt{1 - [\rho(T_J)]^2}} = \frac{2}{1 + \sqrt{1 - 0.625}} \approx 1.24.$$
This explains the rapid convergence obtained in the previous example when using $\omega = 1.25$.
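The computation in Example 29 is easy to reproduce numerically; a MATLAB sketch (ours):

% Sketch: optimal omega of Theorem 28 for the matrix of Example 29.
A = [4 3 0; 3 4 -1; 0 -1 4];
D = diag(diag(A));
TJ = -D \ (A - D);                     % Jacobi iteration matrix -D^{-1}(L+U)
rhoJ = max(abs(eig(TJ)));              % rho(T_J) = sqrt(0.625) ~ 0.7906
omega = 2 / (1 + sqrt(1 - rhoJ^2));    % ~ 1.24
fprintf('rho(T_J) = %g, optimal omega = %g\n', rhoJ, omega);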
54 Symmetric Successive Over-Relaxation (SSOR) Method. Let $A$ be symmetric with $A = D + L + L^T$. The idea is to apply the SOR formulation twice at each iteration, one forward sweep and one backward sweep. That is, the SSOR method defines
$$(D + \omega L)x^{(k-\frac12)} = \big[(1 - \omega)D - \omega L^T\big]x^{(k-1)} + \omega b, \tag{2}$$
$$(D + \omega L^T)x^{(k)} = \big[(1 - \omega)D - \omega L\big]x^{(k-\frac12)} + \omega b. \tag{3}$$
Define
$$M_\omega := D + \omega L, \qquad N_\omega := (1 - \omega)D - \omega L^T.$$
Then from the iterations (2) and (3) it follows that
$$x^{(k)} = \big(M_\omega^{-T}N_\omega^{T}M_\omega^{-1}N_\omega\big)x^{(k-1)} + \omega\big(M_\omega^{-T}N_\omega^{T}M_\omega^{-1} + M_\omega^{-T}\big)b \equiv T(\omega)x^{(k-1)} + M(\omega)^{-1}b.$$
55 But
$$N_\omega^{T}M_\omega^{-1} + I = \big[(1 - \omega)D - \omega L\big](D + \omega L)^{-1} + I = \big[-(D + \omega L) + (2 - \omega)D\big](D + \omega L)^{-1} + I = (2 - \omega)D(D + \omega L)^{-1}.$$
Thus
$$M(\omega)^{-1} = \omega(D + \omega L^T)^{-1}(2 - \omega)D(D + \omega L)^{-1},$$
so the splitting matrix is
$$M(\omega) = \frac{1}{\omega(2 - \omega)}(D + \omega L)D^{-1}(D + \omega L^T).$$
The iteration matrix is
$$T(\omega) = (D + \omega L^T)^{-1}\big[(1 - \omega)D - \omega L\big](D + \omega L)^{-1}\big[(1 - \omega)D - \omega L^T\big].$$
56 Exercise Page 467: 2, 8
57 Error bounds and iterative refinement. Example 30: The linear system $Ax = b$ given by
$$\begin{bmatrix} 1 & 2 \\ 1.0001 & 2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 3 \\ 3.0001 \end{bmatrix}$$
has the unique solution $x = [1, 1]^T$. The poor approximation $\tilde{x} = [3, 0]^T$ has the residual vector
$$r = b - A\tilde{x} = \begin{bmatrix} 3 \\ 3.0001 \end{bmatrix} - \begin{bmatrix} 1 & 2 \\ 1.0001 & 2 \end{bmatrix}\begin{bmatrix} 3 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ -0.0002 \end{bmatrix},$$
so $\|r\|_\infty = 0.0002$. Although the norm of the residual vector is small, the approximation $\tilde{x} = [3, 0]^T$ is obviously quite poor; in fact, $\|x - \tilde{x}\|_\infty = 2$.
58 The solution of the above example represents the intersection of the lines
$$l_1: x_1 + 2x_2 = 3 \quad \text{and} \quad l_2: 1.0001x_1 + 2x_2 = 3.0001.$$
The lines $l_1$ and $l_2$ are nearly parallel. The point $(3, 0)$ lies on $l_1$, which implies that $(3, 0)$ also lies close to $l_2$, even though it differs significantly from the intersection point $(1, 1)$.
59 Theorem 31: Suppose that $\tilde{x}$ is an approximate solution of $Ax = b$, $A$ is a nonsingular matrix, and $r = b - A\tilde{x}$. Then
$$\|x - \tilde{x}\| \le \|r\|\,\|A^{-1}\|,$$
and if $x \ne 0$ and $b \ne 0$,
$$\frac{\|x - \tilde{x}\|}{\|x\|} \le \|A\|\,\|A^{-1}\|\,\frac{\|r\|}{\|b\|}.$$
Proof: Since $r = b - A\tilde{x} = Ax - A\tilde{x} = A(x - \tilde{x})$ and $A$ is nonsingular, we have
$$\|x - \tilde{x}\| = \|A^{-1}r\| \le \|A^{-1}\|\,\|r\|. \tag{4}$$
Moreover, since $b = Ax$, we have $\|b\| \le \|A\|\,\|x\|$.
60 It implies that
$$\frac{1}{\|x\|} \le \frac{\|A\|}{\|b\|}. \tag{5}$$
Combining (4) and (5), we have
$$\frac{\|x - \tilde{x}\|}{\|x\|} \le \frac{\|A\|\,\|A^{-1}\|\,\|r\|}{\|b\|}.$$
Definition 32 (Condition number): The condition number of a nonsingular matrix $A$ is $\kappa(A) = \|A\|\,\|A^{-1}\|$. For any nonsingular matrix $A$,
$$1 = \|I\| = \|A \cdot A^{-1}\| \le \|A\|\,\|A^{-1}\| = \kappa(A).$$
61 Definition 33: A matrix $A$ is well-conditioned if $\kappa(A)$ is close to 1, and ill-conditioned when $\kappa(A)$ is significantly greater than 1. In the previous example,
$$A = \begin{bmatrix} 1 & 2 \\ 1.0001 & 2 \end{bmatrix}. \quad \text{Since} \quad A^{-1} = \begin{bmatrix} -10000 & 10000 \\ 5000.5 & -5000 \end{bmatrix},$$
we have
$$\kappa_\infty(A) = \|A\|_\infty\,\|A^{-1}\|_\infty = 3.0001 \times 20000 = 60002.$$
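The condition number above can be reproduced with a short MATLAB sketch (ours; for such a small matrix, forming the inverse explicitly is acceptable):

% Sketch: infinity-norm condition number of the matrix of Example 30.
A = [1 2; 1.0001 2];
kappa = norm(A, inf) * norm(inv(A), inf);   % kappa(A) = ||A|| * ||A^{-1}||
fprintf('kappa_inf(A) = %g\n', kappa);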
62 How can we estimate the effective condition number in $t$-digit arithmetic without having to invert the matrix $A$? If the approximate solution $\tilde{x}$ of $Ax = b$ is determined using $t$-digit arithmetic and Gaussian elimination, then
$$\|r\| = \|b - A\tilde{x}\| \approx 10^{-t}\,\|A\|\,\|\tilde{x}\|.$$
All the arithmetic operations in the Gaussian elimination are performed in $t$-digit arithmetic, but the residual vector $r$ is computed in double-precision (i.e., $2t$-digit) arithmetic. Use the Gaussian elimination factorization that has already been calculated to solve $Ay = r$, and let $\tilde{y}$ be the approximate solution.
63 Then
$$\tilde{y} \approx A^{-1}r = A^{-1}(b - A\tilde{x}) = x - \tilde{x},$$
so $x \approx \tilde{x} + \tilde{y}$. Moreover,
$$\|\tilde{y}\| \approx \|x - \tilde{x}\| = \|A^{-1}r\| \le \|A^{-1}\|\,\|r\| \approx \|A^{-1}\|\big(10^{-t}\,\|A\|\,\|\tilde{x}\|\big) = 10^{-t}\,\|\tilde{x}\|\,\kappa(A).$$
It implies that
$$\kappa(A) \approx \frac{\|\tilde{y}\|}{\|\tilde{x}\|}\,10^{t}.$$
Iterative refinement: in general, $\tilde{x} + \tilde{y}$ is a more accurate approximation to the solution of $Ax = b$ than $\tilde{x}$.
64 Algorithm 3 (Iterative refinement): Given tolerance $TOL$, maximum number of iterations $M$, and number of digits of precision $t$.
Solve $Ax = b$ using Gaussian elimination in $t$-digit arithmetic. Set $k = 1$.
While $k \le M$:
Compute $r = b - Ax$ in $2t$-digit arithmetic.
Solve $Ay = r$ using Gaussian elimination in $t$-digit arithmetic.
If $\|y\| < TOL$, then stop.
Set $k = k + 1$ and $x = x + y$.
End While.
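MATLAB computes in fixed double precision, so the $t$-digit versus $2t$-digit distinction of Algorithm 3 cannot be reproduced literally; the following sketch (ours) only illustrates the structure of the refinement loop, reusing one LU factorization for every correction solve:

% Sketch of the iterative refinement loop (all in double precision here).
A = [4 3 0; 3 4 -1; 0 -1 4]; b = [24; 30; -24];
[L, U, P] = lu(A);            % factor once; P*A = L*U
x = U \ (L \ (P*b));          % initial solution
for k = 1:10
    r = b - A*x;              % residual (ideally in higher precision)
    y = U \ (L \ (P*r));      % correction from the same factorization
    x = x + y;
    if norm(y, inf) < 1e-12, break; end
end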
65 Example 34: The linear system $Ax = b$ in the unknowns $x_1, x_2, x_3$ (the coefficient matrix and right-hand side were lost in transcription) has the exact solution $x = [1, 1, 1]^T$. Using Gaussian elimination and five-digit rounding arithmetic leads successively to two augmented matrices (also lost in transcription).
66 The approximate solution is $x^{(1)} = [1.2001, 0.99991, 0.92538]^T$. The residual vector corresponding to $x^{(1)}$ is computed in double precision to be $r^{(1)} = b - A x^{(1)}$ (its entries were lost in transcription). Hence the solution of $Ay = r^{(1)}$ is $\tilde{y}^{(1)} = [\dots]^T$ (entries lost in transcription), and the new approximate solution is
$$x^{(2)} = x^{(1)} + \tilde{y}^{(1)} = [1.0000, \dots]^T.$$
67 Using the suggested stopping technique for the algorithm, we compute $r^{(2)} = b - A x^{(2)}$ and solve the system $Ay^{(2)} = r^{(2)}$, which gives $\tilde{y}^{(2)} = [\dots]^T$ (entries lost in transcription). Since $\|\tilde{y}^{(2)}\|_\infty \le 10^{-5}$, we conclude that
$$x^{(3)} = x^{(2)} + \tilde{y}^{(2)} = [1.0000, \dots]^T$$
is sufficiently accurate. In the linear system $Ax = b$ we have assumed that $A$ and $b$ can be represented exactly. Realistically, the matrix $A$ and vector $b$ will be perturbed by $\delta A$ and $\delta b$, respectively, causing the linear system
$$(A + \delta A)x = b + \delta b$$
to be solved in place of $Ax = b$.
68 Theorem 35: Suppose $A$ is nonsingular and
$$\|\delta A\| < \frac{1}{\|A^{-1}\|}.$$
Then the solution $\tilde{x}$ of $(A + \delta A)\tilde{x} = b + \delta b$ approximates the solution $x$ of $Ax = b$ with the error estimate
$$\frac{\|x - \tilde{x}\|}{\|x\|} \le \frac{\kappa(A)}{1 - \kappa(A)(\|\delta A\|/\|A\|)}\Big(\frac{\|\delta b\|}{\|b\|} + \frac{\|\delta A\|}{\|A\|}\Big).$$
If $A$ is well-conditioned, then small changes in $A$ and $b$ produce correspondingly small changes in the solution $\tilde{x}$. If $A$ is ill-conditioned, then small changes in $A$ and $b$ may produce large changes in $\tilde{x}$.
69 Exercise Page 476: 2, 4, 7, 8
70 The conjugate gradient method. Consider the linear system $Ax = b$ where $A$ is large, sparse, and symmetric positive definite. Define the inner product notation $\langle x, y\rangle = x^T y$ for any $x, y \in \mathbb{R}^n$. Theorem 36: Let $A$ be symmetric positive definite. Then $x^*$ is the solution of $Ax = b$ if and only if $x^*$ minimizes
$$g(x) = \langle x, Ax\rangle - 2\langle x, b\rangle.$$
71 Proof: ($\Rightarrow$) Rewrite $g(x)$ as
$$\begin{aligned} g(x) &= \langle x - x^*, A(x - x^*)\rangle + \langle x, Ax^*\rangle + \langle x^*, Ax\rangle - \langle x^*, Ax^*\rangle - 2\langle x, b\rangle \\ &= \langle x - x^*, A(x - x^*)\rangle - \langle x^*, Ax^*\rangle + 2\langle x, Ax^*\rangle - 2\langle x, b\rangle \\ &= \langle x - x^*, A(x - x^*)\rangle - \langle x^*, Ax^*\rangle + 2\langle x, Ax^* - b\rangle. \end{aligned}$$
Suppose that $x^*$ is the solution of $Ax = b$, i.e., $Ax^* = b$. Then
$$g(x) = \langle x - x^*, A(x - x^*)\rangle - \langle x^*, Ax^*\rangle,$$
whose minimum occurs at $x = x^*$.
72 ($\Leftarrow$) Fix vectors $x$ and $v$. For any $\alpha \in \mathbb{R}$,
$$\begin{aligned} f(\alpha) \equiv g(x + \alpha v) &= \langle x + \alpha v, Ax + \alpha Av\rangle - 2\langle x + \alpha v, b\rangle \\ &= \langle x, Ax\rangle + \alpha\langle v, Ax\rangle + \alpha\langle x, Av\rangle + \alpha^2\langle v, Av\rangle - 2\langle x, b\rangle - 2\alpha\langle v, b\rangle \\ &= g(x) + 2\alpha\langle v, Ax - b\rangle + \alpha^2\langle v, Av\rangle. \end{aligned}$$
Because $f$ is a quadratic function of $\alpha$ and $\langle v, Av\rangle$ is positive, $f$ has a minimal value where $f'(\alpha) = 0$. Since $f'(\alpha) = 2\langle v, Ax - b\rangle + 2\alpha\langle v, Av\rangle$, the minimum occurs at
$$\hat{\alpha} = -\frac{\langle v, Ax - b\rangle}{\langle v, Av\rangle} = \frac{\langle v, b - Ax\rangle}{\langle v, Av\rangle}.$$
73 and
$$g(x + \hat{\alpha}v) = f(\hat{\alpha}) = g(x) - 2\hat{\alpha}\langle v, b - Ax\rangle + \hat{\alpha}^2\langle v, Av\rangle = g(x) - \frac{\langle v, b - Ax\rangle^2}{\langle v, Av\rangle}.$$
So, for any nonzero vector $v$, we have
$$g(x + \hat{\alpha}v) < g(x) \quad \text{if } \langle v, b - Ax\rangle \ne 0, \tag{6}$$
and
$$g(x + \hat{\alpha}v) = g(x) \quad \text{if } \langle v, b - Ax\rangle = 0. \tag{7}$$
Suppose that $x^*$ is a vector that minimizes $g$. Then
$$g(x^* + \hat{\alpha}v) \ge g(x^*) \quad \text{for any } v. \tag{8}$$
74 From (6), (7), and (8), we have
$$\langle v, b - Ax^*\rangle = 0 \quad \text{for every } v,$$
which implies that $Ax^* = b$. Let $r = b - Ax$. Then
$$\hat{\alpha} = \frac{\langle v, b - Ax\rangle}{\langle v, Av\rangle} = \frac{\langle v, r\rangle}{\langle v, Av\rangle}.$$
If $r \ne 0$ and $v$ and $r$ are not orthogonal, then $g(x + \hat{\alpha}v) < g(x)$, which implies that $x + \hat{\alpha}v$ is closer to $x^*$ than $x$ is.
75 Let $x^{(0)}$ be an initial approximation to $x^*$ and $v^{(1)} \ne 0$ an initial search direction. For $k = 1, 2, 3, \dots$, we compute
$$\alpha_k = \frac{\langle v^{(k)}, b - Ax^{(k-1)}\rangle}{\langle v^{(k)}, Av^{(k)}\rangle}, \qquad x^{(k)} = x^{(k-1)} + \alpha_k v^{(k)},$$
and choose a new search direction $v^{(k+1)}$. Question: how should we choose $\{v^{(k)}\}$ so that $\{x^{(k)}\}$ converges rapidly to $x^*$? Let $\Phi : \mathbb{R}^n \to \mathbb{R}$ be differentiable at $x$. Then it holds that
$$\frac{\Phi(x + \varepsilon p) - \Phi(x)}{\varepsilon} = \nabla\Phi(x)^T p + O(\varepsilon).$$
Neglecting the $O(\varepsilon)$ term, the right-hand side attains its minimum over all $p$ with $\|p\| = 1$ at
$$p = -\frac{\nabla\Phi(x)}{\|\nabla\Phi(x)\|}$$
(i.e., the direction of steepest descent).
76 Denote $x = [x_1, x_2, \dots, x_n]^T$. Then
$$g(x) = \langle x, Ax\rangle - 2\langle x, b\rangle = \sum_{i=1}^n\sum_{j=1}^n a_{ij}x_i x_j - 2\sum_{i=1}^n x_i b_i.$$
It follows that
$$\frac{\partial g}{\partial x_k}(x) = 2\sum_{i=1}^n a_{ki}x_i - 2b_k, \quad k = 1, 2, \dots, n.$$
Therefore, the gradient of $g$ is
$$\nabla g(x) = \Big[\frac{\partial g}{\partial x_1}(x), \frac{\partial g}{\partial x_2}(x), \dots, \frac{\partial g}{\partial x_n}(x)\Big]^T = 2(Ax - b) = -2r.$$
77 Steepest descent method (gradient method): Given an initial guess $x_0$. For $k = 1, 2, \dots$:
$r_{k-1} = b - Ax_{k-1}$; if $r_{k-1} = 0$, then stop; else
$$\alpha_k = \frac{r_{k-1}^T r_{k-1}}{r_{k-1}^T A r_{k-1}}, \qquad x_k = x_{k-1} + \alpha_k r_{k-1}.$$
End For.
Theorem 37: If $x_k$, $x_{k-1}$ are two successive approximations of the steepest descent method for solving $Ax = b$ and $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n > 0$ are the eigenvalues of $A$, then it holds that
$$\|x_k - x^*\|_A \le \Big(\frac{\lambda_1 - \lambda_n}{\lambda_1 + \lambda_n}\Big)\|x_{k-1} - x^*\|_A,$$
where $\|x\|_A = \sqrt{x^T A x}$. Thus the gradient method is convergent.
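The method is only a few lines in MATLAB; a sketch of ours (the function name and interface are not from the slides):

function [x, k] = steepest_descent(A, b, x0, TOL, M)
% Sketch of the gradient method: move along the residual r = b - A*x,
% the direction of steepest descent of g(x) = <x,Ax> - 2<x,b>.
  x = x0;
  for k = 1:M
      r = b - A*x;
      if norm(r) < TOL, return; end
      alpha = (r'*r) / (r'*(A*r));   % exact line search along r
      x = x + alpha*r;
  end
end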
78 If the condition number of $A$ ($= \lambda_1/\lambda_n$) is large, then
$$\frac{\lambda_1 - \lambda_n}{\lambda_1 + \lambda_n} \approx 1,$$
and the gradient method converges very slowly. Hence this method is not recommended. It is preferable to choose the search directions $\{v^{(i)}\}$ to be mutually $A$-conjugate, where $A$ is symmetric positive definite. Definition 38: Two vectors $p$ and $q$ are called $A$-conjugate ($A$-orthogonal) if $p^T A q = 0$.
79 Lemma 39: Let $v_1, \dots, v_n \ne 0$ be pairwise $A$-conjugate. Then they are linearly independent. Proof: From
$$0 = \sum_{j=1}^n c_j v_j$$
it follows that
$$0 = (v_k)^T A\sum_{j=1}^n c_j v_j = \sum_{j=1}^n c_j (v_k)^T A v_j = c_k (v_k)^T A v_k,$$
so $c_k = 0$ for $k = 1, \dots, n$.
80 Theorem 40: Let $A$ be symmetric positive definite and let $v_1, \dots, v_n \in \mathbb{R}^n \setminus \{0\}$ be pairwise $A$-orthogonal. Given $x_0$, let $r_0 = b - Ax_0$, and for $k = 1, \dots, n$ let
$$\alpha_k = \frac{\langle v_k, b - Ax_{k-1}\rangle}{\langle v_k, Av_k\rangle} \qquad \text{and} \qquad x_k = x_{k-1} + \alpha_k v_k.$$
Then $Ax_n = b$ and $\langle b - Ax_k, v_j\rangle = 0$ for each $j = 1, 2, \dots, k$. Proof: Since $x_k = x_{k-1} + \alpha_k v_k$ for each $k = 1, 2, \dots, n$, we have
$$Ax_n = Ax_{n-1} + \alpha_n Av_n = (Ax_{n-2} + \alpha_{n-1}Av_{n-1}) + \alpha_n Av_n = \cdots = Ax_0 + \alpha_1 Av_1 + \alpha_2 Av_2 + \cdots + \alpha_n Av_n.$$
81 It implies that
$$\begin{aligned} \langle Ax_n - b, v_k\rangle &= \langle Ax_0 - b, v_k\rangle + \alpha_1\langle Av_1, v_k\rangle + \cdots + \alpha_n\langle Av_n, v_k\rangle \\ &= \langle Ax_0 - b, v_k\rangle + \alpha_k\langle v_k, Av_k\rangle \\ &= \langle Ax_0 - b, v_k\rangle + \frac{\langle v_k, b - Ax_{k-1}\rangle}{\langle v_k, Av_k\rangle}\langle v_k, Av_k\rangle \\ &= \langle Ax_0 - b, v_k\rangle + \langle v_k, b - Ax_{k-1}\rangle \\ &= \langle Ax_0 - b, v_k\rangle + \langle v_k, b - Ax_0\rangle + \langle v_k, Ax_0 - Ax_1\rangle + \cdots + \langle v_k, Ax_{k-2} - Ax_{k-1}\rangle \\ &= \langle v_k, Ax_0 - Ax_1\rangle + \cdots + \langle v_k, Ax_{k-2} - Ax_{k-1}\rangle. \end{aligned}$$
82 For any $i$ we have $x_i = x_{i-1} + \alpha_i v_i$ and $Ax_i = Ax_{i-1} + \alpha_i Av_i$, so $Ax_{i-1} - Ax_i = -\alpha_i Av_i$. Thus, for $k = 1, \dots, n$,
$$\langle Ax_n - b, v_k\rangle = -\alpha_1\langle v_k, Av_1\rangle - \cdots - \alpha_{k-1}\langle v_k, Av_{k-1}\rangle = 0,$$
which implies that $Ax_n = b$. Suppose that
$$\langle r_{k-1}, v_j\rangle = 0 \quad \text{for } j = 1, 2, \dots, k-1. \tag{9}$$
From
$$r_k = b - Ax_k = b - A(x_{k-1} + \alpha_k v_k) = r_{k-1} - \alpha_k Av_k,$$
83 it follows that
$$\langle r_k, v_k\rangle = \langle r_{k-1}, v_k\rangle - \alpha_k\langle Av_k, v_k\rangle = \langle r_{k-1}, v_k\rangle - \frac{\langle v_k, b - Ax_{k-1}\rangle}{\langle v_k, Av_k\rangle}\langle Av_k, v_k\rangle = 0.$$
From assumption (9) and the $A$-orthogonality, for $j = 1, \dots, k-1$,
$$\langle r_k, v_j\rangle = \langle r_{k-1}, v_j\rangle - \alpha_k\langle Av_k, v_j\rangle = 0,$$
which completes the proof by mathematical induction. Method of conjugate directions: Let $A$ be symmetric positive definite, $b, x_0 \in \mathbb{R}^n$, and let $v_1, \dots, v_n \in \mathbb{R}^n \setminus \{0\}$ be pairwise $A$-orthogonal. Set $r_0 = b - Ax_0$. For $k = 1, \dots, n$:
$$\alpha_k = \frac{\langle v_k, r_{k-1}\rangle}{\langle v_k, Av_k\rangle}, \qquad x_k = x_{k-1} + \alpha_k v_k, \qquad r_k = r_{k-1} - \alpha_k Av_k = b - Ax_k.$$
84 Practical Implementation: In the $k$th step a direction $v_k$ that is $A$-orthogonal to $v_1, \dots, v_{k-1}$ must be determined. It can be obtained by orthogonalizing $r_{k-1}$ against $v_1, \dots, v_{k-1}$. If $r_k \ne 0$, then $g(x)$ decreases strictly in the direction $r_k$: for $\varepsilon > 0$ small, $g(x_k + \varepsilon r_k) < g(x_k)$. If $r_{k-1} = b - Ax_{k-1} \ne 0$, we use $r_{k-1}$ to generate $v_k$ by
$$v_k = r_{k-1} + \beta_{k-1}v_{k-1}. \tag{10}$$
Choose $\beta_{k-1}$ such that
$$0 = \langle v_{k-1}, Av_k\rangle = \langle v_{k-1}, Ar_{k-1} + \beta_{k-1}Av_{k-1}\rangle = \langle v_{k-1}, Ar_{k-1}\rangle + \beta_{k-1}\langle v_{k-1}, Av_{k-1}\rangle.$$
85 That is,
$$\beta_{k-1} = -\frac{\langle v_{k-1}, Ar_{k-1}\rangle}{\langle v_{k-1}, Av_{k-1}\rangle}. \tag{11}$$
Theorem 41: Let $v_k$ and $\beta_{k-1}$ be defined by (10) and (11), respectively. Then $r_0, \dots, r_{k-1}$ are mutually orthogonal and $\langle v_k, Av_i\rangle = 0$ for $i = 1, 2, \dots, k-1$; that is, $\{v_1, \dots, v_k\}$ is an $A$-orthogonal set. Having chosen $v_k$, we compute
$$\alpha_k = \frac{\langle v_k, r_{k-1}\rangle}{\langle v_k, Av_k\rangle} = \frac{\langle r_{k-1} + \beta_{k-1}v_{k-1}, r_{k-1}\rangle}{\langle v_k, Av_k\rangle} = \frac{\langle r_{k-1}, r_{k-1}\rangle}{\langle v_k, Av_k\rangle} + \beta_{k-1}\frac{\langle v_{k-1}, r_{k-1}\rangle}{\langle v_k, Av_k\rangle} = \frac{\langle r_{k-1}, r_{k-1}\rangle}{\langle v_k, Av_k\rangle}. \tag{12}$$
86 Since $r_k = r_{k-1} - \alpha_k Av_k$, we have
$$\langle r_k, r_k\rangle = \langle r_{k-1}, r_k\rangle - \alpha_k\langle Av_k, r_k\rangle = -\alpha_k\langle r_k, Av_k\rangle.$$
Further, from (12), $\langle r_{k-1}, r_{k-1}\rangle = \alpha_k\langle v_k, Av_k\rangle$, so
$$\beta_k = -\frac{\langle v_k, Ar_k\rangle}{\langle v_k, Av_k\rangle} = -\frac{\langle r_k, Av_k\rangle}{\langle v_k, Av_k\rangle} = \frac{(1/\alpha_k)\langle r_k, r_k\rangle}{(1/\alpha_k)\langle r_{k-1}, r_{k-1}\rangle} = \frac{\langle r_k, r_k\rangle}{\langle r_{k-1}, r_{k-1}\rangle}.$$
87 Algorithm 4 (Conjugate gradient method (CG method)): Let $A$ be s.p.d. and $b \in \mathbb{R}^n$; choose $x_0 \in \mathbb{R}^n$ and set $r_0 = b - Ax_0 = v_0$. If $r_0 = 0$, set $N = 0$ and stop; otherwise, for $k = 0, 1, \dots$:
(a) $\alpha_k = \langle r_k, r_k\rangle / \langle v_k, Av_k\rangle$;
(b) $x_{k+1} = x_k + \alpha_k v_k$;
(c) $r_{k+1} = r_k - \alpha_k Av_k$;
(d) if $r_{k+1} = 0$, set $N = k + 1$ and stop;
(e) $\beta_k = \langle r_{k+1}, r_{k+1}\rangle / \langle r_k, r_k\rangle$;
(f) $v_{k+1} = r_{k+1} + \beta_k v_k$.
Theoretically, the exact solution is obtained in at most $n$ steps. If $A$ is well-conditioned, an accurate approximate solution is obtained in about $\sqrt{n}$ steps. If $A$ is ill-conditioned, the number of iterations may be greater than $n$.
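Algorithm 4 translates directly into MATLAB; a sketch of ours:

function [x, k] = cg(A, b, x0, TOL, M)
% Sketch of Algorithm 4 (CG) for a symmetric positive definite A.
  x = x0; r = b - A*x; v = r;          % v_0 = r_0
  for k = 1:M
      if norm(r) < TOL, return; end
      Av = A*v;
      alpha = (r'*r) / (v'*Av);        % step (a)
      x = x + alpha*v;                 % step (b)
      rnew = r - alpha*Av;             % step (c)
      beta = (rnew'*rnew) / (r'*r);    % step (e)
      v = rnew + beta*v;               % step (f)
      r = rnew;
  end
end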
88 Theorem 42: The CG method satisfies the following error estimate:
$$\|x_k - x^*\|_A \le 2\Big(\frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1}\Big)^k\|x_0 - x^*\|_A,$$
where $\kappa = \lambda_1/\lambda_n$ and $\lambda_1 \ge \cdots \ge \lambda_n > 0$ are the eigenvalues of $A$. Remark 1 (compare with the gradient method): Let $x_k^G$ be the $k$th iterate of the gradient method. Then
$$\|x_k^G - x^*\|_A \le \Big(\frac{\lambda_1 - \lambda_n}{\lambda_1 + \lambda_n}\Big)^k\|x_0 - x^*\|_A.$$
But
$$\frac{\lambda_1 - \lambda_n}{\lambda_1 + \lambda_n} = \frac{\kappa - 1}{\kappa + 1} > \frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1},$$
because in general $\sqrt{\kappa} \ll \kappa$. Therefore the CG method is much better than the gradient method.
89 Select a nonsingular matrix $C$ so that
$$\tilde{A} = C^{-1}AC^{-T}$$
is better conditioned. Consider the linear system
$$\tilde{A}\tilde{x} = \tilde{b}, \quad \text{where } \tilde{x} = C^T x \text{ and } \tilde{b} = C^{-1}b.$$
Then
$$\tilde{A}\tilde{x} = (C^{-1}AC^{-T})(C^T x) = C^{-1}Ax.$$
Thus $Ax = b$ if and only if $\tilde{A}\tilde{x} = \tilde{b}$, and $x = C^{-T}\tilde{x}$.
90 Since $\tilde{x}_k = C^T x_k$, we have
$$\tilde{r}_k = \tilde{b} - \tilde{A}\tilde{x}_k = C^{-1}b - \big(C^{-1}AC^{-T}\big)C^T x_k = C^{-1}(b - Ax_k) = C^{-1}r_k.$$
Let $\tilde{v}_k = C^T v_k$ and $w_k = C^{-1}r_k$. Then
$$\tilde{\beta}_k = \frac{\langle \tilde{r}_k, \tilde{r}_k\rangle}{\langle \tilde{r}_{k-1}, \tilde{r}_{k-1}\rangle} = \frac{\langle C^{-1}r_k, C^{-1}r_k\rangle}{\langle C^{-1}r_{k-1}, C^{-1}r_{k-1}\rangle} = \frac{\langle w_k, w_k\rangle}{\langle w_{k-1}, w_{k-1}\rangle}.$$
91 Thus,
$$\tilde{\alpha}_k = \frac{\langle \tilde{r}_{k-1}, \tilde{r}_{k-1}\rangle}{\langle \tilde{v}_k, \tilde{A}\tilde{v}_k\rangle} = \frac{\langle C^{-1}r_{k-1}, C^{-1}r_{k-1}\rangle}{\langle C^T v_k, C^{-1}AC^{-T}C^T v_k\rangle} = \frac{\langle w_{k-1}, w_{k-1}\rangle}{\langle C^T v_k, C^{-1}Av_k\rangle},$$
and, since
$$\langle C^T v_k, C^{-1}Av_k\rangle = (v_k)^T CC^{-1}Av_k = (v_k)^T Av_k = \langle v_k, Av_k\rangle,$$
we have
$$\tilde{\alpha}_k = \frac{\langle w_{k-1}, w_{k-1}\rangle}{\langle v_k, Av_k\rangle}.$$
Further, $\tilde{x}_k = \tilde{x}_{k-1} + \tilde{\alpha}_k\tilde{v}_k$, so
$$C^T x_k = C^T x_{k-1} + \tilde{\alpha}_k C^T v_k \quad \Longrightarrow \quad x_k = x_{k-1} + \tilde{\alpha}_k v_k.$$
92 Continuing, $\tilde{r}_k = \tilde{r}_{k-1} - \tilde{\alpha}_k\tilde{A}\tilde{v}_k$, so
$$C^{-1}r_k = C^{-1}r_{k-1} - \tilde{\alpha}_k C^{-1}AC^{-T}C^T v_k \quad \Longrightarrow \quad r_k = r_{k-1} - \tilde{\alpha}_k Av_k.$$
Finally, $\tilde{v}_{k+1} = \tilde{r}_k + \tilde{\beta}_k\tilde{v}_k$, so $C^T v_{k+1} = C^{-1}r_k + \tilde{\beta}_k C^T v_k$ and
$$v_{k+1} = C^{-T}C^{-1}r_k + \tilde{\beta}_k v_k = C^{-T}w_k + \tilde{\beta}_k v_k.$$
93 Algorithm 5 (Preconditioned CG method (PCG method)): Choose $C$ and $x_0$. Set $r_0 = b - Ax_0$, solve $Cw_0 = r_0$ and $C^T v_1 = w_0$. If $r_0 = 0$, set $N = 0$ and stop; otherwise, for $k = 1, 2, \dots$:
(a) $\alpha_k = \langle w_{k-1}, w_{k-1}\rangle / \langle v_k, Av_k\rangle$;
(b) $x_k = x_{k-1} + \alpha_k v_k$;
(c) $r_k = r_{k-1} - \alpha_k Av_k$;
(d) if $r_k = 0$, set $N = k$ and stop; otherwise, solve $Cw_k = r_k$ and $C^T z_k = w_k$;
(e) $\beta_k = \langle w_k, w_k\rangle / \langle w_{k-1}, w_{k-1}\rangle$;
(f) $v_{k+1} = z_k + \beta_k v_k$.
94 Simplification: Let $M = CC^T$; then $r_k = CC^T z_k \equiv Mz_k$. Then
$$\beta_k = \frac{\langle \tilde{r}_k, \tilde{r}_k\rangle}{\langle \tilde{r}_{k-1}, \tilde{r}_{k-1}\rangle} = \frac{\langle C^{-1}r_k, C^{-1}r_k\rangle}{\langle C^{-1}r_{k-1}, C^{-1}r_{k-1}\rangle} = \frac{\langle z_k, r_k\rangle}{\langle z_{k-1}, r_{k-1}\rangle},$$
$$\alpha_k = \frac{\langle \tilde{r}_{k-1}, \tilde{r}_{k-1}\rangle}{\langle \tilde{v}_k, \tilde{A}\tilde{v}_k\rangle} = \frac{\langle C^{-1}r_{k-1}, C^{-1}r_{k-1}\rangle}{\langle C^T v_k, C^{-1}AC^{-T}C^T v_k\rangle} = \frac{\langle z_{k-1}, r_{k-1}\rangle}{\langle v_k, Av_k\rangle},$$
$$v_{k+1} = C^{-T}C^{-1}r_k + \beta_k v_k = z_k + \beta_k v_k.$$
95 Algorithm: CG method with preconditioner $M$. Input: Given $x_0$ and $r_0 = b - Ax_0$; solve $Mz_0 = r_0$; set $v_1 = z_0$ and $k = 1$.
1: repeat
2: Compute $\alpha_k = z_{k-1}^T r_{k-1} / v_k^T A v_k$;
3: Compute $x_k = x_{k-1} + \alpha_k v_k$;
4: Compute $r_k = r_{k-1} - \alpha_k A v_k$;
5: if $r_k = 0$ then
6: Stop;
7: else
8: Solve $Mz_k = r_k$;
9: Compute $\beta_k = z_k^T r_k / z_{k-1}^T r_{k-1}$;
10: Compute $v_{k+1} = z_k + \beta_k v_k$;
11: end if
12: Set $k = k + 1$;
13: until $r_k = 0$
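A sketch of this loop in MATLAB (ours; the name pcg_sketch avoids shadowing MATLAB's built-in pcg, and Mpre stands for the preconditioner $M$):

function [x, k] = pcg_sketch(A, b, x0, Mpre, TOL, itmax)
% Sketch of the preconditioned CG loop: one solve with M per iteration.
  x = x0; r = b - A*x;
  z = Mpre \ r; v = z;                 % z_0 = M^{-1} r_0, v_1 = z_0
  for k = 1:itmax
      if norm(r) < TOL, return; end
      Av = A*v;
      alpha = (z'*r) / (v'*Av);
      x = x + alpha*v;
      rnew = r - alpha*Av;
      znew = Mpre \ rnew;
      beta = (znew'*rnew) / (z'*r);
      v = znew + beta*v;
      r = rnew; z = znew;
  end
end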
96 Choices of $M$ (criteria): (i) $\operatorname{cond}(M^{-1/2}AM^{-1/2})$ is close to 1, i.e., $M^{-1/2}AM^{-1/2} \approx I$, i.e., $A \approx M$. (ii) The linear system $Mz = r$ must be easy to solve, e.g., $M = LL^T$. (iii) $M$ is symmetric positive definite.
97 (i) Jacobi method: $A = D - (L + R)$, $M = D$:
$$x_{k+1} = x_k + D^{-1}r_k = x_k + D^{-1}(b - Ax_k) = D^{-1}(L + R)x_k + D^{-1}b.$$
(ii) Gauss-Seidel: $A = (D - L) - R$, $M = D - L$:
$$x_{k+1} = x_k + z_k = x_k + (D - L)^{-1}(b - Ax_k) = (D - L)^{-1}Rx_k + (D - L)^{-1}b.$$
98 (iii) SOR method: Write
$$\omega A = (D - \omega L) - \big((1 - \omega)D + \omega R\big) \equiv M - N.$$
Then we have
$$\begin{aligned} x_{k+1} &= (D - \omega L)^{-1}\big(\omega R + (1 - \omega)D\big)x_k + (D - \omega L)^{-1}\omega b \\ &= (D - \omega L)^{-1}\big((D - \omega L) - \omega A\big)x_k + (D - \omega L)^{-1}\omega b \\ &= \big(I - (D - \omega L)^{-1}\omega A\big)x_k + (D - \omega L)^{-1}\omega b \\ &= x_k + (D - \omega L)^{-1}\omega(b - Ax_k) = x_k + \omega M^{-1}r_k. \end{aligned}$$
99 (iv) SSOR: $A = D - L - L^T$. Let
$$M_\omega := D - \omega L, \qquad N_\omega := (1 - \omega)D + \omega L^T.$$
Then
$$x_{i+1} = \big(M_\omega^{-T}N_\omega^{T}M_\omega^{-1}N_\omega\big)x_i + \tilde{b} \equiv Gx_i + M(\omega)^{-1}b$$
with
$$M(\omega) = \frac{1}{\omega(2 - \omega)}(D - \omega L)D^{-1}(D - \omega L^T).$$