Goal: to construct some general-purpose algorithms for solving systems of linear equations


Chapter IV: Solving Systems of Linear Equations

Goal: to construct some general-purpose algorithms for solving systems of linear equations.

4.6 Solution of Equations by Iterative Methods

The Gaussian algorithm and its variants are called direct methods for solving the problem Ax = b. They proceed through a finite number of steps and produce a solution x that would be completely accurate were it not for roundoff errors.

An indirect method, by contrast, produces a sequence of vectors that ideally converges to the solution. The computation is halted when an approximate solution of some specified accuracy is obtained, or after a certain number of iterations. Indirect methods are almost always iterative in nature: a simple process is applied repeatedly to generate such a sequence.

Comparison of direct and iterative methods for linear systems: a direct method computes the solution in a fixed number of operations, with an arithmetic cost of O(n^3) and a storage cost of O(n^2), which suits small and moderate-sized dense problems. Iterative methods need less work and storage per step and are therefore well suited to solving large sparse systems (i.e., systems whose coefficient matrices contain a great many zero entries), such as those arising from the discretization of differential equations. For large sparse linear systems (n >> 1), iterative methods are the methods of choice.

(General) iterative methods

Basic steps of an iterative method for an equation F(x) = 0:

1. Rewrite F(x) = 0 in an equivalent fixed-point form x = Φ(x).
2. Choose a starting value x^{(0)} and generate the iteration x^{(k+1)} = Φ(x^{(k)}).
3. If the limit x* = lim_{k→+∞} x^{(k)} exists, then x* is a solution of the equation. In actual computation, the iteration is stopped when ||x^{(k+1)} - x^{(k)}|| < ε or when a prescribed number of steps is reached. If lim_{k→+∞} x^{(k)} does not exist, the iteration fails, and a new iterative scheme or a new starting value x^{(0)} is needed.

If the sequence x^{(k)} satisfies lim_{k→∞} ||x^{(k)} - x|| = 0, we say that the iteration converges to x. The basic idea of iterative methods for linear systems is the same as that for nonlinear equations.

Consider the equation

Ax = b  ⟺  x = Gx + g.

A certain nonsingular matrix Q, called the splitting matrix, is prescribed; the problem above is then equivalent to

Qx = (Q - A)x + b.

This suggests an iterative process, defined by

Qx^{(k)} = (Q - A)x^{(k-1)} + b   (k ≥ 1)     (1)

or

x^{(k)} = (I - Q^{-1}A)x^{(k-1)} + Q^{-1}b   (k ≥ 1)     (2)

where x^{(0)} is an arbitrary initial vector. Note that equation (2) is used for theoretical analysis only; in practice there is no need to compute Q^{-1}.
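The slides give no code, but the general recipe (1) is easy to sketch. The following Python/NumPy function (our own illustration; the name splitting_iteration and the stopping test are not from the lecture) performs the iteration by solving with Q at every step rather than forming Q^{-1}, in line with the note above:

    import numpy as np

    def splitting_iteration(A, b, Q, x0, tol=1e-10, maxit=500):
        """Generic iteration Q x^(k) = (Q - A) x^(k-1) + b (a sketch).

        Q is the splitting matrix; we solve with Q instead of forming Q^{-1}.
        """
        x = x0.astype(float)
        for k in range(1, maxit + 1):
            x_new = np.linalg.solve(Q, (Q - A) @ x + b)
            if np.linalg.norm(x_new - x, ord=np.inf) < tol:  # simple stopping test
                return x_new, k
            x = x_new
        return x, maxit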

Now, our objective is to choose Q (with det Q ≠ 0) s.t.

1. the sequence {x^{(k)}} is easily computed;
2. the sequence {x^{(k)}} converges rapidly to the solution of Ax = b.

Obviously, the exact solution x satisfies

Qx = (Q - A)x + b  ⟺  x = (I - Q^{-1}A)x + Q^{-1}b.

Clearly x is a fixed point of F(x) := (I - Q^{-1}A)x + Q^{-1}b. Then

x^{(k)} - x = (I - Q^{-1}A)(x^{(k-1)} - x)   (k ≥ 1)

and therefore

||x^{(k)} - x|| ≤ ||I - Q^{-1}A|| ||x^{(k-1)} - x||   (k ≥ 1),

||x^{(k)} - x|| ≤ ||I - Q^{-1}A||^k ||x^{(0)} - x||   (k ≥ 1).

Thus, if ||I - Q^{-1}A|| < 1, one can conclude that

lim_{k→∞} ||x^{(k)} - x|| = 0.

Observe that ||I - Q^{-1}A|| < 1 implies the invertibility of Q^{-1}A, and hence of A. Hence, we have:

Theorem (on Iterative Method Convergence)
If ||I - Q^{-1}A|| < 1 for some subordinate matrix norm, then the sequence produced by (1) converges to the solution of Ax = b for any initial vector x^{(0)}.

Remark
The matrix G := I - Q^{-1}A is usually called the iteration matrix. If δ := ||I - Q^{-1}A|| < 1, then we can use the following stopping condition for the iterative method:

||x^{(k)} - x|| ≤ (δ/(1 - δ)) ||x^{(k)} - x^{(k-1)}|| < ε

where ε is the tolerance.
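As a small illustration (our own, assuming a bound δ on ||I - Q^{-1}A|| with δ < 1 is known), the stopping rule of the remark could be coded as:

    import numpy as np

    def stop_here(x_k, x_km1, delta, eps):
        """Stopping test ||x^(k) - x|| <= delta/(1-delta) * ||x^(k) - x^(k-1)|| < eps.

        delta is an assumed-known upper bound on ||I - Q^{-1}A|| with delta < 1.
        """
        return delta / (1.0 - delta) * np.linalg.norm(x_k - x_km1) < eps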

Richardson Method

The splitting matrix is Q = I, and the iteration matrix is G = I - Q^{-1}A = I - A. So the iteration formula is

x^{(k)} = (I - A)x^{(k-1)} + b = x^{(k-1)} + r^{(k-1)}

where r^{(k-1)} is the residual vector, defined by r^{(k-1)} = b - Ax^{(k-1)}.

With the preceding theorem, one can easily show that if ||I - A|| < 1, then the sequence x^{(k)} generated by the Richardson iteration converges to the solution of Ax = b.
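A minimal sketch of the Richardson iteration in this residual form (Python/NumPy, our own code, not from the book):

    import numpy as np

    def richardson(A, b, x0, tol=1e-10, maxit=500):
        """Richardson iteration x^(k) = x^(k-1) + r^(k-1), r = b - A x (a sketch)."""
        x = x0.astype(float)
        for k in range(maxit):
            r = b - A @ x                      # residual vector r^(k-1)
            if np.linalg.norm(r, ord=np.inf) < tol:
                return x, k
            x = x + r                          # Q = I: just add the residual
        return x, maxit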

Exercise (in class)
Write down the explicit form of the iteration matrix G = I - Q^{-1}A and state the iteration formula of the Richardson method for a given 3×3 problem Ax = b in the unknowns x_1, x_2, x_3. Show that the Richardson method is successful (i.e., x^{(k)} → A^{-1}b) for this problem.

For example, with initial guess x^{(0)} = (0, 0, 0)^T, one can compute the iterates x^{(1)}, x^{(10)}, x^{(40)}, x^{(80)}, ... generated by the Richardson method and observe their convergence.

Example
Discuss whether the Richardson method is successful for the problem Ax = b.

Solution.
The splitting matrix is Q = I, and the iteration matrix is G = I - Q^{-1}A = I - A.

Check: ||G||_1 = 1 and ||G||_∞ = 1, so these two norms are inconclusive. Recall that ρ(G) ≤ ||G||, and that ||G||_2 = ρ(G) when G is real and symmetric. Computing the eigenvalues from the characteristic equation,

0 = det(λI - G)  ⟹  λ² = (3 ± √5)/8 < 1,

so every eigenvalue satisfies |λ| < 1:

ρ(G) < 1,   ||G||_2 = ρ(G) < 1.

Conclusion: the Richardson method works!
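The pattern of this example (try cheap norms first, fall back on the spectral radius) can be checked numerically; a sketch, where the 2×2 matrix below is a hypothetical stand-in for an iteration matrix, not the G of this example:

    import numpy as np

    def converges(G):
        """Decide convergence of x^(k) = G x^(k-1) + c via rho(G) < 1."""
        # Cheap sufficient tests first: any subordinate norm < 1 guarantees convergence.
        if min(np.linalg.norm(G, 1), np.linalg.norm(G, np.inf)) < 1:
            return True
        # Otherwise fall back on the spectral radius (necessary and sufficient).
        return max(abs(np.linalg.eigvals(G))) < 1

    G = np.array([[0.0, 0.5], [0.5, 0.0]])   # hypothetical iteration matrix
    print(converges(G))                      # True: rho(G) = 0.5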

Let A = D + L + U, where D = diag(A), L is the strictly lower triangular part of A, and U is the strictly upper triangular part of A.

Jacobi Method

In the Jacobi method, the splitting matrix is Q = D and the iteration matrix is

G = -D^{-1}(L + U) = I - Q^{-1}A.

The iteration formula is

Dx^{(k)} = -(L + U)x^{(k-1)} + b   (k ≥ 1)

or

Qx^{(k)} = (Q - A)x^{(k-1)} + b   (k ≥ 1).

Jacobi iteration: componentwise form and applicability

For the linear system Ax = b,

a_{11}x_1 + a_{12}x_2 + ... + a_{1n}x_n = b_1
a_{21}x_1 + a_{22}x_2 + ... + a_{2n}x_n = b_2
...
a_{n1}x_1 + a_{n2}x_2 + ... + a_{nn}x_n = b_n,

the componentwise form of the Jacobi iteration is

a_{11}x_1^{(k)} = -(a_{12}x_2^{(k-1)} + a_{13}x_3^{(k-1)} + ... + a_{1n}x_n^{(k-1)} - b_1)
a_{22}x_2^{(k)} = -(a_{21}x_1^{(k-1)} + a_{23}x_3^{(k-1)} + ... + a_{2n}x_n^{(k-1)} - b_2)
...
a_{nn}x_n^{(k)} = -(a_{n1}x_1^{(k-1)} + a_{n2}x_2^{(k-1)} + ... + a_{n,n-1}x_{n-1}^{(k-1)} - b_n)

Under what condition is the Jacobi iteration well defined (i.e., meaningful)?
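A minimal componentwise Jacobi sketch (our own Python/NumPy code); it divides by a_{ii}, so it assumes precisely the condition the question above asks about, namely a_{ii} ≠ 0 for all i:

    import numpy as np

    def jacobi(A, b, x0, tol=1e-10, maxit=500):
        """Componentwise Jacobi iteration (assumes all diagonal entries nonzero)."""
        n = len(b)
        x = x0.astype(float)
        for k in range(1, maxit + 1):
            x_new = np.empty(n)
            for i in range(n):
                s = A[i, :] @ x - A[i, i] * x[i]    # sum of a_ij x_j^(k-1), j != i
                x_new[i] = (b[i] - s) / A[i, i]
            if np.linalg.norm(x_new - x, ord=np.inf) < tol:
                return x_new, k
            x = x_new
        return x, maxit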

Example: use the Jacobi method to solve

2x_1 -  x_2 -   x_3 = -5
 x_1 + 5x_2 -   x_3 =  8
 x_1 +  x_2 + 10x_3 = 11

Solution: the splitting matrix is Q = diag{2, 5, 10}, and the corresponding Jacobi iteration, written componentwise, is

x_1^{(k)} =  0.5x_2^{(k-1)} + 0.5x_3^{(k-1)} - 2.5
x_2^{(k)} = -0.2x_1^{(k-1)} + 0.2x_3^{(k-1)} + 1.6
x_3^{(k)} = -0.1x_1^{(k-1)} - 0.1x_2^{(k-1)} + 1.1

or, in matrix form, x^{(k)} = Gx^{(k-1)} + g with

G = [  0    0.5  0.5 ]
    [ -0.2  0    0.2 ]
    [ -0.1 -0.1  0   ],    g = (-2.5, 1.6, 1.1)^T.

Does the Jacobi iteration converge? Try computing ||G||_∞: it equals 1, which is inconclusive. Try ||G||_1 instead: ||G||_1 = 0.7 < 1, so the Jacobi iteration converges.

With starting vector x^{(0)} = (1, 1, 1)^T, one tabulates k, x_1^{(k)}, x_2^{(k)}, x_3^{(k)}, and ||x^{(k)} - x^{(k-1)}|| at each step; the iterates approach the exact solution x = (-1, 2, 1)^T.
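Running the Jacobi sketch above on this system (as reconstructed here, with the lost minus signs restored) reproduces the convergence to x = (-1, 2, 1)^T; a usage example:

    import numpy as np

    A = np.array([[2.0, -1.0, -1.0],
                  [1.0,  5.0, -1.0],
                  [1.0,  1.0, 10.0]])
    b = np.array([-5.0, 8.0, 11.0])
    x, k = jacobi(A, b, np.array([1.0, 1.0, 1.0]))
    print(x, k)    # x is approximately [-1.  2.  1.]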

Theorem (on Convergence of the Jacobi Method)
If A is diagonally dominant, then the sequence produced by the Jacobi iteration converges to the solution of Ax = b for any starting vector.

Proof. Diagonal dominance means that

|a_{ii}| > Σ_{j=1, j≠i}^{n} |a_{ij}|   (1 ≤ i ≤ n).

It is easy to compute that

||I - D^{-1}A||_∞ = max_{1≤i≤n} Σ_{j=1, j≠i}^{n} |a_{ij}|/|a_{ii}| < 1.

By the preceding theorem, the Jacobi iteration converges. ∎
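The hypothesis of the theorem is easy to test numerically; a small sketch (our own helper, not from the book):

    import numpy as np

    def diagonally_dominant(A):
        """Check |a_ii| > sum_{j != i} |a_ij| for every row i."""
        d = np.abs(np.diag(A))
        off = np.abs(A).sum(axis=1) - d
        return bool(np.all(d > off))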

Example
Discuss whether the Jacobi method is successful for the problem Ax = b.

Solution.
The splitting matrix is Q = D = diag(A) and the iteration matrix is G = I - Q^{-1}A = I - D^{-1}A.

Check: ||G||_1 = 1 and ||G||_∞ = 1, so these two norms are inconclusive. Recall that ρ(G) ≤ ||G|| and that ||G||_2 = ρ(G) if G is real and symmetric. Computing the eigenvalues,

0 = det(λI - G)  ⟹  λ² = (3 ± √5)/8 < 1  ⟹  ρ(G) < 1,  ||G||_2 = ρ(G) < 1.

Conclusion: the Jacobi method works!

Example
Discuss whether the Richardson method is successful for the problem Ax = b of the preceding example.

Recall that in the Richardson method, the splitting matrix is Q = I and the iteration matrix is G = I - Q^{-1}A = I - A.

Recall that the spectral radius of A is

ρ(A) = max{ |λ| : det(A - λI) = 0 }.

Theorem (on Similar Upper Triangular Matrices)
Every square matrix is similar to a (possibly complex) upper triangular matrix whose off-diagonal elements are arbitrarily small.

Proof. Hint: use Schur's theorem in Section 5.2. See your book, p. 214.

Theorem (on Spectral Radius)
The spectral radius function satisfies the equation

ρ(A) = inf ||A||,

in which the infimum is taken over all subordinate matrix norms.

Proof. Hint: note that

ρ(A) ≤ ||A||  (for every subordinate matrix norm)  ⟹  ρ(A) ≤ inf ||A||,

and then use the preceding theorem. See also your book, p. 214.

Remark
This theorem tells us that for any matrix A, its spectral radius is a lower bound for every subordinate matrix norm, and moreover a subordinate matrix norm exists with a value arbitrarily close to the spectral radius; i.e., for every ε > 0 there is a subordinate matrix norm ||·||_ε s.t.

ρ(A) ≤ ||A||_ε ≤ ρ(A) + ε.

In particular, if ρ(A) < 1, then there is a subordinate matrix norm s.t. ||A|| < 1.

Theorem (on Necessary and Sufficient Conditions for Iterative Method Convergence)
For the iterative formula

x^{(k)} = Gx^{(k-1)} + c

to produce a sequence converging to (I - G)^{-1}c, for any vector c and any starting vector x^{(0)}, it is necessary and sufficient that the spectral radius of G be less than 1, i.e., ρ(G) < 1.

Proof. (⇐) Suppose that ρ(G) < 1. By the Theorem on Spectral Radius, there is a subordinate matrix norm s.t. ||G|| < 1. We write

x^{(1)} = Gx^{(0)} + c
x^{(2)} = Gx^{(1)} + c = G²x^{(0)} + Gc + c
x^{(3)} = Gx^{(2)} + c = G³x^{(0)} + G²c + Gc + c
...

The general formula is

x^{(k)} = G^k x^{(0)} + Σ_{j=0}^{k-1} G^j c.     (3)

(continued...) Then

||G^k x^{(0)}|| ≤ ||G^k|| ||x^{(0)}|| ≤ ||G||^k ||x^{(0)}|| → 0  as k → ∞.

By the Theorem on Neumann Series (p. 198), we have

Σ_{j=0}^{∞} G^j c = (I - G)^{-1} c.

Thus, by letting k → ∞ in (3), we obtain

lim_{k→∞} x^{(k)} = (I - G)^{-1} c.

(continued...) (⇒) For the converse, suppose that ρ(G) ≥ 1. Select u and λ s.t.

Gu = λu,   |λ| ≥ 1,   u ≠ 0.

Let c = u and x^{(0)} = 0. By Equation (3),

x^{(k)} = Σ_{j=0}^{k-1} G^j u = Σ_{j=0}^{k-1} λ^j u.

If λ = 1, then x^{(k)} = ku, which clearly diverges as k → ∞. If λ ≠ 1, then x^{(k)} = (λ^k - 1)(λ - 1)^{-1} u, which also diverges, since lim_{k→∞} λ^k does not exist when |λ| ≥ 1 and λ ≠ 1. ∎

Corollary (Iterative Method Convergence Corollary)
The iterative formula

Qx^{(k)} = (Q - A)x^{(k-1)} + b   (k ≥ 1)

will produce a sequence converging to the solution of Ax = b, for any starting vector x^{(0)}, if ρ(I - Q^{-1}A) < 1.

Let A = D + L + U, where D = diag(A), L is the strictly lower triangular part of A, and U is the strictly upper triangular part of A.

Gauss-Seidel Method

In the Gauss-Seidel method, the splitting matrix is Q = D + L and the iteration matrix is

G = -(D + L)^{-1}U = I - Q^{-1}A.

So the iteration formula is

(D + L)x^{(k)} = -Ux^{(k-1)} + b   (k ≥ 1)

or

Qx^{(k)} = (Q - A)x^{(k-1)} + b   (k ≥ 1).
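A componentwise Gauss-Seidel sketch (our own Python/NumPy code, assuming nonzero diagonal entries); note that, unlike Jacobi, it overwrites x in place so that updated components are used immediately:

    import numpy as np

    def gauss_seidel(A, b, x0, tol=1e-10, maxit=500):
        """Componentwise Gauss-Seidel (assumes all diagonal entries nonzero)."""
        n = len(b)
        x = x0.astype(float)
        for k in range(1, maxit + 1):
            x_old = x.copy()
            for i in range(n):
                # x[:i] already holds the new values x_j^(k) for j < i
                s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
                x[i] = (b[i] - s) / A[i, i]
            if np.linalg.norm(x - x_old, ord=np.inf) < tol:
                return x, k
        return x, maxit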

Gauss-Seidel iteration: componentwise form and applicability

For the linear system Ax = b written out as before, the componentwise form of the Gauss-Seidel iteration is

x_1^{(k)} = -(1/a_{11})(a_{12}x_2^{(k-1)} + ... + a_{1n}x_n^{(k-1)} - b_1)
x_2^{(k)} = -(1/a_{22})(a_{21}x_1^{(k)} + a_{23}x_3^{(k-1)} + ... + a_{2n}x_n^{(k-1)} - b_2)
x_3^{(k)} = -(1/a_{33})(a_{31}x_1^{(k)} + a_{32}x_2^{(k)} + a_{34}x_4^{(k-1)} + ... + a_{3n}x_n^{(k-1)} - b_3)
...
x_n^{(k)} = -(1/a_{nn})(a_{n1}x_1^{(k)} + a_{n2}x_2^{(k)} + ... + a_{n,n-1}x_{n-1}^{(k)} - b_n)

Under what condition is the Gauss-Seidel iteration well defined (i.e., meaningful)?

Example: use the Gauss-Seidel method to solve a system Ax = b with starting vector x^{(0)} = (0, 0, 0)^T.

Solution: since a_{22} = 0, the splitting matrix Q = D + L would be singular, so the system must be preprocessed before the iteration can be applied: exchange rows so that the equivalent system has all diagonal entries nonzero, and then iterate.

Starting from x^{(0)} = (0, 0, 0)^T, four steps of the resulting componentwise Gauss-Seidel iteration give

x^{(1)} = (0.7778, ..., ...)
x^{(2)} = (0.9942, ..., ...)
x^{(3)} = (0.9999, ..., ...)
x^{(4)} = (1.0000, ..., ...)

Has the iteration converged?

Theorem (on Gauss-Seidel Method Convergence)
If A is diagonally dominant, then the Gauss-Seidel method converges for any starting vector.

Proof. It suffices to prove that ρ(I - Q^{-1}A) < 1. Let λ be any eigenvalue of I - Q^{-1}A and x a corresponding eigenvector. Assume, WLOG, that ||x||_∞ = 1. We have

(I - Q^{-1}A)x = λx,  or  Qx - Ax = λQx.

Since the splitting matrix Q is the lower triangular part of A, including its diagonal, this reads componentwise

-Σ_{j=i+1}^{n} a_{ij}x_j = λ Σ_{j=1}^{i} a_{ij}x_j   (1 ≤ i ≤ n).

Then

λ a_{ii} x_i = -Σ_{j=i+1}^{n} a_{ij}x_j - λ Σ_{j=1}^{i-1} a_{ij}x_j   (1 ≤ i ≤ n).

Choose an index i s.t. |x_i| = 1 ≥ |x_j| for all j. Then

|λ| |a_{ii}| ≤ Σ_{j=i+1}^{n} |a_{ij}| + |λ| Σ_{j=1}^{i-1} |a_{ij}|.

Solving for |λ| and using the diagonal dominance of A, we get

|λ| ≤ ( Σ_{j=i+1}^{n} |a_{ij}| ) / ( |a_{ii}| - Σ_{j=1}^{i-1} |a_{ij}| ) < 1. ∎

Example: determine whether the Jacobi and the Gauss-Seidel iterations converge when applied to Ax = b with

A = [  2 -1  1 ]
    [  2  2  2 ]
    [ -1 -1  2 ].

Solution: the Jacobi iteration matrix is

G = I - D^{-1}A = [  0   1/2 -1/2 ]
                  [ -1    0   -1  ]
                  [ 1/2  1/2   0  ],

with characteristic equation

det(λI - G) = λ³ + (5/4)λ = 0  ⟹  λ_1 = 0,  λ_{2,3} = ±(√5/2)i.

Since ρ(G) = √5/2 > 1, the Jacobi iteration does not converge.

If instead the Gauss-Seidel iteration is used, the iteration matrix is

G = -(D + L)^{-1}U = [ 0  1/2 -1/2 ]
                     [ 0 -1/2 -1/2 ]
                     [ 0   0  -1/2 ],

whose eigenvalues are λ_1 = 0, λ_{2,3} = -1/2. By ρ(G) = 1/2 < 1, the Gauss-Seidel iteration converges.
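The two spectral radii of this example can be confirmed numerically; a sketch using the matrix A as reconstructed above (an assumption of this transcription, inferred from the stated eigenvalues rather than taken from the original slides):

    import numpy as np

    A = np.array([[ 2.0, -1.0, 1.0],
                  [ 2.0,  2.0, 2.0],
                  [-1.0, -1.0, 2.0]])   # reconstructed matrix (assumption)
    D = np.diag(np.diag(A))
    L = np.tril(A, -1)
    U = np.triu(A, 1)

    G_jacobi = np.linalg.solve(D, -(L + U))    # I - D^{-1}A
    G_gs     = np.linalg.solve(D + L, -U)      # -(D + L)^{-1}U

    rho = lambda G: max(abs(np.linalg.eigvals(G)))
    print(rho(G_jacobi))   # ~1.118 = sqrt(5)/2 > 1 -> Jacobi diverges
    print(rho(G_gs))       # 0.5 < 1               -> Gauss-Seidel converges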

Let A = D + L + U, where D = diag(A), L is the strictly lower triangular part of A, and U is the strictly upper triangular part of A.

SOR (Successive Over-Relaxation) Method

In the SOR method, the splitting matrix is Q = ω^{-1}D + L, the iteration matrix is

G = (D + ωL)^{-1}((1 - ω)D - ωU) = I - Q^{-1}A,

and the iteration formula is

(D + ωL)x^{(k)} = ω(-Ux^{(k-1)} + b) + (1 - ω)Dx^{(k-1)}   (k ≥ 1)

or

Qx^{(k)} = (Q - A)x^{(k-1)} + b   (k ≥ 1).

SOR iteration: componentwise form

Recall the componentwise form of the Gauss-Seidel iteration above. The componentwise form of the SOR iteration is

x_1^{(k)} = x_1^{(k-1)} - (ω/a_{11})(a_{11}x_1^{(k-1)} + a_{12}x_2^{(k-1)} + a_{13}x_3^{(k-1)} + ... + a_{1n}x_n^{(k-1)} - b_1)
x_2^{(k)} = x_2^{(k-1)} - (ω/a_{22})(a_{21}x_1^{(k)} + a_{22}x_2^{(k-1)} + a_{23}x_3^{(k-1)} + ... + a_{2n}x_n^{(k-1)} - b_2)
x_3^{(k)} = x_3^{(k-1)} - (ω/a_{33})(a_{31}x_1^{(k)} + a_{32}x_2^{(k)} + a_{33}x_3^{(k-1)} + ... + a_{3n}x_n^{(k-1)} - b_3)
...
x_n^{(k)} = x_n^{(k-1)} - (ω/a_{nn})(a_{n1}x_1^{(k)} + a_{n2}x_2^{(k)} + ... + a_{n,n-1}x_{n-1}^{(k)} + a_{nn}x_n^{(k-1)} - b_n)
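A componentwise SOR sketch (our own Python/NumPy code, assuming nonzero diagonal entries and 0 < ω < 2); taking ω = 1 recovers Gauss-Seidel:

    import numpy as np

    def sor(A, b, x0, omega, tol=1e-10, maxit=500):
        """Componentwise SOR iteration (0 < omega < 2)."""
        n = len(b)
        x = x0.astype(float)
        for k in range(1, maxit + 1):
            x_old = x.copy()
            for i in range(n):
                # x holds updated entries for j < i and old entries for j >= i
                residual_i = A[i, :] @ x - b[i]
                x[i] = x[i] - omega * residual_i / A[i, i]
            if np.linalg.norm(x - x_old, ord=np.inf) < tol:
                return x, k
        return x, maxit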

Theorem (on SOR Method Convergence)
In the SOR method, suppose that the splitting matrix Q is chosen to be αD - C, where α is a real parameter, D is any positive definite Hermitian matrix, and C is any matrix satisfying C + C* = D - A. If A is positive definite Hermitian, if Q is nonsingular, and if α > 1/2, then the SOR iteration converges for any starting vector.

Remark
In the literature, the parameter α is usually denoted by 1/ω. So the SOR iteration converges when 0 < ω < 2.

Summary of iterative methods

Equivalent forms (assume det Q ≠ 0):

Ax = b  ⟺  Qx = (Q - A)x + b  ⟺  x = (I - Q^{-1}A)x + Q^{-1}b

Iteration formula:

Qx^{(k)} = (Q - A)x^{(k-1)} + b  ⟺  x^{(k)} = (I - Q^{-1}A)x^{(k-1)} + Q^{-1}b

Let A = D + L + U and assume 0 < ω < 2.

Method        | Splitting matrix Q                      | Iteration matrix G = I - Q^{-1}A
Richardson    | I                                       | I - A
Jacobi        | D                                       | I - D^{-1}A
Gauss-Seidel  | D + L                                   | -(D + L)^{-1}U
SOR           | ω^{-1}D + L                             | (D + ωL)^{-1}((1 - ω)D - ωU)
SSOR          | (ω(2 - ω))^{-1}(D + ωL)D^{-1}(D + ωU)   | I - Q^{-1}A
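The table can be turned into a small factory that builds Q for each method (our own sketch; the method-name strings are our own labels). The resulting Q can be fed to a generic splitting iteration such as the splitting_iteration sketch given earlier:

    import numpy as np

    def splitting_matrix(A, method, omega=1.5):
        """Return the splitting matrix Q of the table above (a sketch)."""
        D = np.diag(np.diag(A))
        L = np.tril(A, -1)
        U = np.triu(A, 1)
        if method == "richardson":
            return np.eye(len(A))
        if method == "jacobi":
            return D
        if method == "gauss-seidel":
            return D + L
        if method == "sor":
            return D / omega + L
        if method == "ssor":
            Dinv = np.diag(1.0 / np.diag(A))
            return (D + omega * L) @ Dinv @ (D + omega * U) / (omega * (2 - omega))
        raise ValueError(method)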

Extrapolation

Extrapolation is a technique that can be used to improve the convergence properties of a linear iterative process. Consider the iterative formula

x^{(k)} = Gx^{(k-1)} + c.     (4)

Introduce a parameter γ ≠ 0 and embed the above iteration in a one-parameter family of iterative methods given by

x^{(k)} = γ(Gx^{(k-1)} + c) + (1 - γ)x^{(k-1)} = G_γ x^{(k-1)} + γc     (5)

where G_γ = γG + (1 - γ)I.

If the iteration in (5) converges, say to x, then by taking a limit we get

x = γ(Gx + c) + (1 - γ)x,  or  x = Gx + c.

Note that the iteration in (4) is usually used to produce a sequence converging to the solution of x = Gx + c. If G = I - Q^{-1}A and c = Q^{-1}b, then this corresponds to solving Ax = b.
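A sketch of the extrapolated iteration (5) (our own Python/NumPy code):

    import numpy as np

    def extrapolated_iteration(G, c, x0, gamma, nsteps):
        """Iterate (5): x^(k) = G_gamma x^(k-1) + gamma*c, G_gamma = gamma*G + (1-gamma)*I."""
        G_gamma = gamma * G + (1.0 - gamma) * np.eye(len(G))
        x = x0.astype(float)
        for _ in range(nsteps):
            x = G_gamma @ x + gamma * c
        return x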

Theorem (on Eigenvalues of p(A))
If λ is an eigenvalue of A and p is a polynomial, then p(λ) is an eigenvalue of p(A).

Proof. Hint: let Ax = λx with x ≠ 0. It is easy to see that

A^k x = λ^k x   (k ≥ 0).

So, for p(z) = Σ_{k=0}^{m} c_k z^k,

p(A)x = Σ_{k=0}^{m} c_k A^k x = Σ_{k=0}^{m} c_k λ^k x = p(λ)x. ∎

Remark
Suppose we do not know the eigenvalues of G precisely, but know only that some interval, say [a, b], contains all of them. By the theorem, the eigenvalues of G_γ = γG + (1 - γ)I lie in the interval with endpoints γa + (1 - γ) and γb + (1 - γ). Denote by Λ(A) the set of eigenvalues of any matrix A. Then

ρ(G_γ) = max_{λ ∈ Λ(G_γ)} |λ| = max_{λ ∈ Λ(G)} |γλ + 1 - γ| ≤ max_{λ ∈ [a,b]} |γλ + 1 - γ|.

Now, the purpose of extrapolation is to achieve

min_γ ρ(G_γ) = min_γ max_{λ ∈ Λ(G)} |γλ + 1 - γ| ≤ min_γ max_{λ ∈ [a,b]} |γλ + 1 - γ| < 1.

Theorem (on Optimal Extrapolation Parameters)
If the only information available about the eigenvalues of G is that they lie in the interval [a, b], and if 1 ∉ [a, b], then the best choice for γ is 2/(2 - a - b). With this value of γ, ρ(G_γ) ≤ 1 - γd, where d is the distance from 1 to [a, b].

Proof. See your lecture notes or the book.

Remark
The extrapolation process just discussed can be applied to methods that are not convergent themselves. All that is required is that the eigenvalues of G be real and lie in an interval that does not contain 1.

Example
Determine the spectral radius of the optimal extrapolated Richardson method.

Hint: in the Richardson method, Q = I and G = I - A.

Example
Determine the spectral radius of the optimal extrapolated Jacobi method.
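For either example, the optimal parameter and the resulting bound on ρ(G_γ) from the preceding theorem can be computed directly; a sketch (our own helper, assuming 1 ∉ [a, b]):

    def optimal_gamma(a, b):
        """Optimal extrapolation parameter when the eigenvalues of G lie in [a, b]."""
        assert not (a <= 1.0 <= b), "the interval must not contain 1"
        gamma = 2.0 / (2.0 - a - b)
        d = min(abs(1.0 - a), abs(1.0 - b))   # distance from 1 to [a, b]
        return gamma, 1.0 - gamma * d         # gamma and the bound on rho(G_gamma)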

Chebyshev Acceleration

Chebyshev acceleration is an acceleration procedure that tries to use all available information to get a better approximation of the solution of the linear system. As before, consider a basic iterative method

x^{(k)} = Gx^{(k-1)} + c.     (6)

Recall that a solution to the problem is a vector x s.t. x = Gx + c. At step k in the process, we shall have computed the vectors x^{(1)}, x^{(2)}, ..., x^{(k)}, and we ask whether some linear combination of these vectors is perhaps a better approximation to the exact solution than x^{(k)} itself.

Assume that a_0^{(k)} + a_1^{(k)} + ... + a_k^{(k)} = 1 and set

u^{(k)} = Σ_{i=0}^{k} a_i^{(k)} x^{(i)}.

Then, since x^{(i)} - x = G^i(x^{(0)} - x),

u^{(k)} - x = Σ_{i=0}^{k} a_i^{(k)} (x^{(i)} - x) = Σ_{i=0}^{k} a_i^{(k)} G^i (x^{(0)} - x) = p(G)(x^{(0)} - x),

where p is the polynomial defined by

p(z) = Σ_{i=0}^{k} a_i^{(k)} z^i   (so that p(1) = 1).

Taking norms, we get

||u^{(k)} - x|| ≤ ||p(G)|| ||x^{(0)} - x||.

If the eigenvalues μ_i of G lie within some bounded set S in the complex plane, then by the previous analysis,

ρ(p(G)) = max_{1≤i≤n} |p(μ_i)| ≤ max_{z ∈ S} |p(z)|.

Then the task reduces to

min_{p ∈ P_k, p(1)=1} ρ(p(G)) ≤ min_{p ∈ P_k, p(1)=1} max_{z ∈ S} |p(z)|,

where P_k denotes the set of all real polynomials of degree ≤ k. This is a standard problem in approximation theory.

For example, if S is an interval, say [a, b] ⊂ R, not containing 1, then a scaled and shifted Chebyshev polynomial solves this min-max problem.

The classical Chebyshev polynomial T_k (k ≥ 1) is the unique polynomial of degree k with leading coefficient 2^{k-1} that minimizes

max_{-1 ≤ z ≤ 1} |T_k(z)|.

These polynomials can be generated recursively by

T_0(z) = 1,   T_1(z) = z,
T_k(z) = 2zT_{k-1}(z) - T_{k-2}(z)   (k ≥ 2).
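The three-term recursion translates directly into code; a sketch (our own Python/NumPy evaluation of T_k, usable for scalar or array arguments z):

    import numpy as np

    def chebyshev_T(k, z):
        """Evaluate T_k(z) by the recursion T_k = 2 z T_{k-1} - T_{k-2}."""
        t_prev, t = np.ones_like(z), z
        if k == 0:
            return t_prev
        for _ in range(k - 1):
            t_prev, t = t, 2 * z * t - t_prev
        return t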

Now, suppose that the eigenvalues of G are contained in an interval [a, b] with 1 ∉ [a, b], say b < 1. We are interested in the min-max problem

min_{p_k ∈ P_k, p_k(1)=1} max_{z ∈ [a,b]} |p_k(z)|.

The answer to this problem is contained in the four lemmas in the textbook.


More information

Synopsis of Numerical Linear Algebra

Synopsis of Numerical Linear Algebra Synopsis of Numerical Linear Algebra Eric de Sturler Department of Mathematics, Virginia Tech sturler@vt.edu http://www.math.vt.edu/people/sturler Iterative Methods for Linear Systems: Basics to Research

More information

Eigenvalue Problems. Eigenvalue problems occur in many areas of science and engineering, such as structural analysis

Eigenvalue Problems. Eigenvalue problems occur in many areas of science and engineering, such as structural analysis Eigenvalue Problems Eigenvalue problems occur in many areas of science and engineering, such as structural analysis Eigenvalues also important in analyzing numerical methods Theory and algorithms apply

More information

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column

More information

Solving Linear Systems

Solving Linear Systems Solving Linear Systems Iterative Solutions Methods Philippe B. Laval KSU Fall 2015 Philippe B. Laval (KSU) Linear Systems Fall 2015 1 / 12 Introduction We continue looking how to solve linear systems of

More information

SyDe312 (Winter 2005) Unit 1 - Solutions (continued)

SyDe312 (Winter 2005) Unit 1 - Solutions (continued) SyDe3 (Winter 5) Unit - Solutions (continued) March, 5 Chapter 6 - Linear Systems Problem 6.6 - b Iterative solution by the Jacobi and Gauss-Seidel iteration methods: Given: b = [ 77] T, x = [ ] T 9x +

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 4 Eigenvalue Problems Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction

More information

Iterative Methods and Multigrid

Iterative Methods and Multigrid Iterative Methods and Multigrid Part 1: Introduction to Multigrid 2000 Eric de Sturler 1 12/02/09 MG01.prz Basic Iterative Methods (1) Nonlinear equation: f(x) = 0 Rewrite as x = F(x), and iterate x i+1

More information

Introduction. Math 1080: Numerical Linear Algebra Chapter 4, Iterative Methods. Example: First Order Richardson. Strategy

Introduction. Math 1080: Numerical Linear Algebra Chapter 4, Iterative Methods. Example: First Order Richardson. Strategy Introduction Math 1080: Numerical Linear Algebra Chapter 4, Iterative Methods M. M. Sussman sussmanm@math.pitt.edu Office Hours: MW 1:45PM-2:45PM, Thack 622 Solve system Ax = b by repeatedly computing

More information

Mathematical Optimisation, Chpt 2: Linear Equations and inequalities

Mathematical Optimisation, Chpt 2: Linear Equations and inequalities Mathematical Optimisation, Chpt 2: Linear Equations and inequalities Peter J.C. Dickinson p.j.c.dickinson@utwente.nl http://dickinson.website version: 12/02/18 Monday 5th February 2018 Peter J.C. Dickinson

More information

Math 1080: Numerical Linear Algebra Chapter 4, Iterative Methods

Math 1080: Numerical Linear Algebra Chapter 4, Iterative Methods Math 1080: Numerical Linear Algebra Chapter 4, Iterative Methods M. M. Sussman sussmanm@math.pitt.edu Office Hours: MW 1:45PM-2:45PM, Thack 622 March 2015 1 / 70 Topics Introduction to Iterative Methods

More information

Iterative methods for Linear System

Iterative methods for Linear System Iterative methods for Linear System JASS 2009 Student: Rishi Patil Advisor: Prof. Thomas Huckle Outline Basics: Matrices and their properties Eigenvalues, Condition Number Iterative Methods Direct and

More information

Splitting Iteration Methods for Positive Definite Linear Systems

Splitting Iteration Methods for Positive Definite Linear Systems Splitting Iteration Methods for Positive Definite Linear Systems Zhong-Zhi Bai a State Key Lab. of Sci./Engrg. Computing Inst. of Comput. Math. & Sci./Engrg. Computing Academy of Mathematics and System

More information

Some definitions. Math 1080: Numerical Linear Algebra Chapter 5, Solving Ax = b by Optimization. A-inner product. Important facts

Some definitions. Math 1080: Numerical Linear Algebra Chapter 5, Solving Ax = b by Optimization. A-inner product. Important facts Some definitions Math 1080: Numerical Linear Algebra Chapter 5, Solving Ax = b by Optimization M. M. Sussman sussmanm@math.pitt.edu Office Hours: MW 1:45PM-2:45PM, Thack 622 A matrix A is SPD (Symmetric

More information

x x2 2 + x3 3 x4 3. Use the divided-difference method to find a polynomial of least degree that fits the values shown: (b)

x x2 2 + x3 3 x4 3. Use the divided-difference method to find a polynomial of least degree that fits the values shown: (b) Numerical Methods - PROBLEMS. The Taylor series, about the origin, for log( + x) is x x2 2 + x3 3 x4 4 + Find an upper bound on the magnitude of the truncation error on the interval x.5 when log( + x)

More information

Lab 1: Iterative Methods for Solving Linear Systems

Lab 1: Iterative Methods for Solving Linear Systems Lab 1: Iterative Methods for Solving Linear Systems January 22, 2017 Introduction Many real world applications require the solution to very large and sparse linear systems where direct methods such as

More information

DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix to upper-triangular

DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix to upper-triangular form) Given: matrix C = (c i,j ) n,m i,j=1 ODE and num math: Linear algebra (N) [lectures] c phabala 2016 DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix

More information

Numerical Linear Algebra And Its Applications

Numerical Linear Algebra And Its Applications Numerical Linear Algebra And Its Applications Xiao-Qing JIN 1 Yi-Min WEI 2 August 29, 2008 1 Department of Mathematics, University of Macau, Macau, P. R. China. 2 Department of Mathematics, Fudan University,

More information

Stabilization and Acceleration of Algebraic Multigrid Method

Stabilization and Acceleration of Algebraic Multigrid Method Stabilization and Acceleration of Algebraic Multigrid Method Recursive Projection Algorithm A. Jemcov J.P. Maruszewski Fluent Inc. October 24, 2006 Outline 1 Need for Algorithm Stabilization and Acceleration

More information

0.1 Rational Canonical Forms

0.1 Rational Canonical Forms We have already seen that it is useful and simpler to study linear systems using matrices. But matrices are themselves cumbersome, as they are stuffed with many entries, and it turns out that it s best

More information

NUMERICAL ALGORITHMS FOR A SECOND ORDER ELLIPTIC BVP

NUMERICAL ALGORITHMS FOR A SECOND ORDER ELLIPTIC BVP ANALELE ŞTIINŢIFICE ALE UNIVERSITĂŢII AL.I. CUZA DIN IAŞI (S.N. MATEMATICĂ, Tomul LIII, 2007, f.1 NUMERICAL ALGORITHMS FOR A SECOND ORDER ELLIPTIC BVP BY GINA DURA and RĂZVAN ŞTEFĂNESCU Abstract. The aim

More information

Background. Background. C. T. Kelley NC State University tim C. T. Kelley Background NCSU, Spring / 58

Background. Background. C. T. Kelley NC State University tim C. T. Kelley Background NCSU, Spring / 58 Background C. T. Kelley NC State University tim kelley@ncsu.edu C. T. Kelley Background NCSU, Spring 2012 1 / 58 Notation vectors, matrices, norms l 1 : max col sum... spectral radius scaled integral norms

More information

Lemma 8: Suppose the N by N matrix A has the following block upper triangular form:

Lemma 8: Suppose the N by N matrix A has the following block upper triangular form: 17 4 Determinants and the Inverse of a Square Matrix In this section, we are going to use our knowledge of determinants and their properties to derive an explicit formula for the inverse of a square matrix

More information

6. Iterative Methods for Linear Systems. The stepwise approach to the solution...

6. Iterative Methods for Linear Systems. The stepwise approach to the solution... 6 Iterative Methods for Linear Systems The stepwise approach to the solution Miriam Mehl: 6 Iterative Methods for Linear Systems The stepwise approach to the solution, January 18, 2013 1 61 Large Sparse

More information

Block-tridiagonal matrices

Block-tridiagonal matrices Block-tridiagonal matrices. p.1/31 Block-tridiagonal matrices - where do these arise? - as a result of a particular mesh-point ordering - as a part of a factorization procedure, for example when we compute

More information

AN ITERATION. In part as motivation, we consider an iteration method for solving a system of linear equations which has the form x Ax = b

AN ITERATION. In part as motivation, we consider an iteration method for solving a system of linear equations which has the form x Ax = b AN ITERATION In part as motivation, we consider an iteration method for solving a system of linear equations which has the form x Ax = b In this, A is an n n matrix and b R n.systemsof this form arise

More information

ITERATIVE PROJECTION METHODS FOR SPARSE LINEAR SYSTEMS AND EIGENPROBLEMS CHAPTER 3 : SEMI-ITERATIVE METHODS

ITERATIVE PROJECTION METHODS FOR SPARSE LINEAR SYSTEMS AND EIGENPROBLEMS CHAPTER 3 : SEMI-ITERATIVE METHODS ITERATIVE PROJECTION METHODS FOR SPARSE LINEAR SYSTEMS AND EIGENPROBLEMS CHAPTER 3 : SEMI-ITERATIVE METHODS Heinrich Voss voss@tu-harburg.de Hamburg University of Technology Institute of Numerical Simulation

More information

Lecture 10 - Eigenvalues problem

Lecture 10 - Eigenvalues problem Lecture 10 - Eigenvalues problem Department of Computer Science University of Houston February 28, 2008 1 Lecture 10 - Eigenvalues problem Introduction Eigenvalue problems form an important class of problems

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors Eigenvalues and Eigenvectors week -2 Fall 26 Eigenvalues and eigenvectors The most simple linear transformation from R n to R n may be the transformation of the form: T (x,,, x n ) (λ x, λ 2,, λ n x n

More information

Jordan Normal Form. Chapter Minimal Polynomials

Jordan Normal Form. Chapter Minimal Polynomials Chapter 8 Jordan Normal Form 81 Minimal Polynomials Recall p A (x) =det(xi A) is called the characteristic polynomial of the matrix A Theorem 811 Let A M n Then there exists a unique monic polynomial q

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

Motivation: Sparse matrices and numerical PDE's

Motivation: Sparse matrices and numerical PDE's Lecture 20: Numerical Linear Algebra #4 Iterative methods and Eigenproblems Outline 1) Motivation: beyond LU for Ax=b A little PDE's and sparse matrices A) Temperature Equation B) Poisson Equation 2) Splitting

More information

Chapter 12: Iterative Methods

Chapter 12: Iterative Methods ES 40: Scientific and Engineering Computation. Uchechukwu Ofoegbu Temple University Chapter : Iterative Methods ES 40: Scientific and Engineering Computation. Gauss-Seidel Method The Gauss-Seidel method

More information

A NEW EFFECTIVE PRECONDITIONED METHOD FOR L-MATRICES

A NEW EFFECTIVE PRECONDITIONED METHOD FOR L-MATRICES Journal of Mathematical Sciences: Advances and Applications Volume, Number 2, 2008, Pages 3-322 A NEW EFFECTIVE PRECONDITIONED METHOD FOR L-MATRICES Department of Mathematics Taiyuan Normal University

More information

Homework sheet 4: EIGENVALUES AND EIGENVECTORS. DIAGONALIZATION (with solutions) Year ? Why or why not? 6 9

Homework sheet 4: EIGENVALUES AND EIGENVECTORS. DIAGONALIZATION (with solutions) Year ? Why or why not? 6 9 Bachelor in Statistics and Business Universidad Carlos III de Madrid Mathematical Methods II María Barbero Liñán Homework sheet 4: EIGENVALUES AND EIGENVECTORS DIAGONALIZATION (with solutions) Year - Is

More information

MATH 5640: Functions of Diagonalizable Matrices

MATH 5640: Functions of Diagonalizable Matrices MATH 5640: Functions of Diagonalizable Matrices Hung Phan, UMass Lowell November 27, 208 Spectral theorem for diagonalizable matrices Definition Let V = X Y Every v V is uniquely decomposed as u = x +

More information

MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators.

MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators. MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators. Adjoint operator and adjoint matrix Given a linear operator L on an inner product space V, the adjoint of L is a transformation

More information

Jordan normal form notes (version date: 11/21/07)

Jordan normal form notes (version date: 11/21/07) Jordan normal form notes (version date: /2/7) If A has an eigenbasis {u,, u n }, ie a basis made up of eigenvectors, so that Au j = λ j u j, then A is diagonal with respect to that basis To see this, let

More information

Nonlinear Programming Algorithms Handout

Nonlinear Programming Algorithms Handout Nonlinear Programming Algorithms Handout Michael C. Ferris Computer Sciences Department University of Wisconsin Madison, Wisconsin 5376 September 9 1 Eigenvalues The eigenvalues of a matrix A C n n are

More information