Introduction and Stationary Iterative Methods


1 Introduction and Stationary Iterative Methods

C. T. Kelley, NC State University
Research supported by NSF, DOE, ARO, USACE
DTU ITMAN, 2011

2 Outline

Notation and Preliminaries
General References
What You Should Know
Convergence and the Banach Lemma
Matrix Splittings and Classical Methods
References

3 General References

C. T. Kelley, Iterative Methods for Linear and Nonlinear Equations, no. 16 in Frontiers in Applied Mathematics, SIAM, Philadelphia, 1995.
J. W. Demmel, Applied Numerical Linear Algebra, SIAM, Philadelphia, 1997.
G. H. Golub and C. F. Van Loan, Matrix Computations, Johns Hopkins Studies in the Mathematical Sciences, Johns Hopkins University Press, Baltimore, 3rd ed., 1996.
E. Isaacson and H. B. Keller, Analysis of Numerical Methods, Wiley, New York, 1966.
G. W. Stewart, Introduction to Matrix Computations, Academic Press, New York, 1973.

4 Background

I assume you have had courses in
  Numerical methods (Gaussian elimination, SVD, QR, ...)
  Linear algebra (vector spaces, norms, inner products, ...)
  Calculus and differential equations
Some functional analysis would help. If at any time I use something you are not familiar with, stop me and I will review.

5 Other Things You Should Know

I have never done this before. I may have too much or too little material.
Some of the codes I will give you are new, so they may have a few bugs.
I've set too many exercises. You'll have to be selective or stay up late.

6 What's in Monday's Directory

A copy of today's lectures in PDF.
A MATLAB code gauss.m, which you'll need for one of the exercises.
A copy of a paper that may help.

7 Vectors

All vectors are column vectors of dimension N:
\[
x = \begin{pmatrix} \xi_1 \\ \xi_2 \\ \vdots \\ \xi_N \end{pmatrix}, \qquad
y = \begin{pmatrix} \eta_1 \\ \eta_2 \\ \vdots \\ \eta_N \end{pmatrix} \in \mathbb{R}^N.
\]
Components of vectors use Greek letters. The scalar product is
\[
x^T y = \sum_{i=1}^{N} \xi_i \eta_i,
\]
where $x^T$ is the row vector $x^T = (\xi_1, \dots, \xi_N)$.

8 Matrices

Matrices are in upper case, columns in lower case:
\[
A = \begin{pmatrix}
a_{11} & \dots & a_{1M} \\
a_{21} & \dots & a_{2M} \\
\vdots & & \vdots \\
a_{N1} & \dots & a_{NM}
\end{pmatrix}
= (a_1, a_2, \dots, a_M)
\]
is an $N \times M$ matrix.

9 Transpose

\[
A^T = \begin{pmatrix}
a_{11} & \dots & a_{N1} \\
a_{12} & \dots & a_{N2} \\
\vdots & & \vdots \\
a_{1M} & \dots & a_{NM}
\end{pmatrix}
\]
is an $M \times N$ matrix. This is consistent with $x^T$ being a row vector.
We mostly do real arithmetic. In complex arithmetic we use $A^H$, the complex conjugate transpose, in place of $A^T$.

10 Linear Equations

$Ax = b$, explicitly
\[
\begin{aligned}
a_{11}\xi_1 + \dots + a_{1j}\xi_j + \dots + a_{1N}\xi_N &= b_1 \\
&\ \vdots \\
a_{i1}\xi_1 + \dots + a_{ij}\xi_j + \dots + a_{iN}\xi_N &= b_i \\
&\ \vdots \\
a_{N1}\xi_1 + \dots + a_{Nj}\xi_j + \dots + a_{NN}\xi_N &= b_N
\end{aligned}
\]
Unless we explicitly say otherwise, $A$ is nonsingular.

11 Vector Norms

Our vector norms will be the $l^p$ norms
\[
\|x\|_p = \left( \sum_{j=1}^{N} |\xi_j|^p \right)^{1/p} \quad (1 \le p < \infty)
\qquad \text{and} \qquad
\|x\|_\infty = \max_{1 \le j \le N} |\xi_j|.
\]
The $l^2$ norm is connected with the scalar product: $x^T x = \|x\|_2^2$.

12 Matrix Norms

Let $\|\cdot\|$ be a norm on $\mathbb{R}^N$. The induced matrix norm of an $N \times N$ matrix $A$ is defined by
\[
\|A\| = \max_{\|x\| = 1} \|Ax\|.
\]
We use induced norms. They have the important property
\[
\|Ax\| \le \|A\| \, \|x\|,
\]
which implies that if $Ax = b$ then
\[
\|A^{-1}\|^{-1} \|x\| \le \|b\| \le \|A\| \, \|x\|.
\]

13 Types of Linear Equations

Dense: A has very few zero entries.
Sparse: A has many zeros.
Structured: A has structure which algorithms can use, for example sparsity, symmetry ($A = A^T$), or a connection to differential or integral equations.
Unstructured: One must use general methods.

14 Structure

Sparsity.
Symmetry: $A^T = A$. $A$ is symmetric positive definite (spd) if $A = A^T$ and $x^T A x > 0$ for all $x \neq 0$. In this case $\|x\|_A = (x^T A x)^{1/2}$ is a vector norm.
Normality: $A^T A = A A^T$.
Diagonalizability: $A = V \Lambda V^{-1}$ where $\Lambda$ is diagonal. $A$ is diagonalizable if and only if $A$ has $N$ linearly independent eigenvectors, and then $V = (v_1, \dots, v_N)$.
If $A$ is symmetric then $V$ is orthogonal: $V V^T = V^T V = I$. If $A$ is normal then $V$ is unitary: $V V^H = V^H V = I$.

15 Direct and Iterative Methods

Direct methods solve $Ax = b$ in finite time in exact arithmetic. Examples:
  Gaussian elimination and other matrix factorizations
  FFT for some problems (Toeplitz, Hankel, PDEs)
Iterative methods produce a sequence $\{x_n\}$ which (you hope) converges to $x^* = A^{-1} b$. Examples:
  Stationary iterative methods (L1 and L3)
  Krylov methods (L2, 3, 4)
  Multigrid methods (L3)

16 Condition Numbers

The condition number of $A$ relative to the norm $\|\cdot\|$ is
\[
\kappa(A) = \|A\| \, \|A^{-1}\|,
\]
where $\kappa(A)$ is understood to be infinite if $A$ is singular. $\kappa_p(A)$ means $\kappa(A)$ relative to the $l^p$ norm.
If $A$ is poorly conditioned (say $\kappa > 10^8$) we may not be able to obtain an accurate solution with any choice of algorithm. Poor conditioning is a property of $A$. Algorithms cannot help.

17 Termination of Iterations

Most iterative methods terminate when the residual
\[
r = b - Ax
\]
is sufficiently small. One termination criterion is
\[
\frac{\|r_k\|}{\|r_0\|} < \tau,
\]
where $r_0 = b - A z_0$ and $z_0$ is a reference value. So, what does a small relative residual tell us about the error $e = x^* - x$, where $x^* = A^{-1} b$?

18 Residuals, Errors, Conditioning

Theorem: Let $b, x, x_0 \in \mathbb{R}^N$. Let $A$ be nonsingular and let $x^* = A^{-1} b$. Then
\[
\frac{1}{\kappa(A)} \frac{\|r\|}{\|r_0\|} \le \frac{\|e\|}{\|e_0\|} \le \kappa(A) \frac{\|r\|}{\|r_0\|}.
\]
We prove this. Note that $r = b - Ax = Ae$, so
\[
\|e\| = \|A^{-1} A e\| \le \|A^{-1}\| \, \|Ae\| = \|A^{-1}\| \, \|r\|
\]
and
\[
\|r\| = \|Ae\| \le \|A\| \, \|e\|.
\]

19 Proof, Continued

So
\[
\frac{\|e\|}{\|e_0\|} \le \frac{\|A^{-1}\| \, \|r\|}{\|r_0\| / \|A\|} = \kappa(A) \frac{\|r\|}{\|r_0\|}
\]
and
\[
\frac{\|e\|}{\|e_0\|} \ge \frac{\|r\| / \|A\|}{\|A^{-1}\| \, \|r_0\|} = \kappa(A)^{-1} \frac{\|r\|}{\|r_0\|},
\]
as asserted.

20 Application of the Theorem

Most of the methods we use will set the reference vector to zero. Hence $r_0 = b$ and $\|e_0\| = \|x^*\| = \|A^{-1} b\|$, and the theorem connects the relative residual to the relative error:
\[
\frac{1}{\kappa(A)} \frac{\|b - Ax\|}{\|b\|} \le \frac{\|x - x^*\|}{\|x^*\|} \le \kappa(A) \frac{\|b - Ax\|}{\|b\|}.
\]
If $A$ is poorly conditioned, then termination on small relative residuals may be unreliable.

21 Stationary Iterative Methods

A stationary iterative method converts $Ax = b$ to a fixed point problem
\[
x = Mx + c,
\]
and the iteration is
\[
x_{n+1} = M x_n + c.
\]
$M$ is called the iteration matrix. This iteration is also called Richardson iteration. The method is called stationary because the formula does not change as a function of $x_n$.
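
To make the recipe concrete, here is a minimal MATLAB sketch of the iteration; the iteration matrix M, the vector c, the initial iterate x0, and the iteration count nmax are placeholders supplied by the caller, not part of the slides.

function x = stationary(M, c, x0, nmax)
% STATIONARY  Run the stationary iteration x_{n+1} = M*x_n + c.
% The update formula never changes with n, hence "stationary".
x = x0;
for n = 1:nmax
    x = M*x + c;
end
end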

22 Banach Lemma

Let $M$ be $N \times N$. Assume that $\|M\| < 1$ for some induced matrix norm. Then
$I - M$ is nonsingular,
\[
(I - M)^{-1} = \sum_{l=0}^{\infty} M^l,
\qquad
\|(I - M)^{-1}\| \le (1 - \|M\|)^{-1}.
\]

23 Proof of Banach Lemma: I

We will show that the series converges and
\[
\sum_{l=0}^{\infty} M^l = (I - M)^{-1}.
\]
The partial sums
\[
S_k = \sum_{l=0}^{k} M^l
\]
form a Cauchy sequence in $\mathbb{R}^{N \times N}$. To see this, note that for all $m > k$
\[
\|S_k - S_m\| \le \sum_{l=k+1}^{m} \|M^l\|.
\]
And ...

24 Proof of Banach Lemma: II

\[
\|M^l\| \le \|M\|^l
\]
because $\|\cdot\|$ is a matrix norm that is induced by a vector norm. Hence
\[
\|S_k - S_m\| \le \sum_{l=k+1}^{m} \|M\|^l
= \|M\|^{k+1} \left( \frac{1 - \|M\|^{m-k}}{1 - \|M\|} \right) \to 0
\]
as $m, k \to \infty$. So the series converges. Let
\[
S = \sum_{l=0}^{\infty} M^l.
\]

25 Proof of Banach Lemma: III

Clearly
\[
MS = \sum_{l=0}^{\infty} M^{l+1} = \sum_{l=1}^{\infty} M^l = S - I,
\]
and so
\[
(I - M) S = I \quad \text{and} \quad S = (I - M)^{-1}.
\]
Finally,
\[
\|(I - M)^{-1}\| \le \sum_{l=0}^{\infty} \|M\|^l = (1 - \|M\|)^{-1}.
\]

26 Convergence of Stationary Iterative Methods

If $\|M\| < 1$ for any induced matrix norm, then the stationary iteration
\[
x_{n+1} = M x_n + c
\]
converges for all $c$ and $x_0$ to $x^* = (I - M)^{-1} c$.
Proof: Clearly
\[
x_{n+1} = \sum_{l=0}^{n} M^l c + M^{n+1} x_0 \to (I - M)^{-1} c = x^*.
\]

27 Convergence Speed

Let $\|M\| = \alpha < 1$ and $x^* = (I - M)^{-1} c$. Then
\[
\|x^* - x_n\| \le \alpha^n \|x^* - x_0\|.
\]
Proof:
\[
x^* - x_n = \sum_{l=n}^{\infty} M^l c - M^n x_0
= M \left( \sum_{l=n-1}^{\infty} M^l c - M^{n-1} x_0 \right)
= M (x^* - x_{n-1}),
\]
so
\[
\|x^* - x_n\| \le \alpha \|x^* - x_{n-1}\| \le \dots \le \alpha^n \|x^* - x_0\|.
\]

28 Spectral Radius

The spectrum of $M$, $\sigma(M)$, is the set of eigenvalues of $M$. The spectral radius of $M$ is
\[
\rho(M) = \max_{\lambda \in \sigma(M)} |\lambda|.
\]
Theorem: $\rho(M) < 1$ if and only if $\|M\| < 1$ for some induced matrix norm.
A stationary iterative method $x_{n+1} = M x_n + c$ converges for all initial iterates and right sides if and only if $\rho(M) < 1$. The spectral radius does not depend on any norm.

29 Predicting Convergence

Suppose you know that $\|M\| \le \alpha < 1$. Then
\[
e_{n+1} = x^* - x_{n+1} = (M x^* + c) - (M x_n + c) = M e_n.
\]
Hence $\|e_n\| \le \alpha^n \|e_0\|$, and $\|e_n\| \le \tau \|e_0\|$ if $\alpha^n < \tau$, i.e., if
\[
n > \log(\tau) / \log(\alpha).
\]
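
As a concrete (illustrative) check of this bound in MATLAB, with alpha = 0.9 and tau = 1e-6:

alpha = 0.9; tau = 1e-6;
n = ceil(log(tau)/log(alpha))   % n = 132 iterations guarantee ||e_n|| <= tau*||e_0||

So about 132 iterations suffice to reduce the error by a factor of 10^6 at this contraction rate.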

30 Preconditioned Richardson Iteration

If $\|I - A\| < 1$ then one can apply Richardson iteration directly to $Ax = b$:
\[
x_{n+1} = (I - A) x_n + b.
\]
Sometimes one can find an approximate inverse $B$ for which $\|I - BA\| < 1$ and precondition with $B$ to obtain
\[
BAx = Bb,
\]
and the iteration is
\[
x_{n+1} = (I - BA) x_n + Bb.
\]

31 Approximate Inverse Preconditioning: I

Theorem: If $A$ and $B$ are $N \times N$ matrices and $B$ is an approximate inverse of $A$ in the sense that $\|I - BA\| < 1$, then $A$ and $B$ are both nonsingular and
\[
\|A^{-1}\| \le \frac{\|B\|}{1 - \|I - BA\|},
\qquad
\|B^{-1}\| \le \frac{\|A\|}{1 - \|I - BA\|},
\]
and
\[
\|A^{-1} - B\| \le \frac{\|B\| \, \|I - BA\|}{1 - \|I - BA\|},
\qquad
\|A - B^{-1}\| \le \frac{\|A\| \, \|I - BA\|}{1 - \|I - BA\|}.
\]

32 Approximate Inverse Preconditioning: II

Proof: Let $M = I - BA$. The Banach Lemma implies that
\[
I - M = I - (I - BA) = BA
\]
is nonsingular. Hence both $A$ and $B$ are nonsingular. Moreover
\[
\|A^{-1} B^{-1}\| = \|(I - M)^{-1}\| \le \frac{1}{1 - \|M\|} = \frac{1}{1 - \|I - BA\|}.
\]

33 Approximate Inverse Preconditioning: III

Use $A^{-1} = (I - M)^{-1} B$ to get the first part:
\[
\|A^{-1}\| \le \|(I - M)^{-1}\| \, \|B\| \le \frac{\|B\|}{1 - \|I - BA\|}.
\]
The second pair of inequalities follows from
\[
A^{-1} - B = (I - BA) A^{-1}, \qquad A - B^{-1} = -B^{-1} (I - BA),
\]
and the first.

34 Matrix Splittings

One way to convert $Ax = b$ to $x = Mx + c$ is to split
\[
A = A_1 + A_2,
\]
where $A_1$ is nonsingular and $A_1 y = q$ is easy to solve for all $q$, and then solve
\[
x = A_1^{-1} (b - A_2 x) \equiv M x + c.
\]
Here $M = -A_1^{-1} A_2$ and $c = A_1^{-1} b$. Remember: $A_1^{-1} z$ means solve $A_1 y = z$, not compute $A_1^{-1}$. A MATLAB sketch follows.
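
A minimal MATLAB sketch of the splitting iteration, assuming A1 and A2 with A = A1 + A2 are given and A1 is cheap to solve with (backslash stands in for whatever fast solver A1 admits; x0, b, and nmax are assumed inputs):

% x_{n+1} = A1 \ (b - A2*x_n), i.e. x_{n+1} = M*x_n + c with M = -inv(A1)*A2
x = x0;
for n = 1:nmax
    x = A1 \ (b - A2*x);     % solve A1*y = b - A2*x; never form inv(A1)
end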

35 Jacobi Iteration: I

Write $Ax = b$ explicitly:
\[
\begin{aligned}
a_{11}\xi_1 + \dots + a_{1N}\xi_N &= \beta_1 \\
&\ \vdots \\
a_{N1}\xi_1 + \dots + a_{NN}\xi_N &= \beta_N
\end{aligned}
\]
and solve the $i$th equation for $\xi_i$, pretending the other components are known. You get
\[
\xi_i = \frac{1}{a_{ii}} \left( \beta_i - \sum_{j \neq i} a_{ij} \xi_j \right),
\]
which is a linear fixed point problem equivalent to $Ax = b$.

36 Jacobi Iteration: II

The iteration is
\[
\xi_i^{New} = \frac{1}{a_{ii}} \left( \beta_i - \sum_{j \neq i} a_{ij} \xi_j^{Old} \right).
\]
So what are $M$ and $c$? Split $A = A_1 + A_2$ with $A_1 = D$ and $A_2 = L + U$, where $D$ is the diagonal of $A$, and $L$ and $U$ are the (strict) lower and upper triangular parts. Then
\[
x^{New} = D^{-1} \left( b - (L + U) x^{Old} \right).
\]

37 Jacobi Iteration: III

So the iteration is
\[
x_{n+1} = -D^{-1} (L + U) x_n + D^{-1} b
\]
and the iteration matrix is
\[
M_{JAC} = -D^{-1} (L + U).
\]
Is there any reason to expect $\rho(M_{JAC}) < 1$? (A MATLAB sketch of the iteration is below.)
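
A dense-MATLAB sketch of Jacobi built directly from this splitting (A, b, x0, and nmax are assumed given; a practical code would exploit sparsity, as in the Poisson example later):

D  = diag(diag(A));          % diagonal part of A
LU = A - D;                  % strict lower plus upper parts, L + U
x  = x0;
for n = 1:nmax
    x = D \ (b - LU*x);      % x_{n+1} = -D^{-1}(L+U) x_n + D^{-1} b
end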

38 Convergence for Strictly Diagonally Dominant A

Theorem: Let $A$ be an $N \times N$ matrix and assume that $A$ is strictly diagonally dominant, that is, for all $1 \le i \le N$
\[
0 < \sum_{j \neq i} |a_{ij}| < |a_{ii}|.
\]
Then $A$ is nonsingular and the Jacobi iteration converges to $x^* = A^{-1} b$ for all $b$.

39 Proof: Convergence for Strictly Diagonally Dominant A

Our assumptions imply that $a_{ii} \neq 0$, so the iteration is defined. We can prove everything else by showing that $\|M_{JAC}\|_\infty < 1$. Remember that $\|M_{JAC}\|_\infty$ is the maximum absolute row sum. By assumption, the $i$th row sum of $M = M_{JAC}$ satisfies
\[
\sum_{j=1}^{N} |m_{ij}| = \sum_{j \neq i} \frac{|a_{ij}|}{|a_{ii}|} < 1.
\]
That's it.

40 Observations

Convergence of Jacobi implies $A$ is nonsingular.
Showing $\|M_{JAC}\| < 1$ for any norm would do. The $l^\infty$ norm fit the assumptions the best.
We have said nothing about the speed of convergence.
Jacobi iteration does not depend on the ordering of the variables.
Each $\xi_i^{New}$ can be computed independently of all the others, so Jacobi is easy to parallelize.

41 Gauss-Seidel Iteration

Gauss-Seidel changes Jacobi by using each updated entry as soon as the computation is done. So
\[
\xi_i^{New} = \frac{1}{a_{ii}} \left( \beta_i - \sum_{j < i} a_{ij} \xi_j^{New} - \sum_{j > i} a_{ij} \xi_j^{Old} \right).
\]
You might think this is better, because the most up-to-date information is in the formula.

42 Gauss-Seidel Iteration

One advantage of Gauss-Seidel is that you need only store one copy of $x$. This loop does the job with only one vector:

for i = 1:N do
  sum = 0
  for j ≠ i do
    sum = sum + a(i,j)*x(j)
  end for
  x(i) = (b(i) - sum)/a(i,i)
end for
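
The same sweep in MATLAB (a sketch; A, b, and the current iterate x are assumed given). Because x is overwritten in place, the entries with j < i are already the new values:

n = length(b);
for i = 1:n
    s = 0;
    for j = [1:i-1, i+1:n]            % all j ~= i
        s = s + A(i,j)*x(j);
    end
    x(i) = (b(i) - s)/A(i,i);         % overwrite: this is xi_i^New
end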

43 Gauss-Seidel Iteration Matrix

From the formula, running $i = 1, \dots, N$,
\[
\xi_i^{New} = \frac{1}{a_{ii}} \left( \beta_i - \sum_{j < i} a_{ij} \xi_j^{New} - \sum_{j > i} a_{ij} \xi_j^{Old} \right),
\]
you can see that
\[
(D + L) x_{n+1} = b - U x_n,
\]
so
\[
M_{GS} = -(D + L)^{-1} U \quad \text{and} \quad c = (D + L)^{-1} b.
\]

44 Backwards Gauss-Seidel

Gauss-Seidel depends on the ordering. Backwards Gauss-Seidel is
\[
\xi_i^{New} = \frac{1}{a_{ii}} \left( \beta_i - \sum_{j > i} a_{ij} \xi_j^{New} - \sum_{j < i} a_{ij} \xi_j^{Old} \right),
\]
running from $i = N$ down to $i = 1$. So
\[
M_{BGS} = -(D + U)^{-1} L.
\]

45 Symmetric Gauss-Seidel

A symmetric Gauss-Seidel iteration is a forward Gauss-Seidel iteration followed by a backward Gauss-Seidel iteration. This leads to the iteration matrix
\[
M_{SGS} = M_{BGS} M_{GS} = (D + U)^{-1} L (D + L)^{-1} U.
\]
If $A$ is symmetric then $U = L^T$. In that event
\[
M_{SGS} = (D + U)^{-1} L (D + L)^{-1} U = (D + L^T)^{-1} L (D + L)^{-1} L^T.
\]

46 SOR Iteration

Add a relaxation parameter $\omega$ to Gauss-Seidel:
\[
M_{SOR} = (D + \omega L)^{-1} \left( (1 - \omega) D - \omega U \right).
\]
A good choice of $\omega$ gives much better performance. A componentwise sketch follows.
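
A componentwise MATLAB sketch of one SOR sweep (omega, A, b, and x are assumed given; omega = 1 recovers Gauss-Seidel):

n = length(b);
for i = 1:n
    s   = A(i,:)*x - A(i,i)*x(i);          % sum of a_ij*x_j over j ~= i, mixed new/old
    xgs = (b(i) - s)/A(i,i);               % the Gauss-Seidel value
    x(i) = (1 - omega)*x(i) + omega*xgs;   % relax toward it
end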

47 Observations

Gauss-Seidel and SOR depend on the order of the variables, so they are harder to parallelize.
While they may perform better than simple Jacobi, it's not a lot better.
These methods are not competitive with Krylov methods.
They require the least amount of storage, and are still used for that reason.

48 Splitting Methods to Preconditioners

Splitting methods can be seen as preconditioned Richardson iteration. You want to find the preconditioner $B$ so that the iteration matrix from the splitting satisfies
\[
M = -A_1^{-1} A_2 = I - BA.
\]
So
\[
I - M = I + A_1^{-1} A_2 = A_1^{-1} (A_1 + A_2) = A_1^{-1} A = BA,
\]
i.e., $B = A_1^{-1}$.

49 Jacobi Preconditioning

For the Jacobi splitting $A_1 = D$, $A_2 = L + U$, we get
\[
-D^{-1} (L + U) = I - BA,
\]
so
\[
BA = I + D^{-1} (L + U) = D^{-1} A.
\]
Jacobi preconditioning is multiplication by $D^{-1}$. This can be a surprisingly good preconditioner for Krylov methods.
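
Since applying $B = D^{-1}$ is just division by the diagonal, preconditioned Richardson with the Jacobi preconditioner is one line per step. A sketch (A, b, x0, and nmax are assumed given):

d = diag(A);                 % applying B = D^{-1} is elementwise division by d
x = x0;
for n = 1:nmax
    x = x + (b - A*x)./d;    % x_{n+1} = x_n + B*r_n = (I - BA)x_n + B*b
end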

50 References

P. Henrici, Discrete Variable Methods in Ordinary Differential Equations, Wiley, New York, 1962.
R. J. LeVeque, Finite Difference Methods for Ordinary and Partial Differential Equations, SIAM, Philadelphia, 2007.
I. Stakgold, Green's Functions and Boundary Value Problems, Wiley-Interscience, New York, 1979.

51 The Poisson Equation in 1D

One space dimension:
\[
-u''(x) = f(x) \quad \text{for } x \in (0, 1).
\]
Homogeneous Dirichlet boundary conditions:
\[
u(0) = u(1) = 0.
\]
Eigenvalue problem:
\[
-u''(x) = \lambda u(x) \quad \text{for } x \in (0, 1), \qquad u(0) = u(1) = 0.
\]

52 Solution of the Boundary Value Problem

We can diagonalize the operator using the solutions of the eigenvalue problem
\[
u_n(x) = \sqrt{2} \sin(n \pi x), \qquad \lambda_n = n^2 \pi^2.
\]
$\{u_n\}$ is an orthonormal basis for
\[
L_0^2 = \mathrm{cl}_{L^2} \{ u \in C([0,1]) \mid u(0) = u(1) = 0 \},
\]
and the boundary value problem's solution is
\[
u(x) = \sum_{n=1}^{\infty} \frac{1}{n^2 \pi^2} \, u_n(x) \int_0^1 u_n(z) f(z) \, dz.
\]

53 Properties of the Operator

The operator
\[
-\frac{d^2}{dx^2} : C_0^2([0,1]) \to C([0,1])
\]
is injective,
is symmetric with respect to the $L^2$ scalar product,
has an $L_0^2$ orthonormal basis of eigenfunctions,
and has positive eigenvalues.

54 Solution Operator

The solution of $-u'' = f$ on $[0, 1]$ with homogeneous Dirichlet boundary conditions is
\[
u(x) = \int_0^1 g(x, z) f(z) \, dz,
\]
where
\[
g(x, z) = \begin{cases} x(1 - z), & 0 < x < z, \\ z(1 - x), & z < x < 1. \end{cases}
\]

55 Central Difference Approximation

Add
\[
u(x + h) = u(x) + u'(x) h + u''(x) h^2/2 + u'''(x) h^3/6 + O(h^4)
\]
to
\[
u(x - h) = u(x) - u'(x) h + u''(x) h^2/2 - u'''(x) h^3/6 + O(h^4)
\]
and get
\[
-u''(x) = \frac{-u(x + h) + 2 u(x) - u(x - h)}{h^2} + O(h^2).
\]

56 Finite Difference Equations

Equally spaced grid: $x_i = ih$, $0 \le i \le N + 1$, $h = 1/(N + 1)$. Approximate $u(x_i)$ by $\xi_i$. Let $u = (\xi_1, \dots, \xi_N)^T$. The boundary conditions imply that $\xi_0 = \xi_{N+1} = 0$. The finite difference equations at the interior grid points are
\[
\frac{-\xi_{i-1} + 2 \xi_i - \xi_{i+1}}{h^2} = b_i \equiv f(x_i)
\]
for $1 \le i \le N$.

57 Matrix Representation

$Au = b$, where $A$ is tridiagonal and symmetric:
\[
A = \frac{1}{h^2}
\begin{pmatrix}
2 & -1 & 0 & \dots & 0 \\
-1 & 2 & -1 & \ddots & \vdots \\
0 & \ddots & \ddots & \ddots & 0 \\
\vdots & \ddots & -1 & 2 & -1 \\
0 & \dots & 0 & -1 & 2
\end{pmatrix}.
\]
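
In MATLAB this matrix is most naturally stored sparse. One way to build it (a sketch; N is assumed given):

h = 1/(N+1);
e = ones(N,1);
A = spdiags([-e 2*e -e], -1:1, N, N) / h^2;   % rows: -1, 2, -1, scaled by 1/h^2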

58 Jacobi and Gauss-Seidel for the 1D Problem

Jacobi:

for i = 1:N do
  ξ_i^{New} = (1/2)(h^2 b_i + ξ_{i-1}^{Old} + ξ_{i+1}^{Old})
end for

Gauss-Seidel:

for i = 1:N do
  ξ_i ← (1/2)(h^2 b_i + ξ_{i-1} + ξ_{i+1})
end for

59 Jacobi Iteration in MATLAB

for ijac = 1:n
    xnew(1) = .5*(h^2 * b(1) + xold(2));
    for i = 2:n-1
        xnew(i) = .5*(h^2 * b(i) + xold(i-1) + xold(i+1));
    end
    xnew(n) = .5*(h^2 * b(n) + xold(n-1));
    xold = xnew;
end

How would you turn this into Gauss-Seidel with a text editor?

60 Jacobi Example

Let's solve
\[
-u'' = 0, \qquad u(0) = u(1) = 0,
\]
with $h = 1/101$ and $N = 100$. The solution is $u = 0$. We will use as an initial iterate
\[
u_0 = x(1 - x) \cos(49 \pi x).
\]
We will take 100 Jacobi iterations. A complete script for this experiment follows.
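
For reference, here is one way to set up and run the whole experiment as a self-contained script (a sketch built around the loop on the previous slide; since f = 0 and u = 0, the iterate itself is the error):

N = 100; h = 1/(N+1);
x = (1:N)'*h;                          % interior grid points
b = zeros(N,1);                        % right side f = 0
xold = x.*(1-x).*cos(49*pi*x);         % initial iterate = initial error
xnew = zeros(N,1);
for ijac = 1:100
    xnew(1)     = .5*(h^2*b(1)     + xold(2));
    xnew(2:N-1) = .5*(h^2*b(2:N-1) + xold(1:N-2) + xold(3:N));
    xnew(N)     = .5*(h^2*b(N)     + xold(N-1));
    xold = xnew;
end
norm(xold, inf)                        % l-infinity norm of the final error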

61 Initial Error as a Function of x

[Figure: plot of the initial error against x.]

62 Final Error as a Function of x

[Figure: plot of the final error against x.]

63 Convergence History

[Figure: the $l^\infty$ error against the iteration count.]

64 Eigenvalues and Eigenvectors

Theorem: $A$ is symmetric positive definite. The eigenvalues are
\[
\lambda_n = \frac{2}{h^2} \left( 1 - \cos(n \pi h) \right) = n^2 \pi^2 + O(h^2).
\]
The eigenvectors $u^n = (\xi_1^n, \dots, \xi_N^n)^T$ are given by
\[
\xi_i^n = \sqrt{2h} \, \sin(n i \pi h),
\]
normalized so that $\|u^n\|_2 = 1$.

65 Comments and Proof

The eigenvalues agree with those of the continuous problem to second order.
$\kappa(A) = \lambda_N / \lambda_1 = O(N^2) = O(h^{-2})$.
$\xi_i^n = \sqrt{h} \, u_n(x_i)$: the discrete eigenvectors sample the continuous eigenfunctions.
Proof: Note that $\xi_0^n = \xi_{N+1}^n = 0$ and
\[
-\xi_{i-1}^n + 2 \xi_i^n - \xi_{i+1}^n
= \sqrt{2h} \left( -\sin(n(i-1)\pi h) + 2 \sin(n i \pi h) - \sin(n(i+1)\pi h) \right).
\]

66 End of Proof

Set $x = n i \pi h$ and $y = n \pi h$. Use the trig identities
\[
\sin(x \pm y) = \sin(x)\cos(y) \pm \cos(x)\sin(y)
\]
to get
\[
-\xi_{i-1}^n + 2 \xi_i^n - \xi_{i+1}^n
= \sqrt{2h} \left( -\sin(x - y) + 2 \sin(x) - \sin(x + y) \right)
= 2 (1 - \cos(y)) \, \sqrt{2h} \sin(x)
= h^2 \lambda_n \xi_i^n,
\]
so $A u^n = \lambda_n u^n$, as asserted.

67 Jacobi Does Poorly for Poisson

If you apply Jacobi to Poisson's equation, the iteration matrix is
\[
M = -D^{-1} (L + U) = I - D^{-1} (D + L + U) = I - D^{-1} A,
\]
as we have seen. For Poisson, $D = (2/h^2) I$, so
\[
M = I - D^{-1} A = I - (h^2/2) A.
\]
The eigenvalues of $M$ are $\mu_n = 1 - (h^2/2) \lambda_n$. So
\[
\rho(M) = 1 - O(h^2),
\]
which is very bad. The performance gets worse as the mesh is refined!

68 Observations

Jacobi (and GS, SOR, ...) are not scalable: the number of iterations needed to reduce the error by a given amount depends on the grid. Fixing this for PDE problems requires a different approach.
You can solve the 1D problem in O(N) time with a tridiagonal solver, but direct methods become harder to use for 2D and 3D problems on complex geometries with unstructured grids.

69 The Poisson Equation in Two Dimensions

Equation:
\[
-u_{xx} - u_{yy} = f(x, y) \quad \text{for } 0 < x, y < 1.
\]
Boundary conditions:
\[
u(0, y) = u(x, 0) = u(1, y) = u(x, 1) = 0.
\]
Similar properties to 1D. Physical grid: $(x_i, x_j)$, $x_i = ih$. Begin with a two-dimensional matrix of unknowns $u_{ij} \approx u(x_i, x_j)$. One must order the unknowns (i.e., the grid points) to prepare for a packaged linear solver.

70 Difference Approximations

\[
-u_{xx} \approx -\frac{1}{h^2} \left( u(x - h, y) - 2 u(x, y) + u(x + h, y) \right),
\]
\[
-u_{yy} \approx -\frac{1}{h^2} \left( u(x, y - h) - 2 u(x, y) + u(x, y + h) \right),
\]
which leads to ...

71 Discrete 2D Poisson, Version 1

\[
\frac{1}{h^2} \left( -U_{i-1,j} - U_{i,j-1} + 4 U_{ij} - U_{i+1,j} - U_{i,j+1} \right) = f_{ij} \equiv f(x_i, x_j).
\]
Jacobi, Gauss-Seidel, ... are still easy. Here's GS:

for i = 1:N do
  for j = 1:N do
    U_ij ← (1/4)(h^2 f_ij + U_{i-1,j} + U_{i,j-1} + U_{i+1,j} + U_{i,j+1})
  end for
end for

So how did I order the unknowns?
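
In MATLAB it is convenient to store the iterate on the grid with its zero boundary, as an (N+2) x (N+2) array. A sketch of one Gauss-Seidel sweep (U and the array F of values f(x_i, x_j) are assumed given, with U zero on the boundary rows and columns):

for i = 2:N+1
    for j = 2:N+1
        U(i,j) = ( h^2*F(i,j) + U(i-1,j) + U(i,j-1) + U(i+1,j) + U(i,j+1) )/4;
    end
end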

72 Ordering the Unknowns

Number the grid points row by row:
the first row of the grid gets numbers $1, 2, \dots, N$;
the second row gets $N + 1, N + 2, \dots, 2N$;
and so on, until the last row gets $N^2 - N + 1, \dots, N^2$.
This is the natural (lexicographic) ordering.

73 Creating a Matrix-Vector Representation

Define $u = (\xi_1, \dots, \xi_{N^2})^T \in \mathbb{R}^{N^2}$ by
\[
\xi_{N(i-1)+j} = U_{i,j} \quad \text{and} \quad \beta_{N(i-1)+j} = f_{i,j}.
\]
The matrix representation is $Au = b$, where ...

74 Matrix Laplacian in 2D: I

\[
A = \frac{1}{h^2}
\begin{pmatrix}
T & -I & 0 & \dots & 0 \\
-I & T & -I & \ddots & \vdots \\
0 & \ddots & \ddots & \ddots & 0 \\
\vdots & \ddots & -I & T & -I \\
0 & \dots & 0 & -I & T
\end{pmatrix},
\]
where $I$ is the $N \times N$ identity matrix and $T$ is the $N \times N$ tridiagonal matrix ...

75 Discrete Laplacian in 2D: II

\[
T = \begin{pmatrix}
4 & -1 & 0 & \dots & 0 \\
-1 & 4 & -1 & \ddots & \vdots \\
0 & \ddots & \ddots & \ddots & 0 \\
\vdots & \ddots & -1 & 4 & -1 \\
0 & \dots & 0 & -1 & 4
\end{pmatrix}.
\]
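
With this ordering, the 2D matrix can be assembled in MATLAB from the 1D pieces with Kronecker products. A sketch (N assumed given):

h  = 1/(N+1);
e  = ones(N,1);
T1 = spdiags([-e 2*e -e], -1:1, N, N);                   % h^2 times the 1D operator
A  = (kron(speye(N), T1) + kron(T1, speye(N))) / h^2;    % sparse N^2 x N^2 2D Laplacian

The diagonal blocks come out as T and the off-diagonal blocks as -I, matching the displays above.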

76 Mapping the 2D Vector to/from a 1D Vector

Use the MATLAB reshape command. Example with N = 3:

u2d = [1 2 3; 4 5 6; 7 8 9];
u1d = reshape(u2d, N*N, 1)    % gives (1, 4, 7, 2, 5, 8, 3, 6, 9)^T
u2d = reshape(u1d, N, N)      % recovers the 2D array

This means you can do things on the physical grid and still give solvers linear vectors when they need them.

77 Exercise: Richardson Iteration

Use the trapezoid rule to discretize the integral equation
\[
u(x) - \int_0^1 \sin(x y) u(y) \, dy = \cos(x).
\]
If you write your discrete equation as $u - Mu = b$, prove that $\rho(M) < 1/2$. Write a MATLAB code to demonstrate that the convergence is independent of the mesh width. (A sketch of the discretization is below.)
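
A possible starting point for the discretization (a sketch under the assumption that the kernel and right side are as written above; n is the number of grid points):

n = 101;
y = linspace(0,1,n)';                          % grid on [0,1]
w = ones(n,1)/(n-1); w([1 n]) = w([1 n])/2;    % trapezoid rule weights
M = sin(y*y') * diag(w);                       % M(i,j) = sin(x_i*y_j)*w_j
b = cos(y);
u = (eye(n) - M) \ b;                          % direct solve, to compare Richardson against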

78 Exercise: Poisson Equation

Compute the eigenvalues and eigenvectors for the 2D discrete Poisson equation.
Encode the 2D Laplacian as a MATLAB sparse matrix and use the eigs command to find a few eigenvectors and eigenvalues to verify your work in the previous part.
Solve the 1D and 2D Poisson equations with Jacobi and Gauss-Seidel with zero boundary data and $f \equiv 1$ as the right side. Try more interesting right sides $f(x)$ and $f(x, y)$.
