Steady-State Optimization Lecture 1: A Brief Review on Numerical Linear Algebra Methods


1 Steady-State Optimization Lecture 1: A Brief Review on Numerical Linear Algebra Methods Dr. Abebe Geletu Ilmenau University of Technology Department of Simulation and Optimal Processes (SOP) Summer Semester 2012/13

2 2.1. Numerical Linear Algebra - review Vectors: x = (x_1, ..., x_n)^T is a (column) vector with n components; Matlab: length(x) determines the length of the vector x; the transpose of x is the row vector x^T = (x_1, ..., x_n); Matlab: x'. Norms of a vector: Euclidean norm: ||x||_2 = sqrt(x_1^2 + x_2^2 + ... + x_n^2); Matlab>> norm(x,2) or norm(x). A vector x is a unit vector if ||x||_2 = 1. Maximum norm: ||x||_inf = max_{1 ≤ k ≤ n} |x_k|; Matlab>> norm(x,inf).

3 Operations with vectors the result of multiplying a vector x by a scalar α is the vector αx = (αx_1, αx_2, ..., αx_n); Matlab>> alpha*x. For two vectors x and y of equal length: the sum of x and y: x + y = (x_1 + y_1, ..., x_n + y_n); Matlab>> x+y. the scalar product of x and y, denoted by <x, y> or x^T y or x·y, is the scalar number x^T y = x_1 y_1 + ... + x_n y_n; Matlab>> x'*y. Note that x^T x = ||x||_2^2. the componentwise product of x and y is the vector (x_1 y_1, ..., x_n y_n); Matlab>> x.*y

4 Linear Combination, Linear Independence and Basis A vector x is a linear combination of vectors v_1, v_2, ..., v_m in R^n if there are scalars α_1, α_2, ..., α_m such that x = α_1 v_1 + ... + α_m v_m. A set of vectors v_1, v_2, ..., v_m in R^n is linearly dependent if there are scalars α_1, α_2, ..., α_m, not all of them equal to zero, such that α_1 v_1 + α_2 v_2 + ... + α_m v_m = 0. (1) If equation (1) holds true only for α_1 = α_2 = ... = α_m = 0, then these vectors are said to be linearly independent. The standard unit vectors e_i = (0, ..., 1, ..., 0), i = 1, ..., n, are linearly independent. A set of n linearly independent vectors {v_1, v_2, ..., v_n} is a basis of R^n: any vector x in R^n can then be written as a linear combination of these vectors, that is, x = α_1 v_1 + ... + α_n v_n.

5 Orthogonal vectors, Orthonormal vectors and Orthogonalization x and y are orthogonal if x^T y = 0, written x ⊥ y; non-zero orthogonal vectors are linearly independent. A set of vectors {v_1, v_2, ..., v_m} in R^n is orthonormal if v_i^T v_j = 0 for i ≠ j and ||v_i|| = 1, i = 1, ..., m. The standard unit vectors e_i = (0, ..., 1, ..., 0), i = 1, ..., n, form an orthonormal set.

6 The Gram-Schmidt orthonormalization algorithm Given a set {v_1, ..., v_m} of linearly independent vectors, construct a set of orthonormal vectors {q_1, ..., q_m}.
Algorithm 1: The Modified Gram-Schmidt Algorithm
1: r_11 ← ||v_1||;
2: q_1 ← v_1 / r_11;
3: for j = 2 : m do
4:   q_j ← v_j;
5:   for i = 1 : (j-1) do
6:     r_ij ← (q_i)^T q_j;
7:     q_j ← q_j - r_ij q_i;
8:   end for
9:   r_jj ← ||q_j||;
10:  q_j ← q_j / r_jj;
11: end for

7 A Matlab implementation
function Q=ModGrammSchmidt(V)
  [n,m]=size(V);
  R=zeros(m,m); Q=zeros(n,m);
  R(1,1)=norm(V(:,1));
  Q(:,1)=V(:,1)/R(1,1);
  for j=2:m
    Q(:,j)=V(:,j);
    for i=1:(j-1)
      R(i,j)=Q(:,i)'*Q(:,j);        % modified GS: project out q_i from the current working vector
      Q(:,j)=Q(:,j)-R(i,j)*Q(:,i);
    end
    R(j,j)=norm(Q(:,j));
    Q(:,j)=Q(:,j)/R(j,j);
  end
end
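As a quick sanity check (an illustration added here, not part of the original slides), the function can be applied to a few random, almost surely linearly independent vectors; Q'*Q should then be close to the identity:

% Illustrative usage of ModGrammSchmidt (assumes the function above is on the path)
V = rand(5,3);                  % 3 random vectors in R^5
Q = ModGrammSchmidt(V);
disp(norm(Q'*Q - eye(3)))       % should be of the order of machine precision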

8 Matrices A = (a_ij) is a matrix with m rows and n columns,

A = [ a_11 a_12 ... a_1n
      a_21 a_22 ... a_2n
       .    .        .
      a_m1 a_m2 ... a_mn ].

the transpose A^T is a matrix with n rows and m columns; Matlab>> A'. the rank of A is the number of linearly independent rows (or columns) of A; Matlab>> rank(A). Properties: rank(A) ≤ min{m, n}; if rank(A) = m, then A has full row rank; if rank(A) = n, then A has full column rank; if y ∈ R^m and x ∈ R^n, then the matrix A = y x^T has rank(A) = 1.

9 Matrix norms Let A ∈ R^{m×n}, i.e. A is an m by n matrix. Then Frobenius norm of A: ||A||_F = ( Σ_{i=1}^m Σ_{j=1}^n |a_ij|^2 )^{1/2}; Matlab: norm(A,'fro'). Maximum norm of A: ||A||_max = max_{1≤i≤m, 1≤j≤n} |a_ij|. (Note: Matlab's norm(A,inf) returns the maximum absolute row sum, which is a different matrix norm.) Induced norm of A: ||A||_2 = max_{x≠0} ||Ax||_2 / ||x||_2; Matlab: norm(A,2) or norm(A). For any matrix A the following holds true: ||A||_2 ≤ ||A||_F.

10 Operations with matrices the sum of two matrices A = (a_ij) and B = (b_ij) of equal size: A + B = (a_ij + b_ij); Matlab>> C=A+B. the componentwise product of two matrices A = (a_ij) and B = (b_ij) of equal size: A .* B = (a_ij b_ij); Matlab>> C=A.*B. the product of a matrix A = (a_ij) of size m×n and a matrix B = (b_jk) of size n×p is a matrix C = (c_ik) of size m×p, C = AB:
Algorithm 2: Matrix Multiplication
1: for i = 1 : m do
2:   for k = 1 : p do
3:     c_ik = Σ_{j=1}^n a_ij b_jk;
4:   end for
5: end for
Matlab>> C=A*B. the product of a matrix A of size m×n and a vector x of length n is a vector b of length m; Matlab>> b=A*x.

11 Square matrices and some properties an m by n matrix A is a square matrix if m = n; For a square matrix A, d = (a_11, a_22, ..., a_nn) is the vector of diagonal elements; Matlab>> d=diag(A). the n by n matrix with all its diagonal elements equal to 1 and all other elements equal to 0 is the identity matrix I_n. Note that ||I_n||_2 = ||I_n||_max = 1, while ||I_n||_F = sqrt(n). Matlab>> I=eye(n). an n×n matrix A is invertible if there is a matrix B such that AB = BA = I_n; B is called the inverse of A, written B = A^{-1}; Matlab>> B=inv(A). if an n×n matrix A is invertible, then the column vectors (and the row vectors) of A are linearly independent; Ax = 0 iff x = 0; rank(A) = n; det(A) ≠ 0; Matlab>> det(A).

12 Eigenvalues and eigenvectors A (real or complex) number λ is an eigenvalue of A if Av = λv for some non-zero vector v. In this case v is called an eigenvector of A. An eigenvalue can be a real or a complex number; Matlab>> [V,D]=eig(A). One use of eigenvalues: stability analysis of linear and nonlinear dynamic systems. λ is an eigenvalue of an n×n square matrix A iff det(λI - A) = 0. p(λ) = det(λI - A) is called the characteristic polynomial of A; it is a polynomial of degree n. Example: for any 2×2 matrix A, p(λ) = det(λI - A) = λ^2 - trace(A) λ + det(A).
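A small Matlab illustration (added here) of how eig and the characteristic polynomial fit together; poly(A) returns the coefficients of det(λI - A):

A = [2 1; 1 3];            % a small symmetric matrix, chosen only for illustration
[V,D] = eig(A);            % columns of V are eigenvectors, diag(D) the eigenvalues
lambda = diag(D)
p = poly(A)                % coefficients of the characteristic polynomial det(lambda*I - A)
roots(p)                   % its roots coincide with the eigenvalues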

13 Spectrum and spectral radius For a matrix A, the set σ(A) = {λ : λ is an eigenvalue of A} is called the spectrum of the matrix A. The number ρ(A) = max{|λ| : λ ∈ σ(A)} is called the spectral radius of A. Example: If A = diag(-4, 3), then σ(A) = {-4, 3} and ρ(A) = 4. For A ∈ R^{n×n} it holds that ||A||_2 = sqrt(ρ(A A^T)). The convergence of iterative algorithms for the solution of a system of equations Ax = b depends on the spectral radius of the iteration matrix.

14 Symmetric, Semi-definite, Orthogonal Matrices A square matrix A is symmetric if A = A^T. All eigenvalues of a symmetric matrix are real numbers. An n×n symmetric matrix A is positive semi-definite if x^T A x ≥ 0 for all x ∈ R^n. All eigenvalues of a positive semi-definite matrix are non-negative real numbers. For any matrix B, the matrix A = B B^T is symmetric and positive semi-definite. An n×n symmetric matrix A is positive definite if x^T A x > 0 for all x ∈ R^n, x ≠ 0. All eigenvalues of a positive definite matrix are positive real numbers. A positive definite matrix is invertible. A square matrix Q is orthogonal if Q^T Q = Q Q^T = I. An orthogonal matrix Q is invertible, with Q^{-1} = Q^T and ||Q||_2 = 1.

15 Singular Value Decomposition (SVD) Let A be an m×n matrix with rank(A) = r. Then A can be expressed as A = U Σ V^T, where U is an m×m and V an n×n orthogonal matrix, and Σ is an m×n diagonal matrix of the form

Σ = [ σ_1
          σ_2
              ...
                  σ_r
                      0 ],

where σ_1 ≥ σ_2 ≥ ... ≥ σ_r > 0; the numbers σ_1, σ_2, ..., σ_r are called the singular values of A. Matlab>> [U,S,V] = svd(A).

16 SVD...Image Compression In the SVD of A, write U = [u_1, u_2, ..., u_m] ∈ R^{m×m} and V = [v_1, v_2, ..., v_n] ∈ R^{n×n}, so that A = [u_1, u_2, ..., u_m] Σ [v_1, v_2, ..., v_n]^T. Then A = σ_1 u_1 v_1^T + σ_2 u_2 v_2^T + ... + σ_r u_r v_r^T.

17 SVD...Image Compression... The gray-scale values of a digital image are represented by a matrix A. The sum A = σ_1 u_1 v_1^T + σ_2 u_2 v_2^T + ... + σ_r u_r v_r^T is a weighted sum with the decreasing singular values as decreasing weights, since σ_1 ≥ σ_2 ≥ ... ≥ σ_r. Thus, dropping some of the terms with smaller weights (singular values) does not significantly affect the image quality but saves storage space: image compression. (More on this in the Tutorials!!) Note that: For a symmetric n×n matrix A the singular values are the absolute values of the eigenvalues of A. If, in addition, A is positive semi-definite, then A = U Σ V^T with U = V, the columns u_1, u_2, ..., u_n of U are eigenvectors of A, and the singular values σ_1 ≥ σ_2 ≥ ... ≥ σ_r ≥ 0 are the (non-negative) eigenvalues of A.
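The rank-k approximation described above can be sketched in a few lines of Matlab (an illustration added here, not taken from the tutorials; the file name image.png and the choice k = 50 are placeholders, and the file is assumed to contain a 2-D gray-scale image):

% Rank-k approximation of a gray-scale image via the SVD (illustrative sketch)
A = double(imread('image.png'));      % hypothetical gray-scale (2-D) image file
[U,S,V] = svd(A);
k  = 50;                              % number of singular values to keep (placeholder)
Ak = U(:,1:k)*S(1:k,1:k)*V(:,1:k)';   % A_k = sum_{i=1}^k sigma_i u_i v_i^T
imagesc(Ak); colormap(gray); axis image   % compressed image: ~ k*(m+n+1) numbers stored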

18 Condition Number, well/ill-conditioned matrices For a regular (nonsingular, invertible) matrix A ∈ R^{n×n}, the number κ(A) = ||A|| ||A^{-1}|| is called the condition number of A; Matlab>> cond(A). If A is a nonsingular matrix with singular values σ_1 ≥ σ_2 ≥ ... ≥ σ_n > 0, then κ_2(A) = σ_1 / σ_n = σ_max(A) / σ_min(A). A matrix A is well-conditioned if κ(A) is not too large; otherwise it is ill-conditioned. Example: see the ill-conditioned 2×2 system on the next slide.

19 Condition Number, well/ill-conditioned matrices... Example: impact of an ill-conditioned matrix. Consider solving the equation Ax = b for a 2×2 matrix A with a very large condition number κ(A). Now suppose b has a small change (perturbation, noise) Δb, so that the right-hand side becomes b + Δb. The solution of Ax = b + Δb turns out to be completely different from the solution of Ax = b. A small inaccuracy in the problem data may lead to a totally different result from the actual one.
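A reproducible Matlab illustration of this effect (added here; the Hilbert matrix hilb(n) is a standard ill-conditioned example, not the matrix used on the slide):

% Effect of ill-conditioning: perturb b slightly and compare the two solutions
n = 12;
A = hilb(n);                  % Hilbert matrix, notoriously ill-conditioned
fprintf('cond(A) = %.2e\n', cond(A));
b  = A*ones(n,1);             % right-hand side whose exact solution is ones(n,1)
db = 1e-10*randn(n,1);        % tiny perturbation of the right-hand side
x1 = A\b;
x2 = A\(b+db);
fprintf('relative change of b: %.2e\n', norm(db)/norm(b));
fprintf('relative change of x: %.2e\n', norm(x2-x1)/norm(x1));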

20 Solution Methods for Systems of Linear Equations Consider a system of algebraic equations Ax = b, where A is a matrix of size m×n and b is a vector of length m. Algorithms to solve Ax = b depend on: the type of the matrix A: e.g. square or non-square (i.e. Ax = b is either over- or under-determined); the properties of A: symmetric, nonsymmetric, positive definite, regular (invertible), well-conditioned, ill-conditioned, etc.; the structure of the matrix A: dense matrix, sparse matrix (more zero than non-zero entries), banded matrix, block-structured matrix, etc.; the size of the matrix A: small to medium-sized matrix, very large matrix with a complicated structure, etc.

21 Solution Methods for Systems of Linear Equations.. Algorithms need to exploit the properties and structures of the matrix A. Specific algorithms are frequently preferred for specific applications. Do not do this: x = A^{-1} b, i.e. x=inv(A)*b! Unless A^{-1} is already available or given to you for free. Otherwise, A^{-1} is usually quite expensive to compute, or A^{-1} may not be available at all.
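A short sketch (added for illustration) comparing the recommended backslash solve with the discouraged explicit inverse on a randomly generated, well-conditioned test matrix:

% Prefer the backslash operator over forming inv(A) explicitly
n = 2000;
A = randn(n) + n*eye(n);          % a well-conditioned test matrix (illustrative)
b = randn(n,1);
tic; x1 = A\b;       t1 = toc;    % solves Ax = b via a factorization of A
tic; x2 = inv(A)*b;  t2 = toc;    % forms the full inverse first: slower and less accurate
fprintf('backslash: %.3fs, inv: %.3fs\n', t1, t2);
fprintf('residuals: %.2e vs %.2e\n', norm(A*x1-b), norm(A*x2-b));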

22 Solution Methods for Systems of Linear Equations.. (Simpler Instances) (a) A is a diagonal matrix:

[ a_11                ] [ x_1 ]   [ b_1 ]
[      a_22           ] [ x_2 ] = [ b_2 ]
[            ...      ] [ ... ]   [ ... ]
[                a_nn ] [ x_n ]   [ b_n ]

If all a_kk ≠ 0, then x_k = b_k / a_kk, k = 1, ..., n. If a_kk = 0 for some index k (while b_k ≠ 0), then the system has no solution.

23 ... Simpler Instances (b) A is an upper-triangular matrix (a_ij = 0 for i > j):

[ a_11 a_12 ...           a_1n ] [ x_1 ]   [ b_1 ]
[      a_22 ...           a_2n ] [ x_2 ] = [ b_2 ]
[            ...               ] [ ... ]   [ ... ]
[      a_{n-1,n-1}  a_{n-1,n}  ] [     ]   [     ]
[                   a_nn       ] [ x_n ]   [ b_n ]

Solution by backward substitution:
x_n = b_n / a_nn;
x_{n-1} = (b_{n-1} - a_{n-1,n} x_n) / a_{n-1,n-1};
and in general x_k = (b_k - Σ_{i=k+1}^n a_ki x_i) / a_kk, k = n-1, ..., 1.
If a_kk = 0 for some index k, then the system has no unique solution.
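A minimal back-substitution routine in Matlab, added here as a direct transcription of the formula above (the function name backsub is chosen for this sketch and does not appear on the slides):

function x = backsub(U, b)
% Solve U*x = b for an upper-triangular matrix U by backward substitution
n = length(b);
x = zeros(n,1);
x(n) = b(n)/U(n,n);
for k = n-1:-1:1
    x(k) = (b(k) - U(k,k+1:n)*x(k+1:n)) / U(k,k);
end
end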

24 ...Simpler Instances If A is a lower-triangular matrix, use forward substitution. (c) A is an orthogonal matrix, so that A A^T = A^T A = I_n. Thus Ax = b => A^T A x = A^T b => I_n x = A^T b. Hence, the solution of Ax = b is given by x = A^T b. In practical applications A may have none of the above simpler structures.

25 Solution methods for systems of linear equations... The choice of an algorithm for the solution of Ax = b depends on whether: A is symmetric or non-symmetric; A is positive definite; A is explicitly available; A is a square matrix; A is well-conditioned or ill-conditioned; a suitable preconditioner is available for A. In general, there are two classes of algorithms: I. Direct Methods and II. Iterative Methods. I. Direct Methods factorize A into a product of matrices with simpler structures (diagonal, triangular, orthogonal matrices, etc.).

26 ... Direct Methods Known matrix factorization methods:

factorization   type            type of A
LU              A = LU          square; symmetric or nonsymmetric
Cholesky        A = L L^T       symmetric positive definite
LDL^T           A = L D L^T     symmetric indefinite
QR              A = QR          A ∈ R^{m×n}, m ≥ n, rank(A) = n
SVD             A = U Σ V^T     A ∈ R^{m×n}

in LU, Cholesky and LDL^T: L is lower triangular and D is diagonal. in QR: Q is orthogonal and R is upper triangular; frequently used in least-squares problems. in SVD: U is an m×m and V an n×n orthogonal matrix, and Σ is m×n with Σ = diag(σ_1, σ_2, ..., σ_n), σ_1 ≥ σ_2 ≥ ... ≥ σ_n ≥ 0.

27 ... Direct Methods Example: solution through LU factorization. Factor A = LU, with L lower triangular and U upper triangular. Then Ax = b => (LU)x = b => L(Ux) = b, (2) and with y := Ux the system splits into the two triangular systems Ly = b and Ux = y.

28 ... Direct Methods...
Algorithm 3: Solution of Ax = b through LU factorization
1: Set y = Ux;
2: Use forward substitution to solve Ly = b for y;
3: Use back substitution to solve Ux = y for x.
Algorithm 4: Solution of Ax = b through QR factorization
1: Factor A = QR, so that QRx = b;
2: Multiply both sides of QRx = b by Q^T to obtain Rx = Q^T b;
3: Use back substitution to solve Rx = d for x, with d = Q^T b.
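A Matlab sketch of Algorithms 3 and 4 using the built-in factorizations (an illustration added here; note that Matlab's lu returns a row permutation P, which the description above omits, and the triangular solves are done with the backslash operator, which exploits triangularity):

% Solve Ax = b via LU and via QR (illustrative sketch)
A = magic(4) + 4*eye(4);  b = ones(4,1);

[L,U,P] = lu(A);          % P*A = L*U
y    = L\(P*b);           % forward substitution
x_lu = U\y;               % backward substitution

[Q,R] = qr(A);            % A = Q*R
x_qr  = R\(Q'*b);         % Rx = Q'b by backward substitution

disp([norm(A*x_lu-b), norm(A*x_qr-b)])   % both residuals ~ machine precision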

29 ... Direct Methods Advantages: high accuracy of solutions; suitable for small to medium-scale systems of equations; matrix partitioning techniques can be applied; easy to parallelize. Disadvantages: computationally expensive; inefficient for large and sparse systems; may cause a fill-in effect when A is a sparse matrix. Matlab matrix factorization functions: [L,U]=lu(A), R=chol(A), [Q,R]=qr(A), [U,S,V]=svd(A), [L,D]=ldl(A) (ldl available from MATLAB Version 7.3 (R2006b)).

30 II. Iterative Methods
Algorithm 5: Principle of Iterative Algorithms
1: Step 0: Select an initial iterate x^(0);
2: Step k: Determine x^(k+1) from x^(k), k = 0, 1, 2, ...;
3: Stop: if a termination criterion is satisfied.
Commonly used termination criterion, for a given ε: relative residual norm ||b - Ax^(k)|| / ||b|| ≤ ε.
Two groups of iterative methods: (A) Stationary Iterative Methods - Matrix Splitting Methods. (B) Dynamic Iterative Methods - Krylov Subspace Methods.

31 A. Stationary Iterative Algorithms... Also known as matrix splitting methods or fixed-point iterative methods.
Algorithm 6: Basic Algorithm
1: Step 0: Start from x^(0);
2: Step k: x^(k+1) = B x^(k) + d, k = 0, 1, 2, ...;
3: Stop: if a termination criterion is satisfied.
Well-known algorithms: Jacobi, Gauss-Seidel, SOR, etc. (SOR = Successive Over-Relaxation). Jacobi, Gauss-Seidel and SOR methods: given a square matrix A, split A as A = D + L + U.

32 Stationary Iterative Algorithms Here L is the strictly lower triangular part of A (the entries a_ij with i > j), U is the strictly upper triangular part (the entries a_ij with i < j), and D = diag(a_11, a_22, ..., a_nn) is the diagonal part, so that A = D + L + U.

33 Stationary Iterative Algorithms... Jacobi method: x^(k+1) = B x^(k) + d, where B = -D^{-1}(L + U) and d = D^{-1} b. Gauss-Seidel: x^(k+1) = B x^(k) + d, where B = -(D + L)^{-1} U and d = (D + L)^{-1} b. SOR: x^(k+1) = B x^(k) + d, with B = (D + ωL)^{-1} [(1 - ω)D - ωU] and d = ω (D + ωL)^{-1} b, where ω > 0 is the relaxation factor (a convergence tuning factor). SOR: good values of ω satisfy 0 < ω < 2; 0 < ω < 1 is under-relaxation, 1 < ω < 2 is over-relaxation.
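A compact Matlab sketch of the Jacobi iteration in the splitting form above (added for illustration; the function name jacobi and the arguments tol and maxit are chosen here):

function x = jacobi(A, b, x, tol, maxit)
% Jacobi iteration x^(k+1) = -D^{-1}(L+U) x^(k) + D^{-1} b   (illustrative sketch)
D  = diag(diag(A));
LU = A - D;                        % L + U: everything off the diagonal
for k = 1:maxit
    x = D \ (b - LU*x);            % one Jacobi sweep
    if norm(b - A*x)/norm(b) <= tol
        break;                     % relative residual small enough
    end
end
end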

34 Stationary Iterative Algorithms... Convergence properties The iteration converges from any start point x^(0) iff ρ(B) < 1, where B is the iteration matrix. A strictly diagonally dominant => Jacobi and Gauss-Seidel converge from any start point x^(0). A symmetric positive definite => Gauss-Seidel and SOR (0 < ω < 2) converge from any start point x^(0). N.B.: For an ill-conditioned matrix A convergence can be extremely slow, even when it is guaranteed. Advantages: global convergence whenever ρ(B) < 1. In general, SOR with a well-chosen ω ∈ (0, 2) converges faster than the other two methods.

35 Stationary Iterative Algorithms - Limitations Disadvantages: applicable only to smaller problems, or to problems with a well-conditioned or strictly diagonally dominant matrix A; the iteration matrix B is not (dynamically) adapted to the current iterate; may not converge if A is ill-conditioned. Serious issues: What if A is ill-conditioned, i.e. ρ(A) >> 1 or cond(A) >> 1? What if A is not symmetric? What if A is not definite? What if A is not a square matrix? What if A is a very large and sparse matrix? How to exploit the structure of the matrix A (sparsity, band structure, block structure, etc.)?

36 B. Krylov Subspace Methods - Iterative Methods for Sparse Linear Systems A matrix A is called sparse if most of its elements are equal to 0. Requirements: to solve Ax = b for a very large and sparse A, use methods that: do not alter the structure of the matrix A (i.e. avoid methods that cause fill-in, which is a disadvantage of the Gauss elimination method); require limited memory space, i.e. storage of not more than a few vectors of length n = length(x). Basic principle: begin with an initial vector x^(0); generate a sequence of iterates x^(1), x^(2), ..., x^(k-1), x^(k), ...; stop when, at some step k, ||b - Ax^(k)|| is sufficiently small. Question: how to generate iterates x^(k) that satisfy these requirements?

37 B. Iterative Methods for Sparse Linear Systems Efficient (dynamic) iterative solvers: Conjugate Gradient method (CG) - for an SPD matrix A; Generalized Minimal Residual method (GMRES) - for a general matrix A; Bi-Conjugate Gradient Stabilized method (BiCGStab) - for a non-symmetric A (BiCGStab is not discussed here). These are Krylov subspace methods.

38 The Conjugate Gradient method Originally invented by Hestenes & Stiefel (1952). Standard assumption: A is n×n and SPD, b ∈ R^n. Let x* be the solution of Ax = b. Define the quadratic function φ(x) = (1/2) x^T A x - b^T x. Then φ can also be written as φ(x) = φ(x*) + (1/2)(x - x*)^T A (x - x*) ≥ φ(x*) for any x ∈ R^n, since the quadratic term is non-negative for SPD A. Hence φ(x*) is the minimum value of φ(x). The function φ has no descent direction at x*; i.e. ∇φ(x*) = A x* - b = 0. For an SPD matrix A: solving the equation Ax = b is equivalent to solving the optimization problem min_x φ(x).

39 The CG method... Idea of the CG Algorithm: Step 0: choose a start vector x^(0). Step k: set x^(k+1) = x^(k) + α_k d^(k), where the descent direction is d^(k) = -∇φ(x^(k)) = b - A x^(k), and the step length α_k is chosen to minimize the function φ(x^(k) + α d^(k)) w.r.t. α. Hence α_k = [(b - Ax^(k))^T (b - Ax^(k))] / [(b - Ax^(k))^T A (b - Ax^(k))].

40 The CG method... Algorithm The CG Algorithm:
Start with an arbitrary vector x^(0); compute d_0 = r_0 = b - A x^(0).
for k = 0, 1, 2, ... do
  if r_k = 0 then STOP!
  else
    α_k = (r_k^T d_k) / (d_k^T A d_k)
    x^(k+1) = x^(k) + α_k d_k
    r_{k+1} = b - A x^(k+1)
    β_k = -(r_{k+1}^T A d_k) / (d_k^T A d_k)
    d_{k+1} = r_{k+1} + β_k d_k
  end if
end for
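A direct Matlab transcription of the algorithm above (an illustrative sketch added here; the function name cg_sketch and the arguments tol and maxit are chosen for this example, and A is assumed SPD):

function x = cg_sketch(A, b, x, tol, maxit)
% Conjugate Gradient iteration for an SPD matrix A (transcription of the algorithm above)
r = b - A*x;  d = r;
for k = 1:maxit
    if norm(r) <= tol, break; end
    Ad    = A*d;
    alpha = (r'*d) / (d'*Ad);
    x     = x + alpha*d;
    r     = b - A*x;               % residual (could also be updated as r - alpha*Ad)
    beta  = -(r'*Ad) / (d'*Ad);
    d     = r + beta*d;
end
end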

41 The CG method... as a Krylov Subspace Method The iterates x^(k) of the CG algorithm converge to the unique solution of Ax = b in at most n steps. The directions d_0, d_1, ..., d_{n-1} are conjugate to each other w.r.t. A, i.e. d_i^T A d_j = 0 for i ≠ j. The vectors d_0, d_1, ..., d_{n-1} are linearly independent. Span{r_0, r_1, ..., r_k} = Span{r_0, A r_0, ..., A^k r_0}, k = 0, 1, ..., n-1. Span{d_0, d_1, ..., d_k} = Span{r_0, A r_0, ..., A^k r_0}, k = 0, 1, ..., n-1. The subspace K_k(r_0, A) := Span{r_0, A r_0, ..., A^k r_0} is called the Krylov subspace (of R^n) of dimension k+1. Recall x^(k+1) = x^(k) + α_k d_k = x^(0) + Σ_{j=0}^k α_j d_j. Using the above, x^(k+1) ∈ x^(0) + K_k(r_0, A), k = 0, 1, ..., n-1. Thus, CG is a Krylov subspace method. Furthermore, x^(k+1) = x^(0) + [d_0, d_1, ..., d_k] η_k = x^(0) + D_k η_k, where η_k = (α_0, α_1, ..., α_k)^T; the matrix D_k = [d_0, ..., d_k] changes from iteration to iteration.

42 The Generalized Minimal Residual (GMRES) Consider again a system of linear equations Ax = b. Assumption: A may not be symmetric. Given x, the vector r = b - Ax is called the residual vector corresponding to x. If r = 0, then Ax = b and x is a solution of the equation. However, for a large system of equations Ax = b, finding a solution is not trivial. Idea: step by step, minimize the norm of the residual r = b - Ax. Basic algorithm: Step 0: start from a vector x^(0). Step k: determine x^(k) from x^(k-1) such that ||r_k|| = ||b - A x^(k)|| ≤ ||r_{k-1}|| = ||b - A x^(k-1)||. Stop when either r_k = 0 or ||r_k|| is sufficiently small. Questions: how to construct the x^(k)'s at each step with little computational effort, while decreasing the residual norm every time?

43 The Generalized Minimal Residual (GMRES) Idea of the GMRES Method At each iteration k, (A) determine matrices V_k, V_{k+1} and H_k so that A V_k = V_{k+1} H_k, where the columns of V_k = [v_1, v_2, ..., v_k] ∈ R^{n×k} and V_{k+1} = [v_1, v_2, ..., v_k, v_{k+1}] ∈ R^{n×(k+1)} are orthonormal vectors, and

H_k = [ h_11 h_12 ...  h_1k
        h_21 h_22 ...  h_2k
             h_32 ...  h_3k
                  ...
                       h_kk
                       h_{k+1,k} ]

is an upper Hessenberg matrix of size (k+1)×k (h_ij = 0 for i > j+1). (B) Set x^(k) = x^(0) + V_k y^(k), for some y^(k). Question: how to determine the matrices V_k, V_{k+1} and H_k and the vector y^(k)?

44 The Generalized Minimal Residual (GMRES)... (a) To determine V_k, V_{k+1} and H_k, use the Arnoldi Method.
Algorithm 7: The Arnoldi Algorithm
1: Set r_0 = b - A x^(0) ≠ 0, and set v_1 = r_0 / ||r_0||.
2: for j = 1 to k do
3:   for i = 1 to j do
4:     h_ij = v_i^T A v_j
5:   end for
6:   Compute w_j = A v_j - Σ_{i=1}^j h_ij v_i
7:   h_{j+1,j} = ||w_j||
8:   if h_{j+1,j} = 0 then
9:     STOP!
10:  else
11:    v_{j+1} = w_j / h_{j+1,j}
12:  end if
13: end for
Observe that the Arnoldi algorithm uses steps similar to the Gram-Schmidt orthonormalization process.

45 The Generalized Minimal Residual (GMRES)... A Matlab code for the Arnoldi Algorithm
function [V,H]=arnoldi(A,r0,k)
  n=length(A(:,1));
  V=zeros(n,k+1);
  H=zeros(k+1,k);
  V(:,1)=r0/norm(r0);
  for j=1:k
    w=A*V(:,j);
    for i=1:j
      H(i,j)=(V(:,i))'*w;
      w = w - H(i,j)*V(:,i);
    end
    H(j+1,j)=norm(w);
    if (H(j+1,j)==0)
      break;
    else
      V(:,j+1)=w/H(j+1,j);
    end
  end
end
Note that the matrix V returned by the Matlab function is V_{k+1}, while the first k columns of V form the matrix V_k.

46 The Generalized Minimal Residual (GMRES)... Properties of the Arnoldi Algorithm: The expression A V_k = V_{k+1} H_k is the same as A [v_1, v_2, ..., v_k] = [v_1, v_2, ..., v_k, v_{k+1}] H_k, with the (k+1)×k upper Hessenberg matrix H_k from the previous slide. Column by column this reads A v_1 = h_11 v_1 + h_21 v_2, A v_2 = h_12 v_1 + h_22 v_2 + h_32 v_3, and, in general, for a vector v_j, A v_j = h_{1,j} v_1 + h_{2,j} v_2 + ... + h_{j+1,j} v_{j+1}.

47 The Generalized Minimal Residual (GMRES)... (B) To determine y^(k) for the iteration x^(k) = x^(0) + V_k y^(k), observe that

b - A x^(k) = b - A (x^(0) + V_k y^(k))
            = (b - A x^(0)) - A V_k y^(k)
            = r_0 - V_{k+1} H_k y^(k)
            = β v_1 - V_{k+1} H_k y^(k),          where β = ||r_0|| (recall also v_1 = r_0 / ||r_0||)
            = β V_{k+1} e_1 - V_{k+1} H_k y^(k),  where e_1 = (1, 0, ..., 0)^T ∈ R^{k+1}
            = V_{k+1} (β e_1 - H_k y^(k)).

Since the columns of V_{k+1} are orthonormal, it follows that ||r_k|| = ||b - A x^(k)||_2 = ||β e_1 - H_k y^(k)||_2. Hence, to minimize the norm of the residual r_k, the vector y^(k) is determined by solving the least-squares problem min_y ||β e_1 - H_k y||.

48 The Generalized Minimal Residual (GMRES)...Algorithm
Algorithm 8: The GMRES Algorithm
1: Choose an initial iterate x^(0). Set r_0 = b - A x^(0) ≠ 0, β = ||r_0|| and v_1 = (1/β) r_0.
2: for j = 1 to k do
3:   w = A v_j
4:   for i = 1 to j do
5:     h_ij = v_i^T w;  w = w - h_ij v_i
6:   end for
7:   h_{j+1,j} = ||w||
8:   v_{j+1} = w / h_{j+1,j}.
9: end for
10: Define the matrix H_k
11: Solve the problem min_y ||β e_1 - H_k y|| to obtain y^(k)
12: Set x^(k) = x^(0) + V_k y^(k).
The least-squares problem min_y ||β e_1 - H_k y|| involves only the small (k+1)×k Hessenberg matrix H_k and is therefore cheap to solve. Use Givens rotations to transform the Hessenberg matrix H_k into an upper triangular matrix.
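Combining the arnoldi function above with a small least-squares solve gives a bare-bones GMRES(k) step (an illustrative sketch added here; the test matrix is made up for this example, and Matlab's backslash returns the least-squares solution of the rectangular Hessenberg system):

% A bare-bones GMRES(k) step built on the arnoldi function above (illustrative)
n  = 200;
A  = speye(n) + 0.1*sprandn(n, n, 0.05);   % a sparse, well-conditioned test matrix
b  = randn(n, 1);
x0 = zeros(n, 1);
k  = 30;                                   % dimension of the Krylov subspace

r0     = b - A*x0;
beta   = norm(r0);
[V, H] = arnoldi(A, r0, k);                % V is n-by-(k+1), H is (k+1)-by-k
e1     = zeros(k+1, 1);  e1(1) = 1;
y      = H \ (beta*e1);                    % least-squares solution of min ||beta*e1 - H*y||
x      = x0 + V(:, 1:k)*y;
fprintf('relative residual: %.2e\n', norm(b - A*x)/norm(b));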

49 GMRES...as a Krylov Subspace Method Observe in the GMRES Algorithm that: v_1 = r_0 / ||r_0||; hence we have Span{v_1} = Span{r_0} = K_0(r_0, A). v_2 = (1/h_{21}) (A v_1 - (v_1^T A v_1) v_1); this implies Span{v_1, v_2} = Span{r_0, A r_0} = K_1(r_0, A). In general, arguing inductively, assume that Span{v_1, ..., v_j} = K_{j-1}(r_0, A). We want to show that Span{v_1, ..., v_j, v_{j+1}} = K_j(r_0, A). From the GMRES (or Arnoldi) algorithm we have h_{j+1,j} v_{j+1} = A v_j - Σ_{i=1}^j h_{i,j} v_i, i.e. h_{j+1,j} v_{j+1} + Σ_{i=1}^j h_{i,j} v_i = A v_j. By assumption v_j ∈ K_{j-1}(r_0, A); consequently A v_j ∈ A K_{j-1}(r_0, A) ⊂ K_j(r_0, A). This implies Span{v_1, ..., v_j, v_{j+1}} ⊂ K_j(r_0, A). Since both of these subspaces have the same dimension, equality holds true.

50 Some Advantages/Disadvantages of Iterative Methods Advantages: usable for large-scale and sparse linear systems; applicable to systems with matrices of arbitrary structure; require less computer memory. Disadvantages: efficiency depends on the type of problem; difficult to parallelize; require preconditioning of the iteration matrices for convergence.

51 Preconditioning - Introduction The convergence of iterative methods depends on the condition number of the underlying matrices. If the matrix A is ill-conditioned, the solution obtained by an iterative method can be far from the true one. Hence, to improve the performance of an iterative method, it may be necessary to precondition the matrix A. Preconditioning: find a matrix P and transform the system Ax = b into (PA)x = Pb so that the resulting matrix PA is well-conditioned. Requirements: the preconditioner P should be simple to compute; if A is symmetric and positive (semi-)definite, choose a preconditioner P with the same properties.

52 Preconditioning (i) If P is symmetric and positive definite, then using the Cholesky factorization P = L L^T, the preconditioned system can be written as (L^T A L)(L^{-1} x) = L^T b. Define a new unknown y = L^{-1} x and solve the problem (L^T A L) y = L^T b to obtain a solution y; then compute x = L y to get a solution of Ax = b. The matrix L^T A L is symmetric and (for SPD A) positive definite. Since L (L^T A L) L^{-1} = (L L^T) A = P A is a similarity transformation, L^T A L has the same eigenvalues as PA. (ii) If A is an invertible matrix, then A^T can serve as a preconditioner: A^T A x = A^T b. Note that A^T A is a symmetric positive definite matrix.

53 Preconditioning... (iii) A diagonal matrix as a scaling preconditioner. Define P = diag(p_11, p_22, ..., p_nn). If A = (a_ij) with a_ii ≠ 0, i = 1, ..., n, then define p_ii = 1/a_ii. The resulting matrix PA is a scaling of A in which all diagonal elements equal 1. A row-scaling preconditioner: p_ii = 1 / Σ_{j=1}^n |a_ij|, i = 1, ..., n; each row of PA then has absolute row sum equal to 1. (Similarly, a column-scaling preconditioner can be defined.) Additionally, scaling preconditioners can be defined using norms of either the column or the row vectors of A. More preconditioner types: polynomial preconditioners, preconditioners based on matrix splitting, preconditioners based on incomplete LU or Cholesky factorization, etc.

54 Matlab's Krylov Subspace Methods
1 PCG - Preconditioned Conjugate Gradients Method: [x,exitflag] = pcg(A,b,tol,maxit,M1,M2,x0)
2 SYMMLQ - Symmetric LQ Method: [x,exitflag] = symmlq(A,b,tol,maxit,M1,M2,x0)
3 LSQR - LSQR Method: [x,exitflag] = lsqr(A,b,tol,maxit,M1,M2,x0)
4 MINRES - Minimum Residual Method: [x,exitflag] = minres(A,b,tol,maxit,M1,M2,x0)
5 GMRES - Generalized Minimum Residual Method: [x,exitflag] = gmres(A,b,restart,tol,maxit,M1,M2,x0)
6 QMR - Quasi-Minimal Residual Method: [x,exitflag] = qmr(A,b,tol,maxit,M1,M2,x0)
7 BICG - BiConjugate Gradients Method: [x,exitflag] = bicg(A,b,tol,maxit,M1,M2,x0)
8 BICGSTAB - BiConjugate Gradients Stabilized Method: [x,exitflag] = bicgstab(A,b,tol,maxit,M1,M2,x0)
NB: For details read the help for the respective Matlab function, e.g. >> help bicgstab.
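A small usage example (added here) of two of these built-in solvers on a sparse SPD test matrix; gallery('poisson',m) is a standard Matlab test matrix, and the simple diagonal (Jacobi) preconditioner is chosen only for illustration:

m = 30;
A = gallery('poisson', m);          % sparse SPD matrix of size m^2-by-m^2
b = ones(size(A,1), 1);
tol = 1e-8;  maxit = 500;

M = spdiags(diag(A), 0, size(A,1), size(A,1));     % diagonal (Jacobi) preconditioner
[x1, flag1, relres1] = pcg(A, b, tol, maxit, M);
[x2, flag2, relres2] = gmres(A, b, 20, tol, maxit);   % GMRES with restart 20

fprintf('pcg:   flag=%d, relres=%.2e\n', flag1, relres1);
fprintf('gmres: flag=%d, relres=%.2e\n', flag2, relres2);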

55 References
1 J. W. Demmel: Applied Numerical Linear Algebra. SIAM.
2 L. N. Trefethen, D. Bau III: Numerical Linear Algebra. SIAM.
3 T. A. Davis: Direct Methods for Sparse Linear Systems. SIAM.
4 Y. Saad: Iterative Methods for Sparse Linear Systems. SIAM.
5 Y. Saad, M. H. Schultz: GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM J. Sci. Stat. Comput., Vol. 7, No. 3.
6 C. T. Kelley: Iterative Methods for Linear and Nonlinear Equations. SIAM (book and Matlab codes available online).
7 M. Benzi: Preconditioning Techniques for Large Linear Systems: A Survey. Journal of Computational Physics, V. 182.
8 J. Liesen, Z. Strakos: Krylov Subspace Methods: Principles and Analysis. Oxford Univ. Press.
9 V. Simoncini, D. B. Szyld: Recent computational developments in Krylov subspace methods for linear systems (review article). Numer. Linear Algebra Appl., V. 14.
10 Adam Bojanczyk (ed.): Linear Algebra for Signal Processing (workshop proceedings). Springer.
11 R. V. Patel: Numerical Linear Algebra Techniques for Systems and Control. IEEE Press.
12 A. Meister: Numerik linearer Gleichungssysteme: eine Einführung in moderne Verfahren. Vieweg.
13 C. Kanzow: Numerik linearer Gleichungssysteme: direkte und iterative Verfahren. Springer.
14 Y. Saad, H. A. van der Vorst: Iterative solution of linear systems in the 20th century. J. of Comput. and Appl. Math., V. 123, pp. 1-33.
15 R. A. Horn, C. R. Johnson: Topics in Matrix Analysis. Cambridge Univ. Press.
16 R. A. Horn, C. R. Johnson: Matrix Analysis. Cambridge Univ. Press.
17 B. A. Cipra: The Best of the 20th Century - Top 10 Algorithms. SIAM News, Volume 33.
18 J. R. Shewchuk: An Introduction to the Conjugate Gradient Method Without the Agonizing Pain. Technical Report, Carnegie Mellon University, Pittsburgh, PA, USA.
19 H. A. van der Vorst: Bi-CGSTAB: A fast and smoothly converging variant of Bi-CG for the solution of non-symmetric linear systems. SIAM J. Sci. Stat. Comput., V. 13, 1992.

56 Linear Algebra Packages
1 Freely Available Software for Linear Algebra.
2 Software: Linear Algebra.
3 Lapack++.
4 PARDISO.
5 MUMPS (a MUltifrontal Massively Parallel sparse direct Solver).
6 HSL (Harwell Subroutine Library).
