Linear Algebra. Brigitte Bidégaray-Fesquet. MSIAM, September 2015. Univ. Grenoble Alpes, Laboratoire Jean Kuntzmann, Grenoble.


1 Brigitte Bidégaray-Fesquet, Univ. Grenoble Alpes, Laboratoire Jean Kuntzmann, Grenoble. MSIAM, September 2015.

2 Overview
1. Elementary operations, Gram-Schmidt orthonormalization, matrix norm, conditioning, specific matrices, tridiagonalisation, LU and QR factorizations
2. Power iteration algorithm, deflation, Galerkin, Jacobi, QR
3. Direct methods, iterative methods, preconditioning
4. Band storage, sparse storage
5. Cuthill-McKee algorithm

3 Elementary operations on vectors
C^n (resp. R^n): linear space of vectors with n entries in C (resp. R). Generically: F^n, where F is a field.
Linear combination of vectors: w = α u + β v, α, β in C or R.
  for i = 1 to n
    w(i) = alpha*u(i) + beta*v(i)
Scalar product of two vectors: u.v = Σ_{i=1}^{n} u_i v_i
  uv = 0
  for i = 1 to n
    uv = uv + u(i)*v(i)
l2 norm of a vector: ‖u‖_2 = sqrt( Σ_{i=1}^{n} u_i^2 )
  uu = 0
  for i = 1 to n
    uu = uu + u(i)*u(i)
  norm = sqrt(uu)

4 Elementary operations on matrices
M_np(F): linear space of n × p matrices with entries in F.
Linear combination of matrices: C = α A + β B, α, β in F.
  for i = 1 to n
    for j = 1 to p
      C(i,j) = alpha*A(i,j) + beta*B(i,j)
Matrix-vector product: w = A u, w_i = Σ_{j=1}^{p} A_ij u_j
  for i = 1 to n
    wi = 0
    for j = 1 to p
      wi = wi + A(i,j)*u(j)
    w(i) = wi
Matrix-matrix product: C = A B, C_ij = Σ_{k=1}^{p} A_ik B_kj
  for i = 1 to n
    for j = 1 to q
      cij = 0
      for k = 1 to p
        cij = cij + A(i,k)*B(k,j)
      C(i,j) = cij

5 Gram-Schmidt orthonormalization
Let {v_1, ..., v_p} be a free (linearly independent) family of vectors. It generates the vector space E_p of dimension p. We want to construct {e_1, ..., e_p}, an orthonormal basis of E_p.
Gram-Schmidt algorithm
  u_1 = v_1,                                   e_1 = u_1 / ‖u_1‖_2
  u_2 = v_2 - (v_2 . e_1) e_1,                 e_2 = u_2 / ‖u_2‖_2
  ...
  u_p = v_p - Σ_{k=1}^{p-1} (v_p . e_k) e_k,   e_p = u_p / ‖u_p‖_2
(Figure: e_2 is obtained from v_2 by removing the projection (v_2 . e_1) e_1.)
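A minimal NumPy sketch of the algorithm above; the function name and the test vectors are illustrative assumptions, not taken from the slides.

  import numpy as np

  def gram_schmidt(V):
      # orthonormalize the columns of V (assumed linearly independent)
      n, p = V.shape
      E = np.zeros((n, p))
      for k in range(p):
          # u_k = v_k minus its projections on the previous e_j
          u = V[:, k] - E[:, :k] @ (E[:, :k].T @ V[:, k])
          E[:, k] = u / np.linalg.norm(u)
      return E

  V = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
  E = gram_schmidt(V)
  print(np.round(E.T @ E, 12))   # should be the 2x2 identity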

6 Matrix norms
Definition (A, B in M_nn(F), F = C or R, λ in F)
  ‖A‖ ≥ 0, and ‖A‖ = 0 ⟺ A = 0,
  ‖λ A‖ = |λ| ‖A‖,
  ‖A + B‖ ≤ ‖A‖ + ‖B‖   (triangle inequality),
  ‖A B‖ ≤ ‖A‖ ‖B‖       (specific for matrix norms).
Subordinate matrix norms
  ‖A‖_p = max_{x ≠ 0} ‖A x‖_p / ‖x‖_p = max_{‖x‖_p = 1} ‖A x‖_p, x in F^n, where ‖x‖_p = ( Σ_{i=1}^{n} |x_i|^p )^{1/p};
  in particular: ‖A‖_1 = max_j Σ_i |A_ij| and ‖A‖_∞ = max_i Σ_j |A_ij|.
Matrix-vector product estimate
  ‖A‖_p ≥ ‖A x‖_p / ‖x‖_p, and hence ‖A x‖_p ≤ ‖A‖_p ‖x‖_p, for all x in F^n.

7 Matrix conditioning
Definition: Cond(A) = ‖A^{-1}‖ ‖A‖.
Properties: Cond(A) ≥ 1, Cond(A^{-1}) = Cond(A), Cond(α A) = Cond(A).
For the Euclidean norm: Cond_2(A) = |λ_max| / |λ_min|.
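As a quick numerical check, NumPy exposes the 2-norm condition number directly; the nearly singular matrix below is an illustrative assumption, not from the slides.

  import numpy as np

  A = np.array([[1.0, 2.0],
                [2.0, 4.001]])        # nearly singular, hence badly conditioned
  print(np.linalg.cond(A))            # 2-norm condition number (large)
  print(np.linalg.cond(np.eye(2)))    # identity: condition number 1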

8 Conditioning and perturbed systems
Problem: (S) A x = b,   (S_per) (A + δA)(x + δx) = b + δb.
(S_per) - (S): A δx + δA (x + δx) = δb,
  δx = A^{-1} ( δb - δA (x + δx) ),
  ‖δx‖ ≤ ‖A^{-1}‖ ‖δb - δA (x + δx)‖   (for a subordinate matrix norm),
  ‖δx‖ ≤ ‖A^{-1}‖ ( ‖δb‖ + ‖δA‖ ‖x + δx‖ ),
  ‖δx‖ / ‖x + δx‖ ≤ ‖A^{-1}‖ ( ‖δb‖ / ‖x + δx‖ + ‖δA‖ ).
Result
  ‖δx‖ / ‖x + δx‖ ≤ Cond(A) ( ‖δb‖ / (‖A‖ ‖x + δx‖) + ‖δA‖ / ‖A‖ ),
i.e. relative error on x ≤ Cond(A) × (relative error on b + relative error on A).

9 Hermitian, orthogonal...
Transposed matrix: (tA)_ij = A_ji. Adjoint matrix: (A*)_ij = conj(A_ji).
Symmetric matrix: tA = A. Hermitian matrix: A* = A, and hence tA = conj(A).
Orthogonal matrix (in M_nn(R)): tA A = I. Unitary matrix (in M_nn(C)): A* A = I.
Similar matrices ("semblables" in French): A and B are similar if there exists P such that B = P^{-1} A P.

10 Profiles
(Figure: matrix profiles.) Lower triangular, upper triangular, tridiagonal, lower Hessenberg, upper Hessenberg.

11 Householder matrices
Definition: H_v = I - 2 v tv / ‖v‖_2^2.
Properties
  1. H_v is orthogonal.
  2. If v = a - b and ‖a‖_2 = ‖b‖_2, then H_v a = b.
     Indeed tv v = ‖a‖_2^2 - 2 ta b + ‖b‖_2^2 = 2 ‖a‖_2^2 - 2 ta b = 2 ta v = 2 tv a,
     so H_v a = a - 2 v tv a / ‖v‖_2^2 = a - v = b.
Application: let a in K^n; we look for H_v such that H_v a = t(α, 0, ..., 0).
Solution: take b = t(α, 0, ..., 0) with α = ‖a‖_2, and v = a - b. Then H_v a = b.
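A minimal NumPy sketch of this reflection; the helper name and the vector a are illustrative assumptions, not from the slides.

  import numpy as np

  def householder(v):
      # H_v = I - 2 v v^T / ||v||_2^2
      v = v.reshape(-1, 1)
      return np.eye(len(v)) - 2.0 * (v @ v.T) / (v.T @ v)

  # Reflect a onto a multiple of e_1, as in the application above.
  a = np.array([3.0, 4.0, 0.0])
  alpha = np.linalg.norm(a)
  b = np.zeros_like(a); b[0] = alpha
  H = householder(a - b)
  print(H @ a)                    # ~ [5, 0, 0]
  print(np.round(H @ H.T, 12))    # orthogonality check: identity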

12 Householder tridiagonalisation
Aim: A symmetric. Construct a sequence A^(1) = A, ..., A^(n) with A^(n) tridiagonal and A^(n) = H A tH (H orthogonal).
(Figure: profiles of A^(2), A^(3), A^(4), A^(5), ..., A^(n), becoming tridiagonal column after column.)
First step: write A^(1) in block form
  A^(1) = ( A^(1)_11     t(a^(1)_21)
            a^(1)_21     Ã^(1)      )
and transform it with the block-diagonal orthogonal matrix diag(1, H^(1)):
  A^(2) = ( A^(1)_11           t(H^(1) a^(1)_21)
            H^(1) a^(1)_21     H^(1) Ã^(1) tH^(1) ).
Choose H^(1) such that H^(1) a^(1)_21 = t(α, 0, ..., 0)_{n-1} = α (e_1)_{n-1}:
  α = ‖a^(1)_21‖_2,  u_1 = a^(1)_21 - α (e_1)_{n-1},  H^(1) = H_{u_1}.
Complexity: order (2/3) n^3 products.

13 Givens tridiagonalization
Let G_pq(c, s) be the identity matrix except for the 2×2 block ( c  s ; -s  c ) in rows/columns p and q, with c^2 + s^2 = 1.
Apply similarity transformations tG A G to annihilate the entries outside the target profile.
If A is symmetric: leads to a tridiagonal matrix; else: leads to a Hessenberg matrix.
Complexity: order (4/3) n^3 products.

14 Principles of LU factorization
(Figure: A = L U, with L unit lower triangular and U upper triangular.)
Some regular matrices (with non-zero determinant) are not LU-transformable, e.g. ( 0 1 ; 1 1 ) is not.
If it exists, the LU decomposition of A is not unique; it is unique if A is non-singular.
A is non-singular and LU-transformable ⟺ all the determinants of the fundamental principal minors are non-zero (and in this case the decomposition is unique).

15 Doolittle LU factorization principle
It proceeds line by line.
Line 1: A_11 = L_11 U_11, A_12 = L_11 U_12, ..., A_1n = L_11 U_1n; with L_11 = 1 this gives {U_1j}_{j=1,...,n}.
Line 2: A_21 = L_21 U_11 gives L_21; then A_22 = L_21 U_12 + U_22, ..., A_2n = L_21 U_1n + U_2n give {U_2j}_{j=2,...,n}.
Line 3: A_31 = L_31 U_11 gives L_31; A_32 = L_31 U_12 + L_32 U_22 gives L_32; then A_33 = L_31 U_13 + L_32 U_23 + U_33, ..., A_3n = L_31 U_1n + L_32 U_2n + U_3n give {U_3j}_{j=3,...,n}.
...

16 Doolittle LU factorization algorithm
Formulas: L_ij = ( A_ij - Σ_{k=1}^{j-1} L_ik U_kj ) / U_jj,   U_ij = A_ij - Σ_{k=1}^{i-1} L_ik U_kj.
Doolittle algorithm
  for i = 1 to n
    for j = 1 to i-1
      sum = 0
      for k = 1 to j-1
        sum = sum + L(i,k)*U(k,j)
      L(i,j) = (A(i,j) - sum)/U(j,j)
    L(i,i) = 1
    for j = i to n
      sum = 0
      for k = 1 to i-1
        sum = sum + L(i,k)*U(k,j)
      U(i,j) = A(i,j) - sum
Complexity: order n^3 products.

17 Cholesky factorization for a Hermitian matrix
Principle: A = C tC.
Formulas: C_ii = sqrt( A_ii - Σ_{k=1}^{i-1} C_ik C_ik ),   C_ij = ( A_ij - Σ_{k=1}^{j-1} C_ik C_jk ) / C_jj, for j < i.
Cholesky algorithm
  C(1,1) = sqrt(A(1,1))
  for i = 2 to n
    for j = 1 to i-1
      sum = 0
      for k = 1 to j-1
        sum = sum + C(i,k)*C(j,k)
      C(i,j) = (A(i,j) - sum)/C(j,j)
    sum = 0
    for k = 1 to i-1
      sum = sum + C(i,k)*C(i,k)
    C(i,i) = sqrt(A(i,i) - sum)
Complexity: order n^3 products.

18 LU factorization profiles
(Figure: non-zero elements; in blue: A, in red: superposition of L and U.)
The interior of the profile is filled!

19 QR factorization principle
Principle: A = Q R, where Q is orthogonal and R is right (upper) triangular.
G_m ... G_2 G_1 A = R, hence A = tG_1 tG_2 ... tG_m R, with Q = tG_1 tG_2 ... tG_m.
Let G_ij(c, s) be the identity matrix except for the 2×2 block ( c  s ; -s  c ) in rows/columns i and j, with c^2 + s^2 = 1. Then
  (G A)_ij = -s A_jj + c A_ij = 0 for the target entry (i, j), hence
  c = A_jj / sqrt( A_ij^2 + A_jj^2 ),   s = A_ij / sqrt( A_ij^2 + A_jj^2 ).

20 QR factorization algorithm
Algorithm
  R = A
  Q = Id   // size of A
  for i = 2 to n
    for j = 1 to i-1
      root = sqrt( R(i,j)*R(i,j) + R(j,j)*R(j,j) )
      if root != 0
        c = R(j,j)/root
        s = R(i,j)/root
      else
        c = 1
        s = 0
      end if
      Construct G_ji
      R = G_ji * R              // matrix product
      Q = Q * transpose(G_ji)   // matrix product
Complexity: order n^3 products.

21 QR factorization Python example
(Python output: the numerical values of A and of its factors R and Q are not recoverable from the transcript.)

22 Power iteration algorithm Python experiment
Construct the series x_k = A x_{k-1} for a 2×2 matrix A with eigenvalues λ_1, λ_2 and eigenvectors v_1, v_2 (the numerical values of A, x_0, x_1, x_2, x_3 are not recoverable from the transcript).
x_k tends to the direction of the eigenvector associated with the largest-modulus eigenvalue.
‖x_k‖ / ‖x_{k-1}‖ tends to the largest-modulus eigenvalue.

23 Power iteration algorithm Algorithm
Computation of the eigenvalue of largest modulus. A may or may not be diagonalizable; the dominant eigenvalue may or may not be unique.
Algorithm
  choose q(0)
  for k = 1 to convergence
    x(k) = A q(k-1)
    q(k) = x(k) / norm(x(k))
    lambdamax = x(k)(j) / q(k-1)(j)
Attention: good choice of the component j.
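A minimal NumPy sketch of the power iteration; it estimates the eigenvalue with the Rayleigh quotient x(k).q(k-1) rather than the component ratio x(k)(j)/q(k-1)(j) of the slide, and the test matrix is an illustrative assumption.

  import numpy as np

  def power_iteration(A, n_iter=100):
      # largest-modulus eigenvalue and associated eigenvector (minimal sketch)
      q = np.random.rand(A.shape[0])
      q /= np.linalg.norm(q)
      lam = 0.0
      for _ in range(n_iter):
          x = A @ q
          lam = x @ q                  # Rayleigh-quotient estimate (q has unit norm)
          q = x / np.linalg.norm(x)
      return lam, q

  A = np.array([[4.0, 1.0], [1.0, 3.0]])
  lam, q = power_iteration(A)
  print(lam, np.linalg.eigvalsh(A))    # lam ~ largest eigenvalue of A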

24 Power iteration algorithm Python example
Python example: a 3×3 test matrix A with eigenvalues λ_1, λ_2, λ_3 is transformed by two rotations R_1 (angle 1) and R_2 (angle 2), B = R_2 R_1 A tR_1 tR_2, and the power iteration is applied to B (the numerical values are not recoverable from the transcript).

25 Power iteration algorithm Remarks
1. Convergence results depend on whether the matrix is diagonalizable or not, and whether the dominant eigenvalue is multiple or not.
2. The choice of the norm is not explicit: usually the max norm or the Euclidean norm.
3. q(0) should not be orthogonal to the eigen-subspace associated with the dominant eigenvalue.

26 Inverse iteration algorithm Algorithm
Computation of the eigenvalue of smallest modulus. A may or may not be diagonalizable; this eigenvalue may or may not be unique.
Based on the fact that λ_min(A) = 1 / λ_max(A^{-1}).
Algorithm
  choose q(0)
  for k = 1 to convergence
    solve A x(k) = q(k-1)
    q(k) = x(k) / norm(x(k))
    lambdamin = q(k-1)(j) / x(k)(j)

27 Inverse iteration algorithm Python example
Python example: the same rotated test matrix B = R_2 R_1 A tR_1 tR_2 as for the power iteration is used; the inverse iteration converges to the eigenvalue of smallest modulus (the numerical values are not recoverable from the transcript).

28 Generalized inverse iteration algorithm Algorithm
Computation of the eigenvalue closest to a given µ. The eigenvalues of A - µI are the λ_i - µ, where the λ_i are the eigenvalues of A ⟹ apply the inverse iteration algorithm to A - µI.
Algorithm
  choose q(0)
  for k = 1 to convergence
    solve (A - mu*I) x(k) = q(k-1)
    q(k) = x(k) / norm(x(k))
    lambda = q(k-1)(j) / x(k)(j) + mu

29 Generalized inverse iteration algorithm Python example
Python example: the same rotated test matrix B = R_2 R_1 A tR_1 tR_2 is used with the shift µ = 4; the iteration converges to the eigenvalue closest to µ (the numerical values are not recoverable from the transcript).

30 Deflation Algorithm and Python example
Computation of all the eigenvalues, in decreasing order of modulus. When an eigenelement (λ, q) is found, it is removed from the further computations by replacing A ← A - λ q tq.
Algorithm
  for i = 1 to n
    choose q(0)
    for k = 1 to convergence
      x(k) = A q(k-1)
      q(k) = x(k) / norm(x(k))
      lambda = x(k)(j) / q(k-1)(j)
    A = A - lambda * q * transpose(q)   // eliminates direction q

31 Galerkin method Algorithm
Let H be a subspace of dimension m, generated by the orthonormal basis (q_1, ..., q_m). Construct the rectangular matrix Q = (q_1, ..., q_m). Remark: Q* Q = Id_m.
Goal: look for eigenvectors in H.
If u is in H, then u = Σ_{i=1}^{m} α_i q_i (unique decomposition), i.e. u = Q U with U = t(α_1, ..., α_m).
A u = λ u ⟹ A Q U = λ Q U. Project on H: Q* A Q U = λ Q* Q U = λ U.
We therefore look for the eigenelements of B = Q* A Q.
Vocabulary: the {λ_i, u_i} are the Ritz elements, B is the Rayleigh matrix.
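A minimal NumPy sketch of this projection (Rayleigh-Ritz) step; the random subspace and the matrix sizes are illustrative assumptions, not from the slides.

  import numpy as np

  rng = np.random.default_rng(0)
  n, m = 50, 5
  A = rng.standard_normal((n, n)); A = (A + A.T) / 2          # symmetric test matrix
  Q, _ = np.linalg.qr(rng.standard_normal((n, m)))             # orthonormal basis: Q*Q = Id_m

  B = Q.T @ A @ Q                      # Rayleigh matrix (m x m)
  ritz_values, Y = np.linalg.eigh(B)   # eigenelements of B
  ritz_vectors = Q @ Y                 # u_i = Q U_i, approximate eigenvectors of A
  print(ritz_values)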

32 Jacobi method Algorithm
Goal: diagonalize the (real symmetric) matrix. Until a reasonably diagonal matrix is obtained:
  choose the largest off-diagonal element (largest modulus),
  construct a rotation matrix that annihilates this term.
In the end, the eigenvalues are the diagonal elements.
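A minimal NumPy sketch of one possible implementation, assuming the classical rotation angle θ = arctan2(2 A_pq, A_qq - A_pp)/2 to annihilate the selected entry; the function name and the test matrix are illustrative assumptions.

  import numpy as np

  def jacobi_eigenvalues(A, tol=1e-10, max_rotations=100):
      # diagonalize a real symmetric matrix with Jacobi rotations (minimal sketch)
      A = A.copy()
      n = A.shape[0]
      for _ in range(max_rotations):
          # largest off-diagonal element (in modulus)
          off = np.abs(A - np.diag(np.diag(A)))
          p, q = np.unravel_index(np.argmax(off), off.shape)
          if off[p, q] < tol:
              break
          # rotation angle that annihilates A[p, q]
          theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
          c, s = np.cos(theta), np.sin(theta)
          G = np.eye(n)
          G[p, p] = c; G[q, q] = c; G[p, q] = s; G[q, p] = -s
          A = G.T @ A @ G
      return np.sort(np.diag(A))

  A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
  print(jacobi_eigenvalues(A), np.linalg.eigvalsh(A))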

33 QR method Algorithm
Algorithm
  A(1) = A
  for k = 1 to convergence
    [Q(k), R(k)] = QR_factor(A(k))
    A(k+1) = R(k) Q(k)
The eigenvalues are the diagonal elements of the last matrix A_{k+1}.
Properties
  A_{k+1} = R_k Q_k = tQ_k Q_k R_k Q_k = tQ_k A_k Q_k ⟹ A_{k+1} and A_k are similar.
  If A_k is tridiagonal or Hessenberg, A_{k+1} also is ⟹ first reduce to this form while keeping similar matrices.
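A minimal NumPy sketch of the unshifted QR iteration above, using numpy.linalg.qr for the factorization; the test matrix is an illustrative assumption.

  import numpy as np

  def qr_eigenvalues(A, n_iter=200):
      # unshifted QR iteration: the diagonal tends to the eigenvalues (minimal sketch)
      Ak = A.copy()
      for _ in range(n_iter):
          Q, R = np.linalg.qr(Ak)
          Ak = R @ Q                  # similar to Ak: A(k+1) = tQ(k) A(k) Q(k)
      return np.diag(Ak)

  A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
  print(np.sort(qr_eigenvalues(A)), np.linalg.eigvalsh(A))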

34 QR method Convergence and Python example
Theorem: let V be the matrix of eigenvectors of A (A u = λ u). If the principal minors of V are non-zero and the eigenvalues of A are such that |λ_1| > ... > |λ_n|, then the QR method converges: A_{k+1} tends to an upper triangular form and (A_k)_ii tends to λ_i.

35 Eigenvalues Summary
We want to know all the eigenvalues ⟹ QR method (better than Jacobi); preprocessing: find a similar tridiagonal or Hessenberg matrix (Householder or Givens algorithm).
We only want one eigenvector whose eigenvalue is known (or an approximation of it) ⟹ power iteration algorithm and variants.
We only want a subset of eigenelements: if we know the eigenvalues and look for the associated eigenvectors ⟹ deflation and variants; if we know the subspace in which to look for them ⟹ Galerkin and variants.

36 Principles
A x = b.
Elimination methods: the solution of the system remains unchanged if lines are permuted, or if line i is replaced by a linear combination Σ_{k=1}^{n} µ_k l_k with µ_i ≠ 0.
Factorization methods: A = L U, so L U x = b; we solve two triangular systems, L y = b then U x = y.

37 Lower triangular matrix
Formula: x_i = ( b_i - Σ_{k=1}^{i-1} A_ik x_k ) / A_ii.
Algorithm
  if A(1,1) == 0 then stop
  x(1) = b(1)/A(1,1)
  for i = 2 to n
    if A(i,i) == 0 then stop
    ax = 0
    for k = 1 to i-1
      ax = ax + A(i,k)*x(k)
    x(i) = (b(i) - ax)/A(i,i)
Complexity: order n^2/2 products.

38 Upper triangular matrix
Formula: x_i = ( b_i - Σ_{k=i+1}^{n} A_ik x_k ) / A_ii.
Algorithm
  if A(n,n) == 0 then stop
  x(n) = b(n)/A(n,n)
  for i = n-1 to 1
    if A(i,i) == 0 then stop
    ax = 0
    for k = i+1 to n
      ax = ax + A(i,k)*x(k)
    x(i) = (b(i) - ax)/A(i,i)
Complexity: order n^2/2 products.

39 Gauss elimination Principle
Aim: transform A into an upper triangular matrix.
At rank p ≥ 1: l_i ← l_i - (A_ip / A_pp) l_p (at this stage A_ij = 0 for i > j, j < p).
  for p = 1 to n
    pivot = A(p,p)
    if pivot == 0 stop
    line(p) = line(p)/pivot
    for i = p+1 to n
      Aip = A(i,p)
      line(i) = line(i) - Aip*line(p)
  x = solve(A, b)   // upper triangular
Complexity: still order n^3 products.

40 Gauss-Jordan elimination Principle
Aim: transform A into the identity.
At rank p ≥ 1: l_i ← l_i - (A_ip / A_pp) l_p (at this stage A_ii = 1 for i < p, A_ij = 0 for i ≠ j, j < p).
  for p = 1 to n
    pivot = A(p,p)
    if pivot == 0 stop
    line(p) = line(p)/pivot
    for i = 1 to n, i != p
      Aip = A(i,p)
      line(i) = line(i) - Aip*line(p)
  x = b
Attention
  take into account the right-hand side in the line operations;
  what if A_pp = 0?

41 Gauss-Jordan elimination Algorithm
  // unknown entries numbering
  for i = 1 to n
    num(i) = i
  for p = 1 to n
    // maximal pivot
    pmax = abs(A(p,p)); imax = p; jmax = p
    for i = p to n
      for j = p to n
        if abs(A(i,j)) > pmax then
          pmax = abs(A(i,j)); imax = i; jmax = j
        end if
    // line permutation
    for j = p to n
      permute(A(p,j), A(imax,j))
    permute(b(p), b(imax))
    // column permutation
    for i = p to n
      permute(A(i,p), A(i,jmax))
    permute(num(p), num(jmax))
    pivot = A(p,p)
    if pivot == 0 stop, rank(A) = p-1
    for j = p to n
      A(p,j) = A(p,j)/pivot
    b(p) = b(p)/pivot
    for i = 1 to n, i != p
      Aip = A(i,p)
      for j = p to n
        A(i,j) = A(i,j) - Aip*A(p,j)
      b(i) = b(i) - Aip*b(p)
  // end of loop on p
  for i = 1 to n
    x(num(i)) = b(i)
Complexity: order n^3 products. Remark: this also computes the rank of the matrix.

42 Factorization methods Thomas algorithm principle
LU decomposition for tridiagonal matrices. (Figure: A = L U with bidiagonal factors.)
We suppose that the L_ij and U_ij are known for i < p. Then
  A_{p,p-1} = L_{p,p-1} U_{p-1,p-1},
  A_{p,p}   = L_{p,p-1} U_{p-1,p} + U_{p,p},
  A_{p,p+1} = U_{p,p+1},
hence
  L_{p,p-1} = A_{p,p-1} / U_{p-1,p-1},
  U_{p,p}   = A_{p,p} - L_{p,p-1} U_{p-1,p} = A_{p,p} - A_{p,p-1} U_{p-1,p} / U_{p-1,p-1},
  U_{p,p+1} = A_{p,p+1}.

43 Factorization methods Thomas algorithm
Algorithm
  // factorization
  U(1,1) = A(1,1)
  U(1,2) = A(1,2)
  for i = 2 to n
    if U(i-1,i-1) == 0 then stop
    L(i,i-1) = A(i,i-1)/U(i-1,i-1)
    U(i,i)   = A(i,i) - L(i,i-1)*U(i-1,i)
    U(i,i+1) = A(i,i+1)
  // construction of the solution
  y = solve(L, b)   // lower triangular
  x = solve(U, y)   // upper triangular
Complexity: order 5n products.
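A minimal Python sketch of the Thomas algorithm above (no pivoting); the function name, the argument layout (sub-diagonal a, diagonal d, super-diagonal c) and the test system are illustrative assumptions.

  import numpy as np

  def thomas(a, d, c, b):
      # a: sub-diagonal (n-1), d: diagonal (n), c: super-diagonal (n-1), b: right-hand side
      n = len(d)
      L = np.zeros(n - 1)             # L(i, i-1)
      U = d.astype(float)             # U(i, i); the super-diagonal of U is c
      for i in range(1, n):
          L[i - 1] = a[i - 1] / U[i - 1]
          U[i] = d[i] - L[i - 1] * c[i - 1]
      # forward substitution: L y = b (unit lower bidiagonal)
      y = b.astype(float)
      for i in range(1, n):
          y[i] -= L[i - 1] * y[i - 1]
      # backward substitution: U x = y (upper bidiagonal)
      x = np.zeros(n)
      x[-1] = y[-1] / U[-1]
      for i in range(n - 2, -1, -1):
          x[i] = (y[i] - c[i] * x[i + 1]) / U[i]
      return x

  # 1D Laplacian-like example
  n = 5
  a = -np.ones(n - 1); c = -np.ones(n - 1); d = 2.0 * np.ones(n); b = np.ones(n)
  x = thomas(a, d, c, b)
  A = np.diag(d) + np.diag(a, -1) + np.diag(c, 1)
  print(np.allclose(A @ x, b))   # True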

44 Factorization methods general matrices
For general matrices:
  factorize the matrix (LU algorithm, Cholesky algorithm),
  solve the lower triangular system,
  solve the upper triangular system.

45 Iterative methods Principle
Write A = E + D + F (D diagonal, E strictly lower triangular, F strictly upper triangular).
To solve A x = b, write A = M - N and iterate M x^{k+1} - N x^k = b, i.e. x^{k+1} = M^{-1} N x^k + M^{-1} b.
Attention: M should be easy to invert; M^{-1} N should lead to a stable algorithm.
Jacobi: M = D, N = -(E + F).
Gauss-Seidel: M = D + E, N = -F.
Successive Over-Relaxation (SOR): M = D/ω + E, N = (1/ω - 1) D - F.

46 Jacobi method
Formula: x_i^{k+1} = ( b_i - Σ_{j=1, j≠i}^{n} A_ij x_j^k ) / A_ii, i.e.
  x^{k+1} = D^{-1} ( b - (E + F) x^k ) = D^{-1} ( b + (D - A) x^k ) = D^{-1} b + (I - D^{-1} A) x^k.
Algorithm
  choose x(k=0)
  for k = 0 to convergence
    for i = 1 to n
      rhs = b(i)
      for j = 1 to n, j != i
        rhs = rhs - A(i,j)*x(j,k)
      x(i,k+1) = rhs / A(i,i)
    test = norm(x(k+1) - x(k)) < epsilon   (while not test)
Remarks: simple; two copies of the variable (x^{k+1} and x^k); insensitive to permutations; converges if the diagonal is strictly dominant.

47 Gauss-Seidel method
Formula: x_i^{k+1} = ( b_i - Σ_{j=1}^{i-1} A_ij x_j^{k+1} - Σ_{j=i+1}^{n} A_ij x_j^k ) / A_ii.
Algorithm
  choose x(k=0)
  for k = 0 to convergence
    for i = 1 to n
      rhs = b(i)
      for j = 1 to i-1
        rhs = rhs - A(i,j)*x(j,k+1)
      for j = i+1 to n
        rhs = rhs - A(i,j)*x(j,k)
      x(i,k+1) = rhs / A(i,i)
    test = norm(x(k+1) - x(k)) < epsilon   (while not test)
Remarks: still simple; one copy of the variable x; sensitive to permutations; often converges better than Jacobi.

48 SOR method
Formula: x_i^{k+1} = (ω / A_ii) ( b_i - Σ_{j=1}^{i-1} A_ij x_j^{k+1} - Σ_{j=i+1}^{n} A_ij x_j^k ) + (1 - ω) x_i^k, i.e.
  x^{k+1} = (D/ω + E)^{-1} b + (D/ω + E)^{-1} ( (1/ω - 1) D - F ) x^k.
Remarks: still simple; one copy of the variable x; necessary condition for convergence: 0 < ω < 2.
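A minimal NumPy sketch of SOR (ω = 1 recovers Gauss-Seidel), iterating a fixed number of times instead of testing ‖x^{k+1} - x^k‖; the function name, the test system and the value of ω are illustrative assumptions.

  import numpy as np

  def sor(A, b, omega=1.5, n_iter=200, x0=None):
      # Successive Over-Relaxation sweep over the components (minimal sketch)
      n = len(b)
      x = np.zeros(n) if x0 is None else x0.astype(float)
      for _ in range(n_iter):
          for i in range(n):
              s = b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]
              x[i] = omega * s / A[i, i] + (1.0 - omega) * x[i]
      return x

  A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
  b = np.array([1.0, 2.0, 3.0])
  print(np.allclose(A @ sor(A, b), b))   # True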

49 Descent method general principle
For A symmetric positive definite!
Principle: construct a series of approximations of the solution of the system, x^{k+1} = x^k + α_k p^k, where p^k is a descent direction and α_k is to be determined.
The solution x̄ minimizes the functional J(x) = tx A x - 2 tb x:
  ∂J/∂x_i (x) = ∂/∂x_i ( Σ_{j,k} x_j A_jk x_k - 2 Σ_j b_j x_j ) = Σ_k A_ik x_k + Σ_j x_j A_ji - 2 b_i = ( 2 (A x - b) )_i,
so ∇J(x̄) = 0 ⟺ A x̄ = b.

50 Descent method determining α_k
x̄ also minimizes the functional E(x) = t(x - x̄) A (x - x̄), and E(x̄) = 0.
For a given p^k, which α minimizes E(x^k + α p^k)?
  E(x^k + α p^k) = t(x^k + α p^k - x̄) A (x^k + α p^k - x̄),
  ∂/∂α E(x^k + α p^k) = tp^k A (x^k + α p^k - x̄) + t(x^k + α p^k - x̄) A p^k = 2 t(x^k + α p^k - x̄) A p^k.
Setting this to zero: t(x^k + α_k p^k - x̄) A p^k = 0, i.e. tp^k A x^k + α_k tp^k A p^k - tp^k A x̄ = 0, hence
  α_k = ( tp^k A x̄ - tp^k A x^k ) / ( tp^k A p^k )   ( = tp^k (b - A x^k) / tp^k A p^k, since A x̄ = b ).

51 Descent method functional profiles (good cases)
(Figure: level curves of the functional for two symmetric positive definite matrices A, a well-conditioned one and a badly conditioned one; the matrices, right-hand sides and condition numbers are not recoverable from the transcript.)

52 Descent method functional profiles (bad cases)
(Figure: level curves of the functional in a non-positive case and in a non-symmetric case; the matrices and right-hand sides are not recoverable from the transcript.)

53 Descent method optimal parameter (principle)
Principle: choose p^k = r^k = b - A x^k (the residual). Choose α_k such that r^{k+1} is orthogonal to p^k:
  r^{k+1} = b - A x^{k+1} = b - A (x^k + α_k p^k) = r^k - α_k A p^k,
  tp^k r^{k+1} = tp^k r^k - α_k tp^k A p^k = 0  ⟹  α_k = tp^k r^k / tp^k A p^k.
Then E(x^{k+1}) = (1 - γ_k) E(x^k) with
  γ_k = ( tp^k r^k )^2 / ( ( tp^k A p^k ) ( tr^k A^{-1} r^k ) ) ≥ (1 / Cond(A)) ( tp^k r^k / ( ‖p^k‖ ‖r^k‖ ) )^2.

54 Descent method optimal parameter (algorithm)
Algorithm
  choose x(k=1)
  for k = 1 to convergence
    r(k) = b - A x(k)
    p(k) = r(k)
    alpha(k) = r(k).p(k) / p(k).A p(k)
    x(k+1) = x(k) + alpha(k) p(k)
  // stop when r(k) is small

55 Descent method conjugate gradient (principle)
Principle: choose p^k = r^k + β_k p^{k-1}. Choose β_k to minimize the error, i.e. to maximize the factor γ_k.
Properties
  tr^k p^j = 0 for j < k,
  Span(r^1, r^2, ..., r^k) = Span(r^1, A r^1, ..., A^{k-1} r^1),
  Span(p^1, p^2, ..., p^k) = Span(r^1, A r^1, ..., A^{k-1} r^1),
  tp^k A p^j = 0 for j < k,
  tr^k A p^j = 0 for j < k.
The algorithm converges in at most n iterations.

56 Descent method conjugate gradient (algorithm)
Algorithm
  choose x(k=1)
  p(1) = r(1) = b - A x(1)
  for k = 1 to convergence
    alpha(k) = r(k).p(k) / p(k).A p(k)
    x(k+1) = x(k) + alpha(k) p(k)
    r(k+1) = r(k) - alpha(k) A p(k)
    beta(k+1) = r(k+1).r(k+1) / r(k).r(k)
    p(k+1) = r(k+1) + beta(k+1) p(k)
  // stop when r(k) is small
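A minimal NumPy sketch of the conjugate gradient algorithm above; it uses the coefficient α_k = r(k).r(k) / p(k).A p(k), which equals the slide's r(k).p(k) / p(k).A p(k) in exact arithmetic, and the test system is an illustrative assumption.

  import numpy as np

  def conjugate_gradient(A, b, tol=1e-10):
      # conjugate gradient for a symmetric positive definite matrix (minimal sketch)
      x = np.zeros_like(b)
      r = b - A @ x
      p = r.copy()
      for _ in range(len(b)):                 # converges in at most n iterations
          Ap = A @ p
          alpha = (r @ r) / (p @ Ap)
          x = x + alpha * p
          r_new = r - alpha * Ap
          if np.linalg.norm(r_new) < tol:
              break
          beta = (r_new @ r_new) / (r @ r)
          p = r_new + beta * p
          r = r_new
      return x

  A = np.array([[4.0, 1.0], [1.0, 3.0]])
  b = np.array([1.0, 2.0])
  print(np.allclose(A @ conjugate_gradient(A, b), b))   # True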

57 Descent method GMRES
For generic matrices A. GMRES: Generalized Minimal RESidual method.
  Take a fair approximation x^k of the solution.
  Construct the m-dimensional set of free vectors { r^k, A r^k, ..., A^{m-1} r^k }. This spans the Krylov space H^k_m.
  Construct an orthonormal basis { v_1, ..., v_m } of H^k_m, e.g. via Gram-Schmidt.
  Look for a new approximation x^{k+1} in H^k_m: x^{k+1} = Σ_{j=1}^{m} X_j v_j = [V] X.
  We obtain a system of n equations with m unknowns: A x^{k+1} = A [V] X = b.

58 Descent method GMRES (cont'd)
Project on H^k_m: [tV] A [V] X = [tV] b.
Solve this system of m equations with m unknowns, set x^{k+1} = [V] X, and so on...
To work well GMRES should be preconditioned!
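Rather than re-implementing the projection, one can call the GMRES solver of SciPy, optionally preconditioned with an incomplete LU factorization as suggested on the preconditioning slides; the 1D Laplacian test system below is an illustrative assumption, not from the slides.

  import numpy as np
  from scipy.sparse import diags
  from scipy.sparse.linalg import gmres, spilu, LinearOperator

  n = 100
  A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
  b = np.ones(n)

  x, info = gmres(A, b)                     # plain GMRES
  # preconditioned GMRES, with an incomplete LU factorization playing the role of C
  ilu = spilu(A)
  M = LinearOperator((n, n), matvec=ilu.solve)
  x_pc, info_pc = gmres(A, b, M=M)
  print(info, info_pc, np.linalg.norm(A @ x_pc - b))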

59 Preconditioning principle
Principle: replace the system A x = b by C^{-1} A x = C^{-1} b, where Cond(C^{-1} A) ≪ Cond(A).
Which matrix C?
  C should be easily invertible, typically the product of two triangular matrices;
  C = diag(A): simplest, but well...;
  incomplete Cholesky or LU factorization...

60 Preconditioning symmetry issues
Symmetry: even if A and C are symmetric, C^{-1} A may not be symmetric. What if symmetry is needed?
Let C^{-1/2} be such that C^{-1/2} C^{-1/2} = C^{-1}. Then C^{-1/2} A C^{-1/2} is similar to C^{-1} A.
We consider the system
  C^{+1/2} (C^{-1} A) C^{-1/2} C^{+1/2} x = C^{+1/2} C^{-1} b,  i.e.  (C^{-1/2} A C^{-1/2}) C^{+1/2} x = C^{-1/2} b.
Solve (C^{-1/2} A C^{-1/2}) y = C^{-1/2} b and then y = C^{+1/2} x.

61 Preconditioning preconditioned conjugate gradient
Algorithm
  choose x(k=1)
  r(1) = b - A x(1)
  solve C z(1) = r(1)
  p(1) = z(1)
  for k = 1 to convergence
    alpha(k) = r(k).z(k) / p(k).A p(k)
    x(k+1) = x(k) + alpha(k) p(k)
    r(k+1) = r(k) - alpha(k) A p(k)
    solve C z(k+1) = r(k+1)
    beta(k+1) = r(k+1).z(k+1) / r(k).z(k)
    p(k+1) = z(k+1) + beta(k+1) p(k)
At each iteration a system C z = r is solved.

62 Main issues
Problems often involve a large number of variables (degrees of freedom), say 10^6. Storing a full matrix for a 10^6-order system requires 10^12 real numbers; in single precision this amounts to 4 TB of memory.
But high-order problems are often very sparse. We therefore use a storage structure which consists in only storing the relevant, non-zero, data.
Access to an element A_ij should nevertheless be very efficient.

63 CDS: Compressed Diagonal storage
(Figure: only the diagonals within the band of the matrix are stored.)
The bandwidth parameter is L(A) = max_i L_i(A), where L_i(A) = max_{j / A_ij ≠ 0} |i - j|.

64 CRS: Compressed Row storage
(Figure: the three tables tabi, tabj, tabv.)
All the non-zero values of the matrix are stored in a table tabv; they are stored line by line, in increasing order of the columns.
A table tabj, with the same size as tabv, stores the column number of each value of tabv.
A table tabi, of size n+1, stores the index in tabj (and tabv) of the first element of each line; its last entry is the size of tabv plus one.
CCS: Compressed Column storage = Harwell-Boeing. Generalization to symmetric matrices.

65 CRS: Compressed Row exercise
Question: what is the CRS storage of
  A = ( 0 4 1 6
        2 0 5 0
        0 9 7 0
        0 0 3 8 ) ?
Solution
  tabi = {1, 4, 6, 8, 10}
  tabj = {2, 3, 4, 1, 3, 2, 3, 3, 4}
  tabv = {4, 1, 6, 2, 5, 9, 7, 3, 8}
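A minimal Python sketch of a matrix-vector product using the CRS tables of this exercise (kept 1-based as on the slide); the function name is an illustrative assumption.

  import numpy as np

  # CRS arrays from the exercise above (1-based indices as on the slide)
  tabi = [1, 4, 6, 8, 10]
  tabj = [2, 3, 4, 1, 3, 2, 3, 3, 4]
  tabv = [4, 1, 6, 2, 5, 9, 7, 3, 8]

  def crs_matvec(tabi, tabj, tabv, u):
      # y = A u with A stored in CRS (minimal sketch, 1-based tables)
      n = len(tabi) - 1
      y = np.zeros(n)
      for i in range(n):
          for k in range(tabi[i] - 1, tabi[i + 1] - 1):   # entries of line i+1
              y[i] += tabv[k] * u[tabj[k] - 1]
      return y

  u = np.array([1.0, 1.0, 1.0, 1.0])
  print(crs_matvec(tabi, tabj, tabv, u))   # row sums: [11, 7, 16, 11]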

66 Cuthill-McKee algorithm construction of a graph
Goal: reduce the bandwidth of a large sparse matrix by renumbering the unknowns.
Construction
  The nodes of the graph are the unknowns of the system. They are labelled with a number from 1 to n.
  The edges are the relations between the unknowns: two unknowns i and j are linked if A_ij ≠ 0.
  The distance d(i, j) between two nodes is the minimal number of edges to follow to join both nodes.
  The eccentricity of a node is E(i) = max_j d(i, j).
  Its far neighbours are λ(i) = {j / d(i, j) = E(i)}.
  The graph diameter is D = max_i E(i).
  The peripheral nodes are P = {j / E(j) = D}.

67 Cuthill-McKee algorithm bandwidth reduction
This graph is used to renumber the unknowns.
  Choose a first node and label it 1.
  Attribute the new numbers (2, 3, ...) to the neighbours of node 1, starting with those that have the fewest non-labelled neighbours.
  Label the neighbours of node 2, and so on, until all nodes are labelled.
  Once this is done, the numbering is reversed: the first become the last.
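SciPy provides the reverse Cuthill-McKee ordering directly; a minimal sketch on an illustrative sparsity pattern (not from the slides).

  import numpy as np
  from scipy.sparse import csr_matrix
  from scipy.sparse.csgraph import reverse_cuthill_mckee

  A = csr_matrix(np.array([[1, 0, 0, 1],
                           [0, 1, 1, 0],
                           [0, 1, 1, 1],
                           [1, 0, 1, 1]], dtype=float))
  perm = reverse_cuthill_mckee(A, symmetric_mode=True)
  B = A[perm, :][:, perm]        # renumbered matrix, hopefully with a smaller bandwidth
  print(perm)
  print(B.toarray())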

68 Cuthill-McKee algorithm example 1 (figure not reproduced)

69 Cuthill-McKee algorithm example 2 (figure not reproduced)

70 Bibliography
P. Lascaux, R. Théodor, Analyse numérique matricielle appliquée à l'art de l'ingénieur, Volumes 1 and 2, 2ème édition, Masson (1997).
Gene H. Golub, Charles F. van Loan, Matrix Computations, 3rd edition, Johns Hopkins University Press (1996).

71 The End
