Scientific Computing WS 2018/2019, Lecture 10
Jürgen Fuhrmann

Lecture 10 Slide 1

Homework assessment

Lecture 10 Slide 2

General
Please excuse the terse answers - on the bright side, I found time to reply individually to everyone who handed things in by yesterday noon.
Please stick to the filename scheme; this makes it easier for me to give feedback to all of you.
Good style with zip files is that they unpack into a subdirectory with the same name, e.g. abc.zip unpacks into directory abc.
Mac users: try to pack your files without the .DS_Store files and __MACOSX subdirectories. No need to include binaries.
Always try to calculate errors if exact data is available (I should have been more specific in the assignment text).

Lecture 10 Slide 3

Code style
Try to specify datatypes in constants: 0.1f for float, 0.1L for long double, and avoid mixing datatypes in expressions. In particular, write x/2.0 instead of x/2 if you divide a double number. (There are reasonable automatic conversion rules, but things are clearer if they are explicit.)
Cast ints to double explicitly in floating point expressions. This ensures that you don't accidentally create an integer intermediate result. (1/(i*i) with int i was the reason for many overflow errors in your codes.)
Math headers: use <cmath> instead of <math.h>. In particular, this gives you long double versions of functions if needed, e.g. for abs.
When using printf, use the right format specifiers for output of floating point numbers: %e for float and double, and %Le for long double. %e and %Le give the exponential notation; %f and %Lf give a fixed point notation without exponent, which is not very helpful for accuracy assessment.

Lecture 10 Slide 4
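As a small illustration of these points (a sketch with made-up values, not from the assignments):

    #include <cstdio>

    int main()
    {
        float xf = 0.1f;        // float constant
        double xd = 0.1;        // double constant
        long double xl = 0.1L;  // long double constant

        int i = 3;
        double wrong = 1/(i*i);           // integer division: yields 0.0
                                          // (and for large i, i*i overflows the int range)
        double right = 1.0/((double)i*i); // explicit cast: yields 0.111...

        std::printf("%e %e %Le\n", xf, xd, xl); // %e for float/double, %Le for long double
        std::printf("wrong=%e right=%e\n", wrong, right);
    }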

Lecture 10 Slide 5

Representation of real numbers
Any real number $x \in \mathbb{R}$ can be expressed via the representation formula
$$x = \pm \left( \sum_{i=0}^{\infty} d_i \beta^{-i} \right) \beta^e$$
$\beta \in \mathbb{N}$, $\beta \ge 2$: base
$d_i \in \mathbb{N}$, $0 \le d_i < \beta$: mantissa digits
$e \in \mathbb{Z}$: exponent
Scientific notation of floating point numbers, e.g. $x = 6.022 \cdot 10^{23}$: $\beta = 10$, $d = (6, 0, 2, 2, 0, \dots)$, $e = 23$
Non-unique: $x = 0.6022 \cdot 10^{24}$: $\beta = 10$, $d = (0, 6, 0, 2, 2, 0, \dots)$, $e = 24$
Infinite for periodic decimal numbers, irrational numbers

Lecture 4 Slide 27

Lecture 10 Slide 6

Floating point numbers
Computer representation uses $\beta = 2$, therefore $d_i \in \{0, 1\}$.
Truncation to fixed finite size:
$$x = \pm \left( \sum_{i=0}^{t-1} d_i \beta^{-i} \right) \beta^e$$
$t$: mantissa length
Normalization: assume $d_0 = 1$ $\Rightarrow$ save one bit for the mantissa
$k$: exponent size, $-\beta^k + 1 = L \le e \le U = \beta^k - 1$
Extra bit for sign $\Rightarrow$ storage size: $(t-1) + k + 1$
IEEE 754 single precision (C++ float): $k = 8$, $t = 24$ $\Rightarrow$ 32 bit
IEEE 754 double precision (C++ double): $k = 11$, $t = 53$ $\Rightarrow$ 64 bit

Lecture 4 Slide 28

Lecture 10 Slide 7

Floating point limits
Finite size of representation $\Rightarrow$ there are minimal and maximal possible numbers which can be represented
Symmetry wrt. 0 because of the sign bit
Smallest positive normalized number: $d_0 = 1$, $d_i = 0$, $i = 1 \dots t-1$
$$x_{min} = \beta^L \qquad \text{float: } \approx 1.175 \cdot 10^{-38}, \quad \text{double: } \approx 2.225 \cdot 10^{-308}$$
Smallest positive denormalized number: $d_i = 0$, $i = 0 \dots t-2$, $d_{t-1} = 1$
$$x_{min} = \beta^{1-t} \beta^L$$
Largest positive normalized number: $d_i = \beta - 1$, $i = 0 \dots t-1$
$$x_{max} = \beta (1 - \beta^{-t}) \beta^U \qquad \text{float: } \approx 3.403 \cdot 10^{+38}, \quad \text{double: } \approx 1.798 \cdot 10^{+308}$$

Lecture 4 Slide 29
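These limits can be queried from the compiler; a quick check (the exact decimal digits depend on the platform's floating point implementation, IEEE 754 being the ubiquitous case):

    #include <cstdio>
    #include <limits>

    int main()
    {
        std::printf("float : min=%e denorm_min=%e max=%e\n",
                    std::numeric_limits<float>::min(),
                    std::numeric_limits<float>::denorm_min(),
                    std::numeric_limits<float>::max());
        std::printf("double: min=%e denorm_min=%e max=%e\n",
                    std::numeric_limits<double>::min(),
                    std::numeric_limits<double>::denorm_min(),
                    std::numeric_limits<double>::max());
    }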

Lecture 10 Slide 8

Machine precision
There cannot be more than $2^{t+k}$ floating point numbers $\Rightarrow$ almost all real numbers have to be approximated.
Let $x$ be an exact value and $\tilde{x}$ be its approximation. Then
$$\frac{|\tilde{x} - x|}{|x|} < \epsilon$$
is the best accuracy estimate we can get, where
$\epsilon = \beta^{1-t}$ (truncation)
$\epsilon = \frac{1}{2}\beta^{1-t}$ (rounding)
Also: $\epsilon$ is the smallest representable number such that $1 + \epsilon > 1$.
Relative errors show up in particular when
subtracting two close numbers
adding smaller numbers to larger ones

Lecture 4 Slide 30

Machine epsilon
Smallest floating point number $\epsilon$ such that $1 + \epsilon > 1$ in floating point arithmetic.
In exact math it is true that from $1 + \varepsilon = 1$ it follows that $0 + \varepsilon = 0$ and vice versa. In floating point computations this is not true.
Many of you used the right algorithm but used the first value for which $1 + \varepsilon = 1$ as the result. This is half the desired quantity.
Some did not start with 1.0 but with other numbers. E.g. 0.1 is not represented exactly in floating point arithmetic.
Recipe for calculation: set $\epsilon = 1.0$; while $1.0 + \epsilon/2.0 > 1.0$ do $\epsilon = \epsilon/2.0$.
But... this may be optimized away...

Lecture 10 Slide 9
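A sketch of the recipe which tries to keep the compiler from evaluating the comparison at compile time, using volatile (one possible workaround, not the only one); on IEEE hardware this prints approximately 2.22e-16:

    #include <cstdio>

    int main()
    {
        // volatile discourages constant folding of 1.0 + eps
        volatile double eps = 1.0, one_plus_eps = 2.0;
        double result = 1.0;
        while (one_plus_eps > 1.0)
        {
            result = eps;            // last eps with 1 + eps > 1
            eps = eps/2.0;
            one_plus_eps = 1.0 + eps;
        }
        std::printf("machine epsilon (double): %e\n", result);
    }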

Normalized floating point number
IEEE 754 32 bit floating point number, normally the same as C++ float:
$$s \;|\; e_1 e_2 \dots e_8 \;|\; d_1 d_2 \dots d_{23}$$
Storage layout for a normalized number ($d_0 = 1$):
bit 0: sign, 0 $\to$ $+$, 1 $\to$ $-$
bits 1...8: $r = 8$ exponent bits; the value $e + 2^{r-1} - 1 = e + 127$ is stored $\Rightarrow$ no need for a sign bit in the exponent
bits 9...31: $t = 23$ mantissa bits $d_1 \dots d_{23}$; $d_0 = 1$ is not stored ("hidden bit")
Examples:
1 = 0_01111111_00000000000000000000000 ($e = 0$, stored: 127)
2 = 0_10000000_00000000000000000000000 ($e = 1$, stored: 128)
0.5 = 0_01111110_00000000000000000000000 ($e = -1$, stored: 126)
0.1 = 0_01111011_10011001100110011001101 (infinite periodic, rounded)
Numbers which are exactly represented in the decimal system may not be exactly represented in the binary system.

Lecture 10 Slide 10
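The storage layout can be inspected directly; a small sketch which copies the bits of a float into an integer and prints them in the sign_exponent_mantissa format used above:

    #include <cstdio>
    #include <cstring>
    #include <cstdint>

    void dump(float x) // print sign, exponent and mantissa bits of a 32 bit float
    {
        std::uint32_t u;
        std::memcpy(&u, &x, sizeof(u)); // reinterpret the bits without aliasing issues
        std::printf("%g = %u_", x, (unsigned)(u >> 31));
        for (int b = 30; b >= 23; b--) std::printf("%u", (unsigned)((u >> b) & 1u));
        std::printf("_");
        for (int b = 22; b >= 0; b--) std::printf("%u", (unsigned)((u >> b) & 1u));
        std::printf("\n");
    }

    int main()
    {
        dump(1.0f); dump(2.0f); dump(0.5f); dump(0.1f);
    }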

How addition works
General:
1. Adjust the exponent of the number to be added: until both exponents are equal, add one to the exponent and shift the mantissa to the right by one bit.
2. Add both numbers.
3. Normalize the result.
For $1 + \epsilon$, we have at maximum $t$ bit shifts of the normalized mantissa until the mantissa becomes 0, so $\epsilon = 2^{-t}$.

Lecture 10 Slide 11
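A worked example (float, $t = 24$): $1 = (1.00\dots0)_2 \cdot 2^0$ and $2^{-24} = (1.0)_2 \cdot 2^{-24}$. Aligning the exponents shifts the second mantissa 24 bits to the right, pushing its leading 1-bit out of the 24 bit mantissa, so $1 + 2^{-24}$ evaluates to $1$ in float, whereas for $2^{-23}$ one bit survives the shift and $1 + 2^{-23} > 1$.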

Data of IEEE 754 floating point representations

type         size  t   r   ɛ
float        32    24  8   ≈ 1.19e-07
double       64    53  11  ≈ 2.22e-16
long double  80    64  15  ≈ 1.08e-19

The floating point format is not standardized by the language but by the IEEE committee.
The implementation of long double varies: it may even be the same as double, or may be significantly slower, so it is mostly not a good option.
There are high accuracy floating point number packages available, which however perform calculations without support of the CPU floating point arithmetic.

Lecture 10 Slide 12
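These values correspond to what std::numeric_limits reports (for long double the result is platform dependent; the 80 bit x87 format gives the value in the table):

    #include <cstdio>
    #include <limits>

    int main()
    {
        std::printf("float      : %e\n", (double)std::numeric_limits<float>::epsilon());
        std::printf("double     : %e\n", std::numeric_limits<double>::epsilon());
        std::printf("long double: %Le\n", std::numeric_limits<long double>::epsilon());
    }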

Summation
Basel sum: $\sum_{n=1}^{K} \frac{1}{n^2} \to \frac{\pi^2}{6}$ for $K \to \infty$.
Intended answer for accuracy: sum in reverse order. Start with adding up many small values which would be cancelled out if added to an already large sum value.
(Table: forward and reverse float sums with their errors for increasing $n$; the numerical values were lost in transcription.)
For float, there is no gain in accuracy for the forward sum beyond a certain $n$, while the reverse sum keeps improving.

Lecture 10 Slide 13
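A sketch of the experiment (the reference value $\pi^2/6$ is the limit of the series, so for finite $n$ the reported error still contains the truncation error of cutting off the series):

    #include <cstdio>

    int main()
    {
        const int n = 10000000;
        const double pi = 3.14159265358979323846;
        const float exact = (float)(pi*pi/6.0); // limit of the series, used as reference

        float forward = 0.0f;
        for (int i = 1; i <= n; i++)
            forward += 1.0f/((float)i*(float)i);

        float reverse = 0.0f;
        for (int i = n; i >= 1; i--)
            reverse += 1.0f/((float)i*(float)i);

        std::printf("forward: %e error: %e\n", forward, exact - forward);
        std::printf("reverse: %e error: %e\n", reverse, exact - reverse);
    }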

Kahan summation
Some of you hinted at the Kahan compensated summation algorithm (thanks!):

    T sum_kah = 0.0;
    T error_compensation = 0.0;
    for (int i = 1; i <= n; i++)
    {
        T x = i;
        T increment = 1.0/(x*x);
        T corrected_increment = increment - error_compensation;
        T good_sum = sum_kah + corrected_increment;
        error_compensation = (good_sum - sum_kah) - corrected_increment;
        sum_kah = good_sum;
    }

When implementing, be careful that the expressions are not optimized away...
William Kahan (1933-) is the principal architect of the IEEE 754 floating point standard.

Lecture 10 Slide 14
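One caveat on the last remark: optimization flags which permit floating point reassociation (e.g. -ffast-math with GCC or Clang) may let the compiler simplify (good_sum - sum_kah) - corrected_increment to zero and remove the compensation entirely. Compile such tests with standard-conforming floating point semantics, or make the intermediate variables volatile.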

Recap on nonnegative matrices

Lecture 10 Slide 15

Lecture 10 Slide 16

The Gershgorin Circle Theorem (Semyon Gershgorin, 1931)
(everywhere, we assume $n \ge 2$)
Theorem (Varga, Th. 1.11): Let $A$ be an $n \times n$ (real or complex) matrix. Let
$$\Lambda_i = \sum_{\substack{j = 1 \dots n \\ j \ne i}} |a_{ij}|$$
If $\lambda$ is an eigenvalue of $A$, then there exists $r$, $1 \le r \le n$, such that $|\lambda - a_{rr}| \le \Lambda_r$.
Proof: Assume $\lambda$ is an eigenvalue, $x$ a corresponding eigenvector, normalized such that $\max_{i = 1 \dots n} |x_i| = |x_r| = 1$. From $Ax = \lambda x$ it follows that
$$(\lambda - a_{ii}) x_i = \sum_{\substack{j = 1 \dots n \\ j \ne i}} a_{ij} x_j$$
$$|\lambda - a_{rr}| = \Big| \sum_{j \ne r} a_{rj} x_j \Big| \le \sum_{j \ne r} |a_{rj}| \, |x_j| \le \sum_{j \ne r} |a_{rj}| = \Lambda_r$$

Lecture 9 Slide 11

Lecture 10 Slide 17

Gershgorin Circle Corollaries
Corollary: Any eigenvalue of $A$ lies in the union of the disks defined by the Gershgorin circles:
$$\lambda \in \bigcup_{i = 1 \dots n} \{ \mu \in \mathbb{C} : |\mu - a_{ii}| \le \Lambda_i \}$$
Corollary:
$$\rho(A) \le \max_{i = 1 \dots n} \sum_{j=1}^{n} |a_{ij}| = \|A\|_\infty, \qquad \rho(A) \le \max_{j = 1 \dots n} \sum_{i=1}^{n} |a_{ij}| = \|A\|_1$$
Proof:
$$|\mu| \le |\mu - a_{ii}| + |a_{ii}| \le \Lambda_i + |a_{ii}| = \sum_{j=1}^{n} |a_{ij}|$$
Furthermore, $\sigma(A) = \sigma(A^T)$.

Lecture 9 Slide 12

Lecture 10 Slide 18

Gershgorin circles: example
A $3 \times 3$ example matrix with eigenvalues $\lambda_1 = 1$, $\lambda_2 = 2$, $\lambda_3 = 3$ and Gershgorin radii $\Lambda_1 = 5.2$, $\Lambda_2 = 0.8$, $\Lambda_3$.
(Plot: the three Gershgorin circles and the eigenvalues in the complex plane; the matrix entries and the value of $\Lambda_3$ were lost in transcription.)

Lecture 9 Slide 13

Lecture 10 Slide 19

Gershgorin circles: heat example I
$$A = \begin{pmatrix} \frac{2}{h} & -\frac{1}{h} & & \\ -\frac{1}{h} & \frac{2}{h} & -\frac{1}{h} & \\ & \ddots & \ddots & \ddots \\ & & -\frac{1}{h} & \frac{2}{h} \end{pmatrix}, \qquad B = (I - D^{-1}A) = \begin{pmatrix} 0 & \frac{1}{2} & & \\ \frac{1}{2} & 0 & \frac{1}{2} & \\ & \ddots & \ddots & \ddots \\ & & \frac{1}{2} & 0 \end{pmatrix}$$
We have $b_{ii} = 0$ and
$$\Lambda_i = \begin{cases} \frac{1}{2}, & i = 1, n \\ 1, & i = 2 \dots n-1 \end{cases}$$
$\Rightarrow$ estimate $|\lambda_i| \le 1$

Lecture 9 Slide 14

Lecture 10 Slide 20

Gershgorin circles: heat example II
Let $n = 11$, $h = 0.1$:
$$\lambda_i = \cos\left(\frac{ih\pi}{1+2h}\right) \quad (i = 1 \dots n)$$
(Plot: eigenvalues and Gershgorin circles in the complex plane.)
$\Rightarrow$ the Gershgorin circle theorem is too pessimistic...

Lecture 9 Slide 15

Lecture 10 Slide 21

Weighted directed graph representation of matrices
Define a directed graph from the nonzero entries of a matrix $A = (a_{ik})$:
Nodes: $N = \{N_i\}_{i = 1 \dots n}$
Directed edges: $E = \{ N_k \to N_l \mid a_{kl} \ne 0 \}$
Matrix entries $\equiv$ weights of directed edges
(Example: a $4 \times 4$ matrix and its weighted directed graph; the concrete entries were lost in transcription.)
1:1 equivalence between matrices and weighted directed graphs
Convenient e.g. for sparse matrices

Lecture 9 Slide 16
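A small sketch (hypothetical example matrix) which lists the directed edges of the graph of a densely stored matrix:

    #include <cstdio>
    #include <vector>

    int main()
    {
        // hypothetical 3x3 example matrix
        std::vector<std::vector<double>> a = {{1, 0, 2},
                                              {0, 3, 0},
                                              {4, 0, 5}};
        int n = (int)a.size();
        // one directed edge N_k -> N_l for each nonzero entry a_{kl}
        for (int k = 0; k < n; k++)
            for (int l = 0; l < n; l++)
                if (a[k][l] != 0.0)
                    std::printf("N%d -> N%d, weight %g\n", k+1, l+1, a[k][l]);
    }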

Lecture 10 Slide 22

Reducible and irreducible matrices
Definition: $A$ is reducible if there exists a permutation matrix $P$ such that
$$PAP^T = \begin{pmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{pmatrix}$$
$A$ is irreducible if it is not reducible.
Theorem (Varga, Th. 1.17): $A$ is irreducible $\Leftrightarrow$ the matrix graph is connected, i.e. for each ordered pair $(N_i, N_j)$ there is a path consisting of directed edges connecting them.
Equivalently, for each $i, j$ there is a sequence of consecutive nonzero matrix entries $a_{i k_1}, a_{k_1 k_2}, a_{k_2 k_3}, \dots, a_{k_{r-1} k_r}, a_{k_r j}$.

Lecture 9 Slide 17

Lecture 10 Slide 23

Taussky theorem (Olga Taussky, 1948)
Theorem (Varga, Th. 1.18): Let $A$ be irreducible. Assume that the eigenvalue $\lambda$ is a boundary point of the union of all the disks,
$$\lambda \in \partial \bigcup_{i = 1 \dots n} \{ \mu \in \mathbb{C} : |\mu - a_{ii}| \le \Lambda_i \}$$
Then all $n$ Gershgorin circles pass through $\lambda$, i.e. for $i = 1 \dots n$,
$$|\lambda - a_{ii}| = \Lambda_i$$

Lecture 9 Slide 18

Lecture 10 Slide 24

Consequences for the heat example from the Taussky theorem
$B = I - D^{-1}A$. We had $b_{ii} = 0$ and
$$\Lambda_i = \begin{cases} \frac{1}{2}, & i = 1, n \\ 1, & i = 2 \dots n-1 \end{cases}$$
$\Rightarrow$ estimate $|\lambda_i| \le 1$
Assume $|\lambda_i| = 1$. Then $\lambda_i$ lies on the boundary of the union of the Gershgorin circles. But then it must lie on the boundary of both circles, with radius $\frac{1}{2}$ and radius $1$ around 0. Contradiction $\Rightarrow$ $|\lambda_i| < 1$, $\rho(B) < 1$!

Lecture 9 Slide 20

Lecture 10 Slide 25

Diagonally dominant matrices
Definition: Let $A = (a_{ij})$ be an $n \times n$ matrix.
$A$ is diagonally dominant if for $i = 1 \dots n$: $|a_{ii}| \ge \sum_{j \ne i} |a_{ij}|$
$A$ is strictly diagonally dominant (sdd) if for $i = 1 \dots n$: $|a_{ii}| > \sum_{j \ne i} |a_{ij}|$
$A$ is irreducibly diagonally dominant (idd) if
(i) $A$ is irreducible
(ii) $A$ is diagonally dominant: for $i = 1 \dots n$, $|a_{ii}| \ge \sum_{j \ne i} |a_{ij}|$
(iii) for at least one $r$, $1 \le r \le n$: $|a_{rr}| > \sum_{j \ne r} |a_{rj}|$
A check of strict diagonal dominance is sketched below.

Lecture 9 Slide 21
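A direct translation of the sdd definition into code (dense storage for simplicity):

    #include <cstdio>
    #include <cmath>
    #include <vector>

    // strict diagonal dominance: |a_ii| > sum_{j != i} |a_ij| for all i
    bool sdd(const std::vector<std::vector<double>> &a)
    {
        int n = (int)a.size();
        for (int i = 0; i < n; i++)
        {
            double offdiag = 0.0;
            for (int j = 0; j < n; j++)
                if (j != i) offdiag += std::fabs(a[i][j]);
            if (std::fabs(a[i][i]) <= offdiag) return false;
        }
        return true;
    }

    int main()
    {
        std::vector<std::vector<double>> a = {{4,-1,-1},{-1,4,-1},{-1,-1,4}};
        std::printf("sdd: %s\n", sdd(a) ? "yes" : "no");
    }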

Lecture 10 Slide 26

A very practical nonsingularity criterion
Theorem (Varga, Th. 1.21): Let $A$ be strictly diagonally dominant or irreducibly diagonally dominant. Then $A$ is nonsingular.
If in addition $a_{ii} > 0$ is real for $i = 1 \dots n$, then all real parts of the eigenvalues of $A$ are positive:
$$\mathrm{Re}\,\lambda_i > 0, \quad i = 1 \dots n$$

Lecture 9 Slide 22

Lecture 10 Slide 27

Corollary
Theorem: If $A$ is complex Hermitian or real symmetric, sdd or idd, with positive diagonal entries, it is positive definite.
Proof: All eigenvalues of $A$ are real, and due to the nonsingularity criterion, they must be positive, so $A$ is positive definite.

Lecture 9 Slide 25

Lecture 10 Slide 28

Perron-Frobenius Theorem (1912/1907)
Definition: A real $n$-vector $x$ is
positive ($x > 0$) if all entries of $x$ are positive
nonnegative ($x \ge 0$) if all entries of $x$ are nonnegative
Definition: A real $n \times n$ matrix $A$ is
positive ($A > 0$) if all entries of $A$ are positive
nonnegative ($A \ge 0$) if all entries of $A$ are nonnegative
Theorem (Varga, Th. 2.7): Let $A \ge 0$ be an irreducible $n \times n$ matrix. Then
(i) $A$ has a positive real eigenvalue equal to its spectral radius $\rho(A)$.
(ii) To $\rho(A)$ there corresponds a positive eigenvector $x > 0$.
(iii) $\rho(A)$ increases when any entry of $A$ increases.
(iv) $\rho(A)$ is a simple eigenvalue of $A$.
Proof: See Varga.

Lecture 9 Slide 27

Lecture 10 Slide 29

Perron-Frobenius for general nonnegative matrices
Each $n \times n$ matrix can be brought to the normal form
$$PAP^T = \begin{pmatrix} R_{11} & R_{12} & \cdots & R_{1m} \\ 0 & R_{22} & \cdots & R_{2m} \\ & & \ddots & \vdots \\ 0 & 0 & \cdots & R_{mm} \end{pmatrix}$$
where for $j = 1 \dots m$, either $R_{jj}$ is irreducible or $R_{jj} = (0)$.
Theorem (Varga, Th. 2.20): Let $A \ge 0$ be an $n \times n$ matrix. Then
(i) $A$ has a nonnegative eigenvalue equal to its spectral radius $\rho(A)$. This eigenvalue is positive unless $A$ is reducible and its normal form is strictly upper triangular.
(ii) To $\rho(A)$ there corresponds a nonzero eigenvector $x \ge 0$.
(iii) $\rho(A)$ does not decrease when any entry of $A$ increases.
Proof: See Varga; $\sigma(A) = \bigcup_{j=1}^{m} \sigma(R_{jj})$, apply irreducible Perron-Frobenius to the $R_{jj}$.

Lecture 9 Slide 28

Lecture 10 Slide 30

Jacobi method convergence
Corollary: Let $A$ be sdd or idd, and $D$ its diagonal. Assume that $a_{ii} > 0$ and $a_{ij} \le 0$ for $i \ne j$. Then $\rho(I - D^{-1}A) < 1$, i.e. the Jacobi method converges.
Proof: In this case, $|B| = B$.

Lecture 9 Slide 31

Lecture 10 Slide 31

Regular splittings
$A = M - N$ is a regular splitting if
$M$ is nonsingular
$M^{-1}$ and $N$ are nonnegative, i.e. have nonnegative entries
Regard the iteration $u_{k+1} = M^{-1} N u_k + M^{-1} b$.
We have $I - M^{-1}A = M^{-1}N$.

Lecture 9 Slide 32

Lecture 10 Slide 32

Convergence theorem for regular splittings
Theorem: Assume $A$ is nonsingular, $A^{-1} \ge 0$, and $A = M - N$ is a regular splitting. Then $\rho(M^{-1}N) < 1$.
Proof: Let $G = M^{-1}N$. Then $A = M(I - G)$, therefore $I - G$ is nonsingular. In addition,
$$A^{-1}N = (M(I - M^{-1}N))^{-1} N = (I - M^{-1}N)^{-1} M^{-1} N = (I - G)^{-1} G$$
By Perron-Frobenius (for general matrices), $\rho(G)$ is an eigenvalue of $G$ with a nonnegative eigenvector $x$. Thus,
$$0 \le A^{-1} N x = \frac{\rho(G)}{1 - \rho(G)}\, x$$
Therefore $0 \le \rho(G) \le 1$. As $I - G$ is nonsingular, $\rho(G) < 1$.

Lecture 9 Slide 33

Lecture 10 Slide 33

Convergence rate comparison
Corollary: $\rho(M^{-1}N) = \frac{\tau}{1+\tau}$, where $\tau = \rho(A^{-1}N)$.
Proof: Rearrange $\tau = \frac{\rho(G)}{1 - \rho(G)}$.
Corollary: Let $A^{-1} \ge 0$, and let $A = M_1 - N_1$ and $A = M_2 - N_2$ be regular splittings. If $N_2 \ge N_1 \ge 0$, then $1 > \rho(M_2^{-1} N_2) \ge \rho(M_1^{-1} N_1)$.
Proof: $\tau_2 = \rho(A^{-1}N_2) \ge \rho(A^{-1}N_1) = \tau_1$. But $\frac{\tau}{1+\tau}$ is strictly increasing.

Lecture 9 Slide 34
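Spelled out, the rearrangement behind the first proof is
$$\tau = \frac{\rho}{1-\rho} \iff \tau(1-\rho) = \rho \iff \tau = \rho(1+\tau) \iff \rho = \frac{\tau}{1+\tau},$$
and $\frac{d}{d\tau}\,\frac{\tau}{1+\tau} = \frac{1}{(1+\tau)^2} > 0$ gives the strict monotonicity used in the second proof.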

Lecture 10 Slide 34

M-Matrix definition
Definition: Let $A$ be an $n \times n$ real matrix. $A$ is called M-Matrix if
(i) $a_{ij} \le 0$ for $i \ne j$
(ii) $A$ is nonsingular
(iii) $A^{-1} \ge 0$
Corollary: If $A$ is an M-Matrix, then $A^{-1} > 0$ $\Leftrightarrow$ $A$ is irreducible.
Proof: See Varga.

Lecture 9 Slide 35

Lecture 10 Slide 35

Main practical M-Matrix criterion
Corollary: Let $A$ be sdd or idd. Assume that $a_{ii} > 0$ and $a_{ij} \le 0$ for $i \ne j$. Then $A$ is an M-Matrix.
Proof: We know that $A$ is nonsingular, but we have to show $A^{-1} \ge 0$.
Let $B = I - D^{-1}A$. Then $\rho(B) < 1$, therefore $I - B$ is nonsingular.
We have for $k > 0$:
$$I - B^{k+1} = (I - B)(I + B + B^2 + \dots + B^k)$$
$$(I - B)^{-1}(I - B^{k+1}) = I + B + B^2 + \dots + B^k$$
The left hand side converges for $k \to \infty$ to $(I - B)^{-1}$, therefore
$$(I - B)^{-1} = \sum_{k=0}^{\infty} B^k$$
As $B \ge 0$, we have $(I - B)^{-1} = A^{-1}D \ge 0$. As $D > 0$ we must have $A^{-1} \ge 0$.

Lecture 9 Slide 36

Lecture 10 Slide 36

Application
Let $A$ be an M-Matrix. Assume $A = D - E - F$ (diagonal, strict lower and strict upper triangular parts).
Jacobi method: $M = D$ is nonsingular, $M^{-1} \ge 0$; $N = E + F$ is nonnegative $\Rightarrow$ convergence.
Gauss-Seidel: $M = D - E$ is an M-Matrix, as $A \le M$ and $M$ has non-positive off-diagonal entries; $N = F \ge 0$ $\Rightarrow$ convergence.
Comparison: $N_J \ge N_{GS} \ge 0$ $\Rightarrow$ Gauss-Seidel converges faster.
More general: block Jacobi, block Gauss-Seidel etc.

Lecture 9 Slide 37

Lecture 10 Slide 37

Intermediate Summary
Given some matrix, we now have some nice recipes to establish nonsingularity and iterative method convergence:
Check if the matrix is irreducible. This is mostly the case for elliptic and parabolic PDEs.
Check if the matrix is strictly or irreducibly diagonally dominant. If yes, it is in addition nonsingular.
Check if the main diagonal entries are positive and the off-diagonal entries are nonpositive. If yes, in addition, the matrix is an M-Matrix, its inverse is nonnegative, and elementary iterative methods converge.
These criteria do not depend on the symmetry of the matrix!

Lecture 9 Slide 38

Incomplete LU factorizations (ILU)
Idea (Varga, Buleev, 1960):
fix a predefined zero pattern
apply the standard LU factorization method, but calculate only those elements which do not correspond to the given zero pattern
Result: incomplete LU factors $L$, $U$ and remainder $R$:
$$A = LU - R$$
Problem: with the complete LU factorization procedure, for any nonsingular matrix the method is stable, i.e. zero pivots never occur. Is this true for the incomplete LU factorization as well?

Lecture 10 Slide 38

Comparison of M-Matrices
Theorem (Saad, Th. 1.33): Let $A$, $B$ be $n \times n$ matrices such that
(i) $A \le B$
(ii) $b_{ij} \le 0$ for $i \ne j$.
Then, if $A$ is an M-Matrix, so is $B$.
Proof: For the diagonal parts, one has $D_B \ge D_A > 0$ and $D_A - A \ge D_B - B \ge 0$. Therefore
$$I - D_A^{-1}A \ge D_A^{-1}(D_B - B) \ge D_B^{-1}(D_B - B) = I - D_B^{-1}B =: G \ge 0$$
Perron-Frobenius $\Rightarrow$ $\rho(G) = \rho(I - D_B^{-1}B) \le \rho(I - D_A^{-1}A) < 1$
$\Rightarrow$ $I - G$ is nonsingular. From the proof of the M-Matrix criterion, $(D_B^{-1}B)^{-1} = (I - G)^{-1} = \sum_{k=0}^{\infty} G^k \ge 0$. As $D_B > 0$, we get $B^{-1} \ge 0$.

Lecture 10 Slide 39

M-Property propagation in Gaussian elimination
Theorem (Ky Fan; Saad, Th. 1.10): Let $A$ be an M-Matrix. Then the matrix $A_1$ obtained from the first step of Gaussian elimination is an M-Matrix.
Proof: One has
$$a^1_{ij} = a_{ij} - \frac{a_{i1} a_{1j}}{a_{11}}, \qquad a_{ij}, a_{i1}, a_{1j} \le 0, \; a_{11} > 0 \;\Rightarrow\; a^1_{ij} \le 0 \text{ for } i \ne j$$
$A = L_1 A_1$ with
$$L_1 = \begin{pmatrix} 1 & & & \\ \frac{a_{21}}{a_{11}} & 1 & & \\ \vdots & & \ddots & \\ \frac{a_{n1}}{a_{11}} & & & 1 \end{pmatrix}$$
$\Rightarrow$ $A_1$ is nonsingular; $L_1$ is nonsingular, $L_1^{-1}$ nonnegative.
Let $e_1 \dots e_n$ be the unit vectors. Then $A_1^{-1} e_1 = \frac{1}{a_{11}} e_1 \ge 0$. For $j > 1$, $A_1^{-1} e_j = A^{-1} L_1 e_j = A^{-1} e_j \ge 0$.

Lecture 10 Slide 40

Stability of ILU
Theorem (Saad, Th. 10.2): If $A$ is an M-Matrix, then the algorithm to compute the incomplete LU factorization with a given nonzero pattern,
$$A = LU - R,$$
is stable. Moreover, $A = LU - R$ is a regular splitting.

Lecture 10 Slide 41

Stability of ILU decomposition II
Proof: Let $\tilde{A}_1 = A_1 + R_1 = L_1 A + R_1$, where $R_1$ is a nonnegative matrix which occurs from dropping some off-diagonal entries from $A_1$ (here $L_1$ denotes the first elimination matrix). Thus $\tilde{A}_1 \ge A_1$, and $\tilde{A}_1$ is an M-Matrix. We can repeat this recursively:
$$\tilde{A}_k = A_k + R_k = L_k \tilde{A}_{k-1} + R_k = L_k L_{k-1} \tilde{A}_{k-2} + L_k R_{k-1} + R_k = L_k L_{k-1} \cdots L_1 A + L_k L_{k-1} \cdots L_2 R_1 + \dots + R_k$$
Let $L = (L_{n-1} \cdots L_1)^{-1}$, $U = \tilde{A}_{n-1}$. Then $U = L^{-1} A + S$ with
$$S = L_{n-1} L_{n-2} \cdots L_2 R_1 + \dots + R_{n-1} = L_{n-1} L_{n-2} \cdots L_2 (R_1 + R_2 + \dots + R_{n-1})$$
Let $R = R_1 + R_2 + \dots + R_{n-1}$; then $A = LU - R$, where $U^{-1} L^{-1}$ and $R$ are nonnegative.

Lecture 10 Slide 42

ILU(0)
Special case of ILU: ignore any fill-in.
Representation:
$$M = (\tilde{D} - E)\, \tilde{D}^{-1} (\tilde{D} - F)$$
$\tilde{D}$ is a diagonal matrix (which can be stored in one vector) which is calculated by the incomplete factorization algorithm.
Setup:

    for(int i=0;i<n;i++)
        d(i)=a(i,i);
    for(int i=0;i<n;i++)
    {
        d(i)=1.0/d(i);
        for (int j=i+1;j<n;j++)
            d(j)=d(j)-a(i,j)*d(i)*a(j,i);
    }

Lecture 10 Slide 43

ILU(0)
Solve $Mu = v$:

    // forward sweep: solve (D~ - E) w = v, scaled by D~
    for(int i=0;i<n;i++)
    {
        double x=0.0;
        for (int j=0;j<i;j++)
            x=x+a(i,j)*u(j);
        u(i)=d(i)*(v(i)-x);
    }
    // backward sweep: solve (D~ - F) u = D~ w
    for(int i=n-1;i>=0;i--)
    {
        double x=0.0;
        for(int j=i+1;j<n;j++)
            x=x+a(i,j)*u(j);
        u(i)=u(i)-d(i)*x;
    }

Lecture 10 Slide 44
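To see setup and solve working together: a self-contained sketch (not from the lecture) which uses this pair as preconditioner in the simple iteration $u \leftarrow u + M^{-1}(b - Au)$ for a densely stored 1D Laplacian. For a tridiagonal matrix ILU(0) drops no fill-in, i.e. it coincides with the complete LU factorization, so this example converges in one step; for matrices with fill-in (e.g. 2D problems) it becomes a genuine preconditioner:

    #include <cstdio>
    #include <cmath>
    #include <vector>

    int main()
    {
        const int n = 10;
        // 1D Laplacian (tridiagonal), dense storage for simplicity
        std::vector<std::vector<double>> a(n, std::vector<double>(n, 0.0));
        for (int i = 0; i < n; i++)
        {
            a[i][i] = 2.0;
            if (i > 0)     a[i][i-1] = -1.0;
            if (i < n - 1) a[i][i+1] = -1.0;
        }

        // ILU(0) setup: d holds the inverted diagonal of D~
        std::vector<double> d(n);
        for (int i = 0; i < n; i++) d[i] = a[i][i];
        for (int i = 0; i < n; i++)
        {
            d[i] = 1.0/d[i];
            for (int j = i + 1; j < n; j++) d[j] -= a[i][j]*d[i]*a[j][i];
        }

        std::vector<double> b(n, 1.0), u(n, 0.0), r(n), w(n);
        for (int iter = 1; iter <= 100; iter++)
        {
            // residual r = b - A u and its norm
            double norm = 0.0;
            for (int i = 0; i < n; i++)
            {
                r[i] = b[i];
                for (int j = 0; j < n; j++) r[i] -= a[i][j]*u[j];
                norm += r[i]*r[i];
            }
            if (std::sqrt(norm) < 1e-12)
            {
                std::printf("converged after %d iterations\n", iter - 1);
                break;
            }
            // w = M^{-1} r: forward sweep with (D~ - E), backward sweep with (D~ - F)
            for (int i = 0; i < n; i++)
            {
                double x = 0.0;
                for (int j = 0; j < i; j++) x += a[i][j]*w[j];
                w[i] = d[i]*(r[i] - x);
            }
            for (int i = n - 1; i >= 0; i--)
            {
                double x = 0.0;
                for (int j = i + 1; j < n; j++) x += a[i][j]*w[j];
                w[i] -= d[i]*x;
            }
            for (int i = 0; i < n; i++) u[i] += w[i];
        }
    }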

ILU(0)
Generally better convergence properties than Jacobi or Gauss-Seidel; one can develop block variants.
Alternatives:
ILUM ("modified"): add ignored off-diagonal entries to $\tilde{D}$
ILUT: zero pattern calculated dynamically based on a drop tolerance
Dependence on ordering
Can be parallelized using graph coloring
Not much theory: experiment for particular systems
I recommend it as the default initial guess for a sensible preconditioner
Incomplete Cholesky: symmetric variant of ILU

Lecture 10 Slide 45
