LINEAR SYSTEMS (11) Intensive Computation
1 LINEAR SYSTEMS (11) Intensive Computation 2017-18, prof. Annalisa Massini, Viviana Arrigoni
2 EXACT METHODS: 1. GAUSSIAN ELIMINATION, 2. CHOLESKY DECOMPOSITION. ITERATIVE METHODS: 1. JACOBI, 2. GAUSS-SEIDEL
3 CHOLESKY DECOMPOSITION Direct method to solve linear systems Ax=b. If A is symmetric and positive-definite, it can be written as the product of a lower triangular matrix L and its transpose, A = LL^T. L is the Cholesky factor of A. More notions on positive-definite matrices: a matrix is positive-definite iff its eigenvalues are positive, iff its leading principal minors are positive (Sylvester's criterion). Eigenvalues: solutions of the characteristic polynomial (the spectrum). Leading principal minors: determinants of the square submatrices in the top-left corner, along the principal diagonal.
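Sylvester's criterion can be checked directly on small matrices. A minimal Python sketch (the test matrices below are illustrative, not taken from the slides):

```python
# Sylvester's criterion: a symmetric matrix is positive-definite iff all its
# leading principal minors are positive.
def det(M):
    # Laplace expansion along the first row (fine for small matrices).
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def is_positive_definite(A):
    n = len(A)
    return all(det([row[:k] for row in A[:k]]) > 0 for k in range(1, n + 1))

A = [[2.0, -1.0], [-1.0, 2.0]]   # leading minors: 2 and 3 -> positive-definite
B = [[1.0, 2.0], [2.0, 1.0]]     # leading minors: 1 and -3 -> not positive-definite
print(is_positive_definite(A), is_positive_definite(B))  # True False
```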
4 BRIEF NOTE ON THE GEOMETRIC MEANING OF POSITIVE-DEFINITE MATRICES Since we have already met positive-definite matrices, let's try to visualise them in R^2. For A = [ a c ; c b ]:
(x y) [ a c ; c b ] (x y)^T = ax^2 + 2cxy + by^2, which is:
> 0 for all (x, y) ≠ (0, 0) if A is positive-definite,
≥ 0 for all (x, y) ≠ (0, 0) if A is positive-semidefinite,
of varying sign if A is indefinite.
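The classification above can be probed by sampling the quadratic form; a small sketch with two illustrative matrices:

```python
# q(x, y) = a x^2 + 2 c x y + b y^2 is the quadratic form of A = [[a, c], [c, b]].
def q(a, b, c, x, y):
    return a * x * x + 2 * c * x * y + b * y * y

samples = [(1, 0), (0, 1), (1, 1), (1, -1), (-2, 3)]
# Positive-definite example (a=2, b=2, c=-1): q > 0 on every nonzero sample.
print(all(q(2, 2, -1, x, y) > 0 for x, y in samples))         # True
# Indefinite example (a=1, b=-1, c=0): q takes both signs (a saddle).
vals = [q(1, -1, 0, x, y) for x, y in samples]
print(any(v > 0 for v in vals) and any(v < 0 for v in vals))  # True
```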
5 POSITIVE-DEFINITE: ELLIPTIC PARABOLOID
6 POSITIVE-SEMIDEFINITE: PARABOLIC CYLINDER
7 INDEFINITE: HYPERBOLIC PARABOLOID (SADDLE)
8 CHOLESKY DECOMPOSITION 1st step: Given a symmetric square matrix A ∈ R^(n×n), we can write it in block form as follows:
A = [ a_{1,1} A_{2,1}^T ; A_{2,1} A_{2,2} ],
where a_{1,1} ∈ R, A_{2,1} ∈ R^(n-1) and A_{2,2} ∈ R^((n-1)×(n-1)). Now we want to write A as LL^T, where L is lower triangular and can be written in the same block form:
L = [ l_{1,1} 0 ; L_{2,1} L_{2,2} ],
where L_{2,2} is again lower triangular.
9 CHOLESKY DECOMPOSITION So A = LL^T is:
[ a_{1,1} A_{2,1}^T ; A_{2,1} A_{2,2} ] = [ l_{1,1} 0 ; L_{2,1} L_{2,2} ] [ l_{1,1} L_{2,1}^T ; 0 L_{2,2}^T ] = [ l_{1,1}^2  l_{1,1} L_{2,1}^T ; l_{1,1} L_{2,1}  L_{2,1} L_{2,1}^T + L_{2,2} L_{2,2}^T ].
Remember! We are looking for the entries of L! At this point we can compute the following portions of L:
l_{1,1} = sqrt(a_{1,1}),  L_{2,1} = (1 / l_{1,1}) A_{2,1}.
Basically we get the values of the entries of the first column of L. Then we compute the following:
A_{2,2} = L_{2,1} L_{2,1}^T + L_{2,2} L_{2,2}^T  ⇒  L_{2,2} L_{2,2}^T = A_{2,2} − L_{2,1} L_{2,1}^T =: A^(2)
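The first elimination step can be written down directly; a sketch (the example matrix is chosen for illustration):

```python
# One step of the block factorization: compute l_{1,1}, L_{2,1} and the
# reduced matrix A^(2) = A_{2,2} - L_{2,1} L_{2,1}^T.
import math

def cholesky_step(A):
    l11 = math.sqrt(A[0][0])
    L21 = [A[i][0] / l11 for i in range(1, len(A))]
    A2 = [[A[i][j] - L21[i - 1] * L21[j - 1]
           for j in range(1, len(A))] for i in range(1, len(A))]
    return l11, L21, A2

A = [[4.0, 2.0], [2.0, 5.0]]
l11, L21, A2 = cholesky_step(A)
print(l11, L21, A2)   # 2.0 [1.0] [[4.0]]
```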
10 CHOLESKY DECOMPOSITION 2nd step: repeat step 1 on the matrix A^(2) = L_{2,2} L_{2,2}^T ∈ R^((n-1)×(n-1)):
[ a^(2)_{1,1} A^(2)_{2,1}^T ; A^(2)_{2,1} A^(2)_{2,2} ] = [ l_{2,2} 0 ; L_{3,2} L_{3,3} ] [ l_{2,2} L_{3,2}^T ; 0 L_{3,3}^T ],
where a^(2)_{1,1} ∈ R, A^(2)_{2,1}, L_{3,2} ∈ R^(n-2) and A^(2)_{2,2}, L_{3,3} ∈ R^((n-2)×(n-2)). As before:
l_{2,2} = sqrt(a^(2)_{1,1}),  L_{3,2} = (1 / l_{2,2}) A^(2)_{2,1},
A^(2)_{2,2} = L_{3,2} L_{3,2}^T + L_{3,3} L_{3,3}^T  ⇒  L_{3,3} L_{3,3}^T = A^(2)_{2,2} − L_{3,2} L_{3,2}^T =: A^(3).
Iterate until the last submatrix A^(n) is considered. At every step the size of A^(k) decreases by one and the k-th column of L is computed.
11 CHOLESKY DECOMPOSITION Once we get L, the linear system Ax=b can be written as LL^T x = b, so:
- Solve Ly = b using forward substitution;
- Solve L^T x = y using backward substitution.
Why does A have to be positive-definite? If A^(k) is positive-definite, then:
- a^(k)_{1,1} > 0 (it is its first principal minor), hence l_{k,k} = sqrt(a^(k)_{1,1}) is well defined, for all k = 1, ..., n;
- A^(k+1) = A^(k)_{2,2} − (1 / a^(k)_{1,1}) A^(k)_{2,1} A^(k)_{2,1}^T is positive-definite.
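Forward and backward substitution are each a single loop; a sketch in Python (the triangular system below is illustrative):

```python
def forward_sub(L, b):
    # Solve L y = b with L lower triangular, proceeding from the top row down.
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][j] * y[j] for j in range(i))) / L[i][i]
    return y

def backward_sub(U, y):
    # Solve U x = y with U upper triangular, proceeding from the bottom row up.
    n = len(y)
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

L = [[2.0, 0.0], [1.0, 1.0]]
U = [[2.0, 1.0], [0.0, 1.0]]      # U = L^T
y = forward_sub(L, [2.0, 2.0])    # y = [1.0, 1.0]
x = backward_sub(U, y)            # x = [0.0, 1.0]
print(y, x)
```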
12 AN EXAMPLE Take
A = [ 25 15 −5 ; 15 18 0 ; −5 0 11 ].
Writing A = LL^T entrywise:
[ l_{1,1} 0 0 ; l_{2,1} l_{2,2} 0 ; l_{3,1} l_{3,2} l_{3,3} ] [ l_{1,1} l_{2,1} l_{3,1} ; 0 l_{2,2} l_{3,2} ; 0 0 l_{3,3} ] = [ l_{1,1}^2  l_{1,1}l_{2,1}  l_{1,1}l_{3,1} ; l_{1,1}l_{2,1}  l_{2,1}^2+l_{2,2}^2  l_{2,1}l_{3,1}+l_{2,2}l_{3,2} ; l_{1,1}l_{3,1}  l_{2,1}l_{3,1}+l_{2,2}l_{3,2}  l_{3,1}^2+l_{3,2}^2+l_{3,3}^2 ].
Thus l_{1,1} = sqrt(25) = 5 and L_{2,1} = (1/5)(15, −5)^T = (3, −1)^T, which is the first column of L. The matrix for the next iteration is:
A^(2) = L_{2,2} L_{2,2}^T = A_{2,2} − L_{2,1} L_{2,1}^T = [ 18 0 ; 0 11 ] − [ 9 −3 ; −3 1 ] = [ 9 3 ; 3 10 ].
13 AN EXAMPLE step 2 Repeat the same procedure on the matrix A^(2) = [ 9 3 ; 3 10 ]:
[ l_{2,2} 0 ; l_{3,2} l_{3,3} ] [ l_{2,2} l_{3,2} ; 0 l_{3,3} ] = [ l_{2,2}^2  l_{2,2}l_{3,2} ; l_{2,2}l_{3,2}  l_{3,2}^2+l_{3,3}^2 ].
Thus l_{2,2} = sqrt(9) = 3 and L_{3,2} = l_{3,2} = 3/3 = 1, so the second column of L is (0, 3, 1)^T, while:
A^(3) := 10 − l_{3,2}^2 = 10 − 1 = 9.
14 AN EXAMPLE step 3 A^(3) = 9 = L_{3,3} L_{3,3}^T = l_{3,3}^2, hence l_{3,3} = sqrt(9) = 3. So the last column of L is (0, 0, 3)^T. In conclusion:
A = LL^T = [ 5 0 0 ; 3 3 0 ; −1 1 3 ] [ 5 3 −1 ; 0 3 1 ; 0 0 3 ].
15 AN EXAMPLE Assume b = (1, 1, 1)^T.
Forward substitution, Ly = b:
y_1 = 1/5,  y_2 = (1 − 3y_1)/3 = 2/15,  y_3 = (1 + y_1 − y_2)/3 = 16/45.
Backward substitution, L^T x = y:
x_3 = y_3/3 = 16/135,  x_2 = (y_2 − x_3)/3 = 2/405,  x_1 = (y_1 − 3x_2 + x_3)/5 = 41/675.
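The worked example can be checked numerically. Here the example matrix is assumed to be the one consistent with the recoverable values on these slides (a_{1,1} = 25, Cholesky diagonal (5, 3, 3)); multiplying the factor by its transpose should reproduce A:

```python
# Check that L L^T reproduces the example matrix A.
A = [[25.0, 15.0, -5.0],
     [15.0, 18.0,  0.0],
     [-5.0,  0.0, 11.0]]
L = [[ 5.0, 0.0, 0.0],
     [ 3.0, 3.0, 0.0],
     [-1.0, 1.0, 3.0]]
LLT = [[sum(L[i][k] * L[j][k] for k in range(3)) for j in range(3)]
       for i in range(3)]
print(LLT == A)   # True
```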
16 CHOLESKY DECOMPOSITION Algorithm: Given a linear system Ax=b, where A is symmetric and positive-definite and its size is n:
Initialize L as an n×n zero matrix;
for i = 1:n
    L(i,i) = sqrt(A(i,i));
    if i < n
        L(i+1:n,i) = A(i+1:n,i) / L(i,i);
        A(i+1:n,i+1:n) = A(i+1:n,i+1:n) - L(i+1:n,i) * L(i+1:n,i)^T;
Solve Ly=b (forward substitution);
Solve L^T x=y (backward substitution);
O(n^3) - expensive!
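The pseudocode translates almost line for line into Python; a sketch of the outer-product form:

```python
import math

def cholesky(A):
    # In-place outer-product Cholesky, following the pseudocode above.
    # A is assumed symmetric and positive-definite.
    n = len(A)
    A = [row[:] for row in A]            # work on a copy
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = math.sqrt(A[i][i])
        for r in range(i + 1, n):
            L[r][i] = A[r][i] / L[i][i]
        for r in range(i + 1, n):        # rank-one update of the trailing block
            for c in range(i + 1, n):
                A[r][c] -= L[r][i] * L[c][i]
    return L

A = [[25.0, 15.0, -5.0], [15.0, 18.0, 0.0], [-5.0, 0.0, 11.0]]
print(cholesky(A))   # [[5.0, 0.0, 0.0], [3.0, 3.0, 0.0], [-1.0, 1.0, 3.0]]
```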
17 CHOLESKY FOR SPARSE MATRICES If A is sparse, so may be L. If L is sparse, the cost of the factorization is less than n^3/3 and it depends on n, the number of nonzero elements, and the sparsity pattern. Fill-in example:
[ 1 a^T ; a I ] [ x ; y ] = [ b ; c ].
Factorization: [ 1 a^T ; a I ] = [ 1 0 ; a L_{2,2} ] [ 1 a^T ; 0 L_{2,2}^T ], with L_{2,2} L_{2,2}^T = I − a a^T.
Even though A is nonzero only on the diagonal and in its first row and column, I − a a^T (and hence L_{2,2}) is in general dense: this is fill-in.
18 CHOLESKY FOR SPARSE MATRICES This can be solved by shifting the columns of A one position to the left (and swapping the entries of the vectors x and b accordingly), getting no fill-in at all:
[ I a ; a^T 1 ] [ y ; x ] = [ c ; b ].
Factorization: [ I a ; a^T 1 ] = [ I 0 ; a^T sqrt(1 − a^T a) ] [ I a ; 0 sqrt(1 − a^T a) ].
The factor is as sparse as the lower triangle of the permuted matrix: no fill-in.
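The effect can be seen by counting nonzeros in the factor for the two orderings of the "arrow" matrix; a sketch with a small illustrative vector a:

```python
import math

def cholesky(A):
    # Plain outer-product Cholesky (A symmetric positive-definite).
    n = len(A)
    A = [row[:] for row in A]
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = math.sqrt(A[i][i])
        for r in range(i + 1, n):
            L[r][i] = A[r][i] / L[i][i]
        for r in range(i + 1, n):
            for c in range(i + 1, n):
                A[r][c] -= L[r][i] * L[c][i]
    return L

def nnz(L):
    return sum(1 for row in L for v in row if abs(v) > 1e-12)

a = [0.3, 0.3, 0.3]   # illustrative, with a^T a < 1 so both orderings are SPD
n = len(a)
# Dense row/column first: A = [[1, a^T], [a, I]] -> the factor fills in completely.
arrow_first = [[1.0] + a] + [[a[i]] + [1.0 if i == j else 0.0 for j in range(n)]
                             for i in range(n)]
# Dense row/column last: A = [[I, a], [a^T, 1]] -> no fill-in at all.
arrow_last = [[1.0 if i == j else 0.0 for j in range(n)] + [a[i]]
              for i in range(n)] + [a + [1.0]]
print(nnz(cholesky(arrow_first)), nnz(cholesky(arrow_last)))  # 10 7
```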
19 CHOLESKY FOR SPARSE MATRICES We have seen that permuting the entries of A can prevent fill-in during factorization, so one can factorize a permutation of A instead of A itself. Permutation matrix: a square matrix having exactly one 1 in every row and column (and zeros elsewhere). If P is a permutation matrix, then PA permutes A's rows, AP permutes A's columns, and P^T AP permutes A's rows and columns. Permutation matrices are orthogonal: P^T = P^(-1). Instead of factorizing A, one can factorize P^T AP in order to prevent fill-in. P affects the sparsity pattern of A and it is not known a priori, but there are heuristic methods to select good permutation matrices.
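Permutation matrices in action; a small hand-built sketch showing that PA reorders rows and that P is orthogonal:

```python
# P A permutes rows, A P permutes columns, and P^T P = I (P is orthogonal).
P = [[0, 1, 0],
     [0, 0, 1],
     [1, 0, 0]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(col) for col in zip(*X)]

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(matmul(P, A))             # [[4, 5, 6], [7, 8, 9], [1, 2, 3]]
print(matmul(transpose(P), P))  # identity
```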
20 FACTS ON CHOLESKY DECOMPOSITION Speaking of methods to avoid pivoting: since the transpose method (A^T A) and the exponential method produce positive-definite matrices, the Cholesky decomposition may be applied to them. Gaussian elimination vs. Cholesky decomposition: Gaussian elimination can be applied to any matrix, even to non-square ones, while the Cholesky decomposition can be applied only to (square) symmetric and positive-definite matrices. They have the same asymptotic computational cost, O(n^3), but Cholesky is faster by a factor of 2.
21 EXACT METHODS: 1. GAUSSIAN ELIMINATION, 2. CHOLESKY DECOMPOSITION. ITERATIVE METHODS: 1. JACOBI, 2. GAUSS-SEIDEL
22 ITERATIVE METHODS Iterative methods for solving a linear system Ax=b consist in finding a sequence of approximate solutions x^(1), x^(2), etc., starting from an initial approximate solution x^(0), until convergence to the exact solution. Iteration can be interrupted when the desired precision is reached, for example with precision threshold ε = 10^(-3). Suppose the error is computed element-wise as the absolute value of the difference between the exact and the approximate solutions, |x_i − x^(k)_i|. It follows that an approximate solution x^(k) is accepted if |x_i − x^(k)_i| < ε for every i. Assume the solution of Ax=b is x = (1, 1, 1)^T; the approximate solution x^(k) = (0.9, 1, 0.9)^T would satisfy a threshold of ε = 0.2, but not ε = 10^(-3).
23 ITERATIVE METHODS Iterative methods are usually faster than exact methods, especially if the coefficient matrix is large and sparse. One can play with the threshold to find the desired tradeoff between speed and accuracy (increasing the precision threshold ε reduces the number of iterations needed to reach an acceptable approximate solution, but at the same time it produces less accurate solutions). On the other hand, for some problems convergence may be very slow, or the iteration may not converge at all.
24 EXACT METHODS: 1. GAUSSIAN ELIMINATION, 2. CHOLESKY DECOMPOSITION. ITERATIVE METHODS: 1. JACOBI, 2. GAUSS-SEIDEL
25 JACOBI METHOD Strictly diagonally dominant matrix: the absolute value of each diagonal entry is strictly greater than the sum of the absolute values of the non-diagonal entries of the corresponding row:
|a_{i,i}| > Σ_{j=1, j≠i}^{n} |a_{i,j}|  for all i = 1, ..., n.
The Jacobi method always succeeds if A or A^T is strictly diagonally dominant.*
* If the system is not strictly diagonally dominant, it may still converge. We will consider strictly diagonally dominant linear systems.
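The condition is a one-liner to check; a sketch (the test matrices are illustrative):

```python
# Strict diagonal dominance: |a_ii| > sum_{j != i} |a_ij| for every row i.
def is_strictly_diagonally_dominant(A):
    n = len(A)
    return all(abs(A[i][i]) > sum(abs(A[i][j]) for j in range(n) if j != i)
               for i in range(n))

print(is_strictly_diagonally_dominant([[9, 1, 1], [2, 10, 3], [3, 4, 11]]))  # True
print(is_strictly_diagonally_dominant([[1, 2], [2, 1]]))                     # False
```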
26 JACOBI METHOD Given the linear system:
a_{1,1} x_1 + a_{1,2} x_2 + ... + a_{1,n} x_n = b_1
a_{2,1} x_1 + a_{2,2} x_2 + ... + a_{2,n} x_n = b_2
...
a_{n,1} x_1 + a_{n,2} x_2 + ... + a_{n,n} x_n = b_n
where A is strictly diagonally dominant, it can be rewritten as follows by isolating x_i in the i-th equation:
x_1 = b_1/a_{1,1} − (a_{1,2}/a_{1,1}) x_2 − ... − (a_{1,n}/a_{1,1}) x_n
x_2 = b_2/a_{2,2} − (a_{2,1}/a_{2,2}) x_1 − ... − (a_{2,n}/a_{2,2}) x_n
...
x_n = b_n/a_{n,n} − (a_{n,1}/a_{n,n}) x_1 − (a_{n,2}/a_{n,n}) x_2 − ...
27 JACOBI METHOD Let x^(0) = (x^(0)_1, ..., x^(0)_n) be the initial approximate solution (commonly x^(0) is the zero vector). Find the approximate solution x^(1) by substituting x^(0) into the right-hand side of the linear system:
x^(1)_1 = b_1/a_{1,1} − (a_{1,2}/a_{1,1}) x^(0)_2 − ... − (a_{1,n}/a_{1,1}) x^(0)_n
x^(1)_2 = b_2/a_{2,2} − (a_{2,1}/a_{2,2}) x^(0)_1 − ... − (a_{2,n}/a_{2,2}) x^(0)_n
...
x^(1)_n = b_n/a_{n,n} − (a_{n,1}/a_{n,n}) x^(0)_1 − (a_{n,2}/a_{n,n}) x^(0)_2 − ...
Then use the approximate solution x^(1) just computed to find x^(2), by substituting x^(1) into the right-hand side of the linear system.
28 JACOBI METHOD Iterate. At the k-th step one finds the approximate solution x^(k) by substituting the previous one into the right-hand side of the linear system:
x^(k)_i = b_i/a_{i,i} − Σ_{j≠i} (a_{i,j}/a_{i,i}) x^(k−1)_j,  i = 1, ..., n.
If A is strictly diagonally dominant, the produced approximate solutions are more and more accurate.
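One Jacobi sweep in Python, using only the previous iterate on the right-hand side (the small strictly diagonally dominant test system is illustrative):

```python
def jacobi_step(A, b, x_prev):
    # x_i^(k) = (b_i - sum_{j != i} a_ij * x_j^(k-1)) / a_ii
    n = len(b)
    return [(b[i] - sum(A[i][j] * x_prev[j] for j in range(n) if j != i))
            / A[i][i] for i in range(n)]

A = [[9.0, 1.0, 1.0], [2.0, 10.0, 3.0], [3.0, 4.0, 11.0]]
b = [10.0, 19.0, 0.0]
x1 = jacobi_step(A, b, [0.0, 0.0, 0.0])
print(x1)   # [10/9, 1.9, 0.0]
```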
29 JACOBI - STOPPING CRITERIA When should one stop iterating? When the error produced is small enough. Different ways to compute the error at each step:
- e^(k) := ||x^(k+1) − x^(k)||: the error is the difference between the last and the previous solutions.
- e^(k) := ||x − x^(k)||: the error is the difference between the exact solution and the approximate solution (but the exact solution is usually unknown).
- e^(k) := ||x^(k+1) − x^(k)|| / ||x^(k+1)||: the error is the rate of change between the last and the previous solutions.
Here ||.|| is the l2 norm: ||x||_2 = sqrt(x_1^2 + ... + x_n^2). So the iterations are repeated while e^(k) >= ε.
30 AN EXAMPLE The linear system
9x + y + z = 10
2x + 10y + 3z = 19
3x + 4y + 11z = 0
has solution (x, y, z)^T = (1, 2, −1)^T. Since A is strictly diagonally dominant, the system can be solved with Jacobi. Compute the error as e^(k) := ||x^(k+1) − x^(k)|| and let x^(0) = (0, 0, 0) be the initial approximate solution.
x^(1) = (1/9)(10 − y^(0) − z^(0)) = 10/9
y^(1) = (1/10)(19 − 2x^(0) − 3z^(0)) = 19/10
z^(1) = (1/11)(0 − 3x^(0) − 4y^(0)) = 0
So the first approximate solution is x^(1): (x, y, z)^T = (10/9, 19/10, 0)^T. The error is e^(0) = 2.2.
31 AN EXAMPLE Substituting x^(1) we get:
x^(2) = (1/9)(10 − y^(1) − z^(1)) = (1/9)(10 − 19/10) = 0.9
y^(2) = (1/10)(19 − 2x^(1) − 3z^(1)) = (1/10)(19 − 20/9) = 1.678
z^(2) = (1/11)(0 − 3x^(1) − 4y^(1)) = (1/11)(−10/3 − 38/5) = −0.994
and the error is e^(1) = 1.04. One more iteration:
x^(3) = (1/9)(10 − y^(2) − z^(2)) = 1.035
y^(3) = (1/10)(19 − 2x^(2) − 3z^(2)) = 2.018
z^(3) = (1/11)(0 − 3x^(2) − 4y^(2)) = −0.856
The error is e^(2) = 0.392.
32 AN EXAMPLE Assume the required precision is ε = 10^(-3). Then after 11 iterations the error goes below ε and the approximate solution is
(x, y, z)^T ≈ (1.0001, 2.0001, −0.9999)^T,
within about 10^(-4) of the exact solution.
33 JACOBI METHOD Algorithm: Given Ax=b, where A is strictly diagonally dominant:
Initialize the first approximate solution x0 and an array x, and set err=inf and ε;
while err >= ε
    for i = 1:n
        x(i) = b(i);
        for j = 1:n
            if j != i
                x(i) = x(i) - A(i,j) * x0(j);
        x(i) = x(i) / A(i,i);
    update err;
    x0 = x;
O(#of iterations × n^2)
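The whole algorithm, with the l2-norm stopping criterion, as a Python sketch (the test system is illustrative):

```python
import math

def jacobi(A, b, eps=1e-3, max_iter=1000):
    # Jacobi iteration, stopping when ||x_new - x_old||_2 < eps.
    n = len(b)
    x = [0.0] * n
    for k in range(max_iter):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i))
                 / A[i][i] for i in range(n)]
        err = math.sqrt(sum((x_new[i] - x[i]) ** 2 for i in range(n)))
        x = x_new
        if err < eps:
            return x, k + 1
    return x, max_iter

A = [[9.0, 1.0, 1.0], [2.0, 10.0, 3.0], [3.0, 4.0, 11.0]]
b = [10.0, 19.0, 0.0]
x, iters = jacobi(A, b)
print([round(v, 3) for v in x], iters)
```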
34 JACOBI METHOD If k is the number of iterations (of the while loop), then the number of operations executed by the Jacobi algorithm is kn^2. Gaussian elimination requires n^3/3 operations. So it is more convenient to use the Jacobi method instead of Gaussian elimination if kn^2 < n^3/3, that is, when k < n/3. For this reason it is important to assess the minimum number of iterations before deciding whether to apply Jacobi. Such a number is the smallest k that satisfies:
k > log(ε(1 − λ)/δ) / log(λ)
where
λ = max_{i=1,...,n} Σ_{j≠i} |a_{i,j}| / |a_{i,i}|  and  δ = max_{i=1,...,n} |x^(1)_i − x^(0)_i|.
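Evaluating this bound is cheap once one Jacobi step has been done; a sketch on an illustrative strictly diagonally dominant system (the bound as reconstructed above is assumed):

```python
import math

def min_jacobi_iterations(A, x0, x1, eps):
    # Smallest k from the bound k > log(eps * (1 - lam) / delta) / log(lam),
    # with lam the maximum row ratio and delta the largest first-step change.
    n = len(A)
    lam = max(sum(abs(A[i][j]) for j in range(n) if j != i) / abs(A[i][i])
              for i in range(n))
    delta = max(abs(u - v) for u, v in zip(x1, x0))
    return math.log(eps * (1 - lam) / delta) / math.log(lam)

A = [[9.0, 1.0, 1.0], [2.0, 10.0, 3.0], [3.0, 4.0, 11.0]]
k = min_jacobi_iterations(A, [0.0, 0.0, 0.0], [10.0 / 9.0, 1.9, 0.0], 1e-3)
print(k)   # roughly 19: here lam = 7/11 and delta = 1.9
```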
35 JACOBI - MATRICIAL FORM Given Ax=b, where A is strictly diagonally dominant, one can split A into its diagonal part D, its strictly lower triangular part L and its strictly upper triangular part U:
A = D + L + U.
So the linear system can be rewritten as:
(D + L + U)x = b.
If A is diagonally dominant, D is invertible, so:
Dx + (L + U)x = b  ↔  x + D^(-1)(L + U)x = D^(-1)b  ↔  x = −D^(-1)(L + U)x + D^(-1)b.
Matricial form of Jacobi: x^(k+1) = −D^(-1)(L + U)x^(k) + D^(-1)b,  k = 0, 1, 2, ...
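The same update written in matrix form, splitting A = D + L + U explicitly (a sketch; inverting D is trivial because it is diagonal, and the test system is illustrative):

```python
def split(A):
    # Split A = D + L + U: diagonal, strictly lower, strictly upper parts.
    n = len(A)
    D = [[A[i][j] if i == j else 0.0 for j in range(n)] for i in range(n)]
    L = [[A[i][j] if i > j else 0.0 for j in range(n)] for i in range(n)]
    U = [[A[i][j] if i < j else 0.0 for j in range(n)] for i in range(n)]
    return D, L, U

def jacobi_matrix_step(A, b, x):
    # x_next = -D^{-1} (L + U) x + D^{-1} b
    D, L, U = split(A)
    n = len(b)
    LUx = [sum((L[i][j] + U[i][j]) * x[j] for j in range(n)) for i in range(n)]
    return [(b[i] - LUx[i]) / D[i][i] for i in range(n)]

A = [[9.0, 1.0, 1.0], [2.0, 10.0, 3.0], [3.0, 4.0, 11.0]]
b = [10.0, 19.0, 0.0]
print(jacobi_matrix_step(A, b, [0.0, 0.0, 0.0]))   # [10/9, 1.9, 0.0]
```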
36 EXACT METHODS: 1. GAUSSIAN ELIMINATION, 2. CHOLESKY DECOMPOSITION. ITERATIVE METHODS: 1. JACOBI, 2. GAUSS-SEIDEL
37 GAUSS-SEIDEL Iterative method, very similar to Jacobi's. It always converges if A is strictly diagonally dominant, or symmetric and positive-definite. The components already updated in the current iteration are used immediately:
x^(k)_1 = b_1/a_{1,1} − (a_{1,2}/a_{1,1}) x^(k−1)_2 − (a_{1,3}/a_{1,1}) x^(k−1)_3 − ... − (a_{1,n}/a_{1,1}) x^(k−1)_n
x^(k)_2 = b_2/a_{2,2} − (a_{2,1}/a_{2,2}) x^(k)_1 − (a_{2,3}/a_{2,2}) x^(k−1)_3 − ... − (a_{2,n}/a_{2,2}) x^(k−1)_n
x^(k)_3 = b_3/a_{3,3} − (a_{3,1}/a_{3,3}) x^(k)_1 − (a_{3,2}/a_{3,3}) x^(k)_2 − ... − (a_{3,n}/a_{3,3}) x^(k−1)_n
...
x^(k)_n = b_n/a_{n,n} − (a_{n,1}/a_{n,n}) x^(k)_1 − (a_{n,2}/a_{n,n}) x^(k)_2 − ... − (a_{n,n−1}/a_{n,n}) x^(k)_{n−1}
* If the system is not strictly diagonally dominant nor positive-definite, it may still converge. We will consider strictly diagonally dominant or positive-definite linear systems.
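One Gauss-Seidel sweep in Python: updated components are used immediately, so the update can be done in place (the test system is illustrative):

```python
def gauss_seidel_step(A, b, x):
    # In-place sweep: x[j] for j < i has already been updated in this sweep,
    # x[j] for j > i still holds the previous iterate.
    n = len(b)
    x = x[:]
    for i in range(n):
        s = sum(A[i][j] * x[j] for j in range(n) if j != i)
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[9.0, 1.0, 1.0], [2.0, 10.0, 3.0], [3.0, 4.0, 11.0]]
b = [10.0, 19.0, 0.0]
print(gauss_seidel_step(A, b, [0.0, 0.0, 0.0]))   # [10/9, 151/90, -904/990]
```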
38 AN EXAMPLE The same linear system we used before:
9x + y + z = 10
2x + 10y + 3z = 19
3x + 4y + 11z = 0
Compute the error as e^(k) := ||x^(k+1) − x^(k)|| and let x^(0) = (0, 0, 0) be the initial approximate solution.
x^(1) = (1/9)(10 − y^(0) − z^(0)) = 10/9
y^(1) = (1/10)(19 − 2x^(1) − 3z^(0)) = (1/10)(19 − 20/9) = 151/90
z^(1) = (1/11)(0 − 3x^(1) − 4y^(1)) = (1/11)(−10/3 − 302/45) = −0.913
error = 2.21
39 AN EXAMPLE Second iteration:
x^(2) = (1/9)(10 − y^(1) − z^(1)) = (1/9)(10 − 1.678 + 0.913) = 1.026
y^(2) = (1/10)(19 − 2x^(2) − 3z^(1)) = (1/10)(19 − 2.052 + 2.739) = 1.969
z^(2) = (1/11)(0 − 3x^(2) − 4y^(2)) = (1/11)(−3.078 − 7.875) = −0.996
error = 0.314
40 AN EXAMPLE If the error threshold is 10^(-3), after 5 iterations we get:
(x, y, z)^T ≈ (1.0000, 2.0000, −1.0000)^T
Computed error: of the order of 10^(-4). In iteration 4, the approximate solution was:
(x, y, z)^T ≈ (1.0002, 2.0000, −1.0001)^T
Computed error: 0.0034.
41 GAUSS-SEIDEL - MATRICIAL FORM Write A as the sum of its diagonal part D, its strictly lower triangular part L and its strictly upper triangular part U:
A = D + L + U.
Then the linear system can be written as:
(D + L + U)x = b,  i.e.  (D + L)x + Ux = b.
So the matricial form of Gauss-Seidel is:
(D + L)x^(k+1) = b − Ux^(k),  for all k = 0, 1, 2, ...
At each step, x^(k+1) is obtained by solving a lower triangular system by forward substitution.
42 GAUSS-SEIDEL Algorithm: Given Ax=b, where A is strictly diagonally dominant or positive-definite:
Initialize x0, x, arrays of length n. Set err=inf and ε;
while err >= ε
    for i = 1:n
        x(i) = b(i);
        for j = 1:i-1
            x(i) = x(i) - A(i,j) * x(j);
        for j = i+1:n
            x(i) = x(i) - A(i,j) * x0(j);
        x(i) = x(i) / A(i,i);
    update err;
    x0 = x;
O(#of iterations × n^2)
43 JACOBI VS. GAUSS-SEIDEL On several classes of matrices, Gauss-Seidel converges twice as fast as Jacobi (meaning that at every iteration, the number of exact solution digits that Gauss-Seidel fixes is roughly twice as large as Jacobi's). As soon as improved entries of the approximate solution are computed, they are immediately used within the same iteration step k. On the other hand, Jacobi can be parallelized, while Gauss-Seidel is inherently sequential. The same stopping criteria work for both methods.
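A side-by-side iteration count on an illustrative strictly diagonally dominant system (a rough sketch, not a benchmark):

```python
import math

def jacobi_step(A, b, x):
    n = len(b)
    return [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

def gauss_seidel_step(A, b, x):
    n = len(b)
    x = x[:]
    for i in range(n):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
    return x

def count_iterations(step, A, b, eps=1e-3, max_iter=1000):
    # Iterate until ||x_new - x_old||_2 < eps; return the iteration count.
    x = [0.0] * len(b)
    for k in range(1, max_iter + 1):
        x_new = step(A, b, x)
        if math.sqrt(sum((u - v) ** 2 for u, v in zip(x_new, x))) < eps:
            return k
        x = x_new
    return max_iter

A = [[9.0, 1.0, 1.0], [2.0, 10.0, 3.0], [3.0, 4.0, 11.0]]
b = [10.0, 19.0, 0.0]
print(count_iterations(jacobi_step, A, b),
      count_iterations(gauss_seidel_step, A, b))
```

On this system Gauss-Seidel needs roughly half the iterations of Jacobi, in line with the rule of thumb above.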
More informationComputational Linear Algebra
Computational Linear Algebra PD Dr. rer. nat. habil. Ralf Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2017/18 Part 2: Direct Methods PD Dr.
More informationJordan Journal of Mathematics and Statistics (JJMS) 5(3), 2012, pp A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS
Jordan Journal of Mathematics and Statistics JJMS) 53), 2012, pp.169-184 A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS ADEL H. AL-RABTAH Abstract. The Jacobi and Gauss-Seidel iterative
More informationCHAPTER 5. Basic Iterative Methods
Basic Iterative Methods CHAPTER 5 Solve Ax = f where A is large and sparse (and nonsingular. Let A be split as A = M N in which M is nonsingular, and solving systems of the form Mz = r is much easier than
More informationLecture 11. Linear systems: Cholesky method. Eigensystems: Terminology. Jacobi transformations QR transformation
Lecture Cholesky method QR decomposition Terminology Linear systems: Eigensystems: Jacobi transformations QR transformation Cholesky method: For a symmetric positive definite matrix, one can do an LU decomposition
More informationLecture 12. Linear systems of equations II. a 13. a 12. a 14. a a 22. a 23. a 34 a 41. a 32. a 33. a 42. a 43. a 44)
1 Introduction Lecture 12 Linear systems of equations II We have looked at Gauss-Jordan elimination and Gaussian elimination as ways to solve a linear system Ax=b. We now turn to the LU decomposition,
More informationMA3232 Numerical Analysis Week 9. James Cooley (1926-)
MA umerical Analysis Week 9 James Cooley (96-) James Cooley is an American mathematician. His most significant contribution to the world of mathematics and digital signal processing is the Fast Fourier
More information1.Chapter Objectives
LU Factorization INDEX 1.Chapter objectives 2.Overview of LU factorization 2.1GAUSS ELIMINATION AS LU FACTORIZATION 2.2LU Factorization with Pivoting 2.3 MATLAB Function: lu 3. CHOLESKY FACTORIZATION 3.1
More informationLecture Note 7: Iterative methods for solving linear systems. Xiaoqun Zhang Shanghai Jiao Tong University
Lecture Note 7: Iterative methods for solving linear systems Xiaoqun Zhang Shanghai Jiao Tong University Last updated: December 24, 2014 1.1 Review on linear algebra Norms of vectors and matrices vector
More informationSolving Linear Systems of Equations
1 Solving Linear Systems of Equations Many practical problems could be reduced to solving a linear system of equations formulated as Ax = b This chapter studies the computational issues about directly
More informationSolving Linear Systems
Solving Linear Systems Iterative Solutions Methods Philippe B. Laval KSU Fall 207 Philippe B. Laval (KSU) Linear Systems Fall 207 / 2 Introduction We continue looking how to solve linear systems of the
More information7. LU factorization. factor-solve method. LU factorization. solving Ax = b with A nonsingular. the inverse of a nonsingular matrix
EE507 - Computational Techniques for EE 7. LU factorization Jitkomut Songsiri factor-solve method LU factorization solving Ax = b with A nonsingular the inverse of a nonsingular matrix LU factorization
More informationSTAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 13
STAT 309: MATHEMATICAL COMPUTATIONS I FALL 208 LECTURE 3 need for pivoting we saw that under proper circumstances, we can write A LU where 0 0 0 u u 2 u n l 2 0 0 0 u 22 u 2n L l 3 l 32, U 0 0 0 l n l
More informationNotes for CS542G (Iterative Solvers for Linear Systems)
Notes for CS542G (Iterative Solvers for Linear Systems) Robert Bridson November 20, 2007 1 The Basics We re now looking at efficient ways to solve the linear system of equations Ax = b where in this course,
More informationLecture 9: Numerical Linear Algebra Primer (February 11st)
10-725/36-725: Convex Optimization Spring 2015 Lecture 9: Numerical Linear Algebra Primer (February 11st) Lecturer: Ryan Tibshirani Scribes: Avinash Siravuru, Guofan Wu, Maosheng Liu Note: LaTeX template
More informationLecture 2 INF-MAT : , LU, symmetric LU, Positve (semi)definite, Cholesky, Semi-Cholesky
Lecture 2 INF-MAT 4350 2009: 7.1-7.6, LU, symmetric LU, Positve (semi)definite, Cholesky, Semi-Cholesky Tom Lyche and Michael Floater Centre of Mathematics for Applications, Department of Informatics,
More informationMATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2
MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a
More informationGaussian Elimination for Linear Systems
Gaussian Elimination for Linear Systems Tsung-Ming Huang Department of Mathematics National Taiwan Normal University October 3, 2011 1/56 Outline 1 Elementary matrices 2 LR-factorization 3 Gaussian elimination
More informationlecture 2 and 3: algorithms for linear algebra
lecture 2 and 3: algorithms for linear algebra STAT 545: Introduction to computational statistics Vinayak Rao Department of Statistics, Purdue University August 27, 2018 Solving a system of linear equations
More informationCOURSE Iterative methods for solving linear systems
COURSE 0 4.3. Iterative methods for solving linear systems Because of round-off errors, direct methods become less efficient than iterative methods for large systems (>00 000 variables). An iterative scheme
More informationMATH2210 Notebook 2 Spring 2018
MATH2210 Notebook 2 Spring 2018 prepared by Professor Jenny Baglivo c Copyright 2009 2018 by Jenny A. Baglivo. All Rights Reserved. 2 MATH2210 Notebook 2 3 2.1 Matrices and Their Operations................................
More informationIntroduction to Mobile Robotics Compact Course on Linear Algebra. Wolfram Burgard, Cyrill Stachniss, Maren Bennewitz, Diego Tipaldi, Luciano Spinello
Introduction to Mobile Robotics Compact Course on Linear Algebra Wolfram Burgard, Cyrill Stachniss, Maren Bennewitz, Diego Tipaldi, Luciano Spinello Vectors Arrays of numbers Vectors represent a point
More informationLinear Algebra in Actuarial Science: Slides to the lecture
Linear Algebra in Actuarial Science: Slides to the lecture Fall Semester 2010/2011 Linear Algebra is a Tool-Box Linear Equation Systems Discretization of differential equations: solving linear equations
More informationOR MSc Maths Revision Course
OR MSc Maths Revision Course Tom Byrne School of Mathematics University of Edinburgh t.m.byrne@sms.ed.ac.uk 15 September 2017 General Information Today JCMB Lecture Theatre A, 09:30-12:30 Mathematics revision
More informationNumerical Analysis: Solutions of System of. Linear Equation. Natasha S. Sharma, PhD
Mathematical Question we are interested in answering numerically How to solve the following linear system for x Ax = b? where A is an n n invertible matrix and b is vector of length n. Notation: x denote
More informationIterative Methods for Solving A x = b
Iterative Methods for Solving A x = b A good (free) online source for iterative methods for solving A x = b is given in the description of a set of iterative solvers called templates found at netlib: http
More informationApplied Linear Algebra
Applied Linear Algebra Gábor P. Nagy and Viktor Vígh University of Szeged Bolyai Institute Winter 2014 1 / 262 Table of contents I 1 Introduction, review Complex numbers Vectors and matrices Determinants
More informationIntroduction to Scientific Computing
(Lecture 5: Linear system of equations / Matrix Splitting) Bojana Rosić, Thilo Moshagen Institute of Scientific Computing Motivation Let us resolve the problem scheme by using Kirchhoff s laws: the algebraic
More informationEAD 115. Numerical Solution of Engineering and Scientific Problems. David M. Rocke Department of Applied Science
EAD 115 Numerical Solution of Engineering and Scientific Problems David M. Rocke Department of Applied Science Taylor s Theorem Can often approximate a function by a polynomial The error in the approximation
More informationECE133A Applied Numerical Computing Additional Lecture Notes
Winter Quarter 2018 ECE133A Applied Numerical Computing Additional Lecture Notes L. Vandenberghe ii Contents 1 LU factorization 1 1.1 Definition................................. 1 1.2 Nonsingular sets
More informationChapter 3 Transformations
Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases
More informationGauss-Seidel method. Dr. Motilal Panigrahi. Dr. Motilal Panigrahi, Nirma University
Gauss-Seidel method Dr. Motilal Panigrahi Solving system of linear equations We discussed Gaussian elimination with partial pivoting Gaussian elimination was an exact method or closed method Now we will
More information