Computational Economics and Finance
Part II: Linear Equations (Spring 2016)

Outline

- Direct methods: back substitution, LU and other decompositions
- Error analysis and condition numbers
- Iterative methods: (Gauss-)Jacobi and Gauss-Seidel
- Acceleration and stabilization methods
- Application: Google PageRank
Linear Equations

System of linear equations $Ax = b$, where

$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix} \in \mathbb{R}^{n \times n}$$

is an $n \times n$ matrix and $b = (b_1, b_2, \ldots, b_n)' \in \mathbb{R}^n$ is a column vector of dimension $n$.

Importance of Solution Methods for Linear Systems

- Some important problems are linear.
- Many nonlinear solution methods are sequences of linear problems.
- Solution methods for linear systems illustrate general ideas and concepts for solving systems of equations.
Triangular Systems

The matrix $A$ is lower triangular if all nonzero elements lie on or below the diagonal:

$$A = \begin{pmatrix} a_{11} & & & \\ a_{21} & a_{22} & & \\ \vdots & & \ddots & \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}$$

- Upper triangular: all nonzero entries on or above the diagonal
- Triangular: either lower or upper triangular
- Diagonal: all nonzero entries are on the diagonal

A triangular matrix is nonsingular if and only if all diagonal elements are nonzero. Lower (upper) triangular matrices are closed under addition, multiplication and inversion.

Direct Method: Back Substitution

Suppose $A$ is lower triangular and nonsingular. Back substitution (more precisely, forward substitution, since the recursion runs from the first equation down):

$$x_1 = \frac{b_1}{a_{11}}, \qquad x_k = \frac{b_k - \sum_{j=1}^{k-1} a_{kj} x_j}{a_{kk}}, \quad k = 2, 3, \ldots, n.$$

Suppose $A$ is upper triangular and nonsingular. Back substitution:

$$x_n = \frac{b_n}{a_{nn}}, \qquad x_k = \frac{b_k - \sum_{j=k+1}^{n} a_{kj} x_j}{a_{kk}}, \quad k = n-1, n-2, \ldots, 2, 1.$$
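A minimal Matlab sketch of the two recursions (the helper names are my own; in practice each function would live in its own file):

    function x = forward_sub(A, b)
    % Solve Ax = b when A is lower triangular and nonsingular.
    n = length(b);
    x = zeros(n, 1);
    x(1) = b(1) / A(1,1);
    for k = 2:n
        x(k) = (b(k) - A(k,1:k-1) * x(1:k-1)) / A(k,k);
    end
    end

    function x = back_sub(A, b)
    % Solve Ax = b when A is upper triangular and nonsingular.
    n = length(b);
    x = zeros(n, 1);
    x(n) = b(n) / A(n,n);
    for k = n-1:-1:1
        x(k) = (b(k) - A(k,k+1:n) * x(k+1:n)) / A(k,k);
    end
    end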
Direct Method: LU Decomposition

LU decomposition: if $A$ is nonsingular, then $A = LU$, where $L$ is lower triangular and $U$ is upper triangular. $L$ and $U$ are computed by Gaussian elimination.

$$Ax = b \iff LUx = b \iff Lz = b \text{ with } Ux = z$$

- Step 1: Solve $Lz = b$ for $z$ by forward substitution.
- Step 2: Solve $Ux = z$ for $x$ by back substitution.

LU Decomposition with Pivoting

Gaussian elimination may require pivoting: $LU = PA$ for a permutation matrix $P$.

Matlab: [L, U] = lu(A) or [L, U, P] = lu(A).

Solving linear equations in Matlab with Gaussian elimination: x = A\b (backslash or left division), or equivalently mldivide(A, b).
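A short sketch of the two-step solve via Matlab's lu (the 3-by-3 matrix is my own example):

    % Solve Ax = b via P*A = L*U, i.e. L*z = P*b, then U*x = z.
    A = [2 1 1; 4 3 3; 8 7 9];
    b = [1; 2; 3];
    [L, U, P] = lu(A);        % permuted LU factorization: P*A = L*U
    z = L \ (P * b);          % forward substitution (L lower triangular)
    x = U \ z;                % back substitution (U upper triangular)
    norm(A*x - b)             % residual at round-off level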
Direct Method: Cholesky Decomposition

$A$ is symmetric and positive definite (example: a covariance matrix). Then $A = LL'$, where $L$ ($L'$) is lower (upper) triangular. $L$ is called the Cholesky factor.

Special case of LU factorization: $L'$ is upper triangular and takes the role of $U$.

Matlab: C = chol(A) (careful, C is upper triangular, i.e., C'C = A); a usage sketch follows the next slide.

Error Analysis

Direct methods can be exact but are susceptible to round-off errors. Iterative methods (Jacobi, Gauss-Seidel) are approximate by nature and typically suffer from truncation error. Question: How large is the error?

- Exact solution $x$ such that $Ax = b$ (exactly)
- Numerical solution $\hat{x}$ such that $A\hat{x} \approx b$
- Unknown true error $e = \hat{x} - x$
- Known residual $r = A\hat{x} - b = A\hat{x} - Ax = Ae$

The residual $r$ does not by itself indicate the magnitude of the error in the solution $\hat{x}$ for $x$.
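Referring back to the Cholesky slide above, a minimal usage sketch (the SPD matrix is my own example):

    % Solve Ax = b for symmetric positive definite A via the Cholesky factor.
    A = [4 2; 2 3];           % SPD example (my own choice)
    b = [1; 2];
    C = chol(A);              % upper triangular with C'*C = A
    x = C \ (C' \ b);         % two triangular solves
    norm(A*x - b)             % residual at round-off level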
Example in Matlab

    A = [ ; ];              % matrix entries did not survive transcription
    b = [2; 2];
    A*[1.02; 1.02] - b      % residual of one candidate solution
    A*[2; 0] - b            % residual of a very different candidate
    A \ b

The point of the example: two very different candidate vectors can both leave small residuals, so the residual alone cannot identify the accurate solution. A working stand-in with an explicit matrix appears after the norms slide below.

Vector Norms

Norms $\|\cdot\| : \mathbb{R}^n \to \mathbb{R}_+$ for vectors $x \in \mathbb{R}^n$:

- $\ell_1$ norm: $\|x\|_1 = \sum_{i=1}^n |x_i|$ (1-norm)
- $\ell_2$ norm: $\|x\|_2 = \left( \sum_{i=1}^n x_i^2 \right)^{1/2} = (x'x)^{1/2}$ (Euclidean norm)
- $\ell_\infty$ norm: $\|x\|_\infty = \max_{i=1,\ldots,n} |x_i|$ (infinity norm)

Measures for the length or size of a vector.
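Since the slide's matrix was lost, here is a comparable stand-in of my own (a classic nearly singular 2-by-2 system): a vector far from the true solution can still leave a tiny residual.

    A = [0.780 0.563; 0.913 0.659];   % nearly singular: det(A) = 1e-6
    b = [0.217; 0.254];               % exact solution is x = [1; -1]
    A*[0.999; -1.001] - b             % small error, residual of order 1e-3
    A*[0.341; -0.087] - b             % huge error, residual of order 1e-6
    A \ b                             % returns a vector close to [1; -1]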
Norms in Mathematica

    x = {3, 4};
    Norm[x, 1]              (* 7 *)
    Norm[x, 2]              (* 5 *)
    Norm[x, Infinity]       (* 4 *)
    Norm[x]                 (* defaults to the 2-norm: 5 *)

Matrix Norms

For a vector norm $\|\cdot\|$ on $\mathbb{R}^n$, the induced matrix norm of an $n \times n$ matrix $A$ is defined by

$$\|A\| = \max_{x \neq 0} \frac{\|Ax\|}{\|x\|} = \max_{\|x\| = 1} \|Ax\|$$

- $\|A\|_1 = \max_{j=1,\ldots,n} \sum_{i=1}^n |a_{ij}|$ (matrix 1-norm: maximum column sum)
- $\|A\|_2 = \sqrt{\rho(A'A)}$ (matrix 2-norm)
- $\|A\|_\infty = \max_{i=1,\ldots,n} \sum_{j=1}^n |a_{ij}|$ (matrix infinity norm: maximum row sum)

Spectral radius: $\rho(B) = \max \{ |\lambda| : \lambda \text{ is an eigenvalue of the matrix } B \}$, i.e., the largest eigenvalue in absolute value.

For any induced norm: $\rho(A) \leq \|A\|$.
Frobenius Norm of a Matrix

$$\|A\|_F = \left( \sum_{i=1}^n \sum_{j=1}^n a_{ij}^2 \right)^{1/2}$$

Not induced by a vector norm. Induced matrix norms have the property $\|Ax\| \leq \|A\| \, \|x\|$.

Matrix Norms in Matlab

    A = [1 2; 3 4];
    [norm(A,1), norm(A,2), norm(A,inf), norm(A,'fro'), norm(A)]
    % ans = 6.0000  5.4650  7.0000  5.4772  5.4650
    sqrt(eig(A'*A))
    % ans = 0.3660
    %       5.4650
Matrix Norms in Mathematica

    A = {{1, 2}, {3, 4}};
    N[{Norm[A,1], Norm[A,2], Norm[A,Infinity], Norm[A,"Frobenius"], Norm[A]}]
    {6., 5.46499, 7., 5.47723, 5.46499}
    N[Sqrt[Eigenvalues[Transpose[A].A]]]
    {5.46499, 0.365995}

Error Bounds

Recall the error $e = \hat{x} - x$ and the residual $r = A\hat{x} - b = A\hat{x} - Ax = Ae$.

- $Ae = r \implies \|A\| \, \|e\| \geq \|r\| \implies \|e\| \geq \|r\| / \|A\|$
- $e = A^{-1} r \implies \|e\| \leq \|A^{-1}\| \, \|r\|$
- $x = A^{-1} b \implies \|x\| \leq \|A^{-1}\| \, \|b\|$
- $Ax = b \implies \|A\| \, \|x\| \geq \|b\| \implies \|x\| \geq \|b\| / \|A\|$

Combining these gives a lower and an upper bound on the relative error $\|e\| / \|x\|$:

$$\frac{1}{\|A\| \, \|A^{-1}\|} \frac{\|r\|}{\|b\|} \;\leq\; \frac{\|e\|}{\|x\|} \;\leq\; \|A\| \, \|A^{-1}\| \frac{\|r\|}{\|b\|}$$
Condition Number

Condition number: $\text{cond}(A) = \|A\| \, \|A^{-1}\|$.

Lower and upper bound on the relative error $\|e\| / \|x\|$:

$$\frac{1}{\text{cond}(A)} \frac{\|r\|}{\|b\|} \;\leq\; \frac{\|e\|}{\|x\|} \;\leq\; \text{cond}(A) \frac{\|r\|}{\|b\|}$$

The condition number of the matrix $A$ bounds the relationship between the relative error in $b$, $\|r\| / \|b\|$, and the relative error in the solution, $\|e\| / \|x\|$:

$$\frac{1}{\text{cond}(A)} \;\leq\; \frac{\|e\| / \|x\|}{\|r\| / \|b\|} \;\leq\; \text{cond}(A)$$

It bounds the percentage error in $x$ relative to the percentage error in $b$, i.e., the elasticity of the output error with respect to the input error.

Rule of thumb: as the condition number increases by an order of magnitude, one decimal digit of accuracy is lost (see the sketch after the next slide).

Drawback of the condition number: it is norm dependent.

Spectral condition number: the ratio of the largest to the smallest eigenvalue (in absolute value),

$$\text{cond}^*(A) = \rho(A) \, \rho(A^{-1})$$

It is independent of the norm and a lower bound on the condition number: since $\rho(A) \leq \|A\|$ for any induced norm,

$$\text{cond}^*(A) \leq \text{cond}(A)$$

$\text{cond}^*(A)$ is often used as an estimate for the condition number. If $A$ is singular, the condition number is infinite.
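A sketch of the rule of thumb using Matlab's built-in, notoriously ill-conditioned Hilbert matrices (the exact digits of the output will vary with the platform):

    % As cond(A) grows by orders of magnitude, digits of accuracy disappear.
    for n = 4:2:12
        A = hilb(n);                      % n-by-n Hilbert matrix
        x = ones(n, 1);
        b = A * x;                        % right-hand side with known solution
        err = norm(A\b - x) / norm(x);    % relative error of the computed solution
        fprintf('n = %2d  cond = %9.2e  rel. error = %9.2e\n', n, cond(A), err)
    end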
Condition Number in Matlab

    A = [1 2; 3 4];
    [cond(A,1), cond(A,2), cond(A,inf), cond(A,'fro'), cond(A)]
    % ans = 21.0000  14.9330  21.0000  15.0000  14.9330
    max(abs(eig(A))) / min(abs(eig(A)))
    % ans = 14.4307  (spectral condition number)

Error Analysis: Example

    A = [ ; ];              % the ill-conditioned matrix did not survive transcription
    cond(A)
    b = [2; 2];
    r = A*[2; 0] - b
    cond(A)*norm(r)/norm(b) % upper bound on the relative error
Error Analysis: Example

Exercise 2.3 considers a system of two linear equations in $x$ and $y$ whose exact solution is simple, yet whose computed double-precision solution is badly wrong: the coefficient matrix is extremely ill-conditioned. (The specific coefficients, the exact and computed solutions, and the value of cond(A) were lost in transcription.)

Rescaling and Pre-conditioning

Condition numbers are sensitive to scaling. The linear system $x = a$, $my = b$ for $x, y \in \mathbb{R}$ is trivial to solve, yet its coefficient matrix

$$\begin{pmatrix} 1 & 0 \\ 0 & m \end{pmatrix}$$

has spectral condition number $m$ (for $m \geq 1$). Define $z = my$; then the problem becomes $x = a$, $z = b$, and the condition number of the new system is 1.

A change in units ("rescaling") or a linear transformation ("preconditioning") can improve conditioning.
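A tiny Matlab check of the rescaling argument (the value of m is my own choice):

    m = 1e8;
    A = diag([1, m]);       % coefficient matrix of x = a, m*y = b
    cond(A)                 % = 1e8
    S = diag([1, 1/m]);     % change of units z = m*y
    cond(A * S)             % = 1 after rescaling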
Iterative Solution Methods

Iterative solution approach for solving a linear system: rewrite $Ax = b$ as a linear fixed-point iteration with the $n \times n$ identity matrix $I$,

$$x = (I - A)x + b$$

Richardson iteration:

$$x^{(k+1)} = (I - A) x^{(k)} + b$$

Advantage: can handle very large problems.

Stationary Iterative Methods

More generally: solve the system of linear equations $x = Mx + c$ by generating a sequence $\{x^{(k)}\}$ via the linear fixed-point iteration

$$x^{(k+1)} = M x^{(k)} + c$$

with initial iterate $x^{(0)}$ and $n \times n$ iteration matrix $M$. The error after $k+1$ iterations is

$$e^{(k+1)} = x^{(k+1)} - x = M x^{(k)} + c - (Mx + c) = M \left( x^{(k)} - x \right) = M e^{(k)} = \ldots = M^{k+1} e^{(0)}$$

and $e^{(k+1)} = M^{k+1} e^{(0)} \to 0$ for arbitrary $e^{(0)}$ if and only if $\rho(M) < 1$.

Theorem: The linear fixed-point iteration converges to $x = (I - M)^{-1} c$ for all $c \in \mathbb{R}^n$ and all initial iterates $x^{(0)} \in \mathbb{R}^n$ if and only if $\rho(M) < 1$.
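A minimal Richardson iteration in Matlab (the matrix is my own, chosen so that $\rho(I - A) < 1$):

    A = [1.0 0.3; 0.2 1.0];
    b = [1; 1];
    M = eye(2) - A;
    max(abs(eig(M)))        % spectral radius ~ 0.24 < 1, so the iteration converges
    x = zeros(2, 1);        % initial iterate
    for k = 1:50
        x = M * x + b;      % x_{k+1} = (I - A) x_k + b
    end
    norm(A*x - b)           % essentially zero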
Linear Equations

$Ax = b$ written out:

$$\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}$$

The $i$th equation in the system of linear equations is $\sum_{j=1}^n a_{ij} x_j = b_i$. If $a_{ii} \neq 0$ we can isolate the variable $x_i$:

$$x_i = \frac{1}{a_{ii}} \left( b_i - \sum_{j \neq i} a_{ij} x_j \right)$$

Jacobi Iterative Method

Idea: replace the system of linear equations with a sequence of single-variable linear equations.

Given the $k$th iterate $x^{(k)} = (x_1^{(k)}, x_2^{(k)}, \ldots, x_n^{(k)})$, the Jacobi iteration is

$$x_i^{(k+1)} = \frac{1}{a_{ii}} \left( b_i - \sum_{j \neq i} a_{ij} x_j^{(k)} \right), \quad i = 1, \ldots, n$$

Delayed replacement: no $x_i^{(k+1)}$ is used until the entire $(k+1)$th iterate $x^{(k+1)}$ has been computed. Iterations continue until some stopping rule is satisfied.
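One possible vectorized Jacobi sweep in Matlab (a sketch with a fixed iteration count in place of a stopping rule):

    function x = jacobi(A, b, x, iters)
    % Jacobi iteration with delayed replacement: every component of the
    % new iterate is computed from the previous iterate only.
    d = diag(A);                 % diagonal entries a_ii
    R = A - diag(d);             % off-diagonal part of A
    for k = 1:iters
        x = (b - R * x) ./ d;    % update all components simultaneously
    end
    end

For a diagonally dominant A (see the splitting slides below), a call such as jacobi(A, b, zeros(n,1), 100) converges to A\b.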
Gauss-Seidel Iterative Method

Idea: replace the system of linear equations with a sequence of single-variable linear equations and use new information immediately.

Given the $k$th iterate $x^{(k)} = (x_1^{(k)}, x_2^{(k)}, \ldots, x_n^{(k)})$, compute the new iterate for $x_1$ from the first equation:

$$x_1^{(k+1)} = \frac{1}{a_{11}} \left( b_1 - a_{12} x_2^{(k)} - a_{13} x_3^{(k)} - \ldots - a_{1n} x_n^{(k)} \right)$$

Use $x_1^{(k+1)}$ immediately to compute $x_2^{(k+1)}$:

$$x_2^{(k+1)} = \frac{1}{a_{22}} \left( b_2 - a_{21} x_1^{(k+1)} - a_{23} x_3^{(k)} - \ldots - a_{2n} x_n^{(k)} \right)$$

In general, the Gauss-Seidel iteration is (a Matlab sketch follows the next slide)

$$x_i^{(k+1)} = \frac{1}{a_{ii}} \left( b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k+1)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k)} \right), \quad i = 1, \ldots, n.$$

Iterative Methods: Jacobi and Gauss-Seidel

- The Gauss-Seidel method is sensitive to the matching between variables and equations and to the ordering of the equations.
- The Gauss-Seidel method requires less computer memory than the Jacobi method.
- Block Jacobi/Gauss-Seidel iteration: combine a direct method to solve a subset of equations with an iterative method for the entire system.
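A matching Gauss-Seidel sweep (same hedges as for the Jacobi sketch above):

    function x = gauss_seidel(A, b, x, iters)
    % Gauss-Seidel: component i is updated using the already-updated
    % components 1..i-1 of the current sweep.
    n = length(b);
    for k = 1:iters
        for i = 1:n
            s = A(i,1:i-1) * x(1:i-1) + A(i,i+1:n) * x(i+1:n);
            x(i) = (b(i) - s) / A(i,i);
        end
    end
    end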
Operator Splitting Approach

Transform the problem into another problem with identical solutions where fixed-point iteration is easy. Split $A$ into two operators, $A = N - P$; then

$$Ax = b \iff (N - P)x = b \iff Nx = b + Px$$

and if $N$ is easily invertible the iteration becomes

$$x^{(k+1)} = N^{-1} b + \left( N^{-1} P \right) x^{(k)}$$

Compute $N^{-1} b$ and $N^{-1} P$ only once. Convergence for $\rho(N^{-1} P) < 1$. A generic sketch follows the next slide.

Operator Splitting and Jacobi Iteration

$$N = \begin{pmatrix} a_{11} & & \\ & \ddots & \\ & & a_{nn} \end{pmatrix}, \qquad P = -\begin{pmatrix} 0 & a_{12} & \cdots & a_{1n} \\ a_{21} & 0 & \cdots & a_{2n} \\ \vdots & & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & 0 \end{pmatrix}$$

$N$ is a diagonal matrix and thus easy to invert. Convergence criterion:

$$\rho(N^{-1} P) \leq \|N^{-1} P\|_\infty = \max_{1 \leq i \leq n} \sum_{j \neq i} \frac{|a_{ij}|}{|a_{ii}|} < 1$$

This is equivalent to (row) diagonal dominance of the matrix $A$:

$$\sum_{j \neq i} |a_{ij}| < |a_{ii}|, \quad i = 1, 2, \ldots, n$$
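A generic splitting iteration in Matlab, instantiated with the Jacobi choice N = diag(A) (the diagonally dominant example matrix is my own):

    A = [4 1; 1 3];  b = [1; 2];
    N = diag(diag(A));            % Jacobi splitting: N = diagonal part of A
    P = N - A;                    % so that A = N - P
    c = N \ b;  M = N \ P;        % computed only once
    max(abs(eig(M)))              % ~ 0.29 < 1: diagonal dominance guarantees convergence
    x = zeros(2, 1);
    for k = 1:50
        x = M*x + c;              % x_{k+1} = (N^{-1} P) x_k + N^{-1} b
    end
    norm(A*x - b)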
Operator Splitting and Gauss-Seidel Iteration

$$N = \begin{pmatrix} a_{11} & & & \\ a_{21} & a_{22} & & \\ \vdots & & \ddots & \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}, \qquad P = -\begin{pmatrix} 0 & a_{12} & \cdots & a_{1n} \\ & 0 & \cdots & a_{2n} \\ & & \ddots & \vdots \\ & & & 0 \end{pmatrix}$$

The same sufficient condition for convergence applies.

Theorem: Let $A$ be an $n \times n$ matrix that is diagonally dominant. Then $A$ is nonsingular and both the Jacobi iteration and the Gauss-Seidel iteration converge to $x = A^{-1} b$ for all $b$ and all initial iterates.

Jacobi and Gauss-Seidel are (at best) linearly convergent at rate $\rho(N^{-1} P)$.

Equilibrium Problem

Table 3.2: a supply and demand problem.

- Supply curve: $q = p/2 + 1$
- Inverse demand curve: $p = 10 - q$

Equilibrium requires supply = demand: a quantity $q$ and price $p$ such that $p = 10 - q$ and $q = p/2 + 1$.
Jacobi Iteration

Jacobi iteration for the equilibrium problem:

$$q^{(k+1)} = \frac{p^{(k)}}{2} + 1, \qquad p^{(k+1)} = 10 - q^{(k)}$$

- Initial iterate $(q^{(0)}, p^{(0)}) = (1, 4)$
- First iterate $(q^{(1)}, p^{(1)}) = (3, 9)$
- Second iterate $(q^{(2)}, p^{(2)}) = (5.5, 7)$, and so on

Iterative Methods: Gauss-Jacobi and Gauss-Seidel

[Figure: Gauss-Jacobi and Gauss-Seidel iteration. Source: Judd (1998).]
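A few lines of Matlab reproduce the Jacobi iterates from the slide:

    q = 1;  p = 4;                 % initial iterate (q, p) = (1, 4)
    for k = 1:10
        qnew = p/2 + 1;            % supply equation, uses old p
        pnew = 10 - q;             % demand equation, uses old q
        q = qnew;  p = pnew;
        fprintf('k = %2d:  q = %6.3f  p = %6.3f\n', k, q, p)
    end
    % prints (3, 9), (5.5, 7), ... and converges to the equilibrium (4, 6)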
Gauss-Seidel Iteration

Gauss-Seidel iteration for the equilibrium problem:

$$q^{(k+1)} = \frac{p^{(k)}}{2} + 1, \qquad p^{(k+1)} = 10 - q^{(k+1)}$$

- Initial iterate $(q^{(0)}, p^{(0)}) = (1, 4)$
- First iterate $(q^{(1)}, p^{(1)}) = (3, 7)$
- Second iterate $(q^{(2)}, p^{(2)}) = (4.5, 5.5)$, and so on

Iterative Methods: Acceleration and Stabilization Methods

Recall the general fixed-point iteration approach

$$x^{(k+1)} = G x^{(k)} + b$$

which is at best linearly convergent at rate $\rho(G)$. But we may be able to increase the linear rate of convergence.

Extrapolation and dampening:

$$x^{(k+1)} = \omega \left( G x^{(k)} + b \right) + (1 - \omega) x^{(k)} = G_{[\omega]} x^{(k)} + \omega b, \qquad \text{with } G_{[\omega]} = \omega G + (1 - \omega) I$$
Extrapolation and Dampening

$\omega > 1$: extrapolation

- Convergence of the initial approach indicates that $(G x^{(k)} + b) - x^{(k)}$ is a good direction in which to move towards the solution.
- Try to accelerate convergence by going further in that direction.

$\omega < 1$: dampening

- $(G x^{(k)} + b) - x^{(k)}$ may indicate a good direction, but a full step overshoots.
- Try to avoid overshooting by taking smaller steps.

Choose $\omega$ to minimize $\rho(G_{[\omega]})$:

$$\omega^* = \frac{2}{2 - M - m}, \qquad \text{where } M = \max \sigma(G) \text{ and } m = \min \sigma(G)$$

Then $M < 1$ implies

$$\rho \left( G_{[\omega^*]} \right) = \frac{M - m}{2 - M - m} < 1$$

independent of $m$; i.e., $\omega^*$ either ensures or accelerates convergence.
Extrapolation Procedure for Gauss-Seidel

Successive overrelaxation (SOR):

$$x_i^{(k+1)} = \omega \frac{1}{a_{ii}} \left( b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k+1)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k)} \right) + (1 - \omega) x_i^{(k)}$$

Write $A$ as $A = L + D + U$, where $L$, $D$, and $U$ consist of the elements of $A$ below, on, and above the diagonal, respectively. Then

$$x^{(k+1)} = \omega D^{-1} b - \omega D^{-1} L x^{(k+1)} - \omega D^{-1} U x^{(k)} + (1 - \omega) x^{(k)}$$

which is equivalent to

$$x^{(k+1)} = M_\omega^{-1} \left( N_\omega x^{(k)} + \omega b \right) = M_\omega^{-1} N_\omega x^{(k)} + \omega M_\omega^{-1} b$$

where $M_\omega = D + \omega L$ and $N_\omega = (1 - \omega) D - \omega U$.

Gauss-Seidel and SOR Example

Figure 3.4: the inverse demand function is $p = 21 - 3q$ and the supply function is $q = p/2 - 3$.

Gauss-Seidel iteration:

$$p^{(k+1)} = 21 - 3 q^{(k)}, \qquad q^{(k+1)} = \frac{p^{(k+1)}}{2} - 3$$

SOR with $\omega = 0.75$:

$$p^{(k+1)} = \omega \left( 21 - 3 q^{(k)} \right) + (1 - \omega) p^{(k)}, \qquad q^{(k+1)} = \omega \left( \frac{p^{(k+1)}}{2} - 3 \right) + (1 - \omega) q^{(k)}$$
SOR Example

[Figure: Dampening. Source: Judd (1998).]

Gauss-Seidel and SOR Example

- Gauss-Seidel diverges: $A$ is not diagonally dominant, and the absolute value of the largest eigenvalue of $N^{-1} P$ exceeds 1.
- SOR converges: the altered iteration matrix has small eigenvalues.
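A Matlab sketch of the comparison (the starting values are my own choice):

    % Plain Gauss-Seidel vs. dampened SOR for p = 21 - 3q, q = p/2 - 3;
    % the equilibrium is (q, p) = (3, 12).
    omega = 0.75;
    q = 2;   p = 10;              % Gauss-Seidel state
    qs = 2;  ps = 10;             % SOR state
    for k = 1:20
        p = 21 - 3*q;  q = p/2 - 3;                 % Gauss-Seidel step
        ps = omega*(21 - 3*qs) + (1 - omega)*ps;    % dampened (SOR) updates
        qs = omega*(ps/2 - 3) + (1 - omega)*qs;
    end
    [q, p]                        % has diverged far from (3, 12)
    [qs, ps]                      % close to the equilibrium (3, 12)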
Duopoly Game

Figure 3.5: Nash equilibrium computation.

- Firm 1's reaction function: $p_1 = BR_1(p_2) = 1 + 0.75\, p_2$
- Firm 2's reaction function: $p_2 = BR_2(p_1) = 2 + 0.8\, p_1$

Gauss-Seidel iteration:

$$p_1^{(k+1)} = 1 + 0.75\, p_2^{(k)}, \qquad p_2^{(k+1)} = 2 + 0.8\, p_1^{(k+1)}$$

SOR with $\omega = 1.5$:

$$p_1^{(k+1)} = \omega \left( 1 + 0.75\, p_2^{(k)} \right) + (1 - \omega) p_1^{(k)}, \qquad p_2^{(k+1)} = \omega \left( 2 + 0.8\, p_1^{(k+1)} \right) + (1 - \omega) p_2^{(k)}$$

Nash Equilibrium of Duopoly Game

[Figure: Acceleration. Source: Judd (1998).]
Nonlinear Equations

Many concepts and methods carry over from linear to nonlinear equations. Idea: $f(x) = 0$ is approximated by the linearization

$$f(x_0) + f_x(x_0)(x - x_0) = 0$$

- Error analysis, condition number
- Iterative methods: Jacobi, Gauss-Seidel
- Acceleration and stabilization methods
- Convergence (local instead of global)

Further Topics

- Sparse matrix techniques
- Krylov space methods: the conjugate-gradient method, the GMRES method, and other algorithms
- Matlab: help sparfun
Application: Google PageRank

Google's original algorithm for establishing the importance of web pages: PageRank.

L. Page, S. Brin, R. Motwani and T. Winograd, "The PageRank Citation Ranking: Bringing Order to the Web", Stanford Digital Library working paper SIDL-WP.

- Intuitive basis for the algorithm: the random surfer model
- Mathematically: a random walk on a graph (network)
- The critical step in the algorithm: solving a large system of linear equations in order to determine the steady-state distribution of a Markov chain

World Wide Web

Set $W$ of web pages that can be reached from some root page (a portion of the web); $n$ pages in $W = \{1, 2, \ldots, n\}$.

Connectivity matrix $G$ of $W$: an $n \times n$ matrix with elements

$$g_{ij} = \begin{cases} 1 & \text{if there is a link from page } i \text{ to page } j \\ 0 & \text{otherwise} \end{cases}$$

The matrix $G$ can be huge but is very sparse; the number of ones in $G$ is the total number of hyperlinks in $W$.

Row and column sums of $G$:

$$r_i = \sum_{j=1}^{n} g_{ij}, \quad i \in W; \qquad c_j = \sum_{i=1}^{n} g_{ij}, \quad j \in W$$
Random Surfer Model

The sum $r_i$ is the out-degree of page $i$: the number of links on page $i$ pointing to other pages. The sum $c_j$ is the in-degree of page $j$: the number of links on other pages pointing to page $j$.

Random surfer model: a random walk on the graph of the web.

- Model part I: a surfer on page $i$ randomly clicks on a link; the probability of clicking on the link to page $j$ is $g_{ij} / r_i$.
- Model part II: if the surfer enters a small loop or gets bored, then (s)he randomly jumps to another web page in $W$.

Probability $p$ of clicking a link, probability $1 - p$ of jumping to another web page; typical value: $p = 0.85$.

Markov Chain

The $n \times n$ matrix $A$ of transition probabilities has elements

$$a_{ij} = p \, \frac{g_{ij}}{r_i} + \frac{1 - p}{n}$$

assuming $r_i \neq 0$ for all $i \in W$; $a_{ij}$ is the probability of jumping from page $i$ to page $j$.

$A$ is the transition probability matrix of a Markov chain: all elements are between 0 and 1 and its row sums are all equal to 1. The Markov chain is irreducible and aperiodic, so it has a unique stationary probability distribution (invariant measure) $x$: a row vector $x \in \mathbb{R}^n$ with $x_i \in [0, 1]$ and $\sum_{i=1}^n x_i = 1$.
Stationary Distribution

The Perron-Frobenius Theorem implies that the stationary distribution $x$ satisfies

$$\lim_{k \to \infty} A^k = \begin{pmatrix} x \\ x \\ \vdots \\ x \end{pmatrix}$$

i.e., every row of $A^k$ converges to $x$. The stationary distribution also satisfies the equation

$$x = xA$$

This system has a one-dimensional solution set. The additional restriction $\sum_{i=1}^n x_i = 1$ yields a unique solution; this solution is Google's PageRank. The larger the element $x_i$, the more important the web page $i$.

Two Helpful Matrices

Let $D$ be the diagonal matrix with $d_{ii} = 1/r_i$, so

$$D = \begin{pmatrix} 1/r_1 & & & \\ & 1/r_2 & & \\ & & \ddots & \\ & & & 1/r_n \end{pmatrix}$$

Let $e$ be the column $n$-vector of all ones, so that $ee^T$ is the $n \times n$ matrix of all ones.
Computing the PageRank

Using these two matrices, the system $x = xA$ can be written as

$$x = x \left( pDG + \frac{1 - p}{n} ee^T \right)$$

For $x$ to be a probability distribution we need

$$xe = \sum_{i=1}^{n} x_i = 1$$

and so we need to solve

$$x \left( I - pDG \right) = \frac{1 - p}{n} e^T$$

Perhaps more familiar, in transposed form:

$$\left( I - pDG \right)^T x^T = \frac{1 - p}{n} e$$

Example

Matlab: PageRankExample.m

- First approach: solve the linear system $x(I - pDG) = \frac{1-p}{n} e^T$.
- Second approach: take a large power $K$ of the transition matrix, $\left( pDG + \frac{1-p}{n} ee^T \right)^K$.
- Third approach: power iteration: from a random start vector $y$, calculate $yA$, $(yA)A$, $(yA^2)A$, and so on.
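A self-contained toy version of the three approaches (the 4-page link graph is my own example; the file PageRankExample.m itself was not reproduced in the transcription):

    n = 4;  p = 0.85;
    G = [0 1 1 0;                 % g(i,j) = 1 iff page i links to page j
         1 0 0 1;
         0 1 0 1;
         1 0 0 0];
    r = sum(G, 2);                % out-degrees (all nonzero here)
    D = diag(1 ./ r);
    e = ones(n, 1);
    A = p*D*G + (1-p)/n * (e*e'); % transition matrix; rows sum to 1

    % Approach 1: solve the linear system x (I - pDG) = ((1-p)/n) e'
    x1 = ((1-p)/n) * e' / (eye(n) - p*D*G);

    % Approach 2: a large power of A; every row converges to x
    Ak = A^100;
    x2 = Ak(1, :);

    % Approach 3: power iteration from a random start vector
    x3 = rand(1, n);  x3 = x3 / sum(x3);
    for k = 1:200
        x3 = x3 * A;
    end
    [x1; x2; x3]                  % the three rankings agree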