Lecture 17: Methods for Systems of Linear Equations, Part 2. Songting Luo. Department of Mathematics, Iowa State University
1 Lecture 17: Methods for Systems of Linear Equations, Part 2
Songting Luo
Department of Mathematics, Iowa State University
MATH 481 Numerical Methods for Differential Equations
2 Outline
1 Steepest Descent Methods
2 Krylov Subspace Methods
3 Overview
Standard Problems in Numerical Analysis
Two standard problems in numerical analysis are:
(a) Root finding: solving linear/nonlinear equations
$$f(x) = 0,\quad f : \mathbb{R}^n \to \mathbb{R}^n \quad\Leftrightarrow\quad \begin{cases} f_1(x_1, \dots, x_n) = 0 \\ \quad\vdots \\ f_n(x_1, \dots, x_n) = 0 \end{cases}$$
(b) Optimization: e.g., minimizing $F(x) \to \min$, $F : \mathbb{R}^n \to \mathbb{R}$.
4 Overview cont'd
(a) vs. (b)
To a certain degree, we can convert one problem to the other, e.g.:
(1) Root finding → Optimization: $f(x) = 0 \;\Rightarrow\; F(x) \to \min$, with $F = \|f\|^2$.
(2) Optimization → Root finding: $F(x) \to \min \;\Rightarrow\; f(x) = 0$, with $f = \nabla F$.
Usually, we have case (1).
5 Review of Calculus
Assume $F : \mathbb{R}^n \to \mathbb{R}$. Taylor series? Gradient, Hessian of $F$?
Assume $x$ is a local minimum $\Leftrightarrow F(x + h) \ge F(x)$ for all small $h$
$\Rightarrow DF(x) = 0$, $D^2 F(x)$ positive definite.
$\nabla F$: direction of steepest increase
$-\nabla F$: direction of steepest decrease
$\perp$ to $\nabla F$: tangent to the level surface, no increase/decrease.
6 Standard Method: Direction Search
Direction Search
- $x_0$ initial guess, $d_0$ initial search direction at $x_0$
- $x_1$ = local min. along the line from $x_0$ in direction $d_0$; $d_1$ = next search direction at $x_1$
- repeat until the global min. is found
Special case
Assume $A$ is symmetric and $F(x) = \frac{1}{2} x^T A x - b^T x$. Then
$DF(x) = x^T A - b^T$, $\nabla F(x) = Ax - b$, $D^2 F(x) = A$.
7 Solving $Ax = b$ with $A$ SPD
Solving $Ax = b$ is equivalent to minimizing $F(x) = \frac{1}{2} x^T A x - b^T x$ if $A$ is SPD (symmetric, positive definite).
Assume $A$ is SPD:
- $\langle x, y \rangle_A = y^T A x$ is an inner product.
- All eigenvalues of $A$ are real and positive; the eigenvectors are mutually orthogonal.
8 Line Search
The line/direction search can be done exactly in the current case, since we know
$F(x) = \frac{1}{2} x^T A x - b^T x$, $\nabla F = Ax - b = r$ (residual = gradient), $D^2 F = A$.
The Taylor series with 3 terms (exact here, since $F$ is quadratic):
$$F(x + \alpha d) = F(x) + \alpha\, DF(x)\, d + \frac{1}{2} \alpha^2 d^T D^2 F(x)\, d = F(x) + \alpha\, d^T r + \frac{1}{2} \alpha^2 d^T A d$$
$$\frac{d}{d\alpha} F(x + \alpha d) = d^T r + \alpha\, d^T A d = 0 \quad\Rightarrow\quad \alpha = -\frac{d^T r}{d^T A d}$$
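As a quick numerical check of the exact line-search formula, the sketch below (pure Python, not from the slides; the 2x2 SPD matrix `A`, vector `b`, and starting point are made-up illustration data) computes $\alpha = -d^T r / (d^T A d)$ for the steepest-descent direction and verifies that nearby step sizes do not decrease $F$ further:

```python
# Numerical check of the exact line-search step alpha = -d^T r / (d^T A d).
# The 2x2 SPD matrix A, vector b, and starting point are made-up data.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]

def matvec(M, v):
    return [sum(Mi[j] * v[j] for j in range(len(v))) for Mi in M]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def F(x):
    # F(x) = 1/2 x^T A x - b^T x
    return 0.5 * dot(x, matvec(A, x)) - dot(b, x)

x = [0.0, 0.0]
r = [ai - bi for ai, bi in zip(matvec(A, x), b)]  # r = Ax - b = gradient
d = [-ri for ri in r]                             # steepest-descent direction

alpha = -dot(d, r) / dot(d, matvec(A, d))         # exact line-search step

f_opt = F([xi + alpha * di for xi, di in zip(x, d)])
# F(x + alpha*d) is a parabola in alpha, so no nearby step does better:
for eps in (-0.1, 0.1):
    assert f_opt <= F([xi + (alpha + eps) * di for xi, di in zip(x, d)])
print("alpha =", alpha, "F at minimizer =", f_opt)
```

Since $\frac{1}{2} d^T A d > 0$ for SPD $A$, the stationary point of the parabola is the exact minimizer along the line.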
9 A Generic Iterative Method
Denote: $\bar{x}$ = exact solution, so $A\bar{x} = b$; $x_i$ = $i$th approximation of the solution; $r_i = Ax_i - b$ = residual = $\nabla F(x_i)$; $e_i = x_i - \bar{x}$ = error; note $Ae_i = r_i$.
$x_0$ initial guess, $r_0 = Ax_0 - b$
for $i = 0, 1, 2, \dots$
  choose search direction $d_i$ (all methods differ only here)
  $\alpha_i = -\dfrac{d_i^T r_i}{d_i^T A d_i}$
  $x_{i+1} = x_i + \alpha_i d_i$
  $r_{i+1} = Ax_{i+1} - b$ (or $r_{i+1} = r_i + \alpha_i A d_i$)
until $\|r_{i+1}\|$ is small enough. ($e_i$ is not available, so stop when $r_i$ gets small.)
10 Method of Steepest Descent
$d_i = -\nabla F(x_i) = -r_i$:
$x_0$ initial guess, $r_0 = Ax_0 - b$
for $i = 0, 1, 2, \dots$
  $\alpha_i = \dfrac{r_i^T r_i}{r_i^T A r_i}$
  $x_{i+1} = x_i - \alpha_i r_i$
  $r_{i+1} = Ax_{i+1} - b$
until $\|r_{i+1}\|$ is small enough.
This algorithm requires 2 matrix-vector multiplications per iteration. Better:
$r_{i+1} = Ax_{i+1} - b = r_i - \alpha_i A r_i$.
11 Method of Steepest Descent cont'd
Better Algorithm
$x_0$ initial guess, $r_0 = Ax_0 - b$
for $i = 0, 1, 2, \dots$
  $\alpha_i = \dfrac{r_i^T r_i}{r_i^T A r_i}$
  $x_{i+1} = x_i - \alpha_i r_i$
  $r_{i+1} = r_i - \alpha_i A r_i$
until $\|r_{i+1}\|$ is small enough.
This algorithm requires only 1 matrix-vector multiplication per iteration ($Ar_i$ is reused).
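The one-matvec loop above can be sketched in pure Python as follows (the small SPD system `A`, `b` is made-up test data; for a real sparse problem the dense `matvec` would be replaced by a sparse one):

```python
# Pure-Python sketch of the one-matvec steepest-descent iteration above.
# The small SPD system (A, b) is made-up test data; exact solution [1/11, 7/11].
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]

def matvec(M, v):
    return [sum(Mi[j] * v[j] for j in range(len(v))) for Mi in M]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def steepest_descent(A, b, x0, tol=1e-10, maxit=1000):
    x = list(x0)
    r = [ai - bi for ai, bi in zip(matvec(A, x), b)]      # r_0 = A x_0 - b
    for _ in range(maxit):
        if dot(r, r) ** 0.5 <= tol:
            break
        Ar = matvec(A, r)                                 # single matvec per iteration
        alpha = dot(r, r) / dot(r, Ar)                    # alpha_i = r^T r / r^T A r
        x = [xi - alpha * ri for xi, ri in zip(x, r)]     # x_{i+1} = x_i - alpha_i r_i
        r = [ri - alpha * ari for ri, ari in zip(r, Ar)]  # r_{i+1} = r_i - alpha_i A r_i
    return x

x = steepest_descent(A, b, [0.0, 0.0])
print(x)  # close to [1/11, 7/11] ~ [0.0909, 0.6364]
```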
12 Error Estimates
It is not feasible to estimate $\|e_i\|$ directly, so we estimate $\|e_i\|_A^2 = e_i^T A e_i$. Direct calculation leads to:
Fact: $\|e_{i+1}\|_A^2 = \|e_i\|_A^2 (1 - \omega^2)$, where $\omega^2 = \dfrac{(r_i^T r_i)^2}{(r_i^T A r_i)(e_i^T A e_i)}$.
Corollary: $\sqrt{1 - \omega^2} \le \dfrac{\kappa(A) - 1}{\kappa(A) + 1}$.
Steepest descent is linearly convergent, with convergence factor $\le \dfrac{\kappa(A) - 1}{\kappa(A) + 1}$. The number of iterations needed to achieve a given error is $O(\kappa(A))$.
13 Preconditioning
Idea: replace $Ax = b$ by $E^{-1} A x = E^{-1} b$, where $E$ is non-singular, $E^{-1}$ is easy to compute, and $\kappa(E^{-1} A) \approx 1$.
$E^{-1} A$ is not symmetric, so we substitute $\tilde{x} = E^T x$ to get $\tilde{A} \tilde{x} = \tilde{b}$, with $\tilde{A} = E^{-1} A E^{-T}$, $\tilde{b} = E^{-1} b$. $\tilde{A}$ is SPD.
Want $\kappa(\tilde{A}) \approx 1$, $\kappa(\tilde{A}) \ll \kappa(A)$.
14 Preconditioned Steepest Descent
Algorithm
$\tilde{x}_0$ initial guess, $\tilde{r}_0 = \tilde{A} \tilde{x}_0 - \tilde{b}$
for $i = 0, 1, 2, \dots$
  $\alpha_i = \dfrac{\tilde{r}_i^T \tilde{r}_i}{\tilde{r}_i^T \tilde{A} \tilde{r}_i}$
  $\tilde{x}_{i+1} = \tilde{x}_i - \alpha_i \tilde{r}_i$
  $\tilde{r}_{i+1} = \tilde{r}_i - \alpha_i \tilde{A} \tilde{r}_i$
until $\|\tilde{r}_{i+1}\|$ is small enough.
Then solve $E^T x = \tilde{x}$. ($\tilde{A} \tilde{x} = \tilde{b}$)
15 Preconditioned Steepest Descent ($Ax = b$ with $M = EE^T$)
Algorithm
$x_0$ initial guess, $r_0 = Ax_0 - b$, $s_0 = M^{-1} r_0$
for $i = 0, 1, 2, \dots$
  $\alpha_i = \dfrac{r_i^T s_i}{s_i^T A s_i}$
  $x_{i+1} = x_i - \alpha_i s_i$
  $r_{i+1} = r_i - \alpha_i A s_i$
  $s_{i+1} = M^{-1} r_{i+1}$
until $\|r_{i+1}\|$ is small enough.
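A sketch of this loop with the Jacobi choice $M = D = \mathrm{diag}(A)$ (one of the standard preconditioner sources listed on the next slide), so applying $M^{-1}$ is just an elementwise divide; `A`, `b` are again made-up SPD test data:

```python
# Sketch of the preconditioned steepest-descent loop above with the Jacobi
# preconditioner M = diag(A); A, b are made-up SPD test data.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]

def matvec(M, v):
    return [sum(Mi[j] * v[j] for j in range(len(v))) for Mi in M]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def prec_steepest_descent(A, b, x0, tol=1e-10, maxit=1000):
    x = list(x0)
    r = [ai - bi for ai, bi in zip(matvec(A, x), b)]    # r = Ax - b
    for _ in range(maxit):
        if dot(r, r) ** 0.5 <= tol:
            break
        s = [ri / A[i][i] for i, ri in enumerate(r)]    # s_i = M^{-1} r_i, M = diag(A)
        As = matvec(A, s)
        alpha = dot(r, s) / dot(s, As)                  # alpha_i = r^T s / s^T A s
        x = [xi - alpha * si for xi, si in zip(x, s)]
        r = [ri - alpha * asi for ri, asi in zip(r, As)]
    return x

x = prec_steepest_descent(A, b, [0.0, 0.0])
print(x)  # close to [1/11, 7/11]
```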
16 Preconditioning Matrices
Sources
From stationary iterative methods:
- Jacobi: $M = D$
- Gauss-Seidel: $M = (D + L) D^{-1} (D + L^T)$
- SOR: $M = \dfrac{1}{\omega(2 - \omega)} (D + \omega L) D^{-1} (D + \omega L^T)$
From incomplete factorization: incomplete Cholesky, incomplete LU, etc.
17 Conjugate Directions
Definition: $x, y$ are conjugate with respect to $A$ if $\langle x, y \rangle_A = y^T A x = 0$.
Anything we can do with orthogonal vectors, we can do with conjugate vectors. For example:
if $\{v_j\}$ is an orthogonal basis, $x = \sum_j \dfrac{\langle x, v_j \rangle}{\langle v_j, v_j \rangle} v_j$;
if $\{d_j\}$ is a conjugate ($A$-orthogonal) basis, $x = \sum_j \dfrac{\langle x, d_j \rangle_A}{\langle d_j, d_j \rangle_A} d_j = \sum_j \dfrac{d_j^T A x}{d_j^T A d_j} d_j$.
Also, there is a modified Gram-Schmidt algorithm for building a conjugate basis.
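The conjugate-basis expansion can be checked numerically. The sketch below (pure Python; the 2x2 SPD matrix and test vector are made-up data) builds a conjugate pair $d_1, d_2$ by one Gram-Schmidt step in the $\langle \cdot, \cdot \rangle_A$ inner product and verifies that the coefficients $d_j^T A x / d_j^T A d_j$ reconstruct $x$:

```python
# Check of the conjugate-basis expansion: A-Gram-Schmidt on {e1, e2},
# then reconstruct a made-up vector x from its A-expansion coefficients.
A = [[4.0, 1.0], [1.0, 3.0]]

def matvec(M, v):
    return [sum(Mi[j] * v[j] for j in range(len(v))) for Mi in M]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def a_inner(x, y):
    return dot(y, matvec(A, x))                    # <x, y>_A = y^T A x

d1 = [1.0, 0.0]
e2 = [0.0, 1.0]
# remove the A-component of e2 along d1
c = a_inner(e2, d1) / a_inner(d1, d1)
d2 = [e2k - c * d1k for e2k, d1k in zip(e2, d1)]
assert abs(a_inner(d1, d2)) < 1e-12                # d1, d2 are conjugate

x = [1.0, 2.0]
coeffs = [a_inner(x, d) / a_inner(d, d) for d in (d1, d2)]
x_rec = [coeffs[0] * d1k + coeffs[1] * d2k for d1k, d2k in zip(d1, d2)]
assert all(abs(xk - xrk) < 1e-12 for xk, xrk in zip(x, x_rec))
print("coefficients:", coeffs)
```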
18 Generic Conjugate Directions Method
$x_0$ initial guess, $r_0 = Ax_0 - b$
for $i = 0, 1, 2, \dots$
  choose search direction $d_i$, conjugate to all previous directions
  $\alpha_i = -\dfrac{d_i^T r_i}{d_i^T A d_i}$
  $x_{i+1} = x_i + \alpha_i d_i$
  $r_{i+1} = Ax_{i+1} - b$ (or $r_{i+1} = r_i + \alpha_i A d_i$)
until $\|r_{i+1}\|$ is small enough. ($e_i$ is not available, so stop when $r_i$ gets small.)
19 Conjugate Gradient Method
Using the Gram-Schmidt algorithm to build $\{d_i\}$ from $\{r_i\}$:
Conjugate Gradient Method
$x_0$ initial guess, $r_0 = Ax_0 - b$, $d_0 = -r_0$
for $i = 0, 1, 2, \dots$
  $\alpha_i = \dfrac{r_i^T r_i}{d_i^T A d_i} = -\dfrac{d_i^T r_i}{d_i^T A d_i}$
  $x_{i+1} = x_i + \alpha_i d_i$
  $r_{i+1} = r_i + \alpha_i A d_i$
  $\beta_i = \dfrac{r_{i+1}^T r_{i+1}}{r_i^T r_i}$
  $d_{i+1} = -r_{i+1} + \beta_i d_i$
until $\|r_{i+1}\|$ is small enough. ($e_i$ is not available, so stop when $r_i$ gets small.)
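A pure-Python sketch of this CG loop, keeping the slides' sign convention $r = Ax - b$, $d_0 = -r_0$ (the SPD system `A`, `b` is made-up test data; in exact arithmetic CG solves an $n \times n$ SPD system in at most $n$ steps):

```python
# Pure-Python sketch of the CG loop above, with r = Ax - b and d_0 = -r_0.
# The 2x2 SPD system (A, b) is made-up test data.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]

def matvec(M, v):
    return [sum(Mi[j] * v[j] for j in range(len(v))) for Mi in M]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def conjugate_gradient(A, b, x0, tol=1e-12, maxit=100):
    x = list(x0)
    r = [ai - bi for ai, bi in zip(matvec(A, x), b)]      # r_0 = A x_0 - b
    d = [-ri for ri in r]                                 # d_0 = -r_0
    rho = dot(r, r)
    for _ in range(maxit):
        if rho ** 0.5 <= tol:
            break
        Ad = matvec(A, d)
        alpha = rho / dot(d, Ad)                          # alpha_i = r^T r / d^T A d
        x = [xi + alpha * di for xi, di in zip(x, d)]
        r = [ri + alpha * adi for ri, adi in zip(r, Ad)]  # r_{i+1} = r_i + alpha_i A d_i
        rho_new = dot(r, r)
        beta = rho_new / rho                              # beta_i = r_{i+1}^T r_{i+1} / r_i^T r_i
        d = [-ri + beta * di for ri, di in zip(r, d)]     # d_{i+1} = -r_{i+1} + beta_i d_i
        rho = rho_new
    return x

x = conjugate_gradient(A, b, [0.0, 0.0])
print(x)  # [1/11, 7/11] after two steps
```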
20 Preconditioned Conjugate Gradient Method
$x_0$ initial guess, $r_0 = Ax_0 - b$, $s_0 = M^{-1} r_0$, $d_0 = -s_0$
for $i = 0, 1, 2, \dots$
  $\alpha_i = \dfrac{r_i^T s_i}{d_i^T A d_i}$
  $x_{i+1} = x_i + \alpha_i d_i$
  $r_{i+1} = r_i + \alpha_i A d_i$
  $s_{i+1} = M^{-1} r_{i+1}$
  $\beta_i = \dfrac{r_{i+1}^T s_{i+1}}{r_i^T s_i}$
  $d_{i+1} = -s_{i+1} + \beta_i d_i$
until $\|r_{i+1}\|$ is small enough. ($e_i$ is not available, so stop when $r_i$ gets small.)
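A sketch of the PCG loop with the Jacobi preconditioner $M = \mathrm{diag}(A)$ (a simple choice from the preconditioner slide; `A`, `b` are made-up SPD test data):

```python
# Sketch of the PCG loop above with the Jacobi preconditioner M = diag(A);
# A, b are made-up SPD test data.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]

def matvec(M, v):
    return [sum(Mi[j] * v[j] for j in range(len(v))) for Mi in M]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def apply_Minv(r):
    return [ri / A[i][i] for i, ri in enumerate(r)]    # M = diag(A)

def pcg(A, b, x0, tol=1e-12, maxit=100):
    x = list(x0)
    r = [ai - bi for ai, bi in zip(matvec(A, x), b)]   # r_0 = A x_0 - b
    s = apply_Minv(r)                                  # s_0 = M^{-1} r_0
    d = [-si for si in s]                              # d_0 = -s_0
    rs = dot(r, s)
    for _ in range(maxit):
        if dot(r, r) ** 0.5 <= tol:
            break
        Ad = matvec(A, d)
        alpha = rs / dot(d, Ad)                        # alpha_i = r^T s / d^T A d
        x = [xi + alpha * di for xi, di in zip(x, d)]
        r = [ri + alpha * adi for ri, adi in zip(r, Ad)]
        s = apply_Minv(r)                              # s_{i+1} = M^{-1} r_{i+1}
        rs_new = dot(r, s)
        beta = rs_new / rs                             # beta_i = r^T s (new) / r^T s (old)
        d = [-si + beta * di for si, di in zip(s, d)]  # d_{i+1} = -s_{i+1} + beta_i d_i
        rs = rs_new
    return x

x = pcg(A, b, [0.0, 0.0])
print(x)  # close to [1/11, 7/11]
```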
21 Convergence Analysis
From the algorithm we have:
$e_{i+1} = e_i + \alpha_i d_i$
$r_{i+1} = A e_{i+1} = r_i + \alpha_i A d_i$
$d_{i+1} = -r_{i+1} + \beta_i d_i$
In general,
$e_i \in e_0 + \mathrm{span}(Ae_0, A^2 e_0, \dots, A^i e_0)$,
$r_i, d_i \in \mathrm{span}(Ae_0, A^2 e_0, \dots, A^{i+1} e_0)$.
Definition: a subspace of the form $\mathrm{span}(v, Av, \dots, A^k v)$ is called a Krylov subspace.
22 Convergence Analysis cont'd
Theorem: $\mathrm{span}(Ae_0, A^2 e_0, \dots, A^i e_0) = \mathrm{span}(r_0, r_1, \dots, r_{i-1}) = \mathrm{span}(d_0, d_1, \dots, d_{i-1})$.
Note:
- $e_i$ is the element of smallest $\|\cdot\|_A$ norm in the shifted subspace $e_0 + \mathrm{span}(d_0, d_1, \dots, d_{i-1})$;
- $e_i$ is the orthogonal projection, in the $\langle \cdot, \cdot \rangle_A$ inner product, of $e_0$ onto $\mathrm{span}(d_i, d_{i+1}, \dots, d_{n-1})$, which is orthogonal to the space $\mathrm{span}(d_0, d_1, \dots, d_{i-1})$;
- $x_i$ is the optimal approximation of $\bar{x}$ among all possible candidates in $\mathrm{span}(b, Ab, \dots, A^{i-1} b)$ (with $x_0 = 0$).
Theorem: $\|e_i\|_A \le 2 \left( \dfrac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1} \right)^i \|e_0\|_A$.
23 Krylov Subspace Methods
General Idea
Denote the $m$th Krylov subspace $K_m = \mathrm{span}(r_0, A r_0, \dots, A^{m-1} r_0)$, so that $K_1 \subset K_2 \subset \dots \subset K_m \subset \dots$
For each $K_m$, find the optimal approximation $x_m \in K_m$ of the exact solution $\bar{x}$ among all possible candidates in $K_m$.
How the optimal approximation is defined determines the Krylov subspace method, for example:
- $\|e_m\|_A$ is minimal for $x_m \in K_m$: conjugate gradient method
- $\|A x_m - b\|$ is minimized among all $x \in K_m$ under some norm: GMRES, etc.
24 Generalized Minimal Residual Method (GMRES)
General Idea: Least Squares
Find $x_m \in K_m$ such that $\|A x_m - b\|_2 = \min_{x \in K_m} \|Ax - b\|_2$.
Arnoldi Process
A variation of Gram-Schmidt specifically for Krylov subspaces. We want to orthonormalize $\{r_0, A r_0, \dots, A^{m-1} r_0\}$, producing $v_1, \dots, v_{m+1}$:
$v_1 = r_0 / \|r_0\|$
for $j = 1, 2, \dots, m$
  $h_{ij} = \langle A v_j, v_i \rangle$, $i = 1, \dots, j$
  $w_{j+1} = A v_j - \sum_{i=1}^{j} h_{ij} v_i$
  $h_{j+1,j} = \|w_{j+1}\|$
  $v_{j+1} = w_{j+1} / h_{j+1,j}$
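The Arnoldi process above can be sketched in pure Python (in the modified Gram-Schmidt form, where each $h_{ij}$ is computed against the partially orthogonalized $w$; the 3x3 matrix `A` and vector `r0` are made-up test data):

```python
# Pure-Python sketch of the Arnoldi process: orthonormalize the Krylov
# vectors of r0, producing v_1,...,v_{m+1} and the (m+1) x m Hessenberg
# matrix H with A V_m = V_{m+1} H. A, r0 are made-up test data.
A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
r0 = [1.0, 0.0, 0.0]

def matvec(M, v):
    return [sum(Mi[j] * v[j] for j in range(len(v))) for Mi in M]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def arnoldi(A, r0, m):
    nrm = dot(r0, r0) ** 0.5
    V = [[ri / nrm for ri in r0]]                     # v_1 = r_0 / ||r_0||
    H = [[0.0] * m for _ in range(m + 1)]
    for j in range(m):                                # slide's j = 1,...,m (0-based here)
        w = matvec(A, V[j])                           # w = A v_j
        for i in range(j + 1):
            H[i][j] = dot(w, V[i])                    # h_ij = <A v_j, v_i>
            w = [wk - H[i][j] * vk for wk, vk in zip(w, V[i])]
        H[j + 1][j] = dot(w, w) ** 0.5                # h_{j+1,j} = ||w_{j+1}||
        V.append([wk / H[j + 1][j] for wk in w])      # v_{j+1}
    return V, H

V, H = arnoldi(A, r0, 2)
# the v_i are orthonormal:
for i in range(len(V)):
    for j in range(len(V)):
        assert abs(dot(V[i], V[j]) - (1.0 if i == j else 0.0)) < 1e-12
print("H =", H)
```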
25 GMRES cont'd
Define $V_m = [v_1\; v_2\; \cdots\; v_m]$ and $H_m = [h_{ij}]$ (Hessenberg form). We see
$$A V_m = V_{m+1} H_m,$$
where $A$ is $n \times n$, $V_m$ is $n \times m$, $V_{m+1}$ is $n \times (m+1)$, and $H_m$ is $(m+1) \times m$.
And note that $r_0 = \|r_0\| v_1 = \|r_0\| V_{m+1} \tilde{e}_1$, with $\tilde{e}_1 = (1, 0, 0, \dots, 0)^T$.
Therefore,
$$\min_{z \in K_m} \|r_0 - Az\| = \min_{y \in \mathbb{R}^m} \|r_0 - A V_m y\| = \min_{y \in \mathbb{R}^m} \big\| \|r_0\| V_{m+1} \tilde{e}_1 - V_{m+1} H_m y \big\| = \min_{y \in \mathbb{R}^m} \big\| \|r_0\| \tilde{e}_1 - H_m y \big\|.$$
26 GMRES: Implementation with Restart
- choose $m$, $x_0$
- calculate $r_0 = A x_0 - b$
- generate $v_1, \dots, v_{m+1}$ by the Arnoldi process; save $H$
- solve $H y = \|r_0\| (1, 0, \dots, 0)^T$ in the least-squares sense
- $x_1 = x_0 - V_m y$
- repeat to get $x_2, x_3, \dots$ until convergence
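The restart loop above can be sketched in pure Python. As an assumption of the sketch, the small least-squares problem is solved via normal equations on the $(m+1) \times m$ Hessenberg system, which is adequate here (production codes use Givens rotations instead), and a "happy breakdown" ($h_{j+1,j} \approx 0$) simply truncates the basis. `A`, `b` are made-up test data:

```python
# Pure-Python sketch of restarted GMRES(m), with r = Ax - b and the
# update x <- x - V_m y as on the slide. A, b are made-up test data.
A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]

def matvec(M, v):
    return [sum(Mi[j] * v[j] for j in range(len(v))) for Mi in M]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def solve_small(M, rhs):
    # Gaussian elimination with partial pivoting for the small normal system.
    n = len(rhs)
    M = [row[:] for row in M]
    rhs = rhs[:]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p], rhs[k], rhs[p] = M[p], M[k], rhs[p], rhs[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= f * M[k][j]
            rhs[i] -= f * rhs[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (rhs[i] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def gmres_restart(A, b, x0, m, tol=1e-10, maxcycle=100):
    x = list(x0)
    for _ in range(maxcycle):
        r = [ai - bi for ai, bi in zip(matvec(A, x), b)]   # r = Ax - b
        beta = dot(r, r) ** 0.5
        if beta <= tol:
            break
        V = [[ri / beta for ri in r]]                      # Arnoldi: v_1
        H = [[0.0] * m for _ in range(m + 1)]
        k = m
        for j in range(m):
            w = matvec(A, V[j])
            for i in range(j + 1):
                H[i][j] = dot(w, V[i])
                w = [wl - H[i][j] * vl for wl, vl in zip(w, V[i])]
            H[j + 1][j] = dot(w, w) ** 0.5
            if H[j + 1][j] <= 1e-14 * beta:                # happy breakdown
                k = j + 1
                break
            V.append([wl / H[j + 1][j] for wl in w])
        # least squares min_y || beta*e1 - H y || via normal equations
        HtH = [[sum(H[s][i] * H[s][j] for s in range(k + 1)) for j in range(k)]
               for i in range(k)]
        Htg = [H[0][i] * beta for i in range(k)]
        y = solve_small(HtH, Htg)
        for j in range(k):                                 # x <- x - V_k y
            x = [xi - y[j] * vj for xi, vj in zip(x, V[j])]
    return x

x = gmres_restart(A, b, [0.0, 0.0, 0.0], m=2)
print(x)  # close to the exact solution [2/9, 1/9, 13/9]
```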
27 Error Estimates
CG: $\|e_m\|_A \le \|p_m(A)\|_2 \|e_0\|_A$ for some polynomial $p_m$ of degree $\le m$, with
$\|p_m(A)\|_2 = \max_{\lambda \in \sigma(A)} |p_m(\lambda)|$.
GMRES: $\|r_m\|_2 \le \|p_m(A)\|_2 \|r_0\|_2$.
If $A = V \Lambda V^{-1}$ is diagonalizable,
$\|p_m(A)\|_2 \le \kappa(V) \max_{\lambda \in \sigma(A)} |p_m(\lambda)|$.
More informationNumerical Methods in Matrix Computations
Ake Bjorck Numerical Methods in Matrix Computations Springer Contents 1 Direct Methods for Linear Systems 1 1.1 Elements of Matrix Theory 1 1.1.1 Matrix Algebra 2 1.1.2 Vector Spaces 6 1.1.3 Submatrices
More informationDraft. Lecture 12 Gaussian Elimination and LU Factorization. MATH 562 Numerical Analysis II. Songting Luo
Lecture 12 Gaussian Elimination and LU Factorization Songting Luo Department of Mathematics Iowa State University MATH 562 Numerical Analysis II ongting Luo ( Department of Mathematics Iowa State University[0.5in]
More informationLecture 18 Finite Element Methods (FEM): Functional Spaces and Splines. Songting Luo. Department of Mathematics Iowa State University
Lecture 18 Finite Element Methods (FEM): Functional Spaces and Splines Songting Luo Department of Mathematics Iowa State University MATH 481 Numerical Methods for Differential Equations Draft Songting
More informationMath 261 Lecture Notes: Sections 6.1, 6.2, 6.3 and 6.4 Orthogonal Sets and Projections
Math 6 Lecture Notes: Sections 6., 6., 6. and 6. Orthogonal Sets and Projections We will not cover general inner product spaces. We will, however, focus on a particular inner product space the inner product
More informationMaster Thesis Literature Study Presentation
Master Thesis Literature Study Presentation Delft University of Technology The Faculty of Electrical Engineering, Mathematics and Computer Science January 29, 2010 Plaxis Introduction Plaxis Finite Element
More informationConstrained optimization. Unconstrained optimization. One-dimensional. Multi-dimensional. Newton with equality constraints. Active-set method.
Optimization Unconstrained optimization One-dimensional Multi-dimensional Newton s method Basic Newton Gauss- Newton Quasi- Newton Descent methods Gradient descent Conjugate gradient Constrained optimization
More informationAMSC 600 /CMSC 760 Advanced Linear Numerical Analysis Fall 2007 Krylov Minimization and Projection (KMP) Dianne P. O Leary c 2006, 2007.
AMSC 600 /CMSC 760 Advanced Linear Numerical Analysis Fall 2007 Krylov Minimization and Projection (KMP) Dianne P. O Leary c 2006, 2007 This unit: So far: A survey of iterative methods for solving linear
More informationEXAM. Exam 1. Math 5316, Fall December 2, 2012
EXAM Exam Math 536, Fall 22 December 2, 22 Write all of your answers on separate sheets of paper. You can keep the exam questions. This is a takehome exam, to be worked individually. You can use your notes.
More informationLecture Note 7: Iterative methods for solving linear systems. Xiaoqun Zhang Shanghai Jiao Tong University
Lecture Note 7: Iterative methods for solving linear systems Xiaoqun Zhang Shanghai Jiao Tong University Last updated: December 24, 2014 1.1 Review on linear algebra Norms of vectors and matrices vector
More informationApplied Linear Algebra in Geoscience Using MATLAB
Applied Linear Algebra in Geoscience Using MATLAB Contents Getting Started Creating Arrays Mathematical Operations with Arrays Using Script Files and Managing Data Two-Dimensional Plots Programming in
More informationSummary of Iterative Methods for Non-symmetric Linear Equations That Are Related to the Conjugate Gradient (CG) Method
Summary of Iterative Methods for Non-symmetric Linear Equations That Are Related to the Conjugate Gradient (CG) Method Leslie Foster 11-5-2012 We will discuss the FOM (full orthogonalization method), CG,
More informationPreface to the Second Edition. Preface to the First Edition
n page v Preface to the Second Edition Preface to the First Edition xiii xvii 1 Background in Linear Algebra 1 1.1 Matrices................................. 1 1.2 Square Matrices and Eigenvalues....................
More informationDELFT UNIVERSITY OF TECHNOLOGY
DELFT UNIVERSITY OF TECHNOLOGY REPORT 10-14 On the Convergence of GMRES with Invariant-Subspace Deflation M.C. Yeung, J.M. Tang, and C. Vuik ISSN 1389-6520 Reports of the Delft Institute of Applied Mathematics
More informationNumerical Methods Orals
Numerical Methods Orals Travis Askham April 4, 2012 Contents 1 Floating Point Arithmetic, Conditioning, and Stability 3 1.1 Previously Asked Questions................................ 3 1.2 Numerical Methods
More informationLinear Independence. Stephen Boyd. EE103 Stanford University. October 9, 2017
Linear Independence Stephen Boyd EE103 Stanford University October 9, 2017 Outline Linear independence Basis Orthonormal vectors Gram-Schmidt algorithm Linear independence 2 Linear dependence set of n-vectors
More informationLarge-scale eigenvalue problems
ELE 538B: Mathematics of High-Dimensional Data Large-scale eigenvalue problems Yuxin Chen Princeton University, Fall 208 Outline Power method Lanczos algorithm Eigenvalue problems 4-2 Eigendecomposition
More informationSteady-State Optimization Lecture 1: A Brief Review on Numerical Linear Algebra Methods
Steady-State Optimization Lecture 1: A Brief Review on Numerical Linear Algebra Methods Dr. Abebe Geletu Ilmenau University of Technology Department of Simulation and Optimal Processes (SOP) Summer Semester
More informationTsung-Ming Huang. Matrix Computation, 2016, NTNU
Tsung-Ming Huang Matrix Computation, 2016, NTNU 1 Plan Gradient method Conjugate gradient method Preconditioner 2 Gradient method 3 Theorem Ax = b, A : s.p.d Definition A : symmetric positive definite
More informationOUTLINE ffl CFD: elliptic pde's! Ax = b ffl Basic iterative methods ffl Krylov subspace methods ffl Preconditioning techniques: Iterative methods ILU
Preconditioning Techniques for Solving Large Sparse Linear Systems Arnold Reusken Institut für Geometrie und Praktische Mathematik RWTH-Aachen OUTLINE ffl CFD: elliptic pde's! Ax = b ffl Basic iterative
More informationLecture 9: Krylov Subspace Methods. 2 Derivation of the Conjugate Gradient Algorithm
CS 622 Data-Sparse Matrix Computations September 19, 217 Lecture 9: Krylov Subspace Methods Lecturer: Anil Damle Scribes: David Eriksson, Marc Aurele Gilles, Ariah Klages-Mundt, Sophia Novitzky 1 Introduction
More information2.29 Numerical Fluid Mechanics Spring 2015 Lecture 9
Spring 2015 Lecture 9 REVIEW Lecture 8: Direct Methods for solving (linear) algebraic equations Gauss Elimination LU decomposition/factorization Error Analysis for Linear Systems and Condition Numbers
More information