Block-tridiagonal matrices
1 Block-tridiagonal matrices
2 Block-tridiagonal matrices. Where do these arise?
- as a result of a particular mesh-point ordering
- as a part of a factorization procedure, for example when we compute the eigenvalues of a matrix
3 Block-tridiagonal matrices. (Figure: a two-dimensional domain partitioned into strips Ω_1, Ω_2, Ω_3.) Consider a two-dimensional domain partitioned in strips. Assume that points on the lines of intersection are only coupled to their nearest neighbors in the underlying mesh (and we do not have periodic boundary conditions). Hence, there is no coupling between subdomains except through the glue on the interfaces.
4 Block-tridiagonal matrices. When the subdomains are ordered lexicographically from left to right, a subdomain Ω_i becomes coupled only to its predecessor Ω_{i-1} and its successor Ω_{i+1}, and the corresponding matrix takes the form of a block tridiagonal matrix A = tridiag(A_{i,i-1}, A_{i,i}, A_{i,i+1}), i.e.,

A = \begin{pmatrix} A_{11} & A_{12} & & \\ A_{21} & A_{22} & A_{23} & \\ & \ddots & \ddots & \ddots \\ & & A_{n,n-1} & A_{nn} \end{pmatrix}.

For definiteness we let the boundary meshline Ω_i ∩ Ω_{i+1} belong to Ω_i. In order to preserve the sparsity pattern we shall factor A without use of permutations. Naturally, the lines of intersection do not have to be straight.
5 Block-tridiagonal matrices. How do we factorize a (block-)tridiagonal matrix?
6 Let A be block-tridiagonal, expressed as A = D_A - L_A - U_A. Convenient: seek D, L, U such that A = (D - L) D^{-1} (D - U), where D is block diagonal, L = L_A and U = U_A. Direct computation:

A = (D - L) D^{-1} (D - U) = D - L - U + L D^{-1} U,

i.e., D_A = D + L D^{-1} U. Important: L and U are strictly lower and upper (block) triangular.
7 A = (D - L) D^{-1} (D - U) for pointwise tridiagonal matrices, A = tridiag(a_{i,i-1}, a_{ii}, a_{i,i+1}). Factorization algorithm:

d_1 = a_{11},  d_i = a_{ii} - a_{i,i-1} a_{i-1,i} / d_{i-1},  i = 2, 3, ..., n.
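A minimal sketch of this recursion in Python/NumPy (the array names a, sub, sup for the diagonal, sub- and superdiagonal are choices made here, not notation from the slides):

```python
import numpy as np

def tridiag_pivots(a, sub, sup):
    """Pivots d_i of A = (D - L) D^{-1} (D - U) for a point tridiagonal A.
    a[i] = a_{i+1,i+1}, sub[i] = a_{i+2,i+1}, sup[i] = a_{i+1,i+2} (0-based storage)."""
    n = len(a)
    d = np.empty(n)
    d[0] = a[0]                                   # d_1 = a_11
    for i in range(1, n):
        # d_i = a_ii - a_{i,i-1} a_{i-1,i} / d_{i-1}
        d[i] = a[i] - sub[i-1] * sup[i-1] / d[i-1]
    return d
```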
8 A = (D - L) D^{-1} (D - U) for pointwise tridiagonal matrices. Solution of systems with the factored form A = (D - L) D^{-1} (D - U): solve (D - L) w = b, then (D - U) x = D w.
9 Block-tridiagonal matrices. Let A be block-tridiagonal, expressed as A = D_A - L_A - U_A. One can envisage three major versions of the factorization algorithm:

(i) A = (D - L) D^{-1} (D - U)
(ii) A = (D - L)(I - D^{-1} U)
(iii) A = (I - \tilde{L}) D (I - \tilde{U}) (inverse-free substitutions), where \tilde{L} D = L and D \tilde{U} = U.

In all versions D_1 = A_{11} and D_i = A_{ii} - A_{i,i-1} D_{i-1}^{-1} A_{i-1,i}, i = 2, ..., n. Since \tilde{L} and \tilde{U} are strictly block lower and upper triangular, hence nilpotent, (I - \tilde{U})^{-1} = I + \tilde{U} + \tilde{U}^2 + ... = (I + \tilde{U})(I + \tilde{U}^2)(I + \tilde{U}^4) ..., and similarly for (I - \tilde{L})^{-1}, so the substitution steps can be carried out without explicit inversions.
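A hedged sketch of version (i) for the block case, computing the block pivots D_i with NumPy (the list-of-arrays storage and the function name are choices made here, not prescribed by the slides):

```python
import numpy as np

def block_tridiag_pivots(diag, sub, sup):
    """Block pivots D_i of A = (D - L) D^{-1} (D - U).
    diag[i] = A_{i+1,i+1}, sub[i] = A_{i+2,i+1}, sup[i] = A_{i+1,i+2} (0-based lists)."""
    D = [diag[0].copy()]                              # D_1 = A_11
    for i in range(1, len(diag)):
        # D_i = A_ii - A_{i,i-1} D_{i-1}^{-1} A_{i-1,i}
        D.append(diag[i] - sub[i-1] @ np.linalg.solve(D[i-1], sup[i-1]))
    return D
```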
10 Existence of the factorization for block-tridiagonal matrices. We assume that the matrices are real. It can be shown that the pivot block arising at every stage r is nonsingular for two important classes of matrices, namely for
- matrices which are positive definite, i.e., x^T A x > 0 for all x ∈ R^n, x ≠ 0 (if A has order n);
- blockwise generalized diagonally dominant matrices (also called block H-matrices), i.e., for which the diagonal blocks A_{ii} are nonsingular and ||A_{ii}^{-1}||^{-1} ≥ ||A_{i,i-1}|| + ||A_{i,i+1}||, i = 1, 2, ..., n (here A_{1,0} = 0 and A_{n,n+1} = 0).
11 A factorization passes through stages r = 1, 2, ..., n. For two important classes of matrices there holds that the successive top blocks, i.e., the pivot matrices which arise after every factorization stage, are nonsingular. At every stage the current matrix A^{(r)} is partitioned in 2×2 block form; for r = 1,

A^{(1)} = A = \begin{pmatrix} A^{(1)}_{11} & A^{(1)}_{12} \\ A^{(1)}_{21} & A^{(1)}_{22} \end{pmatrix}.

At the rth stage we compute (A^{(r)}_{11})^{-1} and factor A^{(r)},

A^{(r)} = \begin{pmatrix} I & 0 \\ A^{(r)}_{21} (A^{(r)}_{11})^{-1} & I \end{pmatrix} \begin{pmatrix} A^{(r)}_{11} & A^{(r)}_{12} \\ 0 & A^{(r+1)} \end{pmatrix},

where A^{(r+1)} = A^{(r)}_{22} - A^{(r)}_{21} (A^{(r)}_{11})^{-1} A^{(r)}_{12} is the so-called Schur complement.
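The stage factorization is easy to verify numerically; a small sketch with random blocks (the sizes, seed and the shift added to A_{11} are arbitrary choices made here so that the pivot block is nonsingular):

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2 = 3, 5
A11 = rng.standard_normal((n1, n1)) + 5 * np.eye(n1)   # nonsingular pivot block
A12 = rng.standard_normal((n1, n2))
A21 = rng.standard_normal((n2, n1))
A22 = rng.standard_normal((n2, n2))
A = np.block([[A11, A12], [A21, A22]])

S = A22 - A21 @ np.linalg.solve(A11, A12)               # Schur complement A^{(r+1)}
Lfac = np.block([[np.eye(n1), np.zeros((n1, n2))],
                 [A21 @ np.linalg.inv(A11), np.eye(n2)]])
Ufac = np.block([[A11, A12], [np.zeros((n2, n1)), S]])
print(np.allclose(A, Lfac @ Ufac))                      # True: the 2x2 block factorization holds
```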
12 Existence of the factorization for block-tridiagonal matrices. The factorization of a block matrix is equivalent to block Gaussian elimination of it. Note then that the only block in A^{(r)}_{22} which is affected by the elimination (of the block A^{(r)}_{21}) is the top block of the block tridiagonal matrix A^{(r)}_{22}, i.e., A^{(r+1)}_{11}, the new pivot matrix. We show that for the above matrix classes the Schur complement A^{(r+1)} = A^{(r)}_{22} - A^{(r)}_{21} (A^{(r)}_{11})^{-1} A^{(r)}_{12} belongs to the same class as A^{(r)}, i.e., in particular that the pivot blocks are nonsingular.
13 Lemma 1. Let A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix} be positive definite. Then A_{ii}, i = 1, 2, and the Schur complement S = A_{22} - A_{21} A_{11}^{-1} A_{12} are also positive definite.

Proof. There holds x_1^T A_{11} x_1 = x^T A x for x = (x_1, 0)^T. Hence x_1^T A_{11} x_1 > 0 for all x_1 ≠ 0, i.e., A_{11} is positive definite. Similarly, it can be shown that A_{22} is positive definite. Since A is nonsingular, x^T A^{-1} x = y^T A y for y = A^{-1} x (a scalar equals its transpose), so x^T A^{-1} x > 0 for all x ≠ 0, i.e., the inverse of A is also positive definite. Use now the explicit form of the inverse computed by use of the factorization,

A^{-1} = \begin{pmatrix} I & -A_{11}^{-1} A_{12} \\ 0 & I \end{pmatrix} \begin{pmatrix} A_{11}^{-1} & 0 \\ 0 & S^{-1} \end{pmatrix} \begin{pmatrix} I & 0 \\ -A_{21} A_{11}^{-1} & I \end{pmatrix} = \begin{pmatrix} * & * \\ * & S^{-1} \end{pmatrix},

where * indicates entries not important for the present discussion. Hence, since A^{-1} is positive definite, so is its diagonal block S^{-1}. Hence, the inverse of S^{-1}, and therefore also S, is positive definite.
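Lemma 1 can be illustrated numerically; a sketch with a random symmetric positive definite matrix (sizes and seed are arbitrary choices, and this is an illustration, not a proof):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 8, 3                                  # total size and size of the A_11 block
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                  # symmetric positive definite by construction
A11, A12 = A[:k, :k], A[:k, k:]
A21, A22 = A[k:, :k], A[k:, k:]
S = A22 - A21 @ np.linalg.solve(A11, A12)    # Schur complement

for B in (A11, A22, S):                      # all three should be positive definite
    print(np.linalg.eigvalsh(B).min() > 0)
```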
14 Corollary 1. When A^{(r)} is positive definite, A^{(r+1)} and in particular A^{(r+1)}_{11} are positive definite.

Proof. A^{(r+1)} is a Schur complement of A^{(r)}, so by Lemma 1, A^{(r+1)} is positive definite when A^{(r)} is. In particular, its top diagonal block is positive definite.
15 Lemma 2. Let A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix} be blockwise generalized diagonally dominant, where A is block tridiagonal. Then the Schur complement S = A_{22} - A_{21} A_{11}^{-1} A_{12} is also blockwise generalized diagonally dominant.

Proof (hint). Since the only block in S which has been changed from A_{22} is its top block, which becomes the new pivot A^{(r+1)}_{11}, it suffices to show that this block is nonsingular and that the first block column of S is generalized diagonally dominant.
16 Linear recursions. Consider the solution of the linear system of equations Ax = b, where A has already been factorized as A = LU or A = LDU. The matrices L = (l_{ij}) and U = (u_{ij}) are lower- and upper-triangular, respectively (here taken with unit diagonal). To compute x, we must perform two steps:

forward substitution: Lz = b, i.e.,
  z_1 = b_1,  z_i = b_i - \sum_{j=1}^{i-1} l_{ij} z_j,  i = 2, 3, ..., n;

backward substitution: Ux = z, i.e.,
  x_n = z_n,  x_i = z_i - \sum_{j=i+1}^{n} u_{ij} x_j,  i = n-1, n-2, ..., 1.
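A sketch of the two substitution steps, assuming unit-diagonal triangular factors as in the formulas above (for non-unit factors one would also divide by the diagonal entry):

```python
import numpy as np

def forward_substitution(L, b):
    """Solve L z = b for unit lower triangular L."""
    n = len(b)
    z = np.empty(n)
    for i in range(n):
        z[i] = b[i] - L[i, :i] @ z[:i]       # z_i = b_i - sum_{j<i} l_ij z_j
    return z

def back_substitution(U, z):
    """Solve U x = z for unit upper triangular U."""
    n = len(z)
    x = np.empty(n)
    for i in range(n - 1, -1, -1):
        x[i] = z[i] - U[i, i+1:] @ x[i+1:]   # x_i = z_i - sum_{j>i} u_ij x_j
    return x
```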
17 While the implementation of the forward and back substitution on a serial computer is trivial, implementing them on a vector or parallel computer system is problematic. The reason is that these relations are particular examples of a linear recursion, which is an inherently sequential process. A general m-level recurrence relation reads

  x_i = a_{i,1} x_{i-1} + a_{i,2} x_{i-2} + ... + a_{i,m} x_{i-m} + b_i,

and the performance of its straightforward vector or parallel implementation is degraded by the backward data dependencies.
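A minimal sketch of such an m-level recurrence (the coefficient layout coeff[i, k-1] = a_{i,k} is an assumption made here); the loop over i cannot be parallelized directly, since each x_i needs the m previously computed values:

```python
import numpy as np

def linear_recurrence(coeff, f, m):
    """x_i = sum_{k=1..m} coeff[i, k-1] * x_{i-k} + f_i, with x_j = 0 for j < 0."""
    n = len(f)
    x = np.zeros(n)
    for i in range(n):                       # inherently sequential outer loop
        for k in range(1, min(m, i) + 1):
            x[i] += coeff[i, k - 1] * x[i - k]
        x[i] += f[i]
    return x
```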
18 Block-tridiagonal matrices. Can we somehow speed up the solution of systems with bi- or tridiagonal matrices?
19 Multifrontal solution methods. (Figure: (a) the two-way frontal method, with x_{n0} the middle node; (b) the structure of the matrix.) Any tridiagonal or block tridiagonal matrix can be attacked in parallel from both ends, after a proper numbering of the unknowns. It can be seen that we can work independently on the odd numbered and even numbered points until we have eliminated all entries except the final corner one.
20 Hence, the factorization and forward substitution can proceed in parallel for the two fronts (the even and the odd). At the final point we can either continue in parallel with the back substitution to compute the solution at all the other interior points, or we can apply the same two-way frontal method to each of the two halves which have been split by the already computed solution at the middle point. This method of recursively dividing the domain into smaller and smaller pieces, which can all be handled in parallel, can be continued for log2 n steps, after which we have just one unknown per subinterval.
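A sequential sketch of the two-way (twisted) elimination for a scalar tridiagonal system; the array layout and the choice of the meeting point m = n//2 are assumptions made here, and in a parallel implementation the two elimination loops would run concurrently:

```python
import numpy as np

def twisted_solve(a, sub, sup, f):
    """Solve a tridiagonal system by eliminating from both ends.
    a[i] = A[i,i], sub[i] = A[i+1,i], sup[i] = A[i,i+1], f = right-hand side."""
    n = len(a)
    m = n // 2                                   # meeting point (assumes n >= 3)
    d, g = np.empty(n), np.empty(n)              # top-down reduced pivots / rhs
    e, h = np.empty(n), np.empty(n)              # bottom-up reduced pivots / rhs
    d[0], g[0] = a[0], f[0]
    for i in range(1, m):                        # front moving down from the top
        w = sub[i-1] / d[i-1]
        d[i], g[i] = a[i] - w * sup[i-1], f[i] - w * g[i-1]
    e[n-1], h[n-1] = a[n-1], f[n-1]
    for i in range(n-2, m, -1):                  # front moving up from the bottom
        v = sup[i] / e[i+1]
        e[i], h[i] = a[i] - v * sub[i], f[i] - v * h[i+1]
    # the two fronts meet at row m, which determines x_m
    w, v = sub[m-1] / d[m-1], sup[m] / e[m+1]
    x = np.empty(n)
    x[m] = (f[m] - w * g[m-1] - v * h[m+1]) / (a[m] - w * sup[m-1] - v * sub[m])
    for i in range(m-1, -1, -1):                 # substitute outward (two independent sweeps)
        x[i] = (g[i] - sup[i] * x[i+1]) / d[i]
    for i in range(m+1, n):
        x[i] = (h[i] - sub[i-1] * x[i-1]) / e[i]
    return x
```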
21 The idea of performing Gaussian elimination from both ends of a tridiagonal matrix, also called twisted factorization, was first proposed by Babuška. Note that in this method no back substitution is required.
22 Odd-even elimination / cyclic reduction / divide-and-conquer. We sketch some parallel computation methods for recurrence relations. The methods are applicable to general (block-)band matrices. For simplicity of presentation, the idea is illustrated on one-level or two-level scalar recursions:

  x_1 = b_1,  x_i = a_{i,i-1} x_{i-1} + b_i,  i = 2, 3, ..., n,

  a_{i,i-1} x_{i-1} + a_{i,i} x_i + a_{i,i+1} x_{i+1} = b_i,  i = 1, 2, ..., n,  a_{1,0} = a_{n,n+1} = 0.

The corresponding matrix-vector equivalent of these recursions is to solve a system Ax = b, where A is lower bidiagonal and tridiagonal, respectively.
23 An idea to gain some parallelism when solving linear recursions is to reduce the size of the corresponding linear system by eliminating the odd-indexed unknowns from the even-numbered equations (or vice versa). This elimination can be done in parallel for each of the equations, because the odd numbered equations, and likewise the even numbered ones, are mutually uncoupled. The same elimination can then be applied to the reduced system, and so on. With every elimination step we reduce the order of the coupled system to about half its previous order, and eventually we are left with a single equation or a system of uncoupled equations.
24 In the odd-even elimination (or odd-even reduction) method we eliminate the odd numbered unknowns (i.e., numbers ≡ 1 (mod 2)) and are left with a tridiagonal system for the even numbered (i.e., numbers ≡ 0 (mod 2)) unknowns. The method is repeated, i.e., we eliminate the unknowns ≡ 2 (mod 4) and are left with the unknowns ≡ 0 (mod 4), and so on. Eventually we are left with just a single equation, which we solve. At this point we can use back substitution to compute the remaining unknowns.
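A sequential sketch of the odd-even reduction for a scalar tridiagonal system (the zero-padded storage convention is a choice made here; in a parallel implementation each iteration of the elimination loop is independent of the others):

```python
import numpy as np

def cyclic_reduction(a, b, c, f):
    """Odd-even reduction. a[i] = A[i,i], b[i] = A[i,i-1] (b[0] = 0),
    c[i] = A[i,i+1] (last entry of c = 0), f = right-hand side."""
    n = len(a)
    if n == 1:
        return np.array([f[0] / a[0]])
    keep = np.arange(1, n, 2)                         # unknowns kept in the reduced system
    m = len(keep)
    a2, b2, c2, f2 = np.empty(m), np.zeros(m), np.zeros(m), np.empty(m)
    for k, j in enumerate(keep):                      # each k is independent of the others
        alpha = -b[j] / a[j-1]
        gamma = -c[j] / a[j+1] if j + 1 < n else 0.0
        a2[k] = a[j] + alpha * c[j-1] + (gamma * b[j+1] if j + 1 < n else 0.0)
        b2[k] = alpha * b[j-1]
        c2[k] = gamma * c[j+1] if j + 1 < n else 0.0
        f2[k] = f[j] + alpha * f[j-1] + (gamma * f[j+1] if j + 1 < n else 0.0)
    u = np.empty(n)
    u[keep] = cyclic_reduction(a2, b2, c2, f2)        # recursively solve the reduced system
    for j in range(0, n, 2):                          # back substitute the eliminated unknowns
        rhs = f[j]
        if j > 0:
            rhs -= b[j] * u[j-1]
        if j + 1 < n:
            rhs -= c[j] * u[j+1]
        u[j] = rhs / a[j]
    return u
```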
25 ...the odd-even simultaneous... There exists a second version of this method, called odd-even simultaneous elimination. In it we eliminate the odd numbered unknowns from the even numbered equations and, simultaneously, the even numbered unknowns from the odd numbered equations. In this way we are left with two decoupled systems, one for the even numbered unknowns and one for the odd numbered unknowns. The same method can be applied recursively to these two sets in parallel. Hence, in this method we do not reduce the size of the problem, but we successively decouple it into smaller and smaller subproblems. Eventually we arrive at a system in diagonal form, which we solve for all unknowns in parallel. Therefore, in this method there is no need to perform back substitution.
26 ...the odd-even... (Figure: two elimination steps of the simultaneous elimination method.)
27 ...the odd-even... The computational complexity of the sequential LU factorization with forward and back substitution for tridiagonal matrices is about 8n flops. In the odd-even simultaneous elimination we perform 9n log2 n flops to transform the system and n flops to solve the final diagonal system. Hence, the redundancy of the odd-even simultaneous elimination method is about (9/8) log2 n, which is the price we pay to get a fully parallel method.
28 Algebraic description of the odd-even reduction. Consider the three-term recursion, written for three consecutive equations (for brevity with coefficients b_j, a_j, c_j, unknowns u_j and right-hand sides f_j):

  b_{2i}   u_{2i-1} + a_{2i}   u_{2i}   + c_{2i}   u_{2i+1} = f_{2i}
  b_{2i+1} u_{2i}   + a_{2i+1} u_{2i+1} + c_{2i+1} u_{2i+2} = f_{2i+1}
  b_{2i+2} u_{2i+1} + a_{2i+2} u_{2i+2} + c_{2i+2} u_{2i+3} = f_{2i+2}

We multiply the first equation by -b_{2i+1}/a_{2i}, the third by -c_{2i+1}/a_{2i+2}, and add the two resulting equations to the second equation. The resulting equation,

  b^{(1)}_{2i+1} u_{2i-1} + a^{(1)}_{2i+1} u_{2i+1} + c^{(1)}_{2i+1} u_{2i+3} = f^{(1)}_{2i+1},  i = 0, 1, ...,

with b^{(1)}_{2i+1} = -(b_{2i+1}/a_{2i}) b_{2i}, a^{(1)}_{2i+1} = a_{2i+1} - (b_{2i+1}/a_{2i}) c_{2i} - (c_{2i+1}/a_{2i+2}) b_{2i+2}, c^{(1)}_{2i+1} = -(c_{2i+1}/a_{2i+2}) c_{2i+2} and f^{(1)}_{2i+1} = f_{2i+1} - (b_{2i+1}/a_{2i}) f_{2i} - (c_{2i+1}/a_{2i+2}) f_{2i+2}, couples only odd numbered unknowns. Next, the odd-even reduction is repeated for all odd numbered equations. The resulting system can be reduced in a similar way, and eventually we are left with just one equation.
29 Similarly, for the even points we get

  b^{(1)}_{2i} u_{2i-2} + a^{(1)}_{2i} u_{2i} + c^{(1)}_{2i} u_{2i+2} = f^{(1)}_{2i},  i = 1, 2, ...,

where b^{(1)}_{2i}, a^{(1)}_{2i} and c^{(1)}_{2i} are defined accordingly. It is interesting to note that for a sufficiently diagonally dominant matrix, the reduction can be terminated (truncated) after fewer than O(log2 n) steps, since the reduced system can then be considered, numerically (i.e., up to machine precision), as a diagonal system.
30 With the same indices, for a block tridiagonal system A = blocktridiag(A_{i,i-1}, A_{i,i}, A_{i,i+1}) we get the analogous block recurrences for the reduced blocks A^{(1)} and right-hand sides, with the scalar divisions replaced by multiplications with the inverses of the corresponding diagonal blocks.
31 Some keywords to discuss:
- Load balancing for cyclic reduction methods
- Divide-and-conquer techniques
- Domain decomposition ordering