- David Blankenship
1 A matrix is sparse if most of its entries are zero. Simple sparse matrices we have seen so far include diagonal and tridiagonal matrices, but these are not the only ones. In fact, sparse matrices appear in many application areas, from structural analysis to fluid dynamics and from DNA sequencing to graph theory. Examples: structural mechanics (the load on a plane truss); the directed graph for a network.
2 Examples: the pattern of non-zeros in the linear system for determining the force balance on the truss, and the pattern of non-zeros in the adjacency matrix for the graph. [Figures: spy plots of the two sparsity patterns; nz denotes the number of non-zeros in each.]
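To make the graph example concrete, here is a small sketch in Python with SciPy (an illustrative stand-in for MATLAB's sparse/spy/nnz; the 4-node directed graph is an invented example, not the one on the slide):

```python
import numpy as np
from scipy.sparse import csr_matrix

# Adjacency matrix of a small directed graph: edge i -> j gives A[i, j] = 1.
# The graph itself (4 nodes, 5 edges) is just a made-up example.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 0)]
rows, cols = zip(*edges)
A = csr_matrix((np.ones(len(edges)), (rows, cols)), shape=(4, 4))

print(A.nnz)        # number of non-zeros, like MATLAB's nnz(A)
print(A.toarray())  # dense view of the sparsity pattern
```

Only the edges are stored; the zero entries cost nothing, which is the whole point of the sparse format.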
3 We can quantify the sparsity of a matrix using the sparsity ratio.

Definition. Let A be an m-by-n matrix, and let nnz(A) denote the number of non-zero entries of A. Then

    sparsity ratio of A = 1 - nnz(A)/(mn)

What is the sparsity ratio of an n-by-n tridiagonal matrix

    A = [ d_1  e_1                             ]
        [ c_1  d_2  e_2                        ]
        [      c_2   .    .                    ]
        [            .    .       e_{n-1}      ]
        [               c_{n-1}   d_n          ]

with c_j ≠ 0, d_j ≠ 0, e_j ≠ 0 for all j?

In MATLAB, we can compute the sparsity ratio using:

    sparsity_ratio = 1 - nnz(A)/numel(A)
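To answer the question: the tridiagonal matrix has n entries on the main diagonal and n-1 on each of the sub- and superdiagonals, so nnz(A) = 3n - 2 and the sparsity ratio is 1 - (3n-2)/n². A quick numerical check in Python with SciPy (mirroring the MATLAB one-liner above):

```python
import numpy as np
from scipy.sparse import diags

n = 1000
# n-by-n tridiagonal matrix with all three diagonals non-zero.
A = diags([np.ones(n - 1), 2 * np.ones(n), np.ones(n - 1)],
          offsets=[-1, 0, 1]).tocsr()

sparsity_ratio = 1 - A.nnz / (n * n)   # MATLAB: 1 - nnz(A)/numel(A)
print(sparsity_ratio)                   # equals 1 - (3n-2)/n^2
```

For n = 1000 the ratio is 0.997002: over 99.7% of the entries are zero, and the ratio tends to 1 as n grows.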
4 Banded linear systems

The bandwidth of a matrix A is defined as the maximum distance of its non-zero elements from the main diagonal.

Definition. The bandwidth of an m-by-n matrix A is the smallest integer k such that a_{i,j} = 0 if |i - j| > k, for all i = 1, ..., m and j = 1, ..., n.

What is the bandwidth of the identity matrix? What is the bandwidth of a tridiagonal matrix

    A = [ d_1  e_1                             ]
        [ c_1  d_2  e_2                        ]
        [      c_2   .    .                    ]
        [            .    .       e_{n-1}      ]
        [               c_{n-1}   d_n          ]

with c_j ≠ 0, d_j ≠ 0, e_j ≠ 0 for all j?

In MATLAB, we can compute the bandwidth of A using:

    [i,j] = find(A);
    bandwidth = max(abs(i-j))
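The same find-and-max computation can be sketched in Python with NumPy (the test matrices answer the two questions above; the identity has bandwidth 0 and a tridiagonal matrix has bandwidth 1):

```python
import numpy as np

def bandwidth(A):
    """Smallest k such that A[i, j] == 0 whenever |i - j| > k."""
    i, j = np.nonzero(A)                      # MATLAB: [i,j] = find(A)
    return int(np.max(np.abs(i - j))) if i.size else 0

I = np.eye(5)                                 # identity matrix
T = (np.diag(2 * np.ones(5))                  # tridiagonal matrix
     + np.diag(np.ones(4), 1)
     + np.diag(np.ones(4), -1))

print(bandwidth(I))   # 0
print(bandwidth(T))   # 1
```

An empty matrix has no non-zeros, so the `i.size` guard returns bandwidth 0 by convention rather than taking the max of an empty array.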
5 If a matrix has a high sparsity ratio and a small bandwidth, then fast algorithms can typically be used to solve linear systems involving it. MATLAB has state-of-the-art libraries for computing with sparse matrices.
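As an illustration of exploiting this structure, here is a sketch in Python with SciPy's sparse direct solver (MATLAB's backslash operator does the analogous thing automatically when given a sparse matrix; the [-1, 2, -1] second-difference matrix used here is a standard test case, not one from the slides):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

n = 1000
# Tridiagonal (bandwidth-1) system: the classic second-difference matrix.
A = diags([-np.ones(n - 1), 2 * np.ones(n), -np.ones(n - 1)],
          offsets=[-1, 0, 1], format='csc')
b = np.ones(n)

# Sparse direct solve: essentially linear work for a fixed-bandwidth matrix,
# versus O(n^3) for a dense factorization.
x = spsolve(A, b)
residual = np.max(np.abs(A @ x - b))
print(residual)   # should be tiny
```

The factorization of a banded matrix produces banded triangular factors, which is why the cost stays proportional to n here instead of growing cubically.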
More informationAlgebra & Trig. I. For example, the system. x y 2 z. may be represented by the augmented matrix
Algebra & Trig. I 8.1 Matrix Solutions to Linear Systems A matrix is a rectangular array of elements. o An array is a systematic arrangement of numbers or symbols in rows and columns. Matrices (the plural
More information12. Cholesky factorization
L. Vandenberghe ECE133A (Winter 2018) 12. Cholesky factorization positive definite matrices examples Cholesky factorization complex positive definite matrices kernel methods 12-1 Definitions a symmetric
More informationSparse Linear Systems. Iterative Methods for Sparse Linear Systems. Motivation for Studying Sparse Linear Systems. Partial Differential Equations
Sparse Linear Systems Iterative Methods for Sparse Linear Systems Matrix Computations and Applications, Lecture C11 Fredrik Bengzon, Robert Söderlund We consider the problem of solving the linear system
More informationDefinitive Screening Designs with Added Two-Level Categorical Factors *
Definitive Screening Designs with Added Two-Level Categorical Factors * BRADLEY JONES SAS Institute, Cary, NC 27513 CHRISTOPHER J NACHTSHEIM Carlson School of Management, University of Minnesota, Minneapolis,
More informationMatrix Algebra: Summary
May, 27 Appendix E Matrix Algebra: Summary ontents E. Vectors and Matrtices.......................... 2 E.. Notation.................................. 2 E..2 Special Types of Vectors.........................
More informationEE364a Review Session 7
EE364a Review Session 7 EE364a Review session outline: derivatives and chain rule (Appendix A.4) numerical linear algebra (Appendix C) factor and solve method exploiting structure and sparsity 1 Derivative
More informationL-RCM: a method to detect connected components in undirected graphs by using the Laplacian matrix and the RCM algorithm
L-RCM: a method to detect connected components in undirected graphs by using the Laplacian matrix and the RCM algorithm arxiv:1206.5726v1 [cs.dm] 25 Jun 2012 Francisco Pedroche Miguel Rebollo Carlos Carrascosa
More information