1 A Fast Recursive Algorithm for Constructing Matrices with Prescribed Eigenvalues and Singular Values by Moody T. Chu North Carolina State University
2 Outline
- Background
  - Schur-Horn Theorem
  - Mirsky Theorem
  - Sing-Thompson Theorem
  - Weyl-Horn Theorem
- A Recursive Algorithm
  - The Building Block: the 2-by-2 Case
  - The Original Proof by Induction
  - An Innocent Mistake
  - A Recursive Clause in Programming
  - The Matrix Structure
  - A Modified Proof
- A Symbolic Example
- Numerical Experiment
3 Schur-Horn Theorem
Given an arbitrary Hermitian matrix H, let $\lambda = [\lambda_i]$ be its eigenvalues and $a = [a_i]$ its diagonal entries, arranged so that $\lambda_1 \ge \cdots \ge \lambda_n$ and $a_{j_1} \ge \cdots \ge a_{j_n}$. Then
$$ \sum_{i=1}^{k} \lambda_i \;\ge\; \sum_{i=1}^{k} a_{j_i}, \quad k = 1,\ldots,n, \qquad \text{and} \qquad \sum_{i=1}^{n} \lambda_i \;=\; \sum_{i=1}^{n} a_{j_i}. $$
This relation is known as majorization: $\lambda$ majorizes $a$. Conversely, given vectors $a, \lambda \in \mathbb{R}^n$ such that $\lambda$ majorizes $a$, a Hermitian matrix H with eigenvalues $\lambda$ and diagonal entries $a$ exists.
How to solve the inverse eigenvalue problem numerically?
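A minimal illustration (not from the slides): $\lambda = (3, 1)$ majorizes $a = (2, 2)$, and a real symmetric matrix realizing this pair is immediate.

% lambda = [3 1] majorizes a = [2 2]:  3 >= 2 and 3 + 1 == 2 + 2.
H = [2 1; 1 2];
disp(eig(H))      % eigenvalues 1 and 3
disp(diag(H))     % diagonal entries 2 and 2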
4 Mirsky Theorem
Is there a similar connection between the eigenvalues and the diagonal entries of a general matrix? A matrix with eigenvalues $\lambda_1, \ldots, \lambda_n$ and main diagonal elements $a_1, \ldots, a_n$ exists if and only if
$$ \sum_{i=1}^{n} a_i \;=\; \sum_{i=1}^{n} \lambda_i. $$
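For intuition, here is an added 2-by-2 illustration with arbitrarily chosen numbers: the diagonal fixes the trace, so a single off-diagonal entry is enough to adjust the determinant to the prescribed value.

% Prescribe diagonal a and eigenvalues lambda with sum(a) == sum(lambda).
a      = [5 -1];                      % desired diagonal entries
lambda = [3  1];                      % desired eigenvalues; note 5 + (-1) == 3 + 1
A = [a(1), 1; a(1)*a(2) - lambda(1)*lambda(2), a(2)];   % trace and determinant now match
disp(sort(eig(A)))                    % prints 1 and 3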
5 Sing-Thompson Theorem
Is there a connection between the singular values and the diagonal entries of a general matrix? Given vectors $d, s \in \mathbb{R}^n$ with
$$ s_1 \ge s_2 \ge \cdots \ge s_n, \qquad |d_1| \ge |d_2| \ge \cdots \ge |d_n|, $$
a real matrix with singular values $s$ and main diagonal entries $d$ (possibly in a different order) exists if and only if
$$ \sum_{i=1}^{k} |d_i| \;\le\; \sum_{i=1}^{k} s_i, \quad k = 1,\ldots,n, \qquad \text{and} \qquad \Bigl(\sum_{i=1}^{n-1} |d_i|\Bigr) - |d_n| \;\le\; \Bigl(\sum_{i=1}^{n-1} s_i\Bigr) - s_n. $$
How to solve the inverse singular value problem numerically?
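A small concrete instance (added for illustration): both diagonal entries zero with both singular values equal to one; the last inequality then holds with equality.

% d = [0 0], s = [1 1]:  |d_1| + |d_2| <= s_1 + s_2 and |d_1| - |d_2| <= s_1 - s_2.
A = [0 1; -1 0];
disp(svd(A))      % both singular values are 1
disp(diag(A))     % both diagonal entries are 0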
6 Weyl-Horn Theorem
Is there a connection between the singular values and the eigenvalues of a general matrix? (For Hermitian matrices, singular value = |eigenvalue|.) Given vectors $\lambda \in \mathbb{C}^n$ and $\alpha \in \mathbb{R}^n$ with
$$ |\lambda_1| \ge \cdots \ge |\lambda_n|, \qquad \alpha_1 \ge \cdots \ge \alpha_n, $$
a matrix with eigenvalues $\lambda_1, \ldots, \lambda_n$ and singular values $\alpha_1, \ldots, \alpha_n$ exists if and only if
$$ \prod_{j=1}^{k} |\lambda_j| \;\le\; \prod_{j=1}^{k} \alpha_j, \quad k = 1,\ldots,n-1, \qquad \text{and} \qquad \prod_{j=1}^{n} |\lambda_j| \;=\; \prod_{j=1}^{n} \alpha_j. $$
If $|\lambda_n| > 0$, this says that $\log \alpha$ majorizes $\log |\lambda|$.
How to solve the inverse eigenvalue (singular value) problem numerically?
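The conditions are easy to test numerically; the following helper (the name weyl_horn_ok and the tolerance are choices made for this sketch, not part of the original slides) returns true when the prescribed data are admissible.

function ok = weyl_horn_ok(alpha, lambda)
% Check the Weyl-Horn conditions for prescribed singular values alpha
% (real, nonnegative) and eigenvalues lambda (possibly complex).
alpha = sort(alpha(:), 'descend');
absl  = sort(abs(lambda(:)), 'descend');
pa    = cumprod(alpha);
pl    = cumprod(absl);
tol   = 1e-12*max(1, pa(end));
ok    = all(pl(1:end-1) <= pa(1:end-1) + tol) && abs(pl(end) - pa(end)) <= tol;
end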
7 The 2-by-2 Case
The Weyl-Horn conditions read
$$ |\lambda_1| \le \alpha_1, \qquad |\lambda_1 \lambda_2| = \alpha_1 \alpha_2. $$
The building block: the triangular matrix
$$ A = \begin{bmatrix} \lambda_1 & \mu \\ 0 & \lambda_2 \end{bmatrix} $$
has singular values $\{\alpha_1, \alpha_2\}$ if and only if
$$ \mu = \sqrt{\alpha_1^2 + \alpha_2^2 - |\lambda_1|^2 - |\lambda_2|^2}. $$
A is complex-valued when the eigenvalues are complex. Since $\alpha_1 \alpha_2 = |\lambda_1|\,|\lambda_2|$, a stable way of computing $\mu$ is
$$ \mu = \begin{cases} 0, & \text{if } (\alpha_1 - \alpha_2)^2 - (|\lambda_1| - |\lambda_2|)^2 \le 0, \\ \sqrt{(\alpha_1 - \alpha_2)^2 - (|\lambda_1| - |\lambda_2|)^2}, & \text{otherwise} \end{cases} $$
(the quantity under the square root is nonnegative in exact arithmetic).
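The subroutine two_by_two called by the MATLAB program on slide 13 is not listed; the following is a minimal sketch of what such a building block could look like. The name matches the later call, but the body is a reconstruction, not the original code; it assumes alpha(1) >= alpha(2) and that the 2-by-2 Weyl-Horn conditions hold.

function [U, V, A] = two_by_two(alpha, lambda)
% Build a 2-by-2 upper triangular A with eigenvalues lambda(1), lambda(2) and
% singular values alpha(1) >= alpha(2), together with unitary factors U, V
% such that U*diag(alpha)*V' = A.
t  = (alpha(1) - alpha(2))^2 - (abs(lambda(1)) - abs(lambda(2)))^2;
mu = sqrt(max(t, 0));                 % the stable formula from this slide
A  = [lambda(1), mu; 0, lambda(2)];
[U, S, V] = svd(A);                   % A = U*S*V', and S = diag(alpha) up to roundoff
end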
8 Ideas in Horn's Proof
- Reduce the original inverse problem to two problems of smaller sizes.
- Problems of smaller sizes are guaranteed to be solvable by the induction hypothesis.
- The subproblems are affixed together by working on a suitable corner.
- The 2-by-2 problem has an explicit solution.
9 Key to the Algorithmic Success
- The eigenvalues and singular values of each of the two subproblems can be derived explicitly.
- Each of the two subproblems can further be down-sized.
- The original problem is thus divided into subproblems of size 2-by-2 or 1-by-1.
- The smaller problems are then conquered to build up the original size.
- In an environment that allows a subprogram to invoke itself recursively, only one step of the divide-and-conquer procedure needs to be written.
- Very similar to the radix-2 FFT => a fast algorithm.
10 Outline of Proof
Suppose $\alpha_i > 0$ for all $i = 1,\ldots,n$; then $\lambda_i \ne 0$ for all $i$. (The case of zero singular values can be handled in a similar way.) Define
$$ \beta_1 := \alpha_1, \qquad \beta_i := \beta_{i-1}\,\frac{\alpha_i}{|\lambda_i|} \quad \text{for } i = 2,\ldots,n-1. $$
Assume $\hat\alpha := \min_{1 \le i \le n-1} \beta_i$ occurs at the index $j$ (this is the quantity s in the MATLAB program on slide 13), and define $\rho := |\lambda_1 \lambda_n| / \hat\alpha$. Then the following three sets of inequalities are true, so each of the three smaller data sets satisfies the Weyl-Horn conditions:
- $\hat\alpha \ge |\lambda_1| \ge |\lambda_n| \ge \rho$, with $\hat\alpha \rho = |\lambda_1 \lambda_n|$ (the 2-by-2 corner data $\{\hat\alpha, \rho\}$ and $\{\lambda_1, \lambda_n\}$);
- $\alpha_1, \ldots, \alpha_j$ and $\hat\alpha, \lambda_2, \ldots, \lambda_j$ (a problem of size j);
- $\alpha_{j+1}, \ldots, \alpha_n$ and $\lambda_{j+1}, \ldots, \lambda_{n-1}, \rho$ (a problem of size n-j).
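In MATLAB the dividing step looks as follows (a fragment adapted from the program on slide 13; the names sv0/ev0, sv1/ev1, sv2/ev2 for the three data sets are introduced here, and alpha, lambda are assumed to be row vectors of positive singular values and nonzero eigenvalues).

% One dividing step: split (alpha, lambda) of size n at the index j.
j = 1;  s = alpha(1);  temp = s;
for i = 2:n-1
    temp = temp*alpha(i)/abs(lambda(i));    % temp is beta_i
    if temp < s, j = i; s = temp; end       % s = min beta_i, attained at j
end
rho = abs(lambda(1)*lambda(n))/s;
sv0 = [s rho];       ev0 = [lambda(1) lambda(n)];      % 2-by-2 corner data
sv1 = alpha(1:j);    ev1 = [s lambda(2:j)];            % subproblem of size j
sv2 = alpha(j+1:n);  ev2 = [lambda(j+1:n-1) rho];      % subproblem of size n-j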
11 By the induction hypothesis:
- There exist unitary matrices $U_1, V_1 \in \mathbb{C}^{j \times j}$ and an upper triangular matrix $A_1$ with diagonal entries $\hat\alpha, \lambda_2, \ldots, \lambda_j$ such that
$$ U_1 \,\mathrm{diag}(\alpha_1, \ldots, \alpha_j)\, V_1^{*} = A_1. $$
- There exist unitary matrices $U_2, V_2 \in \mathbb{C}^{(n-j) \times (n-j)}$ and an upper triangular matrix $A_2$ with diagonal entries $\lambda_{j+1}, \ldots, \lambda_{n-1}, \rho$ such that
$$ U_2 \,\mathrm{diag}(\alpha_{j+1}, \ldots, \alpha_n)\, V_2^{*} = A_2. $$
In other words, $A_1$ has singular values $\alpha_1, \ldots, \alpha_j$ and eigenvalues $\hat\alpha, \lambda_2, \ldots, \lambda_j$, while $A_2$ has singular values $\alpha_{j+1}, \ldots, \alpha_n$ and eigenvalues $\lambda_{j+1}, \ldots, \lambda_{n-1}, \rho$.
12 Horn's claim: the block diagonal matrix $\mathrm{diag}(A_1, A_2)$ can be permuted into an upper triangular matrix with diagonal
$$ \hat\alpha,\ \lambda_2,\ \ldots,\ \lambda_j,\ \lambda_{j+1},\ \ldots,\ \lambda_{n-1},\ \rho, $$
so that $\hat\alpha$ sits in the $(1,1)$ position and $\rho$ in the $(n,n)$ position. The corner can then be glued together by the 2-by-2 building block
$$ U_0 \begin{bmatrix} \hat\alpha & 0 \\ 0 & \rho \end{bmatrix} V_0^{*} \;=\; A_0 \;=\; \begin{bmatrix} \lambda_1 & \mu \\ 0 & \lambda_n \end{bmatrix}. $$
But how is the permutation to be carried out, or is it a mistake? It takes more than a permutation to rearrange the diagonal of a triangular matrix.
13 A MATLAB Program

function [A] = svd_eig(alpha, lambda)
% Construct A with prescribed singular values alpha (row vector, descending)
% and eigenvalues lambda (row vector, descending modulus) satisfying the
% Weyl-Horn conditions.
n = length(alpha);
if n == 1                                 % the 1-by-1 case
   A = [lambda(1)];
elseif n == 2                             % the 2-by-2 case
   [U, V, A] = two_by_two(alpha, lambda);
else                                      % check for zero singular values
   tol = n*alpha(1)*eps;
   k = sum(alpha > tol);                  % number of nonzero singular values
   m = sum(abs(lambda) > tol);            % number of nonzero eigenvalues
   if k == n                              % all singular values are nonzero
      j = 1; s = alpha(1); temp = s;
      for i = 2:n-1                       % s = min beta_i, attained at index j
         temp = temp*alpha(i)/abs(lambda(i));
         if temp < s, j = i; s = temp; end
      end
      rho = abs(lambda(1)*lambda(n))/s;
      [U0, V0, A0] = two_by_two([s rho], [lambda(1) lambda(n)]);
      %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
      [A1] = svd_eig(alpha(1:j), [s lambda(2:j)]);            % RECURSIVE %
      [A2] = svd_eig(alpha(j+1:n), [lambda(j+1:n-1) rho]);    % CALLING   %
      %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
      A = [A1, zeros(j,n-j); zeros(n-j,j), A2];
      Temp = A;                           % mix the first and last rows by U0
      A(1,:) = U0(1,1)*Temp(1,:) + U0(1,2)*Temp(n,:);
      A(n,:) = U0(2,1)*Temp(1,:) + U0(2,2)*Temp(n,:);
      Temp = A;                           % mix the first and last columns by V0
      A(:,1) = V0(1,1)*Temp(:,1) + V0(1,2)*Temp(:,n);
      A(:,n) = V0(2,1)*Temp(:,1) + V0(2,2)*Temp(:,n);
   else                                   % zero singular values: alpha(k+1:n) = 0, m <= k
      beta = prod(abs(lambda(1:m)))/prod(alpha(1:m-1));
      [A1] = svd_eig([alpha(1:m-1) beta], lambda(1:m));
      [W, S, Q] = svd(A1);                % beta is the smallest singular value of A1
      A = zeros(n);
      A(1:m,1:m) = W'*A1*W;               % unitary similarity aligning beta's left singular
                                          % vector with e_m (reconstructed step)
      for i = m+1:k, A(i,i+1) = alpha(i); end
      A(m,m+1) = sqrt(abs(alpha(m)^2 - beta^2));
   end
end
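A simple way to exercise the program (not part of the original slides): take the eigenvalues and singular values of any matrix, which automatically satisfy the Weyl-Horn conditions, and feed them back in. The program expects row vectors, singular values in descending order and eigenvalues in descending order of modulus; a symmetric test matrix keeps all quantities real in this sketch.

B = randn(6);  B = (B + B')/2;               % symmetric, so the spectrum is real
lambda = eig(B);
[~, idx] = sort(abs(lambda), 'descend');
lambda = lambda(idx).';                      % row vector, |lambda_1| >= ... >= |lambda_n|
alpha  = svd(B).';                           % row vector, already in descending order
A = svd_eig(alpha, lambda);                  % generally nonsymmetric, same eigenvalues and singular values
disp(norm(sort(real(eig(A))) - sort(lambda.')))   % eigenvalue discrepancy (imaginary parts at roundoff level)
disp(norm(svd(A) - alpha.'))                      % singular value discrepancy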
14 Correcting that "Mistake"
Horn's requirement: both intermediate matrices $A_1$ and $A_2$ are upper triangular, with their diagonal entries arranged in a certain order.
- This is valid by the Schur decomposition theorem.
- But it takes more than a permutation, and it is not easy to compute.
- Rearranging diagonal entries via unitary similarity transformations while maintaining the upper triangular structure is expensive.
Our contribution:
- The triangular structure is entirely unnecessary; the matrix A produced by our algorithm is generally not triangular.
- There is no need to rearrange the diagonal entries.
- Modifying the first and the last rows and columns of the block diagonal matrix $\mathrm{diag}(A_1, A_2)$, as if nothing had happened, is enough.
15 Algorithm: Denote $U_0 = [u^0_{st}]$ and $V_0 = [v^0_{st}]$. Then
$$ A \;=\; \begin{bmatrix} u^0_{11} & 0 & u^0_{12} \\ 0 & I_{n-2} & 0 \\ u^0_{21} & 0 & u^0_{22} \end{bmatrix} \begin{bmatrix} A_1 & 0 \\ 0 & A_2 \end{bmatrix} \begin{bmatrix} v^0_{11} & 0 & v^0_{12} \\ 0 & I_{n-2} & 0 \\ v^0_{21} & 0 & v^0_{22} \end{bmatrix}^{*} $$
is the desired matrix. Its diagonal carries $\lambda_1, \lambda_2, \ldots, \lambda_{n-1}, \lambda_n$, and its entries fall into three groups:
- entries in the interior rows and columns, unchanged from $A_1$ or $A_2$;
- entries of $A_1$ or $A_2$ in the first and last rows and columns, modified only by scalar multiplications;
- a few possible new entries, in the first and last rows and columns, that were originally zero.
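Equivalently, the corner modification can be written with explicitly embedded n-by-n matrices (a fragment; the names Uhat and Vhat are introduced here, and for real data this reproduces exactly the row and column updates performed in the program on slide 13).

Uhat = eye(n);  Uhat([1 n], [1 n]) = U0;    % U0 spread into the four corner positions
Vhat = eye(n);  Vhat([1 n], [1 n]) = V0;
A = Uhat * blkdiag(A1, A2) * Vhat';         % first/last rows and columns are mixed;
                                            % the interior of A1 and A2 is untouched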
16 A Variation of Horn's Proof
Does the algorithm really work? Clearly, A has singular values $\{\alpha_1, \ldots, \alpha_n\}$, since A is obtained from $\mathrm{diag}(A_1, A_2)$ by unitary multiplications on the left and right. We need to show that A has eigenvalues $\{\lambda_1, \ldots, \lambda_n\}$. What has been changed?
(P1) The diagonal entries of $A_1$ and $A_2$ appear in fixed orders, namely $\hat\alpha, \lambda_2, \ldots, \lambda_j$ and $\lambda_{j+1}, \ldots, \lambda_{n-1}, \rho$, respectively.
(P2) Each $A_i$ is similar, through permutations that need not be known, to a lower triangular matrix whose diagonal entries constitute the same set as the diagonal entries of $A_i$. (Thus each $A_i$ has precisely its own diagonal entries as its eigenvalues.)
(P3) The first row and the last row have the same zero pattern, except that the lower-left corner is always zero.
(P4) The first column and the last column have the same zero pattern, except that the lower-left corner is always zero.
Graph theory is used to show that the affixed matrix A has exactly the same properties.
17 A Symbolic Example
Dividing process: starting from the full data $(\alpha_1, \ldots, \alpha_n;\ \lambda_1, \ldots, \lambda_n)$, each step splits off a 2-by-2 corner problem and two smaller problems of sizes $j$ and $n-j$, and the splitting is repeated until only 2-by-2 and 1-by-1 problems remain.
18 Conquering process: the explicitly solvable 2-by-2 and 1-by-1 pieces are assembled back up the recursion tree; at each step two blocks are glued together by a 2-by-2 corner modification, until the full n-by-n matrix is recovered.
19 Computational Cost
The divide-and-conquer feature brings fast computation. Each step of the recursion costs only $O(n)$ operations (the splitting loop and the corner update, which touches two rows and two columns), so the cost satisfies $T(n) = T(j) + T(n-j) + O(n)$ and the overall cost is estimated at the order of $O(n^2)$. A numerical experiment:
Figure 1: log-log plot of computational flops versus problem size.
20 Rosser Test
The Rosser matrix R is the 8-by-8 symmetric matrix

R = [  611   196  -192   407    -8   -52   -49    29
       196   899   113  -192   -71   -43    -8   -44
      -192   113   899   196    61    49     8    52
       407  -192   196   611     8    44    59   -23
        -8   -71    61     8   411  -599   208   208
       -52   -43    49    44  -599   411   208   208
       -49    -8     8    59   208   208    99  -911
        29   -44    52   -23   208   208  -911    99 ].

It has one double eigenvalue, three nearly equal eigenvalues, one zero eigenvalue, two dominant eigenvalues of opposite sign, and one small nonzero eigenvalue. The computed eigenvalues of R are approximately
$$ \lambda \approx \{-1020.049,\; 1020.049,\; 1020.000,\; 1019.902,\; 1000.000,\; 1000.000,\; 0.098,\; 0\}, $$
and, since R is symmetric, the computed singular values $\alpha$ are the absolute values of these (the theoretically zero eigenvalue and singular value come out at roundoff level).
21 Using the above $\lambda$ and $\alpha$ as input, the algorithm produces a nonsymmetric matrix A. The recomputed eigenvalues and singular values of A agree with those of R up to machine accuracy.
22 Wilkinson Test
Wilkinson's matrices are all symmetric and tridiagonal, and they have pairs of eigenvalues that are nearly, but not exactly, equal. Using these data, we measure the discrepancy in eigenvalues and in singular values between our constructed matrices and Wilkinson's matrices.
Figure 2: L2 norm of the discrepancy in eigenvalues (*) and in singular values (o), plotted on a semilog scale against the size of the Wilkinson matrix.
23 The matrices constructed are nearly, but not exactly, symmetric.
Figure 3: mesh representation of the Wilkinson matrix of size 21 and of the constructed matrix.
24 Conclusion
- The Weyl-Horn theorem completely characterizes the relationship between the eigenvalues and singular values of a general matrix.
- The original proof has been modified.
- With the aid of programming languages that allow a subprogram to invoke itself recursively, an induction proof can be implemented as a recursive algorithm.
- The resulting algorithm is fast: the cost of construction is approximately $O(n^2)$.
- The matrix being constructed is usually not symmetric, and it is complex-valued if complex eigenvalues are present.
- Numerical experiments on some very challenging problems suggest that our method is quite robust.