Real Eigenvalue Extraction and the Distance to Uncontrollability


1 Real Eigenvalue Extraction and the Distance to Uncontrollability

Emre Mengi
Computer Science Department
Courant Institute of Mathematical Sciences
New York University

May 22nd, 2006

2 Real Eigenvalue Problems

In control theory and numerical linear algebra various algorithms require the real, imaginary or unit eigenvalues of large matrices.

Distance to uncontrollability of a pair (A, B) with A ∈ C^{n×n} and B ∈ C^{n×m}: Gu's method and its variants depend on the extraction of the real eigenvalues of problems of size O(n²), i.e. they look for real α such that

    det(I ⊗ F(α) − F(α + η)^T ⊗ I) = 0

where

    F(α) = [ B̂         A − αI
             A − αI     δI ]

3 Sep-λ of a pair (A, B) with A ∈ C^{n×n} and B ∈ C^{p×p}: Gu and Overton's method seeks real x such that

    det(I ⊗ G_A(x) − G_B(x)^T ⊗ I) = 0

where

    G_A(x) = [ A − xI    −εI
               εI        xI − A^* ]

and G_B(x) is defined analogously in terms of B.

Hamiltonian eigenvalue problems: Hamiltonian matrices appear frequently in control theory. A Hamiltonian matrix H ∈ C^{2n×2n} satisfying the property

    HJ = (HJ)^*,    J = [ 0    I
                          −I   0 ],

or a Hamiltonian pencil H_1 − λH_2 with the property H_1 J H_2^T = H_2 J H_1^T, has eigenvalues in pairs (λ, −λ̄). Often the extraction of the imaginary eigenvalues of Hamiltonian problems is a crucial step for numerical algorithms.

4 Symplectic eigenvalue problems: A symplectic pencil S_1 − λS_2 with S_1, S_2 ∈ C^{2n×2n} satisfying

    S_1 J S_1^T = S_2 J S_2^T

has eigenvalues in pairs (λ, 1/λ̄). The unit eigenvalues of symplectic problems are the ones of interest in applications.

The symplectic pencil S_1 − λS_2 can be mapped to a Hamiltonian pencil via a Cayley transformation:

    H_1 − λH_2 = (S_2 + S_1) − (S_2 − S_1)λ.
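
The Cayley mapping can be sanity-checked numerically. Below is a minimal Python sketch, assuming scipy is available; building the symplectic pencil (S_1, I) from a matrix exponential is an illustrative choice of ours, not part of the talk.

import numpy as np
from scipy.linalg import expm, eig

n = 3
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])
K = np.random.randn(2 * n, 2 * n)
K = (K + K.T) / 2
S1 = expm(J @ K)      # S1 @ J @ S1.T == J, so (S1, I) is a symplectic pencil
S2 = np.eye(2 * n)

H1 = S2 + S1          # Cayley transform of the pencil S1 - lambda*S2
H2 = S2 - S1

lam = eig(S1, S2, right=False)
mu = eig(H1, H2, right=False)
# each lambda maps to mu = (1 + lambda)/(1 - lambda), so a unit-modulus
# lambda (lambda != 1) lands on the imaginary axis
print(np.allclose(np.sort_complex((1 + lam) / (1 - lam)), np.sort_complex(mu)))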

5 Overview

A divide and conquer technique for real eigenvalue extraction
  - Algorithm description
  - Average case and worst case performance
  - Practical issues

Distance to uncontrollability
  - Problem definition
  - A variant of Gu's algorithm
  - Application of the real eigenvalue extraction

6 A Divide and Conquer Algorithm

The algorithm is based on an efficient and accurate computation of the eigenvalue of X ∈ C^{p×p} closest to a given point in the complex plane.

The closest eigenvalue can be computed via inverse iteration or shift-and-invert Arnoldi, provided that, given a vector v ∈ C^p, the multiplication

    u = (X − νI)^{-1} v

can be performed efficiently.
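
In Python this step reduces to one call into ARPACK; a minimal sketch, assuming scipy's wrapper and a matrix X for which the shifted solves are affordable (the function name is ours):

import numpy as np
from scipy.sparse.linalg import eigs

def closest_eigenvalue(X, nu):
    # shift-and-invert Arnoldi (ARPACK via scipy): the eigenvalue of X
    # closest to nu is the dominant eigenvalue of (X - nu*I)^{-1};
    # ARPACK requires k < p - 1, so this assumes p > 2
    vals = eigs(X, k=1, sigma=nu, return_eigenvectors=False)
    return vals[0]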

7 A Divide and Conquer Algorithm

To extract the real eigenvalues of X contained in an interval I = [l, u]:
  - compute the eigenvalue λ* closest to the midpoint ν_m of I (add λ* to the set of desired eigenvalues in case λ* is real),
  - repeat the same procedure on the left interval I_l = [l, ν_m − |λ* − ν_m|] and on the right interval I_r = [ν_m + |λ* − ν_m|, u].

8 Initially call Divide_And_Conquer(X, −D, D) where D is an upper bound on the norm of X.

Divide_And_Conquer(X, l, u)
  [1.] Set the shift ν_m ← (u + l)/2.
  [2.] Compute the eigenvalue λ* closest to the shift ν_m.
  if u − l < 2|λ* − ν_m| then
      % Base case: there is no real eigenvalue in the interval [l, u].
      Return ∅.
  else
      % Recursive case: search the left and right intervals.
      Λ_l ← Divide_And_Conquer(X, l, ν_m − |λ* − ν_m|)
      Λ_r ← Divide_And_Conquer(X, ν_m + |λ* − ν_m|, u)
      if λ* is real then
          Return {λ*} ∪ Λ_l ∪ Λ_r.
      else
          Return Λ_l ∪ Λ_r.
      end if
  end if
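
A runnable transcription of the recursion, reusing the shift-and-invert call above; the tolerance, the realness test, and the endpoint deduplication are our own choices, and the shift is assumed never to hit an eigenvalue exactly:

import numpy as np
from scipy.sparse.linalg import eigs

def divide_and_conquer(X, l, u, tol=1e-8):
    # real eigenvalues of X in [l, u]; an eigenvalue that lands exactly on a
    # subinterval endpoint may be found twice, hence the deduplication below
    if u - l <= tol:
        return []
    nu = (l + u) / 2.0
    lam = eigs(X, k=1, sigma=nu, return_eigenvectors=False)[0]
    r = abs(lam - nu)
    if u - l < 2 * r:
        return []                       # base case: the disk around nu covers [l, u]
    found = [lam.real] if abs(lam.imag) <= tol else []
    return (divide_and_conquer(X, l, nu - r, tol) + found +
            divide_and_conquer(X, nu + r, u, tol))

def real_eigenvalues(X, D, tol=1e-8):
    vals = sorted(divide_and_conquer(X, -D, D, tol))
    out = []
    for v in vals:                      # merge duplicates from shared endpoints
        if not out or v - out[-1] > tol:
            out.append(v)
    return out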


11 A Divide and Conquer Algorithm

Worst case: the algorithm can iterate at most 2p + 1 times.
  - The progress of the algorithm can be represented by a full binary tree with at most p + 1 leaves.
  - A full binary tree with p + 1 leaves has p internal nodes.
  - Each node of the tree corresponds to a closest eigenvalue computation.

12 A Divide and Conquer Algorithm

Average case: under the assumption that the eigenvalues are uniformly distributed, on average the number of closest eigenvalue computations is O(√p).

Let E_j(X) denote the expected number of iterations given that there are j eigenvalues inside the circle of radius D.

Theorem 1 (Average case for the divide and conquer algorithm). Suppose the eigenvalues of the matrices input to the divide and conquer algorithm are selected uniformly and independently inside the circle of radius µ > D. Then the expectation E_j(X) is bounded above by c√(j + d) − 1 for all c ≥ 12 and d ∈ [4/c², 1/3].

13 A Divide and Conquer Algorithm

Practical issues:
  - We use ARPACK, an implementation of the Arnoldi method, for the computation of the closest eigenvalue.
  - There is no result guaranteeing that shift-and-invert Arnoldi converges to the closest eigenvalue, but in practice this is almost always the case. Convergence problems are observed especially when the norm of X is large.
  - Choosing D very large does not alter the number of closest eigenvalue computations significantly, but it may cause convergence problems for ARPACK.

14 Distance to Uncontrollability

Given a continuous first-order dynamical system with coefficient matrices A ∈ C^{n×n} and B ∈ C^{n×m},

    ẋ = Ax + Bu,    x(0) = c_0,

where x : R → C^n and u : R → C^m are the state and input functions of time, respectively. We have full control over the system if it is controllable, that is, if the system can be driven into any state at any given time τ.

Equivalent characterizations (see the sketch below):

    rank [B  AB  A²B ...  A^{n−1}B] = n,

or

    ∀λ ∈ C,  rank [A − λI  B] = n.
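
The first (Kalman rank) characterization is straightforward to test numerically; a small numpy sketch, with function name and tolerance our own:

import numpy as np

def is_controllable(A, B, tol=1e-8):
    # Kalman rank test: (A, B) is controllable iff rank [B, AB, ..., A^{n-1}B] = n
    n = A.shape[0]
    blocks, M = [B], B
    for _ in range(n - 1):
        M = A @ M
        blocks.append(M)
    return np.linalg.matrix_rank(np.hstack(blocks), tol=tol) == n

The yes/no answer of this test is fragile under perturbations of (A, B), which is precisely what motivates the quantitative distance on the next slide.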

15 Distance to Uncontrollability

Norm of the smallest perturbation yielding an uncontrollable system (singular value characterization due to Eising):

    τ(A, B) = inf{ ‖[ΔA  ΔB]‖ : (A + ΔA, B + ΔB) is uncontrollable }
            = inf_{λ ∈ C} σ_n [A − λI  B].
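
The function under the infimum is cheap to evaluate at any one point; a small sketch (names ours):

import numpy as np

def sigma_n(A, B, lam):
    # n-th largest singular value of the n x (n+m) matrix [A - lam*I, B];
    # tau(A, B) is its infimum over all complex lam (Eising)
    n = A.shape[0]
    M = np.hstack([A - lam * np.eye(n), B])
    return np.linalg.svd(M, compute_uv=False)[n - 1]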

16 Distance to Uncontrollability

Trisection algorithm by Burke, Lewis and Overton for arbitrary precision (a variant of the bisection algorithm introduced by Gu); see the sketch below.

1. Initialize U = σ_n [A  B] and L = 0.
2. If U − L < tol, terminate with L and U.
3. Set δ_1 = L + (2/3)(U − L) and δ_2 = L + (1/3)(U − L).
4. If a (δ_1, η)-pair with η = 2(δ_1 − δ_2) exists, update the upper bound U = δ_1. Otherwise update the lower bound L = δ_2.
5. Go to step 2.
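
The trisection loop itself is a few lines once the verification is available; in the sketch below, exists_pair(δ, η) is an assumed callback standing in for the test developed on the following slides:

import numpy as np

def trisection(A, B, exists_pair, tol=1e-4):
    n = A.shape[0]
    U = np.linalg.svd(np.hstack([A, B]), compute_uv=False)[n - 1]
    L = 0.0
    while U - L >= tol:                 # each pass shrinks U - L by a factor 2/3
        d1 = L + 2.0 * (U - L) / 3.0
        d2 = L + (U - L) / 3.0
        if exists_pair(d1, 2.0 * (d1 - d2)):
            U = d1                      # a pair exists: tau(A, B) <= delta_1
        else:
            L = d2                      # no pair: tau(A, B) > delta_2
    return L, U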

17 (δ, η)-pair: a pair of complex points z, z + η (with η > 0 real) such that

    σ_min [A − zI  B] = σ_min [A − (z + η)I  B] = δ.

18 Distance to Uncontrollability

A (δ, η)-pair of (A, B) exists: using the definition τ(A, B) = inf_{λ ∈ C} σ_n [A − λI  B], we deduce

    δ_1 = δ ≥ τ(A, B).

No (δ, η)-pair of (A, B) exists: the result below, based on the fact that singular values are well conditioned, implies η > 2(δ − τ(A, B)), meaning

    τ(A, B) > δ − η/2 = δ_2.

19 Theorem 2. Let δ ≥ τ. For all ξ ∈ [0, 2(δ − τ)], there exists a pair of real numbers (α, β) such that

    σ_n [A − (α + βi)I  B] = δ,
    σ_n [A − (α + ξ + βi)I  B] = δ.

20 Distance to Uncontrollability

Checking the existence of a (δ, η)-pair: The rectangular matrix [A − (α + βi)I  B] has δ as a singular value if and only if βi is an eigenvalue of the Hamiltonian matrix

    H(α) = [ αI − A^*        δI
             δI − BB^*/δ     A − αI ].

If the points α + βi and α + η + βi form a (δ, η)-pair, the matrices H(α) and H(α + η) must share an imaginary eigenvalue, namely βi, meaning that for some X ≠ 0

    H(α)X − XH(α + η) = 0    ⟺    (I ⊗ H(α) − H(α + η)^T ⊗ I) vec(X) = 0.

This can be converted into a standard eigenvalue problem 𝒜v = αv of size 2n² whose real eigenvalues we seek.
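
For small n this verification can be carried out densely; a sketch using the form of H(α) above (treat the block signs as our reading of the slide):

import numpy as np

def H(A, B, delta, alpha):
    # Hamiltonian matrix: beta*i is an eigenvalue of H(alpha) iff delta is a
    # singular value of [A - (alpha + beta*i) I, B]
    n = A.shape[0]
    I = np.eye(n)
    return np.block([[alpha * I - A.conj().T, delta * I],
                     [delta * I - (B @ B.conj().T) / delta, A - alpha * I]])

def share_imaginary_eig(A, B, delta, alpha, eta, tol=1e-8):
    # do H(alpha) and H(alpha + eta) share an eigenvalue i*beta?
    e1 = np.linalg.eigvals(H(A, B, delta, alpha))
    e2 = np.linalg.eigvals(H(A, B, delta, alpha + eta))
    im1 = e1[np.abs(e1.real) < tol].imag
    im2 = e2[np.abs(e2.real) < tol].imag
    if im1.size == 0 or im2.size == 0:
        return False
    return any(np.abs(im2 - b).min() < tol for b in im1)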

21 Distance to Uncontrollability

Checking the existence of a (δ, η)-pair:
1. Extract the real eigenvalues of 𝒜.
2. For each real eigenvalue α, check whether H(α) and H(α + η) share an imaginary eigenvalue.
3. If both 1 and 2 succeed, the existence is verified. Otherwise the test fails.

The complexity of the method is typically O(n⁶) when the QR algorithm is used to compute the eigenvalues of 𝒜.

22 Distance to Uncontrollability

The multiplication v = (𝒜 − νI)^{-1} u can be performed efficiently. Reversing the process from which the eigenvalue problem is obtained yields the Sylvester equation

    [ A − νI   BB^*/δ − δI ] [ V_1  W_1 ]   [ V_1  W_1 ] [ A − (η+ν)I   BB^*/δ − δI ]       [ U_1  0   ]
    [ δI       νI − A^*    ] [ W_2  V_2 ] + [ W_2  V_2 ] [ δI           (η+ν)I − A^* ]  =  2 [ 0    U_2 ].

The right-hand side is provided by the vector u = [vec(U_1); vec(U_2)] ∈ C^{2n²} onto which (𝒜 − νI)^{-1} is applied.

The product v = [vec(V_1); vec(V_2)] ∈ C^{2n²} can be obtained from the upper left and lower right blocks of the solution of the Sylvester equation.
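
In code, one shifted matvec then costs a single dense Sylvester solve, i.e. O(n³); a scipy sketch in which the block signs and the factor 2 follow the reconstruction above and should be treated as assumptions:

import numpy as np
from scipy.linalg import solve_sylvester

def shifted_inverse_matvec(A, B, delta, eta, nu, u):
    # apply (script-A - nu*I)^{-1} to u = [vec(U1); vec(U2)] by solving the
    # 2n x 2n Sylvester equation instead of factoring the 2n^2 x 2n^2 matrix
    n = A.shape[0]
    I = np.eye(n)
    U1 = u[:n * n].reshape(n, n, order='F')      # vec() stacks columns
    U2 = u[n * n:].reshape(n, n, order='F')
    Bh = (B @ B.conj().T) / delta - delta * I
    M1 = np.block([[A - nu * I, Bh],
                   [delta * I, nu * I - A.conj().T]])
    M2 = np.block([[A - (eta + nu) * I, Bh],
                   [delta * I, (eta + nu) * I - A.conj().T]])
    Q = 2 * np.block([[U1, np.zeros((n, n))],
                      [np.zeros((n, n)), U2]])
    Y = solve_sylvester(M1, M2, Q)               # solves M1 @ Y + Y @ M2 = Q
    V1, V2 = Y[:n, :n], Y[n:, n:]                # upper-left and lower-right blocks
    return np.concatenate([V1.flatten(order='F'), V2.flatten(order='F')])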

23 Distance to Uncontrollability

Under the assumption that each closest eigenvalue computation requires only a constant number of matrix-vector multiplications, the cost of a closest eigenvalue computation is O(n³).

When the real eigenvalue extraction technique is used, the overall average running time reduces to O(n⁴) and the worst case running time to O(n⁵).

24 Distance to Uncontrollability

Comparison of the running times on normally distributed matrices.

Old method, with QR to compute the eigenvalues of 𝒜:

    (n, m)      (Lower bound, Upper bound]    Running time
    (10, 5)     ( , ]                         76
    (25, 10)    ( , ]
    (30, 23)    ( , ]

New method, with shifted inverse iteration to compute the eigenvalues of 𝒜:

    (n, m)      (Lower bound, Upper bound]    Running time    ave. calls to eigs
    (10, 5)     ( , ]
    (25, 10)    ( , ]
    (30, 23)    ( , ]
    (40, 10)    ( , ]
    (45, 40)    ( , ]
    (50, 13)    ( , ]

25–26 [Figures: eigenvalue distributions of 𝒜 for a normal matrix pair (n = 5, m = 3) for various values of η and δ]

27 Distance to Uncontrollability

Accuracy issues: For more precision η needs to be set smaller. As η goes to zero, the norm of 𝒜 approaches ∞.

[Table: norm of 𝒜 for a normally distributed pair and for various η]

Therefore the eigenvalues of 𝒜 cannot be computed accurately in an absolute sense:

    δλ = O(κ_λ(𝒜) ‖𝒜‖ ε_mach).

When the norm of 𝒜 is large, inverse iteration and Arnoldi have convergence problems, since the matrix-vector multiplications cannot be performed accurately.

28 Other applications

The complexity of the algorithm by Gu and Overton for the sep-λ of a pair A ∈ C^{n×n} and B ∈ C^{p×p} reduces from O(n⁷) to O(n⁵) on average, where

    sep_λ(A, B) = inf{ max(‖ΔA‖, ‖ΔB‖) : ΔA ∈ C^{n×n}, ΔB ∈ C^{p×p} s.t. A + ΔA and B + ΔB share an eigenvalue }.

The computation of the distance to instability of a large and sparse matrix can be improved whenever the sparse LU decomposition can be performed efficiently, where

    γ(A) = inf{ ‖ΔA‖ : ΔA ∈ C^{n×n} s.t. A + ΔA has an unstable eigenvalue }.
