SOLUTION OF SPECIALIZED SYLVESTER EQUATION

Ondra Kamenik


Given the following matrix equation

$$AX + BX\left(\otimes^i C\right) = D,$$

where $A$ is a regular $n\times n$ matrix, $X$ is an $n\times m^i$ matrix of unknowns, $B$ is a singular $n\times n$ matrix, $C$ is an $m\times m$ regular matrix with $\beta(C) < 1$ (i.e. the modulus of its largest eigenvalue is less than one), $i$ is the order of the Kronecker product, and finally $D$ is an $n\times m^i$ matrix.

First we multiply the equation from the left by $A^{-1}$ to obtain:

$$X + A^{-1}BX\left(\otimes^i C\right) = A^{-1}D$$

Then we find the real Schur decompositions $K = UA^{-1}BU^T$ and $F = VCV^T$. The equation can be written as

$$UX\left(\otimes^i V^T\right) + KUX\left(\otimes^i V^T\right)\left(\otimes^i F\right) = UA^{-1}D\left(\otimes^i V^T\right)$$

This can be rewritten as $Y + KY\left(\otimes^i F\right) = \tilde D$, and vectorized as

$$\left(I + \otimes^i F^T\otimes K\right)\operatorname{vec}Y = \operatorname{vec}\tilde D$$

Let $\otimes^i F$ denote $\otimes^i F^T$ for the rest of the text.
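To make the reduction concrete, the following is a minimal numerical sketch (my own, in Python with numpy/scipy; the sizes and random test matrices are illustrative assumptions, not part of the paper). It builds the vectorized system $\left(I + \otimes^i F^T\otimes K\right)\operatorname{vec}Y = \operatorname{vec}\tilde D$ explicitly and recovers $X$; a real implementation never forms the Kronecker matrix, which is precisely what the recursive algorithm below avoids.

    import numpy as np
    from scipy.linalg import schur

    rng = np.random.default_rng(0)
    n, m, i = 4, 3, 2

    A = rng.standard_normal((n, n)) + n * np.eye(n)   # regular
    B = rng.standard_normal((n, n))                   # may be singular
    C = 0.4 * rng.standard_normal((m, m))             # beta(C) < 1 (very likely here)
    D = rng.standard_normal((n, m**i))

    def kron_power(M, k):
        # computes the k-fold Kronecker power of M; the empty product is the 1x1 identity
        out = np.eye(1)
        for _ in range(k):
            out = np.kron(out, M)
        return out

    # X + A^{-1}B X (kron^i C) = A^{-1} D
    Ainv_B, Ainv_D = np.linalg.solve(A, B), np.linalg.solve(A, D)

    # real Schur decompositions K = U A^{-1}B U^T and F = V C V^T
    K, Ut = schur(Ainv_B, output='real')              # Ainv_B = Ut @ K @ Ut.T
    F, Vt = schur(C, output='real')
    U, V = Ut.T, Vt.T

    D_til = U @ Ainv_D @ kron_power(V.T, i)

    # (I + kron^i F^T (x) K) vec Y = vec D_til, with column-major (Fortran) vec
    lhs = np.eye(n * m**i) + np.kron(kron_power(F.T, i), K)
    Y = np.linalg.solve(lhs, D_til.flatten(order='F')).reshape((n, m**i), order='F')
    X = U.T @ Y @ kron_power(V, i)
    print(np.max(np.abs(A @ X + B @ X @ kron_power(C, i) - D)))   # ~ 1e-14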

Lemma 1. For any $n\times n$ matrix $A$ and $\beta_1\beta_2 > 0$, if there is exactly one solution of

$$\left(I_2\otimes I_n + \begin{pmatrix}\alpha & \beta_1\\ -\beta_2 & \alpha\end{pmatrix}\otimes A\right)\begin{pmatrix}x_1\\ x_2\end{pmatrix} = \begin{pmatrix}d_1\\ d_2\end{pmatrix},$$

then it can be obtained as the solution of

$$\begin{aligned}\left(I_n + 2\alpha A + (\alpha^2+\beta^2)A^2\right)x_1 &= \hat d_1\\ \left(I_n + 2\alpha A + (\alpha^2+\beta^2)A^2\right)x_2 &= \hat d_2\end{aligned}$$

where $\beta = \sqrt{\beta_1\beta_2}$, and

$$\begin{pmatrix}\hat d_1\\ \hat d_2\end{pmatrix} = \left(I_2\otimes I_n + \begin{pmatrix}\alpha & -\beta_1\\ \beta_2 & \alpha\end{pmatrix}\otimes A\right)\begin{pmatrix}d_1\\ d_2\end{pmatrix}$$

Proof. Since

$$\begin{pmatrix}\alpha & -\beta_1\\ \beta_2 & \alpha\end{pmatrix}\begin{pmatrix}\alpha & \beta_1\\ -\beta_2 & \alpha\end{pmatrix} = \begin{pmatrix}\alpha^2+\beta_1\beta_2 & 0\\ 0 & \alpha^2+\beta_1\beta_2\end{pmatrix} = (\alpha^2+\beta^2)I_2,$$

it is easy to see that we obtain the result if the equation is multiplied by

$$I_2\otimes I_n + \begin{pmatrix}\alpha & -\beta_1\\ \beta_2 & \alpha\end{pmatrix}\otimes A.$$

We only need to prove that this matrix is regular. But this is clear, because the matrix $\begin{pmatrix}\alpha & -\beta_1\\ \beta_2 & \alpha\end{pmatrix}$ collapses an eigenvalue of $A$ to $-1$ iff the matrix $\begin{pmatrix}\alpha & \beta_1\\ -\beta_2 & \alpha\end{pmatrix}$ does.
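Lemma 1 is easy to check numerically. A small sketch of mine (numpy; the values of $\alpha$, $\beta_1$, $\beta_2$ and the test matrix $A$ are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 5
    alpha, beta1, beta2 = 0.3, 0.7, 0.2
    beta = np.sqrt(beta1 * beta2)
    A = 0.4 * rng.standard_normal((n, n))
    d = rng.standard_normal(2 * n)

    G  = np.array([[alpha,  beta1], [-beta2, alpha]])   # block of the lemma
    Gc = np.array([[alpha, -beta1], [ beta2, alpha]])   # its "conjugate"

    x_direct = np.linalg.solve(np.eye(2 * n) + np.kron(G, A), d)

    d_hat = (np.eye(2 * n) + np.kron(Gc, A)) @ d        # (d_hat_1; d_hat_2)
    Q = np.eye(n) + 2 * alpha * A + (alpha**2 + beta**2) * (A @ A)
    x_lemma = np.concatenate([np.linalg.solve(Q, d_hat[:n]),
                              np.linalg.solve(Q, d_hat[n:])])
    print(np.max(np.abs(x_direct - x_lemma)))           # ~ machine precision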

Lemma 2. For any $n\times n$ matrix $A$ and $\delta_1\delta_2 > 0$, if there is exactly one solution of

$$\left(I_2\otimes I_n + 2\alpha\begin{pmatrix}\gamma & \delta_1\\ -\delta_2 & \gamma\end{pmatrix}\otimes A + (\alpha^2+\beta^2)\begin{pmatrix}\gamma & \delta_1\\ -\delta_2 & \gamma\end{pmatrix}^2\otimes A^2\right)\begin{pmatrix}x_1\\ x_2\end{pmatrix} = \begin{pmatrix}d_1\\ d_2\end{pmatrix},$$

it can be obtained as the solution of

$$\begin{aligned}\left(I_n + 2a_1A + (a_1^2+b_1^2)A^2\right)\left(I_n + 2a_2A + (a_2^2+b_2^2)A^2\right)x_1 &= \hat d_1\\ \left(I_n + 2a_1A + (a_1^2+b_1^2)A^2\right)\left(I_n + 2a_2A + (a_2^2+b_2^2)A^2\right)x_2 &= \hat d_2\end{aligned}$$

where

$$\begin{pmatrix}\hat d_1\\ \hat d_2\end{pmatrix} = \left(I_2\otimes I_n + 2\alpha\begin{pmatrix}\gamma & -\delta_1\\ \delta_2 & \gamma\end{pmatrix}\otimes A + (\alpha^2+\beta^2)\begin{pmatrix}\gamma & -\delta_1\\ \delta_2 & \gamma\end{pmatrix}^2\otimes A^2\right)\begin{pmatrix}d_1\\ d_2\end{pmatrix}$$

and

$$\begin{aligned}\delta &= \sqrt{\delta_1\delta_2}\\ a_1 &= \alpha\gamma - \beta\delta,\qquad & b_1 &= \alpha\delta + \gamma\beta\\ a_2 &= \alpha\gamma + \beta\delta,\qquad & b_2 &= \alpha\delta - \gamma\beta\end{aligned}$$

Proof. The matrix can be written as

$$\left(I_2\otimes I_n + (\alpha+i\beta)\begin{pmatrix}\gamma & \delta_1\\ -\delta_2 & \gamma\end{pmatrix}\otimes A\right)\left(I_2\otimes I_n + (\alpha-i\beta)\begin{pmatrix}\gamma & \delta_1\\ -\delta_2 & \gamma\end{pmatrix}\otimes A\right).$$

Note that both matrices are regular, since their product is regular. For the same reason as in the previous proof, the following matrix is also regular:

$$\left(I_2\otimes I_n + (\alpha+i\beta)\begin{pmatrix}\gamma & -\delta_1\\ \delta_2 & \gamma\end{pmatrix}\otimes A\right)\left(I_2\otimes I_n + (\alpha-i\beta)\begin{pmatrix}\gamma & -\delta_1\\ \delta_2 & \gamma\end{pmatrix}\otimes A\right),$$

and we may multiply the equation by this matrix, obtaining $\hat d_1$ and $\hat d_2$. Note that the four matrices commute; that is why we can write the whole product as

$$\left(I_2\otimes I_n + (\alpha+i\beta)\begin{pmatrix}\gamma & \delta_1\\ -\delta_2 & \gamma\end{pmatrix}\otimes A\right)\left(I_2\otimes I_n + (\alpha+i\beta)\begin{pmatrix}\gamma & -\delta_1\\ \delta_2 & \gamma\end{pmatrix}\otimes A\right)\cdot$$
$$\left(I_2\otimes I_n + (\alpha-i\beta)\begin{pmatrix}\gamma & \delta_1\\ -\delta_2 & \gamma\end{pmatrix}\otimes A\right)\left(I_2\otimes I_n + (\alpha-i\beta)\begin{pmatrix}\gamma & -\delta_1\\ \delta_2 & \gamma\end{pmatrix}\otimes A\right) =$$
$$\left(I_2\otimes I_n + 2(\alpha+i\beta)\begin{pmatrix}\gamma & 0\\ 0 & \gamma\end{pmatrix}\otimes A + (\alpha+i\beta)^2\begin{pmatrix}\gamma^2+\delta^2 & 0\\ 0 & \gamma^2+\delta^2\end{pmatrix}\otimes A^2\right)\cdot$$
$$\left(I_2\otimes I_n + 2(\alpha-i\beta)\begin{pmatrix}\gamma & 0\\ 0 & \gamma\end{pmatrix}\otimes A + (\alpha-i\beta)^2\begin{pmatrix}\gamma^2+\delta^2 & 0\\ 0 & \gamma^2+\delta^2\end{pmatrix}\otimes A^2\right)$$

The product is a diagonal consisting of two $n\times n$ blocks, which are the same. The block can be rewritten as the product

$$\left(I_n + (\alpha+i\beta)(\gamma+i\delta)A\right)\left(I_n + (\alpha+i\beta)(\gamma-i\delta)A\right)\left(I_n + (\alpha-i\beta)(\gamma+i\delta)A\right)\left(I_n + (\alpha-i\beta)(\gamma-i\delta)A\right)$$

and after reordering

$$\left(I_n + (\alpha+i\beta)(\gamma+i\delta)A\right)\left(I_n + (\alpha-i\beta)(\gamma-i\delta)A\right)\cdot\left(I_n + (\alpha+i\beta)(\gamma-i\delta)A\right)\left(I_n + (\alpha-i\beta)(\gamma+i\delta)A\right)$$
$$= \left(I_n + 2(\alpha\gamma-\beta\delta)A + (\alpha^2+\beta^2)(\gamma^2+\delta^2)A^2\right)\left(I_n + 2(\alpha\gamma+\beta\delta)A + (\alpha^2+\beta^2)(\gamma^2+\delta^2)A^2\right)$$

Now it suffices to compare $a_1 = \alpha\gamma - \beta\delta$ and verify that

$$b_1^2 = (\alpha^2+\beta^2)(\gamma^2+\delta^2) - a_1^2 = \alpha^2\gamma^2 + \beta^2\gamma^2 + \alpha^2\delta^2 + \beta^2\delta^2 - \alpha^2\gamma^2 + 2\alpha\beta\gamma\delta - \beta^2\delta^2 = \beta^2\gamma^2 + \alpha^2\delta^2 + 2\alpha\beta\gamma\delta = (\alpha\delta + \beta\gamma)^2$$

For $b_2$ it is done in the same way.
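Since the scalar bookkeeping in this proof is easy to get wrong, here is a small numerical check (my sketch, with arbitrary parameter values) that the conjugate quadratic factor times the original one indeed equals $I_2$ tensored with the two real quadratic factors of Lemma 2:

    import numpy as np

    rng = np.random.default_rng(2)
    n = 4
    alpha, beta, gamma, delta1, delta2 = 0.2, 0.5, 0.6, 0.3, 0.4
    delta = np.sqrt(delta1 * delta2)
    A = 0.3 * rng.standard_normal((n, n))
    I2, In = np.eye(2), np.eye(n)

    Gj  = np.array([[gamma,  delta1], [-delta2, gamma]])
    Gjc = np.array([[gamma, -delta1], [ delta2, gamma]])

    # the quadratic pencil of the lemma, for a given 2x2 block M
    quad = lambda M: (np.kron(I2, In) + 2 * alpha * np.kron(M, A)
                      + (alpha**2 + beta**2) * np.kron(M @ M, A @ A))

    a1, b1 = alpha * gamma - beta * delta, alpha * delta + gamma * beta
    a2, b2 = alpha * gamma + beta * delta, alpha * delta - gamma * beta
    Q1 = In + 2 * a1 * A + (a1**2 + b1**2) * (A @ A)
    Q2 = In + 2 * a2 * A + (a2**2 + b2**2) * (A @ A)

    # conjugate quadratic times original quadratic = I_2 (x) (Q1 Q2)
    print(np.max(np.abs(quad(Gjc) @ quad(Gj) - np.kron(I2, Q1 @ Q2))))  # ~ 1e-16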

The Algorithm

Below we define three functions, of which $\operatorname{vec}Y = \mathrm{solv1}(1, \operatorname{vec}\tilde D, i)$ provides the solution $Y$. $X$ is then obtained as $X = U^TY\left(\otimes^i V\right)$.

Synopsis. $F^T$ is an $m\times m$ lower quasi-triangular matrix. Let $m_r$ be the number of real eigenvalues and $m_c$ the number of complex pairs; thus $m = m_r + 2m_c$. Let $F_j$ denote the $j$-th diagonal block of $F^T$ (a $1\times1$ or $2\times2$ matrix) for $j = 1,\ldots,m_r+m_c$. For a fixed $j$, let $\bar j$ denote the index of the first column of $F_j$ in $F^T$. Whenever we write something like $\left(I_{m^i}\otimes I_n + r\otimes^i F\otimes K\right)x = d$, $x$ and $d$ denote column vectors of appropriate dimensions, $x_j$ is the $j$-th partition of $x$, with $x_j = \left(x_{\bar j}^T\; x_{\bar j+1}^T\right)^T$ if the $j$-th eigenvalue is complex, and $x_j = x_{\bar j}$ if the $j$-th eigenvalue is real.

Function solv1. The function $x = \mathrm{solv1}(r, d, i)$ solves the equation

$$\left(I_{m^i}\otimes I_n + r\otimes^i F\otimes K\right)x = d.$$

The function proceeds as follows. If $i = 0$, the equation is solved directly; $K$ is an upper quasi-triangular matrix, so this is easy. If $i > 0$, we go through the diagonal blocks $F_j$ for $j = 1,\ldots,m_r+m_c$ and perform:

(1) If $F_j = (f)$, then we return $x_j = \mathrm{solv1}(rf, d_j, i-1)$. Then we precalculate $y = d_j - x_j = rf\left(\otimes^{i-1}F\otimes K\right)x_j$ and eliminate the guys below $F_j$. That is, for each $k = \bar j+1,\ldots,m$ we put

$$d_k = d_k - rf_{\bar jk}\left(\otimes^{i-1}F\otimes K\right)x_j = d_k - \frac{f_{\bar jk}}{f}\,y$$

(2) If $F_j = \begin{pmatrix}\alpha & \beta_1\\ -\beta_2 & \alpha\end{pmatrix}$, we return $x_j = \mathrm{solv2}(r\alpha, r\beta_1, r\beta_2, d_j, i-1)$. Then we precalculate

$$y = r\left(I_2\otimes\otimes^{i-1}F\otimes K\right)x_j = \left(\begin{pmatrix}\alpha & \beta_1\\ -\beta_2 & \alpha\end{pmatrix}^{-1}\otimes I_{m^{i-1}}\otimes I_n\right)\begin{pmatrix}d_{\bar j} - x_{\bar j}\\ d_{\bar j+1} - x_{\bar j+1}\end{pmatrix}$$

and eliminate the guys below $F_j$. That is, for each $k = \bar j+2,\ldots,m$ we put

$$d_k = d_k - r\left(\begin{pmatrix}f_{\bar jk} & f_{\bar j+1\,k}\end{pmatrix}\otimes\otimes^{i-1}F\otimes K\right)x_j = d_k - \left(\begin{pmatrix}f_{\bar jk} & f_{\bar j+1\,k}\end{pmatrix}\otimes I_{m^{i-1}}\otimes I_n\right)y$$

Function solv2. The function $x = \mathrm{solv2}(\alpha, \beta_1, \beta_2, d, i)$ solves the equation

$$\left(I_2\otimes I_{m^i}\otimes I_n + \begin{pmatrix}\alpha & \beta_1\\ -\beta_2 & \alpha\end{pmatrix}\otimes\otimes^i F\otimes K\right)x = d$$

According to Lemma 1, the function returns

$$x = \begin{pmatrix}\mathrm{solv2p}(\alpha, \beta_1\beta_2, \hat d_1, i)\\ \mathrm{solv2p}(\alpha, \beta_1\beta_2, \hat d_2, i)\end{pmatrix}$$

where $\hat d_1$ and $\hat d_2$ are the partitions of $\hat d$ from the lemma.
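To illustrate the recursion in solv1, here is a simplified sketch of mine (numpy). It assumes all eigenvalues of $F$ are real and non-zero, so only case (1) arises and solv2 is never called, and it uses dense solves where a real implementation would exploit the quasi-triangular structure; F_T stands for the lower triangular $F^T$.

    import numpy as np

    def solv1(r, d, i, F_T, K):
        """Solve (I_{m^i} (x) I_n + r kron^i F (x) K) x = d.

        Assumes F has real, non-zero eigenvalues (diagonal F_T entries != 0)."""
        n = K.shape[0]
        if i == 0:
            return np.linalg.solve(np.eye(n) + r * K, d)  # K quasi-triangular: easy
        m = F_T.shape[0]
        blk = n * m ** (i - 1)                            # size of one partition x_j
        d = d.copy()
        x = np.empty_like(d)
        for j in range(m):
            s = slice(j * blk, (j + 1) * blk)
            x[s] = solv1(r * F_T[j, j], d[s], i - 1, F_T, K)
            y = d[s] - x[s]                  # = r f (kron^{i-1} F (x) K) x_j
            for k in range(j + 1, m):        # eliminate guys below F_j
                d[k * blk:(k + 1) * blk] -= (F_T[k, j] / F_T[j, j]) * y
        return x

    # cross-check against a dense solve on a tiny instance
    rng = np.random.default_rng(3)
    n, m, i = 3, 2, 2
    K = np.triu(rng.standard_normal((n, n)))
    F_T = np.tril(rng.standard_normal((m, m)))            # real eigenvalues only
    d = rng.standard_normal(n * m**i)
    full = np.eye(n * m**i) + 0.9 * np.kron(np.kron(F_T, F_T), K)
    print(np.max(np.abs(full @ solv1(0.9, d, i, F_T, K) - d)))   # ~ 1e-15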

Function solv2p. The function $x = \mathrm{solv2p}(\alpha, \beta^2, d, i)$ solves the equation

$$\left(I_{m^i}\otimes I_n + 2\alpha\otimes^i F\otimes K + (\alpha^2+\beta^2)\otimes^i F^2\otimes K^2\right)x = d$$

The function proceeds as follows. If $i = 0$, the matrix $I_n + 2\alpha K + (\alpha^2+\beta^2)K^2$ is calculated and the solution is obtained directly. Now note that the diagonal blocks of $F^{2T}$ are of the form $F_j^2$, since if $F^T$ is block partitioned according to the diagonal blocks, it is lower block triangular. If $i > 0$, we go through the diagonal blocks $F_j$ for $j = 1,\ldots,m_r+m_c$ and perform:

(1) If $F_j = (f)$, then the $j$-th diagonal block of

$$I_{m^i}\otimes I_n + 2\alpha\otimes^i F\otimes K + (\alpha^2+\beta^2)\otimes^i F^2\otimes K^2$$

takes the form

$$I_{m^{i-1}}\otimes I_n + 2\alpha f\otimes^{i-1}F\otimes K + (\alpha^2+\beta^2)f^2\otimes^{i-1}F^2\otimes K^2,$$

and we can put $x_j = \mathrm{solv2p}(\alpha f, \beta^2f^2, d_j, i-1)$. Then we need to eliminate the guys below $F_j$. Note that $|f^2| < |f|$, since all eigenvalues of $F$ are less than one in modulus; we precalculate $y_2 = (\alpha^2+\beta^2)f^2\left(\otimes^{i-1}F^2\otimes K^2\right)x_j$, and then precalculate $y_1 = 2\alpha f\left(\otimes^{i-1}F\otimes K\right)x_j = d_j - x_j - y_2$. Let $g_{pq}$ denote the element of $F^{2T}$ at position $(q,p)$. The elimination is done by going through $k = \bar j+1,\ldots,m$ and putting

$$d_k = d_k - 2\alpha f_{\bar jk}\left(\otimes^{i-1}F\otimes K\right)x_j - (\alpha^2+\beta^2)g_{\bar jk}\left(\otimes^{i-1}F^2\otimes K^2\right)x_j = d_k - \frac{f_{\bar jk}}{f}\,y_1 - \frac{g_{\bar jk}}{f^2}\,y_2$$

(2) If $F_j = \begin{pmatrix}\gamma & \delta_1\\ -\delta_2 & \gamma\end{pmatrix}$, then the $j$-th diagonal block of

$$I_{m^i}\otimes I_n + 2\alpha\otimes^i F\otimes K + (\alpha^2+\beta^2)\otimes^i F^2\otimes K^2$$

takes the form

$$I_2\otimes I_{m^{i-1}}\otimes I_n + 2\alpha F_j\otimes\otimes^{i-1}F\otimes K + (\alpha^2+\beta^2)F_j^2\otimes\otimes^{i-1}F^2\otimes K^2$$

According to Lemma 2, we need to calculate $\hat d_{\bar j}$, $\hat d_{\bar j+1}$, and $a_1$, $b_1$, $a_2$, $b_2$. Then we obtain

$$\begin{aligned}x_{\bar j} &= \mathrm{solv2p}\left(a_1, b_1^2, \mathrm{solv2p}\left(a_2, b_2^2, \hat d_{\bar j}, i-1\right), i-1\right)\\ x_{\bar j+1} &= \mathrm{solv2p}\left(a_1, b_1^2, \mathrm{solv2p}\left(a_2, b_2^2, \hat d_{\bar j+1}, i-1\right), i-1\right)\end{aligned}$$

Now we need to eliminate the guys below $F_j$. Since the eigenvalues of $F_j^2$ are smaller in modulus than those of $F_j$, we precalculate

$$y_2 = (\alpha^2+\beta^2)(\gamma^2+\delta^2)\left(I_2\otimes\otimes^{i-1}F^2\otimes K^2\right)x_j$$

$$y_1 = 2\alpha(\gamma^2+\delta^2)\left(I_2\otimes\otimes^{i-1}F\otimes K\right)x_j = (\gamma^2+\delta^2)\left(F_j^{-1}\otimes I_{m^{i-1}}\otimes I_n\right)\left(d_j - x_j - \frac{1}{\gamma^2+\delta^2}\left(F_j^2\otimes I_{m^{i-1}}\otimes I_n\right)y_2\right)$$

$$\phantom{y_1} = \left(\begin{pmatrix}\gamma & -\delta_1\\ \delta_2 & \gamma\end{pmatrix}\otimes I_{m^{i-1}}\otimes I_n\right)\left(d_j - x_j\right) - \left(F_j\otimes I_{m^{i-1}}\otimes I_n\right)y_2$$

Then we go through all $k = \bar j+2,\ldots,m$. For clearer formulas, let $f_k$ denote the pair of $F^T$ elements in the $k$-th row below $F_j$, that is $f_k = \begin{pmatrix}f_{\bar jk} & f_{\bar j+1\,k}\end{pmatrix}$; and let $g_k$ denote the same for $F^{2T}$, that is $g_k = \begin{pmatrix}g_{\bar jk} & g_{\bar j+1\,k}\end{pmatrix}$. For each $k$ we put

$$d_k = d_k - 2\alpha\left(f_k\otimes\otimes^{i-1}F\otimes K\right)x_j - (\alpha^2+\beta^2)\left(g_k\otimes\otimes^{i-1}F^2\otimes K^2\right)x_j = d_k - \frac{1}{\gamma^2+\delta^2}\left(f_k\otimes I_{m^{i-1}}\otimes I_n\right)y_1 - \frac{1}{\gamma^2+\delta^2}\left(g_k\otimes I_{m^{i-1}}\otimes I_n\right)y_2$$
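A dense sketch of mine of how solv2 defers to solv2p via Lemma 1; here T stands for the whole factor $\otimes^i F\otimes K$, formed explicitly only for illustration, and the dense solve in solv2p stands in for the recursion described above:

    import numpy as np

    def solv2p(alpha, beta_sq, d, T):
        # solve (I + 2a T + (a^2 + b^2) T^2) x = d; dense stand-in for the recursion
        I = np.eye(T.shape[0])
        return np.linalg.solve(I + 2 * alpha * T + (alpha**2 + beta_sq) * (T @ T), d)

    def solv2(alpha, beta1, beta2, d, T):
        # solve (I_2 (x) I + [[a, b1], [-b2, a]] (x) T) x = d via Lemma 1
        N = T.shape[0]
        Gc = np.array([[alpha, -beta1], [beta2, alpha]])  # conjugate block
        d_hat = (np.eye(2 * N) + np.kron(Gc, T)) @ d
        return np.concatenate([solv2p(alpha, beta1 * beta2, d_hat[:N], T),
                               solv2p(alpha, beta1 * beta2, d_hat[N:], T)])

    rng = np.random.default_rng(4)
    N = 6
    T = 0.4 * rng.standard_normal((N, N))
    d = rng.standard_normal(2 * N)
    G = np.array([[0.3, 0.8], [-0.5, 0.3]])
    x = solv2(0.3, 0.8, 0.5, d, T)
    print(np.max(np.abs((np.eye(2 * N) + np.kron(G, T)) @ x - d)))  # ~ 1e-15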

Final Notes

Numerical Issues of $A^{-1}B$. We began the solution of the Sylvester equation with a multiplication by $A^{-1}$. This can introduce numerical errors, and we need a more numerically stable supplement. Its aim is to make $A$ and $B$ commute; that is, we need to find a regular matrix $P$ such that $(PA)(PB) = (PB)(PA)$. Recall that this is necessary in the solution of

$$\left(I_2\otimes I_{m^i}\otimes PA + (D+C)\otimes\otimes^i F\otimes PB\right)x = d$$

(where $D$ and $C$ stand for the diagonal and off-diagonal parts of the $2\times2$ eigenvalue block, as in Lemma 1), since this equation is multiplied by $I_2\otimes I_{m^i}\otimes PA + (D-C)\otimes\otimes^i F\otimes PB$, and the diagonal result

$$I_2\otimes I_{m^i}\otimes PAPA + 2D\otimes\otimes^i F\otimes PAPB + \left(D^2-C^2\right)\otimes\otimes^i F^2\otimes PBPB$$

is obtained only if $PAPB = PBPA$.

Finding a regular solution of $PAPB = PBPA$ is equivalent to finding a regular solution of $APB - BPA = 0$. The numerical error of the former equation is $\|P\|$ times greater than the numerical error of the latter, and the numerical error of the latter equation also grows with the size of $P$. On the other hand, the truncation error in the multiplication by $P$ decreases with growing size of $P$. By intuition, a stability analysis will show that the best choice is some orthonormal $P$.

Obviously, since $A$ is regular, the equation $APB - BPA = 0$ has solutions of the form $P = \mu A^{-1}$ with $\mu\neq 0$. There is a vector space of all solutions $P$, including the singular ones. In precise arithmetic, its dimension is $\sum_i n_i^2$, where $n_i$ is the number of repetitions of the $i$-th eigenvalue of $A^{-1}B$ (which is similar to $BA^{-1}$). In floating point arithmetic, without any further knowledge about $A$ and $B$, we are only sure about dimension $n$, which is implied by the similarity of $A^{-1}B$ and $BA^{-1}$. Now we try to find a basis of the vector space of solutions.
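The claim that $P = \mu A^{-1}$ solves the commutativity equation is easy to verify numerically (my sketch; the diagonal shift makes $A$ safely regular):

    import numpy as np

    rng = np.random.default_rng(5)
    n = 5
    A = rng.standard_normal((n, n)) + n * np.eye(n)       # safely regular
    B = rng.standard_normal((n, n))                       # possibly singular

    P = np.linalg.inv(A)                                  # P = A^{-1} (mu = 1)
    print(np.max(np.abs(A @ P @ B - B @ P @ A)))          # APB - BPA = 0
    print(np.max(np.abs(P @ A @ P @ B - P @ B @ P @ A)))  # PA and PB commute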

Let $L$ denote the following linear operator: $L(X) = (AXB - BXA)^T$. Let $\operatorname{vec}(X)$ denote the vector made by stacking all the columns of $X$. Let $T_n$ denote the $n^2\times n^2$ matrix representing the operator $\operatorname{vec}(X)\mapsto\operatorname{vec}(X^T)$. And finally, let $M$ denote the $n^2\times n^2$ matrix representing the operator $L$. It is not difficult to verify that

$$M = T_n\left(B^T\otimes A - A^T\otimes B\right)$$

Now we show that $M$ is skew symmetric. Recalling that $T_n(X\otimes Y) = (Y\otimes X)T_n$ and that $T_n^T = T_n$, we have:

$$M^T = \left(B^T\otimes A - A^T\otimes B\right)^TT_n = \left(B\otimes A^T - A\otimes B^T\right)T_n = T_n\left(A^T\otimes B - B^T\otimes A\right) = -M$$

We try to solve $M\operatorname{vec}(X) = T_n0 = 0$. Since $M$ is skew symmetric, there is a real orthonormal matrix $Q$ such that $M = Q\hat MQ^T$, where $\hat M$ is a block diagonal matrix consisting of $2\times2$ blocks of the form

$$\begin{pmatrix}0 & \theta_i\\ -\theta_i & 0\end{pmatrix},$$

plus an additional zero if $n^2$ is odd. Now we solve the equation $\hat My = 0$, where $y = Q^T\operatorname{vec}(X)$. There are $n$ zero rows in $\hat M$ coming from the similarity of $A^{-1}B$ and $BA^{-1}$ (structural zeros). Note that the additional zero for odd $n^2$ is already included in that number, since for odd $n^2$ the difference $n^2 - n$ is even. Besides those, there are also zeros, especially in floating point arithmetic, coming from repetitive or close eigenvalues of $A^{-1}B$. If we are able to select the rows with the structural zeros, a solution is obtained by picking arbitrary numbers for the same positions in $y$ and putting $\operatorname{vec}(X) = Qy$.

The following questions need to be answered:

(1) How do we recognize the structural rows?

(2) Is $A^{-1}$ generated by a $y$ which has non-zero elements only in structural rows? Note that $A$ can have repetitive eigenvalues. A positive answer to this question implies that in each $n$-partition of $y$ there is exactly one structural row.

(3) And a very difficult one: how do we pick $y$ so that $X$ is regular, or even close to orthonormal (pure orthonormality overdetermines $y$)?
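The construction of $M$ can be checked directly. In the sketch below (mine; $T_n$ is built elementwise from its definition), the $n$ structural zeros show up as $n$ zero singular values of $M$ for generic $A$ and $B$:

    import numpy as np

    rng = np.random.default_rng(6)
    n = 4
    A = rng.standard_normal((n, n)) + n * np.eye(n)
    B = rng.standard_normal((n, n))

    # T_n: the n^2 x n^2 matrix with T_n vec(X) = vec(X^T), column-major vec
    Tn = np.zeros((n * n, n * n))
    for p in range(n):
        for q in range(n):
            Tn[q * n + p, p * n + q] = 1.0

    M = Tn @ (np.kron(B.T, A) - np.kron(A.T, B))
    X = rng.standard_normal((n, n))
    LX = (A @ X @ B - B @ X @ A).T                        # L(X)
    print(np.max(np.abs(M @ X.flatten(order='F') - LX.flatten(order='F'))))
    print(np.max(np.abs(M + M.T)))                        # skew symmetry: ~0
    svals = np.linalg.svd(M, compute_uv=False)
    print(np.sum(svals < 1e-10 * svals[0]))               # n structural zeros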

Making Zeros in $F$. It is clear that the numerical complexity of the proposed algorithm strongly depends on the number of non-zero elements in the Schur factor $F$. If we were able to find a $P$ such that $PFP^{-1}$ has substantially more zeros than $F$, then the computation would be substantially faster. However, it seems that we have to pay a price for each additional zero in terms of lower numerical stability of the $PFP^{-1}$ multiplication. Consider $P$ and $F$ in the form

$$P = \begin{pmatrix}I & X\\ 0 & I\end{pmatrix},\qquad F = \begin{pmatrix}A & C\\ 0 & B\end{pmatrix};$$

we obtain

$$PFP^{-1} = \begin{pmatrix}A & C + XB - AX\\ 0 & B\end{pmatrix}$$

Thus, we need to solve $C = AX - XB$. It is clear that the numerical stability of the operator $Y\mapsto PYP^{-1}$, and of its inverse $Y\mapsto P^{-1}YP$, worsens with growing norm $\|X\|$. The norm can be as large as $\|F\|/\delta$, where $\delta$ is the distance between the eigenspectra of $A$ and $B$. Also, the numerical error of the solution is proportional to $\|C\|/\delta$.

Although these difficulties cannot be overcome completely, we may introduce an algorithm which works on $F$ with ordered eigenvalues on the diagonal, and which seeks a partitioning that maximizes $\delta$ and minimizes $\|C\|$. If such a partitioning is found, the algorithm finds $P$ and is then run recursively on the $A$ and $B$ blocks. It stops when further partitioning is not possible without breaking some user-given limit on numerical errors. We have to keep in mind that the numerical errors are accumulated in the product of all the $P$'s of every step.

Exploiting constant rows in $F$. If some of $F$'s rows consist of the same numbers, or if the number of distinct values within a row is small, this structure can be easily exploited in the algorithm. Recall that in both functions solv1 and solv2p we eliminate the guys below the diagonal element (or block) of $F^T$ by multiplying the solution of the diagonal block and cancelling it from the right-hand side. If the elements below the diagonal block are all the same, we save one vector multiplication. Note that in solv2p we still need to multiply by the elements below the diagonal of the matrix $F^{2T}$, which obviously does not have this property; however, the heaviest elimination is done at the very top level, in the first call to solv1. Another way of exploiting the property is to perform all calculations in complex numbers; in that case, only solv1 is run.

How can this structure be introduced into the matrix? Following the same notation as in the previous section, we solve $C = AX - XB$ in order to obtain zeros in place of $C$. If that is not possible, we may relax the equation by solving $C - R = AX - XB$, where $R$ is a suitable matrix with constant rows. The matrix $R$ minimizes $\|C - R\|$ in order to minimize $\|X\|$ when $A$ and $B$ are given. Now, in the next step we need to introduce zeros or constant rows into the matrix $A$, so we seek a regular matrix $P$ doing the job. If it is found, the product looks like

$$\begin{pmatrix}P & 0\\ 0 & I\end{pmatrix}\begin{pmatrix}A & R\\ 0 & B\end{pmatrix}\begin{pmatrix}P^{-1} & 0\\ 0 & I\end{pmatrix} = \begin{pmatrix}PAP^{-1} & PR\\ 0 & B\end{pmatrix}$$

Now note that the matrix $PR$ also has constant rows. Thus, preconditioning the matrix in the upper left corner does not affect the property. However, preconditioning the matrix in the lower right corner breaks the property, since we would obtain $RP^{-1}$.
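As a concrete check of the block-triangularization step from "Making Zeros in $F$", the following sketch of mine uses scipy.linalg.solve_sylvester (with well-separated spectra, i.e. a large $\delta$) to solve $C = AX - XB$ and verifies that $PFP^{-1}$ has a zero upper-right block:

    import numpy as np
    from scipy.linalg import solve_sylvester

    rng = np.random.default_rng(7)
    p, q = 3, 2
    A = rng.standard_normal((p, p)) + 3 * np.eye(p)   # spectra of A and B kept
    B = rng.standard_normal((q, q)) - 3 * np.eye(q)   # far apart (large delta)
    C = rng.standard_normal((p, q))

    X = solve_sylvester(A, -B, C)                     # solves A X - X B = C
    P    = np.block([[np.eye(p),  X], [np.zeros((q, p)), np.eye(q)]])
    Pinv = np.block([[np.eye(p), -X], [np.zeros((q, p)), np.eye(q)]])
    F = np.block([[A, C], [np.zeros((q, p)), B]])
    print(np.max(np.abs((P @ F @ Pinv)[:p, p:])))     # upper-right block ~ 0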
