Determining Unitary Equivalence to a 3 × 3 Complex Symmetric Matrix from the Upper Triangular Form. Jay Daigle. Advised by Stephan Garcia


Determining Unitary Equivalence to a 3 × 3 Complex Symmetric Matrix from the Upper Triangular Form

Jay Daigle
Advised by Stephan Garcia

April 4, 2008


Contents

1 Introduction 5

2 Technical Background 9

3 The Angle Test 13

4 Breaking Down the Problem 19
4.1 Our Approach 19
4.2 The One Eigenvalue Case 20
4.3 The Two Eigenvalue Case 22
4.4 The Three Eigenvalue Case 28

References 37


Chapter 1

Introduction

A complex symmetric matrix (CSM) is a square complex matrix T such that T = T^t, where T^t denotes the transpose of the matrix T. Since any given operator has many different matrix representations, we wish to identify those linear operators that can be represented as a complex symmetric matrix with respect to some orthonormal basis. To do this we introduce the concept of unitary equivalence.

Definition. Let U be an n × n matrix and let U* denote the adjoint (i.e., conjugate transpose) of U. We say U is unitary if UU* = U*U = I.

Definition. Let T and S be n × n matrices. We say T and S are unitarily equivalent if there exists some unitary matrix U such that U*TU = S.

It turns out that two matrices are unitarily equivalent precisely when they represent the same operator with respect to two (possibly different) orthonormal bases. Thus the class of matrices we wish to study is precisely the class of matrices which are unitarily equivalent to a complex symmetric matrix (UECSM). A powerful tool for analyzing such matrices (Theorem 1.1 below) comes from Garcia and Putinar [2, 3]. We first require a few preliminaries.

Definition. A function C : Cⁿ → Cⁿ is a conjugation if

1. C is antilinear (i.e., C(ax + y) = āCx + Cy for all a ∈ C and x, y ∈ Cⁿ);
2. C is isometric (i.e., ⟨x, y⟩ = ⟨Cy, Cx⟩ for all x, y ∈ Cⁿ);
3. C is an involution (i.e., C² = I).
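The three conjugation axioms are easy to spot-check numerically. The following is an illustrative sketch (NumPy, with arbitrary test vectors; the helper name `J` and the sample data are hypothetical) using entrywise conjugation, which the next page introduces as the standard conjugation:

```python
import numpy as np

# Spot-check of the three conjugation axioms for entrywise conjugation
# J(x) = (conj(x_1), ..., conj(x_n)) on C^3.
rng = np.random.default_rng(1)
x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
y = rng.standard_normal(3) + 1j * rng.standard_normal(3)
a = 2 - 3j
J = np.conj

assert np.allclose(J(a * x + y), np.conj(a) * J(x) + J(y))  # antilinear
assert np.isclose(np.vdot(y, x), np.vdot(J(x), J(y)))       # isometric: <x,y> = <Jy,Jx>
assert np.allclose(J(J(x)), x)                              # involution
```

Note that `np.vdot(y, x)` computes Σ x_k·conj(y_k), matching the convention ⟨x, y⟩ used here.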

A simple example of a conjugation is the standard conjugation J : Cⁿ → Cⁿ, defined by J(x_1, ..., x_n) = (x̄_1, ..., x̄_n).

Definition. Let T be an n × n complex matrix and let T* denote its conjugate transpose. Let C be a conjugation on Cⁿ; then T is C-symmetric if T = CT*C.

We now have the following theorem:

Theorem 1.1. Let T be an n × n complex matrix. Then T is UECSM if and only if T is C-symmetric for some conjugation C.

Proof. Suppose T is C-symmetric and let {e_i} be an orthonormal basis with Ce_i = e_i (for a proof that such a basis exists, see [3, Lemma 1]). Then

    ⟨Te_i, e_j⟩ = ⟨e_i, T*e_j⟩ = ⟨Ce_i, T*Ce_j⟩ = ⟨CT*Ce_j, e_i⟩  (by the isometric property)
               = ⟨Te_j, e_i⟩.

Thus, since ⟨Te_i, e_j⟩ is the (j, i) entry of T when expressed with respect to the orthonormal basis {e_i}, we have represented T as a CSM with respect to this basis. Therefore the matrix T is UECSM.

Conversely, suppose T is UECSM, so that it is complex symmetric with respect to some orthonormal basis {u_i}. Define C by extending Cu_i = u_i antilinearly to all of Cⁿ. Then as above we have ⟨Tu_i, u_j⟩ = ⟨CT*Cu_j, u_i⟩ for all i, j, but since T is symmetric with respect to {u_i} we have ⟨Tu_i, u_j⟩ = ⟨Tu_j, u_i⟩. Therefore ⟨CT*Cu_j, u_i⟩ = ⟨Tu_j, u_i⟩ for all i, j, and thus T = CT*C.

Though many examples of matrices which are UECSM are known, no complete classification exists. In this paper we complete a classification of the 3 × 3 matrices which are UECSM. Since unitary equivalence is an equivalence relation, we may pick one representative of each equivalence class and determine whether this representative is UECSM. To select this representative we use Schur's Theorem (see [5] for details):

Theorem 1.2 (Schur). Given an n × n matrix A with eigenvalues λ_1, ..., λ_n in any prescribed order, there is a unitary matrix U ∈ M_n such that U*AU = T = [t_ij] is upper triangular, with diagonal entries t_ii = λ_i for i = 1, ..., n. That is, every square matrix A is unitarily equivalent to a triangular matrix whose diagonal entries are the eigenvalues of A in a prescribed order.

Thus, given a 3 × 3 matrix A, we know it is unitarily equivalent to a (not necessarily unique) matrix of the form

    [ λ_1  a    b   ]
    [ 0    λ_2  c   ]                (1.1)
    [ 0    0    λ_3 ],

where the λ_i are the eigenvalues of A in any given order. In Chapter 2 we show that further simplifying assumptions can be made to yield a particularly easy-to-work-with form. We provide a complete classification of which matrices in this form are UECSM, allowing us to determine whether any given 3 × 3 matrix is UECSM.
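The Schur form used above can be computed numerically. The following is a minimal sketch (not the algorithm used in practice), valid under the assumption that A is diagonalizable with distinct eigenvalues: QR-factoring the eigenvector matrix yields the Schur unitary. The helper name `schur_form` and the sample matrix are hypothetical.

```python
import numpy as np

def schur_form(A):
    """Return unitary U with U* A U upper triangular.

    Sketch only: assumes A is diagonalizable (e.g. distinct eigenvalues).
    If A V = V diag(lam) and V = Q R, then Q* A Q = R diag(lam) R^{-1},
    a product of upper triangular matrices, hence upper triangular.
    """
    _, V = np.linalg.eig(A)
    Q, _ = np.linalg.qr(V)
    return Q

A = np.array([[1, 2, 0], [0, 3, 1], [1, 0, 2]], dtype=complex)
U = schur_form(A)
T = U.conj().T @ A @ U
assert np.allclose(np.tril(T, -1), 0, atol=1e-8)   # upper triangular
assert np.allclose(U.conj().T @ U, np.eye(3))      # U is unitary
```

In practice one would use a library routine (e.g., SciPy's `scipy.linalg.schur` with `output='complex'`) rather than this sketch.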


Chapter 2

Technical Background

In this section we present several lemmas that allow us to conduct our classification. First we describe some simple categories of matrices that are known to be UECSM. The results in this section are known and can be found in [2, 3, 4]. We begin with a definition:

Definition. We say a matrix is algebraic of degree n if its minimal polynomial is a degree-n polynomial. In other words, T is of degree n if there is some polynomial f of degree n with f(T) = 0, but no polynomial g of degree less than n such that g(T) = 0.

We now present a theorem due to Garcia and Wogen [4].

Theorem 2.1. If T is a square matrix that is algebraic of degree two, then T is UECSM.

This theorem has a few useful corollaries:

Corollary 2.2. All 1 × 1 and 2 × 2 matrices are UECSM.

Proof. The degree of the minimal polynomial of an n × n matrix is at most n. Thus every 1 × 1 and 2 × 2 matrix is algebraic of degree at most two, and is UECSM: matrices of degree one are scalar multiples of the identity, which are trivially complex symmetric, and the degree-two case is Theorem 2.1.

Corollary 2.3. Every rank-one matrix is UECSM.

Proof. Every rank-one operator has the form Tx = ⟨x, v⟩u for some u, v ∈ Cⁿ. But then T²x = ⟨⟨x, v⟩u, v⟩u = ⟨x, v⟩⟨u, v⟩u = ⟨u, v⟩Tx, so we have T² − ⟨u, v⟩T = 0 and T is algebraic of degree at most two. Thus by Theorem 2.1 (or trivially), T is UECSM.
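The identity T² = ⟨u, v⟩T behind Corollary 2.3 can be checked numerically. A quick sketch with arbitrary (hypothetical) vectors, writing the rank-one operator as the matrix u v* so that T x = ⟨x, v⟩u:

```python
import numpy as np

# Rank-one T = u v^* satisfies T^2 = <u, v> T, so the minimal
# polynomial of T has degree at most two.
u = np.array([1 + 1j, 2, -1j])
v = np.array([2, 1 - 1j, 3])
T = np.outer(u, v.conj())      # T x = <x, v> u
inner = v.conj() @ u           # <u, v> = sum conj(v_k) u_k

assert np.allclose(T @ T, inner * T)
```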

In our attempt to classify the 3 × 3 matrices which are UECSM, it will be useful to be able to decompose our matrices into simpler matrices. The next lemma shows that we can, in some instances, do this:

Lemma 2.4. Suppose T = ⊕_{i=1}^n T_i, where T_i is C_i-symmetric for some conjugation C_i. Then T is C-symmetric for C = ⊕_{i=1}^n C_i.

Proof. We prove the result for n = 2; the theorem then follows by induction. Let T_1 and T_2 be square complex matrices and C_1, C_2 conjugations such that C_1T_1C_1 = T_1* and C_2T_2C_2 = T_2*. Then we claim C = C_1 ⊕ C_2 is a conjugation. Antilinearity is trivially preserved by direct summation, and C² = I since

    [ C_1  0  ] [ C_1  0  ]   [ C_1²  0    ]   [ I  0 ]
    [ 0    C_2] [ 0    C_2] = [ 0     C_2² ] = [ 0  I ].

To show C is isometric, let x = (x_1, x_2) be in the domain of T. Since (x_1, 0) and (0, x_2) are orthogonal we have ‖(x_1, 0)‖² + ‖(0, x_2)‖² = ‖(x_1, x_2)‖². Thus

    ‖Cx‖² = ‖(C_1x_1, C_2x_2)‖² = ‖C_1x_1‖² + ‖C_2x_2‖² = ‖x_1‖² + ‖x_2‖² = ‖x‖².

Thus C is a conjugation. Now we show that T is C-symmetric:

    CTC = [ C_1  0  ] [ T_1  0  ] [ C_1  0  ] = [ C_1T_1C_1  0         ] = [ T_1*  0    ] = T*.
          [ 0    C_2] [ 0    T_2] [ 0    C_2]   [ 0          C_2T_2C_2 ]   [ 0     T_2* ]

Thus we have the desired result.

Our final lemma allows us to make certain simplifying assumptions about the structure of our matrix:

Lemma 2.5. If T is a C-symmetric matrix, then so is aT + bI for all a, b ∈ C.

Proof. Consider the matrix C(aT + bI)C. We have

    C(aT + bI)C = C(aT)C + C(bI)C = āCTC + b̄C² = āT* + b̄I = (aT + bI)*,

which is the desired result.
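Lemma 2.5 can be illustrated numerically with the standard conjugation, under which C-symmetry is just T = T^t. This is a hypothetical spot-check, not part of the development:

```python
import numpy as np

# If T = T^t (J-symmetric for the standard conjugation J), then
# a T + b I is again complex symmetric, as Lemma 2.5 asserts.
rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
T = T + T.T                        # make T complex symmetric
a, b = 2 - 1j, 0.5 + 3j
S = a * T + b * np.eye(3)

assert np.allclose(S, S.T)         # still complex symmetric
```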


Chapter 3

The Angle Test

Suppose that T is a C-symmetric matrix and that u is a generalized eigenvector of T with eigenvalue λ of order n (i.e., (T − λI)ⁿu = 0 but (T − λI)^{n−1}u ≠ 0). Since T* = CTC, we have

    (T* − λ̄I)^k(Cu) = C((T − λI)^k u)

for all k, and thus Cu is a generalized eigenvector of T* with eigenvalue λ̄ of order n. Furthermore, since C is an isometry we have ‖u‖ = ‖Cu‖. Note that the case k = 1 implies that if u is an eigenvector of T, then Cu is an eigenvector of T*.

Now suppose the eigenspace of T corresponding to the eigenvalue λ is one-dimensional, and u is a unit eigenvector with eigenvalue λ. Let v be a unit eigenvector of T* with eigenvalue λ̄. Then since each eigenspace is one-dimensional, these eigenvectors are unique up to multiplication by a unimodular constant, and we have Cu = αv for some unimodular α. A similar argument will hold for generalized eigenvectors as long as we have a canonical method for choosing a basis for the generalized eigenspace which is respected by C. Our next lemma provides such a canonical basis:

Lemma 3.1. Let T be an n × n matrix with eigenvalues λ_1, ..., λ_r. Suppose that T has precisely one Jordan block for each eigenvalue. Let

    K = ker(T − λ_iI)^k ⊖ ker(T − λ_iI)^{k−1}

and

    K̂ = ker(T* − λ̄_iI)^k ⊖ ker(T* − λ̄_iI)^{k−1}

such that K ≠ {0}, and let u ∈ K and v ∈ K̂ be unit vectors. Then Cu = αv for some unimodular constant α.

Proof. It is clear from our discussion that Cu is a generalized eigenvector of T* with eigenvalue λ̄_i of order k; since C is an isometry carrying ker(T − λ_iI)^{k−1} onto ker(T* − λ̄_iI)^{k−1}, it follows that Cu is an element of K̂. Since there is only one Jordan block for λ_i in the Jordan form of T we have dim ker(T − λ_iI)^k = k, and thus dim K̂ = 1. Then since v is non-zero it must span K̂, and so there is some scalar α with Cu = αv. But ‖v‖ = ‖Cu‖ = 1, so |α| = 1.

We are now prepared to state and prove a necessary condition for a matrix T to be C-symmetric.

Lemma 3.2 (The Angle Test). Let T be an n × n matrix with r distinct eigenvalues λ_1, ..., λ_r which is UECSM, and suppose T has precisely one Jordan block for each eigenvalue. Let

    K = ker(T − λ_iI)^k ⊖ ker(T − λ_iI)^{k−1}

and

    L = ker(T − λ_jI)^l ⊖ ker(T − λ_jI)^{l−1},

and let K̂ and L̂ be the analogous spaces for T*. Let u_1 ∈ K, u_2 ∈ L, v_1 ∈ K̂, v_2 ∈ L̂ be unit vectors. Then

    |⟨u_1, u_2⟩| = |⟨v_1, v_2⟩|.    (3.1)

Proof. If K (or L) is {0}, then so is K̂ (or L̂). We can see that u_1 = v_1 = 0 (or u_2 = v_2 = 0) and so |⟨u_1, u_2⟩| = |⟨v_1, v_2⟩| = 0. So assume K, L ≠ {0}. By the isometric property of C, ⟨u_1, u_2⟩ = ⟨Cu_2, Cu_1⟩. By Lemma 3.1 there exist unimodular α_1, α_2 with Cu_i = α_iv_i. Thus we have

    ⟨u_1, u_2⟩ = ⟨Cu_2, Cu_1⟩ = ⟨α_2v_2, α_1v_1⟩

    = ᾱ_1α_2⟨v_2, v_1⟩.

Since the α_i are unimodular, taking the absolute value of both sides yields Equation (3.1).

Lemma 3.2 provides a useful technique to prove that a given matrix with a single Jordan block for each eigenvalue is not UECSM: find a canonical basis for the generalized eigenspaces of T and T* as described in Lemma 3.1, take the pairwise inner products of the basis eigenvectors, and check whether the norms are all the same. In particular, if an n × n matrix T has n distinct eigenvalues, then the pairwise inner products of the eigenvectors must have matching norms. If equation (3.1) holds for all pairs of vectors in our canonical basis, we say that T passes the Angle Test.

While the condition in Lemma 3.2 is clearly a necessary condition for T to be UECSM, it is not clear that it is sufficient. However, Garcia showed [1] that the following stronger condition is sufficient when T has distinct eigenvalues:

Theorem 3.3. Let T be a 3 × 3 matrix with distinct eigenvalues λ_1, λ_2, λ_3. Let u_i be a unit eigenvector of T with eigenvalue λ_i, and let v_i be a unit eigenvector of T* with eigenvalue λ̄_i. Then T is UECSM if and only if T passes the Angle Test and there exist unimodular constants b_{1,2}, b_{1,3}, b_{2,3} such that

    b_{i,j} = ⟨u_i, u_j⟩ / ⟨v_j, v_i⟩   whenever ⟨v_j, v_i⟩ ≠ 0,

and the matrix

    B = [ 1        b_{1,2}   b_{1,3} ]
        [ b̄_{1,2}  1         b_{2,3} ]
        [ b̄_{1,3}  b̄_{2,3}   1       ]

has rank one.

In fact, a similar condition holds for any n × n matrix with n distinct eigenvalues. We omit the proof for brevity. However, we will prove the analogous (and somewhat more difficult) result for the case where a 3 × 3 matrix T has only two distinct eigenvalues.

Let T be a 3 × 3 C-symmetric matrix with two distinct eigenvalues. Without loss of generality we may, by Lemma 2.5, assume that the eigenvalues

are 0 and 1, and that 0 has algebraic multiplicity two. Furthermore, we assume that dim(ker T) = 1, since whenever dim(ker T) = 2 the matrix T has rank one and the case is trivial due to Corollary 2.3. Let

    u_0 be a unit eigenvector of T corresponding to the eigenvalue 0,
    u_00 be a unit generalized eigenvector of T corresponding to the eigenvalue 0 that is orthogonal to u_0,
    u_1 be a unit eigenvector of T corresponding to the eigenvalue 1.

Furthermore, T* also has eigenvalues 0 and 1 with the same multiplicities. Let

    v_0 be a unit eigenvector of T* corresponding to the eigenvalue 0,
    v_00 be a unit generalized eigenvector of T* corresponding to the eigenvalue 0 that is orthogonal to v_0,
    v_1 be a unit eigenvector of T* corresponding to the eigenvalue 1.

Lemma 3.1 tells us that C determines a triple of unimodular constants α_0, α_00, α_1 with

    Cu_0 = α_0v_0,   Cu_00 = α_00v_00,   Cu_1 = α_1v_1.

Thus for any i, j we have

    ⟨u_i, u_j⟩ = ⟨Cu_j, Cu_i⟩ = ⟨α_jv_j, α_iv_i⟩ = ᾱ_iα_j⟨v_j, v_i⟩.

Let B be the matrix given by

    B = [ ᾱ_0  ]
        [ ᾱ_00 ] ( α_0  α_00  α_1 ).
        [ ᾱ_1  ]

Whenever ⟨v_j, v_i⟩ ≠ 0 we have

    b_{i,j} = ᾱ_iα_j = ⟨u_i, u_j⟩ / ⟨v_j, v_i⟩.

Furthermore, we see that b_{i,i} = 1 for all i, and that b_{j,i} = b̄_{i,j} for all i, j. Thus the matrix B is self-adjoint. Also, note that rank(B) = 1, since

    colspace(B) = span{ (ᾱ_0, ᾱ_00, ᾱ_1)^t }.

Finally, note that since Tu_00 ∈ ker(T), we have Tu_00 = au_0 for some a ∈ C. But

    T*v_00 = CTCv_00 = CT(α_00u_00) = C(α_00 a u_0) = ᾱ_00 ā Cu_0 = (āα_0ᾱ_00)v_0.

Thus the coefficients of u_0 in Tu_00 and of v_0 in T*v_00 are related by the unimodular factor (ā/a)α_0ᾱ_00.

We are now prepared to state our main theorem.

Theorem 3.4. Let T be a 3 × 3 matrix under the above hypotheses, with Tu_00 = au_0. We have that T is C-symmetric if and only if there exists a set of unimodular constants α_0, α_00, α_1 such that

1. ⟨u_i, u_j⟩ = ᾱ_iα_j⟨v_j, v_i⟩ for all i, j.
2. The function C defined by letting Cu_i = α_iv_i and extending antilinearly to C³ is an involution.
3. T*v_00 = (āα_0ᾱ_00)v_0.

Proof. The necessity of the conditions follows from the discussion preceding the theorem; we show they are sufficient. We wish to show that C is a conjugation and that CT*C = T. We see that C is antilinear by construction and an involution by condition (2). We wish to show that it is an isometry. The u_i form a basis for our space, so let x = Σ a_iu_i, the sums running over i ∈ {0, 00, 1}. We have

    ‖x‖² = ⟨x, x⟩ = ⟨ Σ_i a_iu_i, Σ_j a_ju_j ⟩ = Σ_{i,j} a_iā_j⟨u_i, u_j⟩

    = Σ_{i,j} a_iā_j ᾱ_iα_j⟨v_j, v_i⟩
    = Σ_{i,j} a_iā_j ⟨α_jv_j, α_iv_i⟩
    = Σ_{i,j} a_iā_j ⟨Cu_j, Cu_i⟩
    = ⟨ Σ_j ā_jCu_j, Σ_i ā_iCu_i ⟩
    = ⟨Cx, Cx⟩ = ‖Cx‖².

Thus C is indeed a conjugation. We now wish to show that CT*C = T. Since both are linear maps, we only need to show that they agree on the basis {u_0, u_00, u_1}. Note that since C is an involution we have Cv_i = α_iu_i. For u_0 and u_1 we have

    CT*Cu_i = CT*(α_iv_i) = C(α_iλ̄_iv_i) = ᾱ_iλ_iCv_i = ᾱ_iλ_iα_iu_i = λ_iu_i = Tu_i.

Now consider CT*Cu_00. We get

    CT*Cu_00 = CT*(α_00v_00) = C(α_00(āα_0ᾱ_00)v_0)   (by condition (3))
             = (aᾱ_0)Cv_0 = aᾱ_0α_0u_0 = au_0 = Tu_00.

Thus CT*C = T and we have that T is C-symmetric.
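For matrices with distinct eigenvalues, the Angle Test of Lemma 3.2 is straightforward to run numerically. The following is a hypothetical helper (`angle_test` is my name, and the two sample matrices are illustrative): it pairs each eigenvector v of T* with the eigenvalue conjugate to the corresponding eigenvalue of T and compares the moduli of all pairwise inner products.

```python
import numpy as np

def angle_test(T, tol=1e-8):
    """Necessary condition for UECSM (distinct eigenvalues assumed):
    |<u_i, u_j>| must equal |<v_i, v_j>| for unit eigenvectors u_k of T
    (eigenvalue lam_k) and v_k of T* (eigenvalue conj(lam_k))."""
    lam, U = np.linalg.eig(T)
    mu, V = np.linalg.eig(T.conj().T)
    order = [int(np.argmin(np.abs(mu - np.conj(l)))) for l in lam]
    V = V[:, order]                       # match v_k with conj(lam_k)
    Gu = np.abs(U.conj().T @ U)           # |<u_i, u_j>|
    Gv = np.abs(V.conj().T @ V)           # |<v_i, v_j>|
    return bool(np.allclose(Gu, Gv, atol=tol))

T_sym = np.array([[0, 1, 0], [1, 1, 0], [0, 0, 2]], dtype=complex)
T_bad = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 2]], dtype=complex)
assert angle_test(T_sym)       # complex symmetric, hence UECSM
assert not angle_test(T_bad)   # fails the test (in fact not UECSM)
```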

Chapter 4

Breaking Down the Problem

4.1 Our Approach

By Schur's Theorem (Theorem 1.2), any 3 × 3 matrix T is unitarily equivalent to a matrix of the form shown in (1.1). Thus a matrix is UECSM if and only if its upper triangular form is UECSM. By Lemma 2.5 we may, without loss of generality, subtract a multiple of the identity to ensure one eigenvalue is 0. If there is more than one eigenvalue we may further divide by a scalar to obtain a matrix with an eigenvalue of 1. Based on the number of distinct eigenvalues we have the following three cases:

Case 1: One distinct eigenvalue. We may assume T has the form

    T = [ 0  a  b ]
        [ 0  0  c ]                (4.1)
        [ 0  0  0 ].

Case 2: Two distinct eigenvalues. We may assume T has the form

    T = [ 0  a  b ]
        [ 0  0  c ]                (4.2)
        [ 0  0  1 ].
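The shift-and-scale normalization described above is easy to sketch numerically. The helper `normalize` and the sample matrix below are hypothetical; by Lemma 2.5 the normalization does not change whether the matrix is UECSM.

```python
import numpy as np

def normalize(T):
    """Shift one eigenvalue to 0 and, if another eigenvalue exists,
    scale it to 1 (sketch; follows Section 4.1's reduction)."""
    lam = np.linalg.eigvals(T)
    T0 = T - lam[0] * np.eye(3)
    others = lam[np.abs(lam - lam[0]) > 1e-10]
    if len(others):
        T0 = T0 / (others[0] - lam[0])
    return T0

A = np.array([[2, 1, 0], [0, 5, 1], [0, 0, 7]], dtype=complex)
B = normalize(A)
mu = np.linalg.eigvals(B)
assert np.abs(mu).min() < 1e-9          # 0 is an eigenvalue
assert np.abs(mu - 1).min() < 1e-9      # 1 is an eigenvalue
```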

Case 3: Three distinct eigenvalues. We may assume T has the form

    T = [ 0  a  b ]
        [ 0  1  c ]                (4.3)
        [ 0  0  λ ]

for some λ ∉ {0, 1}.

4.2 The One Eigenvalue Case

Proposition 4.1. A 3 × 3 matrix T in the upper triangular form given in (4.1) is UECSM if and only if one of the following holds:

1. a = 0 or c = 0;
2. |a| = |c|, with a, c ≠ 0.

Proof. The proof breaks down into three cases.

Case 1: a = 0 or c = 0. We have

    T = [ 0  0  b ]          or          T = [ 0  a  b ]
        [ 0  0  c ]                          [ 0  0  0 ]
        [ 0  0  0 ]                          [ 0  0  0 ].

In either case, T has rank at most one and thus is UECSM by Corollary 2.3 (the zero matrix is trivially a CSM).

Case 2: a, c ≠ 0, |a| = |c|. Define an antilinear map C : C³ → C³ by

    C(x, y, z) = ( u z̄, ȳ, u x̄ ),

where u = a/c, a unimodular constant since |a| = |c|.

Then C is a conjugation: it is antilinear by construction, isometric since |u| = 1, and an involution since C²(x, y, z) = (uū x, y, uū z) = (x, y, z). We verify that CT*C = T. For any (x, y, z) we have T*(x, y, z) = (0, āx, b̄x + c̄y), so

    C(x, y, z)    = ( u z̄, ȳ, u x̄ ),
    T*C(x, y, z)  = ( 0, ā u z̄, b̄ u z̄ + c̄ ȳ ),
    CT*C(x, y, z) = ( u(b ū z + c y), a ū z, 0 ) = ( a y + b z, c z, 0 ) = T(x, y, z),

using uc = a and a ū = |a|²/c̄ = |c|²/c̄ = c. Therefore CT*C = T and thus T is UECSM.

Case 3: a, c ≠ 0, |a| ≠ |c|. Let e_1, e_2, e_3 be the standard basis vectors for C³. We can see that T has precisely one one-dimensional eigenspace, for the eigenvalue 0, spanned by e_1. Similarly, T* has precisely one one-dimensional eigenspace, spanned by e_3. By Lemma 3.1, if T is C-symmetric then Ce_1 = αe_3 for some α with |α| = 1. Since C is an involution, this gives us e_1 = C(αe_3) = ᾱCe_3 and thus Ce_3 = αe_1. We see that e_2 is orthogonal to e_1 and e_3, so since C is an isometry we must have Ce_2 orthogonal to Ce_1 = αe_3 and Ce_3 = αe_1. Thus Ce_2 must be some scalar multiple of e_2, and in fact we have Ce_2 = βe_2 where |β| = 1 since C is norm-preserving.

Now Te_2 = ae_1. But

    Te_2 = CT*Ce_2 = CT*(βe_2) = C(βc̄e_3) = β̄cCe_3 = (αβ̄c)e_1,

and thus a = αβ̄c. Since |α| = |β| = 1 this gives |a| = |c|, a contradiction. Thus in this case T is not UECSM.
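The conjugation constructed in Case 2 can be spot-checked numerically. The parameter values below are hypothetical, and the matrix `K` encodes the antilinear map C as "multiply by K, then conjugate":

```python
import numpy as np

# Case 2 sketch: for |a| = |c|, C(x, y, z) = (u*conj(z), conj(y), u*conj(x))
# with u = a/c satisfies C T* C = T.
a, c, b = 2j, 2, 1 + 1j                  # |a| = |c| = 2
u = a / c
T = np.array([[0, a, b], [0, 0, c], [0, 0, 0]])
K = np.array([[0, 0, u], [0, 1, 0], [u, 0, 0]])
C = lambda x: K @ np.conj(x)             # antilinear map
x = np.array([1 - 1j, 2.0, 3j])

assert np.allclose(C(C(x)), x)                        # C is an involution
assert np.allclose(C(T.conj().T @ C(x)), T @ x)       # (C T* C)(x) = T x
```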

4.3 The Two Eigenvalue Case

Proposition 4.2. A 3 × 3 matrix T in the upper triangular form given in (4.2) is UECSM if and only if one of the following holds:

1. a = 0;
2. b = c = 0;
3. a, c ≠ 0 and |b + ac|² = |c|² + |c|⁴.

Proof. The proof breaks down into four cases.

Case 1: a = 0. We have

    T = [ 0  0  b ]
        [ 0  0  c ]
        [ 0  0  1 ].

Thus T is a rank-one matrix, and is UECSM by Corollary 2.3.

Case 2: b = c = 0. It is clear that T is the direct sum of a 2 × 2 matrix and a 1 × 1 matrix:

    T = [ 0  a ] ⊕ [ 1 ].
        [ 0  0 ]

By Corollary 2.2, every 2 × 2 matrix and every 1 × 1 matrix is UECSM, and the direct sum of matrices which are UECSM is also UECSM by Lemma 2.4. Thus T is UECSM.

Case 3: c = 0 and a, b ≠ 0. We have

    T = [ 0  a  b ]
        [ 0  0  0 ]
        [ 0  0  1 ].

We see that T has a one-dimensional eigenspace with eigenvalue 1, and some eigenspace with eigenvalue 0. Since T has a one-dimensional nullspace,

the eigenspace with eigenvalue 0 cannot be two-dimensional. Thus T has one one-dimensional eigenspace with eigenvalue 0, and one one-dimensional eigenspace with eigenvalue 1. These spaces are spanned by the following unit vectors:

    u_0 = (1, 0, 0)^t with eigenvalue 0,
    u_1 = (b, 0, 1)^t / √(|b|² + 1) with eigenvalue 1.

Note that |⟨u_0, u_1⟩| = |b| / √(|b|² + 1). Similarly, T* has two one-dimensional eigenspaces:

    v_0 = (0, 1, 0)^t with eigenvalue 0,
    v_1 = (0, 0, 1)^t with eigenvalue 1.

Note that ⟨v_0, v_1⟩ = 0. But since b ≠ 0 we have |⟨u_0, u_1⟩| ≠ |⟨v_0, v_1⟩|, and T fails the Angle Test of Lemma 3.2. Thus T is not UECSM.

Case 4: a, c ≠ 0. We have

    T = [ 0  a  b ]
        [ 0  0  c ]
        [ 0  0  1 ].

We use the conditions of Theorem 3.4: T is UECSM if and only if there exist unimodular α_0, α_00, α_1 such that

1. ⟨u_i, u_j⟩ = ᾱ_iα_j⟨v_j, v_i⟩.
2. The function C defined by letting Cu_i = α_iv_i and extending antilinearly is an involution.
3. T*v_00 = (āα_0ᾱ_00)v_0, where a is as in T (note that Tu_00 = au_0).
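The derivation that follows hinges on the constraint |b + ac|² = |c|² + |c|⁴. A quick numeric sanity check (hypothetical parameter values) confirms that when b is chosen to satisfy it, the two inner products compared by the Angle Test agree:

```python
import numpy as np

# Choose b so that |b + ac|^2 = |c|^2 + |c|^4, then compare
# |<u_0, u_1>| with |<v_0, v_1>| directly.
a, c = 1 + 1j, 2j
r = np.sqrt(abs(c)**2 + abs(c)**4)        # required modulus of b + ac
b = r * np.exp(0.7j) - a * c              # any phase of b + ac works
T = np.array([[0, a, b], [0, 0, c], [0, 0, 1]])

u1 = np.array([b + a*c, c, 1]); u1 = u1 / np.linalg.norm(u1)
assert np.allclose(T @ u1, u1)            # eigenvector of T, eigenvalue 1
v0 = np.array([0, 1, -np.conj(c)]); v0 = v0 / np.linalg.norm(v0)
assert np.allclose(T.conj().T @ v0, 0)    # eigenvector of T*, eigenvalue 0

e1, e3 = np.eye(3)[0], np.eye(3)[2]       # u_0 and v_1
assert np.isclose(abs(np.vdot(e1, u1)), abs(np.vdot(e3, v0)))
```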

It is clear that if such α_i exist then the matrix

    B = [ ᾱ_0  ]
        [ ᾱ_00 ] ( α_0  α_00  α_1 )
        [ ᾱ_1  ]

has rank one. Conversely, if there exists some rank-one matrix B such that b_{i,i} = 1, |b_{i,j}| = 1, B is self-adjoint, and b_{i,j} = ⟨u_i, u_j⟩/⟨v_j, v_i⟩ whenever ⟨v_j, v_i⟩ ≠ 0, then this matrix will factor as above and we will have a triple (α_0, α_00, α_1) that satisfies condition (1).

Looking now at the normalized eigenvectors of T and T*, we get

    u_0 = (1, 0, 0)^t,
    u_00 = (0, 1, 0)^t,
    u_1 = (b + ac, c, 1)^t / √(|b + ac|² + |c|² + 1),

and

    v_0 = (0, 1, −c̄)^t / √(1 + |c|²),
    v_00 = (1 + |c|², −c(āc̄ + b̄), −(āc̄ + b̄))^t / √((1 + |c|²)(|ac + b|² + |c|² + 1)),
    v_1 = (0, 0, 1)^t.

Thus ⟨u_0, u_00⟩ = ⟨v_00, v_0⟩ = 0, but the other pairs of eigenvectors are not orthogonal. Assuming that T is UECSM, the Angle Test gives us |⟨u_0, u_1⟩| = |⟨v_0, v_1⟩|, i.e.

    |b + ac| / √(|b + ac|² + |c|² + 1) = |c| / √(1 + |c|²),
    |b + ac|²(1 + |c|²) = |c|²(|b + ac|² + |c|² + 1),
    |b + ac|² = |c|² + |c|⁴,    (4.4)

and |⟨u_00, u_1⟩| = |⟨v_00, v_1⟩|, i.e.

    |c| / √(|b + ac|² + |c|² + 1) = |ac + b| / √((1 + |c|²)(|ac + b|² + |c|² + 1)),
    |c|²(1 + |c|²) = |b + ac|²,

which is the same as (4.4). Thus the conditions of the Angle Test are satisfied if and only if (4.4) holds.

Now consider the matrix

    B = [ 1         x          b_{0,1}  ]
        [ x̄         1          b_{00,1} ]
        [ b̄_{0,1}   b̄_{00,1}   1        ],

where b_{0,1} = ⟨u_0, u_1⟩/⟨v_1, v_0⟩, b_{00,1} = ⟨u_00, u_1⟩/⟨v_1, v_00⟩, and x is a free unimodular constant that we get since ⟨v_00, v_0⟩ = 0. It is clear that this matrix has rank one only if x = b_{0,1}b̄_{00,1}. But as long as we also have |b_{0,1}| = |b_{00,1}| = 1, this choice of x gives rank(B) = 1. Thus as long as (4.4) holds there exists a triple that satisfies condition (1). In particular, assuming that (4.4) holds, a computation with the eigenvectors above shows b_{0,1} = b_{00,1}, so we may take x = 1 and

    α_0 = 1,   α_00 = 1,   α_1 = b_{0,1} = −(b̄ + āc̄) / (c√(1 + |c|²)),

which is unimodular by (4.4). Furthermore, under (4.4) the normalizing constant simplifies, √(|b + ac|² + |c|² + 1) = 1 + |c|², and we see that

    u_1 = (b + ac, c, 1)^t / (1 + |c|²),
    v_00 = (1 + |c|², −c(āc̄ + b̄), −(āc̄ + b̄))^t / (1 + |c|²)^{3/2}.

Now assume the preceding conditions are met and let C be defined as in (2), by extending the map Cu_i = α_iv_i antilinearly. Since C²u_i = C(α_iv_i) = ᾱ_iCv_i, the map C is an involution precisely when Cv_i = α_iu_i for all i. To compute the Cv_i we use

    e_3 = (1 + |c|²)u_1 − (b + ac)u_0 − cu_00,

which, together with v_0 = (e_2 − c̄e_3)/√(1 + |c|²) and v_1 = e_3, yields the expansions

    v_0 = [ c̄(b + ac)u_0 + (1 + |c|²)u_00 − c̄(1 + |c|²)u_1 ] / √(1 + |c|²),
    v_00 = [ (1 + |c|²)u_0 − (āc̄ + b̄)u_1 ] / √(1 + |c|²).

For v_0, applying C termwise (antilinearly) gives

    Cv_0 = [ c(b̄ + āc̄)v_0 + (1 + |c|²)v_00 − c(1 + |c|²)α_1v_1 ] / √(1 + |c|²),

and since −c(1 + |c|²)α_1 = (b̄ + āc̄)√(1 + |c|²), a short computation in coordinates gives

    Cv_0 = (1, 0, 0)^t = u_0 = α_0u_0:

the second coordinates of the first two terms cancel, and the third coordinates sum to −(āc̄ + b̄) + (b̄ + āc̄) = 0.

For v_00 we have

    Cv_00 = [ (1 + |c|²)v_0 − (b + ac)α_1v_1 ] / √(1 + |c|²)
          = [ (1 + |c|²)(0, 1, −c̄)^t / √(1 + |c|²) + c̄√(1 + |c|²)(0, 0, 1)^t ] / √(1 + |c|²)
          = (0, 1, −c̄)^t + (0, 0, c̄)^t = (0, 1, 0)^t = u_00 = α_00u_00,

using (b + ac)α_1 = −c̄√(1 + |c|²). Finally, for v_1 = e_3 we compute

    Cv_1 = (1 + |c|²)α_1v_1 − (b̄ + āc̄)v_0 − c̄v_00
         = −( (b̄ + āc̄) / (c(1 + |c|²)^{3/2}) ) (b + ac, c, 1)^t = α_1u_1,

again using (4.4) in the first coordinate. Thus Cv_i = α_iu_i for each i, so C is an involution, and condition (2) of Theorem 3.4 is satisfied.

Finally, we wish to show that condition (3) of Theorem 3.4 holds. We have Tu_00 = (a, 0, 0)^t = au_0, so with α_0 = α_00 = 1 we wish to show that T*v_00 = āv_0. But

    T*v_00 = ( 0, ā(1 + |c|²), b̄(1 + |c|²) − |c|²(āc̄ + b̄) − (āc̄ + b̄) )^t / (1 + |c|²)^{3/2}
           = ā(1 + |c|²)(0, 1, −c̄)^t / (1 + |c|²)^{3/2}
           = āv_0.

Thus T meets the conditions of Theorem 3.4 if and only if |b + ac|² = |c|² + |c|⁴, and we have the desired result.

4.4 The Three Eigenvalue Case

Proposition 4.3. Let T ∈ M_3(C) be in the upper triangular form given in (4.3). Then T is UECSM if and only if one of the following holds:

1. a = b = 0;
2. b = c = 0;
3. a = c = 0;
4. a, c, and b − ac are all nonzero, and the matrix B of Theorem 3.3, built from the inner products of the eigenvectors of T and T* computed in (4.5) below, has rank one.

Proof. The proof breaks down into seven cases.

Case 1: a = b = 0. It is clear that T is the direct sum of a 1 × 1 matrix and a 2 × 2 matrix, and thus UECSM:

    T = [ 0 ] ⊕ [ 1  c ].
                [ 0  λ ]

Case 2: b = c = 0. Similarly, T is the direct sum of a 2 × 2 matrix and a 1 × 1 matrix, and thus UECSM:

    T = [ 0  a ] ⊕ [ λ ].
        [ 0  1 ]

Case 3: a = c = 0, b ≠ 0. We have

    T = [ 0  0  b ]
        [ 0  1  0 ]
        [ 0  0  λ ].

If we define U by

    U = [ 1  0  0 ]
        [ 0  0  1 ]
        [ 0  1  0 ],

we see that U is a unitary matrix. Furthermore,

    U*TU = [ 0  b  0 ]
           [ 0  λ  0 ]
           [ 0  0  1 ],

and thus T is unitarily equivalent to the direct sum of a 2 × 2 matrix and a 1 × 1 matrix, and UECSM.
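The permutation argument of Case 3 is a one-line numeric check (sample values are hypothetical):

```python
import numpy as np

# Case 3 sketch (a = c = 0): swapping e_2 and e_3 turns T into a
# direct sum of a 2x2 block and a 1x1 block.
b, lam = 2 - 1j, 3j
T = np.array([[0, 0, b], [0, 1, 0], [0, 0, lam]])
U = np.array([[1, 0, 0], [0, 0, 1], [0, 1, 0]])   # unitary permutation
S = U.conj().T @ T @ U

assert np.allclose(S, np.array([[0, b, 0], [0, lam, 0], [0, 0, 1]]))
```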

Case 4: c = 0 and a, b ≠ 0. We have

    T = [ 0  a  b ]
        [ 0  1  0 ]
        [ 0  0  λ ].

We can see that T has three one-dimensional eigenspaces, spanned by the following unit eigenvectors:

    u_0 = (1, 0, 0)^t with eigenvalue 0,
    u_1 = (a, 1, 0)^t / √(|a|² + 1) with eigenvalue 1,
    u_λ = (b, 0, λ)^t / √(|b|² + |λ|²) with eigenvalue λ.

Similarly, T* has three corresponding eigenspaces with corresponding unit eigenvectors:

    v_0 = (λ̄, −āλ̄, −b̄)^t / √((|a|² + 1)|λ|² + |b|²),
    v_1 = (0, 1, 0)^t,
    v_λ = (0, 0, 1)^t.

By Lemma 3.2 we know that if T is C-symmetric, then |⟨u_0, u_λ⟩| = |⟨v_0, v_λ⟩|, and computing these inner products gives us

    |b| / √(|λ|² + |b|²) = |b| / √((|a|² + 1)|λ|² + |b|²),

which implies that either b = 0, or

    |λ|² + |b|² = (|a|² + 1)|λ|² + |b|²

and thus a = 0 (since λ ≠ 0), which is a contradiction. Thus T is not UECSM.

31 4.4. THE THREE EIGENVALUE CASE 3 Case 5: a = 0, b, c 0. By an argument similar to the argument in Case 4, T is not UECSM. We have 0 0 b T = 0 c. 0 0 λ Now T has three one-dimensional eigenspaces, spanned by the following unit eigenvectors: u 0 = (, 0, 0) with eigenvalue 0, u = (0,, 0) with eigenvalue, ( u λ = + c 2 + b b, c 2 λ λ with eigenvalue λ. λ λ Similarly, T has three corresponding one-dimensional eigenspaces spanned by corresponding unit eigenvectors: v 0 = + λ 2 b v = + λ c v λ = (0, 0, ). ( λb, 0, ), 2 ( 0, λ c, ), By Lemma 2.4 we know that if T is C-symmetric, then u 0, u = v 0, v, and thus we have 0 = ( + λ 2) ( + > 0, λ 2) b c which is a contradiction. Thus T is not UECSM.

Case 6: a, c ≠ 0, b = ac. It is clear that T has three one-dimensional eigenspaces, spanned by the following unit eigenvectors:

    u_0 = (1, 0, 0)^t with eigenvalue 0,
    u_1 = (a, 1, 0)^t / √(1 + |a|²) with eigenvalue 1,
    u_λ = (ac, c, λ − 1)^t / √(|ac|² + |c|² + |λ − 1|²) with eigenvalue λ.

Similarly, T* has three corresponding one-dimensional eigenspaces spanned by corresponding unit eigenvectors:

    v_0 = (1, −ā, 0)^t / √(1 + |a|²),
    v_1 = (0, λ̄ − 1, −c̄)^t / √(|λ − 1|² + |c|²),
    v_λ = (0, 0, 1)^t.

We can compute that

    |⟨u_0, u_λ⟩| = |ac| / √(|ac|² + |c|² + |λ − 1|²) ≠ 0    but    ⟨v_0, v_λ⟩ = 0,

and thus T fails the Angle Test and is not UECSM.

Case 7: a, c, b − ac ≠ 0. We use the rank-one condition given in Theorem 3.3. Simple computation gives us that T has eigenspaces spanned by the normalized eigenvectors

    u_0 = (1, 0, 0)^t,
    u_1 = (a, 1, 0)^t / √(1 + |a|²),
    u_λ = ( (ac + b(λ − 1))/(λ² − λ), c/(λ − 1), 1 )^t / N_λ,

where N_λ = √( 1 + |c/(λ − 1)|² + |(ac + b(λ − 1))/(λ² − λ)|² ).

Similarly, T* has eigenspaces spanned by the normalized eigenvectors

    v_0 = ( λ̄, −āλ̄, āc̄ − b̄ )^t / M,   where M = √( (1 + |a|²)|λ|² + |b − ac|² ),
    v_1 = ( 0, λ̄ − 1, −c̄ )^t / √( |λ − 1|² + |c|² ),
    v_λ = (0, 0, 1)^t.

Computing the pairwise inner products of these eigenvectors in turn gives us

    ⟨u_0, u_1⟩ = ā / √(1 + |a|²),
    ⟨u_0, u_λ⟩ = conj( (ac + b(λ − 1)) / (λ² − λ) ) / N_λ,
    ⟨u_1, u_λ⟩ = conj( (|a|²c + āb(λ − 1) + cλ) / (λ² − λ) ) / ( √(1 + |a|²) N_λ ),    (4.5)
    ⟨v_0, v_1⟩ = ( ā(λ̄ − |λ|² − |c|²) + b̄c ) / ( M √(|λ − 1|² + |c|²) ),
    ⟨v_0, v_λ⟩ = ( āc̄ − b̄ ) / M,
    ⟨v_1, v_λ⟩ = −c̄ / √(|λ − 1|² + |c|²).

Given our hypotheses that λ ∉ {0, 1}, a, c ≠ 0, and b − ac ≠ 0, these are all well-defined. So now consider the matrix

    B = [ 1                     ⟨u_0,u_1⟩/⟨v_1,v_0⟩    ⟨u_0,u_λ⟩/⟨v_λ,v_0⟩ ]
        [ ⟨u_1,u_0⟩/⟨v_0,v_1⟩    1                     ⟨u_1,u_λ⟩/⟨v_λ,v_1⟩ ]
        [ ⟨u_λ,u_0⟩/⟨v_0,v_λ⟩    ⟨u_λ,u_1⟩/⟨v_1,v_λ⟩    1                  ].

The conjugate symmetry of the inner product gives us that

    ⟨u_j, u_i⟩ / ⟨v_i, v_j⟩ = conj( ⟨u_i, u_j⟩ / ⟨v_j, v_i⟩ ),

and so this matrix has the form of the matrix B from Theorem 3.3. The matrix B is rank one if and only if the column space of B is one-dimensional, if and only if each column is a scalar multiple of each other column. Let B_i be the ith column of B; then if B is rank one we must have

    B_2 = ( ⟨u_0, u_1⟩ / ⟨v_1, v_0⟩ ) B_1,
    B_3 = ( ⟨u_1, u_λ⟩ / ⟨v_λ, v_1⟩ ) B_2,        (4.6)
    B_1 = ( ⟨u_λ, u_0⟩ / ⟨v_0, v_λ⟩ ) B_3.

The system in (4.6) implies nine equations, three of which are trivially true. Writing b_{0,1} = ⟨u_0,u_1⟩/⟨v_1,v_0⟩, b_{1,λ} = ⟨u_1,u_λ⟩/⟨v_λ,v_1⟩, and b_{0,λ} = ⟨u_0,u_λ⟩/⟨v_λ,v_0⟩, the other six are given by

    |b_{0,1}| = 1,                (4.7)
    |b_{1,λ}| = 1,                (4.8)
    |b_{0,λ}| = 1,                (4.9)
    b_{1,λ} = b̄_{0,1} b_{0,λ},    (4.10)
    b_{0,λ} = b_{0,1} b_{1,λ},    (4.11)
    b̄_{0,1} = b̄_{0,λ} b_{1,λ}.    (4.12)

Note first that satisfying the rank-one condition implies satisfying the Angle Test, because of equations (4.7), (4.8), and (4.9). Further, note that if we assume (4.10), (4.11), and (4.12), then substituting (4.11) into (4.10) gives

    b_{1,λ} = b̄_{0,1} b_{0,1} b_{1,λ} = |b_{0,1}|² b_{1,λ},

and thus (4.7) holds. A similar argument gives that (4.8) and (4.9) hold as well, so the conditions of Theorem 3.3 are met if and only if we have (4.10),

(4.11), and (4.12). Rearranging these equations gives

    ⟨u_0, u_1⟩⟨u_1, u_λ⟩ / ⟨u_0, u_λ⟩ = ⟨v_1, v_0⟩⟨v_λ, v_1⟩ / ⟨v_λ, v_0⟩,
    ⟨u_1, u_0⟩⟨u_0, u_λ⟩ / ⟨u_1, u_λ⟩ = ⟨v_0, v_1⟩⟨v_λ, v_0⟩ / ⟨v_λ, v_1⟩,
    ⟨u_λ, u_0⟩⟨u_1, u_λ⟩ / ⟨u_1, u_0⟩ = ⟨v_0, v_λ⟩⟨v_λ, v_1⟩ / ⟨v_0, v_1⟩.

Plugging in values from Equation (4.5) gives the desired condition.

This completes our classification of the 3 × 3 matrices from the upper triangular form. Given a 3 × 3 upper triangular matrix, these results allow us to easily determine whether it is UECSM. Since every matrix is unitarily equivalent to an upper triangular matrix, this is in effect a classification of all 3 × 3 matrices which are UECSM.


Bibliography

[1] Stephan Garcia, personal communication.

[2] Stephan Ramon Garcia, Conjugation and Clark operators, Contemporary Mathematics 393 (2006).

[3] Stephan Ramon Garcia and Mihai Putinar, Complex symmetric operators and applications, Transactions of the American Mathematical Society 358 (2005).

[4] Stephan Ramon Garcia and Warren R. Wogen, Some new classes of complex symmetric operators, preprint.

[5] Roger A. Horn and Charles R. Johnson, Matrix Analysis, Cambridge University Press.


More information

A PRIMER ON SESQUILINEAR FORMS

A PRIMER ON SESQUILINEAR FORMS A PRIMER ON SESQUILINEAR FORMS BRIAN OSSERMAN This is an alternative presentation of most of the material from 8., 8.2, 8.3, 8.4, 8.5 and 8.8 of Artin s book. Any terminology (such as sesquilinear form

More information

Math 108b: Notes on the Spectral Theorem

Math 108b: Notes on the Spectral Theorem Math 108b: Notes on the Spectral Theorem From section 6.3, we know that every linear operator T on a finite dimensional inner product space V has an adjoint. (T is defined as the unique linear operator

More information

University of Colorado Denver Department of Mathematical and Statistical Sciences Applied Linear Algebra Ph.D. Preliminary Exam June 8, 2012

University of Colorado Denver Department of Mathematical and Statistical Sciences Applied Linear Algebra Ph.D. Preliminary Exam June 8, 2012 University of Colorado Denver Department of Mathematical and Statistical Sciences Applied Linear Algebra Ph.D. Preliminary Exam June 8, 2012 Name: Exam Rules: This is a closed book exam. Once the exam

More information

Massachusetts Institute of Technology Department of Economics Statistics. Lecture Notes on Matrix Algebra

Massachusetts Institute of Technology Department of Economics Statistics. Lecture Notes on Matrix Algebra Massachusetts Institute of Technology Department of Economics 14.381 Statistics Guido Kuersteiner Lecture Notes on Matrix Algebra These lecture notes summarize some basic results on matrix algebra used

More information

MATHEMATICS 217 NOTES

MATHEMATICS 217 NOTES MATHEMATICS 27 NOTES PART I THE JORDAN CANONICAL FORM The characteristic polynomial of an n n matrix A is the polynomial χ A (λ) = det(λi A), a monic polynomial of degree n; a monic polynomial in the variable

More information

MTH50 Spring 07 HW Assignment 7 {From [FIS0]}: Sec 44 #4a h 6; Sec 5 #ad ac 4ae 4 7 The due date for this assignment is 04/05/7 Sec 44 #4a h Evaluate the erminant of the following matrices by any legitimate

More information

Elementary linear algebra

Elementary linear algebra Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The

More information

Diagonalization by a unitary similarity transformation

Diagonalization by a unitary similarity transformation Physics 116A Winter 2011 Diagonalization by a unitary similarity transformation In these notes, we will always assume that the vector space V is a complex n-dimensional space 1 Introduction A semi-simple

More information

Linear Algebra: Matrix Eigenvalue Problems

Linear Algebra: Matrix Eigenvalue Problems CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given

More information

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA Kent State University Department of Mathematical Sciences Compiled and Maintained by Donald L. White Version: August 29, 2017 CONTENTS LINEAR ALGEBRA AND

More information

Study Guide for Linear Algebra Exam 2

Study Guide for Linear Algebra Exam 2 Study Guide for Linear Algebra Exam 2 Term Vector Space Definition A Vector Space is a nonempty set V of objects, on which are defined two operations, called addition and multiplication by scalars (real

More information

Quantum Computing Lecture 2. Review of Linear Algebra

Quantum Computing Lecture 2. Review of Linear Algebra Quantum Computing Lecture 2 Review of Linear Algebra Maris Ozols Linear algebra States of a quantum system form a vector space and their transformations are described by linear operators Vector spaces

More information

MATRICES ARE SIMILAR TO TRIANGULAR MATRICES

MATRICES ARE SIMILAR TO TRIANGULAR MATRICES MATRICES ARE SIMILAR TO TRIANGULAR MATRICES 1 Complex matrices Recall that the complex numbers are given by a + ib where a and b are real and i is the imaginary unity, ie, i 2 = 1 In what we describe below,

More information

Linear algebra 2. Yoav Zemel. March 1, 2012

Linear algebra 2. Yoav Zemel. March 1, 2012 Linear algebra 2 Yoav Zemel March 1, 2012 These notes were written by Yoav Zemel. The lecturer, Shmuel Berger, should not be held responsible for any mistake. Any comments are welcome at zamsh7@gmail.com.

More information

LINEAR ALGEBRA REVIEW

LINEAR ALGEBRA REVIEW LINEAR ALGEBRA REVIEW JC Stuff you should know for the exam. 1. Basics on vector spaces (1) F n is the set of all n-tuples (a 1,... a n ) with a i F. It forms a VS with the operations of + and scalar multiplication

More information

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET This is a (not quite comprehensive) list of definitions and theorems given in Math 1553. Pay particular attention to the ones in red. Study Tip For each

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

MATH 240 Spring, Chapter 1: Linear Equations and Matrices

MATH 240 Spring, Chapter 1: Linear Equations and Matrices MATH 240 Spring, 2006 Chapter Summaries for Kolman / Hill, Elementary Linear Algebra, 8th Ed. Sections 1.1 1.6, 2.1 2.2, 3.2 3.8, 4.3 4.5, 5.1 5.3, 5.5, 6.1 6.5, 7.1 7.2, 7.4 DEFINITIONS Chapter 1: Linear

More information

6 Inner Product Spaces

6 Inner Product Spaces Lectures 16,17,18 6 Inner Product Spaces 6.1 Basic Definition Parallelogram law, the ability to measure angle between two vectors and in particular, the concept of perpendicularity make the euclidean space

More information

The Spectral Theorem for normal linear maps

The Spectral Theorem for normal linear maps MAT067 University of California, Davis Winter 2007 The Spectral Theorem for normal linear maps Isaiah Lankham, Bruno Nachtergaele, Anne Schilling (March 14, 2007) In this section we come back to the question

More information

Numerical Linear Algebra

Numerical Linear Algebra University of Alabama at Birmingham Department of Mathematics Numerical Linear Algebra Lecture Notes for MA 660 (1997 2014) Dr Nikolai Chernov April 2014 Chapter 0 Review of Linear Algebra 0.1 Matrices

More information

Schur s Triangularization Theorem. Math 422

Schur s Triangularization Theorem. Math 422 Schur s Triangularization Theorem Math 4 The characteristic polynomial p (t) of a square complex matrix A splits as a product of linear factors of the form (t λ) m Of course, finding these factors is a

More information

Chapter 6: Orthogonality

Chapter 6: Orthogonality Chapter 6: Orthogonality (Last Updated: November 7, 7) These notes are derived primarily from Linear Algebra and its applications by David Lay (4ed). A few theorems have been moved around.. Inner products

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors Eigenvalues and Eigenvectors week -2 Fall 26 Eigenvalues and eigenvectors The most simple linear transformation from R n to R n may be the transformation of the form: T (x,,, x n ) (λ x, λ 2,, λ n x n

More information

Math 489AB Exercises for Chapter 2 Fall Section 2.3

Math 489AB Exercises for Chapter 2 Fall Section 2.3 Math 489AB Exercises for Chapter 2 Fall 2008 Section 2.3 2.3.3. Let A M n (R). Then the eigenvalues of A are the roots of the characteristic polynomial p A (t). Since A is real, p A (t) is a polynomial

More information

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET This is a (not quite comprehensive) list of definitions and theorems given in Math 1553. Pay particular attention to the ones in red. Study Tip For each

More information

LINEAR ALGEBRA SUMMARY SHEET.

LINEAR ALGEBRA SUMMARY SHEET. LINEAR ALGEBRA SUMMARY SHEET RADON ROSBOROUGH https://intuitiveexplanationscom/linear-algebra-summary-sheet/ This document is a concise collection of many of the important theorems of linear algebra, organized

More information

Math113: Linear Algebra. Beifang Chen

Math113: Linear Algebra. Beifang Chen Math3: Linear Algebra Beifang Chen Spring 26 Contents Systems of Linear Equations 3 Systems of Linear Equations 3 Linear Systems 3 2 Geometric Interpretation 3 3 Matrices of Linear Systems 4 4 Elementary

More information

OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises

OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises This document gives the solutions to all of the online exercises for OHSx XM511. The section ( ) numbers refer to the textbook. TYPE I are True/False. Answers are in square brackets [. Lecture 02 ( 1.1)

More information

Lecture notes: Applied linear algebra Part 1. Version 2

Lecture notes: Applied linear algebra Part 1. Version 2 Lecture notes: Applied linear algebra Part 1. Version 2 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 1 Notation, basic notions and facts 1.1 Subspaces, range and

More information

BASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x

BASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x BASIC ALGORITHMS IN LINEAR ALGEBRA STEVEN DALE CUTKOSKY Matrices and Applications of Gaussian Elimination Systems of Equations Suppose that A is an n n matrix with coefficents in a field F, and x = (x,,

More information

The University of Texas at Austin Department of Electrical and Computer Engineering. EE381V: Large Scale Learning Spring 2013.

The University of Texas at Austin Department of Electrical and Computer Engineering. EE381V: Large Scale Learning Spring 2013. The University of Texas at Austin Department of Electrical and Computer Engineering EE381V: Large Scale Learning Spring 2013 Assignment Two Caramanis/Sanghavi Due: Tuesday, Feb. 19, 2013. Computational

More information

Econ 204 Supplement to Section 3.6 Diagonalization and Quadratic Forms. 1 Diagonalization and Change of Basis

Econ 204 Supplement to Section 3.6 Diagonalization and Quadratic Forms. 1 Diagonalization and Change of Basis Econ 204 Supplement to Section 3.6 Diagonalization and Quadratic Forms De La Fuente notes that, if an n n matrix has n distinct eigenvalues, it can be diagonalized. In this supplement, we will provide

More information

Linear Algebra Highlights

Linear Algebra Highlights Linear Algebra Highlights Chapter 1 A linear equation in n variables is of the form a 1 x 1 + a 2 x 2 + + a n x n. We can have m equations in n variables, a system of linear equations, which we want to

More information

October 25, 2013 INNER PRODUCT SPACES

October 25, 2013 INNER PRODUCT SPACES October 25, 2013 INNER PRODUCT SPACES RODICA D. COSTIN Contents 1. Inner product 2 1.1. Inner product 2 1.2. Inner product spaces 4 2. Orthogonal bases 5 2.1. Existence of an orthogonal basis 7 2.2. Orthogonal

More information

Chapter 6 Inner product spaces

Chapter 6 Inner product spaces Chapter 6 Inner product spaces 6.1 Inner products and norms Definition 1 Let V be a vector space over F. An inner product on V is a function, : V V F such that the following conditions hold. x+z,y = x,y

More information

Lecture 2: Linear operators

Lecture 2: Linear operators Lecture 2: Linear operators Rajat Mittal IIT Kanpur The mathematical formulation of Quantum computing requires vector spaces and linear operators So, we need to be comfortable with linear algebra to study

More information

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F is R or C. Definition 1. A linear operator

More information

The following definition is fundamental.

The following definition is fundamental. 1. Some Basics from Linear Algebra With these notes, I will try and clarify certain topics that I only quickly mention in class. First and foremost, I will assume that you are familiar with many basic

More information

j=1 x j p, if 1 p <, x i ξ : x i < ξ} 0 as p.

j=1 x j p, if 1 p <, x i ξ : x i < ξ} 0 as p. LINEAR ALGEBRA Fall 203 The final exam Almost all of the problems solved Exercise Let (V, ) be a normed vector space. Prove x y x y for all x, y V. Everybody knows how to do this! Exercise 2 If V is a

More information

NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction

NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction NONCOMMUTATIVE POLYNOMIAL EQUATIONS Edward S Letzter Introduction My aim in these notes is twofold: First, to briefly review some linear algebra Second, to provide you with some new tools and techniques

More information

Linear Algebra- Final Exam Review

Linear Algebra- Final Exam Review Linear Algebra- Final Exam Review. Let A be invertible. Show that, if v, v, v 3 are linearly independent vectors, so are Av, Av, Av 3. NOTE: It should be clear from your answer that you know the definition.

More information

Final Review Written by Victoria Kala SH 6432u Office Hours R 12:30 1:30pm Last Updated 11/30/2015

Final Review Written by Victoria Kala SH 6432u Office Hours R 12:30 1:30pm Last Updated 11/30/2015 Final Review Written by Victoria Kala vtkala@mathucsbedu SH 6432u Office Hours R 12:30 1:30pm Last Updated 11/30/2015 Summary This review contains notes on sections 44 47, 51 53, 61, 62, 65 For your final,

More information

Lecture 8 : Eigenvalues and Eigenvectors

Lecture 8 : Eigenvalues and Eigenvectors CPS290: Algorithmic Foundations of Data Science February 24, 2017 Lecture 8 : Eigenvalues and Eigenvectors Lecturer: Kamesh Munagala Scribe: Kamesh Munagala Hermitian Matrices It is simpler to begin with

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

Linear Algebra. Workbook

Linear Algebra. Workbook Linear Algebra Workbook Paul Yiu Department of Mathematics Florida Atlantic University Last Update: November 21 Student: Fall 2011 Checklist Name: A B C D E F F G H I J 1 2 3 4 5 6 7 8 9 10 xxx xxx xxx

More information

MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators.

MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators. MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators. Adjoint operator and adjoint matrix Given a linear operator L on an inner product space V, the adjoint of L is a transformation

More information

Eigenvectors. Prop-Defn

Eigenvectors. Prop-Defn Eigenvectors Aim lecture: The simplest T -invariant subspaces are 1-dim & these give rise to the theory of eigenvectors. To compute these we introduce the similarity invariant, the characteristic polynomial.

More information

Homework 11 Solutions. Math 110, Fall 2013.

Homework 11 Solutions. Math 110, Fall 2013. Homework 11 Solutions Math 110, Fall 2013 1 a) Suppose that T were self-adjoint Then, the Spectral Theorem tells us that there would exist an orthonormal basis of P 2 (R), (p 1, p 2, p 3 ), consisting

More information

Lec 2: Mathematical Economics

Lec 2: Mathematical Economics Lec 2: Mathematical Economics to Spectral Theory Sugata Bag Delhi School of Economics 24th August 2012 [SB] (Delhi School of Economics) Introductory Math Econ 24th August 2012 1 / 17 Definition: Eigen

More information

1. General Vector Spaces

1. General Vector Spaces 1.1. Vector space axioms. 1. General Vector Spaces Definition 1.1. Let V be a nonempty set of objects on which the operations of addition and scalar multiplication are defined. By addition we mean a rule

More information

Linear Algebra M1 - FIB. Contents: 5. Matrices, systems of linear equations and determinants 6. Vector space 7. Linear maps 8.

Linear Algebra M1 - FIB. Contents: 5. Matrices, systems of linear equations and determinants 6. Vector space 7. Linear maps 8. Linear Algebra M1 - FIB Contents: 5 Matrices, systems of linear equations and determinants 6 Vector space 7 Linear maps 8 Diagonalization Anna de Mier Montserrat Maureso Dept Matemàtica Aplicada II Translation:

More information

Contents. Preface for the Instructor. Preface for the Student. xvii. Acknowledgments. 1 Vector Spaces 1 1.A R n and C n 2

Contents. Preface for the Instructor. Preface for the Student. xvii. Acknowledgments. 1 Vector Spaces 1 1.A R n and C n 2 Contents Preface for the Instructor xi Preface for the Student xv Acknowledgments xvii 1 Vector Spaces 1 1.A R n and C n 2 Complex Numbers 2 Lists 5 F n 6 Digression on Fields 10 Exercises 1.A 11 1.B Definition

More information

Topic 1: Matrix diagonalization

Topic 1: Matrix diagonalization Topic : Matrix diagonalization Review of Matrices and Determinants Definition A matrix is a rectangular array of real numbers a a a m a A = a a m a n a n a nm The matrix is said to be of order n m if it

More information

Linear Algebra 1. M.T.Nair Department of Mathematics, IIT Madras. and in that case x is called an eigenvector of T corresponding to the eigenvalue λ.

Linear Algebra 1. M.T.Nair Department of Mathematics, IIT Madras. and in that case x is called an eigenvector of T corresponding to the eigenvalue λ. Linear Algebra 1 M.T.Nair Department of Mathematics, IIT Madras 1 Eigenvalues and Eigenvectors 1.1 Definition and Examples Definition 1.1. Let V be a vector space (over a field F) and T : V V be a linear

More information

Generalized Eigenvectors and Jordan Form

Generalized Eigenvectors and Jordan Form Generalized Eigenvectors and Jordan Form We have seen that an n n matrix A is diagonalizable precisely when the dimensions of its eigenspaces sum to n. So if A is not diagonalizable, there is at least

More information

GQE ALGEBRA PROBLEMS

GQE ALGEBRA PROBLEMS GQE ALGEBRA PROBLEMS JAKOB STREIPEL Contents. Eigenthings 2. Norms, Inner Products, Orthogonality, and Such 6 3. Determinants, Inverses, and Linear (In)dependence 4. (Invariant) Subspaces 3 Throughout

More information

Symmetric and anti symmetric matrices

Symmetric and anti symmetric matrices Symmetric and anti symmetric matrices In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose. Formally, matrix A is symmetric if. A = A Because equal matrices have equal

More information

Bilinear and quadratic forms

Bilinear and quadratic forms Bilinear and quadratic forms Zdeněk Dvořák April 8, 015 1 Bilinear forms Definition 1. Let V be a vector space over a field F. A function b : V V F is a bilinear form if b(u + v, w) = b(u, w) + b(v, w)

More information

First we introduce the sets that are going to serve as the generalizations of the scalars.

First we introduce the sets that are going to serve as the generalizations of the scalars. Contents 1 Fields...................................... 2 2 Vector spaces.................................. 4 3 Matrices..................................... 7 4 Linear systems and matrices..........................

More information

Lecture notes on Quantum Computing. Chapter 1 Mathematical Background

Lecture notes on Quantum Computing. Chapter 1 Mathematical Background Lecture notes on Quantum Computing Chapter 1 Mathematical Background Vector states of a quantum system with n physical states are represented by unique vectors in C n, the set of n 1 column vectors 1 For

More information

Linear Algebra: Graduate Level Problems and Solutions. Igor Yanovsky

Linear Algebra: Graduate Level Problems and Solutions. Igor Yanovsky Linear Algebra: Graduate Level Problems and Solutions Igor Yanovsky Linear Algebra Igor Yanovsky, 5 Disclaimer: This handbook is intended to assist graduate students with qualifying examination preparation.

More information

MATH 115A: SAMPLE FINAL SOLUTIONS

MATH 115A: SAMPLE FINAL SOLUTIONS MATH A: SAMPLE FINAL SOLUTIONS JOE HUGHES. Let V be the set of all functions f : R R such that f( x) = f(x) for all x R. Show that V is a vector space over R under the usual addition and scalar multiplication

More information

Economics 204 Summer/Fall 2010 Lecture 10 Friday August 6, 2010

Economics 204 Summer/Fall 2010 Lecture 10 Friday August 6, 2010 Economics 204 Summer/Fall 2010 Lecture 10 Friday August 6, 2010 Diagonalization of Symmetric Real Matrices (from Handout Definition 1 Let δ ij = { 1 if i = j 0 if i j A basis V = {v 1,..., v n } of R n

More information

University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm

University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm Name: The proctor will let you read the following conditions

More information

Chap 3. Linear Algebra

Chap 3. Linear Algebra Chap 3. Linear Algebra Outlines 1. Introduction 2. Basis, Representation, and Orthonormalization 3. Linear Algebraic Equations 4. Similarity Transformation 5. Diagonal Form and Jordan Form 6. Functions

More information

Knowledge Discovery and Data Mining 1 (VO) ( )

Knowledge Discovery and Data Mining 1 (VO) ( ) Knowledge Discovery and Data Mining 1 (VO) (707.003) Review of Linear Algebra Denis Helic KTI, TU Graz Oct 9, 2014 Denis Helic (KTI, TU Graz) KDDM1 Oct 9, 2014 1 / 74 Big picture: KDDM Probability Theory

More information

4. Linear transformations as a vector space 17

4. Linear transformations as a vector space 17 4 Linear transformations as a vector space 17 d) 1 2 0 0 1 2 0 0 1 0 0 0 1 2 3 4 32 Let a linear transformation in R 2 be the reflection in the line = x 2 Find its matrix 33 For each linear transformation

More information

Recall the convention that, for us, all vectors are column vectors.

Recall the convention that, for us, all vectors are column vectors. Some linear algebra Recall the convention that, for us, all vectors are column vectors. 1. Symmetric matrices Let A be a real matrix. Recall that a complex number λ is an eigenvalue of A if there exists

More information

LECTURE VII: THE JORDAN CANONICAL FORM MAT FALL 2006 PRINCETON UNIVERSITY. [See also Appendix B in the book]

LECTURE VII: THE JORDAN CANONICAL FORM MAT FALL 2006 PRINCETON UNIVERSITY. [See also Appendix B in the book] LECTURE VII: THE JORDAN CANONICAL FORM MAT 204 - FALL 2006 PRINCETON UNIVERSITY ALFONSO SORRENTINO [See also Appendix B in the book] 1 Introduction In Lecture IV we have introduced the concept of eigenvalue

More information

Lecture Summaries for Linear Algebra M51A

Lecture Summaries for Linear Algebra M51A These lecture summaries may also be viewed online by clicking the L icon at the top right of any lecture screen. Lecture Summaries for Linear Algebra M51A refers to the section in the textbook. Lecture

More information

Chapter 5 Eigenvalues and Eigenvectors

Chapter 5 Eigenvalues and Eigenvectors Chapter 5 Eigenvalues and Eigenvectors Outline 5.1 Eigenvalues and Eigenvectors 5.2 Diagonalization 5.3 Complex Vector Spaces 2 5.1 Eigenvalues and Eigenvectors Eigenvalue and Eigenvector If A is a n n

More information

Review of Matrices and Block Structures

Review of Matrices and Block Structures CHAPTER 2 Review of Matrices and Block Structures Numerical linear algebra lies at the heart of modern scientific computing and computational science. Today it is not uncommon to perform numerical computations

More information

LinGloss. A glossary of linear algebra

LinGloss. A glossary of linear algebra LinGloss A glossary of linear algebra Contents: Decompositions Types of Matrices Theorems Other objects? Quasi-triangular A matrix A is quasi-triangular iff it is a triangular matrix except its diagonal

More information

Solving a system by back-substitution, checking consistency of a system (no rows of the form

Solving a system by back-substitution, checking consistency of a system (no rows of the form MATH 520 LEARNING OBJECTIVES SPRING 2017 BROWN UNIVERSITY SAMUEL S. WATSON Week 1 (23 Jan through 27 Jan) Definition of a system of linear equations, definition of a solution of a linear system, elementary

More information

Linear algebra II Homework #1 solutions A = This means that every eigenvector with eigenvalue λ = 1 must have the form

Linear algebra II Homework #1 solutions A = This means that every eigenvector with eigenvalue λ = 1 must have the form Linear algebra II Homework # solutions. Find the eigenvalues and the eigenvectors of the matrix 4 6 A =. 5 Since tra = 9 and deta = = 8, the characteristic polynomial is f(λ) = λ (tra)λ+deta = λ 9λ+8 =

More information

Here is an example of a block diagonal matrix with Jordan Blocks on the diagonal: J

Here is an example of a block diagonal matrix with Jordan Blocks on the diagonal: J Class Notes 4: THE SPECTRAL RADIUS, NORM CONVERGENCE AND SOR. Math 639d Due Date: Feb. 7 (updated: February 5, 2018) In the first part of this week s reading, we will prove Theorem 2 of the previous class.

More information

CLASSIFICATION OF COMPLETELY POSITIVE MAPS 1. INTRODUCTION

CLASSIFICATION OF COMPLETELY POSITIVE MAPS 1. INTRODUCTION CLASSIFICATION OF COMPLETELY POSITIVE MAPS STEPHAN HOYER ABSTRACT. We define a completely positive map and classify all completely positive linear maps. We further classify all such maps that are trace-preserving

More information

Review of some mathematical tools

Review of some mathematical tools MATHEMATICAL FOUNDATIONS OF SIGNAL PROCESSING Fall 2016 Benjamín Béjar Haro, Mihailo Kolundžija, Reza Parhizkar, Adam Scholefield Teaching assistants: Golnoosh Elhami, Hanjie Pan Review of some mathematical

More information

6 Matrix Diagonalization and Eigensystems

6 Matrix Diagonalization and Eigensystems 14.102, Math for Economists Fall 2005 Lecture Notes, 9/27/2005 These notes are primarily based on those written by George Marios Angeletos for the Harvard Math Camp in 1999 and 2000, and updated by Stavros

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 16: Eigenvalue Problems; Similarity Transformations Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical Analysis I 1 / 18 Eigenvalue

More information

Matrix Theory, Math6304 Lecture Notes from September 27, 2012 taken by Tasadduk Chowdhury

Matrix Theory, Math6304 Lecture Notes from September 27, 2012 taken by Tasadduk Chowdhury Matrix Theory, Math634 Lecture Notes from September 27, 212 taken by Tasadduk Chowdhury Last Time (9/25/12): QR factorization: any matrix A M n has a QR factorization: A = QR, whereq is unitary and R is

More information

Mathematical Methods wk 2: Linear Operators

Mathematical Methods wk 2: Linear Operators John Magorrian, magog@thphysoxacuk These are work-in-progress notes for the second-year course on mathematical methods The most up-to-date version is available from http://www-thphysphysicsoxacuk/people/johnmagorrian/mm

More information

Math Linear Algebra Final Exam Review Sheet

Math Linear Algebra Final Exam Review Sheet Math 15-1 Linear Algebra Final Exam Review Sheet Vector Operations Vector addition is a component-wise operation. Two vectors v and w may be added together as long as they contain the same number n of

More information

Eigenvalue and Eigenvector Problems

Eigenvalue and Eigenvector Problems Eigenvalue and Eigenvector Problems An attempt to introduce eigenproblems Radu Trîmbiţaş Babeş-Bolyai University April 8, 2009 Radu Trîmbiţaş ( Babeş-Bolyai University) Eigenvalue and Eigenvector Problems

More information

Chapter 3. Matrices. 3.1 Matrices

Chapter 3. Matrices. 3.1 Matrices 40 Chapter 3 Matrices 3.1 Matrices Definition 3.1 Matrix) A matrix A is a rectangular array of m n real numbers {a ij } written as a 11 a 12 a 1n a 21 a 22 a 2n A =.... a m1 a m2 a mn The array has m rows

More information