J-SPECTRAL FACTORIZATION


1 J-SPECTRAL FACTORIZATION of Regular Para-Hermitian Transfer Matrices. Qing-Chang Zhong, School of Electronics, University of Glamorgan, United Kingdom

2 Outline: Notations and definitions; Regular para-hermitian matrices; Properties of projections; J-spectral factorization for the full set; J-spectral factorization for a smaller set; Applications; Numerical examples.

3 Notations. G(s) = [A, B; C, D] := D + C(sI - A)^{-1}B (packed state-space notation); G~(s) := G^T(-s), with realization [-A^T, C^T; -B^T, D^T]; J_{p,q} := [I_p, 0; 0, -I_q], the signature matrix.

4 Some definitions. A transfer matrix Λ(s) is called a para-hermitian matrix if Λ~(s) = Λ(s). A transfer matrix W(s) is a J-spectral factor of Λ(s) if W(s) is bistable (i.e., both W and W^{-1} are stable) and Λ(s) = W~(s) J W(s); such a factorization of Λ(s) is referred to as a J-spectral factorization. A matrix W(s) is a J-spectral cofactor of Λ(s) if W(s) is bistable and Λ(s) = W(s) J W~(s); such a factorization of Λ(s) is referred to as a J-spectral cofactorization.

5 Background. J-spectral factorization plays an important role in H∞ control of finite-dimensional systems, and in H∞ control of infinite-dimensional systems as well. The necessary and sufficient condition for its existence is well understood. The J-spectral factorizations treated in the literature are done for matrices in the form G~JG, mostly with a stable G. For the case of an unstable G, the following three steps can be used to find the J-spectral factor of G~JG: (i) find the modal factorization of Λ = G~JG; (ii) construct a stable G_1 such that Λ = G_1~JG_1; (iii) derive the J-spectral factor of G_1~JG_1, i.e., of G~JG. The problem is that, in some cases, a para-hermitian transfer matrix Λ is given in the form of a state-space realization and cannot be explicitly written in the form G~JG, e.g., in the context of H∞ control of time-delay systems. In order to use the above-mentioned results, one would have to find a G such that Λ = G~JG. It would be advantageous if this step could be avoided.

7 Regular para-hermitian transfer matrices. Theorem 1: A given square, minimal, rational matrix Λ(s), having no poles or zeros on the jω-axis (including ∞), is a para-hermitian matrix if and only if a minimal realization can be represented as
(1) Λ = [A, -R, B; -E, -A^T, -C^T; C, B^T, D],
where D = D^T, E = E^T and R = R^T. Such a para-hermitian transfer matrix is called regular.
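The structure of (1) can be checked numerically. The following sketch (a minimal illustration, assuming the sign convention written above and arbitrary random data) builds a realization in the form (1) and confirms Λ~(s) = Λ(s) at a test point:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2

# Free parameters of the canonical form (1): A arbitrary; R, E, D symmetric.
A = rng.standard_normal((n, n))
R = rng.standard_normal((n, n)); R = R + R.T
E = rng.standard_normal((n, n)); E = E + E.T
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))
D = rng.standard_normal((m, m)); D = D + D.T

# Packed realization Lambda = [calA, calB; calC, D] as in (1)
calA = np.block([[A, -R], [-E, -A.T]])
calB = np.vstack([B, -C.T])
calC = np.hstack([C, B.T])

def Lam(s):
    """Evaluate Lambda(s) = D + calC (sI - calA)^{-1} calB."""
    return D + calC @ np.linalg.solve(s * np.eye(2 * n) - calA, calB)

s = 0.7 + 1.3j
# Para-hermitian means Lambda~(s) = Lambda(-s)^T = Lambda(s)
print(np.allclose(Lam(s), Lam(-s).T))   # True
```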

8 General state-space form. Theorem 1 says that a regular para-hermitian transfer matrix Λ realized in the general state-space form
(2) Λ = [H_p, B_Λ; C_Λ, D]
can always be transformed into the form of (1) by a suitable similarity transformation; (1) is the canonical form of regular para-hermitian transfer matrices. Notation: H_p is the A-matrix of Λ; H_z is the A-matrix of Λ^{-1}, i.e., H_z = H_p - B_Λ D^{-1} C_Λ.

10 Projections. For a given nonsingular matrix partitioned as [M, N], denote by P the projection onto the subspace Im M along the subspace Im N. Then PM = M and PN = 0, i.e., P[M, N] = [M, 0]. Hence, the projection matrix is P = [M, 0][M, N]^{-1}. Similarly, the projection Q onto the subspace Im N along the subspace Im M is Q = [0, N][M, N]^{-1} = [N, 0][N, M]^{-1}.

11 Properties of projections: (i) P + Q = I; (ii) PQ = 0; (iii) P^2 = P and Q^2 = Q; (iv) Im P = Im M; (v) [M, 0][M, N]^{-1}[M, 0] = [M, 0]; (vi) [0, N][M, N]^{-1}[0, N] = [0, N]; (vii) [M, 0][M, N]^{-1}[0, N] = 0. If M^T N = 0, i.e., the projection is orthogonal, then the projections reduce to P = M(M^T M)^{-1}M^T and Q = N(N^T N)^{-1}N^T.
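As a quick numerical illustration, the sketch below forms the oblique projection P = [M, 0][M, N]^{-1} for randomly chosen (and therefore generically complementary) M and N and checks properties (i)-(iii); the dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 5, 2
M = rng.standard_normal((n, k))
N = rng.standard_normal((n, n - k))
MN = np.hstack([M, N])                    # assumed nonsingular (generic case)

# Oblique projection onto Im M along Im N, and its complement
P = np.hstack([M, np.zeros_like(N)]) @ np.linalg.inv(MN)
Q = np.hstack([np.zeros_like(M), N]) @ np.linalg.inv(MN)

print(np.allclose(P + Q, np.eye(n)))      # (i)   P + Q = I
print(np.allclose(P @ Q, 0))              # (ii)  PQ = 0
print(np.allclose(P @ P, P))              # (iii) P^2 = P
print(np.allclose(P @ M, M), np.allclose(P @ N, 0))   # PM = M, PN = 0
```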

13 J-spectral factorization for the full set: via similarity transformations with two matrices; via similarity transformations with one matrix.

14 Triangular forms of H_p and H_z. Assume that a para-hermitian matrix Λ as given in (2) is minimal and has no poles or zeros on the jω-axis (including ∞). There always exist nonsingular matrices Π_p and Π_z (obtained, e.g., via Schur decompositions) such that
(3) Π_p^{-1} H_p Π_p = [?, 0; ?, A_+]  and  (4) Π_z^{-1} H_z Π_z = [A_-, ?; 0, ?],
where A_+ is antistable and A_- is stable (A_+ and A_- have the same dimension).
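Numerically, one way to obtain such Π_p and Π_z is an ordered Schur decomposition: group the stable eigenvalues first for H_z, and group the antistable eigenvalues first for H_p and then swap the two block columns to reach the lower-triangular pattern of (3). A small sketch of this idea (function and variable names are illustrative):

```python
import numpy as np
from scipy.linalg import schur

def triangularize(Hp, Hz):
    """Return Pi_p, Pi_z such that Pi_p^{-1} Hp Pi_p is lower block triangular
    with the antistable block in the (2,2) position, and Pi_z^{-1} Hz Pi_z is
    upper block triangular with the stable block in the (1,1) position."""
    # Ordered real Schur form of Hp: antistable eigenvalues first (upper triangular),
    # then swap the two block columns to obtain the lower-triangular pattern (3).
    _, Zp, k = schur(Hp, sort='rhp')
    Pi_p = np.hstack([Zp[:, k:], Zp[:, :k]])     # stable basis first, antistable last
    # Ordered real Schur form of Hz: stable eigenvalues first gives pattern (4) directly.
    _, Zz, _ = schur(Hz, sort='lhp')
    Pi_z = Zz
    return Pi_p, Pi_z
```

Here the leading columns Zp[:, :k] form an orthonormal basis of the antistable eigenspace of H_p, and the leading columns of Zz span the stable eigenspace of H_z, which are exactly the subspaces used in the construction below.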

15 With two matrices Π_z and Π_p. Lemma 1: Λ admits a J_{p,q}-spectral factorization for some unique J_{p,q} (where p is the number of positive eigenvalues of D and q is the number of negative eigenvalues of D) iff
(5) Π := [Π_z [I; 0], Π_p [0; I]]
is nonsingular. If this condition is satisfied, then a J-spectral factor is given by
(6) W = [[I, 0] Π^{-1} H_p Π [I; 0], [I, 0] Π^{-1} B_Λ; J_{p,q}^{-1} D_W^{-T} C_Λ Π [I; 0], D_W],
where D_W is a nonsingular solution of D_W^T J_{p,q} D_W = D.
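In code, (5)-(6) translate almost line by line. The sketch below (helper names are made up; Pi_p and Pi_z are assumed to come from the previous step) also builds a D_W with D_W^T J_{p,q} D_W = D from an eigendecomposition of the symmetric matrix D:

```python
import numpy as np

def j_spectral_factor(Hp, Hz, BL, CL, D, Pi_p, Pi_z):
    """Form the J-spectral factor (A_W, B_W, C_W, D_W) of Lambda = [Hp, BL; CL, D]
    from formula (6), given Pi_p, Pi_z as in (3)-(4). Returns None if the Pi of (5)
    is singular (no J-spectral factorization exists)."""
    n2 = Hp.shape[0]
    k = n2 // 2                                   # A_+ and A_- have equal size
    # (5): first block column of Pi_z next to second block column of Pi_p
    Pi = np.hstack([Pi_z[:, :k], Pi_p[:, k:]])
    if abs(np.linalg.det(Pi)) < 1e-10:            # crude singularity test
        return None
    # D_W and J_{p,q} from the eigendecomposition of the symmetric matrix D
    lam, U = np.linalg.eigh(D)
    order = np.argsort(-lam)                      # positive eigenvalues first
    lam, U = lam[order], U[:, order]
    J = np.diag(np.sign(lam))                     # J_{p,q}
    DW = np.diag(np.sqrt(np.abs(lam))) @ U.T      # DW^T J DW = U diag(lam) U^T = D
    # (6)
    Pinv = np.linalg.inv(Pi)
    AW = (Pinv @ Hp @ Pi)[:k, :k]
    BW = (Pinv @ BL)[:k, :]
    CW = np.linalg.inv(J) @ np.linalg.inv(DW.T) @ (CL @ Pi[:, :k])
    return AW, BW, CW, DW, J
```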

16 Proof: the existence condition. Formulae (3) and (4) mean that H_p Π_p [0; I] = Π_p [0; I] A_+ and H_z Π_z [I; 0] = Π_z [I; 0] A_-. Hence, Π_p [0; I] and Π_z [I; 0] span the antistable eigenspace M_+ of H_p and the stable eigenspace M_- of H_z, respectively. It is well known that there exists a J-spectral factorization iff M_+ ∩ M_- = {0}. This is equivalent to the Π given in (5) being nonsingular.

17 Derivation of the factor. When the condition holds, there exists a projection P onto M_- along M_+. The projection matrix is P = Π [I, 0; 0, 0] Π^{-1}. With this projection formula, a J-spectral factor can be found as W = [P H_p P, P B_Λ; J_{p,q}^{-1} D_W^{-T} C_Λ P, D_W].

18 Simplification of the factor. Applying a similarity transformation with Π gives W = [Π^{-1} P H_p P Π, Π^{-1} P B_Λ; J_{p,q}^{-1} D_W^{-T} C_Λ P Π, D_W]. After removing the unobservable and uncontrollable states (deleting the second block row and the second block column), W becomes W = [[I, 0] Π^{-1} P H_p Π [I; 0], [I, 0] Π^{-1} P B_Λ; J_{p,q}^{-1} D_W^{-T} C_Λ Π [I; 0], D_W]. Since [I, 0] Π^{-1} P = [I, 0] Π^{-1}, W can be further simplified to the form given in (6).

19 With one common matrix. In general, Π_z ≠ Π_p; however, these two can be taken to be the same. Theorem 2: Λ admits a J-spectral factorization if and only if there exists a nonsingular matrix Π such that
Π^{-1} H_p Π = [A_-^p, 0; ?, A_+^p]  and  Π^{-1} H_z Π = [A_-^z, ?; 0, A_+^z],
where A_-^z and A_-^p are stable, and A_+^z and A_+^p are antistable. When this condition is satisfied, a J-spectral factor W is given by (6).

20 Proof. Sufficiency: obvious from Lemma 1; in this case Π_z = Π_p = Π. Necessity: if there exists a J-spectral factorization, then the Π given in (5) is nonsingular, and this Π satisfies the two formulae in Theorem 2. Since this Π is in the same form as that in Lemma 1, the J-spectral factor is the same as that in (6).

21 The bistability of W. The A-matrix of W is [I, 0] Π^{-1} H_p Π [I; 0] = A_-^p and that of W^{-1} is [I, 0] Π^{-1} H_z Π [I; 0] = A_-^z. Both are stable.

22 Interpretation. This theorem says that a J-spectral factorization exists if and only if there exists a common similarity transformation that brings H_p (H_z, resp.) into a 2 × 2 lower (upper, resp.) triangular block matrix whose (1,1)-block contains all the stable modes of H_p (H_z, resp.). Once the similarity transformation is done, a J-spectral factor can be formulated according to (6). If no such similarity transformation exists, then there is no J-spectral factorization. Since the similarity transformation corresponds to a change of state variables, this theorem might provide a way to further understand the structure of H∞ control.

24 J-spectral factorization for a smaller subset. Λ = [A, -R, B; -E, -A^T, -C^T; C, B^T, D]. In this case,
H_p = [A, -R; -E, -A^T],  H_z = H_p - [B; -C^T] D^{-1} [C, B^T] =: [A_z, -R_z; -E_z, -A_z^T].

25 Theorem 3. For a Λ characterized in Theorem 1, assume that: (i) (E, A) is detectable and E is sign definite; (ii) (A_z, R_z) is stabilizable and R_z is sign definite. Then the two AREs
[I, -L_o] H_p [L_o; I] = 0,  [-L_c, I] H_z [I; L_c] = 0
always have unique symmetric solutions L_o and L_c. Λ(s) has a J_{p,q}-spectral factorization iff det(I - L_o L_c) ≠ 0. If so,
W = [A + L_o E, B + L_o C^T; J_{p,q}^{-1} D_W^{-T} (B^T L_c + C)(I - L_o L_c)^{-1}, D_W],
where D_W is nonsingular and D_W^T J_{p,q} D_W = D.
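One way to compute L_o and L_c is through the graph bases of the corresponding eigenspaces: the antistable eigenspace of H_p written as Im [L_o; I] and the stable eigenspace of H_z written as Im [I; L_c]. The sketch below follows that route (the function names are illustrative, and the canonical-form signs are assumed as above):

```python
import numpy as np
from scipy.linalg import schur

def riccati_from_subspace(H, half, top_is_identity):
    """Solve the ARE attached to the Hamiltonian-like matrix H from an
    invariant-subspace basis: half='lhp' (stable) or 'rhp' (antistable).
    If top_is_identity, the subspace is written as Im[I; L]; otherwise Im[L; I]."""
    n = H.shape[0] // 2
    _, Z, k = schur(H, sort=half)
    V = Z[:, :k]                                  # basis of the selected eigenspace
    if top_is_identity:
        return V[n:, :] @ np.linalg.inv(V[:n, :])   # L with Im[I; L]
    return V[:n, :] @ np.linalg.inv(V[n:, :])       # L with Im[L; I]

def theorem3_factor(A, R, E, B, C, D):
    """J-spectral factor per Theorem 3 for Lambda in the canonical form (1)."""
    n = A.shape[0]
    Hp = np.block([[A, -R], [-E, -A.T]])
    Hz = Hp - np.vstack([B, -C.T]) @ np.linalg.inv(D) @ np.hstack([C, B.T])
    Lo = riccati_from_subspace(Hp, 'rhp', top_is_identity=False)   # Im[Lo; I]
    Lc = riccati_from_subspace(Hz, 'lhp', top_is_identity=True)    # Im[I; Lc]
    if abs(np.linalg.det(np.eye(n) - Lo @ Lc)) < 1e-10:
        return None                               # no J-spectral factorization
    lam, U = np.linalg.eigh(D)
    order = np.argsort(-lam); lam, U = lam[order], U[:, order]
    J = np.diag(np.sign(lam))
    DW = np.diag(np.sqrt(np.abs(lam))) @ U.T
    AW = A + Lo @ E
    BW = B + Lo @ C.T
    CW = np.linalg.inv(J) @ np.linalg.inv(DW.T) @ (B.T @ Lc + C) \
         @ np.linalg.inv(np.eye(n) - Lo @ Lc)
    return AW, BW, CW, DW, J
```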

26 Proof. In this case Π_z = [I, 0; L_c, I], Π_p = [I, L_o; 0, I] and Π = [I, L_o; L_c, I]. According to Lemma 1, there exists a J-spectral factorization iff Π is nonsingular, i.e., det(I - L_o L_c) ≠ 0. Substituting Π into (6) and applying a similarity transformation with (I - L_o L_c)^{-1} gives
W = [[I, -L_o] H_p [I; L_c](I - L_o L_c)^{-1}, B + L_o C^T; J_{p,q}^{-1} D_W^{-T} (C + B^T L_c)(I - L_o L_c)^{-1}, D_W]
  = [[I, -L_o] H_p [I; 0], B + L_o C^T; J_{p,q}^{-1} D_W^{-T} (C + B^T L_c)(I - L_o L_c)^{-1}, D_W],
where the first ARE in the theorem was used; the (1,1) block equals A + L_o E.

27 The A-matrix of W^{-1} is
A + L_o E - (B + L_o C^T) D_W^{-1} J_{p,q}^{-1} D_W^{-T} (B^T L_c + C)(I - L_o L_c)^{-1} = [I, -L_o] H_z [I; L_c](I - L_o L_c)^{-1} ∼ (I - L_o L_c)^{-1} [I, -L_o] H_z [I; L_c] = [I, 0] H_z [I; L_c],
where ∼ means 'is similar to' and the second ARE in Theorem 3 was used in the last step; the resulting matrix is the restriction of H_z to its stable eigenspace and is therefore stable.

28 The dual case. Theorem 4: For a Λ characterized in Theorem 1, assume that: (i) (A, R) is stabilizable and R is sign definite; (ii) (E_z, A_z) is detectable and E_z is sign definite. Then the two AREs
[-L_c, I] H_p [I; L_c] = 0,  [I, -L_o] H_z [L_o; I] = 0
always have unique symmetric solutions L_c and L_o. In this case, Λ(s) has a J_{p,q}-spectral factorization iff det(I - L_o L_c) ≠ 0. If so,
W(s) = [A - R L_c, (I - L_o L_c)^{-1}(B + L_o C^T); J_{p,q}^{-1} D_W^{-T} (B^T L_c + C), D_W],
where D_W is nonsingular and D_W^T J_{p,q} D_W = D.

30 Application: Λ = G~JG with a stable G. Here, G(s) = [A, B; C, D] has no poles or zeros on the jω-axis (including ∞) and J is a signature matrix. The realization of Λ = G~JG is
Λ = [A, 0, B; -C^T J C, -A^T, -C^T J D; D^T J C, B^T, D^T J D].
In this case, H_p = [A, 0; -C^T J C, -A^T] and H_z = H_p - [B; -C^T J D](D^T J D)^{-1}[D^T J C, B^T]. Since A is stable, no similarity transformation is needed to bring H_p into the triangular form; hence Π_p can be chosen as Π_p = [I, 0; 0, I].

31 Since Π_p = [I, 0; 0, I], a Π in the form Π = [X_1, 0; X_2, I] is enough to bring H_z into the upper triangular form. According to Theorem 2, Λ admits a J-spectral factorization iff Π is nonsingular, which is equivalent to X_1 being nonsingular. When X_1 is nonsingular, a further similarity transformation with [X_1^{-1}, 0; 0, I] can be done. This gives another Π, with X := X_2 X_1^{-1}, as Π = [I, 0; X, I]. Substituting this into Theorem 2 gives
[I, 0; X, I]^{-1} H_z [I, 0; X, I] = [A_-^z, ?; 0, A_+^z].

32 This means [-X, I] H_z [I; X] = 0 and A_-^z = [I, 0] H_z [I; X] is stable. Hence, Λ admits a J-spectral factorization iff the above ARE has a stabilizing solution X. When this condition holds, the J-spectral factor is easily found, according to Theorem 2, as
W = [[I, 0] Π^{-1} H_p Π [I; 0], [I, 0] Π^{-1} B_Λ; J_{p,q}^{-1} D_W^{-T} C_Λ Π [I; 0], D_W] with Π = [I, 0; X, I]
  = [A, B; J_{p,q}^{-1} D_W^{-T} (D^T J C + B^T X), D_W],
where D_W is nonsingular and D_W^T J_{p,q} D_W = D^T J D. This result is well documented in the literature.
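For this stable-G case the whole recipe fits in a few lines. The sketch below computes X from the stable eigenspace of H_z (one standard way to obtain the stabilizing Riccati solution) and then assembles W; the function name and tolerances are illustrative:

```python
import numpy as np
from scipy.linalg import schur

def j_spectral_factor_stable_G(A, B, C, D, J):
    """J-spectral factor W of Lambda = G~ J G for a stable G = (A, B, C, D):
    W = [A, B; Jpq^{-1} DW^{-T} (D^T J C + B^T X), DW] with DW^T Jpq DW = D^T J D,
    where X is the stabilizing solution of the associated Riccati equation."""
    n = A.shape[0]
    DJD = D.T @ J @ D
    # Hamiltonian-like matrix H_z = H_p - B_Lambda (D^T J D)^{-1} C_Lambda
    Hp = np.block([[A, np.zeros((n, n))], [-C.T @ J @ C, -A.T]])
    BL = np.vstack([B, -C.T @ J @ D])
    CL = np.hstack([D.T @ J @ C, B.T])
    Hz = Hp - BL @ np.linalg.solve(DJD, CL)
    # Stable eigenspace of H_z written as Im[I; X] gives the stabilizing X
    _, Z, k = schur(Hz, sort='lhp')
    X = Z[n:, :k] @ np.linalg.inv(Z[:n, :k])
    # D_W and J_{p,q} from the eigendecomposition of D^T J D
    lam, U = np.linalg.eigh(DJD)
    order = np.argsort(-lam); lam, U = lam[order], U[:, order]
    Jpq = np.diag(np.sign(lam))
    DW = np.diag(np.sqrt(np.abs(lam))) @ U.T
    CW = np.linalg.inv(Jpq) @ np.linalg.inv(DW.T) @ (D.T @ J @ C + B.T @ X)
    return A, B, CW, DW, Jpq
```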

34 Numerical example I. Λ(s) = [0, (s - 1)/(s + 1); (s + 1)/(s - 1), 0]. One minimal realization of Λ is Λ = [A, B; C, D] with A = [-1, 0; 0, 1], B = [0, 1; 1, 0], C = [-2, 0; 0, 2], D = [0, 1; 1, 0].

35 H_p = A = [-1, 0; 0, 1] and H_z = A - B D^{-1} C = [1, 0; 0, -1]. Apparently, there does not exist a common similarity transformation that makes the (1,1)-entries of H_p and H_z both stable: the antistable eigenspace of H_p coincides with the stable eigenspace of H_z. Hence, this Λ does not admit a J-spectral factorization.
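The same obstruction shows up through Lemma 1: with the realization above, the matrix Π of (5) is singular. A short check (purely illustrative):

```python
import numpy as np

A = np.diag([-1.0, 1.0])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
C = np.diag([-2.0, 2.0])
D = np.array([[0.0, 1.0], [1.0, 0.0]])

Hp = A
Hz = A - B @ np.linalg.inv(D) @ C           # diag(1, -1)

# Antistable eigenspace of Hp and stable eigenspace of Hz (both are span{e2})
wp, Vp = np.linalg.eig(Hp)
wz, Vz = np.linalg.eig(Hz)
antistable_Hp = Vp[:, wp.real > 0]
stable_Hz = Vz[:, wz.real < 0]

Pi = np.hstack([stable_Hz, antistable_Hp])
print(np.linalg.matrix_rank(Pi))            # 1: Pi is singular, so no J-spectral factor
```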

36 Numerical example II. Λ(s) = [(s² - 1)/(s² - 4), 0; 0, -(s² - 4)/(s² - 1)]. A minimal (fourth-order) realization of Λ can be written down directly from the two diagonal entries.

37 Both H_p and H_z have the eigenvalues {±1, ±2}. By doing similarity transformations, it is easy to transform H_p into a lower triangular matrix, and H_z into an upper triangular matrix, with the first two diagonal elements negative. In order to bring H_p into a lower triangular matrix, two steps are used: the first step brings it into an upper triangular matrix and the second step groups the stable modes, as shown below.

38 With Π_{p1} bringing H_p into upper triangular form and Π_{p2} grouping the stable modes, the result is a lower triangular matrix, to which H_p is similarly transformed with Π_p = Π_{p1} Π_{p2}. In general, a third step is needed to make the non-zero elements, if any, in the upper-right block equal to 0.

39 Similarly, H_z is transformed into an upper triangular matrix Π_z^{-1} H_z Π_z with the stable modes grouped in the leading block. Π can then be obtained by combining the first two columns of Π_z and the last two columns of Π_p, as in (5). This Π is nonsingular and, hence, there exists a J-spectral factorization.

40 The D-matrix of Λ, D = [1, 0; 0, -1], has one positive eigenvalue and one negative eigenvalue. This gives J_{p,q} = [1, 0; 0, -1] and D_W = I, for which D_W^T J_{p,q} D_W = D. According to (6), a J-spectral factor of Λ is found, which in transfer-matrix form is
W(s) = [(s + 1)/(s + 2), 0; 0, (s + 2)/(s + 1)].
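The whole example can be re-run numerically. The sketch below uses one particular minimal realization of Λ (chosen here for convenience), forms Π as in (5), applies (6) with D_W = I, and confirms both the transfer matrix of W and the factorization identity at a test frequency:

```python
import numpy as np
from scipy.linalg import schur, block_diag

# One minimal realization of Lambda(s) = diag((s^2-1)/(s^2-4), -(s^2-4)/(s^2-1)):
#   (s^2-1)/(s^2-4) = 1 + 3/(s^2-4),   -(s^2-4)/(s^2-1) = -1 + 3/(s^2-1)
A = block_diag([[0, 1], [4, 0]], [[0, 1], [1, 0]]).astype(float)
B = np.array([[0, 0], [1, 0], [0, 0], [0, 1]], dtype=float)
C = np.array([[3, 0, 0, 0], [0, 0, 3, 0]], dtype=float)
D = np.diag([1.0, -1.0])

Hp = A
Hz = A - B @ np.linalg.inv(D) @ C

# Pi per (5): first two columns span the stable eigenspace of Hz,
#             last two columns span the antistable eigenspace of Hp
_, Zz, kz = schur(Hz, sort='lhp')
_, Zp, kp = schur(Hp, sort='rhp')
Pi = np.hstack([Zz[:, :kz], Zp[:, :kp]])
assert abs(np.linalg.det(Pi)) > 1e-8          # nonsingular => factorization exists

Jpq = np.diag([1.0, -1.0])                    # signature of D
DW = np.eye(2)                                # DW^T Jpq DW = Jpq = D
Pinv = np.linalg.inv(Pi)
AW = (Pinv @ Hp @ Pi)[:2, :2]                 # formula (6)
BW = (Pinv @ B)[:2, :]
CW = np.linalg.inv(Jpq) @ np.linalg.inv(DW.T) @ (C @ Pi[:, :2])

def W(s):
    return DW + CW @ np.linalg.solve(s * np.eye(2) - AW, BW)

s = 1.0j
target = np.diag([(s + 1) / (s + 2), (s + 2) / (s + 1)])
print(np.allclose(W(s), target))              # True: W = diag((s+1)/(s+2), (s+2)/(s+1))
print(np.allclose(W(s).conj().T @ Jpq @ W(s), # Lambda(jw) = W~(jw) Jpq W(jw)
                  D + C @ np.linalg.solve(s * np.eye(4) - A, B)))
```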

41 One step further. It is worth noting that a corresponding J-spectral factor for
Λ̂(s) = -Λ^{-1}(s) = [-(s² - 4)/(s² - 1), 0; 0, (s² - 1)/(s² - 4)]
with Ĵ_{p,q} = [-1, 0; 0, 1] is
Ŵ(s) = W^{-1}(s) = [(s + 2)/(s + 1), 0; 0, (s + 1)/(s + 2)],
because Ĵ_{p,q} = -J_{p,q}.

42 Summary. A class of regular, invertible para-hermitian transfer matrices is characterized, and the J-spectral factorization problem is then revisited using an elementary tool: similarity transformations. A transfer matrix Λ in this class admits a J-spectral factorization if and only if there exists a common nonsingular matrix that similarity-transforms the A-matrices of Λ and Λ^{-1}, resp., into 2 × 2 lower (upper, resp.) triangular block matrices with the (1,1)-block containing all the stable modes of Λ (Λ^{-1}, resp.). One possible way to obtain this matrix involves two steps: (i) separately bringing the A-matrices into triangular forms and (ii) combining the corresponding columns of the matrices used in the two similarity transformations. Another possible way is to simultaneously triangularize the two A-matrices; however, this is not a trivial problem. The resulting J-spectral factor is formulated in terms of the original realization of Λ. When a transfer matrix meets additional conditions, there exists a J-spectral factorization if and only if a coupling condition on the stabilizing solutions of two AREs holds. Two numerical examples illustrate the theory, which is key to solving the delay-type Nehari problem.

43 Qing-Chang Zhong, Robust Control of Time-Delay Systems, Springer.
