CHAPTER 2. Unitary similarity and unitary equivalence


2.0 Introduction

In Chapter 1 we made an initial study of similarity of $A \in M_n$ via a general nonsingular matrix $S$, that is, the transformation $A \to S^{-1}AS$. For certain very special nonsingular matrices, called unitary matrices, the inverse of $S$ has a simple form: $S^{-1} = S^*$. Similarity via a unitary matrix $U$, $A \to U^*AU$, is not only conceptually simpler than general similarity (the conjugate transpose is much easier to compute than the inverse), but it also has superior stability properties in numerical computations. A fundamental property of unitary similarity is that every $A \in M_n$ is unitarily similar to an upper triangular matrix whose diagonal entries are the eigenvalues of $A$. This triangular form can be further refined under general similarity; we study the latter in Chapter 3. The transformation $A \to S^*AS$, in which $S$ is nonsingular but not necessarily unitary, is called *congruence; we study it in Chapter 4. Notice that similarity by a unitary matrix is both a similarity and a *congruence. For $A \in M_{n,m}$, the transformation $A \to UAV$, in which $U \in M_n$ and $V \in M_m$ are both unitary, is called unitary equivalence. The upper triangular form achievable under unitary similarity can be greatly refined under unitary equivalence and generalized to non-square matrices: every $A \in M_{n,m}$ is unitarily equivalent to a nonnegative diagonal matrix whose diagonal entries (the singular values of $A$) are of great importance.

† Matrix Analysis, second edition, by Roger A. Horn and Charles R. Johnson, copyright Cambridge University Press.
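As a purely numerical preview of the unitary equivalence just described, here is a minimal numpy sketch (the random test matrix, seed, and sizes are illustrative choices, not from the text) that uses the singular value decomposition to exhibit a nonnegative diagonal matrix unitarily equivalent to a given rectangular matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
# An arbitrary 4-by-3 complex matrix standing in for A in M_{4,3}.
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))

# numpy's SVD returns A = U @ Sigma @ Vh with U, Vh unitary and the
# singular values s nonnegative: the unitary equivalence A -> U* A V
# to a nonnegative diagonal matrix mentioned above.
U, s, Vh = np.linalg.svd(A)

Sigma = np.zeros((4, 3))
Sigma[:3, :3] = np.diag(s)

print(np.allclose(U.conj().T @ U, np.eye(4)))    # U is unitary
print(np.allclose(Vh.conj().T @ Vh, np.eye(3)))  # V is unitary
print(np.allclose(U @ Sigma @ Vh, A))            # A = U Sigma V*
print(s)                                         # nonnegative singular values
```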

2.1 Unitary matrices and the QR factorization

2.1.1 Definition. A set of vectors $\{x_1, \ldots, x_k\} \subset \mathbb{C}^n$ is orthogonal if $x_i^* x_j = 0$ for all $i \neq j$, $i, j = 1, \ldots, k$. If, in addition, $x_i^* x_i = 1$ for all $i = 1, \ldots, k$ (that is, the vectors are normalized), then the set is orthonormal.

Exercise. If $\{y_1, \ldots, y_k\}$ is an orthogonal set of nonzero vectors, show that the set $\{x_1, \ldots, x_k\}$ defined by $x_i = (y_i^* y_i)^{-1/2} y_i$, $i = 1, \ldots, k$, is an orthonormal set.

2.1.2 Theorem. Every orthonormal set of vectors in $\mathbb{C}^n$ is linearly independent.

Proof: Suppose that $\{x_1, \ldots, x_k\}$ is an orthonormal set, and suppose that $0 = \alpha_1 x_1 + \cdots + \alpha_k x_k$. Then $0 = (\alpha_1 x_1 + \cdots + \alpha_k x_k)^*(\alpha_1 x_1 + \cdots + \alpha_k x_k) = \sum_{i,j} \bar{\alpha}_i \alpha_j x_i^* x_j = \sum_{i=1}^k |\alpha_i|^2 x_i^* x_i = \sum_{i=1}^k |\alpha_i|^2$ because the vectors $x_i$ are orthogonal and normalized. Thus, all $\alpha_i = 0$ and hence $\{x_1, \ldots, x_k\}$ is a linearly independent set.

Exercise. Show that every orthogonal set of nonzero vectors in $\mathbb{C}^n$ is linearly independent.

Exercise. If $\{x_1, \ldots, x_k\} \subset \mathbb{C}^n$ is an orthogonal set, show that either $k \le n$ or at least $k - n$ of the vectors $x_i$ are equal to zero.

An independent set need not be orthonormal, of course, but one can apply the Gram-Schmidt orthonormalization procedure (0.6.4) to it and obtain an orthonormal set with the same span as the original set.

Exercise. Show that any nonzero subspace of $\mathbb{R}^n$ or $\mathbb{C}^n$ has an orthonormal basis (0.6.5).

2.1.3 Definition. A matrix $U \in M_n$ is unitary if $U^* U = I$. If, in addition, $U \in M_n(\mathbb{R})$, then $U$ is real orthogonal.

Exercise. Show that $U \in M_n$ and $V \in M_m$ are unitary if and only if $U \oplus V \in M_{n+m}$ is unitary.

Exercise. Verify that the matrices $Q$, $U$, and $V$ in Problems 19, 20, and 21 in (1.3) are unitary.

The unitary matrices in $M_n$ form a remarkable and important set. We list some of the basic equivalent conditions for $U$ to be unitary in (2.1.4).

2.1.4 Theorem. If $U \in M_n$, the following are equivalent:
(a) $U$ is unitary;

(b) $U$ is nonsingular and $U^{-1} = U^*$;
(c) $UU^* = I$;
(d) $U^*$ is unitary;
(e) the columns of $U$ form an orthonormal set;
(f) the rows of $U$ form an orthonormal set; and
(g) for all $x \in \mathbb{C}^n$, $\|x\|_2 = \|Ux\|_2$; that is, $x$ and $Ux$ have the same Euclidean norm.

Proof: (a) implies (b) since $U^{-1}$ (when it exists) is the unique matrix, left multiplication by which produces $I$ (0.5); the definition of unitary says that $U^*$ is such a matrix. Since $BA = I$ if and only if $AB = I$ (for $A, B \in M_n$ (0.5)), (b) implies (c). Since $(U^*)^* = U$, (c) implies that $U^*$ is unitary; that is, (c) implies (d). The converse of each of these implications is similarly observed, so (a)-(d) are equivalent. Partition $U = [u_1\ \ldots\ u_n]$ according to its columns. Then $U^*U = I$ means that $u_i^* u_i = 1$ for all $i = 1, \ldots, n$ and $u_i^* u_j = 0$ for all $i \neq j$. Thus, $U^*U = I$ is another way of saying that the columns of $U$ are orthonormal, and hence (a) is equivalent to (e). Similarly, (d) and (f) are equivalent. If (a) holds and $y = Ux$, then $y^*y = x^*U^*Ux = x^*Ix = x^*x$, so (a) implies (g). To prove the converse, let $U^*U = A = [a_{ij}]$, let $z, w \in \mathbb{C}^n$ be given, and take $x = z + w$ in (g). Then $x^*x = z^*z + w^*w + 2\operatorname{Re} z^*w$ and $y^*y = x^*Ax = z^*Az + w^*Aw + 2\operatorname{Re} z^*Aw$; (g) ensures that $z^*z = z^*Az$ and $w^*w = w^*Aw$, and hence $\operatorname{Re} z^*w = \operatorname{Re} z^*Aw$ for any $z$ and $w$. Take $z = e_p$ and $w = ie_q$ and compute $\operatorname{Re} ie_p^Te_q = 0 = \operatorname{Re} ie_p^TAe_q = \operatorname{Re} ia_{pq} = -\operatorname{Im} a_{pq}$, so every entry of $A$ is real. Finally, take $z = e_p$ and $w = e_q$ and compute $e_p^Te_q = \operatorname{Re} e_p^Te_q = \operatorname{Re} e_p^TAe_q = a_{pq}$, which tells us that $A = I$ and $U$ is unitary.

2.1.5 Definition. A linear transformation $T : \mathbb{C}^n \to \mathbb{C}^m$ is called a Euclidean isometry if $x^*x = (Tx)^*(Tx)$ for all $x \in \mathbb{C}^n$.

Theorem (2.1.4) says that a square complex matrix $U \in M_n$ is a Euclidean isometry (via $U : x \to Ux$) if and only if it is unitary. See (5.2) for other kinds of isometries.

Exercise. Let $T(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$, in which $\theta$ is a real parameter. (a) Show that a given $U \in M_2(\mathbb{R})$ is real orthogonal if and only if either $U = T(\theta)$ or $U = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} T(\theta)$ for some $\theta \in \mathbb{R}$. (b) Show that a given $U \in M_2(\mathbb{R})$ is real orthogonal if and only if either $U = T(\theta)$ or $U = T(\theta)\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$ for some $\theta \in \mathbb{R}$. These are two different presentations, involving a parameter, of the 2-by-2 real orthogonal matrices. Interpret them geometrically.
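The equivalences in (2.1.4) are easy to spot-check numerically. The following minimal numpy sketch (the random matrix, seed, and sizes are illustrative assumptions, not from the text) builds a unitary matrix as the Q factor of a random complex matrix, anticipating the QR factorization (2.1.14), and verifies conditions (a), (b), (c), and (g).

```python
import numpy as np

rng = np.random.default_rng(1)
# A unitary U: the Q factor of a random complex matrix (cf. (2.1.14)).
X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
U, _ = np.linalg.qr(X)

I = np.eye(4)
print(np.allclose(U.conj().T @ U, I))             # (a) U*U = I (columns orthonormal)
print(np.allclose(np.linalg.inv(U), U.conj().T))  # (b) U^{-1} = U*
print(np.allclose(U @ U.conj().T, I))             # (c) UU* = I (rows orthonormal)

# (g) U is a Euclidean isometry: ||Ux||_2 = ||x||_2 for every x.
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
print(np.isclose(np.linalg.norm(U @ x), np.linalg.norm(x)))
```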

2.1.6 Observation. If $U, V \in M_n$ are unitary (respectively, real orthogonal), then $UV$ is also unitary (respectively, real orthogonal).

Exercise. Use (b) of (2.1.4) to prove (2.1.6).

2.1.7 Observation. The set of unitary (respectively, real orthogonal) matrices in $M_n$ forms a group. This group is generally referred to as the n-by-n unitary (respectively, orthogonal) group, a subgroup of $GL(n, \mathbb{C})$ (0.5).

Exercise. A group is a set that is closed under a single associative binary operation ("multiplication") and is such that the identity for and inverses under the operation are contained in the set. Verify (2.1.7). Hint: Use (2.1.6) for closure; matrix multiplication is associative; $I \in M_n$ is unitary; and $U^* = U^{-1}$ is again unitary.

The set (group) of unitary matrices in $M_n$ has another very important property. Notions of convergence and limit of a sequence of matrices will be presented precisely in Chapter 5, but can be understood here in terms of convergence and limit of entries. The defining identity $U^*U = I$ means that every column of $U$ has Euclidean norm 1, and hence no entry of $U = [u_{ij}]$ can have absolute value greater than 1. If we think of the set of unitary matrices as a subset of $\mathbb{C}^{n^2}$, this says that it is a bounded subset. If $U_k = [u_{ij}^{(k)}]$ is an infinite sequence of unitary matrices, $k = 1, 2, \ldots$, such that $\lim_{k\to\infty} u_{ij}^{(k)} = u_{ij}$ exists for all $i, j = 1, 2, \ldots, n$, then from the identity $U_k^*U_k = I$ for all $k = 1, 2, \ldots$ we see that $\lim_{k\to\infty} U_k^*U_k = U^*U = I$, in which $U = [u_{ij}]$. Thus, the limit matrix $U$ is also unitary. This says that the set of unitary matrices is a closed subset of $\mathbb{C}^{n^2}$. Since a closed and bounded subset of a finite-dimensional Euclidean space is a compact set (see Appendix E), we conclude that the set (group) of unitary matrices in $M_n$ is compact. For our purposes, the most important consequence of this observation is the following selection principle for unitary matrices.

2.1.8 Lemma. Let $U_1, U_2, \ldots \in M_n$ be a given infinite sequence of unitary matrices. There exists an infinite subsequence $U_{k_1}, U_{k_2}, \ldots$, $1 \le k_1 < k_2 < \cdots$, such that all of the entries of $U_{k_i}$ converge (as sequences of complex numbers) to the entries of a unitary matrix as $i \to \infty$.

Proof: All that is required here is the fact that from any infinite sequence in a compact set one may always select a convergent subsequence. We have already observed that if a sequence of unitary matrices converges to some matrix, then the limit matrix must be unitary.

The unitary limit guaranteed by the lemma need not be unique; it can depend upon the subsequence chosen.

Exercise. Consider the sequence of unitary matrices $U_k = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}^k$, $k = 1, 2, \ldots$. Show that there are two possible limits of subsequences.

Exercise. Explain why the selection principle (2.1.8) applies as well to the (real) orthogonal group; that is, an infinite sequence of real orthogonal matrices has an infinite subsequence that converges to a real orthogonal matrix.

A unitary matrix $U$ has the property that $U^{-1}$ equals $U^*$. One way to generalize the notion of a unitary matrix is to require that $U^{-1}$ be similar to $U^*$. The set of such matrices is easily characterized as the range of the mapping $A \to A^{-1}A^*$ for all nonsingular $A \in M_n$.

2.1.9 Theorem. Let $A \in M_n$ be nonsingular. Then $A^{-1}$ is similar to $A^*$ if and only if there is a nonsingular $B \in M_n$ such that $A = B^{-1}B^*$.

Proof: If $A = B^{-1}B^*$ for some nonsingular $B \in M_n$, then $A^{-1} = (B^*)^{-1}B$ and $B^*A^{-1}(B^*)^{-1} = B(B^*)^{-1} = (B^{-1}B^*)^* = A^*$, so $A^{-1}$ is similar to $A^*$ via the similarity matrix $B^*$. Conversely, if $A^{-1}$ is similar to $A^*$, then there is a nonsingular $S \in M_n$ such that $SA^{-1}S^{-1} = A^*$ and hence $S = A^*SA$. Set $S_\theta = e^{i\theta}S$ for $\theta \in \mathbb{R}$, so that $S_\theta = A^*S_\theta A$ and $S_\theta^* = A^*S_\theta^*A$. Adding these two identities gives $H_\theta = A^*H_\theta A$, in which $H_\theta = S_\theta + S_\theta^*$ is Hermitian. If $H_\theta$ were singular, there would be a nonzero $x \in \mathbb{C}^n$ such that $0 = H_\theta x = S_\theta x + S_\theta^* x$, so $-x = S_\theta^{-1}S_\theta^* x = e^{-2i\theta}S^{-1}S^*x$ and $S^{-1}S^*x = -e^{2i\theta}x$. Choose a value of $\theta = \theta_0 \in [0, 2\pi)$ such that $-e^{2i\theta_0}$ is not an eigenvalue of $S^{-1}S^*$; the resulting Hermitian matrix $H = H_{\theta_0}$ is nonsingular and has the property that $H = A^*HA$. Now choose any complex $\mu$ such that $|\mu| = 1$ and $\mu$ is not an eigenvalue of $A^*$. Set $B = \tau(\mu I - A^*)H$, in which the complex parameter $\tau \neq 0$ is to be chosen, and observe that $B$ is nonsingular. We want to have $A = B^{-1}B^*$, or $BA = B^*$. Compute $B^* = \bar\tau H(\bar\mu I - A)$, and $BA = \tau(\mu I - A^*)HA = \tau(\mu HA - A^*HA) = \tau(\mu HA - H) = \tau H(\mu A - I) = -\tau\mu H(\bar\mu I - A)$. We shall be done if we can select a nonzero $\tau$ such that $\bar\tau = -\mu\tau$; but if $\mu = e^{i\varphi}$, then $\tau = e^{i(\pi - \varphi)/2}$ will do.

If a unitary matrix is presented as a 2-by-2 block matrix, then the ranks of its off-diagonal blocks are equal; the ranks of its diagonal blocks are related by a simple formula.

2.1.10 Lemma. Let a unitary $U \in M_n$ be partitioned as $U = \begin{bmatrix} U_{11} & U_{12} \\ U_{21} & U_{22} \end{bmatrix}$, in which $U_{11} \in M_k$.

Then $\operatorname{rank} U_{12} = \operatorname{rank} U_{21}$ and $\operatorname{rank} U_{22} = \operatorname{rank} U_{11} + n - 2k$. In particular, $U_{12} = 0$ if and only if $U_{21} = 0$, in which case $U_{11}$ and $U_{22}$ are unitary.

Proof: The two assertions about rank follow immediately from the law of complementary nullities (0.7.5) using the fact that $U^{-1} = U^* = \begin{bmatrix} U_{11}^* & U_{21}^* \\ U_{12}^* & U_{22}^* \end{bmatrix}$.

Plane rotations and Householder matrices are special (and very simple) unitary matrices that play an important role in establishing some basic matrix factorizations.

2.1.11 Example. Plane rotations. Let $U(\theta; i, j) \in M_n$ denote the matrix that agrees with the identity matrix except in the four entries at the intersections of rows and columns $i$ and $j$ ($i < j$), which are
$$\begin{bmatrix} u_{ii} & u_{ij} \\ u_{ji} & u_{jj} \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$$
That is, $U(\theta; i, j)$ is simply the identity matrix, with the $i,i$ and $j,j$ entries replaced by $\cos\theta$ and the $i,j$ entry (respectively, the $j,i$ entry) replaced by $-\sin\theta$ (respectively, $\sin\theta$).

Exercise. Verify that $U(\theta; i, j) \in M_n(\mathbb{R})$ is real orthogonal for any pair of indices $1 \le i < j \le n$ and any parameter $0 \le \theta < 2\pi$. The matrix $U(\theta; i, j)$ carries out a rotation (through an angle $\theta$) in the $i, j$ coordinate plane of $\mathbb{R}^n$. Left multiplication by $U(\theta; i, j)$ affects only rows $i$ and $j$ of the matrix multiplied; right multiplication by $U(\theta; i, j)$ affects only columns $i$ and $j$ of the matrix multiplied.

2.1.12 Example. Householder matrices. Let $w \in \mathbb{C}^n$ be a nonzero vector.

The Householder matrix $U_w \in M_n$ is defined by $U_w = I - 2(w^*w)^{-1}ww^*$. If $w$ is a unit vector, then $U_w = I - 2ww^*$.

Exercise. Show that a Householder matrix $U_w$ is both unitary and Hermitian; if $w \in \mathbb{R}^n$, then $U_w$ is real orthogonal and symmetric.

Exercise. Show that a Householder matrix $U_w$ acts as the identity on the subspace $w^\perp$ and that it acts as a reflection on the one-dimensional subspace spanned by $w$; that is, $U_wx = x$ if $x \perp w$ and $U_ww = -w$.

Exercise. Use (...) to show that $\det U_w = -1$ for all $n$. Thus, for all $n$ and every nonzero $w \in \mathbb{R}^n$, the Householder matrix $U_w \in M_n(\mathbb{R})$ is a real orthogonal matrix that is never a proper rotation matrix (a real orthogonal matrix whose determinant is $+1$).

Exercise. Use (1.2.8) to show that the eigenvalues of a Householder matrix are always $-1, 1, \ldots, 1$ and explain why its determinant is always $-1$.

Householder matrices provide a simple way to construct a unitary matrix that takes a given vector into any other vector that has the same Euclidean norm.

2.1.13 Theorem. Let $x, y \in \mathbb{C}^n$ be given and suppose $\|x\|_2 = \|y\|_2$. If $y = e^{i\theta}x$ for some real $\theta$, let $U(y, x) = e^{i\theta}I_n$; otherwise, let $\phi \in [0, 2\pi)$ be such that $x^*y = e^{i\phi}|x^*y|$ (take $\phi = 0$ if $x^*y = 0$), let $w = e^{i\phi}x - y$, and let $U(y, x) = e^{i\phi}U_w$, in which $U_w$ is the Householder matrix $U_w = I - 2(w^*w)^{-1}ww^*$. Then $U(y, x)$ is unitary and essentially Hermitian, $U(y, x)x = y$, and $U(y, x)z \perp y$ whenever $z \perp x$. If $x$ and $y$ are real, then $U(y, x)$ is real orthogonal: $U(y, x) = I$ if $y = x$, and $U(y, x)$ is the real Householder matrix $U_{x-y}$ otherwise.

Proof: The assertions are readily verified if $x$ and $y$ are linearly dependent, that is, if $y = e^{i\theta}x$ for some real $\theta$. If $x$ and $y$ are linearly independent, the Cauchy-Schwarz inequality (0.6.3) ensures that $x^*x \neq |x^*y|$. Compute
$$w^*w = (e^{i\phi}x - y)^*(e^{i\phi}x - y) = x^*x - e^{-i\phi}x^*y - e^{i\phi}y^*x + y^*y = 2\bigl(x^*x - \operatorname{Re}(e^{-i\phi}x^*y)\bigr) = 2\bigl(x^*x - |x^*y|\bigr)$$
and
$$w^*x = e^{-i\phi}x^*x - y^*x = e^{-i\phi}x^*x - e^{-i\phi}|y^*x| = e^{-i\phi}\bigl(x^*x - |x^*y|\bigr)$$
and, finally,
$$e^{i\phi}U_wx = e^{i\phi}\bigl(x - 2(w^*w)^{-1}ww^*x\bigr) = e^{i\phi}\bigl(x - (e^{i\phi}x - y)e^{-i\phi}\bigr) = y$$

If $z$ is orthogonal to $x$, then $w^*z = -y^*z$, and since $y^*w = e^{i\phi}y^*x - y^*y = |x^*y| - \|y\|_2^2$ and $w^*w = 2(\|x\|_2^2 - |x^*y|)$,
$$y^*U(y, x)z = e^{i\phi}\bigl(y^*z - 2(w^*w)^{-1}(y^*w)(w^*z)\bigr) = e^{i\phi}\bigl(y^*z + 2(w^*w)^{-1}(|x^*y| - \|x\|_2^2)\,y^*z\bigr) = e^{i\phi}(y^*z - y^*z) = 0$$
Since $U_w$ is unitary and Hermitian, $U(y, x) = (e^{i\phi}I)U_w$ is unitary (as a product of two unitary matrices) and essentially Hermitian (0.2.5).

Exercise. Let $y \in \mathbb{C}^n$ be a given unit vector and let $e_1$ be the first column of the n-by-n identity matrix. Construct $U(y, e_1)$ using the recipe in the preceding theorem and verify that its first column is $y$ (which it should be, since $y = U(y, e_1)e_1$).

Exercise. Let $x \in \mathbb{C}^n$ be a given nonzero vector. Explain why the matrix $U(\|x\|_2e_1, x)$ constructed in the preceding theorem is an essentially Hermitian unitary matrix that takes $x$ into $\|x\|_2e_1$.

The following QR factorization of a complex or real matrix is of considerable theoretical and computational importance.

2.1.14 Theorem. (QR factorization) Suppose $A \in M_{n,m}$ and $n \ge m$. Then
(a) There is a $Q \in M_{n,m}$ with orthonormal columns and an upper triangular $R \in M_m$ with nonnegative main diagonal entries such that $A = QR$.
(b) If $\operatorname{rank} A = m$, then the factors $Q$ and $R$ in (a) are uniquely determined and the main diagonal entries of $R$ are all positive.
(c) If $m = n$, then the factor $Q$ in (a) is unitary.
(d) If $A$ is real, then both of the factors $Q$ and $R$ in (a) may be taken to be real.

Proof: Let $a_1 \in \mathbb{C}^n$ be the first column of $A$, let $r_1 = \|a_1\|_2$, and let $U_1$ be a unitary matrix such that $U_1a_1 = r_1e_1$. Theorem (2.1.13) gives an explicit construction for such a matrix, which is either a unitary scalar matrix or the product of a unitary scalar matrix and a Householder matrix. Partition
$$U_1A = \begin{bmatrix} r_1 & \bigstar \\ 0 & A_2 \end{bmatrix}$$
in which $A_2 \in M_{n-1,m-1}$. Let $a_2 \in \mathbb{C}^{n-1}$ be the first column of $A_2$ and let $r_2 = \|a_2\|_2$. Use (2.1.13) again to construct a unitary $V_2 \in M_{n-1}$ such that $V_2a_2 = r_2e_1$ and let $U_2 = I_1 \oplus V_2$. Then
$$U_2U_1A = \begin{bmatrix} r_1 & \bigstar & \bigstar \\ 0 & r_2 & \bigstar \\ 0 & 0 & A_3 \end{bmatrix}$$

Repeat this construction $m$ times to obtain
$$U_mU_{m-1}\cdots U_2U_1A = \begin{bmatrix} R \\ 0 \end{bmatrix}$$
in which $R \in M_m$ is upper triangular and its main diagonal entries $r_1, \ldots, r_m$ are all nonnegative. Let $U = U_mU_{m-1}\cdots U_2U_1$. Partition $U^* = U_1^*U_2^*\cdots U_{m-1}^*U_m^* = [Q\ \ Q_2]$, in which $Q \in M_{n,m}$ has orthonormal columns since it comprises the first $m$ columns of a unitary matrix. Then $A = QR$, as desired. If $A$ has full column rank, then $R$ is nonsingular, so its main diagonal entries are all positive.

Suppose that $\operatorname{rank} A = m$ and $A = QR = \tilde Q\tilde R$, in which $R$ and $\tilde R$ are upper triangular and have positive main diagonal entries, and $Q$ and $\tilde Q$ have orthonormal columns. Then $A^*A = R^*(Q^*Q)R = R^*IR = R^*R$ and also $A^*A = \tilde R^*\tilde R$, so $R^*R = \tilde R^*\tilde R$ and $\tilde R^{-*}R^* = \tilde RR^{-1}$. This says that a lower triangular matrix equals an upper triangular matrix, so both must be diagonal: $\tilde RR^{-1} = D$ is diagonal, and it must have positive main diagonal entries because the main diagonal entries of both $\tilde R$ and $R^{-1}$ are positive. But $\tilde R = DR$ implies that $D = \tilde R^{-*}R^* = (DR)^{-*}R^* = D^{-1}R^{-*}R^* = D^{-1}$, so $D^2 = I$ and hence $D = I$. We conclude that $\tilde R = R$ and hence $\tilde Q = Q$. The assertion in (c) follows from the fact that a square matrix with orthonormal columns is unitary. The final assertion (d) follows from the construction in (a) and the assurance in (2.1.13) that the unitary matrices $U_i$ may all be chosen to be real.

Exercise. Show that any $B \in M_n$ of the form $B = A^*A$, $A \in M_n$, may be written as $B = LL^*$, in which $L \in M_n$ is lower triangular and has nonnegative diagonal entries. Explain why this factorization is unique if $A$ is nonsingular. This is called the Cholesky factorization of $B$; every positive definite matrix may be factored in this way (see Chapter 7).

For square matrices $A \in M_n$, there are some easy variants of the QR factorization that can be useful. Let $K$ be the (real orthogonal and symmetric) n-by-n reversal matrix (...), which has the pleasant property that $K^2 = I$. Moreover, $KRK$ is lower triangular if $R$ is upper triangular (the main diagonal entries are the same, but their order is reversed), and of course $KLK$ is upper triangular if $L$ is lower triangular. If we factor $KAK = QR$ as in (2.1.14), then $A = (KQK)(KRK) = Q_1L$, in which $Q_1 = KQK$ is unitary and $L$ is lower triangular with nonnegative main diagonal entries; we call this a QL factorization of $A$. Now let $A = QL$ be a QL factorization of $A$, and observe that $A^* = L^*Q^*$, which is an RQ factorization of $A^*$.

Finally, factoring $KA^*K = QL$ gives $A = (KLK)^*(KQK)^*$, which is an LQ factorization of $A$.

2.1.15 Corollary. Let $A \in M_n$ be given. Then there are unitary matrices $Q_1, Q_2, Q_3$, lower triangular matrices $L_1, L_3$ with nonnegative main diagonal entries, and an upper triangular matrix $R_2$ with nonnegative main diagonal entries such that $A = Q_1L_1 = R_2Q_2 = L_3Q_3$. If $A$ is nonsingular, then the respective unitary and triangular factors are uniquely determined and the main diagonal entries of the triangular factors are all positive. If $A$ is real, then all of the factors $Q_1, Q_2, Q_3, L_1, L_3, R_2$ may be chosen to be real.

Problems

1. If $U \in M_n$ is unitary, show that $|\det U| = 1$.

2. Let $U \in M_n$ be unitary and let $\lambda$ be a given eigenvalue of $U$. Show that (a) $|\lambda| = 1$ and (b) $x$ is a (right) eigenvector of $U$ corresponding to $\lambda$ if and only if $x$ is a left eigenvector of $U$ corresponding to $\lambda$. Hint: Use (2.1.4g) and Problem 1 in (1.1).

3. Given real parameters $\theta_1, \theta_2, \ldots, \theta_n$, show that $U = \operatorname{diag}(e^{i\theta_1}, e^{i\theta_2}, \ldots, e^{i\theta_n})$ is unitary. Show that every diagonal unitary matrix has this form.

4. Characterize the diagonal real orthogonal matrices.

5. Show that the permutation matrices (0.9.5) in $M_n$ are a subgroup (a subset that is itself a group) of the group of real orthogonal matrices. How many different permutation matrices are there in $M_n$?

6. Give a presentation in terms of parameters of the 3-by-3 orthogonal group. Two presentations of the 2-by-2 orthogonal group are given in (2.1).

7. Suppose $A, B \in M_n$ and $AB = I$. Provide details for the following argument that $BA = I$: Every $y \in \mathbb{C}^n$ can be represented as $y = A(By)$, so $\operatorname{rank} A = n$ and hence $\dim(\operatorname{nullspace}(A)) = 0$ (...). Compute $A(AB - BA) = A(I - BA) = A - (AB)A = A - A = 0$, so $AB - BA = 0$.

8. A matrix $A \in M_n$ is complex orthogonal if $A^TA = I$. A real orthogonal matrix is unitary, but a nonreal complex orthogonal matrix need not be unitary. (a) Let $K = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \in M_2(\mathbb{R})$. Show that $A(t) = (\cosh t)I + (i\sinh t)K \in M_2$ is complex orthogonal for all $t \in \mathbb{R}$, but that $A(t)$ is unitary only for $t = 0$. The hyperbolic functions are defined by $\cosh t = (e^t + e^{-t})/2$, $\sinh t = (e^t - e^{-t})/2$. (b) Show that, unlike the unitary matrices, the set of complex orthogonal matrices is not a bounded set, and it is therefore not a compact set. (c) Show that the set of complex orthogonal matrices of a given size forms a group.

The smaller (and compact) group of real orthogonal matrices of a given size is often called the orthogonal group. (d) If $A \in M_n$ is complex orthogonal, show that $|\det A| = 1$; consider $A(t)$ in (a) to show that $A$ can have eigenvalues $\lambda$ with $|\lambda| \neq 1$. (e) If $A \in M_n$ is complex orthogonal, show that $\bar A$, $A^T$, and $A^*$ are all complex orthogonal and nonsingular. Do the rows or columns of $A$ form an orthogonal set? (f) Characterize the diagonal complex orthogonal matrices. Compare with Problem 4. (g) Show that $A \in M_n$ is both complex orthogonal and unitary if and only if it is real orthogonal.

9. If $U \in M_n$ is unitary, show that $\bar U$, $U^T$, and $U^*$ are all unitary.

10. If $U \in M_n$ is unitary, show that $x, y \in \mathbb{C}^n$ are orthogonal if and only if $Ux$ and $Uy$ are orthogonal.

11. A nonsingular matrix $A \in M_n$ is skew orthogonal if $A^{-1} = -A^T$. Show that $A$ is skew orthogonal if and only if $iA$ is orthogonal. More generally, if $\theta \in \mathbb{R}$, show that $A^{-1} = e^{i\theta}A^T$ if and only if $e^{i\theta/2}A$ is orthogonal. What is this for $\theta = \pi$? For $\theta = 0$?

12. Show that if $A \in M_n$ is similar to a unitary matrix, then $A^{-1}$ is similar to $A^*$.

13. Consider $\operatorname{diag}(2, \frac{1}{2}) \in M_2$ and show that the set of matrices that are similar to unitary matrices is a proper subset of the set of matrices $A$ for which $A^{-1}$ is similar to $A^*$.

14. Show that the intersection of the group of unitary matrices in $M_n$ with the group of complex orthogonal matrices in $M_n$ is the group of real orthogonal matrices in $M_n$. Hint: $U^{-1} = U^T = U^*$.

15. If $U \in M_n$ is unitary, $\alpha \subseteq \{1, \ldots, n\}$, and $U[\alpha, \alpha^c] = 0$ (0.7.1), show that $U[\alpha^c, \alpha] = 0$, and that $U[\alpha]$ and $U[\alpha^c]$ are unitary.

16. Let $x, y \in \mathbb{R}^n$ be given linearly independent unit vectors and let $w = x + y$. Consider the Palais matrix $P_{x,y} = I - 2(w^Tw)^{-1}ww^T + 2yx^T$. Show that: (a) $P_{x,y} = (I - 2(w^Tw)^{-1}ww^T)(I - 2xx^T) = U_wU_x$ is a product of two real Householder matrices, so it is a real orthogonal matrix; (b) $\det P_{x,y} = +1$, so $P_{x,y}$ is always a proper rotation matrix; (c) $P_{x,y}x = y$ and $P_{x,y}y = -x + 2(x^Ty)y$; (d) $P_{x,y}z = z$ if $z \in \mathbb{R}^n$, $z \perp x$, and $z \perp y$; (e) $P_{x,y}$ acts as the identity on the $(n-2)$-dimensional subspace $(\operatorname{span}\{x, y\})^\perp$ and it is a proper rotation on the 2-dimensional subspace $\operatorname{span}\{x, y\}$ that takes $x$ into $y$; (f) if $n = 3$, explain why $P_{x,y}$ is the unique proper rotation that takes $x$ into $y$ and leaves fixed their vector cross product $x \times y$; (g) the eigenvalues of $P_{x,y}$ are $x^Ty \pm i(1 - (x^Ty)^2)^{1/2} = e^{\pm i\theta}, 1, \ldots, 1$, in which $\cos\theta = x^Ty$.

Hint: (1.3.23); the eigenvalues of $[w\ \ x]^T[-(w^Tw)^{-1}w\ \ y] \in M_2(\mathbb{R})$ are $\frac{1}{2}\bigl(x^Ty - 1 \pm i(1 - (x^Ty)^2)^{1/2}\bigr)$.

17. Suppose that $A \in M_{n,m}$, $n \ge m$, and $\operatorname{rank} A = m$. Describe the steps of the Gram-Schmidt process applied to the columns of $A$, proceeding from left to right. Explain why this process produces, column-by-column, an explicit matrix $Q \in M_{n,m}$ with orthonormal columns and an explicit upper triangular matrix $R \in M_m$ such that $Q = AR$. How is this factorization related to the one in (2.1.14)?

18. Let $A \in M_n$ be factored as $A = QR$ as in (2.1.14), partition $A = [a_1\ \ldots\ a_n]$ and $Q = [q_1\ \ldots\ q_n]$ according to their columns, and let $R = [r_{ij}]_{i,j=1}^n$. (a) Explain why $\{q_1, \ldots, q_k\}$ is an orthonormal basis for $\operatorname{span}\{a_1, \ldots, a_k\}$ for each $k = 1, \ldots, n$. (b) Show that $r_{kk}$ is the Euclidean distance from $a_k$ to $\operatorname{span}\{a_1, \ldots, a_{k-1}\}$ for each $k = 2, \ldots, n$.

19. Let $X = [x_1\ \ldots\ x_m] \in M_{n,m}$, suppose $\operatorname{rank} X = m$, and factor $X = QR$ as in (2.1.14). Let $Y = QR^{-*} = [y_1\ \ldots\ y_m]$. Show that the columns of $Y$ are a basis for the subspace $S = \operatorname{span}\{x_1, \ldots, x_m\}$ and that $Y^*X = I_m$, so $y_i^*x_j = 0$ if $i \neq j$ and each $y_i^*x_i = 1$. One says that $\{y_1, \ldots, y_m\}$ is the basis of $S$ that is dual (reciprocal) to the basis $\{x_1, \ldots, x_m\}$.

20. If $U \in M_n$ is unitary, show that $\operatorname{adj} U = \det(U)U^*$.

21. Explain why Lemma (2.1.10) remains true if "unitary" is replaced with "complex orthogonal".

22. Suppose that $X, Y \in M_{n,m}$ have orthonormal columns. Show that $X$ and $Y$ have the same range (column space) if and only if there is a unitary $U \in M_m$ such that $X = YU$. Hint: (0.2.7).

23. Let $A \in M_n$, let $A = QR$ be a QR factorization, let $R = [r_{ij}]$, and partition both $A$ and $R$ according to their columns: $A = [a_1\ \ldots\ a_n]$ and $R = [r_1\ \ldots\ r_n]$. Explain why $\|a_i\|_2 = \|r_i\|_2$ for each $i = 1, \ldots, n$, $|\det A| = \det R = r_{11}\cdots r_{nn}$, and $r_{ii} \le \|r_i\|_2$ for each $i = 1, \ldots, n$. Conclude that $|\det A| \le \prod_{i=1}^n \|a_i\|_2$. This is known as Hadamard's inequality.

Further Reading. For more information about matrices that satisfy the conditions of (2.1.9), see C. R. DePrima and C. R. Johnson, The Range of $A^{-1}A^*$ in $GL(n, \mathbb{C})$, Linear Algebra Appl. 9 (1974).

2.2 Unitary similarity

Since $U^* = U^{-1}$ for unitary $U$, the transformation on $M_n$ given by $A \to U^*AU$ is a similarity transformation if $U$ is unitary. This special type of similarity is called unitary similarity.
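As a numerical aside, the following minimal numpy sketch (the random matrices and seed are arbitrary illustrative choices, not from the text) watches a unitary similarity in action: it preserves the spectrum, because it is a similarity, and it also preserves the sum of squared absolute entries, anticipating (2.2.2) below.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# Build a unitary U (Q factor of a random matrix, cf. (2.1.14)) and form U* A U.
U, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))
B = U.conj().T @ A @ U

# Unitary similarity is a similarity: the spectrum is unchanged ...
print(np.allclose(np.sort_complex(np.linalg.eigvals(A)),
                  np.sort_complex(np.linalg.eigvals(B))))
# ... and, anticipating (2.2.2), the Frobenius norm is unchanged as well.
print(np.isclose(np.linalg.norm(A, 'fro'), np.linalg.norm(B, 'fro')))
```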

2.2.1 Definition. Let $A, B \in M_n$ be given. We say that $A$ is unitarily similar to $B$ if there is a unitary $U \in M_n$ such that $A = UBU^*$. If $U$ may be taken to be real (and hence is real orthogonal), then $A$ is said to be real orthogonally similar to $B$. We say that $A$ is unitarily diagonalizable if it is unitarily similar to a diagonal matrix; $A$ is real orthogonally diagonalizable if it is real orthogonally similar to a diagonal matrix.

Exercise. Show that unitary similarity is an equivalence relation.

2.2.2 Theorem. Let $U \in M_n$ and $V \in M_m$ be unitary, let $A = [a_{ij}] \in M_{n,m}$ and $B = [b_{ij}] \in M_{n,m}$, and suppose $A = UBV$. Then $\sum_{i,j=1}^{n,m} |b_{ij}|^2 = \sum_{i,j=1}^{n,m} |a_{ij}|^2$. In particular, this identity is satisfied if $m = n$ and $V = U^*$, that is, if $A$ is unitarily similar to $B$.

Proof: It suffices to check that $\operatorname{tr} B^*B = \operatorname{tr} A^*A$ (0.2.5). Compute $\operatorname{tr} A^*A = \operatorname{tr}(UBV)^*(UBV) = \operatorname{tr}(V^*B^*U^*UBV) = \operatorname{tr} V^*B^*BV = \operatorname{tr} B^*BVV^* = \operatorname{tr} B^*B$.

Exercise. Show that the matrices $\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$ and $\begin{bmatrix} 0 & 2 \\ 0 & 0 \end{bmatrix}$ are similar but not unitarily similar.

Unitary similarity implies similarity, but not conversely. The unitary similarity equivalence relation partitions $M_n$ into finer equivalence classes than the similarity equivalence relation. Unitary similarity, like similarity, corresponds to a change of basis, but of a special type: it corresponds to a change from one orthonormal basis to another.

Exercise. Using the notation of (2.1.11), explain why only rows and columns $i$ and $j$ are changed under real orthogonal similarity via the plane rotation $U(\theta; i, j)$.

Exercise. Using the notation of (2.1.13), explain why $U(y, x)^*AU(y, x) = U_w^*AU_w$ for any $A \in M_n$; that is, a unitary similarity via an essentially Hermitian unitary matrix of the form $U(y, x)$ is a unitary similarity via a Householder matrix. Unitary (or real orthogonal) similarity via a Householder matrix is often called a Householder transformation.

For computational or theoretical reasons, it is often convenient to transform a given matrix by unitary similarity into another matrix with a special form. Here are two examples.

2.2.3 Example. Suppose $A = [a_{ij}] \in M_n$ is given. We claim that there is a unitary $U \in M_n$ such that all the main diagonal entries of $U^*AU = B = [b_{ij}]$ are equal; if $A$ is real, then $U$ may be taken to be real orthogonal.

If this claim is true, then $\operatorname{tr} A = \operatorname{tr} B = nb_{11}$, so every main diagonal entry of $B$ is equal to the average of the main diagonal entries of $A$.

Begin by considering the complex case and $n = 2$. Since we can replace $A \in M_2$ by $A - (\frac{1}{2}\operatorname{tr} A)I$, there is no loss of generality to assume that $\operatorname{tr} A = 0$, in which case the two eigenvalues of $A$ are $\pm\beta$ for some $\beta \in \mathbb{C}$. We wish to determine a unit vector $u$ such that $u^*Au = 0$. If $\beta = 0$, let $u$ be any unit vector such that $Au = 0$. If $\beta \neq 0$, let $w$ and $z$ be any unit eigenvectors associated with the distinct eigenvalues $\beta$ and $-\beta$, respectively. Let $x(\theta) = e^{i\theta}w + z$, which is nonzero for all $\theta \in \mathbb{R}$ since $w$ and $z$ are linearly independent. Compute $x(\theta)^*Ax(\theta) = \beta(e^{i\theta}w + z)^*(e^{i\theta}w - z) = 2i\beta\operatorname{Im}(e^{i\theta}z^*w)$. If $z^*w = e^{i\psi}|z^*w|$, then $x(-\psi)^*Ax(-\psi) = 0$. Let $u = x(-\psi)/\|x(-\psi)\|_2$. Now let $v \in \mathbb{C}^2$ be any unit vector that is orthogonal to $u$ and let $U = [u\ v]$. Then $U$ is unitary and $(U^*AU)_{11} = u^*Au = 0$. But $\operatorname{tr}(U^*AU) = 0$, so $(U^*AU)_{22} = 0$ as well.

Now suppose $n = 2$ and $A$ is real. If the diagonal entries of $A = [a_{ij}]$ are not equal, consider the plane rotation matrix $U = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$. A calculation reveals that the diagonal entries of $UAU^T$ are equal if $(\cos^2\theta - \sin^2\theta)(a_{11} - a_{22}) = 2\sin\theta\cos\theta\,(a_{12} + a_{21})$, so equal diagonal entries are achieved if $\theta \in (0, \pi/2)$ is chosen so that $\cot 2\theta = (a_{12} + a_{21})/(a_{11} - a_{22})$. We have now shown that any 2-by-2 complex matrix $A$ is unitarily similar to a matrix with both diagonal entries equal to the average of the diagonal entries of $A$; if $A$ is real, the similarity may be taken to be real orthogonal.

Now suppose $n > 2$ and define $f(A) = \max\{|a_{ii} - a_{jj}| : i, j = 1, 2, \ldots, n\}$. If $f(A) > 0$, let $A_2 = \begin{bmatrix} a_{ii} & a_{ij} \\ a_{ji} & a_{jj} \end{bmatrix}$ for a pair of indices $i, j$ for which $f(A) = |a_{ii} - a_{jj}|$ (there could be several pairs of indices for which this maximum positive separation is attained; choose any one of them). Let $U_2 \in M_2$ be unitary, real if $A$ is real, and such that $U_2^*A_2U_2$ has both main diagonal entries equal to $\frac{1}{2}(a_{ii} + a_{jj})$. Construct $U(i, j) \in M_n$ from $U_2$ in the same way that $U(\theta; i, j)$ was constructed from a 2-by-2 plane rotation in (2.1.11). The unitary similarity $U(i, j)^*AU(i, j)$ affects only entries in rows and columns $i$ and $j$, so it leaves unchanged every main diagonal entry of $A$ except the entries in positions $i$ and $j$, which it replaces with the average $\frac{1}{2}(a_{ii} + a_{jj})$. For any $k \neq i, j$ the triangle inequality ensures that
$$\bigl|a_{kk} - \tfrac{1}{2}(a_{ii} + a_{jj})\bigr| = \bigl|\tfrac{1}{2}(a_{kk} - a_{ii}) + \tfrac{1}{2}(a_{kk} - a_{jj})\bigr| \le \tfrac{1}{2}|a_{kk} - a_{ii}| + \tfrac{1}{2}|a_{kk} - a_{jj}| \le \tfrac{1}{2}f(A) + \tfrac{1}{2}f(A) = f(A)$$

with equality only if the scalars $a_{kk} - a_{ii}$ and $a_{kk} - a_{jj}$ both lie on the same ray in the complex plane and $|a_{kk} - a_{ii}| = |a_{kk} - a_{jj}|$. These two conditions imply that $a_{ii} = a_{jj}$, which is impossible since $f(A) = |a_{ii} - a_{jj}| > 0$, so it follows that $|a_{kk} - \frac{1}{2}(a_{ii} + a_{jj})| < f(A)$ for all $k \neq i, j$. Thus, the unitary similarity we have just constructed reduces by one the finitely many pairs of indices $k, \ell$ for which $f(A) = |a_{kk} - a_{\ell\ell}|$. Repeat the construction, if necessary, to deal with any such remaining pairs and achieve a unitary $U$ (real if $A$ is real) such that $f(U^*AU) < f(A)$.

Finally, consider the compact set $R(A) = \{U^*AU : U \in M_n\ \text{is unitary}\}$. Since $f$ is a continuous nonnegative-valued function on $R(A)$, it achieves its minimum value there; that is, there is some $B \in R(A)$ such that $f(C) \ge f(B) \ge 0$ for all $C \in R(A)$. If $f(B) > 0$, we have just seen that there is a unitary $U$ (real if $A$ is real) such that $f(B) > f(U^*BU)$. This contradiction shows that $f(B) = 0$, so all the diagonal entries of $B$ are equal.

2.2.4 Example. Suppose $A = [a_{ij}] \in M_n$ is given. The following construction shows that $A$ is unitarily similar to an upper Hessenberg matrix with nonnegative entries in its first subdiagonal. Let $a_1$ be the first column of $A$, partitioned as $a_1^T = [a_{11}\ \zeta^T]$ with $\zeta \in \mathbb{C}^{n-1}$. Let $U_1 = I_{n-1}$ if $\zeta = 0$; otherwise, use (2.1.13) to construct $U_1 = U(\|\zeta\|_2e_1, \zeta) \in M_{n-1}$, a unitary matrix that takes $\zeta$ into a positive multiple of $e_1$. Form the unitary matrix $V_1 = I_1 \oplus U_1$ and observe that the first column of $V_1A$ is the vector $[a_{11}\ \|\zeta\|_2\ 0\ \cdots\ 0]^T$. Moreover, $A_1 = (V_1A)V_1^*$ has the same first column as $V_1A$ and is unitarily similar to $A$. Partition it as
$$A_1 = \begin{bmatrix} a_{11} & \bigstar \\ \|\zeta\|_2e_1 & A_2 \end{bmatrix}, \qquad A_2 \in M_{n-1}$$
Use (2.1.13) again to form, in the same way, a unitary matrix $U_2 \in M_{n-1}$ that takes the first column of $A_2$ into a vector whose entries below the second are all zero and whose second entry is nonnegative. Let $V_2 = I_1 \oplus U_2$ and consider $V_2A_1V_2^*$. This similarity does not affect the first column of $A_1$. After at most $n - 1$ such steps, this construction produces an upper Hessenberg matrix that is unitarily similar to $A$ and has nonnegative subdiagonal entries.

Exercise. If $A$ is Hermitian or skew-Hermitian, explain why the construction in the preceding example produces a tridiagonal Hermitian or skew-Hermitian matrix that is unitarily similar to $A$.

Theorem (2.2.2) provides a necessary but not sufficient condition for two given matrices to be unitarily similar. It can be augmented with additional identities that collectively do provide necessary and sufficient conditions.

A key role is played by the following simple notion. Let $s, t$ be two given noncommuting variables. We refer to any finite formal product of nonnegative powers of $s, t$
$$W(s, t) = s^{m_1}t^{n_1}s^{m_2}t^{n_2}\cdots s^{m_k}t^{n_k}, \qquad m_1, n_1, \ldots, m_k, n_k \ge 0 \tag{2.2.5}$$
as a word in $s$ and $t$. The degree of the word $W(s, t)$ is the nonnegative integer $m_1 + n_1 + m_2 + n_2 + \cdots + m_k + n_k$, that is, the sum of all the exponents in the word. If $A \in M_n$ is given, we define a word in $A$ and $A^*$ as
$$W(A, A^*) = A^{m_1}(A^*)^{n_1}A^{m_2}(A^*)^{n_2}\cdots A^{m_k}(A^*)^{n_k}$$
Since the powers of $A$ and $A^*$ need not commute, it may not be possible to simplify the expression of $W(A, A^*)$ by rearranging the terms in the product. Suppose $A$ is unitarily similar to $B \in M_n$, so that $A = UBU^*$ for some unitary $U \in M_n$. For any word $W(s, t)$ we have
$$W(A, A^*) = (UBU^*)^{m_1}(UB^*U^*)^{n_1}\cdots(UBU^*)^{m_k}(UB^*U^*)^{n_k} = UB^{m_1}U^*U(B^*)^{n_1}U^*\cdots UB^{m_k}U^*U(B^*)^{n_k}U^* = UB^{m_1}(B^*)^{n_1}\cdots B^{m_k}(B^*)^{n_k}U^* = UW(B, B^*)U^*$$
so $W(A, A^*)$ is unitarily similar to $W(B, B^*)$. Thus, $\operatorname{tr} W(A, A^*) = \operatorname{tr} W(B, B^*)$. If we take the word $W(s, t) = ts$, we obtain the identity in (2.2.2). If one considers all possible words $W(s, t)$, this observation gives infinitely many necessary conditions for two matrices to be unitarily similar. A theorem of W. Specht, which we state without proof, guarantees that these necessary conditions are also sufficient.

2.2.6 Theorem. Two matrices $A, B \in M_n$ are unitarily similar if and only if
$$\operatorname{tr} W(A, A^*) = \operatorname{tr} W(B, B^*) \tag{2.2.7}$$
for every word $W(s, t)$ in two noncommuting variables.

Specht's theorem can be used to show that two matrices are not unitarily similar by exhibiting a specific word that violates (2.2.7). However, except in special situations (see Problem 6), it may be useless in showing that two given matrices are unitarily similar because infinitely many conditions must be verified. Fortunately, a refinement of Specht's theorem says that it suffices to check the trace identities (2.2.7) for only finitely many words, which gives a practical criterion to assess unitary similarity of matrices of small size.
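The word traces in (2.2.7) are easy to compute numerically. The following minimal numpy sketch (the helper function and the particular test matrices are illustrative choices, not from the text) evaluates a few low-degree word traces for two similar but not unitarily similar matrices; the word $ts$, that is, $\operatorname{tr} AA^*$, already distinguishes them.

```python
import numpy as np

def word_trace(A, word):
    """tr W(A, A*) for a word given as a string in the letters 's' (A) and 't' (A*)."""
    M = np.eye(A.shape[0], dtype=complex)
    for letter in word:
        M = M @ (A if letter == 's' else A.conj().T)
    return np.trace(M)

# Two matrices with the same eigenvalues (both nilpotent and similar to the
# same Jordan block) but different Frobenius norms.
A = np.array([[0, 1], [0, 0]], dtype=complex)
B = np.array([[0, 2], [0, 0]], dtype=complex)

for w in ['s', 'ss', 'st']:
    print(w, word_trace(A, w), word_trace(B, w))
# The word 'st' (i.e., tr AA*) differs (1 versus 4), so A and B are not
# unitarily similar, even though they are similar.
```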

2.2.8 Theorem. Two matrices $A, B \in M_n$ are unitarily similar if and only if $\operatorname{tr} W(A, A^*) = \operatorname{tr} W(B, B^*)$ for every word $W(s, t)$ in two noncommuting variables whose degree is at most
$$r = n\sqrt{\frac{2n^2}{n-1} + \frac{1}{4}} + \frac{n}{2} - 2$$
For $n = 2$, it suffices to verify (2.2.7) for the three words $W(s, t) = s, s^2$, and $st$. For $n = 3$, it suffices to verify (2.2.7) for the seven words $W(s, t) = s, s^2, st, s^3, s^2t, s^2t^2$, and $s^2t^2st$.

Problems

1. Let $A = [a_{ij}] \in M_n(\mathbb{R})$ be symmetric but not diagonal, and suppose that indices $i, j$ with $i < j$ are chosen so that $|a_{ij}|$ is as large as possible. Define $\theta$ by $\cot 2\theta = (a_{jj} - a_{ii})/(2a_{ij})$, let $U(\theta; i, j)$ be the plane rotation (2.1.11), and let $B = U(\theta; i, j)AU(\theta; i, j)^T = [b_{pq}]$. Show that $b_{ij} = 0$ and use (2.2.2) to show that $\sum_{p\neq q}|b_{pq}|^2 < \sum_{p\neq q}|a_{pq}|^2$. Indeed, it is not necessary to compute $\theta$ itself: $\cos\theta$ and $\sin\theta$ can be written down directly in terms of $a_{ii}$, $a_{jj}$, and $a_{ij}$ by normalizing an eigenvector of the 2-by-2 submatrix $\begin{bmatrix} a_{ii} & a_{ij} \\ a_{ij} & a_{jj} \end{bmatrix}$. Show that repeated real orthogonal similarities via plane rotations (chosen in the same way for $B$ and its successors) strictly decrease the sums of the squares of the off-diagonal entries while preserving the sums of the squares of all the entries; at each step, the computed matrix is (in this sense) more nearly diagonal than at the step before. This is the method of Jacobi for calculating the eigenvalues of a real symmetric matrix. It produces a sequence of matrices that converges to a real diagonal matrix. Why must the diagonal entries of the limit be the eigenvalues of $A$? How can the corresponding eigenvectors be obtained?

2. The eigenvalue calculation method of Givens for real matrices also uses plane rotations, but in a different way. For $n \ge 3$, provide details for the following argument showing that every $A = [a_{ij}] \in M_n(\mathbb{R})$ is real orthogonally similar to a real lower Hessenberg matrix, which is necessarily tridiagonal if $A$ is symmetric; see (0.9.9) and (0.9.10). Choose a plane rotation $U_{1,3}$ of the form $U(\theta; 1, 3)$, as in the preceding problem, so that the 1,3 entry of $U_{1,3}^TAU_{1,3}$ is 0. Choose another plane rotation of the form $U_{1,4} = U(\theta; 1, 4)$ so that the 1,4 entry of $U_{1,4}^T(U_{1,3}^TAU_{1,3})U_{1,4}$ is 0; continue in this way to zero out the rest of the first row with a sequence of real orthogonal similarities. Then start on the second row beginning with the 2,4 entry and zero out the 2,4, 2,5, ..., 2,n entries. Explain why this process does not disturb previously manufactured 0 entries, and why it preserves symmetry if $A$ is symmetric. Proceeding in this way through row $n - 2$ produces a lower Hessenberg matrix after finitely many real orthogonal similarities via plane rotations; that matrix is tridiagonal if $A$ is symmetric.

However, the eigenvalues of $A$ are not displayed as in Jacobi's method; they must be obtained from a further calculation.

3. Show that every $A \in M_2$ is unitarily similar to its transpose. Hint: Consider the three words $W(s, t) = s, s^2, st$.

4. Let
$$A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 2 \\ 0 & 0 & 0 \end{bmatrix}$$
Any matrix is similar to its transpose (3.2.3), but $A$ is not unitarily similar to $A^T$. For which of the seven words listed in (2.2.8) do $A$ and $B = A^T$ fail the test (2.2.7)?

5. If $A \in M_n$ and there is a unitary $U \in M_n$ such that $A^* = UAU^*$, show that $AA^* + A^*A = U(AA^* + A^*A)U^*$, that is, $U$ commutes with $AA^* + A^*A$. Apply this observation to the 3-by-3 matrix in the preceding problem and conclude that if it is unitarily similar to its transpose, then any such unitary similarity must be diagonal. Show that no diagonal unitary similarity can take this matrix into its transpose.

6. Let $A \in M_n$ and $B, C \in M_m$ be given. Use either (2.2.6) or (2.2.8) to show that $B$ and $C$ are unitarily similar if and only if any one of the following conditions holds: (a) $A \oplus B$ and $A \oplus C$ are unitarily similar. (b) $B \oplus \cdots \oplus B$ and $C \oplus \cdots \oplus C$ are unitarily similar, if both direct sums contain the same number of summands. (c) $A \oplus B \oplus \cdots \oplus B$ and $A \oplus C \oplus \cdots \oplus C$ are unitarily similar, if both direct sums contain the same number of summands.

7. Give an example of two 2-by-2 matrices that satisfy the identity (2.2.2) but are not unitarily similar. Explain why.

8. Let $A, B \in M_2$ and let $C = AB - BA$. Use Example (2.2.3) to show that $C^2 = \alpha I$ for some scalar $\alpha$. Hint: $\operatorname{tr} C = ?$; $\begin{bmatrix} 0 & b \\ a & 0 \end{bmatrix}^2 = ?$

9. Let $A \in M_n$ and suppose $\operatorname{tr} A = 0$. Use Example (2.2.3) to show that $A$ can be written as a sum of two nilpotent matrices. Conversely, if $A$ can be written as a sum of nilpotent matrices, explain why $\operatorname{tr} A = 0$. Hint: Write $A = UBU^*$, in which $B = [b_{ij}]$ has zero main diagonal entries. Then write $B = B_L + B_R$, in which $B_L = [\beta_{ij}]$, $\beta_{ij} = b_{ij}$ if $i \ge j$ and $\beta_{ij} = 0$ if $j > i$.

10. Let $n \ge 2$ be a given integer and define $\omega = e^{2\pi i/n}$. (a) Explain why $\sum_{k=0}^{n-1}\omega^{k\ell} = 0$ unless $\ell = mn$ for some $m = 0, \pm 1, \pm 2, \ldots$, in which case the sum is equal to $n$. (b) Let $F_n = n^{-1/2}[\omega^{(i-1)(j-1)}]_{i,j=1}^n$ denote the n-by-n Fourier matrix. Show that $F_n$ is symmetric, unitary, and coninvolutory: $F_n\bar F_n = \bar F_nF_n = I$. (c) Let $C_n$ denote the basic circulant permutation matrix (...). Explain why $C_n$ is unitary (real orthogonal). (d) Let $D = \operatorname{diag}(1, \omega, \omega^2, \ldots, \omega^{n-1})$ and show that $C_nF_n = F_nD$, so $C_n = F_nDF_n^*$ and $C_n^k = F_nD^kF_n^*$ for all $k = 1, 2, \ldots$. (e) Let $A$ denote the circulant matrix (...), expressed as the sum in (...). Explain why $A = F_n\Lambda F_n^*$, in which $\Lambda = \operatorname{diag}(\lambda_1, \ldots, \lambda_n)$, each $\lambda_\ell = \sum_{k=0}^{n-1}a_{k+1}\omega^{k(\ell - 1)}$, and the diagonal entries of $\Lambda$ are the entries of the vector $n^{1/2}F_nAe_1$. Thus, the Fourier matrix provides an explicit unitary diagonalization for every circulant matrix. (f) Write $F_n = C_n + iS_n$, in which $C_n$ and $S_n$ are real. What are the entries of $C_n$ and $S_n$? Let $H_n = C_n + S_n$ denote the n-by-n Hartley matrix. (g) Show that $C_n^2 + S_n^2 = I$, $C_nS_n = S_nC_n = 0$, $H_n$ is symmetric, and $H_n$ is real orthogonal. (h) Let $K_n$ denote the reversal matrix (...). Show that $C_nK_n = K_nC_n = C_n$, $S_nK_n = K_nS_n = -S_n$, and $H_nK_n = K_nH_n$, so $C_n$, $S_n$, and $H_n$ are centrosymmetric. It is known that $H_nAH_n = \Lambda$ is diagonal for any matrix of the form $A = E + K_nF$, in which $E$ and $F$ are real circulant matrices, $E = E^T$, and $F = -F^T$; the diagonal entries of $\Lambda$ are the entries of the vector $n^{1/2}H_nAe_1$. In particular, the Hartley matrix provides an explicit real orthogonal diagonalization for every real symmetric circulant matrix.

Further Readings and Notes. For the original proof of (2.2.6), see W. Specht, Zur Theorie der Matrizen II, Jahresber. Deutsch. Math.-Verein. 50 (1940) 19-23; there is a modern proof in [Kap]. For a survey of the issues addressed in (2.2.8), see D. Djokovic and C. Johnson, Unitarily Achievable Zero Patterns and Traces of Words in A and A*, Linear Algebra Appl. 421 (2007).

2.3 Unitary triangularizations

Perhaps the most fundamentally useful fact of elementary matrix theory is a theorem attributed to I. Schur: any square complex matrix $A$ is unitarily similar to a triangular matrix whose diagonal entries are the eigenvalues of $A$. The proof involves a sequential deflation by unitary similarity.

2.3.1 Theorem. (Schur) Let $A \in M_n$ have eigenvalues $\lambda_1, \ldots, \lambda_n$ in any prescribed order and let $x$ be a unit vector such that $Ax = \lambda_1x$. Then there is a unitary $U = [x\ u_2\ \ldots\ u_n] \in M_n$ such that $U^*AU = T = [t_{ij}]$ is upper triangular with diagonal entries $t_{ii} = \lambda_i$, $i = 1, \ldots, n$.

That is, every square complex matrix $A$ is unitarily similar to an upper triangular matrix whose diagonal entries are the eigenvalues of $A$ in any prescribed order. Furthermore, if $A \in M_n(\mathbb{R})$ and if all its eigenvalues are real, then $U$ may be chosen to be real orthogonal.

Proof: Let $x$ be a normalized eigenvector of $A$ associated with the eigenvalue $\lambda_1$, that is, $x^*x = 1$ and $Ax = \lambda_1x$. Let $U_1 = [x\ u_2\ \ldots\ u_n]$ be any unitary matrix whose first column is $x$. For example, one may take $U_1 = U(x, e_1)$ as in (2.1.13), or see Problem 1. Then
$$U_1^*AU_1 = U_1^*[Ax\ Au_2\ \ldots\ Au_n] = U_1^*[\lambda_1x\ Au_2\ \ldots\ Au_n] = \begin{bmatrix} x^* \\ u_2^* \\ \vdots \\ u_n^* \end{bmatrix}[\lambda_1x\ Au_2\ \ldots\ Au_n] = \begin{bmatrix} \lambda_1x^*x & x^*Au_2 & \cdots & x^*Au_n \\ \lambda_1u_2^*x & & & \\ \vdots & & A_1 & \\ \lambda_1u_n^*x & & & \end{bmatrix} = \begin{bmatrix} \lambda_1 & \bigstar \\ 0 & A_1 \end{bmatrix}$$
because the columns of $U_1$ are orthonormal. The eigenvalues of the submatrix $A_1 = [u_i^*Au_j]_{i,j=2}^n \in M_{n-1}$ are $\lambda_2, \ldots, \lambda_n$. If $n = 2$, we have achieved the desired unitary triangularization. If not, let $\xi \in \mathbb{C}^{n-1}$ be a normalized eigenvector of $A_1$ corresponding to $\lambda_2$, and perform the preceding reduction on $A_1$. If $U_2 \in M_{n-1}$ is any unitary matrix whose first column is $\xi$, then we have seen that
$$U_2^*A_1U_2 = \begin{bmatrix} \lambda_2 & \bigstar \\ 0 & A_2 \end{bmatrix}$$
Let $V_2 = [1] \oplus U_2$ and compute the unitary similarity
$$(U_1V_2)^*AU_1V_2 = V_2^*U_1^*AU_1V_2 = \begin{bmatrix} \lambda_1 & \bigstar & \bigstar \\ 0 & \lambda_2 & \bigstar \\ 0 & 0 & A_2 \end{bmatrix}$$
Continue this reduction to produce unitary matrices $U_i \in M_{n-i+1}$, $i = 1, \ldots, n-1$, and unitary matrices $V_i \in M_n$, $i = 2, \ldots, n-1$. The matrix $U = U_1V_2V_3\cdots V_{n-1}$ is unitary and $U^*AU$ is upper triangular.

If all the eigenvalues of $A \in M_n(\mathbb{R})$ are real, then all of the eigenvectors and unitary matrices in the preceding algorithm can be chosen to be real (Problem 3 in (1.1) and (2.1.13)).

Exercise. Follow the proof of (2.3.1) to see that "upper triangular" can be replaced by "lower triangular" in the statement of the theorem with, of course, a different unitary similarity.

Exercise. If the eigenvector $x$ in the proof of (2.3.1) is also a left eigenvector of $A$, we know that $x^*A = \lambda_1x^*$ (1.4.7a). Explain why $U_1^*AU_1 = [\lambda_1] \oplus A_1$. If every right eigenvector of $A$ is also a left eigenvector, explain why the upper triangular matrix $T$ constructed in (2.3.1) is actually a diagonal matrix.

2.3.2 Example. If the eigenvalues of $A$ are re-ordered and the corresponding upper triangularization (2.3.1) is performed, the entries of $T$ above the main diagonal can look very different. Consider
$$T_1 = (\ldots), \qquad T_2 = (\ldots), \qquad U = (\ldots)$$
Explain why $U$ is unitary and $T_2 = UT_1U^*$.

Exercise. If $A = [a_{ij}] \in M_n$ has eigenvalues $\lambda_1, \ldots, \lambda_n$ and is unitarily similar to an upper triangular matrix $T = [t_{ij}] \in M_n$, the diagonal entries of $T$ are the eigenvalues of $A$ in some order. Apply (2.2.2) to $A$ and $T$ to show that
$$\sum_{i=1}^n |\lambda_i|^2 = \sum_{i,j=1}^n |a_{ij}|^2 - \sum_{i<j} |t_{ij}|^2 \le \sum_{i,j=1}^n |a_{ij}|^2 \tag{2.3.2a}$$
with equality if and only if $T$ is diagonal.

Exercise. If $A = [a_{ij}]$ and $B = [b_{ij}] \in M_2$ have the same eigenvalues and if $\sum_{i,j=1}^2 |a_{ij}|^2 = \sum_{i,j=1}^2 |b_{ij}|^2$, use the criterion in (2.2.8) to show that $A$ and $B$ are unitarily similar. However, consider
$$A = (\ldots) \quad \text{and} \quad B = (\ldots) \tag{2.3.2b}$$
which have the same eigenvalues and the same sums of squared entries. Use the criterion in (2.2.8) or the exercise following (...) to show that $A$ and $B$ are not unitarily similar. Nevertheless, $A$ and $B$ are similar. Why?
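The triangularization in (2.3.1) is easy to exhibit numerically. The following minimal sketch (not the book's construction; it uses scipy's Schur routine as a stand-in, and the random matrix and seed are arbitrary choices) computes a complex Schur form $A = UTU^*$ and checks the inequality (2.3.2a).

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# Complex Schur form: A = U T U* with U unitary and T upper triangular.
T, U = schur(A, output='complex')

print(np.allclose(U @ T @ U.conj().T, A))
print(np.allclose(np.tril(T, -1), 0))                      # T is upper triangular
print(np.allclose(np.sort_complex(np.diag(T)),
                  np.sort_complex(np.linalg.eigvals(A))))  # diag(T) = spectrum of A

# The inequality (2.3.2a): sum |lambda_i|^2 <= sum |a_ij|^2,
# with equality exactly when T is diagonal.
print(np.sum(np.abs(np.diag(T))**2) <= np.sum(np.abs(A)**2) + 1e-10)
```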

It is a useful adjunct to (2.3.1) that a commuting family of matrices may be simultaneously upper triangularized.

2.3.3 Theorem. Let $\mathcal{F} \subseteq M_n$ be a commuting family. There is a unitary $U \in M_n$ such that $U^*AU$ is upper triangular for every $A \in \mathcal{F}$.

Proof: Return to the proof of (2.3.1). Exploiting (1.3.19) at each step of the proof in which a choice of an eigenvector (and unitary matrix) is made, choose an eigenvector that is common to every $A \in \mathcal{F}$ and choose a single unitary matrix that has this common eigenvector as its first column; it deflates (via unitary similarity) every matrix in $\mathcal{F}$ in the same way. Similarity preserves commutativity, and a partitioned multiplication calculation reveals that, if two matrices of the form
$$\begin{bmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{bmatrix} \quad \text{and} \quad \begin{bmatrix} B_{11} & B_{12} \\ 0 & B_{22} \end{bmatrix}$$
commute, then $A_{22}$ and $B_{22}$ commute also. Thus, the commuting family property is inherited by the submatrix $A_i$ at each reduction step in the proof of (2.3.1). We conclude that all ingredients in the $U$ of (2.3.1) may be chosen in the same way for all members of a commuting family, thus verifying (2.3.3).

In (2.3.1) we are permitted to specify the main diagonal of $T$ (that is, we may specify in advance the order in which the eigenvalues of $A$ appear as the deflation progresses), but (2.3.3) makes no such claim, even for a single matrix in $\mathcal{F}$. At each stage of the deflation, the common eigenvector used is associated with some eigenvalue of each matrix in $\mathcal{F}$, but we may not be able to specify which one. We simply take the eigenvalues as they come, using (1.3.19).

If a real matrix $A$ has any non-real eigenvalues, there is no hope of reducing it to upper triangular form $T$ by a real similarity because the diagonal entries of $T$ would be the eigenvalues of $A$. However, we can always reduce $A$ to a real quasi-triangular form by a real similarity as well as by a real orthogonal similarity.

2.3.4 Theorem. Suppose that $A \in M_n(\mathbb{R})$ has $p$ complex conjugate pairs of non-real eigenvalues $\lambda_1 = a_1 + ib_1$, $\bar\lambda_1 = a_1 - ib_1$, ..., $\lambda_p = a_p + ib_p$, $\bar\lambda_p = a_p - ib_p$, in which all $a_j, b_j \in \mathbb{R}$ and all $b_j \neq 0$, and, if $2p < n$, an additional $n - 2p$ real eigenvalues $\mu_1, \ldots, \mu_{n-2p}$.

Then there is a nonsingular $S \in M_n(\mathbb{R})$ such that
$$S^{-1}AS = \begin{bmatrix} A_1 & & & \bigstar \\ & A_2 & & \\ & & \ddots & \\ 0 & & & A_{n-p} \end{bmatrix} \tag{2.3.5}$$
is real and block upper triangular, and each diagonal block is either 1-by-1 or 2-by-2. There are $p$ real diagonal blocks of the form $\begin{bmatrix} a_j & b_j \\ -b_j & a_j \end{bmatrix}$, one for each conjugate pair of non-real eigenvalues $\lambda_j, \bar\lambda_j = a_j \pm ib_j$. There are $n - 2p$ diagonal blocks of the form $[\mu_j]$, one for each of the real eigenvalues $\mu_1, \ldots, \mu_{n-2p}$. The $p$ 2-by-2 diagonal blocks and the $n - 2p$ 1-by-1 diagonal blocks may appear in (2.3.5) in any prescribed order. The real similarity $S$ may be taken to be a real orthogonal matrix $Q$; in this case the 2-by-2 diagonal blocks of $Q^TAQ$ have the form $R_j\begin{bmatrix} a_j & b_j \\ -b_j & a_j \end{bmatrix}R_j^{-1}$, in which each $R_j$ is a nonsingular real upper triangular matrix.

Proof: The proof of (2.3.1) shows how to deflate $A$ by a sequence of real orthogonal similarities corresponding to its real eigenvalues, if any. Problem 33 in (1.3) describes the deflation step corresponding to a complex conjugate pair of non-real eigenvalues; repeating this deflation $p$ times achieves the form (2.3.5), whose 2-by-2 diagonal blocks have the asserted form (which reveals their corresponding conjugate pair of eigenvalues). It remains to consider how the 2-by-2 diagonal blocks would be modified if we were to use only real orthogonal similarities in the 2-by-2 deflations. If $\lambda = a + ib$ is a non-real eigenvalue of $A$ with associated eigenvector $x = u + iv$, $u, v \in \mathbb{R}^n$, we have seen that $\{u, v\}$ is linearly independent and $A[u\ v] = [u\ v]\begin{bmatrix} a & b \\ -b & a \end{bmatrix}$. If $\{u, v\}$ is not an orthonormal set, use the QR factorization (2.1.14) to write $[u\ v] = Q_2R_2$, in which $Q_2 = [q_1\ q_2] \in M_{n,2}(\mathbb{R})$ has orthonormal columns and $R_2 \in M_2(\mathbb{R})$ is nonsingular and upper triangular. Then $A[u\ v] = AQ_2R_2 = Q_2R_2\begin{bmatrix} a & b \\ -b & a \end{bmatrix}$, so
$$AQ_2 = Q_2R_2\begin{bmatrix} a & b \\ -b & a \end{bmatrix}R_2^{-1}$$
If we let $S$ be a real orthogonal matrix whose first two columns are $q_1$ and $q_2$, we obtain a deflation of the asserted form. Of course, if $\{u, v\}$ is orthonormal, the QR factorization is unnecessary and we can take $S$ to be a real orthogonal matrix whose first two columns are $u$ and $v$.

There is also a real version of (2.3.3).
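Before stating that result, here is a minimal numerical sketch of the real quasi-triangular form (2.3.5) guaranteed by (2.3.4). It uses scipy's real Schur decomposition as a stand-in for the deflation argument above; the random matrix, seed, and size are arbitrary illustrative choices, not from the text.

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(5)
A = rng.standard_normal((5, 5))   # a real matrix, typically with at least
                                  # one conjugate pair of non-real eigenvalues

# Real Schur form: A = Q T Q^T with Q real orthogonal and T block upper
# triangular, with 1-by-1 and 2-by-2 diagonal blocks as in (2.3.5).
T, Q = schur(A, output='real')

print(np.allclose(Q @ T @ Q.T, A))
print(np.allclose(Q.T @ Q, np.eye(5)))
print(np.round(T, 2))   # inspect the 1-by-1 and 2-by-2 diagonal blocks
```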

2.3.6 Theorem. Let $\mathcal{F} \subseteq M_n(\mathbb{R})$ be a commuting family. There is a real orthogonal $Q \in M_n(\mathbb{R})$ such that $Q^TAQ$ has the form (2.3.5) for every $A \in \mathcal{F}$.

Exercise. Modify the proof of (2.3.3) to prove (2.3.6) as follows: First deflate all members of $\mathcal{F}$ using all the common real eigenvectors. Then consider the common non-real eigenvectors and deflate two columns at a time as in the proof of (2.3.4). Notice that different members of $\mathcal{F}$ may have different numbers of 2-by-2 diagonal blocks after the common real orthogonal similarity, but if one member has a 2-by-2 block in a certain position and another member does not, then commutativity requires that the latter must have a pair of equal 1-by-1 blocks there.

Problems

1. Let $x \in \mathbb{C}^n$ be a given unit vector and write $x = [x_1\ y^T]^T$, in which $x_1 \in \mathbb{C}$ and $y \in \mathbb{C}^{n-1}$. Choose $\theta \in \mathbb{R}$ such that $e^{i\theta}x_1 \ge 0$ and define $z = e^{i\theta}x = [z_1\ \zeta^T]^T$, in which $z_1 \in \mathbb{R}$ is nonnegative and $\zeta \in \mathbb{C}^{n-1}$. Consider the Hermitian matrix
$$V_x = \begin{bmatrix} z_1 & \zeta^* \\ \zeta & -I + (1 + z_1)^{-1}\zeta\zeta^* \end{bmatrix}$$
Use partitioned multiplication to compute $V_x^*V_x = V_x^2$. Conclude that $U = e^{-i\theta}V_x = [x\ u_2\ \ldots\ u_n]$ is a unitary matrix whose first column is the given vector $x$.

2. If $x \in \mathbb{R}^n$ is a given unit vector, show how to streamline the construction described in Problem 1 to produce a real orthogonal matrix $Q \in M_n(\mathbb{R})$ whose first column is $x$. Prove that your construction works.

3. Let $A \in M_n(\mathbb{R})$. Explain why the non-real eigenvalues of $A$ (if any) must occur in conjugate pairs.

4. Consider the family $\mathcal{F} = \{(\ldots), (\ldots)\}$ and show that the hypothesis of commutativity in (2.3.3), while sufficient to imply simultaneous unitary upper triangularizability of $\mathcal{F}$, is not necessary.

5. Let $\mathcal{F} = \{A_1, \ldots, A_k\} \subset M_n$ be a given family, and let $\mathcal{G} = \{A_iA_j : i, j = 1, 2, \ldots, k\}$ be the family of all pairwise products of matrices in $\mathcal{F}$. If $\mathcal{G}$ is commutative, it is known that $\mathcal{F}$ can be simultaneously unitarily upper triangularized if and only if every eigenvalue of every commutator $A_iA_j - A_jA_i$ is zero. Show that assuming commutativity of $\mathcal{G}$ is a weaker hypothesis than assuming commutativity of $\mathcal{F}$. Show that the family $\mathcal{F}$ in Problem 4


More information

Solving Continuous Linear Least-Squares Problems by Iterated Projection

Solving Continuous Linear Least-Squares Problems by Iterated Projection Solving Continuous Linear Least-Squares Problems by Iterated Projection by Ral Juengling Department o Computer Science, Portland State University PO Box 75 Portland, OR 977 USA Email: juenglin@cs.pdx.edu

More information

Combining functions: algebraic methods

Combining functions: algebraic methods Combining functions: algebraic metods Functions can be added, subtracted, multiplied, divided, and raised to a power, just like numbers or algebra expressions. If f(x) = x 2 and g(x) = x + 2, clearly f(x)

More information

1. Questions (a) through (e) refer to the graph of the function f given below. (A) 0 (B) 1 (C) 2 (D) 4 (E) does not exist

1. Questions (a) through (e) refer to the graph of the function f given below. (A) 0 (B) 1 (C) 2 (D) 4 (E) does not exist Mat 1120 Calculus Test 2. October 18, 2001 Your name Te multiple coice problems count 4 points eac. In te multiple coice section, circle te correct coice (or coices). You must sow your work on te oter

More information

4. The slope of the line 2x 7y = 8 is (a) 2/7 (b) 7/2 (c) 2 (d) 2/7 (e) None of these.

4. The slope of the line 2x 7y = 8 is (a) 2/7 (b) 7/2 (c) 2 (d) 2/7 (e) None of these. Mat 11. Test Form N Fall 016 Name. Instructions. Te first eleven problems are wort points eac. Te last six problems are wort 5 points eac. For te last six problems, you must use relevant metods of algebra

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

Convexity and Smoothness

Convexity and Smoothness Capter 4 Convexity and Smootness 4.1 Strict Convexity, Smootness, and Gateaux Differentiablity Definition 4.1.1. Let X be a Banac space wit a norm denoted by. A map f : X \{0} X \{0}, f f x is called a

More information

Quaternion Dynamics, Part 1 Functions, Derivatives, and Integrals. Gary D. Simpson. rev 01 Aug 08, 2016.

Quaternion Dynamics, Part 1 Functions, Derivatives, and Integrals. Gary D. Simpson. rev 01 Aug 08, 2016. Quaternion Dynamics, Part 1 Functions, Derivatives, and Integrals Gary D. Simpson gsim1887@aol.com rev 1 Aug 8, 216 Summary Definitions are presented for "quaternion functions" of a quaternion. Polynomial

More information

5 Ordinary Differential Equations: Finite Difference Methods for Boundary Problems

5 Ordinary Differential Equations: Finite Difference Methods for Boundary Problems 5 Ordinary Differential Equations: Finite Difference Metods for Boundary Problems Read sections 10.1, 10.2, 10.4 Review questions 10.1 10.4, 10.8 10.9, 10.13 5.1 Introduction In te previous capters we

More information

Math 212-Lecture 9. For a single-variable function z = f(x), the derivative is f (x) = lim h 0

Math 212-Lecture 9. For a single-variable function z = f(x), the derivative is f (x) = lim h 0 3.4: Partial Derivatives Definition Mat 22-Lecture 9 For a single-variable function z = f(x), te derivative is f (x) = lim 0 f(x+) f(x). For a function z = f(x, y) of two variables, to define te derivatives,

More information

ALGEBRA AND TRIGONOMETRY REVIEW by Dr TEBOU, FIU. A. Fundamental identities Throughout this section, a and b denotes arbitrary real numbers.

ALGEBRA AND TRIGONOMETRY REVIEW by Dr TEBOU, FIU. A. Fundamental identities Throughout this section, a and b denotes arbitrary real numbers. ALGEBRA AND TRIGONOMETRY REVIEW by Dr TEBOU, FIU A. Fundamental identities Trougout tis section, a and b denotes arbitrary real numbers. i) Square of a sum: (a+b) =a +ab+b ii) Square of a difference: (a-b)

More information

The complex exponential function

The complex exponential function Capter Te complex exponential function Tis is a very important function!. Te series For any z C, we define: exp(z) := n! zn = + z + z2 2 + z3 6 + z4 24 + On te closed disk D(0,R) := {z C z R}, one as n!

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

5.1 We will begin this section with the definition of a rational expression. We

5.1 We will begin this section with the definition of a rational expression. We Basic Properties and Reducing to Lowest Terms 5.1 We will begin tis section wit te definition of a rational epression. We will ten state te two basic properties associated wit rational epressions and go

More information

1 The concept of limits (p.217 p.229, p.242 p.249, p.255 p.256) 1.1 Limits Consider the function determined by the formula 3. x since at this point

1 The concept of limits (p.217 p.229, p.242 p.249, p.255 p.256) 1.1 Limits Consider the function determined by the formula 3. x since at this point MA00 Capter 6 Calculus and Basic Linear Algebra I Limits, Continuity and Differentiability Te concept of its (p.7 p.9, p.4 p.49, p.55 p.56). Limits Consider te function determined by te formula f Note

More information

2011 Fermat Contest (Grade 11)

2011 Fermat Contest (Grade 11) Te CENTRE for EDUCATION in MATHEMATICS and COMPUTING 011 Fermat Contest (Grade 11) Tursday, February 4, 011 Solutions 010 Centre for Education in Matematics and Computing 011 Fermat Contest Solutions Page

More information

MATH745 Fall MATH745 Fall

MATH745 Fall MATH745 Fall MATH745 Fall 5 MATH745 Fall 5 INTRODUCTION WELCOME TO MATH 745 TOPICS IN NUMERICAL ANALYSIS Instructor: Dr Bartosz Protas Department of Matematics & Statistics Email: bprotas@mcmasterca Office HH 36, Ext

More information

Consider a function f we ll specify which assumptions we need to make about it in a minute. Let us reformulate the integral. 1 f(x) dx.

Consider a function f we ll specify which assumptions we need to make about it in a minute. Let us reformulate the integral. 1 f(x) dx. Capter 2 Integrals as sums and derivatives as differences We now switc to te simplest metods for integrating or differentiating a function from its function samples. A careful study of Taylor expansions

More information

. If lim. x 2 x 1. f(x+h) f(x)

. If lim. x 2 x 1. f(x+h) f(x) Review of Differential Calculus Wen te value of one variable y is uniquely determined by te value of anoter variable x, ten te relationsip between x and y is described by a function f tat assigns a value

More information

THE IMPLICIT FUNCTION THEOREM

THE IMPLICIT FUNCTION THEOREM THE IMPLICIT FUNCTION THEOREM ALEXANDRU ALEMAN 1. Motivation and statement We want to understand a general situation wic occurs in almost any area wic uses matematics. Suppose we are given number of equations

More information

Matrix Theory. A.Holst, V.Ufnarovski

Matrix Theory. A.Holst, V.Ufnarovski Matrix Theory AHolst, VUfnarovski 55 HINTS AND ANSWERS 9 55 Hints and answers There are two different approaches In the first one write A as a block of rows and note that in B = E ij A all rows different

More information

Convexity and Smoothness

Convexity and Smoothness Capter 4 Convexity and Smootness 4. Strict Convexity, Smootness, and Gateaux Di erentiablity Definition 4... Let X be a Banac space wit a norm denoted by k k. A map f : X \{0}!X \{0}, f 7! f x is called

More information

Continuity and Differentiability Worksheet

Continuity and Differentiability Worksheet Continuity and Differentiability Workseet (Be sure tat you can also do te grapical eercises from te tet- Tese were not included below! Typical problems are like problems -3, p. 6; -3, p. 7; 33-34, p. 7;

More information

Math Linear Algebra II. 1. Inner Products and Norms

Math Linear Algebra II. 1. Inner Products and Norms Math 342 - Linear Algebra II Notes 1. Inner Products and Norms One knows from a basic introduction to vectors in R n Math 254 at OSU) that the length of a vector x = x 1 x 2... x n ) T R n, denoted x,

More information

ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS. 1. Linear Equations and Matrices

ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS. 1. Linear Equations and Matrices ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS KOLMAN & HILL NOTES BY OTTO MUTZBAUER 11 Systems of Linear Equations 1 Linear Equations and Matrices Numbers in our context are either real numbers or complex

More information

2.11 That s So Derivative

2.11 That s So Derivative 2.11 Tat s So Derivative Introduction to Differential Calculus Just as one defines instantaneous velocity in terms of average velocity, we now define te instantaneous rate of cange of a function at a point

More information

MVT and Rolle s Theorem

MVT and Rolle s Theorem AP Calculus CHAPTER 4 WORKSHEET APPLICATIONS OF DIFFERENTIATION MVT and Rolle s Teorem Name Seat # Date UNLESS INDICATED, DO NOT USE YOUR CALCULATOR FOR ANY OF THESE QUESTIONS In problems 1 and, state

More information

MAT 145. Type of Calculator Used TI-89 Titanium 100 points Score 100 possible points

MAT 145. Type of Calculator Used TI-89 Titanium 100 points Score 100 possible points MAT 15 Test #2 Name Solution Guide Type of Calculator Used TI-89 Titanium 100 points Score 100 possible points Use te grap of a function sown ere as you respond to questions 1 to 8. 1. lim f (x) 0 2. lim

More information

Solution. Solution. f (x) = (cos x)2 cos(2x) 2 sin(2x) 2 cos x ( sin x) (cos x) 4. f (π/4) = ( 2/2) ( 2/2) ( 2/2) ( 2/2) 4.

Solution. Solution. f (x) = (cos x)2 cos(2x) 2 sin(2x) 2 cos x ( sin x) (cos x) 4. f (π/4) = ( 2/2) ( 2/2) ( 2/2) ( 2/2) 4. December 09, 20 Calculus PracticeTest s Name: (4 points) Find te absolute extrema of f(x) = x 3 0 on te interval [0, 4] Te derivative of f(x) is f (x) = 3x 2, wic is zero only at x = 0 Tus we only need

More information

Chapter 1D - Rational Expressions

Chapter 1D - Rational Expressions - Capter 1D Capter 1D - Rational Expressions Definition of a Rational Expression A rational expression is te quotient of two polynomials. (Recall: A function px is a polynomial in x of degree n, if tere

More information

7.1 Using Antiderivatives to find Area

7.1 Using Antiderivatives to find Area 7.1 Using Antiderivatives to find Area Introduction finding te area under te grap of a nonnegative, continuous function f In tis section a formula is obtained for finding te area of te region bounded between

More information

Quantum Mechanics Chapter 1.5: An illustration using measurements of particle spin.

Quantum Mechanics Chapter 1.5: An illustration using measurements of particle spin. I Introduction. Quantum Mecanics Capter.5: An illustration using measurements of particle spin. Quantum mecanics is a teory of pysics tat as been very successful in explaining and predicting many pysical

More information

ELA

ELA Electronic Journal of Linear Algebra ISSN 181-81 A publication of te International Linear Algebra Society ttp://mat.tecnion.ac.il/iic/ela RANK AND INERTIA OF SUBMATRICES OF THE MOORE PENROSE INVERSE OF

More information

Functions of the Complex Variable z

Functions of the Complex Variable z Capter 2 Functions of te Complex Variable z Introduction We wis to examine te notion of a function of z were z is a complex variable. To be sure, a complex variable can be viewed as noting but a pair of

More information

PROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS

PROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS PROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS Abstract. We present elementary proofs of the Cauchy-Binet Theorem on determinants and of the fact that the eigenvalues of a matrix

More information

Copyright c 2008 Kevin Long

Copyright c 2008 Kevin Long Lecture 4 Numerical solution of initial value problems Te metods you ve learned so far ave obtained closed-form solutions to initial value problems. A closedform solution is an explicit algebriac formula

More information

Some inequalities for sum and product of positive semide nite matrices

Some inequalities for sum and product of positive semide nite matrices Linear Algebra and its Applications 293 (1999) 39±49 www.elsevier.com/locate/laa Some inequalities for sum and product of positive semide nite matrices Bo-Ying Wang a,1,2, Bo-Yan Xi a, Fuzhen Zhang b,

More information

Section 15.6 Directional Derivatives and the Gradient Vector

Section 15.6 Directional Derivatives and the Gradient Vector Section 15.6 Directional Derivatives and te Gradient Vector Finding rates of cange in different directions Recall tat wen we first started considering derivatives of functions of more tan one variable,

More information

Chapter 3 Least Squares Solution of y = A x 3.1 Introduction We turn to a problem that is dual to the overconstrained estimation problems considered s

Chapter 3 Least Squares Solution of y = A x 3.1 Introduction We turn to a problem that is dual to the overconstrained estimation problems considered s Lectures on Dynamic Systems and Control Mohammed Dahleh Munther A. Dahleh George Verghese Department of Electrical Engineering and Computer Science Massachuasetts Institute of Technology 1 1 c Chapter

More information

Gradient Descent etc.

Gradient Descent etc. 1 Gradient Descent etc EE 13: Networked estimation and control Prof Kan) I DERIVATIVE Consider f : R R x fx) Te derivative is defined as d fx) = lim dx fx + ) fx) Te cain rule states tat if d d f gx) )

More information

64 IX. The Exceptional Lie Algebras

64 IX. The Exceptional Lie Algebras 64 IX. Te Exceptional Lie Algebras IX. Te Exceptional Lie Algebras We ave displayed te four series of classical Lie algebras and teir Dynkin diagrams. How many more simple Lie algebras are tere? Surprisingly,

More information

The derivative function

The derivative function Roberto s Notes on Differential Calculus Capter : Definition of derivative Section Te derivative function Wat you need to know already: f is at a point on its grap and ow to compute it. Wat te derivative

More information

NUMERICAL DIFFERENTIATION. James T. Smith San Francisco State University. In calculus classes, you compute derivatives algebraically: for example,

NUMERICAL DIFFERENTIATION. James T. Smith San Francisco State University. In calculus classes, you compute derivatives algebraically: for example, NUMERICAL DIFFERENTIATION James T Smit San Francisco State University In calculus classes, you compute derivatives algebraically: for example, f( x) = x + x f ( x) = x x Tis tecnique requires your knowing

More information

The Complexity of Computing the MCD-Estimator

The Complexity of Computing the MCD-Estimator Te Complexity of Computing te MCD-Estimator Torsten Bernolt Lerstul Informatik 2 Universität Dortmund, Germany torstenbernolt@uni-dortmundde Paul Fiscer IMM, Danisc Tecnical University Kongens Lyngby,

More information

232 Calculus and Structures

232 Calculus and Structures 3 Calculus and Structures CHAPTER 17 JUSTIFICATION OF THE AREA AND SLOPE METHODS FOR EVALUATING BEAMS Calculus and Structures 33 Copyrigt Capter 17 JUSTIFICATION OF THE AREA AND SLOPE METHODS 17.1 THE

More information

Lecture notes on Quantum Computing. Chapter 1 Mathematical Background

Lecture notes on Quantum Computing. Chapter 1 Mathematical Background Lecture notes on Quantum Computing Chapter 1 Mathematical Background Vector states of a quantum system with n physical states are represented by unique vectors in C n, the set of n 1 column vectors 1 For

More information

Reflection Symmetries of q-bernoulli Polynomials

Reflection Symmetries of q-bernoulli Polynomials Journal of Nonlinear Matematical Pysics Volume 1, Supplement 1 005, 41 4 Birtday Issue Reflection Symmetries of q-bernoulli Polynomials Boris A KUPERSHMIDT Te University of Tennessee Space Institute Tullaoma,

More information

Notes on wavefunctions II: momentum wavefunctions

Notes on wavefunctions II: momentum wavefunctions Notes on wavefunctions II: momentum wavefunctions and uncertainty Te state of a particle at any time is described by a wavefunction ψ(x). Tese wavefunction must cange wit time, since we know tat particles

More information

Quantum Numbers and Rules

Quantum Numbers and Rules OpenStax-CNX module: m42614 1 Quantum Numbers and Rules OpenStax College Tis work is produced by OpenStax-CNX and licensed under te Creative Commons Attribution License 3.0 Abstract Dene quantum number.

More information

The Laplace equation, cylindrically or spherically symmetric case

The Laplace equation, cylindrically or spherically symmetric case Numerisce Metoden II, 7 4, und Übungen, 7 5 Course Notes, Summer Term 7 Some material and exercises Te Laplace equation, cylindrically or sperically symmetric case Electric and gravitational potential,

More information

Lecture 15. Interpolation II. 2 Piecewise polynomial interpolation Hermite splines

Lecture 15. Interpolation II. 2 Piecewise polynomial interpolation Hermite splines Lecture 5 Interpolation II Introduction In te previous lecture we focused primarily on polynomial interpolation of a set of n points. A difficulty we observed is tat wen n is large, our polynomial as to

More information

Strongly continuous semigroups

Strongly continuous semigroups Capter 2 Strongly continuous semigroups Te main application of te teory developed in tis capter is related to PDE systems. Tese systems can provide te strong continuity properties only. 2.1 Closed operators

More information

Material for Difference Quotient

Material for Difference Quotient Material for Difference Quotient Prepared by Stepanie Quintal, graduate student and Marvin Stick, professor Dept. of Matematical Sciences, UMass Lowell Summer 05 Preface Te following difference quotient

More information

158 Calculus and Structures

158 Calculus and Structures 58 Calculus and Structures CHAPTER PROPERTIES OF DERIVATIVES AND DIFFERENTIATION BY THE EASY WAY. Calculus and Structures 59 Copyrigt Capter PROPERTIES OF DERIVATIVES. INTRODUCTION In te last capter you

More information

Derivatives. if such a limit exists. In this case when such a limit exists, we say that the function f is differentiable.

Derivatives. if such a limit exists. In this case when such a limit exists, we say that the function f is differentiable. Derivatives 3. Derivatives Definition 3. Let f be a function an a < b be numbers. Te average rate of cange of f from a to b is f(b) f(a). b a Remark 3. Te average rate of cange of a function f from a to

More information

Notes on Multigrid Methods

Notes on Multigrid Methods Notes on Multigrid Metods Qingai Zang April, 17 Motivation of multigrids. Te convergence rates of classical iterative metod depend on te grid spacing, or problem size. In contrast, convergence rates of

More information

Chapter 2. Limits and Continuity 16( ) 16( 9) = = 001. Section 2.1 Rates of Change and Limits (pp ) Quick Review 2.1

Chapter 2. Limits and Continuity 16( ) 16( 9) = = 001. Section 2.1 Rates of Change and Limits (pp ) Quick Review 2.1 Capter Limits and Continuity Section. Rates of Cange and Limits (pp. 969) Quick Review..... f ( ) ( ) ( ) 0 ( ) f ( ) f ( ) sin π sin π 0 f ( ). < < < 6. < c c < < c 7. < < < < < 8. 9. 0. c < d d < c

More information

Rank and inertia of submatrices of the Moore- Penrose inverse of a Hermitian matrix

Rank and inertia of submatrices of the Moore- Penrose inverse of a Hermitian matrix Electronic Journal of Linear Algebra Volume 2 Volume 2 (21) Article 17 21 Rank and inertia of submatrices of te Moore- Penrose inverse of a Hermitian matrix Yongge Tian yongge.tian@gmail.com Follow tis

More information

Spectral inequalities and equalities involving products of matrices

Spectral inequalities and equalities involving products of matrices Spectral inequalities and equalities involving products of matrices Chi-Kwong Li 1 Department of Mathematics, College of William & Mary, Williamsburg, Virginia 23187 (ckli@math.wm.edu) Yiu-Tung Poon Department

More information

THE IDEA OF DIFFERENTIABILITY FOR FUNCTIONS OF SEVERAL VARIABLES Math 225

THE IDEA OF DIFFERENTIABILITY FOR FUNCTIONS OF SEVERAL VARIABLES Math 225 THE IDEA OF DIFFERENTIABILITY FOR FUNCTIONS OF SEVERAL VARIABLES Mat 225 As we ave seen, te definition of derivative for a Mat 111 function g : R R and for acurveγ : R E n are te same, except for interpretation:

More information

Analytic Functions. Differentiable Functions of a Complex Variable

Analytic Functions. Differentiable Functions of a Complex Variable Analytic Functions Differentiable Functions of a Complex Variable In tis capter, we sall generalize te ideas for polynomials power series of a complex variable we developed in te previous capter to general

More information

Crouzeix-Velte Decompositions and the Stokes Problem

Crouzeix-Velte Decompositions and the Stokes Problem Crouzeix-Velte Decompositions and te Stokes Problem PD Tesis Strauber Györgyi Eötvös Loránd University of Sciences, Insitute of Matematics, Matematical Doctoral Scool Director of te Doctoral Scool: Dr.

More information

Function Composition and Chain Rules

Function Composition and Chain Rules Function Composition and s James K. Peterson Department of Biological Sciences and Department of Matematical Sciences Clemson University Marc 8, 2017 Outline 1 Function Composition and Continuity 2 Function

More information

Click here to see an animation of the derivative

Click here to see an animation of the derivative Differentiation Massoud Malek Derivative Te concept of derivative is at te core of Calculus; It is a very powerful tool for understanding te beavior of matematical functions. It allows us to optimize functions,

More information

SECTION 3.2: DERIVATIVE FUNCTIONS and DIFFERENTIABILITY

SECTION 3.2: DERIVATIVE FUNCTIONS and DIFFERENTIABILITY (Section 3.2: Derivative Functions and Differentiability) 3.2.1 SECTION 3.2: DERIVATIVE FUNCTIONS and DIFFERENTIABILITY LEARNING OBJECTIVES Know, understand, and apply te Limit Definition of te Derivative

More information

Stationary Gaussian Markov Processes As Limits of Stationary Autoregressive Time Series

Stationary Gaussian Markov Processes As Limits of Stationary Autoregressive Time Series Stationary Gaussian Markov Processes As Limits of Stationary Autoregressive Time Series Lawrence D. Brown, Pilip A. Ernst, Larry Sepp, and Robert Wolpert August 27, 2015 Abstract We consider te class,

More information

Subdifferentials of convex functions

Subdifferentials of convex functions Subdifferentials of convex functions Jordan Bell jordan.bell@gmail.com Department of Matematics, University of Toronto April 21, 2014 Wenever we speak about a vector space in tis note we mean a vector

More information

EXERCISE SET 5.1. = (kx + kx + k, ky + ky + k ) = (kx + kx + 1, ky + ky + 1) = ((k + )x + 1, (k + )y + 1)

EXERCISE SET 5.1. = (kx + kx + k, ky + ky + k ) = (kx + kx + 1, ky + ky + 1) = ((k + )x + 1, (k + )y + 1) EXERCISE SET 5. 6. The pair (, 2) is in the set but the pair ( )(, 2) = (, 2) is not because the first component is negative; hence Axiom 6 fails. Axiom 5 also fails. 8. Axioms, 2, 3, 6, 9, and are easily

More information

GELFAND S PROOF OF WIENER S THEOREM

GELFAND S PROOF OF WIENER S THEOREM GELFAND S PROOF OF WIENER S THEOREM S. H. KULKARNI 1. Introduction Te following teorem was proved by te famous matematician Norbert Wiener. Wiener s proof can be found in is book [5]. Teorem 1.1. (Wiener

More information

Math 408 Advanced Linear Algebra

Math 408 Advanced Linear Algebra Math 408 Advanced Linear Algebra Chi-Kwong Li Chapter 4 Hermitian and symmetric matrices Basic properties Theorem Let A M n. The following are equivalent. Remark (a) A is Hermitian, i.e., A = A. (b) x

More information

160 Chapter 3: Differentiation

160 Chapter 3: Differentiation 3. Differentiation Rules 159 3. Differentiation Rules Tis section introuces a few rules tat allow us to ifferentiate a great variety of functions. By proving tese rules ere, we can ifferentiate functions

More information

Math Spring 2013 Solutions to Assignment # 3 Completion Date: Wednesday May 15, (1/z) 2 (1/z 1) 2 = lim

Math Spring 2013 Solutions to Assignment # 3 Completion Date: Wednesday May 15, (1/z) 2 (1/z 1) 2 = lim Mat 311 - Spring 013 Solutions to Assignment # 3 Completion Date: Wednesday May 15, 013 Question 1. [p 56, #10 (a)] 4z Use te teorem of Sec. 17 to sow tat z (z 1) = 4. We ave z 4z (z 1) = z 0 4 (1/z) (1/z

More information