Additive and multiplicative Lidskii with equalities


Seminario IAM, 26/10

1 Submajorization and log-majorization

Next we briefly describe majorization and log-majorization, two notions from matrix analysis that will be used throughout the paper. For a detailed exposition of these relations see [1].

Given $x,y\in\mathbb{R}^d$ we say that $x$ is submajorized by $y$, and write $x\prec_w y$, if
$$\sum_{i=1}^{k} x_i^\downarrow \;\le\; \sum_{i=1}^{k} y_i^\downarrow \qquad\text{for every } k\in\mathbb{I}_d\,.$$
If $x\prec_w y$ and $\operatorname{tr} x=\sum_{i=1}^{d} x_i=\sum_{i=1}^{d} y_i=\operatorname{tr} y$, then we say that $x$ is majorized by $y$, and write $x\prec y$. If the two vectors $x$ and $y$ have different sizes, we write $x\prec y$ if the extended vectors (obtained by completing with zeros so that they have the same size) satisfy the previous relation. On the other hand, we write $x\le y$ if $x_i\le y_i$ for every $i\in\mathbb{I}_d$. It is a standard exercise to show that
$$x\le y \;\implies\; x^\downarrow\le y^\downarrow \;\implies\; x\prec_w y\,.$$

Log-majorization between vectors in $\mathbb{R}_{\ge 0}^d$ is a multiplicative analogue of majorization in $\mathbb{R}^d$. Indeed, given $x,y\in\mathbb{R}_{\ge 0}^d$ we say that $x$ is log-majorized by $y$, denoted $x\prec_{\log} y$, if
$$\prod_{i=1}^{k} x_i^\downarrow \;\le\; \prod_{i=1}^{k} y_i^\downarrow \quad\text{for every } k\in\mathbb{I}_{d-1}\,, \qquad\text{and}\qquad \prod_{i=1}^{d} x_i=\prod_{i=1}^{d} y_i\,.$$
Our interest in log-majorization is also motivated by the relation of this notion with tracial inequalities for convex functions. It is known (see [1]) that if $x,y\in\mathbb{R}_{\ge 0}^d$ then $x\prec_{\log} y\implies x\prec_w y$. Hence, if $x,y\in\mathbb{R}_{\ge 0}^d$ are such that $x\prec_{\log} y$, then for every convex and increasing function $f:[0,\infty)\to\mathbb{R}$ we get that $\operatorname{tr} f(x)\le \operatorname{tr} f(y)$.
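As a quick numerical illustration (this sketch is ours, not part of the original notes), the following NumPy helpers check $\prec_w$, $\prec$ and $\prec_{\log}$ for a pair of vectors; the function names are invented for this example.

```python
import numpy as np

def submajorized(x, y, tol=1e-10):
    """x <_w y: compare partial sums of the decreasing rearrangements."""
    xs, ys = np.sort(x)[::-1], np.sort(y)[::-1]
    return bool(np.all(np.cumsum(xs) <= np.cumsum(ys) + tol))

def majorized(x, y, tol=1e-10):
    """x < y: submajorization plus equal totals (traces)."""
    return submajorized(x, y, tol) and abs(x.sum() - y.sum()) <= tol

def log_majorized(x, y, tol=1e-10):
    """x <_log y for nonnegative vectors: compare partial products."""
    xs, ys = np.sort(x)[::-1], np.sort(y)[::-1]
    px, py = np.cumprod(xs), np.cumprod(ys)
    return bool(np.all(px[:-1] <= py[:-1] + tol)) and abs(px[-1] - py[-1]) <= tol

x = np.array([3.0, 2.0, 1.0])
y = np.array([6.0, 1.0, 1.0])
# x is log-majorized by y, and therefore (as stated above) also submajorized by it.
print(log_majorized(x, y), submajorized(x, y))   # True True
```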

2 Additive

2.1 The classical theorem

Theorem 2.1 (Weyl's theorem). Let $A,B\in\mathcal{H}(d)$. Then
$$\lambda_j(A)+\lambda_d(B)\;\le\;\lambda_j(A+B)\;\le\;\lambda_j(A)+\lambda_1(B) \qquad\text{for every } j\in\mathbb{I}_d\,. \tag{1}$$
If one of the equalities holds, then there exists a unit vector $x$ such that $(A+B)\,x=\lambda_j(A+B)\,x$, $A\,x=\lambda_j(A)\,x$ and $B\,x=\lambda_1(B)\,x$ (or $B\,x=\lambda_d(B)\,x$, respectively).

Proof. Let $u_j$ and $v_j$ denote the eigenvectors of $A$ and $A+B$ respectively, corresponding to their eigenvalues arranged in decreasing order. Consider the subspaces $S=\operatorname{span}\{v_1,\dots,v_j\}$ and $T=\operatorname{span}\{u_j,\dots,u_d\}$, and let $x\in S\cap T$ be a unit vector (such a vector exists, since $\dim S+\dim T=d+1>d$). Then
$$\lambda_j(A+B)\;\le\;\langle (A+B)\,x,x\rangle=\langle A\,x,x\rangle+\langle B\,x,x\rangle\;\le\;\lambda_j(A)+\lambda_1(B)\,.$$
If we further assume that equality holds in (1), then we deduce that $\langle (A+B)\,x,x\rangle=\lambda_j(A+B)$, $\langle A\,x,x\rangle=\lambda_j(A)$ and $\langle B\,x,x\rangle=\lambda_1(B)$. Since $x\in S\cap T$, these last facts imply that $(A+B)\,x=\lambda_j(A+B)\,x$ and $A\,x=\lambda_j(A)\,x$. The fact that $\langle B\,x,x\rangle=\lambda_1(B)\implies B\,x=\lambda_1(B)\,x$ is known. The other equality case follows in a similar way.

Corollary 2.2 (Weyl's monotonicity principle). Let $A\in\mathcal{H}(d)$ and $B\in\mathcal{M}_d(\mathbb{C})^+$. Then
$$\lambda_j(A+B)\;\ge\;\lambda_j(A) \qquad\text{for every } j\in\mathbb{I}_d\,. \tag{2}$$
If there exists $J\subseteq\mathbb{I}_d$ such that $\lambda_j(A+B)=\lambda_j(A)$ for every $j\in J$, then there exists an orthonormal system $\{x_j\}_{j\in J}$ such that $A\,x_j=\lambda_j(A)\,x_j$ and $B\,x_j=0$ for every $j\in J$.

Proof. Inequality (2) follows easily from Thm. 2.1 (since $\lambda_d(B)\ge 0$). The second part follows by induction on $|J|$: fix $j_0\in J$. By Thm. 2.1 again, there exists a unit vector $x_{j_0}$ such that $A\,x_{j_0}=\lambda_{j_0}(A)\,x_{j_0}$ and $B\,x_{j_0}=\lambda_d(B)\,x_{j_0}=0$. This proves the case $|J|=1$. If $|J|>1$, consider the subspace $W=\{x_{j_0}\}^\perp\subset\mathbb{C}^d$, which reduces $A$, $B$ and $A+B$. Let $I=\{j: j\in J,\ j<j_0\}\cup\{j-1: j\in J,\ j>j_0\}$. The operators $A|_W\in L(W)_{sa}$ and $B|_W\in L(W)^+$ satisfy $\lambda_j(A|_W+B|_W)=\lambda_j(A|_W)$ for every $j\in I$, with $|I|=|J|-1$. By the inductive hypothesis we can find an orthonormal system $\{x_j\}_{j\in I}\subset W$ which satisfies the desired properties.

Theorem 2.3 (Another theorem of Weyl). Let $A$ and $B\in\mathcal{H}(n)$. Then
$$\lambda(A+B)\;\prec\;\lambda(A)+\lambda(B)\,. \tag{3}$$
It is important to note that, in the sum $\lambda(A)+\lambda(B)$, both vectors are assumed to be ordered in the same (decreasing) way.

Proof. It follows from Ky Fan's maximum principle, which states that if $A\in\mathcal{H}(n)$ then
$$\sum_{j=1}^{k}\lambda_j(A)=\max\;\sum_{j=1}^{k}\langle A\,x_j,x_j\rangle\,, \qquad\text{for every } k\in\mathbb{I}_n\,,$$
where the maximum is taken over all orthonormal $k$-tuples $\{x_1,\dots,x_k\}$ in $\mathbb{C}^n$.
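To see Theorem 2.1 and Theorem 2.3 at work, here is a small random test (again ours, not from the notes): it samples a Hermitian pair and verifies the eigenvalue bounds (1) and the majorization (3).

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_herm(d):
    X = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    return (X + X.conj().T) / 2

def eigs_desc(M):
    return np.sort(np.linalg.eigvalsh(M))[::-1]

d = 6
A, B = rand_herm(d), rand_herm(d)
lA, lB, lAB = eigs_desc(A), eigs_desc(B), eigs_desc(A + B)

# Weyl (1): lambda_j(A) + lambda_d(B) <= lambda_j(A+B) <= lambda_j(A) + lambda_1(B).
assert np.all(lA + lB[-1] <= lAB + 1e-9) and np.all(lAB <= lA + lB[0] + 1e-9)

# Theorem 2.3 (3): lambda(A+B) is majorized by lambda(A) + lambda(B).
sA, sB = np.cumsum(lAB), np.cumsum(lA + lB)
assert np.all(sA <= sB + 1e-9) and abs(sA[-1] - sB[-1]) <= 1e-9
print("Weyl's inequalities and the majorization (3) hold for this sample")
```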

Observe that (3) says that, if $A,B\in\mathcal{H}(n)$ and $k\in\mathbb{I}_n$, then
$$\sum_{j=1}^{k}\lambda_j(A+B)-\lambda_j(A)\;\le\;\sum_{j=1}^{k}\lambda_j(B)\,,$$
but this does not correctly imply that $\lambda(A+B)-\lambda(A)\prec\lambda(B)$, because the vector on the left-hand side is not in the correct (decreasing) order. The proper proof uses the other Weyl theorem:

Theorem 2.4 (Lidskii, with equality). Let $A,B\in\mathcal{H}(d)$. Then
$$\lambda(A+B)-\lambda(A)\;\prec\;\lambda(B)\,.$$
On the other hand, a possible equality forces commutation:
$$\big(\lambda(A+B)-\lambda(A)\big)^\downarrow=\lambda(B) \;\implies\; A \text{ and } B \text{ commute.} \tag{4}$$

Proof. We can assume that $B$ is not a multiple of the identity. Let $k\in\mathbb{I}_d$ be such that $\lambda_1(B)>\lambda_k(B)$, and denote $B_k=B-\lambda_k(B)\,I$. By construction $\lambda_k(B_k)=0$. Let $B_k^+$ be the positive part of $B_k$. Then
$$\lambda(B_k^+)=\big(\lambda_1(B_k),\dots,\lambda_{k-1}(B_k),\underbrace{0,\dots,0}_{d-k+1}\big)\,.$$
Since $B_k^+\in\mathcal{M}_d(\mathbb{C})^+$ and $B_k\le B_k^+$, Weyl's monotonicity principle implies that
$$\lambda_j(A+B_k)\le\lambda_j(A+B_k^+)\,,\ j\in\mathbb{I}_d \;\implies\; \sum_{j\in J_{k-1}}\lambda_j(A+B_k)\;\le\;\sum_{j\in J_{k-1}}\lambda_j(A+B_k^+)$$
for any $J_{k-1}\subseteq\mathbb{I}_n$ such that $|J_{k-1}|=k-1$. Hence
$$\sum_{j\in J_{k-1}}\lambda_j(A+B_k)-\lambda_j(A)\;\le\;\sum_{j\in J_{k-1}}\lambda_j(A+B_k^+)-\lambda_j(A) \tag{5}$$
$$\le\;\sum_{j\in\mathbb{I}_d}\lambda_j(A+B_k^+)-\lambda_j(A)\;=\;\operatorname{tr}(A+B_k^+)-\operatorname{tr}A\;=\;\sum_{j=1}^{k-1}\lambda_j(B_k)\,, \tag{6}$$
since $\lambda_j(A+B_k^+)\ge\lambda_j(A)$ for $j\in\mathbb{I}_d$, again by Weyl's monotonicity principle. Adding now $(k-1)$ times $\lambda_k(B)$ on each side, we get that
$$\sum_{j\in J_{k-1}}\lambda_j(A+B)-\lambda_j(A)\;\le\;\sum_{j=1}^{k-1}\lambda_j(B)$$
for every $J_{k-1}\subseteq\mathbb{I}_n$ with $|J_{k-1}|=k-1$. This shows that $\lambda(A+B)-\lambda(A)\prec\lambda(B)$.

Suppose now that there exists a permutation $\sigma\in S_d$ such that $\lambda_j(B)=\lambda_{\sigma(j)}(A+B)-\lambda_{\sigma(j)}(A)$ for every $j\in\mathbb{I}_d$. Then there exists an increasing sequence $\{J_k\}_{k=1}^{d}$ of subsets of $\mathbb{I}_d$ such that $|J_k|=k$ and
$$\sum_{j\in J_k}\lambda_j(A+B)-\lambda_j(A)\;=\;\sum_{j=1}^{k}\lambda_j(B) \qquad\text{for every } k\in\mathbb{I}_d\,, \tag{7}$$

and notice that Eq. (7) also holds if we replace $B$ by $B_k$. Moreover, the inequalities (5) and (6) above remain valid, but now they become equalities (for the appropriate sets $J_k$), which squeezes the middle terms. Hence, if $J_{k-1}^c=\mathbb{I}_d\setminus J_{k-1}$, we get that
$$\sum_{j\in J_{k-1}^c}\lambda_j(A+B_k^+)-\lambda_j(A)=0 \;\overset{\text{Weyl's}}{\implies}\; \lambda_j(A+B_k^+)=\lambda_j(A) \quad\text{for every } j\in J_{k-1}^c\,.$$
By Corollary 2.2 there exists an ONS $\{x_j\}_{j\in J_{k-1}^c}$ such that $A\,x_j=\lambda_j(A)\,x_j$ and $B_k^+\,x_j=0$ for every $j\in J_{k-1}^c$. All these facts together imply that
$$P_k\;\overset{\mathrm{def}}{=}\;\sum_{j\in J_{k-1}^c} x_j\otimes x_j=P_{\ker B_k^+} \quad\text{(since $\lambda_k(B_k)=0$ and $\dim\ker B_k^+=d-k+1=|J_{k-1}^c|$)}\,, \qquad\text{and}\qquad P_k\,A=A\,P_k\,.$$
Recall that $P_k$ is also the spectral projection of $B$ associated with the interval $(-\infty,\lambda_k(B)]$, for any $k\in\mathbb{I}_d$ such that $\lambda_1(B)>\lambda_k(B)$. Since the spectral projection of $B$ associated with $(-\infty,\lambda_1(B)]$ equals the identity operator, and $B$ is a linear combination of the projections $P_k$ and $I$, we conclude that $A$ and $B$ commute.

2.2 Characterization of optimal matching matrices

Fix $S_0\in\mathcal{H}(n)$. It is easy to see that Lidskii's theorem implies that if $S_1\in\mathcal{H}(n)$ then
$$\lambda(S_0)+\lambda^\uparrow(S_1)\;\prec\;\lambda(S_0+S_1)\,,$$
because $\lambda^\uparrow(S_1)=-\lambda(-S_1)$. In this section we characterize those matrices $S_1\in\mathcal{M}_d(\mathbb{C})^+$ such that
$$\lambda(S_0+S_1)=\big(\lambda(S_0)+\lambda^\uparrow(S_1)\big)^\downarrow\,. \tag{8}$$
If $S_1\in\mathcal{H}(n)$ satisfies Eq. (8), then we say that $S_1$ is an optimal matching matrix (OMM) for $S_0$.

Theorem 2.5. Let $S_0,S_1\in\mathcal{H}(d)$ be such that $\lambda(S_0+S_1)=\big(\lambda(S_0)+\lambda^\uparrow(S_1)\big)^\downarrow$. Then $S_0$ and $S_1$ commute.

Proof. Take $B=S_0+S_1$ and $A=-S_1$. Then $\lambda(A)=\lambda(-S_1)=-\lambda^\uparrow(S_1)$, so that
$$\big(\lambda(A+B)-\lambda(A)\big)^\downarrow=\big(\lambda(S_0)+\lambda^\uparrow(S_1)\big)^\downarrow=\lambda(B)\,.$$
Hence $A$ and $B$ satisfy the assumptions in (4) and they must commute. In this case $S_0$ and $S_1$ also commute.

Let $S_0\in\mathcal{M}_d(\mathbb{C})^+$ and let $S_1\in\mathcal{M}_d(\mathbb{C})^+$ be an OMM for $S_0$. By Theorem 2.5 there exists a common ONB of eigenvectors for $S_0$ and $S_1$. But the result is actually better:

Theorem 2.6 (Equality in Lidskii's inequality). Let $S_0\in\mathcal{M}_d(\mathbb{C})^+$ and let $S_1\in\mathcal{M}_d(\mathbb{C})^+$ be an OMM for $S_0$. Let $\lambda=\lambda(S_0)$ and $\mu=\lambda^\uparrow(S_1)$. Then there exists an ONB $\{v_i: i\in\mathbb{I}_d\}$ for $S_0$ and $\lambda$ (that is, $S_0\,v_i=\lambda_i\,v_i$ for every $i\in\mathbb{I}_d$) such that
$$S_1=\sum_{i\in\mathbb{I}_d}\mu_i\;v_i\otimes v_i \qquad\text{and}\qquad S_0+S_1=\sum_{i\in\mathbb{I}_d}(\lambda_i+\mu_i)\;v_i\otimes v_i\,.$$
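The content of Theorem 2.6 (and of Eq. (8)) can be illustrated with the following sketch (ours): pairing the largest eigenvalues of $S_0$ with the smallest eigenvalues of $S_1$ in a common eigenbasis produces an OMM.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5

Q, _ = np.linalg.qr(rng.standard_normal((d, d)))   # common orthonormal eigenbasis
lam = np.sort(rng.uniform(1, 10, d))[::-1]         # lambda = lambda(S0), decreasing
mu = np.sort(rng.uniform(1, 10, d))                # mu = lambda^(up)(S1), increasing

S0 = Q @ np.diag(lam) @ Q.T
S1 = Q @ np.diag(mu) @ Q.T      # largest lambda_i paired with smallest mu_i

lhs = np.sort(np.linalg.eigvalsh(S0 + S1))[::-1]
rhs = np.sort(lam + mu)[::-1]   # (lambda(S0) + lambda^(up)(S1)) rearranged decreasingly

print(np.allclose(lhs, rhs))    # True: Eq. (8) holds, so S1 is an OMM for S0
```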

In order to give a proof we first consider some technical results. We begin by fixing some notation. Let $\lambda\in(\mathbb{R}_{>0})^d$. For every $j\in\mathbb{I}_d$ we define the set
$$L(\lambda,j)=\{\,i\in\mathbb{I}_d:\ \lambda_i=\lambda_j\,\}\,.$$
If we assume that $\lambda=\lambda^\downarrow$ or $\lambda=\lambda^\uparrow$, then the sets $L(\lambda,j)$ are formed by consecutive integers. In the first case we have that $\lambda_i<\lambda_j\implies k>l$ for every $k\in L(\lambda,i)$ and $l\in L(\lambda,j)$. Given a permutation $\sigma\in S_d$ and $\lambda\in(\mathbb{R}_{>0})^d$ we denote $\lambda_\sigma=(\lambda_{\sigma(1)},\dots,\lambda_{\sigma(d)})$. Observe that
$$\lambda=\lambda_\sigma \iff \lambda=\lambda_{\sigma^{-1}} \iff \sigma\big(L(\lambda,j)\big)=L(\lambda,j) \ \text{ for every } j\in\mathbb{I}_d\,. \tag{9}$$

The following inequality is well known (see for example [1, II.5.15]):

Proposition 2.7 (Rearrangement inequality for products of sums). Let $\lambda,\mu\in(\mathbb{R}_{>0})^d$ be such that $\lambda=\lambda^\downarrow$ and $\mu=\mu^\uparrow$. Then
$$\prod_{i=1}^{d}(\lambda_i+\mu_i)\;\ge\;\prod_{i=1}^{d}(\lambda_i+\mu_{\sigma(i)}) \qquad\text{for every permutation } \sigma\in S_d\,.$$

The following result deals with the case of equality in the last inequality.

Proposition 2.8. Let $\lambda,\mu\in(\mathbb{R}_{>0})^d$ be such that $\lambda=\lambda^\downarrow$ and $\mu=\mu^\uparrow$. Let $\sigma\in S_d$ be such that
$$(\lambda+\mu)^\downarrow=(\lambda+\mu_\sigma)^\downarrow\,.$$
Moreover, assume that $\sigma$ also satisfies that:
$$\text{if } r,s\in\mathbb{I}_d \text{ are such that } \mu_{\sigma(r)}=\mu_{\sigma(s)} \text{ with } \sigma(r)<\sigma(s)\,, \text{ then } r<s\,. \tag{10}$$
Then the permutation $\sigma$ satisfies $\lambda=\lambda_\sigma$.

Proof. For every $\tau\in S_d$ let $F(\tau)=\prod_{i=1}^{d}(\lambda_i+\mu_{\tau(i)})$. By the hypothesis and Proposition 2.7,
$$F(\sigma)=F(\mathrm{id})=\max_{\tau\in S_d}F(\tau)\,.$$
Assume that $\lambda\neq\lambda_{\sigma^{-1}}$. In this case there exist $j,k\in\mathbb{I}_d$ such that
$$\mu_j<\mu_k \qquad\text{and}\qquad \lambda_{\sigma^{-1}(j)}<\lambda_{\sigma^{-1}(k)}\,. \tag{11}$$
Indeed, let $j_0$ be the smallest index such that $\sigma^{-1}$ does not restrict to a permutation of $L(\lambda,j_0)$. Then there exists $j\in L(\lambda,j_0)$ such that $\sigma^{-1}(j)\notin L(\lambda,j_0)$. As $\sigma^{-1}\big(L(\lambda,j_0)\setminus\{j\}\big)\subsetneq L(\lambda,j_0)$, there also exists $k\notin L(\lambda,j_0)$ such that $\sigma^{-1}(k)\in L(\lambda,j_0)$. These indices have the required properties. First note that $\lambda_{\sigma^{-1}(j)}<\lambda_{j_0}=\lambda_{\sigma^{-1}(k)}$ (and then also $\sigma^{-1}(j)>\sigma^{-1}(k)$), because $\sigma^{-1}(j)$ can be neither in $L(\lambda,j_0)$ nor in $L(\lambda,r)$ for any $r<j_0$ (where $\sigma^{-1}$ acts as a permutation). A similar argument shows that $j<k$. In both cases we have used that the sets $L(\lambda,j)$ are formed by consecutive integers, since the vector $\lambda$ is decreasingly ordered. Observe that $j<k\implies\mu_j\le\mu_k$, so it suffices to show that $\mu_j\neq\mu_k$. Let us denote $r=\sigma^{-1}(j)$ and $s=\sigma^{-1}(k)$. The previous items show that $r>s$ and $\sigma(r)<\sigma(s)$. Hence the equality $\mu_j=\mu_{\sigma(r)}=\mu_{\sigma(s)}=\mu_k$ is forbidden by our hypothesis (10). So Eq. (11) is proved.

Consider now the permutation $\tau=(j\,k)\circ\sigma$, where $(j\,k)$ stands for the transposition of the indices $j$ and $k$. Straightforward computations show that
$$\big(\lambda_{\sigma^{-1}(j)}+\mu_j\big)\big(\lambda_{\sigma^{-1}(k)}+\mu_k\big)-\big(\lambda_{\sigma^{-1}(j)}+\mu_k\big)\big(\lambda_{\sigma^{-1}(k)}+\mu_j\big)=\big(\lambda_{\sigma^{-1}(j)}-\lambda_{\sigma^{-1}(k)}\big)\big(\mu_k-\mu_j\big)\;\overset{(11)}{<}\;0\,.$$
From the previous inequality we conclude that $F(\mathrm{id})=F(\sigma)<F(\tau)\le F(\mathrm{id})$. This contradiction arises from the assumption $\lambda\neq\lambda_{\sigma^{-1}}$. Therefore $\lambda=\lambda_{\sigma^{-1}}\overset{(9)}{=}\lambda_\sigma$, as desired.
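A brute-force check of Proposition 2.7 (and of the rigidity behind Proposition 2.8) for a small example with ties in both vectors; the code is ours.

```python
import numpy as np
from itertools import permutations

lam = np.array([5.0, 3.0, 3.0, 1.0])   # decreasing
mu = np.array([0.5, 2.0, 2.0, 4.0])    # increasing

def F(tau):
    return float(np.prod(lam + mu[list(tau)]))

best = F(range(4))
# Proposition 2.7: the identity pairing maximizes the product of sums.
assert all(F(tau) <= best + 1e-9 for tau in permutations(range(4)))

# Proposition 2.8: a maximizer can only move mass inside blocks of tied entries;
# e.g. swapping the two (equal) middle values of mu leaves the product unchanged.
print(F((0, 2, 1, 3)) == best)   # True
```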

Remark 2.9. Let $\lambda,\mu\in(\mathbb{R}_{>0})^d$ be such that $\lambda=\lambda^\downarrow$ and $\mu=\mu^\uparrow$. Let $\tau\in S_d$ be such that $(\lambda+\mu)^\downarrow=(\lambda+\mu_\tau)^\downarrow$. Then, by considering convenient permutations within the sets $L(\mu,j)$, we can always replace $\tau$ by a $\sigma\in S_d$ in such a way that $\mu_\sigma=\mu_\tau$ and this $\sigma$ satisfies condition (10) of Proposition 2.8. Hence, in this case $(\lambda+\mu)^\downarrow=(\lambda+\mu_\sigma)^\downarrow$ and the previous result applies.

We can now give the proof. We restate the theorem for convenience:

Theorem 2.6. Let $S_0\in\mathcal{M}_d(\mathbb{C})^+$ and let $S_1\in\mathcal{M}_d(\mathbb{C})^+$ be an optimal matching matrix for $S_0$. Let $\lambda=\lambda(S_0)$ and $\mu=\lambda^\uparrow(S_1)$. Then there exists an ONB $\{v_i: i\in\mathbb{I}_d\}$ for $S_0$ and $\lambda$ such that
$$S_1=\sum_{i\in\mathbb{I}_d}\mu_i\;v_i\otimes v_i \qquad\text{and}\qquad S_0+S_1=\sum_{i\in\mathbb{I}_d}(\lambda_i+\mu_i)\;v_i\otimes v_i\,. \tag{12}$$

Proof. Let us first assume that $S_0$ and $S_1$ are invertible matrices, so that $\lambda,\mu\in(\mathbb{R}_{>0})^d$. By Theorem 2.5 we see that $S_0$ and $S_1$ commute. Then there exists an ONB $\mathcal{B}=\{w_i: i\in\mathbb{I}_d\}$ for $S_0$ and $\lambda$ such that $S_1\,w_i=\mu_{\tau(i)}\,w_i$ for every $i\in\mathbb{I}_d$, for some permutation $\tau\in S_d$. Therefore
$$(\lambda+\mu)^\downarrow\;\overset{(8)}{=}\;\lambda(S_0+S_1)\;=\;(\lambda+\mu_\tau)^\downarrow\,.$$
By Remark 2.9 we can replace $\tau$ by $\sigma\in S_d$ in such a way that $\mu_\tau=\mu_\sigma$, $(\lambda+\mu)^\downarrow=(\lambda+\mu_\sigma)^\downarrow$ and $\sigma$ satisfies the hypothesis (10). Hence, by Proposition 2.8, we deduce that $\lambda_{\sigma^{-1}}=\lambda$. Therefore one easily checks that the ONB formed by the vectors $v_i=w_{\sigma^{-1}(i)}$, for $i\in\mathbb{I}_d$ (i.e. the rearrangement $\mathcal{B}_{\sigma^{-1}}$ of $\mathcal{B}$), is still an ONB for $S_0$ and $\lambda$, but it now satisfies Eq. (12).

In case $S_0$ or $S_1$ is not invertible, we can argue as above with the matrices $S_0'=S_0+I$ and $S_1'=S_1+I$. These matrices are invertible and $S_1'$ is an optimal matching matrix for $S_0'$. Further, $\lambda(S_0')=\lambda(S_0)+1$ and $\lambda(S_1')=\lambda(S_1)+1$. Hence, if $\{v_i: i\in\mathbb{I}_d\}$ has the desired properties for $S_0'$ and $S_1'$, then this ONB also has the desired properties for $S_0$ and $S_1$.

3 Multiplicative

3.1 The Li-Mathias theorem

We begin by revisiting the following well-known inequality from matrix theory. Our interest lies in the case of equality.

Proposition 3.1 (Ostrowski's inequality). Let $S\in\mathcal{H}(d)$ and let $V\in\mathcal{M}_d(\mathbb{C})$. Then
$$V^*V\ge I \;\implies\; \lambda_i(S)\le\lambda_i(V^*SV) \qquad\text{for every } i\in\mathbb{I}_d\,. \tag{13}$$
Moreover, if there exists $J\subseteq\mathbb{I}_d$ such that $\lambda_i(S)=\lambda_i(V^*SV)$ for $i\in J$, then there exists an o.n.s. $\{v_i\}_{i\in J}\subset\mathbb{C}^d$ such that $V\,v_i=v_i$ and $S\,v_i=\lambda_i(S)\,v_i$ for $i\in J$.

Proof. Fix $i\in\mathbb{I}_d$ and notice that, by Sylvester's law of inertia, $\lambda_i\big(V^*(S-\lambda_i(S)\,I)V\big)=0$, since $\lambda_i\big(S-\lambda_i(S)\,I\big)=0$. By Weyl's inequalities we have that
$$0=\lambda_i\big(V^*(S-\lambda_i(S)\,I)V\big)\;\le\;\lambda_i(V^*SV)+\lambda_1\big(-\lambda_i(S)\,V^*V\big)=\lambda_i(V^*SV)-\lambda_i(S)\,\lambda_d(V^*V)\,, \tag{14}$$
which clearly shows Eq. (13), since $\lambda_d(V^*V)\ge 1$.
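The following sketch (ours) tests Eq. (13) numerically. To avoid the sign caveat for negative eigenvalues, $S$ is taken positive semidefinite, and $V$ is rescaled so that its smallest singular value equals $1$, i.e. $V^*V\ge I$.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 6

# Random positive semidefinite S (so every lambda_i(S) >= 0).
X = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
S = X @ X.conj().T

# Random V normalized so that its smallest singular value is 1, i.e. V*V >= I.
A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
V = A / np.linalg.svd(A, compute_uv=False).min()

lS = np.sort(np.linalg.eigvalsh(S))[::-1]
lVSV = np.sort(np.linalg.eigvalsh(V.conj().T @ S @ V))[::-1]

# Eq. (13): lambda_i(S) <= lambda_i(V*SV) for every i.
print(np.all(lS <= lVSV + 1e-9))   # True
```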

The following inequalities are the multiplicative version of Lidskii's inequality:

Theorem 3.2 ([4]). Let $S\in\mathcal{H}(d)$ and $V\in\mathcal{G}l(d)$. Let $J\subseteq\mathbb{I}_d$, $|J|=k$, be such that $\lambda_i(S)>0$ for $i\in J$. Then we have that
$$\prod_{i=d-k+1}^{d}\lambda_i(V^*V)\;\le\;\prod_{i\in J}\frac{\lambda_i(V^*SV)}{\lambda_i(S)}\;\le\;\prod_{i=1}^{k}\lambda_i(V^*V)\,. \tag{15}$$

Proof. We can assume that $V$ is not a multiple of the identity and that $V\in\mathcal{G}l(d)^+$ (otherwise one works with $|V|$ and the argument goes through). Let $V=U\,D_\lambda\,U^*$, where $U\in\mathcal{U}(d)$ and $\lambda=\lambda(V)$. Let $2\le k\le d$ and denote $V_k=\lambda_k^{-1}\,V$. In this case $\lambda_i(V_k)=\lambda_i(V)/\lambda_k(V)$ for every $i\in\mathbb{I}_d$; in particular, $\lambda_k(V_k)=1$. We now consider $B_k=U\,D_\mu\,U^*$ for
$$\mu=\big(\lambda_1(V_k),\dots,\lambda_{k-1}(V_k),1,\dots,1\big)\in(\mathbb{R}_{>0})^d\,.$$
Note that $B_k\ge I$ and $V_k^{-1}B_k=B_k V_k^{-1}\ge I$. Let $J_{k-1}\subseteq\mathbb{I}_d$ be such that $|J_{k-1}|=k-1$. Then, by Ostrowski's inequality we get that
$$\prod_{i\in J_{k-1}}\lambda_i(V_k S V_k)\;\le\;\prod_{i\in J_{k-1}}\lambda_i\big((B_k V_k^{-1})(V_k S V_k)(V_k^{-1}B_k)\big)=\prod_{i\in J_{k-1}}\lambda_i(B_k S B_k)\,.$$
Using Ostrowski's inequality again, we see that $\lambda_i(B_k S B_k)/\lambda_i(S)\ge 1$ for every $i\in\mathbb{I}_d$, and therefore
$$\prod_{i\in J_{k-1}}\frac{\lambda_i(B_k S B_k)}{\lambda_i(S)}\;\le\;\prod_{i\in\mathbb{I}_d}\frac{\lambda_i(B_k S B_k)}{\lambda_i(S)}=\det(B_k S B_k)\,\det(S)^{-1}=\det(B_k)^2=\prod_{i=1}^{k-1}\lambda_i(V_k^2)\,.$$
This yields the right-hand inequality in (15) (one still has to rescale by $\lambda_k(V)^{2k-2}$ on both sides). The left-hand inequality follows by taking suitable inverses.
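A numerical check of Eq. (15) (ours): for a positive definite $S$, an invertible $V$ and every subset $J$, the product of the ratios $\lambda_i(V^*SV)/\lambda_i(S)$ stays between the products of the $k$ smallest and the $k$ largest eigenvalues of $V^*V$.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
d = 5

X = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
S = X @ X.conj().T + np.eye(d)                      # positive definite: lambda_i(S) > 0
V = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))  # generically invertible

lS = np.sort(np.linalg.eigvalsh(S))[::-1]
lVSV = np.sort(np.linalg.eigvalsh(V.conj().T @ S @ V))[::-1]
lVV = np.sort(np.linalg.eigvalsh(V.conj().T @ V))[::-1]

ratios = lVSV / lS
for k in range(1, d + 1):
    hi, lo = np.prod(lVV[:k]), np.prod(lVV[-k:])
    for J in combinations(range(d), k):
        r = np.prod(ratios[list(J)])
        assert lo <= r * (1 + 1e-9) and r <= hi * (1 + 1e-9)
print("Eq. (15) verified for every subset J")
```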

4 Equalities

The previous theorems, now with their cases of equality.

Ostrowski: Let $S\in\mathcal{H}(d)$ and let $V\in\mathcal{M}_d(\mathbb{C})$ be such that $V^*V\ge I$. If there exists $J\subseteq\mathbb{I}_d$ such that $\lambda_i(S)=\lambda_i(V^*SV)$ for $i\in J$, then there exists an o.n.s. $\{v_i\}_{i\in J}\subset\mathbb{C}^d$ such that $V\,v_i=v_i$ and $S\,v_i=\lambda_i(S)\,v_i$ for $i\in J$.

Proof. The first part of the statement is well known (see for example [2]). Hence, we prove the second part of the statement by induction on $|J|$, the number of elements of $J$.

Assume first that $V\in\mathcal{M}_d(\mathbb{C})^+$ is such that $V^2\ge I$. Fix $i\in J$ and notice that, by Sylvester's law of inertia, $\lambda_i\big(V(S-\lambda_i(S)\,I)V\big)=0$, since $\lambda_i\big(S-\lambda_i(S)\,I\big)=0$. By Weyl's inequalities we have that
$$\lambda_i(VSV)-\lambda_i(S)\,\lambda_d(V^2)=\lambda_i(VSV)+\lambda_1\big(-\lambda_i(S)\,V^2\big)\;\ge\;\lambda_i\big(V(S-\lambda_i(S)\,I)V\big)=0\,. \tag{16}$$
Since $\lambda_d(V^2)\ge 1$ and $\lambda_i(VSV)=\lambda_i(S)$ (because $i\in J$), we conclude that $\lambda_d(V^2)=1$. Moreover, by the equality in Eq. (16) and [6, Lemma 6.1], there exists $x\in\mathbb{C}^d$ with $\|x\|=1$ such that
$$VSV\,x=\lambda_i(VSV)\,x \qquad\text{and}\qquad V^2x=\lambda_d(V^2)\,x=x\,.$$
Hence $V^2x=x$ and then $Vx=x$. Thus,
$$VSV\,x=\lambda_i(VSV)\,x \;\implies\; Sx=\lambda_i(VSV)\,x=\lambda_i(S)\,x\,.$$
This proves the statement for $|J|=1$. If $|J|>1$ then we fix $v_i=x$ and consider $W=\{v_i\}^\perp$, which reduces both $S$ and $V$. Therefore an easy inductive argument involving the restrictions $S|_W$ and $V|_W$ shows the general case.

If we now consider an arbitrary $V\in\mathcal{M}_d(\mathbb{C})$ such that $V^*V\ge I$, let $V=U\,|V|$ be the polar decomposition of $V$. In this case $V^*SV=|V|\,(U^*SU)\,|V|$, so that $\lambda_i(V^*SV)=\lambda_i\big(|V|\,\tilde S\,|V|\big)$ for $i\in\mathbb{I}_d$, where $\tilde S=U^*SU$ and $|V|^2=V^*V\ge I$. These last facts, together with the case of equality for positive expansions, prove the statement.

In order to state our results we introduce the following notion.

Definition 4.1. Let $S\in\mathcal{G}l(d)^+$ and let $V\in\mathcal{G}l(d)$. We say that $V$ is an upper multiplicative matching (UMM) of $S$ (resp. a lower multiplicative matching, LMM, of $S$) if there exists a family $\{J_k\}_{k\in\mathbb{I}_d}$ such that $J_k\subseteq J_{k+1}\subseteq\mathbb{I}_d$ for $1\le k\le d-1$, $|J_k|=k$ for $k\in\mathbb{I}_d$, and such that
$$\prod_{i\in J_k}\frac{\lambda_i(V^*SV)}{\lambda_i(S)}=\prod_{i=1}^{k}\lambda_i(V^*V)\,, \qquad k\in\mathbb{I}_d$$
$$\Big(\text{resp. }\ \prod_{i\in J_k}\frac{\lambda_i(V^*SV)}{\lambda_i(S)}=\prod_{i=1}^{k}\lambda_{d+1-i}(V^*V)=\prod_{i=d-k+1}^{d}\lambda_i(V^*V)\,, \qquad k\in\mathbb{I}_d\Big)\,.$$

Theorem 4.2. Let $S\in\mathcal{G}l(d)^+$ and let $V\in\mathcal{G}l(d)$ be a UMM or an LMM of $S$. Then $S$ and $V$ commute.

Proof. We can assume that $V$ is not a multiple of the identity. We use the splitting technique considered in [4]. Let $V\in\mathcal{G}l(d)$ be a UMM of $S$. Assume further that $V\in\mathcal{G}l(d)^+$ and let $V=U\,D_\lambda\,U^*$, where $U\in\mathcal{U}(d)$ and $D_\lambda\in\mathcal{M}_d(\mathbb{C})^+$ denotes the diagonal matrix with main diagonal $\lambda=(\lambda_i)_{i\in\mathbb{I}_d}$, with $\lambda_i\ge\lambda_{i+1}$ for $1\le i\le d-1$. Let $2\le k\le d$ be such that $\lambda_1>\lambda_k$, and let $V_k=\lambda_k^{-1}V$, which is also a UMM of $S$. In this case $\lambda_i(V_k)=\lambda_i(V)/\lambda_k(V)$ for every $i\in\mathbb{I}_d$; in particular, $\lambda_k(V_k)=1$. We now consider $B_k=U\,D_\mu\,U^*$, where
$$\mu=\big(\lambda_1(V_k),\dots,\lambda_{k-1}(V_k),1,\dots,1\big)\in(\mathbb{R}_{>0})^d\,.$$

Notice that $W=\ker(B_k-I)$ coincides with the span of the o.n.s. $\{u_i\}_{i=k}^{d}$, where each $u_i$ denotes the $i$-th column of the matrix $U$. In particular $\dim W=d-k+1$. Also notice that the orthogonal projection onto $W$, denoted by $P_k$, coincides with the spectral projection of $V$ corresponding to the interval $(0,\lambda_k(V)]$. On the other hand, by construction of $B_k$, we see that $B_k\ge I$ and $V_k^{-1}B_k=B_kV_k^{-1}\ge I$. Let $J_{k-1}\subseteq\mathbb{I}_d$ be such that $|J_{k-1}|=k-1$ and
$$\prod_{i\in J_{k-1}}\frac{\lambda_i(V_k S V_k)}{\lambda_i(S)}=\prod_{i=1}^{k-1}\lambda_i(V_k^2)\,. \tag{17}$$
Then, by Ostrowski's inequality we get that
$$\prod_{i\in J_{k-1}}\lambda_i(V_k S V_k)\;\le\;\prod_{i\in J_{k-1}}\lambda_i\big((B_k V_k^{-1})(V_k S V_k)(V_k^{-1}B_k)\big)=\prod_{i\in J_{k-1}}\lambda_i(B_k S B_k)\,.$$
Using Ostrowski's inequality again, we see that $\lambda_i(B_k S B_k)/\lambda_i(S)\ge 1$ for every $i\in\mathbb{I}_d$, and therefore
$$\prod_{i\in J_{k-1}}\frac{\lambda_i(B_k S B_k)}{\lambda_i(S)}\;\le\;\prod_{i\in\mathbb{I}_d}\frac{\lambda_i(B_k S B_k)}{\lambda_i(S)}=\det(B_k S B_k)\,\det(S)^{-1}=\det(B_k)^2=\prod_{i=1}^{k-1}\lambda_i(V_k^2)\,.$$
By Eq. (17) we see that the previous inequalities are actually equalities. Hence, if we let $J_{k-1}^c=\mathbb{I}_d\setminus J_{k-1}$, then $|J_{k-1}^c|=d-k+1$ and
$$\prod_{i\in J_{k-1}^c}\frac{\lambda_i(B_k S B_k)}{\lambda_i(S)}=1 \;\implies\; \lambda_i(B_k S B_k)=\lambda_i(S) \quad\text{for } i\in J_{k-1}^c\,.$$
By the case of equality in Ostrowski's inequality in Proposition 3.1 we conclude that there exists an o.n.s. $\{v_i\}_{i\in J_{k-1}^c}\subset\mathbb{C}^d$ such that
$$B_k\,v_i=v_i \qquad\text{and}\qquad S\,v_i=\lambda_i(S)\,v_i \qquad\text{for } i\in J_{k-1}^c\,. \tag{18}$$
Then we conclude that $\{v_i\}_{i\in J_{k-1}^c}$ is an o.n.b. of $W$. Hence $P_k=\sum_{i\in J_{k-1}^c}v_i\otimes v_i$ and, by Eq. (18), we conclude that $P_k$ and $S$ commute. Finally, since $V$ can be written as a linear combination of the spectral projections $P_k$ and the identity $I$, we see that $V$ and $S$ commute in this case. The general case, for arbitrary $V\in\mathcal{G}l(d)$, follows from the positive case with the reduction described at the end of the proof of Proposition 3.1.

Assume now that $V\in\mathcal{M}_d(\mathbb{C})^+$ is an LMM of $S$. Then $V^{-1}$ is a UMM of $VSV$. Indeed, if $J_k\subseteq\mathbb{I}_d$ is such that
$$\prod_{i\in J_k}\frac{\lambda_i(VSV)}{\lambda_i(S)}=\prod_{i=d-k+1}^{d}\lambda_i(V^2)\,,$$
then we have that
$$\prod_{i\in J_k}\frac{\lambda_i\big(V^{-1}(VSV)V^{-1}\big)}{\lambda_i(VSV)}=\prod_{i\in J_k}\frac{\lambda_i(S)}{\lambda_i(VSV)}=\Big(\prod_{i=d-k+1}^{d}\lambda_i(V^2)\Big)^{-1}=\prod_{i=1}^{k}\lambda_i(V^{-2})\,.$$

By the first part of this proof, we conclude that $V^{-1}$ and $VSV$ commute, which implies that $S$ and $V$ commute. If $V\in\mathcal{G}l(d)$ is arbitrary, we conclude that $S$ and $V$ commute by means of the reduction described at the end of the proof of Proposition 3.1.

Theorem 4.3. Let $S\in\mathcal{G}l(d)^+$ and let $\lambda\in(\mathbb{R}_{>0})^d$. Then, for every $V\in\mathcal{M}_d(\mathbb{C})$ such that $\lambda(V^*V)=\lambda$, we have that
$$\lambda(S)\,\lambda^\uparrow \;\prec_{\log}\; \lambda(V^*SV) \;\prec_{\log}\; \lambda(S)\,\lambda \;\in(\mathbb{R}_{>0})^d \tag{19}$$
(entrywise products). Moreover, if $\lambda(V^*SV)=\big(\lambda(S)\,\lambda^\uparrow\big)^\downarrow$ (resp. $\lambda(V^*SV)=\lambda(S)\,\lambda$), then there exists an o.n.b. $\{v_i\}_{i\in\mathbb{I}_d}$ of $\mathbb{C}^d$ such that
$$S=\sum_{i\in\mathbb{I}_d}\lambda_i(S)\;v_i\otimes v_i \qquad\text{and}\qquad V=\sum_{i\in\mathbb{I}_d}\lambda_{d+1-i}^{1/2}\;v_i\otimes v_i \tag{20}$$
$$\Big(\text{resp. }\ S=\sum_{i\in\mathbb{I}_d}\lambda_i(S)\;v_i\otimes v_i \qquad\text{and}\qquad V=\sum_{i\in\mathbb{I}_d}\lambda_i^{1/2}\;v_i\otimes v_i\Big)\,.$$

Proof. Let $S$ and $V$ be as above. Assume further that $V\in\mathcal{G}l(d)^+$ and notice that then $\lambda(VSV)=\lambda(S^{1/2}V^2S^{1/2})$. By Theorem 3.2, applied to the matrices $S^{1/2}V^2S^{1/2}$ and $S^{1/2}$, we get that for every $J\subseteq\mathbb{I}_d$ with $|J|=k$,
$$\prod_{i\in J}\frac{\lambda_i\big(S^{1/2}(S^{1/2}V^2S^{1/2})S^{1/2}\big)}{\lambda_i(S^{1/2}V^2S^{1/2})}\;\ge\;\prod_{i=d-k+1}^{d}\lambda_i(S)=\Big(\prod_{i=1}^{k}\lambda_i(S^{-1})\Big)^{-1}\,.$$
This shows that $\lambda(S)\,\lambda^\uparrow\prec_{\log}\lambda(VSV)$ or, equivalently, that $\lambda(S)\,\lambda^\uparrow\prec_{\log}\lambda(S^{1/2}V^2S^{1/2})$. Moreover, the previous facts also show that if $\lambda(VSV)=\big(\lambda(S)\,\lambda^\uparrow\big)^\downarrow$ then $S^{1/2}$ is a UMM of $S^{1/2}V^2S^{1/2}$. By Theorem 4.2 we see that $S^{1/2}$ and $S^{1/2}V^2S^{1/2}$ commute, which in turn implies that $S$ and $V$ commute. Since $S$ and $V$ commute, we conclude that there exists an o.n.b. $\{w_i\}_{i\in\mathbb{I}_d}$ of $\mathbb{C}^d$ such that
$$S=\sum_{i\in\mathbb{I}_d}\lambda_i(S)\;w_i\otimes w_i \qquad\text{and}\qquad V=\sum_{i\in\mathbb{I}_d}\lambda_{\sigma(i)}(V)\;w_i\otimes w_i$$
for some permutation $\sigma\in S_d$. That is, in this case we have that
$$\big(\lambda(S)\,\lambda^\uparrow(V^2)\big)^\downarrow=\lambda(VSV)=\big(\lambda(S)\,\lambda_\sigma(V^2)\big)^\downarrow\,.$$
Notice that, by replacing $S$ and $V$ by $tS$ and $tV$ for sufficiently large $t>0$, we can always assume that $S-I\in\mathcal{G}l(d)^+$ and $V-I\in\mathcal{G}l(d)^+$. Using the properties of the logarithm, we conclude that the vectors $\log\lambda(S)$ and $\log\lambda(V^2)\in(\mathbb{R}_{>0})^d$ are such that
$$\big(\log\lambda(S)+\log\lambda^\uparrow(V^2)\big)^\downarrow=\big(\log\lambda(S)+\log\lambda_\sigma(V^2)\big)^\downarrow\,.$$
By [6, Proposition 6.6] (see also [6, Remark 6.7]) we conclude that $\log\lambda(S)=\log\lambda_\sigma(S)$. That is, if we set $v_i=w_{\sigma^{-1}(i)}$ for $i\in\mathbb{I}_d$, then the o.n.b. $\{v_i\}_{i\in\mathbb{I}_d}$ satisfies the conditions in Eq. (20). The general case, for $V\in\mathcal{G}l(d)$, follows by the reduction described at the end of the proof of Proposition 3.1.

On the other hand, notice that a direct application of Theorem 3.2 shows that
$$\prod_{i=1}^{k}\frac{\lambda_i(V^*SV)}{\lambda_i(S)}\;\le\;\prod_{i=1}^{k}\lambda_i(V^*V) \;\implies\; \prod_{i=1}^{k}\lambda_i(V^*SV)\;\le\;\prod_{i=1}^{k}\lambda_i(S)\,\lambda_i(V^*V)\,, \qquad k\in\mathbb{I}_d\,.$$
Hence we conclude that $\lambda(V^*SV)\prec_{\log}\lambda(S)\,\lambda(V^*V)\in(\mathbb{R}_{>0})^d$. Finally, in case $\lambda(V^*SV)=\lambda(S)\,\lambda(V^*V)$, we see that $V$ is a UMM of $S$ and therefore $S$ and $V$ commute. In this case it is straightforward to check that there exists an o.n.b. $\{v_i\}_{i\in\mathbb{I}_d}$ with the desired properties.
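Finally, a sketch (ours) that checks the log-majorization bounds of Eq. (19) on a random sample, with $S\in\mathcal{G}l(d)^+$ and an arbitrary $V$.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 6

def log_majorized(x, y, tol=1e-9):
    """x <_log y: compare partial products of decreasing rearrangements."""
    xs, ys = np.sort(x)[::-1], np.sort(y)[::-1]
    px, py = np.cumprod(xs), np.cumprod(ys)
    return bool(np.all(px[:-1] <= py[:-1] * (1 + tol))) and np.isclose(px[-1], py[-1])

X = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
S = X @ X.conj().T + np.eye(d)                                   # S in Gl(d)^+
V = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))

lS = np.sort(np.linalg.eigvalsh(S))[::-1]
lam = np.sort(np.linalg.eigvalsh(V.conj().T @ V))[::-1]          # lambda = lambda(V*V)
lVSV = np.sort(np.linalg.eigvalsh(V.conj().T @ S @ V))[::-1]

lower = lS * lam[::-1]       # lambda(S) * lambda^(up), entrywise
upper = lS * lam             # lambda(S) * lambda, entrywise
print(log_majorized(lower, lVSV), log_majorized(lVSV, upper))    # True True
```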

Remark 4.4. Let $S\in\mathcal{G}l(d)^+$ and let $\lambda\in(\mathbb{R}_{>0})^d$. Consider the set
$$\mathcal{O}_S(\lambda)=\{\,V^*SV:\ V\in\mathcal{M}_d(\mathbb{C})\,,\ \lambda(V^*V)=\lambda\,\}\,. \tag{21}$$
Then Theorem 4.3 shows that there exist $\prec_{\log}$-minimizers and maximizers in $\mathcal{O}_S(\lambda)$, and that their spectral and geometrical structure can be described explicitly.

References

[1] R. Bhatia, Matrix Analysis, Springer, Berlin-Heidelberg-New York, 1997.

[2] R. Horn, C. Johnson, Matrix Analysis...

[3] A.A. Klyachko, Random walks on symmetric spaces and inequalities for matrix spectra. Special Issue: Workshop on Geometric and Combinatorial Methods in the Hermitian Sum Spectral Problem (Coimbra, 1999). Linear Algebra Appl., no. 1-3.

[4] Li, Mathias, Lidskii's multiplicative inequalities...

[5] Massey, Ruiz, Stojanoff, Optimal duals and frame completions for majorization, ACHA...

[6] Massey, Ruiz, Stojanoff, Optimal completions of a frame...
