# Linear algebra 2

Yoav Zemel, March 1, 2012


These notes were written by Yoav Zemel. The lecturer, Shmuel Berger, should not be held responsible for any mistake. Any comments are welcome.

## 3 Linear transformations in an inner product space

Let $V$ be an inner product space over $F$ and let $T : V \to V$ be a linear transformation. We wish to define its conjugate (adjoint) transformation $T^* : V \to V$. The map $(\alpha, \beta) \mapsto \langle T(\alpha), \beta \rangle$ is the bilinear form defined by $T$.

**Lemma 3.1.** Let $\alpha, \beta, \gamma, \delta \in V$ and $a, b \in F$. Then:

1. $\langle T(\alpha + \gamma), \beta \rangle = \langle T(\alpha), \beta \rangle + \langle T(\gamma), \beta \rangle$;
2. $\langle T(\alpha), \beta + \delta \rangle = \langle T(\alpha), \beta \rangle + \langle T(\alpha), \delta \rangle$;
3. $\langle T(a\alpha), \beta \rangle = a \langle T(\alpha), \beta \rangle$;
4. $\langle T(\alpha), b\beta \rangle = \bar{b} \langle T(\alpha), \beta \rangle$.

*Proof.* Immediate from the linearity of $T$ and the defining properties of the inner product. $\square$

**Lemma 3.2.** Let $\beta, \gamma \in V$.

1. If $\langle \alpha, \beta \rangle = \langle \alpha, \gamma \rangle$ for every $\alpha \in V$, or $\langle \beta, \alpha \rangle = \langle \gamma, \alpha \rangle$ for every $\alpha \in V$, then $\gamma = \beta$.
2. Let $T$ and $S$ be linear transformations $V \to V$. If $\langle T(\alpha), \beta \rangle = \langle S(\alpha), \beta \rangle$ for all $\alpha, \beta \in V$, or $\langle \alpha, T(\beta) \rangle = \langle \alpha, S(\beta) \rangle$ for all $\alpha, \beta \in V$, then $T = S$.
3. If $\langle T(\alpha), \beta \rangle = 0$ for all $\alpha, \beta \in V$, or $\langle \alpha, T(\beta) \rangle = 0$ for all $\alpha, \beta \in V$, then $T = 0$.

*Proof.* (1) If $\langle \alpha, \beta - \gamma \rangle = 0$ for every $\alpha$, then $\beta - \gamma$ is orthogonal to all vectors (in particular to itself), and therefore equals $0$. The other case is analogous.

(2) If $\langle T(\alpha) - S(\alpha), \beta \rangle = 0$ for every $\beta$, then $T(\alpha) - S(\alpha) = 0$ by (1). This holds for every $\alpha$, hence $T = S$. The other case is analogous.

(3) We have $\langle T(\alpha), \beta \rangle = 0 = \langle S(\alpha), \beta \rangle$, where $S$ is the zero transformation. By (2), $T = 0$, and the other case is analogous. $\square$

So $T$ determines the bilinear form $\langle T(\alpha), \beta \rangle$, and by this lemma the form in turn determines $T$.

If $\langle T(\alpha), \alpha \rangle = 0$ for every $\alpha$, it does not necessarily follow that $T = 0$ when $F = \mathbb{R}$: take, e.g., $T$ a rotation by $90$ degrees. However, if $F = \mathbb{C}$ then it does follow.

*Proof.* For any $\alpha, \beta \in V$ we have
$$0 = \langle T(\alpha + \beta), \alpha + \beta \rangle = \langle T\alpha, \alpha \rangle + \langle T\alpha, \beta \rangle + \langle T\beta, \alpha \rangle + \langle T\beta, \beta \rangle = \langle T\alpha, \beta \rangle + \langle T\beta, \alpha \rangle.$$
Replacing $\beta$ by $i\beta$ and using $\bar{i} = -i$ we get $i\langle T\beta, \alpha \rangle - i\langle T\alpha, \beta \rangle = 0$. On the other hand, multiplying the first identity by $i$ gives $i\langle T\beta, \alpha \rangle + i\langle T\alpha, \beta \rangle = 0$. Together these imply $\langle T\alpha, \beta \rangle = 0$ for all $\alpha, \beta$, and thus $T = 0$. $\square$

Let $\beta \in V$ and consider the linear functional $\varphi_\beta : V \to F$ defined by $\varphi_\beta(\alpha) = \langle \alpha, \beta \rangle$. Note that the map $\alpha \mapsto \langle \beta, \alpha \rangle$ is linear only over $\mathbb{R}$, not over $\mathbb{C}$.

**Theorem 3.3.** For any linear functional $\varphi : V \to F$ there exists a unique $\beta$ for which $\varphi = \varphi_\beta$.

Note: this is true only when $\dim V < \infty$. For example, with $V$ the space of continuous functions on $[-1, 1]$, the functional $\varphi(f) = f(0)$ does not arise from the inner product, i.e. it is not $\varphi_\beta$ for any $\beta$.

*Proof.* If $\varphi_\beta = \varphi_\gamma$, then $\langle \alpha, \beta \rangle = \langle \alpha, \gamma \rangle$ for every $\alpha$, and by the previous lemma $\gamma = \beta$. This establishes uniqueness.

For existence, consider the collection of all functionals $\varphi_\beta$ for $\beta \in V$. This is a subspace of the dual space $V^*$. As $\dim V^* = \dim V = n$, it suffices to show that $\dim\{\varphi_\beta \mid \beta \in V\} = n$; this will establish $V^* = \{\varphi_\beta \mid \beta \in V\}$. Let $S : V \to V^*$ be defined by $S(\beta) = \varphi_\beta$. Then $S$ is linear over $\mathbb{R}$ and "almost linear" over $\mathbb{C}$:
$$S(\beta_1 + \beta_2)(\alpha) = \varphi_{\beta_1 + \beta_2}(\alpha) = \langle \alpha, \beta_1 + \beta_2 \rangle = \langle \alpha, \beta_1 \rangle + \langle \alpha, \beta_2 \rangle = \varphi_{\beta_1}(\alpha) + \varphi_{\beta_2}(\alpha),$$
so $S(\beta_1 + \beta_2) = S(\beta_1) + S(\beta_2)$. But
$$S(c\beta)(\alpha) = \langle \alpha, c\beta \rangle = \bar{c}\langle \alpha, \beta \rangle = \bar{c}S(\beta)(\alpha),$$
which need not equal $cS(\beta)(\alpha)$,

so $S$ is not necessarily linear. $S$ is injective, because $\varphi_\beta = \varphi_\gamma$ implies $\beta = \gamma$. We know that if $\dim V = n$ and a linear $T : V \to W$ is injective, then its image $T(V)$ is a subspace of $W$ of dimension $n$. The same proof works here for $S$, even though it is not linear but only "almost linear" (conjugate-linear). It follows that $\operatorname{Im}(S) = V^*$, and the existence proof is complete. $\square$

**Theorem 3.4 (conjugate transformation).** Let $V$ be an inner product space of dimension $n < \infty$. For any linear transformation $T : V \to V$ there exists a unique linear transformation $T^* : V \to V$ such that for all $\alpha, \beta$,
$$\langle T(\alpha), \beta \rangle = \langle \alpha, T^*(\beta) \rangle.$$
$T^*$ is called the conjugate transformation of $T$.

*Proof.* (Uniqueness.) If $\langle \alpha, T^*(\beta) \rangle = \langle \alpha, S^*(\beta) \rangle$ for all $\alpha, \beta$, then $T^* = S^*$ by a previous lemma.

(Existence.) Fix $\beta \in V$ and consider the linear functional $\psi(\alpha) = \langle T\alpha, \beta \rangle$. By Theorem 3.3 there exists $\gamma \in V$ such that $\psi(\alpha) = \langle \alpha, \gamma \rangle$ for every $\alpha$. Define $T^*(\beta) = \gamma$, which is well defined by the uniqueness of $\gamma$. Then $\langle T\alpha, \beta \rangle = \langle \alpha, T^*(\beta) \rangle$ for every $\alpha$. Repeating this for each $\beta$ defines $T^*$ on the whole of $V$. It remains to show that $T^*$ is linear. For $\alpha, \beta, \gamma \in V$,
$$\langle \alpha, T^*(\beta + \gamma) \rangle = \langle T\alpha, \beta + \gamma \rangle = \langle T\alpha, \beta \rangle + \langle T\alpha, \gamma \rangle = \langle \alpha, T^*(\beta) \rangle + \langle \alpha, T^*(\gamma) \rangle = \langle \alpha, T^*(\beta) + T^*(\gamma) \rangle.$$
This holds for every $\alpha$, so by Lemma 3.2, $T^*(\beta + \gamma) = T^*(\beta) + T^*(\gamma)$. For a scalar $s \in F$,
$$\langle \alpha, T^*(s\beta) \rangle = \langle T\alpha, s\beta \rangle = \bar{s}\langle T\alpha, \beta \rangle = \bar{s}\langle \alpha, T^*(\beta) \rangle = \langle \alpha, sT^*(\beta) \rangle.$$
This, again, holds for every $\alpha$, and therefore $T^*(s\beta) = sT^*(\beta)$. Thus $T^*$ is linear, and the proof is complete. $\square$

**Theorem 3.5.** Let $T : V \to V$ and let $T^*$ be its conjugate. Let $\{e_1, \ldots, e_n\}$ be an orthonormal basis of $V$ under which $A$ is the matrix that represents $T$. Then the matrix that represents $T^*$ is the matrix $A^*$ defined by $A^*_{ij} = \overline{A_{ji}}$ for all $i, j$; $A^*$ is the conjugate (transpose) matrix of $A$.

*Proof.* The $j$-th column of $A^*$ is the coordinate vector of $T^*(e_j)$ in the given basis, and its $i$-th entry is $A^*_{ij}$. Since the basis is orthonormal, this entry also equals $\langle T^*(e_j), e_i \rangle$. Thus
$$A^*_{ij} = \langle T^*(e_j), e_i \rangle = \overline{\langle e_i, T^*(e_j) \rangle} = \overline{\langle T(e_i), e_j \rangle} = \overline{A_{ji}}. \qquad \square$$
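The defining identity $\langle T\alpha, \beta \rangle = \langle \alpha, T^*\beta \rangle$ and Theorem 3.5 can be checked numerically. The following sketch (Python with NumPy; the matrix and vectors are arbitrary illustrative choices, not taken from the notes) verifies that the conjugate transpose satisfies the identity for the standard inner product on $\mathbb{C}^3$:

```python
import numpy as np

# Hypothetical numerical check of Theorem 3.5: with respect to an
# orthonormal basis, the matrix of T* is the conjugate transpose of
# the matrix of T, i.e. <A x, y> = <x, A* y> for all x, y.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A_star = A.conj().T

x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
y = rng.standard_normal(3) + 1j * rng.standard_normal(3)

# Standard inner product on C^n: <u, v> = sum_i u_i * conj(v_i)
inner = lambda u, v: np.sum(u * v.conj())

print(np.isclose(inner(A @ x, y), inner(x, A_star @ y)))  # True
```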

Example: consider $\mathbb{C}^2$ with the standard inner product, and $T(z_1, z_2) = (z_1 + iz_2,\; 2z_2 + (2 - 3i)z_1)$. Then $T(1, 0) = (1, 2 - 3i) = 1 \cdot (1, 0) + (2 - 3i)(0, 1)$ and $T(0, 1) = (i, 2) = i(1, 0) + 2(0, 1)$, so the matrix of $T$ is
$$A = \begin{pmatrix} 1 & i \\ 2 - 3i & 2 \end{pmatrix} \qquad \text{and} \qquad A^* = \begin{pmatrix} 1 & 2 + 3i \\ -i & 2 \end{pmatrix}.$$
This gives $T^*(z_1, z_2) = (z_1 + (2 + 3i)z_2,\; 2z_2 - iz_1)$.

### 3.1 Properties of the conjugate transformation

Consider the map $* : \hom(V, V) \to \hom(V, V)$.

**Lemma 3.6.**

1. $(T + S)^* = T^* + S^*$.
2. $(aT)^* = \bar{a}T^*$.
3. $(TS)^* = S^*T^*$.
4. $(T^*)^* = T$.
5. The $*$ map is bijective.

*Proof.* Rather immediate. For (1), observe that
$$\langle \alpha, (T + S)^*(\beta) \rangle = \langle (T + S)\alpha, \beta \rangle = \langle T\alpha + S\alpha, \beta \rangle = \langle T\alpha, \beta \rangle + \langle S\alpha, \beta \rangle = \langle \alpha, T^*\beta \rangle + \langle \alpha, S^*\beta \rangle = \langle \alpha, T^*\beta + S^*\beta \rangle.$$
As this holds for every $\alpha$, $(T + S)^*(\beta) = T^*(\beta) + S^*(\beta)$; as this holds for every $\beta$, $(T + S)^* = T^* + S^*$.

(2) Similarly,
$$\langle \alpha, (aT)^*(\beta) \rangle = \langle (aT)(\alpha), \beta \rangle = a\langle T\alpha, \beta \rangle = a\langle \alpha, T^*\beta \rangle = \langle \alpha, \bar{a}T^*\beta \rangle.$$
As this holds for every $\alpha$, $(aT)^*(\beta) = (\bar{a}T^*)(\beta)$; as this holds for every $\beta$, $(aT)^* = \bar{a}T^*$.

(3) Here we have
$$\langle \alpha, (TS)^*(\beta) \rangle = \langle (TS)\alpha, \beta \rangle = \langle T(S(\alpha)), \beta \rangle = \langle S(\alpha), T^*(\beta) \rangle = \langle \alpha, S^*(T^*(\beta)) \rangle.$$
This holds for every $\alpha$, so $(TS)^*(\beta) = S^*(T^*(\beta))$; this holds for every $\beta$, so $(TS)^* = S^*T^*$.

(4) Observe that
$$\langle \alpha, (T^*)^*(\beta) \rangle = \langle T^*(\alpha), \beta \rangle = \overline{\langle \beta, T^*(\alpha) \rangle} = \overline{\langle T\beta, \alpha \rangle} = \langle \alpha, T\beta \rangle.$$

This holds for every $\alpha$, so $(T^*)^*(\beta) = T(\beta)$; this holds for every $\beta$, so $(T^*)^* = T$.

(5) If $T^* = S^*$ then $(T^*)^* = (S^*)^*$, and by (4), $T = S$; thus $*$ is injective. For a given $T$, $T^*$ exists and $(T^*)^* = T$, so $*$ is also surjective. $\square$

Since the $*$ operation is compatible with the correspondence between matrices and transformations, these properties hold for matrices as well. It is left as an exercise to prove them directly for matrices.

Let $I$ be the identity transformation; then $I^* = I$, since $\langle I\alpha, \beta \rangle = \langle \alpha, I\beta \rangle = \langle \alpha, I^*\beta \rangle$.

**Definition 3.7.** $T$ is conjugate to itself (self-adjoint) if $T^* = T$. When $F = \mathbb{R}$, it is said to be symmetric; when $F = \mathbb{C}$, it is said to be Hermitian.

An $n \times n$ matrix is conjugate to itself if $A^* = A$, which happens precisely when $A$ is symmetric if $F = \mathbb{R}$; if $F = \mathbb{C}$ then such an $A$ is said to be Hermitian. If $A$ represents $T$ under an orthonormal basis, then $A$ is Hermitian if and only if $T$ is Hermitian.

**Lemma 3.8.** Let $T$ be conjugate to itself and suppose that $\langle T\alpha, \alpha \rangle = 0$ for every $\alpha$. Then $T = 0$.

*Proof.* If $F = \mathbb{C}$ we do not even need $T$ to be conjugate to itself (this was shown above), so it suffices to prove the lemma for $F = \mathbb{R}$. One has
$$0 = \langle T(\alpha + \beta), \alpha + \beta \rangle = \langle T\alpha, \alpha \rangle + \langle T\beta, \beta \rangle + \langle T\alpha, \beta \rangle + \langle T\beta, \alpha \rangle = \langle T\alpha, \beta \rangle + \langle T\beta, \alpha \rangle = \langle T\alpha, \beta \rangle + \langle \beta, T(\alpha) \rangle = 2\langle T\alpha, \beta \rangle,$$
where the last equality is because $F = \mathbb{R}$ and the one before that ($\langle T\beta, \alpha \rangle = \langle \beta, T^*\alpha \rangle = \langle \beta, T\alpha \rangle$) is because $T$ is conjugate to itself. Since $\langle T\alpha, \beta \rangle = 0$ for all $\alpha, \beta$, it follows that $T = 0$. $\square$

**Theorem 3.9.** Let $V$ be a unitary space (a complex inner product space). Then $T$ is conjugate to itself if and only if $\langle T\alpha, \alpha \rangle \in \mathbb{R}$ for every $\alpha \in V$. (The statement is of interest only when $F = \mathbb{C}$.)

*Proof.* Suppose that $T$ is conjugate to itself. Then for any $\alpha$ we have
$$\langle T\alpha, \alpha \rangle = \langle \alpha, T^*\alpha \rangle = \langle \alpha, T\alpha \rangle = \overline{\langle T\alpha, \alpha \rangle},$$
from which it follows that it is indeed a real number. For the converse, reversing the arguments shows that $\langle \alpha, (T - T^*)(\alpha) \rangle = 0$ for every $\alpha \in V$, and since $F = \mathbb{C}$ it follows that $T = T^*$. $\square$
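Theorem 3.9 is easy to observe numerically: for a Hermitian matrix $H$, the number $\langle Ha, a \rangle$ is real for every $a$, while for a generic non-Hermitian matrix it is not. A minimal sketch, with arbitrarily chosen illustrative matrices:

```python
import numpy as np

# Hypothetical check of Theorem 3.9 for matrices over C.
rng = np.random.default_rng(1)
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
H = B + B.conj().T          # Hermitian by construction: H* = H
a = rng.standard_normal(3) + 1j * rng.standard_normal(3)

inner = lambda u, v: np.sum(u * v.conj())
print(np.isclose(inner(H @ a, a).imag, 0.0))  # True: <Ha, a> is real
print(inner(B @ a, a).imag)                   # almost surely nonzero
```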

**Remark 1.** Obviously, $T$ is conjugate to itself if and only if $T^*$ is conjugate to itself.

**Definition 3.10.** $T$ is unitary if it preserves the inner product, i.e. for all $\alpha, \beta \in V$ we have $\langle T\alpha, T\beta \rangle = \langle \alpha, \beta \rangle$. In a Euclidean space a unitary transformation is called orthogonal.

**Theorem 3.11.** The following are equivalent:

1. $T$ is unitary.
2. $T$ preserves the norm, i.e. $\|\alpha\| = \|T(\alpha)\|$ for every $\alpha$.
3. $T^*T = I$.
4. $T$ maps any orthonormal basis to an orthonormal basis.
5. $T$ maps some orthonormal basis to an orthonormal basis.

**Remark 2.** (2) is equivalent to $T$ preserving distances, i.e. $\|T\alpha - T\beta\| = \|T(\alpha - \beta)\| = \|\alpha - \beta\|$. (3) means that $T$ and $T^*$ are invertible, and the inverse of one is the other; clearly this means $TT^* = I$ as well. Therefore, if $T$ is unitary then so is $T^*$.

*Proof.* (1) $\Rightarrow$ (2): If $T$ is unitary, then $\|T\alpha\|^2 = \langle T\alpha, T\alpha \rangle = \langle \alpha, \alpha \rangle = \|\alpha\|^2$, so $T$ preserves the norm.

(2) $\Rightarrow$ (3): If $T$ preserves the norm, then $\langle \alpha, T^*T\alpha \rangle = \langle T\alpha, T\alpha \rangle = \langle \alpha, \alpha \rangle$. Thus
$$0 = \langle \alpha, T^*T\alpha - \alpha \rangle = \langle \alpha, (T^*T - I)\alpha \rangle = \langle (T^*T - I)\alpha, \alpha \rangle.$$
The claim now follows from Lemma 3.8, since $T^*T - I$ is conjugate to itself: $(T^*T - I)^* = T^*T - I$.

(3) $\Rightarrow$ (4): Suppose $T^*T = I$ and let $\{e_1, \ldots, e_n\}$ be an orthonormal basis of $V$, so that $\langle e_i, e_j \rangle = \delta_{ij}$. As $T$ is invertible, $\{Te_1, \ldots, Te_n\}$ is a basis. It remains to show that it is orthonormal:
$$\langle Te_i, Te_j \rangle = \langle e_i, T^*Te_j \rangle = \langle e_i, e_j \rangle = \delta_{ij},$$
so $\{Te_1, \ldots, Te_n\}$ is an orthonormal basis.

(4) $\Rightarrow$ (5): Immediate, since orthonormal bases exist.

(5) $\Rightarrow$ (1): Suppose that $\{e_1, \ldots, e_n\}$ and $\{Te_1, \ldots, Te_n\}$ are orthonormal bases. Then for $\alpha = \sum_i a_ie_i$ and $\beta = \sum_j b_je_j$ we have
$$\langle T\alpha, T\beta \rangle = \Big\langle \sum_i a_iTe_i, \sum_j b_jTe_j \Big\rangle = \sum_{i,j} a_i\bar{b}_j\langle Te_i, Te_j \rangle = \sum_i a_i\bar{b}_i = \langle \alpha, \beta \rangle. \qquad \square$$

**Definition 3.12.** A matrix $A$ is said to be unitary (orthogonal, if $F = \mathbb{R}$) if $A^*A = I$ (respectively $A^tA = I$).
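Several of the equivalent conditions in Theorem 3.11 can be observed at once on a concrete unitary matrix. In the sketch below (an illustrative choice, not from the notes), the unitary matrix is obtained from a QR factorization of an arbitrary complex matrix, since the Q factor is always unitary:

```python
import numpy as np

# Hypothetical check of Theorem 3.11 for a unitary matrix U.
rng = np.random.default_rng(2)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
U, _ = np.linalg.qr(M)   # Q factor of a QR factorization is unitary

# Condition (3): U*U = I.
print(np.allclose(U.conj().T @ U, np.eye(3)))                # True
# Condition (2): U preserves the norm.
x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
print(np.isclose(np.linalg.norm(U @ x), np.linalg.norm(x)))  # True
# Conditions (4)/(5): the columns U e_1, ..., U e_n are orthonormal,
# which is exactly U*U = I read entry by entry.
```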

**Corollary 3.13.** If $A$ represents $T$ with respect to an orthonormal basis, then $T$ is unitary if and only if $A$ is unitary.

*Proof.* If $T$ is unitary then $T^*T = I$; as the basis is orthonormal, $A^*$ represents $T^*$, and thus $A^*A$ represents $I$, which gives $A^*A = I$. If $T$ is not unitary then $T^*T \neq I$ and $A$ will not be unitary. $\square$

Example: $\begin{pmatrix} \cos a & -\sin a \\ \sin a & \cos a \end{pmatrix}$ is an orthogonal matrix over $\mathbb{R}$, as is $\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$.

A product of unitary transformations is also unitary: $(TS)^*(TS) = S^*T^*TS = S^*IS = S^*S = I$.

**Proposition 3.14.** $A$ is unitary if and only if there exists an inner product space $V$ over $F$ such that $A$ is the transformation matrix of a transformation mapping an orthonormal basis of $V$ to an orthonormal basis of $V$.

*Proof.* Let $A$ be unitary, set $V = F^n$ with the standard inner product, and define $T : V \to V$ by $T\alpha = A\alpha$. Then $T$ is a unitary linear transformation. Let $k = \{e_1, \ldots, e_n\}$ be the standard basis of $V$; then $k_2 = \{Te_1, \ldots, Te_n\}$ is an orthonormal basis as well, and $A$ is the transformation matrix from $k$ to $k_2$.

For the converse, we use the isomorphism that maps a vector to its coordinate vector under the orthonormal basis $k = \{e_1, \ldots, e_n\}$. Now, suppose that $k_2 = \{d_1, \ldots, d_n\}$ and $k$ are both orthonormal, and $A$ is the transformation matrix from $k$ to $k_2$. Then $T$ as defined above maps $k$ to $k_2$, and is therefore unitary. $A$ represents it under an orthonormal basis, so $A$ is unitary as well. $\square$

**Theorem 3.15.** Let $A \in M_n(F)$ and consider its rows as vectors $\alpha_1, \ldots, \alpha_n \in F^n$. Then $A$ is unitary if and only if these rows form an orthonormal basis.

*Proof.* Write $AA^* = (b_{ij})$; for square matrices $A^*A = I$ and $AA^* = I$ are equivalent, so $A$ is unitary if and only if $b_{ij} = \delta_{ij}$. On the other hand,
$$\langle \alpha_i, \alpha_j \rangle = \sum_k a_{ik}\overline{a_{jk}} = \sum_k a_{ik}(A^*)_{kj} = b_{ij}.$$
We need $\langle \alpha_i, \alpha_j \rangle$ to equal $\delta_{ij}$, and that happens if and only if $A$ is unitary. $\square$

It follows immediately that $A$ is unitary if and only if its columns form an orthonormal basis.

We wish to discover when $V$ has an orthonormal basis of eigenvectors of $T$.

**Claim 1.** If such a basis exists, then $TT^* = T^*T$.

*Proof.* If $\{e_1, \ldots, e_n\}$ is an orthonormal basis of eigenvectors of $T$, then the matrix $A$ which represents $T$ under this basis is diagonal. $A^*$ is diagonal as well, and represents $T^*$. Now $AA^*$ represents $TT^*$ and $A^*A$ represents $T^*T$, but $AA^* = A^*A$ because diagonal matrices commute. Thus $TT^* = T^*T$. $\square$
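Claim 1 can be illustrated numerically by building a matrix that is diagonal in an orthonormal basis and checking that it commutes with its conjugate. The unitary basis matrix and the eigenvalues below are arbitrary illustrative choices:

```python
import numpy as np

# Hypothetical illustration of Claim 1: if A = V D V* with V unitary
# and D diagonal, then A A* = A* A, i.e. A is normal.
rng = np.random.default_rng(3)
V, _ = np.linalg.qr(rng.standard_normal((3, 3))
                    + 1j * rng.standard_normal((3, 3)))  # unitary
D = np.diag(np.array([2.0, -1.0 + 1.0j, 3.0j]))          # diagonal
A = V @ D @ V.conj().T   # diagonalized by an orthonormal basis

A_star = A.conj().T
print(np.allclose(A @ A_star, A_star @ A))  # True: A is normal
```

Algebraically this is exactly the proof of the claim: $AA^* = VDD^*V^*$ and $A^*A = VD^*DV^*$, and the diagonal matrices $D$ and $D^*$ commute.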

**Definition 3.16.** A linear transformation $T$ is said to be normal if $TT^* = T^*T$.

Examples: if $T$ is conjugate to itself then it is normal; if it is unitary then it is normal.

**Lemma 3.17.** Let $V$ be an inner product space over $F$ and let $T : V \to V$ be normal. If $c$ is an eigenvalue of $T$ with eigenvector $u$, then $\bar{c}$ is an eigenvalue of $T^*$ with the same eigenvector $u$.

*Proof.* As $T$ is normal, for any $w \in V$ we have
$$\|Tw\|^2 = \langle Tw, Tw \rangle = \langle w, T^*Tw \rangle = \langle w, TT^*w \rangle = \langle T^*w, T^*w \rangle = \|T^*w\|^2.$$
We know that $Tu = cu$ and wish to show that $T^*u = \bar{c}u$. Consider $T - cI$. As $T$ is normal, $T - cI$ is also normal:
$$(T - cI)(T - cI)^* = (T - cI)(T^* - \bar{c}I) = TT^* - \bar{c}T - cT^* + c\bar{c}I,$$
$$(T - cI)^*(T - cI) = (T^* - \bar{c}I)(T - cI) = T^*T - \bar{c}T - cT^* + \bar{c}cI,$$
which are equal since $T$ and $T^*$ commute. Thus $T - cI$ is normal, and it follows that
$$0 = \|Tu - cu\| = \|(T - cI)u\| = \|(T - cI)^*u\| = \|T^*u - \bar{c}u\|.$$
This gives $T^*u = \bar{c}u$. $\square$

**Corollary 3.18.** If $T$ is normal, then any real eigenvalue of $T$ is an eigenvalue of $T^*$.

**Lemma 3.19.** Let $T$ be normal. Then any two eigenvectors lying in different eigenspaces are orthogonal.

*Proof.* Suppose that $u$ belongs to the eigenspace of $a$ and $w$ to that of $b \neq a$. Then
$$a\langle u, w \rangle = \langle au, w \rangle = \langle Tu, w \rangle = \langle u, T^*w \rangle = \langle u, \bar{b}w \rangle = b\langle u, w \rangle,$$
in virtue of the previous lemma. As $a \neq b$, this means $\langle u, w \rangle = 0$. $\square$

**Lemma 3.20.** Let $V$ be an inner product space over $\mathbb{C}$, with $0 < \dim V = n < \infty$. Then any linear transformation $T : V \to V$ has an eigenvector $u \neq 0$.

*Proof.* The (not identically zero) characteristic polynomial splits into linear factors over $\mathbb{C}$; its roots are all eigenvalues, thus there are nonzero eigenvectors. $\square$

**Theorem 3.21.** Let $V$ be a unitary space over $\mathbb{C}$ and let $T : V \to V$ be normal. Then $V$ admits an orthonormal basis of eigenvectors of $T$.

*Proof.* Let $U$ be the span of all the eigenvectors; this is a subspace of $V$. Let $a_1, \ldots, a_k$ be the eigenvalues of $T$ and $V_i \subseteq U$ the corresponding eigenspaces. For each $V_i$ we choose an orthonormal basis, and define $E$ to be the union of these bases. Since all the eigenvectors are in $\sum V_i$ we have $U = \sum V_i = \operatorname{span}(E)$. Write $E = \{u_1, \ldots, u_l\}$, where $l \geq k$; then $E$ spans $U$. We have to show that $E$ is independent and that $l = n$.

If $i = j$, then $\langle u_i, u_j \rangle = 1$, because $u_i$ is an element of an orthonormal basis of some eigenspace. If $i \neq j$ and $u_i, u_j$ are in the same eigenspace $V_r$, then $\langle u_i, u_j \rangle = 0$ because they are part of an orthonormal basis of $V_r$. If $i \neq j$ and they lie in different eigenspaces, then $\langle u_i, u_j \rangle = 0$ by the previous lemma. This proves that $E$ is an orthonormal system, hence independent.

To see that $l = n$, write $V = U \oplus U^\perp$, where $U^\perp$ is the orthogonal complement of $U$. We wish to show that $U^\perp = \{0\}$, and we begin by showing that it is $T$-invariant, i.e. $T(U^\perp) \subseteq U^\perp$. Let $\beta \in U^\perp$, so that $\langle u_j, \beta \rangle = 0$ for every $j$. But
$$\langle u_j, T(\beta) \rangle = \langle T^*(u_j), \beta \rangle = \langle \bar{x}u_j, \beta \rangle = \bar{x}\langle u_j, \beta \rangle = 0,$$
where $x$ is the eigenvalue to whose eigenspace $u_j$ belongs. Thus $T(\beta) \in U^\perp$.

Define $S : U^\perp \to U^\perp$ by $S(\alpha) = T(\alpha)$. If there is some $\alpha \neq 0$ in $U^\perp$, then by the previous lemma $S$ has an eigenvector. It is also an eigenvector of $T$, so it must lie in $U$, and then the intersection of $U$ and $U^\perp$ is not $\{0\}$, a contradiction. This completes the proof. $\square$

**Corollary 3.22.** Let $T : V \to V$ be as before. Then $V$ has an orthonormal basis under which the matrix of $T$ is diagonal.

**Corollary 3.23.** For a normal $A \in M_n(\mathbb{C})$, there is a unitary $U \in M_n(\mathbb{C})$ with $U^*AU = U^{-1}AU$ diagonal.

In order to find $U$, one can proceed as follows:

1. Calculate the characteristic polynomial $\det(xI - A)$.
2. Find the eigenvalues of $A$, i.e. the roots of the polynomial.
3. For each eigenvalue find a basis of the eigenspace.
4. For each eigenspace find an orthonormal basis (use the Gram-Schmidt process).
5. The union of those bases is an orthonormal basis of $\mathbb{C}^n$ consisting of eigenvectors of $A$.

If the columns of $U$ are $(\alpha_1, \ldots, \alpha_n)$, then the similar matrix is
$$U^{-1}AU = \begin{pmatrix} l_1 & & & \\ & l_2 & & \\ & & \ddots & \\ & & & l_n \end{pmatrix},$$

where $l_i$ is the eigenvalue of $\alpha_i$ ($l_1, \ldots, l_n$ do not have to be distinct).

**Theorem 3.24.** Let $V$ be an inner product space over $\mathbb{C}$ or $\mathbb{R}$, and let $T : V \to V$ be normal. Then:

1. $T$ is conjugate to itself if and only if all the eigenvalues of $T$ (in $\mathbb{C}$) are real numbers.
2. $T$ is unitary if and only if all eigenvalues have absolute value $1$.

Example: $\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$ is unitary (its rows are orthonormal) and has no real eigenvalues; but over $\mathbb{C}$ it has the eigenvalues $i$ and $-i$, both of modulus $1$.

*Proof.* Let $\{e_1, \ldots, e_n\}$ be an orthonormal basis of $V$ under which the matrix $D$ of $T$ is diagonal; $D$ is clearly normal.

(1) $T$ is conjugate to itself if and only if $D^* = D$, which is equivalent to $D$ having only real diagonal entries, and these are the eigenvalues of $T$.

(2) $T$ is unitary if and only if $D$ is unitary, if and only if $DD^* = I$, if and only if $l_i\bar{l_i} = 1$ for all $i$, if and only if $|l_i| = 1$ for all $i$. $\square$

**Corollary 3.25.** Let $A$ be a symmetric matrix over $\mathbb{R}$. Then $A$ is diagonalizable over $\mathbb{R}$.

*Proof.* $A$ is symmetric, thus conjugate to itself, thus normal. Therefore all its eigenvalues are real, and the characteristic polynomial decomposes into linear factors over $\mathbb{R}$. As $A$ is normal, it can be diagonalized by a unitary matrix: for each eigenvalue we look at the eigenspace and find an orthonormal basis. Since $A$ and its eigenvalues are real, we can choose the elements of these bases to be real as well. The union of these bases is an orthonormal basis of $\mathbb{R}^n$. The matrix $U$ whose columns are the vectors of this basis is orthogonal, and $U^{-1}AU$ is diagonal. $\square$

**Corollary 3.26.** Let $V$ be a unitary space with $\dim V = n$ and let $T : V \to V$. Then $T$ is normal if and only if there exist a unitary transformation $U$ and a Hermitian transformation $H$ such that $HU = UH = T$.

*Proof.* If such $H$ and $U$ exist, then $HU$ is normal: using $H^* = H$ and the fact that $H$ commutes with $U$,
$$(HU)(HU)^* = HUU^*H^* = HH^* = H^2 = U^*UH^2 = U^*HHU = U^*H^*HU = (HU)^*(HU).$$

If $T$ is normal, let $\{e_1, \ldots, e_n\}$ be an orthonormal basis of $V$ under which $D$, the matrix that represents $T$, is diagonal. Write
$$D = \begin{pmatrix} l_1 & & & \\ & l_2 & & \\ & & \ddots & \\ & & & l_n \end{pmatrix}, \qquad l_j = r_j(\cos t_j + i\sin t_j), \quad r_j \in \mathbb{R}_+.$$

Set
$$H' = \begin{pmatrix} r_1 & & & \\ & r_2 & & \\ & & \ddots & \\ & & & r_n \end{pmatrix}, \qquad U' = \begin{pmatrix} \cos t_1 + i\sin t_1 & & & \\ & \cos t_2 + i\sin t_2 & & \\ & & \ddots & \\ & & & \cos t_n + i\sin t_n \end{pmatrix}.$$
Then $H'$ is Hermitian, because it is diagonal and all its eigenvalues are real; $U'$ is normal because it is diagonal, and unitary because all its eigenvalues have modulus $1$. Clearly $H'U' = U'H' = D$. Now let $H$ and $U$ be the linear transformations whose matrices under $\{e_1, \ldots, e_n\}$ are $H'$ and $U'$ respectively. Then $HU = UH = T$. $\square$

## 4 Bilinear forms

**Definition 4.1.** Let $V$ and $W$ be vector spaces over a field $F$. A bilinear form is a function $f : V \times W \to F$ with the following properties:
$$f(a + c, b) = f(a, b) + f(c, b), \qquad f(a, b + c) = f(a, b) + f(a, c), \qquad f(au, v) = f(u, av) = af(u, v).$$
Equivalently, for any $v \in V$ and $w \in W$, the maps $f(v, \cdot) : W \to F$ and $f(\cdot, w) : V \to F$ are linear functionals.

Examples: $f(v, w) = 0$ for all $v, w$ is a bilinear form. If $V$ is a Euclidean space over $\mathbb{R}$, then $\langle \cdot, \cdot \rangle : V \times V \to F$ is bilinear. If $V$ is unitary, $\langle \cdot, \cdot \rangle$ is not bilinear, because it is not homogeneous in the second argument (unless $V = \{0\}$). If $V^*$ is the dual space of $V$, then $f : V \times V^* \to F$ defined by $f(v, w) = w(v)$ is a bilinear form. Fix $g \in W^*$ and $h \in V^*$; then $f(v, w) = h(v)g(w)$ is also a bilinear form.

Let $V = F^m$, $W = F^n$, and let $A$ be an $m \times n$ matrix. Then $A$ defines a bilinear form on $V \times W$ by $f(v, w) = v^tAw$, where $v^t$ is the transpose of $v$. This example is in fact the most general one, once we fix bases for $V$ and $W$: suppose that $\dim V = m$ and $\dim W = n$, and let $(v_1, \ldots, v_m)$ and $(w_1, \ldots, w_n)$ be bases. For a bilinear form $f$, define the matrix $A$ by $A_{ij} = f(v_i, w_j)$.

Write $v = \sum_{i=1}^m x_iv_i$ and $w = \sum_{j=1}^n y_jw_j$; then
$$f(v, w) = f\Big(\sum_i x_iv_i, \sum_j y_jw_j\Big) = \sum_{i,j} x_iy_jf(v_i, w_j) = \sum_{i,j} x_iy_jA_{ij} = (x_1, \ldots, x_m)\,A\,(y_1, \ldots, y_n)^t = v^tAw,$$
where $v^t = (x_1, \ldots, x_m)$ and $w^t = (y_1, \ldots, y_n)$. Given these two bases, $f$ determines $A$ and $A$ determines $f$, so the map $f \mapsto A$ is bijective. If $W = V$ and $A = I$ under the standard basis, we get $f(u, v) = u^tv$.

Suppose that $\dim V = m$ and $\dim W = n$. Define addition and scalar multiplication by
$$(f + g)(a, b) = f(a, b) + g(a, b) \qquad \text{and} \qquad (cf)(a, b) = cf(a, b).$$
Then the collection of bilinear forms is a vector space, of dimension $mn$ by the isomorphism $f \mapsto A$. One can also show this without using matrices:

**Lemma 4.2.** Let $V$ and $W$ be vector spaces of finite dimension, and let $\{g_1, \ldots, g_m\}$ and $\{h_1, \ldots, h_n\}$ be bases of $V^*$ and $W^*$ respectively. Then each product $g_ih_j$, i.e. $(v, w) \mapsto g_i(v)h_j(w)$, is a bilinear form, and these $mn$ products give a basis for the vector space of bilinear forms.

**Theorem 4.3.** Let $f$ be a bilinear form on $V \times W$ and consider its matrix $A$ with respect to the bases $\{v_1, \ldots, v_m\}$ and $\{w_1, \ldots, w_n\}$. Let $P$ be the transformation matrix from $\{v_1, \ldots, v_m\}$ to $\{v_1', \ldots, v_m'\}$ and $Q$ the transformation matrix from $\{w_1, \ldots, w_n\}$ to $\{w_1', \ldots, w_n'\}$. Then the matrix of $f$ with respect to these new bases is $P^tAQ$.

*Proof.* Let $v \in V$, $w \in W$, and write their coordinate vectors under $\{v_1', \ldots, v_m'\}$ and $\{w_1', \ldots, w_n'\}$ as $x = (x_1, \ldots, x_m)$ and $y = (y_1, \ldots, y_n)$. Then the coordinates of $v$ under $\{v_1, \ldots, v_m\}$ are $Px$, and the coordinates of $w$ under $\{w_1, \ldots, w_n\}$ are $Qy$, so
$$f(v, w) = (Px)^tA(Qy) = x^t(P^tAQ)y,$$
and the matrix with respect to the new bases is indeed $P^tAQ$. $\square$

If $V = W$, we call $f$ a bilinear form on $V$; in this case we use the same basis for both arguments. Thus, if the matrix of $f$ under $\{v_1, \ldots, v_n\}$ is $A$, and under $\{v_1', \ldots, v_n'\}$ it is $A'$, then $A' = P^tAP$, where $P$ is the transformation matrix from $\{v_1, \ldots, v_n\}$ to $\{v_1', \ldots, v_n'\}$.
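The change-of-basis rule of Theorem 4.3 can be verified numerically: evaluating $f$ on old coordinates $Px, Qy$ with the matrix $A$ must agree with evaluating it on new coordinates $x, y$ with the matrix $P^tAQ$. All matrices below are arbitrary illustrative choices over $\mathbb{R}$:

```python
import numpy as np

# Hypothetical check of Theorem 4.3: P^t A Q represents the bilinear
# form f(v, w) = v^t A w in the new bases.
rng = np.random.default_rng(4)
m, n = 3, 4
A = rng.standard_normal((m, n))   # matrix of f in the old bases
P = rng.standard_normal((m, m))   # change-of-basis matrix for V
Q = rng.standard_normal((n, n))   # change-of-basis matrix for W

x = rng.standard_normal(m)        # new coordinates of v
y = rng.standard_normal(n)        # new coordinates of w

# Old coordinates are P x and Q y; both evaluations of f agree:
print(np.isclose((P @ x) @ A @ (Q @ y), x @ (P.T @ A @ Q) @ y))  # True
```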

**Definition 4.4.** Two matrices $A$ and $B$ are congruent if there exists an invertible matrix $P$ such that $B = P^tAP$.

This defines an equivalence relation: $A = I^tAI$; if $B = P^tAP$ then $A = (P^t)^{-1}BP^{-1} = (P^{-1})^tBP^{-1}$; and if $C = Q^tBQ$ then $C = Q^tBQ = Q^tP^tAPQ = (PQ)^tA(PQ)$, where $PQ$ is invertible.

**Lemma 4.5.** Two $n \times n$ matrices are congruent if and only if they represent the same bilinear form on $V$ (with respect to suitable bases).

*Proof.* Immediate. $\square$

**Theorem 4.6.** Let $f$ be a bilinear form on $V \times W$ with $\dim V = m$ and $\dim W = n$. Then one can choose bases $\{v_1, \ldots, v_m\}$ and $\{w_1, \ldots, w_n\}$ and $0 \leq r \leq \min\{m, n\}$ such that
$$f(v_i, w_j) = \begin{cases} 1 & i = j \leq r, \\ 0 & \text{otherwise}. \end{cases}$$
This gives something of the sort of a diagonal matrix.
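The symmetry step of the equivalence-relation argument for Definition 4.4 is easy to confirm numerically: if $B = P^tAP$ with $P$ invertible, then $A = (P^{-1})^tBP^{-1}$. The matrices below are arbitrary illustrative choices:

```python
import numpy as np

# Hypothetical check of the symmetry of congruence (Definition 4.4).
rng = np.random.default_rng(5)
A = rng.standard_normal((3, 3))
P = rng.standard_normal((3, 3))   # invertible with probability 1
B = P.T @ A @ P                   # B is congruent to A

P_inv = np.linalg.inv(P)
print(np.allclose(A, P_inv.T @ B @ P_inv))  # True: A recovered from B
```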


### EXERCISE SET 5.1. = (kx + kx + k, ky + ky + k ) = (kx + kx + 1, ky + ky + 1) = ((k + )x + 1, (k + )y + 1)

EXERCISE SET 5. 6. The pair (, 2) is in the set but the pair ( )(, 2) = (, 2) is not because the first component is negative; hence Axiom 6 fails. Axiom 5 also fails. 8. Axioms, 2, 3, 6, 9, and are easily

### Lecture # 3 Orthogonal Matrices and Matrix Norms. We repeat the definition an orthogonal set and orthornormal set.

Lecture # 3 Orthogonal Matrices and Matrix Norms We repeat the definition an orthogonal set and orthornormal set. Definition A set of k vectors {u, u 2,..., u k }, where each u i R n, is said to be an

### Properties of Matrices and Operations on Matrices

Properties of Matrices and Operations on Matrices A common data structure for statistical analysis is a rectangular array or matris. Rows represent individual observational units, or just observations,

### 1. Row Operations. Math 211 Linear Algebra Skeleton Notes S. Waner

1 Math 211 Linear Algebra Skeleton Notes S. Waner 1. Row Operations Definitions 1.1 A field is a set F with two binary operations +,, such that: 1. Addition and multiplication are commutative: x, y é F,

### MAS4107 Linear Algebra 2

General Prerequisites MAS4107 Linear Algebra 2 Peter Sin University of Florida email: sin@math.ufl.edu Familiarity with the notion of mathematical proof and some experience in reading and writing proofs.

### Math 396. Quotient spaces

Math 396. Quotient spaces. Definition Let F be a field, V a vector space over F and W V a subspace of V. For v, v V, we say that v v mod W if and only if v v W. One can readily verify that with this definition

### NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction

NONCOMMUTATIVE POLYNOMIAL EQUATIONS Edward S Letzter Introduction My aim in these notes is twofold: First, to briefly review some linear algebra Second, to provide you with some new tools and techniques

### Math 121 Homework 5: Notes on Selected Problems

Math 121 Homework 5: Notes on Selected Problems 12.1.2. Let M be a module over the integral domain R. (a) Assume that M has rank n and that x 1,..., x n is any maximal set of linearly independent elements

### Chapter Two Elements of Linear Algebra

Chapter Two Elements of Linear Algebra Previously, in chapter one, we have considered single first order differential equations involving a single unknown function. In the next chapter we will begin to

### The Singular Value Decomposition (SVD) and Principal Component Analysis (PCA)

Chapter 5 The Singular Value Decomposition (SVD) and Principal Component Analysis (PCA) 5.1 Basics of SVD 5.1.1 Review of Key Concepts We review some key definitions and results about matrices that will

### Stat 159/259: Linear Algebra Notes

Stat 159/259: Linear Algebra Notes Jarrod Millman November 16, 2015 Abstract These notes assume you ve taken a semester of undergraduate linear algebra. In particular, I assume you are familiar with the

### 5.) For each of the given sets of vectors, determine whether or not the set spans R 3. Give reasons for your answers.

Linear Algebra - Test File - Spring Test # For problems - consider the following system of equations. x + y - z = x + y + 4z = x + y + 6z =.) Solve the system without using your calculator..) Find the

### Linear Algebra in Actuarial Science: Slides to the lecture

Linear Algebra in Actuarial Science: Slides to the lecture Fall Semester 2010/2011 Linear Algebra is a Tool-Box Linear Equation Systems Discretization of differential equations: solving linear equations

### 0.2 Vector spaces. J.A.Beachy 1

J.A.Beachy 1 0.2 Vector spaces I m going to begin this section at a rather basic level, giving the definitions of a field and of a vector space in much that same detail as you would have met them in a

### 18.06 Problem Set 8 - Solutions Due Wednesday, 14 November 2007 at 4 pm in

806 Problem Set 8 - Solutions Due Wednesday, 4 November 2007 at 4 pm in 2-06 08 03 Problem : 205+5+5+5 Consider the matrix A 02 07 a Check that A is a positive Markov matrix, and find its steady state

### (f + g)(s) = f(s) + g(s) for f, g V, s S (cf)(s) = cf(s) for c F, f V, s S

1 Vector spaces 1.1 Definition (Vector space) Let V be a set with a binary operation +, F a field, and (c, v) cv be a mapping from F V into V. Then V is called a vector space over F (or a linear space

### Notes on nilpotent orbits Computational Theory of Real Reductive Groups Workshop. Eric Sommers

Notes on nilpotent orbits Computational Theory of Real Reductive Groups Workshop Eric Sommers 17 July 2009 2 Contents 1 Background 5 1.1 Linear algebra......................................... 5 1.1.1

### Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88

Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant

### THE MINIMAL POLYNOMIAL AND SOME APPLICATIONS

THE MINIMAL POLYNOMIAL AND SOME APPLICATIONS KEITH CONRAD. Introduction The easiest matrices to compute with are the diagonal ones. The sum and product of diagonal matrices can be computed componentwise

### Math 443 Differential Geometry Spring Handout 3: Bilinear and Quadratic Forms This handout should be read just before Chapter 4 of the textbook.

Math 443 Differential Geometry Spring 2013 Handout 3: Bilinear and Quadratic Forms This handout should be read just before Chapter 4 of the textbook. Endomorphisms of a Vector Space This handout discusses

### ISOMETRIES OF R n KEITH CONRAD

ISOMETRIES OF R n KEITH CONRAD 1. Introduction An isometry of R n is a function h: R n R n that preserves the distance between vectors: h(v) h(w) = v w for all v and w in R n, where (x 1,..., x n ) = x

### e j = Ad(f i ) 1 2a ij/a ii

A characterization of generalized Kac-Moody algebras. J. Algebra 174, 1073-1079 (1995). Richard E. Borcherds, D.P.M.M.S., 16 Mill Lane, Cambridge CB2 1SB, England. Generalized Kac-Moody algebras can be

### Linear Algebra- Final Exam Review

Linear Algebra- Final Exam Review. Let A be invertible. Show that, if v, v, v 3 are linearly independent vectors, so are Av, Av, Av 3. NOTE: It should be clear from your answer that you know the definition.

### Lecture 3: QR-Factorization

Lecture 3: QR-Factorization This lecture introduces the Gram Schmidt orthonormalization process and the associated QR-factorization of matrices It also outlines some applications of this factorization

### Math 25a Practice Final #1 Solutions

Math 25a Practice Final #1 Solutions Problem 1. Suppose U and W are subspaces of V such that V = U W. Suppose also that u 1,..., u m is a basis of U and w 1,..., w n is a basis of W. Prove that is a basis

### Linear Algebra 2 Final Exam, December 7, 2015 SOLUTIONS. a + 2b = x a + 3b = y. This solves to a = 3x 2y, b = y x. Thus

Linear Algebra 2 Final Exam, December 7, 2015 SOLUTIONS 1. (5.5 points) Let T : R 2 R 4 be a linear mapping satisfying T (1, 1) = ( 1, 0, 2, 3), T (2, 3) = (2, 3, 0, 0). Determine T (x, y) for (x, y) R

### JORDAN AND RATIONAL CANONICAL FORMS

JORDAN AND RATIONAL CANONICAL FORMS MATH 551 Throughout this note, let V be a n-dimensional vector space over a field k, and let φ: V V be a linear map Let B = {e 1,, e n } be a basis for V, and let A

### (K + L)(c x) = K(c x) + L(c x) (def of K + L) = K( x) + K( y) + L( x) + L( y) (K, L are linear) = (K L)( x) + (K L)( y).

Exercise 71 We have L( x) = x 1 L( v 1 ) + x 2 L( v 2 ) + + x n L( v n ) n = x i (a 1i w 1 + a 2i w 2 + + a mi w m ) i=1 ( n ) ( n ) ( n ) = x i a 1i w 1 + x i a 2i w 2 + + x i a mi w m i=1 Therefore y

### Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors week -2 Fall 26 Eigenvalues and eigenvectors The most simple linear transformation from R n to R n may be the transformation of the form: T (x,,, x n ) (λ x, λ 2,, λ n x n

### Algebra Exam Syllabus

Algebra Exam Syllabus The Algebra comprehensive exam covers four broad areas of algebra: (1) Groups; (2) Rings; (3) Modules; and (4) Linear Algebra. These topics are all covered in the first semester graduate

### Jordan Normal Form. Chapter Minimal Polynomials

Chapter 8 Jordan Normal Form 81 Minimal Polynomials Recall p A (x) =det(xi A) is called the characteristic polynomial of the matrix A Theorem 811 Let A M n Then there exists a unique monic polynomial q

### MATH 304 Linear Algebra Lecture 20: The Gram-Schmidt process (continued). Eigenvalues and eigenvectors.

MATH 304 Linear Algebra Lecture 20: The Gram-Schmidt process (continued). Eigenvalues and eigenvectors. Orthogonal sets Let V be a vector space with an inner product. Definition. Nonzero vectors v 1,v

### Vector Space Basics. 1 Abstract Vector Spaces. 1. (commutativity of vector addition) u + v = v + u. 2. (associativity of vector addition)

Vector Space Basics (Remark: these notes are highly formal and may be a useful reference to some students however I am also posting Ray Heitmann's notes to Canvas for students interested in a direct computational

### Homework set 4 - Solutions

Homework set 4 - Solutions Math 407 Renato Feres 1. Exercise 4.1, page 49 of notes. Let W := T0 m V and denote by GLW the general linear group of W, defined as the group of all linear isomorphisms of W

### The University of Texas at Austin Department of Electrical and Computer Engineering. EE381V: Large Scale Learning Spring 2013.

The University of Texas at Austin Department of Electrical and Computer Engineering EE381V: Large Scale Learning Spring 2013 Assignment Two Caramanis/Sanghavi Due: Tuesday, Feb. 19, 2013. Computational

### Since G is a compact Lie group, we can apply Schur orthogonality to see that G χ π (g) 2 dg =

Problem 1 Show that if π is an irreducible representation of a compact lie group G then π is also irreducible. Give an example of a G and π such that π = π, and another for which π π. Is this true for

### Math 315: Linear Algebra Solutions to Assignment 7

Math 5: Linear Algebra s to Assignment 7 # Find the eigenvalues of the following matrices. (a.) 4 0 0 0 (b.) 0 0 9 5 4. (a.) The characteristic polynomial det(λi A) = (λ )(λ )(λ ), so the eigenvalues are

### 5. Orthogonal matrices

L Vandenberghe EE133A (Spring 2017) 5 Orthogonal matrices matrices with orthonormal columns orthogonal matrices tall matrices with orthonormal columns complex matrices with orthonormal columns 5-1 Orthonormal

### 2. Review of Linear Algebra

2. Review of Linear Algebra ECE 83, Spring 217 In this course we will represent signals as vectors and operators (e.g., filters, transforms, etc) as matrices. This lecture reviews basic concepts from linear

### Representation theory and quantum mechanics tutorial Spin and the hydrogen atom

Representation theory and quantum mechanics tutorial Spin and the hydrogen atom Justin Campbell August 3, 2017 1 Representations of SU 2 and SO 3 (R) 1.1 The following observation is long overdue. Proposition

### REPRESENTATION THEORY WEEK 5. B : V V k

REPRESENTATION THEORY WEEK 5 1. Invariant forms Recall that a bilinear form on a vector space V is a map satisfying B : V V k B (cv, dw) = cdb (v, w), B (v 1 + v, w) = B (v 1, w)+b (v, w), B (v, w 1 +

### 5 Linear Transformations

Lecture 13 5 Linear Transformations 5.1 Basic Definitions and Examples We have already come across with the notion of linear transformations on euclidean spaces. We shall now see that this notion readily

### Extra Problems for Math 2050 Linear Algebra I

Extra Problems for Math 5 Linear Algebra I Find the vector AB and illustrate with a picture if A = (,) and B = (,4) Find B, given A = (,4) and [ AB = A = (,4) and [ AB = 8 If possible, express x = 7 as

### ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS. 1. Linear Equations and Matrices

ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS KOLMAN & HILL NOTES BY OTTO MUTZBAUER 11 Systems of Linear Equations 1 Linear Equations and Matrices Numbers in our context are either real numbers or complex

### Math 110, Spring 2015: Midterm Solutions

Math 11, Spring 215: Midterm Solutions These are not intended as model answers ; in many cases far more explanation is provided than would be necessary to receive full credit. The goal here is to make

### Linear Algebra: Graduate Level Problems and Solutions. Igor Yanovsky

Linear Algebra: Graduate Level Problems and Solutions Igor Yanovsky Linear Algebra Igor Yanovsky, 5 Disclaimer: This handbook is intended to assist graduate students with qualifying examination preparation.

### THE EULER CHARACTERISTIC OF A LIE GROUP

THE EULER CHARACTERISTIC OF A LIE GROUP JAY TAYLOR 1 Examples of Lie Groups The following is adapted from [2] We begin with the basic definition and some core examples Definition A Lie group is a smooth

### Lecture 13 The Fundamental Forms of a Surface

Lecture 13 The Fundamental Forms of a Surface In the following we denote by F : O R 3 a parametric surface in R 3, F(u, v) = (x(u, v), y(u, v), z(u, v)). We denote partial derivatives with respect to the

### Eigenvectors and Hermitian Operators

7 71 Eigenvalues and Eigenvectors Basic Definitions Let L be a linear operator on some given vector space V A scalar λ and a nonzero vector v are referred to, respectively, as an eigenvalue and corresponding

### Math 4153 Exam 3 Review. The syllabus for Exam 3 is Chapter 6 (pages ), Chapter 7 through page 137, and Chapter 8 through page 182 in Axler.

Math 453 Exam 3 Review The syllabus for Exam 3 is Chapter 6 (pages -2), Chapter 7 through page 37, and Chapter 8 through page 82 in Axler.. You should be sure to know precise definition of the terms we