Linear Algebra: Graduate Level Problems and Solutions. Igor Yanovsky
Linear Algebra, Igor Yanovsky, 2005.

Disclaimer: This handbook is intended to assist graduate students with qualifying examination preparation. Please be aware, however, that the handbook might contain, and almost certainly does contain, typos as well as incorrect or inaccurate solutions. I cannot be held responsible for any inaccuracies contained in this handbook.
Contents

1 Basic Theory
 1.1 Linear Maps
 1.2 Linear Maps as Matrices
 1.3 Dimension and Isomorphism
 1.4 Matrix Representations Redux
 1.5 Subspaces
 1.6 Linear Maps and Subspaces
 1.7 Dimension Formula
 1.8 Matrix Calculations
 1.9 Diagonalizability
2 Inner Product Spaces
 2.1 Inner Products
 2.2 Orthonormal Bases (Gram-Schmidt procedure, QR Factorization)
 2.3 Orthogonal Complements and Projections
3 Linear Maps on Inner Product Spaces
 3.1 Adjoint Maps
 3.2 Self-Adjoint Maps
 3.3 Polarization and Isometries
 3.4 Unitary and Orthogonal Operators
 3.5 Spectral Theorem
 3.6 Normal Operators
 3.7 Unitary Equivalence
 3.8 Triangulability
4 Determinants
 4.1 Characteristic Polynomial
5 Linear Operators
 5.1 Dual Spaces
 5.2 Dual Maps
6 Problems
1 Basic Theory

1.1 Linear Maps

Lemma. If A ∈ Mat_{m×n}(F) and B ∈ Mat_{n×m}(F), then tr(AB) = tr(BA).

Proof. Note that the (i, i) entry in AB is Σ_{j=1}^n a_{ij} b_{ji}, while the (j, j) entry in BA is Σ_{i=1}^m b_{ji} a_{ij}. Thus
tr(AB) = Σ_{i=1}^m Σ_{j=1}^n a_{ij} b_{ji},  tr(BA) = Σ_{j=1}^n Σ_{i=1}^m b_{ji} a_{ij},
and the two double sums are equal.

1.2 Linear Maps as Matrices

Example. Let P_n = {α_0 + α_1 t + ... + α_n t^n : α_0, α_1, ..., α_n ∈ F} be the space of polynomials of degree ≤ n and D : V → V the differentiation map D(α_0 + α_1 t + ... + α_n t^n) = α_1 + 2α_2 t + ... + n α_n t^{n−1}. If we use the basis 1, t, ..., t^n for V, then D(t^k) = k t^{k−1}, and the (n+1)×(n+1) matrix representation is computed via
[D(1) D(t) D(t²) ... D(t^n)] = [0 1 2t ... n t^{n−1}] = [1 t t² ... t^n] [D],
so [D] is the matrix whose only nonzero entries are k in position (k, k+1), for k = 1, ..., n.

1.3 Dimension and Isomorphism

A linear map L : V → W is an isomorphism if we can find K : W → V such that LK = I_W and KL = I_V.
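Both computations above are easy to check numerically. The sketch below (NumPy; matrix sizes and the sample polynomial are illustrative choices, not from the text) verifies the trace identity and builds the matrix of the differentiation map:

```python
import numpy as np

rng = np.random.default_rng(0)

# tr(AB) = tr(BA) for A in Mat_{m x n}, B in Mat_{n x m}; here m=3, n=5.
A = rng.standard_normal((3, 5))
B = rng.standard_normal((5, 3))
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))  # True

# Matrix of the differentiation map D on P_n in the basis 1, t, ..., t^n:
# D(t^k) = k t^(k-1), so column k has its single nonzero entry k in row k-1.
def diff_matrix(n):
    D = np.zeros((n + 1, n + 1))
    for k in range(1, n + 1):
        D[k - 1, k] = k
    return D

# Differentiate p(t) = 1 + 2t + 3t^2, stored as coefficients by degree:
p = np.array([1.0, 2.0, 3.0, 0.0])
print(diff_matrix(3) @ p)  # [2. 6. 0. 0.], i.e. p'(t) = 2 + 6t
```
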
Theorem. V and W are isomorphic ⟺ there is a bijective linear map L : V → W.

Proof. (⟹) If V and W are isomorphic, we can find linear maps L : V → W and K : W → V so that LK = I_W and KL = I_V. For any y ∈ W we have y = I_W(y) = L(K(y)), so letting x = K(y) shows L is onto. If L(x_1) = L(x_2), then x_1 = I_V(x_1) = KL(x_1) = KL(x_2) = I_V(x_2) = x_2, which means L is 1-1.
(⟸) Assume L : V → W is linear and a bijection. Then we have an inverse map L^{−1} which satisfies L∘L^{−1} = I_W and L^{−1}∘L = I_V. In order for this inverse map to be allowable as K we need to check that it is linear. Select α_1, α_2 ∈ F and y_1, y_2 ∈ W. Let x_i = L^{−1}(y_i), so that L(x_i) = y_i. Then
L^{−1}(α_1 y_1 + α_2 y_2) = L^{−1}(α_1 L(x_1) + α_2 L(x_2)) = L^{−1}(L(α_1 x_1 + α_2 x_2)) = I_V(α_1 x_1 + α_2 x_2) = α_1 x_1 + α_2 x_2 = α_1 L^{−1}(y_1) + α_2 L^{−1}(y_2).

Theorem. If F^m and F^n are isomorphic over F, then n = m.

Proof. Suppose we have L : F^m → F^n and K : F^n → F^m such that LK = I_{F^n} and KL = I_{F^m}. Then L ∈ Mat_{n×m}(F) and K ∈ Mat_{m×n}(F). Thus
n = tr(I_{F^n}) = tr(LK) = tr(KL) = tr(I_{F^m}) = m.

Define the dimension of a vector space V over F as dim_F V = n if V is isomorphic to F^n.

Remark. dim_C C = 1, dim_R C = 2, dim_Q R = ∞.

The set of all linear maps {L : V → W} over F is itself a vector space, denoted hom_F(V, W).

Corollary. If V and W are finite dimensional vector spaces over F, then hom_F(V, W) is also finite dimensional and
dim_F hom_F(V, W) = (dim_F W) · (dim_F V).

Proof. By choosing bases for V and W there is a natural mapping
hom_F(V, W) → Mat_{(dim_F W)×(dim_F V)}(F) ≅ F^{(dim_F W)·(dim_F V)}.
This map is both 1-1 and onto, as the matrix representation uniquely determines the linear map and every matrix yields a linear map.
1.4 Matrix Representations Redux

L : V → W, with bases x_1, ..., x_m for V and y_1, ..., y_n for W. The matrix for L interpreted as a linear map is [L] : F^m → F^n. The basis isomorphisms defined by the choices of bases are
[x_1 ... x_m] : F^m → V,  [y_1 ... y_n] : F^n → W,
where [x_1 ... x_m](α_1, ..., α_m)^T = α_1 x_1 + ... + α_m x_m. The diagram commutes:
L ∘ [x_1 ... x_m] = [y_1 ... y_n] ∘ [L].

1.5 Subspaces

A nonempty subset M ⊆ V is a subspace if for all α, β ∈ F and x, y ∈ M we have αx + βy ∈ M. In particular 0 ∈ M. If M, N ⊆ V are subspaces, then we can form two new subspaces, the sum and the intersection:
M + N = {x + y : x ∈ M, y ∈ N},  M ∩ N = {x : x ∈ M, x ∈ N}.
M and N have trivial intersection if M ∩ N = {0}. M and N are transversal if M + N = V. Two subspaces are complementary if they are transversal and have trivial intersection. M, N form a direct sum of V if M ∩ N = {0} and M + N = V; write V = M ⊕ N.

Example. V = R². M = {(x, 0) : x ∈ R}, the x-axis, and N = {(0, y) : y ∈ R}, the y-axis.

Example. V = R². M = {(x, 0) : x ∈ R}, the x-axis, and N = {(y, y) : y ∈ R}, a diagonal. Note (x, y) = (x − y, 0) + (y, y), which gives V = M ⊕ N.

If we have a direct sum decomposition V = M ⊕ N, then we can construct the projection of V onto M along N. The map E : V → V is defined by writing each z uniquely as z = x + y with x ∈ M, y ∈ N, and mapping z to x:
E(z) = E(x + y) = x.
Thus im(E) = M and ker(E) = N.

Definition. If V is a vector space, a projection of V is a linear operator E on V such that E² = E.
1.6 Linear Maps and Subspaces

L : V → W is a linear map over F. The kernel or nullspace of L is
ker(L) = N(L) = {x ∈ V : L(x) = 0}.
The image or range of L is
im(L) = R(L) = L(V) = {L(x) ∈ W : x ∈ V}.

Lemma. ker(L) is a subspace of V and im(L) is a subspace of W.

Proof. Assume that α_1, α_2 ∈ F and that x_1, x_2 ∈ ker(L). Then L(α_1 x_1 + α_2 x_2) = α_1 L(x_1) + α_2 L(x_2) = 0, so α_1 x_1 + α_2 x_2 ∈ ker(L). Assume α_1, α_2 ∈ F and x_1, x_2 ∈ V. Then α_1 L(x_1) + α_2 L(x_2) = L(α_1 x_1 + α_2 x_2) ∈ im(L).

Lemma. L is 1-1 ⟺ ker(L) = {0}.

Proof. (⟹) We know that L(0) = 0, so if L is 1-1, then L(x) = 0 = L(0) implies x = 0. Hence ker(L) = {0}. (⟸) Assume that ker(L) = {0}. If L(x_1) = L(x_2), then linearity of L tells us that L(x_1 − x_2) = 0. Then ker(L) = {0} implies x_1 − x_2 = 0, which shows that x_1 = x_2 as desired.

Lemma. Suppose L : V → W and dim V = dim W. Then: L is 1-1 ⟺ L is onto ⟺ dim im(L) = dim V.

Proof. From the dimension formula, dim V = dim ker(L) + dim im(L). Hence: L is 1-1 ⟺ ker(L) = {0} ⟺ dim ker(L) = 0 ⟺ dim im(L) = dim V = dim W ⟺ im(L) = W, that is, L is onto.

1.7 Dimension Formula

Theorem. Let V be finite dimensional and L : V → W a linear map, all over F. Then im(L) is finite dimensional and
dim_F V = dim_F ker(L) + dim_F im(L).

Proof. We know that dim ker(L) ≤ dim V and that ker(L) has a complement M of dimension k = dim V − dim ker(L). Since M ∩ ker(L) = {0}, the linear map L must be 1-1 when restricted to M. Thus L|_M : M → im(L) is an isomorphism, i.e. dim im(L) = dim M = k.

1.8 Matrix Calculations

Change of Basis Matrix. Given the two bases of R², β = {x_1 = (1, 1), x_2 = (1, 0)} and β′ = {y_1 = (4, 3), y_2 = (3, 2)}, we find the change-of-basis matrix P whose columns are the coordinates of y_1 and y_2 with respect to β.
Write y_1 as a linear combination of x_1 and x_2: y_1 = a x_1 + b x_2. (4, 3) = a(1, 1) + b(1, 0) ⟹ a = 3, b = 1, so y_1 = 3x_1 + x_2.
Write y_2 as a linear combination of x_1 and x_2: y_2 = a x_1 + b x_2. (3, 2) = a(1, 1) + b(1, 0) ⟹ a = 2, b = 1, so y_2 = 2x_1 + x_2.
Write the coordinates of y_1 and y_2 as the columns of P:
P = [ 3 2 ; 1 1 ].
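The change-of-basis computation above can be checked numerically: if the bases are stored as columns of matrices X and Y, then the condition "columns of P are β-coordinates of the y_j" reads X P = Y. A small sketch (NumPy, with the example's bases):

```python
import numpy as np

# Bases as columns: x1=(1,1), x2=(1,0) and y1=(4,3), y2=(3,2).
X = np.column_stack([(1.0, 1.0), (1.0, 0.0)])
Y = np.column_stack([(4.0, 3.0), (3.0, 2.0)])

# Column j of P holds the beta-coordinates of y_j, i.e. X @ P = Y.
P = np.linalg.solve(X, Y)
print(P)  # [[3. 2.]
          #  [1. 1.]]
```
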
1.9 Diagonalizability

Definition. Let T be a linear operator on the finite-dimensional space V. T is diagonalizable if there is a basis for V consisting of eigenvectors of T.

Theorem. Let v_1, ..., v_n be nonzero eigenvectors of distinct eigenvalues λ_1, ..., λ_n. Then {v_1, ..., v_n} is linearly independent.

Alternative Statement. If L has n distinct eigenvalues λ_1, ..., λ_n, then L is diagonalizable. (Proof is in the exercises.)

Definition. Let L be a linear operator on a finite-dimensional vector space V, and let λ be an eigenvalue of L. Define
E_λ = {x ∈ V : L(x) = λx} = ker(L − λI_V).
The set E_λ is called the eigenspace of L corresponding to the eigenvalue λ. The algebraic multiplicity m of λ is defined to be the multiplicity of λ as a root of the characteristic polynomial of L, while the geometric multiplicity of λ is defined to be the dimension of its eigenspace, dim E_λ = dim(ker(L − λI_V)). Always 1 ≤ dim(ker(L − λI_V)) ≤ m.

Eigenspaces. A vector v ≠ 0 with (A − λI)v = 0 is an eigenvector for λ.
Generalized Eigenspaces. Let λ be an eigenvalue of A with algebraic multiplicity m. A vector v ≠ 0 with (A − λI)^m v = 0 is a generalised eigenvector for λ.

2 Inner Product Spaces

2.1 Inner Products

The three important properties of a complex inner product are:
1) (x|x) = ||x||² > 0 unless x = 0.
2) (x|y) = conj((y|x)).
3) For each y ∈ V the map x ↦ (x|y) is linear.
The inner product on C^n is defined by (x|y) = x^t ȳ.
Consequences:
(α_1 x_1 + α_2 x_2 | y) = α_1 (x_1|y) + α_2 (x_2|y),
(x | β_1 y_1 + β_2 y_2) = β̄_1 (x|y_1) + β̄_2 (x|y_2),
(αx|αx) = αᾱ(x|x) = |α|² ||x||².

2.2 Orthonormal Bases

Lemma. Let e_1, ..., e_n be orthonormal. Then e_1, ..., e_n are linearly independent and any element x ∈ span{e_1, ..., e_n} has the expansion
x = (x|e_1)e_1 + ... + (x|e_n)e_n.

Proof. Note that if x = α_1 e_1 + ... + α_n e_n, then
(x|e_i) = (α_1 e_1 + ... + α_n e_n | e_i) = α_1(e_1|e_i) + ... + α_n(e_n|e_i) = α_1 δ_{1i} + ... + α_n δ_{ni} = α_i.
Gram-Schmidt procedure. Given a linearly independent set x_1, ..., x_m in an inner product space V, it is possible to construct an orthonormal collection e_1, ..., e_m such that span{x_1, ..., x_m} = span{e_1, ..., e_m}:
e_1 = x_1/||x_1||,
z_2 = x_2 − proj_{x_1}(x_2) = x_2 − proj_{e_1}(x_2) = x_2 − (x_2|e_1)e_1,  e_2 = z_2/||z_2||,
...
z_{k+1} = x_{k+1} − (x_{k+1}|e_1)e_1 − ... − (x_{k+1}|e_k)e_k,  e_{k+1} = z_{k+1}/||z_{k+1}||.

QR Factorization.
A = [x_1 ... x_m] = [e_1 ... e_m] R = QR,
where R is the upper triangular matrix with entries R_{ij} = (x_j|e_i) for i ≤ j and R_{ij} = 0 for i > j.

Example. Consider the vectors x_1 = (1, 1, 0), x_2 = (1, 0, 1), x_3 = (0, 1, 1) in R³. Perform Gram-Schmidt:
e_1 = x_1/||x_1|| = (1, 1, 0)/√2 = (1/√2, 1/√2, 0).
z_2 = x_2 − (x_2|e_1)e_1 = (1, 0, 1) − (1/2)(1, 1, 0) = (1/2, −1/2, 1),  e_2 = z_2/||z_2|| = (1/2, −1/2, 1)/√(3/2) = (1/√6, −1/√6, 2/√6).
z_3 = x_3 − (x_3|e_1)e_1 − (x_3|e_2)e_2 = (0, 1, 1) − (1/2)(1, 1, 0) − (1/√6)(1/√6, −1/√6, 2/√6) = (−2/3, 2/3, 2/3),  e_3 = z_3/||z_3|| = (−1/√3, 1/√3, 1/√3).

2.3 Orthogonal Complements and Projections

The orthogonal projection of a vector x onto a nonzero vector y is defined by
proj_y(x) = (x | y/||y||) y/||y|| = ((x|y)/(y|y)) y.
The length of this projection is ||proj_y(x)|| = |(x|y)|/||y||.
The definition of proj_y(x) immediately implies that it is linear, from the linearity of the inner product.
The map x ↦ proj_y(x) is a projection.
Proof. We need to show proj_y(proj_y(x)) = proj_y(x):
proj_y(proj_y(x)) = proj_y(((x|y)/(y|y)) y) = ((x|y)/(y|y)) proj_y(y) = ((x|y)/(y|y)) ((y|y)/(y|y)) y = ((x|y)/(y|y)) y = proj_y(x).
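The Gram-Schmidt recursion above translates directly into code; a sketch (NumPy, using the example's three vectors):

```python
import numpy as np

def gram_schmidt(X):
    """Orthonormalize the (linearly independent) columns of X."""
    es = []
    for x in X.T:
        z = x - sum((x @ e) * e for e in es)   # subtract projections onto e_1..e_k
        es.append(z / np.linalg.norm(z))
    return np.column_stack(es)

# The example's vectors x1=(1,1,0), x2=(1,0,1), x3=(0,1,1) as columns.
X = np.column_stack([(1.0, 1.0, 0.0), (1.0, 0.0, 1.0), (0.0, 1.0, 1.0)])
Q = gram_schmidt(X)
R = Q.T @ X        # R_ij = (x_j | e_i); upper triangular since span{x_1..x_k} = span{e_1..e_k}

print(np.allclose(Q.T @ Q, np.eye(3)))  # True: columns are orthonormal
print(np.allclose(Q @ R, X))            # True: A = QR
```
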
Cauchy-Schwarz Inequality. V a complex inner product space:
|(x|y)| ≤ ||x|| ||y||,  x, y ∈ V.

Proof. First show proj_y(x) ⊥ x − proj_y(x):
(proj_y(x) | x − proj_y(x)) = (((x|y)/(y|y)) y | x − ((x|y)/(y|y)) y)
= ((x|y)/(y|y))(y|x) − ((x|y)/(y|y)) conj((x|y)/(y|y)) (y|y)
= ((x|y)(y|x))/(y|y) − ((x|y)(y|x))/(y|y) = 0,
using conj((x|y)) = (y|x). Since x = proj_y(x) + (x − proj_y(x)) is an orthogonal decomposition,
||x|| ≥ ||proj_y(x)|| = ||((x|y)/(y|y)) y|| = |(x|y)| ||y||/(y|y) = |(x|y)|/||y||,
which gives |(x|y)| ≤ ||x|| ||y||.

Triangle Inequality. V a complex inner product space: ||x + y|| ≤ ||x|| + ||y||.

Proof. ||x + y||² = (x + y | x + y) = ||x||² + 2Re(x|y) + ||y||² ≤ ||x||² + 2|(x|y)| + ||y||² ≤ ||x||² + 2||x|| ||y|| + ||y||² = (||x|| + ||y||)².

Let M ⊆ V be a finite dimensional subspace of an inner product space, and e_1, ..., e_m an orthonormal basis for M. Using that basis, define E : V → V by
E(x) = (x|e_1)e_1 + ... + (x|e_m)e_m.
Note that E(x) ∈ M and that if x ∈ M, then E(x) = x. Thus E²(x) = E(x), implying that E is a projection whose image is M. If x ∈ ker(E), then
0 = E(x) = (x|e_1)e_1 + ... + (x|e_m)e_m ⟹ (x|e_1) = ... = (x|e_m) = 0.
This is equivalent to the condition (x|z) = 0 for all z ∈ M. The set of all such vectors is the orthogonal complement to M in V, denoted
M^⊥ = {x ∈ V : (x|z) = 0 for all z ∈ M}.

Theorem. Let V be an inner product space. Assume V = M ⊕ M^⊥. Then im(proj_M) = M and ker(proj_M) = M^⊥. If M ⊆ V is finite dimensional then V = M ⊕ M^⊥ and
proj_M(x) = (x|e_1)e_1 + ... + (x|e_m)e_m
for any orthonormal basis e_1, ..., e_m for M.

Proof. For E defined as above, ker(E) = M^⊥. x = E(x) + (I − E)(x) and (I − E)(x) ∈ ker(E) = M^⊥. Choose z ∈ M. Then
||x − proj_M(x)||² ≤ ||x − proj_M(x)||² + ||proj_M(x) − z||² = ||x − z||²,
where equality holds exactly when proj_M(x) − z = 0; i.e., proj_M(x) is the unique closest point to x among the points in M.
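The operator E(x) = (x|e_1)e_1 + ... + (x|e_m)e_m defined above can be formed as the matrix Q Q^T, where the columns of Q are the orthonormal basis vectors e_i. A small numerical check (NumPy; the subspace M is an illustrative choice):

```python
import numpy as np

# Orthonormal basis e1, e2 for M = span{(1,1,0), (1,0,1)}, via a reduced QR.
Q, _ = np.linalg.qr(np.column_stack([(1.0, 1.0, 0.0), (1.0, 0.0, 1.0)]))
E = Q @ Q.T    # proj_M as a matrix: E x = (x|e1)e1 + (x|e2)e2

x = np.array([1.0, 2.0, 3.0])
print(np.allclose(E @ E, E))              # E^2 = E: E is a projection
print(np.allclose(Q.T @ (x - E @ x), 0))  # x - proj_M(x) lies in M-perp
```
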
Theorem. Let E : V → V be a projection onto M ⊆ V with the property that V = ker(E) ⊕ ker(E)^⊥. Then the following conditions are equivalent:
1) E = proj_M.
2) im(E) = ker(E)^⊥.
3) ||E(x)|| ≤ ||x|| for all x ∈ V.

Proof. We have already seen that (1) ⟺ (2). Also (1),(2) ⟹ (3), as x = E(x) + (I − E)(x) is then an orthogonal decomposition, so ||x||² = ||E(x)||² + ||(I − E)(x)||² ≥ ||E(x)||². Thus we only need to show that (3) implies that E is the orthogonal projection. Choose x ∈ ker(E)^⊥ and observe that E(x) = x − (I − E)(x), with (I − E)(x) ∈ ker(E), is an orthogonal decomposition. Thus
||E(x)||² = ||x − (I − E)(x)||² = ||x||² + ||(I − E)(x)||² ≥ ||x||²,
while (3) gives ||E(x)|| ≤ ||x||. This means that (I − E)(x) = 0 and hence x = E(x) ∈ im(E), so ker(E)^⊥ ⊆ im(E). Conversely, if z ∈ im(E), then we can write z = x + y ∈ ker(E) ⊕ ker(E)^⊥. This implies that z = E(z) = E(y) = y, where the last equality follows from ker(E)^⊥ ⊆ im(E). This means that x = 0 and hence z = y ∈ ker(E)^⊥.

3 Linear Maps on Inner Product Spaces

3.1 Adjoint Maps

The adjoint of A is the matrix A* such that (A*)_{ij} = ā_{ji}. If A : F^m → F^n, then A* : F^n → F^m, and
(Ax|y) = (Ax)^t ȳ = x^t A^t ȳ = x^t conj(A* y) = (x|A*y).

Theorem. Suppose S = {v_1, v_2, ..., v_k} is an orthonormal set in an n-dimensional inner product space V. Then
a) S can be extended to an orthonormal basis {v_1, ..., v_k, v_{k+1}, ..., v_n} for V.
b) If M = span(S), then S′ = {v_{k+1}, ..., v_n} is an orthonormal basis for M^⊥.
c) If M is any subspace of V, then dim(V) = dim(M) + dim(M^⊥).

Proof. a) Extend S to a basis S″ = {v_1, ..., v_k, w_{k+1}, ..., w_n} for V and apply the Gram-Schmidt process to S″. The first k vectors resulting from this process are the vectors in S, and the resulting set spans V. Normalizing the last n − k vectors of this set produces an orthonormal set that spans V.
b) Because S′ is a subset of a basis, it is linearly independent. Since S′ is clearly a subset of M^⊥, we need only show that it spans M^⊥. For any x ∈ V, we have x = Σ_{i=1}^n (x|v_i)v_i. If x ∈ M^⊥, then (x|v_i) = 0 for 1 ≤ i ≤ k. Therefore x = Σ_{i=k+1}^n (x|v_i)v_i ∈ span(S′).
c) Let M be a subspace of V. It is a finite-dimensional inner product space because V is, and so it has an orthonormal basis {v_1, v_2, ..., v_k}. By (a) and (b), we have dim(V) = n = k + (n − k) = dim(M) + dim(M^⊥).
Theorem. Let M be a subspace of a finite dimensional inner product space V. Then V = M ⊕ M^⊥.

Proof. By the Gram-Schmidt process, we can obtain an orthonormal basis {v_1, ..., v_k} of M, and by the theorem above, we can extend it to an orthonormal basis {v_1, ..., v_n} of V. Hence v_{k+1}, ..., v_n ∈ M^⊥. If x ∈ V, then x = a_1 v_1 + ... + a_n v_n, where a_1 v_1 + ... + a_k v_k ∈ M and a_{k+1} v_{k+1} + ... + a_n v_n ∈ M^⊥. Accordingly, V = M + M^⊥. On the other hand, if x ∈ M ∩ M^⊥, then (x|x) = 0. This yields x = 0. Hence M ∩ M^⊥ = {0}.

Theorem. a) M ⊆ M^⊥⊥. b) If M is a subspace of a finite-dimensional space V, then M^⊥⊥ = M.

Proof. a) Let x ∈ M. Then (x|z) = 0 for all z ∈ M^⊥; hence x ∈ M^⊥⊥. b) V = M ⊕ M^⊥ and also V = M^⊥ ⊕ M^⊥⊥. Hence dim M = dim V − dim M^⊥ and dim M^⊥⊥ = dim V − dim M^⊥. This yields dim M = dim M^⊥⊥. Since M ⊆ M^⊥⊥ by (a), we have M = M^⊥⊥.

Fredholm Alternative. Let L : V → W be a linear map between finite dimensional inner product spaces. Then
ker(L) = im(L*)^⊥,  ker(L*) = im(L)^⊥,  ker(L)^⊥ = im(L*),  ker(L*)^⊥ = im(L).

Proof. Using L** = L and M^⊥⊥ = M, the four statements are equivalent, so it suffices to prove the first. ker L = {x ∈ V : Lx = 0}; for L : V → W, im(L) = {Lx : x ∈ V}; for L* : W → V, im(L*) = {L*y : y ∈ W}. Now
im(L*)^⊥ = {x ∈ V : (x|L*y) = 0 for all y ∈ W} = {x ∈ V : (Lx|y) = 0 for all y ∈ W}.
If x ∈ ker L, then (Lx|y) = 0 for all y, so x ∈ im(L*)^⊥. Conversely, if (Lx|y) = 0 for all y ∈ W, then Lx = 0, i.e. x ∈ ker L.

Rank Theorem. Let L : V → W be a linear map between finite dimensional inner product spaces. Then rank(L) = rank(L*).

Proof. dim V = dim(ker(L)) + dim(im(L)) = dim(im(L*)^⊥) + dim(im(L)) = dim V − dim(im(L*)) + dim(im(L)), so dim(im(L*)) = dim(im(L)).

Corollary. For an n × m matrix A, the column rank equals the row rank.

Proof. Conjugation does not change the rank. rank(A) is the column rank, while rank(A*) is the column rank of the conjugate transpose, i.e. the row rank of the conjugate of A.

Corollary. Let L : V → V be a linear operator on a finite dimensional inner product space. Then λ is an eigenvalue of L ⟺ λ̄ is an eigenvalue of L*. Moreover, these eigenvalue pairs have the same geometric multiplicity:
dim(ker(L − λI_V)) = dim(ker(L* − λ̄I_V)).

Proof. Note that (L − λI_V)* = L* − λ̄I_V. Thus we only need to show dim(ker(L)) = dim(ker(L*)):
dim(ker(L)) = dim V − dim(im(L)) = dim V − dim(im(L*)) = dim(ker(L*)).
3.2 Self-Adjoint Maps

A linear operator L : V → V is self-adjoint (Hermitian) if L* = L, and skew-adjoint if L* = −L.

Theorem. If L is a self-adjoint operator on a finite-dimensional inner product space V, then every eigenvalue of L is real.

Proof. Method I: Suppose L is a self-adjoint operator on V. Let λ be an eigenvalue of L, and let x be a nonzero vector in V such that Lx = λx. Then
λ(x|x) = (λx|x) = (Lx|x) = (x|L*x) = (x|Lx) = (x|λx) = λ̄(x|x).
Thus λ = λ̄, which means that λ is real.

Proof. Method II: Suppose that L(x) = λx for x ≠ 0. Because a self-adjoint operator is normal, if x is an eigenvector of L with eigenvalue λ, then x is also an eigenvector of L* with eigenvalue λ̄. Thus λx = L(x) = L*(x) = λ̄x, so λ = λ̄.

Proposition. If L is self- or skew-adjoint, then for each invariant subspace M ⊆ V the orthogonal complement is also invariant, i.e., if L(M) ⊆ M, then also L(M^⊥) ⊆ M^⊥.

Proof. Assume that L(M) ⊆ M. If x ∈ M and z ∈ M^⊥, then since L(x) ∈ M we have
0 = (z|L(x)) = (L*(z)|x) = ±(L(z)|x).
Since this holds for all x ∈ M, it follows that L(z) ∈ M^⊥.

3.3 Polarization and Isometries

Real inner product on V: (x + y|x + y) = (x|x) + 2(x|y) + (y|y), so
(x|y) = ½((x + y|x + y) − (x|x) − (y|y)) = ½(||x + y||² − ||x||² − ||y||²).

Complex inner products (which are only conjugate symmetric) on V: (x + y|x + y) = (x|x) + 2Re(x|y) + (y|y), so
Re(x|y) = ½(||x + y||² − ||x||² − ||y||²).
Also Re(x|iy) = Re(−i(x|y)) = Im(x|y). In particular, we have
Im(x|y) = ½(||x + iy||² − ||x||² − ||y||²).

We can use these ideas to check when linear operators L : V → V are zero. First note that L = 0 ⟺ (L(x)|y) = 0 for all x, y ∈ V. To check the ⟸ part, let y = L(x) to see that L(x) = 0 for all x ∈ V.

Theorem. Let L : V → V be self-adjoint. Then L = 0 ⟺ (L(x)|x) = 0 for all x ∈ V.

Proof. (⟹) is clear. Assume that (L(x)|x) = 0 for all x ∈ V. Then
0 = (L(x + y)|x + y) = (L(x)|x) + (L(x)|y) + (L(y)|x) + (L(y)|y) = (L(x)|y) + (y|L*(x)) = (L(x)|y) + conj((L(x)|y)) = 2Re(L(x)|y).
Now insert y = L(x) to see that 0 = Re(L(x)|L(x)) = ||L(x)||².
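The theorem that self-adjoint operators have real spectra can be illustrated numerically; a sketch with an assumed randomly generated Hermitian matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
L = B + B.conj().T      # L* = L: a self-adjoint (Hermitian) matrix

lam = np.linalg.eigvals(L)
print(np.allclose(lam.imag, 0))  # True: every eigenvalue is real
```
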
Theorem. Let L : V → V be a linear map on a complex inner-product space. Then L = 0 ⟺ (L(x)|x) = 0 for all x ∈ V.

Proof. (⟹) is clear. Assume that (L(x)|x) = 0 for all x ∈ V. Then
0 = (L(x + y)|x + y) = (L(x)|x) + (L(x)|y) + (L(y)|x) + (L(y)|y) = (L(x)|y) + (L(y)|x),
0 = (L(x + iy)|x + iy) = (L(x)|x) + (L(x)|iy) + (L(iy)|x) + (L(iy)|iy) = −i(L(x)|y) + i(L(y)|x).
In matrix form,
[ 1 1 ; −i i ] [ (L(x)|y) ; (L(y)|x) ] = [ 0 ; 0 ].
Since the columns of the matrix on the left are linearly independent, the only solution is the trivial one. In particular (L(x)|y) = 0 for all x, y ∈ V, so L = 0.

3.4 Unitary and Orthogonal Operators

A linear transformation A is orthogonal if AA^T = I, and unitary if AA* = I, i.e. A^{−1} = A*.

Theorem. L : V → W is a linear map between inner product spaces. TFAE:
1) L*L = I_V (L is unitary);
2) (L(x)|L(y)) = (x|y) for all x, y ∈ V (L preserves inner products);
3) ||L(x)|| = ||x|| for all x ∈ V (L preserves lengths).

Proof. (1) ⟹ (2): L*L = I_V ⟹ (L(x)|L(y)) = (x|L*L(y)) = (x|Iy) = (x|y) for all x, y ∈ V. Also note: L takes orthonormal sets of vectors to orthonormal sets of vectors.
(2) ⟹ (3): (L(x)|L(y)) = (x|y) for all x, y ∈ V ⟹ ||L(x)||² = (L(x)|L(x)) = (x|x) = ||x||².
(3) ⟹ (1): ||L(x)|| = ||x|| for all x ∈ V ⟹ (L*L(x)|x) = (L(x)|L(x)) = (x|x) = (Ix|x) ⟹ ((L*L − I)(x)|x) = 0 for all x ∈ V. Since L*L − I is self-adjoint (check), the theorem above gives L*L = I.

Two inner product spaces V and W over F are isometric if we can find an isometry L : V → W, i.e. an isomorphism such that (L(x)|L(y)) = (x|y).

Theorem. Suppose L is unitary; then L is an isometry on V.

Proof. An isometry on V is a mapping which preserves distances. Since L is unitary, ||L(x) − L(y)|| = ||L(x − y)|| = ||x − y||. Thus L is an isometry.
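Conditions 2) and 3) of the theorem can be observed numerically for an orthogonal matrix; a sketch (NumPy, assumed random data):

```python
import numpy as np

rng = np.random.default_rng(4)
# An orthogonal matrix: Q factor of a random square matrix.
L, _ = np.linalg.qr(rng.standard_normal((3, 3)))

x = rng.standard_normal(3)
y = rng.standard_normal(3)
print(np.isclose((L @ x) @ (L @ y), x @ y))                  # preserves inner products
print(np.isclose(np.linalg.norm(L @ x), np.linalg.norm(x)))  # preserves lengths
```
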
3.5 Spectral Theorem

Theorem. Let L : V → V be a self-adjoint operator on a finite dimensional inner product space. Then we can find a real eigenvalue λ for L.

Spectral Theorem. Let L : V → V be a self-adjoint operator on a finite dimensional inner product space. Then there exists an orthonormal basis e_1, ..., e_n of eigenvectors, i.e. L(e_1) = λ_1 e_1, ..., L(e_n) = λ_n e_n. Moreover, all eigenvalues λ_1, ..., λ_n are real.

Proof. By induction on dim V. Since L = L*, we can find v ∈ V, λ ∈ R such that L(v) = λv (e.g. by Lagrange multipliers). Let v^⊥ = {x ∈ V : (x|v) = 0}, the orthogonal complement of v; dim v^⊥ = dim V − 1. We show L leaves v^⊥ invariant, i.e. L(v^⊥) ⊆ v^⊥. Let x ∈ v^⊥; then
(L(x)|v) = (x|L*(v)) = (x|L(v)) = (x|λv) = λ(x|v) = 0,
using λ ∈ R. Thus L|_{v^⊥} : v^⊥ → v^⊥ is again self-adjoint, because (L(x)|y) = (x|L(y)) for all x, y ∈ V, in particular for all x, y ∈ v^⊥. Let e_1 = v/||v||, and let e_2, ..., e_n be an orthonormal basis for v^⊥ (by induction) with L(e_i) = λ_i e_i, i = 2, ..., n. Check: (e_1|e_i) = 0 for all i ≥ 2, since e_i ∈ v^⊥.

Corollary. Let L : V → V be a self-adjoint operator on a finite dimensional inner product space. Then there exists an orthonormal basis e_1, ..., e_n of eigenvectors and a real n × n diagonal matrix D such that
L = [e_1 ... e_n] D [e_1 ... e_n]* = [e_1 ... e_n] diag(λ_1, ..., λ_n) [e_1 ... e_n]*.

3.6 Normal Operators

An operator L : V → V on an inner product space is normal if LL* = L*L. Self-adjoint, skew-adjoint and isometric operators are normal. Over C, these are precisely the operators that admit an orthonormal basis that diagonalizes them.

Proposition. LL* = L*L ⟺ ||L(x)|| = ||L*(x)|| for all x ∈ V.

Proof. ||L(x)|| = ||L*(x)|| ⟺ ||L(x)||² = ||L*(x)||² ⟺ (L(x)|L(x)) = (L*(x)|L*(x)) ⟺ (x|L*L(x)) = (x|LL*(x)) ⟺ (x|(L*L − LL*)(x)) = 0 ⟺ L*L − LL* = 0, since L*L − LL* is self-adjoint.

Theorem. If V is a complex inner product space and L : V → V is normal, then
ker(L − λI_V) = ker(L* − λ̄I_V) for all λ ∈ C.

Proof. Observe that L − λI_V is normal and use the previous proposition to conclude that ||(L − λI_V)(x)|| = ||(L* − λ̄I_V)(x)||.
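The Spectral Theorem's factorization L = [e_1 ... e_n] D [e_1 ... e_n]* is exactly what numerical symmetric eigensolvers return; a sketch with NumPy's `eigh` on an assumed random symmetric matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4))
A = B + B.T                  # real symmetric, hence self-adjoint

lam, O = np.linalg.eigh(A)   # real eigenvalues + orthonormal eigenvectors
print(np.allclose(O.T @ O, np.eye(4)))         # columns form an orthonormal basis
print(np.allclose(O @ np.diag(lam) @ O.T, A))  # A = O D O^*
```
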
Spectral Theorem for Normal Operators. Let L : V → V be a normal operator on a complex inner product space. Then there exists an orthonormal basis e_1, ..., e_n such that L(e_1) = λ_1 e_1, ..., L(e_n) = λ_n e_n.

Proof. By induction on dim V. Since L is complex linear, we can use the Fundamental Theorem of Algebra to find λ ∈ C and x ∈ V \ {0} so that L(x) = λx; then also L*(x) = λ̄x, since ker(L − λI_V) = ker(L* − λ̄I_V). Let x^⊥ = {z ∈ V : (z|x) = 0}, the orthogonal complement of x. For the induction we need to show that x^⊥ is invariant under L, i.e. L(x^⊥) ⊆ x^⊥. Let z ∈ x^⊥ and show L(z) ∈ x^⊥:
(L(z)|x) = (z|L*(x)) = (z|λ̄x) = λ(z|x) = 0.
Similarly, x^⊥ is invariant under L*, i.e. L* : x^⊥ → x^⊥, since
(L*(z)|x) = (z|L(x)) = (z|λx) = λ̄(z|x) = 0.
Check that L|_{x^⊥} is normal: (L|_{x^⊥})* = L*|_{x^⊥}, since (L(z)|y) = (z|L*(y)) for all z, y ∈ x^⊥.

3.7 Unitary Equivalence

Two n×n matrices A and B are unitarily equivalent if A = UBU*, where U is an n×n matrix such that U*U = UU* = I_{F^n}.

Corollary (n×n matrices).
1. A normal matrix is unitarily equivalent to a diagonal matrix.
2. A self-adjoint matrix is unitarily (or, if real, orthogonally) equivalent to a real diagonal matrix.
3. A skew-adjoint matrix is unitarily equivalent to a purely imaginary diagonal matrix.
4. A unitary matrix is unitarily equivalent to a diagonal matrix whose diagonal elements are unit scalars.

3.8 Triangulability

Schur's Theorem. Let L : V → V be a linear operator on a finite dimensional complex inner product space. Then we can find an orthonormal basis e_1, ..., e_n such that the matrix representation [L] is upper triangular in this basis, i.e.
L = [e_1 ... e_n] [L] [e_1 ... e_n]*,
where [L] has entries α_{ij} with α_{ij} = 0 for i > j (so α_{11}, ..., α_{nn} sit on the diagonal).
Generalized Schur's Theorem. Let L : V → V be a linear operator on an n dimensional vector space over F. Assume that χ_L(t) = (t − λ_1)⋯(t − λ_n) for λ_1, ..., λ_n ∈ F. Then V admits a basis x_1, ..., x_n such that the matrix representation with respect to x_1, ..., x_n is upper triangular.

Proof. The proof is by induction on the dimension n of V. The result is immediate if n = 1. So suppose that the result is true for linear operators on (n−1)-dimensional inner product spaces whose characteristic polynomials split. We can assume that L* has a unit eigenvector z; suppose that L*(z) = λz and that W = span({z}). We show that W^⊥ is L-invariant. If y ∈ W^⊥ and x = cz ∈ W, then
(L(y)|x) = (L(y)|cz) = (y|L*(cz)) = (y|cL*(z)) = (y|cλz) = conj(cλ)(y|z) = conj(cλ) · 0 = 0.
So L(y) ∈ W^⊥. It is easy to show that the characteristic polynomial of L|_{W^⊥} divides the characteristic polynomial of L and hence splits. Since dim(W^⊥) = n − 1, we may apply the induction hypothesis to L|_{W^⊥} and obtain an orthonormal basis γ of W^⊥ such that [L|_{W^⊥}]_γ is upper triangular. Then β = γ ∪ {z} is an orthonormal basis for V such that [L]_β is upper triangular.

4 Determinants

4.1 Characteristic Polynomial

The characteristic polynomial of A is defined as χ_A(t) = t^n + α_{n−1}t^{n−1} + ... + α_1 t + α_0. The characteristic polynomial of L : V → V can be defined by χ_L(t) = det(L − tI_V).
Facts: det L* = conj(det L); det A = det A^T. If A is orthogonal, det A = ±1, since 1 = det(I) = det(AA^T) = det(A) det(A^T) = (det A)². If U is unitary, |det U| = 1, and all |λ_i| = 1.
5 Linear Operators

5.1 Dual Spaces

For a vector space V over F, we define the dual space V* = hom(V, F) as the set of linear functions on V, i.e. V* = {f : V → F | f is linear}.

Let x_1, ..., x_n be a basis for V. For each i, there is a unique linear functional f_i on V such that f_i(x_j) = δ_{ij}. In this way we obtain from x_1, ..., x_n a set of n distinct linear functionals f_1, ..., f_n on V. These functionals are also linearly independent. For, suppose
f = Σ_{i=1}^n c_i f_i.
Then
f(x_j) = Σ_{i=1}^n c_i f_i(x_j) = Σ_{i=1}^n c_i δ_{ij} = c_j.
In particular, if f is the zero functional, then f(x_j) = 0 for each j and hence the scalars c_j are all 0. Now f_1, ..., f_n are n linearly independent functionals, and since we know that V* has dimension n, it must be that f_1, ..., f_n is a basis for V*. This basis is called the dual basis. We have shown that there exists a unique dual basis {f_1, ..., f_n} for V*.

If f is a linear functional on V, then f is some linear combination of the f_i, and the scalars c_j must be given by c_j = f(x_j):
f = Σ_{i=1}^n f(x_i) f_i.
Similarly, if x = Σ_{i=1}^n α_i x_i is a vector in V, then
f_j(x) = Σ_{i=1}^n α_i f_j(x_i) = Σ_{i=1}^n α_i δ_{ij} = α_j,
so that the unique expression for x as a linear combination of the x_i is
x = Σ_{i=1}^n f_i(x) x_i,  f_i(x) = α_i = the i-th coordinate of x.

Let M ⊆ V be a subspace and define the annihilator of M in V* as
M⁰ = {f ∈ V* : f(x) = 0 for all x ∈ M} = {f ∈ V* : f(M) = {0}} = {f ∈ V* : f|_M = 0}.
The annihilator is the counterpart of the orthogonal complement.
Example. Let β = {x_1, x_2} = {(2, 1), (3, 1)} be a basis for R², writing vectors as (ξ_1, ξ_2). We find the dual basis of β, given by β* = {f_1, f_2}. To determine formulas for f_1 and f_2, we seek functionals f_1(ξ_1, ξ_2) = a_1 ξ_1 + a_2 ξ_2 and f_2(ξ_1, ξ_2) = b_1 ξ_1 + b_2 ξ_2 such that f_1(x_1) = 1, f_1(x_2) = 0, f_2(x_1) = 0, f_2(x_2) = 1. Thus
1 = f_1(x_1) = f_1(2, 1) = 2a_1 + a_2,  0 = f_1(x_2) = f_1(3, 1) = 3a_1 + a_2,
0 = f_2(x_1) = f_2(2, 1) = 2b_1 + b_2,  1 = f_2(x_2) = f_2(3, 1) = 3b_1 + b_2.
The solutions yield a_1 = −1, a_2 = 3 and b_1 = 1, b_2 = −2. Hence f_1(ξ_1, ξ_2) = −ξ_1 + 3ξ_2 and f_2(ξ_1, ξ_2) = ξ_1 − 2ξ_2, or f_1 = (−1, 3), f_2 = (1, −2), form the dual basis.

Example. Let β = {x_1, x_2, x_3} = {(1, 0, 2), (0, 1, 0), (2, 0, 1)} be a basis for R³, writing vectors as (ξ_1, ξ_2, ξ_3). We find the dual basis of β, given by β* = {f_1, f_2, f_3}. We seek functionals f_1 = a_1ξ_1 + a_2ξ_2 + a_3ξ_3, f_2 = b_1ξ_1 + b_2ξ_2 + b_3ξ_3 and f_3 = c_1ξ_1 + c_2ξ_2 + c_3ξ_3 such that f_i(x_j) = δ_{ij}:
1 = f_1(x_1) = a_1 + 2a_3,  0 = f_1(x_2) = a_2,  0 = f_1(x_3) = 2a_1 + a_3;
0 = f_2(x_1) = b_1 + 2b_3,  1 = f_2(x_2) = b_2,  0 = f_2(x_3) = 2b_1 + b_3;
0 = f_3(x_1) = c_1 + 2c_3,  0 = f_3(x_2) = c_2,  1 = f_3(x_3) = 2c_1 + c_3.
Thus a_1 = −1/3, a_2 = 0, a_3 = 2/3; b_1 = 0, b_2 = 1, b_3 = 0; c_1 = 2/3, c_2 = 0, c_3 = −1/3. Hence f_1 = −(1/3)ξ_1 + (2/3)ξ_3, f_2 = ξ_2, f_3 = (2/3)ξ_1 − (1/3)ξ_3, or f_1 = (−1/3, 0, 2/3), f_2 = (0, 1, 0), f_3 = (2/3, 0, −1/3), form the dual basis.

Example. Let W be the subspace of R⁴ spanned by x_1 = (1, 2, −3, 4) and x_2 = (0, 1, 4, −1). We find a basis for W⁰, the annihilator of W. It suffices to find a basis of the set of linear functionals f(ξ_1, ξ_2, ξ_3, ξ_4) = a_1ξ_1 + a_2ξ_2 + a_3ξ_3 + a_4ξ_4 for which f(x_1) = 0 and f(x_2) = 0:
f(1, 2, −3, 4) = a_1 + 2a_2 − 3a_3 + 4a_4 = 0,
f(0, 1, 4, −1) = a_2 + 4a_3 − a_4 = 0.
The system of equations in a_1, a_2, a_3, a_4 is in echelon form with free variables a_3 and a_4.
Set a_3 = 1, a_4 = 0 to obtain a_1 = 11, a_2 = −4, so f_1(ξ_1, ξ_2, ξ_3, ξ_4) = 11ξ_1 − 4ξ_2 + ξ_3.
Set a_3 = 0, a_4 = 1 to obtain a_1 = −6, a_2 = 1, so f_2(ξ_1, ξ_2, ξ_3, ξ_4) = −6ξ_1 + ξ_2 + ξ_4.
The set of linear functionals {f_1, f_2} is a basis of W⁰.
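For a basis of R^n stored as the columns of a matrix X, the condition f_i(x_j) = δ_{ij} says that the matrix whose rows are the dual-basis functionals is X^{-1}. A check against the first example above (NumPy):

```python
import numpy as np

# First example above: basis x1=(2,1), x2=(3,1) of R^2, as columns of X.
X = np.column_stack([(2.0, 1.0), (3.0, 1.0)])

# If row i of F holds the coefficients of f_i, then (F X)_{ij} = f_i(x_j),
# so F X = I and the dual basis is F = X^{-1}.
F = np.linalg.inv(X)
print(F)  # [[-1.  3.]
          #  [ 1. -2.]]  i.e. f1 = (-1, 3), f2 = (1, -2)
```
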
Example. Given the annihilator described by the three linear functionals on R⁴
f_1(ξ_1, ξ_2, ξ_3, ξ_4) = ξ_1 + ξ_2 + 2ξ_3 + ξ_4,
f_2(ξ_1, ξ_2, ξ_3, ξ_4) = ξ_2 + ξ_4,
f_3(ξ_1, ξ_2, ξ_3, ξ_4) = 2ξ_1 + 4ξ_3 + 3ξ_4,
we find the subspace it annihilates. After row reduction, we find that the functionals below annihilate the same subspace:
g_1(ξ_1, ξ_2, ξ_3, ξ_4) = ξ_1 + 2ξ_3,
g_2(ξ_1, ξ_2, ξ_3, ξ_4) = ξ_2,
g_3(ξ_1, ξ_2, ξ_3, ξ_4) = ξ_4.
The subspace annihilated consists of the vectors with ξ_1 = −2ξ_3, ξ_2 = ξ_4 = 0. Thus the subspace that is annihilated is given by span{(−2, 0, 1, 0)}.

Proposition. If M ⊆ V is a subspace of a finite dimensional space and x_1, ..., x_n is a basis for V such that M = span{x_1, ..., x_m}, then M⁰ = span{f_{m+1}, ..., f_n}, where f_1, ..., f_n is the dual basis. In particular we have
dim(M) + dim(M⁰) = dim(V) = dim(V*).

Proof. Let x_1, ..., x_m be a basis for M, so M = span{x_1, ..., x_m}; extend it to a basis {x_1, ..., x_n} for V and construct the dual basis f_1, ..., f_n for V*, f_i(x_j) = δ_{ij}. We show that f_{m+1}, ..., f_n is a basis for M⁰. First, M⁰ ⊆ span{f_{m+1}, ..., f_n}: let f ∈ M⁰ ⊆ V*. Then
f = Σ_{i=1}^n c_i f_i = Σ_{i=1}^n f(x_i) f_i = Σ_{i=1}^m f(x_i) f_i + Σ_{i=m+1}^n f(x_i) f_i = Σ_{i=m+1}^n f(x_i) f_i ∈ span{f_{m+1}, ..., f_n},
since f(x_i) = 0 for i ≤ m. Second, {f_{m+1}, ..., f_n} is linearly independent, being a subset of a basis for V*. Thus dim(M⁰) = n − m = dim(V) − dim(M).

Theorem. Let W_1 and W_2 be subspaces of a finite-dimensional vector space. Then W_1 = W_2 ⟺ W_1⁰ = W_2⁰.

Proof. If W_1 = W_2, then of course W_1⁰ = W_2⁰. If W_1 ≠ W_2, then one of the two subspaces contains a vector which is not in the other. Suppose there is a vector x ∈ W_2 but x ∉ W_1. There is a linear functional f such that f(z) = 0 for all z ∈ W_1, but f(x) ≠ 0. Then f ∈ W_1⁰ but f ∉ W_2⁰, so W_1⁰ ≠ W_2⁰.

Theorem. Let W be a subspace of a finite-dimensional vector space V. Then W⁰⁰ = W (identifying V** with V).

Proof. dim W + dim W⁰ = dim V and dim W⁰ + dim W⁰⁰ = dim V*, and since dim V = dim V* we have dim W = dim W⁰⁰. Since W ⊆ W⁰⁰, we see that W⁰⁰ = W.
Proposition. Assume that the finite dimensional space V = M ⊕ N. Then also V* = M⁰ ⊕ N⁰, and the restriction maps V* → M* and V* → N* give isomorphisms M⁰ ≅ N*, N⁰ ≅ M*.

Proof. Select a basis x_1, ..., x_n for V such that M = span{x_1, ..., x_m} and N = span{x_{m+1}, ..., x_n}. Then let f_1, ..., f_n be the dual basis and simply observe that M⁰ = span{f_{m+1}, ..., f_n} and N⁰ = span{f_1, ..., f_m}. This proves that V* = M⁰ ⊕ N⁰. Next we note that
dim(M⁰) = dim(V) − dim(M) = dim(N) = dim(N*),
so at least M⁰ and N* have the same dimension. Also, if we restrict f_{m+1}, ..., f_n to N, then we still have f_i(x_j) = δ_{ij} for i, j = m+1, ..., n. As N = span{x_{m+1}, ..., x_n}, this means that f_{m+1}|_N, ..., f_n|_N form a basis for N*. The proof that N⁰ ≅ M* is similar.
5.2 Dual Maps

The dual space construction leads to a dual map L* : W* → V* for a linear map L : V → W. This dual map is a substitute for the adjoint of L and is related to the transpose of the matrix representation of L. The definition is L*(g) = g ∘ L: if g ∈ W*, we get a linear function g ∘ L : V → F, since L : V → W. The dual of L is often denoted L* = L^t. Writing (x|f) for f(x), the dual map satisfies
(L(x)|g) = (x|L*(g)) for all x ∈ V and g ∈ W*.

Generalized Fredholm Alternative. Let L : V → W be a linear map between finite dimensional vector spaces. Then
ker(L) = im(L*)⁰,  ker(L*) = im(L)⁰,  ker(L)⁰ = im(L*),  ker(L*)⁰ = im(L),
where for a subspace S ⊆ V* we identify S⁰ ⊆ V** with a subspace of V.

Proof. Using L** = L (under the identification V** ≅ V) and M⁰⁰ = M, the four statements are equivalent, so it suffices to prove the first. ker L = {x ∈ V : Lx = 0}; im(L) = {Lx : x ∈ V}; im(L*) = {L*(g) : g ∈ W*}. Now
im(L*)⁰ = {x ∈ V : (x|L*(g)) = 0 for all g ∈ W*} = {x ∈ V : g(L(x)) = 0 for all g ∈ W*}.
If x ∈ ker L, then g(L(x)) = g(0) = 0 for all g, so x ∈ im(L*)⁰. Conversely, if g(L(x)) = 0 for all g ∈ W*, then Lx = 0, i.e. x ∈ ker L.

Rank Theorem. Let L : V → W be a linear map between finite dimensional vector spaces. Then rank(L) = rank(L*).

Proof. dim V = dim(ker(L)) + dim(im(L)) = dim(im(L*)⁰) + dim(im(L)) = dim V − dim(im(L*)) + dim(im(L)), so dim(im(L*)) = dim(im(L)).
6 Problems

Cross Product: for a = (a_1, a_2, a_3), b = (b_1, b_2, b_3):
a × b = det [ i j k ; a_1 a_2 a_3 ; b_1 b_2 b_3 ] = (a_2 b_3 − a_3 b_2)i + (a_3 b_1 − a_1 b_3)j + (a_1 b_2 − a_2 b_1)k
= (a_2 b_3 − a_3 b_2, a_3 b_1 − a_1 b_3, a_1 b_2 − a_2 b_1).

Problem (F'03, #9). Consider a 3×3 real symmetric matrix A with determinant 6. Assume (,, 3) and (, 3, ) are eigenvectors with eigenvalues 1 and 2.
a) Give an eigenvector of the form (1, x, y), for some real x, y, which is linearly independent of the two vectors above.
b) What is the eigenvalue of this eigenvector?

Proof. a) Since A is real and symmetric, A is self-adjoint, and eigenvectors of a self-adjoint operator corresponding to distinct eigenvalues are orthogonal; indeed v_1 · v_2 = 0, so the two given vectors are orthogonal. We cross v_1 and v_2 to obtain a vector v_3 = v_1 × v_2 orthogonal to both, hence linearly independent of them; by the spectral theorem, the third eigendirection is orthogonal to the first two, so v_3 is an eigenvector. Rescaling v_3 by its first coordinate gives the required eigenvector of the form (1, x, y).
b) Since A is self-adjoint, by the spectral theorem there exists an orthonormal basis of eigenvectors and a real diagonal matrix D such that
A = ODO* = [e_1 e_2 e_3] diag(λ_1, λ_2, λ_3) [e_1 e_2 e_3]*.
Since O is orthogonal, OO* = I, i.e. O* = O^{−1}, and A = ODO^{−1}. Note
det A = det(ODO^{−1}) = det(O) det(D) det(O^{−1}) = det(D) = λ_1 λ_2 λ_3.
Thus 6 = det A = det D = λ_1 λ_2 λ_3, and with λ_1 = 1, λ_2 = 2 we get λ_3 = 3.
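The two steps of the solution (cross product for part a, determinant equals product of eigenvalues for part b) can be sketched numerically. The eigenvectors v1 and v2 below are assumed stand-ins chosen to be orthogonal, not the problem's original data:

```python
import numpy as np

# Hypothetical eigenvectors, orthogonal as the spectral theorem requires
# for distinct eigenvalues of a real symmetric matrix:
v1 = np.array([1.0, 2.0, 3.0])   # eigenvalue 1  (assumed data)
v2 = np.array([1.0, 1.0, -1.0])  # eigenvalue 2  (assumed data)
assert v1 @ v2 == 0

# a) A third eigenvector is orthogonal to both: take the cross product
#    and scale so the first entry is 1.
v3 = np.cross(v1, v2)
v3 = v3 / v3[0]
print(v3[0], np.isclose(v3 @ v1, 0), np.isclose(v3 @ v2, 0))  # 1.0 True True

# b) det(A) = product of the eigenvalues: 6 = 1 * 2 * lam3.
lam3 = 6 / (1 * 2)
print(lam3)  # 3.0
```
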
Problem (S, #9). Find the matrix representation in the standard basis of the rotation by an angle θ in the plane perpendicular to the subspace spanned by two given vectors x1 = ( , , , ) and x2 = ( , , , ) in R^4.

Proof. Let e1 = x1 / ||x1||. By Gram-Schmidt, set z = x2 - (x2 | e1) e1 and e2 = z / ||z||, so that {e1, e2} is an orthonormal basis for span{x1, x2}. Choose an orthonormal basis {e3, e4} for the orthogonal complement span{x1, x2}^⊥, which is where the rotation happens. Then

                  [ 1  0     0        0    ]
T = [e1 e2 e3 e4] [ 0  1     0        0    ] [e1 e2 e3 e4]^{-1}.
                  [ 0  0  cos(θ)  -sin(θ)  ]
                  [ 0  0  ±sin(θ)  cos(θ)  ]
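The orthonormalization step above is Gram-Schmidt. A minimal sketch (the spanning vectors here are hypothetical placeholders, since the original problem data is garbled in this copy):

```python
from math import sqrt

def gram_schmidt(vectors):
    """Return an orthonormal basis for the span of `vectors` (classical G-S)."""
    basis = []
    for v in vectors:
        w = list(v)
        for e in basis:
            c = sum(vi * ei for vi, ei in zip(v, e))  # coefficient (v | e)
            w = [wi - c * ei for wi, ei in zip(w, e)]
        norm = sqrt(sum(wi * wi for wi in w))
        if norm > 1e-12:          # skip linearly dependent vectors
            basis.append([wi / norm for wi in w])
    return basis

# hypothetical spanning vectors in R^4
e1, e2 = gram_schmidt([[1.0, 1.0, 1.0, 1.0], [1.0, 1.0, 1.0, 0.0]])
print(round(sum(a*b for a, b in zip(e1, e2)), 10))  # 0.0: orthogonal
print(round(sum(a*a for a in e2), 10))              # 1.0: unit length
```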
Problem (F, #8). Let T : R^3 -> R^3 be rotation by 60° counterclockwise about the plane perpendicular to ( , , ), and S : R^3 -> R^3 reflection about the plane perpendicular to ( , , ). Determine the matrix representation of S ∘ T in the standard basis {(1,0,0), (0,1,0), (0,0,1)}.

Proof. Rotation: pick an orthonormal basis {e1, e2, e3} with e3 along the axis (the given normal vector) and e1, e2 spanning the plane of rotation, oriented so that e3 = e1 × e2 (equivalently e1 = e2 × e3, e2 = e3 × e1). Then

               [ cos(θ)  -sin(θ)  0 ]
T = [e1 e2 e3] [ sin(θ)   cos(θ)  0 ] [e1 e2 e3]^{-1} = O_T P_θ O_T^{-1},  θ = 60°.
               [   0        0     1 ]

Reflection: pick an orthonormal basis {f1, f2, f3} with f3 along the normal of the mirror plane and f3 = f1 × f2. Then

               [ 1  0   0 ]
S = [f1 f2 f3] [ 0  1   0 ] [f1 f2 f3]^{-1} = O_S P_S O_S^{-1}.
               [ 0  0  -1 ]

Hence S ∘ T = O_S P_S O_S^{-1} O_T P_θ O_T^{-1}, which one multiplies out in the standard basis.
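Both factorizations above pass through the 2x2 rotation block R(θ). A quick sanity check (an illustration, assuming θ = 60° as reconstructed): six applications of R(60°) compose to a full turn, i.e. the identity.

```python
from math import cos, sin, pi

def rot(theta):
    """The 2x2 rotation block R(theta)."""
    return [[cos(theta), -sin(theta)],
            [sin(theta),  cos(theta)]]

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

R = rot(pi / 3)                  # 60 degrees
P = [[1.0, 0.0], [0.0, 1.0]]
for _ in range(6):
    P = mat_mul(P, R)            # six 60-degree rotations = one full turn
# + 0.0 normalizes a possible -0.0 from floating-point rounding
print([[round(x, 10) + 0.0 for x in row] for row in P])  # [[1.0, 0.0], [0.0, 1.0]]
```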
Problem (F, #8). Let T be the rotation by an angle of 60° counterclockwise about the origin in the plane perpendicular to (1, 1, 1) in R^3.
i) Find the matrix representation of T in the standard basis. Find all eigenvalues and eigenspaces of T.
ii) What are the eigenvalues and eigenspaces of T if R^3 is replaced by C^3?

Proof. i)
               [ 1     0        0    ]
T = [e1 e2 e3] [ 0  cos(θ)  -sin(θ)  ] [e1 e2 e3]^{-1},  θ = 60°,
               [ 0  sin(θ)   cos(θ)  ]
where {e1, e2, e3} is an orthonormal basis with e1 along the axis and e1 = e2 × e3, e2 = e3 × e1, e3 = e1 × e2:
e1 = (1/√3)(1, 1, 1),  e2 = (1/√2)(1, -1, 0),  e3 = ±(1/√6)(1, 1, -2).
Check: e1 × e2 = (1/√6)((1,1,1) × (1,-1,0)) = (1/√6)(1, 1, -2), so take the + sign.
We know T(e1) = e1, so λ = 1 is an eigenvalue, with eigenspace span{e1}. If z ∈ R^3, write z = α e1 + w with w ∈ span{e2, e3}. Then T(z) = α e1 + T(w), where T(w) ∈ span{e2, e3} is w rotated by 60°. So if T(z) = λ z, we must have λ(α e1 + w) = α e1 + T(w), hence λ α e1 = α e1 and T(w) = λ w, which is impossible unless w = 0, since a 60° rotation of a real plane fixes no nonzero vector. There are no further real eigenvalues or eigenvectors.
Any 3-D rotation has 1 as an eigenvalue: any vector lying along the axis of the rotation is unchanged by the rotation, and is therefore an eigenvector with eigenvalue 1. The line formed by these eigenvectors is the axis of rotation. For the special case of the null rotation, every vector in 3-D space is an eigenvector with eigenvalue 1.
Any 3-D reflection has two eigenvalues: -1 and 1. Any vector orthogonal to the plane of the mirror is reversed in direction without its length being changed; the reflected vector is -1 times the original, so such a vector is an eigenvector with eigenvalue -1. These eigenvectors form the line orthogonal to the plane of the mirror. On the other hand, any vector in the plane of the mirror is unchanged by the reflection: it is an eigenvector with eigenvalue 1, and these eigenvectors form the plane of the mirror itself. Any vector that is neither in the plane of the mirror nor orthogonal to it is not an eigenvector of the reflection.
ii) T : C^3 -> C^3: χ_T(t) = (t - 1)(t² - t + 1), with roots t = 1, e^{iπ/3}, e^{-iπ/3}.
These are then the three distinct eigenvalues, each with one complex
dimensional eigenspace. To find them, compute the e^{±iπ/3}-eigenvectors [α, β]^T and [γ, δ]^T of the 2x2 rotation block [cos 60°, -sin 60°; sin 60°, cos 60°] over C, and transport them back. The eigenspaces of T : C^3 -> C^3 are:
span{e1} for λ = 1,  span{α e2 + β e3} for e^{iπ/3},  span{γ e2 + δ e3} for e^{-iπ/3}.

Problem (S'03, #8; W, #9; F, #). Let V be an n-dimensional complex vector space and T : V -> V a linear operator. Suppose χ_T has n distinct roots. Show that T is diagonalizable.
Let V be an n-dimensional complex vector space and T : V -> V a linear operator. Let v1, ..., vn be non-zero eigenvectors with distinct eigenvalues. Prove that {v1, ..., vn} is linearly independent.

Proof. Since F = C, any root of χ_T is an eigenvalue, so we have distinct eigenvalues λ1, ..., λn. Induction on n = dim V. For n = 1, a single non-zero vector is trivially linearly independent. For n > 1, let v1, ..., vn be non-zero eigenvectors in V with distinct eigenvalues λ1, ..., λn. Suppose
α1 v1 + ... + αn vn = 0;   (6.1)
we want to show all αi = 0. Applying T,
T(α1 v1 + ... + αn vn) = T(0) = 0, so α1 λ1 v1 + ... + αn λn vn = 0.   (6.2)
Multiplying (6.1) by λn and subtracting (6.2), we get
α1 (λn - λ1) v1 + ... + α_{n-1} (λn - λ_{n-1}) v_{n-1} = 0.
By the induction hypothesis {v1, ..., v_{n-1}} is linearly independent, and λn - λi ≠ 0 for i < n, so α1 = ... = α_{n-1} = 0. Then (6.1) reduces to αn vn = 0, so αn = 0, since vn is non-zero. Thus α1 = ... = αn = 0, and {v1, ..., vn} is linearly independent.
Having shown {v1, ..., vn} is linearly independent, these n vectors generate an n-dimensional subspace, which is then all of V. Hence {v1, ..., vn} is a basis of eigenvectors, and T is diagonalizable.
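The key step of the proof (multiply by one eigenvalue, subtract, and a term dies) can be seen concretely: applying factors (T - λj I) kills all but one eigencomponent. A small sketch with T = diag(1, 2, 3), a hypothetical example not taken from the text:

```python
# T = diag(1, 2, 3) with eigenvectors v1, v2, v3 (the standard basis here),
# so the eigenvalues 1, 2, 3 are distinct.
lams = [1.0, 2.0, 3.0]

def apply_T(u):
    return [lams[i] * u[i] for i in range(3)]

# a candidate dependence x = a1*v1 + a2*v2 + a3*v3
a = [2.0, -1.0, 4.0]
x = a[:]  # coordinates in the eigenbasis

# (T - lam2*I) kills the v2 component; (T - lam3*I) then kills v3,
# leaving (lam1 - lam2)(lam1 - lam3) * a1 * v1, nonzero iff a1 != 0.
y = [t - lams[1] * xi for t, xi in zip(apply_T(x), x)]   # (T - 2I) x
z = [t - lams[2] * yi for t, yi in zip(apply_T(y), y)]   # (T - 3I) y
print(z)  # [4.0, 0.0, 0.0] = (1-2)*(1-3)*2 * v1
```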
Problem (F, #9). Let A be a real symmetric matrix. Prove that there exists an invertible matrix P such that P^{-1} A P is diagonal.

Proof. Let V = R^n with the standard inner product. A real symmetric means A^t = A, so A is self-adjoint. Let T be the linear operator on V represented by A in the standard ordered basis. Since T is self-adjoint, there exists an orthonormal basis β = {v1, ..., vn} of eigenvectors of T, with T vi = λi vi for i = 1, ..., n, where the λi are the (real) eigenvalues of T. Let D = [A]_β and let P be the matrix with v1, ..., vn as column vectors. Then
[A]_β = P^{-1} [A]_S P = P^{-1} [A v1 ... A vn] = P^{-1} [λ1 v1 ... λn vn].
With this choice P is orthogonal with real entries, so det P = ±1 ≠ 0 (P is invertible) and P^{-1} = P^t. Hence
[A]_β = P^t [λ1 v1 ... λn vn] = [v1 ... vn]^t [λ1 v1 ... λn vn] = diag(λ1, ..., λn),
since the vi are orthonormal. Then D = P^t A P = P^{-1} A P is diagonal.

Problem (S'03, #9). Let A ∈ M3(R) satisfy det(A) = 1 and A^t A = A A^t = I. Prove that the characteristic polynomial of A has 1 as a root (i.e. 1 is an eigenvalue of A).

Proof. By the fundamental theorem of algebra, χ_A(t) = (t - λ1)(t - λ2)(t - λ3) with λ1, λ2, λ3 ∈ C. A real polynomial of odd degree has a real root, so we may take λ1 ∈ R. Two cases: Case 1: λ2, λ3 ∈ R; Case 2: λ3 is the complex conjugate of λ2. We have det(A) = 1 = λ1 λ2 λ3, since the determinant is the product of the eigenvalues. A^t A = A A^t = I means A is orthogonal, hence, viewed in M3(C), A is unitary (A* = conj(A)^T = A^T). For a unitary transformation, if A xi = λi xi with xi ≠ 0, then
(xi | xi) = (A* A xi | xi) = (A xi | A xi) = (λi xi | λi xi) = |λi|² (xi | xi), so |λi| = 1.
Case 1: λ1, λ2, λ3 ∈ {±1} with λ1 λ2 λ3 = 1, so either one or all three eigenvalues equal +1.
Case 2: 1 = λ1 λ2 λ3 = λ1 λ2 conj(λ2) = λ1 |λ2|² = λ1, so λ1 = 1.
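For a 2x2 real symmetric matrix the orthogonal P can be written in closed form as a rotation by t with tan(2t) = 2b/(a - c), the classical Jacobi rotation. This sketch is an illustration of the theorem, not a construction from the handbook:

```python
from math import atan2, cos, sin

# A = [[a, b], [b, c]] real symmetric; rotating by t with tan(2t) = 2b/(a-c)
# gives P^t A P = diag(l1, l2) with P orthogonal.
a, b, c = 2.0, 1.0, 2.0
t = 0.5 * atan2(2 * b, a - c)
P = [[cos(t), -sin(t)],
     [sin(t),  cos(t)]]           # columns are orthonormal eigenvectors

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[a, b], [b, c]]
Pt = [[P[0][0], P[1][0]], [P[0][1], P[1][1]]]   # P transpose = P inverse
D = mat_mul(Pt, mat_mul(A, P))
print(abs(D[0][1]) < 1e-10 and abs(D[1][0]) < 1e-10)  # True: off-diagonal vanishes
print(sorted(round(x, 6) for x in (D[0][0], D[1][1])))  # [1.0, 3.0]: the eigenvalues
```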
Problem (S'03, #). Let T : R^n -> R^n be symmetric³ with tr(T²) = 0. Show that T = 0.

Proof. By the spectral theorem, T = O D O^{-1}, where O is orthogonal and D = diag(λ1, ..., λn) is diagonal with real entries. Then T² = O D O^{-1} O D O^{-1} = O D² O^{-1}, where D² = diag(λ1², ..., λn²). Hence
0 = tr(T²) = tr(O D² O^{-1}) = tr(O^{-1} O D²) = tr(D²) = λ1² + ... + λn².
Since each λi ∈ R, each λi² ≥ 0, so every λi = 0; thus D = 0 and T = O · 0 · O^{-1} = 0.

Problem (W, #). Let V be a finite dimensional complex inner product space and f : V -> C a linear functional. Show that f(x) = (x | y) for some y ∈ V.

Proof. Select an orthonormal basis e1, ..., en and let y = conj(f(e1)) e1 + ... + conj(f(en)) en. Then
(x | y) = (x | conj(f(e1)) e1 + ... + conj(f(en)) en) = f(e1)(x | e1) + ... + f(en)(x | en)
        = f((x | e1) e1 + ... + (x | en) en) = f(x),
since f is linear and x = (x | e1) e1 + ... + (x | en) en. We can also show that y is unique. Suppose y' is another vector in V with f(x) = (x | y') for every x ∈ V. Then (x | y - y') = 0 for all x; taking x = y - y' gives (y - y' | y - y') = 0, so y = y'.

Problem (S, #). Let V be a finite dimensional real inner product space and T, S : V -> V two commuting (i.e. ST = TS) self-adjoint linear operators. Show that there exists an orthonormal basis that simultaneously diagonalizes S and T.

Proof. Since T is self-adjoint, there is an ordered orthonormal basis {v1, ..., vn} of eigenvectors of T with eigenvalues λ1, ..., λn, and V = E_{λ1}(T) ⊕ ... ⊕ E_{λn}(T) (sum over the distinct eigenvalues). For vi ∈ E_{λi}(T), T vi = λi vi, and
T S vi = S T vi = S λi vi = λi S vi, so S vi ∈ E_{λi}(T).
Thus each E_{λi}(T) is invariant under S, i.e. S : E_{λi}(T) -> E_{λi}(T). Since the restriction S|_{E_{λi}(T)} is self-adjoint, there is an ordered orthonormal basis βi of eigenvectors of S for E_{λi}(T). Then β = ∪_{i} βi is an orthonormal basis of V diagonalizing both S and T: each vector of β is an eigenvector of S by construction, and an eigenvector of T because it lies in a single E_{λi}(T).

³ For real matrices: symmetric = self-adjoint = hermitian.
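The Riesz construction above (note the conjugated coefficients, needed because the inner product is conjugate-linear in the second slot) can be checked in C^3 with the standard basis. A small sketch with an arbitrary functional, an illustration rather than handbook text:

```python
# f is the linear functional f(x) = sum(coeffs[i] * x[i]) on C^3 with the
# standard inner product (u | v) = sum(u[i] * conj(v[i])).
coeffs = [2 + 1j, -1j, 3 + 0j]

def f(x):
    return sum(c * xi for c, xi in zip(coeffs, x))

# Riesz vector: y = sum conj(f(e_i)) e_i; here f(e_i) = coeffs[i].
y = [c.conjugate() for c in coeffs]

def inner(u, v):
    """(u | v), linear in the first slot, conjugate-linear in the second."""
    return sum(ui * vi.conjugate() for ui, vi in zip(u, v))

x = [1 + 2j, 3 - 1j, -2 + 0j]
print(f(x) == inner(x, y))  # True: f(x) = (x | y)
```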
Problem (S, #). Let V be a complex inner product space and W a finite dimensional subspace. Let v ∈ V. Prove that there exists a unique vector v_W ∈ W such that
||v - v_W|| ≤ ||v - w|| for all w ∈ W.
Deduce that equality holds if and only if w = v_W.

Proof. v_W is the orthogonal projection of v onto W. Choose an orthonormal basis e1, ..., en for W and define
proj_W(x) = (x | e1) e1 + ... + (x | en) en.
Claim: x_W = proj_W(x).
First show x - proj_W(x) ⊥ proj_W(x):
(x - Σ_i (x|ei) ei | Σ_j (x|ej) ej)
 = (x | Σ_j (x|ej) ej) - (Σ_i (x|ei) ei | Σ_j (x|ej) ej)
 = Σ_j conj((x|ej)) (x|ej) - Σ_i Σ_j (x|ei) conj((x|ej)) (ei | ej)
 = Σ_j |(x|ej)|² - Σ_i |(x|ei)|² = 0,
since (ei | ej) = δij. The same computation, with w = Σ_j (w|ej) ej ∈ W in place of proj_W(x) in the second slot, shows x - proj_W(x) ⊥ w for every w ∈ W.
Now for any w ∈ W, decompose x - w = (x - proj_W(x)) + (proj_W(x) - w), where the two summands are orthogonal since proj_W(x) - w ∈ W. By Pythagoras,
||x - w||² = ||x - proj_W(x)||² + ||proj_W(x) - w||² ≥ ||x - proj_W(x)||²,
with equality exactly when proj_W(x) - w = 0, i.e. w = proj_W(x). This proves existence of the minimizer v_W = proj_W(v) and, via the equality case, its uniqueness.

Problem (F'03, #). a) Let t ∈ R be such that t is not an integer multiple of π. For the matrix
A = [ cos(t)  -sin(t) ]
    [ sin(t)   cos(t) ]
prove there does not exist a real matrix B such that B A B^{-1} is a diagonal matrix.
b) Do the same for the matrix A = [λ, 1; 0, λ], where λ ∈ R \ {0}.

Proof. a) det(A - λI) = (cos t - λ)² + sin² t = λ² - 2λ cos t + 1, so λ_{1,2} = cos t ± √(cos² t - 1). Since t is not an integer multiple of π, cos² t - 1 < 0, so λ_{1,2} = a ± ib with b ≠ 0, i.e. λ_{1,2} ∉ R. Hence, eigenvectors
are not real, and there is no B ∈ M2(R) such that B A B^{-1} is diagonal (a real diagonalization would exhibit real eigenvalues).
b) λ_{1,2} = λ. We find the eigenvectors: (A - λI) w = [0, 1; 0, 0] w = 0 forces the second coordinate of w to vanish. Thus both eigenvectors are multiples of v = (1, 0), i.e. linearly dependent, so there does not exist a basis for R² consisting of eigenvectors of A. Therefore there is no B ∈ M2(R) such that B A B^{-1} is diagonal.
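The orthogonal projection used in the best-approximation problem above is easy to exercise numerically. A sketch with W spanned by the first two standard basis vectors of R^3 (a hypothetical example chosen for clarity):

```python
from math import sqrt

def proj(x, basis):
    """Orthogonal projection of x onto span(basis); basis must be orthonormal."""
    out = [0.0] * len(x)
    for e in basis:
        c = sum(xi * ei for xi, ei in zip(x, e))   # coefficient (x | e)
        out = [oi + c * ei for oi, ei in zip(out, e)]
    return out

def dist(u, v):
    return sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

W = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]   # orthonormal basis of W
x = [3.0, 4.0, 5.0]
p = proj(x, W)
print(p)  # [3.0, 4.0, 0.0]
# best approximation: ||x - p|| <= ||x - w|| for any w in W
w = [1.0, 1.0, 0.0]
print(dist(x, p) <= dist(x, w))  # True
```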
Problem (F, #) (Spectral Theorem for Normal Operators). Let A ∈ Mat_{n×n}(C) satisfy A* A = A A*, i.e. A is normal. Show that there is an orthonormal basis of eigenvectors of A. Rephrased: the same for a normal operator L : V -> V, V a complex finite dimensional inner product space.

Proof. Induction on dim V. Since L is complex linear, by the Fundamental Theorem of Algebra we can find λ ∈ C and x ∈ V \ {0} with L(x) = λx. Since L is normal, ker(L - λ I_V) = ker(L* - conj(λ) I_V), so also L*(x) = conj(λ) x. Let x^⊥ = {z ∈ V : (z | x) = 0}, the orthogonal complement of x. For the induction step we need x^⊥ invariant under L, i.e. L(x^⊥) ⊆ x^⊥. Let z ∈ x^⊥; then
(L(z) | x) = (z | L*(x)) = (z | conj(λ) x) = λ (z | x) = 0,
so L(z) ∈ x^⊥. Similarly, x^⊥ is invariant under L*, i.e. L* : x^⊥ -> x^⊥, since
(L*(z) | x) = (z | L(x)) = (z | λx) = conj(λ) (z | x) = 0.
Finally, check that the restriction L|_{x^⊥} is again normal: (L|_{x^⊥})* = L*|_{x^⊥}, since (L(z) | y) = (z | L* y) for all z, y ∈ x^⊥. Apply the induction hypothesis to L|_{x^⊥} and adjoin x/||x|| to the resulting orthonormal basis.
Problem (W, #). Let V be a finite dimensional complex inner product space and L : V -> V a linear transformation. Show that we can find an orthonormal basis in which [L] is upper triangular.

Proof. [L] upper triangular with respect to e1, ..., en means
                              [ α11  α12  ...  α1n ]
[L(e1) ... L(en)] = [e1 ... en][  0   α22  ...  α2n ]
                              [  :         ...   :  ]
                              [  0    0   ...  αnn ],
so in particular L(e1) = α11 e1: e1 is an eigenvector with eigenvalue α11. For dim V = 2: since L : V -> V is complex, we can pick e1 ∈ V with L e1 = α11 e1; pick e2 ⊥ e1 with ||e2|| = 1. Then
[L(e1) L(e2)] = [e1 e2] [α11, α12; 0, α22] is upper triangular, with L(e1) = α11 e1 and L(e2) = α12 e1 + α22 e2.
Observe: upper triangularity says L(ek) ∈ span{e1, ..., ek}, i.e. L(e1), ..., L(ek) ∈ span{e1, ..., ek} for each k. So we need a flag of subspaces
{0} = M0 ⊂ M1 ⊂ ... ⊂ M_{n-1} ⊂ Mn = V,  dim Mk = k,  L(Mk) ⊆ Mk = span{e1, ..., ek}.
It is enough to show that any linear transformation on an n-dimensional complex space has an invariant subspace M ⊂ V of dimension n - 1: repeating this result inside M generates the whole invariant flag. Given the flag, set M1 = span{e1} with ||e1|| = 1, and pick ek ∈ Mk ∩ M_{k-1}^⊥ with ||ek|| = 1, so span{e1, ..., ek} = Mk. Then
L(e1) ∈ M1: L(e1) = α11 e1,
L(e2) ∈ M2: L(e2) = α12 e1 + α22 e2, ...,
L(ek) ∈ Mk: L(ek) = α1k e1 + α2k e2 + ... + αkk ek,
which is an upper triangular form for [L].
To construct M ⊂ V, select x ∈ V \ {0} such that L*(x) = λx (L* : V -> V is complex and linear, so by the Fundamental Theorem of Algebra we can find such x, λ). Let M = x^⊥, so dim M = n - 1; we have to show M is invariant under L. Take z ∈ x^⊥:
(L(z) | x) = (z | L*(x)) = (z | λx) = conj(λ) (z | x) = 0,
so L(z) ∈ x^⊥. Hence L(M) ⊆ M and M has dimension n - 1.
Problem (F, #7; F, #7). Let T : V -> W be a linear transformation of finite dimensional real vector spaces. Define the transpose of T and then prove both of the following:
1) im(T)° = ker(T^t), where U° denotes the annihilator of U;
2) rank(T) = rank(T^t) (i.e. dim im(T) = dim im(T^t)).

Proof. Transpose = dual map. Let T : V -> W be linear and define T^t = T* : W* -> V*, where X* = hom_R(X, R), by T*(g) = g ∘ T; then T^t is linear: if g ∈ W*, then g ∘ T : V -> R is linear, since T : V -> W.
1) This is the dual-space form of the Generalized Fredholm Alternative.
ker(T^t) = {g ∈ W* : T^t(g) = g ∘ T = 0},  im(T) = {T(x) : x ∈ V},
im(T)° = {g ∈ W* : g(T(x)) = 0 for all x ∈ V}.
But g(T(x)) = 0 for all x ∈ V means exactly g ∘ T = 0, i.e. g ∈ ker(T^t). Hence im(T)° = ker(T^t).
2) This is the dual-space form of the Rank Theorem. rank(T) = dim(im(T)) and rank(T^t) = dim(im(T^t)), with T^t : W* -> V*. By the dimension formula, 1), and dim U° = dim W - dim U for a subspace U ⊆ W:
dim W = dim W* = dim(ker(T^t)) + dim(im(T^t)) = dim(im(T)°) + dim(im(T^t)) = dim W - dim(im(T)) + dim(im(T^t)).
Hence dim(im(T)) = dim(im(T^t)).

Problem (W, #8). Let T : V -> W and S : W -> X be linear transformations of finite dimensional real vector spaces. Prove that
rank(T) + rank(S) - dim(W) ≤ rank(S ∘ T) ≤ min{rank(T), rank(S)}.

Proof. Note V -T-> W -S-> X, and
rank(S ∘ T) = rank(T) - dim(im T ∩ ker S),
by the dimension formula applied to S restricted to T(V). For the lower bound:
rank(T) + rank(S) - dim(W) = rank(T) + rank(S) - (dim(ker S) + rank(S)) = rank(T) - dim(ker S)
 = rank(S ∘ T) + dim(im T ∩ ker S) - dim(ker S) ≤ rank(S ∘ T),
since im T ∩ ker S ⊆ ker S. For the upper bound, note that for a subspace M ⊆ V, dim(L(M)) ≤ dim M (a consequence of the dimension formula). Hence
rank(S ∘ T) = dim((S ∘ T)(V)) = dim(S(T(V))) ≤ dim(T(V)) = rank(T).
Also, since T(V) ⊆ W, we have S(T(V)) ⊆ S(W), and so
rank(S ∘ T) = dim(S(T(V))) ≤ dim(S(W)) = rank(S).
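The two rank inequalities can be checked on small matrices where the ranks are known by construction; an illustration, not handbook material:

```python
def rank(mat):
    """Rank by Gaussian elimination (exact-friendly small matrices)."""
    a = [row[:] for row in mat]
    r = 0
    for c in range(len(a[0])):
        piv = next((i for i in range(r, len(a)) if abs(a[i][c]) > 1e-12), None)
        if piv is None:
            continue
        a[r], a[piv] = a[piv], a[r]
        for i in range(r + 1, len(a)):
            f = a[i][c] / a[r][c]
            a[i] = [a[i][j] - f * a[r][j] for j in range(len(a[0]))]
        r += 1
    return r

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

T = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]   # rank 2, maps R^2 -> R^3 (dim W = 3)
S = [[1.0, 0.0, 0.0], [1.0, 0.0, 0.0]]     # rank 1, maps R^3 -> R^2
ST = mat_mul(S, T)                          # matrix of S o T
print(rank(ST) <= min(rank(S), rank(T)))    # True
print(rank(ST) >= rank(S) + rank(T) - 3)    # True
```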
Problem (S'03, #7). Let V be a finite dimensional real vector space. For a subspace W ⊆ V, let W° = {f : V -> F linear : f = 0 on W}. Let W1, W2 ⊆ V be subspaces. Prove that W1° ∩ W2° = (W1 + W2)°.

Proof. W° = {f ∈ V* : f|_W = 0}; similarly for W1°, W2°, (W1 + W2)°, and W1° ∩ W2°.
1) W1 ⊆ W1 + W2 and W2 ⊆ W1 + W2, so any f vanishing on W1 + W2 vanishes on both W1 and W2: (W1 + W2)° ⊆ W1° ∩ W2°.
2) Suppose f ∈ W1° ∩ W2°, i.e. f|_{W1} = 0 and f|_{W2} = 0. For any w1 + w2 ∈ W1 + W2, f(w1 + w2) = f(w1) + f(w2) = 0, so f|_{W1 + W2} = 0, i.e. f ∈ (W1 + W2)°. Thus W1° ∩ W2° ⊆ (W1 + W2)°.

Problem (S, #8). Let V be a finite dimensional real vector space. Let M ⊆ V be a subspace and M° = {f : V -> F linear : f = 0 on M}. Prove that dim(V) = dim(M) + dim(M°).

Proof. Let x1, ..., xm be a basis for M, so M = span{x1, ..., xm}. Extend it to a basis {x1, ..., xn} for V, and construct the dual basis f1, ..., fn for V*, with fi(xj) = δij. We show that f_{m+1}, ..., fn is a basis for M°.
First, M° = span{f_{m+1}, ..., fn}. Each of f_{m+1}, ..., fn vanishes on x1, ..., xm, hence lies in M°. Conversely, let f ∈ M° ⊆ V*. Then
f = Σ_{i=1}^{n} f(xi) fi = Σ_{i=1}^{m} f(xi) fi + Σ_{i=m+1}^{n} f(xi) fi = Σ_{i=m+1}^{n} f(xi) fi ∈ span{f_{m+1}, ..., fn},
since f(xi) = 0 for i ≤ m. Second, {f_{m+1}, ..., fn} is linearly independent, being a subset of a basis of V*. Thus dim(M°) = n - m = dim(V) - dim(M).
Problem (F'03, #8). Prove the following three statements. You may choose an order of these statements and then use the earlier statements to prove the later statements.
a) If L : V -> W is a linear transformation between two finite dimensional real vector spaces V, W, then dim im(L) = dim V - dim ker(L).
b) If L : V -> V is a linear transformation on a finite dimensional real inner product space and L* is its adjoint, then im(L*) is the orthogonal complement of ker(L) in V.
c) If A is an n×n real matrix, then the maximal number of linearly independent rows (row rank) equals the maximal number of linearly independent columns (column rank).

Proof. We prove (a) and (b) separately; then (a), (b) ⇒ (c).
a) ker(L) ⊆ V has a complement M of dimension k = dim V - dim ker(L). Since M ∩ ker(L) = {0}, the map L is one-to-one when restricted to M. Thus L|_M : M -> im(L) is an isomorphism, i.e. dim im(L) = dim M = k.
b) We want to show ker(L)^⊥ = im(L*); since (M^⊥)^⊥ = M, we may instead prove ker(L) = im(L*)^⊥.
ker L = {x ∈ V : Lx = 0},  im(L) = {Lx : x ∈ V},  im(L*) = {L* y : y ∈ W},
im(L*)^⊥ = {x ∈ V : (x | L* y) = 0 for all y ∈ W} = {x ∈ V : (Lx | y) = 0 for all y ∈ W}.
If x ∈ ker L, then x ∈ im(L*)^⊥. Conversely, if (Lx | y) = 0 for all y ∈ W, then Lx = 0, so x ∈ ker L.
c) Using the dimension formula (a) and the Fredholm alternative (b), we get the Rank Theorem:
dim V = dim(ker(A)) + dim(im(A)) = dim(im(A*)^⊥) + dim(im(A)) = dim V - dim(im(A*)) + dim(im(A)).
Thus rank(A) = rank(A*). Conjugation does not change the rank, so rank(A) = rank(A^T); for a real matrix, A* = A^T directly. Now rank(A) is the column rank and rank(A^T) is the row rank of A. Thus row rank(A) = column rank(A).
To avoid the conjugation step, use (b), where we showed im(L*) = ker(L)^⊥. Since V = ker(L) ⊕ ker(L)^⊥, we get dim im(L*) = dim V - dim ker(L) = dim im(L) by (a), which establishes rank(A^T) = rank(A) directly for real A.
More informationReview of some mathematical tools
MATHEMATICAL FOUNDATIONS OF SIGNAL PROCESSING Fall 2016 Benjamín Béjar Haro, Mihailo Kolundžija, Reza Parhizkar, Adam Scholefield Teaching assistants: Golnoosh Elhami, Hanjie Pan Review of some mathematical
More informationConceptual Questions for Review
Conceptual Questions for Review Chapter 1 1.1 Which vectors are linear combinations of v = (3, 1) and w = (4, 3)? 1.2 Compare the dot product of v = (3, 1) and w = (4, 3) to the product of their lengths.
More informationStudy Guide for Linear Algebra Exam 2
Study Guide for Linear Algebra Exam 2 Term Vector Space Definition A Vector Space is a nonempty set V of objects, on which are defined two operations, called addition and multiplication by scalars (real
More informationNOTES on LINEAR ALGEBRA 1
School of Economics, Management and Statistics University of Bologna Academic Year 207/8 NOTES on LINEAR ALGEBRA for the students of Stats and Maths This is a modified version of the notes by Prof Laura
More informationA PRIMER ON SESQUILINEAR FORMS
A PRIMER ON SESQUILINEAR FORMS BRIAN OSSERMAN This is an alternative presentation of most of the material from 8., 8.2, 8.3, 8.4, 8.5 and 8.8 of Artin s book. Any terminology (such as sesquilinear form
More informationPRACTICE PROBLEMS FOR THE FINAL
PRACTICE PROBLEMS FOR THE FINAL Here are a slew of practice problems for the final culled from old exams:. Let P be the vector space of polynomials of degree at most. Let B = {, (t ), t + t }. (a) Show
More informationSymmetric and self-adjoint matrices
Symmetric and self-adjoint matrices A matrix A in M n (F) is called symmetric if A T = A, ie A ij = A ji for each i, j; and self-adjoint if A = A, ie A ij = A ji or each i, j Note for A in M n (R) that
More informationChapter SSM: Linear Algebra. 5. Find all x such that A x = , so that x 1 = x 2 = 0.
Chapter Find all x such that A x : Chapter, so that x x ker(a) { } Find all x such that A x ; note that all x in R satisfy the equation, so that ker(a) R span( e, e ) 5 Find all x such that A x 5 ; x x
More informationSolution to Homework 1
Solution to Homework Sec 2 (a) Yes It is condition (VS 3) (b) No If x, y are both zero vectors Then by condition (VS 3) x = x + y = y (c) No Let e be the zero vector We have e = 2e (d) No It will be false
More informationChapter 5 Eigenvalues and Eigenvectors
Chapter 5 Eigenvalues and Eigenvectors Outline 5.1 Eigenvalues and Eigenvectors 5.2 Diagonalization 5.3 Complex Vector Spaces 2 5.1 Eigenvalues and Eigenvectors Eigenvalue and Eigenvector If A is a n n
More informationHOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION)
HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION) PROFESSOR STEVEN MILLER: BROWN UNIVERSITY: SPRING 2007 1. CHAPTER 1: MATRICES AND GAUSSIAN ELIMINATION Page 9, # 3: Describe
More information1.1 Limits and Continuity. Precise definition of a limit and limit laws. Squeeze Theorem. Intermediate Value Theorem. Extreme Value Theorem.
STATE EXAM MATHEMATICS Variant A ANSWERS AND SOLUTIONS 1 1.1 Limits and Continuity. Precise definition of a limit and limit laws. Squeeze Theorem. Intermediate Value Theorem. Extreme Value Theorem. Definition
More informationYORK UNIVERSITY. Faculty of Science Department of Mathematics and Statistics MATH M Test #2 Solutions
YORK UNIVERSITY Faculty of Science Department of Mathematics and Statistics MATH 3. M Test # Solutions. (8 pts) For each statement indicate whether it is always TRUE or sometimes FALSE. Note: For this
More informationPRACTICE FINAL EXAM. why. If they are dependent, exhibit a linear dependence relation among them.
Prof A Suciu MTH U37 LINEAR ALGEBRA Spring 2005 PRACTICE FINAL EXAM Are the following vectors independent or dependent? If they are independent, say why If they are dependent, exhibit a linear dependence
More informationAnswer Keys For Math 225 Final Review Problem
Answer Keys For Math Final Review Problem () For each of the following maps T, Determine whether T is a linear transformation. If T is a linear transformation, determine whether T is -, onto and/or bijective.
More informationMath 554 Qualifying Exam. You may use any theorems from the textbook. Any other claims must be proved in details.
Math 554 Qualifying Exam January, 2019 You may use any theorems from the textbook. Any other claims must be proved in details. 1. Let F be a field and m and n be positive integers. Prove the following.
More informationA Brief Outline of Math 355
A Brief Outline of Math 355 Lecture 1 The geometry of linear equations; elimination with matrices A system of m linear equations with n unknowns can be thought of geometrically as m hyperplanes intersecting
More informationEigenvalues and Eigenvectors
/88 Chia-Ping Chen Department of Computer Science and Engineering National Sun Yat-sen University Linear Algebra Eigenvalue Problem /88 Eigenvalue Equation By definition, the eigenvalue equation for matrix
More informationMath Linear Algebra Final Exam Review Sheet
Math 15-1 Linear Algebra Final Exam Review Sheet Vector Operations Vector addition is a component-wise operation. Two vectors v and w may be added together as long as they contain the same number n of
More informationLinear algebra 2. Yoav Zemel. March 1, 2012
Linear algebra 2 Yoav Zemel March 1, 2012 These notes were written by Yoav Zemel. The lecturer, Shmuel Berger, should not be held responsible for any mistake. Any comments are welcome at zamsh7@gmail.com.
More informationLinear Algebra Notes. Lecture Notes, University of Toronto, Fall 2016
Linear Algebra Notes Lecture Notes, University of Toronto, Fall 2016 (Ctd ) 11 Isomorphisms 1 Linear maps Definition 11 An invertible linear map T : V W is called a linear isomorphism from V to W Etymology:
More informationThe Spectral Theorem for normal linear maps
MAT067 University of California, Davis Winter 2007 The Spectral Theorem for normal linear maps Isaiah Lankham, Bruno Nachtergaele, Anne Schilling (March 14, 2007) In this section we come back to the question
More informationCalculating determinants for larger matrices
Day 26 Calculating determinants for larger matrices We now proceed to define det A for n n matrices A As before, we are looking for a function of A that satisfies the product formula det(ab) = det A det
More informationMATH 20F: LINEAR ALGEBRA LECTURE B00 (T. KEMP)
MATH 20F: LINEAR ALGEBRA LECTURE B00 (T KEMP) Definition 01 If T (x) = Ax is a linear transformation from R n to R m then Nul (T ) = {x R n : T (x) = 0} = Nul (A) Ran (T ) = {Ax R m : x R n } = {b R m
More informationLINEAR ALGEBRA 1, 2012-I PARTIAL EXAM 3 SOLUTIONS TO PRACTICE PROBLEMS
LINEAR ALGEBRA, -I PARTIAL EXAM SOLUTIONS TO PRACTICE PROBLEMS Problem (a) For each of the two matrices below, (i) determine whether it is diagonalizable, (ii) determine whether it is orthogonally diagonalizable,
More informationEXERCISES AND SOLUTIONS IN LINEAR ALGEBRA
EXERCISES AND SOLUTIONS IN LINEAR ALGEBRA Mahmut Kuzucuoğlu Middle East Technical University matmah@metu.edu.tr Ankara, TURKEY March 14, 015 ii TABLE OF CONTENTS CHAPTERS 0. PREFACE..................................................
More informationOnline Exercises for Linear Algebra XM511
This document lists the online exercises for XM511. The section ( ) numbers refer to the textbook. TYPE I are True/False. Lecture 02 ( 1.1) Online Exercises for Linear Algebra XM511 1) The matrix [3 2
More informationAlgebra II. Paulius Drungilas and Jonas Jankauskas
Algebra II Paulius Drungilas and Jonas Jankauskas Contents 1. Quadratic forms 3 What is quadratic form? 3 Change of variables. 3 Equivalence of quadratic forms. 4 Canonical form. 4 Normal form. 7 Positive
More information(K + L)(c x) = K(c x) + L(c x) (def of K + L) = K( x) + K( y) + L( x) + L( y) (K, L are linear) = (K L)( x) + (K L)( y).
Exercise 71 We have L( x) = x 1 L( v 1 ) + x 2 L( v 2 ) + + x n L( v n ) n = x i (a 1i w 1 + a 2i w 2 + + a mi w m ) i=1 ( n ) ( n ) ( n ) = x i a 1i w 1 + x i a 2i w 2 + + x i a mi w m i=1 Therefore y
More informationLinear Algebra Practice Problems
Linear Algebra Practice Problems Page of 7 Linear Algebra Practice Problems These problems cover Chapters 4, 5, 6, and 7 of Elementary Linear Algebra, 6th ed, by Ron Larson and David Falvo (ISBN-3 = 978--68-78376-2,
More informationUniversity of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm
University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm Name: The proctor will let you read the following conditions
More informationTHE MINIMAL POLYNOMIAL AND SOME APPLICATIONS
THE MINIMAL POLYNOMIAL AND SOME APPLICATIONS KEITH CONRAD. Introduction The easiest matrices to compute with are the diagonal ones. The sum and product of diagonal matrices can be computed componentwise
More informationFinal A. Problem Points Score Total 100. Math115A Nadja Hempel 03/23/2017
Final A Math115A Nadja Hempel 03/23/2017 nadja@math.ucla.edu Name: UID: Problem Points Score 1 10 2 20 3 5 4 5 5 9 6 5 7 7 8 13 9 16 10 10 Total 100 1 2 Exercise 1. (10pt) Let T : V V be a linear transformation.
More informationLinear Algebra in Actuarial Science: Slides to the lecture
Linear Algebra in Actuarial Science: Slides to the lecture Fall Semester 2010/2011 Linear Algebra is a Tool-Box Linear Equation Systems Discretization of differential equations: solving linear equations
More informationLast name: First name: Signature: Student number:
MAT 2141 The final exam Instructor: K. Zaynullin Last name: First name: Signature: Student number: Do not detach the pages of this examination. You may use the back of the pages as scrap paper for calculations,
More informationMath 4A Notes. Written by Victoria Kala Last updated June 11, 2017
Math 4A Notes Written by Victoria Kala vtkala@math.ucsb.edu Last updated June 11, 2017 Systems of Linear Equations A linear equation is an equation that can be written in the form a 1 x 1 + a 2 x 2 +...
More information