# Linear Algebra Theorem and Definition Summary


**Section 3.2**

**Theorem 3.6.** Let $A$ be an $m \times n$ matrix of rank $r$. Then $r \le m$, $r \le n$, and, by means of a finite number of elementary row and column operations, $A$ can be transformed into the matrix

$$D = \begin{pmatrix} I_r & O_1 \\ O_2 & O_3 \end{pmatrix}$$

where $O_1$, $O_2$, and $O_3$ are zero matrices. Thus $D_{i,i} = 1$ for $i \le r$ and $D_{i,j} = 0$ otherwise.

**Cor 1.** Let $A$ be an $m \times n$ matrix of rank $r$. Then there exist invertible matrices $B \in M_m(F)$ and $C \in M_n(F)$ such that $D = BAC$, where

$$D = \begin{pmatrix} I_r & O_1 \\ O_2 & O_3 \end{pmatrix}$$

is the $m \times n$ matrix in which $O_1$, $O_2$, and $O_3$ are zero matrices.

**Cor 2.** Let $A \in M_{m \times n}(F)$. Then (a) $\operatorname{rank}(A^t) = \operatorname{rank}(A)$. (b) The rank of any matrix equals the maximum number of its linearly independent rows; that is, the rank of a matrix is the dimension of the subspace generated by its rows. (c) The rows and columns of any matrix generate subspaces of the same dimension, numerically equal to the rank of the matrix.

**Cor 3.** Every invertible matrix is a product of elementary matrices.

**Section 2.1**

**Theorem 2.6.** Let $V$ and $W$ be vector spaces over $F$, and suppose that $\{v_1, v_2, \dots, v_n\}$ is a basis for $V$. For $w_1, w_2, \dots, w_n$ in $W$, there exists exactly one linear transformation $T : V \to W$ such that $T(v_i) = w_i$ for $i = 1, 2, \dots, n$.

**Section 2.4**

**Theorem.** Let $V$ and $W$ be finite-dimensional vector spaces over $F$ of dimensions $n$ and $m$, respectively, and let $\beta$ and $\gamma$ be ordered bases for $V$ and $W$, respectively. Then the function $\Phi : \mathcal{L}(V, W) \to M_{m \times n}(F)$ defined by $\Phi(T) = [T]_\beta^\gamma$ for $T \in \mathcal{L}(V, W)$ is an isomorphism.

**Cor 1.** Let $V$ and $W$ be finite-dimensional vector spaces over $F$ of dimensions $n$ and $m$, respectively. Then $\mathcal{L}(V, W)$ is finite-dimensional of dimension $mn$.

**Section 2.6**

**Example 1.** (Bottom of page 119) $\dim(V^*) = \dim(\mathcal{L}(V, F)) = \dim(V) \cdot \dim(F) = \dim(V)$.
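Cor 2(a) is easy to check numerically. A minimal sketch with NumPy; the matrix below is a made-up example (two independent rows, with the third row their sum), not one from the notes:

```python
import numpy as np

# Hypothetical 3x4 matrix of rank 2: the third row equals the sum of the first two.
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 3.0],
              [1.0, 3.0, 1.0, 4.0]])

r = np.linalg.matrix_rank(A)     # rank via row space
rt = np.linalg.matrix_rank(A.T)  # rank of the transpose, i.e. via column space

# Cor 2(a): rank(A^t) = rank(A); both are at most min(m, n).
assert r == rt == 2
```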

**Theorem.** Suppose that $V$ is a finite-dimensional vector space with the ordered basis $\beta = \{x_1, x_2, \dots, x_n\}$. Let $f_i$ ($1 \le i \le n$) be the $i$th coordinate function with respect to $\beta$, and let $\beta^* = \{f_1, f_2, \dots, f_n\}$. Then $\beta^*$ is an ordered basis for $V^*$, and, for any $f \in V^*$, we have
$$f = \sum_{i=1}^n f(x_i) f_i.$$

**Theorem.** Let $V$ and $W$ be finite-dimensional vector spaces over $F$ with ordered bases $\beta$ and $\gamma$, respectively. For any linear transformation $T : V \to W$, the mapping $T^t : W^* \to V^*$ defined by $T^t(g) = gT$ for all $g \in W^*$ is a linear transformation with the property $[T^t]_{\gamma^*}^{\beta^*} = ([T]_\beta^\gamma)^t$.

**Lemma 1.** Let $V$ be a finite-dimensional vector space, and let $x \in V$. If $f(x) = 0$ for all $f \in V^*$, then $x = 0$.

**Theorem.** Let $V$ be a finite-dimensional vector space, and define $\psi : V \to V^{**}$ by $\psi(x) = \hat{x}$. Then $\psi$ is an isomorphism.

**Cor 1.** Let $V$ be a finite-dimensional vector space with dual space $V^*$. Then every ordered basis for $V^*$ is the dual basis for some basis for $V$.

**Section 4.2**

**Theorem 4.3.** The determinant of an $n \times n$ matrix is a linear function of each row when the remaining rows are held fixed.

**Section 5.1**

**Defn 1.** A linear operator $T : V \to V$ on a finite-dimensional vector space $V$ is called diagonalizable if there is an ordered basis $\beta$ for $V$ such that $[T]_\beta$ is a diagonal matrix. A square matrix $A$ is called diagonalizable if $L_A$ is diagonalizable.

**Note 1.** If $A = [T]_B$ where $T = L_A$, then finding $\beta$ such that $[I]_B^\beta \, [T]_B \, [I]_\beta^B = [T]_\beta$ is diagonal is the same thing as finding an invertible $S$ such that $S^{-1}AS$ is diagonal (take $S = [I]_\beta^B$, the change-of-coordinates matrix).

**Defn 2.** Let $T$ be a linear operator on $V$. A nonzero $v \in V$ is called an eigenvector of $T$ if there exists $\lambda \in F$ such that $T(v) = \lambda v$. The scalar $\lambda$ is called the eigenvalue corresponding to the eigenvector $v$. For $A \in M_n(F)$, a vector $v \in F^n$, $v \ne 0$, is called an eigenvector of $A$ if $v$ is an eigenvector of $L_A$; that is, $Av = \lambda v$.

**Theorem 5.1.** A linear operator $T$ on a finite-dimensional vector space $V$ is diagonalizable if and only if there is an ordered basis $\beta$ for $V$ consisting of eigenvectors of $T$.
Furthermore, if $T$ is diagonalizable and $\beta = \{v_1, v_2, \dots, v_n\}$ is an ordered basis of eigenvectors of $T$, then $D = [T]_\beta$ is a diagonal matrix and $D_{j,j}$ is the e-val corresponding to $v_j$, $1 \le j \le n$.

**Theorem 5.2.** Let $A \in M_n(F)$. Then $\lambda$ is an e-val of $A$ if and only if $\det(A - \lambda I_n) = 0$.

**Defn 3.** For a matrix $A \in M_n(F)$, $f(t) = \det(A - tI_n)$ is the characteristic polynomial of $A$.

**Defn 4.** Let $T$ be a linear operator on an $n$-dimensional vector space $V$ with ordered basis $\beta$. We define the characteristic polynomial $f(t)$ of $T$ to be the characteristic polynomial of $A = [T]_\beta$. That is, $f(t) = \det(A - tI_n)$.
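Theorem 5.2 and Defn 3 can be illustrated numerically. A hedged sketch with NumPy on a made-up $2 \times 2$ matrix; note that `np.poly` returns the monic polynomial $\det(tI - A)$, which for even $n$ matches the notes' convention $f(t) = \det(A - tI)$ up to the sign $(-1)^n$:

```python
import numpy as np

# Hypothetical matrix: f(t) = det(A - tI) = t^2 - 5t + 4 = (t - 1)(t - 4).
A = np.array([[2.0, 1.0],
              [2.0, 3.0]])

eigvals = np.linalg.eigvals(A)  # roots of the characteristic polynomial
coeffs = np.poly(A)             # monic coefficients: [1, -tr(A), det(A)]

assert np.allclose(sorted(eigvals), [1.0, 4.0])
assert np.allclose(coeffs, [1.0, -5.0, 4.0])
```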

**Note 2.** Similar matrices have the same characteristic polynomial, since if $B = S^{-1}AS$,
$$\det(B - tI_n) = \det(S^{-1}AS - tI_n) = \det(S^{-1}AS - tS^{-1}S) = \det(S^{-1}(A - tI_n)S)$$
$$= \det(S^{-1})\det(A - tI_n)\det(S) = \frac{1}{\det(S)}\det(A - tI_n)\det(S) = \det(A - tI_n),$$
so characteristic polynomials are similarity invariants. If $B_1$ and $B_2$ are two bases of $V$, then $[T]_{B_1}$ is similar to $[T]_{B_2}$, and so we see that the definition of the characteristic polynomial of $T$ does not depend on the basis used in the representation. So we may say $f(t) = \det(T - tI)$.

**Theorem 5.3.** Let $A \in M_n(F)$. (1) The characteristic polynomial of $A$ is a polynomial of degree $n$ with leading coefficient $(-1)^n$. (2) $A$ has at most $n$ distinct eigenvalues.

**Theorem 5.4.** Let $T$ be a linear operator on a vector space $V$ and let $\lambda$ be an e-val of $T$. A vector $v \in V$ is an e-vctr of $T$ corresponding to $\lambda$ if and only if $v \ne 0$ and $v \in N(T - \lambda I)$.

**Section 5.2**

**Theorem 5.5.** Let $T$ be a linear operator on a vector space $V$ and let $\lambda_1, \lambda_2, \dots, \lambda_k$ be distinct e-vals of $T$. If $v_1, v_2, \dots, v_k$ are e-vctrs of $T$ such that for all $i \in [k]$, $\lambda_i$ corresponds to $v_i$, then $\{v_1, v_2, \dots, v_k\}$ is a linearly independent set.

**Cor 2.** Let $T$ be a linear operator on an $n$-dimensional vector space $V$. If $T$ has $n$ distinct e-vals, then $T$ is diagonalizable.

**Defn 5.** A polynomial $f(t)$ in $P(F)$ splits over $F$ if there are scalars $c, a_1, \dots, a_n$ such that $f(t) = c(t - a_1)(t - a_2)\cdots(t - a_n)$.

**Theorem 5.6.** The characteristic polynomial of any diagonalizable linear operator splits.

**Defn 6.** Let $\lambda$ be an e-val of a linear operator (or matrix) with characteristic polynomial $f(t)$. The algebraic multiplicity (or just multiplicity) of $\lambda$ is the largest positive integer $k$ for which $(t - \lambda)^k$ is a factor of $f(t)$.

**Defn 7.** Let $T$ be a linear operator on a vector space $V$ and let $\lambda$ be an e-val of $T$. Define $E_\lambda = \{x \in V : T(x) = \lambda x\} = N(T - \lambda I)$. The set $E_\lambda$ is called the eigenspace of $T$ corresponding to $\lambda$. The eigenspace of a matrix $A \in M_n(F)$ is the eigenspace of $L_A$.

**Fact 1.** $E_\lambda$ is a subspace.
**Theorem 5.7.** Let $T$ be a linear operator on a finite-dimensional vector space $V$, and let $\lambda$ be an e-val of $T$ having multiplicity $m$. Then $1 \le \dim(E_\lambda) \le m$.

**Lemma 2.** Let $T$ be a linear operator on a vector space $V$ and let $\lambda_1, \lambda_2, \dots, \lambda_k$ be distinct e-vals of $T$. For each $i = 1, 2, \dots, k$, let $v_i \in E_{\lambda_i}$. If $v_1 + v_2 + \cdots + v_k = 0$, then for all $i \in [k]$, $v_i = 0$.
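Theorem 5.7 says the geometric multiplicity $\dim(E_\lambda)$ can be strictly smaller than the algebraic multiplicity. A small NumPy sketch on a made-up matrix, using $\dim(E_\lambda) = n - \operatorname{rank}(A - \lambda I)$:

```python
import numpy as np

# Hypothetical 3x3 matrix: eigenvalue 2 has algebraic multiplicity 2
# but its eigenspace is only 1-dimensional.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])

lam = 2.0
n = A.shape[0]
dim_E = n - np.linalg.matrix_rank(A - lam * np.eye(n))  # nullity of A - lam*I

assert dim_E == 1   # 1 <= dim(E_2) <= 2, consistent with Theorem 5.7
```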

**Theorem 5.8.** Let $T$ be a linear operator on a vector space $V$ and let $\lambda_1, \lambda_2, \dots, \lambda_k$ be distinct e-vals of $T$. For each $i = 1, 2, \dots, k$, let $S_i \subseteq E_{\lambda_i}$ be a finite linearly independent set. Then $S = S_1 \cup S_2 \cup \cdots \cup S_k$ is a linearly independent subset of $V$.

**Theorem 5.9.** Let $T$ be a linear operator on a finite-dimensional vector space $V$ such that the characteristic polynomial of $T$ splits. Let $\lambda_1, \lambda_2, \dots, \lambda_k$ be distinct e-vals of $T$. Then (1) $T$ is diagonalizable if and only if the multiplicity of $\lambda_i$ equals $\dim(E_{\lambda_i})$ for all $i$. (2) If $T$ is diagonalizable and $\beta_i$ is an ordered basis for $E_{\lambda_i}$ for each $i$, then $\beta = \beta_1 \cup \beta_2 \cup \cdots \cup \beta_k$ is an ordered basis for $V$ consisting of eigenvectors of $T$.

**Note 3.** Test for diagonalization. A linear operator $T$ on a vector space $V$ of dimension $n$ is diagonalizable if and only if both of the following hold. (1) The characteristic polynomial splits. (2) For each eigenvalue $\lambda$ of $T$, the multiplicity of $\lambda$ equals the dimension of $E_\lambda$. Notice that $E_\lambda = \{x : (T - \lambda I)(x) = 0\} = N(T - \lambda I)$ and $n = \operatorname{nullity}(T - \lambda I) + \operatorname{rank}(T - \lambda I)$. So $\dim(E_\lambda) = \operatorname{nullity}(T - \lambda I) = n - \operatorname{rank}(T - \lambda I)$.

**Defn 8.** Let $W_1, W_2, \dots, W_k$ be subspaces of a vector space $V$. The sum is
$$\sum_{i=1}^k W_i = \{v_1 + v_2 + \cdots + v_k : v_i \in W_i \text{ for all } i \in [k]\}.$$

**Fact 2.** The sum is a subspace.

**Defn 9.** Let $W_1, W_2, \dots, W_k$ be subspaces of a vector space $V$. We call $V$ the direct sum of $W_1, W_2, \dots, W_k$, written $V = W_1 \oplus W_2 \oplus \cdots \oplus W_k$, if $V$ is the sum of $W_1, W_2, \dots, W_k$ and for all $j \in [k]$, $W_j \cap \sum_{i \ne j} W_i = \{0\}$.

**Theorem.** Let $W_1, W_2, \dots, W_k$ be subspaces of a finite-dimensional vector space $V$. TFAE: (1) $V = W_1 \oplus W_2 \oplus \cdots \oplus W_k$. (2) $V = \sum_{i=1}^k W_i$ and, for $v_i \in W_i$, if $v_1 + v_2 + \cdots + v_k = 0$ then $v_i = 0$ for all $i$. (3) Each vector $v \in V$ can be written uniquely as $v = v_1 + v_2 + \cdots + v_k$, where $v_i \in W_i$ for all $i \in [k]$. (4) If $\gamma_i$ is an ordered basis for $W_i$ for each $i \in [k]$, then $\gamma_1 \cup \gamma_2 \cup \cdots \cup \gamma_k$ is an ordered basis for $V$. (5) For each $i \in [k]$ there is an ordered basis $\gamma_i$ for $W_i$ such that $\gamma_1 \cup \gamma_2 \cup \cdots \cup \gamma_k$ is an ordered basis for $V$.
**Theorem.** A linear operator $T$ on a finite-dimensional vector space $V$ is diagonalizable if and only if $V$ is the direct sum of the eigenspaces of $T$.
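Theorem 5.1 in action: the columns of an eigenvector matrix $S$ diagonalize $A$ via $S^{-1}AS$. A NumPy sketch on a made-up symmetric matrix (distinct e-vals, so diagonalizable by Cor 2):

```python
import numpy as np

# Hypothetical matrix with distinct eigenvalues 5 and 3.
A = np.array([[4.0, 1.0],
              [1.0, 4.0]])

w, S = np.linalg.eig(A)        # columns of S are eigenvectors of A
D = np.linalg.inv(S) @ A @ S   # change of basis to the eigenvector basis

# D = S^{-1} A S is diagonal, with the eigenvalues on the diagonal.
assert np.allclose(D, np.diag(w))
```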

*(Section skipped.)*

**Section: Invariant subspaces and the Cayley–Hamilton Theorem**

**Defn 1.** Let $T$ be a linear operator on a vector space $V$. A subspace $W$ of $V$ is called a $T$-invariant subspace of $V$ if $T(W) \subseteq W$.

**Defn 2.** Let $T$ be a linear operator on a vector space $V$ and let $x$ be a nonzero vector in $V$. The subspace $W = \operatorname{span}(\{x, T(x), T^2(x), \dots\})$ is called the $T$-cyclic subspace of $V$ generated by $x$.

**Defn 3.** If $T$ is a linear operator on $V$ and $W$ is a $T$-invariant subspace of $V$, then the restriction $T_W$ of $T$ to $W$ is a mapping from $W$ to $W$, and it follows that $T_W$ is a linear operator on $W$.

**Theorem.** Let $T$ be a linear operator on a finite-dimensional vector space $V$, and let $W$ be a $T$-invariant subspace of $V$. Then the characteristic polynomial of $T_W$ divides the characteristic polynomial of $T$.

**Theorem.** Let $T$ be a linear operator on a finite-dimensional vector space $V$, and let $W$ denote the $T$-cyclic subspace of $V$ generated by a nonzero vector $v \in V$. Let $k = \dim(W)$. Then (a) $\{v, T(v), T^2(v), \dots, T^{k-1}(v)\}$ is a basis for $W$. (b) If $a_0 v + a_1 T(v) + \cdots + a_{k-1}T^{k-1}(v) + T^k(v) = 0$, then the characteristic polynomial of $T_W$ is $f(t) = (-1)^k(a_0 + a_1 t + \cdots + a_{k-1}t^{k-1} + t^k)$.

**Theorem (Cayley–Hamilton).** Let $T$ be a linear operator on a finite-dimensional vector space $V$, and let $f(t)$ be the characteristic polynomial of $T$. Then $f(T) = T_0$, the zero transformation.

**Cor 1 (Cayley–Hamilton Theorem for Matrices).** Let $A$ be an $n \times n$ matrix, and let $f(t)$ be the characteristic polynomial of $A$. Then $f(A) = O$, the zero matrix.

**Theorem.** Let $T$ be a linear operator on a finite-dimensional vector space $V$, and suppose that $V = W_1 \oplus W_2 \oplus \cdots \oplus W_k$, where $W_i$ is a $T$-invariant subspace of $V$ for each $i$ ($1 \le i \le k$). Suppose that $f_i(t)$ is the characteristic polynomial of $T_{W_i}$ ($1 \le i \le k$). Then $f_1(t) \cdot f_2(t) \cdots f_k(t)$ is the characteristic polynomial of $T$.

**Defn 4.** Let $B_1 \in M_m(F)$, and let $B_2 \in M_n(F)$.
We define the direct sum of $B_1$ and $B_2$, denoted $B_1 \oplus B_2$, as the $(m + n) \times (m + n)$ matrix $A$ such that
$$A_{i,j} = \begin{cases} (B_1)_{i,j} & \text{for } 1 \le i, j \le m \\ (B_2)_{(i-m),(j-m)} & \text{for } m + 1 \le i, j \le n + m \\ 0 & \text{otherwise.} \end{cases}$$
If $B_1, B_2, \dots, B_k$ are square matrices with entries from $F$, then we define the direct sum of $B_1, B_2, \dots, B_k$ recursively by $B_1 \oplus B_2 \oplus \cdots \oplus B_k = (B_1 \oplus B_2 \oplus \cdots \oplus B_{k-1}) \oplus B_k$.
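Defn 4 translates directly into code: copy each block onto the diagonal and leave zeros elsewhere. A small sketch (the helper `direct_sum` and the matrices are illustrative; `scipy.linalg.block_diag` does the same job):

```python
import numpy as np

def direct_sum(*blocks):
    """Direct sum B_1 + ... + B_k of square matrices (Defn 4):
    each B_i placed down the diagonal, zeros elsewhere."""
    n = sum(B.shape[0] for B in blocks)
    A = np.zeros((n, n))
    pos = 0
    for B in blocks:
        k = B.shape[0]
        A[pos:pos + k, pos:pos + k] = B
        pos += k
    return A

B1 = np.array([[1.0, 2.0], [3.0, 4.0]])
B2 = np.array([[5.0]])
A = direct_sum(B1, B2)

assert A.shape == (3, 3)
assert np.allclose(A, [[1, 2, 0], [3, 4, 0], [0, 0, 5]])
```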

If $A = B_1 \oplus B_2 \oplus \cdots \oplus B_k$, then we often write
$$A = \begin{pmatrix} B_1 & O & \cdots & O \\ O & B_2 & \cdots & O \\ \vdots & \vdots & \ddots & \vdots \\ O & O & \cdots & B_k \end{pmatrix}.$$

**Theorem.** Let $T$ be a linear operator on a finite-dimensional vector space $V$, and let $W_1, W_2, \dots, W_k$ be $T$-invariant subspaces of $V$ such that $V = W_1 \oplus W_2 \oplus \cdots \oplus W_k$. For each $i$, let $\beta_i$ be an ordered basis for $W_i$, and let $\beta = \beta_1 \cup \beta_2 \cup \cdots \cup \beta_k$. Let $A = [T]_\beta$ and $B_i = [T_{W_i}]_{\beta_i}$ for each $i$. Then $A = B_1 \oplus B_2 \oplus \cdots \oplus B_k$.

**Section 6.1: Inner Products and Norms**

**Defn 5.** Let $V$ be a vector space over $F$. An inner product on $V$ is a function that assigns, to every ordered pair of vectors $x$ and $y$ in $V$, a scalar in $F$, denoted $\langle x, y \rangle$, such that for all $x$, $y$, and $z$ in $V$ and all $c$ in $F$, the following hold: (1) $\langle x + z, y \rangle = \langle x, y \rangle + \langle z, y \rangle$. (2) $\langle cx, y \rangle = c\langle x, y \rangle$. (3) $\langle x, y \rangle = \overline{\langle y, x \rangle}$, where the bar denotes complex conjugation. (4) $\langle x, x \rangle > 0$ if $x \ne 0$. Notice that if $F = \mathbb{R}$, (3) is just $\langle x, y \rangle = \langle y, x \rangle$.

**Defn 6.** Let $A \in M_{m \times n}(F)$. We define the conjugate transpose or adjoint of $A$ to be the $n \times m$ matrix $A^*$ such that $(A^*)_{i,j} = \overline{A_{j,i}}$ for all $i, j$.

**Theorem 6.1.** Let $V$ be an inner product space. Then for $x, y, z \in V$ and $c \in F$: (1) $\langle x, y + z \rangle = \langle x, y \rangle + \langle x, z \rangle$. (2) $\langle x, cy \rangle = \bar{c}\langle x, y \rangle$. (3) $\langle x, 0 \rangle = \langle 0, x \rangle = 0$. (4) $\langle x, x \rangle = 0 \iff x = 0$. (5) If $\langle x, y \rangle = \langle x, z \rangle$ for all $x \in V$, then $y = z$.

**Defn 7.** Let $V$ be an inner product space. For $x \in V$, we define the norm or length of $x$ by $\|x\| = \sqrt{\langle x, x \rangle}$.

**Theorem 6.2.** Let $V$ be an inner product space over $F$. Then for all $x, y \in V$ and $c \in F$, the following are true. (1) $\|cx\| = |c| \, \|x\|$. (2) $\|x\| = 0$ if and only if $x = 0$; in any case $\|x\| \ge 0$. (3) (Cauchy–Schwarz Inequality) $|\langle x, y \rangle| \le \|x\| \, \|y\|$. (4) (Triangle Inequality) $\|x + y\| \le \|x\| + \|y\|$.

**Defn 8.** Let $V$ be an inner product space. Vectors $x, y \in V$ are orthogonal (or perpendicular) if $\langle x, y \rangle = 0$. A subset $S \subseteq V$ is called orthogonal if, for all distinct $x, y \in S$, $\langle x, y \rangle = 0$. A vector $x \in V$ is a unit vector if $\|x\| = 1$, and a subset $S \subseteq V$ is orthonormal if $S$ is orthogonal and $\|x\| = 1$ for all $x \in S$.

**Section 6.2**

**Defn 9.** Let $V$ be an inner product space. Then $S \subseteq V$ is an orthonormal basis of $V$ if it is an ordered basis and orthonormal.

**Theorem 6.3.** Let $V$ be an inner product space and $S = \{v_1, v_2, \dots, v_k\}$ be an orthogonal subset of $V$ such that $v_i \ne 0$ for all $i$. If $y \in \operatorname{Span}(S)$, then
$$y = \sum_{i=1}^k \frac{\langle y, v_i \rangle}{\|v_i\|^2} v_i.$$

**Cor 1.** If $S$ also is orthonormal, then $y = \sum_{i=1}^k \langle y, v_i \rangle v_i$.

**Cor 2.** If $S$ also is orthogonal and all vectors in $S$ are non-zero, then $S$ is linearly independent.

**Theorem 6.4 (Gram–Schmidt).** Let $V$ be an inner product space and $S = \{w_1, w_2, \dots, w_n\} \subseteq V$ be a linearly independent set. Define $S' = \{v_1, v_2, \dots, v_n\}$ where $v_1 = w_1$ and, for $2 \le k \le n$,
$$v_k = w_k - \sum_{i=1}^{k-1} \frac{\langle w_k, v_i \rangle}{\|v_i\|^2} v_i.$$
Then $S'$ is orthogonal and $\operatorname{Span}(S') = \operatorname{Span}(S)$.

**Theorem 6.5.** Let $V$ be a non-zero finite-dimensional inner product space. Then $V$ has an orthonormal basis $\beta$. Furthermore, if $\beta = \{v_1, v_2, \dots, v_n\}$ and $x \in V$, then $x = \sum_{i=1}^n \langle x, v_i \rangle v_i$.

**Cor 1.** Let $V$ be a finite-dimensional inner product space with an orthonormal basis $\beta = \{v_1, v_2, \dots, v_n\}$. Let $T$ be a linear operator on $V$, and $A = [T]_\beta$. Then for any $i, j$, $A_{i,j} = \langle T(v_j), v_i \rangle$.

**Defn 10.** Let $S$ be a non-empty subset of an inner product space $V$. We define $S^\perp$ (read "$S$ perp") to be the set of all vectors in $V$ that are orthogonal to every vector in $S$; that is, $S^\perp = \{x : \langle x, y \rangle = 0 \text{ for all } y \in S\}$. $S^\perp$ is called the orthogonal complement of $S$.

**Theorem 6.6.** Let $W$ be a finite-dimensional subspace of an inner product space $V$, and let $y \in V$. Then there exist unique vectors $u \in W$ and $z \in W^\perp$ such that $y = u + z$. Furthermore, if $\{v_1, v_2, \dots, v_k\}$ is an orthonormal basis for $W$, then $u = \sum_{i=1}^k \langle y, v_i \rangle v_i$.

**Cor 1.** In the notation of Theorem 6.6, the vector $u$ is the unique vector in $W$ that is closest to $y$. That is, for any $x \in W$, $\|y - x\| \ge \|y - u\|$, and we get equality in the previous inequality if and only if $x = u$.
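Theorem 6.4 can be coded almost verbatim. A minimal NumPy sketch using the standard inner product on $\mathbb{R}^3$; the two input vectors are made-up examples:

```python
import numpy as np

def gram_schmidt(W):
    """Gram-Schmidt (Theorem 6.4): rows of W are linearly independent w_1..w_n;
    returns orthogonal rows v_k = w_k - sum_i <w_k, v_i>/||v_i||^2 * v_i."""
    V = []
    for w in W:
        v = w - sum((w @ v_i) / (v_i @ v_i) * v_i for v_i in V)
        V.append(v)
    return np.array(V)

W = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])
V = gram_schmidt(W)

# The output set is orthogonal and spans the same subspace as W.
assert abs(V[0] @ V[1]) < 1e-12
```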

**Theorem 6.7.** Suppose that $S = \{v_1, v_2, \dots, v_k\}$ is an orthonormal set in an $n$-dimensional inner product space $V$. Then $S$ can be extended to an orthonormal basis $\{v_1, \dots, v_k, v_{k+1}, \dots, v_n\}$ for $V$. If $W = \operatorname{Span}(S)$, then $S_1 = \{v_{k+1}, v_{k+2}, \dots, v_n\}$ is an orthonormal basis for $W^\perp$. If $W$ is any subspace of $V$, then $\dim(V) = \dim(W) + \dim(W^\perp)$.

**Section 6.3**

**Theorem 6.8.** Let $V$ be a finite-dimensional inner product space over $F$, and let $g : V \to F$ be a linear functional. Then there exists a unique vector $y \in V$ such that $g(x) = \langle x, y \rangle$ for all $x \in V$.

**Theorem 6.9.** Let $V$ be a finite-dimensional inner product space and let $T$ be a linear operator on $V$. There exists a unique operator $T^* : V \to V$ such that $\langle T(x), y \rangle = \langle x, T^*(y) \rangle$ for all $x, y \in V$. Furthermore, $T^*$ is linear.

**Defn 11.** $T^*$ is called the adjoint of the linear operator $T$ and is defined to be the unique operator on $V$ satisfying $\langle T(x), y \rangle = \langle x, T^*(y) \rangle$ for all $x, y \in V$.

**Fact 3.** $\langle x, T(y) \rangle = \langle T^*(x), y \rangle$ for all $x, y \in V$.

**Theorem 6.10.** Let $V$ be a finite-dimensional inner product space and let $\beta$ be an orthonormal basis for $V$. If $T$ is a linear operator on $V$, then $[T^*]_\beta = ([T]_\beta)^*$.

**Cor 2.** Let $A$ be an $n \times n$ matrix. Then $L_{A^*} = (L_A)^*$.

**Theorem 6.11.** Let $V$ be an inner product space, and $T, U$ linear operators on $V$. Then (1) $(T + U)^* = T^* + U^*$. (2) $(cT)^* = \bar{c}T^*$ for $c \in F$. (3) $(TU)^* = U^*T^*$ (composition). (4) $(T^*)^* = T$. (5) $I^* = I$.

**Cor 1.** Let $A$ and $B$ be $n \times n$ matrices. Then (1) $(A + B)^* = A^* + B^*$. (2) $(cA)^* = \bar{c}A^*$ for $c \in F$. (3) $(AB)^* = B^*A^*$. (4) $(A^*)^* = A$. (5) $I^* = I$.

**Example 2.** (Exercise 5b) Let $A$ and $B$ be $m \times n$ matrices and $C$ an $n \times p$ matrix. Then (1) $(A + B)^* = A^* + B^*$. (2) $(cA)^* = \bar{c}A^*$ for $c \in F$. (3) $(AC)^* = C^*A^*$. (4) $(A^*)^* = A$. (5) $I^* = I$.

For $x, y \in F^n$, let $\langle x, y \rangle_n$ denote the standard inner product of $x$ and $y$ in $F^n$. Recall that if $x$ and $y$ are regarded as column vectors, then $\langle x, y \rangle_n = y^* x$.

**Lemma 3.** Let $A \in M_{m \times n}(F)$, $x \in F^n$, and $y \in F^m$. Then $\langle Ax, y \rangle_m = \langle x, A^*y \rangle_n$.

**Lemma 4.** Let $A \in M_{m \times n}(F)$. Then $\operatorname{rank}(A^*A) = \operatorname{rank}(A)$.

**Cor 1.** If $A$ is an $m \times n$ matrix such that $\operatorname{rank}(A) = n$, then $A^*A$ is invertible.

**Theorem 6.12.** Let $A \in M_{m \times n}(F)$ and $y \in F^m$. Then there exists $x_0 \in F^n$ such that $(A^*A)x_0 = A^*y$ and $\|Ax_0 - y\| \le \|Ax - y\|$ for all $x \in F^n$. Furthermore, if $\operatorname{rank}(A) = n$, then $x_0 = (A^*A)^{-1}A^*y$.

**Section 6.4**

**Lemma 5.** Let $T$ be a linear operator on a finite-dimensional inner product space $V$. If $T$ has an eigenvector, then so does $T^*$.

**Theorem (Schur).** Let $T$ be a linear operator on a finite-dimensional inner product space $V$. Suppose that the characteristic polynomial of $T$ splits. Then there exists an orthonormal basis $\beta$ for $V$ such that the matrix $[T]_\beta$ is upper triangular.

**Defn 12.** Let $V$ be an inner product space, and let $T$ be a linear operator on $V$. We say that $T$ is normal if $TT^* = T^*T$. An $n \times n$ real or complex matrix $A$ is normal if $AA^* = A^*A$.

**Theorem.** Let $V$ be an inner product space, and let $T$ be a normal operator on $V$. Then the following statements are true. (1) $\|T(x)\| = \|T^*(x)\|$ for all $x \in V$. (2) $T - cI$ is normal for every $c \in F$. (3) If $x$ is an eigenvector of $T$, then $x$ is also an eigenvector of $T^*$. In fact, if $T(x) = \lambda x$, then $T^*(x) = \bar{\lambda}x$. (4) If $\lambda_1$ and $\lambda_2$ are distinct eigenvalues of $T$ with corresponding eigenvectors $x_1$ and $x_2$, then $x_1$ and $x_2$ are orthogonal.

**Defn 13.** Let $T$ be a linear operator on an inner product space $V$. We say that $T$ is self-adjoint (Hermitian) if $T = T^*$. An $n \times n$ real or complex matrix $A$ is self-adjoint (Hermitian) if $A = A^*$.

**Theorem.** Let $T$ be a linear operator on a finite-dimensional complex inner product space $V$. Then $T$ is normal if and only if there exists an orthonormal basis for $V$ consisting of eigenvectors of $T$.

**Lemma 6.** Let $T$ be a self-adjoint operator on a finite-dimensional inner product space $V$. Then (1) every eigenvalue of $T$ is real; (2) if $V$ is a real inner product space, then the characteristic polynomial of $T$ splits.
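Theorem 6.12 is the least-squares theorem: the normal equations $(A^*A)x_0 = A^*y$ pick out the minimizer of $\|Ax - y\|$. A NumPy sketch on a made-up overdetermined real system (so $A^* = A^t$), cross-checked against the library solver:

```python
import numpy as np

# Hypothetical 3x2 system with rank(A) = 2, so A^t A is invertible (Cor 1 above).
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
y = np.array([0.0, 1.0, 1.0])

x0 = np.linalg.solve(A.T @ A, A.T @ y)           # normal equations (Theorem 6.12)
x_lstsq, *_ = np.linalg.lstsq(A, y, rcond=None)  # NumPy's least-squares routine

assert np.allclose(x0, x_lstsq)
```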
**Theorem.** Let $T$ be a linear operator on a finite-dimensional real inner product space $V$. Then $T$ is self-adjoint if and only if there exists an orthonormal basis $\beta$ for $V$ consisting of eigenvectors of $T$.

**Section 6.5**

**Defn 14.** Let $T$ be a linear operator on a finite-dimensional inner product space $V$ (over $F$). If $\|T(x)\| = \|x\|$ for all $x \in V$, we call $T$ a unitary operator if $F = \mathbb{C}$ and an orthogonal operator if $F = \mathbb{R}$. (Comment on relationship to one-to-one, onto, and invertible.)

**Theorem.** Let $T$ be a linear operator on a finite-dimensional inner product space $V$. Then the following statements are equivalent. (1) $TT^* = T^*T = I$. (2) $\langle T(x), T(y) \rangle = \langle x, y \rangle$ for all $x, y \in V$. (3) If $\beta$ is an orthonormal basis for $V$, then $T(\beta)$ is an orthonormal basis for $V$. (4) There exists an orthonormal basis $\beta$ for $V$ such that $T(\beta)$ is an orthonormal basis for $V$. (5) $\|T(x)\| = \|x\|$ for all $x \in V$.

**Lemma 7.** Let $U$ be a self-adjoint operator on a finite-dimensional inner product space $V$. If $\langle x, U(x) \rangle = 0$ for all $x \in V$, then $U = T_0$.

**Cor 1.** Let $T$ be a linear operator on a finite-dimensional real inner product space $V$. Then $V$ has an orthonormal basis of eigenvectors of $T$ with corresponding eigenvalues of absolute value 1 if and only if $T$ is both self-adjoint and orthogonal.

**Cor 2.** Let $T$ be a linear operator on a finite-dimensional complex inner product space $V$. Then $V$ has an orthonormal basis of eigenvectors of $T$ with corresponding eigenvalues of absolute value 1 if and only if $T$ is unitary.

**Defn 15.** A square matrix $A$ is called an orthogonal matrix if $A^tA = AA^t = I$ and unitary if $A^*A = AA^* = I$.

**Theorem.** Let $A$ be a complex $n \times n$ matrix. Then $A$ is normal if and only if $A$ is unitarily equivalent to a diagonal matrix.

**Theorem.** Let $A$ be a real $n \times n$ matrix. Then $A$ is symmetric if and only if $A$ is orthogonally equivalent to a real diagonal matrix.

**Theorem.** Let $A \in M_n(F)$ be a matrix whose characteristic polynomial splits over $F$. (1) If $F = \mathbb{C}$, then $A$ is unitarily equivalent to a complex upper triangular matrix. (2) If $F = \mathbb{R}$, then $A$ is orthogonally equivalent to a real upper triangular matrix.

**Section 6.6**

**Defn 16.**
If $V = W_1 \oplus W_2$, then a linear operator $T$ on $V$ is the projection on $W_1$ along $W_2$ if, whenever $x = x_1 + x_2$ with $x_1 \in W_1$ and $x_2 \in W_2$, we have $T(x) = x_1$. In this case, $R(T) = W_1 = \{x \in V : T(x) = x\}$ and $N(T) = W_2$.
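An orthogonal projection onto the column space of a real full-rank matrix $A$ can be written $P = A(A^tA)^{-1}A^t$ (real case, so the adjoint is the transpose). A sketch on a made-up one-dimensional subspace of $\mathbb{R}^3$, checking the characterization $P^2 = P = P^*$ stated in Section 6.6:

```python
import numpy as np

# Hypothetical subspace W = span{(1, 2, 2)}; P projects orthogonally onto W.
A = np.array([[1.0], [2.0], [2.0]])
P = A @ np.linalg.inv(A.T @ A) @ A.T

# An orthogonal projection is idempotent and self-adjoint: P^2 = P = P*.
assert np.allclose(P @ P, P)
assert np.allclose(P, P.T)
```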

We refer to $T$ as the projection. Let $V$ be an inner product space, and let $T : V \to V$ be a projection. We say that $T$ is an orthogonal projection if $R(T)^\perp = N(T)$ and $N(T)^\perp = R(T)$.

**Theorem.** Let $V$ be an inner product space, and let $T$ be a linear operator on $V$. Then $T$ is an orthogonal projection if and only if $T$ has an adjoint $T^*$ and $T^2 = T = T^*$. (Note: compare to Theorem 6.9, where $V$ is finite-dimensional.)

**Theorem (The Spectral Theorem).** Suppose that $T$ is a linear operator on a finite-dimensional inner product space $V$ over $F$ with the distinct eigenvalues $\lambda_1, \lambda_2, \dots, \lambda_k$. Assume that $T$ is normal if $F = \mathbb{C}$ and that $T$ is self-adjoint if $F = \mathbb{R}$. For each $i$ ($1 \le i \le k$), let $W_i$ be the eigenspace of $T$ corresponding to the eigenvalue $\lambda_i$, and let $T_i$ be the orthogonal projection of $V$ on $W_i$. Then the following statements are true. (1) $V = W_1 \oplus W_2 \oplus \cdots \oplus W_k$. (2) If $W_i'$ denotes the direct sum of the subspaces $W_j$ for $j \ne i$, then $(W_i')^\perp = W_i$. (3) $T_iT_j = \delta_{i,j}T_i$ for $1 \le i, j \le k$. (4) $I = T_1 + T_2 + \cdots + T_k$. (5) $T = \lambda_1 T_1 + \lambda_2 T_2 + \cdots + \lambda_k T_k$.

**Defn 17.** The set $\{\lambda_1, \lambda_2, \dots, \lambda_k\}$ of eigenvalues of $T$ is called the spectrum of $T$, the sum $I = T_1 + T_2 + \cdots + T_k$ is called the resolution of the identity operator induced by $T$, and the sum $T = \lambda_1 T_1 + \lambda_2 T_2 + \cdots + \lambda_k T_k$ is called the spectral decomposition of $T$.

**Cor 1.** If $F = \mathbb{C}$, then $T$ is normal if and only if $T^* = g(T)$ for some polynomial $g$.

**Cor 2.** If $F = \mathbb{C}$, then $T$ is unitary if and only if $T$ is normal and $|\lambda| = 1$ for every eigenvalue $\lambda$ of $T$.

**Cor 3.** If $F = \mathbb{C}$ and $T$ is normal, then $T$ is self-adjoint if and only if every eigenvalue of $T$ is real.

**Cor 4.** Let $T$ be as in the spectral theorem with spectral decomposition $T = \lambda_1 T_1 + \lambda_2 T_2 + \cdots + \lambda_k T_k$. Then each $T_j$ is a polynomial in $T$.

**Section 6.7**

**Theorem (Singular Value Theorem for Linear Transformations).** Let $V$ and $W$ be finite-dimensional inner product spaces, and let $T : V \to W$ be a linear transformation of rank $r$.
Then there exist orthonormal bases $\{v_1, v_2, \dots, v_n\}$ for $V$ and $\{u_1, u_2, \dots, u_m\}$ for $W$ and positive scalars $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r$ such that
$$T(v_i) = \begin{cases} \sigma_i u_i & \text{if } 1 \le i \le r \\ 0 & \text{if } i > r. \end{cases}$$
Conversely, suppose that the preceding conditions are satisfied. Then for $1 \le i \le n$, $v_i$ is an eigenvector of $T^*T$ with corresponding eigenvalue $\sigma_i^2$ if $1 \le i \le r$ and $0$ if $i > r$. Therefore the scalars $\sigma_1, \sigma_2, \dots, \sigma_r$ are uniquely determined by $T$.
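The Spectral Theorem for a real symmetric matrix says $A = \sum_i \lambda_i P_i$, with $P_i$ the orthogonal projection onto the eigenspace $W_i$. A NumPy sketch on a made-up $2 \times 2$ symmetric matrix with distinct eigenvalues, so each $P_i = q_i q_i^t$ is rank one:

```python
import numpy as np

# Hypothetical self-adjoint (symmetric) matrix with eigenvalues 1 and 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

w, Q = np.linalg.eigh(A)   # orthonormal eigenbasis in the columns of Q
# Spectral decomposition: sum of lam_i * P_i, with P_i = q_i q_i^t.
recon = sum(lam * np.outer(q, q) for lam, q in zip(w, Q.T))

assert np.allclose(recon, A)              # T = lam_1 T_1 + ... + lam_k T_k
assert np.allclose(Q.T @ Q, np.eye(2))    # the projections resolve the identity
```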

**Defn 18.** The unique scalars $\sigma_1, \sigma_2, \dots, \sigma_r$ in Theorem 6.26 are called the singular values of $T$. If $r$ is less than both $m$ and $n$, then the term singular value is extended to include $\sigma_{r+1} = \cdots = \sigma_k = 0$, where $k$ is the minimum of $m$ and $n$.

**Defn 19.** Let $A$ be an $m \times n$ matrix. We define the singular values of $A$ to be the singular values of the linear transformation $L_A$.

**Theorem (Singular Value Decomposition Theorem for Matrices).** Let $A$ be an $m \times n$ matrix of rank $r$ with the positive singular values $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r$, and let $\Sigma$ be the $m \times n$ matrix defined by
$$\Sigma_{i,j} = \begin{cases} \sigma_i & \text{if } i = j \le r \\ 0 & \text{otherwise.} \end{cases}$$
Then there exists an $m \times m$ unitary matrix $U$ and an $n \times n$ unitary matrix $V$ such that $A = U\Sigma V^*$.

**Defn 20.** Let $A$ be an $m \times n$ matrix of rank $r$ with positive singular values $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r$. A factorization $A = U\Sigma V^*$, where $U$ and $V$ are unitary matrices and $\Sigma$ is the $m \times n$ matrix defined as in Theorem 6.27, is called a singular value decomposition of $A$.
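Theorem 6.27 numerically: NumPy's `svd` returns $U$, the singular values, and $V^*$ directly. A sketch on a made-up $2 \times 3$ real matrix:

```python
import numpy as np

# Hypothetical 2x3 matrix of rank 2.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])

U, s, Vh = np.linalg.svd(A)   # Vh is V* in the theorem's notation
Sigma = np.zeros(A.shape)     # build the m x n matrix Sigma of Defn 20
Sigma[:len(s), :len(s)] = np.diag(s)

assert np.allclose(U @ Sigma @ Vh, A)   # A = U Sigma V*
assert np.all(s[:-1] >= s[1:])          # sigma_1 >= sigma_2 >= ... (Defn 18)
```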

**Section 7.1**

**Defn 1.** A square matrix $A_i$ is called a Jordan block corresponding to $\lambda$ if it is of the form
$$A_i = \begin{pmatrix} \lambda & 1 & & \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ & & & \lambda \end{pmatrix}.$$

**Defn 2.** A square matrix $[T]_\beta$ is called a Jordan canonical form of $T$ if it has the following form, where each $A_i$ is a Jordan block and each $O$ is the zero matrix:
$$[T]_\beta = \begin{pmatrix} A_1 & O & \cdots & O & O \\ O & A_2 & \cdots & O & O \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ O & O & \cdots & A_{k-1} & O \\ O & O & \cdots & O & A_k \end{pmatrix}.$$
The basis $\beta$ used to form the Jordan canonical form of $T$ is called the Jordan canonical basis for $T$.

**Defn 3.** Let $T$ be a linear operator on a vector space $V$, and let $\lambda$ be a scalar. A nonzero vector $x$ in $V$ is called a generalized eigenvector of $T$ corresponding to $\lambda$ if $(T - \lambda I)^p(x) = 0$ for some positive integer $p$.

**Defn 4.** Let $T$ be a linear operator on a vector space $V$, and let $\lambda$ be an eigenvalue of $T$. The generalized eigenspace of $T$ corresponding to $\lambda$, denoted $K_\lambda$, is the subset of $V$ defined by
$$K_\lambda = \{x \in V : (T - \lambda I)^p(x) = 0 \text{ for some positive integer } p\}.$$

**Theorem 7.1.** Let $T$ be a linear operator on a vector space $V$, and let $\lambda$ be an eigenvalue of $T$. Then (1) $K_\lambda$ is a $T$-invariant subspace of $V$ containing $E_\lambda$. (2) For any scalar $\mu \ne \lambda$, the restriction of $T - \mu I$ to $K_\lambda$ is one-to-one.

**Theorem 7.2.** Let $T$ be a linear operator on a finite-dimensional vector space $V$ such that the characteristic polynomial of $T$ splits. Suppose that $\lambda$ is an eigenvalue of $T$ with multiplicity $m$. Then (1) $\dim(K_\lambda) \le m$. (2) $K_\lambda = N((T - \lambda I)^m)$.

**Theorem 7.3.** Let $T$ be a linear operator on a finite-dimensional vector space $V$ such that the characteristic polynomial of $T$ splits, and let $\lambda_1, \lambda_2, \dots, \lambda_k$ be the distinct eigenvalues of $T$. Then, for every $x \in V$, there exist vectors $v_i \in K_{\lambda_i}$, $1 \le i \le k$, such that $x = v_1 + v_2 + \cdots + v_k$.

**Theorem 7.4.** Let $T$ be a linear operator on a finite-dimensional vector space $V$ such that the characteristic polynomial of $T$ splits, and let $\lambda_1, \lambda_2, \dots, \lambda_k$ be the distinct eigenvalues of $T$ with corresponding multiplicities $m_1, m_2, \dots, m_k$. For $1 \le i \le k$, let $\beta_i$ be an ordered basis for $K_{\lambda_i}$. Then the following statements are true. (a) $\beta_i \cap \beta_j = \emptyset$ for $i \ne j$. (b) $\beta = \beta_1 \cup \beta_2 \cup \cdots \cup \beta_k$ is an ordered basis for $V$. (c) $\dim(K_{\lambda_i}) = m_i$ for all $i$.

**Cor 5.** Let $T$ be a linear operator on a finite-dimensional vector space $V$ such that the characteristic polynomial of $T$ splits. Then $T$ is diagonalizable if and only if $E_\lambda = K_\lambda$ for every eigenvalue $\lambda$ of $T$.

**Defn 5.** Let $T$ be a linear operator on a vector space $V$, and let $x$ be a generalized eigenvector of $T$ corresponding to the eigenvalue $\lambda$. Suppose that $p$ is the smallest positive integer for which $(T - \lambda I)^p(x) = 0$. Then the ordered set
$$\{(T - \lambda I)^{p-1}(x), (T - \lambda I)^{p-2}(x), \dots, (T - \lambda I)(x), x\}$$
is called a cycle of generalized eigenvectors of $T$ corresponding to $\lambda$. The vectors $(T - \lambda I)^{p-1}(x)$ and $x$ are called the initial vector and the end vector of the cycle, respectively. We say that the length of the cycle is $p$.

**Theorem 7.5.** Let $T$ be a linear operator on a finite-dimensional vector space $V$ whose characteristic polynomial splits, and suppose that $\beta$ is a basis for $V$ such that $\beta$ is a disjoint union of cycles of generalized eigenvectors of $T$. Then the following statements are true. (a) For each cycle $\gamma$ of generalized eigenvectors contained in $\beta$, $W = \operatorname{span}(\gamma)$ is $T$-invariant, and $[T_W]_\gamma$ is a Jordan block. (b) $\beta$ is a Jordan canonical basis for $V$.

**Theorem 7.6.** Let $T$ be a linear operator on a vector space $V$, and let $\lambda$ be an eigenvalue of $T$. Suppose that $\gamma_1, \gamma_2, \dots, \gamma_q$ are cycles of generalized eigenvectors of $T$ corresponding to $\lambda$ such that the initial vectors of the $\gamma_i$'s are distinct and form a linearly independent set. Then the $\gamma_i$'s are disjoint, and their union $\gamma = \bigcup_{i=1}^q \gamma_i$ is linearly independent.

**Cor 1.**
Every cycle of generalized eigenvectors of a linear operator is linearly independent.

**Theorem 7.7.** Let $T$ be a linear operator on a finite-dimensional vector space $V$, and let $\lambda$ be an eigenvalue of $T$. Then $K_\lambda$ has an ordered basis consisting of a union of disjoint cycles of generalized eigenvectors corresponding to $\lambda$.

**Cor 1.** Let $T$ be a linear operator on a finite-dimensional vector space $V$ whose characteristic polynomial splits. Then $T$ has a Jordan canonical form.

**Defn 6.** Let $A \in M_n(F)$ be such that the characteristic polynomial of $A$ (and hence of $L_A$) splits. Then the Jordan canonical form of $A$ is defined to be the Jordan canonical form of the linear operator $L_A$ on $F^n$.
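Theorem 7.2(2) gives a computable description of $K_\lambda$: its dimension is the nullity of $(A - \lambda I)^m$. A NumPy sketch on a made-up $2 \times 2$ Jordan block, where the eigenspace is strictly smaller than the generalized eigenspace:

```python
import numpy as np

# Hypothetical matrix with single eigenvalue lam = 5 of multiplicity m = 2.
A = np.array([[5.0, 1.0],
              [0.0, 5.0]])
N = A - 5.0 * np.eye(2)

dim_E = 2 - np.linalg.matrix_rank(N)        # dim E_5 = nullity(A - 5I)
dim_K = 2 - np.linalg.matrix_rank(N @ N)    # dim K_5 = nullity((A - 5I)^2)

assert dim_E == 1   # only one independent eigenvector
assert dim_K == 2   # K_5 = N((A - 5I)^2) is all of F^2 (Theorem 7.2)
```

Here $e_2$ is an end vector of a length-2 cycle, with initial vector $(A - 5I)e_2 = e_1$.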

**Cor 2.** Let $A$ be an $n \times n$ matrix whose characteristic polynomial splits. Then $A$ has a Jordan canonical form $J$, and $A$ is similar to $J$.

**Theorem 7.8.** Let $T$ be a linear operator on a finite-dimensional vector space $V$ whose characteristic polynomial splits. Then $V$ is the direct sum of the generalized eigenspaces of $T$. (Compare to Theorem 5.11.)
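Cor 2 can be demonstrated with SymPy, whose `Matrix.jordan_form` returns the similarity transform and the Jordan form. A sketch on a made-up matrix whose characteristic polynomial $(2 - t)^3$ splits, with blocks of sizes 2 and 1 for the eigenvalue 2 (the exact ordering of blocks in `J` is up to the library):

```python
from sympy import Matrix

# Hypothetical matrix: eigenvalue 2 with multiplicity 3, dim(E_2) = 2,
# so its Jordan form has two blocks (one of size 2, one of size 1).
A = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 2]])

P, J = A.jordan_form()   # J is the Jordan canonical form; A = P J P^{-1}

assert A == P * J * P.inv()                      # A is similar to J (Cor 2)
assert all(J[i, i] == 2 for i in range(3))       # eigenvalue 2 on the diagonal
assert sum(J[i, i + 1] for i in range(2)) == 1   # exactly one superdiagonal 1
```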

**Section 7.2**

For the purposes of this section, we fix a linear operator $T$ on an $n$-dimensional vector space $V$ such that the characteristic polynomial of $T$ splits. Let $\lambda_1, \lambda_2, \dots, \lambda_k$ be the distinct eigenvalues of $T$. By Theorem 7.7, each generalized eigenspace $K_{\lambda_i}$ contains an ordered basis $\beta_i$ consisting of a union of disjoint cycles of generalized eigenvectors corresponding to $\lambda_i$. So by Theorems 7.4(b) and 7.5, the union $\beta = \bigcup \beta_i$ is a Jordan canonical basis for $T$. For each $i$, let $T_i$ be the restriction of $T$ to $K_{\lambda_i}$, and let $A_i = [T_i]_{\beta_i}$. Then $A_i$ is the Jordan canonical form of $T_i$, and
$$J = [T]_\beta = \begin{pmatrix} A_1 & O & \cdots & O \\ O & A_2 & \cdots & O \\ \vdots & \vdots & \ddots & \vdots \\ O & O & \cdots & A_k \end{pmatrix}$$
is the Jordan canonical form of $T$. In this matrix, each $O$ is a zero matrix of appropriate size.

**Defn 1.** We adopt the following convention: the basis $\beta_i$ for $K_{\lambda_i}$ will henceforth be ordered in such a way that the cycles appear in order of decreasing length. That is, if $\beta_i$ is a disjoint union of cycles $\gamma_1, \gamma_2, \dots, \gamma_{n_i}$ and if the length of the cycle $\gamma_j$ is $p_j$, we index the cycles so that $p_1 \ge p_2 \ge \cdots \ge p_{n_i}$. This ordering of the cycles limits the possible orderings of vectors in $\beta_i$, which in turn determines the matrix $A_i$. It is in this sense that $A_i$ is unique. It then follows that the Jordan canonical form for $T$ is unique up to an ordering of the eigenvalues of $T$.

**Defn 2.** Let $T_i$ be the restriction of $T$ to $K_{\lambda_i}$. A dot diagram of $T_i$ will aid in the visualization of the matrix $A_i$. Suppose that $\beta_i$ is a disjoint union of cycles of generalized eigenvectors $\gamma_1, \gamma_2, \dots, \gamma_{n_i}$ with lengths $p_1 \ge p_2 \ge \cdots \ge p_{n_i}$, respectively. The dot diagram contains one dot for each vector in $\beta_i$, and the dots are configured according to the following rules. (1) The array consists of $n_i$ columns (one column for each cycle). (2) Counting from left to right, the $j$th column consists of the $p_j$ dots that correspond to the vectors of $\gamma_j$, starting with the initial vector at the top and continuing down to the end vector.
Denote the end vectors of the cycles by $v_1, v_2, \dots, v_{n_i}$. In the following dot diagram of $T_i$, each dot is labeled with the name of the vector in $\beta_i$ to which it corresponds (shorter cycles give shorter columns):
$$\begin{array}{cccc}
(T - \lambda_i I)^{p_1 - 1}(v_1) & (T - \lambda_i I)^{p_2 - 1}(v_2) & \cdots & (T - \lambda_i I)^{p_{n_i} - 1}(v_{n_i}) \\
(T - \lambda_i I)^{p_1 - 2}(v_1) & (T - \lambda_i I)^{p_2 - 2}(v_2) & \cdots & (T - \lambda_i I)^{p_{n_i} - 2}(v_{n_i}) \\
\vdots & \vdots & & \vdots \\
(T - \lambda_i I)(v_1) & (T - \lambda_i I)(v_2) & & (T - \lambda_i I)(v_{n_i}) \\
v_1 & v_2 & \cdots & v_{n_i}
\end{array}$$

**Theorem 7.9.** We fix a basis $\beta_i$ of $K_{\lambda_i}$ so that $\beta_i$ is a disjoint union of $n_i$ cycles of generalized eigenvectors with lengths $p_1 \ge p_2 \ge \cdots \ge p_{n_i}$. For any positive integer $r$, the vectors in $\beta_i$

that are associated with the dots in the first $r$ rows of the dot diagram of $T_i$ constitute a basis for $N((T - \lambda_i I)^r)$. Hence the number of dots in the first $r$ rows of the dot diagram equals $\operatorname{nullity}((T - \lambda_i I)^r)$.

**Cor 1.** The dimension of $E_{\lambda_i}$ is $n_i$. Hence in a Jordan canonical form of $T$, the number of Jordan blocks corresponding to $\lambda_i$ equals the dimension of $E_{\lambda_i}$.

**Theorem.** Let $r_j$ denote the number of dots in the $j$th row of the dot diagram of $T_i$, the restriction of $T$ to $K_{\lambda_i}$. Then the following statements are true. (a) $r_1 = \dim(V) - \operatorname{rank}(T - \lambda_i I)$. (b) $r_j = \operatorname{rank}((T - \lambda_i I)^{j-1}) - \operatorname{rank}((T - \lambda_i I)^j)$ if $j > 1$.

**Cor 1.** For any eigenvalue $\lambda_i$ of $T$, the dot diagram of $T_i$ is unique. Thus, subject to the convention that the cycles of generalized eigenvectors for the bases of each generalized eigenspace are listed in order of decreasing length, the Jordan canonical form of a linear operator or a matrix is unique up to the ordering of the eigenvalues.

**Theorem.** Let $A$ and $B$ be $n \times n$ matrices, each having Jordan canonical forms computed according to the conventions of this section. Then $A$ and $B$ are similar if and only if they have (up to an ordering of their eigenvalues) the same Jordan canonical form.

**Section 7.3**

**Defn 1.** Let $T$ be a linear operator on a finite-dimensional vector space. A polynomial $p(t)$ is called a minimal polynomial of $T$ if $p(t)$ is a monic polynomial of least positive degree for which $p(T) = T_0$.

**Theorem.** Let $p(t)$ be a minimal polynomial of a linear operator $T$ on a finite-dimensional vector space $V$. (a) For any polynomial $g(t)$, if $g(T) = T_0$, then $p(t)$ divides $g(t)$. In particular, $p(t)$ divides the characteristic polynomial of $T$. (b) The minimal polynomial of $T$ is unique.

**Defn 2.** Let $A \in M_n(F)$. The minimal polynomial $p(t)$ of $A$ is the monic polynomial of least positive degree for which $p(A) = O$.

**Theorem.** Let $T$ be a linear operator on a finite-dimensional vector space $V$, and let $\beta$ be an ordered basis for $V$.
Then the minimal polynomial of T is the same as the minimal polynomial of [T]_β.

Cor 1. For any A ∈ M_n(F), the minimal polynomial of A is the same as the minimal polynomial of L_A.

Theorem. Let T be a linear operator on a finite-dimensional vector space V, and let p(t) be the minimal polynomial of T. A scalar λ is an eigenvalue of T if and only if p(λ) = 0. Hence the characteristic polynomial and the minimal polynomial of T have the same zeros.

Cor 1. Let T be a linear operator on a finite-dimensional vector space V with minimal polynomial p(t) and characteristic polynomial f(t). Suppose that f(t) factors as

f(t) = (λ_1 - t)^{n_1} (λ_2 - t)^{n_2} ... (λ_k - t)^{n_k},

where λ_1, λ_2, ..., λ_k are the distinct eigenvalues of T. Then there exist integers m_1, m_2, ..., m_k such that 1 ≤ m_i ≤ n_i for all i and

p(t) = (t - λ_1)^{m_1} (t - λ_2)^{m_2} ... (t - λ_k)^{m_k}.

Theorem. Let T be a linear operator on a finite-dimensional vector space V such that V is a T-cyclic subspace of itself. Then the characteristic polynomial f(t) and the minimal polynomial p(t) have the same degree, and hence f(t) = (-1)^n p(t).

Theorem. Let T be a linear operator on a finite-dimensional vector space V. Then T is diagonalizable if and only if the minimal polynomial of T is of the form

p(t) = (t - λ_1)(t - λ_2) ... (t - λ_k),

where λ_1, λ_2, ..., λ_k are the distinct eigenvalues of T.
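The rank formulas above (r_1 = dim(V) - rank(T - λ_i I) and r_j = rank((T - λ_i I)^{j-1}) - rank((T - λ_i I)^j) for j > 1) let one read off a dot diagram from ranks alone. A small numerical sketch, not from the notes (the matrix and helper names are my own); the matrix is built in Jordan form so the expected diagram is known in advance:

```python
import numpy as np

def jordan_block(lam, k):
    """A single k x k Jordan block with eigenvalue lam."""
    return lam * np.eye(k) + np.diag(np.ones(k - 1), 1)

# One eigenvalue (lam = 2) with Jordan blocks of sizes 3, 2, 1: the dot
# diagram has columns of lengths p_1 = 3, p_2 = 2, p_3 = 1, hence rows
# of lengths r_1 = 3, r_2 = 2, r_3 = 1.
lam = 2.0
blocks = [jordan_block(lam, 3), jordan_block(lam, 2), jordan_block(lam, 1)]
A = np.zeros((6, 6))
pos = 0
for B in blocks:
    k = B.shape[0]
    A[pos:pos + k, pos:pos + k] = B
    pos += k

M = A - lam * np.eye(6)
rank = np.linalg.matrix_rank

# r_1 = dim(V) - rank(A - lam*I); r_j = rank(M^{j-1}) - rank(M^j) for j > 1.
rows = [int(6 - rank(M))]
j = 2
while sum(rows) < 6:
    rows.append(int(rank(np.linalg.matrix_power(M, j - 1))
                    - rank(np.linalg.matrix_power(M, j))))
    j += 1

print(rows)  # row lengths of the dot diagram
```

The computed first row, r_1 = 3, also illustrates the corollary to Theorem 7.9: the number of Jordan blocks for λ equals dim(E_λ) = nullity(A - λI).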
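The Section 7.3 theorems above can be checked numerically: the minimal polynomial of a matrix is found by locating the first power A^d that is a linear combination of I, A, ..., A^{d-1}. A hedged sketch, not from the notes (the function name, tolerance, and example matrix are my own):

```python
import numpy as np

def minimal_polynomial(A, tol=1e-9):
    """Monic minimal polynomial of A, coefficients highest degree first.

    Finds the smallest d for which A^d lies in span{I, A, ..., A^{d-1}};
    the coefficients of that combination supply the lower-order terms.
    """
    n = A.shape[0]
    powers = [np.eye(n).ravel()]  # flattened I, A, A^2, ...
    for d in range(1, n + 1):
        powers.append(np.linalg.matrix_power(A, d).ravel())
        B = np.column_stack(powers[:-1])
        # Least-squares solve  A^d = c_0 I + c_1 A + ... + c_{d-1} A^{d-1}.
        c = np.linalg.lstsq(B, powers[-1], rcond=None)[0]
        if np.linalg.norm(B @ c - powers[-1]) < tol:
            # p(t) = t^d - c_{d-1} t^{d-1} - ... - c_1 t - c_0
            return np.concatenate(([1.0], -c[::-1]))
    raise AssertionError("unreachable: by Cayley-Hamilton, d <= n suffices")

# The characteristic polynomial of A is (1 - t)^2 (2 - t), but the minimal
# polynomial is (t - 1)(t - 2) = t^2 - 3t + 2: a product of distinct linear
# factors, consistent with A being diagonalizable (it is already diagonal).
A = np.diag([1.0, 1.0, 2.0])
p = minimal_polynomial(A)
print(np.round(p, 6))  # coefficients of t^2 - 3t + 2
```

Note how the result bears out two of the statements above: the minimal polynomial divides the characteristic polynomial, and its zeros (1 and 2) are exactly the distinct eigenvalues.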
