1. General Vector Spaces

1.1. Vector space axioms.

Definition 1.1. Let V be a nonempty set of objects on which the operations of addition and scalar multiplication are defined. By addition we mean a rule for assigning to each pair of vectors u, v ∈ V a unique vector u + v. By scalar multiplication we mean a rule for associating to each scalar k and each u ∈ V a unique vector ku. The set V together with these operations is called a vector space, provided the following properties hold for all u, v, w ∈ V and scalars k, l in some field K:

(1) If u, v ∈ V, then u + v ∈ V. We say that V is closed under addition.
(2) u + v = v + u.
(3) (u + v) + w = u + (v + w).
(4) V contains an object 0, called the zero vector, which satisfies u + 0 = u for every vector u ∈ V.
(5) For each u ∈ V there exists an object -u such that u + (-u) = 0.
(6) If u ∈ V and k ∈ K, then ku ∈ V. We say V is closed under scalar multiplication.
(7) k(u + v) = ku + kv.
(8) (k + l)u = ku + lu.
(9) k(lu) = (kl)u.
(10) 1u = u, where 1 is the identity in K.

Remark 1.2. The most important vector spaces are real vector spaces (for which K = R in the preceding definition) and complex vector spaces (where K is the complex numbers C).

1.2. Subspaces, linear independence, span, basis.

Definition 1.3. A nonempty subset W of a vector space V is called a subspace if W is closed under scalar multiplication and addition.

Definition 1.4. A set M = {v_1, ..., v_s} of vectors in V is called linearly independent provided the only set {c_1, ..., c_s} of scalars which solves the equation c_1 v_1 + c_2 v_2 + ... + c_s v_s = 0 is c_1 = c_2 = ... = c_s = 0. If M is not linearly independent, then it is called linearly dependent.

Definition 1.5. The span of a set of vectors M = {v_1, ..., v_s} is the set of all possible linear combinations of the members of M.

Definition 1.6. A set of vectors in a subspace W of V is said to be a basis for W if it is linearly independent and its span is W.

Definition 1.7. A vector space V is finite-dimensional if it has a basis with finitely many vectors, and infinite-dimensional otherwise. If V is finite-dimensional, then the dimension of V is the number of vectors in any basis; otherwise the dimension of V is infinite.
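The notions above can be explored numerically. The short Python sketch below (an illustration only, not part of the original notes, with sample vectors chosen purely for demonstration) tests linear independence of a finite set of vectors in R^n by comparing the rank of the matrix having those vectors as columns with the number of vectors.

```python
import numpy as np

def linearly_independent(vectors, tol=1e-10):
    """Return True if the given vectors in R^n are linearly independent.

    The vectors are stacked as the columns of a matrix A; by the definitions
    above they are independent exactly when rank(A) equals the number of
    vectors, i.e. the only solution of c_1 v_1 + ... + c_s v_s = 0 is c = 0.
    """
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A, tol=tol) == A.shape[1]

# Sample vectors (chosen for illustration): the third is the sum of the first two.
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = v1 + v2

print(linearly_independent([v1, v2]))      # True
print(linearly_independent([v1, v2, v3]))  # False: v3 = v1 + v2 is a dependence
```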

Examples.

Illustration 1: Euclidean and Complex Spaces. The most important examples of finite-dimensional vector spaces are n-dimensional Euclidean space R^n and n-dimensional complex space C^n.

Illustration 2. An important example of an infinite-dimensional vector space is the space of real-valued functions which have n-th order continuous derivatives on all of R, which we denote by C^n(R).

Definition 1.8. Let f_1(x), f_2(x), ..., f_n(x) be elements of C^(n-1)(R). The Wronskian of these functions is the determinant whose n-th row contains the (n-1)-st derivatives of the functions:

W(x) = det [ f_1(x)         f_2(x)         ...  f_n(x)         ]
           [ f_1'(x)        f_2'(x)        ...  f_n'(x)        ]
           [ ...                                ...            ]
           [ f_1^(n-1)(x)   f_2^(n-1)(x)   ...  f_n^(n-1)(x)   ].

Theorem 1.9 (Wronski's test for linear independence). Let f_1(x), f_2(x), ..., f_n(x) be real-valued functions which have (n-1) continuous derivatives on all of R. If the Wronskian of these functions is not identically zero on R, then the functions form a linearly independent set in C^(n-1)(R).

Example. Show that f_1(x) = sin^2 2x, f_2(x) = cos^2 2x, f_3(x) = cos 4x are linearly dependent in C^2(R).

Solution: One might first examine the Wronskian of our functions. A simple computation shows that it is identically 0; note, however, that this alone does not settle the question, since Wronski's test only certifies independence. Instead, since cos 4x = cos^2 2x - sin^2 2x, it follows that f_1(x) - f_2(x) + f_3(x) = 0, and our functions are linearly dependent.

2. Linear Transformations

Definition 2.1. Let V, W be real vector spaces. A transformation T : V → W is a linear transformation if for any pair α, β ∈ R and u, v ∈ V we have T(αu + βv) = αT(u) + βT(v).

Illustration. Let V = C^1(R) denote the continuously differentiable real-valued functions defined on R, and W = C^0(R) the continuous real-valued functions on R. The derivative operator d/dx : V → W, defined by (d/dx)(f) = df/dx ∈ W for f ∈ V, is linear, since d/dx(αf + βg) = α df/dx + β dg/dx.

Example. Find the matrix representation A of the linear transformation T : R^2 → R^2, where T rotates each vector x ∈ R^2 with basepoint at the origin clockwise by an angle θ.

Solution: We must find the images T(e_1) and T(e_2) of the standard basis under our transformation. It is easy to check that T(e_1) = T((1, 0)^T) = (cos θ, -sin θ)^T, while T(e_2) = T((0, 1)^T) = (sin θ, cos θ)^T. Hence our matrix is

A = [  cos θ  sin θ ]
    [ -sin θ  cos θ ].
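As a quick numerical check of the rotation example (illustrative only; the angle below is an arbitrary choice), the following Python sketch builds the clockwise rotation matrix from the images of the standard basis vectors and applies it to them.

```python
import numpy as np

def clockwise_rotation_matrix(theta):
    """Matrix of clockwise rotation by theta, with columns T(e1) and T(e2)."""
    T_e1 = np.array([np.cos(theta), -np.sin(theta)])  # image of e1 = (1, 0)
    T_e2 = np.array([np.sin(theta),  np.cos(theta)])  # image of e2 = (0, 1)
    return np.column_stack([T_e1, T_e2])

theta = np.pi / 2                       # 90 degrees, chosen for illustration
A = clockwise_rotation_matrix(theta)
print(A @ np.array([1.0, 0.0]))         # approx (0, -1): e1 rotated clockwise
print(A @ np.array([0.0, 1.0]))         # approx (1, 0):  e2 rotated clockwise
```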

Isomorphism.

Definition 2.2. A linear transformation T : V → W is called an isomorphism if it is one-to-one and onto, and we say a vector space V is isomorphic to W if there is an isomorphism between V and W.

Theorem 2.3. Every real n-dimensional vector space is isomorphic to R^n.

Example. If V is an n-dimensional vector space and the transformation T : V → R^n is an isomorphism, show there exists a unique inner product ⟨ , ⟩ on V such that T(u) · T(v) = ⟨u, v⟩, where T(u) · T(v) denotes the Euclidean dot product on R^n.

Solution: We show that ⟨u, v⟩ := T(u) · T(v) defines an inner product.

⟨u, v⟩ = T(u) · T(v) = T(v) · T(u) = ⟨v, u⟩.

⟨u + v, w⟩ = T(u + v) · T(w) = (T(u) + T(v)) · T(w) = T(u) · T(w) + T(v) · T(w) = ⟨u, w⟩ + ⟨v, w⟩.

⟨ku, v⟩ = T(ku) · T(v) = k T(u) · T(v) = k⟨u, v⟩.

Since T is an isomorphism, ⟨v, v⟩ = ||T(v)||^2 ≥ 0, with ⟨v, v⟩ = 0 if and only if v = 0. So ⟨u, v⟩ satisfies all the properties of an inner product. Uniqueness of the inner product on V follows from the corresponding property of the Euclidean dot product on R^n.

Kernel and range, one-to-one and onto.

Let T : V → W be a linear transformation. Then:

Definition 2.4. The kernel of T is the set ker(T) := {x ∈ V : T(x) = 0}.

Definition 2.5. The range of T is the set {y ∈ W : there exists x ∈ V such that y = T(x)}.

Definition 2.6. T is onto if its range is all of W, and one-to-one if T maps distinct vectors in V to distinct vectors in W. We say T is an injection if it is one-to-one, and a surjection if it is onto.

Example. Let T : V → W be a linear transformation. Show that T is one-to-one if and only if ker(T) = {0}.

Solution: Suppose first that T is one-to-one. Since T is linear, T(0) = 0. Since T is one-to-one, 0 is then the only vector x for which T(x) = 0, so ker(T) = {0}. Next suppose ker(T) = {0}, and choose x_1, x_2 ∈ V such that x_1 ≠ x_2. Then x_1 - x_2 ≠ 0 is not in the kernel of T, so that T(x_1) - T(x_2) = T(x_1 - x_2) ≠ 0, and T is one-to-one.

3. Matrix Algebra

Theorem 3.1. Let T : R^n → R^m be a linear transformation, and let {e_1, ..., e_n} denote the standard basis for R^n. Then given any x ∈ R^n, we can express T(x) as a matrix transformation T(x) = Ax, where A is the m × n matrix whose i-th column is T(e_i).

Let us fix some notation. We denote the entry in the i-th row and j-th column of A by the lowercase a_ij.
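To illustrate Theorem 3.1 numerically (a sketch only, with a linear map chosen purely for demonstration), the code below builds the matrix of a transformation T : R^3 → R^2 by applying T to the standard basis vectors and stacking the images as columns.

```python
import numpy as np

def T(x):
    """A sample linear map T : R^3 -> R^2, T(x1, x2, x3) = (x1 + 2*x3, 3*x2 - x3)."""
    return np.array([x[0] + 2 * x[2], 3 * x[1] - x[2]])

# Columns of A are the images of the standard basis vectors (Theorem 3.1).
n = 3
A = np.column_stack([T(e) for e in np.eye(n)])

x = np.array([1.0, -2.0, 4.0])
print(A)            # [[ 1.  0.  2.], [ 0.  3. -1.]]
print(A @ x, T(x))  # both give [  9. -10.]
```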

Fundamental spaces of a matrix.

Definition 3.2. Let A be an m × n matrix.
(1) The row (column) space of A is the subspace spanned by the row (column) vectors of A. These are denoted row(A) and col(A) respectively.
(2) The null space is the solution space of Ax = 0, denoted null(A).

Definition 3.3. The dimension of the row space of a matrix A is called the rank of A, while the dimension of the null space is called the nullity of A.

Definition 3.4. If S is a nonempty subset of R^n, then the orthogonal complement of S, denoted S⊥, is the set of vectors in R^n which are orthogonal to every vector in S.

Theorem 3.5. If A is an m × n matrix, then the row space of A and the null space of A are orthogonal complements (as are the column space of A and the null space of A^T).

Example. For the matrix A, show that null(A) and row(A) are orthogonal complements.

Solution: Recall that the null space of A consists of those vectors which solve the equation Ax = 0. It is left as an exercise to show that the null space is spanned by the vectors (7, 6, 3, 0, 5)^T and (1, 2, 1, 4, 0)^T. Further, row(A) is the same as the span of the row vectors from the reduced echelon form of A (check!), so row(A) is spanned by the three non-zero rows of the reduced matrix. It is easily checked, by computing the dot products pairwise, that any vector in the row space is orthogonal to any vector in the null space.

Example. Prove that the row vectors of an invertible n × n matrix A form a basis for R^n.

Solution: If A is invertible, then the row vectors of A are linearly independent (check!). We know that the row space is a subspace of R^n, and further is spanned by n linearly independent vectors; hence the row space of A is all of R^n. It follows that the row vectors form a basis for R^n.

Dimension theorem.

Theorem 3.6. If A is an m × n matrix, then rank(A) + nullity(A) = n.

Example. Prove that if A is a square matrix for which A and A^2 have the same rank, then null(A) ∩ col(A) = {0}.

Solution: First we show that null(A) = null(A^2). By the dimension theorem, we know that dim(null(A^2)) = n - rank(A^2) = n - rank(A) = dim(null(A)). Since null(A) ⊆ null(A^2) (check!), it follows that null(A) = null(A^2). Suppose now that y ∈ null(A) ∩ col(A). Then there exists x such that y = Ax and Ay = 0. Since A^2 x = Ay = 0, x ∈ null(A^2) = null(A), and therefore y = Ax = 0.
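The dimension theorem is easy to check numerically. The sketch below (illustrative only, using an arbitrary sample matrix) computes the rank with numpy and the nullity as the number of basis vectors of the null space returned by scipy, and also spot-checks Theorem 3.5.

```python
import numpy as np
from scipy.linalg import null_space

# An arbitrary 3 x 5 sample matrix (rank 2 by construction: row 3 = row 1 + row 2).
A = np.array([[1.0, 2.0, 0.0, 1.0, 3.0],
              [0.0, 1.0, 1.0, 2.0, 1.0],
              [1.0, 3.0, 1.0, 3.0, 4.0]])

rank = np.linalg.matrix_rank(A)
N = null_space(A)                 # columns form an orthonormal basis of null(A)
nullity = N.shape[1]

print(rank, nullity, rank + nullity == A.shape[1])   # 2 3 True
# Row space and null space are orthogonal (Theorem 3.5): every row kills every null vector.
print(np.allclose(A @ N, 0))                         # True
```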

Rank Theorem for matrices.

Theorem 3.7. The row space and column space of a matrix have the same dimension.

The rank theorem has several immediate implications.

Proposition 3.8. Suppose A is an m × n matrix. Then:
(1) rank(A) = rank(A^T).
(2) rank(A) + nullity(A^T) = m.

Example. Prove the latter proposition.

Solution: To prove the first claim, note that rank(A) = dim(row(A)) = dim(col(A^T)), the latter equality following since the rows of A are the columns of A^T. By the rank theorem, dim(col(A^T)) = dim(row(A^T)) = rank(A^T), and the result follows. To prove the second claim, first recall that the dimension theorem applied to A^T reads rank(A^T) + nullity(A^T) = m. Now apply part one of the proposition, i.e. rank(A) = rank(A^T), and the result follows.

Matrix multiplication.

Definition 3.9. Suppose A is an m × n matrix and B is an n × k matrix. Then we define their product AB to be the m × k matrix whose entry in the i-th row and k-th column is (AB)_ik = Σ_{j=1}^n a_ij b_jk.

Illustration. Suppose we represent v ∈ R^n as a column vector, i.e. v = (v_1, v_2, ..., v_n)^T with respect to the standard basis. Further, let A = (a_ij) be an n × n matrix with entries a_ij. Then the vector Av obtained by multiplying v by A has components (Av)_i = Σ_{k=1}^n a_ik v_k.

Change of basis.

Definition. Suppose B = {v_1, ..., v_k} is an ordered basis for a subspace W of R^n, and w = a_1 v_1 + ... + a_k v_k is the expression for w ∈ W in terms of B. Then we call the scalars {a_1, ..., a_k} the coordinates of w with respect to B. Further, the k-tuple of coordinates [w]_B := (a_1, ..., a_k)^T is referred to as the coordinate matrix of w with respect to B.

Theorem. Suppose B and B' = {v'_1, ..., v'_n} are two bases for R^n, and w ∈ R^n. Then the relation between [w]_B and [w]_B' is given by [w]_B = P_{B'→B} [w]_B', where P_{B'→B} := ([v'_1]_B [v'_2]_B ... [v'_n]_B) is the matrix whose column vectors are the coordinate matrices of the members of B' with respect to B.

Example. Let S denote the standard basis for R^3, and let B = {v_1, v_2, v_3} be the basis with members v_1 = (1, 2, 1), v_2 = (2, 5, 0), and v_3 = (3, 3, 8). Find the transition matrices P_{B→S} and P_{S→B}.

Solution. By our theorem, we know that

P_{B→S} = ([v_1]_S [v_2]_S [v_3]_S) = [ 1 2 3 ]
                                      [ 2 5 3 ]
                                      [ 1 0 8 ].

We can immediately find P_{S→B} by noting that it must be the inverse of P_{B→S} (why?).
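The transition matrices of the last example can be checked directly in Python; the sketch below is only an illustration of the computation, reusing the basis vectors given above.

```python
import numpy as np

# Basis B from the example, as columns of the transition matrix P_{B->S}.
v1, v2, v3 = [1, 2, 1], [2, 5, 0], [3, 3, 8]
P_B_to_S = np.column_stack([v1, v2, v3]).astype(float)

# P_{S->B} is its inverse.
P_S_to_B = np.linalg.inv(P_B_to_S)

# Check: a vector with B-coordinates (1, 1, 1) is v1 + v2 + v3 in standard coordinates.
w_B = np.array([1.0, 1.0, 1.0])
w_S = P_B_to_S @ w_B
print(w_S)                    # [ 6. 10.  9.]
print(P_S_to_B @ w_S)         # recovers [1. 1. 1.]
```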

Similarity and Diagonalizability.

Definition. If A and C are square matrices of the same size, we say that C is similar to A if there is an invertible matrix P such that C = P^{-1}AP.

Properties of similar matrices:
(1) Two square matrices are similar if and only if there exist bases with respect to which the matrices represent the same linear operator.
(2) Similar matrices have the same eigenvalues, determinant, rank, nullity, and trace.

Definition. A square matrix A is diagonalizable if there exists an invertible matrix P for which P^{-1}AP is a diagonal matrix.

Theorem. If A is an n × n matrix, then the following are equivalent.
(1) A is diagonalizable.
(2) A has n linearly independent eigenvectors.
(3) R^n has a basis consisting of eigenvectors of A.

Example. Determine whether the matrix A is diagonalizable. If so, find a matrix P that diagonalizes A.

Solution: You can check that the characteristic polynomial of A is p(λ) = (λ - 1)(λ - 2)(λ - 3), so that A has three distinct eigenvalues. Since eigenvectors corresponding to distinct eigenvalues are linearly independent (check!), A has 3 linearly independent eigenvectors and we know A is diagonalizable. To determine P, we must find eigenvectors corresponding to the eigenvalues λ = 1, 2, 3. The reader can check that these eigenvectors are v_1 = (1, 1, 1)^T, v_2 = (2, 3, 3)^T, and v_3 = (1, 3, 4)^T. Hence one choice of P is the matrix with these eigenvectors as its columns,

P = [ 1 2 1 ]
    [ 1 3 3 ]
    [ 1 3 4 ].

Orthogonal diagonalizability.

Definition. A square matrix A is orthogonally diagonalizable if there exists an orthogonal matrix P for which P^T AP is a diagonal matrix.

Theorem. A matrix is orthogonally diagonalizable if and only if it is symmetric.

Example. Prove that if A is a symmetric matrix, then eigenvectors from different eigenspaces are orthogonal.

Solution. Let v_1 and v_2 be eigenvectors corresponding to distinct eigenvalues λ_1, λ_2. Consider λ_1 v_1 · v_2 = (λ_1 v_1)^T v_2 = (Av_1)^T v_2 = v_1^T A^T v_2. Since A is symmetric, v_1^T A^T v_2 = v_1^T A v_2 = v_1^T λ_2 v_2 = λ_2 v_1 · v_2. This implies (λ_1 - λ_2) v_1 · v_2 = 0, which in turn tells us v_1 · v_2 = 0.

Quadratic forms.

Definition. Let A be a real n × n matrix and x ∈ R^n. Then the real-valued function x^T Ax is called a quadratic form. For example, if

A = [ a_11 a_12 ]
    [ a_21 a_22 ],

then the quadratic form associated with A is a_11 x_1^2 + a_22 x_2^2 + (a_12 + a_21) x_1 x_2.
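To see orthogonal diagonalization and quadratic forms in action, the sketch below (illustration only, with an arbitrarily chosen symmetric matrix) uses numpy's eigh to produce an orthogonal P with P^T A P diagonal, and evaluates the associated quadratic form at a sample point.

```python
import numpy as np

# An arbitrary symmetric matrix (so it is orthogonally diagonalizable).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

eigenvalues, P = np.linalg.eigh(A)   # columns of P are orthonormal eigenvectors
D = P.T @ A @ P                      # should be diag(eigenvalues)

print(np.allclose(P.T @ P, np.eye(3)))          # True: P is orthogonal
print(np.allclose(D, np.diag(eigenvalues)))     # True: P^T A P is diagonal

# The quadratic form x^T A x, evaluated at a sample point.
x = np.array([1.0, -1.0, 2.0])
print(x @ A @ x)                                # 7.0
```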

Theorem (Principal Axes Theorem). If A is a symmetric n × n matrix, then there is an orthogonal change of variable x = Py that transforms the quadratic form x^T Ax into a quadratic form y^T Dy with no cross product terms. Specifically, if P orthogonally diagonalizes A, then x^T Ax = y^T Dy = λ_1 y_1^2 + ... + λ_n y_n^2, where λ_1, ..., λ_n are the eigenvalues of A corresponding to the eigenvectors that form the successive columns of P.

Definition. A quadratic form x^T Ax is said to be:
(1) Positive definite if x^T Ax > 0 for all x ≠ 0.
(2) Negative definite if x^T Ax < 0 for all x ≠ 0.
(3) Indefinite otherwise.

Example. Show that if A is a symmetric matrix, then A is positive definite if and only if all eigenvalues of A are positive.

Solution: From the Principal Axes Theorem, we know that we can find P such that x^T Ax = y^T Dy = λ_1 y_1^2 + ... + λ_n y_n^2. Since P is invertible, y ≠ 0 if and only if x ≠ 0, and the values taken by x^T Ax over x ≠ 0 are the same as those taken by y^T Dy over y ≠ 0. This means that x^T Ax > 0 for all x ≠ 0 if and only if all eigenvalues of A are positive.

Functions of a matrix, matrix exponential.

Definition. Suppose A is an n × n diagonalizable matrix which is diagonalized by P, and λ_1, λ_2, ..., λ_n are the ordered eigenvalues of A. If f is a real-valued function whose Taylor series converges on some interval containing the eigenvalues of A, then f(A) = P diag(f(λ_1), f(λ_2), ..., f(λ_n)) P^{-1}.

Example. Given the symmetric 3 × 3 matrix A of this example, compute exp(tA).

Solution. We leave it as an exercise to show that the eigenvalues are λ_1 = -3, λ_2, and λ_3 = -50, with corresponding eigenvectors v_1 = (0, 1, 0)^T, v_2 = (-4/5, 0, 3/5)^T, and v_3 = (3/5, 0, 4/5)^T. It follows that a matrix P that diagonalizes A is

P = [ 0 -4/5 3/5 ]
    [ 1   0   0  ]
    [ 0  3/5 4/5 ],

and since P is orthogonal, A = P diag(-3, λ_2, -50) P^T. From our theorem it follows that exp(tA) = P exp(t diag(-3, λ_2, -50)) P^T, and exp(t diag(-3, λ_2, -50)) = diag(exp(-3t), exp(λ_2 t), exp(-50t)). Multiplying out these three matrices gives each entry of exp(tA) explicitly as a combination of exp(-3t), exp(λ_2 t), and exp(-50t).
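The diagonalization formula for f(A) is easy to test numerically. The sketch below is an illustration only, using an arbitrary symmetric matrix (not the matrix of the example above, whose entries are not reproduced here); it computes exp(tA) via the eigendecomposition and compares it with scipy's expm.

```python
import numpy as np
from scipy.linalg import expm

# An arbitrary symmetric matrix, chosen only to illustrate f(A) = P f(D) P^T.
A = np.array([[-2.0, 1.0],
              [ 1.0, -2.0]])
t = 0.5

eigenvalues, P = np.linalg.eigh(A)                     # A = P diag(eigenvalues) P^T
exp_tA = P @ np.diag(np.exp(t * eigenvalues)) @ P.T    # the formula from the definition

print(np.allclose(exp_tA, expm(t * A)))                # True: matches scipy's expm
```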

Determinants. Let

A = [ a_11 a_12 ]
    [ a_21 a_22 ]

denote an arbitrary 2 × 2 matrix. Recall that the determinant of A is defined by det(A) := a_11 a_22 - a_12 a_21. More generally, let A be a square n × n matrix, and denote the entry in the i-th row and j-th column by a_ij.

Definition. The determinant of a square n × n matrix is defined by the sum det(A) = Σ ± a_{1 j_1} a_{2 j_2} ... a_{n j_n}. Here the summation is over all permutations {j_1, j_2, ..., j_n} of {1, 2, ..., n}, where the sign is + if the permutation is even, and - if the permutation is odd.

Illustration. Suppose that

A = [ a_11 a_12 a_13 ]
    [ a_21 a_22 a_23 ]
    [ a_31 a_32 a_33 ].

Then det(A) = a_11 a_22 a_33 + a_12 a_23 a_31 + a_13 a_21 a_32 - a_13 a_22 a_31 - a_12 a_21 a_33 - a_11 a_23 a_32.

Properties of determinants.

Proposition. Suppose A, B are square matrices of the same size. Then:
(1) A is invertible if and only if det(A) ≠ 0.
(2) det(AB) = det(A) det(B).
(3) det(A) = det(A^T).

Example. Show that a square matrix A is invertible if and only if A^T A is invertible.

Solution: Suppose A is invertible. Then from the first item in the above proposition we know det(A) ≠ 0. Further, from items (2) and (3) we see that det(A^T A) = det(A^T) det(A) = det(A)^2 ≠ 0, so that A^T A is invertible. On the other hand, if det(A^T A) ≠ 0 then by the same equality we have det(A) ≠ 0, and A is invertible.

Cramer's rule.

Theorem. If Ax = b is a linear system of n equations in n unknowns, then the system has a unique solution if and only if det(A) ≠ 0. Cramer's rule then says that the exact solution is given by x_1 = det(A_1)/det(A), x_2 = det(A_2)/det(A), ..., x_n = det(A_n)/det(A). Here A_i denotes the matrix which results when the i-th column of A is replaced by the column vector b.

Example. Solve

[ 1 0 ] [ x_1 ]   [ 0 ]
[ 2 1 ] [ x_2 ] = [ 1 ]

using Cramer's rule.

Solution: Here det(A) = 1. Replacing the first column of A by b gives A_1 with det(A_1) = 0, and replacing the second column gives A_2 with det(A_2) = 1. Hence x_1 = det(A_1)/det(A) = 0 and x_2 = det(A_2)/det(A) = 1.

Formula for A^{-1}.

Definition. If A is a square matrix, then the minor of entry a_ij is denoted by M_ij and is defined to be the determinant of the submatrix that remains when the i-th row and j-th column are deleted. The number C_ij = (-1)^(i+j) M_ij is called the cofactor of entry a_ij.

Definition. If A is a square matrix, the matrix

C = [ C_11 C_12 ... C_1n ]
    [ C_21 C_22 ... C_2n ]
    [ ...  ...      ...  ]
    [ C_n1 C_n2 ... C_nn ]

is called the matrix of cofactors. The adjoint of A is the transpose of C, which we denote by adj(A).
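The cofactor and adjoint constructions translate directly into code. The sketch below (illustrative only, with an arbitrary sample matrix) builds the matrix of cofactors from minors and forms adj(A) = C^T; the next theorem explains why dividing it by det(A) yields A^{-1}.

```python
import numpy as np

def cofactor_matrix(A):
    """Matrix of cofactors C_ij = (-1)**(i+j) * M_ij, with M_ij the (i, j) minor."""
    n = A.shape[0]
    C = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C

A = np.array([[2.0, 0.0, 3.0],
              [1.0, 4.0, 1.0],
              [0.0, 2.0, 5.0]])        # an arbitrary invertible sample matrix
C = cofactor_matrix(A)
adjA = C.T

# Sanity check of the upcoming inverse formula: A @ adj(A) = det(A) * I.
print(np.allclose(A @ adjA, np.linalg.det(A) * np.eye(3)))   # True
```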

Theorem. If A is invertible, then its inverse is given by A^{-1} = (1/det(A)) adj(A) = (1/det(A)) C^T.

Example. Find the inverse of the matrix A using the preceding theorem.

Solution: First we compute the determinant by expanding along the first row, det(A) = a_11 C_11 + a_12 C_12 + a_13 C_13. We can similarly obtain the remaining cofactors C_ij, which determine the adjoint, and finally the inverse A^{-1} = (1/det(A)) adj(A).

Geometric interpretation of the determinant.

Theorem. If A is a 2 × 2 matrix, then |det(A)| represents the area of the parallelogram determined by the two column vectors of A, when they are positioned so that their base points coincide. If A is a 3 × 3 matrix, then |det(A)| represents the volume of the parallelepiped determined by the three column vectors of A, when they are positioned so that their base points coincide.

Example. Find the area of the parallelogram in the plane with vertices P_1(1, 2), P_2(4, 4), P_3(7, 5), P_4(4, 3).

Solution: Consider the vectors P_1P_2 and P_1P_4, which starting from P_1 extend to P_2 and P_4 respectively. A simple calculation shows P_1P_2 = (3, 2)^T and P_1P_4 = (3, 1)^T. Placing these vectors as the columns of the matrix

A = [ 3 3 ]
    [ 2 1 ],

by our theorem we know that the area of our parallelogram is given by |det(A)| = |3·1 - 3·2| = 3.

Cross product.

Definition. Let u = (u_1, u_2, u_3)^T and v = (v_1, v_2, v_3)^T. The cross product of u with v, denoted u × v, is the vector of 2 × 2 determinants

u × v := (u_2 v_3 - u_3 v_2, -(u_1 v_3 - u_3 v_1), u_1 v_2 - u_2 v_1)^T,

whose i-th entry is, up to the sign (-1)^(i+1), the determinant of the 2 × 2 matrix obtained by deleting the i-th column of the 2 × 3 matrix with rows u and v.

Example. For u = (1, 0, 2)^T, v = (-3, 1, 0)^T, compute u × v.

Solution: By the definition, u × v = (0·0 - 2·1, -(1·0 - 2·(-3)), 1·1 - 0·(-3))^T = (-2, -6, 1)^T.
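Both the parallelogram-area interpretation and the cross product are quick to verify in code; the sketch below (illustration only) reuses the numbers from the two examples above.

```python
import numpy as np

# Area of the parallelogram with edge vectors P1P2 = (3, 2) and P1P4 = (3, 1).
A = np.column_stack([[3.0, 2.0], [3.0, 1.0]])
area = abs(np.linalg.det(A))
print(area)                       # 3.0

# Cross product example: u = (1, 0, 2), v = (-3, 1, 0).
u = np.array([1.0, 0.0, 2.0])
v = np.array([-3.0, 1.0, 0.0])
print(np.cross(u, v))             # [-2. -6.  1.]
```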

4. Eigenvalues and eigenvectors

4.1. Eigenvalues of mappings between linear spaces.

Definition 4.1. Suppose V is a real vector space and T : V → V is a linear map. Then we say λ ∈ R is an eigenvalue of T provided there exists a non-zero vector x ∈ V such that (T - λI)x = 0.

Example. Suppose V is a real vector space, and let I be the identity operator on V. Find the eigenvalues and eigenspaces of I.

Solution: Since Ix = x for all x ∈ V, it follows that 1 is the only eigenvalue, and the eigenspace corresponding to 1 is all of V.

Real and complex eigenvalues for maps between finite-dimensional spaces.

Definition 4.2. If A is an n × n matrix, then a scalar λ is called an eigenvalue of A if there exists a non-zero vector x such that Ax = λx. If λ is an eigenvalue of A, then every nonzero vector x such that Ax = λx is called an eigenvector of A.

Example. Find all eigenvalues of the matrix A of this example, and the corresponding eigenvectors.

Solution: We note that λ is an eigenvalue provided the equation (A - λI_d)x = 0 has a solution for some non-zero x, where I_d denotes the identity matrix. This is only possible if λ solves the characteristic equation det(A - λI_d) = 0. For the matrix A in our example, the characteristic equation reads (check!)

λ^3 - 6λ^2 + 11λ - 6 = (λ - 1)(λ - 2)(λ - 3) = 0,

which has solutions λ = 1, 2, 3. Next, to determine the eigenvectors corresponding to λ = 1, we must solve the system (A - I_d)x = 0 for non-zero x = (x_1, x_2, x_3)^T. Using your favourite solution method, you can easily determine that one eigenvector is (x_1, x_2, x_3)^T = (0, 1, 0)^T. Similarly, we find an eigenvector corresponding to λ = 2 is (-1, 2, 2)^T, and for λ = 3 the eigenvector is (-1, 1, 1)^T. Finally, it is important to note that any nonzero scalar multiple of any of these eigenvectors is also an eigenvector, so we have actually determined a subspace of eigenvectors corresponding to each eigenvalue (referred to as the eigenspace of λ).

Definition 4.3. If n is a positive integer, then a complex n-tuple is a sequence of n complex numbers (v_1, ..., v_n). The set of all complex n-tuples is called complex n-space and is denoted by C^n.

Definition 4.4. If u = (u_1, u_2, ..., u_n) and v = (v_1, v_2, ..., v_n) are vectors in C^n, then the complex Euclidean dot (inner) product of u and v is defined by u · v := u_1 v̄_1 + u_2 v̄_2 + ... + u_n v̄_n, where v̄_i denotes the complex conjugate of v_i. The Euclidean norm is ||v|| := sqrt(v · v).

Definition 4.5. A complex matrix A is a matrix whose entries are complex numbers. Further, we define the complex conjugate of a matrix A, denoted Ā, to be the matrix whose entries are the complex conjugates of the entries of A. That is, if A has entries a_ij, then Ā has entries ā_ij.
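Since the matrix of the eigenvalue example above is not reproduced here, the sketch below uses an arbitrary sample matrix to illustrate the same computation: finding eigenvalues and eigenvectors numerically and checking Ax = λx.

```python
import numpy as np

# An arbitrary sample matrix with eigenvalues 1, 2, 3 (upper triangular for clarity).
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 3.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)   # columns of `eigenvectors` are eigenvectors

for k in range(3):
    lam = eigenvalues[k]
    x = eigenvectors[:, k]
    # Each pair satisfies A x = lam * x (up to floating-point error).
    print(lam, np.allclose(A @ x, lam * x))
```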

Definition 4.6. If A is a complex n × n matrix, then the complex roots λ of the characteristic equation det(A - λI) = 0 are called complex eigenvalues of A. Further, complex nonzero solutions x of (A - λI)x = 0 are referred to as the complex eigenvectors corresponding to λ.

Example. Given

A = [ 4 -5 ]
    [ 1  0 ],

determine the eigenvalues and find bases for the corresponding eigenspaces.

Solution: It is left as an exercise to check that the characteristic equation is λ^2 - 4λ + 5 = 0, so the eigenvalues are λ = 2 ± i. Let us determine the eigenspace corresponding to λ = 2 + i. We must solve (A - (2 + i)I)(x, y)^T = (0, 0)^T. Since we know this system must have a non-zero solution, the reduced matrix must have a row of zeros, so we need only solve the single equation (-2 + i)x + 5y = 0, i.e. x = (2 + i)y. Taking y = 1 gives the eigenvector (2 + i, 1)^T, which spans the eigenspace of λ = 2 + i. It is a good exercise for the reader to check that, since A has real entries, for a complex eigenvalue λ with corresponding eigenvector x it is always true that λ̄ is another eigenvalue with corresponding eigenvector x̄. Hence (2 - i, 1)^T is a basis for the eigenspace corresponding to λ = 2 - i.

Generalized Eigenspaces.

Definition 4.7. Let A be a complex n × n matrix, with distinct eigenvalues {λ_1, λ_2, ..., λ_k}. The generalized eigenspace V_λi pertaining to λ_i is defined by V_λi = {x ∈ C^n : (A - λ_i I)^n x = 0}. In particular, all eigenvectors corresponding to λ_i are in V_λi.

Theorem 4.8. Let A be a complex n × n matrix with distinct eigenvalues {λ_1, λ_2, ..., λ_k} and corresponding generalized eigenspaces V_λi, i = 1, ..., k. Then:
(1) V_λi is invariant under A, in the sense that A V_λi ⊆ V_λi for i = 1, ..., k.
(2) The spaces V_λi are mutually linearly independent.
(3) dim V_λi = m(λ_i), where m(λ_i) is the multiplicity of the eigenvalue λ_i.
(4) A is similar to a block diagonal matrix with k blocks A_1, ..., A_k.

Jordan Normal Form.

Definition 4.9. Let λ ∈ C. A Jordan block J_k(λ) is the k × k upper-triangular matrix of the form

J_k(λ) = [ λ 1          ]
         [   λ 1        ]
         [     .  .     ]
         [        λ 1   ]
         [          λ   ].

Definition. A Jordan matrix is any matrix of the form

J = [ J_n1(λ_1)           0       ]
    [            ...              ]
    [    0            J_nk(λ_k)   ],

where each J_ni(λ_i) is a Jordan block and n_1 + n_2 + ... + n_k = n.
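A Jordan block is straightforward to build numerically; the sketch below (illustrative only, with an arbitrary eigenvalue and size) constructs J_k(λ) and checks the nilpotency of its off-diagonal part, (J_k(λ) - λI)^k = 0.

```python
import numpy as np

def jordan_block(lam, k):
    """The k x k Jordan block: lam on the diagonal, 1 on the superdiagonal."""
    return lam * np.eye(k) + np.diag(np.ones(k - 1), k=1)

J = jordan_block(2.0, 4)
print(J)

N = J - 2.0 * np.eye(4)                               # the nilpotent part
print(np.allclose(np.linalg.matrix_power(N, 4), 0))   # True: N^4 = 0
```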

Theorem. Given any complex n × n matrix A, there is an invertible matrix S such that

A = S [ J_n1(λ_1)           0       ] S^{-1} = SJS^{-1},
      [            ...              ]
      [    0            J_nk(λ_k)   ]

where each J_ni(λ_i) is a Jordan block and n_1 + n_2 + ... + n_k = n. The eigenvalues λ_i are not necessarily distinct, though if A is real with real eigenvalues, then S can be taken to be real.

5. Inner product spaces

Inner product.

Definition 5.1. An inner product on a real vector space V is a function that associates a unique real number ⟨u, v⟩ to each pair of vectors u, v ∈ V, in such a way that the following properties hold for all u, v, w ∈ V and scalars k:
(1) ⟨v, v⟩ ≥ 0, and ⟨v, v⟩ = 0 if and only if v = 0.
(2) ⟨u, v⟩ = ⟨v, u⟩.
(3) ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩.
(4) ⟨ku, v⟩ = k⟨u, v⟩.
A real vector space equipped with an inner product is called a real inner product space.

Illustration. The most familiar example of an inner product space is R^n, equipped with the Euclidean dot product as inner product. That is, for v, w ∈ R^n, we define the dot product v · w := Σ_{i=1}^n v_i w_i.

Example. Let V = C([0, 2π]), the continuous real-valued functions defined on the closed interval [0, 2π]. We make V into an inner product space by defining the inner product ⟨f, g⟩ := ∫_0^{2π} f(x)g(x) dx for any two functions f, g ∈ V. Suppose p and q are distinct non-zero integers. Show that f(x) = sin qx and g(x) = cos px are orthogonal with respect to this inner product.

Solution: Using the identity cos px · sin qx = (1/2)[sin((p + q)x) - sin((p - q)x)], we see that

⟨f, g⟩ = ∫_0^{2π} cos px sin qx dx = (1/2) ∫_0^{2π} [sin((p + q)x) - sin((p - q)x)] dx = 0,

as required, since ∫_0^{2π} sin(mx) dx = 0 for every integer m.

Norms, Cauchy-Schwarz inequality.

Definition 5.2. If V is an inner product space, then we define the norm of v ∈ V by ||v|| = sqrt(⟨v, v⟩), and the distance between u and v by d(u, v) = ||u - v||.

Theorem 5.3 (Pythagoras). If u, v ∈ V are orthogonal with respect to the inner product, then ||u + v||^2 = ||u||^2 + ||v||^2.

Theorem 5.4 (Cauchy-Schwarz Inequality). If u, v are vectors in an inner product space V, then |⟨u, v⟩| ≤ ||u|| ||v||.

Theorem 5.5 (Triangle Inequality). If u, v are vectors in an inner product space, then ||u + v|| ≤ ||u|| + ||v||.
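The Cauchy-Schwarz and triangle inequalities can be spot-checked on random vectors; the sketch below (illustration only) does this for the Euclidean inner product on R^5.

```python
import numpy as np

rng = np.random.default_rng(0)   # fixed seed, chosen arbitrarily for reproducibility

for _ in range(1000):
    u = rng.normal(size=5)
    v = rng.normal(size=5)
    inner = u @ v
    norm_u, norm_v = np.linalg.norm(u), np.linalg.norm(v)
    assert abs(inner) <= norm_u * norm_v + 1e-12               # Cauchy-Schwarz
    assert np.linalg.norm(u + v) <= norm_u + norm_v + 1e-12    # triangle inequality

print("Cauchy-Schwarz and the triangle inequality hold on all samples.")
```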

Orthogonality, orthonormal bases.

Definition 5.6. A pair of vectors v, w in an inner product space V are called orthogonal if ⟨v, w⟩ = 0. A set W of vectors in an inner product space is called orthogonal if each pair of vectors in W is orthogonal. The set W is orthonormal if it is orthogonal and each vector has unit length. Finally, a basis B which is orthonormal is called an orthonormal basis.

Theorem 5.7 (Properties of orthonormal bases).
(1) If {v_1, ..., v_k} is an orthonormal basis for a subspace W ⊆ V, and if w ∈ W, then we may express w = ⟨w, v_1⟩v_1 + ⟨w, v_2⟩v_2 + ... + ⟨w, v_k⟩v_k.
(2) Every nonzero subspace of a finite-dimensional inner product space V possesses an orthonormal basis (Gram-Schmidt).

Example. Confirm that the set v_1 = (2/3, 1/3, -2/3), v_2 = (1/3, 2/3, 2/3), v_3 = (2/3, -2/3, 1/3) is an orthonormal basis for R^3 equipped with the Euclidean inner product.

Solution: We leave it to you to check that v_1, v_2, v_3 are pairwise orthogonal, by computing the dot products. Further, each of these vectors has norm 1, so the set is orthonormal. Finally, an orthogonal set of nonzero vectors is linearly independent (check!), so our set forms an orthonormal basis.

Hermitian, Unitary, and Normal Matrices.

Definition 5.8. If A is a complex matrix, then the conjugate transpose of A, denoted A*, is defined by A* = Ā^T, where the overbar denotes entrywise complex conjugation.

Definition 5.9. A square complex matrix A is said to be unitary if A* = A^{-1}, and hermitian if A* = A.

Theorem. Suppose A is an n × n unitary complex matrix. Then:
(1) Ax · Ay = x · y for all x, y ∈ C^n.
(2) The column vectors and the row vectors of A form orthonormal sets with respect to the complex Euclidean inner product.

Theorem. Suppose A is a Hermitian matrix. Then:
(1) The eigenvalues of A are real numbers.
(2) Eigenvectors from different eigenspaces are orthogonal.

Example. Show that if A is a unitary matrix, then so is A*.

Solution: Since A is unitary, A^{-1} = A*, and it is left as an exercise to check that (A*)^{-1} = (A^{-1})*. From the latter it follows that (A*)^{-1} = (A*)*, as required.

Example. Show that the determinant of a Hermitian matrix is real.

Solution: First we show that det(A*) is the complex conjugate of det(A). By expanding the formula for the determinant, it is readily seen that det(Ā) is the complex conjugate of det(A). Using this, and the fact that the determinant of a matrix is the same as that of its transpose, we find that det(A*) = det(Ā^T) = det(Ā), which is the complex conjugate of det(A). Since A is Hermitian, det(A) = det(A*), so det(A) equals its own complex conjugate and is therefore real.

Definition. A square complex matrix A is called normal if AA* = A*A (a property you should check is shared by, for example, unitary and hermitian matrices).
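Finally, the spectral facts about Hermitian matrices are easy to observe numerically. The sketch below (illustrative only, with an arbitrary Hermitian matrix) checks that the eigenvalues are real, that eigenvectors for distinct eigenvalues are orthogonal, and that the matrix is normal.

```python
import numpy as np

# An arbitrary Hermitian matrix: A equals its conjugate transpose.
A = np.array([[2.0, 1.0 + 1.0j],
              [1.0 - 1.0j, 3.0]])
assert np.allclose(A, A.conj().T)

eigenvalues, V = np.linalg.eigh(A)        # eigh is designed for Hermitian matrices
print(np.allclose(eigenvalues.imag, 0))   # True: the eigenvalues are real
print(np.isclose(np.vdot(V[:, 0], V[:, 1]), 0))   # True: eigenvectors are orthogonal

# A is normal: A A* = A* A.
print(np.allclose(A @ A.conj().T, A.conj().T @ A))   # True
```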

MATH 240 Spring, Chapter 1: Linear Equations and Matrices

MATH 240 Spring, Chapter 1: Linear Equations and Matrices MATH 240 Spring, 2006 Chapter Summaries for Kolman / Hill, Elementary Linear Algebra, 8th Ed. Sections 1.1 1.6, 2.1 2.2, 3.2 3.8, 4.3 4.5, 5.1 5.3, 5.5, 6.1 6.5, 7.1 7.2, 7.4 DEFINITIONS Chapter 1: Linear

More information

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v ) Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O

More information

SUMMARY OF MATH 1600

SUMMARY OF MATH 1600 SUMMARY OF MATH 1600 Note: The following list is intended as a study guide for the final exam. It is a continuation of the study guide for the midterm. It does not claim to be a comprehensive list. You

More information

LINEAR ALGEBRA REVIEW

LINEAR ALGEBRA REVIEW LINEAR ALGEBRA REVIEW SPENCER BECKER-KAHN Basic Definitions Domain and Codomain. Let f : X Y be any function. This notation means that X is the domain of f and Y is the codomain of f. This means that for

More information

Definitions for Quizzes

Definitions for Quizzes Definitions for Quizzes Italicized text (or something close to it) will be given to you. Plain text is (an example of) what you should write as a definition. [Bracketed text will not be given, nor does

More information

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column

More information

Linear Algebra Highlights

Linear Algebra Highlights Linear Algebra Highlights Chapter 1 A linear equation in n variables is of the form a 1 x 1 + a 2 x 2 + + a n x n. We can have m equations in n variables, a system of linear equations, which we want to

More information

a 11 a 12 a 11 a 12 a 13 a 21 a 22 a 23 . a 31 a 32 a 33 a 12 a 21 a 23 a 31 a = = = = 12

a 11 a 12 a 11 a 12 a 13 a 21 a 22 a 23 . a 31 a 32 a 33 a 12 a 21 a 23 a 31 a = = = = 12 24 8 Matrices Determinant of 2 2 matrix Given a 2 2 matrix [ ] a a A = 2 a 2 a 22 the real number a a 22 a 2 a 2 is determinant and denoted by det(a) = a a 2 a 2 a 22 Example 8 Find determinant of 2 2

More information

2. Every linear system with the same number of equations as unknowns has a unique solution.

2. Every linear system with the same number of equations as unknowns has a unique solution. 1. For matrices A, B, C, A + B = A + C if and only if A = B. 2. Every linear system with the same number of equations as unknowns has a unique solution. 3. Every linear system with the same number of equations

More information

Math 4A Notes. Written by Victoria Kala Last updated June 11, 2017

Math 4A Notes. Written by Victoria Kala Last updated June 11, 2017 Math 4A Notes Written by Victoria Kala vtkala@math.ucsb.edu Last updated June 11, 2017 Systems of Linear Equations A linear equation is an equation that can be written in the form a 1 x 1 + a 2 x 2 +...

More information

Linear Algebra Primer

Linear Algebra Primer Linear Algebra Primer David Doria daviddoria@gmail.com Wednesday 3 rd December, 2008 Contents Why is it called Linear Algebra? 4 2 What is a Matrix? 4 2. Input and Output.....................................

More information

Linear Algebra Lecture Notes-II

Linear Algebra Lecture Notes-II Linear Algebra Lecture Notes-II Vikas Bist Department of Mathematics Panjab University, Chandigarh-64 email: bistvikas@gmail.com Last revised on March 5, 8 This text is based on the lectures delivered

More information

Final Review Written by Victoria Kala SH 6432u Office Hours R 12:30 1:30pm Last Updated 11/30/2015

Final Review Written by Victoria Kala SH 6432u Office Hours R 12:30 1:30pm Last Updated 11/30/2015 Final Review Written by Victoria Kala vtkala@mathucsbedu SH 6432u Office Hours R 12:30 1:30pm Last Updated 11/30/2015 Summary This review contains notes on sections 44 47, 51 53, 61, 62, 65 For your final,

More information

homogeneous 71 hyperplane 10 hyperplane 34 hyperplane 69 identity map 171 identity map 186 identity map 206 identity matrix 110 identity matrix 45

homogeneous 71 hyperplane 10 hyperplane 34 hyperplane 69 identity map 171 identity map 186 identity map 206 identity matrix 110 identity matrix 45 address 12 adjoint matrix 118 alternating 112 alternating 203 angle 159 angle 33 angle 60 area 120 associative 180 augmented matrix 11 axes 5 Axiom of Choice 153 basis 178 basis 210 basis 74 basis test

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

Exercise Sheet 1.

Exercise Sheet 1. Exercise Sheet 1 You can download my lecture and exercise sheets at the address http://sami.hust.edu.vn/giang-vien/?name=huynt 1) Let A, B be sets. What does the statement "A is not a subset of B " mean?

More information

Glossary of Linear Algebra Terms. Prepared by Vince Zaccone For Campus Learning Assistance Services at UCSB

Glossary of Linear Algebra Terms. Prepared by Vince Zaccone For Campus Learning Assistance Services at UCSB Glossary of Linear Algebra Terms Basis (for a subspace) A linearly independent set of vectors that spans the space Basic Variable A variable in a linear system that corresponds to a pivot column in the

More information

Lecture Summaries for Linear Algebra M51A

Lecture Summaries for Linear Algebra M51A These lecture summaries may also be viewed online by clicking the L icon at the top right of any lecture screen. Lecture Summaries for Linear Algebra M51A refers to the section in the textbook. Lecture

More information

6 Inner Product Spaces

6 Inner Product Spaces Lectures 16,17,18 6 Inner Product Spaces 6.1 Basic Definition Parallelogram law, the ability to measure angle between two vectors and in particular, the concept of perpendicularity make the euclidean space

More information

Math 225 Linear Algebra II Lecture Notes. John C. Bowman University of Alberta Edmonton, Canada

Math 225 Linear Algebra II Lecture Notes. John C. Bowman University of Alberta Edmonton, Canada Math 225 Linear Algebra II Lecture Notes John C Bowman University of Alberta Edmonton, Canada March 23, 2017 c 2010 John C Bowman ALL RIGHTS RESERVED Reproduction of these lecture notes in any form, in

More information

Linear Algebra: Matrix Eigenvalue Problems

Linear Algebra: Matrix Eigenvalue Problems CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given

More information

235 Final exam review questions

235 Final exam review questions 5 Final exam review questions Paul Hacking December 4, 0 () Let A be an n n matrix and T : R n R n, T (x) = Ax the linear transformation with matrix A. What does it mean to say that a vector v R n is an

More information

Chapters 5 & 6: Theory Review: Solutions Math 308 F Spring 2015

Chapters 5 & 6: Theory Review: Solutions Math 308 F Spring 2015 Chapters 5 & 6: Theory Review: Solutions Math 308 F Spring 205. If A is a 3 3 triangular matrix, explain why det(a) is equal to the product of entries on the diagonal. If A is a lower triangular or diagonal

More information

Equality: Two matrices A and B are equal, i.e., A = B if A and B have the same order and the entries of A and B are the same.

Equality: Two matrices A and B are equal, i.e., A = B if A and B have the same order and the entries of A and B are the same. Introduction Matrix Operations Matrix: An m n matrix A is an m-by-n array of scalars from a field (for example real numbers) of the form a a a n a a a n A a m a m a mn The order (or size) of A is m n (read

More information

Solutions to Final Practice Problems Written by Victoria Kala Last updated 12/5/2015

Solutions to Final Practice Problems Written by Victoria Kala Last updated 12/5/2015 Solutions to Final Practice Problems Written by Victoria Kala vtkala@math.ucsb.edu Last updated /5/05 Answers This page contains answers only. See the following pages for detailed solutions. (. (a x. See

More information

1 9/5 Matrices, vectors, and their applications

1 9/5 Matrices, vectors, and their applications 1 9/5 Matrices, vectors, and their applications Algebra: study of objects and operations on them. Linear algebra: object: matrices and vectors. operations: addition, multiplication etc. Algorithms/Geometric

More information

Chapter 3. Determinants and Eigenvalues

Chapter 3. Determinants and Eigenvalues Chapter 3. Determinants and Eigenvalues 3.1. Determinants With each square matrix we can associate a real number called the determinant of the matrix. Determinants have important applications to the theory

More information

Conceptual Questions for Review

Conceptual Questions for Review Conceptual Questions for Review Chapter 1 1.1 Which vectors are linear combinations of v = (3, 1) and w = (4, 3)? 1.2 Compare the dot product of v = (3, 1) and w = (4, 3) to the product of their lengths.

More information

ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS. 1. Linear Equations and Matrices

ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS. 1. Linear Equations and Matrices ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS KOLMAN & HILL NOTES BY OTTO MUTZBAUER 11 Systems of Linear Equations 1 Linear Equations and Matrices Numbers in our context are either real numbers or complex

More information

MAT Linear Algebra Collection of sample exams

MAT Linear Algebra Collection of sample exams MAT 342 - Linear Algebra Collection of sample exams A-x. (0 pts Give the precise definition of the row echelon form. 2. ( 0 pts After performing row reductions on the augmented matrix for a certain system

More information

MA 265 FINAL EXAM Fall 2012

MA 265 FINAL EXAM Fall 2012 MA 265 FINAL EXAM Fall 22 NAME: INSTRUCTOR S NAME:. There are a total of 25 problems. You should show work on the exam sheet, and pencil in the correct answer on the scantron. 2. No books, notes, or calculators

More information

4. Determinants.

4. Determinants. 4. Determinants 4.1. Determinants; Cofactor Expansion Determinants of 2 2 and 3 3 Matrices 2 2 determinant 4.1. Determinants; Cofactor Expansion Determinants of 2 2 and 3 3 Matrices 3 3 determinant 4.1.

More information

1. Foundations of Numerics from Advanced Mathematics. Linear Algebra

1. Foundations of Numerics from Advanced Mathematics. Linear Algebra Foundations of Numerics from Advanced Mathematics Linear Algebra Linear Algebra, October 23, 22 Linear Algebra Mathematical Structures a mathematical structure consists of one or several sets and one or

More information

Review problems for MA 54, Fall 2004.

Review problems for MA 54, Fall 2004. Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on

More information

Study Guide for Linear Algebra Exam 2

Study Guide for Linear Algebra Exam 2 Study Guide for Linear Algebra Exam 2 Term Vector Space Definition A Vector Space is a nonempty set V of objects, on which are defined two operations, called addition and multiplication by scalars (real

More information

c c c c c c c c c c a 3x3 matrix C= has a determinant determined by

c c c c c c c c c c a 3x3 matrix C= has a determinant determined by Linear Algebra Determinants and Eigenvalues Introduction: Many important geometric and algebraic properties of square matrices are associated with a single real number revealed by what s known as the determinant.

More information

Elementary linear algebra

Elementary linear algebra Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The

More information

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2 EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 2 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory April 5, 2012 Andre Tkacenko

More information

Math Linear Algebra Final Exam Review Sheet

Math Linear Algebra Final Exam Review Sheet Math 15-1 Linear Algebra Final Exam Review Sheet Vector Operations Vector addition is a component-wise operation. Two vectors v and w may be added together as long as they contain the same number n of

More information

Math 520 Exam 2 Topic Outline Sections 1 3 (Xiao/Dumas/Liaw) Spring 2008

Math 520 Exam 2 Topic Outline Sections 1 3 (Xiao/Dumas/Liaw) Spring 2008 Math 520 Exam 2 Topic Outline Sections 1 3 (Xiao/Dumas/Liaw) Spring 2008 Exam 2 will be held on Tuesday, April 8, 7-8pm in 117 MacMillan What will be covered The exam will cover material from the lectures

More information

Math 308 Practice Test for Final Exam Winter 2015

Math 308 Practice Test for Final Exam Winter 2015 Math 38 Practice Test for Final Exam Winter 25 No books are allowed during the exam. But you are allowed one sheet ( x 8) of handwritten notes (back and front). You may use a calculator. For TRUE/FALSE

More information

MATH 583A REVIEW SESSION #1

MATH 583A REVIEW SESSION #1 MATH 583A REVIEW SESSION #1 BOJAN DURICKOVIC 1. Vector Spaces Very quick review of the basic linear algebra concepts (see any linear algebra textbook): (finite dimensional) vector space (or linear space),

More information

LINEAR ALGEBRA MICHAEL PENKAVA

LINEAR ALGEBRA MICHAEL PENKAVA LINEAR ALGEBRA MICHAEL PENKAVA 1. Linear Maps Definition 1.1. If V and W are vector spaces over the same field K, then a map λ : V W is called a linear map if it satisfies the two conditions below: (1)

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

A Brief Outline of Math 355

A Brief Outline of Math 355 A Brief Outline of Math 355 Lecture 1 The geometry of linear equations; elimination with matrices A system of m linear equations with n unknowns can be thought of geometrically as m hyperplanes intersecting

More information

MATH 23a, FALL 2002 THEORETICAL LINEAR ALGEBRA AND MULTIVARIABLE CALCULUS Solutions to Final Exam (in-class portion) January 22, 2003

MATH 23a, FALL 2002 THEORETICAL LINEAR ALGEBRA AND MULTIVARIABLE CALCULUS Solutions to Final Exam (in-class portion) January 22, 2003 MATH 23a, FALL 2002 THEORETICAL LINEAR ALGEBRA AND MULTIVARIABLE CALCULUS Solutions to Final Exam (in-class portion) January 22, 2003 1. True or False (28 points, 2 each) T or F If V is a vector space

More information

Linear Algebra Practice Problems

Linear Algebra Practice Problems Linear Algebra Practice Problems Page of 7 Linear Algebra Practice Problems These problems cover Chapters 4, 5, 6, and 7 of Elementary Linear Algebra, 6th ed, by Ron Larson and David Falvo (ISBN-3 = 978--68-78376-2,

More information

TBP MATH33A Review Sheet. November 24, 2018

TBP MATH33A Review Sheet. November 24, 2018 TBP MATH33A Review Sheet November 24, 2018 General Transformation Matrices: Function Scaling by k Orthogonal projection onto line L Implementation If we want to scale I 2 by k, we use the following: [

More information

Linear Algebra Review

Linear Algebra Review Chapter 1 Linear Algebra Review It is assumed that you have had a course in linear algebra, and are familiar with matrix multiplication, eigenvectors, etc. I will review some of these terms here, but quite

More information

Chapter 4 Euclid Space

Chapter 4 Euclid Space Chapter 4 Euclid Space Inner Product Spaces Definition.. Let V be a real vector space over IR. A real inner product on V is a real valued function on V V, denoted by (, ), which satisfies () (x, y) = (y,

More information

SYLLABUS. 1 Linear maps and matrices

SYLLABUS. 1 Linear maps and matrices Dr. K. Bellová Mathematics 2 (10-PHY-BIPMA2) SYLLABUS 1 Linear maps and matrices Operations with linear maps. Prop 1.1.1: 1) sum, scalar multiple, composition of linear maps are linear maps; 2) L(U, V

More information

Vector Spaces and Linear Transformations

Vector Spaces and Linear Transformations Vector Spaces and Linear Transformations Wei Shi, Jinan University 2017.11.1 1 / 18 Definition (Field) A field F = {F, +, } is an algebraic structure formed by a set F, and closed under binary operations

More information

Linear Algebra. Min Yan

Linear Algebra. Min Yan Linear Algebra Min Yan January 2, 2018 2 Contents 1 Vector Space 7 1.1 Definition................................. 7 1.1.1 Axioms of Vector Space..................... 7 1.1.2 Consequence of Axiom......................

More information

The following definition is fundamental.

The following definition is fundamental. 1. Some Basics from Linear Algebra With these notes, I will try and clarify certain topics that I only quickly mention in class. First and foremost, I will assume that you are familiar with many basic

More information

MATH 31 - ADDITIONAL PRACTICE PROBLEMS FOR FINAL

MATH 31 - ADDITIONAL PRACTICE PROBLEMS FOR FINAL MATH 3 - ADDITIONAL PRACTICE PROBLEMS FOR FINAL MAIN TOPICS FOR THE FINAL EXAM:. Vectors. Dot product. Cross product. Geometric applications. 2. Row reduction. Null space, column space, row space, left

More information

HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION)

HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION) HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION) PROFESSOR STEVEN MILLER: BROWN UNIVERSITY: SPRING 2007 1. CHAPTER 1: MATRICES AND GAUSSIAN ELIMINATION Page 9, # 3: Describe

More information

Typical Problem: Compute.

Typical Problem: Compute. Math 2040 Chapter 6 Orhtogonality and Least Squares 6.1 and some of 6.7: Inner Product, Length and Orthogonality. Definition: If x, y R n, then x y = x 1 y 1 +... + x n y n is the dot product of x and

More information

Chapter 5 Eigenvalues and Eigenvectors

Chapter 5 Eigenvalues and Eigenvectors Chapter 5 Eigenvalues and Eigenvectors Outline 5.1 Eigenvalues and Eigenvectors 5.2 Diagonalization 5.3 Complex Vector Spaces 2 5.1 Eigenvalues and Eigenvectors Eigenvalue and Eigenvector If A is a n n

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

Math 315: Linear Algebra Solutions to Assignment 7

Math 315: Linear Algebra Solutions to Assignment 7 Math 5: Linear Algebra s to Assignment 7 # Find the eigenvalues of the following matrices. (a.) 4 0 0 0 (b.) 0 0 9 5 4. (a.) The characteristic polynomial det(λi A) = (λ )(λ )(λ ), so the eigenvalues are

More information

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET This is a (not quite comprehensive) list of definitions and theorems given in Math 1553. Pay particular attention to the ones in red. Study Tip For each

More information

LINEAR ALGEBRA REVIEW

LINEAR ALGEBRA REVIEW LINEAR ALGEBRA REVIEW JC Stuff you should know for the exam. 1. Basics on vector spaces (1) F n is the set of all n-tuples (a 1,... a n ) with a i F. It forms a VS with the operations of + and scalar multiplication

More information

ft-uiowa-math2550 Assignment OptionalFinalExamReviewMultChoiceMEDIUMlengthForm due 12/31/2014 at 10:36pm CST

ft-uiowa-math2550 Assignment OptionalFinalExamReviewMultChoiceMEDIUMlengthForm due 12/31/2014 at 10:36pm CST me me ft-uiowa-math255 Assignment OptionalFinalExamReviewMultChoiceMEDIUMlengthForm due 2/3/2 at :3pm CST. ( pt) Library/TCNJ/TCNJ LinearSystems/problem3.pg Give a geometric description of the following

More information

MATH 235. Final ANSWERS May 5, 2015

MATH 235. Final ANSWERS May 5, 2015 MATH 235 Final ANSWERS May 5, 25. ( points) Fix positive integers m, n and consider the vector space V of all m n matrices with entries in the real numbers R. (a) Find the dimension of V and prove your

More information

Final Review Sheet. B = (1, 1 + 3x, 1 + x 2 ) then 2 + 3x + 6x 2

Final Review Sheet. B = (1, 1 + 3x, 1 + x 2 ) then 2 + 3x + 6x 2 Final Review Sheet The final will cover Sections Chapters 1,2,3 and 4, as well as sections 5.1-5.4, 6.1-6.2 and 7.1-7.3 from chapters 5,6 and 7. This is essentially all material covered this term. Watch

More information

Solutions to Final Exam

Solutions to Final Exam Solutions to Final Exam. Let A be a 3 5 matrix. Let b be a nonzero 5-vector. Assume that the nullity of A is. (a) What is the rank of A? 3 (b) Are the rows of A linearly independent? (c) Are the columns

More information

PRACTICE PROBLEMS FOR THE FINAL

PRACTICE PROBLEMS FOR THE FINAL PRACTICE PROBLEMS FOR THE FINAL Here are a slew of practice problems for the final culled from old exams:. Let P be the vector space of polynomials of degree at most. Let B = {, (t ), t + t }. (a) Show

More information

Chapter 3 Transformations

Chapter 3 Transformations Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases

More information

The Singular Value Decomposition

The Singular Value Decomposition The Singular Value Decomposition Philippe B. Laval KSU Fall 2015 Philippe B. Laval (KSU) SVD Fall 2015 1 / 13 Review of Key Concepts We review some key definitions and results about matrices that will

More information

Topic 2 Quiz 2. choice C implies B and B implies C. correct-choice C implies B, but B does not imply C

Topic 2 Quiz 2. choice C implies B and B implies C. correct-choice C implies B, but B does not imply C Topic 1 Quiz 1 text A reduced row-echelon form of a 3 by 4 matrix can have how many leading one s? choice must have 3 choice may have 1, 2, or 3 correct-choice may have 0, 1, 2, or 3 choice may have 0,

More information

Problem Set (T) If A is an m n matrix, B is an n p matrix and D is a p s matrix, then show

Problem Set (T) If A is an m n matrix, B is an n p matrix and D is a p s matrix, then show MTH 0: Linear Algebra Department of Mathematics and Statistics Indian Institute of Technology - Kanpur Problem Set Problems marked (T) are for discussions in Tutorial sessions (T) If A is an m n matrix,

More information

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88 Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant

More information

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian FE661 - Statistical Methods for Financial Engineering 2. Linear algebra Jitkomut Songsiri matrices and vectors linear equations range and nullspace of matrices function of vectors, gradient and Hessian

More information

Math Linear Algebra II. 1. Inner Products and Norms

Math Linear Algebra II. 1. Inner Products and Norms Math 342 - Linear Algebra II Notes 1. Inner Products and Norms One knows from a basic introduction to vectors in R n Math 254 at OSU) that the length of a vector x = x 1 x 2... x n ) T R n, denoted x,

More information

1. Select the unique answer (choice) for each problem. Write only the answer.

1. Select the unique answer (choice) for each problem. Write only the answer. MATH 5 Practice Problem Set Spring 7. Select the unique answer (choice) for each problem. Write only the answer. () Determine all the values of a for which the system has infinitely many solutions: x +

More information

Quantum Computing Lecture 2. Review of Linear Algebra

Quantum Computing Lecture 2. Review of Linear Algebra Quantum Computing Lecture 2 Review of Linear Algebra Maris Ozols Linear algebra States of a quantum system form a vector space and their transformations are described by linear operators Vector spaces

More information

Math113: Linear Algebra. Beifang Chen

Math113: Linear Algebra. Beifang Chen Math3: Linear Algebra Beifang Chen Spring 26 Contents Systems of Linear Equations 3 Systems of Linear Equations 3 Linear Systems 3 2 Geometric Interpretation 3 3 Matrices of Linear Systems 4 4 Elementary

More information

GQE ALGEBRA PROBLEMS

GQE ALGEBRA PROBLEMS GQE ALGEBRA PROBLEMS JAKOB STREIPEL Contents. Eigenthings 2. Norms, Inner Products, Orthogonality, and Such 6 3. Determinants, Inverses, and Linear (In)dependence 4. (Invariant) Subspaces 3 Throughout

More information

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET This is a (not quite comprehensive) list of definitions and theorems given in Math 1553. Pay particular attention to the ones in red. Study Tip For each

More information

Solving a system by back-substitution, checking consistency of a system (no rows of the form

Solving a system by back-substitution, checking consistency of a system (no rows of the form MATH 520 LEARNING OBJECTIVES SPRING 2017 BROWN UNIVERSITY SAMUEL S. WATSON Week 1 (23 Jan through 27 Jan) Definition of a system of linear equations, definition of a solution of a linear system, elementary

More information

ANSWERS (5 points) Let A be a 2 2 matrix such that A =. Compute A. 2

ANSWERS (5 points) Let A be a 2 2 matrix such that A =. Compute A. 2 MATH 7- Final Exam Sample Problems Spring 7 ANSWERS ) ) ). 5 points) Let A be a matrix such that A =. Compute A. ) A = A ) = ) = ). 5 points) State ) the definition of norm, ) the Cauchy-Schwartz inequality

More information

MTH 464: Computational Linear Algebra

MTH 464: Computational Linear Algebra MTH 464: Computational Linear Algebra Lecture Outlines Exam 2 Material Prof. M. Beauregard Department of Mathematics & Statistics Stephen F. Austin State University March 2, 2018 Linear Algebra (MTH 464)

More information

WI1403-LR Linear Algebra. Delft University of Technology

WI1403-LR Linear Algebra. Delft University of Technology WI1403-LR Linear Algebra Delft University of Technology Year 2013 2014 Michele Facchinelli Version 10 Last modified on February 1, 2017 Preface This summary was written for the course WI1403-LR Linear

More information

MATH 304 Linear Algebra Lecture 34: Review for Test 2.

MATH 304 Linear Algebra Lecture 34: Review for Test 2. MATH 304 Linear Algebra Lecture 34: Review for Test 2. Topics for Test 2 Linear transformations (Leon 4.1 4.3) Matrix transformations Matrix of a linear mapping Similar matrices Orthogonality (Leon 5.1

More information

(b) If a multiple of one row of A is added to another row to produce B then det(b) =det(a).

(b) If a multiple of one row of A is added to another row to produce B then det(b) =det(a). .(5pts) Let B = 5 5. Compute det(b). (a) (b) (c) 6 (d) (e) 6.(5pts) Determine which statement is not always true for n n matrices A and B. (a) If two rows of A are interchanged to produce B, then det(b)

More information

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA Kent State University Department of Mathematical Sciences Compiled and Maintained by Donald L. White Version: August 29, 2017 CONTENTS LINEAR ALGEBRA AND

More information

Recall : Eigenvalues and Eigenvectors

Recall : Eigenvalues and Eigenvectors Recall : Eigenvalues and Eigenvectors Let A be an n n matrix. If a nonzero vector x in R n satisfies Ax λx for a scalar λ, then : The scalar λ is called an eigenvalue of A. The vector x is called an eigenvector

More information

DIAGONALIZATION. In order to see the implications of this definition, let us consider the following example Example 1. Consider the matrix

DIAGONALIZATION. In order to see the implications of this definition, let us consider the following example Example 1. Consider the matrix DIAGONALIZATION Definition We say that a matrix A of size n n is diagonalizable if there is a basis of R n consisting of eigenvectors of A ie if there are n linearly independent vectors v v n such that

More information

University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm

University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm Name: The proctor will let you read the following conditions

More information

Numerical Linear Algebra

Numerical Linear Algebra University of Alabama at Birmingham Department of Mathematics Numerical Linear Algebra Lecture Notes for MA 660 (1997 2014) Dr Nikolai Chernov April 2014 Chapter 0 Review of Linear Algebra 0.1 Matrices

More information

Third Midterm Exam Name: Practice Problems November 11, Find a basis for the subspace spanned by the following vectors.

Third Midterm Exam Name: Practice Problems November 11, Find a basis for the subspace spanned by the following vectors. Math 7 Treibergs Third Midterm Exam Name: Practice Problems November, Find a basis for the subspace spanned by the following vectors,,, We put the vectors in as columns Then row reduce and choose the pivot

More information

1. Linear systems of equations. Chapters 7-8: Linear Algebra. Solution(s) of a linear system of equations (continued)

1. Linear systems of equations. Chapters 7-8: Linear Algebra. Solution(s) of a linear system of equations (continued) 1 A linear system of equations of the form Sections 75, 78 & 81 a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2 a m1 x 1 + a m2 x 2 + + a mn x n = b m can be written in matrix

More information

(a) II and III (b) I (c) I and III (d) I and II and III (e) None are true.

(a) II and III (b) I (c) I and III (d) I and II and III (e) None are true. 1 Which of the following statements is always true? I The null space of an m n matrix is a subspace of R m II If the set B = {v 1,, v n } spans a vector space V and dimv = n, then B is a basis for V III

More information

Linear Algebra. Workbook

Linear Algebra. Workbook Linear Algebra Workbook Paul Yiu Department of Mathematics Florida Atlantic University Last Update: November 21 Student: Fall 2011 Checklist Name: A B C D E F F G H I J 1 2 3 4 5 6 7 8 9 10 xxx xxx xxx

More information

Math 250B Final Exam Review Session Spring 2015 SOLUTIONS

Math 250B Final Exam Review Session Spring 2015 SOLUTIONS Math 5B Final Exam Review Session Spring 5 SOLUTIONS Problem Solve x x + y + 54te 3t and y x + 4y + 9e 3t λ SOLUTION: We have det(a λi) if and only if if and 4 λ only if λ 3λ This means that the eigenvalues

More information

Introduction to Linear Algebra, Second Edition, Serge Lange

Introduction to Linear Algebra, Second Edition, Serge Lange Introduction to Linear Algebra, Second Edition, Serge Lange Chapter I: Vectors R n defined. Addition and scalar multiplication in R n. Two geometric interpretations for a vector: point and displacement.

More information

Warm-up. True or false? Baby proof. 2. The system of normal equations for A x = y has solutions iff A x = y has solutions

Warm-up. True or false? Baby proof. 2. The system of normal equations for A x = y has solutions iff A x = y has solutions Warm-up True or false? 1. proj u proj v u = u 2. The system of normal equations for A x = y has solutions iff A x = y has solutions 3. The normal equations are always consistent Baby proof 1. Let A be

More information

REVIEW FOR EXAM II. The exam covers sections , the part of 3.7 on Markov chains, and

REVIEW FOR EXAM II. The exam covers sections , the part of 3.7 on Markov chains, and REVIEW FOR EXAM II The exam covers sections 3.4 3.6, the part of 3.7 on Markov chains, and 4.1 4.3. 1. The LU factorization: An n n matrix A has an LU factorization if A = LU, where L is lower triangular

More information

LINEAR ALGEBRA W W L CHEN

LINEAR ALGEBRA W W L CHEN LINEAR ALGEBRA W W L CHEN c W W L Chen, 1997, 2008. This chapter is available free to all individuals, on the understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied,

More information

NOTES ON LINEAR ALGEBRA. 1. Determinants

NOTES ON LINEAR ALGEBRA. 1. Determinants NOTES ON LINEAR ALGEBRA 1 Determinants In this section, we study determinants of matrices We study their properties, methods of computation and some applications We are familiar with the following formulas

More information