1. General Vector Spaces

1.1. Vector space axioms.

Definition 1.1. Let V be a nonempty set of objects on which the operations of addition and scalar multiplication are defined. By addition we mean a rule assigning to each pair of vectors u, v ∈ V a unique vector u + v. By scalar multiplication we mean a rule associating to each scalar k and each u ∈ V a unique vector ku. The set V together with these operations is called a vector space, provided the following properties hold for all u, v, w ∈ V and all scalars k, l in some field K:

(1) If u, v ∈ V, then u + v ∈ V. We say that V is closed under addition.
(2) u + v = v + u.
(3) (u + v) + w = u + (v + w).
(4) V contains an object 0, called the zero vector, which satisfies u + 0 = u for every vector u ∈ V.
(5) For each u ∈ V there exists an object −u such that u + (−u) = 0.
(6) If u ∈ V and k ∈ K, then ku ∈ V. We say V is closed under scalar multiplication.
(7) k(u + v) = ku + kv.
(8) (k + l)u = ku + lu.
(9) k(lu) = (kl)u.
(10) 1u = u, where 1 is the multiplicative identity in K.

Remark 1.2. The most important vector spaces are real vector spaces (for which K = R in the preceding definition) and complex vector spaces (where K is the complex numbers C).

1.2. Subspaces, linear independence, span, basis.

Definition 1.3. A nonempty subset W of a vector space V is called a subspace if W is closed under addition and scalar multiplication.

Definition 1.4. A set M = {v_1, ..., v_s} of vectors in V is called linearly independent, provided the only scalars c_1, ..., c_s which solve the equation c_1 v_1 + c_2 v_2 + ... + c_s v_s = 0 are c_1 = c_2 = ... = c_s = 0. If M is not linearly independent, then it is called linearly dependent.

Definition 1.5. The span of a set of vectors M = {v_1, ..., v_s} is the set of all possible linear combinations of the members of M.

Definition 1.6. A set of vectors in a subspace W of V is said to be a basis for W if it is linearly independent and its span is W.

Definition 1.7. A vector space V is finite-dimensional if it has a basis with finitely many vectors, and infinite-dimensional otherwise. If V is finite-dimensional, then the dimension of V is the number of vectors in any basis; otherwise the dimension of V is infinite.
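Definition 1.4 gives a concrete test in R^n: a set of vectors is independent exactly when the homogeneous system c_1 v_1 + ... + c_s v_s = 0 has only the trivial solution, which can be decided by row reduction. A minimal sketch in Python (the sample vectors are illustrative, not from the text):

```python
from fractions import Fraction

def rank(rows):
    """Row-reduce a list of row vectors over Q and count the nonzero rows."""
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0  # index of the next pivot row
    for c in range(len(m[0])):
        # find a row at or below r with a nonzero entry in column c
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][c] for x in m[r]]          # normalize the pivot row
        for i in range(len(m)):
            if i != r and m[i][c] != 0:             # clear the rest of column c
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def linearly_independent(vectors):
    # Independent iff the rank equals the number of vectors (Definition 1.4).
    return rank(vectors) == len(vectors)

print(linearly_independent([(1, 0, 0), (1, 1, 0), (1, 1, 1)]))  # True
print(linearly_independent([(1, 2, 3), (2, 4, 6)]))             # False: second = 2 * first
```

Exact rational arithmetic via Fraction avoids the floating-point pitfalls of deciding whether a pivot is "really" zero.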

Examples.

Illustration 1. The most important examples of finite-dimensional vector spaces are n-dimensional Euclidean space R^n and n-dimensional complex space C^n.

Illustration 2. An important example of an infinite-dimensional vector space is the space of real-valued functions which have n-th order continuous derivatives on all of R, which we denote by C^n(R).

Definition 1.8. Let f_1(x), f_2(x), ..., f_n(x) be elements of C^(n-1)(R). The Wronskian of these functions is the determinant whose first row contains the functions and whose k-th row contains their (k−1)-st derivatives:

    | f_1(x)         f_2(x)         ...  f_n(x)         |
    | f_1'(x)        f_2'(x)        ...  f_n'(x)        |
    | ...                                               |
    | f_1^(n-1)(x)   f_2^(n-1)(x)   ...  f_n^(n-1)(x)   |

Theorem 1.9. (Wronski's test for linear independence) Let f_1(x), f_2(x), ..., f_n(x) be real-valued functions which have (n−1) continuous derivatives on all of R. If the Wronskian of these functions is not identically zero on R, then the functions form a linearly independent set in C^(n-1)(R).

Example. Show that f_1(x) = sin^2 2x, f_2(x) = cos^2 2x, f_3(x) = cos 4x are linearly dependent in C^2(R).

Solution: One approach would be to examine the Wronskian of our functions; a simple computation shows that it is identically 0. (Note, though, that a vanishing Wronskian alone does not prove dependence.) More directly, since cos 4x = cos^2 2x − sin^2 2x, it follows that f_1(x) − f_2(x) + f_3(x) = 0, and our functions are linearly dependent.

2. Linear Transformations

2.1. Definition.

Definition 2.1. Let V, W be real vector spaces. A transformation T : V → W is a linear transformation if, for any pair α, β ∈ R and u, v ∈ V, we have T(αu + βv) = αT(u) + βT(v).

Illustration. Let V = C^1(R) denote the continuously differentiable real-valued functions defined on R, and W = C^0(R) the continuous real-valued functions on R. The derivative operator d/dx : V → W, defined by (d/dx)(f) = df/dx ∈ W for f ∈ V, is linear, since d/dx(αf + βg) = α df/dx + β dg/dx.

Example.
Find the matrix representation A of the linear transformation T : R^2 → R^2, where T rotates each vector x ∈ R^2 with basepoint at the origin clockwise by an angle θ.

Solution: We must find the images T(e_1) and T(e_2) of the standard basis vectors under our transformation. It is easy to check that T(e_1) = T((1, 0)^T) = (cos θ, −sin θ)^T, while T(e_2) = T((0, 1)^T) = (sin θ, cos θ)^T. Hence our matrix is

    A = |  cos θ   sin θ |
        | −sin θ   cos θ |
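The rotation matrix above is easy to sanity-check numerically; a small sketch (the angle π/2 is an arbitrary test value):

```python
import math

def clockwise_rotation(theta):
    """Standard matrix of a clockwise rotation by theta: columns are T(e1), T(e2)."""
    return [[math.cos(theta), math.sin(theta)],
            [-math.cos(math.pi / 2 - theta) if False else -math.sin(theta), math.cos(theta)]]

def apply(A, x):
    """Matrix-vector product Ax."""
    return [sum(A[i][k] * x[k] for k in range(len(x))) for i in range(len(A))]

A = clockwise_rotation(math.pi / 2)
# Rotating e1 = (1, 0) clockwise by 90 degrees should land on (0, -1).
w = apply(A, [1, 0])
print([round(v, 10) for v in w])  # [0.0, -1.0]
```

(The dead branch in the listing above is a typo hazard to avoid; the entry is simply −sin θ. The key point is that the columns of A are exactly the images of the standard basis vectors.)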

Isomorphism.

Definition 2.2. A linear transformation T : V → W is called an isomorphism if it is one-to-one and onto, and we say a vector space V is isomorphic to W if there is an isomorphism between V and W.

Theorem 2.3. Every real n-dimensional vector space is isomorphic to R^n.

Example. If V is an n-dimensional vector space and the transformation T : V → R^n is an isomorphism, show there exists a unique inner product <,> on V such that T(u) · T(v) = <u, v>, where T(u) · T(v) denotes the Euclidean dot product on R^n.

Solution: We show that <u, v> := T(u) · T(v) defines an inner product.
<u, v> = T(u) · T(v) = T(v) · T(u) = <v, u>.
<u + v, w> = T(u + v) · T(w) = (T(u) + T(v)) · T(w) = T(u) · T(w) + T(v) · T(w) = <u, w> + <v, w>.
<ku, v> = T(ku) · T(v) = k T(u) · T(v) = k<u, v>.
Since T is an isomorphism, <v, v> = ||T(v)||^2 = 0 if and only if v = 0. So <u, v> satisfies all the properties of an inner product. Uniqueness of the inner product on V follows from the corresponding property of the Euclidean dot product on R^n.

Kernel and range, one-to-one and onto.

Let T : V → W be a linear transformation. Then:

Definition 2.4. The kernel of T is the set ker(T) := {x ∈ V : T(x) = 0}.

Definition 2.5. The range of T is the set {y ∈ W : there exists x ∈ V such that y = T(x)}.

Definition 2.6. T is onto if its range is all of W, and one-to-one if T maps distinct vectors in V to distinct vectors in W. We say T is an injection if it is one-to-one, and a surjection if it is onto.

Example. Let T : V → W be a linear transformation. Show that T is one-to-one if and only if ker(T) = {0}.

Solution: Suppose first T is one-to-one. Since T is linear, T(0) = 0. Since T is one-to-one, 0 is the only vector mapped to 0, so ker(T) = {0}. Next suppose ker(T) = {0}, and choose x_1, x_2 ∈ V such that x_1 ≠ x_2. Then x_1 − x_2 ≠ 0 is not in the kernel of T, so that T(x_1) − T(x_2) = T(x_1 − x_2) ≠ 0, and T is one-to-one.

3. Matrix Algebra

Theorem 3.1.
Let T : R^n → R^m be a linear transformation, and let {e_1, ..., e_n} denote the standard basis for R^n. Then given any x ∈ R^n, we can express T(x) as a matrix transformation T(x) = Ax, where A is the m × n matrix whose i-th column is T(e_i).

Let us fix some notation. We denote the entry in the i-th row and j-th column of A by the lowercase a_ij.
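Theorem 3.1 is constructive: the matrix of T is assembled column by column from the images T(e_i). A sketch, using a hypothetical linear map T(x_1, x_2) = (x_1 + 2x_2, 3x_1) chosen only for illustration:

```python
def standard_matrix(T, n):
    """Build the matrix of T : R^n -> R^m whose i-th column is T(e_i) (Theorem 3.1)."""
    cols = [T(tuple(1 if k == i else 0 for k in range(n))) for i in range(n)]
    m = len(cols[0])
    return [[cols[j][i] for j in range(n)] for i in range(m)]

def T(x):
    # Hypothetical linear map used only as an example.
    return (x[0] + 2 * x[1], 3 * x[0])

A = standard_matrix(T, 2)
print(A)  # [[1, 2], [3, 0]]

# Check that T(x) = Ax for a sample vector x:
x = (5, 7)
Ax = tuple(sum(A[i][k] * x[k] for k in range(2)) for i in range(2))
print(Ax == T(x))  # True
```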

Fundamental spaces of a matrix.

Definition 3.2. Let A be an m × n matrix.
(1) The row (column) space of A is the subspace spanned by the row (column) vectors of A. These are denoted row(A) and col(A) respectively.
(2) The null space is the solution space of Ax = 0, denoted null(A).

Definition 3.3. The dimension of the row space of a matrix A is called the rank of A, while the dimension of the null space is called the nullity of A.

Definition 3.4. If S is a nonempty subset of R^n, then the orthogonal complement of S, denoted S^⊥, is the set of vectors in R^n which are orthogonal to every vector in S.

Theorem 3.5. If A is an m × n matrix, then the row space of A and the null space of A are orthogonal complements, as are the column space of A and the null space of A^T.

Example. For a given matrix A, show that null(A) and row(A) are orthogonal complements.

Solution: Recall that the null space of A consists of those vectors which solve the equation Ax = 0. It is left as an exercise to show that the null space is spanned by the vectors (7, 6, 3, 0, 5)^T and (−1, 2, −1, 4, 0)^T. Further, row(A) is the same as the span of the row vectors of the reduced echelon form of A (check!), which is given by the matrix

    | 1  0  0   1/4  −7/5 |
    | 0  1  0  −1/2  −6/5 |
    | 0  0  1   1/4  −3/5 |

Hence row(A) is spanned by the three nonzero rows in the reduced matrix. It is easily checked, by computing the dot products pairwise, that any vector in the row space is orthogonal to any vector in the null space.

Example. Prove that the row vectors of an invertible n × n matrix A form a basis for R^n.

Solution: If A is invertible, then the row vectors of A are linearly independent (check!). We know that the row space is a subspace of R^n, and further is spanned by n linearly independent vectors; hence the row space of A is all of R^n. It follows that the row vectors form a basis for R^n.

Dimension theorem.

Theorem 3.6. If A is an m × n matrix, then rank(A) + nullity(A) = n.

Example. Prove that if A is a square matrix for which A and A^2 have the same rank, then null(A) ∩ col(A) = {0}.
Solution: First we show that null(A) = null(A^2). By the dimension theorem, we know that dim(null(A^2)) = n − rank(A^2) = n − rank(A) = dim(null(A)). Since null(A) ⊆ null(A^2) (check!), it follows that null(A) = null(A^2). Suppose now that y ∈ null(A) ∩ col(A). Then there exists x such that y = Ax, and Ay = 0. Since A^2 x = Ay = 0, we have x ∈ null(A^2) = null(A), and therefore y = Ax = 0.
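The dimension theorem can be checked mechanically: row-reduce, count pivot columns (the rank), and build one null-space basis vector per free column (the nullity). A sketch with an illustrative matrix:

```python
from fractions import Fraction

def rref(rows):
    """Return (reduced row echelon form, list of pivot columns) over Q."""
    m = [[Fraction(x) for x in r] for r in rows]
    pivots, r = [], 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    return m, pivots

def null_space_basis(rows):
    """One basis vector per free column, read off from the RREF."""
    R, pivots = rref(rows)
    n = len(rows[0])
    basis = []
    for f in (c for c in range(n) if c not in pivots):
        v = [Fraction(0)] * n
        v[f] = Fraction(1)
        for i, p in enumerate(pivots):
            v[p] = -R[i][f]
        basis.append(v)
    return basis

A = [[1, 2, 3], [2, 4, 6], [1, 0, 1]]   # illustrative matrix of rank 2
_, pivots = rref(A)
nullity = len(null_space_basis(A))
print(len(pivots), nullity, len(pivots) + nullity)  # 2 1 3, confirming rank + nullity = n
```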

Rank theorem for matrices.

Theorem 3.7. The row space and column space of a matrix have the same dimension.

The rank theorem has several immediate implications.

Proposition 3.8. Suppose A is an m × n matrix. Then:
rank(A) = rank(A^T).
rank(A) + nullity(A^T) = m.

Example. Prove the latter proposition.

Solution: To prove the first claim, note that rank(A) = dim(row(A)) = dim(col(A^T)), the latter equality following since the rows of A are the columns of A^T. By the rank theorem, dim(col(A^T)) = dim(row(A^T)), and the result follows. To prove the second claim, first recall that the dimension theorem applied to A^T reads rank(A^T) + nullity(A^T) = m. Now apply part one of the proposition, i.e. rank(A) = rank(A^T), and the result follows.

Matrix multiplication.

Definition 3.9. Suppose A is an m × n matrix, and B is an n × k matrix. Then we define their product AB such that the entry in the i-th row and k-th column of AB is Σ_{j=1}^n a_ij b_jk.

Illustration. Suppose we represent v ∈ R^n as a column vector, i.e. v = (v_1, v_2, ..., v_n)^T with respect to the standard basis. Further, let A = (a_ij) be an n × n matrix. Then the vector Av obtained by multiplying v by A has components (Av)_i = Σ_{k=1}^n a_ik v_k.

Change of basis.

Definition 3.10. Suppose B = {v_1, ..., v_k} is an ordered basis for a subspace W of R^n, and w = a_1 v_1 + ... + a_k v_k is an expression for w ∈ W in terms of B. Then we call the scalars a_1, ..., a_k the coordinates of w with respect to B. Further, the k-tuple [w]_B := (a_1, ..., a_k)^T is referred to as the coordinate matrix of w with respect to B.

Theorem 3.11. Suppose B and B' = {v'_1, ..., v'_n} are two bases for R^n, and w ∈ R^n. Then the relation between [w]_B and [w]_B' is given by [w]_B = P_{B'→B} [w]_B', where P_{B'→B} := ([v'_1]_B [v'_2]_B ... [v'_n]_B) is the matrix whose columns are the coordinate matrices of the members of B' with respect to B.

Example.
Let S denote the standard basis for R^3, and let B = {v_1, v_2, v_3} be the basis with members v_1 = (1, 2, 1), v_2 = (2, 5, 0), and v_3 = (3, 3, 8). Find the transition matrices P_{B→S} and P_{S→B}.

Solution. By our theorem, we know that P_{B→S} = ([v_1]_S [v_2]_S [v_3]_S), the matrix whose columns are v_1, v_2, v_3:

    P_{B→S} = | 1  2  3 |
              | 2  5  3 |
              | 1  0  8 |

We can immediately find P_{S→B} by noting that it must be the inverse of P_{B→S} (why?).
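Continuing the example, P_{S→B} can be computed by Gauss-Jordan elimination on the augmented matrix [P | I]; a sketch in exact rational arithmetic:

```python
from fractions import Fraction

def inverse(A):
    """Invert a square matrix by Gauss-Jordan elimination on [A | I].
    Assumes A is invertible (a pivot is always found)."""
    n = len(A)
    M = [[Fraction(A[i][j]) for j in range(n)] +
         [Fraction(1 if i == j else 0) for j in range(n)] for i in range(n)]
    for c in range(n):
        piv = next(i for i in range(c, n) if M[i][c] != 0)
        M[c], M[piv] = M[piv], M[c]
        M[c] = [x / M[c][c] for x in M[c]]
        for i in range(n):
            if i != c and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[c])]
    return [row[n:] for row in M]

# Columns of P are v1 = (1, 2, 1), v2 = (2, 5, 0), v3 = (3, 3, 8):
P = [[1, 2, 3],
     [2, 5, 3],
     [1, 0, 8]]
Q = inverse(P)  # this is the transition matrix P_{S->B}
print([[int(x) for x in row] for row in Q])
# [[-40, 16, 9], [13, -5, -3], [5, -2, -1]]
```

Here det(P) = −1, so the inverse happens to have integer entries; multiplying P by Q gives the identity, confirming the computation.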

Similarity and Diagonalizability.

Definition. If A and C are square matrices of the same size, we say that C is similar to A if there is an invertible matrix P such that C = P^{-1} A P.

Theorem. (Properties of similar matrices)
(1) Two square matrices are similar if and only if there exist bases with respect to which the matrices represent the same linear operator.
(2) Similar matrices have the same eigenvalues, determinant, rank, nullity, and trace.

Definition. A square matrix A is diagonalizable if there exists an invertible matrix P for which P^{-1} A P is a diagonal matrix.

Theorem. If A is an n × n matrix, then the following are equivalent.
A is diagonalizable.
A has n linearly independent eigenvectors.
R^n has a basis consisting of eigenvectors of A.

Example. Determine whether the matrix A is diagonalizable. If so, find a matrix P that diagonalizes A.

Solution: You can check that the characteristic polynomial of A is p(λ) = (λ − 1)(λ − 2)(λ − 3), so that A has three distinct eigenvalues. Since eigenvectors corresponding to distinct eigenvalues are linearly independent (check!), A has 3 linearly independent eigenvectors and we know A is diagonalizable. To determine P, we must find eigenvectors corresponding to the eigenvalues λ = 1, 2, 3. The reader can check that these eigenvectors are v_1 = (1, 1, 1)^T, v_2 = (2, 3, 3)^T, and v_3 = (1, 3, 4)^T. Hence one choice of the matrix P is given by P = (v_1 v_2 v_3), the matrix whose columns are v_1, v_2, v_3.

Orthogonal diagonalizability.

Definition. A square matrix A is orthogonally diagonalizable if there exists an orthogonal matrix P for which P^T A P is a diagonal matrix.

Theorem. A matrix is orthogonally diagonalizable if and only if it is symmetric.

Example. Prove that if A is a symmetric matrix, then eigenvectors from different eigenspaces are orthogonal.

Solution. Let v_1 and v_2 be eigenvectors corresponding to distinct eigenvalues λ_1, λ_2. Consider λ_1 v_1 · v_2 = (λ_1 v_1)^T v_2 = (A v_1)^T v_2 = v_1^T A^T v_2.
Since A is symmetric, v_1^T A^T v_2 = v_1^T A v_2 = v_1^T λ_2 v_2 = λ_2 v_1 · v_2. This implies (λ_1 − λ_2) v_1 · v_2 = 0, which in turn tells us v_1 · v_2 = 0.

Quadratic forms.

Definition. Let A be a real n × n matrix, and x ∈ R^n. Then the real-valued function x^T A x is called a quadratic form. For example, if

    A = | a_11  a_12 |
        | a_21  a_22 |

then the quadratic form associated with A is a_11 x_1^2 + a_22 x_2^2 + (a_12 + a_21) x_1 x_2.
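The expansion at the end of the definition can be verified directly; a sketch with an illustrative 2 × 2 matrix and vector (not from the text):

```python
def quadratic_form(A, x):
    """Compute x^T A x directly from the definition."""
    n = len(x)
    return sum(x[i] * A[i][j] * x[j] for i in range(n) for j in range(n))

A = [[1, 4],
     [2, 3]]
x = [5, 7]
lhs = quadratic_form(A, x)
# Expanded form for the 2x2 case: a11*x1^2 + a22*x2^2 + (a12 + a21)*x1*x2
rhs = 1 * 5**2 + 3 * 7**2 + (4 + 2) * 5 * 7
print(lhs, rhs, lhs == rhs)  # 382 382 True
```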

Theorem. (Principal Axes Theorem) If A is a symmetric n × n matrix, then there is an orthogonal change of variable x = Py that transforms the quadratic form x^T A x into a quadratic form y^T D y with no cross product terms. Specifically, if P orthogonally diagonalizes A, then x^T A x = y^T D y = λ_1 y_1^2 + ... + λ_n y_n^2, where λ_1, ..., λ_n are the eigenvalues of A corresponding to the eigenvectors that form the successive columns of P.

Definition. A quadratic form x^T A x is said to be:
Positive definite if x^T A x > 0 for all x ≠ 0.
Negative definite if x^T A x < 0 for all x ≠ 0.
Indefinite otherwise.

Example. Show that if A is a symmetric matrix, then A is positive definite if and only if all eigenvalues of A are positive.

Solution: From the Principal Axes Theorem, we know that we can find P such that x^T A x = y^T D y = λ_1 y_1^2 + ... + λ_n y_n^2. Since P is invertible, it follows that y ≠ 0 if and only if x ≠ 0, and the values taken by x^T A x for x ≠ 0 are the same as the values taken by y^T D y for y ≠ 0. This means that x^T A x > 0 for all x ≠ 0 if and only if all eigenvalues of A are positive.

Functions of a matrix, matrix exponential.

Definition. Suppose A is an n × n diagonalizable matrix which is diagonalized by P, and λ_1, λ_2, ..., λ_n are the ordered eigenvalues of A. If f is a real-valued function whose Taylor series converges on some interval containing the eigenvalues of A, then f(A) = P diag(f(λ_1), f(λ_2), ..., f(λ_n)) P^{-1}.

Example. For a symmetric 3 × 3 matrix A, compute exp(tA).

Solution. We leave it as an exercise to find the eigenvalues λ_1, λ_2, λ_3; the corresponding orthonormal eigenvectors are v_1 = (0, 1, 0)^T, v_2 = (−4/5, 0, 3/5)^T, and v_3 = (3/5, 0, 4/5)^T. It follows that the matrix P that diagonalizes A is

    P = | 0  −4/5  3/5 |
        | 1   0    0   |
        | 0   3/5  4/5 |

and, since P is orthogonal, A = P diag(λ_1, λ_2, λ_3) P^T. From our theorem it follows that exp(tA) = P exp(t diag(λ_1, λ_2, λ_3)) P^T, and exp(t diag(λ_1, λ_2, λ_3)) = diag(exp(λ_1 t), exp(λ_2 t), exp(λ_3 t)). Carrying out the multiplication yields the entries of exp(tA) explicitly.

Determinants.
Let

    A = | a_11  a_12 |
        | a_21  a_22 |

denote an arbitrary 2 × 2 matrix. Recall that the determinant of A is defined by det(A) := a_11 a_22 − a_12 a_21. More generally, let A be a square n × n matrix, and denote the entry in the i-th row and j-th column by a_ij.

Definition. The determinant of a square n × n matrix is defined by the sum det(A) = Σ ± a_{1 j_1} a_{2 j_2} ··· a_{n j_n}. Here the summation is over all permutations

{j_1, j_2, ..., j_n} of {1, 2, ..., n}, where the sign is + if the permutation is even, and − if the permutation is odd.

Illustration. Suppose that

    A = | a_11  a_12  a_13 |
        | a_21  a_22  a_23 |
        | a_31  a_32  a_33 |

Then: det(A) = a_11 a_22 a_33 + a_12 a_23 a_31 + a_13 a_21 a_32 − a_13 a_22 a_31 − a_12 a_21 a_33 − a_11 a_23 a_32.

Properties of determinants.

Proposition. Suppose A, B are square matrices of the same size. Then:
(1) A is invertible if and only if det(A) ≠ 0.
(2) det(AB) = det(A) det(B).
(3) det(A) = det(A^T).

Example. Show that a square matrix A is invertible if and only if A^T A is invertible.

Solution: Suppose A is invertible. Then from the first item in the above proposition we know det(A) ≠ 0. Further, from items 2 and 3 we see that det(A^T A) = det(A^T) det(A) = det(A)^2 ≠ 0, so that A^T A is invertible. On the other hand, if det(A^T A) ≠ 0 then by the same equality we have det(A) ≠ 0, and A is invertible.

Cramer's rule.

Theorem. If Ax = b is a linear system of n equations in n unknowns, then the system has a unique solution if and only if det(A) ≠ 0. Cramer's rule then says that the exact solution is given by x_1 = det(A_1)/det(A), x_2 = det(A_2)/det(A), ..., x_n = det(A_n)/det(A). Here A_i denotes the matrix which results when the i-th column of A is replaced by the column vector b.

Example. Solve

    | 1  0 | |x_1|   | 0 |
    | 2  1 | |x_2| = | 1 |

using Cramer's rule.

Solution: Here det(A) = 1, so

    x_1 = det | 0  0 | / det(A) = 0,    x_2 = det | 1  0 | / det(A) = 1.
              | 1  1 |                            | 2  1 |

Formula for A^{-1}.

Definition. If A is a square matrix, then the minor of entry a_ij is denoted by M_ij, and is defined to be the determinant of the submatrix that remains when the i-th row and j-th column are deleted. The number C_ij = (−1)^{i+j} M_ij is called the cofactor of entry a_ij.

Definition. If A is a square matrix, the matrix

    C = | C_11  C_12  ...  C_1n |
        | C_21  C_22  ...  C_2n |
        | ...                   |
        | C_n1  C_n2  ...  C_nn |

is called the matrix of cofactors. The adjoint of A is the transpose of C, which we denote by adj(A).
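The permutation definition of the determinant can be implemented verbatim; a sketch (the 3 × 3 matrix is illustrative):

```python
from itertools import permutations
from math import prod

def sign(p):
    """Sign of a permutation: +1 if even, -1 if odd, counted by inversions."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def det(A):
    """Determinant as the signed sum over all permutations (the Definition above)."""
    n = len(A)
    return sum(sign(p) * prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

A = [[2, 1, 3],
     [0, 4, 1],
     [5, 2, 0]]
print(det(A))  # -59, matching cofactor expansion along the first row
```

This is O(n!) and only practical for tiny matrices, but it makes the even/odd sign rule concrete.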

Theorem. If A is invertible, then its inverse is given by A^{-1} = (1/det(A)) adj(A) = (1/det(A)) C^T.

Example. Find the inverse of a 3 × 3 matrix A using the above theorem.

Solution: First we compute the determinant, expanding along the first row: det(A) = a_11 C_11 + a_12 C_12 + a_13 C_13. We can similarly obtain the remaining cofactors C_ij, which determine the adjoint, and finally the inverse A^{-1} = (1/det(A)) adj(A).

Geometric interpretation of the determinant.

Theorem. If A is a 2 × 2 matrix, then |det(A)| represents the area of the parallelogram determined by the two column vectors of A, when they are positioned so that their base points coincide. If A is a 3 × 3 matrix, then |det(A)| represents the volume of the parallelepiped determined by the three column vectors of A, when they are positioned so that their base points coincide.

Example. Find the area of the parallelogram in the plane with vertices P_1(1, 2), P_2(4, 4), P_3(7, 5), P_4(4, 3).

Solution: Consider the vectors P_1P_2 and P_1P_4, which starting from P_1 extend to P_2 and P_4 respectively. A simple calculation shows P_1P_2 = (3, 2)^T and P_1P_4 = (3, 1)^T. Placing these vectors as the columns of the matrix

    A = | 3  3 |
        | 2  1 |

by our theorem we know that the area of our parallelogram is |det(A)| = |3 − 6| = 3.

Cross product.

Definition. Let u = (u_1, u_2, u_3)^T, v = (v_1, v_2, v_3)^T. The cross product of u with v, denoted u × v, is the vector

    u × v := ( det | u_2  u_3 | ,  −det | u_1  u_3 | ,  det | u_1  u_2 | )^T
                   | v_2  v_3 |         | v_1  v_3 |        | v_1  v_2 |

Example. For u = (1, 0, 2)^T, v = (−3, 1, 0)^T, compute u × v.

Solution: By the definition, u × v = (0·0 − 2·1, −(1·0 − 2·(−3)), 1·1 − 0·(−3))^T = (−2, −6, 1)^T.
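The computation above can be reproduced directly from the definition; the sketch also checks the standard fact that u × v is orthogonal to both of its factors:

```python
def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]]."""
    return a * d - b * c

def cross(u, v):
    """Cross product via the three 2x2 determinants in the definition."""
    return (det2(u[1], u[2], v[1], v[2]),
            -det2(u[0], u[2], v[0], v[2]),
            det2(u[0], u[1], v[0], v[1]))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

u, v = (1, 0, 2), (-3, 1, 0)
w = cross(u, v)
print(w)                     # (-2, -6, 1)
print(dot(w, u), dot(w, v))  # 0 0 -- u x v is orthogonal to both u and v
```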

4. Eigenvalues and eigenvectors

4.1. Eigenvalues of mappings between linear spaces.

Definition 4.1. Suppose V is a real vector space, and T : V → V is a linear map. Then we say λ ∈ R is an eigenvalue of T provided there exists a nonzero vector x ∈ V such that (T − λI)x = 0.

Example. Suppose V is a real vector space, and let I be the identity operator on V. Find the eigenvalues and eigenspaces of I.

Solution: Since Ix = x for all x ∈ V, it follows that 1 is the only eigenvalue, and the eigenspace corresponding to 1 is all of V.

4.2. Real and complex eigenvalues for maps between finite-dimensional spaces.

Definition 4.2. If A is an n × n matrix, then a scalar λ is called an eigenvalue of A if there exists a nonzero vector x such that Ax = λx. If λ is an eigenvalue of A, then every nonzero vector x such that Ax = λx is called an eigenvector of A.

Example. Find all eigenvalues of a 3 × 3 matrix A, and the corresponding eigenvectors.

Solution: We note that λ is an eigenvalue provided the equation (A − λI_d)x = 0 has a solution for some nonzero x, where I_d denotes the identity matrix. This is only possible if λ solves the characteristic equation det(A − λI_d) = 0. For the matrix A in our example, the characteristic equation reads (check!) λ^3 − 6λ^2 + 11λ − 6 = (λ − 1)(λ − 2)(λ − 3) = 0, which has solutions λ = 1, 2, 3. Next, to determine the eigenvectors corresponding to λ = 1, we solve the system (A − I_d)x = 0 for nonzero x = (x_1, x_2, x_3)^T. Using your favourite solution method, you can easily determine that one eigenvector is (0, 1, 0)^T. Similarly, we find an eigenvector corresponding to λ = 2 is (−1, 2, 2)^T, and for λ = 3 the eigenvector is (−1, 1, 1)^T. Finally, it is important to note that any nonzero scalar multiple of one of these eigenvectors is also an eigenvector, so we have actually determined a subspace of eigenvectors corresponding to each eigenvalue (referred to as the eigenspace of λ).

Definition 4.3.
If n is a positive integer, then a complex n-tuple is a sequence of n complex numbers (v_1, ..., v_n). The set of all complex n-tuples is called complex n-space and is denoted by C^n.

Definition 4.4. If u = (u_1, u_2, ..., u_n) and v = (v_1, v_2, ..., v_n) are vectors in C^n, then the complex Euclidean dot (inner) product of u and v is defined by u · v := u_1 conj(v_1) + u_2 conj(v_2) + ... + u_n conj(v_n), where conj denotes complex conjugation. The Euclidean norm is ||v|| := sqrt(v · v).

Definition 4.5. A complex matrix A is a matrix whose entries are complex numbers. Further, we define the complex conjugate of a matrix A, denoted Ā, to be the matrix whose entries are the complex conjugates of the entries of A. That is, if A has entries a_ij, then Ā has entries conj(a_ij).
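The complex dot product of Definition 4.4 differs from the real one only by the conjugation on the second factor, which is exactly what makes v · v real and nonnegative. A short sketch (the sample vectors are illustrative):

```python
def cdot(u, v):
    """Complex Euclidean inner product: sum of u_i * conj(v_i)."""
    return sum(a * b.conjugate() for a, b in zip(u, v))

u = [1 + 2j, 3j]
v = [2 - 1j, 1 + 1j]
print(cdot(u, v))  # (3+8j)
# v . v is always real and nonnegative: here |1+2j|^2 + |3j|^2 = 5 + 9 = 14.
print(cdot(u, u))  # (14+0j)
```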

Definition 4.6. If A is a complex n × n matrix, then the complex roots λ of the characteristic equation det(A − λI) = 0 are called complex eigenvalues of A. Further, complex nonzero solutions x of (A − λI)x = 0 are referred to as the complex eigenvectors corresponding to λ.

Example. Given

    A = | 4  −5 |
        | 1   0 |

determine the eigenvalues and find bases for the corresponding eigenspaces.

Solution: It is left as an exercise to check that the characteristic equation is λ^2 − 4λ + 5 = 0, so the eigenvalues are λ = 2 ± i. Let us determine the eigenspace corresponding to λ = 2 + i. We must solve

    | 2 − i   −5     | |x|   | 0 |
    | 1       −2 − i | |y| = | 0 |

Since we know this system must have a nonzero solution, the two rows must be dependent, so we need only solve the second equation, x − (2 + i)y = 0. Taking y = 1 gives the eigenvector (x, y) = (2 + i, 1), which spans the eigenspace of λ = 2 + i. It is a good exercise for the reader to check that for a complex eigenvalue λ with corresponding eigenvector x, it is always true that conj(λ) is another eigenvalue with corresponding eigenvector conj(x). Hence (2 − i, 1) is a basis for the eigenspace corresponding to λ = 2 − i.

Generalized Eigenspaces.

Definition 4.7. Let A be a complex n × n matrix, with distinct eigenvalues {λ_1, λ_2, ..., λ_k}. The generalized eigenspace V_{λ_i} pertaining to λ_i is defined by V_{λ_i} = {x ∈ C^n : (A − λ_i I)^n x = 0}. In particular, all eigenvectors corresponding to λ_i are in V_{λ_i}.

Theorem 4.8. Let A be a complex n × n matrix with distinct eigenvalues {λ_1, λ_2, ..., λ_k} and corresponding generalized eigenspaces V_{λ_i}, i = 1, ..., k. Then:
(1) V_{λ_i} is invariant under A, in the sense that A V_{λ_i} ⊆ V_{λ_i} for i = 1, ..., k.
(2) The spaces V_{λ_i} are mutually linearly independent.
(3) dim V_{λ_i} = m(λ_i), where m(λ_i) is the algebraic multiplicity of the eigenvalue λ_i.
(4) A is similar to a block diagonal matrix with k blocks A_1, ..., A_k.

Jordan Normal Form.

Definition 4.9. Let λ ∈ C.
A Jordan block J_k(λ) is a k × k upper-triangular matrix of the form

    J_k(λ) = | λ  1            |
             |    λ  1         |
             |       .  .      |
             |          λ  1   |
             |             λ   |

with λ on the diagonal, 1 on the superdiagonal, and 0 elsewhere.

Definition 4.10. A Jordan matrix is any matrix of the form

    J = | J_{n_1}(λ_1)               |
        |              ...           |
        |               J_{n_k}(λ_k) |

where each J_{n_i}(λ_i) is a Jordan block, and n_1 + n_2 + ... + n_k = n.
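The two definitions above translate directly into code; a sketch that assembles a Jordan matrix from (eigenvalue, block size) pairs:

```python
def jordan_block(lam, k):
    """k x k Jordan block: lam on the diagonal, 1 on the superdiagonal."""
    return [[lam if i == j else 1 if j == i + 1 else 0 for j in range(k)]
            for i in range(k)]

def jordan_matrix(blocks):
    """Block-diagonal Jordan matrix assembled from (eigenvalue, size) pairs."""
    n = sum(k for _, k in blocks)
    J = [[0] * n for _ in range(n)]
    off = 0
    for lam, k in blocks:
        B = jordan_block(lam, k)
        for i in range(k):
            for j in range(k):
                J[off + i][off + j] = B[i][j]
        off += k
    return J

print(jordan_block(5, 3))
# [[5, 1, 0], [0, 5, 1], [0, 0, 5]]
J = jordan_matrix([(2, 2), (3, 1)])
print(J)  # [[2, 1, 0], [0, 2, 0], [0, 0, 3]]
```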

Theorem 4.11. Given any complex n × n matrix A, there is an invertible matrix S such that A = S J S^{-1}, where

    J = | J_{n_1}(λ_1)               |
        |              ...           |
        |               J_{n_k}(λ_k) |

each J_{n_i}(λ_i) is a Jordan block, and n_1 + n_2 + ... + n_k = n. The eigenvalues λ_i are not necessarily distinct, though if A is real with real eigenvalues, then S can be taken to be real.

5. Inner product spaces

5.1. Inner product.

Definition 5.1. An inner product on a real vector space V is a function that associates a unique real number <u, v> to each pair of vectors u, v ∈ V, in such a way that the following properties hold for all u, v, w ∈ V and scalars k:
(1) <v, v> ≥ 0, and <v, v> = 0 if and only if v = 0.
(2) <u, v> = <v, u>.
(3) <u + v, w> = <u, w> + <v, w>.
(4) <ku, v> = k<u, v>.
A real vector space equipped with an inner product is called a real inner product space.

Illustration. The most familiar example of an inner product space is R^n, equipped with the Euclidean dot product as inner product. That is, for v, w ∈ R^n, we define the dot product v · w := Σ_{i=1}^n v_i w_i.

Example. Let V = C([0, 2π]), the continuous real-valued functions defined on the closed interval [0, 2π]. We make V into an inner product space by defining an inner product <f, g> := ∫_0^{2π} f(x) g(x) dx for any two functions f, g ∈ V. Suppose p and q are distinct nonzero integers. Show that f(x) = sin qx and g(x) = cos px are orthogonal with respect to this inner product.

Solution: Using the identity cos px sin qx = (1/2)[sin((p + q)x) − sin((p − q)x)], we see that <f, g> = ∫_0^{2π} cos px sin qx dx = (1/2) ∫_0^{2π} [sin((p + q)x) − sin((p − q)x)] dx = 0, as required.

5.2. Norms, Cauchy-Schwarz inequality.

Definition 5.2. If V is an inner product space, then we define the norm of v ∈ V by ||v|| = sqrt(<v, v>), and the distance between u and v by d(u, v) = ||u − v||.

Theorem 5.3. (Pythagoras) If u, v ∈ V are orthogonal with respect to the inner product, then ||u + v||^2 = ||u||^2 + ||v||^2.

Theorem 5.4. (Cauchy-Schwarz Inequality) If u, v are vectors in an inner product space V, then |<u, v>| ≤ ||u|| ||v||.

Theorem 5.5.
(Triangle Inequality) If u, v are vectors in an inner product space, then ||u + v|| ≤ ||u|| + ||v||.
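The orthogonality computation in the C([0, 2π]) example above can be confirmed numerically; a sketch using a simple midpoint-rule approximation of the integral inner product (the tolerance, grid size, and sample p, q are arbitrary choices):

```python
import math

def inner(f, g, n=20000):
    """Approximate <f, g> = integral over [0, 2*pi] of f(x) g(x) dx by the midpoint rule."""
    h = 2 * math.pi / n
    return sum(f((i + 0.5) * h) * g((i + 0.5) * h) for i in range(n)) * h

p, q = 3, 5  # distinct nonzero integers, as in the example
val = inner(lambda x: math.cos(p * x), lambda x: math.sin(q * x))
print(abs(val) < 1e-8)  # True: cos(px) and sin(qx) are orthogonal
# Sanity check of the rule itself: <sin, sin> over a full period is pi.
print(abs(inner(math.sin, math.sin) - math.pi) < 1e-6)  # True
```

For smooth periodic integrands over a full period, the midpoint rule converges extremely fast, so a modest grid already reproduces the exact value 0 to high accuracy.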

Orthogonality, orthonormal bases.

Definition 5.6. A pair of vectors v, w in an inner product space V are called orthogonal if <v, w> = 0. A set W of vectors in an inner product space is called orthogonal if each pair of vectors in it is orthogonal. The set W is orthonormal if it is orthogonal and each vector has unit length. Finally, a basis B which is orthonormal is called an orthonormal basis.

Theorem 5.7. Properties of orthonormal bases:
(1) If {v_1, ..., v_k} is an orthonormal basis for a subspace W of V, and if w ∈ W, then we may express w = (w · v_1)v_1 + (w · v_2)v_2 + ... + (w · v_k)v_k.
(2) Every nonzero subspace of a finite-dimensional inner product space V possesses an orthonormal basis (Gram-Schmidt).

Example. Confirm that the set v_1 = (2/3, 1/3, 2/3), v_2 = (1/3, 2/3, −2/3), v_3 = (2/3, −2/3, −1/3) is an orthonormal basis for R^3 equipped with the Euclidean inner product.

Solution: We leave it to you to check that v_1, v_2, v_3 are pairwise orthogonal, by computing the dot products. Further, each of these vectors has norm 1, so the set is orthonormal. Finally, an orthogonal set of nonzero vectors is linearly independent (check!), so our set forms an orthonormal basis.

Hermitian, Unitary, and Normal Matrices.

Definition 5.8. If A is a complex matrix, then the conjugate transpose of A, denoted A*, is defined by A* = Ā^T, where the overbar denotes complex conjugation.

Definition 5.9. A square complex matrix A is said to be unitary if A* = A^{-1}, and hermitian if A* = A.

Theorem 5.10. Suppose A is an n × n unitary complex matrix. Then:
Ax · Ay = x · y for all x, y ∈ C^n.
The column and row vectors of A form orthonormal sets with respect to the complex Euclidean inner product.

Theorem 5.11. Suppose A is a Hermitian matrix. Then:
The eigenvalues of A are real numbers.
Eigenvectors from different eigenspaces are orthogonal.

Example. Show that if A is a unitary matrix, then so is A*.

Solution: Since A is unitary, A^{-1} = A*, and it is left as an exercise to check that (A*)^{-1} = (A^{-1})*.
From the latter it follows that (A*)^{-1} = (A^{-1})* = (A*)*, so A* is unitary, as required.

Example. Show that the determinant of a Hermitian matrix is real.

Solution: First we show that det(A*) = conj(det(A)). By expanding the formula for the determinant, it is readily seen that det(Ā) = conj(det(A)). Using this, and the fact that the determinant of a matrix is the same as that of its transpose, we find det(A*) = det(Ā^T) = det(Ā) = conj(det(A)). Since A is Hermitian, det(A) = det(A*) = conj(det(A)), and det(A) is real.

Definition 5.12. A square complex matrix A is called normal if AA* = A*A (a property you should check is shared by, for example, unitary and hermitian matrices).
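The definitions of Hermitian and normal matrices are easy to test directly; a sketch with an illustrative 2 × 2 Hermitian matrix:

```python
def conj_transpose(A):
    """A* : transpose and conjugate every entry."""
    return [[A[j][i].conjugate() for j in range(len(A))] for i in range(len(A[0]))]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# An illustrative Hermitian matrix: real diagonal, conjugate off-diagonal entries.
H = [[2 + 0j, 1 - 1j],
     [1 + 1j, 3 + 0j]]
print(conj_transpose(H) == H)  # True: H equals its own conjugate transpose
# Every Hermitian matrix is normal: H H* = H* H.
print(matmul(H, conj_transpose(H)) == matmul(conj_transpose(H), H))  # True
```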


Linear Algebra Massoud Malek

CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

Elementary Linear Algebra Review for Exam 2 Exam is Monday, November 16th.

Elementary Linear Algebra Review for Exam Exam is Monday, November 6th. The exam will cover sections:.4,..4, 5. 5., 7., the class notes on Markov Models. You must be able to do each of the following. Section.4

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA Kent State University Department of Mathematical Sciences Compiled and Maintained by Donald L. White Version: August 29, 2017 CONTENTS LINEAR ALGEBRA AND

BASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x

BASIC ALGORITHMS IN LINEAR ALGEBRA STEVEN DALE CUTKOSKY Matrices and Applications of Gaussian Elimination Systems of Equations Suppose that A is an n n matrix with coefficents in a field F, and x = (x,,

Linear Algebra. Workbook

Linear Algebra Workbook Paul Yiu Department of Mathematics Florida Atlantic University Last Update: November 21 Student: Fall 2011 Checklist Name: A B C D E F F G H I J 1 2 3 4 5 6 7 8 9 10 xxx xxx xxx

(a) II and III (b) I (c) I and III (d) I and II and III (e) None are true.

1 Which of the following statements is always true? I The null space of an m n matrix is a subspace of R m II If the set B = {v 1,, v n } spans a vector space V and dimv = n, then B is a basis for V III

Linear Algebra in Actuarial Science: Slides to the lecture

Linear Algebra in Actuarial Science: Slides to the lecture Fall Semester 2010/2011 Linear Algebra is a Tool-Box Linear Equation Systems Discretization of differential equations: solving linear equations

and let s calculate the image of some vectors under the transformation T.

Chapter 5 Eigenvalues and Eigenvectors 5. Eigenvalues and Eigenvectors Let T : R n R n be a linear transformation. Then T can be represented by a matrix (the standard matrix), and we can write T ( v) =

MATH 369 Linear Algebra

Assignment # Problem # A father and his two sons are together 00 years old. The father is twice as old as his older son and 30 years older than his younger son. How old is each person? Problem # 2 Determine

Math 314/ Exam 2 Blue Exam Solutions December 4, 2008 Instructor: Dr. S. Cooper. Name:

Math 34/84 - Exam Blue Exam Solutions December 4, 8 Instructor: Dr. S. Cooper Name: Read each question carefully. Be sure to show all of your work and not just your final conclusion. You may not use your

Lecture 23: 6.1 Inner Products

Lecture 23: 6.1 Inner Products Wei-Ta Chu 2008/12/17 Definition An inner product on a real vector space V is a function that associates a real number u, vwith each pair of vectors u and v in V in such

av 1 x 2 + 4y 2 + xy + 4z 2 = 16.

74 85 Eigenanalysis The subject of eigenanalysis seeks to find a coordinate system, in which the solution to an applied problem has a simple expression Therefore, eigenanalysis might be called the method

Inner products. Theorem (basic properties): Given vectors u, v, w in an inner product space V, and a scalar k, the following properties hold:

Inner products Definition: An inner product on a real vector space V is an operation (function) that assigns to each pair of vectors ( u, v) in V a scalar u, v satisfying the following axioms: 1. u, v

Math 224, Fall 2007 Exam 3 Thursday, December 6, 2007

Math 224, Fall 2007 Exam 3 Thursday, December 6, 2007 You have 1 hour and 20 minutes. No notes, books, or other references. You are permitted to use Maple during this exam, but you must start with a blank

MATH 20F: LINEAR ALGEBRA LECTURE B00 (T. KEMP)

MATH 20F: LINEAR ALGEBRA LECTURE B00 (T KEMP) Definition 01 If T (x) = Ax is a linear transformation from R n to R m then Nul (T ) = {x R n : T (x) = 0} = Nul (A) Ran (T ) = {Ax R m : x R n } = {b R m

Mathematical Methods wk 2: Linear Operators

John Magorrian, magog@thphysoxacuk These are work-in-progress notes for the second-year course on mathematical methods The most up-to-date version is available from http://www-thphysphysicsoxacuk/people/johnmagorrian/mm

Math 110 Linear Algebra Midterm 2 Review October 28, 2017

Math 11 Linear Algebra Midterm Review October 8, 17 Material Material covered on the midterm includes: All lectures from Thursday, Sept. 1st to Tuesday, Oct. 4th Homeworks 9 to 17 Quizzes 5 to 9 Sections

Linear algebra II Homework #1 solutions A = This means that every eigenvector with eigenvalue λ = 1 must have the form

Linear algebra II Homework # solutions. Find the eigenvalues and the eigenvectors of the matrix 4 6 A =. 5 Since tra = 9 and deta = = 8, the characteristic polynomial is f(λ) = λ (tra)λ+deta = λ 9λ+8 =

4. Linear transformations as a vector space 17

4 Linear transformations as a vector space 17 d) 1 2 0 0 1 2 0 0 1 0 0 0 1 2 3 4 32 Let a linear transformation in R 2 be the reflection in the line = x 2 Find its matrix 33 For each linear transformation

(a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? Solution: dim N(A) 1, since rank(a) 3. Ax =

. (5 points) (a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? dim N(A), since rank(a) 3. (b) If we also know that Ax = has no solution, what do we know about the rank of A? C(A)

Practice Problems for the Final Exam

Practice Problems for the Final Exam Linear Algebra. Matrix multiplication: (a) Problem 3 in Midterm One. (b) Problem 2 in Quiz. 2. Solve the linear system: (a) Problem 4 in Midterm One. (b) Problem in

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

MATH 304 Linear Algebra Lecture 23: Diagonalization. Review for Test 2.

MATH 304 Linear Algebra Lecture 23: Diagonalization. Review for Test 2. Diagonalization Let L be a linear operator on a finite-dimensional vector space V. Then the following conditions are equivalent:

Supplementary Notes on Linear Algebra

Supplementary Notes on Linear Algebra Mariusz Wodzicki May 3, 2015 1 Vector spaces 1.1 Coordinatization of a vector space 1.1.1 Given a basis B = {b 1,..., b n } in a vector space V, any vector v V can

ICS 6N Computational Linear Algebra Symmetric Matrices and Orthogonal Diagonalization

ICS 6N Computational Linear Algebra Symmetric Matrices and Orthogonal Diagonalization Xiaohui Xie University of California, Irvine xhx@uci.edu Xiaohui Xie (UCI) ICS 6N 1 / 21 Symmetric matrices An n n

Exam in TMA4110 Calculus 3, June 2013 Solution

Norwegian University of Science and Technology Department of Mathematical Sciences Page of 8 Exam in TMA4 Calculus 3, June 3 Solution Problem Let T : R 3 R 3 be a linear transformation such that T = 4,

A matrix is a rectangular array of. objects arranged in rows and columns. The objects are called the entries. is called the size of the matrix, and

Section 5.5. Matrices and Vectors A matrix is a rectangular array of objects arranged in rows and columns. The objects are called the entries. A matrix with m rows and n columns is called an m n matrix.

Dimension and Structure

96 Chapter 7 Dimension and Structure 7.1 Basis and Dimensions Bases for Subspaces Definition 7.1.1. A set of vectors in a subspace V of R n is said to be a basis for V if it is linearly independent and

1 Determinants. 1.1 Determinant

1 Determinants [SB], Chapter 9, p.188-196. [SB], Chapter 26, p.719-739. Bellow w ll study the central question: which additional conditions must satisfy a quadratic matrix A to be invertible, that is to

Extra Problems for Math 2050 Linear Algebra I

Extra Problems for Math 5 Linear Algebra I Find the vector AB and illustrate with a picture if A = (,) and B = (,4) Find B, given A = (,4) and [ AB = A = (,4) and [ AB = 8 If possible, express x = 7 as

Linear algebra 2. Yoav Zemel. March 1, 2012

Linear algebra 2 Yoav Zemel March 1, 2012 These notes were written by Yoav Zemel. The lecturer, Shmuel Berger, should not be held responsible for any mistake. Any comments are welcome at zamsh7@gmail.com.

I. Multiple Choice Questions (Answer any eight)

Name of the student : Roll No : CS65: Linear Algebra and Random Processes Exam - Course Instructor : Prashanth L.A. Date : Sep-24, 27 Duration : 5 minutes INSTRUCTIONS: The test will be evaluated ONLY

22m:033 Notes: 7.1 Diagonalization of Symmetric Matrices

m:33 Notes: 7. Diagonalization of Symmetric Matrices Dennis Roseman University of Iowa Iowa City, IA http://www.math.uiowa.edu/ roseman May 3, Symmetric matrices Definition. A symmetric matrix is a matrix

Eigenvalues and Eigenvectors

CHAPTER Eigenvalues and Eigenvectors CHAPTER CONTENTS. Eigenvalues and Eigenvectors 9. Diagonalization. Complex Vector Spaces.4 Differential Equations 6. Dynamical Systems and Markov Chains INTRODUCTION

5.) For each of the given sets of vectors, determine whether or not the set spans R 3. Give reasons for your answers.

Linear Algebra - Test File - Spring Test # For problems - consider the following system of equations. x + y - z = x + y + 4z = x + y + 6z =.) Solve the system without using your calculator..) Find the

Chapter 4 & 5: Vector Spaces & Linear Transformations

Chapter 4 & 5: Vector Spaces & Linear Transformations Philip Gressman University of Pennsylvania Philip Gressman Math 240 002 2014C: Chapters 4 & 5 1 / 40 Objective The purpose of Chapter 4 is to think

NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction

NONCOMMUTATIVE POLYNOMIAL EQUATIONS Edward S Letzter Introduction My aim in these notes is twofold: First, to briefly review some linear algebra Second, to provide you with some new tools and techniques

Linear Algebra: Sample Questions for Exam 2

Linear Algebra: Sample Questions for Exam 2 Instructions: This is not a comprehensive review: there are concepts you need to know that are not included. Be sure you study all the sections of the book and

Mathematical Foundations

Chapter 1 Mathematical Foundations 1.1 Big-O Notations In the description of algorithmic complexity, we often have to use the order notations, often in terms of big O and small o. Loosely speaking, for

Ph.D. Katarína Bellová Page 1 Mathematics 2 (10-PHY-BIPMA2) EXAM - Solutions, 20 July 2017, 10:00 12:00 All answers to be justified.

PhD Katarína Bellová Page 1 Mathematics 2 (10-PHY-BIPMA2 EXAM - Solutions, 20 July 2017, 10:00 12:00 All answers to be justified Problem 1 [ points]: For which parameters λ R does the following system

Department of Biostatistics Department of Stat. and OR. Refresher course, Summer Linear Algebra. Instructor: Meilei Jiang (UNC at Chapel Hill)

Department of Biostatistics Department of Stat. and OR Refresher course, Summer 216 Linear Algebra Original Author: Oleg Mayba (UC Berkeley, 26) Modified By: Eric Lock (UNC, 21 & 211) Gen Li (UNC, 212)

4.3 - Linear Combinations and Independence of Vectors

- Linear Combinations and Independence of Vectors De nitions, Theorems, and Examples De nition 1 A vector v in a vector space V is called a linear combination of the vectors u 1, u,,u k in V if v can be

CHAPTER VIII HILBERT SPACES

CHAPTER VIII HILBERT SPACES DEFINITION Let X and Y be two complex vector spaces. A map T : X Y is called a conjugate-linear transformation if it is a reallinear transformation from X into Y, and if T (λx)

EIGENVALUES AND EIGENVECTORS

EIGENVALUES AND EIGENVECTORS Diagonalizable linear transformations and matrices Recall, a matrix, D, is diagonal if it is square and the only non-zero entries are on the diagonal This is equivalent to

Optimization Theory. A Concise Introduction. Jiongmin Yong

October 11, 017 16:5 ws-book9x6 Book Title Optimization Theory 017-08-Lecture Notes page 1 1 Optimization Theory A Concise Introduction Jiongmin Yong Optimization Theory 017-08-Lecture Notes page Optimization

Then since v is an eigenvector of T, we have (T λi)v = 0. Then

Problem F02.10. Let T be a linear operator on a finite dimensional complex inner product space V such that T T = T T. Show that there is an orthonormal basis of V consisting of eigenvectors of B. Solution.

SOLUTIONS: ASSIGNMENT Use Gaussian elimination to find the determinant of the matrix. = det. = det = 1 ( 2) 3 6 = 36. v 4.

SOLUTIONS: ASSIGNMENT 9 66 Use Gaussian elimination to find the determinant of the matrix det 1 1 4 4 1 1 1 1 8 8 = det = det 0 7 9 0 0 0 6 = 1 ( ) 3 6 = 36 = det = det 0 0 6 1 0 0 0 6 61 Consider a 4

18.06 Problem Set 8 - Solutions Due Wednesday, 14 November 2007 at 4 pm in

806 Problem Set 8 - Solutions Due Wednesday, 4 November 2007 at 4 pm in 2-06 08 03 Problem : 205+5+5+5 Consider the matrix A 02 07 a Check that A is a positive Markov matrix, and find its steady state

MATH JORDAN FORM

MATH 53 JORDAN FORM Let A,, A k be square matrices of size n,, n k, respectively with entries in a field F We define the matrix A A k of size n = n + + n k as the block matrix A 0 0 0 0 A 0 0 0 0 A k It

Math 215 HW #9 Solutions

Math 5 HW #9 Solutions. Problem 4.4.. If A is a 5 by 5 matrix with all a ij, then det A. Volumes or the big formula or pivots should give some upper bound on the determinant. Answer: Let v i be the ith

Review of some mathematical tools

MATHEMATICAL FOUNDATIONS OF SIGNAL PROCESSING Fall 2016 Benjamín Béjar Haro, Mihailo Kolundžija, Reza Parhizkar, Adam Scholefield Teaching assistants: Golnoosh Elhami, Hanjie Pan Review of some mathematical

Chapter Two Elements of Linear Algebra

Chapter Two Elements of Linear Algebra Previously, in chapter one, we have considered single first order differential equations involving a single unknown function. In the next chapter we will begin to

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors week -2 Fall 26 Eigenvalues and eigenvectors The most simple linear transformation from R n to R n may be the transformation of the form: T (x,,, x n ) (λ x, λ 2,, λ n x n

Agenda: Understand the action of A by seeing how it acts on eigenvectors.

Eigenvalues and Eigenvectors If Av=λv with v nonzero, then λ is called an eigenvalue of A and v is called an eigenvector of A corresponding to eigenvalue λ. Agenda: Understand the action of A by seeing

Eigenvalues and Eigenvectors

Chapter 1 Eigenvalues and Eigenvectors Among problems in numerical linear algebra, the determination of the eigenvalues and eigenvectors of matrices is second in importance only to the solution of linear

Matrix Operations: Determinant

Matrix Operations: Determinant Determinants Determinants are only applicable for square matrices. Determinant of the square matrix A is denoted as: det(a) or A Recall that the absolute value of the determinant

There are two things that are particularly nice about the first basis

Orthogonality and the Gram-Schmidt Process In Chapter 4, we spent a great deal of time studying the problem of finding a basis for a vector space We know that a basis for a vector space can potentially

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A =

30 MATHEMATICS REVIEW G A.1.1 Matrices and Vectors Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = a 11 a 12... a 1N a 21 a 22... a 2N...... a M1 a M2... a MN A matrix can

Linear Algebra Practice Problems

Linear Algebra Practice Problems Math 24 Calculus III Summer 25, Session II. Determine whether the given set is a vector space. If not, give at least one axiom that is not satisfied. Unless otherwise stated,

Definition (T -invariant subspace) Example. Example

Eigenvalues, Eigenvectors, Similarity, and Diagonalization We now turn our attention to linear transformations of the form T : V V. To better understand the effect of T on the vector space V, we begin

Eigenvalues and Eigenvectors

LECTURE 3 Eigenvalues and Eigenvectors Definition 3.. Let A be an n n matrix. The eigenvalue-eigenvector problem for A is the problem of finding numbers λ and vectors v R 3 such that Av = λv. If λ, v are

GATE Engineering Mathematics SAMPLE STUDY MATERIAL. Postal Correspondence Course GATE. Engineering. Mathematics GATE ENGINEERING MATHEMATICS

SAMPLE STUDY MATERIAL Postal Correspondence Course GATE Engineering Mathematics GATE ENGINEERING MATHEMATICS ENGINEERING MATHEMATICS GATE Syllabus CIVIL ENGINEERING CE CHEMICAL ENGINEERING CH MECHANICAL

(K + L)(c x) = K(c x) + L(c x) (def of K + L) = K( x) + K( y) + L( x) + L( y) (K, L are linear) = (K L)( x) + (K L)( y).

Exercise 71 We have L( x) = x 1 L( v 1 ) + x 2 L( v 2 ) + + x n L( v n ) n = x i (a 1i w 1 + a 2i w 2 + + a mi w m ) i=1 ( n ) ( n ) ( n ) = x i a 1i w 1 + x i a 2i w 2 + + x i a mi w m i=1 Therefore y

Jordan Normal Form. Chapter Minimal Polynomials

Chapter 8 Jordan Normal Form 81 Minimal Polynomials Recall p A (x) =det(xi A) is called the characteristic polynomial of the matrix A Theorem 811 Let A M n Then there exists a unique monic polynomial q

October 4, 2017 EIGENVALUES AND EIGENVECTORS. APPLICATIONS

October 4, 207 EIGENVALUES AND EIGENVECTORS. APPLICATIONS RODICA D. COSTIN Contents 4. Eigenvalues and Eigenvectors 3 4.. Motivation 3 4.2. Diagonal matrices 3 4.3. Example: solving linear differential

Math 443 Differential Geometry Spring Handout 3: Bilinear and Quadratic Forms This handout should be read just before Chapter 4 of the textbook.

Math 443 Differential Geometry Spring 2013 Handout 3: Bilinear and Quadratic Forms This handout should be read just before Chapter 4 of the textbook. Endomorphisms of a Vector Space This handout discusses

LECTURE VI: SELF-ADJOINT AND UNITARY OPERATORS MAT FALL 2006 PRINCETON UNIVERSITY

LECTURE VI: SELF-ADJOINT AND UNITARY OPERATORS MAT 204 - FALL 2006 PRINCETON UNIVERSITY ALFONSO SORRENTINO 1 Adjoint of a linear operator Note: In these notes, V will denote a n-dimensional euclidean vector

Computational math: Assignment 1

Computational math: Assignment 1 Thanks Ting Gao for her Latex file 11 Let B be a 4 4 matrix to which we apply the following operations: 1double column 1, halve row 3, 3add row 3 to row 1, 4interchange

Review of Linear Algebra Definitions, Change of Basis, Trace, Spectral Theorem

Review of Linear Algebra Definitions, Change of Basis, Trace, Spectral Theorem Steven J. Miller June 19, 2004 Abstract Matrices can be thought of as rectangular (often square) arrays of numbers, or as

Problems in Linear Algebra and Representation Theory

Problems in Linear Algebra and Representation Theory (Most of these were provided by Victor Ginzburg) The problems appearing below have varying level of difficulty. They are not listed in any specific

2. Review of Linear Algebra

2. Review of Linear Algebra ECE 83, Spring 217 In this course we will represent signals as vectors and operators (e.g., filters, transforms, etc) as matrices. This lecture reviews basic concepts from linear

Linear Algebra: Linear Systems and Matrices - Quadratic Forms and Deniteness - Eigenvalues and Markov Chains

Linear Algebra: Linear Systems and Matrices - Quadratic Forms and Deniteness - Eigenvalues and Markov Chains Joshua Wilde, revised by Isabel Tecu, Takeshi Suzuki and María José Boccardi August 3, 3 Systems

Stat 159/259: Linear Algebra Notes

Stat 159/259: Linear Algebra Notes Jarrod Millman November 16, 2015 Abstract These notes assume you ve taken a semester of undergraduate linear algebra. In particular, I assume you are familiar with the

9.1 Eigenanalysis I Eigenanalysis II Advanced Topics in Linear Algebra Kepler s laws

Chapter 9 Eigenanalysis Contents 9. Eigenanalysis I.................. 49 9.2 Eigenanalysis II................. 5 9.3 Advanced Topics in Linear Algebra..... 522 9.4 Kepler s laws................... 537

September 26, 2017 EIGENVALUES AND EIGENVECTORS. APPLICATIONS

September 26, 207 EIGENVALUES AND EIGENVECTORS. APPLICATIONS RODICA D. COSTIN Contents 4. Eigenvalues and Eigenvectors 3 4.. Motivation 3 4.2. Diagonal matrices 3 4.3. Example: solving linear differential

FINITE-DIMENSIONAL LINEAR ALGEBRA

DISCRETE MATHEMATICS AND ITS APPLICATIONS Series Editor KENNETH H ROSEN FINITE-DIMENSIONAL LINEAR ALGEBRA Mark S Gockenbach Michigan Technological University Houghton, USA CRC Press Taylor & Francis Croup

Linear Algebra 2 Final Exam, December 7, 2015 SOLUTIONS. a + 2b = x a + 3b = y. This solves to a = 3x 2y, b = y x. Thus

Linear Algebra 2 Final Exam, December 7, 2015 SOLUTIONS 1. (5.5 points) Let T : R 2 R 4 be a linear mapping satisfying T (1, 1) = ( 1, 0, 2, 3), T (2, 3) = (2, 3, 0, 0). Determine T (x, y) for (x, y) R

Lecture 3: Review of Linear Algebra

ECE 83 Fall 2 Statistical Signal Processing instructor: R Nowak, scribe: R Nowak Lecture 3: Review of Linear Algebra Very often in this course we will represent signals as vectors and operators (eg, filters,

80 min. 65 points in total. The raw score will be normalized according to the course policy to count into the final score.

This is a closed book, closed notes exam You need to justify every one of your answers unless you are asked not to do so Completely correct answers given without justification will receive little credit

Generalized eigenvector - Wikipedia, the free encyclopedia

1 of 30 18/03/2013 20:00 Generalized eigenvector From Wikipedia, the free encyclopedia In linear algebra, for a matrix A, there may not always exist a full set of linearly independent eigenvectors that

Notes on Mathematics

Notes on Mathematics - 12 1 Peeyush Chandra, A. K. Lal, V. Raghavendra, G. Santhanam 1 Supported by a grant from MHRD 2 Contents I Linear Algebra 7 1 Matrices 9 1.1 Definition of a Matrix......................................
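The linear-independence test of Definition 1.4 can be carried out concretely: the vectors v_1, ..., v_s are linearly independent exactly when the only solution of c_1 v_1 + ... + c_s v_s = 0 is the trivial one, which for vectors in R^n is equivalent to the matrix with the v_i as columns having rank s. The following is a minimal sketch of that check in Python with NumPy; the helper name `is_linearly_independent` is an illustration, not part of these notes.

```python
import numpy as np

def is_linearly_independent(vectors):
    # Illustrative helper (not from the notes): stack v_1, ..., v_s as the
    # columns of a matrix A. The homogeneous system A c = 0 has only the
    # trivial solution c = 0 precisely when rank(A) = s, which is
    # Definition 1.4 restated for vectors in R^n.
    A = np.column_stack(vectors)
    return bool(np.linalg.matrix_rank(A) == len(vectors))

# Example in R^3: the standard basis vectors are independent ...
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])
e3 = np.array([0.0, 0.0, 1.0])
print(is_linearly_independent([e1, e2, e3]))                # True

# ... but appending their sum gives a dependent set, since
# 1*e1 + 1*e2 + 1*e3 + (-1)*(e1 + e2 + e3) = 0 is a nontrivial relation.
print(is_linearly_independent([e1, e2, e3, e1 + e2 + e3]))  # False
```

The same rank computation also decides the basis condition of Definition 1.6 for subspaces of R^n: a set of s independent vectors spans an s-dimensional subspace, so it is a basis of that span.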