Online Exercises for Linear Algebra XM511

This document lists the online exercises for XM511. The section (§) numbers refer to the textbook. TYPE I exercises are True/False. (Matrices below are written row by row, with semicolons separating the rows.)

Lecture 02 (§1.1)

1) The matrix [3 2 1] has order 3 × 1.
2) If A is the 2 × 2 matrix defined by a_{i,j} = i − j for i = 1, 2 and j = 1, 2, then all diagonal elements of A are zero.
3) [1 2; 3 4; 5 6] = [1 3 5; 2 4 6].

Lecture 03 (§1.1)

1) Matrix addition, subtraction, and scalar multiplication are all defined elementwise.
2) Matrix addition and subtraction are commutative and associative.
3) If λ = 0, then for any p × n matrix A, λA = 0_{p×n}.

Lecture 04 (§1.2)

1) Matrix multiplication is defined elementwise.
2) For all matrices A and B, AB ≠ BA.
3) Matrix multiplication is associative.

Lecture 05 (§1.3)

1) [1 4 2 5; 3 6 8 11]^T = [1 3; 4 6; 2 8; 5 11].
2) There exists a nonzero matrix A which is both skew-symmetric and diagonal.
3) If A ≠ 0 and A^T = kA for some real number k, then k = ±1.
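Several of the exercises above (e.g. Lecture 04, item 2) turn on the fact that matrix multiplication need not commute. As an illustrative aside, here is a minimal pure-Python check; the `matmul` helper is ours, not from the textbook:

```python
# A minimal pure-Python check (no external libraries) that matrix
# multiplication need not commute: for these particular 2x2 matrices,
# AB differs from BA.

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [0, 0]]

print(matmul(A, B))  # [[0, 1], [0, 3]]
print(matmul(B, A))  # [[3, 4], [0, 0]]
```

Of course, a single pair of matrices only shows that "AB = BA for all A, B" is false; it says nothing about "AB ≠ BA for all A, B", which fails too (take B = I).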
Lecture 06 (§1.3)

1) The matrix A shown below is symmetrically partitioned.

   A = [ 0  1  2  3  4;
         5  6  7  8  9;
        10 11 12 13 14;
        15 16 17 18 19;
        20 21 22 23 24]

2) Every upper triangular matrix is in row-reduced form.
3) If A is both upper triangular and lower triangular, then A must be a zero matrix.

Lecture 07 (§1.4)

1) If x_1 and x_2 are distinct solutions to a system of linear equations, then z = 0.3x_1 + 0.7x_2 is also a solution to this system.
2) There exists a system of linear equations whose set of solutions has exactly 5 elements.
3) Every consistent system is homogeneous.

Lecture 08 (§1.4)

1) If S is the set of solutions to the equations E_1, ..., E_m in the variables x_1, ..., x_n, then S is also the set of solutions to the equations E_1 − 2E_2, E_2, E_3, ..., E_m, where E_1 − 2E_2 is the equation obtained by adding to equation E_1 negative 2 times equation E_2.
2) For a given system of equations, the derived set of equations (obtained by doing Gaussian elimination on the augmented matrix for the original system) has the same set of solutions as the original system of equations.
3) If, after doing elementary row operations, an augmented matrix for a linear system in the variables x, y, and z has the form [1 1 1 | 1; 0 1 1 | 1; 0 0 1 | 1], then the (unique) solution to the original system is x = 1, y = 1, z = 1.

Lecture 09 (§1.4)

1) If after applying elementary row operations to an augmented matrix there exists a row of zeros, then the corresponding system of equations must have infinitely many solutions.
2) If the row-reduced form for the augmented matrix of a system (in the variables x, y, and z) is [1 a b | c; 0 1 d | f; 0 0 0 | g], where a, b, c, d, f, and g are real numbers, then the system does not have a unique solution.
3) A homogeneous system with more variables than equations must have infinitely many solutions.
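The upper triangular system in Lecture 08, item 3, can be solved mechanically by back substitution, working from the bottom row up. A short sketch (the variable names are ours):

```python
# Back substitution on the upper triangular system from Lecture 08,
# item 3:  x + y + z = 1,  y + z = 1,  z = 1.

U = [[1, 1, 1], [0, 1, 1], [0, 0, 1]]  # coefficient matrix
b = [1, 1, 1]                          # right-hand side

n = len(U)
x = [0.0] * n
for i in range(n - 1, -1, -1):
    # subtract the already-known variables, then divide by the pivot
    x[i] = (b[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]

print(x)  # [0.0, 0.0, 1.0]
```

So the actual solution is x = 0, y = 0, z = 1, which is worth comparing against the claim made in the exercise.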
Lecture 10 (§1.5)

1) If A is not invertible, then A must have a zero row.
2) If A is invertible and symmetric, then A^{-1} is symmetric as well.
3) If A and b are the matrices A = [5 0; 0 2] and b = [4; 3], then the matrix equation Ax = b has the unique solution x = [1/5 0; 0 1/2][4; 3].

Lecture 11 (§1.5)

1) The elementary matrix E = [0 0 1; 0 1 0; 1 0 0] corresponds to the elementary row operation of interchanging the first and third rows of any 3 × n matrix.
2) An n × n matrix is invertible if and only if it can be transformed using elementary row operations into the identity matrix I_n.
3) If a matrix A is invertible, then A^{-1} can be computed by applying to the identity matrix any sequence of elementary row operations that transforms A into the identity.

Lecture 12 (§1.6)

1) If A = LU for some matrices L and U, and L is invertible, then A must be invertible.
2) If the lower triangular matrix L has a zero on its diagonal, then L is not invertible.
3) If the nonsingular matrix A can be transformed to upper triangular form using only the third elementary row operation, then A has an LU decomposition.

Lecture 13 (§2.1)

1) For any two vectors u and v in a vector space V, u + v = v + u.
2) For any three vectors u, v, and w in the vector space V, u + (v + w) = v + (u + w).
3) R^3, with vector addition and scalar multiplication defined componentwise, is a real vector space.
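For a diagonal matrix such as the one in Lecture 10, item 3, the inverse simply inverts each diagonal entry, so the proposed solution x = A^{-1}b can be checked by direct arithmetic (a sketch with our own helper names):

```python
# Lecture 10, item 3: A = [5 0; 0 2], b = [4; 3].  The inverse of a
# diagonal matrix inverts each diagonal entry, so A^{-1} = [1/5 0; 0 1/2].

A_inv = [[1 / 5, 0], [0, 1 / 2]]
b = [4, 3]

# x = A^{-1} b
x = [sum(A_inv[i][j] * b[j] for j in range(2)) for i in range(2)]
print(x)  # [0.8, 1.5]

# Verify A x = b for the original A:
A = [[5, 0], [0, 2]]
Ax = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
print(Ax)  # [4.0, 3.0]
```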
Lecture 14 (§2.1)

1) R^n, with vector addition and scalar multiplication defined componentwise, is a complex vector space.
2) The set of all polynomials with real coefficients and having degree less than or equal to 4, with vector addition and scalar multiplication defined as usual for polynomials, is a real vector space.
3) The set of all n × n lower triangular matrices (having entries in R), with vector addition and scalar multiplication being matrix addition and scalar multiplication, is a real vector space.

Lecture 15 (§2.1)

1) In some vector spaces there are vectors that have more than one additive inverse.
2) For any vector u in a vector space and any scalar α, αu = 0 if and only if either α = 0 or u = 0.
3) For any vector u in a vector space V and any scalar α, −(αu) = (−α)u = α(−u).

Lecture 16 (§2.1)

1) If V is a vector space, then V is a subspace of V.
2) There exists a subspace of R^2 with exactly 10 elements.
3) If S is a nonempty subset of the vector space V and αu + βv ∈ S whenever u, v ∈ S and α and β are scalars, then S must be a subspace of V.

Lecture 17 (§2.2)

1) S = {(x_1, x_2, x_3) ∈ R^3 : 2x_1 − 3x_2 + 4x_3 = 1} is a subspace of R^3.
2) If S is a subspace of V, then S = span(S).
3) If v_n is a linear combination of v_1, ..., v_{n−1}, then span{v_1, ..., v_{n−1}} = span{v_1, ..., v_{n−1}, v_n}.

Lecture 18 (§2.3)

1) Any set of three vectors in R^4 must be linearly independent.
2) If V is the set of all real-valued functions defined on R, and S = {sin t, cos t}, then S is linearly independent.
3) If S ⊆ T and T is linearly dependent, then S must be linearly dependent.
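The independence of {sin t, cos t} (Lecture 18, item 2) follows from evaluating a would-be dependence at two points. A numeric sketch of that argument, with our own illustrative helper:

```python
# If a*sin(t) + b*cos(t) were the zero function, evaluating at t = 0
# would force b = 0, and evaluating at t = pi/2 would force a = 0.
# So {sin t, cos t} is linearly independent.

import math

def combo(a, b, t):
    return a * math.sin(t) + b * math.cos(t)

# t = 0:     a*0 + b*1 = 0  =>  b = 0
# t = pi/2:  a*1 + b*0 = 0  =>  a = 0
# Any nonzero choice of (a, b) therefore fails to vanish somewhere:
print(combo(1, -1, 0))      # -1.0 (nonzero at t = 0)
print(combo(0, 0, 1.2345))  # 0.0  (only the zero combination vanishes)
```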
Lecture 19 (§2.4)

1) A basis for the vector space V is a set S ⊆ V such that S is linearly independent and spans V.
2) The set S = {t, t^2, t^3} is a basis for the vector space V of all polynomials q(t) such that the degree of q(t) is less than or equal to 3 and q(0) = 0.
3) If S has n elements and is a spanning set for some vector space V, then any subset of V with more than n elements must be linearly independent.

Lecture 20 (§2.4)

1) There exists a basis of P_3 with 5 elements.
2) Every linearly independent subset S of R^3 that contains exactly 3 vectors must be a basis for R^3.
3) If S is a 10-element subset of M_{4×2} that spans M_{4×2}, then two elements can be removed from S so that the remaining 8-element subset is a basis for M_{4×2}.

Lecture 21 (§2.5)

1) If V is a vector space, S ⊆ T ⊆ V, and T = span(X) for some set X ⊆ V, then span(span(S)) ⊆ T.
2) The row rank of a matrix A is the number of nonzero rows of A.
3) If A is an upper triangular n × n matrix with nonzero diagonal elements, then rowspace(A) = R^n.

Lecture 22 (§2.5)

1) If the 3 × 4 matrix A can be transformed using elementary row operations to the matrix B = [1 1 0 0; 0 0 1 5; 0 0 0 1], then rowrank(A) = 3.
2) If V is a vector space with basis S having n elements, and the set {u_1, ..., u_m} is linearly independent, then the set of coordinate representations (with respect to S) of u_1, ..., u_m is linearly independent as a subset of R^n.
3) If V is a vector space and A is the k × n matrix whose rows are the coordinates of the vectors in some set S ⊆ V, and S is linearly independent, then rowrank(A) = n.

Lecture 23 (§2.6)

1) If A is a square matrix and the rows of A are linearly independent, then the columns of A are also linearly independent.
2) If A is k × n and r(A) = n, then the system Ax = 0 must have infinitely many solutions.
3) If A is any k × n matrix and b is any k × 1 matrix, then r(A) ≤ r([A b]), and r(A) < r([A b]) if and only if the system Ax = b is inconsistent.

Lecture 24 (§2.6)

1) If A is an n × n matrix and r(A) < n, then for any n × n matrix C, r(CA) < n.
2) If A and B are n × n matrices and AB = I_n, then both A and B are invertible and A^{-1} = B and B^{-1} = A.
3) If A is a square matrix and not invertible, and if B is the row-reduced matrix obtained from A by using elementary row operations, then B has at least one zero on its diagonal.

Lecture 25 (§3.1, 3.2)

1) By definition, a linear transformation is one-to-one.
2) A transformation T : V → W is onto if for every v ∈ V there exists some w ∈ W such that T(v) = w.
3) If T : V → W is linear, then T(v − w) = Tv − Tw for all v, w ∈ V.

Lecture 26 (§3.3)

1) If {v_1, ..., v_n} is a basis for the vector space V, and T_1 : V → W and T_2 : V → W are linear transformations satisfying T_1(v_j) = T_2(v_j) for j = 1, ..., n, then T_1 v = T_2 v for all v ∈ V.
2) If T : V → W is a linear transformation, B is a basis for V, C is a basis for W, and A is a matrix such that (Tv)_C = A(v)_B for all v ∈ V, then the jth row of A consists of the coordinates of Tv_j with respect to C.
3) If the vector space V has basis {v_1, ..., v_n}, then the transformation ψ : V → R^n defined by ψ(c_1 v_1 + ··· + c_n v_n) = (c_1, ..., c_n)^T must be linear, one-to-one, and onto.

Lecture 27 (§3.4)

1) If B, C, and D are bases for the finite-dimensional vector space V, then P_C^D P_B^C = P_B^D.
2) If A_B^B is the matrix representation of T : V → V with respect to basis B of V, and A_C^C is the matrix representation of T with respect to basis C of V, then A_B^B P_C^B = P_C^B A_C^C.
3) Two matrices A and Ã are similar if there exists an invertible matrix P such that PÃ = AP^{-1}.
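Lecture 24, item 2, asserts that a one-sided inverse of a square matrix is automatically two-sided. A concrete 2 × 2 instance, checked in pure Python (a sketch with our own helper, not the textbook's notation):

```python
# A concrete 2x2 instance of Lecture 24, item 2: here AB = I_2, and the
# same pair also satisfies BA = I_2.

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[2, 1], [1, 1]]
B = [[1, -1], [-1, 2]]  # the inverse of A

print(matmul(A, B))  # [[1, 0], [0, 1]]
print(matmul(B, A))  # [[1, 0], [0, 1]]
```

One example cannot prove the general statement, but it shows what the claim looks like in practice; the proof uses the rank results of §2.6.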
Lecture 28 (§3.5)

1) If T : V → W is linear, then both ker(T) and image(T) are subspaces of V.
2) If T : R^2 → R^2 is defined by T(a, b) = (0, b), then null(T) = 1.
3) If T : V → V is linear, V is n-dimensional with basis B, and the matrix representation of T with respect to B is invertible, then r(T) = n.

Lecture 29 (§3.5)

1) If T : V → W is linear and has a 3-dimensional image, and if dim(V) = 7, then null(T) = 4.
2) If T : V → W is linear and T is one-to-one, then the kernel of T is nontrivial.
3) If V and W are n-dimensional vector spaces, then there must exist a linear transformation T : V → W that is one-to-one and onto.

Note: The section numbers below refer to the first edition. Sections 4.1 and 4.2 of the first edition are covered in Appendix A of the second edition. Sections 4.3, 4.4, and 4.5 of the first edition are covered in Sections 4.1, 4.2, and 4.3, respectively, of the second edition.

Lecture 30 (§4.1, 4.2)

1) If A = [a b; c d], then det(A) = det(A^T).
2) If A has all zeros on its diagonal, then det(A) = 0.
3) If A is an invertible n × n matrix and B is a row-reduced matrix obtained from A using elementary row operations, then det(B) = 1.

Lecture 31 (§4.2)

1) If A = [v_1 v_2 v_3] and B = [v_3 v_1 v_2], where v_1, v_2, v_3 ∈ R^3 are the columns, then det(B) = det(A).
2) For any square matrix A, det(−A) = det(A).
3) If the square matrix B is obtained from A by adding 5 times column 3 of A to column 1 of A, then det(B) = det(A).

Lecture 32 (§4.2)

1) For any square matrix A, A is invertible if and only if det(A) ≠ 0.
2) For any n × n matrices A and B, det(AB) = det(BA).
3) If A, B, and C are n × n matrices, B is nonsingular, and ABC = B, then det(AC) = 1.
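In the 2 × 2 case, det(A) = ad − bc makes claims like those in Lectures 30 and 32 easy to test numerically. A small sketch (all helper names are ours):

```python
# 2x2 sanity checks: det(A) = det(A^T), and det(AB) = det(BA) even
# though AB and BA themselves may differ.

def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(m):
    return [list(r) for r in zip(*m)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 1]]

print(det2(A), det2(transpose(A)))             # -2 -2
print(det2(matmul(A, B)), det2(matmul(B, A)))  # 2 2
```

Both products have determinant det(A)·det(B) = (−2)·(−1) = 2, which is why det(AB) = det(BA) holds in general.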
Lecture 33 (§4.3)

1) For any linear transformation T : V → V, the real number 3 is an eigenvalue of T because T0 = 0 = 3 · 0.
2) If A, B, and P are n × n matrices such that B = P^{-1}AP, and if x is an eigenvector of A with eigenvalue λ, then P^{-1}x is an eigenvector of B with eigenvalue λ.
3) If A is a diagonal n × n matrix with λ_1, ..., λ_n on its diagonal, then λ_1, ..., λ_n are the eigenvalues of A.

Lecture 34 (§4.3, 4.4)

1) The matrix A = [0 1; 1 0] has eigenvalues +1 and −1.
2) If 0 is an eigenvalue of A, then det(A) = 0.
3) The matrices A = [1 1; 0 1] and B = [1 0; 0 1] have the same characteristic equations but are not similar.

Lecture 35 (§4.4)

1) If A has a nonzero eigenvalue, then A must be invertible.
2) If 4 is an eigenvalue of A, then 64 is an eigenvalue of A^3.
3) If det(A − λI_n) = (λ − 1)(λ − 2) ··· (λ − n), then tr(A) = n(n + 1)/2.

Lecture 36 (§4.5)

1) If A is a 2 × 2 matrix with 2 eigenvectors, then A must be diagonalizable.
2) If M^{-1}AM = D is a diagonal matrix, then each column of M must be an eigenvector of A.
3) If λ_1, ..., λ_n are the eigenvalues (listed with multiplicity) of the n × n matrix A, then any diagonal matrix to which A is similar must have λ_1, ..., λ_n on its diagonal, although not necessarily in that order.

Lecture 37 (§4.5)

1) If A is 3 × 3 and the real eigenvalues of A are 1, 2, and 3, then A is diagonalizable.
2) If A is 3 × 3 and the real eigenvalues of A are 1 and 2, then A must not be diagonalizable.
3) If A is 5 × 5 and r(A − 6I_5) = 3, then 6 is an eigenvalue of A and dim(S_6) = 2.
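For 2 × 2 matrices, the eigenvalues of Lecture 34, item 1, and the cubing claim of Lecture 35, item 2, can be checked directly from the characteristic polynomial λ² − tr(A)λ + det(A) = 0. A sketch (the `eig2` helper is ours and assumes real eigenvalues, which holds for the matrices used here):

```python
# Eigenvalues of a 2x2 matrix via the quadratic formula applied to the
# characteristic polynomial  lambda^2 - tr(A)*lambda + det(A) = 0.

import math

def eig2(m):
    (a, b), (c, d) = m
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)  # assumes real eigenvalues
    return sorted(((tr - disc) / 2, (tr + disc) / 2))

# Lecture 34, item 1: eigenvalues of [0 1; 1 0]
print(eig2([[0, 1], [1, 0]]))  # [-1.0, 1.0]

# Lecture 35, item 2: cubing a matrix cubes its eigenvalues, so a matrix
# with eigenvalue 4 gives A^3 an eigenvalue of 64.
print([x ** 3 for x in eig2([[4, 0], [0, 2]])])  # [8.0, 64.0]
```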