OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises

This document gives the solutions to all of the online exercises for OHSx XM511. The section (§) numbers refer to the textbook. TYPE I exercises are True/False. Answers are in square brackets [ ].

Lecture 02 (§1.1)

1) The matrix $[3\ 2\ 1]$ has order $3 \times 1$.

[F. This matrix has 1 row and 3 columns, so its order is $1 \times 3$.]

2) If A is the $2 \times 2$ matrix defined by $A_{i,j} = i - j$ for $i = 1, 2$ and $j = 1, 2$, then all diagonal elements of A are zero.

[T. The diagonal elements of A are $A_{1,1}$ and $A_{2,2}$. But by the definition of A, $A_{1,1} = 1 - 1 = 0$ and $A_{2,2} = 2 - 2 = 0$.]

3) $\begin{bmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{bmatrix} = \begin{bmatrix} 1 & 3 & 5 \\ 2 & 4 & 6 \end{bmatrix}$.

[F. The two matrices have different orders, so they cannot be equal (regardless of the similarity of their elements). Later we will see that these two matrices are related: each is the transpose of the other.]

Lecture 03 (§1.1)

1) Matrix addition, subtraction, and scalar multiplication are all defined elementwise.

[T. See the definitions given in lecture or in the textbook.]

2) Matrix addition and subtraction are commutative and associative.

[F. Matrix addition, like ordinary addition of real numbers, is commutative and associative. But like ordinary subtraction of real numbers, matrix subtraction is neither commutative nor associative.]

3) If $\lambda = 0$, then for any $p \times n$ matrix A, $\lambda A = 0_{p \times n}$.

[T. $(\lambda A)_{i,j} = (0A)_{i,j} = 0 \cdot A_{i,j} = 0$. Since every element is zero, $\lambda A$ is the $p \times n$ zero matrix.]

Lecture 04 (§1.2)

1) Matrix multiplication is defined elementwise.

[F. See the definition of matrix multiplication given in lecture.]

2) For all matrices A and B, $AB \neq BA$.

[F. In general, matrix multiplication is not commutative, but there are some particular cases in which commutativity holds. For example, if $A = B$, then $AB = AA = BA$.]

3) Matrix multiplication is associative.

[T. Associativity of matrix multiplication is asserted in a theorem given in the lecture, and its proof is a homework exercise.]
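For concreteness, here is a pair of matrices of my own choosing (not from the lecture) witnessing the failure of commutativity in problem 2:

$$\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \qquad \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix},$$

so $AB \neq BA$ for this pair even though both products are defined.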

Lecture 05 (§1.3)

1) $\begin{bmatrix} 1 & 4 & 2 & 5 \\ 3 & 6 & 8 & 11 \end{bmatrix}^T = \begin{bmatrix} 1 & 3 \\ 4 & 6 \\ 2 & 8 \\ 5 & 11 \end{bmatrix}$.

[T. When transposing a matrix, the first row becomes the first column, the second row becomes the second column, etc.]

2) There exists a nonzero matrix A which is both skew-symmetric and diagonal.

[F. If A is skew-symmetric, then all diagonal elements must be zero. But if A is diagonal, then all nondiagonal elements must be zero. Hence, if A is both skew-symmetric and diagonal, then $A = 0$.]

3) If $A \neq 0$ and $A^T = kA$ for some real number k, then $k = \pm 1$.

[T. Try to prove it. Notice that if $A^T = kA$, then $A = (A^T)^T = (kA)^T = kA^T = k^2 A$.]

Lecture 06 (§1.3)

1) The matrix A shown below is symmetrically partitioned.
$$A = \left[\begin{array}{c|ccc|c} 0 & 1 & 2 & 3 & 4 \\ 5 & 6 & 7 & 8 & 9 \\ \hline 10 & 11 & 12 & 13 & 14 \\ 15 & 16 & 17 & 18 & 19 \\ \hline 20 & 21 & 22 & 23 & 24 \end{array}\right].$$

[F. The vertical partition lines occur after the first and fourth columns, but the horizontal partition lines occur after the second and fourth rows. A symmetrically partitioned matrix is a square matrix that is partitioned in such a way that the vertical partition lines are in the same places with respect to the sequence of columns as the horizontal partition lines are with respect to the sequence of rows.]

2) Every upper triangular matrix is in row-reduced form.

[F. The matrix $\begin{bmatrix} 1 & 2 \\ 0 & 2 \end{bmatrix}$ is upper triangular, but it is not in row-reduced form because the first nonzero element in the second row is not a 1.]

3) If A is both upper triangular and lower triangular, then A must be a zero matrix.

[F. Every diagonal matrix is both upper triangular and lower triangular.]

Lecture 07 (§1.4)

1) If $x_1$ and $x_2$ are distinct solutions to a system of linear equations, then $z = .3x_1 + .7x_2$ is also a solution to this system.

[T. A theorem proven in lecture asserts that if $x_1$ and $x_2$ are solutions to a given system and if $\alpha + \beta = 1$, then $\alpha x_1 + \beta x_2$ is also a solution to the system.]

2) There exists a system of linear equations whose set of solutions has exactly 5 elements.

[F. As proven in lecture, the set of solutions to a system of linear equations must have either 0, 1, or infinitely many elements.]

3) Every consistent system is homogeneous.

[F. The system $Ax = b$ is consistent if it has at least one solution; it is homogeneous if $b = 0$. Just because a system has a solution does not mean that b must be 0. For example, the system given by the two equations $x + y = 2$, $x + y = 2$ is consistent, because $(0, 2)$ is a solution. But it is not homogeneous. It is true that every homogeneous system is consistent, because every homogeneous system has the trivial solution $x = 0$.]
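In matrix form, the theorem cited in Lecture 07, problem 1 is a one-line computation (a sketch, writing the system as $Ax = b$): if $Ax_1 = b$, $Ax_2 = b$, and $\alpha + \beta = 1$, then

$$A(\alpha x_1 + \beta x_2) = \alpha A x_1 + \beta A x_2 = \alpha b + \beta b = (\alpha + \beta)b = b.$$

With $\alpha = .3$ and $\beta = .7$ this gives the claim.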

Lecture 08 (§1.4)

1) If S is the set of solutions to the equations $E_1, \ldots, E_m$ in the variables $x_1, \ldots, x_n$, then S is also the set of solutions to the equations $E_1 - 2E_2, E_2, E_3, \ldots, E_m$, where $E_1 - 2E_2$ is the equation obtained by adding to equation $E_1$ negative 2 times equation $E_2$.

[T. As discussed in lecture, adding to one equation a scalar times another equation does not change the set of solutions of the system of equations. Do you see why?]

2) For a given system of equations, the derived set of equations (obtained by doing Gaussian elimination on the augmented matrix for the original system) has the same set of solutions as the original system of equations.

[T. The derived set of equations corresponds to the new augmented matrix put in row-reduced form by successive elementary row operations. But elementary row operations on an augmented matrix do not change the solution set of the corresponding system of equations.]

3) If, after doing elementary row operations, an augmented matrix for a linear system in the variables x, y, and z has the form
$$\left[\begin{array}{ccc|c} 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 \end{array}\right],$$
then the (unique) solution to the original system is $x = 1$, $y = 1$, $z = 1$.

[F. The derived set of equations is
$$x + y + z = 1, \qquad y + z = 1, \qquad z = 1.$$
So $z = 1$, and using back-substitution gives $y = 1 - z = 1 - 1 = 0$ and $x = 1 - y - z = 1 - 0 - 1 = 0$. So $x = 0$, $y = 0$, $z = 1$ is the unique solution.]

Lecture 09 (§1.4)

1) If after applying elementary row operations to an augmented matrix there exists a row of zeros, then the corresponding system of equations must have infinitely many solutions.

[F. The existence of a row of zeros does not necessarily imply infinitely many solutions. For example, you can check that the row-reduced form of the augmented matrix for the system
$$x + 2y = 5, \qquad x + 3y = 7, \qquad 2x + 5y = 12$$
has a row of zeros, but the system has exactly one solution, namely $x = 1$, $y = 2$.]
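Here is one way to carry out that reduction (a sketch; any sequence of elementary row operations reaching row-reduced form gives the same conclusion):

$$\left[\begin{array}{cc|c} 1 & 2 & 5 \\ 1 & 3 & 7 \\ 2 & 5 & 12 \end{array}\right] \to \left[\begin{array}{cc|c} 1 & 2 & 5 \\ 0 & 1 & 2 \\ 0 & 1 & 2 \end{array}\right] \to \left[\begin{array}{cc|c} 1 & 2 & 5 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{array}\right],$$

subtracting row 1 from row 2, twice row 1 from row 3, and then row 2 from row 3. The zero row appears, yet back-substitution gives $y = 2$ and $x = 5 - 2y = 1$.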

2) If the row-reduced form for the augmented matrix of a system (in the variables x, y, and z) is
$$\left[\begin{array}{ccc|c} 1 & a & b & c \\ 0 & 1 & d & f \\ 0 & 0 & 0 & g \end{array}\right],$$
where a, b, c, d, f, g are real numbers, then the system does not have a unique solution.

[T. If $g \neq 0$, then the system is inconsistent, so there are no solutions. Otherwise, if $g = 0$, then z is unrestricted, so there must be infinitely many solutions.]

3) A homogeneous system with more variables than equations must have infinitely many solutions.

[T. Every homogeneous system is consistent, and if there are more variables than derived equations, then there must be infinitely many solutions.]

Lecture 10 (§1.5)

1) If A is not invertible, then A must have a zero row.

[F. A sufficient condition for noninvertibility is the existence of a zero row, but this condition is not necessary. For example, the $2 \times 2$ matrix with all entries equal to 1 is not invertible. For any real numbers a, b, c, d, if
$$\begin{bmatrix} a & b \\ c & d \end{bmatrix}\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix} = I_2,$$
then
$$\begin{bmatrix} a + b & a + b \\ c + d & c + d \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.$$
Equating first rows gives $a + b = 1$ and $a + b = 0$, which is impossible. So $\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$ has no inverse.]

2) If A is invertible and symmetric, then $A^{-1}$ is symmetric as well.

[T. A matrix B is symmetric if $B^T = B$. Now notice that $(A^{-1})^T = (A^T)^{-1} = A^{-1}$ (the first equality was proved in this lecture, and the second follows from the fact that A is symmetric). Therefore, $(A^{-1})^T = A^{-1}$, implying that $A^{-1}$ is symmetric.]

3) If A and b are the matrices
$$A = \begin{bmatrix} 5 & 0 \\ 0 & 2 \end{bmatrix}, \qquad b = \begin{bmatrix} 4 \\ 3 \end{bmatrix},$$
then the matrix equation $Ax = b$ has the unique solution
$$x = \begin{bmatrix} 1/5 & 0 \\ 0 & 1/2 \end{bmatrix}\begin{bmatrix} 4 \\ 3 \end{bmatrix}.$$

[T. A is a diagonal matrix with all diagonal elements nonzero, so A is invertible, and $A^{-1} = \begin{bmatrix} 1/5 & 0 \\ 0 & 1/2 \end{bmatrix}$. Therefore, the unique solution is $x = A^{-1}b = \begin{bmatrix} 1/5 & 0 \\ 0 & 1/2 \end{bmatrix}\begin{bmatrix} 4 \\ 3 \end{bmatrix}$.]
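Carrying out the final multiplication (one step beyond what the problem asks):

$$x = \begin{bmatrix} 1/5 & 0 \\ 0 & 1/2 \end{bmatrix}\begin{bmatrix} 4 \\ 3 \end{bmatrix} = \begin{bmatrix} 4/5 \\ 3/2 \end{bmatrix}, \qquad \text{and indeed} \qquad \begin{bmatrix} 5 & 0 \\ 0 & 2 \end{bmatrix}\begin{bmatrix} 4/5 \\ 3/2 \end{bmatrix} = \begin{bmatrix} 4 \\ 3 \end{bmatrix} = b.$$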

Lecture 11 (§1.5)

1) The elementary matrix
$$E = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix}$$
corresponds to the elementary row operation of interchanging the first and third rows of any $3 \times n$ matrix.

[T. Just check the assertion directly by left-multiplying any $3 \times n$ matrix by E. Notice that E can be obtained by interchanging the first and third rows of $I_3$.]

2) An $n \times n$ matrix is invertible if and only if it can be transformed using elementary row operations into the identity matrix $I_n$.

[T. As mentioned in this lecture, it is proven later in the course that an $n \times n$ matrix is invertible if and only if it can be transformed using elementary row operations into a row-reduced form with all diagonal elements nonzero. In this lecture, it was demonstrated how such row-reduced matrices can be further transformed into the identity matrix.]

3) If a matrix A is invertible, then $A^{-1}$ can be computed by applying to the identity matrix any sequence of elementary row operations that transforms A into the identity.

[T. Applying an elementary row operation to A is equivalent to left-multiplying by an elementary matrix. If $E_1, \ldots, E_k$ are elementary matrices such that $E_k E_{k-1} \cdots E_1 A = I_n$, then $A^{-1} = E_k E_{k-1} \cdots E_1 = E_k E_{k-1} \cdots E_1 I_n$, which can be obtained by transforming $I_n$ using the corresponding sequence of elementary row operations.]

Lecture 12 (§1.6)

1) If $A = LU$ for some matrices L and U, and L is invertible, then A must be invertible.

[F. A is invertible if and only if both L and U are invertible. Try to prove this.]

2) If the lower triangular matrix L has a zero on its diagonal, then L is not invertible.

[T. This was proven as a lemma in the lecture. L can be transformed to a matrix $L'$ with a row of zeros, and that matrix cannot be invertible. Since $L'$ is not invertible, it cannot be transformed into the identity matrix. But then L also cannot be transformed into the identity matrix. Hence, L is not invertible.]

3) If the nonsingular matrix A can be transformed to upper triangular form using only the third elementary row operation, then A has an LU decomposition.

[F. As proven in lecture, A has an LU decomposition if and only if A can be transformed to upper triangular form using only elementary row operations $R_3(i, j, k)$ where $i > j$. The restriction $i > j$ is important: consider the matrix $A = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$. If A had an LU decomposition, then $L^{-1}A = U$ would be upper triangular. Let $L^{-1} = \begin{bmatrix} a & 0 \\ c & b \end{bmatrix}$. Then $L^{-1}A = \begin{bmatrix} 0 & a \\ b & c \end{bmatrix}$, which is upper triangular only if $b = 0$, in which case $L^{-1}$ would be singular, implying A would be singular, a contradiction. Therefore, A does not have an LU decomposition. However, by first adding row 2 to row 1, then adding $(-1)$ times row 1 to row 2, A is transformed into the matrix $B = \begin{bmatrix} 1 & 1 \\ 0 & -1 \end{bmatrix}$, which is upper triangular. So A has been transformed to upper triangular form using only operation $R_3$, but A does not have an LU decomposition. Notice that the first operation applied to A is of the form $R_3(i = 1, j = 2, 1)$, so $i < j$.]

Lecture 13 (§2.1)

1) For any two vectors u and v in a vector space V, $u \odot v = v \odot u$.

[F. The expressions $u \odot v$ and $v \odot u$ make no sense. $\odot$ denotes scalar multiplication, which is a binary operation whose inputs are a scalar and a vector, and whose output is a vector.]
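In symbols (using $\odot$ as reconstructed here for the scalar-multiplication symbol, and taking a real vector space for definiteness): $\odot$ is a function

$$\odot : \mathbb{R} \times V \to V, \qquad (\alpha, v) \mapsto \alpha \odot v,$$

so an expression whose left operand is a vector rather than a scalar is not well-formed.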

2) For any three vectors u, v, and w in the vector space V, $u \oplus (v \oplus w) = v \oplus (u \oplus w)$.

[T. $u \oplus (v \oplus w) = (u \oplus v) \oplus w$ (associativity) $= (v \oplus u) \oplus w$ (commutativity) $= v \oplus (u \oplus w)$ (associativity).]

3) $\mathbb{R}^3$, with vector addition and scalar multiplication defined componentwise, is a real vector space.

[T. Some of the axioms were verified in lecture. You should verify the others.]

Lecture 14 (§2.1)

1) $\mathbb{R}^n$, with vector addition and scalar multiplication defined componentwise, is a complex vector space.

[F. Both $\mathbb{R}^n$ and $\mathbb{C}^n$ are real vector spaces. But $\mathbb{R}^n$ is not a complex vector space. Scalar multiplication of an n-tuple of real numbers by a complex number may not give an n-tuple of real numbers. For example, if $i = \sqrt{-1} \in \mathbb{C}$, then $i \odot (1, 1, 1) = (i, i, i) \notin \mathbb{R}^3$. So, if the scalars are chosen to be complex numbers, then $\mathbb{R}^3$ does not have the requisite closure property for $\odot$.]

2) The set of all polynomials with real coefficients and having degree less than or equal to 4, with vector addition and scalar multiplication defined as usual for polynomials, is a real vector space.

[T. Try to prove this by verifying the requisite properties.]

3) The set of all $n \times n$ lower triangular matrices (having entries in $\mathbb{R}$), with vector addition and scalar multiplication being matrix addition and scalar multiplication, is a real vector space.

[T. Note that the lower triangular matrices are closed under matrix addition and scalar multiplication. The zero matrix is lower triangular, and the additive inverse of a lower triangular matrix is lower triangular. Furthermore, all the other axioms of a vector space automatically hold for the set of $n \times n$ lower triangular matrices because those axioms hold for the set of all $n \times n$ matrices.]

Lecture 15 (§2.1)

1) In some vector spaces there are vectors that have more than one additive inverse.

[F. As proven in lecture, each vector in a given vector space has a unique additive inverse.]

2) For any vector u in a vector space and any scalar $\alpha$, $\alpha \odot u = 0$ if and only if either $\alpha = 0$ or $u = 0$.

[T. One of the theorems proven in lecture says that $\alpha = 0$ implies $\alpha \odot u = 0$ for any vector u; another theorem proven in the lecture says that $u = 0$ implies $\alpha \odot u = 0$ for any scalar $\alpha$; and a third theorem proven in lecture says that for any vector u and scalar $\alpha$, $\alpha \odot u = 0$ implies either $\alpha = 0$ or $u = 0$.]

3) For any vector u in a vector space V and any scalar $\alpha$, $-(\alpha \odot u) = (-\alpha) \odot u = \alpha \odot (-u)$.

[T. Try to prove this. HINT: Show that when either $(-\alpha) \odot u$ or $\alpha \odot (-u)$ is added to $\alpha \odot u$, you get 0.]
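Following the hint, a sketch of the argument for problem 3: $(-\alpha) \odot u \oplus \alpha \odot u = (-\alpha + \alpha) \odot u = 0 \odot u = 0$, so $(-\alpha) \odot u$ is an additive inverse of $\alpha \odot u$; likewise $\alpha \odot (-u) \oplus \alpha \odot u = \alpha \odot (-u \oplus u) = \alpha \odot 0 = 0$. Since additive inverses are unique (problem 1 above), both expressions equal $-(\alpha \odot u)$.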

Lecture 16 (§2.1)

1) If V is a vector space, then V is a subspace of V.

[T. V is a subset of V, and certainly V is a vector space (using the same operations as defined for V!). So V is a subspace of V. Note: the subspaces V and {0} are often called the trivial subspaces of the vector space V.]

2) There exists a subspace of $\mathbb{R}^2$ with exactly 10 elements.

[F. A subspace S of any real (or complex) vector space has either 1 element (the zero vector), or it has infinitely many elements. If there exists a nonzero $u \in S$, then by closure under scalar multiplication $n \odot u \in S$ for every integer n. All of these vectors are distinct (why?), so S must have infinitely many elements.]

3) If S is a nonempty subset of the vector space V and $\alpha \odot u \oplus \beta \odot v \in S$ whenever $u, v \in S$ and $\alpha$ and $\beta$ are scalars, then S must be a subspace of V.

[T. The assertion is exactly the corollary proven in lecture.]

Lecture 17 (§2.2)

1) $S = \{(x_1, x_2, x_3) \in \mathbb{R}^3 \mid 2x_1 - 3x_2 + 4x_3 = 1\}$ is a subspace of $\mathbb{R}^3$.

[F. It is easy to see that $0 = (0, 0, 0) \notin S$, so S is not a subspace. If the 1 is changed to a 0 in the defining equation for S, then S would be a subspace of $\mathbb{R}^3$.]

2) If S is a subspace of V, then $S = \operatorname{span}(S)$.

[T. For each $u \in S$, $u = 1u \in \operatorname{span}(S)$. So $S \subseteq \operatorname{span}(S)$. However, if S is a subspace, then $\operatorname{span}(S) \subseteq S$ because, as proved in lecture, $\operatorname{span}(S)$ is contained in every subspace containing S. Thus, since S and $\operatorname{span}(S)$ are subsets of each other, $S = \operatorname{span}(S)$.]

3) If $v_n$ is a linear combination of $v_1, \ldots, v_{n-1}$, then $\operatorname{span}\{v_1, \ldots, v_{n-1}\} = \operatorname{span}\{v_1, \ldots, v_{n-1}, v_n\}$.

[T. Try to prove this. HINT: First show that any linear combination of $v_1, \ldots, v_n$ can be written as a linear combination of $v_1, \ldots, v_{n-1}$.]

Lecture 18 (§2.3)

1) Any set of three vectors in $\mathbb{R}^4$ must be linearly independent.

[F. There are many counterexamples. For example, the set $\{(1, 0, 0, 0), (0, 1, 0, 0), (1, 1, 0, 0)\}$ is linearly dependent because the last vector is the sum of the first two. Note that by a result alluded to in lecture, any 4 vectors in $\mathbb{R}^3$ must be linearly dependent.]

2) If V is the set of all real-valued functions defined on $\mathbb{R}$, and $S = \{\sin t, \cos t\}$, then S is linearly independent.

[T. If S were linearly dependent, then there would exist a scalar $\alpha$ such that $\sin t = \alpha \cos t$, implying that $\tan t = \frac{\sin t}{\cos t} = \alpha$ is a constant function, which is obviously false.]

3) If $S \subseteq T$ and T is linearly dependent, then S must be linearly dependent.

[F. For example, in $\mathbb{R}^2$, let $S = \{(1, 0), (0, 1)\}$ and $T = \{(1, 0), (0, 1), (1, 1)\}$. T is linearly dependent but S is linearly independent. Note that, in general, if $S \subseteq T$ and T is linearly independent, then S is linearly independent as well.]

Lecture 19 (§2.4)

1) A basis for the vector space V is a set $S \subseteq V$ such that S is linearly independent and spans V.

[T. That's exactly the definition of basis given in lecture.]
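To illustrate the definition with an example of my own (not from the exercises): $S = \{(1, 0), (1, 1)\}$ is a basis for $\mathbb{R}^2$. Independence: $c_1(1, 0) + c_2(1, 1) = (c_1 + c_2, c_2) = (0, 0)$ forces $c_2 = 0$ and then $c_1 = 0$. Spanning: any $(a, b)$ can be written as $(a - b)(1, 0) + b(1, 1)$.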

2) The set $S = \{t, t^2, t^3\}$ is a basis for the vector space V of all polynomials $q(t)$ such that the degree of $q(t)$ is less than or equal to 3 and $q(0) = 0$.

[T. We know that S is linearly independent. Also, if $q(t)$ has degree less than or equal to 3, then $q(t) = a_0 + a_1 t + a_2 t^2 + a_3 t^3$. But $q(0) = 0$ implies $a_0 = 0$, so $q(t)$ is a linear combination of $t$, $t^2$, and $t^3$. Thus S spans V, implying that S is a basis for V.]

3) If S has n elements and is a spanning set for some vector space V, then any subset of V with more than n elements must be linearly independent.

[F. As proven in lecture, if V has a spanning set with n elements, then any subset of V with more than n elements must be linearly dependent.]

Lecture 20 (§2.4)

1) There exists a basis of $P_3$ with 5 elements.

[F. $P_3$, the set of all polynomials of degree less than or equal to 3, has dimension 4, so all bases of $P_3$ must have 4 elements.]

2) Every linearly independent subset S of $\mathbb{R}^3$ that contains exactly 3 vectors must be a basis for $\mathbb{R}^3$.

[T. If $S = \{v_1, v_2, v_3\}$ were linearly independent but not a basis, then S must fail to span. So there exists $v_4 \in \mathbb{R}^3$ such that $v_4 \notin \operatorname{span}(S)$. Therefore, $\{v_1, v_2, v_3, v_4\}$ is linearly independent. But since $\dim(\mathbb{R}^3) = 3$, every set with more than 3 elements must be linearly dependent. This contradiction implies $\operatorname{span}(S) = \mathbb{R}^3$.]

3) If S is a 10-element subset of $M_{4 \times 2}$ that spans $M_{4 \times 2}$, then two elements can be removed from S so that the remaining 8-element subset is a basis for $M_{4 \times 2}$.

[T. From the theorem given in lecture, some subset of S must be a basis for $M_{4 \times 2}$. But $\dim(M_{4 \times 2}) = 4 \cdot 2 = 8$, so any subset of S that is a basis must contain 8 elements. Therefore, there exists at least one (maybe more) 8-element subset of S that is a basis for $M_{4 \times 2}$.]

Lecture 21 (§2.5)

1) If V is a vector space, $S \subseteq T \subseteq V$, and $T = \operatorname{span}(X)$ for some set $X \subseteq V$, then $\operatorname{span}(\operatorname{span}(S)) \subseteq T$.

[T. The hypotheses say that $S \subseteq \operatorname{span}(X)$. Therefore, since taking the span preserves the subset relation, and applying span twice is the same as applying it once, $\operatorname{span}(\operatorname{span}(S)) = \operatorname{span}(S) \subseteq \operatorname{span}(\operatorname{span}(X)) = \operatorname{span}(X) = T$.]

2) The row rank of a matrix A is the number of nonzero rows of A.

[F. For example, if $A = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$, then the row space of A is the one-dimensional vector space spanned by $[1\ 1]$. In general, the row rank of A is the number of nonzero rows of the matrix B obtained by transforming A to row-reduced form using elementary row operations.]

3) If A is an upper triangular $n \times n$ matrix with nonzero diagonal elements, then $\operatorname{rowspace}(A) = \mathbb{R}^n$.

[T. If A has nonzero elements on its diagonal, then A can be transformed to the identity matrix $I_n$ using elementary row operations. But the rows of the identity matrix $I_n$ give the standard basis for $\mathbb{R}^n$, so $\operatorname{rowspace}(I_n) = \mathbb{R}^n$. Since A can be transformed to $I_n$ by elementary row operations, $\operatorname{rowspace}(A) = \operatorname{rowspace}(I_n) = \mathbb{R}^n$.]
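A quick $2 \times 2$ illustration of problem 3 (my own example): for $A = \begin{bmatrix} 1 & 2 \\ 0 & 3 \end{bmatrix}$, scaling row 2 by $1/3$ and then subtracting 2 times row 2 from row 1 gives $I_2$, so $\operatorname{rowspace}(A) = \operatorname{rowspace}(I_2) = \mathbb{R}^2$.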

Lecture 22 (§2.5)

1) If the $3 \times 4$ matrix A can be transformed using elementary row operations to the matrix
$$B = \begin{bmatrix} 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 5 \\ 0 & 0 & 0 & 1 \end{bmatrix},$$
then $\operatorname{rowrank}(A) = 3$.

[T. B is obtained from A by elementary row operations, so $\operatorname{rowspace}(A) = \operatorname{rowspace}(B)$. Since B is in row-reduced form, the number of nonzero rows of B gives the dimension of $\operatorname{rowspace}(B)$, which is the dimension of $\operatorname{rowspace}(A)$, which is $\operatorname{rowrank}(A)$.]

2) If V is a vector space with basis S and the set $\{u_1, \ldots, u_m\}$ is linearly independent, then the set of coordinate representations (with respect to S) of $u_1, \ldots, u_m$ is linearly independent as a subset of $\mathbb{R}^n$.

[T. Essentially, this fact was proven in the lecture. You should try to prove it yourself, without referring back to the lecture.]

3) If V is a vector space and A is the $k \times n$ matrix whose rows are the coordinates of the vectors in some set $S \subseteq V$, and S is linearly independent, then $\operatorname{rowrank}(A) = n$.

[F. A has k rows, and according to the hypotheses, as a set these rows are linearly independent. So this set forms a basis for $\operatorname{rowspace}(A)$, implying that $\operatorname{rowrank}(A) = k$.]

Lecture 23 (§2.6)

1) If A is a square matrix and the rows of A are linearly independent, then the columns of A are also linearly independent.

[T. If A is $n \times n$ and the rows are linearly independent, then they form a basis for the row space of A, which therefore has dimension n. So the column space also has dimension n, and it is spanned by the columns of A. Since there are n columns, these columns must form a basis for the column space. Thus, the columns are linearly independent.]

2) If A is $k \times n$ and $r(A) = n$, then the system $Ax = 0$ must have infinitely many solutions.

[F. As shown in lecture, $Ax = 0$ has nontrivial solutions only if $r(A)$ is less than the number n of variables in the system.]

3) If A is any $k \times n$ matrix and b is any $k \times 1$ matrix, then $r(A) \leq r([A\ b])$, and $r(A) < r([A\ b])$ if and only if the system $Ax = b$ is inconsistent.

[T. If $A_1, \ldots, A_n$ are the columns of A, then for any column matrix b, $\{A_1, \ldots, A_n\} \subseteq \{A_1, \ldots, A_n, b\}$, so $\operatorname{span}\{A_1, \ldots, A_n\} \subseteq \operatorname{span}\{A_1, \ldots, A_n, b\}$. Taking dimensions gives
$$\dim(\operatorname{span}\{A_1, \ldots, A_n\}) \leq \dim(\operatorname{span}\{A_1, \ldots, A_n, b\}),$$
or $r(A) \leq r([A\ b])$. In the lecture, it was shown that $r(A) = r([A\ b])$ if and only if $Ax = b$ is consistent. Therefore, $r(A) < r([A\ b])$ if and only if $Ax = b$ is inconsistent.]

Lecture 24 (§2.6)

1) If A is an $n \times n$ matrix and $r(A) < n$, then for any $n \times n$ matrix C, $r(CA) < n$.

[T. Try to prove this.]

2) If A and B are $n \times n$ matrices and $AB = I_n$, then both A and B are invertible and $A^{-1} = B$ and $B^{-1} = A$.

[T. As proven in lecture, $AB = I_n$ implies $BA = I_n$. Therefore, both A and B are invertible and each is the other's inverse.]
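A sketch of the proof requested in problem 1: each row of CA is a linear combination of the rows of A, so $\operatorname{rowspace}(CA) \subseteq \operatorname{rowspace}(A)$, and taking dimensions gives $r(CA) \leq r(A) < n$.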

3) If A is a square matrix and not invertible, and if B is the row-reduced matrix obtained from A by using elementary row operations, then B has at least one zero on its diagonal.

[T. As proven in lecture, A is invertible if and only if it can be transformed using elementary row operations to an upper triangular matrix with all diagonal elements equal to 1. (Remember that row-reduced form implies that the first nonzero entry in each row is 1.)]

Lecture 25 (§3.1, 3.2)

1) By definition, a linear transformation is one-to-one.

[F. A linear transformation is a function (between vector spaces) that preserves vector addition and scalar multiplication. It need not be one-to-one. For example, the zero transformation from a nontrivial vector space V to a vector space W is linear but definitely not one-to-one, because all vectors in V are mapped to the same vector (0) in W.]

2) A transformation $T : V \to W$ is onto if for every $v \in V$ there exists some $w \in W$ such that $T(v) = w$.

[F. T is onto if for every $w \in W$ there exists $v \in V$ such that $T(v) = w$.]

3) If $T : V \to W$ is linear, then $T(v - w) = Tv - Tw$ for all $v, w \in V$.

[T. $T(v - w) = T(v + (-1)w) = T(v) + T((-1)w) = Tv + (-1)Tw = Tv - Tw$.]

Lecture 26 (§3.3)

1) If $\{v_1, \ldots, v_n\}$ is a basis for the vector space V, and $T_1 : V \to W$ and $T_2 : V \to W$ are linear transformations satisfying $T_1(v_j) = T_2(v_j)$ for $j = 1, \ldots, n$, then $T_1 v = T_2 v$ for all $v \in V$.

[T. A linear transformation is determined by its action on a basis of its domain. So if two linear transformations with domain V agree on a basis for V, then they must agree on all vectors in V. More explicitly, if $v = c_1 v_1 + \cdots + c_n v_n$, then
$$T_1 v = T_1(c_1 v_1 + \cdots + c_n v_n) = c_1 T_1 v_1 + \cdots + c_n T_1 v_n = c_1 T_2 v_1 + \cdots + c_n T_2 v_n = T_2(c_1 v_1 + \cdots + c_n v_n) = T_2 v,$$
where the middle equality holds because $T_1(v_j) = T_2(v_j)$ for $j = 1, \ldots, n$.]

2) If $T : V \to W$ is a linear transformation, B is a basis for V, C is a basis for W, and A is a matrix such that $(Tv)_C = A(v)_B$ for all $v \in V$, then the jth row of A consists of the coordinates of $Tv_j$ with respect to C.

[F. The coordinates of $Tv_j$ with respect to C compose the jth column of A. Just note that $(v_j)_B$ is the column matrix $e_j$ having a 1 in the jth position and zeros elsewhere. So $A(v_j)_B = Ae_j$, which is the jth column of A.]

3) If the vector space V has basis $\{v_1, \ldots, v_n\}$, then the transformation $\psi : V \to \mathbb{R}^n$ defined by
$$\psi(c_1 v_1 + \cdots + c_n v_n) = \begin{bmatrix} c_1 \\ \vdots \\ c_n \end{bmatrix} \in \mathbb{R}^n$$
must be linear, one-to-one, and onto.

[T. If $B = \{v_1, \ldots, v_n\}$, then $\psi$ is the transformation $\psi(v) = (v)_B$. As shown in lecture, this transformation is linear, one-to-one, and onto.]
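A concrete instance of problem 3 (my own illustration, not from the lecture): take $V = P_2$ with basis $B = \{1, t, t^2\}$. Then $\psi(a + bt + ct^2) = \begin{bmatrix} a \\ b \\ c \end{bmatrix}$, so, for example, $\psi(2 - t + 3t^2) = (2, -1, 3)^T$.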

Lecture 27 (§3.4)

1) If B, C, and D are bases for the finite-dimensional vector space V, then $P^D_C P^C_B = P^D_B$.

[T. $P^D_C P^C_B (v)_B = P^D_C (v)_C = (v)_D = P^D_B (v)_B$.]

2) If $A^B_B$ is the matrix representation of $T : V \to V$ with respect to basis B of V, and $A^C_C$ is the matrix representation of T with respect to basis C of V, then $A^B_B P^C_B = P^C_B A^C_C$.

[F. The correct formula is $A^C_C P^C_B = P^C_B A^B_B$. Notice that $A^C_C P^C_B (v)_B = A^C_C (v)_C = (Tv)_C = P^C_B (Tv)_B = P^C_B A^B_B (v)_B$.]

3) Two matrices A and $\tilde{A}$ are similar if there exists an invertible matrix P such that $P\tilde{A} = AP^{-1}$.

[F. A and $\tilde{A}$ are similar if there exists an invertible matrix P such that $\tilde{A} = P^{-1}AP$, or equivalently $P\tilde{A} = AP$.]

Lecture 28 (§3.5)

1) If $T : V \to W$ is linear, then both $\ker(T)$ and $\operatorname{image}(T)$ are subspaces of V.

[F. $\ker(T)$ is a subspace of V, but $\operatorname{image}(T)$ is a subspace of W.]

2) If $T : \mathbb{R}^2 \to \mathbb{R}^2$ is defined by $T(a, b) = (0, b)$, then $\operatorname{null}(T) = 1$.

[T. Check that $\ker(T) = \{(a, b) \in \mathbb{R}^2 \mid b = 0\} = \{(a, 0) \mid a \in \mathbb{R}\}$. Since $(a, 0) = a(1, 0)$, $\{(1, 0)\}$ is a basis for the kernel. Hence, $\operatorname{null}(T) = \dim(\ker(T)) = 1$.]

3) If $T : V \to V$ is linear, V is n-dimensional with basis B, and the matrix representation of T with respect to B is invertible, then $r(T) = n$.

[T. If A is the matrix representation of T with respect to B, then A is invertible (by hypothesis), so its rank is n. Consequently, $r(T) = r(A) = n$.]

Lecture 29 (§3.5)

1) If $T : V \to W$ is linear and has a 3-dimensional image, and if $\dim(V) = 7$, then $\operatorname{null}(T) = 4$.

[T. As proven in lecture, $r(T) + \operatorname{null}(T) = \dim(V)$.]

2) If $T : V \to W$ is linear and T is one-to-one, then the kernel of T is nontrivial.

[F. If T is one-to-one, then the kernel must be the trivial subspace consisting only of the zero vector.]

3) If V and W are n-dimensional vector spaces, then there must exist a linear transformation $T : V \to W$ that is one-to-one and onto.

[T. All vector spaces with the same (finite) dimension are isomorphic.]

Lecture 30 (§4.1, 4.2)

1) If $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$, then $\det(A) = \det(A^T)$.

[T. Just compute: $\det(A) = ad - cb = ad - bc = \det(A^T)$. In the next lecture, we'll show that $\det(A) = \det(A^T)$ holds for all square matrices A.]
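A numeric check (my own example): for $A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$, $\det(A) = 1 \cdot 4 - 2 \cdot 3 = -2$, and $A^T = \begin{bmatrix} 1 & 3 \\ 2 & 4 \end{bmatrix}$ has $\det(A^T) = 1 \cdot 4 - 3 \cdot 2 = -2$ as well.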

2) If A has all zeros on its diagonal, then $\det(A) = 0$.

[F. For example, $\begin{vmatrix} 0 & 1 \\ 1 & 0 \end{vmatrix} = -1$.]

3) If A is an invertible $n \times n$ matrix and B is a row-reduced matrix obtained from A using elementary row operations, then $\det(B) = 1$.

[T. The rank of B must equal the rank of A, which is n. But if B has rank n and is in row-reduced form, then B is upper triangular with 1's on its diagonal. Therefore, $\det(B) = 1^n = 1$.]

Lecture 31 (§4.2)

1) If $A = \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix}$ and $B = \begin{bmatrix} v_3 \\ v_1 \\ v_2 \end{bmatrix}$, where the rows $v_1, v_2, v_3 \in \mathbb{R}^3$, then $\det(B) = \det(A)$.

[T. B is obtained by first interchanging row 1 and row 3 of A, then, on the resulting matrix, interchanging row 2 and row 3. Each pairwise interchange of rows contributes a factor of $-1$ to the determinant. Therefore, $\det(B) = (-1)(-1)\det(A) = \det(A)$.]

2) For any square matrix A, $\det(-A) = -\det(A)$.

[F. If A is $n \times n$ and $\lambda$ is any scalar, then $\det(\lambda A) = \lambda^n \det(A)$. Letting $\lambda = -1$ gives $\det(-A) = (-1)^n \det(A)$. Therefore, $\det(-A) = -\det(A)$ if and only if n is odd.]

3) If the square matrix B is obtained from A by adding 5 times column 3 of A to column 1 of A, then $\det(B) = \det(A)$.

[T. $B^T$ is obtained by adding 5 times row 3 of $A^T$ to row 1 of $A^T$. By a result proved in lecture, this elementary row operation does not change the determinant; so $\det(B^T) = \det(A^T)$. But $\det(B^T) = \det(B)$ and $\det(A^T) = \det(A)$.]

Lecture 32 (§4.2)

1) For any square matrix A, A is invertible if and only if $\det(A) = 0$.

[F. As proven in lecture, A is invertible if and only if $\det(A) \neq 0$.]

2) For any $n \times n$ matrices A and B, $\det(AB) = \det(BA)$.

[T. $\det(AB) = \det(A)\det(B) = \det(B)\det(A) = \det(BA)$.]

3) If A, B, and C are $n \times n$ matrices, B is nonsingular, and $ABC = B$, then $\det(AC) = 1$.

[T. Taking determinants of both sides of $ABC = B$ yields $\det(A)\det(B)\det(C) = \det(B)$. Dividing through by $\det(B)$ (which we can do because B is nonsingular, implying that $\det(B) \neq 0$) gives $\det(A)\det(C) = 1$. Therefore, $\det(AC) = \det(A)\det(C) = 1$.]

Lecture 33 (§4.3)

1) For any linear transformation $T : V \to V$, the real number 3 is an eigenvalue of T because $T0 = 0 = 3 \cdot 0$.

[F. By definition, an eigenvector must be nonzero. So 3 is an eigenvalue of T if and only if there exists a nonzero vector v such that $Tv = 3v$.]

2) If A, B, and P are $n \times n$ matrices such that $B = P^{-1}AP$, and if x is an eigenvector of A with eigenvalue $\lambda$, then $P^{-1}x$ is an eigenvector of B with eigenvalue $\lambda$.

[T. $BP^{-1}x = P^{-1}AP(P^{-1}x) = P^{-1}Ax = P^{-1}\lambda x = \lambda P^{-1}x$.]
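Two quick supplementary remarks on problem 2 (my own additions, not part of the posed problem): first, $P^{-1}x \neq 0$ whenever $x \neq 0$, since $P^{-1}$ is invertible, so $P^{-1}x$ really is an eigenvector. Second, the converse direction works the same way: if y is an eigenvector of B with eigenvalue $\lambda$, then $A(Py) = PBP^{-1}(Py) = PBy = \lambda Py$, so Py is an eigenvector of A.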

3) If A is a diagonal $n \times n$ matrix with $\lambda_1, \ldots, \lambda_n$ on its diagonal, then $\lambda_1, \ldots, \lambda_n$ are the eigenvalues of A.

[T. If $e_j$ is the column matrix with a 1 in the jth spot and zeros elsewhere, then $Ae_j = \lambda_j e_j$, showing that each $\lambda_j$ is an eigenvalue. By writing out the equation $Ax = \lambda x$, try to prove that any eigenvalue of A must be one of $\lambda_1, \ldots, \lambda_n$. Remember that by definition, an eigenvector is a nonzero vector.]

Lecture 34 (§4.3, 4.4)

1) The matrix $A = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}$ has eigenvalues $+1$ and $-1$.

[F. The characteristic equation $\det(A - \lambda I_2) = 0$ simplifies to $\lambda^2 + 1 = 0$. Since this equation has no real roots, A has no real eigenvalues. Its complex eigenvalues are $+i$ and $-i$, with eigenvectors $\begin{bmatrix} i \\ 1 \end{bmatrix}$ and $\begin{bmatrix} 1 \\ i \end{bmatrix}$ respectively.]

2) If 0 is an eigenvalue of A, then $\det(A) = 0$.

[T. The eigenvalues are the solutions to $\det(A - \lambda I) = 0$. Since $\lambda = 0$ is an eigenvalue, $\det(A) = \det(A - 0I) = 0$.]

3) The matrices $A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$ and $B = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$ have the same characteristic equations but are not similar.

[T. $\det(A - \lambda I_2) = (1 - \lambda)^2 = \det(B - \lambda I_2)$. But B is $I_2$, which commutes with every matrix. So for any invertible matrix P, $P^{-1}BP = P^{-1}I_2P = I_2$. So the only matrix similar to $B = I_2$ is $B = I_2$ itself. Therefore, A and B are not similar. In fact, A is not similar to any diagonal matrix. Think about why.]

Lecture 35 (§4.4)

1) If A has a nonzero eigenvalue, then A must be invertible.

[F. As proven in lecture, A is invertible if and only if all of its eigenvalues are nonzero.]

2) If 4 is an eigenvalue of A, then 64 is an eigenvalue of $A^3$.

[T. If $Ax = 4x$, then $A^3 x = A(A(Ax)) = A(A(4x)) = 4A(Ax) = 4A(4x) = 16Ax = 64x$.]

3) If $\det(A - \lambda I_n) = (\lambda - 1)(\lambda - 2)\cdots(\lambda - n)$, then $\operatorname{tr}(A) = n(n+1)/2$.

[T. The roots of the characteristic equation $\det(A - \lambda I_n) = (\lambda - 1)(\lambda - 2)\cdots(\lambda - n) = 0$ are $\lambda = 1, 2, \ldots, n$, and these are the eigenvalues of A. According to a theorem stated in lecture, their sum $1 + 2 + \cdots + n = n(n+1)/2$ is the trace of A.]

Lecture 36 (§4.5)

1) If A is a $2 \times 2$ matrix with 2 eigenvectors, then A must be diagonalizable.

[F. A is diagonalizable if and only if A has 2 linearly independent eigenvectors. Note that any multiple of an eigenvector is an eigenvector, so the existence of one eigenvector always implies the existence of infinitely many eigenvectors.]
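For instance (an example of my own echoing Lecture 34, problem 3): $A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$ has infinitely many eigenvectors, but they are all multiples of $(1, 0)^T$, since $(A - I_2)x = 0$ forces $x_2 = 0$. With no set of 2 linearly independent eigenvectors, A is not diagonalizable.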

2) If $M^{-1}AM = D$ is a diagonal matrix, then each column of M must be an eigenvector of A.

[T. Let $x_j$ be the jth column of M, let $\lambda_j$ be the jth diagonal element of D, and let $e_j$ be the jth standard basis vector. Since $M^{-1}AM = D$, we have $AM = MD$. Then $Ax_j = AMe_j = MDe_j = M\lambda_j e_j = \lambda_j Me_j = \lambda_j x_j$. So $x_j$ is an eigenvector of A with eigenvalue $\lambda_j$.]

3) If $\lambda_1, \ldots, \lambda_n$ are the eigenvalues (listed with multiplicity) of the $n \times n$ matrix A, then any diagonal matrix to which A is similar must have $\lambda_1, \ldots, \lambda_n$ on its diagonal, although not necessarily in that order.

[T. This assertion is a slight re-wording of the second theorem proven in the lecture. As an exercise you should prove that any two $n \times n$ diagonal matrices with $\lambda_1, \ldots, \lambda_n$ on their diagonals must be similar. Hint: elementary matrices acting on the left can permute rows, and acting on the right can permute columns.]

Lecture 37 (§4.5)

1) If A is $3 \times 3$ and the real eigenvalues of A are 1, 2, and 3, then A is diagonalizable.

[T. A has 3 distinct real eigenvalues. Any three corresponding eigenvectors are linearly independent. Therefore, A is diagonalizable.]

2) If A is $3 \times 3$ and the real eigenvalues of A are 1 and 2, then A must not be diagonalizable.

[F. A may or may not be diagonalizable. If A has two linearly independent eigenvectors with eigenvalue 1, or two linearly independent eigenvectors with eigenvalue 2, then A will have 3 linearly independent eigenvectors, and in this case A is diagonalizable. However, if $\dim(S_1) = \dim(S_2) = 1$, then there does not exist a set of 3 linearly independent eigenvectors, so in this case A is not diagonalizable.]

3) If A is $5 \times 5$ and $r(A - 6I_5) = 3$, then 6 is an eigenvalue of A and $\dim(S_6) = 2$.

[T. If $r(A - 6I_5) = 3 < 5$, then $\det(A - 6I_5) = 0$, implying that 6 is an eigenvalue of A. According to a theorem proven in lecture, $\dim(S_6) = 5 - r(A - 6I_5) = 2$.]
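For reference, the dimension count used in problem 3 can be spelled out as follows (a restatement of the cited theorem in terms of rank and nullity): the eigenspace $S_6$ is the solution space of the homogeneous system $(A - 6I_5)x = 0$, so

$$\dim(S_6) = 5 - r(A - 6I_5) = 5 - 3 = 2.$$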