Math 346 Notes on Linear Algebra


Ethan Akin, Mathematics Department
Fall, 2014

1 Vector Spaces

Anton Chapter 4, Section 4.1

You should recall the definition of a vector as an object with magnitude and direction (as opposed to a scalar, with magnitude alone). This was devised as a means of representing forces and velocities, which exhibit these vector characteristics. The behavior of these physical phenomena leads to a geometric notion of addition of vectors (the parallelogram law) as well as multiplication by scalars, i.e., real numbers. By using coordinates one discovers important properties of these operations.

Addition Properties:

    (v + w) + z = v + (w + z).
    v + w = w + v.
    v + 0 = v.
    v + (−v) = 0.                                  (1.1)

The first two of these, the Associative and Commutative Laws, allow us to rearrange the order of addition and to be sloppy about parentheses, just as

we are with numbers. We use them without thinking about them. The third says that the zero vector behaves the way the zero number does: adding it leaves the vector unchanged. The last says that every vector has an additive inverse which cancels it.

Scalar Multiplication Properties:

    a(bv) = (ab)v.
    1v = v.                                        (1.2)

The first of these looks like the Associative Law, but it isn't. It relates multiplication between numbers and scalar multiplication. The second says that multiplication by 1 leaves the vector unchanged, the way addition of 0 does.

Distributive Properties:

    a(v + w) = av + aw.
    (a + b)v = av + bv.                            (1.3)

These link addition and scalar multiplication.

In Linear Algebra we abstract from the original physical examples. Now a vector is just something in a vector space. A vector space is a set V of objects which are called vectors. What is special about a vector space is that it carries a definition of addition between vectors and a definition of multiplication of a vector by a scalar. For us a scalar is a real number, or possibly a complex number. We discard the original geometric picture and keep just the properties given above. These become the axioms of a vector space. Just from them it is possible to derive various other simple, but important, properties (Anton Theorem 4.1.1):

    av = 0  if and only if  a = 0 or v = 0.
    (−1)v = −v.                                    (1.4)

Fundamental Principle of Linear Algebra:

    A = B  if and only if  A − B = 0.              (1.5)
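The axioms above can be checked concretely for R^n with componentwise operations. A minimal Python sketch (the helper names `add` and `scale` are illustrative, not from the notes):

```python
# Componentwise vector operations on R^n, with vectors as tuples of floats.
def add(v, w):
    return tuple(a + b for a, b in zip(v, w))

def scale(a, v):
    return tuple(a * x for x in v)

v, w, z = (1.0, 2.0), (3.0, -1.0), (0.5, 4.0)
zero = (0.0, 0.0)

# Addition properties (1.1): associativity, commutativity, zero, inverse.
assert add(add(v, w), z) == add(v, add(w, z))
assert add(v, w) == add(w, v)
assert add(v, zero) == v
assert add(v, scale(-1, v)) == zero

# Derived property (1.4): (-1)v = -v, i.e. negating each component.
assert scale(-1, v) == (-1.0, -2.0)
```

Of course a numerical check on a few vectors is not a proof; the point is that R^n with these operations satisfies the axioms, so all the abstract consequences apply to it.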

2 Subspaces and Linear Combinations

Anton Chapter 4, Section 4.2

Example 2.1. Which of the following are subspaces? Justify your answers.
(a) The set of real-valued functions f such that f(17) = 0.
(b) The set of real-valued functions f such that f(0) = 17.
(c) The set of all linear combinations of the vectors v_1, v_2, v_3.

A subspace W of a vector space V is a nonempty subset which is closed under vector addition and scalar multiplication. So a subspace is a vector space in its own right. The intersection of any collection of subspaces is a subspace [Anton, Theorem 4.2.2].

Let S be the list, or set, of vectors {v_1, ..., v_n}. A linear combination on S is a vector v = Σ_{i=1}^n C_i v_i.

Exercise 1
(a) If S is contained in a subspace W then every linear combination on S is a vector in W.
(b) The set of all linear combinations on S, obtained by varying the coefficients C_1, ..., C_n, is a subspace of V. That is, the sum of two linear combinations is a linear combination and any scalar multiple of a linear combination is a linear combination.
(c) Which of the following are subspaces? Justify your answers.
    (a) The set of real-valued functions f such that f(17) = 0.
    (b) The set of real-valued functions f such that f(0) = 17.

The span of a subset S, denoted span(S), is

    span(S) = ∩ {A : S ⊆ A and A is a subspace}.

That is, it is the smallest subspace which contains S. From Exercise 1, it follows that span(S) is exactly the set of all linear combinations on S.

Exercise 2
(a) If S ⊆ T then span(S) ⊆ span(T).
(b) If A is a subspace and S ⊆ A then span(S) ⊆ A.

(c) If T ⊆ span(S) then span(S) = span(S ∪ T).

S spans when span(S) is the whole space, V. That is, every vector in V is a linear combination on S.

3 Linear Independence

Anton Chapter 4, Section 4.3

A list S = {v_1, ..., v_n} is (not are) linearly independent (= li) if c_1 v_1 + ... + c_n v_n = 0 only if c_1 = c_2 = ... = c_n = 0. That is, the only way to get 0 as a linear combination is by using the zero coefficients.

By the Fundamental Principle: S is li if and only if we can equate coefficients. That is:

    c_1 v_1 + ... + c_n v_n = a_1 v_1 + ... + a_n v_n  implies  c_1 = a_1, ..., c_n = a_n.    (3.1)

Example: A set consisting of a single vector {w} is ld if and only if w = 0.

Exercise 3 Assume that S ⊆ T. If S spans then T spans. If T is li then S is li.

When S is not linearly independent it is called linearly dependent (= ld). Thus, S is ld when a non-trivial linear combination equals zero. Using the Fundamental Principle, show that S is ld if and only if one vector of S is a linear combination of the others. We can sharpen this to get:

Exercise 4
(a) If S = {v_1, ..., v_n} is li and T = {v_1, ..., v_n, v_{n+1}, ..., v_m} is ld, then some vector among {v_{n+1}, ..., v_m} is a linear combination of the remaining vectors of T.
(b) If S = {v_1, ..., v_n} is an li list of vectors and T = {v_1, ..., v_n, v_{n+1}} then the following are equivalent:
(i) T is ld.

(ii) v_{n+1} ∈ span(S).
(iii) span(S) = span(T).
(Hint: Show (i) ⇒ (ii) ⇒ (iii) and (iii) ⇒ (ii) ⇒ (i).)
(c) Prove the Plus/Minus Theorem of Anton, Theorem 4.5.3.

S is a basis if it spans and is li. The following includes Anton Theorem 4.5.5.

Theorem 3.1. (Between Theorem) Let S = {v_1, ..., v_n} and T = {w_1, ..., w_m} be lists of vectors in V. If S is li and S ∪ T = {v_1, ..., v_n, w_1, ..., w_m} spans, then there exists a basis B with S ⊆ B ⊆ S ∪ T. That is, the basis consists of all of the vectors of S together with some (maybe none, maybe all) of the vectors of T.

Proof: Look first at two extreme cases.

If S spans, then since it is li, it is a basis. Use B = S, using none of the vectors of T. For example, if m = 0, so that there are no vectors in T, then S = S ∪ T spans and B = S is a basis.

If S ∪ T is li, then since it spans, it is a basis. Use B = S ∪ T, using all of the vectors of T.

Now if S ∪ T is not li, then by Exercise 4(a) one of the vectors of T can be written as a linear combination of the remaining vectors of S ∪ T. By renumbering we can suppose this is true of the last vector w_m. Call T′ = {w_1, ..., w_{m−1}}, so that the last vector w_m ∈ span(S ∪ T′). Then Exercise 2(c) says that span(S ∪ T) = span(S ∪ T′). But S ∪ T spans V and so S ∪ T′ spans V.

We continue, doing a loop. If S ∪ T′ is li then we are done, with B = S ∪ T′. If S ∪ T′ is ld we can remove some vector from T′ and still have a spanning set. Eventually, we get to a situation where the set of remaining vectors is li and so we have the basis B. If not before, this will happen when we have removed all of the vectors in T.

Technically, we are using mathematical induction on m. Remember that the initial case m = 0 is easy: B = S was a basis. Assume the result for m = k and suppose that m = k + 1. We saw that if S ∪ T was li, then B = S ∪ T was a basis, and if S ∪ T was ld, we could remove a vector and still have the spanning set S ∪ T′, but

now with m for the shorter list being k. The induction hypothesis implies that there is a basis B with S ⊆ B ⊆ S ∪ T′ ⊆ S ∪ T. The result then follows from the Principle of Mathematical Induction.

Exercise 5 If T = {w_1, ..., w_m} is a list of vectors in V which spans and v is a nonzero vector in V, then there is a basis B with v ∈ B, B ⊆ {v} ∪ T, and #B ≤ m. (Hint: {v} ∪ T is ld.)

Corollary 3.2. If S is li and T spans, then #S ≤ #T.

Proof: Use Exercise 5. Adjoin the vectors of S one at a time to the vectors of T and eliminate vectors from T to get a basis. The vectors of S will run out before those of T do.

In particular, if a vector space has a basis of size n, then any basis has size n, and we call n the dimension of the space.

Exercise 6 The dimension of R^n is n.

Theorem 3.3. Let S be a set of vectors in a vector space of dimension n.
(a) If S is li then #S ≤ n, with equality if and only if S is a basis.
(b) If S spans then #S ≥ n, with equality if and only if S is a basis.
(c) Any two of the following conditions implies the third:
(i) #S = n.
(ii) S is li.
(iii) S spans.

Proof: (a) If S is li then there exists a basis T with S ⊆ T and so #S ≤ #T = n.
(b) If S spans then there exists a basis T with T ⊆ S and so #S ≥ #T = n.
In either case, if #S = n = #T then S = T and so S is a basis.

(c) If S is li and spans then it is a basis and so has size n. For the other directions apply (a) and (b).

4 Linear Transformations

Anton Chapter 8, Section 8.1

A linear operator or linear transformation is just a function L between vector spaces (that is, the inputs and outputs are vectors) which satisfies linearity, also called the superposition property:

    L(v + w) = L(v) + L(w)  and  L(av) = aL(v).    (4.1)

It follows that L relates linear combinations:

    L(C_1 v_1 + C_2 v_2 + C_3 v_3) = C_1 L(v_1) + C_2 L(v_2) + C_3 L(v_3).    (4.2)

This property of linearity is very special. It is a standard algebra mistake to apply it to functions like the square root function and sin and cos, etc., for which it does not hold. On the other hand, these should be familiar properties from calculus. The operator D, associating to a differentiable function f its derivative Df, is our most important example of a linear operator.

From a linear operator we get an important example of a subspace. The set of vectors v such that L(v) = 0, the solution space of the homogeneous equation, is a subspace when L is a linear operator, but usually not otherwise. If r is not 0 then the solution space of L(v) = r is not a subspace. But if any particular solution v_p has been found, so that L(v_p) = r, then all of the solutions are of the form v_p + w where w is a solution of the homogeneous equation. This is often written v_g = v_p + v_h. That is, the general solution is the sum of a particular solution and the general solution of the homogeneous equation.

A subspace is called invariant for a linear operator L mapping the vector space to itself when L maps the subspace into itself. The most important example occurs when the subspace consists of multiples of a single nonzero vector v. This subspace is invariant when L(v) = rv for some scalar r. In that case, v is called an eigenvector for L with eigenvalue r.

Exercise 7 Let D be the linear operator of differentiation on C.

(a) Show that the set of linear combinations C_1 cos + C_2 sin is an invariant subspace for D.
(b) Show that the set of polynomials of degree at most 3 is a subspace and that it is invariant for D.
(c) Given a real number r, compute an eigenvector for D with eigenvalue r.

5 Matrices

Anton Chapter 1, Sections 1.3, 1.4

An m × n matrix is a rectangular array of numbers with m rows and n columns. That is, it is just a list of m·n numbers, but listed in rectangular form instead of all in a line. However, this shape is irrelevant as far as vector addition and scalar multiplication are concerned. The zero m × n matrix 0 has a zero in each place. A matrix is called a square matrix if m = n.

However, there is a multiplication between matrices which seems a bit odd until you get used to it by seeing its applications. If A is an m × p matrix and B is a p × q matrix then the product C = AB is defined and is an m × q matrix. Think of A as cut up into m rows, each of length p, and B as cut up into q columns, each of length p. The entry of C in row i and column j is the product of row i of A with column j of B. [The original notes display this pattern as a picture, equation (5.1), with A of size 3 × 4 and B of size 4 × 2, so that C is 3 × 2.]

In particular, the product of two n × n matrices is defined and yields an n × n matrix. The associative law (AB)C = A(BC) and the distributive laws A(B + C) = AB + AC and (A + B)C = AC + BC both hold, and so we

use them rather automatically. However, the commutative law AB = BA is not true except in certain special cases, and so you have to be careful about the side on which you multiply.

Rows and Columns of Zeroes: If Row i = 0 in matrix A, then Row i = 0 in matrix AB. If Column j = 0 in matrix B, then Column j = 0 in matrix AB.

The n × n identity matrix I_n has 1's on the diagonal and zeroes off the diagonal:

    I_n = [ 1 0 ... 0 ]
          [ 0 1 ... 0 ]
          [    ...    ]
          [ 0 ... 0 1 ]                            (5.2)

For any n × n matrix A, we have I_n A = A = A I_n. That is, I_n behaves like the number 1 with respect to matrix multiplication. In general, (rI)A = rA = A(rI) for any scalar r.

The inverse of an n × n matrix A, denoted A^{-1} when it exists, is a matrix which cancels A by multiplication. That is, A^{-1} A = I = A A^{-1}.

The determinant of a 2 × 2 matrix

    A = [ a b ]
        [ c d ]

is ad − bc, and is denoted det(A). For example, det(I_2) = 1. It turns out that det(AB) = det(A)det(B). This property implies that if A has an inverse matrix then det(A)det(A^{-1}) = det(I) = 1. So det(A) is not zero. In fact, its reciprocal is det(A^{-1}).

For a 2 × 2 matrix A = [ a b ; c d ] define

    Â = [  d −b ]
        [ −c  a ].

That is, you interchange the diagonal elements and leave the other two elements in place but with the sign reversed. You should memorize this pattern, as it is the reason that 2 × 2 matrices are much easier to deal with than larger ones. We will repeatedly use the following important identity:

    A Â = det(A) I = Â A.                          (5.3)

Thus, if det(A) is not zero, then A has an inverse and its inverse is

    A^{-1} = (1/det(A)) Â.                         (5.4)

If det(A) = 0 then A does not have an inverse, and A Â = 0 = Â A.
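The 2 × 2 formulas above can be verified numerically. A small Python sketch (the function names `det2`, `adj2`, and `matmul2` are made up for illustration):

```python
# 2x2 matrices represented as nested lists [[a, b], [c, d]].
def det2(A):
    (a, b), (c, d) = A
    return a * d - b * c

def adj2(A):
    # The "hat" matrix: swap the diagonal entries, negate the off-diagonal ones.
    (a, b), (c, d) = A
    return [[d, -b], [-c, a]]

def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1], [5, 3]]          # det(A) = 2*3 - 1*5 = 1
Ahat = adj2(A)

# Identity (5.3): A Ahat = det(A) I = Ahat A.
d = det2(A)
assert matmul2(A, Ahat) == [[d, 0], [0, d]]
assert matmul2(Ahat, A) == [[d, 0], [0, d]]

# Inverse formula (5.4): A^{-1} = (1/det(A)) Ahat.
Ainv = [[x / d for x in row] for row in Ahat]
assert matmul2(A, Ainv) == [[1.0, 0.0], [0.0, 1.0]]
```

Trying the same check with a matrix of determinant zero, say [[1, 2], [2, 4]], shows A Â = 0, matching the last remark above.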

The system of two linear equations in two unknowns

    ax + by = r,
    cx + dy = s,                                   (5.5)

can be written as a matrix equation L_A(X) = AX = S, with coefficient matrix A = [ a b ; c d ] and column vectors X = (x, y)^T and S = (r, s)^T. When the determinant is not zero, the unique solution is X = A^{-1} S.

Exercise 8 By using the formula for the inverse, derive Cramer's Rule:

    x = (1/det(A)) det[ r b ]      y = (1/det(A)) det[ a r ]
                      [ s d ],                       [ c s ].    (5.6)

If det(A) = 0 and S = 0, then the homogeneous system AX = 0 has infinitely many solutions. X = 0 is always a solution of the homogeneous system, but (5.3) implies that A Â = 0. This means that the columns of Â, namely X = (d, −c)^T and X = (−b, a)^T, are solutions as well. At least one of these is not zero unless A itself is 0, and in that case every vector X is a solution.

6 Row Operations and Elementary Matrices

Anton Chapter 1, Sections 1.5, 1.6

The three elementary row operations on n × p matrices are given by:

1. Multiply row R_i by a nonzero constant c (i ≤ n).
2. Interchange rows R_i and R_j (i, j ≤ n).
3. To row R_i add c times a copy of row R_j (i, j ≤ n, j ≠ i). [Note: row R_j is left unchanged.]
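The three operations can be sketched in Python, and one can also check numerically that each operation agrees with left multiplication by the matrix obtained by applying that same operation to the identity. The helper names below (`scale_row`, `swap_rows`, `add_multiple`, `matmul`) are illustrative, not from the notes:

```python
from copy import deepcopy

def scale_row(M, i, c):        # Op 1: multiply row i by a nonzero constant c
    M = deepcopy(M)
    M[i] = [c * x for x in M[i]]
    return M

def swap_rows(M, i, j):        # Op 2: interchange rows i and j
    M = deepcopy(M)
    M[i], M[j] = M[j], M[i]
    return M

def add_multiple(M, i, j, c):  # Op 3: add c times row j to row i (row j unchanged)
    M = deepcopy(M)
    M[i] = [x + c * y for x, y in zip(M[i], M[j])]
    return M

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

I = [[1, 0], [0, 1]]
A = [[1, 2, 3], [4, 5, 6]]

# For each operation Op: Op(A) equals Op(I) times A.
for op in (lambda M: scale_row(M, 0, 3),
           lambda M: swap_rows(M, 0, 1),
           lambda M: add_multiple(M, 1, 0, -4)):
    assert op(A) == matmul(op(I), A)
```

The loop at the end is exactly the content of the theorem that follows: a row operation is the same as left multiplication by its elementary matrix Op(I).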

We associate to a row operation Op the n × n elementary matrix E = Op(I_n).

Theorem 6.1. If A is an n × p matrix and B is a p × q matrix, then Op(AB) = Op(A)B. So if I is the n × n identity matrix, Op(A) = Op(IA) = Op(I)A.

Corollary 6.2. Reverse row operations have inverse elementary matrices.

Proof: Apply the row operation Op and then its reverse Ōp to the identity matrix:

    I = Ōp(Op(I)) = Ōp(I·Op(I)) = Ōp(I)·Op(I).    (6.1)

Start with an n × n matrix A and do row operations to reach a matrix B in reduced echelon form. Either B = I or the last row of B is a row of zeroes.

Case 1 (B = I): I = B = Op_k(...(Op_1(A))...) = E_k···E_1 A, where E_1 is the elementary matrix Op_1(I), and so on to E_k = Op_k(I). E_k···E_1 is an invertible matrix and A cancels it. So A is invertible with inverse E_k···E_1. If you perform the same row operations on the identity matrix you get Op_k(...(Op_1(I))...) = E_k···E_1 = A^{-1}. Thus, we find the inverse by putting (A : I) in reduced echelon form, obtaining (I : A^{-1}).

Case 2 (B has a row of zeroes): A is not invertible, for if it were then the product E_k···E_1 A would be; but this is B, and BC has a row of zeroes for any n × n matrix C, and so B cannot be canceled.

7 Determinants

Anton Chapter 2, Sections 2.1, 2.2, 2.3

The determinant is a function which associates to each n × n matrix A a number labeled det(A). We will need the following properties (which we won't prove):

1. det(i) = 1. 2. det(a ) = det(a). 3. det(ab) = det(a)det(b). 4. With all but one row fixed, the determinant is a linear function as we vary the remaining row. 5. Interchanging two rows multiplies the determinant by 1. It follows from 4. that det(op(a)) = cdet(a) if Op multiplies a row by c. If there is a row of zeroes, the determinant is zero. It follows from 5. that det(op(a)) = det(a) if Op interchanges two rows. If two rows are the same, the determinant is zero. The third type of row operation leaves the determinant unchanged. That is, det(op(a)) = det(a) if Op adds to a row c times a copy of another row. For computation, block triangular form will be convenient. Syllabus and Homework In the syllabus below we write Anton for the Linear Algebra book and EA for these notes. 1. Vector Spaces: EA Section 1 and Anton Section 4.1, General Vector Spaces, HW: Anton p. 179/ 19, 20, 23-27. 2. Subspaces and Linear Combinations: EA Section 2 and Anton Section 4.2, Subspaces, HW: EA ex. 1, 2 Anton p. 188-189/ 1, 3, 4, 14, 17. 3. Linear Independence and Bases: EA Section 3 and Anton Section 4.3, Linear Independence, Section 4.4, Coordinates and Basis HW: EA ex. 3, 4, 5, 6 Anton p. 199/ 11, 13, 15. 4. Dimension: Anton Section 4.4, Dimension, HW: Anton p. 217/ 22. 5. Linear Mappings: EA Section 4 and Anton Section 8.1, General Linear Transformations, HW: EA ex. 7 Anton p. 444/ 36, 37, 40, 41. 12

6. Systems of Equations and Echelon Form: Anton Sections 1.1, Introduction to Systems of Linear Equations, and 1.2, Gaussian Elimination. HW: Anton p. 10/ 12 abc, 13 abc; p. 23/ 4-8; p. 199/ 4 ab, 7, 8; pp. 207-208/ 9a, 15; p. 216/ 1, 3, 5.

7. Matrix Algebra: EA Section 5 and Anton Sections 1.3, Matrices and Matrix Operations, and 1.4, Inverses; Algebraic Properties of Matrices. HW: EA ex. 8; Anton pp. 35-37/ 1-5 abcde, 23, 24.

8. Row Operations and Elementary Matrices: EA Section 6 and Anton Sections 1.5, Elementary Matrices and a Method for Finding A^{-1}, and 1.6, More on Linear Systems and Invertible Matrices. HW: Anton pp. 58-59/ 4, 9-24 [for the odd ones obtain the inverse; for the evens just put the matrix in reduced echelon form] and p. 65/ 1, 3, 5, 8.

9. Determinants: EA Section 7 and Anton Sections 2.1, 2.2 [skim these]. HW: Anton p. 105/ 11, 13, 15.

10. Row Space, Column Space and Rank: Anton Section 4.7, Row Space, Column Space, and Null Space, and Section 4.8, Rank, Nullity and the Fundamental Matrix Spaces. HW: Anton p. 236/ 6 abcd [find a basis for the column space and for the row space, as well as the null space, for each matrix].

11. Eigenvalues, Eigenvectors and Diagonalization: Anton Section 5.1, Eigenvalues and Eigenvectors, and Section 5.2, Diagonalization. HW: Anton pp. 303-304/ 6-13, 18, 19, and pp. 313-314/ 12-20.

8 Class Information

MATH 346E (52873) Linear Algebra, Fall 2014
Monday, Wednesday 2:00-3:15 pm, NAC 4/108

Book and Notes: Elementary Linear Algebra, 10th Edition, by Howard Anton. In addition, there will be the supplementary notes attached above, a pdf of which will be posted on my home page.

Grading: There will be three in-class tests [if time permits; otherwise, just two] and a final exam. The final counts 40% of the grade. You should be warned that there are no makeups. Instead, the remaining work will simply be counted more heavily. Please attend regularly and be on time.

Office Hours: Monday 12:00-12:50 pm and Wednesday 9:00-9:50 am. Other times by appointment.

Ethan Akin
Office: R6/287A
Phone: 650-5136
Email: ethanakin@earthlink.net