SUMMARY OF MATH 1600

Note: The following list is intended as a study guide for the final exam. It is a continuation of the study guide for the midterm. It does not claim to be a comprehensive list. You should also study class notes, the textbook, recommended problems and MapleTA problems.

1. Vectors

1.1. Introduction.
(1) Know how to manipulate n-vectors via addition, subtraction and scalar multiplication using the rules of Theorem 1.1.
(2) Understand the geometric definition of addition and scalar multiplication of vectors and how to obtain vectors defined with initial point A and terminal point B.
(3) Find linear combinations of vectors.
(4) Do arithmetic in Z_m and Z_m^n.
(5) Solve equations in Z_m and Z_m^n, or determine that this is not possible.

1.2. Dot Product.
(1) Compute the dot product of two vectors in R^n or Z_m^n and the length of a vector in R^n. Find the distance between two vectors in R^n.
(2) Know and be able to compute with the dot product and length using the properties of Theorems 1.2 and 1.3.
(3) Know how to normalize a non-zero vector in R^n.
(4) Know and be able to apply the Cauchy-Schwarz Inequality and the Triangle Inequality (Theorems 1.4 and 1.5).
(5) Be able to compute the angle between two vectors in R^n and be able to determine when two vectors are orthogonal.
(6) Know and be able to apply Pythagoras' Theorem (Theorem 1.6).
(7) Be able to compute the projection of a vector onto another non-zero vector (see the sketch after this list).
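The dot product items above translate directly into computation. A minimal sketch in Python with NumPy (the library and the example vectors are not part of the course; the values are arbitrary):

```python
# Sketch of the Section 1.2 computations: dot product, length, distance,
# angle, normalisation and projection.  Vectors chosen arbitrarily.
import numpy as np

u = np.array([1.0, 2.0, 2.0])
v = np.array([3.0, 0.0, 4.0])

dot = u @ v                               # dot product u . v
length_u = np.sqrt(u @ u)                 # ||u|| = sqrt(u . u)
dist = np.linalg.norm(u - v)              # distance between u and v
cos_theta = dot / (np.linalg.norm(u) * np.linalg.norm(v))
angle = np.arccos(cos_theta)              # angle between u and v, in radians
unit_u = u / np.linalg.norm(u)            # normalisation of a non-zero vector
proj_v_u = (dot / (v @ v)) * v            # projection of u onto v

print(dot, length_u, dist, angle, proj_v_u)
```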

1.3. Lines and Planes.
(1) Determine the vector, parametric, normal and general equations of a line in R^2 and be able to convert between any two of these forms.
(2) Determine the vector and parametric forms of a line in R^3.
(3) Determine the vector, parametric, normal and general equations of a plane in R^3 and be able to convert between any two of these forms.
(4) Find the distance between a point and a line in R^2 or R^3 and find the closest point on the line to the given point.
(5) Find the distance between a point and a plane in R^3 and find the closest point on the plane to the given point.
(6) Find the angle between two non-parallel planes.
(7) Find the distance between two parallel planes or lines.

1.4. Code Vectors.
(1) Find a missing digit in a code vector given a check vector in Z_m^n. Do this in the specific cases of UPC codes and ISBN-10 codes.
(2) Determine whether or not a single-digit error or a specified transposition error will be detected in a code vector with a given check vector.

2. Systems of Linear Equations

2.1. Introduction.
(1) Find the solution to a system of linear equations algebraically by backwards or forwards substitution, or geometrically by interpreting the system as the intersection of lines in R^2 or planes in R^3.

2.2. Direct Methods for Solving Linear Equations.
(1) Row reduce a matrix into row echelon form or reduced row echelon form using a series of elementary row operations.
(2) Use Gaussian elimination to solve a system of linear equations (see the sketch after this list).
(3) Use Gauss-Jordan elimination to solve a system of linear equations.
(4) Know the Rank Theorem and its implications.
(5) Find the rank of a matrix.
(6) Find the intersection of 2 non-parallel planes.
(7) Find, if possible, the intersection of 2 lines in R^3.
(8) Know Theorem 2.3, that a homogeneous system with more variables than equations has a non-zero solution, and its implications.
(9) Solve linear systems of equations over Z_p.
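Row reduction, rank, and consistency (items (1)-(5) above) can be checked mechanically. A minimal sketch using SymPy for exact arithmetic; the system is an arbitrary example, not one from the course:

```python
# Sketch of Section 2.2: row reducing an augmented matrix and reading off
# rank and consistency.
from sympy import Matrix

A = Matrix([[1, 2, -1],
            [2, 4, 1],
            [1, 2, 2]])
b = Matrix([1, 5, 4])

aug = A.row_join(b)                  # the augmented matrix [A | b]
R, pivots = aug.rref()               # reduced row echelon form and pivot columns
print(R)
print("rank(A) =", A.rank(), " rank([A|b]) =", aug.rank())
# Consistent iff rank(A) == rank([A|b]); then #(free variables) = n - rank(A).
```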

2.3. Spanning Sets and Linear Independence.
(1) Determine when a given vector is in the span of a set of vectors.
(2) Be able to use Theorem 2.4: b ∈ Span{a_1, ..., a_n} if and only if the system [A | b] is consistent, where a_i is the ith column of A.
(3) Determine whether or not a subset of R^n spans R^n.
(4) Describe the span of a subset of R^n geometrically if n = 2, 3.
(5) Determine when a subset of R^n or Z_p^n is linearly independent or dependent. If dependent, find a linear dependence relation (see the sketch after this list).
(6) Use Theorem 2.6 to check for linear dependence/independence of a set of vectors {a_1, ..., a_n} in R^m (or Z_p^m): the set is linearly independent if and only if the homogeneous system Ax = 0 has only the zero solution. If it is linearly dependent, a dependence relation can be found by solving Ax = 0, since x_1 a_1 + ... + x_n a_n = 0 if and only if Ax = 0. Here A is the m x n matrix A = [a_1, ..., a_n].
(7) Know the relationship between linear independence and rank: a set of vectors {a_1, ..., a_n} in R^m is linearly independent if and only if rank(A) = n, where A = [a_1, ..., a_n] is the m x n matrix with {a_1, ..., a_n} as columns.
(8) Know the relationship between spanning and rank: a set of vectors {a_1, ..., a_n} in R^m spans R^m if and only if rank(A) = m, where A = [a_1, ..., a_n] is the m x n matrix with {a_1, ..., a_n} as columns.
(9) In general, rank(A) ≤ min{m, n} where A is m x n. Know the implications of this for systems of linear equations.
(10) Know that any set of m vectors in R^n must be linearly dependent if m > n.

2.4. Applications. We skipped this section.
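The rank tests of Theorems 2.4 and 2.6 are easy to run on a concrete example. A hedged sketch (SymPy assumed; the vectors are arbitrary and chosen so that a dependence relation exists):

```python
# Sketch of Section 2.3: is b in Span{a1, a2, a3}?  Are the vectors independent?
from sympy import Matrix

a1, a2, a3 = Matrix([1, 0, 1]), Matrix([2, 1, 0]), Matrix([3, 1, 1])
A = a1.row_join(a2).row_join(a3)             # A = [a1 a2 a3]
b = Matrix([-1, -1, 1])                      # b = a1 - a2, so it lies in the span

in_span = A.row_join(b).rank() == A.rank()   # b in Span iff [A | b] is consistent
independent = A.rank() == A.cols             # independent iff rank(A) = n
print("b in span:", in_span, " columns independent:", independent)
if not independent:
    # any null space vector gives a dependence relation x1 a1 + x2 a2 + x3 a3 = 0
    print("dependence relation from Null(A):", A.nullspace()[0])
```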

3. Matrices

3.1. Matrix Operations.
(1) Know all matrix terms and notation: rows, columns, diagonal entries, diagonal matrix, scalar matrix, identity matrix, square matrix, zero matrix.
(2) Know when two matrices are equal.
(3) Compute sums, differences, negatives, and scalar multiples of matrices, if defined.
(4) Compute products of matrices, if defined.
(5) Compute powers of square matrices and know the properties of powers.
(6) Compute the product of two partitioned matrices.
(7) Be able to manipulate various partitions of matrix products. Suppose A is m x n and B is n x p, where a_i, b_i are the ith columns of A, B and A_i, B_i are the ith rows of A, B respectively. If e_i ∈ R^m, then e_i^T A = A_i, the ith row of A. If e_j ∈ R^n, then A e_j = a_j, the jth column of A. Row i of AB is A_i B, or equivalently a_i1 B_1 + ... + a_in B_n. Column j of AB is A b_j, or equivalently b_1j a_1 + ... + b_nj a_n. These are all different special cases of partitioning matrices to compute products (see the sketch after this list).
(8) Compute the transpose of a matrix and determine when a matrix is symmetric.

3.2. Matrix Algebra.
(1) Know and be able to use the properties of matrix addition and scalar multiplication of Theorem 3.2.
(2) Determine when a matrix is a linear combination of a set of other matrices.
(3) Describe the span of a set of matrices.
(4) Determine when a set of matrices is linearly independent or dependent and, if dependent, find a linear dependence relation.
(5) Know and be able to use the properties of matrix multiplication given by Theorem 3.3.
(6) Know and be able to use the properties of transposes given by Theorem 3.4.
(7) Know that matrix multiplication is not commutative and its consequences. Know that many multiplicative facts that hold in the real numbers do not hold for matrices; for example, AB = 0 does not imply A = 0 or B = 0. Know the consequences of this.
(8) Know how to test when a matrix is symmetric, and how to obtain symmetric matrices from arbitrary matrices as in Theorem 3.5.
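The partitioned-product identities in item (7) can be verified numerically. A small NumPy sketch with arbitrary matrices (not course data):

```python
# Sketch of Sections 3.1-3.2: row/column descriptions of a product AB, and
# symmetric matrices built from an arbitrary matrix (as in Theorem 3.5).
import numpy as np

A = np.array([[1, 2, 0],
              [3, -1, 4]])           # 2 x 3
B = np.array([[1, 0],
              [2, 1],
              [0, 5]])               # 3 x 2

AB = A @ B
print(np.array_equal(AB[1, :], A[1, :] @ B))    # row i of AB = (row i of A) B
print(np.array_equal(AB[:, 0], A @ B[:, 0]))    # column j of AB = A (column j of B)

S1, S2 = A @ A.T, A.T @ A                       # both A A^T and A^T A are symmetric
print(np.array_equal(S1, S1.T), np.array_equal(S2, S2.T))
```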

3.3. The Inverse of a Matrix.
(1) Know the definition of an inverse of a matrix, how to check when a matrix is invertible, and how to check when a matrix B is an inverse of a matrix A (and hence A^{-1} = B, since inverses are unique by Theorem 3.6).
(2) Know how to determine whether a 2 x 2 matrix is invertible and, if so, find its inverse by the formula in Theorem 3.8.
(3) Find the unique solution to Ax = b if A is invertible using the formula x = A^{-1}b (Theorem 3.7).
(4) Know and be able to use the properties of inverses with respect to inverses, products, scalar multiples, transposes, and powers as in Theorem 3.9.
(5) Be able to find the elementary matrix of size m obtained from any of the three types of elementary row operations (i.e. perform the elementary row operation on I_m).
(6) Know and be able to use Theorem 3.10: multiplying an m x n matrix A on the left by an elementary matrix E corresponding to an elementary row operation performs the same elementary row operation on A.
(7) Find the inverse of an elementary matrix using Theorem 3.11: the inverse of an elementary matrix corresponding to an elementary row operation is the elementary matrix corresponding to the inverse of that elementary row operation.
(8) Be able to write an invertible matrix as a product of elementary matrices.
(9) Find an elementary matrix which performs an elementary row operation.
(10) Know and be able to use the Fundamental Theorem of Invertible Matrices: Version 1 (Theorem 3.12).
(11) Find the inverse of an n x n matrix A, if it exists, by row reducing [A | I_n] to [I_n | A^{-1}]. (This works by Theorem 3.14.) A is not invertible if rank(A) < n, where A is n x n. (See the sketch after this list.)
(12) Check that an n x n matrix A is invertible by just checking AB = I_n for some n x n matrix B (Theorem 3.13).
(13) Find the inverse of a matrix with coefficients in Z_p, p prime.

3.4. This section was not covered.
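Item (11), computing A^{-1} by row reducing [A | I_n], looks like the following in SymPy; the 2 x 2 matrix is an arbitrary invertible example:

```python
# Sketch of Section 3.3: inverse via row reduction of [A | I], and x = A^{-1} b.
from sympy import Matrix, eye

A = Matrix([[1, 2],
            [3, 5]])
b = Matrix([1, 4])

R, _ = A.row_join(eye(2)).rref()     # row reduce the augmented matrix [A | I]
A_inv = R[:, 2:]                     # right half is A^{-1} when rank(A) = n
print(A_inv, A_inv == A.inv())       # agrees with the built-in inverse
print("x =", A_inv * b)              # unique solution of Ax = b (Theorem 3.7)
```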

3.5. Subspaces, Basis, Dimension, and Rank.
(1) Know the definition of a subspace of R^n, and be able to check whether a subset of R^n is a subspace by checking the three axioms or, if it is not, provide an explicit numerical counterexample showing that it fails one of the three axioms.
(2) Know that the span of a finite set of vectors of R^n is a subspace (Theorem 3.19).
(3) Know the definitions of the row space, column space and null space of an m x n real matrix A, and that Row(A) and Null(A) are subspaces of R^n (the latter from Theorem 3.21) and Col(A) is a subspace of R^m.
(4) Know the definition of a basis B of a subspace S of R^n and be able to verify that a subset B of S is a basis of S.
(5) Know that row equivalent matrices have the same row space. Use this and the fact that RREF(A) has row space basis given by its non-zero rows to find a basis for Row(A). That is, a basis of Row(A) is given by the non-zero rows of its RREF.
(6) Find a basis for Col(A) by using {a_p(1), ..., a_p(r)}, where p(1) < ... < p(r) are the pivot columns of the RREF of A (i.e. a basis for Col(A) is given by the columns of A corresponding to the pivot columns of RREF(A)).
(7) Find a basis for Null(A) using the fact that row equivalent matrices have the same solution set for the homogeneous system, so that Null(A) = Null(R) where R = RREF(A). Solve the system Rx = 0 by setting all the non-pivot variables to parameters, writing all the pivot variables in terms of the parameters, and then, after substituting back into x, expressing the solutions as linear combinations of n - rank(A) vectors, one corresponding to each non-pivot variable. These n - rank(A) vectors are a basis of Null(A).
(8) Know the Basis Theorem: any two bases for a subspace S of R^n have the same number k of vectors. Then S is a k-dimensional vector space.
(9) Determine the dimension of any subspace of R^n.
(10) In particular, determine the dimension of the subspaces related to a matrix A: dim(Col(A)) = dim(Row(A)) = rank(A) and nullity(A) = dim(Null(A)) = n - rank(A) if A is m x n.
(11) Know and be able to use the Rank Theorem: rank(A) + nullity(A) = n if A is m x n. (See the sketch after this list.)
(12) Know that rank(A) = rank(A^T) (Theorem 3.25).
(13) Know and be able to use the Fundamental Theorem of Invertible Matrices: Version 2.
(14) Find a basis for a subspace given as the span of a finite set of vectors in R^n.
(15) Determine whether or not a set of vectors in R^n (resp. Z_p^n) forms a basis of R^n (resp. Z_p^n).
(16) Compute the rank and nullity of matrices over R or Z_p.
(17) Determine whether a vector is in Row(A), Col(A) or Null(A) for a given matrix A.
(18) Know how to find the possible values of the rank and nullity of a matrix of a given size, and to draw conclusions about the set of columns or rows of the matrix.
(19) Show that a vector w in R^n is in Span(B) for a basis B of a subspace S of R^n. Determine its coordinate vector [w]_B in this case.
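A sketch of the basis constructions in items (5)-(7) and the Rank Theorem of item (11), using SymPy; the matrix is an arbitrary example with a non-trivial null space:

```python
# Sketch of Section 3.5: bases of Row(A), Col(A), Null(A) and rank + nullity = n.
from sympy import Matrix

A = Matrix([[1, 2, 1, 0],
            [2, 4, 0, 2],
            [3, 6, 1, 2]])

R, pivots = A.rref()
row_basis  = [R.row(i) for i in range(A.rank())]   # non-zero rows of RREF(A)
col_basis  = [A.col(j) for j in pivots]            # pivot columns of A itself
null_basis = A.nullspace()                         # one vector per free variable

print("rank =", A.rank(), " nullity =", len(null_basis), " n =", A.cols)
```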

3.6. Introduction to Linear Transformations.
(1) Know the definition of a linear transformation and be able to prove that a map is a linear transformation, or disprove that it is a linear transformation using an explicit numerical counterexample.
(2) Know that the map T_A : R^n → R^m, T_A(x) = Ax, is a linear transformation for any real m x n matrix A (Theorem 3.30).
(3) Know, in fact, that any linear transformation T : R^n → R^m is of the form T = T_A for a unique real m x n matrix A. In fact T = T_A if and only if A = [T], where [T] = [T(e_1), ..., T(e_n)] is the standard matrix of the linear transformation T (Theorem 3.31).
(4) Determine the standard matrix of a linear transformation T : R^n → R^m (see the sketch after this list).
(5) For each type of linear transformation given by geometry in R^2, find its standard matrix. That is, find the matrix of R_θ : R^2 → R^2, rotation by θ radians counterclockwise from the positive x-axis (Example 3.58). Find the matrix of the projection P_l onto a line l through the origin in R^2 (note that P_l(x) = proj_d(x), where d is the direction vector of the line through the origin in R^2). Find the matrix of the reflection F_l in a line l through the origin in R^2 (note that F_l(x) = 2 proj_d(x) - x, where d is the direction vector of the line through the origin in R^2).
(6) Find the composite of linear transformations T : R^n → R^m and S : R^m → R^p, given by S ∘ T : R^n → R^p, (S ∘ T)(v) = S(T(v)). By Theorem 3.32, S ∘ T is also a linear transformation with standard matrix [S ∘ T] = [S][T].
(7) Find the composite of two linear transformations directly and via Theorem 3.32.
(8) Find the matrix of a composite of linear transformations of R^2 given by geometric descriptions.
(9) Know how to check whether a linear transformation is invertible and verify that another linear transformation is its (unique) inverse.
(10) Know that a linear transformation T : R^n → R^n is invertible if and only if its standard matrix [T] is invertible. In this case [T^{-1}] = [T]^{-1} (Theorem 3.33 plus class). Use this formula to find the inverse of a linear transformation.
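The standard matrices of item (5) and the composition rule [S ∘ T] = [S][T] of item (6) can be built explicitly. A NumPy sketch; the angle and direction vector are arbitrary choices:

```python
# Sketch of Section 3.6: rotation and projection matrices, and composition
# of linear transformations as a matrix product.
import numpy as np

def rotation(theta):
    """Standard matrix of rotation by theta radians counterclockwise."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def projection_onto(d):
    """Standard matrix of projection onto the line through 0 with direction d."""
    d = np.asarray(d, dtype=float)
    return np.outer(d, d) / (d @ d)          # P x = proj_d(x)

R90 = rotation(np.pi / 2)
P = projection_onto([1, 1])
composite = P @ R90                          # [S o T] = [S][T]: rotate first, then project
print(np.round(composite @ np.array([1.0, 0.0]), 6))
```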

4. Eigenvalues and Eigenvectors

4.1. Introduction.
(1) Know and be able to use the definition of an eigenvector and eigenvalue of a square matrix. Be able to verify whether a given vector is an eigenvector and to find a corresponding eigenvalue.
(2) Find the eigenvalues of a 2 x 2 matrix over R, C or Z_p.
(3) Find a basis for each eigenspace of a 2 x 2 matrix over R, C or Z_p.
(4) For a 2 x 2 matrix which is the standard matrix of a linear transformation given geometrically, find its eigenvalues and a basis for each eigenspace.

4.2. Determinants.
(1) Find the determinant of a square matrix using the cofactor (Laplace) expansion along any row or column (Theorem 4.1). (See the sketch after this list.)
(2) Know and be able to use the fact that the determinant of a triangular matrix is the product of the diagonal entries (Theorem 4.2).
(3) Know and be able to use the properties of determinants coming from elementary row and column operations (Theorem 4.3).
(4) Compute the determinant of a matrix by row reduction, keeping track of the elementary row operations and their effects on the determinant, together with the result on the determinant of a triangular matrix.
(5) Know the determinants of the three types of elementary matrices (Theorem 4.4).
(6) Know and be able to use the fact that a square matrix is invertible if and only if its determinant is non-zero.
(7) Know and be able to use the facts about determinants: the determinant of a scalar multiple (Theorem 4.7), the determinant of a product (Theorem 4.8), the determinant of the inverse of an invertible matrix (Theorem 4.9), and the determinant of a transpose (Theorem 4.10). Use these rules to find the determinant of a matrix formed by the above matrix operations.
(8) Use elementary row and column operations to deduce the determinant of a matrix from that of a given matrix.
(9) Use determinant properties and equations satisfied by matrices to find possible values of determinants.
(10) Use Cramer's Rule (Theorem 4.11) to solve a system of linear equations Ax = b, where A is an invertible square matrix.
(11) Use the adjoint formula to find the inverse of an invertible matrix.
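Cofactor expansion (item (1)) and the product rule det(AB) = det(A)det(B) (item (7), Theorem 4.8) in a short NumPy sketch; the matrices are arbitrary, and the recursive expansion is only practical for small sizes:

```python
# Sketch of Section 4.2: Laplace expansion along the first row, checked
# against the built-in determinant, plus the multiplicativity of det.
import numpy as np

def det_cofactor(A):
    """Determinant by cofactor expansion along row 0 (fine for small matrices)."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det_cofactor(minor)
    return total

A = np.array([[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
B = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, 1.0], [1.0, 1.0, 0.0]])
print(det_cofactor(A), np.linalg.det(A))                 # the two should agree
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))
```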

Cross Product (Exploration after 4.2: Determinants)
(1) Know how to compute the cross product of two vectors in R^3 using the determinant.
(2) Know the properties of the cross product from Exercise 3 of the Exploration (after 4.2: Determinants).
(3) Determine the area of a parallelogram or triangle determined by two vectors in the plane using the cross product.
(4) Determine the volume of a parallelepiped determined by three vectors in R^3.
(5) Find the equation of a plane passing through three points.

4.3. Eigenvalues and Eigenvectors of n x n Matrices.
(1) Find the characteristic polynomial of a square matrix.
(2) Find the eigenvalues of a square matrix by finding the roots of its characteristic polynomial.
(3) Find a basis for each eigenspace for each eigenvalue of a square matrix (see the sketch after this list).
(4) Find the algebraic and geometric multiplicities of each of the eigenvalues of a square matrix.
(5) Know and be able to use the fact that the eigenvalues of a triangular matrix are the diagonal entries.
(6) Know yet another condition equivalent to the invertibility of a square matrix A, that 0 is not an eigenvalue of A, leading to Theorem 4.17.
(7) Know and be able to use the formulas for the eigenvalues of powers and inverses of matrices with respect to a given eigenvector for the matrix (Theorem 4.18).
(8) Know and be able to use the fact that a set of eigenvectors for a square matrix A corresponding to distinct eigenvalues is linearly independent (Theorem 4.20).
(9) Find A^k x for a diagonalisable matrix A.
(10) Find conditions on the eigenvalues of matrices satisfying equations.
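Characteristic polynomial, eigenvalues, and algebraic versus geometric multiplicities (items (1)-(4)) for a concrete matrix, sketched with SymPy; the matrix is an arbitrary example chosen so that the two multiplicities differ:

```python
# Sketch of Section 4.3: characteristic polynomial det(A - lambda I), eigenvalues
# with their multiplicities, and a basis for each eigenspace.
from sympy import Matrix, symbols, eye

A = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 3]])

lam = symbols('lambda')
char_poly = (A - lam * eye(3)).det()          # det(A - lambda I)
print(char_poly.factor())

for eigval, alg_mult, basis in A.eigenvects():
    print(eigval, "algebraic:", alg_mult, "geometric:", len(basis))
# Here lambda = 2 has algebraic multiplicity 2 but geometric multiplicity 1,
# so this A is not diagonalisable (compare Section 4.4).
```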

4.4. Similarity and Diagonalisation.
(1) Know the definition of similar matrices and be able to use it.
(2) Know that similarity satisfies the properties of Theorem 4.21 (it is an equivalence relation on square matrices).
(3) Know that similar matrices share many properties: determinant, rank, characteristic polynomial, eigenvalues.
(4) Determine when square matrices are not similar by computing ranks, determinants, and characteristic polynomials.
(5) Know that if matrices are similar, so are their powers, and even their negative powers if the matrices are invertible.
(6) Know the definition of a diagonalisable matrix.
(7) Be able to use Theorem 4.23 to diagonalise a matrix when it is possible (see the sketch after this list).
(8) Know that the union of bases of the eigenspaces corresponding to distinct eigenvalues is always linearly independent (Theorem 4.24).
(9) Know that an n x n matrix with n distinct eigenvalues is always diagonalisable (Theorem 4.25).
(10) Know that the geometric multiplicity of an eigenvalue of a square matrix is always less than or equal to its algebraic multiplicity.
(11) Know and be able to use the Diagonalisation Theorem, which determines precisely when a square matrix whose characteristic polynomial factors into linear factors is diagonalisable. This holds if and only if the algebraic multiplicity is equal to the geometric multiplicity for each distinct eigenvalue. In this case a basis of R^n consisting of eigenvectors for the n x n real matrix A is given by the union of bases of the eigenspaces corresponding to the distinct eigenvalues.
(12) Compute the powers of a diagonalisable matrix A using its diagonalisation.
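Diagonalisation and its use for computing powers (items (7) and (12)) on an arbitrary 2 x 2 example, sketched with SymPy:

```python
# Sketch of Section 4.4: A = P D P^{-1} and A^k = P D^k P^{-1}.
from sympy import Matrix

A = Matrix([[4, 1],
            [2, 3]])                  # eigenvalues 2 and 5, so diagonalisable

P, D = A.diagonalize()                # columns of P are eigenvectors, D is diagonal
print(D)
print(P * D * P.inv() == A)           # reconstructs A
k = 5
print(P * D**k * P.inv() == A**k)     # A^k computed from the diagonalisation
```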

5. Orthogonality

5.1. Orthogonality in R^n.
(1) Know the definition of an orthogonal (respectively orthonormal) set in R^n and be able to recognise them.
(2) Know and be able to use the fact that an orthogonal set of non-zero vectors in R^n is linearly independent.
(3) Know the definition of, and be able to recognise, an orthogonal or orthonormal basis of R^n.
(4) If w ∈ Span(B), where B = {v_1, ..., v_k} is an orthogonal subset of non-zero vectors in R^n, find w as a linear combination of the vectors in B using the formula in Theorem 5.2. Equivalently, find the coordinates of w with respect to B.
(5) Know the definition of an orthogonal square real matrix and its equivalent descriptions in Theorems 5.4, 5.5 and 5.7. Be able to determine whether or not a matrix is orthogonal and, if so, find its inverse.
(6) Know that a square real matrix is orthogonal if and only if it preserves lengths, if and only if it preserves dot products (and hence angles) (Theorem 5.6).
(7) Know that the inverse of an orthogonal matrix is orthogonal and the product of orthogonal matrices is orthogonal. Know that orthogonal matrices have determinant ±1 and eigenvalues of complex length 1 (Theorem 5.8). Note: the length of a complex number is also called its magnitude; the complex length of a real number is its absolute value.
(8) Know how to do arithmetic with complex numbers, find complex conjugates of complex numbers, and find lengths of complex numbers. Know that the complex conjugate of a product is the product of the complex conjugates, and that the length of a product of complex numbers is the product of their lengths.
(9) Know that orthogonal 2 x 2 matrices are either matrices of rotations or of reflections, and recognise them as such.

5.2. Orthogonal Complements and Orthogonal Projections.
(1) Know the definition of the orthogonal complement of a subspace of R^n and that it is a subspace of R^n. Be able to find a basis of this subspace.
(2) Know that the null space of a matrix and the row space of a matrix are orthogonal complements, and that the column space of a matrix and the null space of its transpose are orthogonal complements (Theorem 5.10). Find bases of each of the four fundamental subspaces of a matrix A: Row(A), Col(A), Null(A) and Null(A^T).
(3) Know the properties of orthogonal complements given by Theorem 5.9.
(4) Find the orthogonal projection of a vector in R^n onto a subspace W of R^n. Note that the definition of orthogonal projection given in the text requires an orthogonal basis of W but does not depend on the choice of orthogonal basis. Find also the component of a vector orthogonal to W.
(5) Know and be able to use the Orthogonal Decomposition Theorem (Theorem 5.11). Find the orthogonal decomposition of a vector in R^n with respect to a subspace W of R^n.

5.3. The Gram-Schmidt Process and the QR Factorisation.
(1) Note that we will not cover the QR factorisation of a matrix.
(2) Construct an orthogonal basis of a subspace of R^n from an arbitrary basis via the Gram-Schmidt Process (Theorem 5.15); see the sketch after this list.
(3) Find an orthogonal basis of R^n that contains a given orthogonal subset.
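A hand-rolled Gram-Schmidt step (Theorem 5.15) and the orthogonal projection onto the subspace it spans (Section 5.2, item (4)), sketched in NumPy; the input vectors are arbitrary:

```python
# Sketch of Sections 5.2-5.3: Gram-Schmidt, then projection onto W = Span(basis).
import numpy as np

def gram_schmidt(vectors):
    """Turn a list of independent vectors into an orthogonal list spanning the same subspace."""
    basis = []
    for v in vectors:
        w = v.astype(float)
        for q in basis:
            w = w - (v @ q) / (q @ q) * q   # subtract the projection onto each earlier vector
        basis.append(w)
    return basis

def project_onto(v, orth_basis):
    """Orthogonal projection of v onto Span(orth_basis); orth_basis must be orthogonal."""
    return sum((v @ q) / (q @ q) * q for q in orth_basis)

x1, x2 = np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0])
q1, q2 = gram_schmidt([x1, x2])
v = np.array([1.0, 2.0, 3.0])
p = project_onto(v, [q1, q2])
print(q1 @ q2, p, v - p)                    # q1 . q2 = 0; v - p is orthogonal to W
```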

5.4. Orthogonal Diagonalisation of Symmetric Matrices.
(1) Know the definition of an orthogonally diagonalisable matrix.
(2) Be able to orthogonally diagonalise a real symmetric n x n matrix A. That is, find its eigenvalues and an orthonormal basis for each distinct eigenspace. Put the orthonormal basis B = {q_1, ..., q_n} of R^n consisting of eigenvectors for A into the columns of a matrix Q = [q_1, ..., q_n]. Then Q^T A Q = diag(λ_1, ..., λ_n), where A q_i = λ_i q_i. Also find its spectral decomposition A = λ_1 q_1 q_1^T + ... + λ_n q_n q_n^T. (See the sketch below.)
(3) Know that a real square matrix is orthogonally diagonalisable if and only if it is symmetric (Theorems 5.17 and 5.20).
(4) Know that if A is a real symmetric matrix, then any two eigenvectors corresponding to distinct eigenvalues are orthogonal (Theorem 5.19), and that the eigenvalues of a real symmetric matrix are real (Theorem 5.18).
(5) Use the spectral decomposition to determine a symmetric matrix with a specified set of orthogonal eigenvectors corresponding to specified eigenvalues.
(6) Use the spectral decomposition theorem to show that certain combinations of orthogonally diagonalisable matrices are orthogonally diagonalisable.
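Orthogonal diagonalisation and the spectral decomposition of item (2), sketched in NumPy for an arbitrary symmetric 2 x 2 matrix (np.linalg.eigh returns an orthonormal eigenbasis for symmetric input):

```python
# Sketch of Section 5.4: Q^T A Q diagonal for orthogonal Q, and
# A = sum_i lambda_i q_i q_i^T (the spectral decomposition).
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                  # real symmetric

eigvals, Q = np.linalg.eigh(A)              # columns of Q: orthonormal eigenvectors
D = np.diag(eigvals)
print(np.allclose(Q.T @ A @ Q, D))          # Q^T A Q is diagonal
print(np.allclose(Q @ Q.T, np.eye(2)))      # Q is orthogonal

spectral = sum(lam * np.outer(Q[:, i], Q[:, i]) for i, lam in enumerate(eigvals))
print(np.allclose(spectral, A))             # spectral decomposition recovers A
```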