
MA322 Sathaye Final Preparations Spring 2017

The final MA 322 exams will be given as described on the course web site (following the Registrar's listing). You should check and verify that you do not have a conflict with other final exams.

The questions on the final exam will be on Chapters 4, 5, 6. Of course, all the calculations depend on the skills learned in the first three chapters, so you need to review them as well. Expect about six questions based on the three chapters. You should review the old exams, WHS problems, daily quizzes and all the definitions. Important formulas should also be memorized to avoid serious mistakes caused by little flaws in recalling them.

1. Early Chapters with enhancements.

These topics involve the exam 1 material as well as its enhancement in the later periods.

Know what linear systems of equations are and how to relate them to augmented matrices. You should be able to carry out the REF/RREF calculations on a given augmented matrix without fail. Be sure to be able to handle matrices with variable entries. Be prepared to write out the complete solution to a given linear system (after reduction to REF or RREF), including the notions of X_h, the homogeneous solution, and X_p, a particular solution.

Study the vector space interpretations of the solution process. In particular, given a system (A | B), its solution is used to determine bases (and hence dimensions) of the spaces Col(A), Nul(A). The solution process also gives information about independence/dependence of column vectors and consistency (solvability) of linear systems. Review the 0, 1, ∞ principle: a linear system has no solution, exactly one, or infinitely many.

Study the consistency matrix obtained by REF of (A | I) and its uses for consistency of equations. It can also be effectively used to calculate Nul(A) more naturally. It is also used to find the orthogonal complement of Col(A) and helps in the orthogonal projections. Review and practice matrix calculations.
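The RREF reduction and the X_p + X_h description of solutions can be sketched in a few lines of Python. This is a minimal illustration using exact rational arithmetic; the function name `rref` and the example system are my own, not course notation.

```python
from fractions import Fraction

def rref(M):
    """Reduce a matrix (list of rows) to reduced row echelon form."""
    M = [[Fraction(x) for x in row] for row in M]
    nrows, ncols = len(M), len(M[0])
    pivot_cols, r = [], 0
    for c in range(ncols):
        piv = next((i for i in range(r, nrows) if M[i][c] != 0), None)
        if piv is None:
            continue                          # no pivot: column c is free
        M[r], M[piv] = M[piv], M[r]           # move the pivot row up
        M[r] = [x / M[r][c] for x in M[r]]    # scale the pivot to 1
        for i in range(nrows):
            if i != r and M[i][c] != 0:       # clear the rest of the column
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        pivot_cols.append(c)
        r += 1
        if r == nrows:
            break
    return M, pivot_cols

# augmented matrix (A | B) for x + y + z = 3, 2x + y - z = 1
R, pivots = rref([[1, 1, 1, 3], [2, 1, -1, 1]])
# R == [[1, 0, -2, -2], [0, 1, 3, 5]]: x and y are basic, z is free,
# so X = X_p + z * X_h with X_p = (-2, 5, 0) and X_h = (2, -3, 1)
```

Reading the solution off the RREF in exactly this way (pivot columns give basic variables, the rest are free) is the skill being tested.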
It is crucial to study the relation between row transformations and matrix multiplication (and indeed the voodoo principle in general). Be sure to study the calculation of inverses, especially for small sizes and possibly including variable entries.
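For the smallest case, the inverse can be written down directly from the determinant and the adjugate; here is a 2×2 sketch (the function name and example matrix are mine):

```python
from fractions import Fraction

def inverse_2x2(A):
    """A^-1 = Adj(A)/det(A) for A = [[a, b], [c, d]], with Adj(A) = [[d, -b], [-c, a]]."""
    (a, b), (c, d) = A
    det = Fraction(a * d - b * c)
    if det == 0:
        raise ValueError("matrix is singular")
    return [[d / det, -b / det], [-c / det, a / det]]

Ainv = inverse_2x2([[2, 1], [5, 3]])   # det = 1, so Ainv == Adj(A) == [[3, -1], [-5, 2]]
```

The same formula works with variable entries, which is why it is worth memorizing alongside the row-reduction method.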

Study formulas for (PQ)^-1 and (PQ)^T. Don't forget how to solve (A | B) when A is invertible. Remember Adj(M) and its uses.

Study determinants and techniques to determine their values using Laplace expansions and row/column transformations. Learn how to recognize (when possible) whether the determinant is zero. Study the determinantal rank of a matrix and remember that it is equal to the usual rank of the matrix. In fact, the rank of a matrix has three equivalent definitions: the number of pivots in REF, the size of the largest nonzero subdeterminant, and the dimension of the column space. Be sure to use whichever is convenient in a problem.

2. Chapter 4.

This covers abstract Vector Spaces, but includes earlier and later ideas when relevant.

Define R^n to be the set of all column matrices with n real entries. Given any real matrix A = A_{n×m}, the space Col(A) is a subspace of R^n. Equations represented by (A | B) correspond to determining whether B is in Col(A). The space Nul(A) is the space of all vectors perpendicular to the row space of A, denoted Row(A) = Col(A^T).

Inspired by these connections, we define an abstract vector space V over a field K, which has a valid scalar multiplication (by K) and addition. Be sure to review various examples of vector spaces, including polynomials, power series, function spaces, etc. The concept of rank of a matrix gets transformed to the dimension of a vector space. Be aware that (unlike a matrix) a vector space may have finite or infinite dimension.

Carefully study the concepts of linear dependence/independence of vectors and a basis of a given vector space. Don't forget that when V has finite dimension n, any basis has exactly n elements. Moreover, any independent set of vectors has at most n elements, and any set of more than n elements is necessarily dependent. Given any basis B = (v_1 v_2 ... v_n) of a vector space V, every vector in V can be written uniquely as BX, where X is a column of scalars.
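As a numeric sketch of this unique representation in R^2 (the helper name, basis and vector are my own choices), the column X is found by solving the 2×2 system, here via Cramer's rule:

```python
from fractions import Fraction

def coords_in_basis_2d(b1, b2, v):
    """Coordinate column X of v in the basis (b1, b2) of R^2, by Cramer's rule."""
    det = b1[0] * b2[1] - b2[0] * b1[1]
    x = Fraction(v[0] * b2[1] - b2[0] * v[1], det)
    y = Fraction(b1[0] * v[1] - v[0] * b1[1], det)
    return (x, y)

# v = (3, 1) in the basis B = ((1, 1), (1, -1)):
X = coords_in_basis_2d((1, 1), (1, -1), (3, 1))
# X == (2, 1), since 2*(1, 1) + 1*(1, -1) = (3, 1)
```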
We define the coordinate vector of v in basis B as [v]_B and note that v = B[v]_B is valid for any basis of the vector space V.

We study subspaces and their properties. Given a vector space V = Span(B), a subspace of V is a non-empty subset which is closed under the same operations as in V. In general, a subspace is a span of a non-empty subset and can have any dimension between 0 and dim(V).

We now reinterpret our equation AX = B as a map which sends a vector X to a new vector AX. This gives rise to the concept of a linear transformation from a vector space V to a vector space W. The study of when the linear transformation is injective and when it is surjective follows. If we choose a basis B = (v_1 v_2 ... v_n) for V and a basis C = (w_1 w_2 ... w_m) for W, then any linear transformation T: V -> W is given by a matrix A so that T(BX) = CAX. The matrix A is equal to ([T(v_1)]_C [T(v_2)]_C ... [T(v_n)]_C). It can be suggestively written as T_{B,C}, the matrix of the transformation T from V = Span(B) to W = Span(C). This leads to a reinterpretation of properties of the transformation T in terms of the matrix T_{B,C}. Study the details.

Two fundamental formulas about dimensions must be remembered. Given a linear transformation T: V -> W where dim(V) is finite, we have dim(V) = dim(Ker(T)) + dim(Image(T)). This generalizes the calculation for a matrix A = A_{m×n}: we have n = rank(A) + the number of free variables when we solve AX = B. The second and equivalent formula is dim(W_1 + W_2) + dim(W_1 ∩ W_2) = dim(W_1) + dim(W_2) for any two finite dimensional subspaces W_1, W_2 of V.

3. Remaining Chapters.

We begin by revisiting the effect of change of basis on the coordinate vectors. If B, C are two bases of a vector space V, then for every v in V we have v = B[v]_B = C[v]_C, i.e. C[B]_C[v]_B = C[v]_C, or [v]_C = [B]_C[v]_B. Thus the change of coordinates is given by multiplication by [B]_C. Note that [B]_C[C]_B = I, i.e. [B]_C = ([C]_B)^-1.

If T is a linear transformation from V to V and has a matrix A with respect to a basis B and M with respect to a basis C, then we deduce that M = [B]_C A [C]_B. If we denote the matrix [B]_C by P, then the formula is M = PAP^-1. A matrix M is said to be similar to A if we can write M = PAP^-1 for some P.
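Similar matrices share the trace and determinant (indeed the whole characteristic polynomial), which gives a quick sanity check on any change-of-basis computation. A numeric sketch with matrices of my own choosing (the helper names are mine):

```python
from fractions import Fraction

def matmul2(A, B):
    """Product of two 2x2 matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def inv2(P):
    """Inverse of an invertible 2x2 matrix via the adjugate."""
    (a, b), (c, d) = P
    det = Fraction(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[2, 1], [0, 3]]
P = [[1, 1], [1, 2]]                   # any invertible matrix will do
M = matmul2(matmul2(P, A), inv2(P))    # M = P A P^-1, so M is similar to A

def trace(X): return X[0][0] + X[1][1]
def det(X): return X[0][0] * X[1][1] - X[0][1] * X[1][0]
# trace(M) == trace(A) and det(M) == det(A), even though M looks quite different
```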
Note that the concept applies only to square matrices and requires P to be invertible.

Diagonalization. We investigate matrices A which are similar to diagonal matrices D. Such a matrix A is said to be diagonalizable.
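A worked 2×2 diagonalization (the matrix, numbers, and helper names are my own): read the eigenvalues off the characteristic polynomial, assemble P from eigenvectors and D from eigenvalues, and use A^m = PD^mP^-1 for powers.

```python
from fractions import Fraction

def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def inv2(P):
    (a, b), (c, d) = P
    det = Fraction(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[4, 1], [2, 3]]
# det(A - t*I) = t^2 - 7t + 10 = (t - 5)(t - 2), so the eigenvalues are 5 and 2;
# solving (A - 5I)v = 0 and (A - 2I)v = 0 gives eigenvectors (1, 1) and (1, -2).
P = [[1, 1], [1, -2]]   # columns are the eigenvectors
D = [[5, 0], [0, 2]]    # matching eigenvalues on the diagonal

def power(m):
    """A^m = P D^m P^-1; D^m is just the diagonal entries raised to the m-th power."""
    Dm = [[5 ** m, 0], [0, 2 ** m]]
    return matmul2(matmul2(P, Dm), inv2(P))
```

With m = 1 this reproduces A itself, which is a useful check that P and D were assembled correctly.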

Note that A = PDP^-1 for some invertible P. Moreover, AP = PD. If v_i is the i-th column of P, then Av_i = d_i v_i, where d_i is the i-th entry on the diagonal of D. This gives rise to the definition of an eigenvector v of A, which satisfies Av = λv for some scalar λ, with v ≠ 0. λ is then an eigenvalue belonging to v, and the eigenspace of λ is the space of all v such that Av = λv.

To find eigenvalues of A, we define the characteristic polynomial of A by P(λ) = det(A - λI). This leads to a simple test of diagonalizability: A = A_{n×n} is diagonalizable iff R^n has a basis of eigenvectors of A, iff the sum of the dimensions of the different eigenspaces of A equals n (this sum is also known as eigendim(A)). One easy sufficient condition is when A has n distinct eigenvalues.

Transformations defined by diagonalizable matrices are easier to understand and describe. We note that A^m = PD^mP^-1, and hence powers of A are easy to calculate. In particular, lim_{m -> ∞} A^m v can be determined easily. It is important to be able to prove when A is not diagonalizable. In particular, when the eigenvalues are not all real, the matrix is not diagonalizable over the reals. For a 2×2 matrix A, study the three distinct types of matrices which can be similar to A, namely diagonal, triangular with repeated entries along the diagonal, and rotation-type matrices (complex eigenvalues).

Inner Products. We discussed a generalized version of dot products in vector spaces. This gives different notions of perpendicularity and angles in general vector spaces and leads to practical applications. Study the examples of inner products defined in terms of integrals (in function spaces) and by a direct inner product matrix relative to a basis. You should be able to do calculations similar to the dot products (length, distance, angle, etc.) with inner products.

Normal equations and best fit. We started with solving AX = B and discovered when the system is inconsistent.
We finish the discussion by showing that there is always a best fit solution, namely a vector X such that AX - B has as small a norm as possible in the chosen inner product. This can be shown to be the solution of the normal equations A^T AX = A^T B. If we use a general inner product, then this becomes <A, A>X = <A, B>.
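A small sketch of a best-fit computation: fitting a line y = c0 + c1*x to three data points of my own choosing by assembling and solving the normal equations A^T AX = A^T B directly.

```python
from fractions import Fraction

# data points (x_i, y_i); fit y = c0 + c1*x
pts = [(0, 1), (1, 2), (2, 4)]
A = [(1, x) for x, _ in pts]   # design matrix, rows (1, x_i)
b = [y for _, y in pts]

# assemble the 2x2 normal equations A^T A c = A^T b
AtA = [[sum(r[i] * r[j] for r in A) for j in range(2)] for i in range(2)]
Atb = [sum(r[i] * yi for r, yi in zip(A, b)) for i in range(2)]

# solve the 2x2 system by Cramer's rule
det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
c0 = Fraction(Atb[0] * AtA[1][1] - AtA[0][1] * Atb[1], det)
c1 = Fraction(AtA[0][0] * Atb[1] - Atb[0] * AtA[1][0], det)
# best-fit line: y = 5/6 + (3/2) x
```

The same assembly works for any number of data points; only the sums grow.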

We show that the normal equations are always consistent and the solution X gives the desired best fit. Learn how to do the best fit problems for a given set of data points.

Projections. The best fit solution X of the equation AX = B gives a projection of B into the column space of A. If we consider the linear transformation which sends B to AX, it is easy to show that B - AX is orthogonal to all vectors in Col(A). This gives us the idea of orthogonal projection with respect to the generalized inner product.

In case V = R^n and (w_1, ..., w_r) is a not necessarily orthogonal basis of a subspace W, the projection matrix is found thus: let A be the matrix with columns w_1, ..., w_r and let P = A(A^T A)^-1 A^T. Then P is the matrix of the orthogonal projection into W, i.e. the projection of any v is given by Pv.

In case we have a vector space V and (w_1, ..., w_r) is an orthogonal basis of a subspace W, the projection formula is much simpler: P(v) = c_1 w_1 + ... + c_r w_r, where c_i = <v, w_i> / <w_i, w_i> for all i. You should derive the matrix of this transformation using some basis of V.

It is naturally useful to have a procedure to transform a basis of W into an orthogonal basis. This is called the Gram-Schmidt algorithm. Learn the simpler version taught in class and notes.
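The Gram-Schmidt process and the orthogonal-basis projection formula can be sketched together; the function names, vectors, and the standard dot product here are my own choices, not class notation.

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    """Orthogonalize a list of independent vectors (no normalization)."""
    ortho = []
    for v in vectors:
        w = list(map(Fraction, v))
        for u in ortho:
            c = dot(w, u) / dot(u, u)   # coefficient <w, u>/<u, u>
            w = [wi - c * ui for wi, ui in zip(w, u)]
        ortho.append(w)
    return ortho

def project(v, ortho_basis):
    """Orthogonal projection of v onto Span(ortho_basis): sum of c_i * w_i."""
    p = [Fraction(0)] * len(v)
    for u in ortho_basis:
        c = dot(v, u) / dot(u, u)
        p = [pi + c * ui for pi, ui in zip(p, u)]
    return p

W = gram_schmidt([(1, 1, 0), (1, 0, 1)])   # W[0] and W[1] are orthogonal
p = project((1, 2, 3), W)
# the residual (1, 2, 3) - p is orthogonal to both original basis vectors
```

Checking that the residual v - P(v) is orthogonal to every basis vector of W is exactly the defining property of the orthogonal projection.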