Determinant and Trace

Area and mappings from the plane to itself: Recall that in the last set of notes we found a linear mapping taking the unit square S = {0 ≤ x, y ≤ 1} to any parallelogram P with one corner at the origin. We can write the parallelogram P as P = {x v + y w : 0 ≤ x, y ≤ 1}, where v and w are the two vectors which form the edges of P starting at the origin (0, 0). Then we can write the linear transformation as

T \begin{pmatrix} x \\ y \end{pmatrix} = x v + y w,    [T] = \begin{pmatrix} v_1 & w_1 \\ v_2 & w_2 \end{pmatrix},

where v = (v_1, v_2) and w = (w_1, w_2) in components. Notice that the mapping T is invertible precisely when it does not collapse S down to a line segment (or a point), which happens precisely when the area of the parallelogram P is non-zero. Also, recall that we said T is invertible precisely when its determinant det([T]) = v_1 w_2 − v_2 w_1 is nonzero. We have

det([T]) ≠ 0  ⟺  T invertible  ⟺  Area(P) ≠ 0.

We'll see next that det([T]) is the area of P, up to a sign. This is easiest to see with the shear map we examined in the last set of notes. Start with the shear map T whose matrix representation is

[T] = \begin{pmatrix} 1 & b \\ 0 & 1 \end{pmatrix}.

(In the earlier set of notes we wrote the entry in the upper right corner of [T] as a, but it will turn out to be convenient to call it b for our later discussion.) In this case, T maps the unit square S = {0 ≤ x, y ≤ 1} to the parallelogram P spanned by the two vectors v = (1, 0) and w = (b, 1); in other words,

T(S) = P = {x v + y w : 0 ≤ x, y ≤ 1} = { x \begin{pmatrix} 1 \\ 0 \end{pmatrix} + y \begin{pmatrix} b \\ 1 \end{pmatrix} : 0 ≤ x, y ≤ 1 }.

We reproduce a picture here:

[Figure: the unit square S, with Area = 1, and its image under the shear, the parallelogram P, also with Area = 1.]

We already know that the unit square S has area 1, but let's see that P also has area 1. The area of a parallelogram is equal to its base times its height, and the height and base of P are both 1, so the area of P is 1 · 1 = 1. On the other hand,

det([T]) = det \begin{pmatrix} 1 & b \\ 0 & 1 \end{pmatrix} = 1 · 1 − b · 0 = 1 = Area(P).
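
If you want to check this kind of computation numerically, here is a minimal Python sketch (it assumes numpy is available; the shear value b = 3.0 is an arbitrary illustrative choice):

    # For the shear [T] = [[1, b], [0, 1]], the image of the unit square is the
    # parallelogram spanned by the columns v = (1, 0) and w = (b, 1).
    import numpy as np

    b = 3.0                          # arbitrary shear parameter
    T = np.array([[1.0, b],
                  [0.0, 1.0]])
    v, w = T[:, 0], T[:, 1]          # edges of the parallelogram P = T(S)

    base = np.linalg.norm(v)
    # Height = length of the component of w perpendicular to v.
    height = np.linalg.norm(w - (w @ v) / (v @ v) * v)
    print(base * height)             # 1.0 = Area(P) = base * height
    print(abs(np.linalg.det(T)))     # 1.0, matching Area(P)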

Now we can rescale the shear T by a in the horizontal direction and by d in the vertical direction, to have something more general. This time we have

[T] = \begin{pmatrix} a & b \\ 0 & d \end{pmatrix},    and    T(S) = P = {x v + y w : 0 ≤ x, y ≤ 1} = { x \begin{pmatrix} a \\ 0 \end{pmatrix} + y \begin{pmatrix} b \\ d \end{pmatrix} : 0 ≤ x, y ≤ 1 },

and the picture looks like

[Figure: the unit square S, with Area = 1, and its image, a parallelogram P with base a and height d, so Area = ad.]

(In this particular picture a = 1/2 and d = 2, but this choice of scaling factors is not important.) This time the height of the parallelogram P is d while its base is a, so Area(P) = base × height = ad. Again, we have

|det([T])| = | det \begin{pmatrix} a & b \\ 0 & d \end{pmatrix} | = |a d − b · 0| = |a d| = Area(P).

Notice that the absolute value here is necessary, because a and d could have opposite signs.

Now that we know |det([T])| gives the area of the image of the unit square if [T] = \begin{pmatrix} a & b \\ 0 & d \end{pmatrix}, it's not too hard to see this is true for any linear map. We'll first need a technical fact.

Exercise: Prove det(AB) = det(A) det(B) for 2 × 2 matrices A and B. (Really, just multiply it out.) Notice that this means det(AB) = det(BA) for any pair of 2 × 2 matrices.

Exercise: Show that for any angle θ we have

det([R_θ]) = det \begin{pmatrix} cos θ & −sin θ \\ sin θ & cos θ \end{pmatrix} = 1.

Now let T : R^2 → R^2 be a linear mapping of the plane to itself, and suppose

[T] = \begin{pmatrix} a & b \\ c & d \end{pmatrix}.

This means T(e_1) = (a, c) and T(e_2) = (b, d), where e_1 = (1, 0) and e_2 = (0, 1) as before. Now, the vector T(e_1) = (a, c) makes some angle θ with the positive x-axis, so we apply the rotation R_{−θ} to T to get a new mapping

T̃ = R_{−θ} ∘ T,    [T̃] = [R_{−θ}][T] = \begin{pmatrix} cos θ & sin θ \\ −sin θ & cos θ \end{pmatrix} \begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} \tilde{a} & \tilde{b} \\ 0 & \tilde{d} \end{pmatrix},

and

det([T̃]) = det([R_{−θ}][T]) = det([R_{−θ}]) det([T]) = det([T]).

By the computation we did above, Area(P̃) = |det([T̃])|. We also have that T sends the unit square S to a parallelogram P, and T̃ sends S to a parallelogram P̃. These two parallelograms P and P̃ differ by a rotation, so they have the same area. Thus we see

Area(P) = Area(P̃) = |det([T̃])| = |det([T])|.

In particular, we have just proven that det([T]) ≠ 0 precisely when T is invertible, because this is precisely when the image parallelogram P has nonzero area.
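
Here is a small Python sketch of this rotation argument (numpy assumed; the particular matrix below is just an arbitrary choice of a, b, c, d). It rotates T(e_1) onto the positive x-axis and checks that the result is upper triangular with the same determinant:

    # Rotate T(e_1) onto the positive x-axis and check that the determinant
    # is unchanged: det([R][T]) = det([R]) det([T]) = det([T]).
    import numpy as np

    T = np.array([[2.0, 1.0],
                  [3.0, 4.0]])                        # an arbitrary [a b; c d]
    a, c = T[0, 0], T[1, 0]
    theta = np.arctan2(c, a)                          # angle of T(e_1) with the x-axis
    R = np.array([[ np.cos(theta), np.sin(theta)],
                  [-np.sin(theta), np.cos(theta)]])   # rotation by -theta

    T_tilde = R @ T
    print(np.round(T_tilde, 10))                      # lower-left entry is 0: upper triangular
    print(np.linalg.det(T), np.linalg.det(T_tilde))   # both approximately 5.0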

Orientation and the sign of the determinant: As we saw in the previous notes, there are actually two linear transformations which map the unit square S onto this parallelogram P; we can also have

T \begin{pmatrix} x \\ y \end{pmatrix} = x \begin{pmatrix} b \\ d \end{pmatrix} + y \begin{pmatrix} a \\ 0 \end{pmatrix},    [T] = \begin{pmatrix} b & a \\ d & 0 \end{pmatrix}.

In this case we see that

det([T]) = det \begin{pmatrix} b & a \\ d & 0 \end{pmatrix} = b · 0 − a d = −a d = −Area(P).

Why do we have the minus sign? To understand what's going on, it will help to label the corners of the unit square S and the parallelogram P as in the picture below.

[Figure: the corners of the unit square S are labeled i, ii, iii, iv going counter-clockwise, and the corresponding corners of the parallelogram P are labeled i, ii, iii, iv.]

What does this labeling mean? The mapping T sends the vector e_1, which goes from i to ii in the square on the left, to the vector (b, d), which also goes from i to ii in the parallelogram on the right. Similarly, the mapping T sends the vector e_2, which goes from i to iv in the square on the left, to the vector (a, 0), which also goes from i to iv in the parallelogram on the right. Now, if we follow the labeling of the corners of the square in order, as in i to ii to iii to iv, then we traverse along the boundary of the square counter-clockwise. However, if we follow the labeling of the corners of the parallelogram in order, as in i to ii to iii to iv, we traverse along the boundary of the parallelogram clockwise. This means the mapping T reverses the direction we traverse along the boundary of the shape. In other words, T reverses the orientation. We have discovered the following general principle:

det([T]) < 0  ⟺  T reverses orientation.

This principle is exactly why we wrote |det([T])| = Area(P) before. In general, if T : R^2 → R^2 preserves orientation then det([T]) = Area(P), but if T reverses orientation then det([T]) = −Area(P).
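
The sign behavior is easy to see numerically as well. This Python sketch (numpy assumed; signed_area is a helper written here for illustration) compares the two maps onto the same parallelogram and uses the shoelace formula to see which way the image corners i, ii, iii, iv wind:

    # Two maps with the same image parallelogram but opposite orientation.
    import numpy as np

    a, b, d = 0.5, 1.0, 2.0
    T1 = np.array([[a, b],
                   [0.0, d]])        # e_1 -> (a, 0), e_2 -> (b, d)
    T2 = np.array([[b, a],
                   [d, 0.0]])        # e_1 -> (b, d), e_2 -> (a, 0): columns swapped

    print(np.linalg.det(T1))         #  1.0 = +Area(P), orientation preserved
    print(np.linalg.det(T2))         # -1.0 = -Area(P), orientation reversed

    # Signed area of a polygon via the shoelace formula:
    # positive means the corners are listed counter-clockwise, negative clockwise.
    def signed_area(pts):
        x, y = pts[:, 0], pts[:, 1]
        return 0.5 * np.sum(x * np.roll(y, -1) - y * np.roll(x, -1))

    square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)  # corners i, ii, iii, iv
    print(signed_area(square @ T1.T))   #  1.0: image corners still counter-clockwise
    print(signed_area(square @ T2.T))   # -1.0: image corners now run clockwise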

Higher dimensions: So far we've seen that the determinant of a 2 × 2 matrix is the area (up to a sign) of the parallelogram which is the image of the unit square. In fact, a similar thing is true in higher dimensions. Let [T] be an n × n matrix, which we've seen corresponds to a linear map T : R^n → R^n. Then T sends the unit cube S = {0 ≤ x_i ≤ 1 : i = 1, 2, ..., n} to a parallelepiped P, which is spanned by the columns of [T]. Then

|det([T])| = Vol(P),

where Vol gives the n-dimensional volume. We begin with a quick illustrative example. Consider

[T] = \begin{pmatrix} a & b & 0 \\ c & d & 0 \\ 0 & 0 & e \end{pmatrix},    e > 0.

Then the image of the unit cube S under T is

P = {(x, y, z) : (x, y) ∈ P̃, 0 ≤ z ≤ e},

where

[T̃] = \begin{pmatrix} a & b \\ c & d \end{pmatrix},    S̃ = {0 ≤ x, y ≤ 1},    P̃ = T̃(S̃).

By slicing P with horizontal slices, we see

Vol(P) = e · Area(P̃) = e · |det([T̃])|.

So, by any reasonable definition of the determinant for 3 × 3 matrices which fits with our definition for 2 × 2 matrices, we must have det([T]) = e · det([T̃]) = e(ad − bc).

Exercise: Let [T] be a 3 × 3 matrix. Show that you can always perform a rotation to make the last column of [T] into \begin{pmatrix} 0 \\ 0 \\ e \end{pmatrix}. (Hint: what are the columns of [T]?)

At this point, we can write down a reasonable formula for the determinant of a 3 × 3 matrix. Let

[T] = \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix},

then

det([T]) = c · det \begin{pmatrix} d & e \\ g & h \end{pmatrix} − f · det \begin{pmatrix} a & b \\ g & h \end{pmatrix} + i · det \begin{pmatrix} a & b \\ d & e \end{pmatrix}.

Here we've singled out the last column, but we can do the same thing by picking out any row or column. To do this properly, we need some notation. Let [T] = [A_ij], so that the entry of [T] in the ith row, jth column is A_ij. Also, let [T̃_ij] be the 2 × 2 matrix you get from [T] by crossing out the ith row and jth column. Then for any choice of j = 1, 2, 3 we can write

det([T]) = (−1)^{1+j} A_{1j} det([T̃_{1j}]) + (−1)^{2+j} A_{2j} det([T̃_{2j}]) + (−1)^{3+j} A_{3j} det([T̃_{3j}]),

which computes det([T]) by expanding along the jth column. Alternatively, for any choice of i = 1, 2, 3 we can write

det([T]) = (−1)^{i+1} A_{i1} det([T̃_{i1}]) + (−1)^{i+2} A_{i2} det([T̃_{i2}]) + (−1)^{i+3} A_{i3} det([T̃_{i3}]),

which computes det([T]) by expanding along the ith row.

The same idea will compute the determinant of any square matrix inductively. That is, you write the determinant of an n × n matrix as a sum of determinants of (n − 1) × (n − 1) matrices, as in the short sketch below.
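
This recursive description translates directly into code. The following Python sketch is a plain implementation of the cofactor expansion along the first row, written for illustration rather than efficiency (it takes on the order of n! operations):

    # Cofactor (Laplace) expansion along the first row:
    # det([T]) = sum over j of (-1)^(1+j) A_{1j} det([T~_{1j}]).
    def det(A):
        n = len(A)
        if n == 1:
            return A[0][0]
        total = 0
        for j in range(n):
            # Minor: cross out the first row and the jth column (0-indexed here).
            minor = [row[:j] + row[j + 1:] for row in A[1:]]
            total += (-1) ** j * A[0][j] * det(minor)
        return total

    # Quick checks against the 2 x 2 formula ad - bc and a 3 x 3 example.
    print(det([[1, 2], [3, 4]]))                     # -2
    print(det([[2, 0, 1], [1, 3, 0], [0, 1, 4]]))    # 2*12 - 0 + 1*1 = 25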

We write the general formula as follows. Again, we let A_ij be the entry of [T] in the ith row, jth column, and we let [T̃_ij] be the (n − 1) × (n − 1) matrix you get from [T] by crossing out the ith row and the jth column. Then for any choice of j = 1, 2, ..., n we compute det([T]) by expanding along the jth column using the formula

det([T]) = (−1)^{1+j} A_{1j} det([T̃_{1j}]) + (−1)^{2+j} A_{2j} det([T̃_{2j}]) + ··· + (−1)^{n+j} A_{nj} det([T̃_{nj}]).

Alternatively, for any choice of i = 1, 2, ..., n we compute det([T]) by expanding along the ith row using the formula

det([T]) = (−1)^{i+1} A_{i1} det([T̃_{i1}]) + (−1)^{i+2} A_{i2} det([T̃_{i2}]) + ··· + (−1)^{i+n} A_{in} det([T̃_{in}]).

We summarize some important properties of the determinant here.

1. The determinant is linear in each row and column. That is, if A is an n × n matrix and Ã is the same as A except that you multiply the ith row by c, then det(Ã) = c det(A). Also, if A_1 and A_2 are the same except at the ith row and A is what you get by adding together the ith rows of A_1 and A_2, then det(A) = det(A_1) + det(A_2). The same goes for columns.

2. Consequently, if A is an n × n matrix and c is a number then det(cA) = c^n det(A).

3. An n × n matrix A is invertible if and only if det(A) ≠ 0.

4. In fact, |det(A)| is the n-dimensional volume of the parallelepiped P which is the image of the unit cube S = {0 ≤ x_1, ..., x_n ≤ 1} under the linear transformation associated to A. (You can prove this by induction, in a very similar way to how we got the geometric interpretation for three dimensions from the two-dimensional version.)

5. Let A and B be n × n matrices; then det(AB) = det(A) det(B).

6. Let A be an n × n matrix and let Ã be the matrix you get by swapping two adjacent rows of A (or by swapping two adjacent columns). Then det(Ã) = −det(A).

Trace: Another important number associated to an n × n matrix is its trace. We let [T] be an n × n matrix with A_ij being the entry in the ith row, jth column. Then the trace of [T] is given by

tr([T]) = A_11 + A_22 + ··· + A_nn,

the sum of the entries of [T] on the diagonal running from the top left of [T] to its bottom right. Later on, we'll see that under some conditions (for instance, if A_ij = A_ji) the trace measures an average distortion of T as a mapping. That is, if the trace is close to n then T doesn't distort distances too much, but if the trace is very different from n then it distorts distances a lot. Remember that this guideline only holds if T is symmetric, that is if A_ij = A_ji.

The trace satisfies the following properties:

1. If I is the n × n identity matrix then tr(I) = n.

2. If A and B are square matrices and c is a number then

tr(A + B) = tr(A) + tr(B),    tr(cA) = c tr(A),    tr(AB) = tr(BA).

Some special matrices: We close with some special types of square matrices. Let [T] be a square matrix, with entries A_ij in the ith row, jth column as above. We say [T] is diagonal if A_ij = 0 for i ≠ j. We say [T] is upper triangular if A_ij = 0 for i > j, and that [T] is lower triangular if A_ij = 0 for i < j.

Exercise: Why is a diagonal matrix called diagonal? How about upper (or lower) triangular matrices?

Exercise: Let [T] be diagonal, and show that

det([T]) = A_11 A_22 ··· A_nn,    tr([T]) = A_11 + A_22 + ··· + A_nn.

Exercise: Show that the same formulas hold for upper triangular and lower triangular matrices.
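
These properties are easy to spot-check numerically. A minimal Python sketch (numpy assumed; the random 4 × 4 matrices and the seed are arbitrary choices) checking several of the determinant and trace properties above, as well as the triangular-matrix exercises:

    # Spot-check determinant and trace properties on random matrices.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 4
    A = rng.standard_normal((n, n))
    B = rng.standard_normal((n, n))
    c = 2.5

    print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))  # True
    print(np.isclose(np.linalg.det(c * A), c**n * np.linalg.det(A)))              # True
    print(np.isclose(np.trace(A + B), np.trace(A) + np.trace(B)))                 # True
    print(np.isclose(np.trace(A @ B), np.trace(B @ A)))                           # True

    # For an upper triangular matrix, det is the product of the diagonal
    # entries and tr is their sum.
    U = np.triu(A)
    print(np.isclose(np.linalg.det(U), np.prod(np.diag(U))))   # True
    print(np.isclose(np.trace(U), np.sum(np.diag(U))))         # True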