January 18, 2018
Review

1. We looked at general determinant functions and proved that they are all multiples of a special one, called $\det$: $f(A) = f(I_n)\det A$.
2. We proved the product formula: $\det AB = \det A \det B$.
3. We proved that $\det A = \det A^t$.
4. We proved expansion by cofactors, along both rows and columns; that is, if we denote:
Cofactors

$$\operatorname{cof} a_{kj} = (-1)^{k+j}\det A_{kj},$$

where $A_{kj}$ is the minor obtained from $A$ by deleting row $k$ and column $j$, then

$$\det A = \sum_{j=1}^{n} a_{kj}\operatorname{cof} a_{kj} \qquad (k\text{th row expansion}),$$
$$\det A = \sum_{k=1}^{n} a_{kj}\operatorname{cof} a_{kj} \qquad (j\text{th column expansion}).$$

The Main Tools: Everything was done using Gauss-Jordan row reduction. This is what you need to really understand.
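Since the cofactor expansion is the computational heart of this section, here is a minimal sketch (my own illustration, not from the lecture) that computes a determinant by expanding along the first row, directly mirroring the $k$th row expansion with $k = 1$:

```python
import numpy as np

def det_by_cofactors(A):
    """Determinant via cofactor expansion along the first row.

    Mirrors det A = sum_j a_{1j} cof a_{1j}. Exponential time:
    fine for illustration; Gauss-Jordan is what you use in practice.
    """
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # Minor: delete row 0 and column j; the cofactor adds the sign (-1)^{0+j}.
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det_by_cofactors(minor)
    return total

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
assert np.isclose(det_by_cofactors(A), np.linalg.det(A))
```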
The Cofactor Matrix

We already noted that:

$$\det A = \sum_{j=1}^{n} a_{kj}\operatorname{cof} a_{kj} \qquad (k\text{th row expansion}),$$
$$\det A = \sum_{k=1}^{n} a_{kj}\operatorname{cof} a_{kj} \qquad (j\text{th column expansion}).$$

Note however that if we compute the sums

$$0 = \sum_{j=1}^{n} a_{lj}\operatorname{cof} a_{kj}, \qquad 0 = \sum_{k=1}^{n} a_{ki}\operatorname{cof} a_{kj}$$

for $l \neq k$, respectively $i \neq j$, then we get zero.
The reason is that:

- the first one is the determinant of the matrix $B$ obtained from $A$ by replacing row $k$ with row $l$ (so a matrix with two equal rows);
- the second one is the determinant of the matrix $B$ obtained from $A$ by replacing column $j$ with column $i$ (so a matrix with two equal columns).

If you stare at the above formulas, you see that they look like formulas for the product of two matrices, except for the wrong position of the summation index. We fix that by considering $(\operatorname{cof} A)^t$, the transpose of the cofactor matrix.
So we get the formula:

Theorem
$$A(\operatorname{cof} A)^t = (\det A)I_n = (\operatorname{cof} A)^t A.$$
The $n \times n$ matrix $A$ is invertible if and only if $\det A \neq 0$, and in this case the formula for the inverse is given by
$$A^{-1} = \frac{1}{\det A}(\operatorname{cof} A)^t.$$

Proof: We proved the hard part: that $\det A \neq 0$ implies $A$ is invertible and the inverse is given by the above formula. Conversely, if $A$ is invertible, then
$$\det A \det A^{-1} = \det(AA^{-1}) = \det I_n = 1,$$
so $\det A \neq 0$. $\square$
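A quick numerical sanity check of the theorem (my own sketch; `cofactor_matrix` is a hypothetical helper built from the minors): the off-diagonal entries of $A(\operatorname{cof} A)^t$ are exactly the "alien" cofactor sums above, so the product comes out as $(\det A)I_n$.

```python
import numpy as np

def cofactor_matrix(A):
    """(cof A)_{kj} = (-1)^{k+j} det(minor_{kj}). Illustration only."""
    n = A.shape[0]
    C = np.empty_like(A)
    for k in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, k, axis=0), j, axis=1)
            C[k, j] = (-1) ** (k + j) * np.linalg.det(minor)
    return C

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
adjugate = cofactor_matrix(A).T                      # (cof A)^t
d = np.linalg.det(A)
assert np.allclose(A @ adjugate, d * np.eye(3))      # A (cof A)^t = (det A) I_n
assert np.allclose(np.linalg.inv(A), adjugate / d)   # A^{-1} = (cof A)^t / det A
```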
Cramer's Rule

In solving linear systems $Ax = b$, if $A$ is invertible we immediately get $x = A^{-1}b$; if you plug in the formula that we just got, you get
$$x = \frac{1}{\det A}(\operatorname{cof} A)^t b.$$
But if we use column expansion, we see that $((\operatorname{cof} A)^t b)_i$ is the determinant of the matrix obtained by substituting column $i$ in $A$ with $b$. So $x_i$ is a quotient of two determinants. Read from the book!
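Cramer's rule translates directly into code; a minimal sketch (mine, not the book's), where column $i$ of $A$ is swapped for $b$:

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b by Cramer's rule: x_i = det(A_i) / det(A),
    where A_i is A with column i replaced by b."""
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b          # substitute column i with b
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([5.0, 10.0])
assert np.allclose(cramer_solve(A, b), np.linalg.solve(A, b))
```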
And finally, even though we are not going to use it, you have to see at least once the formula:

Formula for det
$$\det A = \sum_{\sigma \in S_n} (-1)^{\epsilon(\sigma)} a_{1\sigma(1)} a_{2\sigma(2)} \cdots a_{n\sigma(n)}.$$
Linear Transformations with Diagonal Matrix Representation

Recall that we get a matrix representation for the linear transformation $T : V \to W$ only after choosing a basis in $V$ and a basis in $W$. With this freedom you can check that in finite dimensions you can always find a diagonal matrix representation. That is not what we are interested in. What we want is a linear map $T : V \to V$ that has a diagonal matrix representation in a given basis of $V$. It is not hard to see that:

Theorem
Given a linear transformation $T : V \to V$, where $\dim V = n$, then $T$ has a diagonal representation if and only if there exists a basis $(e_1, \ldots, e_n)$ of $V$ such that $T(e_i) = \lambda_i e_i$, for some scalars $\lambda_i \in \mathbb{R}$ (or $\lambda_i \in \mathbb{C}$, or in general $\lambda_i \in k$).

This brings us to the following definition:
Definition
Let $V$ be a linear space, $S \subseteq V$ a linear subspace, and $T : S \to V$ a linear map. A scalar $\lambda$ is called an eigenvalue of $T$ if there exists a nonzero $v \in S$ such that $Tv = \lambda v$. Then $v$ is called an eigenvector, or more precisely an eigenvector corresponding to the eigenvalue $\lambda$.

Note: I have used the customary notation for linear maps, $Tv$ instead of $T(v)$.
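Connecting the previous theorem with this definition, a small numpy sketch (my own illustration): for a diagonalizable matrix, the columns of the eigenvector matrix form a basis $(e_1, \ldots, e_n)$ with $Ae_i = \lambda_i e_i$, and in that basis the transformation is represented by $\operatorname{diag}(\lambda_1, \ldots, \lambda_n)$.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lam, P = np.linalg.eig(A)      # eigenvalues lam_i, eigenvectors as columns of P

# Each column is an eigenvector: A e_i = lam_i e_i.
for i in range(len(lam)):
    assert np.allclose(A @ P[:, i], lam[i] * P[:, i])

# In the eigenvector basis, the matrix of T is diagonal: P^{-1} A P = diag(lam).
assert np.allclose(np.linalg.inv(P) @ A @ P, np.diag(lam))
```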
Examples

1. $0$ is an eigenvalue if and only if the null space of $T$ contains nonzero elements.
2. The identity map $I : V \to V$ has $1$ as its only eigenvalue, and every vector is an eigenvector.
3. In infinite dimensions, a linear transformation may have infinitely many eigenvalues. For example, the differentiation operator, defined on the subspace $C^1(\mathbb{R}) \subseteq C(\mathbb{R})$ with values in $C(\mathbb{R})$ by $Df = f'$, has every real number as an eigenvalue, as can be seen from $De^{\lambda x} = \lambda e^{\lambda x}$.
4. If you allow complex-valued functions, then every complex $\lambda$ is an eigenvalue for the differentiation operator.
5. The integration operator $T : C(\mathbb{R}) \to C(\mathbb{R})$ defined by
$$Tf(x) = \int_0^x f(t)\,dt$$
has no eigenvalues. Indeed, if $Tf = \lambda f$, then
$$\int_0^x f(t)\,dt = \lambda f(x),$$
and differentiating (using the Fundamental Theorem of Calculus) gives $f = \lambda f'$. If $\lambda = 0$, then $f = 0$, so $0$ is not an eigenvalue, while if $\lambda \neq 0$, then the general solution is $f(x) = ce^{x/\lambda}$; since $f(0) = 0$, this forces $c = 0$, so $f = 0$.
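Both infinite-dimensional examples can be checked symbolically; a sketch with sympy (my own, not from the lecture), taking $\lambda = 2$ for the integration operator:

```python
import sympy as sp

x, lam = sp.symbols('x lam')

# Differentiation: D e^{lam x} = lam e^{lam x}, so every lam is an eigenvalue.
assert sp.simplify(sp.diff(sp.exp(lam * x), x) - lam * sp.exp(lam * x)) == 0

# Integration with lam != 0: f = lam f' together with f(0) = 0 forces f = 0.
f = sp.Function('f')
sol = sp.dsolve(sp.Eq(f(x), 2 * f(x).diff(x)), f(x), ics={f(0): 0})
assert sol.rhs == 0
```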
Linear Independence

Theorem
Let $u_1, \ldots, u_k$ be nonzero eigenvectors of a linear transformation $T : S \to V$, $S \subseteq V$, corresponding to distinct eigenvalues $\lambda_1, \ldots, \lambda_k$. Then the eigenvectors $u_1, \ldots, u_k$ are linearly independent.

Proof: We are going to use induction on $k$. The case $k = 1$ is true since the $u_i$'s are different from zero. Assume that there are scalars $c_i$ such that
$$\sum_{i=1}^{k} c_i u_i = 0.$$
(We want to show that all of them are zero.)
Proof (cont.): Applying $T$ we also get
$$\sum_{i=1}^{k} c_i \lambda_i u_i = 0.$$
Multiplying the first equation by $\lambda_k$ and subtracting it from this one, we get
$$\sum_{i=1}^{k-1} c_i(\lambda_i - \lambda_k) u_i = 0.$$
By induction this implies that $c_i(\lambda_i - \lambda_k) = 0$ for $1 \leq i \leq k-1$, and this forces $c_i = 0$ for $1 \leq i \leq k-1$ (since $\lambda_i \neq \lambda_k$). So what remains from the first equation is $c_k u_k = 0$, which implies $c_k = 0$ (again since $u_k \neq 0$). $\square$
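A numerical illustration of the theorem (my sketch): stack eigenvectors for distinct eigenvalues as columns and confirm the resulting matrix has full rank, i.e., the eigenvectors are independent.

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [2.0, 3.0, 0.0],
              [0.0, 0.0, 7.0]])
lam, P = np.linalg.eig(A)

assert len(set(np.round(lam, 8))) == 3   # three distinct eigenvalues
# Eigenvectors for distinct eigenvalues are independent: full column rank.
assert np.linalg.matrix_rank(P) == 3
```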
Some Terminology

Note that saying that $\lambda$ is an eigenvalue for $T$ is the same as saying that the null space of the transformation $\lambda I - T$ is different from zero. The set of all eigenvectors corresponding to $\lambda$, together with $0$, is this null space. It is denoted by $E(\lambda)$ and is a linear subspace of $S$.
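The eigenspace $E(\lambda) = N(\lambda I - T)$ can be computed directly; a sketch (mine), using scipy's `null_space` helper:

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])          # eigenvalues 5 and 2

# E(5) is the null space of 5I - A; null_space returns an orthonormal basis.
E5 = null_space(5 * np.eye(2) - A)
assert E5.shape[1] == 1                 # dim E(5) = 1
assert np.allclose(A @ E5, 5 * E5)      # every basis vector is an eigenvector
```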
The preceding theorem obviously implies that if $S$ has finite dimension equal to $n$, then $T$ has at most $n$ distinct eigenvalues.

From now on $V$ will have finite dimension $n$. In this case we are going to assume that $T$ is defined on the whole space $V$.

Recall that:

1. An $n \times n$ matrix $A$ is invertible if and only if $\det A \neq 0$.
2. The linear transformation $T : V \to V$ is invertible if and only if it is $1$-$1$ (injective), that is, its null space consists only of $\{0\}$.
Characteristic Polynomial

We just proved:

Theorem
Suppose $T : V \to V$ is a linear transformation, with matrix $A$ relative to the choice of a basis $e_1, \ldots, e_n$. Then $\lambda$ is an eigenvalue of $T$ if and only if
$$\det(\lambda I_n - A) = 0.$$

Theorem
The function
$$f(\lambda) = \det(\lambda I_n - A)$$
is a polynomial of degree $n$ (in $\lambda$). It is called the characteristic polynomial of $A$. It depends only on $T$, so we are going to call it also the characteristic polynomial of $T$.
Proof:
$$\det(\lambda I_n - A) = \det \begin{pmatrix} \lambda - a_{11} & \cdots & -a_{1i} & \cdots & -a_{1n} \\ \vdots & & \vdots & & \vdots \\ -a_{i1} & \cdots & \lambda - a_{ii} & \cdots & -a_{in} \\ \vdots & & \vdots & & \vdots \\ -a_{n1} & \cdots & -a_{ni} & \cdots & \lambda - a_{nn} \end{pmatrix}.$$
It is easy to see by induction that the highest term is $\lambda^n$ and the free term is $(-1)^n \det A$.
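The leading and free terms can be confirmed numerically; a sketch (mine), using numpy's `poly`, which for a square matrix returns the coefficients of $\det(\lambda I - A)$ in decreasing powers of $\lambda$:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
coeffs = np.poly(A)      # [1, c_{n-1}, ..., c_0] for det(lam I - A)

assert np.isclose(coeffs[0], 1.0)                            # highest term lam^n
assert np.isclose(coeffs[-1], (-1) ** 3 * np.linalg.det(A))  # free term (-1)^n det A
```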
The Trace

Proof (cont.): A little bit harder is that the coefficient of $\lambda^{n-1}$ is
$$-\sum_{i=1}^{n} a_{ii}.$$
The expression $\sum_{i=1}^{n} a_{ii}$ is called the trace of $A$ (respectively, the trace of $T$), denoted $\operatorname{tr} A$ (respectively, $\operatorname{tr} T$).

If we choose another basis, then the matrix $B$ with respect to that basis will be $B = CAC^{-1}$, where $C$ corresponds to the change of basis. So
$$\det(\lambda I_n - B) = \det[C(\lambda I_n - A)C^{-1}] = \det(\lambda I_n - A).$$
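Both remaining claims, the $\lambda^{n-1}$ coefficient being $-\operatorname{tr} A$ and the invariance of the characteristic polynomial under change of basis, are easy to check numerically (my sketch; the matrix $C$ is an arbitrary invertible example):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
coeffs = np.poly(A)                          # det(lam I - A), decreasing powers
assert np.isclose(coeffs[1], -np.trace(A))   # coefficient of lam^{n-1} is -tr A

# Change of basis: B = C A C^{-1} has the same characteristic polynomial.
C = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
B = C @ A @ np.linalg.inv(C)
assert np.allclose(np.poly(B), coeffs)
```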