More Chapter 3: linear dependence and independence of vectors

It is important to determine whether a set of vectors is linearly dependent or independent. Consider a set of vectors A, B, and C. If we can find a, b, and c such that
a A + b B + c C = 0,
then this set of vectors is linearly dependent. More generally, if we have N vectors v_n and we can find a set of coefficients c_n with
\sum_{n=1}^{N} c_n v_n = 0,
then the vectors are linearly dependent. (Note that we don't count the case where c_n = 0 for all n.)
Linear dependence and independence continued, and homogeneous equations

For example, think of vectors A, B, and C in 3 dimensions that all lie in the same plane. Since we only need two vectors to define a plane, these vectors must be linearly dependent. We can take the condition \sum_{n=1}^{N} c_n v_n = 0 and write a matrix A whose columns are the vectors v_n, and a column vector c whose elements are the c_n. Then we solve the homogeneous equation
A c = 0.
Homogeneous equations, continued

The trivial solution c = 0 always exists. If we have n equations and n unknowns, a non-trivial solution exists only if the number of linearly independent equations (rows) is less than the number of unknowns. Think of row reduction: at least one row must reduce to zero. In elementary row reduction, adding a multiple of one row to another does not change the determinant, so if we have n equations and n unknowns, a non-trivial solution exists only if det A = 0. If we have more vectors than the dimension of the vectors, they are always dependent.
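As a quick numerical check of this determinant criterion, here is a short sketch in Python with NumPy. The three vectors are illustrative (not from the text): they all lie in the z = 0 plane, so det A = 0 and a non-trivial solution of A c = 0 must exist.

```python
import numpy as np

# Three illustrative coplanar vectors in 3D: C = 2A + 3B by construction,
# so the set {A, B, C} is linearly dependent.
A = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 1.0, 0.0])
C = 2 * A + 3 * B

M = np.column_stack([A, B, C])   # columns are the vectors v_n
print(np.linalg.det(M))          # 0 -> non-trivial c with M c = 0 exists
print(np.linalg.matrix_rank(M))  # 2 < 3 -> the vectors are dependent
```

The rank tells the same story as the determinant: only two of the three rows (equations) are independent.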
Connection to Cramer's rule

a_1 x + b_1 y = c_1
a_2 x + b_2 y = c_2
Using row reduction, we find
x = \begin{vmatrix} c_1 & b_1 \\ c_2 & b_2 \end{vmatrix} \Big/ \begin{vmatrix} a_1 & b_1 \\ a_2 & b_2 \end{vmatrix}, \qquad
y = \begin{vmatrix} a_1 & c_1 \\ a_2 & c_2 \end{vmatrix} \Big/ \begin{vmatrix} a_1 & b_1 \\ a_2 & b_2 \end{vmatrix}.
Thus x = 0, y = 0 if c_1 = c_2 = 0, unless det A = 0.
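A minimal sketch of Cramer's rule in Python with NumPy; the coefficient values are arbitrary illustrative numbers, not from the text.

```python
import numpy as np

# Cramer's rule for a1*x + b1*y = c1, a2*x + b2*y = c2
# (illustrative coefficients)
a1, b1, c1 = 2.0, 1.0, 5.0
a2, b2, c2 = 1.0, 3.0, 10.0

D  = np.linalg.det([[a1, b1], [a2, b2]])  # det A, must be nonzero
Dx = np.linalg.det([[c1, b1], [c2, b2]])
Dy = np.linalg.det([[a1, c1], [a2, c2]])
x, y = Dx / D, Dy / D
print(x, y)   # -> approximately (1.0, 3.0)
```

Setting c1 = c2 = 0 makes both numerator determinants vanish, giving only the trivial solution whenever D = det A is nonzero, exactly as stated above.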
Eigenvalues and eigenvectors; diagonalization

We have described linear operators acting on vectors in some space to yield new vectors, M r = r'. A special case occurs when r' = \lambda r, where \lambda is just a number called an eigenvalue:
M r = \lambda r.
We can write this in index notation as
\sum_j M_{ij} r_j = \lambda r_i.
The vector r in this case is the eigenvector corresponding to the eigenvalue \lambda. For a square n \times n matrix, there can be n eigenvalues and n corresponding eigenvectors.
Matrix diagonalization

Once we have found all the eigenvectors and eigenvalues, we can diagonalize the matrix. Consider the simple matrix \sigma_x,
\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.
If we want the eigenvectors and eigenvalues, we want to solve
\sigma_x \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \lambda \begin{pmatrix} v_1 \\ v_2 \end{pmatrix}.
First write the homogeneous equation
\begin{pmatrix} -\lambda & 1 \\ 1 & -\lambda \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = 0.
Matrix diagonalization, continued

We know that nontrivial solutions exist when
\begin{vmatrix} -\lambda & 1 \\ 1 & -\lambda \end{vmatrix} = 0.
In this case, the two rows are linearly dependent, and an infinite number of solutions exist. Taking the determinant, we get \lambda^2 - 1 = 0. We find eigenvalues \lambda_1 = 1 and \lambda_2 = -1.
Finding the eigenvectors

For \lambda_1 = 1, the eigenvector is found from
\begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = 0.
We find v_1 = v_2. Usually we normalize to make a unit vector, so for \lambda_1 = 1 we have eigenvector
\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}_1 = \begin{pmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \end{pmatrix}.
For \lambda_2 = -1, the eigenvector is found from
\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = 0.
We find v_1 = -v_2, so for \lambda_2 = -1 we have eigenvector
\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}_2 = \begin{pmatrix} 1/\sqrt{2} \\ -1/\sqrt{2} \end{pmatrix}.
Properties of the eigenvalues and eigenvectors

Notice that in the last case, \lambda_1 and \lambda_2 were real. Also, notice that the eigenvectors are orthonormal:
\begin{pmatrix} 1/\sqrt{2} & 1/\sqrt{2} \end{pmatrix} \begin{pmatrix} 1/\sqrt{2} \\ -1/\sqrt{2} \end{pmatrix} = 0,
\begin{pmatrix} 1/\sqrt{2} & 1/\sqrt{2} \end{pmatrix} \begin{pmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \end{pmatrix} = \begin{pmatrix} 1/\sqrt{2} & -1/\sqrt{2} \end{pmatrix} \begin{pmatrix} 1/\sqrt{2} \\ -1/\sqrt{2} \end{pmatrix} = 1.
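These properties are easy to verify numerically; a short sketch using NumPy's `eigh` routine (which is intended for symmetric/Hermitian matrices and returns eigenvalues in ascending order with orthonormal eigenvectors as the columns of `vecs`):

```python
import numpy as np

# The sigma_x example from the slides
sigma_x = np.array([[0.0, 1.0],
                    [1.0, 0.0]])

vals, vecs = np.linalg.eigh(sigma_x)
print(vals)                                   # [-1.  1.]
print(np.allclose(vecs.T @ vecs, np.eye(2)))  # True: columns are orthonormal
```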
Matrix diagonalization

We can diagonalize a matrix M by making a matrix U whose columns are the eigenvectors of M. We can find U^{-1}. Then we have U^{-1} M U = D, where D is a diagonal matrix of the eigenvalues. This is called a similarity transformation.
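The similarity transformation can be checked directly; a minimal sketch, again using the \sigma_x example:

```python
import numpy as np

# Diagonalize via U^{-1} M U = D, where the columns of U are the eigenvectors.
M = np.array([[0.0, 1.0],
              [1.0, 0.0]])
vals, U = np.linalg.eig(M)   # U has the eigenvectors as columns

D = np.linalg.inv(U) @ M @ U
print(np.round(D, 12))       # diagonal, with the eigenvalues on the diagonal
```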
Hermitian matrices

Take A^\dagger as the adjoint, and define it as A^\dagger = (A^*)^T. If A^\dagger = A, the matrix is said to be Hermitian. The eigenvalues of a Hermitian matrix are real, and the eigenvectors are orthogonal (and can be made orthonormal). Proof: start from
H r = \lambda r.
Take the adjoint of both sides to get the equation
r^\dagger H^\dagger = r^\dagger H = \lambda^* r^\dagger.
Proof, continued

Multiply the first equation by r^\dagger on the left and the second equation by r on the right, and combine them:
(\lambda - \lambda^*) r^\dagger r = 0,
so \lambda is real. Now take two vectors r_1, r_2, with H r_1 = \lambda_1 r_1 and H r_2 = \lambda_2 r_2. We can multiply the first equation on the left by r_2^\dagger:
r_2^\dagger H r_1 = \lambda_1 r_2^\dagger r_1.
Multiply the second equation on the left by r_1^\dagger:
r_1^\dagger H r_2 = \lambda_2 r_1^\dagger r_2.
Proof, continued

Take the adjoint of the second equation, and use H^\dagger = H:
(r_1^\dagger H r_2)^\dagger = r_2^\dagger H r_1 = \lambda_1 r_2^\dagger r_1 = (\lambda_2 r_1^\dagger r_2)^\dagger = \lambda_2^* r_2^\dagger r_1.
So finally we have, using \lambda_2^* = \lambda_2,
(\lambda_1 - \lambda_2) r_2^\dagger r_1 = 0.
So then r_i^\dagger r_j = \delta_{ij} for normalized eigenvectors with distinct eigenvalues.
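A numerical illustration of these two properties, for a small Hermitian matrix chosen arbitrarily (not from the text):

```python
import numpy as np

# An arbitrary 2x2 Hermitian matrix: H equals its conjugate transpose
H = np.array([[2.0,        1.0 - 1.0j],
              [1.0 + 1.0j, 3.0       ]])

vals, vecs = np.linalg.eigh(H)   # eigh exploits Hermiticity
print(vals)                      # [1. 4.] -- real, even though H is complex
# columns of vecs are orthonormal: vecs^dagger vecs = I
print(np.allclose(vecs.conj().T @ vecs, np.eye(2)))   # True
```

For this H, trace = 5 and det = 4, so the eigenvalues solve \lambda^2 - 5\lambda + 4 = 0, giving 1 and 4, both real as the proof guarantees.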
Applications of diagonalization; normal vibrational modes

Consider a simple model of a linear triatomic molecule (see Fig. 1 in the text): two outer atoms of mass m connected to a central atom of mass M by springs of constant k. We can find the equations of motion:
m \ddot{x} = -k (x - y)
M \ddot{y} = -k (y - x) - k (y - z)
m \ddot{z} = -k (z - y)
Now we assume a solution of the form
\begin{pmatrix} x(t) \\ y(t) \\ z(t) \end{pmatrix} = \begin{pmatrix} x \\ y \\ z \end{pmatrix} e^{i \omega t}.
Linear triatomic molecule, continued...

We can write the equations of motion then as a matrix equation,
-\omega^2 \begin{pmatrix} m & 0 & 0 \\ 0 & M & 0 \\ 0 & 0 & m \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = -k \begin{pmatrix} 1 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix}.
We can easily invert the (diagonal) matrix of the masses, and then write the homogeneous equation
\begin{pmatrix} k/m - \omega^2 & -k/m & 0 \\ -k/M & 2k/M - \omega^2 & -k/M \\ 0 & -k/m & k/m - \omega^2 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = 0.
We find from this
\omega_1^2 = k/m \quad \text{and} \quad \omega_2^2 = (k/m)(1 + 2m/M),
plus \omega^2 = 0, which is just a rigid translation of the whole molecule.
Eigenvectors for linear triatomic molecule

For \omega_1^2 = k/m, we find the eigenvector from
\begin{pmatrix} 0 & -k/m & 0 \\ -k/M & 2k/M - k/m & -k/M \\ 0 & -k/m & 0 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = 0.
We find y = 0 and x = -z: the symmetric stretch mode,
\begin{pmatrix} x \\ y \\ z \end{pmatrix}_1 = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}.
Eigenvectors continued

For \omega_2^2 = (k/m)(1 + 2m/M), we find the eigenvector from
\begin{pmatrix} -2k/M & -k/m & 0 \\ -k/M & -k/m & -k/M \\ 0 & -k/m & -2k/M \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = 0.
We find x = z and y = -(2m/M) x = -(2m/M) z, or, if we normalize, the asymmetric stretch mode
\begin{pmatrix} x \\ y \\ z \end{pmatrix}_2 = \begin{pmatrix} \frac{1}{\sqrt{2}} \left(1 + 2m^2/M^2\right)^{-1/2} \\ -\left(1 + M^2/2m^2\right)^{-1/2} \\ \frac{1}{\sqrt{2}} \left(1 + 2m^2/M^2\right)^{-1/2} \end{pmatrix}.
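The frequencies and mode shapes above can be checked by diagonalizing the matrix from the homogeneous equation numerically; a sketch with illustrative values k = 1, m = 1, M = 0.75 (so m/M = 4/3, as for CO2):

```python
import numpy as np

# Matrix from the homogeneous equation; its eigenvalues are omega^2.
k, m, M = 1.0, 1.0, 0.75
A = np.array([[ k/m,  -k/m,    0.0],
              [-k/M,  2*k/M,  -k/M],
              [ 0.0,  -k/m,    k/m]])

vals, vecs = np.linalg.eig(A)
vals = np.sort(vals.real)
print(vals)   # ~ [0, k/m, (k/m)(1 + 2m/M)] = [0, 1, 11/3]
```

The zero eigenvalue corresponds to the (1, 1, 1) translation mode, the middle one to the symmetric stretch (1, 0, -1), and the largest to the asymmetric stretch.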
A particular case... CO2

Take a specific case (for fun), CO2, so m/M = 16/12 = 4/3; hence M = (3/4) m, and m = 16 amu. In experiment, the two stretch frequencies are found to be 1388 cm^{-1} and 2349 cm^{-1}, so
\nu_1 = (3 \times 10^{10} cm/s)(1388 cm^{-1}) = 4.16 \times 10^{13} s^{-1} = 41.6 THz
\nu_2 = (3 \times 10^{10} cm/s)(2349 cm^{-1}) = 7.05 \times 10^{13} s^{-1} = 70.5 THz
We find \nu_2/\nu_1 \approx 1.69.
CO2 continued

We expect from our simple theory that
\frac{\nu_2}{\nu_1} = \sqrt{1 + \frac{2m}{M}}.
We have here m/M = 4/3, so
\frac{\nu_2}{\nu_1} = \sqrt{1 + \frac{8}{3}} = \sqrt{\frac{11}{3}} \approx 1.91.
Comparison to experiment: 70.5/41.6 \approx 1.69 ... not too bad. We could estimate k (for example using the symmetric stretch mode \nu_1 = 41.6 THz and m = 16 amu):
k = m \omega_1^2 = \left[ \frac{(16\,g/mol)(10^{-3}\,kg/g)}{6.02 \times 10^{23}\,mol^{-1}} \right] (2\pi)^2 (4.16 \times 10^{13}\,s^{-1})^2 \approx 1814\,N/m.
Might be better expressed as 113.3 eV/Å^2.
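The arithmetic above is easy to reproduce; a quick numerical check of the CO2 estimates:

```python
import numpy as np

c = 3.0e10                     # speed of light, cm/s
nu1 = c * 1388.0               # symmetric stretch, s^-1
nu2 = c * 2349.0               # asymmetric stretch, s^-1
print(nu2 / nu1)               # ~1.69 (experiment)
print(np.sqrt(1.0 + 8.0/3.0))  # ~1.91 (simple theory, 2m/M = 8/3)

# Spring constant k = m * omega_1^2, with m = 16 amu per oxygen atom
m = 16.0e-3 / 6.022e23         # kg
k = m * (2.0 * np.pi * nu1)**2
print(k)                       # ~1.8e3 N/m (the slides quote ~1814 N/m)
print(k * 6.242e18 / 1.0e20)   # ~113 eV/Angstrom^2
```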