Calculus C (Ordinary Differential Equations)

Lesson 9: Matrix exponential of a symmetric matrix; coefficient matrices with a full set of eigenvectors; solving linear ODEs by power series; solutions to linear ODEs with non-constant coefficients; the Bessel equation

Frank Hansen
Institute for Excellence in Higher Education, Tohoku University
2018-19
Matrix exponential of a symmetric matrix

Let \( A \) be a symmetric \( n \times n \) matrix. By the spectral theorem, \( A \) may be written in the form
\[
A = Q D Q^T,
\]
where \( Q \) is an orthogonal matrix (\( Q^{-1} = Q^T \)) and
\[
D = \begin{pmatrix}
\lambda_1 & 0 & \cdots & 0 \\
0 & \lambda_2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \lambda_n
\end{pmatrix}
\]
is a diagonal matrix with the eigenvalues of \( A \), counted with multiplicity, in the diagonal. We define the matrix exponential of \( A \) by setting
\[
E_A(t) = Q \begin{pmatrix}
e^{t\lambda_1} & 0 & \cdots & 0 \\
0 & e^{t\lambda_2} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & e^{t\lambda_n}
\end{pmatrix} Q^T, \qquad t \in \mathbb{R}.
\]
Verification of properties, page I

We must prove that the continuous matrix function just defined is the matrix exponential of \( A \).

(i) We first realize that \( E_A(0) = Q E_n Q^T = E_n \).

(ii) Furthermore,
\[
E_A(t+s) = Q \begin{pmatrix}
e^{(t+s)\lambda_1} & 0 & \cdots & 0 \\
0 & e^{(t+s)\lambda_2} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & e^{(t+s)\lambda_n}
\end{pmatrix} Q^T
= Q \begin{pmatrix}
e^{t\lambda_1} & 0 & \cdots & 0 \\
0 & e^{t\lambda_2} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & e^{t\lambda_n}
\end{pmatrix}
\begin{pmatrix}
e^{s\lambda_1} & 0 & \cdots & 0 \\
0 & e^{s\lambda_2} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & e^{s\lambda_n}
\end{pmatrix} Q^T
= E_A(t) E_A(s).
\]
Verification of properties, page II

(iii) Finally we obtain
\[
\frac{d}{dt} E_A(t) = Q \begin{pmatrix}
\lambda_1 e^{t\lambda_1} & 0 & \cdots & 0 \\
0 & \lambda_2 e^{t\lambda_2} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \lambda_n e^{t\lambda_n}
\end{pmatrix} Q^T
\]
and thus
\[
\left. \frac{d}{dt} E_A(t) \right|_{t=0}
= Q \begin{pmatrix}
\lambda_1 & 0 & \cdots & 0 \\
0 & \lambda_2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \lambda_n
\end{pmatrix} Q^T = A.
\]
We have indeed proved that \( E_A(t) \), as defined by the spectral theorem, is the matrix exponential of \( A \).
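The spectral-theorem construction above is easy to check numerically. The following sketch (not part of the lecture; it uses NumPy/SciPy and a made-up symmetric \( 2 \times 2 \) matrix) builds \( E_A(t) \) from an eigendecomposition and compares it with a general-purpose matrix exponential, then checks the defining properties (i) and (iii):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical symmetric test matrix (not from the lecture).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Spectral decomposition A = Q D Q^T; eigh is designed for symmetric matrices.
eigvals, Q = np.linalg.eigh(A)

def E_A(t):
    """E_A(t) = Q diag(e^{t*lambda_i}) Q^T, as defined on the slide."""
    return Q @ np.diag(np.exp(t * eigvals)) @ Q.T

t = 0.7
# Agrees with SciPy's general matrix exponential e^{tA}.
print(np.allclose(E_A(t), expm(t * A)))            # True
# Property (i): E_A(0) is the identity.
print(np.allclose(E_A(0.0), np.eye(2)))            # True
# Property (iii): d/dt E_A(t) at t = 0 equals A (central difference).
h = 1e-6
print(np.allclose((E_A(h) - E_A(-h)) / (2 * h), A))  # True
```

The same construction fails for non-symmetric matrices, which in general are not orthogonally diagonalizable; that is why the slide assumes symmetry.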
Coefficient matrices with a full set of eigenvectors

Consider a system of linear first-order differential equations
\[
\frac{d}{dt} X(t) = A X(t),
\]
where \( A \) is an \( n \times n \) matrix. We assume the existence of a linearly independent set \( (\xi_1, \dots, \xi_n) \) of eigenvectors for \( A \), and we denote by \( (\lambda_1, \dots, \lambda_n) \) the corresponding eigenvalues. We already know that
\[
X_i(t) = e^{\lambda_i t} \xi_i, \qquad t \in \mathbb{R},
\]
is a solution to the system for each \( i = 1, \dots, n \). The set of solutions \( X(t) = \big( X_1(t), \dots, X_n(t) \big) \) is linearly independent since \( X(0) = (\xi_1, \dots, \xi_n) \) is linearly independent. In conclusion, we realise that \( X(t) \) is a fundamental matrix solution.
Example

The system of linear first-order differential equations \( \frac{d}{dt} X(t) = A X(t) \) with coefficient matrix
\[
A = \begin{pmatrix} 1 & 0 & 0 \\ 2 & -1 & 0 \\ 2 & -2 & 1 \end{pmatrix}
\]
has a linearly independent set of eigenvectors given by
\[
\xi_1 = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \qquad
\xi_2 = \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix}, \qquad
\xi_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}
\]
with corresponding eigenvalues \( 1, -1, 1 \). The set of solutions
\[
X(t) = \begin{pmatrix}
e^{t} & 0 & 0 \\
e^{t} & e^{-t} & 0 \\
e^{t} & e^{-t} & e^{t}
\end{pmatrix}, \qquad t \in \mathbb{R},
\]
is therefore a fundamental matrix solution.
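The example can be verified numerically. This sketch (using NumPy; the function names are my own) checks the three eigenpairs and that the fundamental matrix with columns \( e^{\lambda_i t} \xi_i \) satisfies \( X'(t) = A X(t) \):

```python
import numpy as np

# The example's coefficient matrix and its eigenpairs.
A = np.array([[1.0,  0.0, 0.0],
              [2.0, -1.0, 0.0],
              [2.0, -2.0, 1.0]])
eigvals = [1.0, -1.0, 1.0]
eigvecs = [np.array([1.0, 1.0, 1.0]),
           np.array([0.0, 1.0, 1.0]),
           np.array([0.0, 0.0, 1.0])]

# Check A xi_i = lambda_i xi_i for each pair.
for lam, xi in zip(eigvals, eigvecs):
    assert np.allclose(A @ xi, lam * xi)

def X(t):
    """Fundamental matrix whose columns are e^{lambda_i t} xi_i."""
    return np.column_stack([np.exp(lam * t) * xi
                            for lam, xi in zip(eigvals, eigvecs)])

# X'(t) = A X(t), checked with a central difference at t = 0.3.
t, h = 0.3, 1e-6
print(np.allclose((X(t + h) - X(t - h)) / (2 * h), A @ X(t)))  # True
# X(0) has the eigenvectors as columns, hence full rank (invertible).
print(np.linalg.matrix_rank(X(0.0)))  # 3
```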
Solving linear ODEs by power series, page I

Consider the second-order differential equation
\[
(\#) \qquad \frac{d^2 y}{dt^2} - 2t \frac{dy}{dt} - 2y = 0
\]
with non-constant coefficients. If we look for a solution of the form
\[
y(t) = \sum_{n=0}^{\infty} a_n t^n
\]
defined in some open interval \( I \), then by computation
\[
y'(t) = \sum_{n=1}^{\infty} n a_n t^{n-1},
\qquad\text{hence}\qquad
t\, y'(t) = \sum_{n=0}^{\infty} n a_n t^{n},
\]
and
\[
y''(t) = \sum_{n=2}^{\infty} n(n-1) a_n t^{n-2} = \sum_{n=0}^{\infty} (n+1)(n+2) a_{n+2} t^{n}.
\]
Solving linear ODEs by power series, page II

By inserting these power series in (\#) we obtain
\[
\sum_{n=0}^{\infty} \Big( (n+1)(n+2) a_{n+2} - 2n a_n - 2 a_n \Big) t^n = 0, \qquad t \in I.
\]
Since a power series representing the zero function has zero coefficients, we derive that
\[
(n+1)(n+2) a_{n+2} - 2n a_n - 2 a_n = 0, \qquad n = 0, 1, 2, \dots
\]
By solving for \( a_{n+2} \) we obtain a recurrence formula
\[
a_{n+2} = \frac{2 a_n (n+1)}{(n+1)(n+2)} = \frac{2 a_n}{n+2}, \qquad n = 0, 1, 2, \dots,
\]
that determines all values in the sequence \( (a_n) \) given \( a_0 \) and \( a_1 \).
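The recurrence can be iterated directly. A small sketch (the helper name is my own; Python's `fractions` keeps the arithmetic exact) generates the coefficient sequence from any starting pair \( a_0, a_1 \):

```python
from fractions import Fraction

def coefficients(a0, a1, N):
    """First N coefficients from the recurrence a_{n+2} = 2 a_n / (n + 2)."""
    a = [Fraction(a0), Fraction(a1)]
    for n in range(N - 2):
        a.append(Fraction(2, n + 2) * a[n])
    return a

# a0 = 1, a1 = 0: odd terms vanish and a_{2n} = 1/n!  -> 1, 0, 1, 0, 1/2, 0, 1/6, ...
print(coefficients(1, 0, 8))
# a0 = 0, a1 = 1: even terms vanish and a_3 = 2/3, a_5 = 4/15, ...
print(coefficients(0, 1, 8))
```

These two starting pairs reproduce exactly the cases (i) and (ii) worked out on the next slides.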
Solving linear ODEs by power series, page III

(i) \( a_0 = 1 \), \( a_1 = 0 \). Then all odd coefficients \( a_1, a_3, a_5, \dots \) are zero. Furthermore,
\[
a_2 = \frac{2 a_0}{2} = 1, \qquad
a_4 = \frac{2 a_2}{4} = \frac{1}{2}, \qquad
a_6 = \frac{2 a_4}{6} = \frac{1}{2 \cdot 3},
\]
and by induction
\[
a_{2n} = \frac{1}{2 \cdot 3 \cdots n} = \frac{1}{n!}.
\]
Hence
\[
y_1(t) = 1 + t^2 + \frac{t^4}{2!} + \frac{t^6}{3!} + \cdots + \frac{t^{2n}}{n!} + \cdots = e^{t^2}
\]
is a solution to (\#).
Solving linear ODEs by power series, page IV

(ii) \( a_0 = 0 \), \( a_1 = 1 \). Then all even coefficients \( a_0, a_2, a_4, \dots \) are zero. Furthermore,
\[
a_3 = \frac{2 a_1}{3} = \frac{2}{3}, \qquad
a_5 = \frac{2 a_3}{5} = \frac{2^2}{3 \cdot 5}, \qquad
a_7 = \frac{2 a_5}{7} = \frac{2^3}{3 \cdot 5 \cdot 7},
\]
and by induction
\[
a_{2n+1} = \frac{2^n}{3 \cdot 5 \cdot 7 \cdots (2n+1)}.
\]
Hence the power series
\[
y_2(t) = \sum_{n=0}^{\infty} \frac{2^n t^{2n+1}}{3 \cdot 5 \cdot 7 \cdots (2n+1)}
\]
is a solution to (\#).
Solving linear ODEs by power series, page V

The quotient between two consecutive terms in the power series for \( y_2(t) \) is given by
\[
\frac{2 t^2}{2n + 3},
\]
and this ratio converges, for every \( t \in \mathbb{R} \), to zero as \( n \to \infty \). By Cauchy's ratio test the power series for \( y_2(t) \) is therefore absolutely convergent for every \( t \in \mathbb{R} \). Since \( y_1(t) \) and \( y_2(t) \) are linearly independent, we obtain that the set of solutions to (\#) is given by
\[
y(t) = C_1 y_1(t) + C_2 y_2(t),
\]
where \( C_1 \) and \( C_2 \) are arbitrary constants. We also observe that every solution to (\#) is defined on all of \( \mathbb{R} \).
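As a numerical illustration (a sketch, not part of the lecture; the helper names are invented), one can check by finite differences that both \( y_1(t) = e^{t^2} \) and a partial sum of the series for \( y_2(t) \) satisfy \( y'' - 2t y' - 2y = 0 \) to high accuracy. The partial sum exploits the term ratio \( 2t^2/(2n+3) \) derived above:

```python
import math

def y2(t, N=60):
    """Partial sum of y2(t) = sum_n 2^n t^{2n+1} / (3*5*...*(2n+1))."""
    total, term = 0.0, t                  # the n = 0 term is t
    for n in range(N):
        total += term
        term *= 2 * t * t / (2 * n + 3)   # ratio of consecutive terms
    return total

def residual(y, t, h=1e-4):
    """Finite-difference value of y'' - 2t y' - 2y at t."""
    d1 = (y(t + h) - y(t - h)) / (2 * h)
    d2 = (y(t + h) - 2 * y(t) + y(t - h)) / h**2
    return d2 - 2 * t * d1 - 2 * y(t)

print(abs(residual(lambda t: math.exp(t * t), 0.8)) < 1e-5)  # True
print(abs(residual(y2, 0.8)) < 1e-5)                          # True
```

Since the ratio \( 2t^2/(2n+3) \) is eventually far below 1, a few dozen terms of the partial sum already give near machine-precision values for moderate \( t \).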
Linear ODEs with non-constant coefficients

Consider the homogeneous linear second-order differential equation
\[
(\#) \qquad P(t) \frac{d^2 y}{dt^2} + Q(t) \frac{dy}{dt} + R(t) y = 0,
\]
where \( P(t) \), \( Q(t) \) and \( R(t) \) are continuous functions.

Theorem. Consider \( t_0 \in \mathbb{R} \) and \( \rho > 0 \). Suppose the functions
\[
\frac{Q(t)}{P(t)} \qquad\text{and}\qquad \frac{R(t)}{P(t)}
\]
have convergent Taylor series expansions for \( |t - t_0| < \rho \). Then there is a unique solution
\[
y(t) = a_0 + a_1 (t - t_0) + a_2 (t - t_0)^2 + \cdots
\]
of (\#) such that \( y(t_0) = x^0_1 \) and \( y'(t_0) = x^0_2 \) for given values \( x^0_1 \) and \( x^0_2 \). The solution is analytic with radius of convergence at least equal to \( \rho \).
The Bessel equation

The homogeneous second-order linear differential equation
\[
t^2 \frac{d^2 y}{dt^2} + t \frac{dy}{dt} + (t^2 - \nu^2) y = 0, \qquad \nu > 0,
\]
with non-constant coefficients is called the Bessel equation.
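Among the solutions of the Bessel equation are the Bessel functions of the first kind \( J_\nu \), which SciPy exposes as `scipy.special.jv`. As a quick numerical check (a sketch with an arbitrarily chosen order \( \nu = 1.5 \); not part of the lecture), finite differences confirm that \( J_\nu \) satisfies the equation:

```python
from scipy.special import jv  # Bessel function of the first kind, J_nu(t)

# Check t^2 y'' + t y' + (t^2 - nu^2) y = 0 at a sample point.
nu, t, h = 1.5, 2.0, 1e-5
d1 = (jv(nu, t + h) - jv(nu, t - h)) / (2 * h)
d2 = (jv(nu, t + h) - 2 * jv(nu, t) + jv(nu, t - h)) / h**2
residual = t**2 * d2 + t * d1 + (t**2 - nu**2) * jv(nu, t)
print(abs(residual) < 1e-4)  # True
```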