MODULE 18

Topics: Eigenvalues and eigenvectors for differential operators

In analogy to the matrix eigenvalue problem Ax = λx we shall consider the eigenvalue problem

Lu = µu

where µ is a real or complex number and u is a non-zero function (i.e., u is not identically equal to zero). µ will be called an eigenvalue of L and u(t) is the corresponding eigenvector = eigenfunction.

As in the case of inverses we shall find that the existence of eigenvalues depends crucially on the domain of L. For example, suppose Lu ≡ u' is defined on

V = {u(t) = (u_1(t), ..., u_n(t)) : u_i(t) ∈ C^1[a, b]};

then for any constant µ ∈ (-∞, ∞) the equation Lu = µu has the non-zero solution

u(t) = x e^{µt}

where x is any non-zero vector in R^n. Hence every number µ is an eigenvalue and u(t) = x e^{µt} is a corresponding eigenfunction. On the other hand, if the same problem is considered in the subspace

M = {u ∈ V : u(t_0) = 0},

then it follows from the uniqueness of the solution of

u' = µu,  u(t_0) = 0

that only the zero solution is possible. Hence there is no eigenvalue µ ∈ (-∞, ∞).

A more complicated example is provided by the following

Problem: Find all nontrivial solutions of

Lu ≡ u' - Au = µu

u(0) = u(1)

where A is an n × n matrix. Note that the problem can be restated as finding all eigenvalues and eigenvectors of the linear operator Lu ≡ u' - Au on the subspace of vector valued functions which satisfy u(0) = u(1).

Answer: Any such solution solves the linear equation

u' - (A + µI)u = 0.

Our discussion of constant coefficient systems says that u(t) should be of the form u(t) = x e^{λt} where x is a constant vector. Substitution into the equation yields

[λx - (A + µI)x] e^{λt} = 0.

This implies that

[A - (λ - µ)I] x = 0.

A non-trivial solution can exist only if (λ - µ) is an eigenvalue of A and x is a corresponding eigenvector. Let {ρ_1, ..., ρ_n} be the eigenvalues of A with corresponding eigenvectors {x_1, ..., x_n}. Then for each ρ_j we obtain a vector valued function x_j e^{(ρ_j + µ)t} which solves u' - Au = µu. The additional constraint on the solution is that u(0) = u(1). This requires

e^{(ρ_j + µ)} = 1

and hence that

(ρ_j + µ) = 2mπi

where i^2 = -1 and m is any integer. Thus the eigenvalues are

µ_{j,m} = 2mπi - ρ_j

with corresponding eigenfunctions

u_{j,m} = x_j e^{2mπit}.

Incidentally, the condition u(0) = u(1) is not so strange. If the first order system is equivalent to an nth order scalar equation, then this condition characterizes smooth periodic functions with period 1.

All further discussion of eigenvalue problems for differential operators will be restricted to second order equations of the form

Lu ≡ (a(t)u')' + q(t)u = µ p(t)u

defined on the interval [0, T] with various boundary conditions at t = 0 and t = T. We shall assume that all coefficients are real and continuous in t. In addition we shall require that

a(t) > 0 on [0, T],
p(t) > 0 on [0, T] except possibly at isolated points where p may be zero.

The function p(t) multiplying the eigenvalue may look unusual but it does arise naturally in many applications. Its presence can greatly complicate the calculation of eigenfunctions and eigenvalues but it has little influence on the general eigenvalue theory.

As we have seen time and again, a differential operator is completely specified only when we are given the domain on which it is to be defined. The domain of L given above will be a subspace M of C^2[0, T]. In applications a number of different subspaces are common, but here we shall restrict ourselves to

M = {u ∈ C^2[0, T] : u(0) = u(T) = 0}.
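Although the module develops this eigenvalue problem analytically, it can also be explored numerically. The sketch below is not part of the original module; the coefficients a, q, p, the interval length T and the grid size are invented for illustration. It discretizes Lu ≡ (a(t)u')' + q(t)u = µ p(t)u with u(0) = u(T) = 0 by centered finite differences, which turns it into a generalized matrix eigenvalue problem Lφ = µPφ with L symmetric and P diagonal and positive. This symmetric-definite structure mirrors the integration-by-parts arguments used in the theorems that follow: the computed eigenvalues come out real and (for q ≤ 0) negative, and the eigenvectors are orthogonal in the p-weighted inner product.

```python
import numpy as np
from scipy.linalg import eigh

# Invented coefficients on [0, T], chosen only for illustration: a > 0, q <= 0, p > 0.
T, n = 1.0, 200
a = lambda t: 1.0 + t
q = lambda t: -np.ones_like(t)
p = lambda t: 2.0 + np.sin(np.pi * t)

h = T / (n + 1)
t = np.linspace(h, T - h, n)         # interior grid points; u(0) = u(T) = 0 is built in
am, ap = a(t - h / 2), a(t + h / 2)  # a(t) evaluated at the half points t_i -/+ h/2

# Centered finite-difference approximation of Lu = (a u')' + q u on the interior points.
L = (np.diag(-(am + ap) + h**2 * q(t))
     + np.diag(ap[:-1], 1) + np.diag(am[1:], -1)) / h**2
P = np.diag(p(t))

# Generalized symmetric-definite eigenvalue problem L phi = mu P phi.
mu, phi = eigh(L, P)

print(mu[-5:])                       # the least negative eigenvalues: real and strictly < 0
G = h * phi.T @ P @ phi              # discrete version of <phi_m, phi_n> = int phi_m phi_n p dt
print(np.allclose(G, np.diag(np.diag(G))))   # off-diagonal entries vanish: p-orthogonality
```

With a(t) = p(t) = 1 and q(t) = 0 the same code closely approximates the exact eigenvalues -(nπ/T)^2 that appear in the example later in this module.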

We now can obtain a number of results which follow from the specific form of the operator.

Theorem: The eigenvalues of

Lu ≡ (au')' + q(t)u = µ p(t)u
u(0) = u(T) = 0

are real and the corresponding eigenfunctions may be chosen to be real.

Proof: As in the case of a matrix eigenvalue problem we don't know a priori whether an eigenvalue will turn out to be complex and require a complex eigenvector. Suppose that µ is an eigenvalue. Let u be the corresponding eigenvector. Then

∫_0^T ū Lu dt = µ ∫_0^T p(t) u ū dt.

The integral on the right is real and positive. Integration by parts shows that

∫_0^T ū Lu dt = ∫_0^T [(au')' ū + q(t) u ū] dt
             = ∫_0^T [(au'ū)' - au'ū' + q(t) u ū] dt
             = (au'ū)|_0^T - ∫_0^T [au'ū' - q(t) u ū] dt
             = -∫_0^T [au'ū' - q(t) u ū] dt

is real. Hence µ is real, and u may be taken to be real since for a complex function both the real and the imaginary part would have to satisfy the eigenvalue equation.

Theorem: Let µ_m and µ_n be distinct eigenvalues and φ_m and φ_n the corresponding eigenfunctions. Then

⟨φ_m, φ_n⟩ = ∫_0^T φ_m(t) φ_n(t) p(t) dt = 0,

i.e., the functions φ_m and φ_n are orthogonal with respect to the inner product

⟨f, g⟩ = ∫_0^T f(t) g(t) p(t) dt.

Proof: It follows from

Lφ_m = µ_m φ_m p(t)
Lφ_n = µ_n φ_n p(t)

that

∫_0^T [φ_n Lφ_m - φ_m Lφ_n] dt = (µ_m - µ_n) ∫_0^T φ_m(t) φ_n(t) p(t) dt.

But

∫_0^T [φ_n Lφ_m - φ_m Lφ_n] dt = ∫_0^T [(aφ_m'φ_n)' - (aφ_n'φ_m)'] dt = 0

because of the boundary conditions. Hence ⟨φ_m, φ_n⟩ = 0 for m ≠ n.

Theorem: If µ is an eigenvalue with eigenfunction φ then

µ < ∫_0^T q(t)φ^2(t) dt / ∫_0^T p(t)φ^2(t) dt.

Proof: We observe that φ' does not vanish identically, because if φ' ≡ 0 then φ would be constant and φ(0) = 0 would imply that φ ≡ 0, which is not possible for an eigenfunction. It then follows from

∫_0^T (µ p(t) - q(t)) φ^2(t) dt = ∫_0^T (aφ')' φ dt = -∫_0^T a φ' φ' dt < 0

that necessarily

µ < ∫_0^T q(t)φ^2(t) dt / ∫_0^T p(t)φ^2(t) dt.

In particular, this implies that the eigenvalue is strictly negative whenever q ≤ 0.

An example and an application: Consider the eigenvalue problem

Lu(x) ≡ u''(x) = µ u(x)
u(0) = u(b) = 0

where in view of the subsequent application we have chosen x as our independent variable. In this case a(x) = p(x) = 1 and q(x) = 0. The above theorems assure that eigenfunctions corresponding to distinct eigenvalues are orthogonal with respect to the usual L^2 inner product on [0, b], and that the eigenvalues are strictly negative. It is convenient to set µ = -λ^2, λ ≠ 0, so that the problem to be solved is

u''(x) + λ^2 u(x) = 0
u(0) = u(b) = 0.

This is a constant coefficient second order equation with solution

u(x) = c_1 cos λx + c_2 sin λx.

The boundary conditions can be expressed as

[ 1        0       ] [ c_1 ]   [ 0 ]
[ cos λb   sin λb  ] [ c_2 ] = [ 0 ].

A non-trivial solution is possible only if the coefficient matrix is singular. It is singular whenever its determinant is zero. Hence λ has to be found such that

sin λb = 0.

This implies that

λ_n = nπ/b,  n = ±1, ±2, ....

A vector (c_1, c_2) which satisfies the singular system is seen to be (0, 1). Hence the eigenvalues and eigenfunctions for this problem are

µ_n = -λ_n^2  (λ_n = nπ/b),  φ_n(x) = sin λ_n x,  n = 1, 2, ....

A direct computation verifies that ⟨φ_n, φ_m⟩ = 0 for m ≠ n.

An application: The so-called wave equation and boundary and initial conditions

u_xx(x, t) - (1/c^2) u_tt = F(x, t)
u(0, t) = u(b, t) = 0
u(x, 0) = u_0(x)
u_t(x, 0) = u_1(x)

describe the displacement u(x, t) of a string at point x and time t from its equilibrium position. The string is held fixed at x = 0, x = b and has the initial displacement u_0(x) and initial velocity u_1(x). The source term F(x, t) is given.
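As an aside before attacking this problem, the direct computation mentioned above is easy to reproduce numerically. The following check is not part of the module; the value b = 2 is an arbitrary choice. It confirms by quadrature that the eigenfunctions φ_n(x) = sin(nπx/b) are mutually orthogonal on [0, b] and that the eigenvalues µ_n = -(nπ/b)^2 are strictly negative, as the theorems require.

```python
import numpy as np
from scipy.integrate import quad

b = 2.0                                   # arbitrary interval length for this check
lam = lambda n: n * np.pi / b             # lambda_n = n*pi/b
phi = lambda n, x: np.sin(lam(n) * x)     # phi_n(x) = sin(lambda_n x)

# Orthogonality in the L^2 inner product on [0, b] (here p(x) = 1):
for n in range(1, 5):
    for m in range(1, 5):
        ip, _ = quad(lambda x: phi(n, x) * phi(m, x), 0.0, b)
        expected = b / 2 if n == m else 0.0   # <phi_n, phi_n> = b/2, zero otherwise
        assert abs(ip - expected) < 1e-10

# The eigenvalues mu_n = -lambda_n^2 are strictly negative, consistent with the
# bound mu < (int q phi^2 dx)/(int p phi^2 dx) = 0, since q = 0 here.
print([-lam(n)**2 for n in range(1, 5)])
```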

The problem as stated is in general too difficult to solve analytically and has to be approximated. It becomes manageable if we think of t as a parameter and project all data functions, as functions of x, into the finite dimensional subspace

M = span{φ_1, ..., φ_N}.

Here φ_n is the nth eigenfunction of Lu ≡ u''(x), i.e.,

φ_n(x) = sin λ_n x,  λ_n = nπ/b  and  µ_n = -λ_n^2.

The projection is the orthogonal projection with respect to the inner product for which the {φ_n} are orthogonal, i.e., the inner product

⟨f, g⟩ = ∫_0^b f(x) g(x) dx.

As we learned early on, the projection onto M with an orthogonal basis is

P F(x, t) = Σ_{j=1}^{N} β_j(t) φ_j(x)

where

β_j(t) = ⟨F(x, t), φ_j(x)⟩ / ⟨φ_j(x), φ_j(x)⟩.

Remember, t is considered a parameter and β_j will have to depend on t because F in general will change with t. The other data functions have the simpler projections

P u_0(x) = Σ_{j=1}^{N} c_j φ_j(x)  with  c_j = ⟨u_0, φ_j⟩ / ⟨φ_j, φ_j⟩

and

P u_1(x) = Σ_{j=1}^{N} d_j φ_j(x)  with  d_j = ⟨u_1, φ_j⟩ / ⟨φ_j, φ_j⟩.

We now try to solve

u_Nxx - (1/c^2) u_Ntt = P F(x, t)
u_N(0, t) = u_N(b, t) = 0

u_N(x, 0) = P u_0(x)
u_Nt(x, 0) = P u_1(x).

The reason that this problem is manageable is that it has a solution u_N(x, t) which for fixed t also belongs to M. Thus we look for a solution of the form

u_N(x, t) = Σ_{j=1}^{N} α_j(t) φ_j(x).

We substitute into the wave equation for u_N, use that φ_j''(x) = -λ_j^2 φ_j(x), and collect terms. We find that

Σ_{j=1}^{N} [ -λ_j^2 α_j(t) - (1/c^2) α_j''(t) - β_j(t) ] φ_j(x) = 0.

But as orthogonal functions the {φ_j} are linearly independent, so this equation can only hold if the coefficient of each φ_j vanishes. Thus the function α_i(t) must satisfy the linear constant coefficient second order differential equation

α_i''(t) + c^2 λ_i^2 α_i(t) = -c^2 β_i(t).

Moreover, the initial conditions u_N = P u_0 and u_Nt = P u_1 at t = 0 require that

α_i(0) = c_i  and  α_i'(0) = d_i.

It follows that

α_i(t) = γ_i cos λ_i ct + δ_i sin λ_i ct + α_ip(t)

where the coefficients {γ_i, δ_i} can only be determined after the particular integral α_ip(t) is known.

Problems of this type arise routinely in connection with diffusion, electrostatics and wave motion. They often can be solved on finite dimensional subspaces defined by eigenfunctions of the spatial operator of the partial differential equation. For a concrete application of the above development for a driven string we refer to the problem section of this module.
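Purely as an illustration of this recipe (it is not part of the module itself), the following Python/SciPy sketch carries out all the steps for one invented set of data: the forcing F, the initial data u_0 and u_1, the parameters b and c, and the truncation level N are all made-up choices. It computes the projection coefficients by quadrature, integrates the equations for the α_j numerically instead of writing down the particular integral, and assembles u_N(x, t).

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

# Everything below is made-up illustration data.
b, c, N = 1.0, 1.0, 8                       # interval [0, b], wave speed c, truncation level N
F  = lambda x, t: np.sin(np.pi * x) * np.cos(3.0 * t)   # forcing F(x, t), chosen arbitrarily
u0 = lambda x: x * (b - x)                  # initial displacement u_0(x)
u1 = lambda x: 0.0 * x                      # initial velocity u_1(x)

lam = np.array([j * np.pi / b for j in range(1, N + 1)])   # lambda_j = j*pi/b
phi = lambda j, x: np.sin(lam[j - 1] * x)   # phi_j(x); note <phi_j, phi_j> = b/2

def coeff(f, j):
    """Projection coefficient <f, phi_j> / <phi_j, phi_j> for the inner product int_0^b f g dx."""
    val, _ = quad(lambda x: f(x) * phi(j, x), 0.0, b)
    return val / (b / 2)

c_j  = np.array([coeff(u0, j) for j in range(1, N + 1)])   # alpha_j(0)
d_j  = np.array([coeff(u1, j) for j in range(1, N + 1)])   # alpha_j'(0)
beta = lambda j, t: coeff(lambda x: F(x, t), j)            # beta_j(t), recomputed as needed

# alpha_j'' + c^2 lambda_j^2 alpha_j = -c^2 beta_j(t), written as a first order system.
def rhs(t, y):
    alpha, alpha_p = y[:N], y[N:]
    forcing = np.array([beta(j, t) for j in range(1, N + 1)])
    return np.concatenate([alpha_p, -c**2 * lam**2 * alpha - c**2 * forcing])

sol = solve_ivp(rhs, (0.0, 2.0), np.concatenate([c_j, d_j]), dense_output=True)

def u_N(x, t):
    """Approximate solution u_N(x, t) = sum_j alpha_j(t) phi_j(x)."""
    alpha = sol.sol(t)[:N]
    return sum(alpha[j - 1] * phi(j, x) for j in range(1, N + 1))

print(u_N(0.5, 1.0))                        # displacement of the string at x = 0.5, t = 1
```

Recomputing β_j(t) by quadrature inside the right-hand side is simple but slow; for a concrete forcing one would normally evaluate β_j(t) in closed form, which also makes the particular integral and any resonance explicit.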

Module 18 - Homework

1) Consider Lu ≡ u'' defined on M = {u ∈ C^2[0, 1] : u(0) = 0, u'(1) = 0}. Modify the proofs of this module to prove that:

i) The eigenvalues of L must be real.

ii) The eigenvalues of L must be strictly negative.

iii) The eigenfunctions corresponding to distinct eigenvalues must be orthogonal with respect to the usual L^2[0, 1] inner product.

iv) Which results change when the space M is changed to M = {u ∈ C^2[0, 1] : u'(0) = u'(1) = 0}?

2) Compute the eigenvalues and eigenvectors of

Lu ≡ u'' = µu
u(0) = u(1),  u'(0) = u'(1).

3) Compute the eigenvalues of

Lu ≡ u'' = µu
u(0) = u'(0),  u(1) = u'(1).

(It suffices to set up the equation which has to be solved to find the eigenvalues. You will not be able to find them in closed form.)

4) Consider the problem

Lu ≡ u'' - u' = µu
u(0) = u(1) = 0.

i) Compute the eigenvalues and eigenvectors of L.

This operator L is not of the form required by the theorems of this module. But it can be brought into the correct form as outlined in the following instructions:

ii) Find a function φ(t) such that

[e^{φ(t)} u'(t)]' = [u'' - u'] e^{φ(t)} = µ u e^{φ(t)}.

iii) What orthogonality is predicted by the theorem of the module for eigenfunctions corresponding to distinct eigenvalues?

iv) Verify by direct computation that the eigenfunctions are orthogonal in the correct inner product.

5) Consider the problem of a vibrating string which is oscillated at x = 0. The model is:

u_xx - u_tt = 0
u(0, t) = F cos δt
u(1, t) = 0
u(x, 0) = F(1 - x)
u_t(x, 0) = 0.

Find an approximate solution. When does resonance occur?

Hint: Reformulate the problem for

w(x, t) = u(x, t) - F(1 - x) cos δt

and apply the ideas outlined in the module to find an approximation w_N(x, t).