C. Fourier Sine Series Overview


Let some constant ℓ > 0 be given. The symbolic form of the FSS Eigenvalue problem combines an ordinary differential equation (ODE) on the interval (0, ℓ) with a pair of homogeneous boundary conditions (BC), as follows.

(ODE)  y''(x) + λy(x) = 0,  0 < x < ℓ
(BC)   y(0) = 0,  y(ℓ) = 0

The primary task is to identify all scalars λ for which it is possible to satisfy all the conditions boxed above with some nonzero continuous function defined on [0, ℓ]. Of course, we are also interested in exactly which nonzero functions come up in the search for suitable λ-values.

To explain why it is reasonable to call the boxed problem above an eigenvalue problem, rewrite (ODE) in Leibniz notation and rearrange the terms a little:

−(d²/dx²)y = λy,  0 < x < ℓ.

Now compare the familiar eigenvalue equation Av = λv from the vector-matrix world. The correspondence is rather good: in both equations, we apply some kind of linear operation to the object of interest, and the result is a constant multiple of the thing we started with. Let's make this comparison more explicit.

Objects. In the vector-matrix world, we specify the dimension N and focus on vectors v that define scalar values v_k for 1 ≤ k ≤ N. In the FSS problem, we specify the interval width ℓ and focus on smooth functions y that define scalar values y(x) for 0 ≤ x ≤ ℓ.

Inner Products. For vectors u, v in R^N, the dot product is a scalar-valued combination built from corresponding components:

u · v := Σ_{k=1}^N u_k v_k.

For functions y_1 and y_2 defined on [0, ℓ], we use corresponding x-values instead of corresponding components, and generalize the sum with an integral. This defines the so-called inner product

(y_1, y_2) = ∫_0^ℓ y_1(x) y_2(x) dx.

Orthogonality. Vectors u, v in R^N are orthogonal when u · v = 0. Functions y_1, y_2 on [0, ℓ] are orthogonal when (y_1, y_2) = 0.
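The analogy between the dot product and the integral inner product can be seen numerically: sampling two functions on a fine grid and taking a scaled dot product of the sample vectors approximates the integral. The short Python sketch below is not part of the original notes; the two functions and the choice ℓ = 2 are arbitrary illustrations.

```python
# A numerical sketch (not from the notes): the inner product
# (y1, y2) = ∫_0^ℓ y1(x) y2(x) dx is a limit of scaled dot products.
import numpy as np

ell = 2.0                          # arbitrary interval width for illustration
N = 100_000                        # number of sample points
x = np.linspace(0.0, ell, N)
dx = x[1] - x[0]

y1 = np.sin(np.pi * x / ell)       # two sample functions on [0, ell]
y2 = x * (ell - x)

dot_version = np.dot(y1, y2) * dx  # scaled dot product of the sample vectors
exact = 32 / np.pi**3              # the integral, worked out by hand for these choices
print(dot_version, exact)          # the two values agree to several decimals
```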

Example. If ℓ = 1, so the interval of interest is [0, 1], then the functions y_1(x) = x and y_2(x) = 2 − 3x are orthogonal. To see this, we calculate their inner product:

(y_1, y_2) = ∫_0^1 x(2 − 3x) dx = ∫_0^1 (2x − 3x²) dx = [x² − x³]_0^1 = 0.

Object size. The Euclidean length of a vector v in R^N is related to its dot product with itself:

‖v‖ = (v · v)^{1/2} = (Σ_{k=1}^N v_k²)^{1/2}.

The norm of a function y on [0, ℓ] comes from the analogous operation on the inner product with itself:

‖y‖ = (y, y)^{1/2} = (∫_0^ℓ y(x)² dx)^{1/2}.

(Readers with a background in Electrical Engineering will recognize this as the Root Mean Square, or RMS, value associated with a signal y.)

Operators. In the vector-matrix context, a symmetric real matrix A can be applied to an object of interest, namely some vector v, to produce another object of the same type, the new vector Av. In the FSS problem, the operator L = −d²/dx² can be applied to an object of interest, now a function y, to produce another object of the same type, the new function L[y] = −y''.

Eigenvalues. The eigenvalue equation Av = λv for a given matrix A highlights those special vectors v for which multiplication by A produces a scalar multiple of the input object. In the function-space setup, we need to insert some boundary conditions to make the analogy work best. So we focus on smooth functions y = y(x) defined on the interval [0, ℓ] that also satisfy (BC), y(0) = 0, y(ℓ) = 0. In the slice of function space determined by these conditions, to say that a scalar λ is an eigenvalue for L means that some nonzero function y satisfying the given boundary conditions satisfies L[y] = λy, i.e., y'' + λy = 0. Any such y is called an eigenfunction for L corresponding to the eigenvalue λ. For each eigenvalue λ, we define the eigenspace

E(λ) = {y = y(x) : y'' + λy = 0, y(0) = 0, y(ℓ) = 0}.
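As a quick machine check of the example and the norm formula above (not part of the original notes), SymPy can evaluate the integrals directly:

```python
# Symbolic check of the example (ℓ = 1): y1(x) = x and y2(x) = 2 - 3x are orthogonal.
import sympy as sp

x = sp.symbols('x')

inner = sp.integrate(x * (2 - 3*x), (x, 0, 1))
print(inner)                                  # 0, so (y1, y2) = 0: orthogonal

norm_y1 = sp.sqrt(sp.integrate(x**2, (x, 0, 1)))
print(norm_y1)                                # sqrt(3)/3, the norm of y1(x) = x on [0, 1]
```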

Learning Plan. In the previous section, we chose a particular 2 × 2 matrix A and used its eigenvectors to split several interesting problems involving vectors into scalar problems in orthogonal components. It took some work to find the eigenvectors, but knowing them turned out to be useful in a variety of situations. Of course there are other symmetric matrices, and the ideas we illustrated using a specific choice for A would also work for them.

In this section, we will take the parallel approach in function space. We will start with one particular eigenvalue problem, the FSS setup boxed above. Then we will see how to use its eigenfunctions to split several interesting problems involving functions into scalar problems associated with the orthogonal components. It will take some work to set this up, by finding the eigenvalues and eigenfunctions for the FSS, but once we have them they can be put to work in various ways. Of course there are other eigenvalue problems, and the ideas we will illustrate for the FSS will also work for them.

Eigen-analysis. To find the real numbers λ for which (ODE)(BC) permit a nontrivial solution function y, we will simply consider every possible value of λ and determine the set of functions compatible with (ODE)(BC). For most choices of λ, this set of functions will contain the single element y(x) = 0. But there will be a few where something more interesting happens: these will be the desired eigenvalues. To check each real number λ, an algebraic approach will be essential: we need to handle large blocks of possible λ-values by similar methods. So imagine λ is some particular real number, and start solving (ODE). Guessing y = e^{sx} produces a solution if and only if s² = −λ, or s = ±√(−λ). The sign of the quantity inside the square root here determines the interpretation of this equation. There are three distinct possibilities.

Case λ < 0. If λ < 0, then (−1)λ > 0, so it makes sense to define α = √(−λ) > 0. The general solution of (ODE) becomes

y = Ae^{αx} + Be^{−αx},  A, B ∈ R.

The BC y(0) = 0 forces B = −A, so the family of compatible functions reduces to

y = Ae^{αx}[1 − e^{−2αx}],  A ∈ R.

To arrange the BC y(ℓ) = 0 requires

0 = Ae^{αℓ}[1 − e^{−2αℓ}].

The only way this product can produce 0 is if one of the factors equals 0. Obviously e^{αℓ} > 0. And since α > 0 and ℓ > 0, we have e^{−2αℓ} < e^0 = 1, so [1 − e^{−2αℓ}] > 0. By elimination, the zero factor must be A, and this reduces the family of solutions for (ODE)(BC) to the single function y(x) = 0. Therefore no negative value for λ is an eigenvalue.

Case λ = 0. If λ = 0, then the general solution of (ODE) is

y = A + Bx,  A, B ∈ R.

The BC y(0) = 0 forces A = 0, so the family of compatible functions reduces to

y = Bx,  B ∈ R.

To arrange the BC y(ℓ) = 0 requires

0 = Bℓ.

The only way this product can produce 0 is if one of the factors equals 0. Since ℓ > 0, the zero factor must be B, and this reduces the family of solutions for (ODE)(BC) to the single function y(x) = 0. Therefore 0 is not an eigenvalue.

Case λ > 0. If λ > 0, then it makes sense to define ω = √λ > 0. The general solution of (ODE) becomes

y = A cos(ωx) + B sin(ωx),  A, B ∈ R.

The BC y(0) = 0 forces A = 0, so the family of compatible functions reduces to

y = B sin(ωx),  B ∈ R.

Now the BC y(ℓ) = 0 requires

0 = B sin(ωℓ).

The only way this product can produce 0 is if one of the factors equals 0. To get a nonzero function y, we need B ≠ 0, so the zero factor must be sin(ωℓ). Now sin(ωℓ) = 0 will happen only for certain choices of ω = √λ. Since ω > 0, the full list of these is

ω_n = nπ/ℓ,  n = 1, 2, ....

Thus the complete list of FSS eigenvalues with their corresponding eigenspaces is

λ_n = ω_n² = n²π²/ℓ²,   E(λ_n) = {B sin(nπx/ℓ) : B ∈ R},   n = 1, 2, 3, ....

Each eigenspace is one-dimensional, and a representative eigenfunction corresponding to λ_n is y_n(x) = sin(nπx/ℓ). (Any nonzero multiple of that same sine function would make an equally useful choice for y_n.) Here we have an infinite sequence of eigenfunctions. ////
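The conclusion of the eigen-analysis is easy to spot-check symbolically. The SymPy sketch below (not in the original notes) confirms that y_n(x) = sin(nπx/ℓ) satisfies the ODE with λ_n = (nπ/ℓ)² together with both boundary conditions.

```python
# Spot-check of the eigenpairs: y_n(x) = sin(nπx/ℓ) solves y'' + λ_n y = 0
# with λ_n = (nπ/ℓ)^2, and satisfies y(0) = y(ℓ) = 0.
import sympy as sp

x, ell = sp.symbols('x ell', positive=True)
n = sp.symbols('n', positive=True, integer=True)

y_n = sp.sin(n*sp.pi*x/ell)
lam_n = (n*sp.pi/ell)**2

print(sp.simplify(sp.diff(y_n, x, 2) + lam_n*y_n))    # 0: the ODE holds
print(y_n.subs(x, 0), sp.simplify(y_n.subs(x, ell)))  # 0 0: both BCs hold
```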

FSS Summary. The problem

(ODE)  y''(x) + λy(x) = 0,  0 < x < ℓ,
(BC)   y(0) = 0 = y(ℓ),

has an infinite sequence of eigenvalues, all positive:

λ_n := (nπ/ℓ)²,  n = 1, 2, 3, ....

When λ = λ_n, all solutions of (ODE)/(BC) are multiples of y_n(x) = sin(nπx/ℓ). These satisfy the following orthogonality relation:

(y_m, y_n) = { 0, if m ≠ n;  ℓ/2, if m = n.

Sketches.

Orthogonality. The FSS eigenfunctions just found are mutually orthogonal:

(y_n, y_k) = ∫_0^ℓ sin(nπx/ℓ) sin(kπx/ℓ) dx = { 0, if n ≠ k;  ℓ/2, if n = k.

This can be proved directly, using trig identities, as in Trench's Example 11.1.4, but the more theoretical explanation in Trench's Theorem 13.2.4 is much more elegant. Somewhere near here is a box containing a summary of this core information.
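The orthogonality relation can also be verified by brute-force integration for small indices; a SymPy sketch (not from the notes) is below.

```python
# Brute-force check of (y_m, y_n) = ∫_0^ℓ sin(mπx/ℓ) sin(nπx/ℓ) dx for small m, n:
# ℓ/2 on the diagonal, 0 off the diagonal.
import sympy as sp

x, ell = sp.symbols('x ell', positive=True)

def inner(m, n):
    return sp.integrate(sp.sin(m*sp.pi*x/ell) * sp.sin(n*sp.pi*x/ell), (x, 0, ell))

for m in range(1, 4):
    print([sp.simplify(inner(m, n)) for n in range(1, 4)])
# [ell/2, 0, 0], [0, ell/2, 0], [0, 0, ell/2]
```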

Linear Combinations. For a symmetric N × N matrix A, every vector f in R^N can be written as a linear combination of the mutually orthogonal eigenvectors for A. Even in the vector-matrix world this is a major result ("The Spectral Theorem") that takes some effort to prove. It's reasonable to expect any corresponding statement in function space, if there is one, to be even more challenging. In considering the operator L = −d²/dx², we have found infinitely many eigenfunctions, and the natural analogue of a linear combination of them would be an infinite series, of the form

b_1 y_1(x) + b_2 y_2(x) + ⋯ = Σ_{k=1}^∞ b_k y_k(x).

Dealing with infinite series instead of finite sums brings in some new technical issues: does the series converge? What interpretations of "converge" apply here? Once those are settled, we are ready to ask, is the set of eigenfunctions sufficiently complete for a series like this to reproduce any realistic function? These are all important and interesting questions, and almost all of them are beyond the scope of this course. For now, we will treat series like this just like finite sums. We will differentiate and integrate them term-by-term, just like any finite sum, and we will just assume the representability property conjectured above.

Task #0: Coefficient-Matching. Any identity of the form

Σ_{k=1}^∞ a_k sin(kπx/ℓ) = Σ_{k=1}^∞ b_k sin(kπx/ℓ),  0 < x < ℓ,

forces the corresponding coefficients to match: a_n = b_n for each n.

Proof. Recognize the identity as the expanded form of this equation in function space:

Σ_{k=1}^∞ a_k y_k = Σ_{k=1}^∞ b_k y_k.

Fix the n you wish to test, then take the inner product of both sides with y_n to get

(Σ_k a_k y_k, y_n) = (Σ_k b_k y_k, y_n)   ⟹   a_n (y_n, y_n) = b_n (y_n, y_n)   ⟹   [a_n − b_n](y_n, y_n) = 0.

Recall that (y_n, y_n) = ℓ/2 > 0, so this forces a_n − b_n = 0, i.e., b_n = a_n. ////

Task #1: Coefficient Extraction. Given some f : [0, ℓ] → R, find the constants b_k such that

f = Σ_{k=1}^∞ b_k y_k.  (∗)

Solution: Use orthogonality: for fixed n ∈ N,

(f, y_n) = Σ_{k=1}^∞ b_k (y_k, y_n) = 0 + ⋯ + 0 + b_n (y_n, y_n) + 0 + ⋯   ⟹   b_n = (f, y_n)/(y_n, y_n).

Notice the exact correspondence with the cognate matrix task in Section A. We deduce the coefficient formulas

b_n = (2/ℓ) ∫_0^ℓ f(x) sin(nπx/ℓ) dx,  n = 1, 2, ....  (∗∗)

For a given function from space E (i.e., a smooth f with f(0) = 0 = f(ℓ)), formula (∗∗) works! Sometimes we stretch it beyond E by taking a general function f, using line (∗∗) to define a sequence of coefficients b_n, and then defining

f̃(x) = Σ_{k=1}^∞ b_k sin(kπx/ℓ).

This f̃ is called the Fourier sine series for f. Again: If f ∈ E, then f̃ = f; if f ∉ E, then f̃ is closely related to f, with details to be described later.

Example. Let ℓ = 1. Write the Fourier Sine Series for the function f(x) = x(1 − x) on the interval [0, 1].

Solution: The coefficient formulas derived above give, for each n,

b_n = 2 ∫_0^1 [x − x²] sin(nπx) dx.

Two steps of integration by parts reveal

b_n = 4(1 − cos(nπ))/(n³π³) = (8/(n³π³)) · (1 − cos(nπ))/2.

The rearrangement shown on the right is meant to exploit the fact that cos(nπ) = (−1)^n. It makes the parenthesized ratio equal 1 when n is odd, and 0 when n is even. The identity f(x) = Σ b_n y_n(x) for 0 ≤ x ≤ 1 turns into

x − x² = (8/π³) sin(πx) + (8/(27π³)) sin(3πx) + (8/(125π³)) sin(5πx) + (8/(343π³)) sin(7πx) + ⋯.

Some readers may be surprised to hear that the representation on the right of this equation is sometimes preferable to the one on the left. ////
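A SymPy check of this example (not part of the notes) confirms the coefficient formula obtained by integration by parts:

```python
# Check of the example: for ℓ = 1 and f(x) = x - x^2,
# b_n = 2∫_0^1 (x - x^2) sin(nπx) dx equals 4(1 - cos(nπ))/(n^3 π^3).
import sympy as sp

x = sp.symbols('x')
n = sp.symbols('n', positive=True, integer=True)

b_n = 2*sp.integrate((x - x**2)*sp.sin(n*sp.pi*x), (x, 0, 1))
claimed = 4*(1 - sp.cos(n*sp.pi))/(n**3*sp.pi**3)

print(sp.simplify(b_n - claimed))             # 0: the formulas agree
print([b_n.subs(n, k) for k in range(1, 6)])  # 8/pi**3, 0, 8/(27*pi**3), 0, 8/(125*pi**3)
```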

Task #2: Equation-Solving. Recall the slice of function space in which the FSS problem operates:

E = {y = y(x) : y is smooth on [0, ℓ], and y(0) = 0 = y(ℓ)}.

Given f ∈ E, we may need to solve for u in the abstract equation L[u] = f (u ∈ E). That is, we seek a function u such that

−u''(x) = f(x),  0 < x < ℓ;   u(0) = 0 = u(ℓ).  (†)

Solution: The eigenfunctions y_k(x) = sin(kπx/ℓ) are perfectly adapted to the operator L and the set E, so postulate a solution in the form u = Σ b_k y_k, and look instead for the coefficients b_k. That is, seek b_k so that (†) holds for

u(x) = Σ_{k=1}^∞ b_k sin(kπx/ℓ).  (††)

The BC's are automatic, so only the ODE needs to be checked. Plug the series in there:

f(x) = −u''(x) = Σ_{k=1}^∞ b_k [−(d²/dx²) sin(kπx/ℓ)] = Σ_{k=1}^∞ b_k (kπ/ℓ)² sin(kπx/ℓ).

(Actually calculating the second derivative shown here is the recommended approach, because it gives one the luxury of forgetting the eigenvalue associated with the eigenfunction y_k. But it's worth observing that it is precisely the eigenvalue property of y_k that gives the result its convenient form.) This is a case for coefficient extraction (Task #1): here we have a FSS representation for the given f, only with constant coefficients of the elaborate form k²π²b_k/ℓ². As shown above, these must satisfy

k²π²b_k/ℓ² = (2/ℓ) ∫_0^ℓ f(x) sin(kπx/ℓ) dx.  (†††)

Hence the problem is solved by the series shown in (††), once we calculate

b_k = (2ℓ/(k²π²)) ∫_0^ℓ f(x) sin(kπx/ℓ) dx,  k = 1, 2, ....

The derivation above could be used with any f. In the particular case where f(x) = x − x² and ℓ = 1, we know the value of the right side in (†††), and we have

k²π²b_k = (8/(k³π³)) · (1 − cos(kπ))/2.

Therefore this particular combination of f and ℓ leads to

u(x) = Σ_{k=1}^∞ (8/(k⁵π⁵)) · ((1 − cos(kπ))/2) sin(kπx)
     = (8/π⁵)[ sin(πx) + (1/3⁵) sin(3πx) + (1/5⁵) sin(5πx) + ⋯ ],  0 < x < 1.

It's a pretty easy calculus exercise to come up with a simple polynomial representation for the same function:

u(x) = (1/12)(x⁴ − 2x³ + x).

Sometimes, amazingly, the series form has advantages over the polynomial style. ////
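It is worth seeing numerically how quickly the series form approaches the polynomial. The Python sketch below (not in the original notes) compares partial sums of the series against u(x) = (x⁴ − 2x³ + x)/12.

```python
# Comparing partial sums of the series solution with the polynomial answer
# u(x) = (x^4 - 2x^3 + x)/12 for Task #2 with f(x) = x - x^2 and ℓ = 1.
import numpy as np

x = np.linspace(0.0, 1.0, 201)
u_poly = (x**4 - 2*x**3 + x) / 12

def u_series(x, terms):
    total = np.zeros_like(x)
    for k in range(1, 2*terms, 2):              # only odd k contribute
        total += 8/(k**5 * np.pi**5) * np.sin(k*np.pi*x)
    return total

for terms in (1, 2, 5):
    print(terms, np.max(np.abs(u_series(x, terms) - u_poly)))
# Already with one term the error is about 1e-4, thanks to the 1/k^5 decay.
```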

Task #3: Continuous Evolution (First Order in t: Heat-Style). The function-space counterpart of a time-varying vector u in R^N satisfying u' = −Au would be a time-varying function u on [0, ℓ], satisfying a dynamic relationship like

du/dt = −Lu,  with L = −d²/dx² as before.

To capture this idea in standard calculus-friendly notation, we recognize u as a function of two real variables, x and t, and switch to partial-derivative notation to emphasize the independence of these two variables. This transforms the dynamic relationship above into the following partial differential equation (PDE) for the unknown function u = u(x, t):

(PDE)  ∂u/∂t = ∂²u/∂x²,  0 < x < ℓ, t > 0,
(BC)   u(0, t) = 0 = u(ℓ, t),  t > 0,
(IC)   u(x, 0) = f(x),  0 < x < ℓ.

The line labelled (BC) repeats the requirement that the time-varying function of x we seek must lie in the space of interest for FSS problems. In the final line, (IC) stands for "initial condition": it will allow us to find a unique solution for the so-called Boundary-Value Problem (BVP) above.

To be specific, suppose ℓ = 1 and f(x) = x − x² is the function whose FSS coefficients b_n we calculated earlier. Since any reasonable function of x can be written as a superposition of eigenfunctions with constant coefficients, any reasonable time-varying function of x can be given a similar representation by letting the coefficients vary in time. So we postulate the desired solution in the following form:

u(x, t) = Σ_{n=1}^∞ B_n(t) sin(nπx).

Finding the time-varying FSS coefficients B_n(t) will solve the problem. Again, the BC's are automatic thanks to the form of the FSS, so only the PDE and IC's can help us here. Use these assets in two steps.

1. Initialize: From (IC) and the FSS coefficient formula,

f(x) = u(x, 0) = Σ_{n=1}^∞ B_n(0) sin(nπx) for 0 < x < 1
   ⟹   B_n(0) = 2 ∫_0^1 f(x) sin(nπx) dx =: b_n for n = 1, 2, ....

(We have explicit values for b_n from a recent calculation.)

2. Propagate: Plug the postulated series solution into the PDE, calculating derivatives term-by-term.

For 0 < x < 1,

0 = ∂u/∂t − ∂²u/∂x² = (∂/∂t)[Σ B_n(t) sin(nπx)] − (∂²/∂x²)[Σ B_n(t) sin(nπx)]
  = Σ_{n=1}^∞ [B_n'(t) + n²π² B_n(t)] sin(nπx).

This is an extreme case for coefficient-matching (Task #0). On the left side is the simplest imaginable combination of eigenfunctions, in which all coefficients are 0. On the right, each instant t gives a different-looking superposition of eigenfunctions, but at each instant t we must have the equation

B_n'(t) + n²π² B_n(t) = 0  for n = 1, 2, ....

So for each n we have B_n'(t) = −n²π² B_n(t), giving B_n(t) = B_n(0)e^{−n²π²t}. Combine these results to find B_n(t) = b_n e^{−n²π²t} with b_n as above. Here is the desired solution, in series form:

u(x, t) = Σ_{n=1}^∞ b_n e^{−n²π²t} sin(nπx)
        = Σ_{n=1}^∞ (8(1 − (−1)^n)/(π³ · 2n³)) e^{−n²π²t} sin(nπx)
        = (8/π³)[ e^{−π²t} sin(πx) + (e^{−9π²t}/27) sin(3πx) + (e^{−25π²t}/125) sin(5πx) + ⋯ ].
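A few lines of Python (not part of the notes) show this solution in action; with ℓ = 1 the decay rates are n²π², so even the slowest mode is sharply damped by t = 0.1.

```python
# Partial sums of the heat-style solution u(x, t) = Σ b_n e^{-n²π²t} sin(nπx)
# for ℓ = 1 and f(x) = x - x², truncated after a modest number of terms.
import numpy as np

def u_heat(x, t, terms=25):
    total = np.zeros_like(x)
    for n in range(1, terms + 1):
        b_n = 4*(1 - np.cos(n*np.pi)) / (n**3 * np.pi**3)
        total += b_n * np.exp(-(n*np.pi)**2 * t) * np.sin(n*np.pi*x)
    return total

x = np.linspace(0.0, 1.0, 6)
print(u_heat(x, 0.0))   # ≈ f(x) = x - x² at t = 0
print(u_heat(x, 0.1))   # the profile has decayed sharply toward u ≡ 0
```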

Task #4: Continuous Evolution (Second Order in t: Wave-Style). Now suppose the evolution of a time-varying function of x is controlled by its second time derivative, as in

d²u/dt² = −Lu.

This is the abstract version of a homogeneous wave-motion problem. In more concrete notation, we seek a function u = u(x, t) obeying

(PDE)   ∂²u/∂t² = ∂²u/∂x²,  0 < x < 1, t > 0,
(BC)    u(0, t) = 0 = u(1, t),  t > 0,
(IC_1)  u(x, 0) = f(x),  0 < x < 1,
(IC_2)  u_t(x, 0) = 0,  0 < x < 1.

Again we seek a solution in the form of a standard eigenfunction series with time-varying coefficients:

u(x, t) = Σ_{n=1}^∞ B_n(t) sin(nπx).

This makes (BC) work out automatically. To bring in the (PDE) and (IC) information, we follow the same steps as before.

1. Initialize: The first IC gives

f(x) = u(x, 0) = Σ_{n=1}^∞ B_n(0) sin(nπx),

so we get B_n(0) = b_n again. The second IC says

0 = u_t(x, 0) = Σ_{n=1}^∞ B_n'(0) sin(nπx),

giving B_n'(0) = 0 for n = 1, 2, 3, .... (This is coefficient-matching at work in a particularly easy situation.)

2. Propagate: Plugging the postulated series form into the PDE gives, for each fixed t > 0,

0 = u_tt − u_xx = Σ_{n=1}^∞ [B_n''(t) + n²π² B_n(t)] sin(nπx)  for 0 < x < 1.

Since the coefficients in square brackets are independent of x, this is a FSS identity for each fixed t > 0, and it forces each B_n to obey

B_n''(t) + n²π² B_n(t) = 0,  for each t > 0.

The general solution of this familiar ODE is well known:

B_n(t) = α_n cos(nπt) + β_n sin(nπt),  α_n, β_n ∈ R.

Thanks to the initial information above, we have b_n = B_n(0) = α_n and 0 = B_n'(0) = nπβ_n. Thus β_n = 0 and we have

B_n(t) = b_n cos(nπt),  n = 1, 2, ....

If we are given f(x) = x − x² as before, the constants b_n are known, and we can present a completely explicit answer in series form:

u(x, t) = Σ_{n=1}^∞ b_n cos(nπt) sin(nπx)
        = Σ_{n=1}^∞ (8(1 − (−1)^n)/(π³ · 2n³)) cos(nπt) sin(nπx)
        = (8/π³)[ cos(πt) sin(πx) + (1/27) cos(3πt) sin(3πx) + (1/125) cos(5πt) sin(5πx) + ⋯ ].
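The same kind of sketch (again not from the notes) shows the wave-style behaviour: instead of decaying, the profile oscillates, passing through its mirror image at t = 1 and returning to its initial shape with period 2.

```python
# Partial sums of the wave-style solution u(x, t) = Σ b_n cos(nπt) sin(nπx)
# for ℓ = 1 and f(x) = x - x².
import numpy as np

def u_wave(x, t, terms=25):
    total = np.zeros_like(x)
    for n in range(1, terms + 1):
        b_n = 4*(1 - np.cos(n*np.pi)) / (n**3 * np.pi**3)
        total += b_n * np.cos(n*np.pi*t) * np.sin(n*np.pi*x)
    return total

x = np.linspace(0.0, 1.0, 6)
print(u_wave(x, 0.0))   # ≈ f(x) at t = 0
print(u_wave(x, 1.0))   # ≈ -f(x): the inverted profile
print(u_wave(x, 2.0))   # ≈ f(x) again: the motion is 2-periodic in t
```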

D. Practical Notes for FSS Calculations

Piecewise-defined functions. Consider the function f, defined for each x in the interval [0, 2] by

f(x) = { 1, for x = 0;  x, for 0 < x < 1;  1/2, for x = 1;  2 − x, for 1 < x ≤ 2.

We are interested in the number

A = ∫_0^2 f(x) dx.

Intuitively, A represents the area above the interval [0, 2] on the x-axis and below the graph of f. A glance at the sketch below reveals, correctly, that A = 1.

[Sketch: graph of y = f(x) for 0 ≤ x ≤ 2.]

It's vitally important that even though the function f has a piecewise definition, there is nothing piecewise about the definite integral A. It's just a single number, independent of x, with a natural physical meaning. Two other points are worth mentioning.

1. Isolated point-values make no difference inside integrals. The function f is discontinuous at the points x = 0 and x = 1, and this has no influence on the area of interest. For all points other than those two, we have f(x) = g(x) for

g(x) = { x, for 0 ≤ x ≤ 1;  2 − x, for 1 < x ≤ 2,

and this makes

∫_0^2 f(x) dx = ∫_0^2 g(x) dx.

If you're still unconvinced, sketch the graph of h(x) = f(x) − g(x) and contemplate ∫_0^2 h(x) dx. Then note that ∫_0^2 f(x) dx = ∫_0^2 h(x) dx + ∫_0^2 g(x) dx.

2. To calculate the integral A the hard way, one could split the job into segments where the work is routine. The natural division point occurs at x = 1, where the formula describing f switches. We would say

A = ∫_0^2 f(x) dx = ∫_0^1 f(x) dx + ∫_1^2 f(x) dx
  = ∫_0^1 x dx + ∫_1^2 (2 − x) dx
  = [x²/2]_0^1 + [2x − x²/2]_1^2
  = (1/2 − 0) + ([4 − 2] − [2 − 1/2]) = 1.
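SymPy's Piecewise type handles exactly this kind of calculation; the sketch below (not part of the notes) checks the area computation, using the function g that agrees with f away from the two isolated points.

```python
# Check of the area computation: the isolated values of f at x = 0 and x = 1
# do not affect the integral, so integrate g instead.
import sympy as sp

x = sp.symbols('x')
g = sp.Piecewise((x, x <= 1), (2 - x, True))   # equals f except at x = 0 and x = 1
print(sp.integrate(g, (x, 0, 2)))              # 1, the value called A above
```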

Any calculation of FSS coefficients associated with f would use the same principles. If ℓ = 2, we would have

b_n = (2/2) ∫_0^2 f(x) sin(nπx/2) dx = ∫_0^1 x sin(nπx/2) dx + ∫_1^2 (2 − x) sin(nπx/2) dx.

Careful integration by parts and evaluation would produce a result for b_n that is a single algebraic quantity depending on n.

FSS Coefficients have limit 0. Pick any function f on [0, ℓ] for which f' is bounded and integrable, plug into the coefficient-extraction formula, and integrate by parts:

b_n = (2/ℓ) ∫_0^ℓ f(x) sin(nπx/ℓ) dx
    = (2/ℓ) [−f(x) (ℓ/(nπ)) cos(nπx/ℓ)]_{x=0}^{x=ℓ} + (2/ℓ) ∫_0^ℓ (ℓ/(nπ)) cos(nπx/ℓ) f'(x) dx
    = (2/(nπ)) [ f(0) − f(ℓ) cos(nπ) + ∫_0^ℓ cos(nπx/ℓ) f'(x) dx ].

Since |cos(θ)| ≤ 1 for any real θ, taking absolute values on both sides implies

|b_n| ≤ (2/(nπ)) [ |f(0)| + |f(ℓ)| + ∫_0^ℓ |f'(x)| dx ].

For each specific function f we might consider, the bracketed quantity on the right is some positive number independent of n. Invent the symbol M for this number and rearrange the inequality to show

|b_n| ≤ 2M/(nπ).

Clearly, as n → ∞ we will have b_n → 0. This happens for any reasonable f, and the derivation just shown adapts easily even to functions that are only piecewise continuous. Knowing that b_n → 0 as n → ∞ makes a nice quick sanity check on hand-calculated coefficient sequences. To lock this idea into your memory, look back at the sample tasks in Section C above, and note that all the coefficient sequences derived there have this decay property. (An extension of this topic: There is a correspondence between the decay rate of the coefficient sequence and the smoothness of the function f. The smoother f is, the faster b_n tends to 0, and vice versa.)
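To see the decay in action for the piecewise example with ℓ = 2, one can compute the first few b_n by machine (a sketch, not from the notes). Because f agrees almost everywhere with a continuous function that has a corner at x = 1, the nonzero coefficients decay like 1/n², comfortably faster than the 2M/(nπ) bound requires.

```python
# First few FSS coefficients of the piecewise function above (ℓ = 2):
# b_n = ∫_0^1 x sin(nπx/2) dx + ∫_1^2 (2 - x) sin(nπx/2) dx.
import sympy as sp

x = sp.symbols('x')

def b(n):
    return (sp.integrate(x*sp.sin(n*sp.pi*x/2), (x, 0, 1))
            + sp.integrate((2 - x)*sp.sin(n*sp.pi*x/2), (x, 1, 2)))

for n in range(1, 8):
    bn = sp.simplify(b(n))
    print(n, bn, float(bn))
# The values shrink like 1/n², so in particular b_n → 0, as predicted.
```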