More Least Squares Convergence and ODEs

James K. Peterson
Department of Biological Sciences and Department of Mathematical Sciences
Clemson University

April 12, 2019

Outline
Fourier Sine and Cosine Series Revisited
ODEs
Let's look at Least Squares and Uniform Convergence for the Fourier sine and Fourier cosine series we have discussed. In those cases, we have a function \(f\) defined only on \([0, L]\). Assuming \(f'\) exists and is integrable on \([0, L]\), consider the Fourier Sine series for \(f\) on \([0, L]\). For the odd extension \(f_o\) of \(f\) on \([0, 2L]\), we know the Fourier sine and cosine coefficients \(A_{i,o}\) and \(B_{i,o}\) satisfy
\[
A_{i,o} = \frac{1}{L} \int_0^{2L} f_o(s) \sin\left(\frac{i\pi}{L} s\right) ds = A_i = \frac{2}{L} \int_0^{L} f(s) \sin\left(\frac{i\pi}{L} s\right) ds, \quad i \geq 1,
\]
\[
B_{0,o} = \frac{1}{2L} \int_0^{2L} f_o(s)\, ds = 0, \qquad
B_{i,o} = \frac{1}{L} \int_0^{2L} f_o(s) \cos\left(\frac{i\pi}{L} s\right) ds = 0, \quad i \geq 1.
\]
We calculate the inner products by integrating by parts. Let the Fourier coefficients for \(f_o'\) be \(A'_{i,o}\) and \(B'_{i,o}\). We have for \(i > 0\)
\[
A'_{i,o} = \frac{1}{L} \left\langle f_o', \sin\left(\frac{i\pi}{L} x\right) \right\rangle
= \frac{1}{L} \left( f_o(x) \sin\left(\frac{i\pi}{L} x\right) \Big|_0^{2L} - \int_0^{2L} f_o(x)\, \frac{i\pi}{L} \cos\left(\frac{i\pi}{L} x\right) dx \right)
= -\frac{i\pi}{L} B_{i,o}.
\]
We also have for \(i > 0\)
\[
B'_{i,o} = \frac{1}{L} \left\langle f_o', \cos\left(\frac{i\pi}{L} x\right) \right\rangle
= \frac{1}{L} \left( f_o(x) \cos\left(\frac{i\pi}{L} x\right) \Big|_0^{2L} + \int_0^{2L} f_o(x)\, \frac{i\pi}{L} \sin\left(\frac{i\pi}{L} x\right) dx \right)
= \frac{i\pi}{L} A_{i,o},
\]
where the boundary terms vanish because \(\sin\) vanishes at \(0\) and \(2L\) in the first case and \(f_o(0) = f_o(2L) = 0\) in the second.
Then letting
\[
E_{n,o} = f_o' - \sum_{i=1}^{n} \frac{i\pi}{L} A_{i,o} \cos\left(\frac{i\pi}{L} x\right),
\]
we have
\[
0 \leq \langle E_{n,o}, E_{n,o} \rangle
= \langle f_o', f_o' \rangle
- 2 \sum_{i=1}^{n} \frac{i\pi}{L} A_{i,o} \left\langle f_o', \cos\left(\frac{i\pi}{L} x\right) \right\rangle
+ \sum_{i=1}^{n} \sum_{j=1}^{n} \frac{i\pi}{L} \frac{j\pi}{L} A_{i,o} A_{j,o} \left\langle \cos\left(\frac{i\pi}{L} x\right), \cos\left(\frac{j\pi}{L} x\right) \right\rangle.
\]
We know on \([0, 2L]\) that \(\langle \cos(i\pi x/L), \cos(j\pi x/L) \rangle = L\, \delta_{ij}\). Thus, we have
\[
0 \leq \langle f_o', f_o' \rangle
- 2 \sum_{i=1}^{n} \frac{i\pi}{L} A_{i,o} \left\langle f_o', \cos\left(\frac{i\pi}{L} x\right) \right\rangle
+ L \sum_{i=1}^{n} \left(\frac{i\pi}{L}\right)^2 A_{i,o}^2.
\]
Since \(\left\langle f_o', \cos\left(\frac{i\pi}{L} x\right) \right\rangle = L\, B'_{i,o} = i\pi A_{i,o}\), we have
\[
0 \leq \langle f_o', f_o' \rangle - 2L \sum_{i=1}^{n} \left(\frac{i\pi}{L}\right)^2 A_{i,o}^2 + L \sum_{i=1}^{n} \left(\frac{i\pi}{L}\right)^2 A_{i,o}^2
= \langle f_o', f_o' \rangle - L \sum_{i=1}^{n} \left(\frac{i\pi}{L}\right)^2 A_i^2,
\]
and so
\[
\sum_{i=1}^{n} \left(\frac{i\pi}{L}\right)^2 A_i^2 \leq \frac{1}{L} \langle f_o', f_o' \rangle.
\]
But we know \(\int_0^{2L} (f_o')^2(s)\, ds = 2 \int_0^{L} (f')^2(s)\, ds\), and so for all \(n\),
\[
\sum_{i=1}^{n} \left(\frac{i\pi}{L}\right)^2 A_i^2 \leq \frac{2}{L} \|f'\|_2^2.
\]
Note this immediately tells us that \(i A_i \to 0\) in addition to \(A_i \to 0\). Hence, when \(f'\) is integrable we get more information about the rate at which these Fourier coefficients go to zero. Note our calculations have also told us the Fourier Sine Series expansion of \(f'\) on \([0, L]\) is
\[
\sum_{i=1}^{\infty} \frac{2}{L} \left\langle f', \sin\left(\frac{i\pi}{L} x\right) \right\rangle \sin\left(\frac{i\pi}{L} x\right)
= -\sum_{i=1}^{\infty} \frac{i\pi}{L}\, B_i \sin\left(\frac{i\pi}{L} x\right).
\]
For the even extension \(f_e\) of \(f\) on \([0, 2L]\), we know the Fourier sine and cosine coefficients of \(f_e\) satisfy
\[
A_{i,e} = \frac{1}{L} \int_0^{2L} f_e(s) \sin\left(\frac{i\pi}{L} s\right) ds = 0, \quad i \geq 1,
\]
\[
B_{0,e} = \frac{1}{2L} \int_0^{2L} f_e(s)\, ds = B_0 = \frac{1}{L} \int_0^{L} f(s)\, ds, \qquad
B_{i,e} = \frac{1}{L} \int_0^{2L} f_e(s) \cos\left(\frac{i\pi}{L} s\right) ds = B_i = \frac{2}{L} \int_0^{L} f(s) \cos\left(\frac{i\pi}{L} s\right) ds, \quad i \geq 1.
\]
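As a quick numerical sanity check on the coefficient bound \(\sum_{i=1}^{n} (i\pi/L)^2 A_i^2 \leq (2/L)\|f'\|_2^2\), the following Python sketch (not part of the original slides; the sample choice \(f(x) = x(L-x)\) on \([0,1]\) is an assumption made for illustration) estimates the sine coefficients by quadrature and verifies the bound:

```python
import math

L = 1.0
f  = lambda x: x * (L - x)      # assumed sample f with f(0) = f(L) = 0
fp = lambda x: L - 2.0 * x      # its derivative

def simpson(h, a, b, n=2000):
    # composite Simpson rule; n must be even
    dx = (b - a) / n
    s = h(a) + h(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * h(a + k * dx)
    return s * dx / 3.0

def A(i):
    # i-th Fourier sine coefficient of f on [0, L]
    return (2.0 / L) * simpson(lambda x: f(x) * math.sin(i * math.pi * x / L), 0.0, L)

# Bessel-type bound: sum of (i pi A_i / L)^2 stays below (2/L) int_0^L (f')^2
bound = (2.0 / L) * simpson(lambda x: fp(x) ** 2, 0.0, L)
partial = sum((i * math.pi / L) ** 2 * A(i) ** 2 for i in range(1, 41))
```

For this \(f\) the coefficients are \(8L^2/(i\pi)^3\) for odd \(i\) and \(0\) for even \(i\), so the extra decay \(iA_i \to 0\) is visible directly.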
Let the Fourier coefficients for \(f_e'\) be \(A'_{i,e}\) and \(B'_{i,e}\). We have for \(i > 0\)
\[
A'_{i,e} = \frac{1}{L} \left\langle f_e', \sin\left(\frac{i\pi}{L} x\right) \right\rangle
= \frac{1}{L} \left( f_e(x) \sin\left(\frac{i\pi}{L} x\right) \Big|_0^{2L} - \int_0^{2L} f_e(x)\, \frac{i\pi}{L} \cos\left(\frac{i\pi}{L} x\right) dx \right)
= -\frac{i\pi}{L} B_{i,e}.
\]
We also have for \(i > 0\)
\[
B'_{i,e} = \frac{1}{L} \left\langle f_e', \cos\left(\frac{i\pi}{L} x\right) \right\rangle
= \frac{1}{L} \left( f_e(x) \cos\left(\frac{i\pi}{L} x\right) \Big|_0^{2L} + \int_0^{2L} f_e(x)\, \frac{i\pi}{L} \sin\left(\frac{i\pi}{L} x\right) dx \right)
= \frac{i\pi}{L} A_{i,e},
\]
where the boundary terms vanish because \(f_e(0) = f_e(2L) = f(0)\) and \(\cos(2i\pi) = 1\). Then
\[
0 \leq \left\langle f_e' + \sum_{i=1}^{n} \frac{i\pi}{L} B_{i,e} \sin\left(\frac{i\pi}{L} x\right),\ f_e' + \sum_{j=1}^{n} \frac{j\pi}{L} B_{j,e} \sin\left(\frac{j\pi}{L} x\right) \right\rangle
\]
\[
= \langle f_e', f_e' \rangle
+ 2 \sum_{i=1}^{n} \frac{i\pi}{L} B_{i,e} \left\langle f_e', \sin\left(\frac{i\pi}{L} x\right) \right\rangle
+ \sum_{i=1}^{n} \left(\frac{i\pi}{L}\right)^2 B_{i,e}^2 \left\langle \sin\left(\frac{i\pi}{L} x\right), \sin\left(\frac{i\pi}{L} x\right) \right\rangle.
\]
Since \(\langle \sin(i\pi x/L), \sin(i\pi x/L) \rangle = L\) on \([0, 2L]\) and \(\left\langle f_e', \sin\left(\frac{i\pi}{L} x\right) \right\rangle = L\, A'_{i,e} = -i\pi B_{i,e}\), this becomes
\[
0 \leq \langle f_e', f_e' \rangle - 2L \sum_{i=1}^{n} \left(\frac{i\pi}{L}\right)^2 B_{i,e}^2 + L \sum_{i=1}^{n} \left(\frac{i\pi}{L}\right)^2 B_{i,e}^2
= \langle f_e', f_e' \rangle - L \sum_{i=1}^{n} \left(\frac{i\pi}{L}\right)^2 B_{i,e}^2.
\]
Since \(\int_0^{2L} (f_e')^2(s)\, ds = 2 \int_0^{L} (f')^2(s)\, ds\) and \(B_{i,e} = B_i\), we conclude that for all \(n\),
\[
\sum_{i=1}^{n} \left(\frac{i\pi}{L}\right)^2 B_i^2 \leq \frac{2}{L} \|f'\|_2^2.
\]
Again, this immediately tells us that \(i B_i \to 0\) in addition to \(B_i \to 0\). Hence, when \(f'\) is integrable we get more information about the rate at which these Fourier coefficients go to zero.

We are now ready to show the Fourier Sine Series converges uniformly on \([0, L]\). Let \(T_n\) denote the \(n^{th}\) partial sum of the Fourier Sine Series on \([0, L]\). Then the difference of the \(m^{th}\) and \(n^{th}\) partial sums for \(m > n\) gives
\[
T_m(x) - T_n(x) = \sum_{i=n+1}^{m} A_i \sin\left(\frac{i\pi}{L} x\right)
= \sum_{i=n+1}^{m} \frac{L}{i\pi} \left( \frac{i\pi}{L} A_i \sin\left(\frac{i\pi}{L} x\right) \right).
\]
Now apply our analogue of the Cauchy–Schwarz inequality for series:
\[
|T_m(x) - T_n(x)| \leq \sum_{i=n+1}^{m} \frac{L}{i\pi} \left| \frac{i\pi}{L} A_i \right| \left| \sin\left(\frac{i\pi}{L} x\right) \right|
\leq \sqrt{ \sum_{i=n+1}^{m} \left(\frac{i\pi}{L}\right)^2 A_i^2 }\ \sqrt{ \sum_{i=n+1}^{m} \frac{L^2}{i^2 \pi^2} }.
\]
Since our Fourier Series derivative arguments imply
\[
\sum_{i=n+1}^{m} \left(\frac{i\pi}{L}\right)^2 A_i^2 \leq \frac{2}{L} \|f'\|_2^2,
\]
we have
\[
|T_m(x) - T_n(x)| \leq \sqrt{\frac{2}{L}}\, \|f'\|_2\, \frac{L}{\pi} \sqrt{ \sum_{i=n+1}^{m} \frac{1}{i^2} }.
\]
Since the series \(\sum 1/i^2\) converges, this says \((T_n)\) satisfies the UCC for series, and so the sequence of partial sums converges uniformly to a function \(T\), which by uniqueness of limits must be \(f\) except possibly at the points \(0\) and \(L\). We know if \(f(0) \neq 0\) or \(f(L) \neq 0\), the Fourier Sine series will converge to a function with a jump at those points. Hence, we know the Fourier Sine Series of \(f\) converges uniformly to \(f\) on compact subsets of \((0, L)\) as long as \(f'\) exists. The same sort of argument works for the Fourier Cosine Series on \([0, L]\), using the bounds we found from the Fourier Sine Series expansion for \(f'\). So there is a function \(Y\) to which this series converges uniformly. Since limits are unique, we must have \(Y = f\) except possibly at the points \(0\) and \(L\); here the even extension is continuous at \(0\) and \(L\), so no jumps are introduced there. Hence, we know the Fourier Cosine Series of \(f\) converges uniformly to \(f\) on compact subsets of \((0, L)\) as long as \(f'\) exists.
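The tail estimate predicts that the sup-norm error of the partial sums shrinks as more terms are kept. A small Python check (an illustration, again with the assumed sample \(f(x) = x(1-x)\) on \([0,1]\), whose sine coefficients are known in closed form) compares the worst-case error of two partial sums on a grid:

```python
import math

L = 1.0
f = lambda x: x * (L - x)

def A(i):
    # closed-form sine coefficients of x(L - x): 8 L^2/(i pi)^3 for odd i, 0 for even i
    return 8.0 * L * L / (i * math.pi) ** 3 if i % 2 == 1 else 0.0

def T(n, x):
    # n-th partial sum of the Fourier sine series of f
    return sum(A(i) * math.sin(i * math.pi * x / L) for i in range(1, n + 1))

grid = [k * L / 200.0 for k in range(201)]
def err(n):
    # sup-norm error of the n-th partial sum over the grid
    return max(abs(T(n, x) - f(x)) for x in grid)

e5, e40 = err(5), err(40)
```

With 40 terms the error is orders of magnitude below the 5-term error, matching the uniform convergence argument.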
Some kinds of ordinary differential equation models are amenable to solution using Fourier series techniques. The following problem is a simple one but illustrates how we can use Fourier expansions in this context. Consider the model
\[
u''(t) = f(t), \qquad u(0) = 0, \qquad u'(0) = u'(L),
\]
where \(f \in C([0, L])\) with \(f(0) = 0\). This is an example of a Boundary Value Problem or BVP, and problems of this sort need not have solutions at all. Note the easiest thing to do is to integrate:
\[
u'(t) = A + \int_0^t f(s)\, ds, \qquad
u(t) = B + \int_0^t u'(s)\, ds = B + At + \int_0^t \int_0^s f(z)\, dz\, ds.
\]
Now apply the boundary conditions:
\[
u(0) = B = 0.
\]
For the other condition, we have
\[
u'(0) = A = u'(L) = A + \int_0^L f(s)\, ds \quad \Longrightarrow \quad \int_0^L f(s)\, ds = 0,
\]
and we see we do not have a way to determine the value of \(A\). Hence, we have an infinite family of solutions \(S\),
\[
S = \left\{ \varphi(t) = At + \int_0^t \int_0^s f(z)\, dz\, ds \ :\ A \in \mathbb{R} \right\},
\]
as long as the external data \(f\) satisfies \(\int_0^L f(s)\, ds = 0\). To find a way to resolve the constant \(A\), we can use Fourier Series. To motivate how this kind of solution comes about, look at the model below for \(\theta > 0\):
\[
u''(t) + \theta^2 u(t) = 0, \qquad u(0) = 0, \qquad u'(0) = u'(L).
\]
This can be rewritten as
\[
u''(t) = -\theta^2 u(t), \qquad u(0) = 0, \qquad u'(0) = u'(L).
\]
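The compatibility condition \(\int_0^L f(s)\, ds = 0\) is easy to probe numerically, since \(u'(L) - u'(0) = \int_0^L f(s)\, ds\). The following Python sketch (illustrative only; the two sample data functions are assumptions) computes that gap for one admissible and one inadmissible \(f\):

```python
import math

L = 1.0
f_ok  = lambda t: math.sin(2.0 * math.pi * t / L)  # satisfies int_0^L f = 0
f_bad = lambda t: 1.0                              # violates the condition

def integral(h, a, b, n=2000):
    # composite trapezoid rule
    dx = (b - a) / n
    s = 0.5 * (h(a) + h(b)) + sum(h(a + k * dx) for k in range(1, n))
    return s * dx

# from u'(t) = A + int_0^t f(s) ds, the condition u'(0) = u'(L)
# forces int_0^L f(s) ds = 0
gap_ok  = integral(f_ok, 0.0, L)
gap_bad = integral(f_bad, 0.0, L)
```

A nonzero gap means the BVP has no solution at all, while a zero gap leaves the one-parameter family \(\varphi(t) = At + \int_0^t \int_0^s f\).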
Let \(Y\) denote the vector space of functions
\[
Y = \{ x : [0, L] \to \mathbb{R} \ :\ x \in C^2([0, L]),\ x(0) = 0,\ x'(0) = x'(L) \},
\]
where \(C^2([0, L])\) is the vector space of functions that have two continuous derivatives. Then this equation is equivalent to the differential operator \(D(u) = u''\) with domain \(Y\) satisfying \(D(u) = -\theta^2 u\), which is an example of an eigenvalue problem. Nontrivial solutions (i.e. solutions which are not identically zero on \([0, L]\)) exist only for certain values of \(-\theta^2\); these values are called eigenvalues of the differential operator \(D\), and the corresponding solutions \(u\) for a given eigenvalue are called eigenfunctions. The eigenfunctions are not unique, and there is an infinite family of eigenfunctions for each eigenvalue. Finding the eigenvalues amounts to finding the general solution of this model. The general solution is
\[
u(t) = \alpha \cos(\theta t) + \beta \sin(\theta t),
\]
and applying the boundary conditions, we find
\[
\alpha = 0, \qquad \beta\, \theta\, (1 - \cos(\theta L)) = 0.
\]
This has the unique solution \(\beta = 0\), giving us the trivial solution \(u \equiv 0\) on \([0, L]\), unless \(1 - \cos(\theta L) = 0\). This occurs when \(\theta_n = 2n\pi/L\) for integers \(n\). Thus, the eigenvalues of \(D\) are determined by \(\theta_n^2 = 4n^2\pi^2/L^2\), with eigenfunctions \(u_n(t) = \beta \sin(2n\pi t/L)\). On \([0, L]\) we already know these eigenfunctions are mutually orthogonal with \(\langle u_n, u_n \rangle = L/2\) on the interval \([0, L]\), and indeed these are the functions we previously saw but with only even arguments \(2n\). No \(\sin((2n+1)\pi t/L)\) functions! Hence the eigenfunctions \(\{\sqrt{2/L}\, \sin(2n\pi t/L)\}\) form an orthonormal sequence in \(Y\).
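We can confirm these eigenfunction facts numerically. The Python sketch below (not from the slides; the normalization \(L = 1\) is an assumption) checks that \(u_n(t) = \sin(2n\pi t/L)\) satisfies both boundary conditions, that distinct eigenfunctions are orthogonal, and that \(\langle u_n, u_n \rangle = L/2\):

```python
import math

L = 1.0                                   # assumed normalization of the interval
theta = lambda n: 2.0 * n * math.pi / L   # theta_n = 2 n pi / L
u  = lambda n, t: math.sin(theta(n) * t)  # eigenfunction candidates
up = lambda n, t: theta(n) * math.cos(theta(n) * t)

def ip(n, m, N=4000):
    # inner product <u_n, u_m> on [0, L], composite trapezoid rule
    dx = L / N
    s = 0.5 * (u(n, 0.0) * u(m, 0.0) + u(n, L) * u(m, L))
    s += sum(u(n, k * dx) * u(m, k * dx) for k in range(1, N))
    return s * dx

bc_value  = u(3, 0.0)               # boundary condition u(0) = 0
bc_slopes = up(3, 0.0) - up(3, L)   # boundary condition u'(0) = u'(L)
d12 = ip(1, 2)                      # distinct eigenfunctions: orthogonal
d22 = ip(2, 2)                      # squared length should be L/2
```

The same check applied to \(\sin((2n+1)\pi t/L)\) would fail the slope condition, which is why only the even arguments appear.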
Since \(f\) is continuous on \([0, L]\), if we also assume \(f\) is differentiable everywhere, or is differentiable except at a finite number of points where it has finite right- and left-hand derivative limits, we know it has a Fourier Sine Series
\[
f(t) = \sum_{n=1}^{\infty} A_n \sin\left(\frac{n\pi t}{L}\right), \qquad
A_n = \frac{2}{L} \int_0^L f(t) \sin\left(\frac{n\pi t}{L}\right) dt.
\]
We know any finite combination of the functions \(\sin\left(\frac{2n\pi t}{L}\right)\) satisfies the boundary conditions, but we still don't know what to do about the data condition. If we set
\[
\left( \sum_{i=1}^{\infty} B_i \sin\left(\frac{2i\pi t}{L}\right) \right)''
= -\sum_{i=1}^{\infty} \frac{4i^2\pi^2}{L^2}\, B_i \sin\left(\frac{2i\pi t}{L}\right) = f(t),
\]
then taking inner products on the interval \([0, L]\), we have
\[
-\sum_{i=1}^{\infty} \frac{4i^2\pi^2}{L^2}\, B_i \int_0^L \sin\left(\frac{2i\pi t}{L}\right) \sin\left(\frac{j\pi t}{L}\right) dt
= \int_0^L f(t) \sin\left(\frac{j\pi t}{L}\right) dt = \frac{L}{2} A_j.
\]
The orthogonality of the functions \(\sin\left(\frac{j\pi t}{L}\right)\) gives
\[
\int_0^L \sin\left(\frac{2i\pi t}{L}\right) \sin\left(\frac{j\pi t}{L}\right) dt = \frac{L}{2}\, \delta_{j,2i}.
\]
Hence, if \(j = 2k\), we have a match and \(B_k = -A_{2k}\, \frac{L^2}{4k^2\pi^2}\). But if \(j = 2k+1\) (i.e. \(j\) is odd), the left-hand side is always \(0\), leading us to \(A_{2k+1} = 0\). This suggests we need a function \(f\) with a certain type of Fourier Sine Series: \(f(0) = 0\), \(f(L) = 0\), \(\int_0^L f(t)\, dt = 0\), and the Fourier Sine Series for \(f\) on \([0, L]\) must have all its odd terms zero. A good choice is this: for a given function \(g\) on \([0, L/2]\) with \(g(0) = g(L/2) = 0\), define
\[
f(t) = \begin{cases} g(t), & 0 \leq t \leq L/2, \\ -g(L - t), & L/2 < t \leq L. \end{cases}
\]
We can check the condition that the odd coefficients are zero easily:
\[
\frac{2}{L} \int_0^L f(t) \sin\left(\frac{n\pi t}{L}\right) dt
= \frac{2}{L} \int_0^{L/2} g(t) \sin\left(\frac{n\pi t}{L}\right) dt
- \frac{2}{L} \int_{L/2}^{L} g(L - t) \sin\left(\frac{n\pi t}{L}\right) dt.
\]
Letting \(s = L - t\) in the second integral and using \(\sin(n\pi - x) = -\cos(n\pi)\sin(x)\), we have
\[
\frac{2}{L} \int_0^L f(t) \sin\left(\frac{n\pi t}{L}\right) dt
= \frac{2}{L} \int_0^{L/2} g(t) \sin\left(\frac{n\pi t}{L}\right) dt
+ \frac{2}{L} \int_0^{L/2} g(s) \cos(n\pi) \sin\left(\frac{n\pi s}{L}\right) ds
= \begin{cases} \dfrac{4}{L} \displaystyle\int_0^{L/2} g(t) \sin\left(\frac{n\pi t}{L}\right) dt, & n \text{ even}, \\[3mm] 0, & n \text{ odd}. \end{cases}
\]
So the even coefficients \(A_{2n}\) survive and the odd coefficients \(A_{2n+1}\) all vanish. For the \(f\) described above, since \(f\) is continuous and periodic on \([0, L]\), if we also assume \(f\) is differentiable everywhere, or is differentiable except at a finite number of points where it has finite right- and left-hand derivative limits, we know it has a Fourier Sine Series
\[
f(t) = \sum_{n=1}^{\infty} A_n \sin\left(\frac{n\pi t}{L}\right) = \sum_{n=1}^{\infty} A_{2n} \sin\left(\frac{2n\pi t}{L}\right),
\]
where \(A_{2n} = \frac{2}{L} \int_0^L f(t) \sin(2n\pi t/L)\, dt\) and \(A_{2n+1} = 0\). To solve the boundary value problem, assume the solution \(u\) also has a Fourier Sine Series expansion which is differentiable term by term, so that we have
\[
u(t) = \sum_{n=1}^{\infty} B_n \sin\left(\frac{2n\pi t}{L}\right), \qquad
u'(t) = \sum_{n=1}^{\infty} \frac{2n\pi}{L}\, B_n \cos\left(\frac{2n\pi t}{L}\right), \qquad
u''(t) = -\sum_{n=1}^{\infty} \left(\frac{2n\pi}{L}\right)^2 B_n \sin\left(\frac{2n\pi t}{L}\right).
\]
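The vanishing of the odd coefficients for this anti-symmetric construction can be confirmed numerically. This Python sketch (an illustration; the sample \(g(t) = t(L/2 - t)\) with \(L = 4\) is an assumption) computes the first few sine coefficients by quadrature:

```python
import math

L = 4.0
g = lambda t: t * (L / 2.0 - t)                    # assumed sample g, g(0) = g(L/2) = 0
f = lambda t: g(t) if t <= L / 2.0 else -g(L - t)  # anti-symmetric about t = L/2

def A(i, N=4000):
    # Fourier sine coefficient (2/L) int_0^L f(t) sin(i pi t/L) dt, trapezoid rule
    # (endpoint terms vanish since sin(0) = sin(i pi) = 0)
    dx = L / N
    s = sum(f(k * dx) * math.sin(i * math.pi * k * dx / L) for k in range(1, N))
    return (2.0 / L) * s * dx

odd_max  = max(abs(A(i)) for i in (1, 3, 5, 7))   # should all be (numerically) zero
even_max = max(abs(A(i)) for i in (2, 4, 6, 8))   # the surviving coefficients
```

Only the even-indexed coefficients survive, exactly as the substitution argument predicts.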
This solution \(u\) is built from functions which satisfy the boundary conditions, and so \(u\) does also. The last condition is that \(u'' = f\), which gives
\[
-\sum_{n=1}^{\infty} \left(\frac{2n\pi}{L}\right)^2 B_n \sin\left(\frac{2n\pi t}{L}\right)
= \sum_{n=1}^{\infty} A_{2n} \sin\left(\frac{2n\pi t}{L}\right).
\]
As discussed, this suggests we choose \(B_n = -A_{2n}\, \frac{L^2}{4n^2\pi^2}\). For this choice, we have
\[
u(t) = -\sum_{n=1}^{\infty} \frac{A_{2n} L^2}{4n^2\pi^2} \sin\left(\frac{2n\pi t}{L}\right).
\]
Now let \(\theta_n = \frac{2n\pi}{L}\) and \(w_{2n}(t) = \sin\left(\frac{2n\pi t}{L}\right)\), so \(u = -\sum_n (A_{2n}/\theta_n^2)\, w_{2n}\). We know \(\sum_{n=1}^{\infty} A_{2n}^2\) converges, and so for \(n > m\) we have
\[
\left\| \sum_{k=m+1}^{n} \frac{A_{2k}}{\theta_k^2}\, w_{2k} \right\|^2
= \left\langle \sum_{k=m+1}^{n} \frac{A_{2k}}{\theta_k^2}\, w_{2k},\ \sum_{j=m+1}^{n} \frac{A_{2j}}{\theta_j^2}\, w_{2j} \right\rangle
= \frac{L}{2} \sum_{k=m+1}^{n} \frac{A_{2k}^2}{\theta_k^4}.
\]
The series
\[
\sum_{k=1}^{\infty} \frac{A_{2k}^2}{\theta_k^4} = \sum_{k=1}^{\infty} \frac{A_{2k}^2\, L^4}{16 k^4 \pi^4} \leq \frac{L^4}{16\pi^4} \sum_{k=1}^{\infty} A_{2k}^2 < \infty,
\]
and so it converges by comparison. Hence, its partial sums are a Cauchy sequence, and we therefore know \(\sum_{n=1}^{\infty} (A_{2n}/\theta_n^2)\, w_{2n}\) satisfies the UCC and converges uniformly on \([0, L]\) to a function \(u\) on \([0, L]\). In addition,
\[
\left\langle w_{2j},\ \lim_{N \to \infty} \sum_{k=1}^{N} \frac{A_{2k}}{\theta_k^2}\, w_{2k} \right\rangle
= \lim_{N \to \infty} \left\langle w_{2j},\ \sum_{k=1}^{N} \frac{A_{2k}}{\theta_k^2}\, w_{2k} \right\rangle.
\]
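The comparison bound \(A_{2k}^2/\theta_k^4 \leq (L^4/16\pi^4)\, A_{2k}^2\) can be watched numerically. In this Python sketch the closed-form coefficients \(A_{2n} = 32/(n^3\pi^3)\) are used; they hold for the assumed sample \(g(t) = t(2-t)\) with \(L = 4\) and are an illustration, not data from the slides:

```python
import math

L = 4.0
theta = lambda n: 2.0 * n * math.pi / L
# closed-form even sine coefficients for the assumed sample g(t) = t(2 - t):
# A_{2n} = 32/(n^3 pi^3) for odd n and 0 for even n
A2 = lambda n: 32.0 / (n ** 3 * math.pi ** 3) if n % 2 == 1 else 0.0

# term-by-term: A_{2n}^2/theta_n^4 <= (L^4/(16 pi^4)) A_{2n}^2, a convergent majorant
partial = []
s = 0.0
for n in range(1, 201):
    s += (A2(n) / theta(n) ** 2) ** 2
    partial.append(s)

tail_growth = partial[-1] - partial[99]  # growth contributed by terms 101..200
majorant = (L ** 4 / (16.0 * math.pi ** 4)) * sum(A2(n) ** 2 for n in range(1, 201))
```

The partial sums are monotone, essentially flat after a few terms, and stay below the majorant, which is the Cauchy-sequence behavior the UCC argument needs.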
Thus
\[
\left\langle w_{2j},\ \sum_{k=1}^{\infty} \frac{A_{2k}}{\theta_k^2}\, w_{2k} \right\rangle = \frac{L}{2}\, \frac{A_{2j}}{\theta_j^2}.
\]
We already knew \(\sum_{n=1}^{\infty} A_{2n} w_{2n}\) converged uniformly on \([0, L]\), and so we can conclude our original guess \(B_n = -A_{2n}/\theta_n^2\) is justified, as
\[
\left\langle w_{2j},\ \sum_{k=1}^{\infty} B_k\, w_{2k} \right\rangle
= -\left\langle w_{2j},\ \sum_{k=1}^{\infty} \frac{A_{2k}}{\theta_k^2}\, w_{2k} \right\rangle
\]
implies \(B_j = -A_{2j}/\theta_j^2\). The last question is whether
\[
u'(t) = \left( -\sum_{n=1}^{\infty} \frac{A_{2n}}{\theta_n^2}\, w_{2n}(t) \right)'
= -\sum_{n=1}^{\infty} \frac{A_{2n}}{\theta_n^2}\, w_{2n}'(t)
= -\sum_{n=1}^{\infty} \frac{A_{2n}}{\theta_n^2}\, \theta_n \cos(\theta_n t)
= -\sum_{n=1}^{\infty} \frac{A_{2n}}{\theta_n} \cos(\theta_n t).
\]
The partial sums of the \(u'\) series are \(T_n(t) = -\sum_{k=1}^{n} (A_{2k}/\theta_k) \cos(\theta_k t)\). We see for \(n > m\),
\[
\left| \sum_{k=m+1}^{n} \frac{A_{2k}}{\theta_k} \cos(\theta_k t) \right|
\leq \sum_{k=m+1}^{n} |A_{2k}|\, \frac{L}{2\pi k}
\leq \sqrt{ \sum_{k=m+1}^{n} A_{2k}^2 }\ \sqrt{ \sum_{k=m+1}^{n} \left(\frac{L}{2\pi k}\right)^2 },
\]
and since \(\sum 1/k^2\) and \(\sum A_{2k}^2\) converge, each has partial sums which form Cauchy sequences, and hence the partial sums of \(u'\) satisfy the UCC on \([0, L]\). Thus, there is a continuous function \(D\) so that \(T_n \xrightarrow{\text{unif}} D\) on \([0, L]\). Since the \(u''\) series is just the Fourier Sine Series for \(f\), we also know the partial sums of \(u''\) converge uniformly to a function \(E\) on \([0, L]\).
To apply the derivative interchange theorem for \(u\), we let \(u_n(t) = -\sum_{k=1}^{n} (A_{2k}/\theta_k^2) \sin(\theta_k t)\) be the \(n^{th}\) partial sum of \(u\).
1. \(u_n\) is differentiable on \([0, L]\): True.
2. \(u_n'\) is Riemann integrable on \([0, L]\): True, as each is a finite linear combination of cosine functions.
3. There is at least one point \(t_0 \in [0, L]\) such that the sequence \((u_n(t_0))\) converges: True, as the series converges on all of \([0, L]\).
4. \(u_n' \xrightarrow{\text{unif}} y\) on \([0, L]\) and the limit function \(y\) is continuous: True, as we have just shown \(u_n' \xrightarrow{\text{unif}} D\) on \([0, L]\).
The derivative interchange theorem conditions are satisfied, and so there is a function \(W\) on \([0, L]\) with \(u_n \xrightarrow{\text{unif}} W\) on \([0, L]\) and \(W' = D\). Since limits are unique, we then have \(W = u\) with \(u' = D\). Thus, we have
\[
u'(t) = \left( -\sum_{n=1}^{\infty} \frac{A_{2n}}{\theta_n^2}\, w_{2n}(t) \right)'
= -\sum_{n=1}^{\infty} \frac{A_{2n}}{\theta_n^2}\, w_{2n}'(t) = D(t).
\]
Also, it is true that
\[
u''(t) = \left( -\sum_{n=1}^{\infty} \frac{A_{2n}}{\theta_n^2}\, w_{2n}'(t) \right)'
= -\sum_{n=1}^{\infty} \frac{A_{2n}}{\theta_n^2}\, w_{2n}''(t)
= \sum_{n=1}^{\infty} \frac{A_{2n}}{\theta_n^2}\, \theta_n^2\, w_{2n}(t)
= \sum_{n=1}^{\infty} A_{2n}\, w_{2n}(t),
\]
as these derivative interchanges are valid: the Derivative Interchange Theorem can be justified here as well. Hence, we have used the Fourier Sine Series expansions to solve this ordinary differential equation model with just modest assumptions about the external data \(f\).
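Putting the pieces together, this Python sketch (an illustration assuming \(g(t) = t(L/2 - t)\) with \(L = 4\); it is not the slides' Octave code) builds the truncated solution with \(B_n = -A_{2n}/\theta_n^2\) and checks the boundary conditions and that \(u'' \approx f\) up to the truncation tail:

```python
import math

L = 4.0
g = lambda t: t * (L / 2.0 - t)                    # assumed sample g with g(0) = g(L/2) = 0
f = lambda t: g(t) if t <= L / 2.0 else -g(L - t)  # anti-symmetric extension
theta = lambda n: 2.0 * n * math.pi / L

def A(i, N=4000):
    # Fourier sine coefficient (2/L) int_0^L f(t) sin(i pi t/L) dt, trapezoid rule
    dx = L / N
    s = sum(f(k * dx) * math.sin(i * math.pi * k * dx / L) for k in range(1, N))
    return (2.0 / L) * s * dx

terms = 6
B = [-A(2 * n) / theta(n) ** 2 for n in range(1, terms + 1)]

u   = lambda t: sum(B[n - 1] * math.sin(theta(n) * t) for n in range(1, terms + 1))
up  = lambda t: sum(B[n - 1] * theta(n) * math.cos(theta(n) * t) for n in range(1, terms + 1))
upp = lambda t: sum(-B[n - 1] * theta(n) ** 2 * math.sin(theta(n) * t) for n in range(1, terms + 1))

resid = max(abs(upp(t) - f(t)) for t in (0.5, 1.0, 1.7, 2.3, 3.1))  # truncation residual
bc_value  = abs(u(0.0))           # u(0) = 0
bc_slopes = abs(up(0.0) - up(L))  # u'(0) = u'(L)
```

The boundary conditions hold exactly because every basis function satisfies them, and the residual in \(u'' = f\) is only the tail of the Fourier Sine Series for \(f\).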
Example. Find the first four terms of the solution \(u\) to
\[
u''(t) = f(t); \qquad u(0) = 0, \quad u'(0) = u'(4),
\]
for \(L = 4\), where \(f\) is the function which is \(g(t) = 10t(2 - t)\) on \([0, 2]\) and is \(-g(t - 2)\) on \([2, 4]\).

Solution. Let's code the data function.

  f = @(x) 10*x.*(2 - x);
  f2 = @(x) -f(4 - x);
  g = @(x) splitfunc(x, f, f2, 4);
  X = linspace(0, 4, 101);
  for i = 1:101
    G(i) = g(X(i));
  end
  plot(X, G);

Solution. The graph of the data function (figure).
Solution. Next, we find the first eight Fourier Sine coefficients on \([0, 4]\). Then we find the \(B\) coefficients for the solution to the ODE.

  theta = @(n, L) 2*n*pi/L;
  [A, p8] = FourierSineApprox(g, 4, 2, 8);
  B = [];
  for i = 1:4
    B(i) = -A(2*i)/(theta(i, 4))^2;
  end

  >> A
  A =
     6.0474e-16
     1.0320e+01
     4.5325e-16
     1.9543e-16
     6.3843e-16
     3.8224e-01
     4.4631e-16
     1.6621e-16
  >> B
  B =
    -4.1827e+00
    -1.9801e-17
    -1.7213e-02
    -4.2101e-18

Solution. Here is the Fourier Sine approximation to the data function (figure).
Solution. We construct the approximate ODE solution.

  u4 = @(t) 0;
  for i = 1:4
    u4 = @(t) (u4(t) + B(i)*sin(2*i*pi*t/4));
  end
  plot(X, u4(X));
  xlabel('Time');
  ylabel('Solution');
  title('u''''(t) = f(t), u(0) = 0, u''(0) = u''(4)');

Solution. Here is the Fourier Sine approximation to the solution (figure).
Homework 33

33.1 Find the first four terms of the solution \(u\) to
\[
u''(t) = f(t); \qquad u(0) = 0, \quad u'(0) = u'(6),
\]
for \(L = 6\), where \(f\) is the function which is \(g(t) = 5t(3 - t)\) on \([0, 3]\) and is \(-g(6 - t)\) on \([3, 6]\). Plot the data function, the approximation to the data function, and the approximate solution.

33.2 Find the first four terms of the solution \(u\) to
\[
u''(t) = f(t); \qquad u(0) = 0, \quad u'(0) = u'(10),
\]
for \(L = 10\), where \(f(t) = P(t)\) on \([0, 5]\), with \(P\) the pulse on \([0, 5]\) of height 10 applied at \(t = 1.5\) for a duration of 1.3, and \(f\) is \(-P(10 - t)\) on the interval \([5, 10]\). Plot the data function, the approximation to the data function, and the approximate solution. Note that \(f\) is not differentiable at the jumps, but \(f\) has simple jumps at these points. Hence the Fourier Sine series for \(f\) converges to the average value of the jump at those points. Also, the original differential equation is not defined at those jumps either. We could eliminate these issues by using \(C^\infty\) bump functions instead of pulse functions.
33.3 Find the GSO of the functions \(f_1(t) = t^2\), \(f_2(t) = \cos(2t)\), and \(f_3(t) = 4t^3\) on the interval \([-1, 2]\) for various values of NIP. Check to see that the matrix \(d\) is diagonal.