Math Methods for Polymer Physics Lecture 1: Series Representations of Functions

Series analysis is an essential tool in polymer physics and in the physical sciences in general. Broadly speaking, a series expansion allows one to decompose an arbitrarily complicated function into a sum of simpler functions. Though other series expansions exist, two are especially useful: the Taylor series and the Fourier series. In a crude way, we may think of both series as two different ways of approximating, or fitting, a given function to a simpler form. For further reading on Taylor series and Fourier series see chapters 5 and 14, respectively, of Arfken and Weber's text, Mathematical Methods for Physicists.

1 Taylor Series

Let's start with Taylor series expansions. The Taylor expansion is a representation of a function, say f(x), as an infinite power series in the polynomials (x - x_0)^n, where x_0 is some reference point for the independent variable x.

Why are Taylor series useful? Well, let's say you have a complicated function:

f(x) = ln( cos(x/2) + 2 ) + (x/3)^3.  (1)

This function is plotted in Fig. 1. Often it is sufficient and useful to have a simpler description of the function in the neighborhood of some point, say x = x_0. For many functions, you may replace f(x) with a power series expansion in polynomials of Δx = x - x_0, the distance from the reference point:

f(x) = Σ_{n=0}^∞ a_n (Δx)^n = a_0 + a_1 Δx + a_2 (Δx)^2 + a_3 (Δx)^3 + ...  (2)

What are these coefficients a_n? The first one can be deduced by setting x = x_0, so that Δx = 0 and

f(x_0) = a_0 + a_1·0 + a_2·0 + ... = a_0.  (3)

Figure 1: Plot of f(x) in eq. (1), dark solid line. For many applications it is often necessary to know the behavior of the function f(x) near some point, say x_0 = 2. The Taylor series expansions for f(x) around x = x_0, truncated to include only 1, 2, 3, or 4 terms, are shown as labelled.

To find the higher-order (larger n) coefficients, take derivatives of both sides of eq. (2). Note that after this operation the right-hand side is still a Taylor series. Differentiating once and evaluating at x = x_0 (that is, Δx = 0) gives

f'(x_0) = [ a_1 + 2 a_2 Δx + 3 a_3 (Δx)^2 + ... ]_{Δx=0} = a_1.  (4)

In general, we may show

a_n = (1/n!) (d^n f / dx^n) |_{x=x_0}.  (5)

In order to find the Taylor series expansion we need only take derivatives of f(x), evaluated only at the reference point x = x_0. Eqs. (2) and (5) define the Taylor series expansion of a function of a single variable. Functions which can be represented by a Taylor series are known as analytic functions.

Notice from eq. (2) that as x → x_0 and Δx → 0, the higher-order (large n) terms in the power series expansion go to zero very quickly. Hence, if one is interested in f(x) sufficiently close to x_0, a Taylor series expansion truncated to include only a few leading terms may often be sufficient to approximate the function. Geometrically, we can think of this in terms of a local description of the function near x = x_0:

f(x) = f(x_0) + Δx f'(x_0) + ((Δx)^2/2!) f''(x_0) + ((Δx)^3/3!) f'''(x_0) + ...,  (6)

where the successive terms are the constant, linear, parabolic, cubic, ... contributions.
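To make eqs. (2) and (5) concrete, here is a minimal sketch (not part of the original notes) that builds the truncated expansions of the function in eq. (1) around x_0 = 2, in the spirit of Fig. 1. It assumes SymPy is available; the evaluation point x = 2.5 is chosen arbitrarily.

```python
# Truncated Taylor expansions of f(x) = ln(cos(x/2) + 2) + (x/3)**3 about x0 = 2.
import sympy as sp

x = sp.symbols('x')
f = sp.log(sp.cos(x / 2) + 2) + (x / 3) ** 3
x0 = 2

def taylor_truncated(expr, var, about, n_terms):
    """Sum of the first n_terms Taylor terms, a_n = f^(n)(x0)/n!, cf. eqs. (2) and (5)."""
    return sum(sp.diff(expr, var, n).subs(var, about) / sp.factorial(n) * (var - about) ** n
               for n in range(n_terms))

for n_terms in (1, 2, 3, 4):
    approx = taylor_truncated(f, x, x0, n_terms)
    # Compare the exact function and its truncation a small distance from x0.
    print(n_terms, float(f.subs(x, 2.5)), float(approx.subs(x, 2.5)))
```

As more terms are kept, the truncated series tracks the exact value near x_0 more closely, which is exactly what Fig. 1 illustrates graphically.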

This shows that, sufficiently close to a point of interest, analytic functions are well approximated by a constant, plus a sloped linear correction, plus a parabolic correction, plus .... The further you move from the reference point x = x_0, the less the function looks like a straight line. In order to get a better approximation, you need functions with more wiggles (i.e., higher-order polynomials).

Let's try some examples.

Example 1: Expand ln(x) in a Taylor series around x_0 = 1.

f(1) = ln 1 = 0, so a_0 = 0
f'(x) = d/dx ln(x) = 1/x, so f'(1) = 1 and a_1 = 1
f''(x) = d/dx (1/x) = -1/x^2, so f''(1) = -1 and a_2 = -1/2!
f'''(x) = 2/x^3, so f'''(1) = 2 and a_3 = 2/3!

In general, d^n/dx^n ln(x) |_{x=1} = (-1)^{n+1} (n-1)! for n ≥ 1, so that a_n = (-1)^{n+1}/n and

ln(x) = Σ_{n=1}^∞ (-1)^{n+1} (x-1)^n / n = (x-1) - (x-1)^2/2 + (x-1)^3/3 - (x-1)^4/4 + ...  (7)

Example 2: Expand 1/(1-x) in a Taylor series around x_0 = 0.

f(0) = 1, so a_0 = 1
f'(x) = d/dx [1/(1-x)] = 1/(1-x)^2, so f'(0) = 1!
f''(x) = 2/(1-x)^3, so f''(0) = 2!
f'''(x) = (1·2·3)/(1-x)^4, so f'''(0) = 3!

In general, d^n/dx^n [1/(1-x)] |_{x=0} = n!, so that a_n = 1 and

1/(1-x) = Σ_{n=0}^∞ x^n,  (8)

which is the well-known geometric series.
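As a quick cross-check of Examples 1 and 2 (an aside, not in the notes), SymPy's built-in series command reproduces the same expansions:

```python
import sympy as sp

x = sp.symbols('x')
print(sp.series(sp.log(x), x, 1, 5))    # Taylor series of ln(x) about x0 = 1, cf. eq. (7)
print(sp.series(1 / (1 - x), x, 0, 5))  # geometric series of eq. (8)
```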

These particular series, eqs. (7) and (8), do not converge for all values of x. When the series does not converge, for large enough Δx the successive terms a_n (Δx)^n become larger than the sum of the previous terms, meaning that adding more terms to the series expansion does not provide a better approximation, and the Taylor series fails to represent the function. For ln(x) around x_0 = 1 and for 1/(1-x) around x_0 = 0, the series converge only for |Δx| < 1.

In general we may define R_c as the radius of convergence of the Taylor series of f(x) around x = x_0. If |Δx| < R_c, then Σ_{n=0}^∞ a_n (Δx)^n = f(x). Otherwise the series does not provide a good approximation of f(x) (adding more terms makes things worse).

There are some functions for which R_c → ∞ and the Taylor series always converges. Important examples include e^x, sin x, and cos x. These functions arise in many contexts, so it is useful to commit their series to memory.

Example 3: Expand e^x around x_0 = 0.

Well, first notice that

d^n/dx^n e^x |_{x=0} = e^x |_{x=0} = 1.

From eq. (5) this gives right away the Taylor series coefficients of e^x:

e^x = 1 + x + x^2/2! + x^3/3! + x^4/4! + ... = Σ_{n=0}^∞ x^n/n!.  (9)

The Taylor series representation of e^x is a particularly useful way to see that d/dx (e^x) = e^x. Indeed, it is reasonable to view Σ_{n=0}^∞ x^n/n! as the definition of e^x.

You should also commit the expansions of sin x and cos x to memory. These converge for all x:

sin x = x - x^3/3! + x^5/5! - x^7/7! + ...  (10)

cos x = 1 - x^2/2! + x^4/4! - x^6/6! + ...  (11)

Notice that these expansions allow you to derive the following important identity,

e^{ix} = cos x + i sin x,  (12)

which is used heavily in Fourier analysis.
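The difference between a finite and an infinite radius of convergence is easy to see numerically. The sketch below (an illustration, not from the notes) sums partial series for ln(x) inside and outside |Δx| < 1, and for e^x; the evaluation points are chosen arbitrarily.

```python
# Partial sums of the series in eqs. (7) and (9).
import math

def ln_series(x, n_terms):
    # sum_{n=1}^{N} (-1)**(n+1) * (x - 1)**n / n
    return sum((-1) ** (n + 1) * (x - 1) ** n / n for n in range(1, n_terms + 1))

def exp_series(x, n_terms):
    return sum(x ** n / math.factorial(n) for n in range(n_terms))

print(ln_series(1.5, 50), math.log(1.5))   # |x - 1| < 1: partial sums converge
print(ln_series(2.5, 50), math.log(2.5))   # |x - 1| > 1: partial sums blow up
print(exp_series(3.0, 30), math.exp(3.0))  # R_c -> infinity: converges for any x
```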

It is reasonably straightforward to generalize the Taylor series expansion of a function of a single variable to a multi-variable function, say f(x, y), expanded around the point x = x_0 and y = y_0:

f(x, y) = f(x_0, y_0) + Δx ∂f/∂x + Δy ∂f/∂y
          + (1/2!) [ (Δx)^2 ∂^2f/∂x^2 + 2 Δx Δy ∂^2f/∂x∂y + (Δy)^2 ∂^2f/∂y^2 ]
          + (1/3!) [ (Δx)^3 ∂^3f/∂x^3 + 3 (Δx)^2 Δy ∂^3f/∂x^2∂y + 3 Δx (Δy)^2 ∂^3f/∂x∂y^2 + (Δy)^3 ∂^3f/∂y^3 ] + ...,  (13)

where Δx = x - x_0, Δy = y - y_0, and all partial derivatives are evaluated at (x_0, y_0). This expansion can be confirmed by taking first, second, third (etc.) derivatives of both sides of the equation above.

Why is the Taylor expansion a useful description? In many physical systems, the full expression for a function may be impossible to write down (e.g., the potential energy of a strongly interacting mixture of charged particles). But often, equilibrium and dynamic behavior depend only on local properties of the function. By local, we mean sufficiently close to some set of values of the independent variable.

As a concrete example, consider a colloidal bead in a laser trap (Fig. 2), an experimental tool which has been exploited to measure the forces generated by single macromolecules. If the bead has a polarizability α, then when it is subject to an electric field E it acquires a dipole moment p = αE. The potential energy of a polarized object in an electric field is simply U = -(1/2) p·E, where the factor of 1/2 accounts for the energy required to polarize the bead, U_polarization = p^2/(2α). Therefore, if the polarizable bead is subject to an electric field E(x) that varies in space (as near the focal point of a laser beam), the net electrostatic interaction between the bead and the field is described by the potential energy

U(x) = -(α/2) |E(x)|^2.  (14)

Hence, the potential energy is lowest in regions where the electric-field intensity, |E(x)|^2, is highest. This explains why small polarizable objects, like colloidal beads, are drawn into the focal point of a high-intensity laser (shown schematically in Fig. 2).

In general, the pattern of electric-field intensity |E(x)|^2 may be rather complicated. But if we are interested only in the behavior very close to the center of the trap, the behavior always has the same simple form:

U(Δx) = U_0 + U_1 Δx + (U_2/2) (Δx)^2 + ...,  (15)

that is, a constant term, plus a linear term, plus a quadratic term.
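As an illustration of eqs. (14) and (15) (a sketch, not from the notes), suppose the intensity profile near the focal point were Gaussian, |E(x)|^2 = E_0^2 exp(-x^2/w^2); this profile and its parameters are assumptions made purely for the example. Expanding U(x) about the trap center recovers the constant-plus-quadratic form, with no linear term:

```python
# Harmonic (Taylor) approximation of a hypothetical optical-trap potential.
import sympy as sp

x, alpha, E0, w = sp.symbols('x alpha E0 w', positive=True)
U = -(alpha / 2) * E0**2 * sp.exp(-x**2 / w**2)   # U(x) = -(alpha/2)|E(x)|^2, Gaussian profile assumed

U0 = U.subs(x, 0)                 # constant term, U(0)
U1 = sp.diff(U, x).subs(x, 0)     # linear term: vanishes at the trap center
U2 = sp.diff(U, x, 2).subs(x, 0)  # curvature; plays the role of the spring constant k below

print(U0, U1, sp.simplify(U2))    # -> -alpha*E0**2/2, 0, alpha*E0**2/w**2
```

Here U_2 = αE_0^2/w^2 is exactly the coefficient that becomes the spring constant k in the force law discussed next.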

Figure 2: Top: a schematic depiction of a polarizable bead near the high-intensity focal point of a laser beam. Bottom: a sketch of U, the potential energy of an optically-trapped colloidal particle, in terms of Δx, the deviation from the center of the trap.

By definition, U is a minimum at Δx = 0, so we know that dU/dx |_{Δx=0} = U_1 = 0. This means that the force on the bead at the center of the trap is zero, because the electric-field intensity is maximal there. Local equilibrium (mechanical, dynamic, etc.) always looks like this: constant + quadratic (the first non-trivial term in the expansion about equilibrium).

What is the force if the bead is displaced?

F_x = -dU/dΔx = -U_2 Δx ≡ -k Δx  (16)

This linear force response is identical to a Hooke's-law elastic spring, and k is the spring constant. For all intents and purposes (near equilibrium or steady state), we are often interested in the expansion only up to harmonic order. Therefore, if one calibrates the strength of the optical trap (the value of k) and carefully measures Δx, one can measure the magnitude of external forces that pull the bead from the center of the trap, generated, say, by a strand of DNA chemically tethered to the bead.
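As a back-of-the-envelope illustration of eq. (16) (the numbers below are made up, not taken from the notes), a calibrated trap stiffness and a measured displacement immediately give the tether force:

```python
# Hooke's-law force readout for an optical trap; illustrative values only.
k = 0.05e-3    # calibrated trap stiffness, N/m (i.e., 0.05 pN/nm, a typical order of magnitude)
dx = 20e-9     # measured bead displacement from the trap center, m
F = k * dx     # magnitude of the external force balancing the trap restoring force, N
print(f"F = {F:.2e} N")   # ~1 pN, the characteristic single-molecule force scale
```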

2 Fourier Series

The second important series representation of functions is the Fourier series. A simple way to describe this series is to contrast it with the Taylor series described in the previous section:

Taylor series - decompose f(x) into an infinite series of polynomials (Δx)^n

Fourier series - decompose f(x) into an infinite series of sines and cosines

Why are Fourier series (and transforms) useful?

1. Fourier analysis is necessary to understand the interaction between matter and radiation/waves (i.e., scattering) and spectral analysis.

2. Sines and cosines are harmonic functions, which means they form a complete basis of solutions to certain PDEs common to the study of physical systems.

Indeed, properties 1 and 2 are intimately related, as the wave equation is harmonic, and therefore radiation (light, x-rays, etc.) is sinusoidal in nature. In addition, you'll likely see how property 2 can be used to solve problems in continuum elasticity and polymer dynamics. For example, in the study of polymer dynamics, we come across equations like

d^2 R(n)/dn^2 + k R(n) = 0,  (17)

where k > 0 describes a relaxation rate of chain motion, and R(n) specifies the position of bead n along a polymer chain. Since d^2/dn^2 sin(√k n) = -k sin(√k n), sines and cosines form a natural set of solutions to this equation.

For the purposes of this review, a Fourier series is the unique decomposition of an arbitrary function (on some domain) into an infinite series of sines and cosines. Let's say we are interested in a function f(x) on the domain x ∈ [0, L] (see Fig. 3). On this domain we can write the Fourier series as

f(x) = a_0/2 + Σ_{n=1}^∞ a_n cos(2πn x/L) + Σ_{n=1}^∞ b_n sin(2πn x/L),  (18)

where a_n and b_n are coefficients. Just as the coefficients of the Taylor series are related uniquely to the given function, a_n and b_n are uniquely determined by the properties of f(x) on this domain.

How are a_n and b_n related to f(x)? This relationship derives from an important property of sines and cosines: on this domain, sin(2πn x/L) and cos(2πn x/L) are orthogonal functions.

Figure 3: Plot of f(x) in the range [0, L].

This means that if we multiply any two of these elementary functions and integrate over the domain x ∈ [0, L], the resulting integral is zero unless the two functions are identical. Consider the product of two sine functions, sin(2πn x/L) sin(2πm x/L):

∫_0^L dx sin(2πn x/L) sin(2πm x/L) = (1/2) ∫_0^L dx [ cos(2π(n-m)x/L) - cos(2π(n+m)x/L) ].  (19)

This integral is non-zero only if n = m, for which the first term in the integrand becomes cos(2π(n-m)x/L) = 1. From this we can show the following orthogonality relation between sines:

∫_0^L dx sin(2πn x/L) sin(2πm x/L) = L/2 if n = m, and 0 if n ≠ m.  (20)

Similarly, for the cosines,

∫_0^L dx cos(2πn x/L) cos(2πm x/L) = L/2 if n = m, and 0 if n ≠ m.  (21)

Sines and cosines are always orthogonal:

∫_0^L dx sin(2πn x/L) cos(2πm x/L) = 0 for all m, n.  (22)
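A minimal numerical check of the orthogonality relations, eqs. (20)-(22); this snippet is not part of the notes, and the choice L = 1 and the particular mode numbers are arbitrary.

```python
# Numerical verification of sine/cosine orthogonality on [0, L].
import numpy as np
from scipy.integrate import quad

L = 1.0
s = lambda n: (lambda x: np.sin(2 * np.pi * n * x / L))
c = lambda n: (lambda x: np.cos(2 * np.pi * n * x / L))

print(quad(lambda x: s(2)(x) * s(2)(x), 0, L)[0])  # -> L/2 = 0.5   (eq. 20, n = m)
print(quad(lambda x: s(2)(x) * s(3)(x), 0, L)[0])  # -> 0           (eq. 20, n != m)
print(quad(lambda x: c(1)(x) * c(4)(x), 0, L)[0])  # -> 0           (eq. 21, n != m)
print(quad(lambda x: s(2)(x) * c(2)(x), 0, L)[0])  # -> 0           (eq. 22)
```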

The orthogonality relations, eqs. (20)-(22), are important because they allow one to invert the Fourier series, that is, to determine the unique set of coefficients a_n and b_n that correspond to the function f(x). Operationally, the coefficients of the Fourier series are determined by projecting out the term in the series proportional to, say, sin(2πm x/L): multiply both sides of eq. (18) by sin(2πm x/L) and integrate the product over the domain x ∈ [0, L]:

∫_0^L dx f(x) sin(2πm x/L) = ∫_0^L dx sin(2πm x/L) [ a_0/2 + Σ_{n=1}^∞ a_n cos(2πn x/L) + Σ_{n=1}^∞ b_n sin(2πn x/L) ].

Carrying out the integration, the a_0 term and all cosine terms in the sum vanish due to the orthogonality conditions, eqs. (21) and (22). Likewise, all sine terms in the sum except the n = m term are zero. Thus, the only term on the right-hand side of eq. (18) that survives this projection operation is the one with n = m:

∫_0^L dx f(x) sin(2πm x/L) = (L/2) b_m,

and therefore

b_n = (2/L) ∫_0^L dx sin(2πn x/L) f(x).  (23)

By performing the same operation with the cosine functions we can also derive

a_n = (2/L) ∫_0^L dx cos(2πn x/L) f(x).  (24)

Example 4: Consider the function f(x) = A + Bx (see Fig. 4). Compute the coefficients a_n and b_n for a Fourier series on the domain x ∈ [0, L].

From eq. (24) we compute the coefficients of the cosine terms by multiplying f(x) by cos(2πn x/L) and integrating over the domain. For n = 0, this is easy:

a_0 = (2/L) ∫_0^L dx (A + Bx) = (2/L) [ A L + B L^2/2 ] = 2A + BL.  (25)

Now consider b_n. Since ∫_0^L dx sin(2πn x/L) = 0, the constant A does not contribute, and

b_n = (2/L) ∫_0^L dx sin(2πn x/L) (A + Bx) = (2B/L) ∫_0^L dx x sin(2πn x/L).  (26)

How do you do this integral? Let's review a useful trick for evaluating integrals.

Figure 4: Plot of f(x) = A + Bx on the domain [0, L].

Aside: Integration by parts

Let's say you want to compute ∫ dx u(x) v'(x), and you don't know the anti-derivative of v'(x). The product rule of differentiation gives you

d/dx [u(x) v(x)] = u'(x) v(x) + u(x) v'(x),  (27)

or u(x) v'(x) = d/dx [u(x) v(x)] - u'(x) v(x). Substituting this expression for the integrand,

∫ dx u(x) v'(x) = ∫ dx { d/dx [u(x) v(x)] - u'(x) v(x) } = u(x) v(x) - ∫ dx u'(x) v(x).  (28)

Colloquially, we say that this operation "flips" the derivative from v(x) to u(x). (Hopefully, the remaining integrand is known!)

Applying integration by parts to our case in eq. (26):

u = x,    v' = sin(2πn x/L),
u' = 1,   v = -(L/2πn) cos(2πn x/L),

and thus

∫_0^L dx x sin(2πn x/L) = -x (L/2πn) cos(2πn x/L) |_0^L + (L/2πn) ∫_0^L dx cos(2πn x/L) = -L^2/(2πn) + 0,

so that

b_n = (2B/L) × ( -L^2/(2πn) ) = -BL/(πn).  (29)

Applying integration by parts again, we can also show that a_n = 0 for all n ≥ 1. All together, we have

f(x) = ( A + BL/2 ) - Σ_{n=1}^∞ (BL/πn) sin(2πn x/L).  (30)

This result is plotted in Fig. 5, where the series has been truncated after a finite number of terms. It is quite clear that additional terms improve the quality of the Fourier expansion, and the series will ultimately converge to f(x). It is common to refer to the individual terms contributing to the Fourier sum as Fourier modes. From the result b_n = -BL/(πn) and from Fig. 5, it is clear that the contribution, or amplitude, of the higher-order modes decreases as the mode number n increases, explaining why this sum converges to a reasonable approximation of the function f(x) after a finite number of terms.

Three final notes on Fourier series. First, the domain of the Fourier series can be chosen arbitrarily. It is often convenient to shift the domain to be symmetric about x = 0, i.e., x ∈ [-L/2, L/2]. In this case, the form of the Fourier series looks the same; only the formula for the coefficients changes:

a_n = (2/L) ∫_{-L/2}^{L/2} dx cos(2πn x/L) f(x),  (31)

and similarly for the b_n.

Second, notice that all terms in the Fourier series are periodic under x → x + nL (shifts by the length of the domain). For this reason the Fourier series is especially useful as a general representation of any periodic function. For example, one may calculate the Fourier coefficients for a given function based on the projection operation within a single domain, say from x = 0 to x = L in Fig. 6. In crystalline materials, for example, the electron density is a periodic function that is naturally described as a Fourier spectrum, and the modes of non-zero amplitude represent regions of strong scattering in diffraction.

Figure 5: Plot of f(x) on the domain [0, L], with the Fourier series expansion truncated after different numbers of terms. Clearly, the inclusion of higher-order terms improves the overall approximation.

Figure 6: Plot of an infinitely periodic function. The Fourier series for a single domain describes an infinite array of periodic copies of the same function, each translated by one domain length, L.
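A sketch in the spirit of Fig. 5 (not from the notes; L = 1, A = 1, B = 2 are arbitrary choices) that obtains b_n both from the projection integral of eq. (23) and from the closed form b_n = -BL/(πn), then sums the truncated series of eq. (30):

```python
# Fourier coefficients and partial sums for f(x) = A + B*x on [0, L].
import numpy as np
from scipy.integrate import quad

L, A, B = 1.0, 1.0, 2.0
f = lambda x: A + B * x

def b_projection(n):
    # eq. (23): project f(x) onto sin(2*pi*n*x/L)
    return (2 / L) * quad(lambda x: f(x) * np.sin(2 * np.pi * n * x / L), 0, L)[0]

def partial_sum(x, n_max):
    total = A + B * L / 2                     # a_0 / 2, from eq. (25)
    for n in range(1, n_max + 1):
        total += -B * L / (np.pi * n) * np.sin(2 * np.pi * n * x / L)   # eq. (29)
    return total

print(b_projection(3), -B * L / (np.pi * 3))  # the two evaluations of b_3 should agree
print(f(0.3), partial_sum(0.3, 50))           # the truncated series approaches f(x)
```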

Finally, recall that for a discontinuous (non-analytic) function, the Taylor series near the point of discontinuity does not converge, providing a poor fit to the function. However, the convergence of the Fourier series does not require a function to be analytic. Any function, even a discontinuous one, can be decomposed into a Fourier series that converges as the number of terms included in the series goes to infinity.
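As an illustration of this last point (an example added here, not taken from the notes), the partial Fourier sums of a simple step function approach the function even though it has no Taylor series at the jump; the particular step and the evaluation point are arbitrary choices.

```python
# Fourier partial sums of a discontinuous step on [0, L].
import numpy as np
from scipy.integrate import quad

L = 1.0
f = lambda x: -1.0 if x < L / 2 else 1.0

def b_n(n):
    # eq. (23); 'points' tells quad where the integrand jumps
    return (2 / L) * quad(lambda x: f(x) * np.sin(2 * np.pi * n * x / L),
                          0, L, points=[L / 2])[0]

def partial_sum(x, n_max):
    # the cosine coefficients vanish for this particular step, so only sines contribute
    return sum(b_n(n) * np.sin(2 * np.pi * n * x / L) for n in range(1, n_max + 1))

for n_max in (5, 25, 100):
    print(n_max, partial_sum(0.6, n_max))   # approaches f(0.6) = +1 as more modes are kept
```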