Mathematical Methods for Physics


Peter S. Riseborough

Contents

1 Mathematics and Physics

2 Vector Analysis
   2.1 Vectors
   2.2 Scalar Products
   2.3 The Gradient
   2.4 The Divergence
   2.5 The Curl
   2.6 Successive Applications of ∇
   2.7 Gauss's Theorem
   2.8 Stokes's Theorem
   2.9 Non-Orthogonal Coordinate Systems
       2.9.1 Curvilinear Coordinate Systems
       2.9.2 Spherical Polar Coordinates
       2.9.3 The Gradient
       2.9.4 The Divergence
       2.9.5 The Curl
       2.9.6 Compounding Vector Differential Operators in Curvilinear Coordinates

3 Partial Differential Equations
   3.1 Linear First-Order Partial Differential Equations
   3.2 Classification of Partial Differential Equations
   3.3 Boundary Conditions
   3.4 Separation of Variables

4 Ordinary Differential Equations
   4.1 Linear Ordinary Differential Equations
       4.1.1 Singular Points
   4.2 The Frobenius Method
       4.2.1 Ordinary Points
       4.2.2 Regular Singularities
   4.3 Linear Dependence
       4.3.1 Linearly Independent Solutions
       4.3.2 Abel's Theorem
       4.3.3 Other Solutions

5 Sturm-Liouville Theory
   5.1 Degenerate Eigenfunctions
   5.2 The Inner Product
   5.3 Orthogonality of Eigenfunctions
   5.4 Orthogonality and Linear Independence
   5.5 Gram-Schmidt Orthogonalization
   5.6 Completeness of Eigenfunctions

6 Fourier Transforms
   6.1 Fourier Transform of Derivatives
   6.2 Convolution Theorem
   6.3 Parseval's Relation

7 Fourier Series
   7.1 Gibbs Phenomenon

8 Bessel Functions
   The Generating Function Expansion; Series Expansion; Recursion Relations; Bessel's Equation; Integral Representation; Addition Theorem; Orthonormality; Bessel Series
   8.1 Neumann Functions
   8.2 Spherical Bessel Functions
       Recursion Relations; Orthogonality Relations; Spherical Neumann Functions

9 Legendre Polynomials
   Generating Function Expansion; Series Expansion; Recursion Relations; Legendre's Equation; Orthogonality; Legendre Expansions
   9.1 Associated Legendre Functions
       The Associated Legendre Equation; Generating Function Expansion; Recursion Relations; Orthogonality
   9.2 Spherical Harmonics
       Expansion in Spherical Harmonics; Addition Theorem

10 Hermite Polynomials
   Recursion Relations; Hermite's Differential Equation; Orthogonality

11 Laguerre Polynomials
   Recursion Relations; Laguerre's Differential Equation
   11.1 Associated Laguerre Polynomials
       Generating Function Expansion

12 Inhomogeneous Equations
   12.1 Inhomogeneous Differential Equations
       Eigenfunction Expansion; Piece-wise Continuous Solution
   12.2 Inhomogeneous Partial Differential Equations
       The Symmetry of the Green's Function; Eigenfunction Expansion

13 Complex Analysis
   13.1 Contour Integration
   13.2 Cauchy's Integral Theorem
   13.3 Cauchy's Integral Formula
   13.4 Derivatives
   13.5 Morera's Theorem

14 Complex Functions
   14.1 Taylor Series
   14.2 Analytic Continuation
   14.3 Laurent Series
   14.4 Branch Points and Branch Cuts
   14.5 Singularities

15 Calculus of Residues
   15.1 Residue Theorem
   15.2 Jordan's Lemma
   15.3 Cauchy's Principal Value
   15.4 Contour Integration
   15.5 The Poisson Summation Formula
   15.6 Kramers-Kronig Relations
   15.7 Integral Representations

1 Mathematics and Physics

Physics is a science which relates measurements and measurable quantities to a few fundamental laws or principles. It is a quantitative science, and as such the relationships are mathematical. The laws or principles of physics must be able to be formulated as mathematical statements.

If physical laws are to be fundamental, they must be few in number and must be able to be stated in ways which are independent of any arbitrary choices. In particular, a physical law must be able to be stated in a way which is independent of the choice of reference frame in which the measurements are made. The laws or principles of physics are usually formulated as differential equations, as they relate changes. The laws must be invariant under the choice of coordinate system. Therefore, one needs to express the differentials in ways which are invariant under coordinate transformations, or at least have definite and easily calculable transformation properties.

It is useful to start by formulating the laws in fixed Cartesian coordinate systems, and then consider invariance under:

(i) Translations
(ii) Rotations
(iii) Boosts to Inertial Reference Frames
(iv) Boosts to Accelerating Reference Frames

Quantities such as scalars and vectors have definite transformation properties under translations and rotations. Scalars are invariant under rotations. Vectors transform in the same way as a displacement under rotations.

2 Vector Analysis

2.1 Vectors

Consider the displacement vector, which in a Cartesian coordinate system can be expressed as

\[ \vec{r} = \hat{e}_x \, x + \hat{e}_y \, y + \hat{e}_z \, z \]

where ê_x, ê_y and ê_z are three orthogonal unit vectors with fixed directions, and the components of the displacement are x, y, z.

In a different coordinate system, one which is passively rotated through an angle θ with respect to the original coordinate system, the displacement vector is unchanged. However, the components x', y', z', defined with respect to the new unit vectors ê_x', ê_y' and ê_z', are different. A specific example is given by a rotation about the z-axis,

\[ \vec{r} = \hat{e}_x' \, x' + \hat{e}_y' \, y' + \hat{e}_z' \, z' \]

The new components are given in terms of the old components by

\[ \begin{pmatrix} x' \\ y' \\ z' \end{pmatrix} = \begin{pmatrix} \cos\theta & \sin\theta & 0 \\ -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} \]

Hence,

\[ \vec{r} = \hat{e}_x' \left( x \cos\theta + y \sin\theta \right) + \hat{e}_y' \left( y \cos\theta - x \sin\theta \right) + \hat{e}_z' \, z \]

The inverse transformation is given by the substitution θ → −θ,

\[ \vec{r} = \hat{e}_x \left( x' \cos\theta - y' \sin\theta \right) + \hat{e}_y \left( y' \cos\theta + x' \sin\theta \right) + \hat{e}_z \, z' \]

Any arbitrary vector A can be expressed as

\[ \vec{A} = \hat{e}_x \, A_x + \hat{e}_y \, A_y + \hat{e}_z \, A_z \]

where ê_x, ê_y and ê_z are the three orthogonal unit vectors with fixed directions, and A_x, A_y, A_z are the components of A. An arbitrary vector transforms under rotations in exactly the same way as the displacement,

\[ \vec{A} = \hat{e}_x' \left( A_x \cos\theta + A_y \sin\theta \right) + \hat{e}_y' \left( A_y \cos\theta - A_x \sin\theta \right) + \hat{e}_z' \, A_z \]
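As a quick numerical illustration (my own, not part of the original notes), the rotation matrix written above can be checked to leave the length of an arbitrary vector unchanged:

```python
# Check that the passive rotation about the z-axis preserves the (squared) length
# of an arbitrary vector.  The angle and vector are chosen arbitrarily.
import numpy as np

theta = 0.37
Rz = np.array([[ np.cos(theta), np.sin(theta), 0.0],
               [-np.sin(theta), np.cos(theta), 0.0],
               [ 0.0,           0.0,           1.0]])

A = np.array([1.2, -0.5, 2.0])          # arbitrary vector components
A_prime = Rz @ A                        # components in the rotated frame
print(np.allclose(A @ A, A_prime @ A_prime))   # -> True
```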

2.2 Scalar Products

Although vectors are transformed under rotations, there are quantities associated with the vectors that are invariant under rotations. These invariant quantities include:

(i) Lengths of vectors.
(ii) Angles between vectors.

These invariant properties can be formulated in terms of the invariance of a scalar product. The scalar product of two vectors is defined as

\[ \vec{A} \cdot \vec{B} = A_x B_x + A_y B_y + A_z B_z \]

The scalar product transforms in exactly the same way as a scalar under rotations, and is thus a scalar or invariant quantity,

\[ \vec{A} \cdot \vec{B} = A_x B_x + A_y B_y + A_z B_z = A_x' B_x' + A_y' B_y' + A_z' B_z' \]

2.3 The Gradient

The gradient represents the rate of change of a scalar quantity φ(r). The gradient is a vector quantity which shows the direction and the maximum rate of change of the scalar quantity. The gradient can be introduced through consideration of a Taylor expansion,

\[ \phi( \vec{r} + \vec{a} ) = \phi( \vec{r} ) + a_x \frac{\partial \phi}{\partial x} + a_y \frac{\partial \phi}{\partial y} + a_z \frac{\partial \phi}{\partial z} + \ldots = \phi( \vec{r} ) + \vec{a} \cdot \vec{\nabla} \phi( \vec{r} ) + \ldots \]

The change in the scalar quantity is written in the form of a scalar product of the vector displacement

\[ \vec{a} = \hat{e}_x \, a_x + \hat{e}_y \, a_y + \hat{e}_z \, a_z \]

with another quantity defined by

\[ \vec{\nabla} \phi = \hat{e}_x \frac{\partial \phi}{\partial x} + \hat{e}_y \frac{\partial \phi}{\partial y} + \hat{e}_z \frac{\partial \phi}{\partial z} \]

The latter quantity is a vector quantity, as follows from the scalar quantities φ(r) and φ(r + a) being invariant. Thus, the dot product in the Taylor expansion must behave like a scalar. This is the case if ∇φ is a vector, since the scalar product of two vectors is a scalar.

The gradient operator is defined as

\[ \vec{\nabla} = \hat{e}_x \frac{\partial}{\partial x} + \hat{e}_y \frac{\partial}{\partial y} + \hat{e}_z \frac{\partial}{\partial z} \]

The gradient operator is an abstraction, and only makes good sense when the operator acts on a differentiable function. The gradient specifies the rate of change of a scalar field, and the direction of the gradient is the direction of largest change.

An example of the gradient that occurs in physical applications is the relationship between the electric field and the scalar potential,

\[ \vec{E} = - \vec{\nabla} \phi \]

in electrostatics. This has the physical meaning that a particle will accelerate from regions of high potential to regions of low potential. The particle always accelerates in the direction of the maximum decrease in potential. For a point charge of magnitude q, the potential is given by

\[ \phi = \frac{q}{r} \]

and the electric field is given by

\[ \vec{E} = + q \, \frac{\vec{r}}{r^3} \]

2.4 The Divergence

The gradient operator, since it looks like a vector, could possibly be used in a scalar product with a differentiable vector field. This can be used to define the divergence of a vector as the scalar product

\[ \vec{\nabla} \cdot \vec{A} \]

The divergence is a scalar quantity. In Cartesian coordinates, the divergence is evaluated from the scalar product as

\[ \vec{\nabla} \cdot \vec{A} = \frac{\partial A_x}{\partial x} + \frac{\partial A_y}{\partial y} + \frac{\partial A_z}{\partial z} \]

Consider a vector quantity A of the form

\[ \vec{A} = \vec{r} \, f(r) \]

which is spherically symmetric and directed radially from the origin. The divergence of A is given by

\[ \vec{\nabla} \cdot \vec{A} = \frac{\partial}{\partial x}\big( x \, f(r) \big) + \frac{\partial}{\partial y}\big( y \, f(r) \big) + \frac{\partial}{\partial z}\big( z \, f(r) \big) = 3 f(r) + \frac{x^2 + y^2 + z^2}{r} \frac{\partial f}{\partial r} = 3 f(r) + r \frac{\partial f}{\partial r} \]

It is readily seen that the above quantity is invariant under rotations around the origin, as is expected if the divergence of a vector is a scalar.

Another example is given by the vector t in the x–y plane which is perpendicular to the radial vector ρ. The radial vector is given by

\[ \vec{\rho} = \hat{e}_x \, x + \hat{e}_y \, y \]

and the tangential vector is found as

\[ \vec{t} = - \hat{e}_x \, y + \hat{e}_y \, x \]

since it satisfies

\[ \vec{t} \cdot \vec{\rho} = 0 \]

The divergence of the tangential vector field is zero,

\[ \vec{\nabla} \cdot \vec{t} = 0 \]

In this example, the vector field t is flowing in closed circles, and the divergence is zero.

Given a differentiable vector field A which represents the flow of a quantity, the divergence represents the net inflow of the quantity to a volume and, as such, is a scalar. A physical example of the divergence is provided by the continuity equation

\[ \frac{\partial \rho}{\partial t} + \vec{\nabla} \cdot \vec{j} = 0 \]

where ρ is the density and j is the current density. The continuity equation just states that the accumulation of matter in a volume (the increase in density) is equal to the net flow of matter into the volume.
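A minimal sympy sketch (my own illustration, not from the text) of the identity derived above, ∇·(r f(r)) = 3 f(r) + r f′(r), probed with the family f(r) = rⁿ:

```python
# For f(r) = r**n the identity predicts div( r f(r) ) = (3 + n) r**n.
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
n = sp.symbols('n')
r = sp.sqrt(x**2 + y**2 + z**2)

f = r**n                                   # spherically symmetric test function
A = (x*f, y*f, z*f)                        # the field  A = r f(r)

div_A = sum(sp.diff(Ai, xi) for Ai, xi in zip(A, (x, y, z)))
expected = (3 + n) * r**n                  # 3 f + r f' evaluated for f = r**n

print(sp.simplify(div_A - expected))       # -> 0
```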

The presence of a non-zero divergence represents a source or sink for the flowing quantity.

In electromagnetism, electric charge acts as a source for the electric field,

\[ \vec{\nabla} \cdot \vec{E} = 4 \pi \rho \]

For the example of a point charge at the origin, the electric field is given by

\[ \vec{E} = q \, \frac{\vec{r}}{r^3} \]

For r ≠ 0, the divergence is found as

\[ \vec{\nabla} \cdot \vec{E} = q \left( \frac{\partial}{\partial x} \frac{x}{r^3} + \frac{\partial}{\partial y} \frac{y}{r^3} + \frac{\partial}{\partial z} \frac{z}{r^3} \right) = q \left( \frac{3}{r^3} - 3 \, \frac{x^2 + y^2 + z^2}{r^5} \right) = 0 \]

which is not defined at r = 0. By consideration of Gauss's theorem, one can see that the divergence must diverge at r = 0. Thus, the source of the electric field is localized at the point charge.

There is no source for the magnetic induction field, and this shows up in the Maxwell equation

\[ \vec{\nabla} \cdot \vec{B} = 0 \]

The finite magnetic induction field is purely a relativistic effect, in that it represents the electric field produced by compensating charge densities which are in relative motion.

2.5 The Curl

Given a differentiable vector field representing a flow, one can define a vector (strictly, a pseudo-vector) quantity which represents the rotation of the flow. The curl is defined as the vector product

\[ \vec{\nabla} \times \vec{A} \]

which is evaluated as

\[ \vec{\nabla} \times \vec{A} = \begin{vmatrix} \hat{e}_x & \hat{e}_y & \hat{e}_z \\ \frac{\partial}{\partial x} & \frac{\partial}{\partial y} & \frac{\partial}{\partial z} \\ A_x & A_y & A_z \end{vmatrix} = \hat{e}_x \left( \frac{\partial A_z}{\partial y} - \frac{\partial A_y}{\partial z} \right) + \hat{e}_y \left( \frac{\partial A_x}{\partial z} - \frac{\partial A_z}{\partial x} \right) + \hat{e}_z \left( \frac{\partial A_y}{\partial x} - \frac{\partial A_x}{\partial y} \right) \]

The curl of a radial vector A = r f(r) is evaluated as

\[ \vec{\nabla} \times \vec{A} = \begin{vmatrix} \hat{e}_x & \hat{e}_y & \hat{e}_z \\ \frac{\partial}{\partial x} & \frac{\partial}{\partial y} & \frac{\partial}{\partial z} \\ x f(r) & y f(r) & z f(r) \end{vmatrix} = 0 \]

The curl of a tangential vector t given by

\[ \vec{t} = \hat{e}_x \, y - \hat{e}_y \, x \]

is evaluated as

\[ \vec{\nabla} \times \vec{t} = - 2 \, \hat{e}_z \]

The tangential vector represents a rotation about the z-axis in a clockwise (negative) direction.

A physical example of the curl is given by the relationship between a magnetic induction B and the vector potential A,

\[ \vec{B} = \vec{\nabla} \times \vec{A} \]

The vector potential

\[ \vec{A} = \frac{B_z}{2} \left( x \, \hat{e}_y - y \, \hat{e}_x \right) \]

produces a magnetic field

\[ \vec{B} = \vec{\nabla} \times \frac{B_z}{2} \left( x \, \hat{e}_y - y \, \hat{e}_x \right) = \hat{e}_z \, B_z \]

which is uniform and oriented along the z-axis.

Another example is that of the magnetic induction field produced by a long straight current-carrying wire. If the wire is oriented along the z-axis, Ampère's law yields

\[ \vec{B} = \frac{I}{2 \pi \rho^2} \left( x \, \hat{e}_y - y \, \hat{e}_x \right) \]

where

\[ \rho^2 = x^2 + y^2 \]

The vector potentials A that produce this B can be found as solutions of

\[ \vec{\nabla} \times \vec{A} = \vec{B} \]

The solutions are not unique; one solution is given by

\[ \vec{A} = - \hat{e}_z \, \frac{I}{2 \pi} \ln \rho \]
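A short sympy check (my own illustration; the overall prefactor of the wire field is as reconstructed above, so treat it as an assumption) that the quoted vector potential reproduces the quoted field:

```python
# B = curl(0, 0, Az) = (dAz/dy, -dAz/dx, 0) for Az = -(I/(2*pi)) ln(rho).
import sympy as sp

x, y, I = sp.symbols('x y I', positive=True)
rho2 = x**2 + y**2
Az = -I/(2*sp.pi) * sp.log(sp.sqrt(rho2))

Bx, By = sp.diff(Az, y), -sp.diff(Az, x)
print(sp.simplify(Bx - (-I*y/(2*sp.pi*rho2))),
      sp.simplify(By - ( I*x/(2*sp.pi*rho2))))     # -> 0 0
```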

2.6 Successive Applications of ∇

The gradient operator can be used successively to create higher-order differentials. Frequently encountered higher-order derivatives include the divergence of the gradient of a scalar φ,

\[ \vec{\nabla} \cdot ( \vec{\nabla} \phi ) = \nabla^2 \phi \]

which defines the Laplacian of φ. In Cartesian coordinates one has

\[ \nabla^2 \phi = \left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2} \right) \phi \]

The Laplacian appears in electrostatic problems. On combining the definition of the scalar potential φ in electrostatics,

\[ \vec{E} = - \vec{\nabla} \phi \]

with Gauss's law,

\[ \vec{\nabla} \cdot \vec{E} = 4 \pi \rho \]

one obtains Poisson's equation

\[ \nabla^2 \phi = - 4 \pi \rho \]

which relates the electrostatic potential to the charge density.

Another useful identity is obtained by taking the curl of a curl. It can be shown, using Cartesian coordinates, that the curl of the curl can be expressed as

\[ \vec{\nabla} \times ( \vec{\nabla} \times \vec{A} ) = - \nabla^2 \vec{A} + \vec{\nabla} ( \vec{\nabla} \cdot \vec{A} ) \]

This identity is independent of the choice of coordinate system. It is often used in electromagnetic theory, by combining

\[ \vec{B} = \vec{\nabla} \times \vec{A} \]

with the static form of Ampère's law

\[ \vec{\nabla} \times \vec{B} = \frac{4 \pi}{c} \vec{j} \]

which yields the relation between the vector potential and the current density

\[ - \nabla^2 \vec{A} + \vec{\nabla} ( \vec{\nabla} \cdot \vec{A} ) = \frac{4 \pi}{c} \vec{j} \]

The above equation holds only when the fields are time independent.
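A small sympy check (my own illustration, not part of the original notes) of the curl-of-curl identity quoted above, verified component-wise for an arbitrary smooth test field:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
X = (x, y, z)

def grad(f):  return [sp.diff(f, xi) for xi in X]
def div(F):   return sum(sp.diff(Fi, xi) for Fi, xi in zip(F, X))
def lap(f):   return sum(sp.diff(f, xi, 2) for xi in X)
def curl(F):
    Fx, Fy, Fz = F
    return [sp.diff(Fz, y) - sp.diff(Fy, z),
            sp.diff(Fx, z) - sp.diff(Fz, x),
            sp.diff(Fy, x) - sp.diff(Fx, y)]

A = (x**2*y + z, sp.sin(x)*z**2, x*y*z)          # arbitrary test field

lhs = curl(curl(A))
rhs = [-lap(Ai) + gi for Ai, gi in zip(A, grad(div(A)))]
print([sp.simplify(l - r) for l, r in zip(lhs, rhs)])   # -> [0, 0, 0]
```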

Other useful identities include

\[ \vec{\nabla} \times ( \vec{\nabla} \phi ) = 0 \]

and

\[ \vec{\nabla} \cdot ( \vec{\nabla} \times \vec{A} ) = 0 \]

2.7 Gauss's Theorem

Gauss's theorem relates the volume integral of the divergence of a vector to the surface integral of the vector. Since the volume integral of the divergence is a scalar, the surface integral of the vector must also be a scalar. The integration over the surface must be of the form of a scalar product of the vector with the normal to the surface area.

Consider the volume integral of ∇·A,

\[ \int d^3r \; \vec{\nabla} \cdot \vec{A} \]

For simplicity, consider the integration volume to be a cube with faces oriented parallel to the x, y and z axes. In this special case, Gauss's theorem can easily be proved by expressing the divergence in terms of the Cartesian components, and integrating the three separate terms. Since each term is of the form of a derivative with respect to a Cartesian coordinate, the corresponding integral can be evaluated in terms of the boundary terms,

\[ \int_{x_-}^{x_+} dx \int_{y_-}^{y_+} dy \int_{z_-}^{z_+} dz \left( \frac{\partial A_x}{\partial x} + \frac{\partial A_y}{\partial y} + \frac{\partial A_z}{\partial z} \right) = \int_{y_-}^{y_+} dy \int_{z_-}^{z_+} dz \Big[ A_x \Big]_{x_-}^{x_+} + \int_{x_-}^{x_+} dx \int_{z_-}^{z_+} dz \Big[ A_y \Big]_{y_-}^{y_+} + \int_{x_-}^{x_+} dx \int_{y_-}^{y_+} dy \Big[ A_z \Big]_{z_-}^{z_+} \]

The six terms on the right can be identified with integrations over the six surfaces of the cube. It should be noted that, for a fixed direction of A, the integrations over the upper and lower surfaces enter with opposite signs. If the normal to the surface is always chosen to be directed outwards, this expression can be written as an integral over the surface of the cube,

\[ \oint d\vec{S} \cdot \vec{A} \]

Hence, we have proved Gauss's theorem for this special geometry,

\[ \int_V d^3r \; \vec{\nabla} \cdot \vec{A} = \oint_S d\vec{S} \cdot \vec{A} \]

where the surface S bounds the volume V.

The above argument can be extended to volumes of arbitrary shape, by considering the volume to be composed of an infinite number of infinitesimal cubes and applying Gauss's theorem to each cube. The surfaces of the cubes internal to the volume occur in pairs. Since the corresponding surface integrals are oppositely directed, they cancel in pairs. This leaves only the integrals over the external surfaces of the cubes, which define the integral over the surface of the arbitrary volume.

Example:

Using Gauss's theorem, show that the divergence of the electric field caused by a point charge q is proportional to a Dirac delta function.

Solution:

The electric field of a point charge is given by

\[ \vec{E} = q \, \frac{\vec{r}}{r^3} \]

and one finds that

\[ \vec{\nabla} \cdot \vec{E} = 0 \]

for r ≠ 0. Since Gauss's theorem applied to a volume containing the point charge becomes

\[ \int d^3r \; \vec{\nabla} \cdot \vec{E} = \oint d\vec{S} \cdot \vec{E} = \oint d\vec{S} \cdot \frac{ q \, \vec{r} }{ r^3 } = 4 \pi q \]

one must have

\[ \vec{\nabla} \cdot \vec{E} = 4 \pi q \, \delta^3( \vec{r} ) \]

Therefore, since the charge density is given by the divergence of the electric field (up to a factor of 4π), the charge density must be represented by a Dirac delta function, as is expected for a point charge.
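A short numerical sketch (my own illustration) of the surface integral used in the example: for E = q r/r³ the flux through a sphere centred on the charge is 4πq. The 1/R² fall-off of the field cancels the R² of the area element, so the sphere radius drops out and only the angular integral of sin θ remains:

```python
import numpy as np

q = 1.0
N = 2000
theta = (np.arange(N) + 0.5) * np.pi / N        # midpoint grid in theta
dtheta = np.pi / N

flux = 2*np.pi * q * np.sum(np.sin(theta)) * dtheta   # the phi integral gives 2*pi
print(flux, 4*np.pi*q)                          # -> 12.566..., 12.566...
```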

2.8 Stokes's Theorem

Stokes's theorem relates the surface integral of the curl of a vector to an integral of the vector around the perimeter of the surface. Stokes's theorem can easily be proved by integrating the curl of the vector over a square with normal along the z-axis and sides parallel to the Cartesian axes,

\[ \int d\vec{S} \cdot ( \vec{\nabla} \times \vec{A} ) = \int_{x_-}^{x_+} dx \int_{y_-}^{y_+} dy \; \hat{e}_z \cdot \hat{e}_z \left( \frac{\partial A_y}{\partial x} - \frac{\partial A_x}{\partial y} \right) \]

The scalar product with the directed surface selects out the z-component of the curl. One integral in each term can be evaluated, yielding

\[ \int_{y_-}^{y_+} dy \Big[ A_y \Big]_{x_-}^{x_+} - \int_{x_-}^{x_+} dx \Big[ A_x \Big]_{y_-}^{y_+} = \oint d\vec{r} \cdot \vec{A} \]

which is of the form of an integration over the four sides of the square, in which the loop is traversed in a counterclockwise direction.

Stokes's theorem can be proved for a general surface by subdividing it into a grid of infinitesimal squares. The integrations over the interior perimeters cancel, and the net result is an integration over the loop bounding the exterior perimeter.
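Before turning to a physical example, here is a small numerical check (my own illustration, not from the text) of the theorem on the unit square in the x–y plane, using the field A = (−y, x, 0), for which (∇×A)_z = 2:

```python
import numpy as np

N = 2000
s = (np.arange(N) + 0.5) / N              # midpoint parametrization of each edge
ds = 1.0 / N

def A(x, y):                              # A = (-y, x, 0); only x, y components needed
    return np.array([-y, x])

# traverse the four edges of the unit square counterclockwise
loop  = np.sum( A(s,        0.0*s)[0]) * ds      # bottom edge:  dr = (+dx, 0)
loop += np.sum( A(1.0+0*s,  s     )[1]) * ds     # right edge:   dr = (0, +dy)
loop += np.sum(-A(1.0-s,    1.0+0*s)[0]) * ds    # top edge:     dr = (-dx, 0)
loop += np.sum(-A(0.0*s,    1.0-s )[1]) * ds     # left edge:    dr = (0, -dy)

print(loop, 2.0)                          # loop integral equals curl_z * area = 2
```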

A physical example of Stokes's theorem is found in the quantum mechanical description of a charged particle in a magnetic field. The wave function can be composed of a superposition of two components stemming from two paths. The two components have a phase difference which is proportional to the integrals of the vector potential along the two paths,

\[ \int_{\text{path } 1} d\vec{r} \cdot \vec{A} \qquad \text{and} \qquad \int_{\text{path } 2} d\vec{r} \cdot \vec{A} \]

If these two paths are traversed consecutively, but the second path is traced in the reverse direction, then one has a loop. If the phase of the wave function at the origin is to be unique, up to multiples of 2π, then the loop integral

\[ \oint d\vec{r} \cdot \vec{A} \]

must take on multiples of a fundamental value Φ₀. Stokes's theorem then leads to the conclusion that the magnetic flux must be quantized,

\[ \oint d\vec{r} \cdot \vec{A} = n \, \Phi_0 \qquad \Rightarrow \qquad \int d\vec{S} \cdot \vec{B} = n \, \Phi_0 \]

where n is an arbitrary integer and Φ₀ is the fundamental flux quantum. This phenomenon, discovered by Dirac, is known as flux quantization.

2.9 Non-Orthogonal Coordinate Systems

One can introduce non-Cartesian coordinate systems. The most general coordinate systems do not use orthogonal unit vectors. As an example, consider a coordinate system for a plane based on two unit vectors ê₁ and ê₂ of fixed directions. The position of a point on the plane can be labeled by the components x¹ and x², where

\[ \vec{r} = \hat{e}_1 \, x^1 + \hat{e}_2 \, x^2 \]

The length l of the vector is given by

\[ l^2 = (x^1)^2 + (x^2)^2 + 2 \, \hat{e}_1 \cdot \hat{e}_2 \; x^1 x^2 \]

The expression for the length can be written as

\[ l^2 = \sum_{i,j} g_{i,j} \; x^i x^j \]

where g_{i,j} is known as the metric tensor and is given by

\[ g_{i,j} = \hat{e}_i \cdot \hat{e}_j \]

In this prescription, the components have been given as the successive displacements that must be traversed parallel to the unit vectors to arrive at the point. This is one way of specifying a vector; the components x¹ and x² specified in this way are known as the contra-variant components.

Another way of specifying the same vector is obtained by giving the components x₁, x₂ as the displacements along the unit vectors determined by dropping perpendiculars from the point onto the axes. The components x₁ and x₂ are the co-variant components.

Example:

What is the relationship between the co-variant and contra-variant components of a vector? Express the relationship, and the inverse relationship, in terms of the components of the metric.

Solution:

Let θ be the angle between ê₁ and ê₂, such that

\[ \hat{e}_1 \cdot \hat{e}_2 = \cos \theta \]

Consider a vector of length l which is oriented at an angle φ relative to the unit vector ê₁. The relationship between the Cartesian components of the vector and the contra-variant components is given by

\[ l \cos \varphi = x^1 + x^2 \cos \theta \qquad l \sin \varphi = x^2 \sin \theta \]

The co-variant components are given by

\[ x_1 = l \cos \varphi \qquad x_2 = l \cos( \theta - \varphi ) \]

Hence, we find the relationships

\[ x_1 = x^1 + x^2 \cos \theta \qquad x_2 = x^1 \cos \theta + x^2 \]

which can be summarized as

\[ x_i = \sum_j g_{i,j} \; x^j \]

The inverse relation is given by

\[ x^1 = \frac{1}{\sin^2 \theta} \left( x_1 - x_2 \cos \theta \right) \qquad x^2 = \frac{1}{\sin^2 \theta} \left( x_2 - x_1 \cos \theta \right) \]

which can be summarized as

\[ x^i = \sum_j g^{i,j} \; x_j \]
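A small numpy illustration (my own; the angle and components are chosen arbitrarily) of lowering and raising indices with the metric g_ij = ê_i·ê_j, and of the two equivalent expressions for the length:

```python
import numpy as np

theta = 0.6                                    # angle between the two basis vectors
g = np.array([[1.0, np.cos(theta)],
              [np.cos(theta), 1.0]])           # metric tensor g_ij
g_inv = np.linalg.inv(g)                       # inverse metric g^ij

x_contra = np.array([2.0, -1.0])               # contravariant components x^i
x_co = g @ x_contra                            # x_i = g_ij x^j
x_back = g_inv @ x_co                          # x^i = g^ij x_j recovers the original

length_sq     = x_contra @ g @ x_contra        # l^2 = g_ij x^i x^j
length_sq_alt = x_co @ g_inv @ x_co            # l^2 = g^ij x_i x_j  (same value)
print(np.allclose(x_back, x_contra), np.isclose(length_sq, length_sq_alt))
```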

Example:

How is the length expressed in terms of the co-variant components?

Solution:

The length is given in terms of the co-variant components by

\[ l^2 = \sum_{i,j} g^{i,j} \; x_i x_j \]

It is customary to write the inverse of the metric as

\[ g^{i,j} = \left( g^{-1} \right)_{i,j} \]

so that the subscripts balance the superscripts when summed over.

2.9.1 Curvilinear Coordinate Systems

It is usual to identify generalized coordinates, such as (r, θ, ϕ), and then define the unit vectors corresponding to the directions of increasing generalized coordinates. That is, ê_r is the unit vector in the direction of increasing r, ê_θ is the unit vector in the direction of increasing θ, and ê_ϕ is the unit vector in the direction of increasing ϕ.

If we denote the generalized coordinates by q_j, then an infinitesimal change in a Cartesian coordinate can be expressed as

\[ dx_i = \sum_j \frac{\partial x_i}{\partial q_j} \; dq_j \]

The change in length dl can be expressed via

\[ dl^2 = \sum_i dx_i^2 \]

which becomes

\[ dl^2 = \sum_{j,j'} \sum_i \frac{\partial x_i}{\partial q_j} \frac{\partial x_i}{\partial q_{j'}} \; dq_j \, dq_{j'} \]

Thus, the metric is found to be

\[ g_{j,j'} = \sum_i \frac{\partial x_i}{\partial q_j} \frac{\partial x_i}{\partial q_{j'}} \]

The three unit vectors of the generalized coordinate system are proportional to

\[ \hat{e}_{q_j} \propto \sum_i \hat{e}_i \, \frac{\partial x_i}{\partial q_j} \]

In general, the directions of the unit vectors depend on the values of the set of three generalized coordinates q_j.

In orthogonal coordinate systems, the coordinates are based on the existence of three orthogonal unit vectors. The unit vectors are orthogonal when the scalar products of distinct unit vectors vanish, which gives the conditions

\[ \sum_i \frac{\partial x_i}{\partial q_j} \frac{\partial x_i}{\partial q_{j'}} = 0 \]

for j ≠ j'. Thus, for orthogonal coordinate systems, the metric is diagonal,

\[ g_{j,j'} \propto \delta_{j,j'} \]

The metric is positive definite, as the non-zero elements are given by

\[ g_{j,j} = \sum_i \left( \frac{\partial x_i}{\partial q_j} \right)^2 > 0 \]

The inverse of the metric is also diagonal and has the non-zero elements g_{j,j}⁻¹. Thus, in this case, the co-variant and contra-variant components of a vector are simply related by

\[ x_i = g_{i,i} \; x^i \]

An example of orthogonal curvilinear coordinates is given by the spherical polar coordinate representation.

The unit vectors are denoted by ê r, ê θ, ê ϕ and are in the direction of increasing coordinate. Thus, and ê r = r r = ê x sin θ cos ϕ + ê y sin θ sin ϕ + ê z cos θ 9 ê θ r θ = ê x r cos θ cos ϕ + ê y r cos θ sin ϕ ê z r sin θ 9 The unit vector ê θ is given by ê θ = ê x cos θ cos ϕ + ê y cos θ sin ϕ ê z sin θ 93 Finally, we find the remaining unit vector from ê ϕ r ϕ = ê x r sin θ sin ϕ + ê y r sin θ cos ϕ 94 which is in the x y plane. The unit vector ê ϕ is given by normalizing the above vector, and is ê ϕ = ê x sin ϕ + ê y cos ϕ 95 As can be seen by evaluation of the scalar product, these three unit vectors are mutually perpendicular. Furthermore, they form a coordinate systems in which ê r ê θ = ê ϕ 96 Due to the orthogonality of the unit vectors, the metric is a diagonal matrix and has the non-zero matrix elements g r,r = g θ,θ = r In terms of the metric, the unit vectors are given by g ϕ,ϕ = r sin θ 97 ê qj = g j,j r q j 98

2.9.3 The Gradient

In curvilinear coordinates, the gradient of a scalar function φ is obtained by considering the infinitesimal increment caused by changes in the independent variables q_j,

\[ d\phi = \sum_j \frac{\partial \phi}{\partial q_j} \; dq_j \]

which, for orthogonal coordinate systems, can be written as

\[ d\phi = \sum_j \frac{1}{\sqrt{g_{j,j}}} \frac{\partial \phi}{\partial q_j} \; \hat{e}_{q_j} \cdot \sum_{j'} \hat{e}_{q_{j'}} \sqrt{g_{j',j'}} \; dq_{j'} = \left( \sum_j \hat{e}_{q_j} \frac{1}{\sqrt{g_{j,j}}} \frac{\partial \phi}{\partial q_j} \right) \cdot d\vec{r} \]

Thus, in orthogonal coordinate systems, the gradient is identified as

\[ \vec{\nabla} \phi = \sum_j \hat{e}_{q_j} \frac{1}{\sqrt{g_{j,j}}} \frac{\partial \phi}{\partial q_j} \]

In spherical polar coordinates, the gradient is given by

\[ \vec{\nabla} \phi = \hat{e}_r \frac{\partial \phi}{\partial r} + \hat{e}_\theta \frac{1}{r} \frac{\partial \phi}{\partial \theta} + \hat{e}_\varphi \frac{1}{r \sin\theta} \frac{\partial \phi}{\partial \varphi} \]

2.9.4 The Divergence

Gauss's theorem can be used to define the divergence in a general orthogonal coordinate system. In particular, applying Gauss's theorem to an infinitesimal volume, one has

\[ \int d^3r \; \vec{\nabla} \cdot \vec{A} = \oint d\vec{S} \cdot \vec{A} \]

where the elemental volume is given by

\[ d^3r = \prod_j \sqrt{g_{j,j}} \; dq_j = \sqrt{\det g} \; \prod_j dq_j \]

and the elemental surface areas are given by

\[ dS_i = \prod_{j \neq i} \sqrt{g_{j,j}} \; dq_j = \frac{ \sqrt{\det g} }{ \sqrt{g_{i,i}} } \prod_{j \neq i} dq_j \]

Hence, from Gauss's theorem, the divergence is given by the sum of the scalar products of the vector with the directed surface areas, divided by the volume element. Since the surfaces with normals in the direction of ê_{q_i} occur in pairs and are oppositely directed, one finds the divergence as the derivative

\[ \vec{\nabla} \cdot \vec{A} = \frac{1}{ \sqrt{\det g} } \sum_i \frac{\partial}{\partial q_i} \left( \frac{ \sqrt{\det g} }{ \sqrt{g_{i,i}} } \; A_i \right) \]

For spherical polar coordinates, the divergence is evaluated as

\[ \vec{\nabla} \cdot \vec{A} = \frac{1}{ r^2 \sin\theta } \left[ \frac{\partial}{\partial r} \left( r^2 \sin\theta \; A_r \right) + \frac{\partial}{\partial \theta} \left( r \sin\theta \; A_\theta \right) + \frac{\partial}{\partial \varphi} \left( r \, A_\varphi \right) \right] \]
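A sympy consistency check (my own illustration): applying the curvilinear gradient and divergence formulas quoted above, in spherical polars, reproduces the Cartesian Laplacian of a test function, here φ = x²z written in spherical coordinates (for which ∇²φ = 2z = 2r cos θ):

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
x = r*sp.sin(th)*sp.cos(ph); y = r*sp.sin(th)*sp.sin(ph); z = r*sp.cos(th)

f = x**2 * z                          # test scalar field, expressed in spherical variables

# gradient components (A_r, A_theta, A_phi) from the orthogonal-coordinate formula
A = (sp.diff(f, r), sp.diff(f, th)/r, sp.diff(f, ph)/(r*sp.sin(th)))

# divergence of that gradient from the orthogonal-coordinate formula
div = (sp.diff(r**2*sp.sin(th)*A[0], r)
       + sp.diff(r*sp.sin(th)*A[1], th)
       + sp.diff(r*A[2], ph)) / (r**2*sp.sin(th))

print(sp.simplify(div - 2*r*sp.cos(th)))      # -> 0
```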

2.9.5 The Curl

In a generalized orthogonal coordinate system, Stokes's theorem can be used to define the curl. We shall apply Stokes's theorem to an infinitesimal loop,

\[ \int d\vec{S} \cdot ( \vec{\nabla} \times \vec{A} ) = \oint d\vec{r} \cdot \vec{A} \]

The component of the curl along the unit direction ê_{q_j} can be evaluated from the surface area dS_j with normal ê_{q_j}. Then, we have

\[ d\vec{S}_j \cdot ( \vec{\nabla} \times \vec{A} ) = ( \vec{\nabla} \times \vec{A} )_j \; \frac{ \sqrt{\det g} }{ \sqrt{g_{j,j}} } \prod_{i \neq j} dq_i \]

For the surface with normal in the 1-direction, this is given by

\[ d\vec{S}_1 \cdot ( \vec{\nabla} \times \vec{A} ) = ( \vec{\nabla} \times \vec{A} )_1 \; \sqrt{ g_{2,2} \, g_{3,3} } \; dq_2 \, dq_3 \]

The loop integral over the perimeter of this surface becomes

\[ \oint d\vec{r} \cdot \vec{A} = \left( \sqrt{g_{2,2}} \, A_2 \right)_{q_3} dq_2 + \left( \sqrt{g_{3,3}} \, A_3 \right)_{q_2 + dq_2} dq_3 - \left( \sqrt{g_{2,2}} \, A_2 \right)_{q_3 + dq_3} dq_2 - \left( \sqrt{g_{3,3}} \, A_3 \right)_{q_2} dq_3 \]

where we have Taylor expanded the vector field about the center of the infinitesimal surface. The lowest-order terms in the expansion stemming from the opposite sides of the perimeter cancel. Hence, the component of the curl along the normal to the infinitesimal surface is given by the expression

\[ \left( \vec{\nabla} \times \vec{A} \right)_1 = \frac{1}{ \sqrt{ g_{2,2} \, g_{3,3} } } \left[ \frac{\partial}{\partial q_2} \left( \sqrt{g_{3,3}} \, A_3 \right) - \frac{\partial}{\partial q_3} \left( \sqrt{g_{2,2}} \, A_2 \right) \right] \]

The expression for the entire curl vector can be written as a determinant,

\[ \vec{\nabla} \times \vec{A} = \frac{1}{ \sqrt{\det g} } \begin{vmatrix} \hat{e}_1 \sqrt{g_{1,1}} & \hat{e}_2 \sqrt{g_{2,2}} & \hat{e}_3 \sqrt{g_{3,3}} \\ \frac{\partial}{\partial q_1} & \frac{\partial}{\partial q_2} & \frac{\partial}{\partial q_3} \\ \sqrt{g_{1,1}} \, A_1 & \sqrt{g_{2,2}} \, A_2 & \sqrt{g_{3,3}} \, A_3 \end{vmatrix} \]

In spherical polar coordinates, the curl is given by

\[ \vec{\nabla} \times \vec{A} = \frac{1}{ r^2 \sin\theta } \begin{vmatrix} \hat{e}_r & r \, \hat{e}_\theta & r \sin\theta \, \hat{e}_\varphi \\ \frac{\partial}{\partial r} & \frac{\partial}{\partial \theta} & \frac{\partial}{\partial \varphi} \\ A_r & r \, A_\theta & r \sin\theta \, A_\varphi \end{vmatrix} \]

2.9.6 Compounding Vector Differential Operators in Curvilinear Coordinates

When compounding differential operators, it is essential to note that the operators should be considered as acting on an arbitrary differentiable function. Thus, since one can compound ∂/∂x and x via

\[ \frac{\partial}{\partial x} \left( x \, f(x) \right) = x \frac{\partial f}{\partial x} + f(x) \]

one has the operator identity

\[ \frac{\partial}{\partial x} \; x = x \frac{\partial}{\partial x} + 1 \]

The order of the operators is important. The differential operator acts on everything to the right of it, which includes the unit vectors. In Cartesian coordinates, the directions of the unit vectors are fixed, thus

\[ \frac{\partial}{\partial x} \hat{e}_x = \frac{\partial}{\partial y} \hat{e}_x = \frac{\partial}{\partial z} \hat{e}_x = 0 \]

etc. For curvilinear coordinates this is no longer true. For example, in spherical polar coordinates, although the directions of the unit vectors do not depend on the radial distance r,

\[ \frac{\partial \hat{e}_r}{\partial r} = \frac{\partial \hat{e}_\theta}{\partial r} = \frac{\partial \hat{e}_\varphi}{\partial r} = 0 \]

the other derivatives of the unit vectors are not zero, as

\[ \frac{\partial \hat{e}_r}{\partial \theta} = \hat{e}_\theta \qquad \frac{\partial \hat{e}_\theta}{\partial \theta} = - \hat{e}_r \qquad \frac{\partial \hat{e}_\varphi}{\partial \theta} = 0 \]

and

\[ \frac{\partial \hat{e}_r}{\partial \varphi} = \sin\theta \; \hat{e}_\varphi \qquad \frac{\partial \hat{e}_\theta}{\partial \varphi} = \cos\theta \; \hat{e}_\varphi \qquad \frac{\partial \hat{e}_\varphi}{\partial \varphi} = - \left( \sin\theta \; \hat{e}_r + \cos\theta \; \hat{e}_\theta \right) \]

The Laplacian of a scalar φ can be evaluated by computing the divergence of the gradient of φ, i.e., ∇·(∇φ). Here it is important to note that the differential operator acts on the unit vectors before the scalar product is evaluated.

Problem 2.1

Find an explicit expression for the angular momentum operator

\[ \hat{\vec{L}} = - i \; \vec{r} \times \vec{\nabla} \]

in spherical polar coordinates.

Problem 2.2

How are the Laplacian of a scalar ψ and the square of the angular momentum, L̂² ψ, related in spherical polar coordinates?

Problem 2.3

The two-dimensional x–y model consists of a set of classical spins located at the sites of a square lattice. Pairs of spins at neighboring lattice sites R and R + a have an interaction energy

\[ - J \cos\left( \theta_{\vec{R}} - \theta_{\vec{R} + \vec{a}} \right) \]

where θ_R is the angle that the spin at site R subtends to a fixed axis. For positive J, the interaction is minimized if the neighboring spins are all parallel.

In the continuum limit, the interactions can be Taylor expanded in the lattice constant a. For small a, the angle θ can be considered as a continuous field θ(r), defined at every point in the two-dimensional space. The energy of the field is approximately given by

\[ E \simeq \frac{J}{2} \int d^2r \; \vec{\nabla} \theta \cdot \vec{\nabla} \theta \]

The fields that extremalize the energy satisfy Laplace's equation

\[ \nabla^2 \theta = 0 \]

(i) Find an expression for the Laplacian and the gradient in circular polar coordinates (r, ϕ).

(ii) Show that

\[ \theta( \vec{r} ) = \sum_m r^m \left( A_m \cos m\varphi + B_m \sin m\varphi \right) \]

satisfies Laplace's equation everywhere.

(iii) Present arguments as to why m should be restricted to the set of positive integers.

(iv) Show that the energy density of these solutions diverges at the boundary of a circle of radius R, so that they do not represent low-energy excitations.

(v) Show that

\[ \theta = C_1 \ln r + C_2 \qquad \theta = C_1 \varphi + C_2 \]

are solutions of Laplace's equation, except at the point r = 0. Since the continuum approximation is valid only for distances greater than a, these may be considered good solutions if a cut-off is introduced for distances smaller than a.

(vi) Show that

\[ \theta( \vec{r} ) = \sum_i n_i \tan^{-1} \left( \frac{ y - y_i }{ x - x_i } \right) + \theta_0 \]

can also be considered as a good solution, except in the vicinity of a finite number of points (x_i, y_i) where the solution is singular.

(vii) Why should the n_i be restricted to positive and negative integer values?

These singular solutions have topological character. The thermally induced binding or unbinding of pairs of excitations with singularities at (x_i, y_i) and (x_j, y_j) which have opposite values of n, such that n_i + n_j = 0, gives rise to the topological Kosterlitz-Thouless transition.

3 Partial Differential Equations

The dynamics of systems are usually described by one or more partial differential equations. A partial differential equation is characterized as being an equation for an unknown function of more than one independent variable, which expresses a relationship between the partial derivatives of the function with respect to the various independent variables.

Conceptually, a solution may be envisaged as being obtained by direct integration. Since integration occurs between two limits, the solution of a partial differential equation is not unique unless its value is given at one of the limits. That is, the solution is not unique unless the constants of integration are specified. These are usually specified as boundary conditions or initial conditions.

An important example is provided by the non-relativistic Schrödinger equation

\[ - \frac{\hbar^2}{2m} \nabla^2 \psi + V( \vec{r} ) \, \psi = i \hbar \frac{\partial \psi}{\partial t} \]

in which the wave function ψ(r, t) is usually a complex function of position and time. The one-particle wave function has the interpretation that |ψ(r, t)|² is the probability density for finding the particle at position r at time t. In order that ψ(r, t) be uniquely specified, it is necessary to specify boundary conditions. These may take the form of specifying ψ(r, t), or its derivative with respect to r, on the boundary of the three-dimensional region of interest. Furthermore, since this partial differential equation contains the first derivative with respect to time, it is necessary to specify one boundary condition at a temporal boundary, such as the initial time t₀. That is, the entire wave function must be specified as an initial condition, ψ(r, t₀).

Another important example is given by the wave equation

\[ \nabla^2 \phi - \frac{1}{c^2} \frac{\partial^2 \phi}{\partial t^2} = f( \vec{r}, t ) \]

where φ(r, t) describes the wave motion, c is the phase velocity of the wave, and the force density f(r, t) acts as a source for the waves inside the region of interest. Again, appropriate boundary conditions on the four-dimensional space-time (r, t) need to be specified for the solution to be unique. Since this equation is second order with respect to time, it is necessary to specify φ at two times. Alternatively, one may specify φ(r, t) at the initial time together with its time derivative ∂φ(r, t)/∂t at the initial time.

Poisson's equation is the partial differential equation

\[ \nabla^2 \phi = - 4 \pi \rho \]

which specifies the scalar or electrostatic potential φ(r) produced by a charge density ρ(r). The boundaries of the spatial region in which φ is to be determined may also involve charge densities on the boundary surfaces, or they may be surfaces over which φ is specified. The charge density is to be regarded as a source for the electric potential φ.

Maxwell's theory of electromagnetism is based on a set of equations of the form

\[ \vec{\nabla} \cdot \vec{E} = 4 \pi \rho \qquad \vec{\nabla} \cdot \vec{B} = 0 \]
\[ \vec{\nabla} \times \vec{B} = \frac{4 \pi}{c} \vec{j} + \frac{1}{c} \frac{\partial \vec{E}}{\partial t} \qquad \vec{\nabla} \times \vec{E} = - \frac{1}{c} \frac{\partial \vec{B}}{\partial t} \]

for the two vector quantities E and B, where ρ and j are source terms, respectively representing the charge and current densities. These can be considered as forming a set of eight equations for the six components of E and B. In general, specifying more equations than components may lead to inconsistencies. However, in this case two of the equations can be thought of as specifying consistency conditions on the initial data, such as the continuity of charge or the absence of magnetic monopoles. Since these equations are first order in time, it is only necessary to specify one initial condition on each of the E and B fields. This is in contrast to the wave equation, obtained by combining the equations for E and B, which is a second-order partial differential equation. That is, on combining Maxwell's equations, one can find a second-order partial differential equation for the unknown vector field E,

\[ \vec{\nabla} \times ( \vec{\nabla} \times \vec{E} ) + \frac{1}{c^2} \frac{\partial^2 \vec{E}}{\partial t^2} = - \frac{4 \pi}{c^2} \frac{\partial \vec{j}}{\partial t} \]

which, in the absence of a charge density, reduces to an inhomogeneous wave equation. The two initial conditions required to solve the wave equation correspond to specifying E and the derivative of E with respect to t; the latter condition is equivalent to specifying B in Maxwell's equations.

All of the above equations possess the special property that they are linear partial differential equations. Furthermore, they are all (at most) second-order linear partial differential equations, since the highest-order derivative that enters is the second-order derivative.

Consider the homogeneous equation, which is obtained by setting the source terms to zero. In the absence of the source terms, each term in these equations

only involves the first power of the unknown function, or the first power of a first or higher-order partial derivative of the function.

The solution of a partial differential equation is not unique unless the boundary conditions are specified. That is, one may find more than one linearly independent solution for the unknown function, say φ_i for i = 1, 2, ..., N. Due to the linearity, the general solution of the homogeneous equation can be expressed as a linear combination

\[ \phi = \sum_{i=1}^{N} C_i \, \phi_i \]

where the C_i are arbitrary complex numbers. The constants C_i may be determined if appropriate boundary conditions and initial conditions are specified. This is often referred to as the principle of linear superposition.

Now consider the inhomogeneous equation, that is, the equation in which the source terms are present. If a particular solution of the inhomogeneous equation is found, say φ_p, then it can be seen that, due to the linearity, it is possible to find the general solution as the sum of the particular solution and the solutions of the homogeneous equation,

\[ \phi = \phi_p + \sum_{i=1}^{N} C_i \, \phi_i \]

The solution may be uniquely determined if appropriate boundary and initial conditions are specified.

Non-Linear Partial Differential Equations.

By contrast, a non-linear partial differential equation involves powers of different orders of the unknown function and its derivatives. Examples are given by the sine-Gordon equation

\[ \nabla^2 \phi - \frac{1}{c^2} \frac{\partial^2 \phi}{\partial t^2} - m^2 \sin \phi = 0 \]

which is a second-order non-linear partial differential equation, and the Korteweg-de Vries equation

\[ \frac{\partial \phi}{\partial t} + \phi \frac{\partial \phi}{\partial x} + \frac{\partial^3 \phi}{\partial x^3} = 0 \]

which describes shallow water waves in a one-dimensional channel. The Korteweg-de Vries equation is a third-order non-linear partial differential equation, as the highest-order derivative that it contains is third order.

In these non-linear equations, the principle of linear superposition does not hold. One cannot express the general solution of the equation as a linear sum of individual solutions φ_i. Both of these non-linear equations are special in that they have travelling-wave-like solutions which propagate without dispersion. These special solutions are known

as soliton solutions.

For the Korteweg-de Vries equation, one can look for soliton solutions which propagate with velocity c. These are of the form

\[ \phi(x, t) = \phi( x - c \, t ) \]

Substituting this form of the solution into the partial differential equation leads to

\[ - c \frac{\partial \phi}{\partial x} + \phi \frac{\partial \phi}{\partial x} + \frac{\partial^3 \phi}{\partial x^3} = 0 \]

which can be integrated to yield

\[ - c \, \phi + \frac{\phi^2}{2} + \frac{\partial^2 \phi}{\partial x^2} = \kappa \]

The constant of integration κ is chosen to be zero, by specifying that φ → 0 when x − c t → ∞. On identifying an integrating factor of

\[ \frac{\partial \phi}{\partial x} \]

and multiplying the differential equation by the integrating factor, one obtains

\[ \frac{\partial \phi}{\partial x} \frac{\partial^2 \phi}{\partial x^2} - c \, \phi \frac{\partial \phi}{\partial x} + \frac{\phi^2}{2} \frac{\partial \phi}{\partial x} = 0 \]

This can be integrated again to yield

\[ \frac{\phi^3}{6} - \frac{c}{2} \phi^2 + \frac{1}{2} \left( \frac{\partial \phi}{\partial x} \right)^2 = \gamma \]

The boundary conditions can be used again to find γ = 0. The square root of the equation can be taken, giving the solution as an integral: with z = φ(x, t)/(3c),

\[ \int \frac{dz}{ z \sqrt{ 1 - z } } = \sqrt{c} \int_{x_0}^{x - c t} dx' \]

The integral can be evaluated using the substitution

\[ z = \text{sech}^2 x \]

which gives

\[ dz = - 2 \, \text{sech}^2 x \, \tanh x \; dx \]

and hence

\[ \phi(x, t) = \frac{ 3 \, c }{ \cosh^2 \left[ \frac{ \sqrt{c} }{ 2 } \left( x - c \, t \right) \right] } \]
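A direct sympy check (my own illustration, not part of the original notes) that the soliton obtained above solves the Korteweg-de Vries equation in the form used here:

```python
import sympy as sp

x, t, c = sp.symbols('x t c', positive=True)
phi = 3*c / sp.cosh(sp.sqrt(c)*(x - c*t)/2)**2

# residual of  phi_t + phi*phi_x + phi_xxx  should vanish identically
kdv = sp.diff(phi, t) + phi*sp.diff(phi, x) + sp.diff(phi, x, 3)
print(sp.simplify(kdv.rewrite(sp.exp)))        # -> 0
```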

This non-linear solution has a finite spatial extent, propagates with velocity c, and does not disperse or spread out.

The stability of shape of the soliton solution is to be contrasted with the behavior found from either linear equations with non-trivial dispersion relations, or from the non-linear first-order differential equation

\[ \frac{\partial \phi}{\partial t} = - \phi \frac{\partial \phi}{\partial x} \]

which has the solution

\[ \phi = f( x - \phi \, t ) \]

where f(x) is the arbitrary initial condition. This can be solved graphically. As the point with the largest φ moves most rapidly, the wave must change its shape. This leads to a breaker wave, and may give rise to singularities in the solution after a finite time has elapsed. That is, the solution may cease to be single-valued after the elapse of a specific time.

3.1 Linear First-Order Partial Differential Equations

Consider the homogeneous linear first-order partial differential equation

\[ \frac{\partial \phi}{\partial t} + a(x, t) \frac{\partial \phi}{\partial x} = 0 \]

with initial condition φ(x, 0) = f(x). This is known as a Cauchy problem. We shall solve the Cauchy problem with the method of characteristics.

The method of characteristics is a powerful method that allows one to reduce any linear first-order partial differential equation to an ordinary differential equation. The characteristics are defined to be curves x(t) in the (x, t) plane which satisfy the equation

\[ \frac{d x(t)}{d t} = a( x(t), t ) \]

The solution of this equation defines a family, or set, of curves x(t). The different curves correspond to the different constants of integration, or initial conditions x(0) = x₀.

The solution φ(x, t), when evaluated on the characteristic x(t), yields φ(x(t), t). This has the special property that φ has a constant value along the curve x(t). This can be shown by taking the derivative along the characteristic curve,

\[ \frac{d}{dt} \phi( x(t), t ) = \frac{d x(t)}{d t} \frac{\partial \phi(x, t)}{\partial x} + \frac{\partial \phi(x, t)}{\partial t} \]

and since the characteristic satisfies

\[ \frac{d x(t)}{d t} = a( x(t), t ) \]

one has

\[ \frac{d}{dt} \phi( x(t), t ) = a(x, t) \frac{\partial \phi(x, t)}{\partial x} + \frac{\partial \phi(x, t)}{\partial t} = 0 \]

as φ(x, t) satisfies the homogeneous linear partial differential equation. Thus, φ(x, t) is constant along a characteristic. Hence,

\[ \phi( x(t), t ) = \phi( x_0, 0 ) = f( x_0 ) \]

This means that if one can determine the characteristics, one can compute the solution of the Cauchy problem.

Essentially, the method of characteristics consists of finding two non-orthogonal curvilinear coordinates ξ and η. The initial data are specified on one coordinate curve, ξ = 0, and an η is found such that the system conserves the initial values along the curves of constant η.

Example 3.1.1

Consider the Cauchy problem

\[ \frac{\partial \phi}{\partial t} + c \frac{\partial \phi}{\partial x} = 0 \]

with initial condition

\[ \phi(x, 0) = f(x) \]

Solution:

The characteristic is determined from the ordinary differential equation (it has only one variable)

\[ \frac{d x}{d t} = c \]

which has the solutions

\[ x(t) = c \, t + x_0 \]

Thus, the characteristics consist of curves of uniform motion with constant velocity c.

The solution φ(x, t) is constant on the curve passing through (x₀, 0), and is determined from

\[ \phi(x, t) = \phi( x(t), t ) = f( x_0 ) \]

On inverting the equation for the characteristic, one finds

\[ x_0 = x - c \, t \]

so one has

\[ \phi(x, t) = f( x - c \, t ) \]

which clearly satisfies both the initial condition and the differential equation. The above solution corresponds to a wave travelling in the positive x-direction with speed c.

Example 3.1.2

Solve the Cauchy problem

\[ \frac{\partial \phi}{\partial t} + x \frac{\partial \phi}{\partial x} = 0 \]

with initial condition

\[ \phi(x, 0) = f(x) \]

Solution:

The characteristic is determined by the ordinary differential equation

\[ \frac{d x}{d t} = x \]

which has the solution

\[ x(t) = x_0 \exp[ \, t \, ] \]

On inverting the equation for the characteristic, the initial point is expressed as

\[ x_0 = x(t) \exp[ - t \, ] \]

Then, as the solution is constant along the characteristic curve,

\[ \phi( x(t), t ) = f( x_0 ) \]

one has

\[ \phi(x, t) = f\Big( x \exp[ - t \, ] \Big) \]

This is the solution that satisfies both the partial differential equation and the initial condition.
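A small numerical illustration (my own, not from the text) of the method of characteristics for Example 3.1.2: integrate dx/dt = x numerically from several starting points x₀, carry f(x₀) along unchanged, and compare against the closed-form solution φ(x, t) = f(x e⁻ᵗ):

```python
import numpy as np

f = lambda x: np.exp(-x**2)              # arbitrary smooth initial profile
t_final, n_steps = 0.7, 10000
dt = t_final / n_steps

x = np.linspace(-3.0, 3.0, 7)            # characteristic starting points x0
phi = f(x)                               # phi is constant along each characteristic
for _ in range(n_steps):                 # forward-Euler integration of dx/dt = x
    x = x + dt * x

print(np.max(np.abs(phi - f(x * np.exp(-t_final)))))   # -> ~1e-5 (small)
```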

Inhomogeneous First-Order Partial Differential Equations.

Inhomogeneous linear first-order partial differential equations can be solved by a simple extension of the method of characteristics. Consider the partial differential equation

\[ \frac{\partial \phi}{\partial t} + a(x, t) \frac{\partial \phi}{\partial x} = b(x, t) \]

where the inhomogeneous term b(x, t) acts as a source term. The initial condition is given by

\[ \phi(x, 0) = f(x) \]

The characteristics are given by the solutions of

\[ \frac{d x}{d t} = a(x, t) \]

with x(0) = x₀. The solution along a characteristic is not constant, due to the presence of the inhomogeneous term, but instead satisfies

\[ \frac{d \phi( x(t), t )}{d t} = \frac{\partial \phi}{\partial t} + \frac{d x(t)}{d t} \frac{\partial \phi}{\partial x} = \frac{\partial \phi}{\partial t} + a(x, t) \frac{\partial \phi}{\partial x} = b(x, t) \]

However, the solution can be found by integration along the characteristic curve. This yields

\[ \phi( x(t), t ) = f( x_0 ) + \int_0^t dt' \; b( x(t'), t' ) \]

On inverting the relation between x(t) and x₀, and substituting the resulting relation for x₀ into the above equation, one finds the solution.

Example 3.1.3

Consider the inhomogeneous Cauchy problem

\[ \frac{\partial \phi}{\partial t} + c \frac{\partial \phi}{\partial x} = \lambda \, x \]

where the solution has to satisfy the initial condition φ(x, 0) = f(x).

Solution:

The characteristics are found as

\[ x(t) = x_0 + c \, t \]

and the solution of the partial differential equation along a characteristic is given by

\[ \phi( x(t), t ) = f( x_0 ) + \lambda \int_0^t dt' \; x(t') = f( x_0 ) + \lambda \left( x_0 \, t + \frac{c \, t^2}{2} \right) \]

Since the characteristic can be inverted to yield the initial point as x₀ = x − c t, one has the solution

\[ \phi(x, t) = f( x - c \, t ) + \lambda \, t \left( x - \frac{c \, t}{2} \right) \]

which completely solves the problem.
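A short sympy check (my own illustration) of the solution to Example 3.1.3, confirming both the differential equation and the initial condition:

```python
import sympy as sp

x, t, c, lam = sp.symbols('x t c lambda')
f = sp.Function('f')

phi = f(x - c*t) + lam*t*(x - c*t/2)
residual = sp.diff(phi, t) + c*sp.diff(phi, x) - lam*x

print(sp.simplify(residual), phi.subs(t, 0))     # -> 0, f(x)
```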

Example 3.1.4

Find the solution of the inhomogeneous Cauchy problem

\[ \frac{\partial \phi}{\partial t} + c \frac{\partial \phi}{\partial x} = - \frac{1}{\tau} \, \phi(x, t) \]

subject to the initial condition

\[ \phi(x, 0) = f(x) \]

Solution:

The characteristic is given by

\[ x(t) = x_0 + c \, t \]

The ordinary differential equation for φ(x(t), t) evaluated on a characteristic is

\[ \frac{d \phi( x(t), t )}{d t} = - \frac{1}{\tau} \, \phi( x(t), t ) \]

which has the solution

\[ \phi( x(t), t ) = \phi( x_0, 0 ) \exp\left[ - \frac{t}{\tau} \right] = f( x - c \, t ) \exp\left[ - \frac{t}{\tau} \right] \]

which is a damped forward-travelling wave.

Example 3.1.5

Find the solution of the equation

\[ x \frac{\partial \phi}{\partial x} - t \frac{\partial \phi}{\partial t} + t \, \phi = t \]

for x > 0 and t > 0.

Solution:

The characteristics x(t) satisfy

\[ \frac{d x}{d t} = - \frac{x}{t} \]

which has the solution

\[ x \, t = \eta \]

where η is an arbitrary constant. The ordinary differential equation for φ(x, t(x)) on the characteristic t(x) = η/x is given by

\[ x \frac{d \phi}{d x} + \frac{\eta}{x} \, \phi = \frac{\eta}{x} \]

for constant η. On multiplying by the integrating factor

\[ \frac{1}{x} \exp\left[ - \frac{\eta}{x} \right] \]

and integrating, one finds that the general solution is given by

\[ \phi( x, t(x) ) = \exp\left[ \frac{\eta}{x} \right] \left( f(\eta) + \exp\left[ - \frac{\eta}{x} \right] \right) \]

Hence, on substituting η = x t, one obtains the solution

\[ \phi(x, t) = \exp[ \, t \, ] \; f( x \, t ) + 1 \]
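A sympy verification (my own illustration; the equation and solution are as reconstructed above) that the result of Example 3.1.5 does satisfy the partial differential equation for an arbitrary function f:

```python
import sympy as sp

x, t = sp.symbols('x t')
f = sp.Function('f')

phi = sp.exp(t)*f(x*t) + 1
residual = x*sp.diff(phi, x) - t*sp.diff(phi, t) + t*phi - t
print(sp.simplify(residual))     # -> 0
```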

Alternatively, the characteristic curves x(t) could have been specified parametrically in terms of a variable ξ by

\[ \frac{d x}{d \xi} = x \qquad \frac{d t}{d \xi} = - t \]

which can be solved independently to yield

\[ x = \exp[ \, \xi \, ] \qquad t = \eta \exp[ - \xi \, ] \]

In terms of ξ, the partial differential equation becomes

\[ \frac{d \phi}{d \xi} + \eta \exp[ - \xi \, ] \, \phi = \eta \exp[ - \xi \, ] \]

On multiplying the equation by the integrating factor

\[ \exp\Big[ - \eta \exp[ - \xi \, ] \Big] \]

the resulting equation can be solved by integration and substitution of the t = 0 boundary condition.

3.2 Classification of Partial Differential Equations

Second-order partial differential equations can be classified into three types: elliptic, parabolic and hyperbolic. The classification is based upon the shape of the surfaces on which the solution φ is constant, or on which φ has its maximal variation. These surfaces correspond to the wave fronts in space-time (r, t); the normals to the surfaces, which correspond to the directions of propagation of the waves, are called the characteristics. That is, one finds coordinates that correspond to either the wave fronts or the light rays of geometrical optics.

To motivate the discussion, consider a general second-order partial differential equation

\[ A \frac{\partial^2 \phi}{\partial x^2} + 2 B \frac{\partial^2 \phi}{\partial x \, \partial t} + C \frac{\partial^2 \phi}{\partial t^2} + D \frac{\partial \phi}{\partial x} + E \frac{\partial \phi}{\partial t} + F \, \phi + G = 0 \]

where A, B, C, D, E, F and G are smooth differentiable functions of x and t. Suppose we are trying to obtain a Frobenius series expansion of φ in the independent variables

x and t. If φ(x, 0) is given, then all the derivatives ∂ⁿφ(x, 0)/∂xⁿ can be obtained by direct differentiation. If the first-order derivative ∂φ(x, t)/∂t|_{t=0} is given, then the derivatives ∂ⁿ⁺¹φ(x, t)/∂xⁿ∂t|_{t=0} can also be obtained by direct differentiation. These two pieces of initial data allow the second derivative ∂²φ(x, t)/∂t²|_{t=0} to be evaluated, by using the differential equation,

\[ \frac{\partial^2 \phi}{\partial t^2} = - \frac{A}{C} \frac{\partial^2 \phi}{\partial x^2} - \frac{2 B}{C} \frac{\partial^2 \phi}{\partial x \, \partial t} - \frac{D}{C} \frac{\partial \phi}{\partial x} - \frac{E}{C} \frac{\partial \phi}{\partial t} - \frac{F}{C} \, \phi - \frac{G}{C} \]

Also, by repeated differentiation of the differential equation with respect to t, all the higher-order derivatives can be obtained. Hence, one can find the solution as a Taylor expansion

\[ \phi(x, t) = \phi(x, 0) + t \left. \frac{\partial \phi(x, t)}{\partial t} \right|_{t=0} + \frac{t^2}{2} \left. \frac{\partial^2 \phi(x, t)}{\partial t^2} \right|_{t=0} + \ldots \]

which gives a solution within a radius of convergence. Thus, if φ(x, 0) and the derivative ∂φ(x, t)/∂t|_{t=0} are specified, then φ(x, t) can be determined. However, it is essential that all the appropriate derivatives exist.

In this method, the initial conditions were specified on a special coordinate curve, t = 0. Generally, initial conditions may be specified on more general coordinate curves ξ(x, t), η(x, t), at ξ = 0.

Characteristics.

Consider a family of curves ξ(x, t) = const. together with another family of curves η(x, t) = const., which act as a local coordinate system. These sets of curves must not be parallel; therefore we require that the Jacobian be non-zero,

\[ J\!\left( \frac{\xi, \eta}{x, t} \right) = \frac{\partial \xi}{\partial x} \frac{\partial \eta}{\partial t} - \frac{\partial \xi}{\partial t} \frac{\partial \eta}{\partial x} \neq 0 \]

The families of curves do not need to be mutually orthogonal. The differential equation can be solved in the (ξ, η) local coordinate system. We shall define the solution in the new coordinates to be

\[ \phi(x, t) = \phi( \xi, \eta ) \]

Let us assume that the differential equation has boundary conditions in which

the function φ(ξ = 0, η) and its derivative ∂φ(ξ, η)/∂ξ|_{ξ=0} are given. By differentiation with respect to η, one can determine

\[ \frac{\partial \phi(0, \eta)}{\partial \eta} \; , \qquad \frac{\partial^2 \phi(0, \eta)}{\partial \eta^2} \]

and

\[ \left. \frac{\partial^2 \phi(\xi, \eta)}{\partial \xi \, \partial \eta} \right|_{\xi = 0} \]

To obtain all the higher-order derivatives needed for a Taylor series expansion of the solution, one must express the partial differential equation in terms of the new variables. Thus, one has

\[ \left( \frac{\partial \phi}{\partial t} \right)_x = \frac{\partial \phi}{\partial \xi} \left( \frac{\partial \xi}{\partial t} \right)_x + \frac{\partial \phi}{\partial \eta} \left( \frac{\partial \eta}{\partial t} \right)_x \]

and

\[ \left( \frac{\partial^2 \phi}{\partial t^2} \right)_x = \frac{\partial^2 \phi}{\partial \xi^2} \left( \frac{\partial \xi}{\partial t} \right)_x^2 + 2 \frac{\partial^2 \phi}{\partial \xi \, \partial \eta} \left( \frac{\partial \xi}{\partial t} \right)_x \left( \frac{\partial \eta}{\partial t} \right)_x + \frac{\partial^2 \phi}{\partial \eta^2} \left( \frac{\partial \eta}{\partial t} \right)_x^2 + \frac{\partial \phi}{\partial \xi} \left( \frac{\partial^2 \xi}{\partial t^2} \right)_x + \frac{\partial \phi}{\partial \eta} \left( \frac{\partial^2 \eta}{\partial t^2} \right)_x \]

etc. Therefore, the differential equation can be written as

\[ \begin{aligned}
& \left[ A \left( \frac{\partial \xi}{\partial x} \right)^2 + 2 B \, \frac{\partial \xi}{\partial x} \frac{\partial \xi}{\partial t} + C \left( \frac{\partial \xi}{\partial t} \right)^2 \right] \frac{\partial^2 \phi}{\partial \xi^2} \\
& + 2 \left[ A \, \frac{\partial \xi}{\partial x} \frac{\partial \eta}{\partial x} + B \left( \frac{\partial \xi}{\partial x} \frac{\partial \eta}{\partial t} + \frac{\partial \xi}{\partial t} \frac{\partial \eta}{\partial x} \right) + C \, \frac{\partial \xi}{\partial t} \frac{\partial \eta}{\partial t} \right] \frac{\partial^2 \phi}{\partial \xi \, \partial \eta} \\
& + \left[ A \left( \frac{\partial \eta}{\partial x} \right)^2 + 2 B \, \frac{\partial \eta}{\partial x} \frac{\partial \eta}{\partial t} + C \left( \frac{\partial \eta}{\partial t} \right)^2 \right] \frac{\partial^2 \phi}{\partial \eta^2} \\
& + \left[ A \, \frac{\partial^2 \xi}{\partial x^2} + 2 B \, \frac{\partial^2 \xi}{\partial x \, \partial t} + C \, \frac{\partial^2 \xi}{\partial t^2} + D \, \frac{\partial \xi}{\partial x} + E \, \frac{\partial \xi}{\partial t} \right] \frac{\partial \phi}{\partial \xi} \\
& + \left[ A \, \frac{\partial^2 \eta}{\partial x^2} + 2 B \, \frac{\partial^2 \eta}{\partial x \, \partial t} + C \, \frac{\partial^2 \eta}{\partial t^2} + D \, \frac{\partial \eta}{\partial x} + E \, \frac{\partial \eta}{\partial t} \right] \frac{\partial \phi}{\partial \eta} \\
& + F \, \phi + G = 0
\end{aligned} \]

The ability to solve this equation, with the initial data given on ξ = 0, rests on whether or not the second derivative

\[ \frac{\partial^2 \phi}{\partial \xi^2} \]

can be determined from the differential equation, since all the other quantities are assumed to be known. The second derivative with respect to ξ can be found if its coefficient is non-zero,

\[ A \left( \frac{\partial \xi}{\partial x} \right)^2 + 2 B \, \frac{\partial \xi}{\partial x} \frac{\partial \xi}{\partial t} + C \left( \frac{\partial \xi}{\partial t} \right)^2 \neq 0 \]

If the above expression never vanishes for any real function ξ(x, t), then the solution can be found. All higher-order derivatives of φ can be found by repeated differentiation. On the other hand, if a real function ξ(x, t) exists for which

\[ A \left( \frac{\partial \xi}{\partial x} \right)^2 + 2 B \, \frac{\partial \xi}{\partial x} \frac{\partial \xi}{\partial t} + C \left( \frac{\partial \xi}{\partial t} \right)^2 = 0 \]

then the problem is not solvable when the initial conditions are specified on the curve ξ = 0.

The condition for the solvability of the partial differential equation is that the quadratic expression

\[ A \left( \frac{\partial \xi}{\partial x} \right)^2 + 2 B \, \frac{\partial \xi}{\partial x} \frac{\partial \xi}{\partial t} + C \left( \frac{\partial \xi}{\partial t} \right)^2 \]

is non-vanishing. The types of roots of the quadratic equation are governed by the discriminant

\[ B^2 - A \, C \]

and, therefore, result in three different types of equations. The above analysis can be extended to the case in which the coefficients A, B and C are well-behaved functions of (x, t), since only the second-order derivatives with respect to ξ and η play a role in the analysis.

Hyperbolic Equations.

The case where B² − A C > 0 corresponds to that of hyperbolic equations. In this case, one finds that the solvability condition vanishes on curves for which

\[ \frac{ \partial \xi / \partial x }{ \partial \xi / \partial t } = \frac{ - B \pm \sqrt{ B^2 - A \, C } }{ A } \]

However, the slope of a curve ξ(x, t) = const. is given by

\[ \frac{\partial \xi}{\partial x} \, dx + \frac{\partial \xi}{\partial t} \, dt = 0 \]

Hence, the slopes of the family of curves are given by

\[ \frac{dx}{dt} = - \frac{ \partial \xi / \partial t }{ \partial \xi / \partial x } \qquad \text{or} \qquad \frac{dt}{dx} = - \frac{ \partial \xi / \partial x }{ \partial \xi / \partial t } = \frac{ B \mp \sqrt{ B^2 - A \, C } }{ A } \]

The solution cannot be determined from Cauchy initial boundary conditions specified on these curves. This set of curves are the characteristics of the equation.

In the local coordinate system corresponding to the positive sign, we see that

\[ A \left( \frac{\partial \xi}{\partial x} \right)^2 + 2 B \, \frac{\partial \xi}{\partial x} \frac{\partial \xi}{\partial t} + C \left( \frac{\partial \xi}{\partial t} \right)^2 = 0 \]

The other coordinate η(x, t) can be found by choosing the negative sign. On this family of curves, one also finds

\[ A \left( \frac{\partial \eta}{\partial x} \right)^2 + 2 B \, \frac{\partial \eta}{\partial x} \frac{\partial \eta}{\partial t} + C \left( \frac{\partial \eta}{\partial t} \right)^2 = 0 \]

In this special coordinate system, one finds that the partial differential equation simplifies and has the canonical form

\[ \frac{\partial^2 \phi}{\partial \xi \, \partial \eta} + \alpha \frac{\partial \phi}{\partial \xi} + \beta \frac{\partial \phi}{\partial \eta} + \gamma \, \phi + \delta = 0 \]

where α, β, γ and δ are functions of ξ and η.

An example of a hyperbolic equation is given by the wave equation

\[ \frac{\partial^2 \phi}{\partial x^2} - \frac{1}{c^2} \frac{\partial^2 \phi}{\partial t^2} = 0 \]

where A = 1, B = 0 and C = −1/c². The equation for the characteristics is given by

\[ \left( \frac{\partial \xi}{\partial x} \right)^2 - \frac{1}{c^2} \left( \frac{\partial \xi}{\partial t} \right)^2 = 0 \]
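A tiny illustration (my own, not from the text) of the discriminant test described above, applied to the three classic examples with the convention A φ_xx + 2B φ_xt + C φ_tt + ... = 0:

```python
def classify(A, B, C):
    disc = B**2 - A*C
    if disc > 0:
        return "hyperbolic"
    if disc == 0:
        return "parabolic"
    return "elliptic"

c = 2.0
print(classify(1.0, 0.0, -1.0/c**2))   # wave equation      -> hyperbolic
print(classify(1.0, 0.0,  0.0))        # diffusion equation -> parabolic
print(classify(1.0, 0.0,  1.0))        # Laplace equation   -> elliptic
```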