Euler equations for multiple integrals

January 22, 2013

Contents

1 Reminder of multivariable calculus
  1.1 Vector differentiation
  1.2 Matrix differentiation
  1.3 Multidimensional integration

This part deals with multivariable variational problems that describe equilibria and dynamics of continua, optimization of shapes, etc. We consider a multivariable domain $\Omega \subset R^d$ with coordinates $x = (x_1, \ldots, x_d)$. The variational problem requires minimization of an integral over $\Omega$ of a Lagrangian $L(x, u, Du)$, where $u = u(x)$ is a multivariable vector function (the minimizer) and $Du = \nabla u$ is the matrix of gradients of its components. The optimality conditions for these problems involve differentiation with respect to vectors and matrices. First, we recall several formulas of vector (multivariable) calculus that will be commonly used in the following chapters.

1 Reminder of multivariable calculus

1.1 Vector differentiation

We recall the definition of the vector derivative, that is, the derivative of a scalar function with respect to a vector argument $a \in R^n$.

Definition 1.1 If $\phi(a)$ is a scalar function of a column vector argument $a = (a_1, \ldots, a_n)^T$, then the derivative $\frac{\partial \phi}{\partial a}$ is the row vector

$$\frac{\partial \phi}{\partial a} = \left( \frac{\partial \phi}{\partial a_1}, \ldots, \frac{\partial \phi}{\partial a_n} \right) \quad \text{if } a = \begin{pmatrix} a_1 \\ \vdots \\ a_n \end{pmatrix}, \qquad (1)$$

assuming that all partial derivatives exist.

This definition comes from consideration of the differential $d\phi$ of $\phi(a)$:

$$d\phi(a) = \phi(a + da) - \phi(a) = \frac{\partial \phi(a)}{\partial a}\, da + o(\|da\|).$$

Indeed, the left-hand side is a scalar and the second factor on the right-hand side is a column vector; therefore the first factor is the row vector defined in (1).

Examples of vector differentiation  The next examples show the calculation of the derivative for several often-used functions. The results can be checked by straightforward calculation. We assume here that $a \in R^n$.

1. If $\phi(a) = |a|^2 = a_1^2 + \ldots + a_n^2$, the derivative is

$$\frac{\partial}{\partial a} |a|^2 = 2 a^T.$$

2. The vector derivative of the Euclidean norm $|a|$ of a nonzero vector $a$ is the row vector $b$,

$$b = \frac{\partial |a|}{\partial a} = \frac{a^T}{\sqrt{a^T a}} = \frac{a^T}{|a|}.$$

Observe that $b$ is codirected with $a$ and has unit length.

3. The derivative of the scalar product $c^T a$, where $c$ is an $n$-dimensional vector, $c \in R^n$, equals $c^T$:

$$\frac{\partial}{\partial a} (c^T a) = c^T.$$

Similarly, if $C$ is a $k \times n$ matrix, the derivative of the product $Ca$ equals $C^T$:

$$\frac{\partial}{\partial a} (Ca) = C^T.$$

4. The derivative of a quadratic form $a^T C a$, where $C$ is a symmetric matrix, equals

$$\frac{\partial}{\partial a} (a^T C a) = 2 a^T C = 2 (Ca)^T.$$

Directional derivative  Let $\frac{\partial \phi}{\partial \nu}$ be the directional derivative of a scalar function $\phi$ in a direction $\nu$: $\frac{\partial \phi}{\partial \nu} = \nabla \phi \cdot \nu$. The partial derivative of $F(\nabla \phi)$ with respect to $\frac{\partial \phi}{\partial \nu}$ is defined as

$$\frac{\partial F}{\partial (\partial \phi / \partial \nu)} = \frac{\partial F}{\partial (\nabla \phi)} \cdot \nu. \qquad (2)$$

Gradient of a vector  If $u = (u_1, \ldots, u_n)$ is a vector function, $u_j = u_j(x_1, \ldots, x_d)$, $x \in R^d$, then the gradient of $u$ is defined as the $d \times n$ matrix denoted $\nabla u$ or $Du$, $\nabla u = \left\{ \frac{\partial u_j}{\partial x_i} \right\}$, or, in elements,

$$\nabla u = Du = \begin{pmatrix} \frac{\partial u_1}{\partial x_1} & \frac{\partial u_2}{\partial x_1} & \ldots & \frac{\partial u_n}{\partial x_1} \\ \vdots & \vdots & & \vdots \\ \frac{\partial u_1}{\partial x_d} & \frac{\partial u_2}{\partial x_d} & \ldots & \frac{\partial u_n}{\partial x_d} \end{pmatrix} = (\nabla u_1 \ \nabla u_2 \ \ldots \ \nabla u_n). \qquad (3)$$

The columns of this matrix are the gradients of the components of the vector function $u$.

1.2 Matrix differentiation

Similarly to vector differentiation, we define matrix differentiation by considering a scalar function $\phi(A)$ of a matrix argument $A$. As in the vector case, the definition is based on the notion of a scalar product.

Definition 1.2 The scalar product, a.k.a. the convolution, of an $n \times m$ matrix $A$ and an $m \times n$ matrix $B$ is defined as follows:

$$A : B = \sum_{i=1}^{n} \sum_{j=1}^{m} a_{ij} b_{ji}.$$

One can check the formula

$$A : B = \operatorname{Tr}(A B) \qquad (4)$$

that brings the convolution into the family of familiar matrix operations. The convolution allows us to calculate the increment of a matrix-differentiable function of a matrix argument caused by a variation of this argument,

$$d\phi(A) = \phi(A + dA) - \phi(A) = \frac{\partial \phi(A)}{\partial A} : dA + o(\|dA\|),$$

and to give the definition of the matrix derivative.

Definition 1.3 The derivative of a scalar function $\phi$ with respect to an $n \times m$ matrix argument $A$ is the $m \times n$ matrix $D = \frac{\partial \phi}{\partial A}$ with elements

$$D_{ij} = \frac{\partial \phi}{\partial a_{ji}},$$

where $a_{ij}$ is the $ij$-element of $A$. In element form, the definition becomes

$$\frac{\partial \phi}{\partial A} = \begin{pmatrix} \frac{\partial \phi}{\partial a_{11}} & \frac{\partial \phi}{\partial a_{21}} & \ldots & \frac{\partial \phi}{\partial a_{n1}} \\ \vdots & \vdots & & \vdots \\ \frac{\partial \phi}{\partial a_{1m}} & \frac{\partial \phi}{\partial a_{2m}} & \ldots & \frac{\partial \phi}{\partial a_{nm}} \end{pmatrix}. \qquad (5)$$

Examples of matrix differentiation  The next examples show the derivatives of several often-used functions of a matrix argument; a small numerical sanity check of these formulas is sketched after Remark 1.2 below.

1. As the first example, consider $\phi(A) = \operatorname{Tr} A = \sum_{i=1}^n a_{ii}$. Obviously,

$$\frac{\partial \phi}{\partial a_{ij}} = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{if } i \neq j, \end{cases}$$

therefore the derivative of the trace is the unit matrix,

$$\frac{\partial}{\partial A} \operatorname{Tr} A = I.$$

2. Using the definition of the derivative, we easily compute the derivative of the scalar product (convolution) of two matrices,

$$\frac{\partial (A : B)}{\partial A} = \frac{\partial}{\partial A} \operatorname{Tr}(A B) = B.$$

3. Assume that $A$ is a square $n \times n$ matrix. The derivative of the quadratic form $x^T A x = \sum_{i,j=1}^n x_i x_j a_{ij}$ with respect to the matrix $A$ is the $n \times n$ dyad matrix

$$\frac{\partial (x^T A x)}{\partial A} = x x^T.$$

4. Compute the derivative of the determinant of a matrix $A \in R^{n \times n}$. Notice that the determinant depends linearly on each matrix element,

$$\det A = a_{ij} M_{ij} + \text{constant}(a_{ij}),$$

where $M_{ij}$ is the cofactor of $a_{ij}$, that is, the minor of $A$ obtained by eliminating the $i$th row and the $j$th column, taken with the sign $(-1)^{i+j}$; it is independent of $a_{ij}$. Therefore

$$\frac{\partial \det A}{\partial a_{ij}} = M_{ij},$$

and, according to Definition 1.3, the derivative of $\det A$ is the transpose of the matrix of cofactors,

$$\frac{\partial}{\partial A} \det A = M^T, \qquad M = \begin{pmatrix} M_{11} & \ldots & M_{1n} \\ \vdots & & \vdots \\ M_{n1} & \ldots & M_{nn} \end{pmatrix}.$$

Recall that the inverse matrix $A^{-1}$ can be conveniently expressed through these cofactors as $A^{-1} = \frac{1}{\det A} M^T$, and rewrite the result as

$$\frac{\partial \det A}{\partial A} = (\det A) \, A^{-1}.$$

We can rewrite the result once more using the logarithmic derivative $\frac{d}{dx} \log f(x) = \frac{f'(x)}{f(x)}$. The derivative becomes more symmetric,

$$\frac{\partial}{\partial A} (\log \det A) = A^{-1}.$$

Remark 1.1 If $A$ is symmetric and positive definite, we can bring the result to a perfectly symmetric form

$$\frac{\partial (\log \det A)}{\partial (\log A)} = I.$$

Indeed, we introduce the matrix logarithmic derivative similarly to the logarithmic derivative with respect to a real positive argument, $\frac{\partial f}{\partial \log x} = x \frac{\partial f}{\partial x}$, which here reads

$$\frac{\partial f}{\partial \log A} = A \frac{\partial f}{\partial A}.$$

Here, $\log A$ is the matrix that has the same eigenvectors as $A$ and eigenvalues equal to the logarithms of the corresponding eigenvalues of $A$. Notice that $\log \det A$ is the sum of the logarithms of the eigenvalues of $A$, so that $\log \det A = \operatorname{Tr} \log A$. Notice also that when the matrix $A$ is symmetric and positive definite, which means that the eigenvalues of $A$ are real and positive, the logarithms of the eigenvalues are real.

5. Using the chain rule, we compute the derivative of the trace of the inverse matrix:

$$\frac{\partial}{\partial A} \operatorname{Tr} A^{-1} = -A^{-2}.$$

6. Similarly, we compute the derivative of the quadratic form associated with the inverse matrix:

$$\frac{\partial}{\partial A} \left( x^T A^{-1} x \right) = -A^{-1} x x^T A^{-1}.$$

Remark 1.2 (About the notations) The historical Leibnitz notation $g = \frac{\partial f}{\partial z}$ for the partial derivative is not the most convenient one and can even be ambiguous. Indeed, the partial $\frac{\partial f}{\partial u'}$ often used in one-variable variational problems becomes, in multivariable problems, a partial with respect to a partial, $\frac{\partial f}{\partial (\partial u / \partial x)}$. Since there is no conventional analog of the symbol $u'$ for partial derivatives, we need a convenient way to express the fact that the argument $z$ of differentiation can itself be a partial derivative like $z = \frac{\partial u_1}{\partial x_2}$. If we were to substitute this expression for $z$ into $\frac{\partial f}{\partial z}$, we would arrive at the somewhat awkward expression

$$g = \frac{\partial f}{\partial \left( \frac{\partial u_1}{\partial x_2} \right)}$$

(still used in Gelfand & Fomin), which replaces the expression $\frac{\partial f}{\partial u'}$ used in one-variable variational problems. There are several ways to fix the inconvenience. To keep the analogy with the one-variable case, we use the vector of partials $\nabla u$ in place of $u'$. If needed, we specify a component of this vector, as follows:

$$g = \left[ \frac{\partial f}{\partial (\nabla u_1)} \right]_2.$$

Alternatively, we could rename the partial derivatives of $u$ with a single indexed array $D_{ij}$, arriving at a formula of the type $g = \frac{\partial f}{\partial D_{12}}$, where $D_{12} = \frac{\partial u_1}{\partial x_2}$, or use a comma to denote the derivative, $g = \frac{\partial f}{\partial u_{1,2}}$, where $u_{1,2} = \frac{\partial u_1}{\partial x_2}$. The most radical and logical solution (which we do not dare to develop in this textbook) replaces the Leibnitz notation with something more convenient, namely with a Newton-like or Maple-like notation $g = D(f, D(u_1, x_2))$.
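The vector and matrix derivative formulas of Sections 1.1 and 1.2 are easy to verify numerically. The following is a minimal sketch (not part of the original notes) that checks several of them against central finite differences; it assumes numpy is available, the helper names `vector_derivative` and `matrix_derivative` are ad hoc, and the transpose in `matrix_derivative` implements the convention of Definition 1.3, $D_{ij} = \partial \phi / \partial a_{ji}$.

```python
# Finite-difference sanity check of the formulas in Sections 1.1-1.2.
# Assumes numpy; helper names are ad hoc, not from the notes.
import numpy as np

def vector_derivative(phi, a, h=1e-6):
    """Row vector (d phi/d a_1, ..., d phi/d a_n) of Definition 1.1."""
    return np.array([(phi(a + h * e) - phi(a - h * e)) / (2 * h)
                     for e in np.eye(a.size)])

def matrix_derivative(phi, A, h=1e-6):
    """Matrix derivative of Definition 1.3: D_ij = d phi / d a_ji."""
    G = np.zeros_like(A)
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            E = np.zeros_like(A)
            E[i, j] = h
            G[i, j] = (phi(A + E) - phi(A - E)) / (2 * h)
    return G.T  # transpose implements D_ij = d phi / d a_ji

rng = np.random.default_rng(0)
n = 4
a = rng.standard_normal(n)
x = rng.standard_normal(n)
B = rng.standard_normal((n, n))
C = B + B.T                                        # symmetric, for example 4 of Sec. 1.1
A = rng.standard_normal((n, n)) + n * np.eye(n)    # well conditioned, invertible
Ainv = np.linalg.inv(A)

checks = {
    "d|a|^2/da = 2 a^T":          (vector_derivative(lambda v: v @ v, a), 2 * a),
    "d(a^T C a)/da = 2 a^T C":    (vector_derivative(lambda v: v @ C @ v, a), 2 * a @ C),
    "d(Tr A)/dA = I":             (matrix_derivative(np.trace, A), np.eye(n)),
    "d(A:B)/dA = B":              (matrix_derivative(lambda M: np.trace(M @ B), A), B),
    "d(x^T A x)/dA = x x^T":      (matrix_derivative(lambda M: x @ M @ x, A), np.outer(x, x)),
    "d(det A)/dA = det(A) A^-1":  (matrix_derivative(np.linalg.det, A),
                                   np.linalg.det(A) * Ainv),
    "d(Tr A^-1)/dA = -A^-2":      (matrix_derivative(lambda M: np.trace(np.linalg.inv(M)), A),
                                   -Ainv @ Ainv),
    "d(x^T A^-1 x)/dA":           (matrix_derivative(lambda M: x @ np.linalg.inv(M) @ x, A),
                                   -Ainv @ np.outer(x, x) @ Ainv),
}
for name, (numeric, exact) in checks.items():
    print(f"{name:28s} max error {np.abs(numeric - exact).max():.1e}")
```

Each reported error should sit at finite-difference accuracy, roughly $10^{-6}$ or below.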

Remark 1.3 (Ambiguity in notations) A more serious issue is the possible ambiguity of the partial derivative with respect to one of the independent coordinates. The partial $\frac{\partial F}{\partial x}$ means the derivative with respect to the explicitly given argument $x$ of a function of the type $F(x, u)$. If the argument $x$ is one of the independent coordinates, and if $u$ is a function of these coordinates, in particular of $x$ (as is common in calculus of variations problems), the same partial could mean

$$\frac{\partial F}{\partial x} + \frac{\partial F}{\partial u} \frac{\partial u}{\partial x}.$$

To fix this, we would need to specify whether we consider $u$ as a function of $x$, $u = u(x)$, or as an independent argument, which could make the notations awkward. For this reason, we always assign the symbol $x$ to the vector of independent variables (coordinates). When differentiation with respect to the independent coordinates is considered, we use the gradient notation $\nabla$. Namely, we introduce the vector

$$\nabla F(u, x) = \begin{pmatrix} \frac{\partial F}{\partial x_1} + \frac{\partial F}{\partial u} \frac{\partial u}{\partial x_1} \\ \vdots \\ \frac{\partial F}{\partial x_d} + \frac{\partial F}{\partial u} \frac{\partial u}{\partial x_d} \end{pmatrix},$$

where $\frac{\partial}{\partial x_k}$ always means the derivative with respect to the explicit variable $x_k$. The total derivatives along the individual coordinates $x_k$ are the components of this vector. If necessary, we specify the argument of the gradient, as in $\nabla_\xi$.

1.3 Multidimensional integration

Change of variables  Consider the integral

$$I = \int_{\Omega} f(x) \, dx$$

and assume that $x = x(\xi)$, or in coordinates

$$x_i = x_i(\xi_1, \ldots, \xi_d), \quad i = 1, \ldots, d.$$

In the new coordinates, the domain $\Omega$ is mapped into the domain $\Omega_\xi$ and the volume element $dx$ becomes $\det J \, d\xi$, where $\det J$ is the Jacobian and $J$ is the matrix gradient

$$J = \nabla_\xi x = \{ J_{ij} \}, \quad J_{ij} = \frac{\partial x_i}{\partial \xi_j}, \quad i, j = 1, \ldots, d.$$

The integral $I$ becomes

$$I = \int_{\Omega_\xi} f(x(\xi)) \, (\det \nabla_\xi x) \, d\xi. \qquad (6)$$

The change of variables in multivariable integrals is analogous to the one-dimensional case.
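As a concrete illustration of formula (6) (a sketch that is not part of the original notes, assuming numpy), one can pass to polar coordinates $x = (r\cos t, r\sin t)$, for which $\det \nabla_\xi x = r$, and compare the transformed integral with a known exact value.

```python
# Change of variables (6) in polar coordinates x = (r cos t, r sin t),
# where det(grad_xi x) = r.  We integrate f(x) = |x|^2 over the disk of
# radius R; the exact value is pi * R**4 / 2.  Assumes numpy.
import numpy as np

R = 1.0
f = lambda x1, x2: x1**2 + x2**2

nr, nt = 400, 400                     # midpoint rule on [0, R] x [0, 2 pi]
dr, dt = R / nr, 2 * np.pi / nt
r = (np.arange(nr) + 0.5) * dr
t = (np.arange(nt) + 0.5) * dt
rr, tt = np.meshgrid(r, t, indexing="ij")

x1, x2 = rr * np.cos(tt), rr * np.sin(tt)
jacobian = rr                         # det of the coordinate gradient
I = np.sum(f(x1, x2) * jacobian) * dr * dt

print(I, np.pi * R**4 / 2)            # the two numbers should nearly agree
```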

Green's formula  Green's formula is the multivariable analog of the Leibnitz formula, a.k.a. the fundamental theorem of calculus. For a vector function $a(x)$ differentiable in the domain $\Omega$, it has the form

$$\int_{\Omega} \nabla \cdot a \, dx = \int_{\partial \Omega} a \cdot \nu \, ds. \qquad (7)$$

Here, $\nu$ is the outer normal to $\partial \Omega$ and $\nabla \cdot$ is the divergence operator,

$$\nabla \cdot a = \frac{\partial a_1}{\partial x_1} + \ldots + \frac{\partial a_d}{\partial x_d}.$$

If $d = 1$, the domain $\Omega$ becomes an interval $[c, d]$, the normals show the directions along this interval, and (7) becomes

$$\int_c^d \frac{da}{dx} \, dx = a(d) - a(c).$$

Integration by parts  We will use multivariable analogs of integration by parts. Suppose that $b(x)$ is a scalar function differentiable in $\Omega$ and $a(x)$ is a vector field differentiable in $\Omega$. Then the following generalization of integration by parts holds:

$$\int_{\Omega} (a \cdot \nabla b) \, dx = -\int_{\Omega} (b \, \nabla \cdot a) \, dx + \int_{\partial \Omega} (a \cdot \nu) \, b \, ds. \qquad (8)$$

The formula follows from the differential identity (differentiation of a product)

$$a \cdot \nabla b + b \, \nabla \cdot a = \nabla \cdot (b a)$$

and Green's formula

$$\int_{\Omega} (\nabla \cdot c) \, dx = \int_{\partial \Omega} (c \cdot \nu) \, ds.$$

A similar formula holds for two vector fields $a$ and $b$ differentiable in $\Omega$:

$$\int_{\Omega} \left( a \cdot \nabla \times b \right) dx = \int_{\Omega} \left( b \cdot \nabla \times a \right) dx - \int_{\partial \Omega} (a \times b) \cdot \nu \, ds. \qquad (9)$$

It immediately follows from Green's formula and the identity

$$\nabla \cdot (a \times b) = b \cdot (\nabla \times a) - a \cdot (\nabla \times b).$$
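A quick way to see formula (7) at work (again a sketch that is not part of the notes, assuming numpy) is to take the unit square $\Omega = [0,1]^2$ and the field $a = (x_1^2, x_1 x_2)$, for which $\nabla \cdot a = 3x_1$; the volume integral of the divergence and the boundary flux should both equal $3/2$.

```python
# Numerical check of Green's formula (7) on the unit square Omega = [0, 1]^2
# with a = (x1**2, x1*x2), so that div a = 2*x1 + x1 = 3*x1.  Assumes numpy.
import numpy as np

n = 1000
s = (np.arange(n) + 0.5) / n          # midpoints of [0, 1]
h = 1.0 / n
ones, zeros = np.ones(n), np.zeros(n)

a1 = lambda x1, x2: x1**2
a2 = lambda x1, x2: x1 * x2

# left-hand side of (7): volume integral of the divergence
X1, _ = np.meshgrid(s, s, indexing="ij")
volume = np.sum(3 * X1) * h * h

# right-hand side of (7): flux a . nu through the four edges
flux = (np.sum(a1(ones, s)) * h       # right edge,  nu = (+1, 0)
        - np.sum(a1(zeros, s)) * h    # left edge,   nu = (-1, 0)
        + np.sum(a2(s, ones)) * h     # top edge,    nu = (0, +1)
        - np.sum(a2(s, zeros)) * h)   # bottom edge, nu = (0, -1)

print(volume, flux)                   # both should be close to 3/2
```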