Calculus of Variations


16.323 Lecture 5
Calculus of Variations

Most books cover this material well, but Kirk Chapter 4 does a particularly nice job.

[Figure: a nominal curve x* on [t_0, t_f] together with the perturbed curves x* + αδx⁽¹⁾ and x* − αδx⁽¹⁾. Figure by MIT OCW.]

Spr 2006 16.323 5-1

Calculus of Variations

Goal: develop an alternative approach to solving general optimization problems for continuous systems, known as variational calculus.
- This formal approach will provide new insights for constrained solutions, and a more direct path to the solution for other problems.

Main issue: the cost metric is a function of the functions x(t) and u(t).

    min J = h(x(t_f)) + ∫_{t_0}^{t_f} g(x(t), u(t), t) dt

subject to

    ẋ = f(x, u, t)
    x(t_0), t_0 given
    m(x(t_f), t_f) = 0

Call J(x(t), u(t)) a functional. Need to investigate how to find the optimal values of a functional.
- For a function, we found the gradient, set it to zero to find the stationary points, and then investigated the higher-order derivatives to determine whether each is a maximum or minimum.
- Will investigate something similar for functionals.
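To make "a function of functions" concrete, here is a minimal numerical sketch (assuming NumPy; the integrand and terminal cost are made-up choices, for illustration only): J maps an entire trajectory (x(t), u(t)) to a single number, so changing the function changes the cost.

```python
import numpy as np

# Hypothetical Bolza-form cost: h(x) = x^2, g(x, u, t) = x^2 + u^2
t = np.linspace(0.0, 1.0, 1001)

def J(x, u):
    g = x**2 + u**2
    return x[-1]**2 + np.trapz(g, t)   # h(x(t_f)) + integral of g (trapezoid rule)

print(J(np.sin(t), np.cos(t)))   # one candidate trajectory
print(J(t, np.ones_like(t)))     # a different function, a different cost
```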

Spr 2006 16.323 5-2

Maximum and minimum: function

A function f(x) has a local minimum at x* if f(x) ≥ f(x*) for all admissible x in ‖x − x*‖ ≤ ε.

A minimum can occur at (i) a stationary point, (ii) a boundary, or (iii) a point of discontinuous derivative.

If we only consider stationary points of the differentiable function f(x), then this condition is equivalent to requiring that the differential of f satisfy

    df = (∂f/∂x) dx = 0

for all small dx, which gives the same necessary condition as before, ∂f/∂x = 0.

Note that this definition used norms to compare two vectors. Can do the same thing with functions: the distance between two functions is ‖x_2(t) − x_1(t)‖, where the norm satisfies
1. ‖x(t)‖ ≥ 0 for all x(t), and ‖x(t)‖ = 0 only if x(t) = 0 for all t in the interval of definition.
2. ‖a x(t)‖ = |a| ‖x(t)‖ for all real scalars a.
3. ‖x_1(t) + x_2(t)‖ ≤ ‖x_1(t)‖ + ‖x_2(t)‖

Common functional norm:

    ‖x(t)‖_2 = ( ∫_{t_0}^{t_f} x(t)ᵀ x(t) dt )^{1/2}
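As a quick numerical sketch of the 2-norm (assuming NumPy), it is just a quadrature. For the scalar x(t) = sin t on [0, π] we have ∫ sin²t dt = π/2, so ‖x‖_2 = √(π/2) ≈ 1.2533:

```python
import numpy as np

t = np.linspace(0.0, np.pi, 10001)
x = np.sin(t)

# ||x||_2 = ( integral of x(t)^T x(t) dt )^{1/2}, scalar case, trapezoid rule
norm2 = np.sqrt(np.trapz(x * x, t))
print(norm2, np.sqrt(np.pi / 2))   # both approximately 1.2533
```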

Spr 2006 16.323 5-3

Maximum and minimum: functional

A functional J(x(t)) has a local minimum at x*(t) if J(x(t)) ≥ J(x*(t)) for all admissible x(t) in ‖x(t) − x*(t)‖ ≤ ε.

Now define something equivalent to the differential of a function, called a variation of a functional.
- An increment of a functional:

      ΔJ(x(t), δx(t)) = J(x(t) + δx(t)) − J(x(t))

- A variation of the functional is a linear approximation of this increment:

      ΔJ(x(t), δx(t)) = δJ(x(t), δx(t)) + H.O.T.

  i.e., δJ(x(t), δx(t)) is linear in δx(t).

[Figure: the differential df (slope f′(t_1)) versus the increment Δf, shown for a function f(t); the same distinction holds for a functional. Figure by MIT OCW.]
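A small numerical sketch of this split (assuming NumPy) for the simple functional J(x) = ∫_0^1 x(t)² dt, whose variation is δJ = ∫ 2x δx dt: the gap between the increment ΔJ and the variation δJ is exactly the higher-order term, here α² ∫ δx² dt, so it shrinks quadratically as the perturbation size α → 0.

```python
import numpy as np

t  = np.linspace(0.0, 1.0, 2001)
x  = np.sin(t)        # nominal function
dx = np.cos(3 * t)    # fixed perturbation shape

def J(f):
    return np.trapz(f**2, t)

for alpha in [1e-1, 1e-2, 1e-3]:
    inc = J(x + alpha * dx) - J(x)            # increment, Delta J
    var = np.trapz(2 * x * alpha * dx, t)     # variation, delta J (linear part)
    print(alpha, inc - var)                   # equals alpha^2 * integral(dx^2): the H.O.T.
```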

Spr 2006 16.323 5-4

[Figure 2: visualization of perturbations to the function x*(t) by ±αδx⁽¹⁾ over [t_0, t_f]; δx(t) is a potential change in the value of x over the entire time period of interest. Figure by MIT OCW.]

Typically require that if x(t) is in some class (e.g., continuous), then x(t) + δx(t) is also in that class.

Fundamental theorem of the calculus of variations:
- Let x be a function of t in the class Ω, and J(x) be a differentiable functional of x. Assume that the functions in Ω are not constrained by any boundaries.
- If x* is an extremal function, then the variation of J must vanish on x*, i.e. for all admissible δx,

      δJ(x*(t), δx(t)) = 0

- Proof is in Kirk, page 121, but it is relatively straightforward.

How do we compute the variation? If J(x(t)) = ∫_{t_0}^{t_f} f(x(t)) dt, where f has continuous first and second derivatives with respect to x, then

    δJ(x(t), δx) = ∫_{t_0}^{t_f} (∂f(x(t))/∂x) δx dt + f(x(t_f)) δt_f − f(x(t_0)) δt_0

Spr 2006 16.323 5-5

Variation Examples: Scalar

For more general problems, consider the cost evaluated on a scalar function x(t):

    J(x(t)) = ∫_{t_0}^{t_f} g(x(t), ẋ(t), t) dt

with t_0, t_f, and the curve endpoints fixed. Then

    δJ(x(t), δx) = ∫_{t_0}^{t_f} [ g_x(x, ẋ, t) δx + g_ẋ(x, ẋ, t) δẋ ] dt

Note that δẋ = (d/dt) δx, so δx and δẋ are not independent.

Integrate by parts (∫ u dv = uv − ∫ v du, with u = g_ẋ and dv = δẋ dt) to get:

    δJ(x(t), δx) = ∫_{t_0}^{t_f} g_x(x, ẋ, t) δx dt + [ g_ẋ(x, ẋ, t) δx ]_{t_0}^{t_f} − ∫_{t_0}^{t_f} (d/dt) g_ẋ(x, ẋ, t) δx dt
                 = ∫_{t_0}^{t_f} [ g_x(x, ẋ, t) − (d/dt) g_ẋ(x, ẋ, t) ] δx(t) dt + [ g_ẋ(x, ẋ, t) δx ]_{t_0}^{t_f}

But since x(t_0) and x(t_f) are given, δx(t_0) = δx(t_f) = 0, yielding

    δJ(x(t), δx) = ∫_{t_0}^{t_f} [ g_x(x, ẋ, t) − (d/dt) g_ẋ(x, ẋ, t) ] δx(t) dt

Spr 2006 16.323 5-6

Recall that we need δJ = 0 for all admissible δx(t), which is arbitrary within (t_0, t_f); then the (first-order) necessary condition for a maximum or minimum is

    (∂g/∂x)(x, ẋ, t) − (d/dt)(∂g/∂ẋ)(x, ẋ, t) = 0

which is called the Euler equation.

Example: find the curve that gives the shortest distance between two points in a plane, (x_0, y_0) and (x_f, y_f).
- Cost function: the sum of differential arc lengths:

      J = ∫ ds = ∫_{x_0}^{x_f} [ (dx)² + (dy)² ]^{1/2} = ∫_{x_0}^{x_f} [ 1 + (dy/dx)² ]^{1/2} dx

- Take y as the dependent variable and x as the independent one, writing ẏ ≡ dy/dx. New form of the cost:

      J = ∫_{x_0}^{x_f} (1 + ẏ²)^{1/2} dx ≡ ∫_{x_0}^{x_f} g(ẏ) dx

- Take partials: ∂g/∂y = 0, and

      (d/dx)(∂g/∂ẏ) = (d/dx)[ ẏ (1 + ẏ²)^{−1/2} ] = ÿ (1 + ẏ²)^{−1/2} − ẏ² ÿ (1 + ẏ²)^{−3/2} = ÿ (1 + ẏ²)^{−3/2} = 0

  which implies that ÿ = 0.
- The most general curve with ÿ = 0 is a line, y = c_1 x + c_2.
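A quick symbolic check (a sketch assuming SymPy is available) that the Euler equation for the arc-length integrand reduces to ÿ = 0:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

x = sp.symbols('x')
y = sp.Function('y')

g = sp.sqrt(1 + y(x).diff(x)**2)      # arc-length integrand g(y')
eqs = euler_equations(g, y(x), x)     # dg/dy - d/dx(dg/dy') = 0
print(sp.simplify(eqs[0]))            # equivalent to y''(x) = 0
```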

Spr 2006 16.323 5-7

Vector Functions

Can generalize the problem by including several (N) functions x_i(t) and possibly free endpoints:

    J(x(t)) = ∫_{t_0}^{t_f} g(x(t), ẋ(t), t) dt

with t_0, t_f, x(t_0) fixed. Then (dropping the arguments for brevity)

    δJ(x(t), δx) = ∫_{t_0}^{t_f} [ g_x δx(t) + g_ẋ δẋ(t) ] dt

Integrate by parts to get:

    δJ(x(t), δx) = ∫_{t_0}^{t_f} [ g_x − (d/dt) g_ẋ ] δx(t) dt + g_ẋ(x(t_f), ẋ(t_f), t_f) δx(t_f)

The requirement then is that for t ∈ (t_0, t_f), x(t) must satisfy

    g_x − (d/dt) g_ẋ = 0

where x(t_0) = x_0 gives the first N boundary conditions, and the remaining N BCs follow from:
- x(t_f) = x_f, if x_f is given as fixed;
- if x(t_f) is free, then (∂g/∂ẋ)(x(t_f), ẋ(t_f), t_f) = 0.

Note that we could also have a mixture, where parts of x(t_f) are given as fixed and other parts are free; just use the rules above on each component x_i(t_f). A numerical sketch of the free-endpoint case follows.
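Here is that sketch (assuming SciPy, with a made-up scalar cost): for J = ∫_0^1 (x² + ẋ²) dt with x(0) = 1 and x(1) free, the Euler equation gives ẍ = x and the natural boundary condition g_ẋ(t_f) = 2ẋ(1) = 0. That is a two-point BVP, and the analytic extremal is x(t) = cosh(1 − t)/cosh(1).

```python
import numpy as np
from scipy.integrate import solve_bvp

# Euler equation x'' = x as a first-order system; y[0] = x, y[1] = x'
def ode(t, y):
    return np.vstack([y[1], y[0]])

# Boundary conditions: x(0) = 1 (fixed), x'(1) = 0 (free endpoint: g_xdot = 0)
def bc(ya, yb):
    return np.array([ya[0] - 1.0, yb[1]])

t = np.linspace(0.0, 1.0, 11)
sol = solve_bvp(ode, bc, t, np.ones((2, t.size)))
print(sol.y[0, -1], 1.0 / np.cosh(1.0))   # x(1): both approximately 0.6481
```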

Spr 2006 16.323 5-8

Free Terminal Time

Now consider a slight variation: the goal is to minimize

    J(x(t)) = ∫_{t_0}^{t_f} g(x(t), ẋ(t), t) dt

with t_0, x(t_0) fixed, t_f free, and various constraints on x(t_f).

Compute the variation of the functional, where we consider two candidate solutions: x(t), which we treat as a perturbation of the optimal x*(t) (that we need to find):

    δJ(x*(t), δx) = ∫_{t_0}^{t_f} [ g_x δx(t) + g_ẋ δẋ(t) ] dt + g(x*(t_f), ẋ*(t_f), t_f) δt_f

Integrate by parts to get:

    δJ(x*(t), δx) = ∫_{t_0}^{t_f} [ g_x − (d/dt) g_ẋ ] δx(t) dt + g_ẋ(x*(t_f), ẋ*(t_f), t_f) δx(t_f) + g(x*(t_f), ẋ*(t_f), t_f) δt_f

Looks standard so far, but we have to be careful how we handle the terminal conditions.

Spr 2006 16.323 5-9

[Figure: comparison of possible changes to the function at the end time when t_f is free, showing the optimal x*, a candidate x, the quantities δx(t_f) and δx_f, and the shifted end time t_f + δt_f. Figure by MIT OCW.]

By definition, δx(t_f) is the difference between two admissible functions at time t_f (in this case the optimal solution x* and another candidate x). But in this case, we must also account for possible changes to t_f.

Define δx_f as the difference between the ends of the two possible functions, i.e., the total possible change in the final state:

    δx_f ≡ δx(t_f) + ẋ*(t_f) δt_f

so δx(t_f) ≠ δx_f in general.

Substitute to get

    δJ(x*(t), δx) = ∫_{t_0}^{t_f} [ g_x − (d/dt) g_ẋ ] δx(t) dt + g_ẋ(x*(t_f), ẋ*(t_f), t_f) δx_f
                    + [ g(x*(t_f), ẋ*(t_f), t_f) − g_ẋ(x*(t_f), ẋ*(t_f), t_f) ẋ*(t_f) ] δt_f

Spr 2006 16.323 5-10

Independent of the terminal constraint, the conditions on the solution x*(t) to be an extremal for this case are that it satisfy the Euler equations

    g_x(x*(t), ẋ*(t), t) − (d/dt) g_ẋ(x*(t), ẋ*(t), t) = 0

Now consider the additional constraints on the individual elements of x*(t_f) and t_f to find the other boundary conditions. The type of terminal constraint determines how we treat δx_f and δt_f:
- unrelated;
- related by a simple function x(t_f) = Θ(t_f);
- specified by a more complex constraint m(x(t_f), t_f) = 0.

If t_f and x(t_f) are free but unrelated, then δx_f and δt_f are independent and arbitrary, so their coefficients must both be zero:

    g_x(x*(t), ẋ*(t), t) − (d/dt) g_ẋ(x*(t), ẋ*(t), t) = 0
    g(x*(t_f), ẋ*(t_f), t_f) − g_ẋ(x*(t_f), ẋ*(t_f), t_f) ẋ*(t_f) = 0
    g_ẋ(x*(t_f), ẋ*(t_f), t_f) = 0

Which makes it clear that this is a two-point boundary value problem, as we have conditions at both t_0 and t_f.

Spr 2006 16.323 5-11

If t_f and x(t_f) are free but related as x(t_f) = Θ(t_f), then

    δx_f = (dΘ/dt)(t_f) δt_f

Substituting and collecting terms gives

    δJ = ∫_{t_0}^{t_f} [ g_x − (d/dt) g_ẋ ] δx dt
         + [ g_ẋ(x*(t_f), ẋ*(t_f), t_f) ( (dΘ/dt)(t_f) − ẋ*(t_f) ) + g(x*(t_f), ẋ*(t_f), t_f) ] δt_f

Set the coefficient of δt_f to zero (it is arbitrary) to get the full conditions:

    g_x(x*(t), ẋ*(t), t) − (d/dt) g_ẋ(x*(t), ẋ*(t), t) = 0
    g_ẋ(x*(t_f), ẋ*(t_f), t_f) [ (dΘ/dt)(t_f) − ẋ*(t_f) ] + g(x*(t_f), ẋ*(t_f), t_f) = 0

The last condition is called the transversality condition.
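For the arc-length integrand used in the examples that follow, the transversality condition collapses to a perpendicularity condition between the extremal and the constraint curve. A symbolic sketch (assuming SymPy):

```python
import sympy as sp

xd, Thd = sp.symbols('xdot Thetadot')   # xdot*(t_f) and dTheta/dt at t_f
g = sp.sqrt(1 + xd**2)                  # arc-length integrand
g_xd = sp.diff(g, xd)

# Transversality: g_xdot(t_f) [dTheta/dt(t_f) - xdot(t_f)] + g(t_f) = 0
tc = g_xd * (Thd - xd) + g
print(sp.simplify(tc))                  # -> (Thetadot*xdot + 1)/sqrt(xdot**2 + 1)
# i.e. xdot(t_f) * dTheta/dt(t_f) = -1: the slopes multiply to -1,
# so the extremal meets the terminal constraint perpendicularly.
```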

DETERMINATION OF BOUNDARY-VALUE RELATIONSHIPS

1. x(t_f), t_f both specified (Problem 1)
   Substitution: δx_f = δx(t_f) = 0, δt_f = 0
   Boundary conditions: x*(t_0) = x_0; x*(t_f) = x_f
   Remarks: 2n equations to determine 2n constants of integration

2. x(t_f) free; t_f specified (Problem 2)
   Substitution: δx_f = δx(t_f), δt_f = 0
   Boundary conditions: x*(t_0) = x_0; g_ẋ(x*(t_f), ẋ*(t_f), t_f) = 0
   Remarks: 2n equations to determine 2n constants of integration

3. t_f free; x(t_f) specified (Problem 3)
   Substitution: δx_f = 0
   Boundary conditions: x*(t_0) = x_0; x*(t_f) = x_f; g(x*(t_f), ẋ*(t_f), t_f) − g_ẋ(x*(t_f), ẋ*(t_f), t_f)ᵀ ẋ*(t_f) = 0
   Remarks: (2n + 1) equations to determine 2n constants of integration and t_f

4. t_f, x(t_f) free and independent (Problem 4)
   Boundary conditions: x*(t_0) = x_0; g_ẋ(x*(t_f), ẋ*(t_f), t_f) = 0; g(x*(t_f), ẋ*(t_f), t_f) = 0
   Remarks: (2n + 1) equations to determine 2n constants of integration and t_f

5. t_f, x(t_f) free but related by x(t_f) = θ(t_f) (Problem 4)
   Substitution: δx_f = (dθ/dt)(t_f) δt_f
   Boundary conditions: x*(t_0) = x_0; x*(t_f) = θ(t_f); g(x*(t_f), ẋ*(t_f), t_f) + g_ẋ(x*(t_f), ẋ*(t_f), t_f)ᵀ [ (dθ/dt)(t_f) − ẋ*(t_f) ] = 0
   Remarks: (2n + 1) equations to determine 2n constants of integration and t_f

Here dθ/dt denotes the n × 1 column vector (dθ_1/dt, dθ_2/dt, ..., dθ_n/dt)ᵀ.

Figure 4: Possible terminal constraints. Figure by MIT OCW, adapted from Figure 4 in Kirk, Donald E. Optimal Control Theory: An Introduction. New York, NY: Dover, 2004. ISBN: 0486434842.

To handle the third type of terminal condition, we must address the solution of constrained problems.

Spr 2006 16.323 5-12

Example: 5-1

Find the shortest curve from the origin to a specified line.

Goal: minimize the cost functional (see page 5-6)

    J = ∫_{t_0}^{t_f} [ 1 + ẋ²(t) ]^{1/2} dt

given that t_0 = 0, x(0) = 0, and t_f and x(t_f) are free, but x(t_f) must lie on the line

    θ(t) = −5t + 15

Since g(x, ẋ, t) is only a function of ẋ, the Euler equation reduces to

    (d/dt) [ ẋ*(t) (1 + ẋ*(t)²)^{−1/2} ] = 0

which, after differentiating and simplifying, gives ẍ*(t) = 0. From this we know that the answer is a straight line,

    x*(t) = c_1 t + c_0

and since x(0) = 0, then c_0 = 0.

The transversality condition gives

    ẋ*(t_f) [ −5 − ẋ*(t_f) ] (1 + ẋ*(t_f)²)^{−1/2} + [ 1 + ẋ*(t_f)² ]^{1/2} = 0

which simplifies to

    ẋ*(t_f) [ −5 − ẋ*(t_f) ] + [ 1 + ẋ*(t_f)² ] = −5 ẋ*(t_f) + 1 = 0

so that ẋ*(t_f) = c_1 = 1/5.
- Not a surprise, as this gives the slope of a line orthogonal to the constraint line.

To find the final time: x(t_f) = −5 t_f + 15 = t_f / 5, which gives t_f = 75/26 ≈ 2.88.
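A numeric cross-check of Example 5-1 (a sketch assuming SymPy): solve the simplified transversality condition together with the intersection requirement x*(t_f) = θ(t_f).

```python
import sympy as sp

c1, tf = sp.symbols('c1 t_f', positive=True)

# x*(t) = c1*t (since c0 = 0); the line is theta(t) = -5 t + 15
transversality = sp.Eq(-5 * c1 + 1, 0)           # -5 xdot*(t_f) + 1 = 0
intersection   = sp.Eq(c1 * tf, -5 * tf + 15)    # x*(t_f) lies on the line

sol = sp.solve([transversality, intersection], [c1, tf], dict=True)[0]
print(sol)   # {c1: 1/5, t_f: 75/26}; t_f = 75/26, approximately 2.8846
```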

Spr 2006 16.323 5-13

Example: 5-2

Had the terminal constraint been a bit more challenging, such as

    Θ(t) = (1/2) ( [t − 5]² − 1 ),  so that  (dΘ/dt)(t) = t − 5

then the transversality condition gives

    ẋ*(t_f) [ t_f − 5 − ẋ*(t_f) ] (1 + ẋ*(t_f)²)^{−1/2} + [ 1 + ẋ*(t_f)² ]^{1/2} = 0
    ẋ*(t_f) [ t_f − 5 − ẋ*(t_f) ] + [ 1 + ẋ*(t_f)² ] = 0
    c_1 [ t_f − 5 ] + 1 = 0

Now require that x*(t) and Θ(t) meet at t_f:

    x*(t_f) = c_1 t_f = (1/2) ( [t_f − 5]² − 1 )

which gives t_f = 3, c_1 = 1/2, and x*(t) = t/2.

[Figure 5: quadratic terminal constraint Θ(t) and the extremal x*(t) = t/2 meeting it at t_f = 3.]
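The same cross-check for Example 5-2 (a sketch assuming SymPy): transversality gives c_1(t_f − 5) + 1 = 0, and the intersection gives c_1 t_f = (1/2)([t_f − 5]² − 1).

```python
import sympy as sp

c1, tf = sp.symbols('c1 t_f')

transversality = sp.Eq(c1 * (tf - 5) + 1, 0)
intersection   = sp.Eq(c1 * tf, sp.Rational(1, 2) * ((tf - 5)**2 - 1))

for s in sp.solve([transversality, intersection], [c1, tf], dict=True):
    print(s)   # the only real solution is {c1: 1/2, t_f: 3}, matching x*(t) = t/2
```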

Spr 2006 16.323 5-14

Constrained Solutions

Now consider variations of the basic problem that include constraints. For example, if the goal is to find the extremal function x that minimizes

    J(x(t), t) = ∫_{t_0}^{t_f} g(x(t), ẋ(t), t) dt

subject to the constraint that a given set of n differential equations be satisfied,

    f(x(t), ẋ(t), t) = 0

where we assume that x ∈ R^{n+m} (take t_f and x(t_f) to be fixed).

As with the basic optimization problems in Lecture 2, we proceed by augmenting the cost with the constraints using Lagrange multipliers. Since the constraints must be satisfied at all times, these multipliers are also assumed to be functions of time:

    J_a(x(t), t) = ∫_{t_0}^{t_f} { g(x, ẋ, t) + p(t)ᵀ f(x, ẋ, t) } dt

which does not change the cost if the constraints are satisfied.

Take the variation of the augmented functional, considering perturbations to both x(t) and p(t):

    δJ(x(t), δx(t), p(t), δp(t)) = ∫_{t_0}^{t_f} [ { g_x + p(t)ᵀ f_x } δx(t) + { g_ẋ + p(t)ᵀ f_ẋ } δẋ(t) + fᵀ δp(t) ] dt

As before, integrate by parts to get:

    δJ(x(t), δx(t), p(t), δp(t)) = ∫_{t_0}^{t_f} [ ( { g_x + p(t)ᵀ f_x } − (d/dt) { g_ẋ + p(t)ᵀ f_ẋ } ) δx(t) + fᵀ δp(t) ] dt

Spr 2006 16.323 5-15

To simplify things a bit, define

    g_a(x(t), ẋ(t), t) ≡ g(x(t), ẋ(t), t) + p(t)ᵀ f(x(t), ẋ(t), t)

On the extremal, the variation must be zero, but since δx(t) and δp(t) can be arbitrary, that can only occur if

    (∂g_a/∂x)(x, ẋ, t) − (d/dt)(∂g_a/∂ẋ)(x, ẋ, t) = 0
    f(x(t), ẋ(t), t) = 0

which are obviously a generalized version of the Euler equations that we saw before.
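A minimal symbolic sketch (assuming SymPy) of the augmented integrand on a made-up problem: minimize ∫ u² dt subject to ẋ − u = 0, so g_a = u² + p(t)(ẋ − u). Treating x, u, and p as independent functions, the generalized Euler equations reproduce a costate equation ṗ = 0, the stationarity condition 2u = p, and the constraint itself.

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
x, u, p = (sp.Function(n) for n in 'xup')

# Augmented integrand g_a = g + p(t)^T f, with g = u^2 and f = xdot - u
g_a = u(t)**2 + p(t) * (x(t).diff(t) - u(t))

for eq in euler_equations(g_a, [x(t), u(t), p(t)], t):
    print(eq)
# -> p'(t) = 0,  2 u(t) - p(t) = 0,  x'(t) - u(t) = 0  (up to sign/ordering)
```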

Spr 2006 16.323 5-16

General Terminal Conditions

Now reconsider the case on page 5-10 with very general terminal constraints: t_f free, and x(t_f) given by the condition

    m(x(t_f), t_f) = 0

This is a constrained optimization, so as before, augment the cost functional

    J(x(t), t) = h(x(t_f), t_f) + ∫_{t_0}^{t_f} g(x(t), ẋ(t), t) dt

with the constraint using Lagrange multipliers:

    J_a(x(t), ν, t) = h(x(t_f), t_f) + νᵀ m(x(t_f), t_f) + ∫_{t_0}^{t_f} g(x(t), ẋ(t), t) dt

Considering changes to x(t), t_f, x(t_f), and ν, the variation of J_a is

    δJ_a = h_x(t_f) δx_f + h_{t_f} δt_f + mᵀ(t_f) δν + νᵀ m_x(t_f) δx_f + νᵀ m_{t_f}(t_f) δt_f
           + ∫_{t_0}^{t_f} [ g_x δx + g_ẋ δẋ ] dt + g(t_f) δt_f
         = [ h_x(t_f) + νᵀ m_x(t_f) ] δx_f + [ h_{t_f} + νᵀ m_{t_f}(t_f) + g(t_f) ] δt_f + mᵀ(t_f) δν
           + ∫_{t_0}^{t_f} [ g_x − (d/dt) g_ẋ ] δx dt + g_ẋ(t_f) δx(t_f)

Now use δx(t_f) = δx_f − ẋ(t_f) δt_f, as before, to get

    δJ_a = [ h_x(t_f) + νᵀ m_x(t_f) + g_ẋ(t_f) ] δx_f
           + [ h_{t_f} + νᵀ m_{t_f}(t_f) + g(t_f) − g_ẋ(t_f) ẋ(t_f) ] δt_f
           + mᵀ(t_f) δν + ∫_{t_0}^{t_f} [ g_x − (d/dt) g_ẋ ] δx dt

Spr 2006 16.323 5-17

This looks like a bit of a mess, but we can clean it up using

    w(x(t_f), ν, t_f) ≡ h(x(t_f), t_f) + νᵀ m(x(t_f), t_f)

to get

    δJ_a = [ w_x(t_f) + g_ẋ(t_f) ] δx_f + [ w_{t_f} + g(t_f) − g_ẋ(t_f) ẋ(t_f) ] δt_f
           + mᵀ(t_f) δν + ∫_{t_0}^{t_f} [ g_x − (d/dt) g_ẋ ] δx dt

Given the extra degrees of freedom in the multipliers, we can treat all of the variations as independent, so all of the coefficients must be zero to achieve δJ_a = 0.

So the necessary conditions are

    g_x − (d/dt) g_ẋ = 0                        (dim n)   (2)
    w_x(t_f) + g_ẋ(t_f) = 0                    (dim n)   (3)
    w_{t_f} + g(t_f) − g_ẋ(t_f) ẋ(t_f) = 0     (dim 1)   (4)

together with x(t_0) = x_0 (n conditions) and m(x(t_f), t_f) = 0 (m conditions).

From the solution of Euler's equation, there are 2n constants of integration to find for x(t); we must also find ν (m of those) and t_f, for 2n + m + 1 unknowns in total, matched by the n + m boundary conditions above plus the n equations in (3) and the one in (4).

Some special cases:
- If t_f is fixed, then h = h(x(t_f)), m ≡ m(x(t_f)), and we lose the last condition (which is the transversality condition (4)); the others remain unchanged.
- If t_f is fixed and x(t_f) is free, then there is no m and no ν, and w reduces to h.

Kirk considers several other types of constraints.
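As a closing consistency check (a sketch assuming SymPy), take h = 0 and the simple-function constraint m = x(t_f) − Θ(t_f); eliminating ν from conditions (3) and (4) recovers the transversality condition from page 5-11.

```python
import sympy as sp

nu, g, g_xd, xd, Thd = sp.symbols('nu g g_xdot xdot Thetadot')  # values at t_f

# With h = 0 and m = x(t_f) - Theta(t_f): w = nu*(x_f - Theta(t_f)),
# so w_x = nu and w_tf = -nu * dTheta/dt(t_f).
eq3 = sp.Eq(nu + g_xd, 0)                   # (3): w_x(t_f) + g_xdot(t_f) = 0
eq4 = sp.Eq(-nu * Thd + g - g_xd * xd, 0)   # (4): w_tf + g(t_f) - g_xdot(t_f) xdot(t_f) = 0

nu_val = sp.solve(eq3, nu)[0]               # nu = -g_xdot(t_f)
print(sp.simplify(eq4.subs(nu, nu_val)))
# -> Eq(Thetadot*g_xdot + g - g_xdot*xdot, 0),
#    i.e. g_xdot(t_f)[dTheta/dt(t_f) - xdot(t_f)] + g(t_f) = 0: the transversality condition.
```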