NOTES ON CALCULUS OF VARIATIONS. September 13, 2012
JON JOHNSEN

1. The basic problem

In the Calculus of Variations one is given a fixed C²-function F(t, x, u), where F is defined for t ∈ [t_0, t_1] and x, u ∈ R, and the problem is to maximise (or minimise) the functional

J(x) = ∫_{t_0}^{t_1} F(t, x(t), ẋ(t)) dt (1)

that acts on functions x: [t_0, t_1] → R fulfilling

x(t_0) = x_0, x(t_1) = x_1. (2)

Hereby x_0, x_1 are given numbers; and any C¹-function¹ x(t) satisfying these two boundary conditions is said to be admissible.

Example 1 (Ramsey 1928). In economics it is a task to determine the amount of capital K(t), say in a country, such that, when f(K(t)) denotes the gross domestic product created by having the capital K(t), one has

max ∫_0^T U(f(K(t)) − K̇(t)) e^{−ρt} dt, K(0) = K_0, K(T) = K_T. (3)

Here f(K) − K̇ is the consumption and U is a utility function, fixed so that U' > 0 > U'' on ]0, ∞[. The factor e^{−ρt}, ρ > 0, gives a discount of future consumption in order to give priority to consumption in the near future. K_0 and K_T are given initial and terminal values of the available capital. By maximising the integral, the country is envisaged to benefit as much as possible from the change in capital from K_0 to K_T.

The problem above can be seen as an optimisation problem in infinitely many variables (one for each t ∈ [t_0, t_1]). But fortunately the possible minimising and maximising functions x(t) can be found among the solutions of a certain ordinary differential equation. This is the content of the following famous result:

¹ Recall that a function f(t) is said to be a C^k-function on an interval [t_0, t_1] if all derivatives d^m f/dt^m with 0 ≤ m ≤ k exist and are continuous on [t_0, t_1]. The analogous definition applies to functions of several variables, such as F(t, x, u).
Theorem 2 (Euler–Lagrange equation). For an admissible C¹-function x*(t) to maximise or minimise

J(x) = ∫_{t_0}^{t_1} F(t, x(t), ẋ(t)) dt, such that x(t_0) = x_0, x(t_1) = x_1, (4)

it is necessary that x*(t) solves the ordinary differential equation

d/dt (F'_3(t, x(t), ẋ(t))) = F'_2(t, x(t), ẋ(t)). (5)

To reveal the nature of the above differential equation, it is instructive to carry out the t-differentiation using the chain rule (this is allowed at least if the solution x(t) is a C²-function):

F''_33(t, x(t), ẋ(t)) ẍ(t) + F''_32(t, x(t), ẋ(t)) ẋ(t) + F''_31(t, x(t), ẋ(t)) − F'_2(t, x(t), ẋ(t)) = 0. (6)

This is a homogeneous second-order differential equation, which is said to be quasi-linear because the coefficients depend on the solution and its lower-order derivatives. Being of second order (if F''_33 ≠ 0), the set of solutions usually involves two arbitrary (real or complex) constants. These are often determined by imposing the initial and terminal conditions in (2).

To illustrate the usefulness of Theorem 2, one can take the simple

Example 3. Consider J(x) = ∫_0^1 (x(t)² + ẋ(t)²) dt; x(0) = 0, x(1) = e² − 1. Here F(x, u) = x² + u², so the Euler–Lagrange equation is

0 = d/dt (2ẋ) − 2x = 2(ẍ − x).

The complete solution of this is given by the functions x(t) = A e^t + B e^{−t}. Invoking the boundary conditions, it is seen that x(t) is admissible if and only if A + B = 0 and A e + B e^{−1} = e² − 1, that is, precisely for A = e = −B; so the only candidate for a minimiser or maximiser is

x*(t) = e^{1+t} − e^{1−t}.

However, x* is not a maximising function (e.g. J(x) can be seen to take on arbitrarily large values), but it will follow later from sufficient conditions that x* is a minimiser.

The proof of Theorem 2 is given further below in the case where x*(t) is a C²-function. The point of departure is to show the du Bois-Reymond lemma, which is also known as the fundamental lemma of the calculus of variations:

Lemma 4. Let f: [t_0, t_1] → R be a continuous function. When f has the property that, for every function µ in C²([t_0, t_1]) fulfilling µ(t_0) = 0 = µ(t_1),

∫_{t_0}^{t_1} f(t) µ(t) dt = 0, (7)

then f(t) = 0 for every t.
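Before turning to the proof, the mechanism of the lemma can be illustrated numerically. The sketch below (assuming NumPy; the interval, the function f, and the bump are all hypothetical choices) pairs a function that is positive on a subinterval with the cubic bump used in the proof, and the integral comes out positive rather than zero:

```python
import numpy as np

# Hypothetical setup on [t0, t1] = [0, 1]: f is continuous and positive
# on the subinterval ]a, b[ chosen below.
a, b = 0.3, 0.7

def mu(t):
    # Bump from the proof of Lemma 4: (t-a)^3 (b-t)^3 on ]a, b[, else 0.
    # The triple zeroes at a and b make mu a C^2 function.
    return np.where((t > a) & (t < b), (t - a)**3 * (b - t)**3, 0.0)

t = np.linspace(0.0, 1.0, 100001)
dt = t[1] - t[0]
f = 1.0 + t**2                      # continuous, f > 0 everywhere

integral = np.sum(f * mu(t)) * dt   # Riemann approximation of \int f mu dt
print(integral > 0.0)               # True: the pairing detects the sign of f
```

So a nonzero continuous f cannot be orthogonal to every admissible bump, which is exactly what the lemma asserts.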
Proof. Suppose the contrary, say f(s) > 0 at some s, for simplicity. By continuity there is some subinterval I = ]a, b[ on which f(t) > 0, where t_0 ≤ a < b ≤ t_1. On I one can then define µ(t) as

µ(t) = (t − a)³ (b − t)³ > 0,

and let µ(t) = 0 outside I; the zeroes at a and b have so high order that µ''(t) → 0 for t → a+ and for t → b−, so the Mean Value Theorem implies that µ is of class C² on [t_0, t_1]. Denoting by P(t) a primitive function of f(t)µ(t), it follows by the Mean Value Theorem and the choice of I that, for some τ ∈ ]a, b[,

0 = ∫_{t_0}^{t_1} f(t)µ(t) dt = ∫_I f(t)µ(t) dt = P(b) − P(a) = f(τ)µ(τ)(b − a) > 0. (8)

This is a contradiction; the case f(s) < 0 is similar. Hence f ≡ 0.

The lemma above is exploited by forming a so-called variation of the given solution,

x(t, α) = x*(t) + αµ(t). (9)

Here α ∈ R is just a parameter, while µ is an arbitrary C²-function on [t_0, t_1] such that µ(t_0) = 0 = µ(t_1); clearly t ↦ x(t, α) is then admissible for every fixed α ∈ R. Such variations have of course given rise to the name of the whole theory.

Proof of the Euler–Lagrange equation. To exploit the variation in (9), let

I(α) = J(x(·, α)) = ∫_{t_0}^{t_1} F(t, x(t, α), ∂x/∂t (t, α)) dt. (10)

For simplicity the proof continues with the case of a maximum at x* (the case of a minimum is similar). This means that I(α) ≤ I(0) for all α, whence

I'(0) = 0. (11)

Of course this requires that I(α) can be differentiated. More precisely we shall proceed under the assumptions that

α ↦ I(α) is differentiable for every α ∈ R,
I'(α) = ∫_{t_0}^{t_1} ∂/∂α F(t, x(t, α), ∂x/∂t (t, α)) dt.

Here the second line means that one can simply differentiate under the integral sign. This non-trivial fact will be proved later in Lemma 5 below. However, granted this lemma, the easy consequences of it are given first: using an integration by parts in the last step below we get

I'(0) = ∫_{t_0}^{t_1} (F'_2(t, x*, ẋ*) ∂x/∂α (t, 0) + F'_3(t, x*, ẋ*) ∂²x/∂α∂t (t, 0)) dt
      = ∫_{t_0}^{t_1} (F'_2(t, x*, ẋ*) µ(t) + F'_3(t, x*, ẋ*) µ̇(t)) dt
      = ∫_{t_0}^{t_1} µ(t) (F'_2(t, x*, ẋ*) − d/dt F'_3(t, x*, ẋ*)) dt + [µ(t) F'_3(t, x*, ẋ*)]_{t=t_0}^{t=t_1}. (12)
Since µ vanishes at the end points, the above yields

0 = I'(0) = ∫_{t_0}^{t_1} µ(t) (F'_2(t, x*, ẋ*) − d/dt F'_3(t, x*, ẋ*)) dt. (13)

Here µ is an arbitrary C²-function vanishing at the end points, and that F'_2 − d/dt F'_3 is continuous with respect to t can be seen from the terms in (6). So it follows from Lemma 4 that the function F'_2 − d/dt F'_3 equals 0 for every t. This means that x*(t) solves the Euler–Lagrange equation, as desired.

Now it only remains to account for the statements in the next lemma, which is lengthy to prove, but elementary in content:

Lemma 5. The function I(α) is differentiable at every α_0 ∈ R and

I'(α_0) = ∫_{t_0}^{t_1} ∂/∂α F(t, x(t, α), ∂x/∂t (t, α)) |_{α=α_0} dt. (14)

Proof. For simplicity the details are given in the main case, which is α_0 = 0. We shall show that I(α) − I(0) = Aα + o(α) with A equal to the above integral for α_0 = 0. Now, it is clear that

I(α) − I(0) = ∫_{t_0}^{t_1} (F(t, x(t, α), ∂x/∂t (t, α)) − F(t, x*, ẋ*)) dt. (15)

Here the integrand is a C²-function of α, as one can easily check. (The derivative of order 2 is given below.) So by using Taylor's formula (for fixed t), we find for some remainder R(α) that

I(α) − I(0) = α ∫_{t_0}^{t_1} ∂/∂α F(t, x(t, α), ∂x/∂t (t, α)) |_{α=0} dt + R(α). (16)

In fact, when we use the integral remainder

α² ∫_0^1 (1 − θ) (∂²/∂α²) F(t, x(t, α), ∂x/∂t (t, α)) |_{α=θα} dθ,

then the term R is by straightforward calculation seen to be given by

R(α) = α² ∫_{t_0}^{t_1} ∫_0^1 (1 − θ) (µ(t)² F''_22(t, x(t, θα), ∂x/∂t (t, θα)) + 2µ(t)µ̇(t) F''_23(t, x(t, θα), ∂x/∂t (t, θα)) + µ̇(t)² F''_33(t, x(t, θα), ∂x/∂t (t, θα))) dθ dt. (17)

Therefore it fulfils

|R(α)| ≤ α² ∫_{t_0}^{t_1} ∫_0^1 (|µ|² |F''_22(t, x(t, θα), ∂x/∂t (t, θα))| + 2|µ||µ̇| |F''_23(t, x(t, θα), ∂x/∂t (t, θα))| + |µ̇|² |F''_33(t, x(t, θα), ∂x/∂t (t, θα))|) dθ dt. (18)

To show that this double integral is a bounded function of α, we consider the map

Φ(α, θ, t) = (t, x*(t) + θαµ(t), ẋ*(t) + θαµ̇(t)) (19)
with the domain D = [−1, 1] × [0, 1] × [t_0, t_1]. Since D is a compact set, and since Φ is continuous, its range B = Φ(D) is necessarily compact. Consequently the functions F''_22, F''_23 and F''_33 are all bounded on B (they are continuous because F ∈ C²), and it follows that the double integral is less than or equal to the constant

C = (max_B |F''_22| + max_B |F''_23| + max_B |F''_33|) ∫_{t_0}^{t_1} (|µ(t)| + |µ̇(t)|)² dt. (20)

Therefore

|R(α)| / |α| ≤ C |α| → 0 for α → 0, (21)

which means that R(α) is o(α) for α → 0. Altogether (16), (21) account both for the differentiability of I(α) at α = 0 and for the fact that the derivative can be calculated under the integral sign. For general α_0 ∈ R the main changes are that one should replace θα everywhere by α_0 + θ(α − α_0) and x* by x(·, α_0). This completes the proof of Theorem 2 on the necessity of the Euler–Lagrange equation.

Further necessary conditions. In analogy with the second-order conditions for functions of real variables one has the next result, which will only be given here without proof.

Theorem 6 (Legendre). If F is a C²-function, it is a necessary condition for the functional J(x) = ∫_{t_0}^{t_1} F(t, x(t), ẋ(t)) dt to have an extreme value at an admissible function x*(t) that

F''_33(t, x*(t), ẋ*(t)) ≤ 0 in case x* maximises, (22)
F''_33(t, x*(t), ẋ*(t)) ≥ 0 in case x* minimises. (23)

(These inequalities are required to hold for all t ∈ [t_0, t_1].)

The Legendre condition is often useful when one wants to show that a solution candidate is not a maximiser (or a minimiser). This is elucidated by the next example.

Example 7. In the above Example 3, where J(x) = ∫_0^1 (x² + ẋ²) dt, one finds at once that F''_33 = 2 > 0, and this rules out that the admissible function x* = e^{1+t} − e^{1−t} can be a maximiser. (It is still open whether x* is a minimiser; this cannot be concluded from the fact that F''_33 = 2 > 0.)

One of the possible complications in practice is that, for good reasons, there are further constraints on the admissible functions. This can e.g. leave us with the problem of finding a C¹-function x: [t_0, t_1] → R such that

max ∫_{t_0}^{t_1} F(t, x, ẋ) dt; x(t_0) = x_0, x(t_1) = x_1; (24)
h(t, x(t), ẋ(t)) > 0 for all t ∈ [t_0, t_1]. (25)

Hereby h is a suitable C¹-function defining the constraint. However, one can show that the Euler–Lagrange and Legendre conditions are necessary also for such problems.
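The claims of Examples 3 and 7 are easy to confirm symbolically. The following sketch (assuming SymPy is available) checks that the candidate x*(t) = e^{1+t} − e^{1−t} solves ẍ = x with the right boundary values, and that the Legendre quantity F''_33 is the positive constant 2:

```python
import sympy as sp

t, x, u = sp.symbols('t x u')
F = x**2 + u**2                          # integrand of Examples 3 and 7

# Legendre quantity F''_33 = d^2 F / du^2
F33 = sp.diff(F, u, 2)
print(F33)                               # 2, so > 0: x* cannot maximise

# Candidate from Example 3
xs = sp.exp(1 + t) - sp.exp(1 - t)
print(sp.simplify(sp.diff(xs, t, 2) - xs))           # 0: solves xddot = x
print(sp.simplify(xs.subs(t, 0)))                    # 0: x(0) = 0
print(sp.simplify(xs.subs(t, 1) - (sp.exp(2) - 1)))  # 0: x(1) = e^2 - 1
```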
In fact the preimage

O = h^{−1}(]0, ∞[) (26)

is an open set in [t_0, t_1] × R^n × R^n, and the image K = {(t, x*(t), ẋ*(t)) | t ∈ [t_0, t_1]} is a compact subset of O. Therefore K has distance d > 0 to the boundary of O, so by restricting α (as we may) to be so close to 0 that

|α| ≤ d / (2 max_{t∈[t_0,t_1]} √(1 + µ(t)² + µ̇(t)²)), (27)

the curve t ↦ (t, x(t, α), ∂x/∂t (t, α)) will for each such α have distance ≤ d/2 to K, hence have its range completely inside O: that is, 0 < h(t, x(t, α), ∂x/∂t (t, α)) for all t, α. This suffices e.g. for the proof of the Euler equation's necessity also under the constraint given by h.

The next example shows how such constraints can appear.

Example 8. In the above Example 1 from macroeconomics, one has the integrand

F(t, K(t), K̇(t)) = U(f(K(t)) − K̇(t)) e^{−ρt}. (28)

But the utility function U is typically only a C²-function on ]0, ∞[; for example U(C) = ln C meets the requirements that U' > 0 > U''. So here it is natural to impose the constraint that h(t, K(t), K̇(t)) := f(K(t)) − K̇(t) > 0. This is not just a kind of mathematical obstruction (as U(C) → −∞ for C → 0+, already a C¹-extension of U to the whole axis R is impossible), for negative values of the consumption f(K(t)) − K̇(t) do not make sense at all in the model.

2. General Terminal Conditions

A more radical change is met if one considers the problem of having

max ∫_{t_0}^{t_1} F(t, x(t), ẋ(t)) dt, x(t_0) = x_0, (29)

together with one of the following terminal conditions:
(i) x(t_1) free (t_1 given);
(ii) x(t_1) ≥ x_1 (t_1 and x_1 given);
(iii) x(t_1) = g(t_1) (t_1 free, but g given in C¹).

Note that in order for case (iii) to make sense, the functions F and g must be defined on [t_0, T] × R × R, respectively [t_0, T], for some T in the range t_1 < T ≤ ∞. However, this point will not be stressed in the following, where T = ∞ is used for simplicity's sake. Correspondingly the admissible functions are now required to be C¹-functions that fulfil the stated initial and terminal conditions.
It is clear that a maximising function x* still fulfils the Euler–Lagrange equation, for x* also maximises among the admissible functions that satisfy x(t_1) = x*(t_1). But one has to add some transversality conditions:
Theorem 9. If x*(t) is an admissible function solving the above maximisation problem, then x* satisfies the Euler–Lagrange equation and the corresponding transversality condition:

(i) F'_3(t_1, x*(t_1), ẋ*(t_1)) = 0;
(ii) F'_3(t_1, x*(t_1), ẋ*(t_1)) ≤ 0 (and = 0 holds if x*(t_1) > x_1);
(iii) F(t_1, x*(t_1), ẋ*(t_1)) + (ġ(t_1) − ẋ*(t_1)) F'_3(t_1, x*(t_1), ẋ*(t_1)) = 0.

The conclusion in (iii) is valid with the proviso that ẋ*(t_1) ≠ ġ(t_1). In case (ii) the inequality is reversed if x* solves the minimisation problem.

Proof of cases (i), (ii). Set y_1 = x*(t_1). Since x* maximises J, the inequality J(x*) ≥ J(y) holds in particular for all admissible functions y(t) that satisfy y(t_1) = y_1. Therefore x* is also a solution of the basic problem on the fixed time interval [t_0, t_1] and with data x_0, y_1. Consequently x* satisfies the Euler–Lagrange equation.

The transversality condition (i) can be proved as a corollary to the proof of Theorem 2: since the terminal value x(t_1) is not fixed in this context, there is the freedom to take the variation µ(t) such that µ(t_1) ≠ 0 (but µ(t_0) = 0 is still required); again it is seen that I'(0) = 0. Since it is already known that x* solves the Euler–Lagrange equation, formula (12) therefore yields

µ(t_1) F'_3(t_1, x*(t_1), ẋ*(t_1)) = 0. (30)

Taking µ such that µ(t_1) = 1, the conclusion in (i) follows.

With a little more effort also (ii) can be obtained along these lines. Indeed, in case x*(t_1) > x_1 the only modification needed is a restriction on α, which should be taken in an interval so narrow around α = 0 that the variation t ↦ x(t, α) is admissible, i.e. such that x(t_1, α) ≥ x_1. Setting ε = x*(t_1) − x_1 > 0, this is obtained by requiring that

|α| |µ(t_1)| < ε. (31)

In the remaining cases one has x*(t_1) = x_1, whence αµ(t_1) ≥ 0 is necessary for admissibility. To proceed here it is useful to specialise to variations with µ(t_1) > 0; then I(α) is only defined for α ≥ 0, so the maximality gives I(α) ≤ I(0) for all α ≥ 0.
As I(α) is differentiable on [0, ∞[ according to Lemma 5, this implies I'(0) ≤ 0. So this time (12) gives

µ(t_1) F'_3(t_1, x*(t_1), ẋ*(t_1)) ≤ 0, (32)

which implies (ii).

As a preparation for the somewhat longer proof of (iii), one may now deduce

Lemma 10. Let f(t, α) be a continuous function of (t, α) ∈ R² and set

I(α, β) = ∫_{t_0}^{β} f(t, α) dt. (33)

If ∂f/∂α exists and is continuous on R² and, moreover, ∂I/∂α exists, equals ∫_{t_0}^{β} ∂f/∂α (t, α) dt and is continuous with respect to α for each β, then I(α, β) is differentiable at every (α, β) and it holds that ∂I/∂β = f(β, α).
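The content of Lemma 10 can be sanity-checked on a concrete integrand. In the sketch below (assuming SymPy; the choice f(t, α) = sin(αt) and the lower limit t_0 = 0 are hypothetical), both ∂I/∂β = f(β, α) and differentiation under the integral sign in α come out as claimed:

```python
import sympy as sp

t, alpha, beta = sp.symbols('t alpha beta', positive=True)
f = sp.sin(alpha * t)                 # hypothetical smooth integrand
I = sp.integrate(f, (t, 0, beta))     # I(alpha, beta), lower limit t0 = 0

# dI/dbeta recovers the integrand at the upper limit
print(sp.simplify(sp.diff(I, beta) - f.subs(t, beta)))   # 0

# dI/dalpha equals the integral of df/dalpha
lhs = sp.diff(I, alpha)
rhs = sp.integrate(sp.diff(f, alpha), (t, 0, beta))
print(sp.simplify(lhs - rhs))                            # 0
```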
Proof. For each fixed α, the Fundamental Theorem of Calculus states that the integral is a differentiable function of the upper limit (since the integrand is continuous), and moreover that ∂I/∂β = f(β, α). By the assumption on f this function is continuous on R², and the first partial derivative ∂I/∂α is continuous with respect to α. By a standard result for functions of several variables this implies the differentiability. (Here a small well-known sharpening of the usual C¹-condition is used: in the proof, which uses the Mean Value Theorem on increments along the coordinate axes, continuity of the first partial derivative is only needed along the first axis.)

It is left for the reader to formulate and prove a local version of the lemma, in which both α, β belong to proper open subintervals of the real line.

Proof of (iii). To obtain (iii), it is convenient to invoke the Implicit Function Theorem, which is applied to the mapping

Φ(t, α) = x*(t) + αµ(t) − g(t), t ∈ [t_0, ∞[, α ∈ R. (34)

(Hereby the function x* is artificially continued for t ≥ t_1 as the quadratic polynomial x*(t_1) + ẋ*(t_1)(t − t_1) + ½ ẍ*(t_1)(t − t_1)², in order that it be C² on the interval [t_0, ∞[.) The function µ is in C²([t_0, ∞[) and need only satisfy µ(t_0) = 0. Now, since the pair (x*, t_1) solves the maximum problem under condition (iii), the point (t, α) = (t_1, 0) is obviously on the level surface

Φ(t, α) = 0. (35)

Since ∂Φ/∂t (t_1, 0) = ẋ*(t_1) − ġ(t_1) ≠ 0 by assumption, there exist according to the Implicit Function Theorem some α_0 > 0, τ_0 > 0 and a C¹-function ψ(α), mapping [−α_0, α_0] to [t_1 − τ_0, t_1 + τ_0], with ψ(0) = t_1 and the crucial property that its graph

{(ψ(α), α) ∈ R² | −α_0 ≤ α ≤ α_0} (36)

constitutes all the solutions of (35) in the rectangle [t_1 − τ_0, t_1 + τ_0] × [−α_0, α_0]. By its construction the function ψ fulfils Φ(ψ(α), α) ≡ 0 for |α| ≤ α_0, that is,

x*(ψ(α)) + αµ(ψ(α)) − g(ψ(α)) = 0. (37)

This relation is important because it shows that t ↦ x(t, α), defined for t ∈ [t_0, ψ(α)], actually fulfils the terminal condition (iii), hence is admissible. Moreover, differentiation of (37) gives

ẋ*(ψ(α)) ψ̇(α) + µ(ψ(α)) + (α µ̇(ψ(α)) − ġ(ψ(α))) ψ̇(α) = 0, (38)

so in particular for α = 0,

(ẋ*(t_1) − ġ(t_1)) ψ̇(0) = −µ(t_1). (39)

Choosing µ from now on so that µ(t_1) ≠ 0, it follows that ψ̇(0) ≠ 0. To exploit this knowledge, it is first noted that for the above variation,

I(α) = ∫_{t_0}^{ψ(α)} F(t, x*(t) + αµ(t), ẋ*(t) + αµ̇(t)) dt. (40)
In view of Lemma 5 and Lemma 10, the chain rule gives the differentiability of α ↦ I(α) and that

I'(α) = F(t, x(t, α), ∂x/∂t (t, α)) |_{t=ψ(α)} ψ̇(α) + ∫_{t_0}^{ψ(α)} (µ F'_2 + µ̇ F'_3) dt. (41)

With the usual partial integration this gives, since x* solves the Euler equation,

I'(0) = F(t_1, x*(t_1), ẋ*(t_1)) ψ̇(0) + [µ F'_3]_{t=t_0}^{t=t_1} + ∫_{t_0}^{t_1} µ (F'_2 − d/dt F'_3) dt
      = F(t_1, x*(t_1), ẋ*(t_1)) ψ̇(0) + µ(t_1) F'_3(t_1, x*(t_1), ẋ*(t_1)). (42)

Since I(α) ≤ I(0) also here implies I'(0) = 0, an insertion of the formula (39) for µ(t_1) now gives that

0 = (F |_{t=t_1} + (ġ(t_1) − ẋ*(t_1)) F'_3 |_{t=t_1}) ψ̇(0). (43)

Because ψ̇(0) ≠ 0, as was seen earlier, the transversality condition (iii) now follows at once.

Remark 11. The additional assumption made in Theorem 9 for the transversality condition (iii), that ẋ*(t_1) ≠ ġ(t_1), is hardly severe. Indeed, it is most likely always fulfilled as a consequence of the fact that the pair (x*, t_1) maximises the functional: if ẋ*(t_1) = ġ(t_1), it is conceivable that x* could be continued along the graph of g for t > t_1 so that a larger integral would be obtained on the interval [t_0, t_1 + ε] for some ε > 0, for one can arrange that the integrand is positive at t_1 by adding a suitable constant to F. (The continuation along the tangent of course only gives a C¹-function, so that framework for admissibility would be the natural one for a final discussion of this point.)

Sufficient conditions for solutions

Finally, a result on sufficient conditions for a solution is given. However, it holds only in case the basic function F(t, x, u) is concave with respect to (x, u). This means that the Hessian matrix

( F''_22  F''_23 )
( F''_32  F''_33 )

is negative semi-definite at all points; that is, for all t, x, u, this matrix has eigenvalues λ_1, λ_2 in ]−∞, 0].

Theorem 12. Suppose F(t, x, u) is concave with respect to (x, u), and that x*(t) fulfils the Euler–Lagrange equation and one of the terminal conditions

a) x(t_1) = x_1,  b) x(t_1) ≥ x_1, or  c) x(t_1) free,

together with the corresponding transversality condition

b) F'_3(t_1, x*(t_1), ẋ*(t_1)) ≤ 0 (and = 0 if x*(t_1) > x_1), respectively
c) F'_3(t_1, x*(t_1), ẋ*(t_1)) = 0.

Then x* is a global maximiser, i.e. for every other admissible function x(t) it holds true that

∫_{t_0}^{t_1} F(t, x(t), ẋ(t)) dt ≤ ∫_{t_0}^{t_1} F(t, x*(t), ẋ*(t)) dt. (44)

Since minimisation of ∫_{t_0}^{t_1} F(t, x, ẋ) dt is achieved by maximising the corresponding integral of −F, one has the analogous minimisation result for convex functions F.
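The concavity hypothesis can be tested mechanically via the Hessian in (x, u). Below is a sketch (assuming SymPy) for the integrand F = x² + u² of Example 3, whose Hessian eigenvalues are both positive, so F is convex and the minimisation variant of Theorem 12 applies:

```python
import sympy as sp

x, u = sp.symbols('x u')
F = x**2 + u**2                      # integrand of Example 3

H = sp.hessian(F, (x, u))            # 2x2 Hessian in (x, u)
eigs = list(H.eigenvals())           # distinct eigenvalues
print(eigs)                          # [2]: both eigenvalues equal 2 > 0
print(all(lam > 0 for lam in eigs))  # True: F is (strictly) convex
```

For concavity one would instead check that every eigenvalue lies in ]−∞, 0].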
Example 13. Since F(x, u) = x² + u² is convex in Example 3, the solution x*(t) = e^{1+t} − e^{1−t} actually minimises the functional J(x) = ∫_0^1 (x(t)² + ẋ(t)²) dt subject to the conditions x(0) = 0 and x(1) = e² − 1. (As was claimed earlier.)

Exercises

Exercise 1
(i) Calculate the value of J(x) = ∫_0^1 (x² + ẋ²) dt in the cases
(a) x(t) = (e² − 1)t,
(b) x(t) = e^{2t} − 1,
(c) x(t) = e^{1+t} − e^{1−t},
(d) x(t) = at² + (e² − 1 − a)t.
(They all go through (0, 0) and (1, e² − 1), hence are admissible.)
(ii) Let a → ∞ and conclude that J(x) has no maximum on the curves joining (0, 0) to (1, e² − 1).

Exercise 2
Let J(x) = ∫_1^2 t² ẋ(t)² dt and consider the boundary conditions x(1) = 1, x(2) = 2.
(i) Find the admissible solutions to the Euler–Lagrange equation.
(ii) Show that the maximisation problem for J has no solution. (Try x(t) = at² + (1 − 3a)t + 2a.)
(iii) Does the above imply that the solution in (i) minimises J(x)?

Exercise 3
Consider the problem min ∫_0^1 (t + x)⁴ dt, x(0) = 0, x(1) = a. Find the solution of the Euler–Lagrange equation and determine the value of a for which this solution is admissible. For this value of a, find the solution of the problem.

Exercise 4
The length of the graph of a C¹-function x(t) which connects (t_0, x_0) to (t_1, x_1) is given by L(x) = ∫_{t_0}^{t_1} √(1 + ẋ(t)²) dt. Prove that L(x) attains its minimum over the admissible functions exactly when x(t) has a straight line as its graph.

Exercise 5
Let A(t) denote the assets (or wealth) of a person at time t, let w be the constant wage, and suppose money can be borrowed at the fixed interest rate r; thus the consumption at time t is modelled as

C(t) = rA(t) + w − Ȧ(t).

Suppose the person wants to maximise consumption from now until the expected death date T,

max ∫_0^T U(C(t)) e^{−ρt} dt.

Hereby U is a certain utility function, U' > 0 > U'', and ρ > 0 is a discount factor. While the present assets are A_0, the purpose is also to leave at least the amount A_T to the heirs, i.e. to have A(T) ≥ A_T.

Apply the necessary conditions to this case. Show in particular that A(t) is only optimal if A(T) = A_T. (Is this understandable?) Solve the Euler equation if U(C) = −a e^{−bC} for constants a, b > 0.

Exercise 6
Investigate what the Euler–Lagrange equations give in the special cases where F = F(t, x), F = F(t, ẋ) and F = F(x, ẋ). Prove that in the last case

d/dt F'_3(x, ẋ) = F'_2(x, ẋ) (45)
⟹ F(x, ẋ) − ẋ F'_3(x, ẋ) is constant (46)
⟹ ẋ = 0 or d/dt F'_3(x, ẋ) = F'_2(x, ẋ). (47)
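The implication (45) ⟹ (46) in Exercise 6 rests on the identity d/dt (F − ẋ F'_3) = ẋ (F'_2 − d/dt F'_3) for F = F(x, ẋ). The sketch below (assuming SymPy) verifies this identity for the concrete integrand F = x² + ẋ² of Example 3; the general case follows by the same chain-rule computation:

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')(t)
xd = sp.diff(x, t)

# Concrete instance F(x, xdot) = x^2 + xdot^2 (integrand of Example 3)
F = x**2 + xd**2
F2 = 2 * x            # F'_2 = dF/dx
F3 = 2 * xd           # F'_3 = dF/dxdot

lhs = sp.diff(F - xd * F3, t)       # d/dt of the expression in (46)
rhs = xd * (F2 - sp.diff(F3, t))    # xdot times the Euler-Lagrange residual
print(sp.simplify(lhs - rhs))       # 0
```

So whenever the Euler–Lagrange equation (45) holds, the right-hand side vanishes and F − ẋ F'_3 is constant; conversely a constant value forces ẋ = 0 or (45), which is (47).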
Chapter 4 Unconstrained optimization An unconstrained optimization problem takes the form min x Rnf(x) (4.1) for a target functional (also called objective function) f : R n R. In this chapter and throughout
More informationEstimates for probabilities of independent events and infinite series
Estimates for probabilities of independent events and infinite series Jürgen Grahl and Shahar evo September 9, 06 arxiv:609.0894v [math.pr] 8 Sep 06 Abstract This paper deals with finite or infinite sequences
More informationDifferentiable Functions
Differentiable Functions Let S R n be open and let f : R n R. We recall that, for x o = (x o 1, x o,, x o n S the partial derivative of f at the point x o with respect to the component x j is defined as
More informationL 2 -induced Gains of Switched Systems and Classes of Switching Signals
L 2 -induced Gains of Switched Systems and Classes of Switching Signals Kenji Hirata and João P. Hespanha Abstract This paper addresses the L 2-induced gain analysis for switched linear systems. We exploit
More informationCopyrighted Material. L(t, x(t), u(t))dt + K(t f, x f ) (1.2) where L and K are given functions (running cost and terminal cost, respectively),
Chapter One Introduction 1.1 OPTIMAL CONTROL PROBLEM We begin by describing, very informally and in general terms, the class of optimal control problems that we want to eventually be able to solve. The
More information1 The Existence and Uniqueness Theorem for First-Order Differential Equations
1 The Existence and Uniqueness Theorem for First-Order Differential Equations Let I R be an open interval and G R n, n 1, be a domain. Definition 1.1 Let us consider a function f : I G R n. The general
More informationStability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games
Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games Alberto Bressan ) and Khai T. Nguyen ) *) Department of Mathematics, Penn State University **) Department of Mathematics,
More informationPreliminary draft only: please check for final version
ARE211, Fall2012 CALCULUS4: THU, OCT 11, 2012 PRINTED: AUGUST 22, 2012 (LEC# 15) Contents 3. Univariate and Multivariate Differentiation (cont) 1 3.6. Taylor s Theorem (cont) 2 3.7. Applying Taylor theory:
More informationRamsey Cass Koopmans Model (1): Setup of the Model and Competitive Equilibrium Path
Ramsey Cass Koopmans Model (1): Setup of the Model and Competitive Equilibrium Path Ryoji Ohdoi Dept. of Industrial Engineering and Economics, Tokyo Tech This lecture note is mainly based on Ch. 8 of Acemoglu
More informationSingularities and Laplacians in Boundary Value Problems for Nonlinear Ordinary Differential Equations
Singularities and Laplacians in Boundary Value Problems for Nonlinear Ordinary Differential Equations Irena Rachůnková, Svatoslav Staněk, Department of Mathematics, Palacký University, 779 OLOMOUC, Tomkova
More informationA Characterization of Polyhedral Convex Sets
Journal of Convex Analysis Volume 11 (24), No. 1, 245 25 A Characterization of Polyhedral Convex Sets Farhad Husseinov Department of Economics, Bilkent University, 6533 Ankara, Turkey farhad@bilkent.edu.tr
More informationDYNAMIC LECTURE 5: DISCRETE TIME INTERTEMPORAL OPTIMIZATION
DYNAMIC LECTURE 5: DISCRETE TIME INTERTEMPORAL OPTIMIZATION UNIVERSITY OF MARYLAND: ECON 600. Alternative Methods of Discrete Time Intertemporal Optimization We will start by solving a discrete time intertemporal
More informationConstructions with ruler and compass.
Constructions with ruler and compass. Semyon Alesker. 1 Introduction. Let us assume that we have a ruler and a compass. Let us also assume that we have a segment of length one. Using these tools we can
More informationChapter 7. Extremal Problems. 7.1 Extrema and Local Extrema
Chapter 7 Extremal Problems No matter in theoretical context or in applications many problems can be formulated as problems of finding the maximum or minimum of a function. Whenever this is the case, advanced
More informationLecture 2 The Centralized Economy
Lecture 2 The Centralized Economy Economics 5118 Macroeconomic Theory Kam Yu Winter 2013 Outline 1 Introduction 2 The Basic DGE Closed Economy 3 Golden Rule Solution 4 Optimal Solution The Euler Equation
More informationSelf-Concordant Barrier Functions for Convex Optimization
Appendix F Self-Concordant Barrier Functions for Convex Optimization F.1 Introduction In this Appendix we present a framework for developing polynomial-time algorithms for the solution of convex optimization
More informationNonlinear Programming (NLP)
Natalia Lazzati Mathematics for Economics (Part I) Note 6: Nonlinear Programming - Unconstrained Optimization Note 6 is based on de la Fuente (2000, Ch. 7), Madden (1986, Ch. 3 and 5) and Simon and Blume
More informationDifferentiation. Table of contents Definition Arithmetics Composite and inverse functions... 5
Differentiation Table of contents. Derivatives................................................. 2.. Definition................................................ 2.2. Arithmetics...............................................
More informationFUNDAMENTAL GROUPS AND THE VAN KAMPEN S THEOREM
FUNDAMENTAL GROUPS AND THE VAN KAMPEN S THEOREM ANG LI Abstract. In this paper, we start with the definitions and properties of the fundamental group of a topological space, and then proceed to prove Van-
More informationConvexity in R n. The following lemma will be needed in a while. Lemma 1 Let x E, u R n. If τ I(x, u), τ 0, define. f(x + τu) f(x). τ.
Convexity in R n Let E be a convex subset of R n. A function f : E (, ] is convex iff f(tx + (1 t)y) (1 t)f(x) + tf(y) x, y E, t [0, 1]. A similar definition holds in any vector space. A topology is needed
More informationSeminars on Mathematics for Economics and Finance Topic 5: Optimization Kuhn-Tucker conditions for problems with inequality constraints 1
Seminars on Mathematics for Economics and Finance Topic 5: Optimization Kuhn-Tucker conditions for problems with inequality constraints 1 Session: 15 Aug 2015 (Mon), 10:00am 1:00pm I. Optimization with
More informationNotes on uniform convergence
Notes on uniform convergence Erik Wahlén erik.wahlen@math.lu.se January 17, 2012 1 Numerical sequences We begin by recalling some properties of numerical sequences. By a numerical sequence we simply mean
More informationMATHEMATICS FOR ECONOMISTS. An Introductory Textbook. Third Edition. Malcolm Pemberton and Nicholas Rau. UNIVERSITY OF TORONTO PRESS Toronto Buffalo
MATHEMATICS FOR ECONOMISTS An Introductory Textbook Third Edition Malcolm Pemberton and Nicholas Rau UNIVERSITY OF TORONTO PRESS Toronto Buffalo Contents Preface Dependence of Chapters Answers and Solutions
More informationCHAPTER 7. Connectedness
CHAPTER 7 Connectedness 7.1. Connected topological spaces Definition 7.1. A topological space (X, T X ) is said to be connected if there is no continuous surjection f : X {0, 1} where the two point set
More informationRobotics. Islam S. M. Khalil. November 15, German University in Cairo
Robotics German University in Cairo November 15, 2016 Fundamental concepts In optimal control problems the objective is to determine a function that minimizes a specified functional, i.e., the performance
More informationOrdinary Differential Equation Theory
Part I Ordinary Differential Equation Theory 1 Introductory Theory An n th order ODE for y = y(t) has the form Usually it can be written F (t, y, y,.., y (n) ) = y (n) = f(t, y, y,.., y (n 1) ) (Implicit
More informationLARGE DEVIATIONS OF TYPICAL LINEAR FUNCTIONALS ON A CONVEX BODY WITH UNCONDITIONAL BASIS. S. G. Bobkov and F. L. Nazarov. September 25, 2011
LARGE DEVIATIONS OF TYPICAL LINEAR FUNCTIONALS ON A CONVEX BODY WITH UNCONDITIONAL BASIS S. G. Bobkov and F. L. Nazarov September 25, 20 Abstract We study large deviations of linear functionals on an isotropic
More information4. Images of Varieties Given a morphism f : X Y of quasi-projective varieties, a basic question might be to ask what is the image of a closed subset
4. Images of Varieties Given a morphism f : X Y of quasi-projective varieties, a basic question might be to ask what is the image of a closed subset Z X. Replacing X by Z we might as well assume that Z
More information1 Relative degree and local normal forms
THE ZERO DYNAMICS OF A NONLINEAR SYSTEM 1 Relative degree and local normal orms The purpose o this Section is to show how single-input single-output nonlinear systems can be locally given, by means o a
More informationUNIVERSITY OF VIENNA
WORKING PAPERS Cycles and chaos in the one-sector growth model with elastic labor supply Gerhard Sorger May 2015 Working Paper No: 1505 DEPARTMENT OF ECONOMICS UNIVERSITY OF VIENNA All our working papers
More informationThe Laplace Transform
C H A P T E R 6 The Laplace Transform Many practical engineering problems involve mechanical or electrical systems acted on by discontinuous or impulsive forcing terms. For such problems the methods described
More informationIntroduction to Optimal Control Theory and Hamilton-Jacobi equations. Seung Yeal Ha Department of Mathematical Sciences Seoul National University
Introduction to Optimal Control Theory and Hamilton-Jacobi equations Seung Yeal Ha Department of Mathematical Sciences Seoul National University 1 A priori message from SYHA The main purpose of these series
More informationWe denote the space of distributions on Ω by D ( Ω) 2.
Sep. 1 0, 008 Distributions Distributions are generalized functions. Some familiarity with the theory of distributions helps understanding of various function spaces which play important roles in the study
More information31.1.1Partial derivatives
Module 11 : Partial derivatives, Chain rules, Implicit differentiation, Gradient, Directional derivatives Lecture 31 : Partial derivatives [Section 31.1] Objectives In this section you will learn the following
More informationThe goal of this chapter is to study linear systems of ordinary differential equations: dt,..., dx ) T
1 1 Linear Systems The goal of this chapter is to study linear systems of ordinary differential equations: ẋ = Ax, x(0) = x 0, (1) where x R n, A is an n n matrix and ẋ = dx ( dt = dx1 dt,..., dx ) T n.
More informationON CERTAIN CLASSES OF CURVE SINGULARITIES WITH REDUCED TANGENT CONE
ON CERTAIN CLASSES OF CURVE SINGULARITIES WITH REDUCED TANGENT CONE Alessandro De Paris Università degli studi di Napoli Federico II Dipartimento di Matematica e Applicazioni R. Caccioppoli Complesso Monte
More informationExamples include: (a) the Lorenz system for climate and weather modeling (b) the Hodgkin-Huxley system for neuron modeling
1 Introduction Many natural processes can be viewed as dynamical systems, where the system is represented by a set of state variables and its evolution governed by a set of differential equations. Examples
More informationIowa State University. Instructor: Alex Roitershtein Summer Homework #5. Solutions
Math 50 Iowa State University Introduction to Real Analysis Department of Mathematics Instructor: Alex Roitershtein Summer 205 Homework #5 Solutions. Let α and c be real numbers, c > 0, and f is defined
More informationThe Skorokhod problem in a time-dependent interval
The Skorokhod problem in a time-dependent interval Krzysztof Burdzy, Weining Kang and Kavita Ramanan University of Washington and Carnegie Mellon University Abstract: We consider the Skorokhod problem
More information2 A Model, Harmonic Map, Problem
ELLIPTIC SYSTEMS JOHN E. HUTCHINSON Department of Mathematics School of Mathematical Sciences, A.N.U. 1 Introduction Elliptic equations model the behaviour of scalar quantities u, such as temperature or
More informationDISTRIBUTION OF THE SUPREMUM LOCATION OF STATIONARY PROCESSES. 1. Introduction
DISTRIBUTION OF THE SUPREMUM LOCATION OF STATIONARY PROCESSES GENNADY SAMORODNITSKY AND YI SHEN Abstract. The location of the unique supremum of a stationary process on an interval does not need to be
More informationViscosity Solutions for Dummies (including Economists)
Viscosity Solutions for Dummies (including Economists) Online Appendix to Income and Wealth Distribution in Macroeconomics: A Continuous-Time Approach written by Benjamin Moll August 13, 2017 1 Viscosity
More informationLecture 6: Discrete-Time Dynamic Optimization
Lecture 6: Discrete-Time Dynamic Optimization Yulei Luo Economics, HKU November 13, 2017 Luo, Y. (Economics, HKU) ECON0703: ME November 13, 2017 1 / 43 The Nature of Optimal Control In static optimization,
More informationz = f (x; y) f (x ; y ) f (x; y) f (x; y )
BEEM0 Optimization Techiniques for Economists Lecture Week 4 Dieter Balkenborg Departments of Economics University of Exeter Since the fabric of the universe is most perfect, and is the work of a most
More informationModule 5 : Linear and Quadratic Approximations, Error Estimates, Taylor's Theorem, Newton and Picard Methods
Module 5 : Linear and Quadratic Approximations, Error Estimates, Taylor's Theorem, Newton and Picard Methods Lecture 14 : Taylor's Theorem [Section 141] Objectives In this section you will learn the following
More information3 Applications of partial differentiation
Advanced Calculus Chapter 3 Applications of partial differentiation 37 3 Applications of partial differentiation 3.1 Stationary points Higher derivatives Let U R 2 and f : U R. The partial derivatives
More informationGENERALIZED CONVEXITY AND OPTIMALITY CONDITIONS IN SCALAR AND VECTOR OPTIMIZATION
Chapter 4 GENERALIZED CONVEXITY AND OPTIMALITY CONDITIONS IN SCALAR AND VECTOR OPTIMIZATION Alberto Cambini Department of Statistics and Applied Mathematics University of Pisa, Via Cosmo Ridolfi 10 56124
More informationVector Spaces. Addition : R n R n R n Scalar multiplication : R R n R n.
Vector Spaces Definition: The usual addition and scalar multiplication of n-tuples x = (x 1,..., x n ) R n (also called vectors) are the addition and scalar multiplication operations defined component-wise:
More information1. First and Second Order Conditions for A Local Min or Max.
1 MATH CHAPTER 3: UNCONSTRAINED OPTIMIZATION W. Erwin Diewert 1. First and Second Order Conditions for A Local Min or Max. Consider the problem of maximizing or minimizing a function of N variables, f(x
More informationExtreme points of compact convex sets
Extreme points of compact convex sets In this chapter, we are going to show that compact convex sets are determined by a proper subset, the set of its extreme points. Let us start with the main definition.
More informationCourse information EC3120 Mathematical economics
Course information 205 6 Mathematical modelling is particularly helpful in analysing a number of aspects of economic theory The course content includes a study of several mathematical models used in economics
More informationminimize x subject to (x 2)(x 4) u,
Math 6366/6367: Optimization and Variational Methods Sample Preliminary Exam Questions 1. Suppose that f : [, L] R is a C 2 -function with f () on (, L) and that you have explicit formulae for
More informationAnalysis II - few selective results
Analysis II - few selective results Michael Ruzhansky December 15, 2008 1 Analysis on the real line 1.1 Chapter: Functions continuous on a closed interval 1.1.1 Intermediate Value Theorem (IVT) Theorem
More informationNonlinear Optimization
Nonlinear Optimization (Com S 477/577 Notes) Yan-Bin Jia Nov 7, 2017 1 Introduction Given a single function f that depends on one or more independent variable, we want to find the values of those variables
More informationChapter 1. Optimality Conditions: Unconstrained Optimization. 1.1 Differentiable Problems
Chapter 1 Optimality Conditions: Unconstrained Optimization 1.1 Differentiable Problems Consider the problem of minimizing the function f : R n R where f is twice continuously differentiable on R n : P
More informationECON 582: Dynamic Programming (Chapter 6, Acemoglu) Instructor: Dmytro Hryshko
ECON 582: Dynamic Programming (Chapter 6, Acemoglu) Instructor: Dmytro Hryshko Indirect Utility Recall: static consumer theory; J goods, p j is the price of good j (j = 1; : : : ; J), c j is consumption
More information