
ARE202A, Fall 2005

LECTURE #12: WED, NOV 16, 2005. PRINT DATE: NOVEMBER 12, 2005 (NPP2)

Contents

5. Nonlinear Programming Problems and the Kuhn-Tucker conditions (cont)
5.2. Necessary and sufficient conditions for a solution to an NPP (cont)
5.2.1. Preliminaries: the problem of the vanishing gradient
5.2.2. Preliminaries: the relationship between quasi-concavity and the Hessian of $f$
5.2.3. Preliminaries: the relationship between (strict) quasi-concavity and (strict) concavity
5.2.4. Sufficient conditions for a solution to the NPP

5. Nonlinear Programming Problems and the Kuhn-Tucker conditions (cont)

5.2. Necessary and sufficient conditions for a solution to an NPP (cont)

So far we've only established necessary conditions for a solution to the NPP. But we can't stop here: we could have found a minimum on the constraint set, and the same KKT conditions would be satisfied. In this lecture we focus on finding sufficient conditions for a solution, and in particular on conditions under which the KKT conditions will be both necessary and (almost but not quite) sufficient for a solution. The basic sufficiency conditions we're going to rely on are that the objective function $f$ is strictly quasi-concave while the constraint functions are quasi-convex. But there are a lot of subtleties that we need to address. We begin with some preliminary issues.
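As a reminder, here is the setup these conditions refer to (a sketch reconstructed from the earlier NPP lecture and from the notation of Theorem (S) below; the statement there may differ slightly in detail):

$$\text{(NPP)}\qquad \max_{x \in \mathbb{R}^n_+} f(x) \quad\text{s.t.}\quad g^j(x) \le b_j,\ j = 1, \dots, m,$$

with the KKT conditions at $x$: there exists $\lambda \in \mathbb{R}^m_+$ such that $\nabla f(x)^T = \lambda^T Jg(x)$ and $\lambda_j = 0$ for each $j$ with $g^j(x) < b_j$; i.e., $\nabla f(x)$ is a nonnegative linear combination of the gradients of the binding constraints.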

5.2.1. Preliminaries: the problem of the vanishing gradient.

In addition to the usual quasi-concavity/quasi-convexity conditions, we have to deal with the familiar annoyance posed by the example: max $x^3$ on $[-1, 1]$. Obviously this problem has a solution at $x = 1$. However, at $x = 0$ the KKT conditions are satisfied, i.e., the gradient is zero, and so can be written as a nonnegative linear combination of the constraint gradients, with weight zero applied to each of the constraints, neither of which is satisfied with equality. That is, $f'(x) = 0$, which is the sum of the gradients of the two constraints at zero, each weighted by zero. So without imposing a restriction that excludes functions such as this, we cannot say that satisfying the KKT conditions is sufficient for a max when the objective and constraint functions have the right "quasi" properties.

To exclude this case, we could assume that $f$ has a non-vanishing gradient. But this restriction throws the baby out with the bath-water: e.g., the problem max $x(1-x)$ s.t. $x \in [0,1]$ has a global max at $0.5$, at which point the gradient vanishes. So we want to exclude precisely those functions that have vanishing gradients at $x$'s which are not unconstrained maxima. The following condition on $f$, called pseudo-concavity in S&B (the original name) and M.K.9 in MWG, does just this, in addition to implying quasi-concavity:

$$\forall x, x' \in X,\ \text{if } f(x') > f(x) \text{ then } \nabla f(x) \cdot (x' - x) > 0. \quad (1)$$

Note that (1) says a couple of things. First, it says that a necessary condition for $f(x') > f(x)$ is that $dx = (x' - x)$ makes an acute angle with the gradient of $f$. (This looks very much like quasi-concavity.) Second, it implies that

if $\nabla f(\cdot) = 0$ at $x$ then $f(\cdot)$ attains a global max at $x$, \quad (2)

since if not there would necessarily exist $x, x' \in X$ s.t. $f(x') > f(x)$, and $\nabla f(x) \cdot (x' - x) = 0 \cdot (x' - x) = 0$, violating (1).
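A quick way to see condition (1) in action is to test it numerically on a grid. The sketch below is my illustration, not part of the notes: it confirms that $f(x) = x^3$ violates (1) at $x = 0$, while $f(x) = x(1-x)$ satisfies it on $[0,1]$ even though its gradient vanishes at the interior maximizer $0.5$.

```python
import numpy as np

def violates_pseudoconcavity(f, df, grid):
    """Return pairs (x, x') from the grid with f(x') > f(x) but
    grad f(x).(x' - x) <= 0, i.e., witnesses against condition (1)."""
    bad = []
    for x in grid:
        for xp in grid:
            if f(xp) > f(x) and df(x) * (xp - x) <= 0:
                bad.append((x, xp))
    return bad

grid = np.linspace(-1.0, 1.0, 41)
# f(x) = x^3: gradient vanishes at x = 0, which is not a max on [-1,1].
print(violates_pseudoconcavity(lambda x: x**3, lambda x: 3 * x**2, grid)[:3])
# e.g. (0.0, 0.05): f(0.05) > f(0) but f'(0)*(0.05 - 0) = 0, violating (1).

grid01 = np.linspace(0.0, 1.0, 41)
# f(x) = x(1-x): gradient vanishes only at the global max x = 0.5.
print(violates_pseudoconcavity(lambda x: x * (1 - x), lambda x: 1 - 2 * x, grid01))
# [] -- no violations: this well-behaved function is pseudo-concave.
```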

[Figure 1: pseudo-concavity implies quasi-concavity.]

Our next result establishes precisely the relationship between pseudo-concavity and quasi-concavity:

if $f$ is $C^2$ then $f$ satisfies (1) iff $f$ is quasi-concave and satisfies (2). \quad (3)

To prove the ($\Rightarrow$) direction of (3), we'll show that (2) together with $\neg$(quasi-concavity) implies $\neg$(pseudo-concavity). Suppose that $f$ satisfies (2) but is not quasi-concave, i.e., $\exists x', x'', z \in X$ such that $z = \lambda x' + (1-\lambda)x''$ for some $\lambda \in (0,1)$ and $f(x'') \ge f(x') > f(z)$. We will show that $f$ violates (1) at a point $x$ near $z$. Since $f$ does not attain a global maximum at $z$, (2) implies that $\nabla f(z) \neq 0$. Therefore, by continuity, we can pick $\epsilon > 0$ sufficiently small and $x = z + \epsilon \nabla f(z)$ such that $f(x) < f(x')$ and $\nabla f(x) \cdot \nabla f(z) > 0$. As Fig. 1 indicates, there are now two cases to consider: the angle between $\nabla f(x)$ and $x' - x''$ is either $\ge 90°$ or $< 90°$. First suppose that $\nabla f(x) \cdot (x' - x'') \le 0$. In this case, we have

$$\nabla f(x) \cdot (x' - x) = \nabla f(x) \cdot \big(x' - (z + \epsilon \nabla f(z))\big) = \nabla f(x) \cdot \big((1-\lambda)(x' - x'') - \epsilon \nabla f(z)\big) = \underbrace{(1-\lambda)\,\nabla f(x) \cdot (x' - x'')}_{\le 0 \text{ by assumption}} - \underbrace{\epsilon\,\nabla f(x) \cdot \nabla f(z)}_{> 0 \text{ by construction}} < 0.$$

On the other hand, if $\nabla f(x) \cdot (x'' - x') < 0$, repeat the above argument to conclude that $\nabla f(x) \cdot (x'' - x) < 0$. We have thus identified a point $x$ such that $f(x'') \ge f(x') > f(x)$ and such that either $\nabla f(x) \cdot (x' - x) < 0$ or $\nabla f(x) \cdot (x'' - x) < 0$, verifying that (1) is violated.

To prove the ($\Leftarrow$) direction of (3), we'll show that (2) together with $\neg$(pseudo-concavity) implies $\neg$(quasi-concavity). Assume that there exist $x, x'$ such that $f(x') > f(x)$ but $\nabla f(x) \cdot (x' - x) \le 0$. Since $x$ is not a global maximizer of $f$, (2) implies that $\nabla f(x) \neq 0$. By continuity, we can pick $\epsilon > 0$ sufficiently small that for $y = x' - \epsilon \nabla f(x)$, $f(y) > f(x)$. We'll show that a portion of the line segment joining $y$ and $x$ does not belong to the upper contour set of $f$ corresponding to $f(x)$, proving that $f$ is not quasi-concave. [Figure 2: pseudo-concavity rules this out.] We have

$$\nabla f(x) \cdot (y - x) = \nabla f(x) \cdot \big((x' - \epsilon \nabla f(x)) - x\big) = \underbrace{\nabla f(x) \cdot (x' - x)}_{\le 0 \text{ by assumption}} - \underbrace{\epsilon\,\|\nabla f(x)\|^2}_{> 0} < 0.$$

Now let $dx = (y - x)$. For all $\lambda > 0$, $\nabla f(x) \cdot \lambda dx = \lambda\,\nabla f(x) \cdot (y - x) < 0$. Taylor-Young's theorem then implies that if $\lambda$ is sufficiently small, $f(x + \lambda dx) < f(x)$, establishing that a portion of the line segment joining $x$ and $y$ does not belong to the upper contour set of $f$ corresponding to $f(x)$.

The following modification, changing only the first strict inequality to a weak inequality, gives us a condition that implies strict quasi-concavity:

$$\forall x, x' \in X \text{ with } x' \neq x,\ \text{if } f(x') \ge f(x) \text{ then } \nabla f(x) \cdot (x' - x) > 0. \quad (1')$$

Conclude that pseudo-concavity is a much weaker assumption than the non-vanishing gradient condition, and will give us just enough to ensure that the KT conditions are not only necessary
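To make the ($\Leftarrow$) construction concrete, here is a small numerical instance (mine, not from the notes): $f(x_1, x_2) = x_1^2 + x_2^2$ violates (1) at $x = (1,0)$ with $x' = (-2,0)$, and, following the proof, part of the segment from $x$ to $y = x' - \epsilon\nabla f(x)$ dips below the level $f(x)$, exhibiting the failure of quasi-concavity.

```python
import numpy as np

# Illustration (mine) of the proof's construction for a function that
# violates pseudo-concavity: f(x) = x1^2 + x2^2 (convex, not quasi-concave).
f = lambda v: v @ v
grad = lambda v: 2 * v

x = np.array([1.0, 0.0])
xp = np.array([-2.0, 0.0])          # f(xp) = 4 > 1 = f(x) ...
print(grad(x) @ (xp - x))           # ... but grad f(x).(xp - x) = -6 <= 0: (1) fails.

eps = 0.1
y = xp - eps * grad(x)              # y = x' - eps * grad f(x), as in the proof
print(f(y) > f(x))                  # True: eps is small enough that f(y) > f(x)

# Scan the segment joining x and y: quasi-concavity would require
# f >= min(f(x), f(y)) = f(x) along it, but the segment dips below.
lams = np.linspace(0, 1, 101)
vals = np.array([f((1 - lam) * x + lam * y) for lam in lams])
print(vals.min() < f(x))            # True: part of the segment leaves the
                                    # upper contour set of f at level f(x).
```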

but sufficient as well. In particular, pseudo-concavity admits the possibility that our solution to the NPP may be unconstrained.

5.2.2. Preliminaries: the relationship between quasi-concavity and the Hessian of $f$.

Recall from earlier that an alternative way to specify that $f$ is strictly (weakly) concave is to require that the Hessian of $f$ be everywhere negative (semi-)definite. There is an analogous way to specify that $f$ is strictly (weakly) quasi-concave, which involves a restricted kind of definiteness property for $f$. The following result gives a sufficient condition for strict quasi-concavity.

Theorem (SQC): A sufficient condition for $f: \mathbb{R}^n \to \mathbb{R}$ to be strictly quasi-concave is that for all $x$ and all $dx \neq 0$ such that $\nabla f(x)' dx = 0$, $dx' Hf(x)\, dx < 0$.

A sufficient condition for the above Hessian property is the following condition on the leading principal minors of the bordered Hessian of $f$: for all $x$, and all $k = 1, \dots, n$, the $k$'th leading principal minor of the bordered matrix

$$\begin{pmatrix} 0 & \nabla f(x)' \\ \nabla f(x) & Hf(x) \end{pmatrix}$$

must have the same sign as $(-1)^k$, where the $k$'th leading principal minor of this matrix is the determinant of the top-left $(k+1) \times (k+1)$ submatrix.

It's important to emphasize that to guarantee strict quasi-concavity (or any global concavity/convexity property, for that matter), the Hessian property has to hold for all $x$ in the domain of the function. By extension, the leading principal minor property has to hold for all $x$ in order to guarantee strict quasi-concavity. Note also that the above condition isn't necessary for strict quasi-concavity: the usual example, $f(x) = x^3$, establishes this: $f$ is strictly quasi-concave, but at $x = 0$, for all $dx$, $\nabla f(x)\, dx = 0$ while $dx' Hf(x)\, dx = 0$.
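Here is a sketch of how one might check the bordered-Hessian minor condition symbolically (the helper below is mine; names are illustrative). For $f(x_1, x_2) = x_1 x_2$ on the strictly positive orthant, the $k$'th leading principal minor alternates in sign as $(-1)^k$, consistent with strict quasi-concavity there.

```python
import sympy as sp

def bordered_hessian_minors(f, xs):
    """Leading principal minors of the bordered Hessian
    [[0, grad'], [grad, H]]; the k'th minor is the determinant of the
    top-left (k+1)x(k+1) submatrix, k = 1, ..., n."""
    grad = sp.Matrix([sp.diff(f, x) for x in xs])
    H = sp.hessian(f, xs)
    B = sp.Matrix.vstack(
        sp.Matrix.hstack(sp.Matrix([[0]]), grad.T),
        sp.Matrix.hstack(grad, H),
    )
    return [B[: k + 1, : k + 1].det().simplify() for k in range(1, len(xs) + 1)]

x1, x2 = sp.symbols("x1 x2", positive=True)
minors = bordered_hessian_minors(x1 * x2, [x1, x2])
print(minors)   # [-x2**2, 2*x1*x2]: signs (-1)^1, (-1)^2 for all x >> 0,
                # so the sufficient condition for strict quasi-concavity holds.
```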

To prove Theorem (SQC), assume that the condition of the theorem is satisfied and pick arbitrary points $x, x'$ in the domain of $f$ such that $f(x') \ge f(x)$. Let $Y = \{y = \lambda x + (1-\lambda)x' : \lambda \in [0,1]\}$ and let $Y^*$ denote the set of minimizers of $f$ on $Y$. We'll prove that $Y^* \subset \{x, x'\}$. A consequence of this fact is that for all $\lambda \in (0,1)$, $f(\lambda x + (1-\lambda)x') > f(x)$, implying that $f$ is strictly quasi-concave. To establish that $Y^* \subset \{x, x'\}$, consider an arbitrary interior point $y$ of $Y$, i.e., some point $y = \lambda x + (1-\lambda)x'$, $\lambda \in (0,1)$. Clearly, a necessary condition for $y$ to minimize $f(\cdot)$ on $Y$ is that $\nabla f(y) \cdot (y - x) = 0$. (Otherwise you could move along the line $Y$ in one or the other direction and decrease $f$, contradicting the fact that $y$ is a minimizer.) Assume therefore that this condition is satisfied at $y$. But in this case, for all $y' \in Y$, $\nabla f(y) \cdot (y' - y) = 0$, and hence, by the condition of the theorem above, $(y' - y)' Hf(y) (y' - y) < 0$. Now apply the Taylor-Young theorem, for $k = 2$, to observe that for $y' \in Y$ sufficiently close to $y$,

$$\mathrm{sign}\big(f(y') - f(y)\big) = \mathrm{sign}\Big(\nabla f(y) \cdot (y' - y) + \tfrac{1}{2}(y' - y)' Hf(y)(y' - y)\Big) < 0.$$

Conclude that if $y$ satisfies the necessary condition for an interior minimum of $f$ on $Y$, then $y$ is in fact a local maximizer of $f$ on $Y$, and hence not a minimizer. This completes the proof.

5.2.3. Preliminaries: the relationship between (strict) quasi-concavity and (strict) concavity.

A sufficient condition for strict concavity is that for all $x$ and all $dx \neq 0$, $dx' Hf(x)\, dx < 0$. For strict quasi-concavity, we only require that this property of the Hessian holds for vectors that are orthogonal to $\nabla f(x)$. This seems much weaker, infinitely weaker in fact. However, it's not quite as weak as it looks: the condition on orthogonal vectors also has implications for $dx$'s that are almost orthogonal to $\nabla f(x)$, and we need these implications in order to prove that the KT conditions are sufficient for a solution. Specifically,

if $\nabla f(x)\, dx = 0$ implies $dx' Hf(x)\, dx < 0$, then by continuity, for $dx \neq 0$ such that $|\nabla f(x)\, dx|$ is sufficiently small, $dx' Hf(x)\, dx < 0$. \quad (4)

Why is (4) so important? The problem we face, once again, is that in order to have a local maximum, you need an $\epsilon$-ball around $x$ such that $f(\cdot)$ is strictly less than $f(x)$ on the intersection

of this ball with the constraint set. Now, as we've discussed over and over again, you can't find this ball just by using first order conditions. You need your second order conditions to be cooperative at the points where the first order conditions fail you. If they don't cooperate, then for any given $\epsilon$-ball, there are going to be $dx$'s that make an acute angle with $\nabla f(x)$ for which your first order conditions will be inadequate.

To see the point, suppose for the moment that the following were the case, for some $f: \mathbb{R}^n \to \mathbb{R}$. Fortunately for all of us, it cannot happen (it's ruled out by (4)). Unfortunately for me, it's so unable to happen that I can't even draw it:

$$\text{for all } x \text{ and all } dx \neq 0 \text{ such that } \nabla f(x)\, dx = 0,\ dx' Hf(x)\, dx < 0;\ \text{ while for all } x \text{ and all } dx \neq 0 \text{ such that } \nabla f(x)\, dx \neq 0,\ dx' Hf(x)\, dx > 0. \quad (5)$$

Now, for any $x$ satisfying the KKT conditions, we can find $dx$ such that $\nabla g^j(x)\, dx < 0$ for all $j$, and $\nabla f(x)\, dx$ is an extremely small negative number. Now suppose that (5) could happen. In this case, if we considered $\lambda dx$, for $\lambda > 0$ extremely small, we would find that the second order term in the Taylor expansion would dominate the first term, with the result that $f(x + \lambda dx) > f(x)$. In short, if (5) could happen, then the KKT conditions plus the Hessian condition above plus quasi-convexity of the constraint functions would not be sufficient for a solution.

How small is "sufficiently small" in (4)? The following example shows that the requirement for "sufficiently small" gets tougher and tougher, the less concave is $f$.

Example: Consider the function $f(x_1, x_2) = (x_1 x_2)^\alpha$, which is strictly quasi-concave but not concave for $\alpha > 0.5$. We'll illustrate that regardless of the value of $\alpha$, $dx' Hf\, dx < 0$ for any vector that is almost orthogonal to $\nabla f$, but that the criterion of "almost" gets tighter and tighter as $\alpha$ gets

larger. That is, the higher is $\alpha$ (i.e., the less concave is $f$), the closer to orthogonal $dx$ has to be in order to ensure that $dx' Hf\, dx$ is negative. We have

$$\nabla f(x_1, x_2) = \big(\alpha x_1^{\alpha-1} x_2^{\alpha},\ \alpha x_1^{\alpha} x_2^{\alpha-1}\big) \quad\text{and}\quad Hf(x_1, x_2) = \begin{pmatrix} \alpha(\alpha-1)x_1^{\alpha-2}x_2^{\alpha} & \alpha^2 x_1^{\alpha-1}x_2^{\alpha-1} \\ \alpha^2 x_1^{\alpha-1}x_2^{\alpha-1} & \alpha(\alpha-1)x_1^{\alpha}x_2^{\alpha-2} \end{pmatrix}.$$

Evaluated at $(x_1, x_2) = (1, 1)$, we have

$$Hf(1,1) = \begin{pmatrix} \alpha(\alpha-1) & \alpha^2 \\ \alpha^2 & \alpha(\alpha-1) \end{pmatrix} = \alpha^2 \begin{pmatrix} -\delta & 1 \\ 1 & -\delta \end{pmatrix}, \quad\text{where } \delta = \tfrac{1-\alpha}{\alpha}.$$

Note that $\delta \to 0$ as $\alpha \to 1$. Now choose a unit-length vector $dx$ and consider

$$dx' Hf\, dx / \alpha^2 = -\delta(dx_1^2 + dx_2^2) + 2 dx_1 dx_2 = (dx_1 + dx_2)^2 - (1+\delta)(dx_1^2 + dx_2^2) = (dx_1 + dx_2)^2 - (1+\delta).$$

For $dx$ such that $dx_1 = -dx_2$, $dx' Hf\, dx < 0$ for all $\alpha < 1$, verifying that $f$ is strictly quasi-concave. However, the closer $\alpha$ is to unity, the smaller is the set of unit vectors for which $dx' Hf\, dx < 0$.

5.2.4. Sufficient conditions for a solution to the NPP.

The following theorem gives sufficient conditions for a solution (not necessarily unique) to the NPP.

Theorem (S) (Sufficient conditions for a solution to the NPP): If $f$ satisfies (1) and the $g^j$'s are quasi-convex, then a sufficient condition for a solution to the NPP at $x \in \mathbb{R}^n_+$ is that there exists a vector $\lambda \in \mathbb{R}^m_+$ such that $\nabla f(x)^T = \lambda^T Jg(x)$ and $\lambda$ has the property that $\lambda_j = 0$, for each $j$ such that $g^j(x) < b_j$.
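The algebra in the example is easy to verify symbolically. The sketch below (mine, with illustrative names) reproduces $Hf(1,1)$ and the identity $dx'Hf\,dx/\alpha^2 = (dx_1+dx_2)^2 - (1+\delta)$ for unit vectors, then estimates how the set of admissible directions shrinks as $\alpha \to 1$.

```python
import sympy as sp

a, t = sp.symbols("alpha theta", positive=True)
x1, x2 = sp.symbols("x1 x2", positive=True)

f = (x1 * x2) ** a
H = sp.hessian(f, [x1, x2]).subs({x1: 1, x2: 1})
delta = (1 - a) / a

# Unit vector dx = (cos t, sin t); the quadratic form over alpha^2 should
# equal (dx1 + dx2)^2 - (1 + delta), as derived in the text.
dx = sp.Matrix([sp.cos(t), sp.sin(t)])
q = sp.simplify((dx.T * H * dx)[0] / a**2
                - ((sp.cos(t) + sp.sin(t)) ** 2 - (1 + delta)))
print(q)  # 0: the closed form in the text checks out

# dx'Hf dx < 0 iff (dx1 + dx2)^2 < 1 + delta. Measure the arc of unit
# vectors satisfying this as alpha -> 1: it shrinks toward the directions
# orthogonal to grad f.
for aval in [0.6, 0.9, 0.99]:
    cond = ((sp.cos(t) + sp.sin(t)) ** 2 - (1 + delta)).subs(a, aval)
    frac = sum(1 for k in range(360)
               if cond.subs(t, sp.pi * k / 180).evalf() < 0) / 360
    print(aval, frac)  # fraction of directions with dx'Hf dx < 0 falls toward 1/2
```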

Note that Theorem (S) doesn't guarantee that a solution exists; we need compactness for this. Note also that the sufficient conditions are like the necessary conditions, except that you don't need the CQ but do need the quasi-concavity/(1) and quasi-convexity stuff. (MWG's version of Theorem (S), Theorem M.K.3, is just like mine except that they do include the constraint qualification. This addition is unnecessary: they're not wrong, they just have a meaningless additional condition. The C.Q. addresses the worry that you might have a maximum without the non-negative cone condition holding. If you assume, as in (S), that this condition holds, then, obviously, you don't need to worry that perhaps it mightn't hold!)

Theorem (S) would also be true if the words "$f$ satisfies (1)" were replaced by the stronger condition "$f$ is quasi-concave and $\nabla f(\cdot)$ is never zero." Since quasi-concavity is easier to work with, I'll discuss this modified, though less satisfactory, theorem. Moreover, if $f$ is strictly quasi-concave or condition (1') is satisfied, then we get a unique solution.

Sketch of the proof of Theorem (S) for the case of one constraint: Suppose that the KT conditions are satisfied at $(x, \lambda)$, along with the other conditions of the sufficiency theorem, i.e., quasi-concave objective, quasi-convex constraint functions and non-vanishing gradient. In fact, we are going to assume that the constraint function is strictly quasi-convex, just to make things a bit easier. First note that if $\lambda$ is zero, then $\nabla f(x)$ is zero also. But then we're done, because by (2), $f$ must attain a global, and hence constrained, max at $x$. Now assume that $\lambda > 0$. This in turn (complementary slackness) implies that $x$ is on the boundary of the constraint set, i.e., $g(x) = b$. In this case, the Kuhn-Tucker conditions say that $\nabla f(x)$ must be pointing in the same direction as $\nabla g(x)$.
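As a concrete instance (my example, not from the notes): maximize $f(x) = (x_1 x_2)^{1/2}$, which satisfies (1) on the strictly positive orthant, subject to the linear (hence quasi-convex) constraint $g(x) = x_1 + 2x_2 \le 4$. The candidate $x = (2, 1)$ with multiplier $\lambda \approx 0.354$ satisfies the conditions of Theorem (S):

```python
import numpy as np

f = lambda x: np.sqrt(x[0] * x[1])
grad_f = lambda x: 0.5 * np.array([np.sqrt(x[1] / x[0]), np.sqrt(x[0] / x[1])])
grad_g = np.array([1.0, 2.0])   # g(x) = x1 + 2*x2, b = 4 (binding at x)

x = np.array([2.0, 1.0])
lam = grad_f(x)[0] / grad_g[0]  # candidate multiplier from the first coordinate
print(lam)                                    # ~0.3536 > 0
print(np.allclose(grad_f(x), lam * grad_g))   # True: grad f(x) = lam * grad g(x)

# Spot-check sufficiency: no feasible point found by random search beats x.
rng = np.random.default_rng(0)
pts = rng.uniform(0.01, 4.0, size=(100_000, 2))
feas = pts[pts @ grad_g <= 4.0]
print(f(feas.T).max() <= f(x) + 1e-9)         # True: f(x) = sqrt(2) is the max
```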

[Figure 3: sufficient conditions for a solution to the NPP.]

How does this guarantee us a max? We'll show that the KT conditions plus non-vanishing gradient plus strict quasi-concavity (applied locally) guarantee a local max, and the quasi-concavity/quasi-convexity conditions (applied globally) do the rest. To establish a strictly local max, we have to show that there exists a ball about $x$ such that no matter where we move within that ball, we either decrease the value of $f$ or move outside the constraint set. To find the right ball, we proceed as follows:

(1) Since $f$ and $g$ are, respectively, strictly quasi-concave and strictly quasi-convex, we know that $dx' Hf\, dx < 0$ and $dx' Hg\, dx > 0$ for $dx$'s that are orthogonal to the direction in which the gradients of $f$ and $g$ both point. By continuity, there exists an interval around 90 degrees such that for any vector $dx$ that makes an angle in this interval with $\nabla f$ (i.e., lives in the cone-shaped object $C$ in Fig. 3), $dx' Hf\, dx < 0$, while $dx' Hg\, dx > 0$.

(2) Next note that there exists $\epsilon_1 > 0$ such that for $dx$'s in $B(x; \epsilon_1)$ but not in the cone-like set $C$, the first term in the Taylor expansion about $x$ in the direction $dx$ determines the sign of the entire expansion. (You should, hopefully, understand well by now why, without excluding the cone, we couldn't find such an $\epsilon_1$.)

(3) Finally, note that there exists $0 < \epsilon_2 \le \epsilon_1$ such that for $dx$'s in $B(x; \epsilon_2)$ and in the cone-like set $C$, the first two terms in the Taylor expansion about $x$ in the direction $dx$ determine the sign of the entire expansion.

There are four classes of directions to consider (see the numeric sketch after this list):

(1) if we move from $x$ in a direction that makes a big enough acute angle with the gradient vector $\nabla g(x)$ (e.g., $dx^1$), then by the first order version of Taylor's theorem, we increase the value of $g$, i.e., move outside the constraint set;

(2) if we move from $x$ in a direction that makes a big enough obtuse angle with the gradient vector $\nabla f(x)$ (e.g., $dx^4$), then by the first order version of Taylor's theorem, we reduce the value of $f$;

(3) if we move from $x$ in a direction $dx$ that makes a barely acute angle with the gradient vector $\nabla g(x)$, i.e., we move in a direction almost perpendicular to the gradient vector (e.g., $dx^2$), then by the second order version of Taylor's theorem, we increase the value of $g$ (since both of the first two terms in the expansion of $g$ are positive);

(4) if we move from $x$ in a direction $dx$ that makes a barely obtuse angle with the gradient vector $\nabla f(x)$, i.e., we move in a direction almost perpendicular to the gradient vector (e.g., $dx^3$), then by the second order version of Taylor's theorem, we reduce the value of $f$ (since both of the first two terms in the expansion of $f$ are negative).

So we've shown that if we move a distance less than $\epsilon_2$ in any possible direction away from $x$, either $f$ goes down or $g$ goes up. Conclude that we have a local max.
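To see all four cases at once numerically, here is a small experiment (my sketch, reusing the example from above: $f(x) = (x_1 x_2)^{1/2}$, $g(x) = x_1 + 2x_2 \le 4$, KKT point $x = (2,1)$). For a small step in every direction, either $f$ falls or $g$ rises, which is exactly the local-max conclusion:

```python
import numpy as np

f = lambda x: np.sqrt(x[0] * x[1])
g = lambda x: x[0] + 2 * x[1]
x = np.array([2.0, 1.0])      # KKT point of the earlier example; g(x) = b = 4

eps = 1e-3                    # a small radius, playing the role of eps_2
thetas = np.linspace(0, 2 * np.pi, 3600, endpoint=False)
ok = True
for th in thetas:
    dx = eps * np.array([np.cos(th), np.sin(th)])
    # At a local constrained max, every small move either leaves the
    # constraint set (g goes up) or lowers the objective (f goes down).
    ok &= (g(x + dx) > 4.0) or (f(x + dx) < f(x))
print(ok)  # True: within this ball, no feasible direction improves f
```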

Of course, these derivative arguments only guarantee a local maximum on the constraint set. That is, there exists a neighborhood of $x$ such that $f$ is at least as large at $x$ as it is anywhere on the intersection of this neighborhood with the constraint set. Quasi-concavity and quasi-convexity are needed to ensure that $x$ is indeed a solution to the NPP, i.e., a global max on the constraint set. Quasi-concavity says that if there is a point $x'$ anywhere in the constraint set at which $f$ attains a strictly higher value than at $x$, then $f$ will be strictly higher than at $x$ on the whole line segment joining $x'$ to $x$. Quasi-convexity guarantees that the constraint set is convex, so that if $x'$ is in the constraint set, then so is the whole line segment. But this means that if there exists a point such as $x'$, then in any neighborhood of $x$ there will be points at which $f$ is strictly higher than it is at $x$, contradicting the fact that we have a local max.

We'll make the above informal argument precise, for the special case of maximizing $f(x)$ such that $g(x) \le b$, where $f$ is strictly quasi-concave with non-vanishing gradient and $g: \mathbb{R}^n \to \mathbb{R}$ is linear. Furthermore, we'll make the assumption that $v' Hf(x) v < 0$ for all $x$ and all $v \neq 0$ such that $\nabla f(x) v = 0$. Recall that this condition is sufficient for quasi-concavity, but not necessary; we don't want to have to deal with the case where it isn't satisfied.

Suppose that $x$ satisfies the KKT conditions, i.e., $\nabla f(x)$ and $\nabla g(x)$ are collinear. We need to show that there exists $\bar\epsilon > 0$ such that for all $dx \in B(0; \bar\epsilon)$, $g(x + dx) \le b$ implies $f(x + dx) - f(x) < 0$. Let $S$ denote the unit hypersphere, i.e., $S = \{v \in \mathbb{R}^n : \|v\| = 1\}$. Recall from the lecture on Taylor expansions that if $f$ is thrice continuously differentiable, then for all $x$, all $v \in S$ and all $\epsilon > 0$, there exists $\zeta(v) \in [0, 1]$ such that

$$f(x + \epsilon v) - f(x) = \epsilon\,\nabla f(x) v + \epsilon^2\, v' Hf(x) v/2 + \underbrace{\epsilon^3\, Tf^3(x + \zeta(v)\epsilon v;\ v)/6}_{\text{a cubic remainder term}}$$

so that for $\epsilon > 0$ and $dx = \epsilon v$,

$$f(x + dx) - f(x) = \epsilon^2 \Big[ \nabla f(x) v/\epsilon + v' Hf(x) v/2 + \epsilon\, Tf^3(x + \zeta(v)\epsilon v;\ v)/6 \Big].$$

By continuity and the fact that $S$ is compact, there exists $\omega$ such that for all $v \in S$, $|v' Hf(x) v/2| < \omega$ and $|Tf^3(x + \zeta(v)\epsilon v;\ v)/6| < \omega$.¹ Because $f$ is strictly quasi-concave, there exists $\gamma > 0$ such that $\nabla f(x) v = 0$ implies $v' Hf(x) v/2 < -2\gamma$. By continuity, therefore, there exists $\rho > 0$ such that for all $v \in S$, $|\nabla f(x) v| < \rho$ implies $v' Hf(x) v/2 < -\gamma$. Let $\bar\epsilon = \min[\rho, \gamma, \omega]/2\omega$ and observe that for all $v \in S$ and all $0 < \epsilon < \bar\epsilon$, if $dx = \epsilon v$, then $|\nabla f(x) v| \ge \rho$ implies

$$|\nabla f(x) v/\epsilon| > 2\omega, \qquad |v' Hf(x) v/2| < \omega, \qquad |\epsilon\, Tf^3(x + \zeta(v)\epsilon v;\ v)/6| < \omega,$$

while $|\nabla f(x) v| < \rho$ implies

$$v' Hf(x) v/2 < -\gamma, \qquad |\epsilon\, Tf^3(x + \zeta(v)\epsilon v;\ v)/6| < \gamma.$$

Therefore, for all $v \in S$, all $0 < \epsilon < \bar\epsilon$,

$$\mathrm{sign}\big(f(x + dx) - f(x)\big) = \begin{cases} \mathrm{sign}\big(\nabla f(x) v\big) & \text{if } |\nabla f(x) v| \ge \rho, \\ \mathrm{sign}\big(v' Hf(x) v/2\big) & \text{if } -\rho < \nabla f(x) v \le 0. \end{cases}$$

It follows, therefore, that for all $v \in S$, all $0 < \epsilon < \bar\epsilon$, $\nabla f(x) v \le 0$ implies $f(x + dx) - f(x) < 0$. Moreover, since the KKT conditions are satisfied, $\nabla f(x) v > 0$ implies $\nabla g(x) v > 0$. Now by assumption, $\nabla f(x) \neq 0$, so that $g(x) = b$. Since $g$ is linear, $\nabla g(x) v > 0$ implies $g(x + dx) > b$ (i.e., $x + dx$ is outside of the constraint set, so we don't have to worry about it). We have, therefore, shown

¹The function $h(v) = v' Hf(x) v$ is continuous w.r.t. $v$; since $\{v \in S : \nabla f(x) v = 0\}$ is compact, it follows from Weierstrass's theorem that $h(\cdot)$ attains a maximum on this set at, say, $\bar v$. Moreover, since $h(\cdot)$ is negative on this set, $h(\bar v) < 0$, and setting $\gamma = -h(\bar v)/4 > 0$ gives $v' Hf(x) v/2 \le h(\bar v)/2 = -2\gamma$ on the set. Now consider the function $Tf^3: \bar B(x; 1) \times S \to \mathbb{R}$, where $Tf^3(x'; v)$ is the third order Taylor term centered at $x'$ in the direction $v$. Since $f$ is thrice continuously differentiable, $Tf^3(\cdot\,; \cdot)$ is a continuous function. Since $\bar B(x; 1) \times S$ is compact, $Tf^3(\cdot\,; \cdot)$ is bounded. Hence there exists $\omega > 0$ such that for all $v \in S$, $|Tf^3(x + \zeta(v)\epsilon v;\ v)/6| < \omega$.
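To make the constants concrete, here is a numeric sketch (mine; it reuses the running example $f(x) = (x_1 x_2)^{1/2}$ at $x = (2, 1)$ and simply estimates $\omega$, $\gamma$, $\rho$ and $\bar\epsilon$ by sampling unit vectors, rather than deriving them analytically as the proof does):

```python
import numpy as np

x = np.array([2.0, 1.0])
grad = 0.25 * np.array([np.sqrt(2), 2 * np.sqrt(2)])   # grad f at (2,1)
# Hessian of f(x) = sqrt(x1*x2) at (2,1), computed by hand:
H = np.array([[-2 ** -1.5 / 4, 2 ** -0.5 / 4],
              [2 ** -0.5 / 4, -2 ** 1.5 / 8]])

thetas = np.linspace(0, 2 * np.pi, 100_000, endpoint=False)
V = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)   # sample of S
quad = 0.5 * np.einsum("ij,jk,ik->i", V, H, V)           # v'Hf(x)v / 2

omega = np.abs(quad).max() + 0.1          # bound on |v'Hv/2| (a full remainder
                                          # bound would need third derivatives)
tangent = np.abs(V @ grad) < 1e-4         # v nearly orthogonal to grad f
gamma = -quad[tangent].max() / 2          # v'Hv/2 < -2*gamma on the tangent set
rho = 0.05                                # a threshold |grad f . v| < rho ...
print(quad[np.abs(V @ grad) < rho].max() < -gamma)   # ... small enough: True
eps_bar = min(rho, gamma, omega) / (2 * omega)
print(omega, gamma, rho, eps_bar)         # concrete values of the proof's constants
```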

that $f$ attains a local maximum on the constraint set. Moreover, since the constraint set is convex and $f$ is quasi-concave, a local maximum is a global maximum. It follows that $x$ is a solution to the NPP.
