MATH2070 Optimisation


1 MATH2070 Optimisation Nonlinear optimisation with constraints Semester 2, 2012 Lecturer: I.W. Guo Lecture slides courtesy of J.R. Wishart

2 Review: the full nonlinear optimisation problem with equality constraints; the method of Lagrange multipliers; dealing with inequality constraints and the Kuhn-Tucker conditions; second order conditions with constraints.

3 The full nonlinear optimisation problem with equality constraints; Method of Lagrange multipliers; Dealing with inequality constraints and the Kuhn-Tucker conditions; Second order conditions with constraints.

4 Non-linear optimisation Interested in optimising a function of several variables with constraints. Multivariate framework: variables $x = (x_1, x_2, \dots, x_n) \in D \subseteq \mathbb{R}^n$; objective function $Z = f(x_1, x_2, \dots, x_n)$; constraint equations $g_i(x_1, x_2, \dots, x_n) = 0$ for $i = 1, \dots, m$ and $h_j(x_1, x_2, \dots, x_n) \leq 0$ for $j = 1, \dots, p$.

5 Vector notation Multivariate framework: minimise $Z = f(x)$ such that $g(x) = 0$ and $h(x) \leq 0$, where $g : \mathbb{R}^n \to \mathbb{R}^m$ and $h : \mathbb{R}^n \to \mathbb{R}^p$. Objectives? Find extrema of the above problem. Consider equality constraints first. Extension: allow inequality constraints.

6 Initial Example Minimise the heat loss through the surface area of an open-lid rectangular container. Formulated as a problem: minimise $S = 2xy + 2yz + xz$ such that $V = xyz$ is constant and $x, y, z > 0$. Parameters: $(x, y, z)$. Objective function: $f = S$. Constraint: $g = xyz - V$.

7 Graphical interpretation of problem [Figure: level contours of $f(x, y)$ and the constraint curve $g(x, y) = c$ touching at a point $P$.] Feasible optima occur at the tangential intersection of $g(x, y)$ and a level contour of $f(x, y)$: the gradient of $f$ is parallel to the gradient of $g$. Recall from linear algebra and vectors: $\nabla f + \lambda \nabla g = 0$ for some constant $\lambda$.

8 Lagrangian Definition Define the Lagrangian for our problem, $L(x, y, z, \lambda) = S(x, y, z) + \lambda g(x, y, z) = 2xy + 2yz + xz + \lambda(xyz - V)$. Necessary conditions: $L_x = L_y = L_z = L_\lambda = 0$. This is a system of four equations in the four unknowns $(x, y, z, \lambda)$ that needs to be solved.

9 Solving the system Calculate the derivatives: $L_x = 2y + z + \lambda yz$, $L_y = 2x + 2z + \lambda xz$, $L_z = 2y + x + \lambda xy$, $L_\lambda = xyz - V$. Ensure the constraint: $L_\lambda = 0 \Rightarrow z = V/(xy)$ (1). Eliminate $\lambda$ and solve for $(x, y, z)$: $L_x = 0 \Rightarrow 2y + z = -\lambda yz \Rightarrow -\lambda = 2/z + 1/y$ (2); $L_y = 0 \Rightarrow 2x + 2z = -\lambda xz \Rightarrow -\lambda = 2/z + 2/x$ (3); $L_z = 0 \Rightarrow 2y + x = -\lambda xy \Rightarrow -\lambda = 2/x + 1/y$ (4).

10 Solving system (continued) From (2) and (3) it follows that $x = 2y$. From (2) and (4) it follows that $x = z$. Use (1) with $x = z = 2y$, then simplify: $x^* = (x, y, z) = \left( \sqrt[3]{2V},\ \sqrt[3]{V/4},\ \sqrt[3]{2V} \right)$.
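
A quick numerical cross-check of this solution (not part of the original slides): a minimal SciPy sketch, where the volume $V = 4$ is an arbitrary choice so that the analytic answer is $(2, 1, 2)$.

```python
# Hedged sketch: verify the open-lid container solution numerically.
import numpy as np
from scipy.optimize import minimize

V = 4.0
surface = lambda x: 2*x[0]*x[1] + 2*x[1]*x[2] + x[0]*x[2]      # S = 2xy + 2yz + xz
volume = {'type': 'eq', 'fun': lambda x: x[0]*x[1]*x[2] - V}   # xyz = V

res = minimize(surface, x0=[1.0, 1.0, 1.0], constraints=[volume],
               bounds=[(1e-6, None)] * 3, method='SLSQP')

analytic = [(2*V)**(1/3), (V/4)**(1/3), (2*V)**(1/3)]
print(res.x, analytic)   # both approximately [2, 1, 2] when V = 4
```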

11 The full nonlinear optimisation problem with equality constraints; Method of Lagrange multipliers; Dealing with inequality constraints and the Kuhn-Tucker conditions; Second order conditions with constraints.

12 Constrained optimisation We wish to generalise the previous argument to the multivariate framework: minimise $Z = f(x)$, $x \in D \subseteq \mathbb{R}^n$, such that $g(x) = 0$ and $h(x) \leq 0$, where $g : \mathbb{R}^n \to \mathbb{R}^m$ and $h : \mathbb{R}^n \to \mathbb{R}^p$. Geometrically, the constraints define a subset $\Omega$ of $\mathbb{R}^n$, the feasible set, of dimension $n - m$ (less than $n$), in which the minimum lies.

13 Constrained Optimisation A value $x \in D$ is called feasible if both $g(x) = 0$ and $h(x) \leq 0$. An inequality constraint $h_i(x) \leq 0$ is active if the feasible $x \in D$ satisfies $h_i(x) = 0$, and inactive otherwise.

14 Equality Constraints A point $x^*$ which satisfies the constraint $g(x^*) = 0$ is a regular point if $\{\nabla g_1(x^*), \nabla g_2(x^*), \dots, \nabla g_m(x^*)\}$ is linearly independent, i.e. $\operatorname{Rank} \begin{pmatrix} \partial g_1/\partial x_1 & \cdots & \partial g_1/\partial x_n \\ \vdots & & \vdots \\ \partial g_m/\partial x_1 & \cdots & \partial g_m/\partial x_n \end{pmatrix} = m$.

15 Tangent Subspace Definition Define the constrained tangent subspace $T = \{d \in \mathbb{R}^n : \nabla g(x^*) d = 0\}$, where $\nabla g(x^*)$ is the $m \times n$ matrix $\nabla g(x^*) := \begin{pmatrix} \partial g_1/\partial x_1 & \cdots & \partial g_1/\partial x_n \\ \vdots & & \vdots \\ \partial g_m/\partial x_1 & \cdots & \partial g_m/\partial x_n \end{pmatrix} = \begin{pmatrix} \nabla g_1(x^*)^T \\ \vdots \\ \nabla g_m(x^*)^T \end{pmatrix}$ and $\nabla g_i^T := \left( \partial g_i/\partial x_1, \dots, \partial g_i/\partial x_n \right)$.
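
As a concrete illustration (not from the slides), $T$ is simply the null space of the Jacobian $\nabla g(x^*)$; the single constraint $g(x, y, z) = x + y + z - 3$ used here is a hypothetical example.

```python
# Hedged sketch: T = {d : grad g(x*) d = 0} as the null space of the Jacobian.
import numpy as np
from scipy.linalg import null_space

jac = np.array([[1.0, 1.0, 1.0]])   # grad g(x*) for g = x + y + z - 3 (m = 1)
E = null_space(jac)                 # columns: an orthonormal basis of T
print(E.shape)                      # (3, 2), i.e. dim T = n - m = 2
print(np.allclose(jac @ E, 0.0))    # True: each basis vector lies in T
```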

16 Tangent Subspace (Example)

17 Generalised Lagrangian Definition Define the Lagrangian for the objective function $f(x)$ with constraint equations $g(x) = 0$: $L(x, \lambda) := f + \sum_{i=1}^m \lambda_i g_i = f + \lambda^T g$. Introduce a Lagrange multiplier for each equality constraint.

18 First order necessary conditions Theorem Assume $x^*$ is a regular point of $g(x) = 0$ and let $x^*$ be a local extremum of $f$ subject to $g(x) = 0$, with $f, g \in C^1$. Then there exists $\lambda \in \mathbb{R}^m$ such that $\nabla L(x^*, \lambda) = \nabla f(x^*) + \lambda^T \nabla g(x^*) = 0$. In sigma notation: $\partial L(x^*)/\partial x_j = \partial f(x^*)/\partial x_j + \sum_{i=1}^m \lambda_i \, \partial g_i(x^*)/\partial x_j = 0$ for $j = 1, \dots, n$; and $\partial L(x^*)/\partial \lambda_i = g_i(x^*) = 0$ for $i = 1, \dots, m$.

19 Another example Example Find the extremum of $f(x, y) = \frac{1}{2}(x-1)^2 + \frac{1}{2}(y-2)^2 + 1$ subject to the single constraint $x + y = 1$. Solution: construct the Lagrangian $L(x, y, \lambda) = \frac{1}{2}(x-1)^2 + \frac{1}{2}(y-2)^2 + 1 + \lambda(x + y - 1)$. Then the necessary conditions are: $L_x = x - 1 + \lambda = 0$; $L_y = y - 2 + \lambda = 0$; $L_\lambda = 0 \Rightarrow x + y = 1$.

20 Solve system Can solve this by converting the problem into matrix form, i.e. solve for $a$ in $Ma = b$: $\begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 1 & 1 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \\ \lambda \end{pmatrix} = \begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix}$. Inverting the matrix $M$ and solving gives $(x, y, \lambda) = (0, 1, 1)$.
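
As an aside (not from the slides), the $Ma = b$ step is a one-liner in NumPy; this sketch just reproduces the $3 \times 3$ solve above.

```python
# Hedged sketch: the slide's linear system M a = b solved with NumPy.
import numpy as np

M = np.array([[1.0, 0.0, 1.0],    # x     + lambda = 1
              [0.0, 1.0, 1.0],    #     y + lambda = 2
              [1.0, 1.0, 0.0]])   # x + y          = 1
b = np.array([1.0, 2.0, 1.0])

x, y, lam = np.linalg.solve(M, b)
print(x, y, lam)                  # 0.0 1.0 1.0
```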

21 Comparison with unconstrained problem Example Minimise $f(x, y) = \frac{1}{2}(x-1)^2 + \frac{1}{2}(y-2)^2 + 1$ subject to the single constraint $x + y = 1$. In the unconstrained problem, the minimum is attained at $x^* = (1, 2)$, with minimum $f^* = f(1, 2) = 1$. The constraint modifies the solution: the constrained minimum is at $f^* = f(0, 1) = 2$. Note: the constrained minimum is bounded from below by the unconstrained minimum.

22 Use of convex functions: Example Example Minimise $f(x, y) = x^2 + y^2 + xy - 5y$ such that $x + 2y = 5$. The constraint is linear (therefore convex) and the Hessian matrix of $f$ is positive definite, so $f$ is convex as well. The Lagrangian is $L(x, y, \lambda) = x^2 + y^2 + xy - 5y + \lambda(x + 2y - 5)$.

23 Solve The first order necessary conditions are: $\partial L/\partial x = 2x + y + \lambda = 0$ (5); $\partial L/\partial y = 2y + x - 5 + 2\lambda = 0$ (6); $\partial L/\partial \lambda = x + 2y - 5 = 0$ (7). In matrix form the system is $\begin{pmatrix} 2 & 1 & 1 \\ 1 & 2 & 2 \\ 1 & 2 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \\ \lambda \end{pmatrix} = \begin{pmatrix} 0 \\ 5 \\ 5 \end{pmatrix}$. Inverting and solving yields $(x, y, \lambda) = (-5/3, 10/3, 0)$.
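
A short note on why $\lambda = 0$ here (an observation added for this transcript, not on the slide): the unconstrained minimiser of this convex $f$ happens to satisfy $x + 2y = 5$ already, so the constraint is inactive at the optimum. A NumPy check:

```python
# Hedged sketch: the unconstrained minimiser already sits on the constraint.
import numpy as np

# Unconstrained FOC: grad f = (2x + y, x + 2y - 5) = 0
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
x, y = np.linalg.solve(A, np.array([0.0, 5.0]))
print(x, y)               # -5/3, 10/3
print(x + 2*y)            # 5.0 -> the equality constraint holds automatically
```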

24 The full nonlinear optimisation problem with equality constraints; Method of Lagrange multipliers; Dealing with inequality constraints and the Kuhn-Tucker conditions; Second order conditions with constraints.

25 Inequality constraints We now consider constrained optimisation problems with $p$ inequality constraints: minimise $f(x_1, x_2, \dots, x_n)$ subject to $h_j(x_1, x_2, \dots, x_n) \leq 0$, $j = 1, \dots, p$. In vector form the problem is: minimise $f(x)$ subject to $h(x) \leq 0$.

26 The Kuhn-Tucker Conditions To deal with the inequality constraints, introduce the so-called Kuhn-Tucker conditions: $\partial f/\partial x_j + \sum_{i=1}^p \lambda_i \, \partial h_i/\partial x_j = 0$, $j = 1, \dots, n$; $\lambda_i h_i = 0$, $i = 1, \dots, p$; $\lambda_i \geq 0$, $i = 1, \dots, p$. The extra conditions on $\lambda$ apply in this sense: $h_i(x) = 0$ if $\lambda_i > 0$, or $h_i(x) < 0$ if $\lambda_i = 0$, for $i = 1, \dots, p$. A numerical check of these conditions is sketched below.
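
The following is a hedged sketch (not from the slides) of how one might test the Kuhn-Tucker conditions numerically at a candidate point; `grad_f`, `grad_h_list` and `h_list` are assumed user-supplied callables returning NumPy arrays and floats.

```python
# Hedged sketch: numerical Kuhn-Tucker check at a candidate (x, lam).
import numpy as np

def kt_check(grad_f, grad_h_list, h_list, x, lam, tol=1e-8):
    """True if (x, lam) satisfies stationarity, complementary slackness,
    dual feasibility (lam >= 0) and primal feasibility (h <= 0)."""
    lam = np.asarray(lam, dtype=float)
    stationarity = grad_f(x) + sum(l * g(x) for l, g in zip(lam, grad_h_list))
    h_vals = np.array([h(x) for h in h_list])
    return (np.allclose(stationarity, 0.0, atol=tol)      # df/dx + sum lam dh/dx = 0
            and np.allclose(lam * h_vals, 0.0, atol=tol)  # lam_i h_i = 0
            and np.all(lam >= -tol)                       # lam_i >= 0
            and np.all(h_vals <= tol))                    # h_i(x) <= 0
```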

27 Graphical Interpretation [Figure: level contours of $f(x, y)$; the region $h(x, y) < 0$ (where $\lambda = 0$), the boundary $h(x, y) = 0$ (where $\lambda > 0$), the region $h(x, y) > 0$, and a point $P$.] If $\lambda > 0$ then the solution is found on the line $h(x, y) = 0$. If $\lambda = 0$ then the solution is found in the region $h(x, y) < 0$.

28 Graphical Interpretation [Figure: level contours of $f(x, y)$ with the optimum $(x^*, y^*)$ on the boundary $h(x, y) = 0$ ($\lambda > 0$).] Consider $h(x, y) = 0$; then $\lambda > 0$. Optimum at $x^* = (x^*, y^*)$.

29 Graphical Interpretation [Figure: level contours of $f(x, y)$ with the optimum $(x^*, y^*)$ inside the region $h(x, y) < 0$ ($\lambda = 0$).] Consider $h(x, y) < 0$; then $\lambda = 0$. Optimum at $x^* = (x^*, y^*)$.

30 Kuhn-Tucker The Kuhn-Tucker conditions are always necessary. The Kuhn-Tucker conditions are sufficient if the objective function and the constraint functions are convex. We need a procedure to deal with these conditions.

31 Slack Variables To deal with the inequality constraints, introduce a slack variable $s_i^2$ for each constraint. This gives a new constrained optimisation problem with $p$ equality constraints: minimise $f(x_1, x_2, \dots, x_n)$ subject to $h_i(x_1, x_2, \dots, x_n) + s_i^2 = 0$, $i = 1, \dots, p$.

32 New Lagrangian The Lagrangian for the constrained problem is $L(x, \lambda, s) = f(x) + \sum_{i=1}^p \lambda_i \left( h_i(x) + s_i^2 \right) = f(x) + \lambda^T h(x) + \lambda^T s$, where $s$ is the vector in $\mathbb{R}^p$ with $i$-th component $s_i^2$. $L$ depends on the $n + 2p$ independent variables $x_1, x_2, \dots, x_n$, $\lambda_1, \lambda_2, \dots, \lambda_p$ and $s_1, s_2, \dots, s_p$. To find the constrained minimum we must differentiate $L$ with respect to each of these variables and set the result to zero in each case.

33 Solving the system Need to solve for the $n + 2p$ variables using the $n + 2p$ equations generated by $\partial L/\partial x_1 = \partial L/\partial x_2 = \dots = \partial L/\partial x_n = 0$ and $\partial L/\partial \lambda_1 = \dots = \partial L/\partial \lambda_p = \partial L/\partial s_1 = \dots = \partial L/\partial s_p = 0$. Having $\lambda_i \geq 0$ ensures that $L = f(x) + \sum_{i=1}^p \lambda_i \left( h_i(x) + s_i^2 \right)$ has a well defined minimum: if $\lambda_i < 0$, we can always make $L$ smaller via the $\lambda_i s_i^2$ term.

34 New first order equations The differentiation $\partial L/\partial \lambda_i = 0$ ensures the constraints hold: $\partial L/\partial \lambda_i = 0 \Rightarrow h_i + s_i^2 = 0$, $i = 1, \dots, p$. The differentiation $\partial L/\partial s_i = 0$ controls the region where the minimising occurs: $\partial L/\partial s_i = 0 \Rightarrow 2\lambda_i s_i = 0$, $i = 1, \dots, p$. If $\lambda_i > 0$ we are on the boundary, since $s_i = 0$. If $\lambda_i = 0$ we are inside the feasible region, since $s_i^2 \geq 0$. If $\lambda_i < 0$ we are on the boundary again, but the point is not a local minimum. So we focus only on $\lambda_i \geq 0$.

35 Simple application Example Consider the general problem: minimise $f(x)$ subject to $a \leq x \leq b$. Re-express the problem to allow the Kuhn-Tucker conditions: minimise $f(x)$ subject to $-x + a \leq 0$ and $x - b \leq 0$. The Lagrangian $L$ is given by $L = f + \lambda_1 (-x + s_1^2 + a) + \lambda_2 (x + s_2^2 - b)$.

36 Solving system The first order equations are: $\partial L/\partial x = f'(x) - \lambda_1 + \lambda_2 = 0$; $\partial L/\partial \lambda_1 = -x + a + s_1^2 = 0$; $\partial L/\partial \lambda_2 = x - b + s_2^2 = 0$; $\partial L/\partial s_1 = 2\lambda_1 s_1 = 0$; $\partial L/\partial s_2 = 2\lambda_2 s_2 = 0$. Consider the scenarios for when the constraints are at the boundary or inside the feasible region.

37 Possible scenarios [Figure: three panels on $[a, b]$ labelled DOWN ($f'(x) > 0$), IN ($f'(x) = 0$) and UP ($f'(x) < 0$).] Down variable scenario (lower boundary): $s_1 = 0 \Rightarrow \lambda_1 > 0$ and $s_2^2 > 0 \Rightarrow \lambda_2 = 0$. This implies $x = a$ and $f'(x) = f'(a) = \lambda_1 > 0$, so the minimum lies at $x = a$.

38 Possible scenarios [Figure: three panels DOWN, IN, UP as before.] In variable scenario (inside the feasible region): $s_1^2 > 0 \Rightarrow \lambda_1 = 0$ and $s_2^2 > 0 \Rightarrow \lambda_2 = 0$. This implies $f'(x) = 0$ for some $x \in [a, b]$ determined by the specific $f(x)$, and thus there is a turning point somewhere in the interval $[a, b]$.

39 Possible scenarios [Figure: three panels DOWN, IN, UP as before.] Up variable scenario (upper boundary): $s_1^2 > 0 \Rightarrow \lambda_1 = 0$ and $s_2 = 0 \Rightarrow \lambda_2 > 0$. This implies $x = b$, so that $f'(x) = f'(b) = -\lambda_2 < 0$, and we have the minimum at $x = b$.
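
The three scenarios can be summarised in a few lines of Python (an illustrative sketch, not from the slides; the quadratic $f(x) = (x - c)^2$ is a hypothetical test function):

```python
# Hedged sketch of the DOWN / IN / UP logic for min f(x) on [a, b].
def box_minimum(fprime, a, b):
    """Classify via the slides' logic: DOWN if f'(a) > 0, UP if f'(b) < 0,
    otherwise an interior stationary point IN (assumes f is convex)."""
    if fprime(a) > 0:            # lambda_1 = f'(a) > 0: minimum at x = a
        return 'DOWN', a
    if fprime(b) < 0:            # lambda_2 = -f'(b) > 0: minimum at x = b
        return 'UP', b
    return 'IN', None            # f'(x) = 0 somewhere in [a, b]

for c in (-1.0, 0.5, 2.0):       # unconstrained minimiser of (x - c)^2 is x = c
    print(c, box_minimum(lambda x: 2*(x - c), 0.0, 1.0))
# c = -1: DOWN at a = 0;  c = 0.5: IN;  c = 2: UP at b = 1
```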

40 Numerical example Example Minimise $f(x_1, x_2, x_3) = x_1(x_1 - 10) + x_2(x_2 - 50) - 2x_3$ subject to $x_1 + x_2 \leq 10$ and $x_3 \leq 10$. Write down the Lagrangian $L(x, \lambda, s) = f(x) + \lambda^T (h(x) + s)$, where $h(x) = \begin{pmatrix} x_1 + x_2 - 10 \\ x_3 - 10 \end{pmatrix}$ and $s = \begin{pmatrix} s_1^2 \\ s_2^2 \end{pmatrix}$.

41 Solve the system of first order equations: $\partial L/\partial x_1 = 2x_1 - 10 + \lambda_1 = 0$ (8); $\partial L/\partial x_2 = 2x_2 - 50 + \lambda_1 = 0$ (9); $\partial L/\partial x_3 = -2 + \lambda_2 = 0$ (10). The Kuhn-Tucker conditions are $\lambda_1, \lambda_2 \geq 0$ and $\lambda_1 (x_1 + x_2 - 10) = 0$ (11); $\lambda_2 (x_3 - 10) = 0$ (12). Immediately from (10) and (12), $\lambda_2 = 2$ and $x_3 = 10$.

42 Two remaining cases to consider Inside the feasible set, $\lambda_1 = 0$: from equations (8) and (9) we find $x = (x_1, x_2, x_3) = (5, 25, 10)$. However, the constraint $x_1 + x_2 \leq 10$ is violated, so reject this solution! At the boundary, $\lambda_1 > 0$: using constraint equation (11), $x_1 + x_2 = 10$. This gives the system $2x_1 - 10 + \lambda_1 = 0$, $2x_2 - 50 + \lambda_1 = 0$, $x_1 + x_2 = 10$. Solving gives $\lambda_1 = 20$ and $x_1 = -5$, $x_2 = 15$.
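
A numerical cross-check with SciPy (illustrative, not part of the slides); SLSQP should recover $x^* = (-5, 15, 10)$ with $f^* = -470$:

```python
# Hedged sketch: verify the Kuhn-Tucker solution of the numerical example.
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0]*(x[0] - 10) + x[1]*(x[1] - 50) - 2*x[2]
cons = [{'type': 'ineq', 'fun': lambda x: 10 - x[0] - x[1]},   # x1 + x2 <= 10
        {'type': 'ineq', 'fun': lambda x: 10 - x[2]}]          # x3 <= 10

res = minimize(f, x0=[0.0, 0.0, 0.0], constraints=cons, method='SLSQP')
print(res.x)     # approximately [-5. 15. 10.]
print(res.fun)   # approximately -470.0
```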

43 The full nonlinear optimisation problem with equality constraints; Method of Lagrange multipliers; Dealing with inequality constraints and the Kuhn-Tucker conditions; Second order conditions with constraints.

44 Motivation Determine sufficient conditions for minima (maxima) given a set of constraints. Define the operator $\nabla^2$ to denote the Hessian matrix, $\nabla^2 f := H_f$. What are the necessary/sufficient conditions on $\nabla^2 f$ and $\{\nabla^2 g_j\}_{j=1}^m$ that ensure a constrained minimum? Consider equality constraints first, then extend to inequality constraints.

45 Notation The Hessians of the constraint equations in the upcoming results need care. Define the constraint Hessian involving the Lagrange multipliers as $\lambda^T \nabla^2 g(x) := \sum_{i=1}^m \lambda_i \nabla^2 g_i(x) = \sum_{i=1}^m \lambda_i H_{g_i}(x)$.

46 Necessary second order condition for a constrained minimum with equality constraints Theorem Suppose that $g \in C^2$, and that $x^*$ is a regular point of $g$ and a local minimum of $f$ subject to $g(x) = 0$. Then there exists a $\lambda \in \mathbb{R}^m$ such that $\nabla f(x^*) + \lambda^T \nabla g(x^*) = 0$. Also, on the tangent plane $T = \{y : \nabla g(x^*) y = 0\}$, the Lagrangian Hessian at $x^*$, $\nabla^2 L(x^*) := \nabla^2 f(x^*) + \lambda^T \nabla^2 g(x^*)$, is positive semi-definite, i.e. $y^T \nabla^2 L(x^*) y \geq 0$ for all $y \in T$.

47 Sufficient second order condition for a constrained minimum with equality constraints Theorem Suppose that $x^*$ and $\lambda$ satisfy $g(x^*) = 0$ and $\nabla f(x^*) + \lambda^T \nabla g(x^*) = 0$. Suppose further that the Lagrangian Hessian at $x^*$, $\nabla^2 L(x^*) := \nabla^2 f(x^*) + \lambda^T \nabla^2 g(x^*)$, satisfies $y^T \nabla^2 L(x^*) y > 0$ for all nonzero $y \in T$. Then $x^*$ is a local minimum of $f$ subject to $g(x) = 0$.

48 Determine if positive/negative definite Options for determining the definiteness of $\nabla^2 L(x^*)$ on the subspace $T$: complete the square of $y^T \nabla^2 L(x^*) y$, or compute the eigenvalues of $\nabla^2 L(x^*)$ restricted to $T$. Example Max $Z = x_1 x_2 + x_1 x_3 + x_2 x_3$ subject to $x_1 + x_2 + x_3 = 3$. (The first order conditions for this example are solved in the sketch below; the second order check appears with slides 54 and 55.)
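
As promised above, a sketch (not from the slides) solving the first order conditions of this example: stationarity of $L = x_1 x_2 + x_1 x_3 + x_2 x_3 + \lambda(x_1 + x_2 + x_3 - 3)$ together with the constraint forms a linear system in $(x_1, x_2, x_3, \lambda)$.

```python
# Hedged sketch: first order conditions for the slide 48 example.
import numpy as np

# Rows: x2 + x3 + lam = 0; x1 + x3 + lam = 0; x1 + x2 + lam = 0;
#       x1 + x2 + x3 = 3.
M = np.array([[0.0, 1.0, 1.0, 1.0],
              [1.0, 0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0, 0.0]])
b = np.array([0.0, 0.0, 0.0, 3.0])
print(np.linalg.solve(M, b))   # [1. 1. 1. -2.]: x* = (1,1,1), lambda = -2
```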

49 Extension: Inequality constraints Problem: Min $f(x)$ subject to $g(x) = 0$ and $h(x) \leq 0$, where $f, g, h \in C^1$. Extend the arguments to include both equality and inequality constraints. Notation: define the index set $J$. Definition Let $J(x^*)$ be the index set of active constraints: $h_j(x^*) = 0$ for $j \in J$; $h_j(x^*) < 0$ for $j \notin J$.

50 First order necessary conditions The point $x^*$ is called a regular point of the constraints $g(x) = 0$ and $h(x) \leq 0$ if $\{\nabla g_i\}_{i=1}^m$ and $\{\nabla h_j\}_{j \in J}$ are linearly independent. Theorem Let $x^*$ be a local minimum of $f(x)$ subject to the constraints $g(x) = 0$ and $h(x) \leq 0$. Suppose further that $x^*$ is a regular point of the constraints. Then there exist $\lambda \in \mathbb{R}^m$ and $\mu \in \mathbb{R}^p$ with $\mu \geq 0$ such that $\nabla f(x^*) + \lambda^T \nabla g(x^*) + \mu^T \nabla h(x^*) = 0$ and $\mu^T h(x^*) = 0$. (13)

51 Second order necessary conditions Theorem Let $x^*$ be a regular point of $g(x) = 0$, $h(x) \leq 0$ with $f, g, h \in C^2$. If $x^*$ is a local minimum point of the fully constrained problem then there exist $\lambda \in \mathbb{R}^m$ and $\mu \in \mathbb{R}^p$ such that (13) holds and $\nabla^2 f(x^*) + \sum_{i=1}^m \lambda_i \nabla^2 g_i(x^*) + \sum_{j=1}^p \mu_j \nabla^2 h_j(x^*)$ is positive semi-definite on the tangent space of the active constraints, i.e. on the space $T = \{d \in \mathbb{R}^n : \nabla g(x^*) d = 0; \ \nabla h_j(x^*) d = 0, \ j \in J\}$.

52 Second order sufficient conditions Theorem Let $x^*$ satisfy $g(x^*) = 0$, $h(x^*) \leq 0$. Suppose there exist $\lambda \in \mathbb{R}^m$ and $\mu \in \mathbb{R}^p$ such that $\mu \geq 0$, (13) holds, and $\nabla^2 f(x^*) + \sum_{i=1}^m \lambda_i \nabla^2 g_i(x^*) + \sum_{j=1}^p \mu_j \nabla^2 h_j(x^*)$ is positive definite on the tangent space of the active constraints, i.e. on the space $T = \{d \in \mathbb{R}^n : \nabla g(x^*) d = 0; \ \nabla h_j(x^*) d = 0, \ j \in J\}$, where $J = \{j : h_j(x^*) = 0, \ \mu_j > 0\}$. Then $x^*$ is a strict local minimum of $f$ subject to the full constraints.

53 Ways to compute the sufficient conditions The nature of the Hessian of the Lagrangian function, together with its auxiliary conditions, can be determined in two ways: 1. Compute eigenvalues on the subspace $T$. 2. Determine the nature of the Bordered Hessian.

54 Eigenvalues on subspaces Let $\{e_i\}_{i=1}^{n-m}$ be an orthonormal basis of $T$. Then any $d \in T$ can be written $d = Ez$, where $z \in \mathbb{R}^{n-m}$ and $E = [e_1 \ e_2 \ \dots \ e_{n-m}]$. Then use $d^T \nabla^2 L \, d = z^T \left( E^T \nabla^2 L(x^*) E \right) z$, so the definiteness of $\nabla^2 L(x^*)$ on $T$ is that of the $(n-m) \times (n-m)$ matrix $E^T \nabla^2 L(x^*) E$.
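
Applying this recipe to the slide 48 example (an illustrative sketch, not from the slides): the constraint is linear, so $\nabla^2 L = \nabla^2 f$ there, and `scipy.linalg.null_space` supplies the orthonormal basis $E$.

```python
# Hedged sketch: eigenvalues of E^T (grad^2 L) E for the slide 48 example.
import numpy as np
from scipy.linalg import null_space

H = np.array([[0.0, 1.0, 1.0],     # grad^2 L at x* = (1,1,1); the constraint
              [1.0, 0.0, 1.0],     # is linear, so grad^2 L = grad^2 f
              [1.0, 1.0, 0.0]])
E = null_space(np.array([[1.0, 1.0, 1.0]]))   # basis of T = {d : d1+d2+d3 = 0}

print(np.linalg.eigvalsh(E.T @ H @ E))        # [-1. -1.]
```

Both eigenvalues of $E^T \nabla^2 L E$ are $-1$, so the Lagrangian Hessian is negative definite on $T$ and $x^* = (1, 1, 1)$ is a strict constrained maximum, as the Max problem requires.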

55 Bordered Hessian An alternative method for determining the behaviour of the Lagrangian Hessian. Definition Given an objective function $f$ with constraints $g : \mathbb{R}^n \to \mathbb{R}^m$, define the Bordered Hessian matrix $B(x, \lambda) := \begin{pmatrix} \nabla^2 L(x, \lambda) & \nabla g(x)^T \\ \nabla g(x) & 0 \end{pmatrix}$. Then determine the nature of the Bordered Hessian.
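
For completeness, the bordered Hessian of the same slide 48 example, assembled with NumPy (illustrative, not from the slides):

```python
# Hedged sketch: assemble B(x*, lambda) for the slide 48 example.
import numpy as np

H = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])          # grad^2 L(x*, lambda)
J = np.array([[1.0, 1.0, 1.0]])          # grad g(x*), an m x n matrix (m = 1)

B = np.block([[H, J.T],
              [J, np.zeros((1, 1))]])    # ( grad^2 L  grad g^T ; grad g  0 )
print(np.linalg.det(B))                  # -3.0; the signs of the trailing
                                         # principal minors of B classify the
                                         # constrained extremum
```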
