Optimality, Duality, Complementarity for Constrained Optimization


1 Optimality, Duality, Complementarity for Constrained Optimization

Stephen Wright, University of Wisconsin-Madison, May 2014

2 Linear Programming

The fundamental problem in constrained optimization is linear programming (LP):
- continuous variables, gathered into a vector x ∈ R^n;
- a linear objective function (just one!);
- linear constraints (usually many!), which can be equalities or inequalities.

A standard form of LP is:

    min_x  c^T x   subject to   Ax = b,  x ≥ 0,

where
- x ∈ R^n are the variables;
- A ∈ R^{m×n} is the constraint matrix;
- b ∈ R^m is the right-hand side of the constraints;
- c ∈ R^n is the cost vector;
- x ≥ 0 means that we require all components of x to be nonnegative.
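As a minimal sketch (not from the slides, with made-up data c, A, b), a tiny standard-form LP can be solved with SciPy's linprog, whose default variable bounds (0, None) encode exactly the x ≥ 0 requirement:

```python
import numpy as np
from scipy.optimize import linprog

# A tiny standard-form LP:  min c^T x  s.t.  Ax = b, x >= 0  (made-up data).
c = np.array([-1.0, -2.0])          # cost vector
A = np.array([[1.0, 1.0]])          # constraint matrix (m = 1, n = 2)
b = np.array([4.0])                 # right-hand side

# linprog's default bounds (0, None) on each variable encode x >= 0.
res = linprog(c, A_eq=A, b_eq=b, method="highs")
print(res.x, res.fun)               # all weight goes to the cheaper variable
```

Here the optimizer puts all the available "mass" (x_1 + x_2 = 4) on the component with the most negative cost, giving x = (0, 4) and objective −8.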

3 An LP in Two Variables

[Figure: an LP in two variables.]

4 Other Forms of LP

Any LP can be converted to the standard form by adding extra variables and constraints, and doing other simple manipulations.

Example 1: the constraint 3x_1 + 5x_2 ≥ 0 can be converted to standard form by introducing a slack variable s_1:

    3x_1 + 5x_2 − s_1 = 0,   s_1 ≥ 0.

Example 2: the free variable x_10 can be replaced by a difference of two nonnegative variables:

    x_10 = x_10^+ − x_10^−,   x_10^+ ≥ 0,  x_10^− ≥ 0.
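A quick sanity check of these manipulations (a sketch with a made-up LP, not from the slides): solve an LP once in its original form, with an inequality and a free variable, and once after converting to standard form with a slack and a variable split; the two optimal values should agree.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative LP (made up):  min 2*x1 + x2  s.t.  3*x1 + 5*x2 >= 6, x1 >= 0, x2 free.
# linprog uses A_ub @ x <= b_ub, so negate the >= constraint.
res_orig = linprog(c=[2.0, 1.0], A_ub=[[-3.0, -5.0]], b_ub=[-6.0],
                   bounds=[(0, None), (None, None)], method="highs")

# Standard form: variables (x1, x2p, x2m, s), with x2 = x2p - x2m and slack s:
#   3*x1 + 5*x2p - 5*x2m - s = 6,  all four variables >= 0.
res_std = linprog(c=[2.0, 1.0, -1.0, 0.0],
                  A_eq=[[3.0, 5.0, -5.0, -1.0]], b_eq=[6.0], method="highs")

print(res_orig.fun, res_std.fun)    # the two optimal values agree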

5 Does it have a solution?

There are three possible outcomes for an LP. It can be:

INFEASIBLE: There is no x that satisfies all the constraints:

    min_{x_1, x_2}  3x_1 + 2x_2   s.t.  x_1 + x_2 = −3,  x_1 ≥ 0,  x_2 ≥ 0.

UNBOUNDED: There is a feasible ray along which the objective decreases to −∞:

    min_{x_1}  x_1   s.t.  x_1 ≤ 0.

OPTIMAL: The LP has one or more points that achieve the optimal objective value.
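The three outcomes can be observed directly in a solver. A sketch using SciPy's linprog, which (per its documentation) reports status 0 for optimal, 2 for infeasible, and 3 for unbounded:

```python
from scipy.optimize import linprog

# INFEASIBLE: no x1, x2 >= 0 can sum to -3.
infeas = linprog(c=[3.0, 2.0], A_eq=[[1.0, 1.0]], b_eq=[-3.0], method="highs")

# UNBOUNDED: min x1 s.t. x1 <= 0 decreases without limit along the ray x1 -> -inf.
unbdd = linprog(c=[1.0], bounds=[(None, 0)], method="highs")

print(infeas.status, unbdd.status)   # 2 (infeasible), 3 (unbounded)
```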

6 LP Dual

Associated with any LP is another LP called its dual. Together, these two LPs form a primal-dual pair of LPs.

The dual takes the same data that defines the primal LP (A, b, c) but arranges it differently:
- the cost vector switches with the right-hand side;
- the constraint matrix is transposed.

Primal and dual give two different perspectives on the same data.

    (Primal)  min_x  c^T x   subject to  Ax = b,  x ≥ 0,
    (Dual)    max_λ  b^T λ   subject to  A^T λ ≤ c.

We can introduce a slack s to get an alternative form of the dual:

    (Dual)  max_{λ,s}  b^T λ   subject to  A^T λ + s = c,  s ≥ 0.
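This rearrangement of data can be checked numerically (a sketch with made-up data, not from the slides): solve the primal and its dual with linprog and compare the two optimal objective values.

```python
import numpy as np
from scipy.optimize import linprog

# Primal:  min c^T x  s.t.  Ax = b, x >= 0  (made-up data).
c = np.array([-1.0, -2.0]); A = np.array([[1.0, 1.0]]); b = np.array([4.0])
primal = linprog(c, A_eq=A, b_eq=b, method="highs")

# Dual:  max b^T lam  s.t.  A^T lam <= c, lam free.
# linprog minimizes, so we minimize -b^T lam and negate the result.
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(None, None)], method="highs")

print(primal.fun, -dual.fun)   # equal optimal values
```

The equality of the two printed values is exactly the strong-duality statement on the next slides; weak duality alone guarantees primal ≥ dual.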

7 LP Duality

Practically speaking, the dual may be easier to solve than the primal. More importantly, the primal and dual problems give a great deal of valuable information about each other.

There are two big theorems about LP duality:
- Weak duality: one-line proof!
- Strong duality: really hard to prove!

Theorem (Weak Duality). If x is feasible for the primal and (λ, s) is feasible for the dual, we have c^T x ≥ b^T λ.

Proof.

    b^T λ = x^T A^T λ = x^T (A^T λ − c) + c^T x ≤ c^T x,

since x ≥ 0 and A^T λ − c ≤ 0.

8 LP Duality

Theorem (Strong Duality). Given a primal-dual pair of LPs, exactly one of the following three statements is true.
(a) Both are feasible, in which case both have solutions, and their objectives are equal at the solutions: c^T x = b^T λ.
(b) One of the pair is unbounded, in which case the other is infeasible.
(c) Both are infeasible.

Proof. One way is to show that the simplex method works, in which case it can resolve cases (a) and (b). But this is not at all trivial; in particular, we have to enhance simplex to avoid getting stuck. (c) can be illustrated with a simple example.

9 LP General Form

For purposes of finding duals, etc., it can be a pain to convert to standard form, take the dual, then simplify. We can shorten this process by taking a general-form LP and defining its dual directly.

Primal:

    min_{x,y}  c^T x + d^T y   s.t.  Ax + By ≥ b,  Ex + Fy = g,  x ≥ 0.

Dual:

    max_{u,v}  b^T u + g^T v   s.t.  A^T u + E^T v ≤ c,  B^T u + F^T v = d,  u ≥ 0.

Dual variable u is associated with the first primal constraint. We have u ≥ 0 because this constraint is an inequality.
Dual variable v is associated with the second primal constraint. We have v free because this constraint is an equality.

10 Karush-Kuhn-Tucker (KKT) Conditions

Back to standard form... KKT conditions are a set of algebraic conditions satisfied whenever x is a primal solution and (λ, s) is a dual solution:

    Ax = b,
    A^T λ + s = c,
    0 ≤ x ⊥ s ≥ 0,

where x ⊥ s means x^T s = 0: perpendicularity. The last KKT condition means that x_i = 0 AND/OR s_i = 0 for all i = 1, 2, ..., n.

Another strategy for solving an LP would be to solve the KKT conditions. In fact, this strategy would yield solutions to both primal and dual. Primal-dual interior-point methods use this strategy.
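The three KKT conditions can be verified numerically at computed primal and dual solutions. A sketch with made-up data (the same kind of small standard-form LP as earlier, not from the slides):

```python
import numpy as np
from scipy.optimize import linprog

# Solve primal and dual for a small standard-form LP (made-up data),
# then check the KKT conditions numerically.
c = np.array([-1.0, -2.0]); A = np.array([[1.0, 1.0]]); b = np.array([4.0])

x = linprog(c, A_eq=A, b_eq=b, method="highs").x
lam = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(None, None)], method="highs").x
s = c - A.T @ lam                        # dual slack from A^T lam + s = c

tol = 1e-8
print(np.allclose(A @ x, b),             # Ax = b
      bool(np.all(x >= -tol)),           # x >= 0
      bool(np.all(s >= -tol)),           # s >= 0
      abs(x @ s) < tol)                  # complementarity x^T s = 0
```

In this instance x = (0, 4) and s = (1, 0): in each coordinate exactly one of x_i and s_i is nonzero, which is the AND/OR statement above.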

11 KKT for General Form

We can write KKT conditions for more general LP formulations. For the one on the earlier slide, we have

    0 ≤ Ax + By − b ⊥ u ≥ 0,
    Ex + Fy = g,
    0 ≤ x ⊥ c − A^T u − E^T v ≥ 0,
    B^T u + F^T v = d.

KKT conditions consist of:
- all primal and dual constraints, and
- complementarity between each inequality constraint and its corresponding dual variable.

12 Example

Consider minimization of a linear function over the simplex:

    min_x  c^T x   s.t.  Σ_{i=1}^n x_i = 1,  x ≥ 0.

The dual is:

    max_λ  λ   s.t.  λ ≤ c_i,  i = 1, 2, ..., n.

The KKT conditions are:

    Σ_{i=1}^n x_i = 1,   0 ≤ x_i ⊥ (c_i − λ) ≥ 0,  i = 1, 2, ..., n.

Solution of the dual is totally obvious, by inspection: λ* = min_i c_i. We can use the KKT conditions to figure out the solution to the primal: x* is any vector for which

    x* ≥ 0,   Σ_{i=1}^n x*_i = 1,   x*_j = 0 if c_j > min_i c_i.
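A short numeric sketch of this example (the cost vector below is made up): the dual solution is the minimum entry of c, and a primal solution puts all its weight on the argmin entries.

```python
import numpy as np

# min c^T x over the simplex: dual solution lam = min_i c_i by inspection.
c = np.array([3.0, 1.0, 2.0, 1.0])      # made-up cost vector
lam = c.min()                           # dual optimal value

# One primal solution: uniform weight on the argmin set of c.
x = (c == lam).astype(float)
x /= x.sum()

print(c @ x, lam)                       # primal and dual objectives coincide
```

Any other distribution of weight over the argmin entries works equally well, which is why the primal solution set is described as a set rather than a single point.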

13 Theorems of the Alternative: Party Tricks with Duality

LP duality can be used to prove an interesting class of results known as theorems of the alternative. These theorems have a generic form:
- two logical statements, each consisting of a set of algebraic conditions, labelled I and II;
- exactly one of I and II is true.

Lemma (Farkas Lemma). Given a matrix A ∈ R^{m×n} and a vector c ∈ R^n, exactly one of the following two statements is true:
I. There exists μ ∈ R^m with μ ≥ 0 such that A^T μ = c;
II. There exists y ∈ R^n such that Ay ≥ 0 and c^T y < 0.

14 Farkas Lemma: Proof

Proof. Consider the following LP and its dual:

    P:  min_y  c^T y   s.t.  Ay ≥ 0,
    D:  max_μ  0       s.t.  A^T μ = c,  μ ≥ 0.

Suppose first that II is true. Then P is unbounded, so by strong duality D is infeasible. Hence I is false.

Now suppose that II is false. Then the optimal objective for P must be ≥ 0. In fact, y = 0 is optimal, with objective 0. Strong duality tells us that D also has a solution (with objective 0, trivially). Any solution of D will satisfy I, so I is true.
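The proof suggests a computational test (a sketch with made-up data, and a hypothetical helper name farkas_case): solve P for a given (A, c); if P is bounded, statement I holds, and if P is unbounded, statement II holds.

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 0.0], [0.0, 1.0]])   # made-up data

def farkas_case(c):
    """Solve P: min c^T y s.t. Ay >= 0, y free. Per the proof, a bounded P
    (status 0) means statement I holds; unbounded (status 3) means II holds."""
    res = linprog(c, A_ub=-A, b_ub=np.zeros(A.shape[0]),
                  bounds=[(None, None)] * A.shape[1], method="highs")
    return "I" if res.status == 0 else "II"

print(farkas_case(np.array([1.0, 1.0])))    # c = A^T mu with mu = (1,1) >= 0
print(farkas_case(np.array([-1.0, 0.0])))   # y = (1,0) gives Ay >= 0, c^T y < 0
```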

15 Convex Quadratic Programming (Convex QP)

Standard form:

    min_x  (1/2) x^T Q x + c^T x   subject to  Ax = b,  x ≥ 0,

where Q is symmetric positive semidefinite (that is, x^T Q x ≥ 0 for all x).

The KKT conditions are a straightforward extension of those for LP:

    Ax = b,
    A^T λ + s = Qx + c,
    0 ≤ x ⊥ s ≥ 0.

Duality is a bit more complicated. We define it via a Lagrangian function:

    L(x, λ, s) := (1/2) x^T Q x + c^T x − λ^T (Ax − b) − s^T x,

which combines the objective and constraints, using the dual variables as coefficients for the constraints.

16 QP Dual

The Wolfe dual is:

    max_{x,λ,s}  L(x, λ, s)   s.t.  ∇_x L(x, λ, s) = 0,  s ≥ 0,

which reduces to

    max_{x,λ,s}  (1/2) x^T Q x + c^T x − λ^T (Ax − b) − s^T x   s.t.  Qx + c − A^T λ − s = 0,  s ≥ 0,

or equivalently (substituting the constraint into the objective)

    max_{x,λ,s}  −(1/2) x^T Q x + b^T λ   s.t.  Qx + c − A^T λ − s = 0,  s ≥ 0.

When Q = 0 (linear programming) this simplifies to

    max_{λ,s}  b^T λ   s.t.  c − A^T λ − s = 0,  s ≥ 0,

so that x disappears from the problem, and we have exactly the dual LP form obtained earlier.
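A numeric sketch of the Wolfe dual (made-up QP data, not from the slides): solve a tiny convex QP with a general-purpose solver, recover (λ, s) from the stationarity condition, and check that the Wolfe dual objective −(1/2)x^T Q x + b^T λ equals the primal optimal value.

```python
import numpy as np
from scipy.optimize import minimize

# Small convex QP (made up):  min (1/2) x^T Q x + c^T x  s.t.  Ax = b, x >= 0.
Q = np.eye(2); c = np.array([-1.0, -1.0])
A = np.array([[1.0, 1.0]]); b = np.array([1.0])

obj = lambda x: 0.5 * x @ Q @ x + c @ x
cons = [{"type": "eq", "fun": lambda x: A @ x - b}]
res = minimize(obj, x0=np.array([0.5, 0.5]), method="SLSQP",
               bounds=[(0, None)] * 2, constraints=cons)
x = res.x

# Recover lam from stationarity Qx + c = A^T lam + s. Here both components
# of x are strictly positive, so complementarity forces s = 0.
lam = (Q @ x + c)[0] / A[0, 0]

wolfe = -0.5 * x @ Q @ x + b[0] * lam    # Wolfe dual objective
print(res.fun, wolfe)                    # primal and dual values agree
```

For this instance the solution is x = (1/2, 1/2) by symmetry, λ = −1/2, and both objectives equal −3/4.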

17 Dual: Eliminating x

If Q is positive definite (that is, positive semidefinite and nonsingular), we can eliminate x entirely from the QP dual. From the constraint Qx + c − A^T λ − s = 0 we can write

    x = Q^{−1} (A^T λ + s − c).

By substitution into the objective we obtain

    max_{λ,s}  −(1/2) (A^T λ + s − c)^T Q^{−1} (A^T λ + s − c) + b^T λ   s.t.  s ≥ 0.

This form may be easier to solve when Q is easy to invert (e.g. diagonal), because it has only nonnegativity constraints s ≥ 0, with no general constraints.

18 Lagrangian Duality

Consider a constrained optimization problem involving possibly nonlinear functions:

    min f(x)   subject to  c_i(x) ≥ 0,  i = 1, 2, ..., m.

Define the Lagrangian in the obvious way:

    L(x, λ) := f(x) − λ^T c(x) = f(x) − Σ_{i=1}^m λ_i c_i(x).

Define the dual objective q(λ) as:

    q(λ) := inf_x L(x, λ).

Note that we require the global infimum of L with respect to x to make this work, but this can be found tractably when the problem is convex (f convex and each c_i concave, so that L(·, λ) is convex for every λ ≥ 0).

The Lagrangian dual problem is then:

    max_λ  q(λ)   s.t.  λ ≥ 0.
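A tiny sketch of the dual function (the problem is made up, not from the slides): for min x² s.t. x − 1 ≥ 0, the Lagrangian is L(x, λ) = x² − λ(x − 1), whose infimum over x is attained at x = λ/2, giving q(λ) = −λ²/4 + λ. Maximizing q over λ ≥ 0 recovers the primal optimal value f(1) = 1.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy convex problem:  min x^2  s.t.  c(x) = x - 1 >= 0.
def q(lam):
    # Global infimum of L(., lam) = x^2 - lam*(x - 1), found numerically;
    # analytically it is -lam**2/4 + lam.
    return minimize_scalar(lambda x: x**2 - lam * (x - 1.0)).fun

lams = np.linspace(0.0, 4.0, 401)
vals = [q(l) for l in lams]
best = lams[int(np.argmax(vals))]

print(best, max(vals))   # dual optimum near lam = 2, with q(2) = 1 = f(1)
```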

19 Some Properties

Since we wish to maximize q, we are not interested in values of λ for which q(λ) = −∞. Hence define the domain:

    D := {λ | q(λ) > −∞}.

Theorem. q is concave and its domain D is convex.

Theorem (Weak Duality). For any x̄ feasible for the primal and λ̄ feasible for the dual, we have q(λ̄) ≤ f(x̄).

Proof.

    q(λ̄) = inf_x [f(x) − λ̄^T c(x)] ≤ f(x̄) − λ̄^T c(x̄) ≤ f(x̄),

since λ̄ ≥ 0 and c(x̄) ≥ 0.

20 Another View of the Primal

Another way to define the primal problem is as minimizing the max of the Lagrangian over λ ≥ 0. Consider

    r(x) := sup_{λ ≥ 0} L(x, λ) = sup_{λ ≥ 0} [f(x) − λ^T c(x)].

If c_i(x) < 0 for any i, we can drive λ_i to +∞ to make r(x) = +∞! Hence, if we are looking to minimize r, we need only consider values of x for which c(x) ≥ 0.

When c(x) ≥ 0, the sup w.r.t. λ ≥ 0 is attained at λ = 0. For this λ we have r(x) = f(x). Thus the problem min_x r(x) is equivalent to the original primal!

There's a nice symmetry between primal and dual:
- primal objective is sup_{λ ≥ 0} L(x, λ);
- dual objective is inf_x L(x, λ).

21 Equality Constraints

Given the primal problem

    min f(x)   s.t.  c_i(x) ≥ 0, i = 1, 2, ..., m;   d_j(x) = 0, j = 1, 2, ..., p,

define the Lagrangian as

    L(x, λ, μ) = f(x) − λ^T c(x) − μ^T d(x),

and the dual objective as before:

    q(λ, μ) := inf_x L(x, λ, μ).

The dual problem is then:

    max_{λ,μ}  q(λ, μ)   s.t.  λ ≥ 0.   (μ is free.)

Note that the primal objective is sup_{λ ≥ 0, μ} L(x, λ, μ).

22 Linear Complementarity Problem (LCP)

Complementarity problems involve algebraic and complementary relationships. In linear complementarity, all the relationships are linear.

The basic LCP is defined by a matrix M ∈ R^{N×N} and a vector q ∈ R^N. The problem is:

    Find z ∈ R^N such that  0 ≤ z ⊥ Mz + q ≥ 0.

There's no objective function! This is not an optimization problem. But it's closely related to optimization (via KKT conditions) and can also be used to formulate problems in economics, game theory, and contact problems in mechanics.
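A minimal sketch of solving a small LCP (the data are made up, and the method here is a simple projected Gauss-Seidel iteration, not something from the slides; it is reliable for symmetric positive definite M):

```python
import numpy as np

# Small LCP  0 <= z  ⊥  Mz + q >= 0, with made-up positive definite M.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])

z = np.zeros(2)
for _ in range(200):                       # fixed iteration count for the demo
    for i in range(len(z)):
        # Satisfy the i-th complementarity condition with the others fixed:
        # z_i = max(0, z_i - (Mz + q)_i / M_ii).
        z[i] = max(0.0, z[i] - (M[i] @ z + q[i]) / M[i, i])

w = M @ z + q
print(z, w, z @ w)    # z >= 0, w >= 0, and z^T w ~ 0
```

For this instance the solution is interior: z = (1/3, 1/3) with Mz + q = 0, so complementarity holds with the "w = 0" branch in both coordinates.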

23 Varieties of LCP

Monotone LCP: M is positive semidefinite: z^T M z ≥ 0 for all z.
Strictly monotone LCP: M is positive definite (z^T M z > 0 for all z ≠ 0).
Mixed LCP: contains equality constraints as well as complementarity conditions. Partition M as

    M = [M_11 M_12; M_21 M_22],  with M_11 and M_22 square,

and partition q and z accordingly. The mixed LCP is defined as:

    M_11 z_1 + M_12 z_2 + q_1 = 0,
    0 ≤ z_2 ⊥ M_21 z_1 + M_22 z_2 + q_2 ≥ 0.

24 LP as LCP

The KKT conditions for LP and QP form an LCP (usually mixed, depending on the formulation). KKT for LP:

    Ax − b = 0,
    0 ≤ x ⊥ −A^T λ + c ≥ 0.

This is a mixed LCP with

    M = [M_11 M_12; M_21 M_22] = [0 A; −A^T 0],   q = (−b; c),   z = (λ; x).

In fact, it's a monotone mixed LCP, since z^T M z = λ^T A x − λ^T A x = 0.

25 QP as LCP

Similarly, we can write KKT for QP as a mixed LCP:

    Ax − b = 0,
    0 ≤ x ⊥ −A^T λ + Qx + c ≥ 0.

This is a mixed LCP with

    M = [M_11 M_12; M_21 M_22] = [0 A; −A^T Q],   q = (−b; c),   z = (λ; x).

Note that

    z^T M z = λ^T A x − λ^T A x + x^T Q x = x^T Q x,

so it's a monotone LCP provided that Q is positive semidefinite, i.e. the QP is convex. It can't be a strictly monotone LCP unless A is vacuous (that is, the QP has only bound constraints x ≥ 0) and Q is positive definite.

26 Algorithms for LCP

If we have an algorithm for solving monotone LCP, then we also have an algorithm for LP and convex QP. (Algorithms for nonmonotone LCP and nonconvex QP are a different proposition; these are hard problems in general, for which polynomial algorithms are not known to exist.)

Two main classes of algorithms are of practical interest:
- simplex algorithms for LP, related to active-set algorithms for QP and Lemke's method for LCP;
- primal-dual interior-point methods, which are quite similar for all three classes of problems.

27 Bimatrix Games as LCP

In a bimatrix game there are two players, each of whom can play one of a finite number of moves. Depending on the combination of moves played, each player wins or loses something, the amount being determined by an entry in a loss matrix.
- Player 1 has m possible moves: i = 1, 2, ..., m;
- Player 2 has n possible moves: j = 1, 2, ..., n;
- there are m × n loss matrices A and B, such that if Player 1 plays move i and Player 2 plays move j, then Player 1 loses A_ij dollars while Player 2 loses B_ij dollars.

It's a zero-sum game if A + B = 0.

Example: Matching Pennies. Each player shows either H or T. Player 1 wins $1 and Player 2 loses $1 when the pennies match; Player 1 loses $1 and Player 2 wins $1 when the pennies don't match.

28 Matching Pennies

Loss matrices:

    A = [−1 1; 1 −1],   B = [1 −1; −1 1].

Assume that the bimatrix game is played repeatedly. Usually both players play a mixed strategy, in which they choose each move randomly with a certain probability, and independently of moves before and after:
- Player 1 plays move i with probability x_i;
- Player 2 plays move j with probability y_j.

Since x and y denote vectors of probabilities, we have

    x ≥ 0,  y ≥ 0,  e^T x = 1,  e^T y = 1.

29 Nash Equilibrium

A Nash equilibrium is a pair of mixed strategies x̄ and ȳ such that neither player can gain an advantage by changing to a different strategy, provided that the opponent also does not change. Formally:

    (x − x̄)^T A ȳ ≥ 0,  for all x with x ≥ 0 and e^T x = 1;
    x̄^T B (y − ȳ) ≥ 0,  for all y with y ≥ 0 and e^T y = 1.

Note that the definition is not changed if we add a constant to all elements of A and B. Thus we can assume that A and B have all positive elements. (Useful for computation.)

We can find Nash equilibria by solving an LCP with

    M = [0 A; B^T 0],   q = −e,

where e = (1, 1, ..., 1)^T is the vector of ones.
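For matching pennies, the equilibrium mixed strategies are x̄ = ȳ = (1/2, 1/2), and the Nash conditions can be checked directly. A sketch (not from the slides): since each condition is linear in the deviating strategy, it suffices to test it at the vertices of the probability simplex.

```python
import numpy as np

# Matching-pennies loss matrices, shifted by +2 so all entries are positive.
A0 = np.array([[-1.0, 1.0], [1.0, -1.0]])
A = A0 + 2.0        # Player 1 losses
B = -A0 + 2.0       # Player 2 losses (zero-sum before the shift)

xbar = np.array([0.5, 0.5]); ybar = np.array([0.5, 0.5])

# The Nash conditions are linear in x (resp. y), so checking the simplex
# vertices e_1, e_2 is enough.
verts = np.eye(2)
ok1 = all((e - xbar) @ A @ ybar >= -1e-12 for e in verts)
ok2 = all(xbar @ B @ (e - ybar) >= -1e-12 for e in verts)
print(ok1, ok2)
```

Here A ȳ has equal components, so every deviation leaves Player 1's expected loss unchanged; the same holds for Player 2, and both conditions hold with equality.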

30 Lemma

Lemma. Let A and B be positive loss matrices of dimension m × n, and suppose that (s, t) ∈ R^{m+n} solves the LCP above. Then the point (x̄, ȳ) = (s/(e^T s), t/(e^T t)) is a Nash equilibrium.

Proof. By complementarity, we have

    x̄^T (At − e) = (1/(e^T s)) s^T (At − e) = 0,

so that x̄^T A t = x̄^T e = 1. We thus have

    A ȳ − (x̄^T A ȳ) e = (1/(e^T t)) (A t − (x̄^T A t) e) = (1/(e^T t)) (A t − e) ≥ 0.

Thus for any x with e^T x = 1 and x ≥ 0, we have

    0 ≤ x^T (A ȳ − e (x̄^T A ȳ)) = (x − x̄)^T A ȳ.

31 KKT Conditions for Nonlinear Problems

Consider now the more general problem

    min f(x)   subject to  x ∈ Ω,

where f is smooth and Ω is a polyhedral set, defined by linear inequalities:

    Ω := {x | a_i^T x ≥ b_i,  i = 1, 2, ..., m}.

Special cases:
- positive orthant: Ω = R^n_+ = {x | x ≥ 0};
- bound constraints: Ω = {x | l ≤ x ≤ u}.

We can define optimality conditions for this problem using local linear approximations of f. (The constraint set is already defined by linear quantities, so no need to approximate it.)

32 Taylor's Theorem (version 1)

Theorem. If f : R^n → R is a continuously differentiable function, then for x, p ∈ R^n there is s ∈ (0, 1) such that

    f(x + p) = f(x) + ∇f(x + sp)^T p.

A consequence is that if d is a direction with ∇f(x)^T d < 0, then for all ε > 0 sufficiently small, we have
- ∇f(x + sεd)^T d < 0 for all s ∈ [0, 1] (by continuity of ∇f);
- f(x + εd) < f(x) (by applying Taylor's theorem with p = εd).

33 Active Sets and Feasible Directions

Given a feasible point x ∈ Ω, we can define the
- active set  A(x) := {i = 1, 2, ..., m | a_i^T x = b_i};
- inactive set  I(x) := {1, 2, ..., m} \ A(x) = {i = 1, 2, ..., m | a_i^T x > b_i}.

The feasible directions F(x) are the directions that move into Ω from x:

    F(x) := {d | a_i^T d ≥ 0,  i ∈ A(x)}.

We have that d ∈ F(x) implies x + εd ∈ Ω for all ε ≥ 0 sufficiently small, because
- i ∈ A(x) implies that a_i^T (x + εd) = b_i + ε a_i^T d ≥ b_i;
- i ∈ I(x) implies that a_i^T (x + εd) = a_i^T x + ε a_i^T d > b_i for ε small enough.

34 [Figure: the cone of feasible directions at a point x on the boundary of Ω.]

35 Optimality: A Necessary Condition

Lemma. If x* is a local solution of min_{x ∈ Ω} f(x) (that is, there are no other points close to x* that are feasible and have a lower function value), then there can be no feasible direction d with ∇f(x*)^T d < 0.

Proof. Suppose that in fact we have d ∈ F(x*) with ∇f(x*)^T d < 0. By definition of F(x*), we have x* + εd ∈ Ω for all ε ≥ 0 sufficiently small. Moreover, from our consequence of Taylor's theorem, we also have f(x* + εd) < f(x*) for all ε > 0 sufficiently small. Hence there are feasible points with lower values of f arbitrarily close to x*, so x* is not a local minimum.

This is neat, but it's not a checkable, practical condition, because F(x*) contains infinitely many directions in general. However, we can use Farkas's Lemma to turn it into KKT conditions.

36 KKT Conditions for Linearly Constrained Optimization

Theorem. If x* is a local solution of min_{x ∈ Ω} f(x), then there exist Lagrange multipliers λ*_i ≥ 0, i ∈ A(x*), such that

    ∇f(x*) = Σ_{i ∈ A(x*)} a_i λ*_i.

Proof. By the lemma above, there can be no direction d such that ∇f(x*)^T d < 0 and a_i^T d ≥ 0 for all i ∈ A(x*). Thus Farkas's Lemma (with the active-constraint vectors a_i playing the role of the rows of A, and ∇f(x*) playing the role of c) tells us that the alternative statement must be true, which is exactly the expression above.

The full KKT conditions are obtained by adding feasibility for x*: a_i^T x* ≥ b_i, i = 1, 2, ..., m.

37 KKT Conditions for Linearly Constrained Optimization

We can restate the KKT conditions by using complementarity conditions to absorb the definition of A(x*):

    0 ≤ a_i^T x* − b_i ⊥ λ*_i ≥ 0,  i = 1, 2, ..., m,
    ∇f(x*) − Σ_{i=1}^m a_i λ*_i = 0.

The complementarity condition implies that λ*_i = 0 for i ∈ I(x*), so the inactive constraints do not contribute to the sum in the second condition.

38 KKT conditions using the Lagrangian

We can define the Lagrangian for this case too:

    L(x, λ) := f(x) − Σ_{i=1}^m λ_i (a_i^T x − b_i) = f(x) − λ^T (Ax − b),

where the rows of A are a_i^T and b = (b_1, b_2, ..., b_m)^T. Restating the KKT conditions in this notation, we have:

    0 ≤ Ax* − b ⊥ λ* ≥ 0,
    ∇_x L(x*, λ*) = 0.

Example: When Ω = R^n_+, we have A(x*) = {i | x*_i = 0}, a_i = e_i, b_i = 0. The KKT conditions reduce to

    [∇f(x*)]_i ≥ 0 for i ∈ A(x*);   [∇f(x*)]_i = 0 for i ∈ I(x*).
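A tiny check of the positive-orthant case (a sketch with made-up data, not from the slides): for min ||x − a||² over x ≥ 0, the solution is the projection x* = max(a, 0), and the gradient of f satisfies exactly the two reduced conditions above.

```python
import numpy as np

# min f(x) = ||x - a||^2 over x >= 0, with made-up data a = (1, -1).
a = np.array([1.0, -1.0])
x = np.maximum(a, 0.0)                 # solution: projection of a onto R^n_+
g = 2.0 * (x - a)                      # gradient of f at x*

active = (x == 0.0)                    # A(x*): components at their bound
print(bool(np.all(g[active] >= 0)),    # [grad f]_i >= 0 on the active set
      np.allclose(g[~active], 0.0))    # [grad f]_i = 0 on the inactive set
```

Here x* = (1, 0) and ∇f(x*) = (0, 2): the inactive component has zero gradient, and the active component has nonnegative gradient, as required.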

39 Nonlinear Constraints: What Could Possibly Go Wrong?

Suppose that Ω is defined by nonlinear algebraic inequalities:

    Ω := {x | c_i(x) ≥ 0,  i = 1, 2, ..., m}.

A natural extension of the KKT conditions above is obtained by linearizing each c_i around the point x*, just as we did with f. Define the Lagrangian

    L(x, λ) = f(x) − Σ_{i=1}^m λ_i c_i(x),

and the KKT conditions:

    ∇_x L(x*, λ*) = 0,
    0 ≤ c_i(x*) ⊥ λ*_i ≥ 0,  i = 1, 2, ..., m.

Can we say that when x* is a local minimizer, there must be λ* such that these KKT conditions hold? NOT QUITE! We need constraint qualifications to make sure that nothing pathological is happening with the constraints at x*.

40 Constraint Qualifications

Constraint qualifications (CQs) ensure that the linearized approximation to the feasible set Ω, evaluated at x*, has a similar geometry to Ω itself. The linearization comes from a first-order Taylor expansion of the active constraints around x*:

    {x | ∇c_i(x*)^T (x − x*) ≥ 0,  i ∈ A(x*)}.

[Figure: Ω and its linearization at x*.]

Here the linearization captures the geometry of Ω well, so a CQ would be satisfied.

41 [Figure: Ω is a single point; its "linearized" version is an entire line.]

The true Ω is the single point x*, whereas the linearization is the entire line: very different geometry. CQs would not be satisfied here, and the KKT conditions would not hold in general at x*, even though x* must be a local solution, regardless of f.

42 References

Bertsekas, D. P. (1999). Nonlinear Programming. Athena Scientific, second edition.
Ferris, M. C., Mangasarian, O. L., and Wright, S. J. (2007). Linear Programming with MATLAB. MOS-SIAM Series in Optimization. SIAM.
Nocedal, J. and Wright, S. J. (2006). Numerical Optimization. Springer, New York, second edition.


More information

Introduction to Mathematical Programming IE406. Lecture 10. Dr. Ted Ralphs

Introduction to Mathematical Programming IE406. Lecture 10. Dr. Ted Ralphs Introduction to Mathematical Programming IE406 Lecture 10 Dr. Ted Ralphs IE406 Lecture 10 1 Reading for This Lecture Bertsimas 4.1-4.3 IE406 Lecture 10 2 Duality Theory: Motivation Consider the following

More information

On the Method of Lagrange Multipliers

On the Method of Lagrange Multipliers On the Method of Lagrange Multipliers Reza Nasiri Mahalati November 6, 2016 Most of what is in this note is taken from the Convex Optimization book by Stephen Boyd and Lieven Vandenberghe. This should

More information

Lagrange duality. The Lagrangian. We consider an optimization program of the form

Lagrange duality. The Lagrangian. We consider an optimization program of the form Lagrange duality Another way to arrive at the KKT conditions, and one which gives us some insight on solving constrained optimization problems, is through the Lagrange dual. The dual is a maximization

More information

6-1 The Positivstellensatz P. Parrilo and S. Lall, ECC

6-1 The Positivstellensatz P. Parrilo and S. Lall, ECC 6-1 The Positivstellensatz P. Parrilo and S. Lall, ECC 2003 2003.09.02.10 6. The Positivstellensatz Basic semialgebraic sets Semialgebraic sets Tarski-Seidenberg and quantifier elimination Feasibility

More information

The Karush-Kuhn-Tucker conditions

The Karush-Kuhn-Tucker conditions Chapter 6 The Karush-Kuhn-Tucker conditions 6.1 Introduction In this chapter we derive the first order necessary condition known as Karush-Kuhn-Tucker (KKT) conditions. To this aim we introduce the alternative

More information

Optimization for Communications and Networks. Poompat Saengudomlert. Session 4 Duality and Lagrange Multipliers

Optimization for Communications and Networks. Poompat Saengudomlert. Session 4 Duality and Lagrange Multipliers Optimization for Communications and Networks Poompat Saengudomlert Session 4 Duality and Lagrange Multipliers P Saengudomlert (2015) Optimization Session 4 1 / 14 24 Dual Problems Consider a primal convex

More information

15-780: LinearProgramming

15-780: LinearProgramming 15-780: LinearProgramming J. Zico Kolter February 1-3, 2016 1 Outline Introduction Some linear algebra review Linear programming Simplex algorithm Duality and dual simplex 2 Outline Introduction Some linear

More information

Duality. Geoff Gordon & Ryan Tibshirani Optimization /

Duality. Geoff Gordon & Ryan Tibshirani Optimization / Duality Geoff Gordon & Ryan Tibshirani Optimization 10-725 / 36-725 1 Duality in linear programs Suppose we want to find lower bound on the optimal value in our convex problem, B min x C f(x) E.g., consider

More information

Lecture 2: Linear SVM in the Dual

Lecture 2: Linear SVM in the Dual Lecture 2: Linear SVM in the Dual Stéphane Canu stephane.canu@litislab.eu São Paulo 2015 July 22, 2015 Road map 1 Linear SVM Optimization in 10 slides Equality constraints Inequality constraints Dual formulation

More information

Lecture Note 5: Semidefinite Programming for Stability Analysis

Lecture Note 5: Semidefinite Programming for Stability Analysis ECE7850: Hybrid Systems:Theory and Applications Lecture Note 5: Semidefinite Programming for Stability Analysis Wei Zhang Assistant Professor Department of Electrical and Computer Engineering Ohio State

More information

2.098/6.255/ Optimization Methods Practice True/False Questions

2.098/6.255/ Optimization Methods Practice True/False Questions 2.098/6.255/15.093 Optimization Methods Practice True/False Questions December 11, 2009 Part I For each one of the statements below, state whether it is true or false. Include a 1-3 line supporting sentence

More information

Convex Optimization Theory. Chapter 5 Exercises and Solutions: Extended Version

Convex Optimization Theory. Chapter 5 Exercises and Solutions: Extended Version Convex Optimization Theory Chapter 5 Exercises and Solutions: Extended Version Dimitri P. Bertsekas Massachusetts Institute of Technology Athena Scientific, Belmont, Massachusetts http://www.athenasc.com

More information

NONLINEAR. (Hillier & Lieberman Introduction to Operations Research, 8 th edition)

NONLINEAR. (Hillier & Lieberman Introduction to Operations Research, 8 th edition) NONLINEAR PROGRAMMING (Hillier & Lieberman Introduction to Operations Research, 8 th edition) Nonlinear Programming g Linear programming has a fundamental role in OR. In linear programming all its functions

More information

Lecture 10: Linear programming. duality. and. The dual of the LP in standard form. maximize w = b T y (D) subject to A T y c, minimize z = c T x (P)

Lecture 10: Linear programming. duality. and. The dual of the LP in standard form. maximize w = b T y (D) subject to A T y c, minimize z = c T x (P) Lecture 10: Linear programming duality Michael Patriksson 19 February 2004 0-0 The dual of the LP in standard form minimize z = c T x (P) subject to Ax = b, x 0 n, and maximize w = b T y (D) subject to

More information

Lagrangian Duality Theory

Lagrangian Duality Theory Lagrangian Duality Theory Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye Chapter 14.1-4 1 Recall Primal and Dual

More information

CSCI : Optimization and Control of Networks. Review on Convex Optimization

CSCI : Optimization and Control of Networks. Review on Convex Optimization CSCI7000-016: Optimization and Control of Networks Review on Convex Optimization 1 Convex set S R n is convex if x,y S, λ,µ 0, λ+µ = 1 λx+µy S geometrically: x,y S line segment through x,y S examples (one

More information

Support Vector Machines

Support Vector Machines Support Vector Machines Ryan M. Rifkin Google, Inc. 2008 Plan Regularization derivation of SVMs Geometric derivation of SVMs Optimality, Duality and Large Scale SVMs The Regularization Setting (Again)

More information

A Brief Review on Convex Optimization

A Brief Review on Convex Optimization A Brief Review on Convex Optimization 1 Convex set S R n is convex if x,y S, λ,µ 0, λ+µ = 1 λx+µy S geometrically: x,y S line segment through x,y S examples (one convex, two nonconvex sets): A Brief Review

More information

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems Robert M. Freund February 2016 c 2016 Massachusetts Institute of Technology. All rights reserved. 1 1 Introduction

More information

Introduction to Machine Learning Lecture 7. Mehryar Mohri Courant Institute and Google Research

Introduction to Machine Learning Lecture 7. Mehryar Mohri Courant Institute and Google Research Introduction to Machine Learning Lecture 7 Mehryar Mohri Courant Institute and Google Research mohri@cims.nyu.edu Convex Optimization Differentiation Definition: let f : X R N R be a differentiable function,

More information

LECTURE 10 LECTURE OUTLINE

LECTURE 10 LECTURE OUTLINE LECTURE 10 LECTURE OUTLINE Min Common/Max Crossing Th. III Nonlinear Farkas Lemma/Linear Constraints Linear Programming Duality Convex Programming Duality Optimality Conditions Reading: Sections 4.5, 5.1,5.2,

More information

Nonlinear Optimization

Nonlinear Optimization Nonlinear Optimization Etienne de Klerk (UvT)/Kees Roos e-mail: C.Roos@ewi.tudelft.nl URL: http://www.isa.ewi.tudelft.nl/ roos Course WI3031 (Week 4) February-March, A.D. 2005 Optimization Group 1 Outline

More information

Support Vector Machines

Support Vector Machines Support Vector Machines Support vector machines (SVMs) are one of the central concepts in all of machine learning. They are simply a combination of two ideas: linear classification via maximum (or optimal

More information

Generalization to inequality constrained problem. Maximize

Generalization to inequality constrained problem. Maximize Lecture 11. 26 September 2006 Review of Lecture #10: Second order optimality conditions necessary condition, sufficient condition. If the necessary condition is violated the point cannot be a local minimum

More information

ISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints

ISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints ISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints Instructor: Prof. Kevin Ross Scribe: Nitish John October 18, 2011 1 The Basic Goal The main idea is to transform a given constrained

More information

Lecture Notes on Support Vector Machine

Lecture Notes on Support Vector Machine Lecture Notes on Support Vector Machine Feng Li fli@sdu.edu.cn Shandong University, China 1 Hyperplane and Margin In a n-dimensional space, a hyper plane is defined by ω T x + b = 0 (1) where ω R n is

More information

LP Duality: outline. Duality theory for Linear Programming. alternatives. optimization I Idea: polyhedra

LP Duality: outline. Duality theory for Linear Programming. alternatives. optimization I Idea: polyhedra LP Duality: outline I Motivation and definition of a dual LP I Weak duality I Separating hyperplane theorem and theorems of the alternatives I Strong duality and complementary slackness I Using duality

More information

IE 5531 Midterm #2 Solutions

IE 5531 Midterm #2 Solutions IE 5531 Midterm #2 s Prof. John Gunnar Carlsson November 9, 2011 Before you begin: This exam has 9 pages and a total of 5 problems. Make sure that all pages are present. To obtain credit for a problem,

More information

Lecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem

Lecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem Lecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem Michael Patriksson 0-0 The Relaxation Theorem 1 Problem: find f := infimum f(x), x subject to x S, (1a) (1b) where f : R n R

More information

Example: feasibility. Interpretation as formal proof. Example: linear inequalities and Farkas lemma

Example: feasibility. Interpretation as formal proof. Example: linear inequalities and Farkas lemma 4-1 Algebra and Duality P. Parrilo and S. Lall 2006.06.07.01 4. Algebra and Duality Example: non-convex polynomial optimization Weak duality and duality gap The dual is not intrinsic The cone of valid

More information

minimize x subject to (x 2)(x 4) u,

minimize x subject to (x 2)(x 4) u, Math 6366/6367: Optimization and Variational Methods Sample Preliminary Exam Questions 1. Suppose that f : [, L] R is a C 2 -function with f () on (, L) and that you have explicit formulae for

More information

OPTIMALITY AND STABILITY OF SYMMETRIC EVOLUTIONARY GAMES WITH APPLICATIONS IN GENETIC SELECTION. (Communicated by Yang Kuang)

OPTIMALITY AND STABILITY OF SYMMETRIC EVOLUTIONARY GAMES WITH APPLICATIONS IN GENETIC SELECTION. (Communicated by Yang Kuang) MATHEMATICAL BIOSCIENCES doi:10.3934/mbe.2015.12.503 AND ENGINEERING Volume 12, Number 3, June 2015 pp. 503 523 OPTIMALITY AND STABILITY OF SYMMETRIC EVOLUTIONARY GAMES WITH APPLICATIONS IN GENETIC SELECTION

More information

The Lagrangian L : R d R m R r R is an (easier to optimize) lower bound on the original problem:

The Lagrangian L : R d R m R r R is an (easier to optimize) lower bound on the original problem: HT05: SC4 Statistical Data Mining and Machine Learning Dino Sejdinovic Department of Statistics Oxford Convex Optimization and slides based on Arthur Gretton s Advanced Topics in Machine Learning course

More information

Nonlinear Optimization: What s important?

Nonlinear Optimization: What s important? Nonlinear Optimization: What s important? Julian Hall 10th May 2012 Convexity: convex problems A local minimizer is a global minimizer A solution of f (x) = 0 (stationary point) is a minimizer A global

More information

How to Take the Dual of a Linear Program

How to Take the Dual of a Linear Program How to Take the Dual of a Linear Program Sébastien Lahaie January 12, 2015 This is a revised version of notes I wrote several years ago on taking the dual of a linear program (LP), with some bug and typo

More information

Quiz Discussion. IE417: Nonlinear Programming: Lecture 12. Motivation. Why do we care? Jeff Linderoth. 16th March 2006

Quiz Discussion. IE417: Nonlinear Programming: Lecture 12. Motivation. Why do we care? Jeff Linderoth. 16th March 2006 Quiz Discussion IE417: Nonlinear Programming: Lecture 12 Jeff Linderoth Department of Industrial and Systems Engineering Lehigh University 16th March 2006 Motivation Why do we care? We are interested in

More information

Absolute value equations

Absolute value equations Linear Algebra and its Applications 419 (2006) 359 367 www.elsevier.com/locate/laa Absolute value equations O.L. Mangasarian, R.R. Meyer Computer Sciences Department, University of Wisconsin, 1210 West

More information

Optimization. A first course on mathematics for economists

Optimization. A first course on mathematics for economists Optimization. A first course on mathematics for economists Xavier Martinez-Giralt Universitat Autònoma de Barcelona xavier.martinez.giralt@uab.eu II.3 Static optimization - Non-Linear programming OPT p.1/45

More information

Lecture 7: Convex Optimizations

Lecture 7: Convex Optimizations Lecture 7: Convex Optimizations Radu Balan, David Levermore March 29, 2018 Convex Sets. Convex Functions A set S R n is called a convex set if for any points x, y S the line segment [x, y] := {tx + (1

More information

CO 250 Final Exam Guide

CO 250 Final Exam Guide Spring 2017 CO 250 Final Exam Guide TABLE OF CONTENTS richardwu.ca CO 250 Final Exam Guide Introduction to Optimization Kanstantsin Pashkovich Spring 2017 University of Waterloo Last Revision: March 4,

More information

Nonlinear Programming

Nonlinear Programming Nonlinear Programming Kees Roos e-mail: C.Roos@ewi.tudelft.nl URL: http://www.isa.ewi.tudelft.nl/ roos LNMB Course De Uithof, Utrecht February 6 - May 8, A.D. 2006 Optimization Group 1 Outline for week

More information

Computing Solution Concepts of Normal-Form Games. Song Chong EE, KAIST

Computing Solution Concepts of Normal-Form Games. Song Chong EE, KAIST Computing Solution Concepts of Normal-Form Games Song Chong EE, KAIST songchong@kaist.edu Computing Nash Equilibria of Two-Player, Zero-Sum Games Can be expressed as a linear program (LP), which means

More information

Economic Foundations of Symmetric Programming

Economic Foundations of Symmetric Programming Economic Foundations of Symmetric Programming QUIRINO PARIS University of California, Davis B 374309 CAMBRIDGE UNIVERSITY PRESS Foreword by Michael R. Caputo Preface page xv xvii 1 Introduction 1 Duality,

More information

Lecture: Algorithms for LP, SOCP and SDP

Lecture: Algorithms for LP, SOCP and SDP 1/53 Lecture: Algorithms for LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:

More information

y Ray of Half-line or ray through in the direction of y

y Ray of Half-line or ray through in the direction of y Chapter LINEAR COMPLEMENTARITY PROBLEM, ITS GEOMETRY, AND APPLICATIONS. THE LINEAR COMPLEMENTARITY PROBLEM AND ITS GEOMETRY The Linear Complementarity Problem (abbreviated as LCP) is a general problem

More information

Game Theory. Greg Plaxton Theory in Programming Practice, Spring 2004 Department of Computer Science University of Texas at Austin

Game Theory. Greg Plaxton Theory in Programming Practice, Spring 2004 Department of Computer Science University of Texas at Austin Game Theory Greg Plaxton Theory in Programming Practice, Spring 2004 Department of Computer Science University of Texas at Austin Bimatrix Games We are given two real m n matrices A = (a ij ), B = (b ij

More information

Algorithmic Game Theory and Applications. Lecture 7: The LP Duality Theorem

Algorithmic Game Theory and Applications. Lecture 7: The LP Duality Theorem Algorithmic Game Theory and Applications Lecture 7: The LP Duality Theorem Kousha Etessami recall LP s in Primal Form 1 Maximize c 1 x 1 + c 2 x 2 +... + c n x n a 1,1 x 1 + a 1,2 x 2 +... + a 1,n x n

More information

EC /11. Math for Microeconomics September Course, Part II Problem Set 1 with Solutions. a11 a 12. x 2

EC /11. Math for Microeconomics September Course, Part II Problem Set 1 with Solutions. a11 a 12. x 2 LONDON SCHOOL OF ECONOMICS Professor Leonardo Felli Department of Economics S.478; x7525 EC400 2010/11 Math for Microeconomics September Course, Part II Problem Set 1 with Solutions 1. Show that the general

More information

Optimization Problems with Constraints - introduction to theory, numerical Methods and applications

Optimization Problems with Constraints - introduction to theory, numerical Methods and applications Optimization Problems with Constraints - introduction to theory, numerical Methods and applications Dr. Abebe Geletu Ilmenau University of Technology Department of Simulation and Optimal Processes (SOP)

More information

Appendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS

Appendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS Appendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS Here we consider systems of linear constraints, consisting of equations or inequalities or both. A feasible solution

More information

Lecture: Introduction to LP, SDP and SOCP

Lecture: Introduction to LP, SDP and SOCP Lecture: Introduction to LP, SDP and SOCP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2015.html wenzw@pku.edu.cn Acknowledgement:

More information

TMA 4180 Optimeringsteori KARUSH-KUHN-TUCKER THEOREM

TMA 4180 Optimeringsteori KARUSH-KUHN-TUCKER THEOREM TMA 4180 Optimeringsteori KARUSH-KUHN-TUCKER THEOREM H. E. Krogstad, IMF, Spring 2012 Karush-Kuhn-Tucker (KKT) Theorem is the most central theorem in constrained optimization, and since the proof is scattered

More information

Optimization for Machine Learning

Optimization for Machine Learning Optimization for Machine Learning (Problems; Algorithms - A) SUVRIT SRA Massachusetts Institute of Technology PKU Summer School on Data Science (July 2017) Course materials http://suvrit.de/teaching.html

More information

Convex Optimization. Dani Yogatama. School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA. February 12, 2014

Convex Optimization. Dani Yogatama. School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA. February 12, 2014 Convex Optimization Dani Yogatama School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA February 12, 2014 Dani Yogatama (Carnegie Mellon University) Convex Optimization February 12,

More information

Optimization. Yuh-Jye Lee. March 28, Data Science and Machine Intelligence Lab National Chiao Tung University 1 / 40

Optimization. Yuh-Jye Lee. March 28, Data Science and Machine Intelligence Lab National Chiao Tung University 1 / 40 Optimization Yuh-Jye Lee Data Science and Machine Intelligence Lab National Chiao Tung University March 28, 2017 1 / 40 The Key Idea of Newton s Method Let f : R n R be a twice differentiable function

More information

Lecture 10: Linear programming duality and sensitivity 0-0

Lecture 10: Linear programming duality and sensitivity 0-0 Lecture 10: Linear programming duality and sensitivity 0-0 The canonical primal dual pair 1 A R m n, b R m, and c R n maximize z = c T x (1) subject to Ax b, x 0 n and minimize w = b T y (2) subject to

More information

Lecture 5. Theorems of Alternatives and Self-Dual Embedding

Lecture 5. Theorems of Alternatives and Self-Dual Embedding IE 8534 1 Lecture 5. Theorems of Alternatives and Self-Dual Embedding IE 8534 2 A system of linear equations may not have a solution. It is well known that either Ax = c has a solution, or A T y = 0, c

More information

Interior-Point Methods

Interior-Point Methods Interior-Point Methods Stephen Wright University of Wisconsin-Madison Simons, Berkeley, August, 2017 Wright (UW-Madison) Interior-Point Methods August 2017 1 / 48 Outline Introduction: Problems and Fundamentals

More information

Midterm Review. Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A.

Midterm Review. Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. Midterm Review Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye (LY, Chapter 1-4, Appendices) 1 Separating hyperplane

More information

Interior-Point and Augmented Lagrangian Algorithms for Optimization and Control

Interior-Point and Augmented Lagrangian Algorithms for Optimization and Control Interior-Point and Augmented Lagrangian Algorithms for Optimization and Control Stephen Wright University of Wisconsin-Madison May 2014 Wright (UW-Madison) Constrained Optimization May 2014 1 / 46 In This

More information

Optimization 4. GAME THEORY

Optimization 4. GAME THEORY Optimization GAME THEORY DPK Easter Term Saddle points of two-person zero-sum games We consider a game with two players Player I can choose one of m strategies, indexed by i =,, m and Player II can choose

More information

Tutorial on Convex Optimization: Part II

Tutorial on Convex Optimization: Part II Tutorial on Convex Optimization: Part II Dr. Khaled Ardah Communications Research Laboratory TU Ilmenau Dec. 18, 2018 Outline Convex Optimization Review Lagrangian Duality Applications Optimal Power Allocation

More information

Linear programming: Theory

Linear programming: Theory Division of the Humanities and Social Sciences Ec 181 KC Border Convex Analsis and Economic Theor Winter 2018 Topic 28: Linear programming: Theor 28.1 The saddlepoint theorem for linear programming The

More information