Topic one: Production line profit maximization subject to a production rate constraint


1 Topic one: Production line profit maximization subject to a production rate constraint

© 2010 Chuan Shi

2 Production line profit maximization

The profit maximization problem:

$$\max_N \; J(N) = A\,P(N) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i(N)$$
$$\text{s.t.}\quad P(N) \ge \hat{P}, \qquad N_i \ge N_{\min}, \quad i = 1, \dots, k-1,$$

where
P(N) = production rate, parts/time unit
P̂ = required production rate, parts/time unit
A = profit coefficient, $/part
n̄_i(N) = average inventory of buffer i, i = 1, ..., k−1
b_i = buffer cost coefficient, $/part/time unit
c_i = inventory cost coefficient, $/part/time unit
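The algorithm discussed below needs only two quantities from the line model: the production rate P(N) and the average inventories n̄_i(N). A minimal sketch of the objective in code, assuming a generic line evaluator; the toy evaluator here is an invented stand-in, since the slides rely on an analytical decomposition evaluator that is not reproduced:

```python
# Sketch of the profit objective J(N) = A*P(N) - sum(b_i*N_i) - sum(c_i*nbar_i(N)).
# evaluate_line_toy is a made-up placeholder so the code runs; a real study would
# plug in a decomposition-based evaluator of the production line.
import numpy as np

def evaluate_line_toy(N):
    """Toy evaluator returning (P(N), nbar(N)) for a buffer allocation N."""
    N = np.asarray(N, dtype=float)
    P = 0.9 * (1.0 - np.exp(-0.05 * N.min()))   # increases and saturates with buffer space
    nbar = 0.5 * N                              # average inventories grow with buffer sizes
    return P, nbar

def profit(N, A, b, c, evaluate=evaluate_line_toy):
    """Profit J(N) for buffer sizes N, given cost coefficients b and c."""
    P, nbar = evaluate(N)
    return A * P - np.dot(b, np.asarray(N, dtype=float)) - np.dot(c, nbar)
```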

3 An example about the research goal

Figure 2: J(N) vs. N_1 and N_2, showing the feasible region and the optimal boundary.

4 Two problems

Original constrained problem:

$$\max_N \; J(N) = A\,P(N) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i(N)$$
$$\text{s.t.}\quad P(N) \ge \hat{P}, \qquad N_i \ge N_{\min}, \quad i = 1, \dots, k-1.$$

Simpler unconstrained problem (Schor's problem):

$$\max_N \; J(N) = A\,P(N) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i(N)$$
$$\text{s.t.}\quad N_i \ge N_{\min}, \quad i = 1, \dots, k-1.$$
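The unconstrained (Schor-type) problem is the subproblem the algorithm solves repeatedly. Schor's gradient algorithm itself is not reproduced in these slides; the sketch below is a generic projected-gradient substitute with finite-difference gradients, and every name and constant in it is my own choice:

```python
# Generic projected-gradient ascent for max J(N) s.t. N_i >= N_min.
# Not Schor's algorithm; just a stand-in unconstrained solver.
import numpy as np

def projected_gradient_ascent(J, N0, N_min=1.0, step=5.0, h=1.0, iters=200):
    N = np.maximum(np.asarray(N0, dtype=float), N_min)
    for _ in range(iters):
        # Forward-difference estimate of the gradient of J at N.
        grad = np.array([(J(N + h * e) - J(N)) / h for e in np.eye(len(N))])
        N_new = np.maximum(N + step * grad, N_min)   # ascent step, then project onto N_i >= N_min
        if np.max(np.abs(N_new - N)) < 1e-3:
            break
        N = N_new
    return N
```

For example, projected_gradient_ascent(lambda N: profit(N, A=200, b=[1, 1], c=[1, 1]), N0=[10, 10]) with the toy evaluator above gives a rough maximizer of the toy objective.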

5 An example for algorithm derivation

Data: a three-machine, two-buffer line with machine parameters (r_i, p_i), i = 1, 2, 3, and required production rate P̂ = .88.

Cost function of the form J(N) = A P(N) − c_1 n̄_1(N) − c_2 n̄_2(N).

Figure 3: J(N) vs. N_1 and N_2.
Figure 4: the curve P(N_1, N_2) = P̂, separating the regions P(N_1, N_2) > P̂ and P(N_1, N_2) < P̂.

6 An example for algorithm derivation

Figure 5: J(N) vs. N_1 and N_2.

7 Algorithm derivation: two cases

Case 1: The solution N^u of the unconstrained problem

$$\max_N \; J(N) = A\,P(N) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i(N)
\qquad \text{s.t.}\quad N_i \ge N_{\min}, \; i = 1, \dots, k-1,$$

satisfies P(N^u) ≥ P̂. In this case, the solution of the constrained problem is the same as the solution of the unconstrained problem, and we are done.

Figure: the unconstrained optimum N^u = (N_1^u, N_2^u) lies inside the region P(N_1, N_2) > P̂.

8 Algorithm derivation: two cases (continued)

Case 2: N^u satisfies P(N^u) < P̂. In this case N^u is not the solution of the constrained problem.

Figure: N^u = (N_1^u, N_2^u) lies outside the region P(N_1, N_2) > P̂.

9 Algorithm derivation: two cases (continued)

Case 2 (continued): In this case, we consider the following unconstrained problem:

$$\max_N \; J'(N) = A'\,P(N) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i(N)
\qquad \text{s.t.}\quad N_i \ge N_{\min}, \; i = 1, \dots, k-1,$$

in which A is replaced by A′. Let N*(A′) be the solution to this problem and P*(A′) = P(N*(A′)).

10 Assertion

The constrained problem

$$\max_N \; J(N) = A'\,P(N) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i(N)
\qquad \text{s.t.}\quad P(N) \ge \hat{P}, \quad N_i \ge N_{\min}, \; i = 1, \dots, k-1,$$

has the same solution for all A′ for which the solution of the corresponding unconstrained problem

$$\max_N \; J'(N) = A'\,P(N) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i(N)
\qquad \text{s.t.}\quad N_i \ge N_{\min}, \; i = 1, \dots, k-1,$$

has P*(A′) ≤ P̂.

11 Interpretation of the assertion

We claim: if the optimal solution of the unconstrained problem is not the optimal solution of the constrained problem, then the solution of the constrained problem, N* = (N_1*, ..., N_{k−1}*), satisfies P(N*) = P̂.

We formally prove this by the Karush-Kuhn-Tucker (KKT) conditions of nonlinear programming.

14 Interpretation of the assertion

Figure panels: J(N) for the original problem (original A) and for successively larger values of A′, ending at the final A′. As A′ increases, the optimum of the unconstrained problem moves until it reaches the boundary P(N) = P̂, where it coincides with the solution of the constrained problem.

15 Karush-Kuhn-Tucker (KKT) conditions

Let x* be a local minimum of the problem

$$\min f(x) \quad \text{s.t.}\quad h_1(x) = 0, \dots, h_m(x) = 0, \qquad g_1(x) \le 0, \dots, g_r(x) \le 0,$$

where f, h_i, and g_j are continuously differentiable functions from R^n to R. Then there exist unique Lagrange multipliers λ_1*, ..., λ_m* and μ_1*, ..., μ_r* satisfying the following conditions:

$$\nabla_x L(x^*, \lambda^*, \mu^*) = 0,$$
$$\mu_j^* \ge 0, \quad j = 1, \dots, r,$$
$$\mu_j^* \, g_j(x^*) = 0, \quad j = 1, \dots, r,$$

where $L(x, \lambda, \mu) = f(x) + \sum_{i=1}^{m} \lambda_i h_i(x) + \sum_{j=1}^{r} \mu_j g_j(x)$ is called the Lagrangian function.
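A small numeric illustration of these conditions on a toy problem (not from the slides): minimize the distance from the point (2, 1) while staying in the unit disk. The unconstrained minimizer is infeasible, so the constraint is active and the multiplier is strictly positive. The solver, the way the multiplier is recovered, and all names are my own choices:

```python
# Numeric check of the KKT conditions on a toy problem:
#   min f(x) = (x1-2)^2 + (x2-1)^2   s.t.   g(x) = x1^2 + x2^2 - 1 <= 0.
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
grad_f = lambda x: np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] - 1.0)])
g = lambda x: x[0] ** 2 + x[1] ** 2 - 1.0
grad_g = lambda x: np.array([2.0 * x[0], 2.0 * x[1]])

# SLSQP expects inequality constraints written as c(x) >= 0, hence -g.
res = minimize(f, x0=np.zeros(2), method="SLSQP",
               constraints=[{"type": "ineq", "fun": lambda x: -g(x)}])
x_star = res.x

# Recover the multiplier from stationarity grad_f + mu*grad_g = 0 (least squares).
mu_star = -grad_f(x_star) @ grad_g(x_star) / (grad_g(x_star) @ grad_g(x_star))

print("x*  =", x_star)                                                # about (0.894, 0.447)
print("mu* =", mu_star)                                               # about sqrt(5) - 1 = 1.236
print("stationarity residual:", grad_f(x_star) + mu_star * grad_g(x_star))   # close to 0
print("complementary slackness mu*.g(x*):", mu_star * g(x_star))             # close to 0
```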

16 Convert the constrained problem to minimization form

Minimization form of the constrained problem:

$$\min_N \; J(N) = -A\,P(N) + \sum_{i=1}^{k-1} b_i N_i + \sum_{i=1}^{k-1} c_i \bar{n}_i(N)$$
$$\text{s.t.}\quad \hat{P} - P(N) \le 0, \qquad N_{\min} - N_i \le 0, \quad i = 1, \dots, k-1.$$

We have argued that we may treat the N_i as continuous variables, and P(N) and J(N) as continuously differentiable functions.

17 Applying KKT conditions

The Slater constraint qualification for convex inequalities guarantees the existence of Lagrange multipliers for our problem. So there exist unique Lagrange multipliers μ* and μ_i*, i = 1, ..., k−1, for the constrained problem satisfying the KKT conditions:

$$\nabla J(N^*) + \mu^* \nabla\big(\hat{P} - P(N^*)\big) + \sum_{i=1}^{k-1} \mu_i^* \nabla\big(N_{\min} - N_i^*\big) = 0 \qquad (1)$$

or, componentwise,

$$\begin{pmatrix} \partial J(N^*)/\partial N_1 \\ \vdots \\ \partial J(N^*)/\partial N_{k-1} \end{pmatrix}
- \mu^* \begin{pmatrix} \partial P(N^*)/\partial N_1 \\ \vdots \\ \partial P(N^*)/\partial N_{k-1} \end{pmatrix}
- \begin{pmatrix} \mu_1^* \\ \vdots \\ \mu_{k-1}^* \end{pmatrix} = 0. \qquad (2)$$

18 Applying KKT conditions

$$\mu^* \ge 0, \qquad \mu_i^* \ge 0, \quad i = 1, \dots, k-1, \qquad (3)$$
$$\mu^* \big(\hat{P} - P(N^*)\big) = 0, \qquad (4)$$
$$\mu_i^* \big(N_{\min} - N_i^*\big) = 0, \quad i = 1, \dots, k-1, \qquad (5)$$

where N* is the optimal solution of the constrained problem.

Assume that N_i* > N_min for all i. In this case, by equation (5), we know that μ_i* = 0, i = 1, ..., k−1.

19 Applying KKT conditions

The KKT conditions simplify to

$$\begin{pmatrix} \partial J(N^*)/\partial N_1 \\ \vdots \\ \partial J(N^*)/\partial N_{k-1} \end{pmatrix}
- \mu^* \begin{pmatrix} \partial P(N^*)/\partial N_1 \\ \vdots \\ \partial P(N^*)/\partial N_{k-1} \end{pmatrix} = 0, \qquad (6)$$

$$\mu^* \big(\hat{P} - P(N^*)\big) = 0, \qquad (7)$$

where μ* ≥ 0. Since N* is not the optimal solution of the unconstrained problem, ∇J(N*) ≠ 0. Thus μ* ≠ 0, since otherwise condition (6) would be violated. By condition (7), the optimal solution satisfies P(N*) = P̂.

20 Applying KKT conditions

Conditions (6) and (7) also reveal how we can find μ* and N*. For every μ*, condition (6) determines N*, since there are k−1 equations in k−1 unknowns. Therefore, we can think of N* = N*(μ*). We search for a value of μ* such that P(N*(μ*)) = P̂. As we show next, this is exactly what the algorithm does.

21 Applying KKT conditions

Replacing μ* by μ′ > 0 in condition (6) gives

$$\begin{pmatrix} \partial J(N^c)/\partial N_1 \\ \vdots \\ \partial J(N^c)/\partial N_{k-1} \end{pmatrix}
- \mu' \begin{pmatrix} \partial P(N^c)/\partial N_1 \\ \vdots \\ \partial P(N^c)/\partial N_{k-1} \end{pmatrix} = 0, \qquad (8)$$

where N^c is the unique solution of (8). Note that N^c is the solution of the following optimization problem, in which J(N) is the minimization-form objective above:

$$\min_N \; J'(N) = J(N) + \mu' \big(\hat{P} - P(N)\big)
\qquad \text{s.t.}\quad N_i \ge N_{\min}, \; i = 1, \dots, k-1. \qquad (9)$$

22 Applying KKT conditions

The problem above is equivalent to

$$\max_N \; J'(N) = J(N) - \mu' \big(\hat{P} - P(N)\big)
\qquad \text{s.t.}\quad N_i \ge N_{\min}, \; i = 1, \dots, k-1, \qquad (10)$$

where J(N) is again the original profit, or

$$\max_N \; J'(N) = A\,P(N) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i(N) - \mu' \big(\hat{P} - P(N)\big)
\qquad \text{s.t.}\quad N_i \ge N_{\min}, \; i = 1, \dots, k-1, \qquad (11)$$

or, dropping the constant term −μ′P̂, which does not change the maximizer,

$$\max_N \; J'(N) = (A + \mu')\,P(N) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i(N)
\qquad \text{s.t.}\quad N_i \ge N_{\min}, \; i = 1, \dots, k-1. \qquad (12)$$

25 Applying KKT conditions

or, finally,

$$\max_N \; J'(N) = A'\,P(N) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i(N)
\qquad \text{s.t.}\quad N_i \ge N_{\min}, \; i = 1, \dots, k-1, \qquad (13)$$

where A′ = A + μ′. This is exactly the unconstrained problem, and N^c is its optimal solution. Note that μ′ > 0 implies A′ > A.

In addition, the KKT conditions indicate that the optimal solution N* of the constrained problem satisfies P(N*) = P̂. This means that, for every A′ > A (or μ′ > 0), we can find the corresponding solution N^c satisfying condition (8) by solving problem (13). We need to find the A′ such that the solution of problem (13), denoted N*(A′), satisfies P(N*(A′)) = P̂.

27 Applying KKT conditions

Then μ* = A′ − A and N*(A′) satisfy conditions (6) and (7):

$$\nabla J(N^*(A')) + \mu^* \nabla\big(\hat{P} - P(N^*(A'))\big) = 0,$$
$$\mu^* \big(\hat{P} - P(N^*(A'))\big) = 0.$$

Hence μ* = A′ − A is exactly the Lagrange multiplier satisfying the KKT conditions of the constrained problem, and N* = N*(A′) is its optimal solution.

Consequently, solving the constrained problem through our algorithm is essentially finding the unique Lagrange multipliers and the optimal solution of the problem.

29 Algorithm summary for case 2

Solve the unconstrained problem: for fixed A′, solve, by a gradient method, the unconstrained problem

$$\max_N \; J'(N) = A'\,P(N) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i(N)
\qquad \text{s.t.}\quad N_i \ge N_{\min}, \; i = 1, \dots, k-1.$$

Search: do a one-dimensional search on A′ > A (equivalently, μ′ > 0) to find the A′ such that the solution of the unconstrained problem, N*(A′), satisfies P(N*(A′)) = P̂.

Flowchart: choose A′; solve the unconstrained problem; if P(N*(A′)) = P̂, quit; otherwise choose a new A′ and repeat.
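A sketch of this two-step scheme in code, assuming an unconstrained solver (for instance, built from the projected-gradient sketch above) and a production-rate evaluator are available. The slides only say "one-dimensional search"; the bracketing and bisection below are my choices, and all function names are hypothetical:

```python
# Find A' > A such that the unconstrained optimum N*(A') satisfies P(N*(A')) = P_hat.
# solve_unconstrained(A) should return the maximizer of problem (13) for profit
# coefficient A; eval_P(N) should return the production rate of allocation N.
def solve_constrained(A, P_hat, solve_unconstrained, eval_P, tol=1e-4, max_iter=50):
    # Case 1: the unconstrained optimum may already meet the rate constraint.
    N = solve_unconstrained(A)
    if eval_P(N) >= P_hat:
        return N, A

    # Case 2: increase A' until the unconstrained optimum becomes feasible,
    # which brackets the A' we are looking for.
    A_lo, A_hi = A, 2.0 * A
    while eval_P(solve_unconstrained(A_hi)) < P_hat:
        A_lo, A_hi = A_hi, 2.0 * A_hi

    # One-dimensional search (here bisection) on A', relying on P(N*(A'))
    # increasing as A' increases.
    A_mid = A_hi
    for _ in range(max_iter):
        A_mid = 0.5 * (A_lo + A_hi)
        N = solve_unconstrained(A_mid)
        if abs(eval_P(N) - P_hat) < tol:
            break
        if eval_P(N) < P_hat:
            A_lo = A_mid
        else:
            A_hi = A_mid
    return N, A_mid
```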

30 Numerical results

Numerical experiment outline:
Experiments on short lines.
Experiments on long lines.
Computation speed.

Method we use to check the algorithm: a P̂ surface search in (N_1, ..., N_{k−1}) space. All buffer size allocations N such that P(N) = P̂ compose the P̂ surface.
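One plausible brute-force version of the P̂ surface check for a two-buffer line; the slides do not spell the search out, so the enumeration strategy and the names below are mine:

```python
# Brute-force P_hat surface search for a two-buffer line: for each N1, find the
# smallest N2 whose production rate reaches P_hat (a point essentially on the
# P_hat surface), and keep the allocation with the largest profit.
def surface_search_2buffer(P_hat, eval_P, profit_of, N_min=1, N_max=200):
    best = None
    for N1 in range(N_min, N_max + 1):
        for N2 in range(N_min, N_max + 1):
            if eval_P((N1, N2)) >= P_hat:
                candidate = ((N1, N2), profit_of((N1, N2)))
                if best is None or candidate[1] > best[1]:
                    best = candidate
                break          # first feasible N2 for this N1; move to the next N1
    return best                # (allocation, profit), or None if P_hat is unreachable
```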

31 P̂ surface search

Figure 6: P̂ surface search (the P̂ surface and the optimal solution on it).

32 Experiment on short lines (4-buffer line)

Line parameters: P̂ = .88; machines M_1, ..., M_5 with repair and failure parameters r and p. Machine 4 is the least reliable machine (the bottleneck) of the line.

Cost function:

$$J(N) = 25\,P(N) - \sum_{i=1}^{4} N_i - \sum_{i=1}^{4} \bar{n}_i(N)$$

33 Experiment on short lines (4-buffer line)

Results: the optimal solutions found by the P̂ surface search and by the algorithm are compared, with the percentage error and the rounded algorithm solution, for the production rate, the buffer sizes N_1, ..., N_4, the average inventories n̄_1, ..., n̄_4, and the profit ($).

The maximal error is 0.23% and appears in n̄_2. Computer time for this experiment is 2.69 seconds.

34 Experiment on long lines (11-buffer line)

Line parameters: P̂ = .88; machines M_1, ..., M_12 with repair and failure parameters r and p.

Cost function:

$$J(N) = 6\,P(N) - \sum_{i=1}^{11} N_i - \sum_{i=1}^{11} \bar{n}_i(N)$$

35 Experiment on long lines (11-buffer line)

Results. Optimal buffer sizes: the P̂ surface search solution and the algorithm solution are compared, with the percentage error and the rounded solution, for the production rate and for N_1, ..., N_11.

36 Experiment on long lines (11-buffer line)

Results (continued). Optimal average inventories: the P̂ surface search solution and the algorithm solution are compared, with the percentage error and the rounded solution, for n̄_1, ..., n̄_11 and for the profit ($).

37 Experiments for the Tolio, Matta, and Gershwin (2002) model

Consider a 4-machine, 3-buffer line with constraint P̂ = .87. In addition, A = 2 and all b_i and c_i are 1. Each machine M_1, ..., M_4 has two failure modes, with parameters r_i1, p_i1 and r_i2, p_i2.

Results: the P̂ surface search and the algorithm are compared, with percentage errors, for P(N*), the buffer sizes N_1, N_2, N_3, the average inventories n̄_1, n̄_2, n̄_3, and the profit ($).

38 Experiments for the Levantesi, Matta, and Tolio (2003) model

Consider a 4-machine, 3-buffer line with constraint P̂ = .87. In addition, A = 2 and all b_i and c_i are 1. Each machine M_1, ..., M_4 has a processing rate μ_i and two failure modes, with parameters r_i1, p_i1 and r_i2, p_i2.

Results: the P̂ surface search and the algorithm are compared, with percentage errors, for P(N*), the buffer sizes N_1, N_2, N_3, the average inventories n̄_1, n̄_2, n̄_3, and the profit ($).

39 Computation speed

Experiment: run the algorithm on a series of lines of identical machines to see how fast it can optimize longer lines.

The length of the line varies from 4 machines to 30 machines. The machine parameters are p = .01 and r = .1. In all cases, the required production rate is P̂ = .88. The objective function is

$$J(N) = A\,P(N) - \sum_{i=1}^{k-1} N_i - \sum_{i=1}^{k-1} \bar{n}_i(N),$$

where A = 5k for the line of length k.
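A sketch of the kind of timing harness such an experiment needs (an assumed structure, not the authors' code):

```python
# Time the optimizer on lines of identical machines, k = 4, ..., 30.
# solve_line_of_length(k) is assumed to build the k-machine line and run the
# constrained optimization described above.
import time

def timing_experiment(solve_line_of_length, lengths=range(4, 31)):
    results = []
    for k in lengths:
        t0 = time.perf_counter()
        solve_line_of_length(k)
        results.append((k, time.perf_counter() - t0))
    return results
```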

40 Computation speed

Figure: computer time (seconds) vs. length of the production line, with reference lines marking 1 minute and 3 minutes.

41 Algorithm reliability

We run the algorithm on 739 randomly generated 4-machine, 3-buffer lines. 98.92% of these experiments have a maximal error of less than 6%.

Figure: histogram of the maximal error (frequency vs. maximal error).

42 Algorithm reliability

Taking a closer look at those 98.92% of the experiments, we find a more accurate distribution of the maximal error: out of the total 739 experiments, 83.9% have a maximal error of less than 2%.

Figure: histogram of the maximal error (frequency vs. maximal error).

43 MIT OpenCourseWare

Manufacturing Systems Analysis, Spring 2010

For information about citing these materials or our Terms of Use, visit:
