A Theory-Based Termination Condition for Convergence in Real-Parameter Evolutionary Algorithms

1 A Theory-Based Termination Condition for Convergence in Real-Parameter Evolutionary Algorithms. Kalyanmoy Deb, Koenig Endowed Chair Professor, Department of Electrical and Computer Engineering, Michigan State University, East Lansing, USA. Keynote Talk at the EVOLVE Conference, Iasi, Romania, 2015.

2 Overview
- Most evolutionary applications are in search and optimization
- Theoretical optimality conditions: Karush-Kuhn-Tucker (KKT) conditions
- KKT conditions for near-KKT points; approximate KKT points
- KKT Proximity Measure (KKTPM) for single- and multi-objective optimization
- Results on standard test problems
- Termination criterion
- Practicalities of implementation
- A fast and approximate method
- Conclusions

3 Evolutionary Optimization (EO): A Mimicry of Natural Evolution and Genetics

    begin
        t := 0;                              // generation counter
        Initialization P(t);                 // solution representation
        Evaluation P(t);
        while not TERMINATION do
            P'(t)  := Selection (P(t));
            P''(t) := Variation (P'(t));     // recombination, mutation
            Evaluation P''(t);
            P(t+1) := Survivor (P(t), P''(t));
            t := t+1;
        od
    end

- Starting from an initial population, the population mean approaches the optimum
- The population variance reduces
- No need of gradients
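The pseudocode above maps directly onto a few lines of Python. Below is a minimal sketch of such a real-parameter EA for minimization; the operator choices (binary tournament selection, midpoint recombination with Gaussian perturbation, truncation survival) are illustrative stand-ins, not the specific operators used in the talk.

    import random

    def evolve(f, n_vars, bounds, pop_size=20, max_gen=100):
        # Initialization P(t): random points within the variable bounds
        lo, hi = bounds
        pop = [[random.uniform(lo, hi) for _ in range(n_vars)]
               for _ in range(pop_size)]
        for t in range(max_gen):                   # TERMINATION: fixed budget
            fit = [f(x) for x in pop]              # Evaluation P(t)
            def select():                          # Selection: binary tournament
                a, b = random.sample(range(pop_size), 2)
                return pop[a] if fit[a] < fit[b] else pop[b]
            offspring = []
            for _ in range(pop_size):              # Variation: recombination + mutation
                p1, p2 = select(), select()
                child = [0.5 * (u + v) + random.gauss(0.0, 0.1 * (hi - lo))
                         for u, v in zip(p1, p2)]
                offspring.append([min(max(c, lo), hi) for c in child])
            merged = pop + offspring               # Survivor: best half of P(t) and P''(t)
            merged.sort(key=f)
            pop = merged[:pop_size]
        return min(pop, key=f)

    # The population mean approaches the optimum and its variance shrinks,
    # without any gradient information:
    best = evolve(lambda x: sum(v * v for v in x), n_vars=5, bounds=(-5.0, 5.0))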

4 Adopted in Practice: Mitsubishi Regional Jet (MRJ); nose of the Shinkansen N700 series (bullet train in Japan); KONE's elevator control system ( countries/en_mp/documents/brochures/kone%20alta.pdf)

5 Termination Condition for EOs
- Maximum number of generations reached: pragmatic, but no guarantee of a near-optimum
- Stagnation: over the past tau generations, the improvement in f is within epsilon, or the average and best performance are close
- A target is achieved: pragmatic, but requires prior knowledge
- Since none of these guarantees a near-optimum, the EO is run multiple times
- Can we take help of optimization theory?
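As a concrete illustration of the stagnation test above, here is a small sketch in Python; the window tau and tolerance eps are user-chosen parameters, and `history` is assumed to hold the best objective value per generation (minimization).

    def stagnated(history, tau=50, eps=1e-6):
        # Terminate when, over the past tau generations, the best f-value
        # has improved by less than eps.
        if len(history) <= tau:
            return False
        return history[-tau - 1] - history[-1] < eps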

6 Hybrid Methods are Popular
- EA followed by a local search, or an EA whose mutation operator uses a local search
- How do we even terminate the local search? When optimality conditions are satisfied?
- Are the optimality conditions regular near the optimum, monotonic along a line to a neighbor?
- Surprising results follow!

7 Unconstrained Single-Objective Problems
- Problem: Minimize f(x)
- Optimality condition: x* = {x | grad f(x) = 0 and H(x) positive definite}
- (Figure: the Himmelblau function, which has four minima, and the norm of its gradient vector over the (x1, x2) plane)
- Define Error(x) = ||grad f(x)||; Error(x) has many zeros, so it cannot be used alone
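A quick numeric check makes the slide's point concrete. The gradient of the Himmelblau function f(x, y) = (x^2 + y - 11)^2 + (x + y^2 - 7)^2 is derived by hand below, and its norm vanishes at each of the four minima (and also at the function's saddle points and local maximum), so the norm alone cannot be a global proximity measure:

    import math

    def grad_norm(x, y):
        # ||grad f|| for f = (x^2 + y - 11)^2 + (x + y^2 - 7)^2
        gx = 4 * x * (x**2 + y - 11) + 2 * (x + y**2 - 7)
        gy = 2 * (x**2 + y - 11) + 4 * y * (x + y**2 - 7)
        return math.hypot(gx, gy)

    # The norm is (near) zero at all four minima:
    for pt in [(3.0, 2.0), (-2.805118, 3.131312),
               (-3.779310, -3.283186), (3.584428, -1.848126)]:
        print(pt, grad_norm(*pt))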

8 Monotonicity of Error
- The error reduces to zero locally at an optimum (figure: norm of the gradient vector at y=2, with optimal point x=3)
- For unconstrained problems, the gradient norm can be used as a proximity measure locally
- If a positive-definiteness check on H(x) is added, all four minima are found
- Regular locally, but not so in the entire space

9 Constrained Optimization Problem
- Decision variables: x = (x_1, x_2, ..., x_n)
- Constraints restrict which solutions are feasible:

      Minimize f(x)
      subject to g_j(x) >= 0,           j = 1, 2, ..., J
                 h_k(x) = 0,            k = 1, 2, ..., K
                 x_i^L <= x_i <= x_i^U, i = 1, 2, ..., n

- Equality and inequality constraints; constraints can be nonlinear
- The minimum of f(x) need not be the constrained minimum

10 Karush-Kuhn-Tucker (KKT) Conditions
- Variable bounds are treated as inequality constraints
- The conditions comprise: an equilibrium condition, feasibility conditions, the complementary slackness condition, and non-negativity of the Lagrange multipliers
- A triplet (x, u, v) that satisfies the above conditions is a KKT point, when a certain constraint qualification is satisfied
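The condition equations on this slide were images and did not survive transcription; in the notation of slide 9 (with g_j(x) >= 0), the four labeled conditions take the standard form below. The sign convention is a reconstruction, with u_j and v_k the Lagrange multipliers of the inequality and equality constraints:

\[
\begin{aligned}
& \nabla f(\mathbf{x}) - \sum_{j=1}^{J} u_j \nabla g_j(\mathbf{x}) - \sum_{k=1}^{K} v_k \nabla h_k(\mathbf{x}) = \mathbf{0} && \text{(equilibrium)} \\
& g_j(\mathbf{x}) \ge 0, \qquad h_k(\mathbf{x}) = 0 && \text{(feasibility)} \\
& u_j\, g_j(\mathbf{x}) = 0 && \text{(complementary slackness)} \\
& u_j \ge 0 && \text{(non-negativity of multipliers)}
\end{aligned}
\]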

11 Geometrical Significance of KKT Conditions
- Inequality constraints only: at a KKT point, a reduced objective value comes only from infeasible solutions, making the point a likely candidate for a minimum
- All u_j must be positive or zero
- (Figure: regions where a constraint is not satisfied, where the objective value reduces, and where both constraints are infeasible)
- A KKT point need not be a minimum

12 Karush-Kuhn-Tucker (KKT) Necessity Theorem
- Let f, g, and h be differentiable functions and x* be a feasible solution. Let I = {j | g_j(x*) = 0} be the set of active constraints. Furthermore, let grad g_j(x*) for j in I and grad h_k(x*) be linearly independent. If x* is an optimal solution to the NLP, then there exists a (u*, v*) such that the triplet (x*, u*, v*) solves the Kuhn-Tucker conditions (x* is a KKT point)
- Linear Independence Constraint Qualification (LICQ) condition (a strong CQ): implies certain regularity conditions
- Most practical problems satisfy the condition

13 Main Results
- The KKT necessity theorem can be used to identify points that are not optimal
- If a feasible point satisfies the CQ condition and is not a KKT point, it cannot be a minimum
- If a feasible point satisfies the CQ condition and is a KKT point, it may or may not be a minimum; more conditions are needed to confirm optimality
- If a feasible point does not satisfy the CQ condition, it may or may not be a KKT point or a minimum

14 Approximate KKT Point
- x is an epsilon-KKT point if, given epsilon > 0, there exists u_i >= 0 for all constraints such that the equilibrium condition is not satisfied exactly, but within epsilon, while complementary slackness is satisfied
- Proven in Dutta et al. (2013) in the Journal of Global Optimization: Dutta, J., Deb, K., Tulshyan, R. and Arora, R. (2013). Approximate KKT points and a proximity measure for termination. Journal of Global Optimization, 56(4).
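Written out in the same notation (a reconstruction consistent with the cited paper), only the equilibrium condition is relaxed:

\[
\begin{aligned}
& \Bigl\| \nabla f(\mathbf{x}) - \sum_{j=1}^{J} u_j \nabla g_j(\mathbf{x}) \Bigr\| \le \epsilon && \text{(equilibrium within } \epsilon) \\
& u_j\, g_j(\mathbf{x}) = 0, \quad u_j \ge 0 \quad \text{for all } j && \text{(complementary slackness exact)}
\end{aligned}
\]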

15 KKT Error Metric
- Take a feasible iterate x^k found by any optimization algorithm
- Define the KKT Error as the extent of violation of the equilibrium condition for a feasible solution, over the active constraint set I(x^k)
- At a KKT point, the KKT Error is zero; how about at a near-KKT point? Surprising results
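The formula image is missing here; one plausible reading, consistent with the description above, measures the smallest attainable violation of the equilibrium condition over non-negative multipliers on the active set:

\[
\mathrm{KKT\,Error}(\mathbf{x}^k) \;=\; \min_{u_j \ge 0} \, \Bigl\| \nabla f(\mathbf{x}^k) - \sum_{j \in I(\mathbf{x}^k)} u_j \nabla g_j(\mathbf{x}^k) \Bigr\|
\]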

16 A Quadratic Problem with Linear Constraints
- Along x = y: acceptable behavior of the error
- Along x + y = 3: NOT acceptable!

17 Modified epsilon-KKT Point
- Discontinuity: u_i = 0 slightly away from the constraint boundary, but u_i != 0 at the boundary, hence a sudden change
- x is a modified epsilon-KKT point if, given epsilon > 0, there exists u_i >= 0 for all constraints such that both the equilibrium and complementary slackness conditions are within bounds
- A similar theorem is proven for smooth and non-smooth problems, leading to the approximate KKT point: Dutta, J., Deb, K., Tulshyan, R. and Arora, R. (2013). Approximate KKT points and a proximity measure for termination. Journal of Global Optimization, 56(4).

18 Proposed KKT Proximity Metric (KKTPM)
- For a feasible iterate x^k, solve an optimization problem in epsilon_k and the multipliers u (one per constraint), and compute KKTPM = epsilon_k*
- By-product: u_i* is obtained for every constraint
- A corresponding KKTPM for non-smooth problems is defined with sub-differentials of f and g
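The subproblem itself was an image on the slide; based on the cited Journal of Global Optimization paper, and matching the "one linear and one quadratic constraint" remark on slide 45, it plausibly takes the form:

\[
\begin{aligned}
\epsilon_k^{*} \;=\; \min_{\epsilon_k,\, \mathbf{u}} \quad & \epsilon_k \\
\text{s.t.} \quad & \Bigl\| \nabla f(\mathbf{x}^k) - \sum_{j=1}^{J} u_j \nabla g_j(\mathbf{x}^k) \Bigr\|^2 \le \epsilon_k && \text{(relaxed equilibrium, quadratic)} \\
& \sum_{j=1}^{J} u_j\, g_j(\mathbf{x}^k) \le \epsilon_k && \text{(relaxed complementary slackness, linear)} \\
& u_j \ge 0, \quad j = 1, \ldots, J.
\end{aligned}
\]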

19 A Test Problem
- KKT Error metric: along the path from A to C, the KKT Error suddenly drops to zero
- KKT Proximity Metric: along the same path from A to C, the KKTPM goes to zero smoothly

20 G-Series Constrained Test Problems: RGA Results
- (Plots for g01, g04, g07, and g09)
- Remarkable, as real-parameter GAs (RGAs) do not use any gradient information
- KKTPM can be used as a termination condition

21 Comparison with KNITRO
- KNITRO's OptErr requires finding the multipliers u_i
- A similar reduction in the performance metrics is observed

22 Computed Optima and Lagrange Multipliers
- For the G-series problems, the optima and multipliers are settled once and for all; refer to the Journal of Global Optimization paper

23 Multi-Objective Optimization: Handling Multiple Conflicting Objectives
- Bueno, bonito, barato? (Good, nice, cheap: the "3B"s)
- (Figure: a dominated car versus a doomed car)
- Multiple solutions are optimal; how do we apply KKTPM?

24 Which Solutions are Optimal?
- Mathematical definition: a solution x* in X is Pareto-optimal if there is no x in X such that f(x) - f(x*) lies in -R_+^M \ {0}
- Domination-based: x^(1) dominates x^(2) if x^(1) is no worse than x^(2) in all objectives, and x^(1) is strictly better than x^(2) in at least one objective
- (Figure, objective space with Min f1 and Min f2: point 1 gets dominated by point 3)

25 Pareto-Optimal Solutions
- P' = Nondominated(P): the solutions which are not dominated by any member of the set P
- O(N log N) algorithms exist
- Pareto-optimal set = Nondominated(S), the nondominated set of the entire search space S
- A number of solutions are optimal, forming the efficient front
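A direct O(N^2) Python rendering of the definitions on this and the previous slide (the O(N log N) algorithms mentioned are more involved; this sketch is for illustration only):

    def dominates(a, b):
        # a dominates b (minimization): no worse in all objectives,
        # strictly better in at least one.
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def nondominated(points):
        # Members of `points` not dominated by any other member.
        return [p for p in points
                if not any(dominates(q, p) for q in points if q is not p)]

    # Example: (1, 5) and (2, 2) are nondominated; (3, 6) is dominated by both.
    print(nondominated([(1, 5), (2, 2), (3, 6)]))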

26 Evolutionary Multi-Objective Optimization (EMO): Principle
- Step 1: Find a set of Pareto-optimal solutions
- Step 2: Choose one from the set

27 Evolution of EMO
- Early penalty-based approaches; VEGA (1984)
- Goldberg's (1989) suggestion; MOGA, NSGA, and NPGA used Goldberg's suggestion
- Elitist EMO (SPEA, NSGA-II, PAES, MOMGA, etc.) to the present
- EMOO Web site (as of Jan 2010): 1,534 journal papers, 2,266 conference papers, 226 PhD theses
- (Timeline: VEGA, MOGA, NPGA, NSGA, NSGA-II, SPEA, MOSES, weighted L_p norm)

28 Elitist Non-dominated Sorting Genetic Algorithm (NSGA-II)
- NSGA-II (Deb et al., IEEE TEC 2002): modular, fast, and with no additional parameter
- Commercialization: MO-SHERPA (RCT), modeFRONTIER (Esteco), iSIGHT (Engineous), VisualDOC (Vanderplaats)
- Fast-Breaking Paper in Engineering, ISI Web of Science (Feb 2004); Thomson Citation Laureate Award 2006; Current Classic and most highly cited paper (14,500 Google Scholar citations)

29 NSGA-II Simulation on an Unconstrained Problem
- Parallel search; multiple solutions in a single run

30 NSGA-II on a Constrained Problem

31 Many-Objective Optimization: One of the Main Thrusts in EMO Today
- Multi-objective: 2-3 objectives; many-objective: more than 3 objectives
- EMO difficulties for many-objective problems:
  1. A large fraction of the population gets dominated
  2. Maintaining diversity is difficult
  3. The recombination operator becomes inefficient
  4. Representing the PO front requires exponentially more points
  5. Performance measures are difficult to compute
  6. Visualization is difficult

32 NSGA-III (IEEE TEC 2014): An EMO for Many-Objective Optimization
- Guided search through supplied reference points; selection is similar to NSGA-II's
- Niching through selecting points close to the reference lines
- Steps: normalization, association, niching
- No additional parameter needed

33 Some Results of NSGA-III (IEEE TEC, August 2014)
- (Plots: a 3-objective problem, a 15-objective problem, the water problem with a few solutions, and constrained problems)

34 NSGA-III on ZDT1 (2 Objectives)

35 NSGA-III on DTLZ2 (3 Objectives)

36 Land Use Management
- 315 independently manageable paddocks, each having around 100 choices in any of 10 years
- The number of possible solutions dwarfs the roughly 10^82 atoms in the universe
- (Maps: land use before and after 10 years)

37 Variation of 14 Objectives
- Sheep substantively lower; dairy drastically reduced
- More and less frequent forest harvesting
- Substantial increase in beef cattle

38 KKT Proximity in Evolutionary Multi-Objective Optimization
- As points move towards the efficient front, KKTPM must reduce
- KKTPM must have similar values for points equidistant from the front

39 KKT Optimality Conditions for Multi-Objective Problems
- For a supplied iterate x^k, the conditions comprise: an equilibrium condition, constraint satisfaction, complementary slackness, and non-negativity of the multipliers
- Find lambda*, u* for the minimum KKT Error
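The equations were images on this slide; for M objectives the labeled conditions are commonly written as below (a reconstruction in the g_j(x) >= 0 convention of slide 9, with objective multipliers lambda_m normalized to sum to one):

\[
\begin{aligned}
& \sum_{m=1}^{M} \lambda_m \nabla f_m(\mathbf{x}^k) - \sum_{j=1}^{J} u_j \nabla g_j(\mathbf{x}^k) = \mathbf{0}, \\
& g_j(\mathbf{x}^k) \ge 0, \qquad u_j\, g_j(\mathbf{x}^k) = 0, \qquad u_j \ge 0, \\
& \lambda_m \ge 0, \qquad \sum_{m=1}^{M} \lambda_m = 1.
\end{aligned}
\]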

40 KKT Error Metric for Multi-Objective Problems
- An example problem is solved using MATLAB's fmincon()

41 KKT Error on Problem P1
- The KKT Error increases towards the efficient front
- Hence the KKT Error cannot serve as a proximity metric

42 KKT Proximity Metric for Multi-Objective Problems
- Scalarize the MO problem using the achievement scalarizing function (ASF) formulation; choose a reference point z and a weight vector w
- (Figure: ASF(KG) = ASF(GH); minimizing the ASF finds point O)
- The idea is borrowed from the MCDM literature
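The ASF referred to here is Wierzbicki's achievement scalarizing function from the MCDM literature; with reference point z and weight vector w it reads:

\[
\mathrm{ASF}(\mathbf{x};\, \mathbf{z}, \mathbf{w}) \;=\; \max_{m=1}^{M} \frac{f_m(\mathbf{x}) - z_m}{w_m}
\]

Its level sets are the corner-shaped contours in the slide's figure, which is why ASF(KG) = ASF(GH) there, and why minimizing the ASF along the direction given by w lands on the efficient point O.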

43 KKTPM (cont.)
- z: the utopian point; w: a weight (direction) vector defined from z and the current point F
- If F is on the efficient front, it is the ASF minimum; otherwise, the KKT error will not be zero

44 KKT Proximity Metric (cont.)
- A smoother (augmented) ASF is used
- Recall the single-objective case: KKTPM = epsilon_k*

45 KKT Proximity Metric (cont.)
- Treat the ASF as a single-objective problem and solve for the optimal epsilon_k*:
  1. Relax the complementary slackness condition
  2. Add a penalty
- Define KKTPM accordingly and use MATLAB's fmincon() to solve it; the subproblem has one linear and one quadratic constraint
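The slides use MATLAB's fmincon() for this subproblem; a rough SciPy equivalent for the single-objective form reconstructed after slide 18 is sketched below. The example problem, the g(x) >= 0 sign convention, and all names here are illustrative assumptions, not the talk's exact implementation.

    import numpy as np
    from scipy.optimize import minimize

    def kktpm(grad_f, grads_g, g_vals):
        # Sketch: minimize eps over z = (eps, u_1, ..., u_J) subject to
        #   ||grad f - sum_j u_j grad g_j||^2 <= eps  (quadratic constraint)
        #   sum_j u_j g_j <= eps                      (linear, relaxed slackness)
        #   u_j >= 0, eps >= 0.
        J = len(g_vals)

        def eqm(z):                   # eps - ||grad f - sum u_j grad g_j||^2 >= 0
            v = grad_f - sum(u * gg for u, gg in zip(z[1:], grads_g))
            return z[0] - float(np.dot(v, v))

        def slack(z):                 # eps - sum u_j g_j >= 0
            return z[0] - float(np.dot(z[1:], g_vals))

        res = minimize(lambda z: z[0], np.zeros(1 + J), method="SLSQP",
                       bounds=[(0.0, None)] * (1 + J),
                       constraints=[{"type": "ineq", "fun": eqm},
                                    {"type": "ineq", "fun": slack}])
        return res.x[0]

    # Example: minimize f(x) = x1^2 + x2^2 s.t. g(x) = x1 + x2 - 1 >= 0.
    # At the constrained minimum x = (0.5, 0.5), the metric should be ~0.
    x = np.array([0.5, 0.5])
    print(kktpm(grad_f=2 * x,
                grads_g=[np.array([1.0, 1.0])],
                g_vals=[x[0] + x[1] - 1]))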

46 KKT Proximity Metric on Problem P1
- Smooth reduction to zero; the contours are parallel to the PO front

47 ZDT1 Test Problem with NSGA-II (Zitzler, Deb and Thiele, 2000)
- (Plots: KKTPM surface and NSGA-II populations, N=100)
- KKTPM contours are parallel to the PO front; the population converges as a front

48 ZDT1 Problem (cont.)
- KKTPM reduces with closeness to the PO front
- Correlation to distance from the front: R = 0.993

49 ZDT2 Test Problem with NSGA-II (Zitzler, Deb and Thiele, 2000)
- (Plots: KKTPM surface and NSGA-II populations, N=100)
- KKTPM contours are parallel to the PO front
- KKTPM is computed for every nondominated point at every generation of NSGA-II

50 ZDT4 Test Problem with NSGA-II (Zitzler, Deb and Thiele, 2000)
- ZDT4 has multiple local efficient fronts, hence it is more difficult to solve
- The KKTPM plots demonstrate this aspect; it takes 200 generations to converge
- (Plots: KKTPM statistics (smallest, 1st quartile, median, 3rd quartile, largest) and average g() versus generation number; NSGA-II populations, N=100)

51 DTLZ1 Problems with NSGA-III (Deb et al., 2002)
- (Plots: KKTPM variations for NSGA-III on the 3-, 5-, and 10-objective instances; statistics (smallest, 1st quartile, median, 3rd quartile, largest) and average g() versus generation number)

52 DTLZ2 Problems with NSGA-III (Deb et al., 2002)
- (Plots: KKTPM variations for NSGA-III on the 3-, 5-, and 10-objective instances; statistics (smallest, 1st quartile, median, 3rd quartile, largest) and average g() versus generation number)

53 Constrained MO Problems: TNK and BNH (Deb, 2001, Wiley book)
- (Plots: KKTPM statistics (smallest, 1st quartile, median, 3rd quartile, largest) versus generation number)

54 Problem SRN with NSGA-II (Deb, 2001, Wiley book)
- NSGA-II convergence is poor, even at generation 250
- A local search method is being pursued

55 Problem OSY with NSGA-II (Deb, 2001, Wiley book)
- 25% of the points did not converge within 250 generations
- Local search can speed up EMO runs; a local search method is being pursued

56 Initial Results on Local Search Based Methods
- ZDT1 with and without local search
- KKTPM identifies non-PO solutions; local search helps find true PO points
- KKTPM allows differential treatment of nondominated solutions

57 Engineering Design Problems with NSGA-II
- Welded beam design and car side-impact design (Deb and Jain, 2014, NSGA-III)

58 KKTPM as Termination Criterion
- Set a threshold and terminate when it is met
- Consistent results!
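A sketch of how the criterion might wrap an EMO run, following slide 59's advice to compute KKTPM only every few generations since it is costly. Here `step` (advances the EA one generation and returns the current nondominated set) and `kktpm_of` (computes the metric for one solution) are assumed callables, and the threshold is illustrative:

    def run_with_kktpm_termination(step, kktpm_of, threshold=1e-3,
                                   check_every=10, max_gen=1000):
        # Terminate when the median KKTPM of the nondominated set
        # falls below the threshold.
        for gen in range(1, max_gen + 1):
            front = step()
            if gen % check_every == 0:
                values = sorted(kktpm_of(x) for x in front)
                if values[len(values) // 2] < threshold:
                    return gen, front    # theory-backed stopping point
        return max_gen, front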

59 Pros and Cons of KKTPM
- Advantages: guarantees convergence to a theoretical optimum; a scientific termination condition; troubled areas can be addressed with local search; applicable to classical methods as well
- Disadvantages: computationally demanding (use it after every 10 generations or so; a faster "direct" method is proposed next); applicable to differentiable problems (numerical gradients can be used)

60 A Computationally Faster Method
- Ignore the second constraint and find the minimizer of the first constraint alone (epsilon_D): the solution of a linear system of equations
- Case 1: epsilon_D is identical to epsilon_opt
- Case 2 (difficult): a further estimate epsilon_P is easy to compute, with epsilon_Adj <= epsilon_P <= epsilon_D and epsilon_Adj <= epsilon_opt <= epsilon_D
- Estimated value: epsilon_est = (epsilon_Adj + epsilon_P + epsilon_D)/3

61 A Test Case
- Single-variable problem: x = 0.5 is the optimum, where epsilon* = 0.0 and epsilon_est = 0.0
- x = 1.0 is not an optimum: there epsilon* and epsilon_est take matching nonzero values

62 Comparison with Direct Methods on ZDT1 (Zitzler, Deb and Thiele, 2000)
- Median KKTPM: the optimal value lies between the Adjusted and Direct values; the Estimated value is closer to the optimal
- Computation time: 1.2 sec versus 62.2 sec

63 More Comparisons on ZDT Problems (Zitzler, Deb and Thiele, 2000)
- ZDT2 and ZDT4: the Estimated value is closer to the Optimal

64 More Comparisons on DTLZ Problems (Deb et al., 2002)
- DTLZ1 with 3 and with 10 objectives: the Estimated value is closer to the Optimal

65 More Comparisons on DTLZ Problems (Deb et al., 2002)
- DTLZ2 with 5 objectives and DTLZ5 with 3 objectives: the Estimated value is closer to the Optimal

66 Constrained Bi-Objective Problems: SRN and BNH (Deb, 2001, Wiley book)

67 Practical Problems: car side-impact (CAR) and welded beam (WELD) designs (Deb and Jain, 2014, NSGA-III)

68 Single-Objective Constrained Optimization (Direct vs. Optimal)
- The true optimum solution is added as the final entry of each plot
- g01 (RGA method): the RGA converges well to the true optimum
- g07: the RGA gets stuck and does not converge to the true optimum

69 More Single-Objective Constrained Optimization (RGA method)
- (Panels for g02, g04, g08, g10, g18, and g24: several show good convergence, including the easy problem g24, while on others, g10 among them, the RGA gets stuck with poor convergence)

70 Computation Time
- The approximate methods are much faster, with not much loss in accuracy: a good trade-off
- (Table: per-problem computation times, in seconds, for the Opt., Estimated, and Direct methods)

71 Conclusions
- EMO uses stochastic search principles; there is no convergence proof in finite time
- Developed a performance metric for convergence based on the KKT optimality conditions
- The KKT proximity metric is implemented with the ASF scalarization (MCDM) method
- Demonstrated to work well on many standard test and engineering problems
- It needs to be coupled with a diversity measure
- Such studies (theory, MCDM and EMO together) should bring respect to the EO and EMO fields

72 Relevant Papers
- Deb, K. and Abouhawwash, M. (October, 2014). An Optimality Theory Based Proximity Measure for Set Based Multi-Objective Optimization. COIN Report No. (COIN Website, MSU).
- Deb, K., Abouhawwash, M., and Dutta, J. (2015). A KKT Proximity Measure for Evolutionary Multi-objective and Many-objective Optimization. Proceedings of the Eighth Conference on Evolutionary Multi-Criterion Optimization (EMO-2015). Springer.
- Deb, K. and Abouhawwash, M. (May, 2015). A Computationally Fast and Approximate Method for Karush-Kuhn-Tucker Proximity Measure. COIN Report No.
