Nonlinear Programming: Concepts, Algorithms and Applications


Nonlinear Programming: Concepts, Algorithms and Applications. L. T. Biegler, Chemical Engineering Department, Carnegie Mellon University, Pittsburgh, PA.

Nonlinear Programming and Process Optimization. Introduction. Constrained Optimization: Karush Kuhn-Tucker Conditions; Reduced Gradient Methods (GRG, CONOPT, MINOS); Successive Quadratic Programming (SQP); Interior Point Methods (IPOPT). Process Optimization: Black Box Optimization; Modular Flowsheet Optimization - Infeasible Path; The Role of Exact Derivatives. Large-Scale Nonlinear Programming: rSQP - Real-time Process Optimization; IPOPT - Blending and Data Reconciliation. Further Applications: Sensitivity Analysis for NLP Solutions. Summary and Conclusions.

Introduction. Optimization: given a system or process, find the best solution to this process within constraints. Objective Function: indicator of "goodness" of the solution, e.g., cost, yield, profit, etc. Decision Variables: variables that influence process behavior and can be adjusted for optimization. In many cases this task is done by trial and error (through case study). Here we are interested in a systematic approach to this task - and in making this task as efficient as possible. Some related areas: math programming, operations research. Currently over 30 journals are devoted to optimization, with new papers appearing every month - a fast moving field!

Optimization Viewpoints. Mathematician - characterization of theoretical properties of optimization: convergence, existence, local convergence rates. Numerical Analyst - implementation of optimization methods for efficient and "practical" use; concerned with ease of computation, numerical stability, performance. Engineer - applies optimization methods to real problems; concerned with reliability, robustness, efficiency, diagnosis, and recovery from failure.

Motivation. Scope of optimization: provide a systematic framework for searching among a specified space of alternatives to identify an optimal design, i.e., serve as a decision-making tool. Premise: the conceptual formulation of optimal product and process design corresponds to a mathematical programming problem:

min f(x, y)  s.t.  h(x, y) = 0,  g(x, y) ≤ 0,  x ∈ R^n,  y ∈ {0,1}^ny   (MINLP)

With the discrete variables y fixed or absent, the problem is an NLP.

Optimization in Design, Operations and Control: problem classes (LP, QP, NLP, MILP, MINLP, Global, SA/GA) map onto applications such as HENS, MENS, separations, reactors, equipment design, flowsheeting, scheduling, supply chain, real-time optimization, linear MPC, nonlinear MPC and hybrid systems.

Constrained Optimization (Nonlinear Programming). Problem: Min f(x) s.t. g(x) ≤ 0, h(x) = 0, where f(x) is a scalar objective function, x is an n-vector of variables, g(x) are the inequality constraints (an m-vector), and h(x) are the meq equality constraints. Sufficient Condition for a Global Optimum: f(x) must be convex and the feasible region must be convex, i.e., the g(x) are all convex and the h(x) are all linear. Except in special cases, there is no guarantee that a local optimum is global if the sufficient conditions are violated.

Example: Minimize Packing Dimensions. What is the smallest box for three round objects? Variables: A, B, (x1, y1), (x2, y2), (x3, y3). Fixed parameters: R1, R2, R3. Objective: minimize perimeter = 2(A + B). Constraints: circles remain in the box and can't overlap. Decisions: sides of the box, centers of the circles.
In box: x1, y1 ≥ R1; x1 ≤ B - R1, y1 ≤ A - R1; x2, y2 ≥ R2; x2 ≤ B - R2, y2 ≤ A - R2; x3, y3 ≥ R3; x3 ≤ B - R3, y3 ≤ A - R3.
No overlaps: (x1 - x2)² + (y1 - y2)² ≥ (R1 + R2)²; (x1 - x3)² + (y1 - y3)² ≥ (R1 + R3)²; (x2 - x3)² + (y2 - y3)² ≥ (R2 + R3)².
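A minimal sketch of the packing formulation above, solved here with SciPy's SLSQP (a general SQP-type solver, not one of the reduced-gradient codes discussed later). The radii and starting point are assumed, illustrative values.

```python
import numpy as np
from scipy.optimize import minimize

R = np.array([1.0, 0.8, 0.6])           # fixed circle radii (assumed values)

def perimeter(v):
    A, B = v[0], v[1]
    return 2.0 * (A + B)                # objective: box perimeter 2(A + B)

def constraints(v):
    A, B = v[0], v[1]
    xy = v[2:].reshape(3, 2)            # circle centers (x_i, y_i)
    cons = []
    for i in range(3):                  # each circle stays inside the box
        x, y = xy[i]
        cons += [x - R[i], y - R[i], B - R[i] - x, A - R[i] - y]
    for i in range(3):                  # circles must not overlap
        for j in range(i + 1, 3):
            d2 = np.sum((xy[i] - xy[j]) ** 2)
            cons.append(d2 - (R[i] + R[j]) ** 2)
    return np.array(cons)               # all entries required >= 0

v0 = np.array([4.0, 4.0, 1.0, 1.0, 2.0, 2.0, 3.0, 1.0])   # A, B, centers
res = minimize(perimeter, v0, method="SLSQP",
               constraints=[{"type": "ineq", "fun": constraints}])
print(res.x, res.fun)
```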

Characterization of Constrained Optima. [Figure: sketches of minima for a linear program, a linear program with alternate optima, convex objective functions with linear constraints, a nonconvex region with multiple optima, and a nonconvex objective with multiple optima.]

What conditions characterize an optimal solution? Unconstrained local minimum, necessary conditions: ∇f(x*) = 0 and pᵀ∇²f(x*)p ≥ 0 for all p ∈ R^n (positive semi-definite Hessian). Unconstrained local minimum, sufficient conditions: ∇f(x*) = 0 and pᵀ∇²f(x*)p > 0 for all nonzero p ∈ R^n (positive definite Hessian).

Optimal solution for an inequality constrained problem: Min f(x) s.t. g(x) ≤ 0. Analogy: a ball rolling down a valley, pinned by a fence. Note the balance of forces (∇f, ∇g).

Optimal solution for the general constrained problem: Min f(x) s.t. g(x) ≤ 0, h(x) = 0. Analogy: a ball rolling on a rail, pinned by fences. Balance of forces: ∇f, ∇g, ∇h.

Optimality conditions for a local optimum. Necessary First Order Karush Kuhn-Tucker Conditions:
∇L(x*, u, v) = ∇f(x*) + ∇g(x*) u + ∇h(x*) v = 0   (balance of forces)
u ≥ 0   (inequalities act in only one direction)
g(x*) ≤ 0,  h(x*) = 0   (feasibility)
u_j g_j(x*) = 0   (complementarity: either g_j(x*) = 0 or u_j = 0)
u and v are "weights" for the "forces," known as KKT multipliers, shadow prices, or dual variables.
To guarantee that a local NLP solution satisfies the KKT conditions, a constraint qualification is required. E.g., the Linear Independence Constraint Qualification (LICQ) requires the active constraint gradients, [∇g_A(x*) ∇h(x*)], to be linearly independent. Also, under LICQ, the KKT multipliers are uniquely determined.
Necessary (Sufficient) Second Order Conditions: positive curvature in the "constraint" directions, pᵀ∇²L(x*)p ≥ 0 (pᵀ∇²L(x*)p > 0), where p are the constrained directions: ∇g_A(x*)ᵀp = 0, ∇h(x*)ᵀp = 0.

Single Variable Example of KKT Conditions. Min x²  s.t.  -a ≤ x ≤ a, a > 0. x* = 0 is seen by inspection.
Lagrange function: L(x, u) = x² + u1(x - a) + u2(-a - x).
First order KKT conditions: ∇L(x, u) = 2x + u1 - u2 = 0; u1(x - a) = 0; u2(-a - x) = 0; -a ≤ x ≤ a; u1, u2 ≥ 0.
Consider three cases: u1 ≥ 0, u2 = 0 - upper bound active, x = a, u1 = -2a < 0 (not allowed); u1 = 0, u2 ≥ 0 - lower bound active, x = -a, u2 = -2a < 0 (not allowed); u1 = u2 = 0 - neither bound active, u1 = 0, u2 = 0, x = 0.
Second order conditions (x*, u1, u2 = 0): ∇²L(x*, u*) = 2, pᵀ∇²L(x*, u*)p = 2(Δx)² > 0.
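A small numeric check of the first-order KKT conditions for the single-variable example above, min x² s.t. -a ≤ x ≤ a; the value of a is an assumed example and u1, u2 are the bound multipliers.

```python
import numpy as np

a = 2.0

def kkt_residual(x, u1, u2):
    stationarity = 2.0 * x + u1 - u2           # dL/dx = 0
    comp_upper = u1 * (x - a)                   # u1 (x - a) = 0
    comp_lower = u2 * (-a - x)                  # u2 (-a - x) = 0
    feasible = (x <= a) and (-a <= x) and (u1 >= 0) and (u2 >= 0)
    return np.array([stationarity, comp_upper, comp_lower]), feasible

# Case u1 = u2 = 0: neither bound active, x* = 0 satisfies all conditions.
print(kkt_residual(0.0, 0.0, 0.0))   # (array([0., 0., 0.]), True)
```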

Single Variable Example of KKT Conditions - Revisited. Min -x²  s.t.  -a ≤ x ≤ a, a > 0. x* = ±a is seen by inspection.
Lagrange function: L(x, u) = -x² + u1(x - a) + u2(-a - x).
First order KKT conditions: ∇L(x, u) = -2x + u1 - u2 = 0; u1(x - a) = 0; u2(-a - x) = 0; -a ≤ x ≤ a; u1, u2 ≥ 0.
Consider three cases: u1 ≥ 0, u2 = 0 - upper bound active, x = a, u1 = 2a, u2 = 0; u1 = 0, u2 ≥ 0 - lower bound active, x = -a, u2 = 2a, u1 = 0; u1 = u2 = 0 - neither bound active, u1 = 0, u2 = 0, x = 0.
Second order conditions (x*, u1, u2): ∇²L(x*, u*) = -2, pᵀ∇²L(x*, u*)p = -2(Δx)² < 0.

Interpretation of Second Order Conditions. For x = a or x = -a, we require the allowable directions to satisfy the active constraints exactly. Here, any point along an allowable direction from x* must remain at its bound. For this problem, however, there are no nonzero allowable directions that satisfy this condition. Consequently the solution x* is defined entirely by the active constraint. The condition pᵀ∇²L(x*, u*, v*)p > 0 for the allowable directions is vacuously satisfied - because there are no allowable directions that satisfy ∇g_A(x*)ᵀp = 0. Hence, sufficient second order conditions are satisfied. As we will see, sufficient second order conditions are satisfied by linear programs as well.

Role of KKT Multipliers. Also known as: shadow prices, dual variables, Lagrange multipliers. Suppose a in the constraint is increased to a + Δa. Then x* = a + Δa, f(x*) = -(a + Δa)², and [f(x*, a + Δa) - f(x*, a)]/Δa = -2a - Δa, so df(x*)/da = -2a = -u1: the multiplier gives the sensitivity of the optimal objective to the constraint bound.

Another Example: Constraint Qualifications.
Min x1   s.t.   x2 ≥ 0,   x2 ≤ (x1)³
The solution is x* = [0, 0]ᵀ, but the KKT conditions are not satisfied at this NLP solution: stationarity requires 1 - 3(x1*)² u1 = 0 and u1 - u2 = 0, which has no solution with u ≥ 0 at x1* = 0. The reason is that a constraint qualification (e.g., LICQ) is not satisfied there - the active constraint gradients are linearly dependent.

Algorithms for Constrained Problems. Motivation: build on unconstrained methods wherever possible. Classification of methods: Reduced Gradient Methods (with restoration) - GRG, CONOPT; Reduced Gradient Methods (without restoration) - MINOS; Successive Quadratic Programming - generic implementations; Penalty Functions - popular in the 1970s, but fell into disfavor; Barrier Methods have been developed recently and are again popular; Successive Linear Programming - only useful for "mostly linear" problems. We will concentrate on algorithms for the first four classes. Evaluation: compare performance on a "typical problem," and cite experience on process problems.

Reduced Gradient Method with Restoration (GRG/CONOPT).
Min f(x)  s.t.  g(x) + s = 0 (add slack variables),  h(x) = 0,  a ≤ x ≤ b,  s ≥ 0
becomes   Min f(z)  s.t.  c(z) = 0,  a ≤ z ≤ b.
Partition the variables into: z_B - dependent or basic variables; z_N - nonbasic variables, fixed at a bound; z_S - independent or superbasic variables.
Modified KKT Conditions:
∇f(z) + ∇c(z)λ - ν_L + ν_U = 0
c(z) = 0
z(i) = z_U(i) or z(i) = z_L(i),  i ∈ N
ν_U(i), ν_L(i) ≥ 0 for i ∈ N (and = 0 otherwise)

Reduced Gradient Method with Restoration (GRG/CONOPT).
a) ∇_S f(z) + ∇_S c(z)λ = 0
b) ∇_B f(z) + ∇_B c(z)λ = 0
c) ∇_N f(z) + ∇_N c(z)λ - ν_L + ν_U = 0
d) z(i) = z_U(i) or z(i) = z_L(i),  i ∈ N
e) c(z) = 0  ⟹  z_B = z_B(z_S)
Solve the bound constrained problem in the space of the superbasic variables (apply a gradient projection algorithm). Solve (e) to eliminate z_B. Use (a) and (b) to calculate the reduced gradient with respect to z_S. The nonbasic variables z_N are (temporarily) fixed by (d). Repartition based on the signs of ν if z_S variables remain at bounds or if z_B variables violate bounds.

Definition of Reduced Gradient:
df/dz_S = ∂f/∂z_S + (dz_B/dz_S) ∂f/∂z_B
Because c(z) = 0, we have dc = ∇_S cᵀ dz_S + ∇_B cᵀ dz_B = 0, so dz_B = -[∇_B cᵀ]⁻¹ ∇_S cᵀ dz_S. This leads to
df/dz_S = ∇_S f(z) - ∇_S c [∇_B c]⁻¹ ∇_B f(z) = ∇_S f(z) + ∇_S c(z)λ
By remaining feasible always (c(z) = 0, a ≤ z ≤ b), one can apply an unconstrained algorithm (quasi-Newton) using (df/dz_S): solve the problem in the reduced space of the z_S variables, using (e) to recover the basic variables and (b) to obtain λ.
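A minimal sketch of the reduced-gradient formula df/dz_S = ∇_S f - ∇_S c [∇_B c]⁻¹ ∇_B f for an equality-constrained problem c(z) = 0, with basic variables z_B and superbasic variables z_S. The tiny problem at the bottom is an illustrative assumption, not one of the course examples.

```python
import numpy as np

def reduced_gradient(grad_f_S, grad_f_B, jac_c_S, jac_c_B):
    """grad_f_*: gradient blocks of f; jac_c_*: Jacobian blocks dc/dz_S, dc/dz_B."""
    # lambda solves  (dc/dz_B)^T lambda = -grad_B f   (condition (b) above)
    lam = np.linalg.solve(jac_c_B.T, -grad_f_B)
    # df/dz_S = grad_S f + (dc/dz_S)^T lambda
    return grad_f_S + jac_c_S.T @ lam

# Illustration: f(z) = z1^2 + z2^2, constraint 2 z1 + z2 - 4 = 0, z_S = z1, z_B = z2
z = np.array([1.0, 2.0])                 # a feasible point
grad_f = 2.0 * z                         # [2, 4]
jac = np.array([[2.0, 1.0]])             # dc/dz = [2 1]
print(reduced_gradient(grad_f[:1], grad_f[1:], jac[:, :1], jac[:, 1:]))  # [-6.]
```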

Example of Reduced Gradient. Take f(x) = x1² + 2x2² with a single linear constraint whose gradient is ∇cᵀ = [3 4], so ∇fᵀ = [2x1, 4x2]. Let z_S = x1 and z_B = x2. Then
df/dz_S = ∂f/∂z_S - ∇_S c [∇_B c]⁻¹ ∂f/∂z_B = 2x1 - (3/4)(4x2) = 2x1 - 3x2.
If ∇cᵀ is (m × n), then ∇_{z_S} cᵀ is m × (n - m) and ∇_{z_B} cᵀ is (m × m). (df/dz_S) is the change in f along the constraint direction per unit change in z_S.

Gradient Projection Method (superbasic - nonbasic variable partition). Define the projection of an arbitrary point onto the box feasible region; the ith component is obtained by clipping to the bounds, P(z)_i = max(z_L,i, min(z_i, z_U,i)). The piecewise linear path z(t), starting at the reference point and obtained by projecting the steepest descent (or any search) direction at z onto the box region, is given by z(t) = P(z - t g), where g is the reduced gradient and t is the stepsize. This can also be adapted to a (quasi-) Newton method.

Sketch of GRG Algorithm.
1. Initialize the problem and obtain a feasible point z0.
2. At the feasible point z^k, partition the variables z into z_N, z_B, z_S.
3. Calculate the reduced gradient (df/dz_S).
4. Evaluate the search direction for z_S: d = -B⁻¹(df/dz_S).
5. Perform a line search: find α ∈ (0, 1] with z_S := z_S^k + α d; solve for c(z_S^k + α d, z_B, z_N) = 0 (restoration); if f(z_S^k + α d, z_B, z_N) < f(z_S^k, z_B, z_N), set z_S^{k+1} = z_S^k + α d, k := k + 1.
6. If ||(df/dz_S)|| < ε, stop. Else, go to 2.

Reduced Gradient Method with Restoration. [Figure: GRG iterates in the (z_S, z_B) plane; each step along z_S is followed by a restoration step back onto c(z) = 0.]

Reduced Gradient Method with Restoration. [Figure: the restoration step fails due to singularity in the basis matrix (dc/dz_B).]

Reduced Gradient Method with Restoration. [Figure: possible remedy - repartition the basic and superbasic variables to create a nonsingular basis matrix (dc/dz_B).]

GRG Algorithm Properties.
1. GRG has been implemented on PCs as GINO and is very reliable and robust. It is also the optimization solver in MS Excel.
2. CONOPT is implemented in GAMS, AIMMS and AMPL.
3. GRG uses quasi-Newton updates for small problems but can switch to conjugate gradients if the problem gets large. CONOPT uses exact second derivatives.
4. Convergence of c(z_S, z_B, z_N) = 0 can get very expensive because c(z) is evaluated repeatedly.
5. Safeguards can be added so that the restoration (step 5) can be dropped and efficiency increases.
Representative Constrained Problem (starting point [0.8, …]): GINO results - 4 iterations to f(x*) (within 10⁻⁶); CONOPT results - 7 iterations to f(x*) (within 10⁻⁶) from a feasible point.

Reduced Gradient Method without Restoration. [Figure: iterates in the (z_S, z_B) plane follow linearizations of c(z) = 0 without returning to the constraint at each step.]

Reduced Gradient Method without Restoration (MINOS/Augmented). Motivation: efficient algorithms are available that solve linearly constrained optimization problems (MINOS): Min f(x) s.t. Ax ≤ b, Cx = d. Extend to nonlinear problems through successive linearization, with major iterations (linearizations) and minor iterations (GRG solutions). Strategy (Robinson; Murtagh & Saunders):
1. Partition the variables into basic, nonbasic and superbasic variables.
2. Linearize the active constraints at z^k:  D^k z = r^k.
3. Let ψ = f(z) + λᵀc(z) + β (c(z)ᵀc(z))  (augmented Lagrange).
4. Solve the linearly constrained problem:  Min ψ(z) s.t. Dz = r, a ≤ z ≤ b, using reduced gradients to get z^{k+1}.
5. Set k = k + 1, go to 2. The algorithm terminates when there is no movement between steps 2) and 4).

MINOS/Augmented Notes.
1. MINOS has been implemented very efficiently to take care of linearity. It becomes the LP simplex method if the problem is totally linear. Also, very efficient matrix routines.
2. No restoration takes place; the nonlinear constraints are reflected in ψ(z) during step 3). MINOS is more efficient than GRG.
3. Major iterations (steps 3) - 4)) converge at a quadratic rate.
4. Reduced gradient methods are complicated, monolithic codes: hard to integrate efficiently into modeling software.
Representative Constrained Problem (starting point [0.8, …]): MINOS results - 4 major iterations to f(x*) (within 10⁻⁶).

Successive Quadratic Programming (SQP). Motivation: take the KKT conditions, expand them in a Taylor series about the current point, and take a Newton step (a QP) to determine the next point.
Derivation - KKT conditions:
∇L(x*, u*, v*) = ∇f(x*) + ∇g_A(x*) u* + ∇h(x*) v* = 0
h(x*) = 0
g_A(x*) = 0, where g_A are the active constraints.
Newton step:
[ ∇²L       ∇g_A   ∇h ] [ Δx ]       [ ∇L(x^k, u^k, v^k) ]
[ ∇g_Aᵀ      0      0 ] [ Δu ]  =  - [ g_A(x^k)           ]
[ ∇hᵀ        0      0 ] [ Δv ]       [ h(x^k)             ]
Requirements: ∇²L must be calculated and should be regular; the correct active set g_A; good estimates of u^k, v^k.

SQP Chronology.
1. Wilson (1963) - the active set can be determined by solving the QP:
Min_d ∇f(x^k)ᵀd + 1/2 dᵀ∇²L(x^k, u^k, v^k) d
s.t.  g(x^k) + ∇g(x^k)ᵀd ≤ 0,   h(x^k) + ∇h(x^k)ᵀd = 0
2. Han (1976, 1977), Powell (1977, 1978) - approximate ∇²L using a positive definite quasi-Newton update (BFGS); use a line search to converge from poor starting points.
Notes: similar methods were derived using penalty (not Lagrange) functions; the method converges quickly with very few function evaluations; it is not well suited to large problems (a full space update is used) - for large n, use reduced space methods (e.g. MINOS).

Elements of SQP - Hessian Approximation. What about ∇²L? We would need second derivatives for f(x), g(x), h(x), and estimates of the multipliers u^k, v^k; moreover ∇²L may not be positive semidefinite. Instead, approximate ∇²L(x^k, u^k, v^k) by B^k, a symmetric positive definite matrix, using the BFGS formula:
B^{k+1} = B^k + y yᵀ/(sᵀy) - B^k s sᵀ B^k/(sᵀB^k s)
s = x^{k+1} - x^k
y = ∇L(x^{k+1}, u^{k+1}, v^{k+1}) - ∇L(x^k, u^{k+1}, v^{k+1})
Second derivatives are approximated by the change in gradients; a positive definite B^k ensures a unique QP solution; Newton-like properties lead to superlinear convergence.

Elements of SQP - Search Directions. How do we obtain search directions? Form a QP and let the QP determine the constraint activity. At each iteration k, solve:
Min_d ∇f(x^k)ᵀd + 1/2 dᵀB^k d
s.t.  g(x^k) + ∇g(x^k)ᵀd ≤ 0,   h(x^k) + ∇h(x^k)ᵀd = 0
Convergence from poor starting points: as with Newton's method, choose α (stepsize) to ensure progress toward the optimum, x^{k+1} = x^k + α d. α is chosen by making sure a merit function is decreased at each iteration.
Exact penalty function: ψ(x) = f(x) + μ [Σ max(0, g_j(x)) + Σ |h_j(x)|],  μ > max_j {|u_j|, |v_j|}
Augmented Lagrange function: ψ(x) = f(x) + uᵀg(x) + vᵀh(x) + η/2 {Σ (h_j(x))² + Σ max(0, g_j(x))²}
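A minimal sketch of the BFGS update of the Lagrangian Hessian approximation B^k given above. Powell damping, which practical SQP codes typically add to keep B^k positive definite when sᵀy ≤ 0, is omitted; the data below are assumed values.

```python
import numpy as np

def bfgs_update(B, s, y):
    """s = x_{k+1} - x_k,  y = grad L(x_{k+1}) - grad L(x_k)."""
    Bs = B @ s
    return B + np.outer(y, y) / (y @ s) - np.outer(Bs, Bs) / (s @ Bs)

B = np.eye(2)
s = np.array([0.5, 0.375])                  # step taken
y = np.array([1.0, 0.2])                    # change in Lagrangian gradient
B_new = bfgs_update(B, s, y)
print(np.linalg.eigvalsh(B_new))            # positive definite since s^T y > 0
```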

Basic SQP Algorithm.
0. Guess x^0, set B^0 = I (identity). Evaluate f(x^0), g(x^0) and h(x^0).
1. At x^k, evaluate ∇f(x^k), ∇g(x^k), ∇h(x^k).
2. If k > 0, update B^k using the BFGS formula.
3. Solve: Min_d ∇f(x^k)ᵀd + 1/2 dᵀB^k d  s.t.  g(x^k) + ∇g(x^k)ᵀd ≤ 0, h(x^k) + ∇h(x^k)ᵀd = 0. If the KKT error is less than the tolerance (||∇L(x*)|| ≤ ε, ||h(x*)|| ≤ ε, g(x*) ≤ ε), STOP; else go to 4.
4. Find α so that 0 < α ≤ 1 and ψ(x^k + αd) < ψ(x^k) sufficiently (each trial requires evaluation of f(x), g(x) and h(x)).
5. x^{k+1} = x^k + α d. Set k = k + 1. Go to 1.

SQP Test Problem:
Min x2
s.t.  -x2 + 2 x1² - x1³ ≤ 0
      -x2 + 2 (1 - x1)² - (1 - x1)³ ≤ 0
x* = [0.5, 0.375].

SQP Test Problem - First Iteration. Start from the origin (x^0 = [0, 0]ᵀ) with B^0 = I and form the QP:
Min d2 + 1/2 (d1² + d2²)
s.t.  d2 ≥ 0,   d1 + d2 ≥ 1
with multipliers μ1 and μ2 for the two linearized constraints.

SQP Test Problem - Second Iteration. From x^1 = [0.5, 0]ᵀ with B^1 = I (no update from BFGS possible), form the QP with both constraints linearized at x^1:
Min d2 + 1/2 (d1² + d2²)
which gives d = [0, 0.375]ᵀ with μ1 = 0.5 and μ2 = 0.5. x² = x^1 + d = [0.5, 0.375]ᵀ is optimal.
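A minimal sketch that solves the test problem above with SciPy's SLSQP, an SQP-type solver, rather than the basic SQP algorithm of the slides. The constraint functions are the reconstruction stated above (an assumption consistent with the reported solution x* = [0.5, 0.375]).

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[1]
# SciPy expects inequality constraints in the form  fun(x) >= 0
g1 = lambda x: x[1] - 2 * x[0] ** 2 + x[0] ** 3
g2 = lambda x: x[1] - 2 * (1 - x[0]) ** 2 + (1 - x[0]) ** 3

res = minimize(f, x0=np.array([0.0, 0.0]), method="SLSQP",
               constraints=[{"type": "ineq", "fun": g1},
                            {"type": "ineq", "fun": g2}])
print(res.x)      # typically converges to about [0.5, 0.375] from this start
```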

Problems with SQP. Nonsmooth functions - reformulate. Ill-conditioning - proper scaling. Poor starting points - trust regions can help. Inconsistent constraint linearizations - can lead to infeasible QPs (the analogy to a singular Jacobian in Newton's method).

Simulation and Optimization of Flowsheets.
Original problem: Min f(x) s.t. g(x) ≤ 0, with the tear (recycle) equations y = w(x, y) converged inside the flowsheet simulation.
Transformed problem: Min f(x, y) s.t. g(x, y) ≤ 0, h(x, y) = 0, y - w(x, y) = 0, i.e., the tear equations become constraints (infeasible path).
Characteristics: replaces recycle convergence with the optimization; block models and the sequencing manager remain intact; a larger optimization problem; gradients are evaluated from flowsheet passes; avoids repeated (expensive) recycle convergence.

Ammonia Process Optimization. Hydrogen and nitrogen feeds are mixed, compressed, combined with recycle and brought to reactor temperature. Reaction occurs in a multibed (equilibrium) reactor. The reactor effluent is cooled and the product is separated with two flashes; the liquid yields high-purity NH3 product and the vapor is recompressed and recycled. Feed compositions: hydrogen feed - N2 ≈ 5%, H2 ≈ 94%, CH4 0.79%, Ar ---; nitrogen feed - N2 99.8%, with traces of CH4 and Ar.

Ammonia Process Optimization - Optimization Problem and Performance Characteristics.
Optimization problem: Max {total profit over five years} s.t. required NH3 production (tons NH3/yr), pressure balance, no liquid in the compressors, H2/N2 ratio ≤ 3.5, reactor temperature limit (°F), NH3 purged ≤ 4.5 lb mol/hr, NH3 product purity ≥ 99.9%, tear equations.
Performance characteristics: 5 SQP iterations with only a few base-point simulations; the objective function improves substantially over the starting point; the flowsheet was difficult to converge at the starting point.
Results (optimum vs. starting point) are reported for: the objective function ($10⁶), inlet temperature to the reactor (°F), inlet temperatures to the 1st and 2nd flash (°F), inlet temperature to the recycle compressor (°F), purge fraction (%), reactor pressure (psia), and the two feed rates (lb mol/hr).

How accurate should gradients be for optimization? Recognizing the true solution: the KKT conditions and reduced gradients determine the true solution, so derivative errors will lead to wrong solutions! Performance of algorithms: constrained NLP algorithms are gradient based (SQP, CONOPT, GRG, MINOS, etc.), and global and superlinear convergence theory assumes accurate gradients.
Worst Case Example (Carter, 1991): Min f(x) = xᵀA x with the ill-conditioned matrix
A = [ ε + 1/ε    1/ε - ε
      1/ε - ε    ε + 1/ε ],     κ(A) = (1/ε)².
At the starting point, ∇f(x⁰) = O(ε) while the computed gradient carries error g(x⁰) = ∇f(x⁰) + O(ε²); the resulting direction d = -A⁻¹g(x⁰) is an ascent direction, so Newton's method fails for any ε!

Implementation of Analytic Derivatives. Around each module (equations c(v, x, s, p, y) = 0 with parameters p, inputs x, exit variables s and outputs y), sensitivity equations give dy/dx, ds/dx, dy/dp, ds/dp. Automatic differentiation tools: JAKE-F, limited to a subset of FORTRAN (Hillstrom); DAPRE, developed for use with the NAG library (Pryce and Davis, 1987); ADOL-C, implemented using the operator overloading features of C++ (Griewank); ADIFOR (Bischof et al.), which uses a source transformation approach for FORTRAN code; TAPENADE, web-based source transformation for FORTRAN code. The relative effort needed to calculate gradients is not n + 1 but about 3 to 5 (Wolfe, Griewank).

Flash Recycle Optimization (2 decisions + 7 tear variables) and Ammonia Process Optimization (9 decisions and 6 tear variables). [Figure: flash recycle flowsheet (mixer, flash, splitter) with objective Max S3(A)·S3(B) - S3(A) - S3(C) + S3(D) - (S3(E))², and the ammonia synthesis loop with high- and low-pressure flashes; bar charts compare CPU seconds for GRG, SQP and rSQP using numerical versus exact derivatives.]

Large-Scale SQP.
Min f(z)                      Min ∇f(z^k)ᵀd + 1/2 dᵀW^k d
s.t. c(z) = 0          ⟹      s.t. c(z^k) + (A^k)ᵀd = 0
     z_L ≤ z ≤ z_U                  z_L ≤ z^k + d ≤ z_U
An active set strategy is (usually) applied to the QP subproblems (an interior point method could also be used) to handle the nonbasic variables.
Few superbasics: apply a reduced space (linearized) elimination of the basic variables.
Many superbasics: apply a full-space method.

Few degrees of freedom ⟹ reduced space SQP (rSQP). Take advantage of the sparsity of A = ∇c(x); project W into the space of the active (or equality) constraints; curvature (second derivative) information is only needed in the space of the degrees of freedom; second derivatives can be applied or approximated with positive curvature (e.g., BFGS); use dual space QP solvers.
+ easy to implement with existing sparse solvers, QP methods and line search techniques
+ exploits the 'natural assignment' of dependent and decision variables (some decomposition steps are 'free')
+ does not require second derivatives
- reduced space matrices are dense
- may be dependent on the variable partitioning
- can be very expensive for many degrees of freedom
- can be expensive if there are many QP bounds

Reduced space SQP (rSQP): Range and Null Space Decomposition. Assume the nonbasic variables are removed; the QP problem with n variables and m constraints becomes
[ W^k      A^k ] [ d   ]       [ ∇f(x^k) ]
[ (A^k)ᵀ    0  ] [ λ_+ ]  =  - [ c(x^k)  ]
Define a reduced space basis Z^k ∈ R^{n×(n-m)} with (A^k)ᵀZ^k = 0, and a basis for the remaining space Y^k ∈ R^{n×m}, so that [Y^k Z^k] ∈ R^{n×n} is nonsingular. Let d = Y^k d_Y + Z^k d_Z; substituting and premultiplying by [Y^k Z^k]ᵀ (leaving the constraint rows unchanged) gives the decomposed system shown next.

Reduced space SQP (rSQP): Range and Null Space Decomposition.
[ YᵀW Y   YᵀW Z   YᵀA ] [ d_Y ]       [ Yᵀ∇f(x^k) ]
[ ZᵀW Y   ZᵀW Z    0  ] [ d_Z ]  =  - [ Zᵀ∇f(x^k) ]
[ AᵀY       0      0  ] [ λ_+ ]       [ c(x^k)    ]
(AᵀY) d_Y = -c(x^k) is square, so d_Y is determined from the bottom row.
Cancel YᵀWY and YᵀWZ (unimportant as d_Z, d_Y → 0).
(YᵀA) λ_+ = -Yᵀ∇f(x^k), so λ_+ can be determined by a first order estimate.
Calculate or approximate w^k = ZᵀWY d_Y, then solve ZᵀWZ d_Z = -Zᵀ∇f(x^k) - w^k.
Compute the total step: d = Y d_Y + Z d_Z.

Reduced space SQP (rSQP) interpretation. [Figure: the SQP step d operates in the higher-dimensional space; the range space step d_Y satisfies the linearized constraints, then a small QP in the null space gives d_Z.] In general, rSQP has the same convergence properties as SQP.
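A minimal sketch of the range/null-space step decomposition d = Y d_Y + Z d_Z for the equality-constrained QP above, using an orthonormal basis from the QR factorization of A (choice 1 on the next slide). The small problem data are assumed values.

```python
import numpy as np

def rsqp_step(W, A, g, c):
    """QP:  min g^T d + 1/2 d^T W d  s.t.  A^T d + c = 0  (A is n x m)."""
    n, m = A.shape
    Q, _ = np.linalg.qr(A, mode="complete")   # Q = [Y Z] with A^T Z = 0
    Y, Z = Q[:, :m], Q[:, m:]
    dY = np.linalg.solve(A.T @ Y, -c)         # range-space step: satisfy constraints
    w = Z.T @ W @ Y @ dY                      # cross term (often approximated by 0)
    dZ = np.linalg.solve(Z.T @ W @ Z, -(Z.T @ g + w))   # null-space step
    lam = np.linalg.solve(Y.T @ A, -Y.T @ g)  # first-order multiplier estimate
    return Y @ dY + Z @ dZ, lam

# tiny illustrative data: 3 variables, 1 equality constraint
W = np.diag([2.0, 3.0, 4.0])
A = np.array([[1.0], [1.0], [1.0]])
d, lam = rsqp_step(W, A, np.array([1.0, -2.0, 0.5]), np.array([0.3]))
print(d, lam)
```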

Choice of Decomposition Bases.
1. Apply a QR factorization to A. This leads to dense but well-conditioned Y and Z:
A = Q [ R ; 0 ] = [ Y  Z ] [ R ; 0 ]
2. Partition the variables into decisions u and dependents v. Create Y and Z with embedded identity matrices (AᵀZ = 0, YᵀZ = 0):
Aᵀ = [ ∇_u cᵀ  ∇_v cᵀ ] = [ N  C ],   Zᵀ = [ I   -NᵀC⁻ᵀ ]
3. Coordinate basis - same Z as above, Yᵀ = [ 0  I ].
Notes: the bases use gradient information already calculated; adapt the decomposition to the QP step; theoretically the same rate of convergence as the original SQP; the coordinate basis can be sensitive to the choice of u and v, the orthogonal basis is not; need a consistent initial point and a nonsingular C; automatic generation.

rSQP Algorithm.
1. Choose a starting point x^0.
2. At iteration k, evaluate the functions f(x^k), c(x^k) and their gradients.
3. Calculate the bases Y and Z.
4. Solve for the step d_Y in the range space from (AᵀY) d_Y = -c(x^k).
5. Update the projected Hessian B^k ~ ZᵀWZ (e.g. with BFGS) and w^k (e.g., zero).
6. Solve a small QP for the step d_Z in the null space:
   Min (Zᵀ∇f(x^k) + w^k)ᵀd_Z + 1/2 d_ZᵀB^k d_Z
   s.t. x_L ≤ x^k + Y d_Y + Z d_Z ≤ x_U
7. If the error is less than the tolerance, stop. Else:
8. Solve for the multipliers using (YᵀA) λ = -Yᵀ∇f(x^k).
9. Calculate the total step d = Y d_Y + Z d_Z.
10. Find the step size α and calculate the new point x^{k+1} = x^k + α d.
11. Continue from step 2 with k = k + 1.

28 Comparison of SQP and rsqp Coupled Distillation Eample - 5 Equations Decision Variables - boilup rate, reflu ratio Method CPU Time Annual Savings Comments. SQP* hr negligible Base Case. rsqp 5 min. $ 4, Base Case 3. rsqp 5 min. $ 84, Higher Feed Tray Location 4. rsqp 5 min. $ 84, Column Overhead to Storage 5. rsqp 5 min $7, Cases 3 and 4 together 8 Q VK Q VK 55 Real-time Optimization: State of Art w APC Plant u RTO c(, u, p) = On line optimization Steady state model for states () Supply setpoints (u) to APC (control system) Model mismatch, measured and unmeasured disturbances (w) Min u F(, u, w) s.t. c(, u, p, w) = X, u U p y DR-PE c(, u, p) = Data Reconciliation & Parameter Identification Estimation problem formulations Steady state model Maimum lielihood objective functions considered to get parameters (p) Min p Φ(, y, p, w) s.t. c(, u, p, w) = X, p P 56 8

29 Real-time Optimization with rsqp Sunoco Hydrocracer Fractionation Plant (Bailey et al, 993) Eisting process, optimization on-line at regular intervals: 7 hydrocarbon components, 8 heat echangers, absorber/stripper (3 trays), debutanizer ( trays), C3/C4 splitter ( trays) and deisobutanizer (33 trays). REACTOR EFFLUENT FROM LOW PRESSURE SEPARATOR FUEL GAS ABSORBER/ STRIPPER C3 PREFLASH LIGHT NAPHTHA MAIN FRAC. REFORMER NAPHTHA MIXED LPG DEBUTANIZER C3/C4 SPLITTER DIB ic4 RECYCLE OIL nc4 square parameter case to fit the model to operating data. optimization to determine best operating conditions 57 Optimization Case Study Characteristics Model consists of 836 equality constraints and only ten independent variables. It is also reasonably sparse and contains 43 nonzero Jacobian elements. NP G E P P = z i C i + z i C i + z i C m i U i G i E m= Cases Considered:. Normal Base Case Operation. Simulate fouling by reducing the heat echange coefficients for the debutanizer 3Si 3. Simulate fouling by reducing the heat echange coefficients ffii for splitter feed/bottoms echangers 4. Increase price for propane 5. Increase base price for gasoline together with an increase in the octane credit 58 9

Many degrees of freedom ⟹ full space IPOPT.
[ W^k + Σ^k   A^k ] [ d   ]       [ ∇φ(x^k) ]
[ (A^k)ᵀ       0  ] [ λ_+ ]  =  - [ c(x^k)  ]
Work in the full space of all variables; second derivatives are useful for the objective and constraints; use a specialized large-scale Newton solver.
+ W = ∇²L(x, λ) and A = ∇c(x) are sparse, often structured
+ fast if many degrees of freedom are present
+ no variable partitioning required
- second derivatives are strongly desired
- W is indefinite, requiring complex stabilization
- requires specialized large-scale linear algebra

Barrier Methods for Large-Scale Nonlinear Programming.
Original formulation: min_{x∈R^n} f(x)  s.t.  c(x) = 0,  x ≥ 0  (can generalize for a ≤ x ≤ b).
Barrier approach: min_{x∈R^n} φ_μ(x)  s.t.  c(x) = 0, where
φ_μ(x) = f(x) - μ Σ_{i=1}^n ln(x_i)
As μ → 0, x*(μ) → x*. Fiacco and McCormick (1968).

Solution of the Barrier Problem. Newton directions (KKT system): solve
∇f(x) + A(x)λ - ν = 0
Xν - μe = 0
c(x) = 0
where e = [1, 1, ...]ᵀ, X = diag(x), A = ∇c(x), W = ∇²L(x, λ, ν). The Newton system is
[ W    A   -I ] [ d_x ]       [ ∇f(x) + Aλ - ν ]
[ Aᵀ   0    0 ] [ d_λ ]  =  - [ c(x)           ]
[ V    0    X ] [ d_ν ]       [ Xν - μe        ]
Reducing the system with d_ν = μX⁻¹e - ν - X⁻¹V d_x gives
[ W + Σ   A ] [ d_x ]       [ ∇φ_μ(x) ]
[ Aᵀ      0 ] [ λ_+ ]  =  - [ c(x)    ]      with Σ = X⁻¹V
⟹ IPOPT code.
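A minimal sketch of the barrier reformulation above, φ_μ(x) = f(x) - μ Σ ln(x_i), minimized for a decreasing sequence of μ so that x*(μ) → x*. The example f and the use of SciPy's Nelder-Mead for the inner minimization are illustrative assumptions; IPOPT instead takes Newton steps on the KKT system of the barrier problem as shown above.

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] + 1.0) ** 2 + (x[1] - 2.0) ** 2   # min over x >= 0 is at (0, 2)

def phi(x, mu):
    if np.any(x <= 0.0):
        return np.inf                                  # barrier is +inf outside x > 0
    return f(x) - mu * np.sum(np.log(x))

x = np.array([1.0, 1.0])
for mu in [1.0, 0.1, 0.01, 0.001]:
    x = minimize(lambda v: phi(v, mu), x, method="Nelder-Mead").x
    print(mu, x)                                       # x(mu) approaches (0, 2)
```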

Global Convergence of Newton-based Barrier Solvers. Merit function: exact penalty P(x, η) = f(x) + η ||c(x)||, or augmented Lagrangian L*(x, λ, η) = f(x) + λᵀc(x) + η ||c(x)||². Assess the search direction (e.g., from IPOPT) with a line search: choose the stepsize α to give sufficient decrease of the merit function, using a step-to-the-boundary rule with τ ~ 0.99:
x_{k+1} = x_k + α d_x for α ∈ (0, α_max], with x_k + α_max d_x ≥ (1 - τ) x_k > 0
ν_{k+1} = ν_k + α_max d_ν ≥ (1 - τ) ν_k > 0
λ_{k+1} = λ_k + α (λ_+ - λ_k)
How do we balance φ(x) and ||c(x)|| with η? Is this approach globally convergent? Will it still be fast?

Global Convergence Failure (Wächter and B., 2000). A small example of the form
Min x1   s.t.   (x1)² - x2 + a = 0,   x1 - x3 + b = 0,   x2, x3 ≥ 0
shows that a Newton-type line search can stall even though descent directions exist: no stepsize satisfies both A(x^k)ᵀd + c(x^k) = 0 and x^k + αd > 0.
Remedies: composite step trust region (Byrd et al.); filter line search methods.
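A minimal sketch of the step-to-the-boundary (fraction-to-the-boundary) rule quoted above, which keeps x and the bound multipliers ν strictly positive with τ ~ 0.99. The vectors below are assumed example values.

```python
import numpy as np

def max_step(x, dx, tau=0.99):
    """Largest alpha in (0, 1] with x + alpha * dx >= (1 - tau) * x."""
    mask = dx < 0                       # only decreasing components limit the step
    if not np.any(mask):
        return 1.0
    return min(1.0, np.min(-tau * x[mask] / dx[mask]))

x = np.array([1.0, 0.5, 2.0])
dx = np.array([-2.0, 0.1, -0.5])
print(max_step(x, dx))                  # limited by the first component: 0.495
```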

Line Search Filter Method. Store (φ_k, θ_k) at allowed iterates, where φ(x) is the barrier objective and θ(x) = ||c(x)|| is the infeasibility. Allow progress if the trial point is acceptable to the filter with a θ margin. If the switching condition
α [-∇φᵀd]^a ≥ δ [θ]^b   (with suitable constants a, b, δ > 0)
is satisfied, only an Armijo line search is required on φ. If there is insufficient progress in the stepsize, invoke a restoration phase to reduce θ. Global convergence and superlinear local convergence have been proved (with a second order correction).

Implementation Details. Modify the KKT (full space) matrix if necessary:
[ W^k + Σ^k + δ1 I     A^k  ]
[ (A^k)ᵀ             -δ2 I  ]
δ1 - correct the inertia to guarantee a descent direction; δ2 - deal with a rank deficient A^k. The KKT matrix is factored by MA27. Feasibility restoration phase:
Min ||c(x)||₁ + a quadratic regularization term,   x_L ≤ x ≤ x_U
Apply an exact penalty formulation and exploit the same structure/algorithm to reduce the infeasibility.

IPOPT Algorithm Features. Line search strategies for globalization: ℓ1 exact penalty merit function; augmented Lagrangian merit function; filter method (adapted and extended from Fletcher and Leyffer). Hessian calculation: BFGS (full/limited-memory and reduced space); SR1 (full/limited-memory and reduced space); exact full Hessian (direct); exact reduced Hessian (direct); preconditioned CG. Algorithmic properties: globally and superlinearly convergent (Wächter and B., 2005); easily tailored to different problem structures. Freely available: CPL license and COIN-OR distribution; IPOPT 3 recently rewritten in C++; solved on thousands of test problems and applications.

Gasoline Blending. [Figure: supply tanks feed intermediate tanks, which feed final product tanks; product moves from oil tanks to gas stations by truck.]

Blending Problem & Model Formulation. Supply tanks (i) feed intermediate tanks (j), which feed final product tanks (k); f and v are flowrates and tank volumes, q are tank qualities.
max Σ_t ( Σ_k c_k f_{t,k} - Σ_i c_i f_{t,i} )
s.t. Σ_i f_{t,ij} - Σ_k f_{t,jk} + v_{t,j} = v_{t+1,j}
     Σ_j f_{t,jk} = f_{t,k}
     Σ_i q_{t,i} f_{t,ij} - q_{t,j} Σ_k f_{t,jk} + q_{t,j} v_{t,j} = q_{t+1,j} v_{t+1,j}
     Σ_j q_{t,j} f_{t,jk} = q_{t,k} f_{t,k}
     q_min ≤ q_{t,k} ≤ q_max
     v_j^min ≤ v_{t,j} ≤ v_j^max
Model formulated in AMPL.

Small Multi-day Blending Models, Single Qualities: Haverly, C. 1978 (HM) and Audet & Hansen 1998 (AHM). [Figure: small blending networks with feeds F1-F3, pool/blend nodes B1-B3 and products P1-P2.]

Honeywell Blending Model - Multiple Days, 48 Qualities. [Figure: blending network.]

Comparison of NLP Solvers: Data Reconciliation. [Figure: iterations and normalized CPU time (s) versus degrees of freedom for LANCELOT, MINOS, SNOPT, KNITRO, LOQO and IPOPT.]

Comparison of NLP solvers (latest Mittelmann study). [Figure: Mittelmann NLP benchmark performance profile - fraction of problems solved within log2(·) of the minimum CPU time - for IPOPT, KNITRO, LOQO, SNOPT and CONOPT, including limit failures, on large-scale test problems.]

Typical NLP algorithms and software. SQP: NPSOL, VF13AD, NLPQL, fmincon. Reduced SQP: SNOPT, rSQP, MUSCOD, DMO, LSSOL. Reduced gradient with restoration: GRG2, GINO, SOLVER, CONOPT. Reduced gradient without restoration: MINOS. Second derivatives and barrier: IPOPT, KNITRO, LOQO. Interesting hybrids: FSQP/cFSQP (SQP and constraint elimination), LANCELOT (augmented Lagrangian with gradient projection).

Rules for Formulating Nonlinear Programs.
1) Avoid overflows and undefined terms (do not divide, take logs, etc.), e.g.
   x + y - ln z = 0  →  x + y - u = 0, exp(u) - z = 0
2) If constraints must always be enforced, make sure they are linear or bounds, e.g.
   v (y - z)^{1/2} = 3  →  v u = 3, u² - (y - z) = 0, u ≥ 0
3) Exploit linear constraints as much as possible, e.g. the mass balance
   x_i L + y_i V = F z_i  →  l_i + v_i = f_i,  with  L x_i - l_i = 0
4) Use bounds and constraints to enforce characteristic solutions, e.g. a ≤ x ≤ b, g(x) ≤ 0, to isolate the correct root of h(x) = 0.
5) Exploit global properties when the possibility exists: convex (linear equations)? linear program? quadratic program? geometric program?
6) Exploit problem structure when possible, e.g. in a problem whose constraints are bilinear in T and the remaining variables, the problem becomes an LP once T is fixed, so put T in an outer optimization loop.

Sensitivity Analysis for Nonlinear Programming. At the nominal conditions p0:
Min f(x, p0)   s.t.   c(x, p0) = 0,   a(p0) ≤ x ≤ b(p0)
How is the optimum affected at other conditions p ≠ p0? Model parameters, prices and costs; variability in external conditions; model structure. How sensitive is the optimum to parametric uncertainties? Can this be analyzed easily?

Second Order Optimality Conditions: the reduced Hessian needs to be positive semi-definite. Nonstrict local minimum: if it is nonnegative, find the eigenvectors for the zero eigenvalues - these give regions of nonunique solutions. Saddle point: if any eigenvalues are negative, move along the directions of the corresponding eigenvectors and restart the optimization. [Figure: surface with a saddle point z*.]

IPOPT Factorization Byproducts: Tools for Postoptimality and Uniqueness. Modify the KKT (full space) matrix if necessary:
[ W^k + Σ^k + δ1 I     A^k  ]
[ (A^k)ᵀ             -δ2 I  ]
δ1 - correct the inertia to guarantee a descent direction; δ2 - deal with a rank deficient A^k. The KKT matrix is factored by an indefinite symmetric factorization. A solution with δ1, δ2 = 0 satisfies the sufficient second order conditions: the eigenvalues of the reduced Hessian are all positive, so the minimizer and multipliers are unique. Else: the reduced Hessian is available through sensitivity calculations; find its eigenvalues to determine the nature of the stationary point.

NLP Sensitivity - Parametric Programming. The solution triplet (primal variables and multipliers) of the NLP at parameters p satisfies the optimality conditions; NLP sensitivity relies upon the existence and differentiability of this solution path. Main idea: obtain the solution and its derivatives at p0 and estimate the solution at nearby p by a Taylor series expansion.

NLP Sensitivity Properties (Fiacco, 1983). Assume sufficient differentiability, LICQ, SSOC and SC. Then the intermediate interior-point solution satisfies (s(μ) - s*) = O(μ), and a finite neighborhood around p0 and μ = 0 with the same active set exists and is unique.

NLP Sensitivity - Obtaining the Sensitivities. Apply the implicit function theorem to the optimality conditions around p0: the KKT matrix is already factored by IPOPT at the solution, so the sensitivity calculation requires only a single backsolve, and the approximate solution retains the active set.

Sensitivity for Flash Recycle Optimization (2 decisions, 7 tear variables). [Figure: flash recycle flowsheet with objective Max S3(A)·S3(B) - S3(A) - S3(C) + S3(D) - (S3(E))².] Second order sufficiency test: the dimension of the reduced Hessian is 1, with a positive eigenvalue. Sensitivity is computed for a simultaneous change in the feed rate and the upper bound on the purge ratio.
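A minimal numeric sketch of the single-backsolve sensitivity calculation described above, for an assumed parametric problem min (x1 - p)² + x2² s.t. x1 + x2 = 1. Differentiating the KKT conditions in p gives K · d(x, λ)/dp = -∂F/∂p, solved with the already-factored KKT matrix K.

```python
import numpy as np

p0 = 0.4
K = np.array([[2.0, 0.0, 1.0],     # Hessian of the Lagrangian and constraint
              [0.0, 2.0, 1.0],     # gradient assembled into the KKT matrix
              [1.0, 1.0, 0.0]])
rhs = np.array([2.0 * p0, 0.0, 1.0])
x1, x2, lam = np.linalg.solve(K, rhs)          # nominal solution at p0

dF_dp = np.array([-2.0, 0.0, 0.0])             # derivative of the KKT residual wrt p
sens = np.linalg.solve(K, -dF_dp)              # "single backsolve" with factored K
print((x1, x2, lam), sens)                     # sens = d(x1, x2, lambda)/dp = (0.5, -0.5, 1)
```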

Ammonia Process Optimization (9 decisions, 8 tear variables). [Figure: objective function sensitivities (QP1, QP2) versus re-optimized points for relative perturbations in the feed rate and the upper bound on reactor conversion; the sensitivity estimates track the actual re-optimized objective.] Second order sufficiency test: dimension of the reduced Hessian = 4; eigenvalues = [.8E-4, 8.3E-, .8E-4, 7.7E-5]. Sensitivity is computed for a simultaneous change in the feed rate and the upper bound on reactor conversion.

Hierarchy of Nonlinear Programming Formulations and Model Intrusion. [Figure: compute efficiency and problem size (variables/constraints up to 10⁴-10⁶) increase from closed, black box formulations (DFO, SQP) through rSQP with exact derivatives (real-time optimization) to open, full-space interior point formulations (GAMS, AMPL).]

Summary and Conclusions. Optimization algorithms: KKT conditions; reduced gradient methods (GRG, MINOS); successive quadratic programming (SQP); reduced Hessian SQP; interior point NLP (IPOPT). Process optimization applications: modular flowsheet optimization; equation oriented models and optimization; real-time process optimization; blending with many degrees of freedom. Further applications: sensitivity analysis for NLP solutions.

Survey of NLP Algorithms. L. T. Biegler Chemical Engineering Department Carnegie Mellon University Pittsburgh, PA

Survey of NLP Algorithms. L. T. Biegler Chemical Engineering Department Carnegie Mellon University Pittsburgh, PA Survey of NLP Algorithms L. T. Biegler Chemical Engineering Department Carnegie Mellon University Pittsburgh, PA NLP Algorithms - Outline Problem and Goals KKT Conditions and Variable Classification Handling

More information

Nonlinear Programming: Concepts and Algorithms for Process Optimization

Nonlinear Programming: Concepts and Algorithms for Process Optimization Nonlinear Programming: Concepts and Algorithms for Process Optimization L.. Biegler Chemical Engineering Department Carnegie Mellon University Pittsburgh, PA Nonlinear Programming and Process Optimization

More information

MS&E 318 (CME 338) Large-Scale Numerical Optimization

MS&E 318 (CME 338) Large-Scale Numerical Optimization Stanford University, Management Science & Engineering (and ICME) MS&E 318 (CME 338) Large-Scale Numerical Optimization 1 Origins Instructor: Michael Saunders Spring 2015 Notes 9: Augmented Lagrangian Methods

More information

5 Handling Constraints

5 Handling Constraints 5 Handling Constraints Engineering design optimization problems are very rarely unconstrained. Moreover, the constraints that appear in these problems are typically nonlinear. This motivates our interest

More information

Unconstrained Multivariate Optimization

Unconstrained Multivariate Optimization Unconstrained Multivariate Optimization Multivariate optimization means optimization of a scalar function of a several variables: and has the general form: y = () min ( ) where () is a nonlinear scalar-valued

More information

A Brief Perspective on Process Optimization

A Brief Perspective on Process Optimization A Brief Perspective on Process Optimization L. T. Biegler Carnegie Mellon University November, 2007 Annual AIChE Meeting Salt Lake City, UT Overview Introduction Main Themes Modeling Levels Closed Models

More information

What s New in Active-Set Methods for Nonlinear Optimization?

What s New in Active-Set Methods for Nonlinear Optimization? What s New in Active-Set Methods for Nonlinear Optimization? Philip E. Gill Advances in Numerical Computation, Manchester University, July 5, 2011 A Workshop in Honor of Sven Hammarling UCSD Center for

More information

Algorithms for Constrained Optimization

Algorithms for Constrained Optimization 1 / 42 Algorithms for Constrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University April 19, 2015 2 / 42 Outline 1. Convergence 2. Sequential quadratic

More information

2.3 Linear Programming

2.3 Linear Programming 2.3 Linear Programming Linear Programming (LP) is the term used to define a wide range of optimization problems in which the objective function is linear in the unknown variables and the constraints are

More information

1 Computing with constraints

1 Computing with constraints Notes for 2017-04-26 1 Computing with constraints Recall that our basic problem is minimize φ(x) s.t. x Ω where the feasible set Ω is defined by equality and inequality conditions Ω = {x R n : c i (x)

More information

LINEAR AND NONLINEAR PROGRAMMING

LINEAR AND NONLINEAR PROGRAMMING LINEAR AND NONLINEAR PROGRAMMING Stephen G. Nash and Ariela Sofer George Mason University The McGraw-Hill Companies, Inc. New York St. Louis San Francisco Auckland Bogota Caracas Lisbon London Madrid Mexico

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 6 Optimization Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction permitted

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 6 Optimization Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction permitted

More information

Penalty and Barrier Methods General classical constrained minimization problem minimize f(x) subject to g(x) 0 h(x) =0 Penalty methods are motivated by the desire to use unconstrained optimization techniques

More information

Optimization Methods

Optimization Methods Optimization Methods Decision making Examples: determining which ingredients and in what quantities to add to a mixture being made so that it will meet specifications on its composition allocating available

More information

Dynamic Real-Time Optimization: Linking Off-line Planning with On-line Optimization

Dynamic Real-Time Optimization: Linking Off-line Planning with On-line Optimization Dynamic Real-Time Optimization: Linking Off-line Planning with On-line Optimization L. T. Biegler and V. Zavala Chemical Engineering Department Carnegie Mellon University Pittsburgh, PA 15213 April 12,

More information

Inexact Newton Methods and Nonlinear Constrained Optimization

Inexact Newton Methods and Nonlinear Constrained Optimization Inexact Newton Methods and Nonlinear Constrained Optimization Frank E. Curtis EPSRC Symposium Capstone Conference Warwick Mathematics Institute July 2, 2009 Outline PDE-Constrained Optimization Newton

More information

Nonlinear Optimization Solvers

Nonlinear Optimization Solvers ICE05 Argonne, July 19, 2005 Nonlinear Optimization Solvers Sven Leyffer leyffer@mcs.anl.gov Mathematics & Computer Science Division, Argonne National Laboratory Nonlinear Optimization Methods Optimization

More information

Academic Editor: Dominique Bonvin Received: 26 November 2016; Accepted: 13 February 2017; Published: 27 February 2017

Academic Editor: Dominique Bonvin Received: 26 November 2016; Accepted: 13 February 2017; Published: 27 February 2017 Article Sensitivity-Based Economic NMPC with a Path-Following Approach Eka Suwartadi 1, Vyacheslav Kungurtsev 2 and Johannes Jäschke 1, * 1 Department of Chemical Engineering, Norwegian University of Science

More information

An Inexact Newton Method for Optimization

An Inexact Newton Method for Optimization New York University Brown Applied Mathematics Seminar, February 10, 2009 Brief biography New York State College of William and Mary (B.S.) Northwestern University (M.S. & Ph.D.) Courant Institute (Postdoc)

More information

INTERIOR-POINT METHODS FOR NONCONVEX NONLINEAR PROGRAMMING: CONVERGENCE ANALYSIS AND COMPUTATIONAL PERFORMANCE

INTERIOR-POINT METHODS FOR NONCONVEX NONLINEAR PROGRAMMING: CONVERGENCE ANALYSIS AND COMPUTATIONAL PERFORMANCE INTERIOR-POINT METHODS FOR NONCONVEX NONLINEAR PROGRAMMING: CONVERGENCE ANALYSIS AND COMPUTATIONAL PERFORMANCE HANDE Y. BENSON, ARUN SEN, AND DAVID F. SHANNO Abstract. In this paper, we present global

More information

Constrained Optimization

Constrained Optimization 1 / 22 Constrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University March 30, 2015 2 / 22 1. Equality constraints only 1.1 Reduced gradient 1.2 Lagrange

More information

An Inexact Newton Method for Nonlinear Constrained Optimization

An Inexact Newton Method for Nonlinear Constrained Optimization An Inexact Newton Method for Nonlinear Constrained Optimization Frank E. Curtis Numerical Analysis Seminar, January 23, 2009 Outline Motivation and background Algorithm development and theoretical results

More information

A Primal-Dual Augmented Lagrangian Penalty-Interior-Point Filter Line Search Algorithm

A Primal-Dual Augmented Lagrangian Penalty-Interior-Point Filter Line Search Algorithm Journal name manuscript No. (will be inserted by the editor) A Primal-Dual Augmented Lagrangian Penalty-Interior-Point Filter Line Search Algorithm Rene Kuhlmann Christof Büsens Received: date / Accepted:

More information

1. Introduction. We consider nonlinear optimization problems of the form. f(x) ce (x) = 0 c I (x) 0,

1. Introduction. We consider nonlinear optimization problems of the form. f(x) ce (x) = 0 c I (x) 0, AN INTERIOR-POINT ALGORITHM FOR LARGE-SCALE NONLINEAR OPTIMIZATION WITH INEXACT STEP COMPUTATIONS FRANK E. CURTIS, OLAF SCHENK, AND ANDREAS WÄCHTER Abstract. We present a line-search algorithm for large-scale

More information

Numerical Nonlinear Optimization with WORHP

Numerical Nonlinear Optimization with WORHP Numerical Nonlinear Optimization with WORHP Christof Büskens Optimierung & Optimale Steuerung London, 8.9.2011 Optimization & Optimal Control Nonlinear Optimization WORHP Concept Ideas Features Results

More information

Index. calculus of variations, 247 car problem, 229, 238 cascading tank problem, 356 catalyst mixing problem, 240

Index. calculus of variations, 247 car problem, 229, 238 cascading tank problem, 356 catalyst mixing problem, 240 Index A-stable, 255 absolute value, 338 absolutely stable, 255 absorption, 202 active constraints, 72 active set strongly active set, 73 weakly active set, 73 active set strategy, 134 Adams, 256 Adams

More information

An Active Set Strategy for Solving Optimization Problems with up to 200,000,000 Nonlinear Constraints

An Active Set Strategy for Solving Optimization Problems with up to 200,000,000 Nonlinear Constraints An Active Set Strategy for Solving Optimization Problems with up to 200,000,000 Nonlinear Constraints Klaus Schittkowski Department of Computer Science, University of Bayreuth 95440 Bayreuth, Germany e-mail:

More information

Outline. Scientific Computing: An Introductory Survey. Optimization. Optimization Problems. Examples: Optimization Problems

Outline. Scientific Computing: An Introductory Survey. Optimization. Optimization Problems. Examples: Optimization Problems Outline Scientific Computing: An Introductory Survey Chapter 6 Optimization 1 Prof. Michael. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction

More information

Sparse Optimization Lecture: Basic Sparse Optimization Models

Sparse Optimization Lecture: Basic Sparse Optimization Models Sparse Optimization Lecture: Basic Sparse Optimization Models Instructor: Wotao Yin July 2013 online discussions on piazza.com Those who complete this lecture will know basic l 1, l 2,1, and nuclear-norm

More information

Constrained Nonlinear Optimization Algorithms

Constrained Nonlinear Optimization Algorithms Department of Industrial Engineering and Management Sciences Northwestern University waechter@iems.northwestern.edu Institute for Mathematics and its Applications University of Minnesota August 4, 2016

More information

A Primer on Multidimensional Optimization

A Primer on Multidimensional Optimization A Primer on Multidimensional Optimization Prof. Dr. Florian Rupp German University of Technology in Oman (GUtech) Introduction to Numerical Methods for ENG & CS (Mathematics IV) Spring Term 2016 Eercise

More information

STATIONARITY RESULTS FOR GENERATING SET SEARCH FOR LINEARLY CONSTRAINED OPTIMIZATION

STATIONARITY RESULTS FOR GENERATING SET SEARCH FOR LINEARLY CONSTRAINED OPTIMIZATION STATIONARITY RESULTS FOR GENERATING SET SEARCH FOR LINEARLY CONSTRAINED OPTIMIZATION TAMARA G. KOLDA, ROBERT MICHAEL LEWIS, AND VIRGINIA TORCZON Abstract. We present a new generating set search (GSS) approach

More information

MS&E 318 (CME 338) Large-Scale Numerical Optimization

MS&E 318 (CME 338) Large-Scale Numerical Optimization Stanford University, Management Science & Engineering (and ICME) MS&E 318 (CME 338) Large-Scale Numerical Optimization Instructor: Michael Saunders Spring 2015 Notes 11: NPSOL and SNOPT SQP Methods 1 Overview

More information

Part 2: NLP Constrained Optimization

Part 2: NLP Constrained Optimization Part 2: NLP Constrained Optimization James G. Shanahan 2 Independent Consultant and Lecturer UC Santa Cruz EMAIL: James_DOT_Shanahan_AT_gmail_DOT_com WIFI: SSID Student USERname ucsc-guest Password EnrollNow!

More information

Numerical Optimization. Review: Unconstrained Optimization

Numerical Optimization. Review: Unconstrained Optimization Numerical Optimization Finding the best feasible solution Edward P. Gatzke Department of Chemical Engineering University of South Carolina Ed Gatzke (USC CHE ) Numerical Optimization ECHE 589, Spring 2011

More information

An Inexact Sequential Quadratic Optimization Method for Nonlinear Optimization

An Inexact Sequential Quadratic Optimization Method for Nonlinear Optimization An Inexact Sequential Quadratic Optimization Method for Nonlinear Optimization Frank E. Curtis, Lehigh University involving joint work with Travis Johnson, Northwestern University Daniel P. Robinson, Johns

More information

Gradient Descent. Dr. Xiaowei Huang

Gradient Descent. Dr. Xiaowei Huang Gradient Descent Dr. Xiaowei Huang https://cgi.csc.liv.ac.uk/~xiaowei/ Up to now, Three machine learning algorithms: decision tree learning k-nn linear regression only optimization objectives are discussed,

More information

Numerical optimization

Numerical optimization Numerical optimization Lecture 4 Alexander & Michael Bronstein tosca.cs.technion.ac.il/book Numerical geometry of non-rigid shapes Stanford University, Winter 2009 2 Longest Slowest Shortest Minimal Maximal

More information

SECTION: CONTINUOUS OPTIMISATION LECTURE 4: QUASI-NEWTON METHODS

SECTION: CONTINUOUS OPTIMISATION LECTURE 4: QUASI-NEWTON METHODS SECTION: CONTINUOUS OPTIMISATION LECTURE 4: QUASI-NEWTON METHODS HONOUR SCHOOL OF MATHEMATICS, OXFORD UNIVERSITY HILARY TERM 2005, DR RAPHAEL HAUSER 1. The Quasi-Newton Idea. In this lecture we will discuss

More information

Optimization and Root Finding. Kurt Hornik

Optimization and Root Finding. Kurt Hornik Optimization and Root Finding Kurt Hornik Basics Root finding and unconstrained smooth optimization are closely related: Solving ƒ () = 0 can be accomplished via minimizing ƒ () 2 Slide 2 Basics Root finding

More information

IBM Research Report. Line Search Filter Methods for Nonlinear Programming: Motivation and Global Convergence

IBM Research Report. Line Search Filter Methods for Nonlinear Programming: Motivation and Global Convergence RC23036 (W0304-181) April 21, 2003 Computer Science IBM Research Report Line Search Filter Methods for Nonlinear Programming: Motivation and Global Convergence Andreas Wächter, Lorenz T. Biegler IBM Research

More information

Nonlinear Optimization: What s important?

Nonlinear Optimization: What s important? Nonlinear Optimization: What s important? Julian Hall 10th May 2012 Convexity: convex problems A local minimizer is a global minimizer A solution of f (x) = 0 (stationary point) is a minimizer A global

More information

Infeasibility Detection in Nonlinear Optimization

Infeasibility Detection in Nonlinear Optimization Infeasibility Detection in Nonlinear Optimization Frank E. Curtis, Lehigh University Hao Wang, Lehigh University SIAM Conference on Optimization 2011 May 16, 2011 (See also Infeasibility Detection in Nonlinear

More information

PDE-Constrained and Nonsmooth Optimization

PDE-Constrained and Nonsmooth Optimization Frank E. Curtis October 1, 2009 Outline PDE-Constrained Optimization Introduction Newton s method Inexactness Results Summary and future work Nonsmooth Optimization Sequential quadratic programming (SQP)

More information

Numerical optimization. Numerical optimization. Longest Shortest where Maximal Minimal. Fastest. Largest. Optimization problems

Numerical optimization. Numerical optimization. Longest Shortest where Maximal Minimal. Fastest. Largest. Optimization problems 1 Numerical optimization Alexander & Michael Bronstein, 2006-2009 Michael Bronstein, 2010 tosca.cs.technion.ac.il/book Numerical optimization 048921 Advanced topics in vision Processing and Analysis of

More information

A GLOBALLY CONVERGENT STABILIZED SQP METHOD: SUPERLINEAR CONVERGENCE

A GLOBALLY CONVERGENT STABILIZED SQP METHOD: SUPERLINEAR CONVERGENCE A GLOBALLY CONVERGENT STABILIZED SQP METHOD: SUPERLINEAR CONVERGENCE Philip E. Gill Vyacheslav Kungurtsev Daniel P. Robinson UCSD Center for Computational Mathematics Technical Report CCoM-14-1 June 30,

More information

A STABILIZED SQP METHOD: SUPERLINEAR CONVERGENCE

A STABILIZED SQP METHOD: SUPERLINEAR CONVERGENCE A STABILIZED SQP METHOD: SUPERLINEAR CONVERGENCE Philip E. Gill Vyacheslav Kungurtsev Daniel P. Robinson UCSD Center for Computational Mathematics Technical Report CCoM-14-1 June 30, 2014 Abstract Regularized

More information

STATIC LECTURE 4: CONSTRAINED OPTIMIZATION II - KUHN TUCKER THEORY

STATIC LECTURE 4: CONSTRAINED OPTIMIZATION II - KUHN TUCKER THEORY STATIC LECTURE 4: CONSTRAINED OPTIMIZATION II - KUHN TUCKER THEORY UNIVERSITY OF MARYLAND: ECON 600 1. Some Eamples 1 A general problem that arises countless times in economics takes the form: (Verbally):

More information

Hot-Starting NLP Solvers

Hot-Starting NLP Solvers Hot-Starting NLP Solvers Andreas Wächter Department of Industrial Engineering and Management Sciences Northwestern University waechter@iems.northwestern.edu 204 Mixed Integer Programming Workshop Ohio

More information

CONSTRAINED NONLINEAR PROGRAMMING

CONSTRAINED NONLINEAR PROGRAMMING 149 CONSTRAINED NONLINEAR PROGRAMMING We now turn to methods for general constrained nonlinear programming. These may be broadly classified into two categories: 1. TRANSFORMATION METHODS: In this approach

More information

Written Examination

Written Examination Division of Scientific Computing Department of Information Technology Uppsala University Optimization Written Examination 202-2-20 Time: 4:00-9:00 Allowed Tools: Pocket Calculator, one A4 paper with notes

More information

CONVERGENCE ANALYSIS OF AN INTERIOR-POINT METHOD FOR NONCONVEX NONLINEAR PROGRAMMING

CONVERGENCE ANALYSIS OF AN INTERIOR-POINT METHOD FOR NONCONVEX NONLINEAR PROGRAMMING CONVERGENCE ANALYSIS OF AN INTERIOR-POINT METHOD FOR NONCONVEX NONLINEAR PROGRAMMING HANDE Y. BENSON, ARUN SEN, AND DAVID F. SHANNO Abstract. In this paper, we present global and local convergence results

More information

Real-Time Implementation of Nonlinear Predictive Control

Real-Time Implementation of Nonlinear Predictive Control Real-Time Implementation of Nonlinear Predictive Control Michael A. Henson Department of Chemical Engineering University of Massachusetts Amherst, MA WebCAST October 2, 2008 1 Outline Limitations of linear

More information

Part 4: Active-set methods for linearly constrained optimization. Nick Gould (RAL)

Part 4: Active-set methods for linearly constrained optimization. Nick Gould (RAL) Part 4: Active-set methods for linearly constrained optimization Nick Gould RAL fx subject to Ax b Part C course on continuoue optimization LINEARLY CONSTRAINED MINIMIZATION fx subject to Ax { } b where

More information

NONLINEAR. (Hillier & Lieberman Introduction to Operations Research, 8 th edition)

NONLINEAR. (Hillier & Lieberman Introduction to Operations Research, 8 th edition) NONLINEAR PROGRAMMING (Hillier & Lieberman Introduction to Operations Research, 8 th edition) Nonlinear Programming g Linear programming has a fundamental role in OR. In linear programming all its functions

More information

A STABILIZED SQP METHOD: GLOBAL CONVERGENCE

A STABILIZED SQP METHOD: GLOBAL CONVERGENCE A STABILIZED SQP METHOD: GLOBAL CONVERGENCE Philip E. Gill Vyacheslav Kungurtsev Daniel P. Robinson UCSD Center for Computational Mathematics Technical Report CCoM-13-4 Revised July 18, 2014, June 23,

More information

Review of Optimization Basics

Review of Optimization Basics Review of Optimization Basics. Introduction Electricity markets throughout the US are said to have a two-settlement structure. The reason for this is that the structure includes two different markets:

More information

Computational Optimization. Constrained Optimization Part 2

Computational Optimization. Constrained Optimization Part 2 Computational Optimization Constrained Optimization Part Optimality Conditions Unconstrained Case X* is global min Conve f X* is local min SOSC f ( *) = SONC Easiest Problem Linear equality constraints

More information

Chapter 6. Nonlinear Equations. 6.1 The Problem of Nonlinear Root-finding. 6.2 Rate of Convergence

Chapter 6. Nonlinear Equations. 6.1 The Problem of Nonlinear Root-finding. 6.2 Rate of Convergence Chapter 6 Nonlinear Equations 6. The Problem of Nonlinear Root-finding In this module we consider the problem of using numerical techniques to find the roots of nonlinear equations, f () =. Initially we

More information

ISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints

ISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints ISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints Instructor: Prof. Kevin Ross Scribe: Nitish John October 18, 2011 1 The Basic Goal The main idea is to transform a given constrained

More information

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings Structural and Multidisciplinary Optimization P. Duysinx and P. Tossings 2018-2019 CONTACTS Pierre Duysinx Institut de Mécanique et du Génie Civil (B52/3) Phone number: 04/366.91.94 Email: P.Duysinx@uliege.be

More information

4TE3/6TE3. Algorithms for. Continuous Optimization

4TE3/6TE3. Algorithms for. Continuous Optimization 4TE3/6TE3 Algorithms for Continuous Optimization (Algorithms for Constrained Nonlinear Optimization Problems) Tamás TERLAKY Computing and Software McMaster University Hamilton, November 2005 terlaky@mcmaster.ca

More information

Mixed-Integer Nonlinear Decomposition Toolbox for Pyomo (MindtPy)

Mixed-Integer Nonlinear Decomposition Toolbox for Pyomo (MindtPy) Mario R. Eden, Marianthi Ierapetritou and Gavin P. Towler (Editors) Proceedings of the 13 th International Symposium on Process Systems Engineering PSE 2018 July 1-5, 2018, San Diego, California, USA 2018

More information

Infeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization

Infeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization Infeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization Frank E. Curtis, Lehigh University involving joint work with James V. Burke, University of Washington Daniel

More information

Lecture 13: Constrained optimization

Lecture 13: Constrained optimization 2010-12-03 Basic ideas A nonlinearly constrained problem must somehow be converted relaxed into a problem which we can solve (a linear/quadratic or unconstrained problem) We solve a sequence of such problems

More information

SF2822 Applied Nonlinear Optimization. Preparatory question. Lecture 9: Sequential quadratic programming. Anders Forsgren

SF2822 Applied Nonlinear Optimization. Preparatory question. Lecture 9: Sequential quadratic programming. Anders Forsgren SF2822 Applied Nonlinear Optimization Lecture 9: Sequential quadratic programming Anders Forsgren SF2822 Applied Nonlinear Optimization, KTH / 24 Lecture 9, 207/208 Preparatory question. Try to solve theory

More information

Multidisciplinary System Design Optimization (MSDO)

Multidisciplinary System Design Optimization (MSDO) Multidisciplinary System Design Optimization (MSDO) Numerical Optimization II Lecture 8 Karen Willcox 1 Massachusetts Institute of Technology - Prof. de Weck and Prof. Willcox Today s Topics Sequential

More information

Efficient Moving Horizon Estimation of DAE Systems. John D. Hedengren Thomas F. Edgar The University of Texas at Austin TWMCC Feb 7, 2005

Efficient Moving Horizon Estimation of DAE Systems. John D. Hedengren Thomas F. Edgar The University of Texas at Austin TWMCC Feb 7, 2005 Efficient Moving Horizon Estimation of DAE Systems John D. Hedengren Thomas F. Edgar The University of Teas at Austin TWMCC Feb 7, 25 Outline Introduction: Moving Horizon Estimation (MHE) Eplicit Solution

More information

Numerical Optimization of Partial Differential Equations

Numerical Optimization of Partial Differential Equations Numerical Optimization of Partial Differential Equations Part I: basic optimization concepts in R n Bartosz Protas Department of Mathematics & Statistics McMaster University, Hamilton, Ontario, Canada

More information

A PRIMAL-DUAL TRUST REGION ALGORITHM FOR NONLINEAR OPTIMIZATION

A PRIMAL-DUAL TRUST REGION ALGORITHM FOR NONLINEAR OPTIMIZATION Optimization Technical Report 02-09, October 2002, UW-Madison Computer Sciences Department. E. Michael Gertz 1 Philip E. Gill 2 A PRIMAL-DUAL TRUST REGION ALGORITHM FOR NONLINEAR OPTIMIZATION 7 October

More information

10 Numerical methods for constrained problems

10 Numerical methods for constrained problems 10 Numerical methods for constrained problems min s.t. f(x) h(x) = 0 (l), g(x) 0 (m), x X The algorithms can be roughly divided the following way: ˆ primal methods: find descent direction keeping inside

More information

Scientific Computing: Optimization

Scientific Computing: Optimization Scientific Computing: Optimization Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course MATH-GA.2043 or CSCI-GA.2112, Spring 2012 March 8th, 2011 A. Donev (Courant Institute) Lecture

More information

Numerical Optimization Professor Horst Cerjak, Horst Bischof, Thomas Pock Mat Vis-Gra SS09

Numerical Optimization Professor Horst Cerjak, Horst Bischof, Thomas Pock Mat Vis-Gra SS09 Numerical Optimization 1 Working Horse in Computer Vision Variational Methods Shape Analysis Machine Learning Markov Random Fields Geometry Common denominator: optimization problems 2 Overview of Methods

More information

AN AUGMENTED LAGRANGIAN AFFINE SCALING METHOD FOR NONLINEAR PROGRAMMING

AN AUGMENTED LAGRANGIAN AFFINE SCALING METHOD FOR NONLINEAR PROGRAMMING AN AUGMENTED LAGRANGIAN AFFINE SCALING METHOD FOR NONLINEAR PROGRAMMING XIAO WANG AND HONGCHAO ZHANG Abstract. In this paper, we propose an Augmented Lagrangian Affine Scaling (ALAS) algorithm for general

More information

Bindel, Spring 2017 Numerical Analysis (CS 4220) Notes for So far, we have considered unconstrained optimization problems.

Bindel, Spring 2017 Numerical Analysis (CS 4220) Notes for So far, we have considered unconstrained optimization problems. Consider constraints Notes for 2017-04-24 So far, we have considered unconstrained optimization problems. The constrained problem is minimize φ(x) s.t. x Ω where Ω R n. We usually define x in terms of

More information

CHAPTER 2: QUADRATIC PROGRAMMING

CHAPTER 2: QUADRATIC PROGRAMMING CHAPTER 2: QUADRATIC PROGRAMMING Overview Quadratic programming (QP) problems are characterized by objective functions that are quadratic in the design variables, and linear constraints. In this sense,

More information

Lecture 16: October 22

Lecture 16: October 22 0-725/36-725: Conve Optimization Fall 208 Lecturer: Ryan Tibshirani Lecture 6: October 22 Scribes: Nic Dalmasso, Alan Mishler, Benja LeRoy Note: LaTeX template courtesy of UC Berkeley EECS dept. Disclaimer:

More information

Unconstrained optimization

Unconstrained optimization Chapter 4 Unconstrained optimization An unconstrained optimization problem takes the form min x Rnf(x) (4.1) for a target functional (also called objective function) f : R n R. In this chapter and throughout

More information

REGULARIZED SEQUENTIAL QUADRATIC PROGRAMMING METHODS

REGULARIZED SEQUENTIAL QUADRATIC PROGRAMMING METHODS REGULARIZED SEQUENTIAL QUADRATIC PROGRAMMING METHODS Philip E. Gill Daniel P. Robinson UCSD Department of Mathematics Technical Report NA-11-02 October 2011 Abstract We present the formulation and analysis

More information

Implementation of an Interior Point Multidimensional Filter Line Search Method for Constrained Optimization

Implementation of an Interior Point Multidimensional Filter Line Search Method for Constrained Optimization Proceedings of the 5th WSEAS Int. Conf. on System Science and Simulation in Engineering, Tenerife, Canary Islands, Spain, December 16-18, 2006 391 Implementation of an Interior Point Multidimensional Filter

More information

arxiv: v1 [math.na] 8 Jun 2018

arxiv: v1 [math.na] 8 Jun 2018 arxiv:1806.03347v1 [math.na] 8 Jun 2018 Interior Point Method with Modified Augmented Lagrangian for Penalty-Barrier Nonlinear Programming Martin Neuenhofen June 12, 2018 Abstract We present a numerical

More information

MS&E 318 (CME 338) Large-Scale Numerical Optimization

MS&E 318 (CME 338) Large-Scale Numerical Optimization Stanford University, Management Science & Engineering (and ICME MS&E 38 (CME 338 Large-Scale Numerical Optimization Course description Instructor: Michael Saunders Spring 28 Notes : Review The course teaches

More information

AM 205: lecture 19. Last time: Conditions for optimality Today: Newton s method for optimization, survey of optimization methods

AM 205: lecture 19. Last time: Conditions for optimality Today: Newton s method for optimization, survey of optimization methods AM 205: lecture 19 Last time: Conditions for optimality Today: Newton s method for optimization, survey of optimization methods Optimality Conditions: Equality Constrained Case As another example of equality

More information

Inexact Solution of NLP Subproblems in MINLP

Inexact Solution of NLP Subproblems in MINLP Ineact Solution of NLP Subproblems in MINLP M. Li L. N. Vicente April 4, 2011 Abstract In the contet of conve mied-integer nonlinear programming (MINLP, we investigate how the outer approimation method

More information

AM 205: lecture 19. Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods

AM 205: lecture 19. Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods AM 205: lecture 19 Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods Quasi-Newton Methods General form of quasi-newton methods: x k+1 = x k α

More information

Optimization Problems with Constraints - introduction to theory, numerical Methods and applications

Optimization Problems with Constraints - introduction to theory, numerical Methods and applications Optimization Problems with Constraints - introduction to theory, numerical Methods and applications Dr. Abebe Geletu Ilmenau University of Technology Department of Simulation and Optimal Processes (SOP)

More information

In view of (31), the second of these is equal to the identity I on E m, while this, in view of (30), implies that the first can be written

In view of (31), the second of these is equal to the identity I on E m, while this, in view of (30), implies that the first can be written 11.8 Inequality Constraints 341 Because by assumption x is a regular point and L x is positive definite on M, it follows that this matrix is nonsingular (see Exercise 11). Thus, by the Implicit Function

More information

Algorithms for constrained local optimization

Algorithms for constrained local optimization Algorithms for constrained local optimization Fabio Schoen 2008 http://gol.dsi.unifi.it/users/schoen Algorithms for constrained local optimization p. Feasible direction methods Algorithms for constrained

More information

Intro to Nonlinear Optimization

Intro to Nonlinear Optimization Intro to Nonlinear Optimization We now rela the proportionality and additivity assumptions of LP What are the challenges of nonlinear programs NLP s? Objectives and constraints can use any function: ma

More information

A globally and quadratically convergent primal dual augmented Lagrangian algorithm for equality constrained optimization

A globally and quadratically convergent primal dual augmented Lagrangian algorithm for equality constrained optimization Optimization Methods and Software ISSN: 1055-6788 (Print) 1029-4937 (Online) Journal homepage: http://www.tandfonline.com/loi/goms20 A globally and quadratically convergent primal dual augmented Lagrangian

More information

POWER SYSTEMS in general are currently operating

POWER SYSTEMS in general are currently operating TO APPEAR IN IEEE TRANSACTIONS ON POWER SYSTEMS 1 Robust Optimal Power Flow Solution Using Trust Region and Interior-Point Methods Andréa A. Sousa, Geraldo L. Torres, Member IEEE, Claudio A. Cañizares,

More information

DELFT UNIVERSITY OF TECHNOLOGY

DELFT UNIVERSITY OF TECHNOLOGY DELFT UNIVERSITY OF TECHNOLOGY REPORT -09 Computational and Sensitivity Aspects of Eigenvalue-Based Methods for the Large-Scale Trust-Region Subproblem Marielba Rojas, Bjørn H. Fotland, and Trond Steihaug

More information

Lecture Notes: Geometric Considerations in Unconstrained Optimization

Lecture Notes: Geometric Considerations in Unconstrained Optimization Lecture Notes: Geometric Considerations in Unconstrained Optimization James T. Allison February 15, 2006 The primary objectives of this lecture on unconstrained optimization are to: Establish connections

More information

Lecture V. Numerical Optimization

Lecture V. Numerical Optimization Lecture V Numerical Optimization Gianluca Violante New York University Quantitative Macroeconomics G. Violante, Numerical Optimization p. 1 /19 Isomorphism I We describe minimization problems: to maximize

More information

Higher-Order Methods

Higher-Order Methods Higher-Order Methods Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. PCMI, July 2016 Stephen Wright (UW-Madison) Higher-Order Methods PCMI, July 2016 1 / 25 Smooth

More information

AN INTERIOR-POINT METHOD FOR NONLINEAR OPTIMIZATION PROBLEMS WITH LOCATABLE AND SEPARABLE NONSMOOTHNESS

AN INTERIOR-POINT METHOD FOR NONLINEAR OPTIMIZATION PROBLEMS WITH LOCATABLE AND SEPARABLE NONSMOOTHNESS AN INTERIOR-POINT METHOD FOR NONLINEAR OPTIMIZATION PROBLEMS WITH LOCATABLE AND SEPARABLE NONSMOOTHNESS MARTIN SCHMIDT Abstract. Many real-world optimization models comse nonconvex and nonlinear as well

More information

Real-Time Optimization

Real-Time Optimization Real-Time Optimization w Plant APC RTO c(x, u, p) = 0 On line optimization Steady state model for states (x) Supply setpoints (u) to APC (control system) Model mismatch, measured and unmeasured disturbances

More information

3E4: Modelling Choice. Introduction to nonlinear programming. Announcements

3E4: Modelling Choice. Introduction to nonlinear programming. Announcements 3E4: Modelling Choice Lecture 7 Introduction to nonlinear programming 1 Announcements Solutions to Lecture 4-6 Homework will be available from http://www.eng.cam.ac.uk/~dr241/3e4 Looking ahead to Lecture

More information

e-companion ONLY AVAILABLE IN ELECTRONIC FORM

e-companion ONLY AVAILABLE IN ELECTRONIC FORM OPERATIONS RESEARCH doi 0.87/opre.00.0894ec e-companion ONLY AVAILABLE IN ELECTRONIC FORM informs 0 INFORMS Electronic Companion Fied-Point Approaches to Computing Bertrand-Nash Equilibrium Prices Under

More information