Numerical Optimization for Economists


1 Numerical Optimization for Economists Todd Munson, Mathematics and Computer Science Division, Argonne National Laboratory, July 18-21 1 / 104

2 Part I Numerical Optimization I: Static Models 2 / 104

3 Model Formulation Classify m people into two groups using v variables, where
c ∈ {0, 1}^m is the known classification
d ∈ R^{m×v} are the observations
β ∈ R^{v+1} defines the separator
f is the logit distribution function
Maximum likelihood problem
max_β Σ_{i=1}^m c_i log(f(β, d_i)) + (1 − c_i) log(1 − f(β, d_i))
where
f(β, x) = exp(β_0 + Σ_{j=1}^v β_j x_j) / (1 + exp(β_0 + Σ_{j=1}^v β_j x_j)) 3 / 104
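
A minimal sketch of this estimation problem in Python is given below. The data are synthetic, the helper names (logit, neg_log_likelihood) are illustrative rather than from the slides, and scipy's BFGS routine stands in for the Newton-type solvers discussed on the following slides.

    import numpy as np
    from scipy.optimize import minimize

    def logit(beta, X):
        # f(beta, x) = exp(beta_0 + sum_j beta_j x_j) / (1 + exp(beta_0 + sum_j beta_j x_j))
        z = beta[0] + X @ beta[1:]
        return 1.0 / (1.0 + np.exp(-z))

    def neg_log_likelihood(beta, X, c):
        # negative of the slide's objective, since scipy minimizes
        p = logit(beta, X)
        eps = 1e-12                      # guard against log(0)
        return -np.sum(c * np.log(p + eps) + (1 - c) * np.log(1 - p + eps))

    rng = np.random.default_rng(0)
    m, v = 200, 3                        # synthetic observations and classifications
    X = rng.normal(size=(m, v))
    true_beta = np.array([0.5, 1.0, -2.0, 0.25])
    c = (rng.uniform(size=m) < logit(true_beta, X)).astype(float)

    res = minimize(neg_log_likelihood, x0=np.zeros(v + 1), args=(X, c), method="BFGS")
    print(res.x)                         # estimated (beta_0, ..., beta_v)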

4 Solution Techniques min_x f(x) Main ingredients of solution approaches: Local method: given x_k (solution guess), compute a step s. Gradient Descent Quasi-Newton Approximation Sequential Quadratic Programming Globalization strategy: converge from any starting point. Trust region Line search 4 / 104

5 Trust-Region Method
min_s f(x_k) + s^T ∇f(x_k) + ½ s^T H(x_k) s
subject to ‖s‖ ≤ Δ_k 5 / 104

6 Trust-Region Method 1 Initialize trust-region radius Constant Direction Interpolation 6 / 104

7 Trust-Region Method 1 Initialize trust-region radius Constant Direction Interpolation 2 Compute a new iterate 1 Solve trust-region subproblem
min_s f(x_k) + s^T ∇f(x_k) + ½ s^T H(x_k) s subject to ‖s‖ ≤ Δ_k 7 / 104

8 Trust-Region Method 1 Initialize trust-region radius Constant Direction Interpolation 2 Compute a new iterate 1 Solve trust-region subproblem
min_s f(x_k) + s^T ∇f(x_k) + ½ s^T H(x_k) s subject to ‖s‖ ≤ Δ_k
2 Accept or reject iterate 3 Update trust-region radius Reduction Interpolation 3 Check convergence 8 / 104
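
The loop on this slide can be sketched in a few lines of Python. This is only an illustration under simplifying assumptions: the subproblem is solved approximately with a Cauchy-point step rather than the Moré-Sorensen or conjugate-gradient approaches discussed next, and the test function and constants are arbitrary.

    import numpy as np

    def cauchy_point(g, H, delta):
        # Minimize the quadratic model along -g inside the trust region (Cauchy point).
        gHg = g @ H @ g
        gnorm = np.linalg.norm(g)
        if gHg <= 0:
            tau = 1.0                      # negative curvature: go to the boundary
        else:
            tau = min(1.0, gnorm**3 / (delta * gHg))
        return -tau * (delta / gnorm) * g

    def trust_region(f, grad, hess, x, delta=1.0, eta=0.1, tol=1e-8, max_iter=100):
        for _ in range(max_iter):
            g, H = grad(x), hess(x)
            if np.linalg.norm(g) < tol:    # check convergence
                break
            s = cauchy_point(g, H, delta)              # solve subproblem (approximately)
            pred = -(g @ s + 0.5 * s @ H @ s)          # predicted reduction of the model
            ared = f(x) - f(x + s)                     # actual reduction
            rho = ared / pred if pred > 0 else -1.0
            if rho > eta:                              # accept or reject iterate
                x = x + s
            if rho < 0.25:                             # update trust-region radius
                delta *= 0.25
            elif rho > 0.75 and np.isclose(np.linalg.norm(s), delta):
                delta *= 2.0
        return x

    # Example: Rosenbrock function
    f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
    grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                               200*(x[1] - x[0]**2)])
    hess = lambda x: np.array([[2 - 400*(x[1] - 3*x[0]**2), -400*x[0]],
                               [-400*x[0], 200.0]])
    print(trust_region(f, grad, hess, np.array([-1.2, 1.0])))

Because only Cauchy-point steps are taken, progress is slow; the subproblem solvers on the next slide are what make the method fast in practice.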

9 Solving the Subproblem Moré-Sorensen method Computes global solution to subproblem Conjugate gradient method with trust region Objective function decreases monotonically Some choices need to be made Preconditioner Norm of direction and residual Dealing with negative curvature 9 / 104

10 Line-Search Method
min_s f(x_k) + s^T ∇f(x_k) + ½ s^T (H(x_k) + λ_k I) s 10 / 104

11 Line-Search Method 1 Initialize perturbation to zero 2 Solve perturbed quadratic model
min_s f(x_k) + s^T ∇f(x_k) + ½ s^T (H(x_k) + λ_k I) s 11 / 104

12 Line-Search Method 1 Initialize perturbation to zero 2 Solve perturbed quadratic model
min_s f(x_k) + s^T ∇f(x_k) + ½ s^T (H(x_k) + λ_k I) s
3 Find new iterate 1 Search along Newton direction 2 Search along gradient-based direction 12 / 104

13 Line-Search Method 1 Initialize perturbation to zero 2 Solve perturbed quadratic model
min_s f(x_k) + s^T ∇f(x_k) + ½ s^T (H(x_k) + λ_k I) s
3 Find new iterate 1 Search along Newton direction 2 Search along gradient-based direction 4 Update perturbation Decrease perturbation if the following hold Iterative method succeeds Search along Newton direction succeeds Otherwise increase perturbation 5 Check convergence 13 / 104
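
One common way to realize step 2 is to increase the perturbation λ until the perturbed Hessian admits a Cholesky factorization and then take the resulting Newton-like direction. The helper below is an illustrative sketch of that idea, not the exact rule used on the next slides.

    import numpy as np

    def perturbed_newton_direction(g, H, lam=0.0, max_tries=20):
        # Add lam*I to the Hessian until a Cholesky factorization succeeds,
        # then solve (H + lam*I) s = -g for the (descent) direction s.
        for _ in range(max_tries):
            try:
                L = np.linalg.cholesky(H + lam * np.eye(len(g)))
                s = np.linalg.solve(L.T, np.linalg.solve(L, -g))
                return s, lam
            except np.linalg.LinAlgError:
                lam = max(4.0 * lam, 1e-3 * np.linalg.norm(g))
        return -g, lam            # fall back to steepest descent

    H = np.array([[1.0, 2.0], [2.0, 1.0]])     # indefinite Hessian
    g = np.array([1.0, -1.0])
    print(perturbed_newton_direction(g, H))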

14 Solving the Subproblem Conjugate gradient method 14 / 104

15 Solving the Subproblem Conjugate gradient method Conjugate gradient method with trust region Initialize radius Constant Direction Interpolation Update radius Reduction Step length Interpolation Some choices need to be made Preconditioner Norm of direction and residual Dealing with negative curvature 15 / 104

16 Performing the Line Search Backtracking Armijo line search: find t such that
f(x_k + t s) ≤ f(x_k) + σ t ∇f(x_k)^T s
Try t = 1, β, β², ... for 0 < β < 1. Moré-Thuente line search: find t such that
f(x_k + t s) ≤ f(x_k) + σ t ∇f(x_k)^T s
|∇f(x_k + t s)^T s| ≤ δ |∇f(x_k)^T s|
Construct cubic interpolant Compute t to minimize interpolant Refine interpolant 16 / 104
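
The backtracking Armijo rule translates directly into code; the sketch below is a hypothetical helper (the Moré-Thuente search with cubic interpolation is considerably more involved and is not reproduced here).

    import numpy as np

    def armijo_backtracking(f, x, fx, g, s, sigma=1e-4, beta=0.5, max_iter=50):
        # Try t = 1, beta, beta^2, ... until the sufficient decrease condition
        #   f(x + t*s) <= f(x) + sigma * t * g's
        # holds; g is the gradient at x and s a descent direction (g's < 0).
        slope = g @ s
        t = 1.0
        for _ in range(max_iter):
            if f(x + t * s) <= fx + sigma * t * slope:
                return t
            t *= beta
        return t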

17 Updating the Perturbation
1 If increasing and λ_k = 0: λ_{k+1} = Proj_{[l_0, u_0]}(α_0 ‖g(x_k)‖)
2 If increasing and λ_k > 0: λ_{k+1} = Proj_{[l_i, u_i]}(max(α_i ‖g(x_k)‖, β_i λ_k))
3 If decreasing: λ_{k+1} = min(α_d ‖g(x_k)‖, β_d λ_k)
4 If λ_{k+1} < l_d, then λ_{k+1} = 0 17 / 104

18 Trust-Region Line-Search Method 1 Initialize trust-region radius Constant Direction Interpolation 2 Compute a new iterate 1 Solve trust-region subproblem
min_s f(x_k) + s^T ∇f(x_k) + ½ s^T H(x_k) s subject to ‖s‖ ≤ Δ_k
2 Search along direction 3 Update trust-region radius Reduction Step length Interpolation 3 Check convergence 18 / 104

19 Iterative Methods Conjugate gradient method Stop if negative curvature encountered Stop if residual norm is small 19 / 104

20 Iterative Methods Conjugate gradient method Stop if negative curvature encountered Stop if residual norm is small Conjugate gradient method with trust region Nash Follow direction to boundary if first iteration Stop at base of direction otherwise Steihaug-Toint Follow direction to boundary Generalized Lanczos Compute tridiagonal approximation Find global solution to approximate problem on boundary Initialize perturbation with approximate minimum eigenvalue 20 / 104

21 Preconditioners No preconditioner Absolute value of Hessian diagonal Absolute value of perturbed Hessian diagonal Incomplete Cholesky factorization of Hessian Block Jacobi with Cholesky factorization of blocks Scaled BFGS approximation to Hessian matrix None Scalar Diagonal of Broyden update Rescaled diagonal of Broyden update Absolute value of Hessian diagonal Absolute value of perturbed Hessian diagonal 21 / 104

22 Norms Residual: Preconditioned ‖r‖_{M^{−T} M^{−1}} Unpreconditioned ‖r‖_2 Natural ‖r‖_{M^{−1}} Direction: Preconditioned ‖s‖_M Monotonically increasing ‖s_{k+1}‖_M > ‖s_k‖_M. 22 / 104

23 Norms Residual: Preconditioned ‖r‖_{M^{−T} M^{−1}} Unpreconditioned ‖r‖_2 Natural ‖r‖_{M^{−1}} Direction: Preconditioned ‖s‖_M Monotonically increasing ‖s_{k+1}‖_M > ‖s_k‖_M. Unpreconditioned ‖s‖_2 23 / 104

24 Termination Typical convergence criteria Absolute residual ‖∇f(x_k)‖ < τ_a Relative residual ‖∇f(x_k)‖ / ‖∇f(x_0)‖ < τ_r Unbounded objective f(x_k) < −κ Slow progress |f(x_k) − f(x_{k−1})| < ɛ Iteration limit Time limit Solver status 24 / 104

25 Convergence Issues Quadratic convergence best outcome Linear convergence Far from a solution ‖∇f(x_k)‖ is large Hessian is incorrect disrupts quadratic convergence Hessian is rank deficient ‖∇f(x_k)‖ is small Limits of finite precision arithmetic 1 ‖∇f(x_k)‖ converges quadratically to small number 2 ‖∇f(x_k)‖ hovers around that number with no progress Domain violations such as 1/x when x = 0 Make implicit constraints explicit Nonglobal solution Apply a multistart heuristic Use global optimization solver 25 / 104

26 Some Available Software TRON Newton method with trust region LBFGS Limited-memory quasi-Newton method with line search TAO Toolkit for Advanced Optimization NLS Newton line-search method NTR Newton trust-region method NTL Newton line-search/trust-region method LMVM Limited-memory quasi-Newton method CG Nonlinear conjugate gradient methods 26 / 104

27 Model Formulation Economy with n agents and m commodities, where e ∈ R^{n×m} are the endowments, α ∈ R^{n×m} and β ∈ R^{n×m} are the utility parameters, and λ ∈ R^n are the social weights. Social planning problem
max_{x ≥ 0} Σ_{i=1}^n λ_i ( Σ_{k=1}^m α_{i,k} (1 + x_{i,k})^{1−β_{i,k}} / (1 − β_{i,k}) )
subject to Σ_{i=1}^n x_{i,k} ≤ Σ_{i=1}^n e_{i,k}, k = 1, ..., m 27 / 104
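
As an illustration, the social planning problem can be handed to a general constrained solver. The sketch below uses synthetic data and scipy's SLSQP routine in place of the SQP and interior-point solvers discussed next; all names and parameter values are arbitrary.

    import numpy as np
    from scipy.optimize import minimize

    n, m = 3, 2                                   # small synthetic economy
    rng = np.random.default_rng(1)
    e = rng.uniform(1.0, 2.0, (n, m))             # endowments
    alpha = rng.uniform(1.0, 2.0, (n, m))         # utility parameters
    beta = rng.uniform(1.5, 2.5, (n, m))
    lam = np.ones(n) / n                          # social weights

    def neg_welfare(xflat):
        x = xflat.reshape(n, m)
        u = alpha * (1.0 + x)**(1.0 - beta) / (1.0 - beta)
        return -np.sum(lam[:, None] * u)          # scipy minimizes, so negate

    cons = [{"type": "ineq",                      # sum_i x_ik <= sum_i e_ik
             "fun": lambda xflat, k=k: np.sum(e[:, k]) - np.sum(xflat.reshape(n, m)[:, k])}
            for k in range(m)]
    res = minimize(neg_welfare, x0=np.full(n * m, 0.5), method="SLSQP",
                   bounds=[(0, None)] * (n * m), constraints=cons)
    print(res.x.reshape(n, m))                    # optimal consumption allocation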

28 Solving Constrained Optimization Problems min_x f(x) subject to c(x) ≥ 0 Main ingredients of solution approaches: Local method: given x_k (solution guess) find a step s. Sequential Quadratic Programming (SQP) Sequential Linear/Quadratic Programming (SLQP) Interior-Point Method (IPM) Globalization strategy: converge from any starting point. Trust region Line search Acceptance criteria: filter or penalty function. 28 / 104

29 Sequential Linear Programming 1 Initialize trust-region radius 2 Compute a new iterate 29 / 104

30 Sequential Linear Programming 1 Initialize trust-region radius 2 Compute a new iterate 1 Solve linear program
min_s f(x_k) + s^T ∇f(x_k) subject to c(x_k) + ∇c(x_k)^T s ≥ 0, ‖s‖ ≤ Δ_k 30 / 104

31 Sequential Linear Programming 1 Initialize trust-region radius 2 Compute a new iterate 1 Solve linear program 2 Accept or reject iterate 3 Update trust-region radius 3 Check convergence
min_s f(x_k) + s^T ∇f(x_k) subject to c(x_k) + ∇c(x_k)^T s ≥ 0, ‖s‖ ≤ Δ_k 31 / 104

32 Sequential Quadratic Programming 1 Initialize trust-region radius 2 Compute a new iterate 32 / 104

33 Sequential Quadratic Programming 1 Initialize trust-region radius 2 Compute a new iterate 1 Solve quadratic program
min_s f(x_k) + s^T ∇f(x_k) + ½ s^T W(x_k) s subject to c(x_k) + ∇c(x_k)^T s ≥ 0, ‖s‖ ≤ Δ_k 33 / 104

34 Sequential Quadratic Programming 1 Initialize trust-region radius 2 Compute a new iterate 1 Solve quadratic program
min_s f(x_k) + s^T ∇f(x_k) + ½ s^T W(x_k) s subject to c(x_k) + ∇c(x_k)^T s ≥ 0, ‖s‖ ≤ Δ_k
2 Accept or reject iterate 3 Update trust-region radius 3 Check convergence 34 / 104

35 Sequential Linear Quadratic Programming 1 Initialize trust-region radius 2 Compute a new iterate 35 / 104

36 Sequential Linear Quadratic Programming 1 Initialize trust-region radius 2 Compute a new iterate 1 Solve linear program to predict active set
min_d f(x_k) + d^T ∇f(x_k) subject to c(x_k) + ∇c(x_k)^T d ≥ 0, ‖d‖ ≤ Δ_k 36 / 104

37 Sequential Linear Quadratic Programming 1 Initialize trust-region radius 2 Compute a new iterate 1 Solve linear program to predict active set
min_d f(x_k) + d^T ∇f(x_k) subject to c(x_k) + ∇c(x_k)^T d ≥ 0, ‖d‖ ≤ Δ_k
2 Solve equality constrained quadratic program
min_s f(x_k) + s^T ∇f(x_k) + ½ s^T W(x_k) s subject to c_A(x_k) + ∇c_A(x_k)^T s = 0
3 Accept or reject iterate 4 Update trust-region radius 3 Check convergence 37 / 104
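
The equality-constrained quadratic program in step 2 reduces to one linear solve with the KKT matrix. The sketch below uses assumed shorthand (g = ∇f(x_k), W = W(x_k), A = ∇c_A(x_k), c = c_A(x_k)) and a tiny made-up example.

    import numpy as np

    def solve_eqp(g, W, A, c):
        # Equality-constrained QP:  min_s  g's + 1/2 s'Ws   s.t.  c + A s = 0.
        # The first-order conditions give the KKT system
        #   [ W  A'] [ s      ]   [ -g ]
        #   [ A  0 ] [ lambda ] = [ -c ]
        n, m = W.shape[0], A.shape[0]
        K = np.block([[W, A.T], [A, np.zeros((m, m))]])
        rhs = np.concatenate([-g, -c])
        sol = np.linalg.solve(K, rhs)
        return sol[:n], sol[n:]          # step s and multipliers lambda

    # Tiny example: min s1 + s1^2 + s2^2  s.t.  1 + s1 + s2 = 0
    g = np.array([1.0, 0.0])
    W = 2.0 * np.eye(2)
    A = np.array([[1.0, 1.0]])
    c = np.array([1.0])
    s, lam = solve_eqp(g, W, A, c)
    print(s, lam)                        # s = (-0.75, -0.25)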

38 Acceptance Criteria Decrease objective function value: f(x_k + s) ≤ f(x_k) Decrease constraint violation: ‖c(x_k + s)⁻‖ ≤ ‖c(x_k)⁻‖ 38 / 104

39 Acceptance Criteria Decrease objective function value: f(x_k + s) ≤ f(x_k) Decrease constraint violation: ‖c(x_k + s)⁻‖ ≤ ‖c(x_k)⁻‖ Four possibilities 1 step can decrease both f(x) and ‖c(x)⁻‖ GOOD 2 step can decrease f(x) and increase ‖c(x)⁻‖??? 3 step can increase f(x) and decrease ‖c(x)⁻‖??? 4 step can increase both f(x) and ‖c(x)⁻‖ BAD 39 / 104

40 Acceptance Criteria Decrease objective function value: f(x_k + s) ≤ f(x_k) Decrease constraint violation: ‖c(x_k + s)⁻‖ ≤ ‖c(x_k)⁻‖ Four possibilities 1 step can decrease both f(x) and ‖c(x)⁻‖ GOOD 2 step can decrease f(x) and increase ‖c(x)⁻‖??? 3 step can increase f(x) and decrease ‖c(x)⁻‖??? 4 step can increase both f(x) and ‖c(x)⁻‖ BAD Filter uses concept from multi-objective optimization (h_{k+1}, f_{k+1}) dominates (h_l, f_l) iff h_{k+1} ≤ h_l and f_{k+1} ≤ f_l 40 / 104

41 Filter Framework Filter F: list of non-dominated pairs (h_l, f_l) new x_{k+1} is acceptable to filter F iff for all l ∈ F 1 h_{k+1} ≤ h_l or 2 f_{k+1} ≤ f_l 41 / 104

42 Filter Framework Filter F: list of non-dominated pairs (h_l, f_l) new x_{k+1} is acceptable to filter F iff for all l ∈ F 1 h_{k+1} ≤ h_l or 2 f_{k+1} ≤ f_l remove redundant filter entries 42 / 104

43 Filter Framework Filter F: list of non-dominated pairs (h_l, f_l) new x_{k+1} is acceptable to filter F iff for all l ∈ F 1 h_{k+1} ≤ h_l or 2 f_{k+1} ≤ f_l remove redundant filter entries new x_{k+1} is rejected if for some l ∈ F 1 h_{k+1} > h_l and 2 f_{k+1} > f_l 43 / 104
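
The filter logic above is easy to state in code. This sketch ignores the small envelope (margin) that practical filter methods add to the acceptability test; function names are illustrative.

    def acceptable(filter_entries, h_new, f_new):
        # A trial point (h_new, f_new) is acceptable to the filter if, for every
        # stored pair (h_l, f_l), it improves the violation h or the objective f.
        return all(h_new <= h_l or f_new <= f_l for (h_l, f_l) in filter_entries)

    def add_to_filter(filter_entries, h_new, f_new):
        # Keep only non-dominated pairs: drop entries dominated by the new one.
        kept = [(h, f) for (h, f) in filter_entries if not (h_new <= h and f_new <= f)]
        kept.append((h_new, f_new))
        return kept

    F = [(1.0, 5.0), (0.1, 8.0)]
    print(acceptable(F, 0.5, 6.0))      # True: better than each entry in h or in f
    print(acceptable(F, 2.0, 9.0))      # False: dominated by both entries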

44 Convergence Criteria Feasible and no descent directions Constraint qualification LICQ, MFCQ Linearized active constraints characterize directions Objective gradient is a linear combination of constraint gradients 44 / 104

45 Optimality Conditions If x* is a local minimizer and a constraint qualification holds, then there exist multipliers λ* ≥ 0 such that
∇f(x*) − ∇c_A(x*)^T λ*_A = 0
Lagrangian function L(x, λ) := f(x) − λ^T c(x) Optimality conditions can be written as
∇f(x) − ∇c(x)^T λ = 0
0 ≤ λ ⊥ c(x) ≥ 0
Complementarity problem 45 / 104
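
A quick way to use these conditions in practice is to evaluate their residuals at a candidate point; the min(c(x), λ) measure below is the same feasibility/complementarity quantity used on the next slide. The problem data in the example are made up.

    import numpy as np

    def kkt_residuals(grad_f, jac_c, c, x, lam):
        # Residuals of the first-order conditions for  min f(x)  s.t.  c(x) >= 0:
        #   stationarity:      grad f(x) - jac_c(x)' lam = 0
        #   complementarity:   0 <= lam  perp  c(x) >= 0
        stat = grad_f(x) - jac_c(x).T @ lam
        comp = np.minimum(c(x), lam)          # zero iff 0 <= lam perp c(x) >= 0
        return np.linalg.norm(stat), np.linalg.norm(comp)

    # Example: min (x1-1)^2 + (x2-2)^2  s.t.  x1 >= 0, x2 <= 1  (c(x) = [x1, 1 - x2])
    grad_f = lambda x: np.array([2*(x[0] - 1), 2*(x[1] - 2)])
    jac_c  = lambda x: np.array([[1.0, 0.0], [0.0, -1.0]])
    c      = lambda x: np.array([x[0], 1.0 - x[1]])
    x_star, lam_star = np.array([1.0, 1.0]), np.array([0.0, 2.0])
    print(kkt_residuals(grad_f, jac_c, c, x_star, lam_star))   # both ~0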

46 Termination Feasible and complementary ‖min(c(x_k), λ_k)‖ ≤ τ_f Optimal ‖∇_x L(x_k, λ_k)‖ ≤ τ_o Other possible conditions Slow progress Iteration limit Time limit Multipliers and reduced costs 46 / 104

47 Convergence Issues Quadratic convergence best outcome Globally infeasible linear constraints infeasible Locally infeasible nonlinear constraints locally infeasible Unbounded objective hard to detect Unbounded multipliers constraint qualification not satisfied Linear convergence rate Far from a solution ‖∇f(x_k)‖ is large Hessian is incorrect disrupts quadratic convergence Hessian is rank deficient ‖∇f(x_k)‖ is small Limits of finite precision arithmetic Domain violations such as 1/x when x = 0 Make implicit constraints explicit Nonglobal solutions Apply a multistart heuristic Use global optimization solver 47 / 104

48 Some Available Software filterSQP trust-region SQP; robust QP solver; filter to promote global convergence SNOPT line-search SQP; null-space CG option; ℓ1 exact penalty function SLIQUE part of KNITRO; SLP-EQP trust region with ℓ1 penalty; use with knitro options = "algorithm=3"; 48 / 104

49 Part II Numerical Optimization II: Optimal Control 49 / 104

50 Model Formulation Maximize discounted utility, where u(·) is the utility function, R is the retirement age, T is the terminal age, w is the wage, β is the discount factor, and r is the interest rate. Optimization problem
max_{s,c} Σ_{t=0}^T β^t u(c_t)
subject to s_{t+1} = (1 + r) s_t + w − c_t, t = 0, ..., R − 1
s_{t+1} = (1 + r) s_t − c_t, t = R, ..., T
s_0 = s_{T+1} = 0 50 / 104
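
A sketch of this life-cycle problem in Python with illustrative parameter values; log utility stands in for u(·), and the savings variables are eliminated by substituting the budget recursion, which is one of several equivalent ways to pose the model for a general solver such as SLSQP.

    import numpy as np
    from scipy.optimize import minimize

    T, R = 20, 14                      # terminal age and retirement age (illustrative)
    beta, r, w = 0.95, 0.05, 1.0

    def neg_utility(c):
        # u(c) = log(c) as a stand-in; c_t > 0 is enforced by the bounds
        return -np.sum(beta**np.arange(T + 1) * np.log(c))

    def savings_path(c):
        # s_{t+1} = (1+r)s_t + w - c_t while working, (1+r)s_t - c_t after retirement
        s = np.zeros(T + 2)
        for t in range(T + 1):
            income = w if t < R else 0.0
            s[t + 1] = (1 + r) * s[t] + income - c[t]
        return s

    cons = {"type": "eq", "fun": lambda c: savings_path(c)[T + 1]}   # s_{T+1} = 0
    res = minimize(neg_utility, x0=np.full(T + 1, 0.5), method="SLSQP",
                   bounds=[(1e-6, None)] * (T + 1), constraints=cons)
    print(res.x)                        # consumption profile c_0, ..., c_T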

51 Solving Constrained Optimization Problems min_x f(x) subject to c(x) ≥ 0 Main ingredients of solution approaches: Local method: given x_k (solution guess) find a step s. Sequential Quadratic Programming (SQP) Sequential Linear/Quadratic Programming (SLQP) Interior-Point Method (IPM) Globalization strategy: converge from any starting point. Trust region Line search Acceptance criteria: filter or penalty function. 51 / 104

52 Interior-Point Method Reformulate optimization problem with slacks
min_x f(x) subject to c(x) = 0, x ≥ 0
Construct perturbed optimality conditions
F_τ(x, λ, μ) = [ ∇f(x) − ∇c(x)^T λ − μ ; c(x) ; Xμ − τe ]
Central path {x(τ), λ(τ), μ(τ) : τ > 0} Apply Newton's method for a sequence τ ↘ 0 52 / 104

53 Interior-Point Method 1 Compute a new iterate 1 Solve linear system of equations
[ W_k        −∇c(x_k)^T   −I  ] [ s_x ]
[ ∇c(x_k)         0        0  ] [ s_λ ]  =  −F_τ(x_k, λ_k, μ_k)
[ M_k             0       X_k ] [ s_μ ]
where X_k = diag(x_k) and M_k = diag(μ_k)
2 Accept or reject iterate 3 Update parameters 2 Check convergence 53 / 104
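
The linear system in step 1 can be assembled directly from the problem functions. The sketch below (notation and example data are assumptions, not from the slides) forms one primal-dual Newton step with a dense solve; production codes use sparse or reduced formulations.

    import numpy as np

    def ip_newton_step(x, lam, mu, grad_f, hess_L, jac_c, c, tau):
        # Assemble and solve one primal-dual Newton system for the perturbed
        # optimality conditions of  min f(x)  s.t.  c(x) = 0, x >= 0.
        n, m = len(x), len(lam)
        A = jac_c(x)
        W = hess_L(x, lam)
        X, M = np.diag(x), np.diag(mu)
        F = np.concatenate([grad_f(x) - A.T @ lam - mu,
                            c(x),
                            x * mu - tau])
        K = np.block([[W, -A.T,              -np.eye(n)],
                      [A,  np.zeros((m, m)),  np.zeros((m, n))],
                      [M,  np.zeros((n, m)),  X]])
        step = np.linalg.solve(K, -F)
        return step[:n], step[n:n + m], step[n + m:]

    # Tiny example: min x1 + x2  s.t.  x1 + x2 - 2 = 0,  x >= 0
    grad_f = lambda x: np.array([1.0, 1.0])
    hess_L = lambda x, lam: np.zeros((2, 2))
    jac_c  = lambda x: np.array([[1.0, 1.0]])
    c      = lambda x: np.array([x[0] + x[1] - 2.0])
    x, lam, mu = np.array([1.5, 1.5]), np.array([0.5]), np.array([0.5, 0.5])
    sx, slam, smu = ip_newton_step(x, lam, mu, grad_f, hess_L, jac_c, c, tau=0.1)
    print(sx, slam, smu)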

54 Convergence Issues Quadratic convergence best outcome Globally infeasible linear constraints infeasible Locally infeasible nonlinear constraints locally infeasible Dual infeasible dual problem is locally infeasible Unbounded objective hard to detect Unbounded multipliers constraint qualification not satisfied Duality gap Domain violations such as 1 x when x = 0 Make implicit constraints explicit Nonglobal solutions Apply a multistart heuristic Use global optimization solver 54 / 104

55 Some Available Software IPOPT open source in COIN-OR; line-search filter algorithm KNITRO trust-region Newton to solve barrier problem; ℓ1 penalty-barrier function; Newton system: direct solves or null-space CG LOQO line-search method; Newton system: modified Cholesky factorization 55 / 104

56 Optimal Technology Optimize energy production schedule and transition between old and new reduced-carbon technology to meet carbon targets Maximize social welfare Constraints Limit total greenhouse gas emissions Low-carbon technology less costly as it becomes widespread Assumptions on emission rates, economic growth, and energy costs 56 / 104

57 Model Formulation Finite time: t ∈ [0, T] Instantaneous energy output: q_o(t) and q_n(t) Cumulative energy output: x_o(t) and x_n(t)
x_n(t) = ∫_0^t q_n(τ) dτ
Discounted greenhouse gas emissions
∫_0^T e^{−at} (b_o q_o(t) + b_n q_n(t)) dt ≤ z_T
Consumer surplus S(Q(t), t) derived from utility Production costs c_o per unit cost of old technology c_n(x_n(t)) per unit cost of new technology (learning by doing) 57 / 104

58 Continuous-Time Model
max_{(q_o, q_n, x_n, z)(t)} ∫_0^T e^{−rt} [S(q_o(t) + q_n(t), t) − c_o q_o(t) − c_n(x_n(t)) q_n(t)] dt
subject to ẋ_n(t) = q_n(t), x_n(0) = x_0 = 0
ż(t) = e^{−at} (b_o q_o(t) + b_n q_n(t)), z(0) = z_0 = 0
z(T) ≤ z_T
q_o(t) ≥ 0, q_n(t) ≥ 0 58 / 104

59 Optimal Technology Penetration Discretization: t ∈ [0, T] replaced by N + 1 equally spaced points t_i = ih, where h := T/N is the time integration step-length; approximate q_i^n ≈ q_n(t_i) etc. Replace differential equation ẋ(t) = q_n(t) by x_{i+1} = x_i + h q_i^n 59 / 104

60 Optimal Technology Penetration Discretization: t ∈ [0, T] replaced by N + 1 equally spaced points t_i = ih, where h := T/N is the time integration step-length; approximate q_i^n ≈ q_n(t_i) etc. Replace differential equation ẋ(t) = q_n(t) by x_{i+1} = x_i + h q_i^n Output of new technology between t = 24 and t = (figure) 60 / 104

61 Solution with Varying h Output for different discretization schemes and step-sizes 61 / 104

62 Optimal Technology Penetration Add adjustment cost to model building of capacity: Capital and Investment Notation: K_j(t) amount of capital in technology j at t; I_j(t) investment to increase K_j(t); initial capital level K_j^0
Q(t) = q_o(t) + q_n(t)
C(t) = C_o(q_o(t), K_o(t)) + C_n(q_n(t), K_n(t))
I(t) = I_o(t) + I_n(t)
K(t) = K_o(t) + K_n(t) 62 / 104

63 Optimal Technology Penetration
maximize_{(q_j, K_j, I_j, x, z)(t)} ∫_0^T e^{−rt} [S(Q(t), t) − C(t) − I(t)] dt + e^{−rT} K(T)
subject to ẋ(t) = q_n(t), x(0) = x_0 = 0
K̇_j(t) = −δ K_j(t) + I_j(t), K_j(0) = K_j^0, j ∈ {o, n}
ż(t) = e^{−at} [b_o q_o(t) + b_n q_n(t)], z(0) = z_0 = 0
z(T) ≤ z_T
q_j(t) ≥ 0, j ∈ {o, n}
I_j(t) ≥ 0, j ∈ {o, n} 63 / 104

64 Optimal Technology Penetration Optimal output, investment, and capital for 50% CO2 reduction. 64 / 104

65 Pitfalls of Discretizations [Hager, 2000] Optimal Control Problem
minimize ½ ∫_0^1 u²(t) + 2y²(t) dt
subject to ẏ(t) = ½ y(t) + u(t), t ∈ [0, 1], y(0) = 1
y*(t) = (2e^{3t} + e^3) / (e^{3t/2} (2 + e^3)), u*(t) = 2(e^{3t} − e^3) / (e^{3t/2} (2 + e^3)) 65 / 104

66 Pitfalls of Discretizations [Hager, 2000] Optimal Control Problem
minimize ½ ∫_0^1 u²(t) + 2y²(t) dt
subject to ẏ(t) = ½ y(t) + u(t), t ∈ [0, 1], y(0) = 1
y*(t) = (2e^{3t} + e^3) / (e^{3t/2} (2 + e^3)), u*(t) = 2(e^{3t} − e^3) / (e^{3t/2} (2 + e^3))
Discretize with 2nd order RK
minimize (h/2) Σ_{k=0}^{K−1} u²_{k+1/2} + 2y²_{k+1/2}
subject to (k = 0, ..., K − 1):
y_{k+1/2} = y_k + (h/2) (½ y_k + u_k)
y_{k+1} = y_k + h (½ y_{k+1/2} + u_{k+1/2}) 66 / 104

67 Pitfalls of Discretizations [Hager, 2000] Optimal Control Problem
minimize ½ ∫_0^1 u²(t) + 2y²(t) dt
subject to ẏ(t) = ½ y(t) + u(t), t ∈ [0, 1], y(0) = 1
y*(t) = (2e^{3t} + e^3) / (e^{3t/2} (2 + e^3)), u*(t) = 2(e^{3t} − e^3) / (e^{3t/2} (2 + e^3))
Discretize with 2nd order RK
minimize (h/2) Σ_{k=0}^{K−1} u²_{k+1/2} + 2y²_{k+1/2}
subject to (k = 0, ..., K − 1):
y_{k+1/2} = y_k + (h/2) (½ y_k + u_k)
y_{k+1} = y_k + h (½ y_{k+1/2} + u_{k+1/2})
Discrete solution (k = 0, ..., K − 1):
y_k = 1, y_{k+1/2} = 0, u_k = −(4 + h)/(2h), u_{k+1/2} = 0
DOES NOT CONVERGE! 67 / 104
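
The claim is easy to check numerically: the discrete point above is feasible and gives a zero objective for every h, while u_k blows up as h → 0 instead of approaching u*(0). A small verification script, following the formulas on the slide:

    import numpy as np

    # Spurious discrete solution of the RK2 discretization:
    # y_k = 1, y_{k+1/2} = 0, u_k = -(4+h)/(2h), u_{k+1/2} = 0.
    for K in (10, 100, 1000):
        h = 1.0 / K
        y, obj = 1.0, 0.0
        for k in range(K):
            u_k, u_half = -(4 + h) / (2 * h), 0.0
            y_half = y + 0.5 * h * (0.5 * y + u_k)       # intermediate RK stage: stays 0
            y = y + h * (0.5 * y_half + u_half)          # full step: y stays 1
            obj += 0.5 * h * (u_half**2 + 2 * y_half**2) # objective uses midpoint values
        print(K, y, obj, -(4 + h) / (2 * h))             # feasible, zero objective, u_0 -> -inf
    print("continuous u*(0) =", 2 * (1 - np.exp(3)) / (2 + np.exp(3)))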

68 Tips to Solve Continuous-Time Problems Use discretize-then-optimize with different schemes Refine discretization: h = 1 discretization is nonsense Check implied discretization of adjoints 68 / 104

69 Tips to Solve Continuous-Time Problems Use discretize-then-optimize with different schemes Refine discretization: h = 1 discretization is nonsense Check implied discretization of adjoints Alternative: Optimize-Then-Discretize Consistent adjoint/dual discretization Discretized gradients can be wrong! Harder for inequality constraints 69 / 104

70 Part III Numerical Optimization III: Complementarity Constraints 70 / 104

71 Nash Games Non-cooperative game played by n individuals Each player selects a strategy to optimize their objective Strategies for the other players are fixed Equilibrium reached when no improvement is possible 71 / 104

72 Nash Games Non-cooperative game played by n individuals Each player selects a strategy to optimize their objective Strategies for the other players are fixed Equilibrium reached when no improvement is possible Characterization of two player equilibrium (x*, y*)
x* ∈ arg min_{x ≥ 0} f_1(x, y*) subject to c_1(x) ≤ 0
y* ∈ arg min_{y ≥ 0} f_2(x*, y) subject to c_2(y) ≤ 0 72 / 104

73 Nash Games Non-cooperative game played by n individuals Each player selects a strategy to optimize their objective Strategies for the other players are fixed Equilibrium reached when no improvement is possible Characterization of two player equilibrium (x*, y*)
x* ∈ arg min_{x ≥ 0} f_1(x, y*) subject to c_1(x) ≤ 0
y* ∈ arg min_{y ≥ 0} f_2(x*, y) subject to c_2(y) ≤ 0
Many applications in economics Bimatrix games Cournot duopoly models General equilibrium models Arrow-Debreu models 73 / 104

74 Complementarity Formulation Assume each optimization problem is convex f_1(·, y) is convex for each y f_2(x, ·) is convex for each x c_1(·) and c_2(·) satisfy constraint qualification Then the first-order conditions are necessary and sufficient
min_{x ≥ 0} f_1(x, y*) subject to c_1(x) ≤ 0
0 ≤ x ⊥ ∇_x f_1(x, y*) + ∇_x c_1(x)^T λ_1 ≥ 0
0 ≤ λ_1 ⊥ −c_1(x) ≥ 0 74 / 104

75 Complementarity Formulation Assume each optimization problem is convex f_1(·, y) is convex for each y f_2(x, ·) is convex for each x c_1(·) and c_2(·) satisfy constraint qualification Then the first-order conditions are necessary and sufficient
min_{y ≥ 0} f_2(x, y) subject to c_2(y) ≤ 0
0 ≤ y ⊥ ∇_y f_2(x, y) + ∇_y c_2(y)^T λ_2 ≥ 0
0 ≤ λ_2 ⊥ −c_2(y) ≥ 0 75 / 104

76 Complementarity Formulation Assume each optimization problem is convex f_1(·, y) is convex for each y f_2(x, ·) is convex for each x c_1(·) and c_2(·) satisfy constraint qualification Then the first-order conditions are necessary and sufficient
0 ≤ x ⊥ ∇_x f_1(x, y) + ∇_x c_1(x)^T λ_1 ≥ 0
0 ≤ y ⊥ ∇_y f_2(x, y) + ∇_y c_2(y)^T λ_2 ≥ 0
0 ≤ λ_1 ⊥ −c_1(x) ≥ 0
0 ≤ λ_2 ⊥ −c_2(y) ≥ 0
Nonlinear complementarity problem Square system: number of variables and constraints the same Each solution is an equilibrium for the Nash game 76 / 104

77 Model Formulation Economy with n agents and m commodities, where e ∈ R^{n×m} are the endowments, α ∈ R^{n×m} and β ∈ R^{n×m} are the utility parameters, and p ∈ R^m are the commodity prices. Agent i maximizes utility with budget constraint
max_{x_{i,·} ≥ 0} Σ_{k=1}^m α_{i,k} (1 + x_{i,k})^{1−β_{i,k}} / (1 − β_{i,k})
subject to Σ_{k=1}^m p_k (x_{i,k} − e_{i,k}) ≤ 0
Market k sets price for the commodity
0 ≤ p_k ⊥ Σ_{i=1}^n (e_{i,k} − x_{i,k}) ≥ 0 77 / 104

78 Newton Method for Nonlinear Equations (slides 78-81: sequence of figures illustrating Newton iterations) 78-81 / 104

82 Methods for Complementarity Problems Sequential linearization methods (PATH) 1 Solve the linear complementarity problem
0 ≤ x ⊥ F(x_k) + ∇F(x_k)(x − x_k) ≥ 0
2 Perform a line search along merit function 3 Repeat until convergence 82 / 104

83 Methods for Complementarity Problems Sequential linearization methods (PATH) 1 Solve the linear complementarity problem
0 ≤ x ⊥ F(x_k) + ∇F(x_k)(x − x_k) ≥ 0
2 Perform a line search along merit function 3 Repeat until convergence Semismooth reformulation methods (SEMI) Solve linear system of equations to obtain direction Globalize with a trust region or line search Less robust in general Interior-point methods 83 / 104

84 Semismooth Reformulation Define Fischer-Burmeister function
φ(a, b) := a + b − √(a² + b²)
φ(a, b) = 0 iff a ≥ 0, b ≥ 0, and ab = 0
Define the system [Φ(x)]_i = φ(x_i, F_i(x)) x* solves the complementarity problem iff Φ(x*) = 0 Nonsmooth system of equations 84 / 104
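
A sketch of the reformulation on a small linear complementarity problem with made-up data; scipy's general-purpose root finder stands in for the semismooth Newton method described next, and works here only because the solution stays away from the kink of φ at (0, 0).

    import numpy as np
    from scipy.optimize import root

    def fischer_burmeister(a, b):
        # phi(a, b) = a + b - sqrt(a^2 + b^2); zero iff a >= 0, b >= 0, a*b = 0
        return a + b - np.sqrt(a**2 + b**2)

    def F(x):
        # Example complementarity problem 0 <= x  perp  F(x) >= 0 with F(x) = Mx + q
        M = np.array([[2.0, 1.0], [1.0, 2.0]])
        q = np.array([-1.0, 1.0])
        return M @ x + q

    def Phi(x):
        return fischer_burmeister(x, F(x))

    sol = root(Phi, x0=np.array([1.0, 1.0]), method="hybr")
    print(sol.x, F(sol.x))    # expect x = (0.5, 0): x >= 0, F(x) >= 0, x_i F_i(x) = 0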

85 Semismooth Algorithm 1 Calculate H_k ∈ ∂_B Φ(x_k) and solve the following system for d_k:
H_k d_k = −Φ(x_k)
If this system either has no solution, or
∇Ψ(x_k)^T d_k ≤ −p_1 ‖d_k‖^{p_2}
is not satisfied, let d_k = −∇Ψ(x_k), where Ψ(x) := ½‖Φ(x)‖² is the merit function. 85 / 104

86 Semismooth Algorithm 1 Calculate H_k ∈ ∂_B Φ(x_k) and solve the following system for d_k:
H_k d_k = −Φ(x_k)
If this system either has no solution, or
∇Ψ(x_k)^T d_k ≤ −p_1 ‖d_k‖^{p_2}
is not satisfied, let d_k = −∇Ψ(x_k).
2 Compute smallest nonnegative integer i_k such that
Ψ(x_k + β^{i_k} d_k) ≤ Ψ(x_k) + σ β^{i_k} ∇Ψ(x_k)^T d_k
3 Set x_{k+1} = x_k + β^{i_k} d_k, k = k + 1, and go to 1. 86 / 104

87 Convergence Issues Quadratic convergence best outcome Linear convergence Far from a solution ‖r(x_k)‖ is large Jacobian is incorrect disrupts quadratic convergence Jacobian is rank deficient ‖r(x_k)‖ is small Converge to local minimizer guarantees rank deficiency Limits of finite precision arithmetic 1 ‖r(x_k)‖ converges quadratically to small number 2 ‖r(x_k)‖ hovers around that number with no progress Domain violations such as 1/x when x = 0 87 / 104

88 Some Available Software PATH sequential linearization method MILES sequential linearization method SEMI semismooth linesearch method TAO Toolkit for Advanced Optimization SSLS full-space semismooth linesearch methods ASLS active-set semismooth linesearch methods RSCS reduced-space method 88 / 104

89 Definition Leader-follower game Dominant player (leader) selects a strategy y Then followers respond by playing a Nash game
x_i* ∈ arg min_{x_i ≥ 0} f_i(x, y) subject to c_i(x_i) ≤ 0
Leader solves optimization problem with equilibrium constraints
min_{y ≥ 0, x, λ} g(x, y)
subject to h(y) ≥ 0
0 ≤ x_i ⊥ ∇_{x_i} f_i(x, y) + ∇_{x_i} c_i(x_i)^T λ_i ≥ 0
0 ≤ λ_i ⊥ −c_i(x_i) ≥ 0
Many applications in economics Optimal taxation Tolling problems 89 / 104

90 Model Formulation Economy with n agents and m commodities, where e ∈ R^{n×m} are the endowments, α ∈ R^{n×m} and β ∈ R^{n×m} are the utility parameters, and p ∈ R^m are the commodity prices. Agent i maximizes utility with budget constraint
max_{x_{i,·} ≥ 0} Σ_{k=1}^m α_{i,k} (1 + x_{i,k})^{1−β_{i,k}} / (1 − β_{i,k})
subject to Σ_{k=1}^m p_k (x_{i,k} − e_{i,k}) ≤ 0
Market k sets price for the commodity
0 ≤ p_k ⊥ Σ_{i=1}^n (e_{i,k} − x_{i,k}) ≥ 0 90 / 104

91 Nonlinear Programming Formulation
min_{x, y, λ, s, t ≥ 0} g(x, y)
subject to h(y) ≥ 0
s_i = ∇_{x_i} f_i(x, y) + ∇_{x_i} c_i(x_i)^T λ_i
t_i = −c_i(x_i)
Σ_i (s_i^T x_i + λ_i^T t_i) ≤ 0
Constraint qualification fails Lagrange multiplier set unbounded Constraint gradients linearly dependent Central path does not exist Able to prove convergence results for some methods Reformulation very successful and versatile in practice 91 / 104

92 Penalization Approach
min_{x, y, λ, s, t ≥ 0} g(x, y) + π Σ_i (s_i^T x_i + λ_i^T t_i)
subject to h(y) ≥ 0
s_i = ∇_{x_i} f_i(x, y) + ∇_{x_i} c_i(x_i)^T λ_i
t_i = −c_i(x_i)
Optimization problem satisfies constraint qualification Need to increase π 92 / 104

93 Relaxation Approach
min_{x, y, λ, s, t ≥ 0} g(x, y)
subject to h(y) ≥ 0
s_i = ∇_{x_i} f_i(x, y) + ∇_{x_i} c_i(x_i)^T λ_i
t_i = −c_i(x_i)
Σ_i (s_i^T x_i + λ_i^T t_i) ≤ τ
Need to decrease τ 93 / 104

94 Limitations Multipliers may not exist Solvers can have a hard time computing solutions Try different algorithms Compute feasible starting point Stationary points may have descent directions Checking for descent is an exponential problem Strong stationary points found in certain cases Many stationary points global optimization 94 / 104

95 Limitations Multipliers may not exist Solvers can have a hard time computing solutions Try different algorithms Compute feasible starting point Stationary points may have descent directions Checking for descent is an exponential problem Strong stationary points found in certain cases Many stationary points global optimization Formulation of follower problem Multiple solutions to Nash game Nonconvex objective or constraints Existence of multipliers 95 / 104

96 Part IV Numerical Optimization IV: Extensions 96 / 104

97 Global Optimization I need to find the GLOBAL minimum! use any NLP solver (often work well!) use the multi-start trick from previous slides global optimization based on branch-and-reduce: BARON constructs global underestimators refines region by branching tightens bounds by solving LPs solve problems with 100s of variables voodoo solvers: genetic algorithm & simulated annealing no convergence theory... usually worse than deterministic 97 / 104

98 Derivative-Free Optimization My model does not have derivatives! Change your model... good models have derivatives! pattern-search methods for min f(x) evaluate f(x) at stencil x_k + M move to new best point extend to NLP; some convergence theory matlab: NOMADm.m; parallel APPSPACK solvers based on building interpolating quadratic models DFO project on Mike Powell's NEWUOA quadratic model voodoo solvers: genetic algorithm & simulated annealing no convergence theory... usually worse than deterministic 98 / 104

99 Optimization with Integer Variables Mixed-Integer Nonlinear Program (MINLP) modeling discrete choices 0-1 variables modeling integer decisions integer variables e.g. number of different stocks in portfolio (8-10) not number of beers sold at Goose Island (millions) MINLP solvers: branch (separate z_i = 0 and z_i = 1) and cut solve millions of NLP relaxations: MINLPBB, SBB outer approximation: iterate MILP and NLP solvers BONMIN (COIN-OR) & FilMINT on NEOS 99 / 104

100 Portfolio Management N: universe of assets to purchase x_i: amount of asset i to hold B: budget
minimize u(x) subject to Σ_{i∈N} x_i = B, x ≥ 0 100 / 104

101 Portfolio Management N: universe of assets to purchase x_i: amount of asset i to hold B: budget
minimize u(x) subject to Σ_{i∈N} x_i = B, x ≥ 0
Markowitz: u(x) := −α^T x + λ x^T Q x
α: maximize expected returns Q: variance-covariance matrix of expected returns λ: risk-aversion parameter (minimize risk) 101 / 104
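
A small sketch of the Markowitz model with made-up data; SLSQP stands in for a dedicated QP solver.

    import numpy as np
    from scipy.optimize import minimize

    # Illustrative data: 4 assets, budget 1 (weights), risk aversion 5
    alpha = np.array([0.08, 0.10, 0.12, 0.07])          # expected returns
    Q = np.array([[0.10, 0.02, 0.01, 0.00],
                  [0.02, 0.12, 0.03, 0.01],
                  [0.01, 0.03, 0.15, 0.02],
                  [0.00, 0.01, 0.02, 0.08]])            # variance-covariance matrix
    lam, B = 5.0, 1.0

    def u(x):
        # Markowitz objective: trade off expected return against variance
        return -alpha @ x + lam * x @ Q @ x

    res = minimize(u, x0=np.full(4, B / 4), method="SLSQP",
                   bounds=[(0, None)] * 4,
                   constraints={"type": "eq", "fun": lambda x: np.sum(x) - B})
    print(res.x)        # optimal holdings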

102 More Realistic Models b ∈ R^N of benchmark holdings Benchmark Tracking: u(x) := (x − b)^T Q (x − b) Constraint on E[Return]: α^T x ≥ r 102 / 104

103 More Realistic Models b ∈ R^N of benchmark holdings Benchmark Tracking: u(x) := (x − b)^T Q (x − b) Constraint on E[Return]: α^T x ≥ r Limit Names: |{i ∈ N : x_i > 0}| ≤ K Use binary indicator variables to model the implication x_i > 0 ⇒ y_i = 1 Implication modeled with variable upper bounds:
Σ_{i∈N} y_i ≤ K
x_i ≤ B y_i, i ∈ N 103 / 104

104 Optimization Conclusions Optimization is General Modeling Paradigm linear, nonlinear, equations, inequalities integer variables, equilibrium, control AMPL (GAMS) Modeling and Programming Languages express optimization problems use automatic differentiation easy access to state-of-the-art solvers Optimization Software open-source: COIN-OR, IPOPT, SOPLEX, & ASTROS (soon) current solver limitations on laptop: 1,000,000 variables/constraints for LPs 100,000 variables/constraints for NLPs/NCPs 100 variables/constraints for global optimization 500,000,000 variable LP on BlueGene/P 104 / 104


ON AUGMENTED LAGRANGIAN METHODS WITH GENERAL LOWER-LEVEL CONSTRAINTS. 1. Introduction. Many practical optimization problems have the form (1. ON AUGMENTED LAGRANGIAN METHODS WITH GENERAL LOWER-LEVEL CONSTRAINTS R. ANDREANI, E. G. BIRGIN, J. M. MARTíNEZ, AND M. L. SCHUVERDT Abstract. Augmented Lagrangian methods with general lower-level constraints

More information

Interior Methods for Mathematical Programs with Complementarity Constraints

Interior Methods for Mathematical Programs with Complementarity Constraints Interior Methods for Mathematical Programs with Complementarity Constraints Sven Leyffer, Gabriel López-Calva and Jorge Nocedal July 14, 25 Abstract This paper studies theoretical and practical properties

More information

PENNON A Generalized Augmented Lagrangian Method for Convex NLP and SDP p.1/39

PENNON A Generalized Augmented Lagrangian Method for Convex NLP and SDP p.1/39 PENNON A Generalized Augmented Lagrangian Method for Convex NLP and SDP Michal Kočvara Institute of Information Theory and Automation Academy of Sciences of the Czech Republic and Czech Technical University

More information

On Lagrange multipliers of trust region subproblems

On Lagrange multipliers of trust region subproblems On Lagrange multipliers of trust region subproblems Ladislav Lukšan, Ctirad Matonoha, Jan Vlček Institute of Computer Science AS CR, Prague Applied Linear Algebra April 28-30, 2008 Novi Sad, Serbia Outline

More information

Lecture 15 Newton Method and Self-Concordance. October 23, 2008

Lecture 15 Newton Method and Self-Concordance. October 23, 2008 Newton Method and Self-Concordance October 23, 2008 Outline Lecture 15 Self-concordance Notion Self-concordant Functions Operations Preserving Self-concordance Properties of Self-concordant Functions Implications

More information

MINLP: Theory, Algorithms, Applications: Lecture 3, Basics of Algorothms

MINLP: Theory, Algorithms, Applications: Lecture 3, Basics of Algorothms MINLP: Theory, Algorithms, Applications: Lecture 3, Basics of Algorothms Jeff Linderoth Industrial and Systems Engineering University of Wisconsin-Madison Jonas Schweiger Friedrich-Alexander-Universität

More information

5. Duality. Lagrangian

5. Duality. Lagrangian 5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized

More information

Numerisches Rechnen. (für Informatiker) M. Grepl P. Esser & G. Welper & L. Zhang. Institut für Geometrie und Praktische Mathematik RWTH Aachen

Numerisches Rechnen. (für Informatiker) M. Grepl P. Esser & G. Welper & L. Zhang. Institut für Geometrie und Praktische Mathematik RWTH Aachen Numerisches Rechnen (für Informatiker) M. Grepl P. Esser & G. Welper & L. Zhang Institut für Geometrie und Praktische Mathematik RWTH Aachen Wintersemester 2011/12 IGPM, RWTH Aachen Numerisches Rechnen

More information

AM 205: lecture 19. Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods

AM 205: lecture 19. Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods AM 205: lecture 19 Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods Quasi-Newton Methods General form of quasi-newton methods: x k+1 = x k α

More information

CS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares

CS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares CS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares Robert Bridson October 29, 2008 1 Hessian Problems in Newton Last time we fixed one of plain Newton s problems by introducing line search

More information

Optimality conditions and complementarity, Nash equilibria and games, engineering and economic application

Optimality conditions and complementarity, Nash equilibria and games, engineering and economic application Optimality conditions and complementarity, Nash equilibria and games, engineering and economic application Michael C. Ferris University of Wisconsin, Madison Funded by DOE-MACS Grant with Argonne National

More information

AM 205: lecture 19. Last time: Conditions for optimality Today: Newton s method for optimization, survey of optimization methods

AM 205: lecture 19. Last time: Conditions for optimality Today: Newton s method for optimization, survey of optimization methods AM 205: lecture 19 Last time: Conditions for optimality Today: Newton s method for optimization, survey of optimization methods Optimality Conditions: Equality Constrained Case As another example of equality

More information