PDE-Constrained and Nonsmooth Optimization


1 Frank E. Curtis October 1, 2009

2 Outline PDE-Constrained Optimization Introduction Newton's method Inexactness Results Summary and future work Nonsmooth Optimization Sequential quadratic programming (SQP) Gradient sampling (GS) SQP-GS Results Summary and future work Conclusion

3 Introduction Outline PDE-Constrained Optimization Introduction Newton's method Inexactness Results Summary and future work Nonsmooth Optimization Sequential quadratic programming (SQP) Gradient sampling (GS) SQP-GS Results Summary and future work Conclusion

4 Introduction PDE-constrained optimization: min f(x) s.t. c_E(x) = 0, c_I(x) ≥ 0

5 Introduction PDE-constrained optimization: min f(x) s.t. c_E(x) = 0 (PDE), c_I(x) ≥ 0. The problem is infinite-dimensional

6 Introduction Inverse problems Recover a parameter k(x) based on data collected from propagating waves

7 Introduction Inverse problems Recover a parameter k(x) based on data collected from propagating waves:
min_{y,k} ½ Σ_j Σ_m (y_j(x_m) − ȳ_{j,m})² + α(β‖k‖²_{L²(Ω)} + ‖∇k‖²_{(L²(Ω))ⁿ})
s.t.  Δy_j(x) + S(x)k(x)² y_j(x) − S(x)(k₀² − k(x)²) y_j^i = 0,  x in Ω
      y_j = 0,  x on ∂Ω
      l(x) ≤ k(x) ≤ u(x),  x in Ω

9 Introduction Optimal design Regional hyperthermia is a cancer therapy that aims at heating large and deeply seated tumors by means of radio wave absorption. This results in the killing of tumor cells and makes them more susceptible to other accompanying therapies; e.g., chemotherapy

10 Introduction Optimal design Computer modeling can be used to help plan the therapy for each patient, and it opens the door for numerical optimization. The goal is to heat the tumor to a target temperature of 43 °C while minimizing damage to nearby cells

11 Introduction Parameter estimation Weather forecasting: if the initial state of the atmosphere (temperatures, pressures, wind patterns, humidities) were known at a certain point in time, then an accurate forecast could be obtained by integrating atmospheric model equations forward in time. The flow is described by the Navier-Stokes equations and further sophistications of atmospheric physics and dynamics

12 Introduction Parameter estimation Limited amount of data (satellites, buoys, planes, ground-based sensors) Each observation is subject to error Nonuniformly distributed around the globe (satellite paths, densely-populated areas)

13 Newton's method Outline PDE-Constrained Optimization Introduction Newton's method Inexactness Results Summary and future work Nonsmooth Optimization Sequential quadratic programming (SQP) Gradient sampling (GS) SQP-GS Results Summary and future work Conclusion

14 Newton's method Nonlinear equations Newton's method for F(x) = 0: solve ∇F(x_k) d_k = −F(x_k). Judge progress by the merit function φ(x) ≜ ½‖F(x)‖². The direction is one of descent since
∇φ(x_k)ᵀ d_k = F(x_k)ᵀ ∇F(x_k) d_k = −‖F(x_k)‖² < 0
(Note the consistency between the step computation and merit function!)
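
A minimal Python/NumPy sketch of this step-plus-merit-function pairing (the quadratic test system F below is a hypothetical example, not one from the talk):

```python
import numpy as np

def newton(F, J, x, tol=1e-10, max_iter=50):
    """Newton's method for F(x) = 0, judged by the merit function phi(x) = 0.5*||F(x)||^2."""
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol:
            break
        d = np.linalg.solve(J(x), -Fx)              # Newton step: J(x_k) d_k = -F(x_k)
        # Consistency check: grad(phi)^T d = F^T J d = -||F||^2 < 0, so d is a descent direction
        assert (J(x).T @ Fx) @ d < 0.0
        x = x + d
    return x

# Hypothetical test system with a root near (1, 2)
F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
J = lambda x: np.array([[2.0*x[0], 1.0], [1.0, 2.0*x[1]]])
print(newton(F, J, np.array([1.0, 1.0])))
```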

15 Newton's method Equality constrained optimization Consider
min_{x ∈ Rⁿ} f(x) s.t. c(x) = 0
The Lagrangian is L(x, λ) ≜ f(x) + λᵀc(x), so the first-order optimality conditions are
∇L(x, λ) = F(x, λ) = [ ∇f(x) + ∇c(x)λ ; c(x) ] = 0

16 Newton's method Merit function Simply minimizing
ϕ(x, λ) = ½‖F(x, λ)‖² = ½‖[ ∇f(x) + ∇c(x)λ ; c(x) ]‖²
is generally inappropriate for constrained optimization. We use the merit function
φ(x; π) ≜ f(x) + π‖c(x)‖
where π is a penalty parameter

17 Newton's method Minimizing a penalty function Consider the penalty function for
min (x − 1)², s.t. x = 0
i.e., φ(x; π) = (x − 1)² + π|x|, for different values of the penalty parameter π. Figure: π = 1. Figure: π = 2
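
A quick numeric check of this exact-penalty effect (a NumPy grid search, purely for illustration): with π = 1 the penalty minimizer sits away from the constrained solution x = 0, while π = 2 moves it onto it.

```python
import numpy as np

x = np.linspace(-1.0, 2.0, 200001)
for pi in (1.0, 2.0):
    phi = (x - 1.0)**2 + pi*np.abs(x)    # phi(x; pi) for min (x-1)^2 s.t. x = 0
    print(pi, x[np.argmin(phi)])         # pi = 1 -> ~0.5 (infeasible), pi = 2 -> ~0.0 (feasible)
```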

18 Newton's method Algorithm 0: Newton method for optimization (Assume the problem is sufficiently convex and regular)
for k = 0, 1, 2, ...
  Solve the primal-dual (Newton) equations
    [ H(x_k, λ_k)  ∇c(x_k) ; ∇c(x_k)ᵀ  0 ] [ d_k ; δ_k ] = − [ ∇f(x_k) + ∇c(x_k)λ_k ; c(x_k) ]
  Increase π, if necessary, so that Dφ_k(d_k; π_k) < 0 (e.g., π_k ≥ ‖λ_k + δ_k‖)
  Backtrack from α_k ← 1 to satisfy the Armijo condition
    φ(x_k + α_k d_k; π_k) ≤ φ(x_k; π_k) + η α_k Dφ_k(d_k; π_k)
  Update iterate (x_{k+1}, λ_{k+1}) ← (x_k, λ_k) + α_k (d_k, δ_k)
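
A compact Python/NumPy sketch of Algorithm 0 on a small convex test problem; the problem data, the value η = 1e-4, and the simple π update rule are illustrative assumptions rather than the talk's exact choices.

```python
import numpy as np

# Hypothetical test problem: min x0^2 + x1^2  s.t.  x0 + x1 - 1 = 0
f      = lambda x: x[0]**2 + x[1]**2
grad_f = lambda x: np.array([2.0*x[0], 2.0*x[1]])
c      = lambda x: np.array([x[0] + x[1] - 1.0])
jac_c  = lambda x: np.array([[1.0], [1.0]])            # columns are constraint gradients
hess_L = lambda x, lam: 2.0*np.eye(2)                  # Hessian of the Lagrangian

def phi(x, pi):                                        # merit function f(x) + pi*||c(x)||_1
    return f(x) + pi*np.sum(np.abs(c(x)))

x, lam, pi, eta = np.array([3.0, -1.0]), np.zeros(1), 1.0, 1e-4
for k in range(20):
    g, A, ck = grad_f(x) + jac_c(x) @ lam, jac_c(x), c(x)
    if np.linalg.norm(np.concatenate([g, ck])) <= 1e-8:
        break
    # Primal-dual (Newton) system
    K = np.block([[hess_L(x, lam), A], [A.T, np.zeros((1, 1))]])
    sol = np.linalg.solve(K, -np.concatenate([g, ck]))
    d, delta = sol[:2], sol[2:]
    # Increase pi so the direction is one of descent for phi(.; pi), e.g. pi >= ||lam + delta||
    pi = max(pi, np.linalg.norm(lam + delta, np.inf) + 1e-4)
    Dphi = grad_f(x) @ d - pi*np.sum(np.abs(ck))       # directional derivative of the merit
    alpha = 1.0                                        # backtracking (Armijo) line search
    while phi(x + alpha*d, pi) > phi(x, pi) + eta*alpha*Dphi:
        alpha *= 0.5
    x, lam = x + alpha*d, lam + alpha*delta
print(x, lam)   # converges to (0.5, 0.5) with multiplier -1
```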

19 Newton's method Convergence of Algorithm 0 Assumption: The sequence {(x_k, λ_k)} is contained in a convex set Ω over which f, c, and their first derivatives are bounded and Lipschitz continuous. Also,
(Regularity) ∇c(x_k)ᵀ has full row rank with singular values bounded below by a positive constant
(Convexity) uᵀ H(x_k, λ_k) u ≥ µ‖u‖² for some µ > 0 and all u ∈ Rⁿ satisfying u ≠ 0 and ∇c(x_k)ᵀu = 0
Theorem (Han (1977)): The sequence {(x_k, λ_k)} yields the limit
lim_{k→∞} ‖[ ∇f(x_k) + ∇c(x_k)λ_k ; c(x_k) ]‖ = 0

20 Inexactness Outline PDE-Constrained Optimization Introduction Newton's method Inexactness Results Summary and future work Nonsmooth Optimization Sequential quadratic programming (SQP) Gradient sampling (GS) SQP-GS Results Summary and future work Conclusion

21 Inexactness Large-scale primal-dual algorithms Computational issues: large matrices to be stored; large matrices to be factored. Algorithmic issues: the problem may be nonconvex; the problem may be ill-conditioned. Computational/Algorithmic issues: without matrix factorizations, these difficulties become harder to detect and handle

22 Inexactness Nonlinear equations Compute
∇F(x_k) d_k = −F(x_k) + r_k
requiring (Dembo, Eisenstat, Steihaug (1982))
‖r_k‖ ≤ κ‖F(x_k)‖, κ ∈ (0, 1)
Progress judged by the merit function φ(x) ≜ ½‖F(x)‖². Again, note the consistency...
∇φ(x_k)ᵀ d_k = F(x_k)ᵀ ∇F(x_k) d_k = −‖F(x_k)‖² + F(x_k)ᵀ r_k ≤ (κ − 1)‖F(x_k)‖² < 0
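
A rough Python/NumPy sketch of the inexact-Newton idea: the linear system is solved only until the Dembo-Eisenstat-Steihaug condition ‖r_k‖ ≤ κ‖F(x_k)‖ holds. The inner solver here is a plain gradient-descent (Landweber) loop chosen only so the example stays self-contained; in practice a Krylov solver would be used. The test system is a hypothetical example.

```python
import numpy as np

def inexact_newton(F, J, x, kappa=0.1, tol=1e-8, max_iter=100):
    """Inexact Newton: solve J(x) d = -F(x) only until ||r|| <= kappa*||F(x)||."""
    for _ in range(max_iter):
        Fx, Jx = F(x), J(x)
        normF = np.linalg.norm(Fx)
        if normF <= tol:
            break
        # Inner solve: gradient descent on 0.5*||J d + F||^2, monitoring the residual r = J d + F
        d = np.zeros_like(x)
        tau = 1.0 / np.linalg.norm(Jx, 2)**2
        r = Jx @ d + Fx
        while np.linalg.norm(r) > kappa*normF:
            d -= tau*(Jx.T @ r)
            r = Jx @ d + Fx
        # grad(phi)^T d = -||F||^2 + F^T r <= (kappa - 1)*||F||^2 < 0: still a descent direction
        x = x + d
    return x

# Hypothetical smooth test system with a root near (1, 2)
F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
J = lambda x: np.array([[2.0*x[0], 1.0], [1.0, 2.0*x[1]]])
print(inexact_newton(F, J, np.array([0.8, 1.8])))
```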

23 Inexactness Optimization Compute
[ H(x_k, λ_k)  ∇c(x_k) ; ∇c(x_k)ᵀ  0 ] [ d_k ; δ_k ] = − [ ∇f(x_k) + ∇c(x_k)λ_k ; c(x_k) ] + [ ρ_k ; r_k ]
satisfying
‖[ ρ_k ; r_k ]‖ ≤ κ‖[ ∇f(x_k) + ∇c(x_k)λ_k ; c(x_k) ]‖, κ ∈ (0, 1)
If κ is not sufficiently small (e.g., on the order of 10⁻³ rather than near machine precision), then d_k may be an ascent direction for our merit function; i.e., Dφ_k(d_k; π_k) > 0 for all π_k ≥ π_{k−1}. Our work begins here... inexact Newton methods for optimization. We cover the convex case, nonconvexity, irregularity, inequality constraints

24 Inexactness Model reductions Define the model of φ(x; π):
m(d; π) ≜ f(x) + ∇f(x)ᵀd + π‖c(x) + ∇c(x)ᵀd‖
d_k is acceptable if
Δm(d_k; π_k) ≜ m(0; π_k) − m(d_k; π_k) = −∇f(x_k)ᵀd_k + π_k(‖c(x_k)‖ − ‖c(x_k) + ∇c(x_k)ᵀd_k‖) ≥ 0
This ensures Dφ_k(d_k; π_k) ≤ 0 (and more)

25 Inexactness Termination test 1 The search direction (d_k, δ_k) is acceptable if
‖[ ρ_k ; r_k ]‖ ≤ κ‖[ ∇f(x_k) + ∇c(x_k)λ_k ; c(x_k) ]‖, κ ∈ (0, 1)
and if for π_k = π_{k−1} and some σ ∈ (0, 1) we have
Δm(d_k; π_k) ≥ max{½ d_kᵀ H(x_k, λ_k) d_k, 0} + σ π_k max{‖c(x_k)‖, ‖r_k‖ − ‖c(x_k)‖}
(the first term on the right is ≥ 0 for any d)

26 Inexactness Termination test 2 The search direction (d_k, δ_k) is acceptable if
‖ρ_k‖ ≤ β‖c(x_k)‖, β > 0, and ‖r_k‖ ≤ ε‖c(x_k)‖, ε ∈ (0, 1)
Increasing the penalty parameter π then yields
Δm(d_k; π_k) ≥ max{½ d_kᵀ H(x_k, λ_k) d_k, 0} + σ π_k ‖c(x_k)‖
(the first term on the right is ≥ 0 for any d)

27 Inexactness Algorithm 1: Inexact Newton for optimization (Byrd, Curtis, Nocedal (2008))
for k = 0, 1, 2, ...
  Iteratively solve
    [ H(x_k, λ_k)  ∇c(x_k) ; ∇c(x_k)ᵀ  0 ] [ d_k ; δ_k ] = − [ ∇f(x_k) + ∇c(x_k)λ_k ; c(x_k) ]
  until termination test 1 or 2 is satisfied
  If only termination test 2 is satisfied, increase π so
    π_k ≥ max{ π_{k−1}, (∇f(x_k)ᵀd_k + max{½ d_kᵀ H(x_k, λ_k) d_k, 0}) / ((1 − τ)(‖c(x_k)‖ − ‖r_k‖)) }
  Backtrack from α_k ← 1 to satisfy
    φ(x_k + α_k d_k; π_k) ≤ φ(x_k; π_k) − η α_k Δm(d_k; π_k)
  Update iterate (x_{k+1}, λ_{k+1}) ← (x_k, λ_k) + α_k (d_k, δ_k)
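
A schematic Python/NumPy transcription of how the residuals, the model reduction, the two termination tests, and the penalty update fit together. The 2-norms, the constants κ, σ, β, ε, τ, and the tiny test data are assumptions made only for illustration; the inexact inner solve itself is omitted here.

```python
import numpy as np

def model_reduction(g, A, c, d, pi):
    """Delta m(d; pi) = -g'd + pi*(||c|| - ||c + A'd||), the reduction in the merit model."""
    return -g @ d + pi*(np.linalg.norm(c) - np.linalg.norm(c + A.T @ d))

def termination_test_1(g, A, c, H, lam, d, delta, pi, kappa=0.1, sigma=0.5):
    rho = H @ d + A @ delta + (g + A @ lam)        # dual residual
    r   = A.T @ d + c                              # primal (constraint) residual
    small_residual = (np.linalg.norm(np.concatenate([rho, r]))
                      <= kappa*np.linalg.norm(np.concatenate([g + A @ lam, c])))
    enough_reduction = (model_reduction(g, A, c, d, pi)
                        >= max(0.5*(d @ H @ d), 0.0)
                        + sigma*pi*max(np.linalg.norm(c),
                                       np.linalg.norm(r) - np.linalg.norm(c)))
    return small_residual and enough_reduction

def termination_test_2(g, A, c, H, lam, d, delta, beta=10.0, eps=0.5):
    rho = H @ d + A @ delta + (g + A @ lam)
    r   = A.T @ d + c
    return (np.linalg.norm(rho) <= beta*np.linalg.norm(c)
            and np.linalg.norm(r) <= eps*np.linalg.norm(c))

def penalty_update(g, A, c, H, d, pi_prev, r, tau=0.1):
    """Increase pi (used when only test 2 holds) so the model reduction stays sufficiently large."""
    denom = (1.0 - tau)*(np.linalg.norm(c) - np.linalg.norm(r))
    return max(pi_prev, (g @ d + max(0.5*(d @ H @ d), 0.0)) / denom)

# Tiny illustration: data for min x0^2+x1^2 s.t. x0+x1=1 at x=(3,-1), with a slightly inexact step
g, A, c = np.array([6.0, -2.0]), np.array([[1.0], [1.0]]), np.array([1.0])
H, lam  = 2.0*np.eye(2), np.zeros(1)
d, delta = np.array([-2.45, 1.5]), np.array([-1.0])
print(termination_test_1(g, A, c, H, lam, d, delta, pi=1.0))   # True for this nearly exact step
print(termination_test_2(g, A, c, H, lam, d, delta))           # True as well
```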

28 Inexactness Convergence of Algorithm 1 Assumption: The sequence {(x_k, λ_k)} is contained in a convex set Ω over which f, c, and their first derivatives are bounded and Lipschitz continuous. Also,
(Regularity) ∇c(x_k)ᵀ has full row rank with singular values bounded below by a positive constant
(Convexity) uᵀ H(x_k, λ_k) u ≥ µ‖u‖² for some µ > 0 and all u ∈ Rⁿ satisfying u ≠ 0 and ∇c(x_k)ᵀu = 0
Theorem (Byrd, Curtis, Nocedal (2008)): The sequence {(x_k, λ_k)} yields the limit
lim_{k→∞} ‖[ ∇f(x_k) + ∇c(x_k)λ_k ; c(x_k) ]‖ = 0

29 Inexactness Handling nonconvexity and rank deficiency There are two assumptions we aim to drop:
(Regularity) ∇c(x_k)ᵀ has full row rank with singular values bounded below by a positive constant
(Convexity) uᵀ H(x_k, λ_k) u ≥ µ‖u‖² for µ > 0 for all u ∈ Rⁿ satisfying u ≠ 0 and ∇c(x_k)ᵀu = 0
e.g., the problem is not regular if it is infeasible, and it is not convex if there are maximizers and/or saddle points. Without them, Algorithm 1 may stall or may not be well-defined

30 Inexactness No factorizations means no clue We might not store or factor
[ H(x_k, λ_k)  ∇c(x_k) ; ∇c(x_k)ᵀ  0 ]
so we might not know if the problem is nonconvex or ill-conditioned. Common practice is to perturb the matrix to be
[ H(x_k, λ_k) + ξ₁I  ∇c(x_k) ; ∇c(x_k)ᵀ  −ξ₂I ]
where ξ₁ convexifies the model and ξ₂ regularizes the constraints. Poor choices of ξ₁ and ξ₂ can have terrible consequences in the algorithm

31 Inexactness Our approach for global convergence Decompose the direction d_k into a normal component v_k (toward the constraints) and a tangential component u_k (toward optimality). We impose a specific type of trust region constraint on the v_k step in case the constraint Jacobian is (near) rank deficient

32 Inexactness Handling nonconvexity In the computation of d_k = v_k + u_k, convexify the Hessian as in
[ H(x_k, λ_k) + ξ₁I  ∇c(x_k) ; ∇c(x_k)ᵀ  0 ]
by monitoring iterates. Hessian modification strategy: increase ξ₁ whenever
‖u_k‖² > ψ‖v_k‖², ψ > 0, and u_kᵀ(H(x_k, λ_k) + ξ₁I)u_k < θ‖u_k‖², θ > 0
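
A tiny Python/NumPy sketch of this modification trigger (the doubling rule and the constants ψ, θ are illustrative assumptions):

```python
import numpy as np

def needs_convexification(H, xi1, u, v, psi=10.0, theta=1e-4):
    """Trigger for increasing xi1: the tangential step u is large relative to the normal step v,
    yet yields too little curvature in the modified Hessian H + xi1*I."""
    Hmod = H + xi1*np.eye(H.shape[0])
    return (u @ u > psi*(v @ v)) and (u @ Hmod @ u < theta*(u @ u))

# Hypothetical indefinite Hessian with u along its negative-curvature direction
H = np.diag([1.0, -2.0])
u, v = np.array([0.0, 1.0]), np.array([0.05, 0.0])
xi1 = 0.0
while needs_convexification(H, xi1, u, v):
    xi1 = max(2.0*xi1, 1e-4)           # simple doubling rule, an illustrative assumption
print(xi1)                             # stops once u'(H + xi1*I)u >= theta*||u||^2
```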

33 Inexactness Algorithm 2: Inexact Newton (regularized) (Curtis, Nocedal, Wächter (2009))
for k = 0, 1, 2, ...
  Approximately solve
    min ½‖c(x_k) + ∇c(x_k)ᵀv‖², s.t. ‖v‖ ≤ ω‖∇c(x_k)c(x_k)‖
  to compute v_k satisfying Cauchy decrease
  Iteratively solve
    [ H(x_k, λ_k) + ξ₁I  ∇c(x_k) ; ∇c(x_k)ᵀ  0 ] [ d_k ; δ_k ] = − [ ∇f(x_k) + ∇c(x_k)λ_k ; −∇c(x_k)ᵀv_k ]
  until termination test 1 or 2 is satisfied, increasing ξ₁ as described
  If only termination test 2 is satisfied, increase π so
    π_k ≥ max{ π_{k−1}, (∇f(x_k)ᵀd_k + max{½ u_kᵀ(H(x_k, λ_k) + ξ₁I)u_k, θ‖u_k‖²}) / ((1 − τ)(‖c(x_k)‖ − ‖c(x_k) + ∇c(x_k)ᵀd_k‖)) }
  Backtrack from α_k ← 1 to satisfy
    φ(x_k + α_k d_k; π_k) ≤ φ(x_k; π_k) − η α_k Δm(d_k; π_k)
  Update iterate (x_{k+1}, λ_{k+1}) ← (x_k, λ_k) + α_k (d_k, δ_k)

34 Inexactness Convergence of Algorithm 2 Assumption: The sequence {(x_k, λ_k)} is contained in a convex set Ω over which f, c, and their first derivatives are bounded and Lipschitz continuous
Theorem (Curtis, Nocedal, Wächter (2009)): If all limit points of {∇c(x_k)ᵀ} have full row rank, then the sequence {(x_k, λ_k)} yields the limit
lim_{k→∞} ‖[ ∇f(x_k) + ∇c(x_k)λ_k ; c(x_k) ]‖ = 0.
Otherwise, and if {π_k} is bounded, then
lim_{k→∞} ‖∇c(x_k)c(x_k)‖ = 0 and lim_{k→∞} ‖∇f(x_k) + ∇c(x_k)λ_k‖ = 0

35 Inexactness Handling inequalities Interior point methods are attractive for large applications. Line-search interior point methods that enforce c(x_k) + ∇c(x_k)ᵀd_k = 0 may fail to converge globally (Wächter, Biegler (2000)). Fortunately, the trust region subproblem we use to regularize the constraints also saves us from this type of failure!

36 Inexactness Algorithm 2 (Interior-point version) Apply Algorithm 2 to the logarithmic-barrier subproblem for µ → 0
min f(x) − µ Σ_{i=1}^{q} ln s_i, s.t. c_E(x) = 0, c_I(x) − s = 0
Define the primal-dual system matrix
[ H(x_k, λ_{E,k}, λ_{I,k})  0  ∇c_E(x_k)  ∇c_I(x_k) ; 0  µI  0  −S_k ; ∇c_E(x_k)ᵀ  0  0  0 ; ∇c_I(x_k)ᵀ  −S_k  0  0 ]
acting on the step (d_k^x, d_k^s, δ_{E,k}, δ_{I,k}), so that the iterate update has
[ x_{k+1} ; s_{k+1} ] ← [ x_k ; s_k ] + α_k [ d_k^x ; S_k d_k^s ]
Incorporate a fraction-to-the-boundary rule in the line search and a slack reset in the algorithm to maintain s ≥ max{0, c_I(x)}

37 Inexactness Convergence of Algorithm 2 (interior-point) Assumption: The sequence {(x_k, λ_{E,k}, λ_{I,k})} is contained in a convex set Ω over which f, c_E, c_I, and their first derivatives are bounded and Lipschitz continuous
Theorem (Curtis, Schenk, Wächter (2009)): For a given µ, Algorithm 2 yields the same limits as in the equality constrained case. If Algorithm 2 yields a sufficiently accurate solution to the barrier subproblem for each µ_j in a sequence {µ_j} → 0, and if the linear independence constraint qualification (LICQ) holds at a limit point x̄ of {x_j}, then there exist Lagrange multipliers λ̄ such that the first-order optimality conditions of the nonlinear program are satisfied

38 Results Outline PDE-Constrained Optimization Introduction Newton's method Inexactness Results Summary and future work Nonsmooth Optimization Sequential quadratic programming (SQP) Gradient sampling (GS) SQP-GS Results Summary and future work Conclusion

39 Results Implementation details Incorporated in the IPOPT software package (Wächter) as an inexact-algorithm option. Linear systems solved with PARDISO (Schenk) using SQMR (Freund (1994)). Preconditioning in PARDISO: incomplete multilevel factorization with inverse-based pivoting, stabilized by symmetric-weighted matchings. Optimality tolerance: 1e-8

40 Results CUTEr and COPS collections 745 problems written in AMPL; 645 solved successfully; 42 real failures; robustness between 87% and 94% (original IPOPT: 93%)

41 Results Helmholtz [Results table; columns: N, n, p, q, # iter, CPU sec (per iter); per-iteration CPU times for the three problem sizes: 21.833, 149.66, and 2729.1 seconds]

42 Results Boundary control
min ½ ∫_Ω (y(x) − y_t(x))² dx
s.t. −∇·(e^{y(x)} ∇y(x)) = 20 in Ω
     y(x) = u(x) on ∂Ω
     2.5 ≤ u(x) ≤ 3.5 on ∂Ω
where y_t(x) = x₁(x₁ − 1)x₂(x₂ − 1) sin(2πx₃)
[Results table; columns: N, n, p, q, # iter, CPU sec (per iter); per-iteration CPU times: 0.2165, 7.9731, and 380.88 seconds]
Original IPOPT with N = 32 requires 238 seconds per iteration

43 Results Hyperthermia treatment planning
min ½ ∫_Ω (y(x) − y_t(x))² dx
s.t. −Δy(x) − 10(y(x) − 37) = u*M(x)u in Ω
     37.0 ≤ y(x) ≤ 37.5 on ∂Ω
     42.0 ≤ y(x) ≤ 44.0 in Ω₀
where u_j = a_j e^{iφ_j}, M_{jk}(x) = ⟨E_j(x), E_k(x)⟩, E_j = sin(j x₁ x₂ x₃ π)
[Results table; columns: N, n, p, q, # iter, CPU sec (per iter); per-iteration CPU times: 0.3367 and 59.920 seconds]
Original IPOPT with N = 32 requires 408 seconds per iteration

44 Results Groundwater modeling
min ½ ∫_Ω (y(x) − y_t(x))² dx + α ∫_Ω [β(u(x) − u_t(x))² + ‖∇(u(x) − u_t(x))‖²] dx
s.t. −∇·(e^{u(x)} ∇y_i(x)) = q_i(x) in Ω, i = 1,...,6
     ∂y_i(x)/∂n = 0 on ∂Ω
     ∫_Ω y_i(x) dx = 0, i = 1,...,6
     1 ≤ u(x) ≤ 2 in Ω
where q_i = 100 sin(2πx₁) sin(2πx₂) sin(2πx₃)
[Results table; columns: N, n, p, q, # iter, CPU sec (per iter)]
Original IPOPT with N = 32 requires approx. 20 hours for the first iteration

45 Summary and future work Outline PDE-Constrained Optimization Introduction Newton's method Inexactness Results Summary and future work Nonsmooth Optimization Sequential quadratic programming (SQP) Gradient sampling (GS) SQP-GS Results Summary and future work Conclusion

46 Summary and future work Summary We have a new framework for inexact Newton methods for optimization. Convergence results are as good as (and sometimes better than) those for exact methods. Preliminary numerical results are encouraging

47 Summary and future work Future work Tune the method for specific applications; incorporate useful techniques such as filters, second-order corrections, specialized preconditioners; use (approximate) elimination techniques so that larger (e.g., time-dependent) problems can be solved

48 SQP Outline PDE-Constrained Optimization Introduction Newton's method Inexactness Results Summary and future work Nonsmooth Optimization Sequential quadratic programming (SQP) Gradient sampling (GS) SQP-GS Results Summary and future work Conclusion

49 SQP Constrained optimization of smooth functions Consider constrained optimization problems of the form
min_x f(x) s.t. c(x) ≤ 0
where f and c are smooth (equality constraints OK, too). At x_k, solve the SLP/SQP subproblem
min_d f_k + ∇f_kᵀd + ½ dᵀH_k d s.t. c_k + ∇c_kᵀd ≤ 0
to compute the search direction d_k

50 SQP SQP illustration: Objective model

51 SQP SQP illustration: Constraint model

52 SQP Practicalities Since the linearized constraints may be inconsistent, we solve
min_{d,s} ρ(f_k + ∇f_kᵀd) + Σ s_i + ½ dᵀH_k d s.t. c_k + ∇c_kᵀd ≤ s, s ≥ 0
where ρ > 0 is a penalty parameter. We perform a line search on the penalty function
φ(x; ρ) ≜ ρ f(x) + Σ max{0, c_i(x)}
to promote global convergence

53 SQP Line search A model of the penalty function is given by
q_k(d; ρ) ≜ ρ(f_k + ∇f_kᵀd) + Σ max{0, c_k^i + ∇c_k^iᵀd} + ½ dᵀH_k d
Solving the SQP subproblem is equivalent to minimizing q_k(d; ρ). The reduction in q_k(d; ρ) yielded by d_k is
Δq_k(d_k; ρ) ≜ q_k(0; ρ) − q_k(d_k; ρ)
We impose the sufficient decrease condition
φ(x_k + α_k d_k; ρ) ≤ φ(x_k; ρ) − η α_k Δq_k(d_k; ρ)

54 SQP Penalty-SQP method
for k = 0, 1, 2, ...
  Solve the SQP subproblem
    min_{d,s} ρ(f_k + ∇f_kᵀd) + Σ s_i + ½ dᵀH_k d s.t. c_k + ∇c_kᵀd ≤ s, s ≥ 0
  or, equivalently, solve
    min_d q_k(d; ρ) ≜ ρ(f_k + ∇f_kᵀd) + Σ max{0, c_k^i + ∇c_k^iᵀd} + ½ dᵀH_k d
  to compute d_k
  Backtrack from α_k = 1 to satisfy
    φ(x_k + α_k d_k; ρ) ≤ φ(x_k; ρ) − η α_k Δq_k(d_k; ρ)
  Update x_{k+1} ← x_k + α_k d_k
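
A self-contained Python sketch of one way to realize this loop; the test problem, the choice H_k = I, and the constants are illustrative assumptions, and the slack-reformulated subproblem is handed to SciPy's general-purpose SLSQP solver purely for convenience (a dedicated QP solver would be used in practice). With ρ = 0.5 the penalty is exact for this problem, so the iterates should approach the constrained solution.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical smooth test problem: min (x0-2)^2 + (x1-1)^2  s.t.  x0^2 + x1^2 - 1 <= 0
f      = lambda x: (x[0] - 2.0)**2 + (x[1] - 1.0)**2
grad_f = lambda x: np.array([2.0*(x[0] - 2.0), 2.0*(x[1] - 1.0)])
c      = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0])
jac_c  = lambda x: np.array([[2.0*x[0], 2.0*x[1]]])                 # rows are constraint gradients

def phi(x, rho):                                                    # penalty: rho*f + sum max{0, c_i}
    return rho*f(x) + np.sum(np.maximum(0.0, c(x)))

def q(d, x, rho, H):                                                # model of phi at x
    return (rho*(f(x) + grad_f(x) @ d)
            + np.sum(np.maximum(0.0, c(x) + jac_c(x) @ d))
            + 0.5*(d @ H @ d))

def sqp_subproblem(x, rho, H):
    """Slack-reformulated penalty-SQP subproblem in z = (d, s), solved with SciPy's SLSQP."""
    n, m = x.size, c(x).size
    obj  = lambda z: rho*(f(x) + grad_f(x) @ z[:n]) + np.sum(z[n:]) + 0.5*(z[:n] @ H @ z[:n])
    cons = [{'type': 'ineq', 'fun': lambda z: z[n:] - (c(x) + jac_c(x) @ z[:n])},   # c + J d <= s
            {'type': 'ineq', 'fun': lambda z: z[n:]}]                               # s >= 0
    return minimize(obj, np.zeros(n + m), constraints=cons).x[:n]

x, rho, eta = np.array([2.0, 2.0]), 0.5, 1e-4       # with this rho the penalty is exact here
for k in range(30):
    H = np.eye(2)                                   # simple positive-definite model Hessian
    d = sqp_subproblem(x, rho, H)
    dq = q(np.zeros(2), x, rho, H) - q(d, x, rho, H)       # model reduction Delta q_k(d_k; rho)
    if dq <= 1e-10:
        break
    alpha = 1.0                                     # backtracking line search on phi
    while phi(x + alpha*d, rho) > phi(x, rho) - eta*alpha*dq:
        alpha *= 0.5
    x = x + alpha*d
print(x)   # approaches the constrained solution (2, 1)/sqrt(5), roughly (0.894, 0.447)
```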

55 GS Outline PDE-Constrained Optimization Introduction Newton's method Inexactness Results Summary and future work Nonsmooth Optimization Sequential quadratic programming (SQP) Gradient sampling (GS) SQP-GS Results Summary and future work Conclusion

56 GS Unconstrained optimization of nonsmooth functions Consider the unconstrained optimization problem
min_x f(x)
where f may be nonsmooth (but is at least locally Lipschitz). The prototypical example is the absolute value function (figure)

57 GS Clarke subdifferential Suppose f is differentiable over an open dense set D. Let B(x̄, ε) ≜ {x : ‖x − x̄‖ ≤ ε}. The Clarke subdifferential is
∂f(x̄) = ⋂_{ε>0} cl conv ∇f(B(x̄, ε) ∩ D)
A point x̄ is called Clarke stationary if 0 ∈ ∂f(x̄)

58 GS ε-stationarity The Clarke ε-subdifferential is given by
∂f(x̄, ε) = cl conv ∇f(B(x̄, ε) ∩ D)
A point x̄ is called ε-stationary if 0 ∈ ∂f(x̄, ε)
... find ε-stationary point, reduce ε, find ε-stationary point, ...

59 GS Gradient sampling: Robust steepest descent (Burke, Lewis, Overton, 2005) We restrict iterates to the open dense set D. Ideally, at x_k, for a given ε we would solve
min_d f_k + max_{x ∈ B(x_k,ε) ∩ D} {∇f(x)ᵀd} + ½ dᵀH_k d
However, we can only approximate this step by solving
min_d f_k + max_{x ∈ B_k} {∇f(x)ᵀd} + ½ dᵀH_k d
where B_k = {x_k, x_{k1}, ..., x_{kp}} ⊂ B(x_k, ε) ∩ D

60-64 GS GS illustration: Objective model (figure sequence)

65 GS GS method
for k = 0, 1, 2, ...
  Sample points {x_{k1}, ..., x_{kp}} in B(x_k, ε) ∩ D
  Solve the GS subproblem
    min_d f_k + max_{x ∈ B_k} {∇f(x)ᵀd} + ½ dᵀH_k d
  to compute d_k
  Backtrack from α_k = 1 to satisfy
    f(x_k + α_k d_k) ≤ f(x_k) − η α_k ‖d_k‖²
  Update x_{k+1} ← x_k + α_k d_k (perturbing if necessary to ensure x_{k+1} ∈ D)
  If ‖d_k‖ ≤ ε, then reduce ε
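
A rough Python sketch of the loop above on a simple separable nonsmooth function; the test function, the choice H_k = I, the sampling radius schedule, and the use of SciPy's SLSQP on the smooth (d, z) reformulation of the subproblem are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical nonsmooth test function: f(x) = |x0| + 2|x1|, differentiable off the axes
f      = lambda x: np.abs(x[0]) + 2.0*np.abs(x[1])
grad_f = lambda x: np.array([np.sign(x[0]), 2.0*np.sign(x[1])])

def gs_direction(xk, eps, p=6):
    """GS subproblem min_d max_i {grad f(x_i)' d} + 0.5 d'd over the sampled set, written in its
    smooth (d, z) form and handed to SLSQP (an illustrative choice of subproblem solver)."""
    n = xk.size
    samples = [xk] + [xk + eps*(2.0*np.random.rand(n) - 1.0) for _ in range(p)]
    grads = [grad_f(x) for x in samples]
    obj  = lambda w: w[n] + 0.5*(w[:n] @ w[:n])                      # w = (d, z), H_k = I
    cons = [{'type': 'ineq', 'fun': lambda w, g=g: w[n] - g @ w[:n]} for g in grads]
    return minimize(obj, np.zeros(n + 1), constraints=cons).x[:n]

np.random.seed(0)
x, eps, eta = np.array([1.3, -0.7]), 0.5, 1e-4
for k in range(60):
    d = gs_direction(x, eps)
    if np.linalg.norm(d) <= eps:         # (approximately) eps-stationary: shrink the radius
        eps *= 0.5
        continue
    alpha = 1.0                          # backtracking line search
    while f(x + alpha*d) > f(x) - eta*alpha*(d @ d) and alpha > 1e-12:
        alpha *= 0.5
    x = x + alpha*d
print(x, eps)                            # x heads toward the minimizer (0, 0) as eps shrinks
```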

66 SQP-GS Outline PDE-Constrained Optimization Introduction Newton's method Inexactness Results Summary and future work Nonsmooth Optimization Sequential quadratic programming (SQP) Gradient sampling (GS) SQP-GS Results Summary and future work Conclusion

67 SQP-GS Constrained optimization of nonsmooth functions Consider constrained optimization problems of the form
min_x f(x) s.t. c(x) ≤ 0
where f and c may be nonsmooth (equality constraints OK, too). We may consider solving
min_x φ(x; ρ) ≜ ρ f(x) + Σ max{0, c_i(x)}
or even
min_x ϕ(x; ρ) ≜ ρ f(x) + max_i max{0, c_i(x)}
but this makes me... :-(

68 SQP-GS SQP and GS The SQP subproblem is
min_{d,z,s} ρz + Σ s_i + ½ dᵀH_k d s.t. f_k + ∇f_kᵀd ≤ z, c_k + ∇c_kᵀd ≤ s, s ≥ 0
The GS subproblem is
min_{d,z} z + ½ dᵀH_k d s.t. f_k + ∇f(x)ᵀd ≤ z, x ∈ B_k

69 SQP-GS SQP-GS The SQP-GS subproblem is
min_{d,z,s} ρz + Σ s_i + ½ dᵀH_k d
s.t. f_k + ∇f(x)ᵀd ≤ z, x ∈ B_k^0
     c_k^i + ∇c^i(x)ᵀd ≤ s_i, s_i ≥ 0, x ∈ B_k^i, i = 1, ..., m
where B_k^i = {x_k, x_{k1}^i, ..., x_{kp}^i} ⊂ B(x_k, ε) for i = 0, ..., m. This is equivalent to
min_d ρ max_{x ∈ B_k^0}(f_k + ∇f(x)ᵀd) + Σ_i max_{x ∈ B_k^i} max{0, c_k^i + ∇c^i(x)ᵀd} + ½ dᵀH_k d
i.e., min_d q_k(d; ρ), where now q_k(d; ρ) is a robust model of φ(x; ρ)
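
A minimal sketch of how this subproblem can be assembled and solved for a single nonlinear inequality constraint (m = 1); the sampled gradients, ρ, H, and the use of SciPy's SLSQP as the subproblem solver are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def sqp_gs_subproblem(xk, f_k, grad_f_samples, c_k, grad_c_samples, rho, H):
    """SQP-GS subproblem in the variables w = (d, z, s) for one inequality constraint,
    solved with SciPy's SLSQP purely for illustration."""
    n = xk.size
    obj  = lambda w: rho*w[n] + w[n+1] + 0.5*(w[:n] @ H @ w[:n])
    cons = ([{'type': 'ineq', 'fun': lambda w, g=g: w[n] - (f_k + g @ w[:n])}       # f_k + g'd <= z
             for g in grad_f_samples]
            + [{'type': 'ineq', 'fun': lambda w, g=g: w[n+1] - (c_k + g @ w[:n])}   # c_k + g'd <= s
               for g in grad_c_samples]
            + [{'type': 'ineq', 'fun': lambda w: w[n+1]}])                          # s >= 0
    w0 = np.concatenate([np.zeros(n), [f_k], [max(c_k, 0.0)]])                      # feasible start
    return minimize(obj, w0, constraints=cons).x[:n]

# Hypothetical sampled data for f(x) = |x0| + x1 and c(x) = x0^2 + x1^2 - 1 near xk = (0.1, 0.9)
xk, rho, H = np.array([0.1, 0.9]), 0.5, np.eye(2)
gf = [np.array([ 1.0, 1.0]), np.array([-1.0, 1.0])]        # gradients of f on either side of x0 = 0
gc = [2.0*xk, 2.0*(xk + np.array([0.05, -0.05]))]          # gradients of c at two sampled points
print(sqp_gs_subproblem(xk, 1.0, gf, -0.18, gc, rho, H))   # f_k = f(xk), c_k = c(xk)
```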

70-74 SQP-GS SQP-GS illustration: Constraint model (figure sequence)

75 SQP-GS SQP-GS method
for k = 0, 1, 2, ...
  Sample points {x_{k1}^i, ..., x_{kp}^i} in B(x_k, ε) ∩ D^i for i = 0, ..., m
  Solve the SQP-GS subproblem
    min_{d,z,s} ρz + Σ s_i + ½ dᵀH_k d
    s.t. f_k + ∇f(x)ᵀd ≤ z, x ∈ B_k^0
         c_k^i + ∇c^i(x)ᵀd ≤ s_i, s_i ≥ 0, x ∈ B_k^i, i = 1, ..., m
  to compute d_k
  Backtrack from α_k = 1 to satisfy
    φ(x_k + α_k d_k; ρ) ≤ φ(x_k; ρ) − η α_k Δq_k(d_k; ρ)
  Update x_{k+1} ← x_k + α_k d_k (perturbing if necessary to ensure x_{k+1} ∈ ∩_i D^i)
  If Δq_k(d_k; ρ) ≤ ε, then reduce ε

76 SQP-GS Global convergence Assumption 1: The functions f and c_i, i = 1, ..., m, are locally Lipschitz and continuously differentiable on open dense subsets of Rⁿ. Assumption 2: The sequence of iterates and sample points are contained in a convex set over which the functions f and c_i, i = 1, ..., m, and their first derivatives are bounded. Assumption 3: For universal constants ξ̄ ≥ ξ > 0, the Hessian matrices satisfy ξ‖d‖² ≤ dᵀH_k d ≤ ξ̄‖d‖² for all d ∈ Rⁿ

77 SQP-GS Global convergence Lemma 1: Δq_k(d_k; ρ) = 0 if and only if x_k is ε-stationary. Lemma 2: The one-sided directional derivative of the penalty function satisfies φ'(d_k; ρ) ≤ −d_kᵀH_k d_k < 0, and so d_k is a descent direction for φ(x; ρ) at x_k. Lemma 3: Suppose the sample size is p ≥ n + 1. If the current iterate x_k is sufficiently close to a stationary point x̄ of the penalty function φ(x; ρ), then there exists a nonempty open set of sample sets such that the solution to the SQP-GS subproblem d_k yields an arbitrarily small Δq_k(d_k; ρ) (Carathéodory's Theorem). Theorem: With probability one, every cluster point of {x_k} is feasible and stationary for φ(x; ρ)

78 Results Outline PDE-Constrained Optimization Introduction Newton's method Inexactness Results Summary and future work Nonsmooth Optimization Sequential quadratic programming (SQP) Gradient sampling (GS) SQP-GS Results Summary and future work Conclusion

79 Results Implementation Prototype implementation in MATLAB (available soon?); QP subproblems solved with MOSEK; BFGS approximations of the Hessian of the penalty function (Lewis and Overton, 2009); ρ decreased conservatively

80 Results Example 1: Nonsmooth Rosenbrock
min_x 8|x₁² − x₂| + (1 − x₁)²  s.t.  max{√2 x₁, 2x₂} ≤ 1
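
For concreteness, here are these two functions (and their gradients on the open dense sets where they are differentiable) written out in Python/NumPy, as a gradient-sampling-type method would use them; the point checked at the end is simply one where f is nonsmooth and both pieces of the constraint are active.

```python
import numpy as np

# f(x) = 8|x0^2 - x1| + (1 - x0)^2,  c(x) = max{sqrt(2)*x0, 2*x1} - 1 <= 0
f = lambda x: 8.0*np.abs(x[0]**2 - x[1]) + (1.0 - x[0])**2
c = lambda x: max(np.sqrt(2.0)*x[0], 2.0*x[1]) - 1.0

def grad_f(x):                      # valid wherever x0^2 != x1
    s = np.sign(x[0]**2 - x[1])
    return np.array([16.0*s*x[0] - 2.0*(1.0 - x[0]), -8.0*s])

def grad_c(x):                      # valid wherever sqrt(2)*x0 != 2*x1
    if np.sqrt(2.0)*x[0] > 2.0*x[1]:
        return np.array([np.sqrt(2.0), 0.0])
    return np.array([0.0, 2.0])

x = np.array([1.0/np.sqrt(2.0), 0.5])   # f is nonsmooth here (x0^2 = x1) and c(x) = 0
print(f(x), c(x))
```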

81 Results Example 1: Nonsmooth Rosenbrock

82 Results Example 2: Entropy minimization Find an N×N matrix X that solves
min_X ln( Π_{j=1}^{K} λ_j(A ∘ XᵀX) )
s.t. ‖X_j‖ = 1, j = 1, ..., N
where λ_j(M) denotes the jth largest eigenvalue of M, A is a real symmetric N×N matrix, ∘ denotes the Hadamard matrix product, and X_j denotes the jth column of X

83 Results Example 2: Entropy minimization [Results table; columns: N, n, K, f (SQP-GS), f (GS)]

84 Results Example 3(a): Compressed sensing (l 1 norm) Recover a sparse signal by solving
min_x ‖x‖₁ s.t. Ax = b
where A is a submatrix of a discrete cosine transform (DCT) matrix
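
To make the problem data concrete, the sketch below builds a DCT submatrix and a sparse signal with NumPy and recovers the signal by recasting the ℓ1 problem as a linear program for SciPy's linprog; this generic LP reformulation is only a way to set up and sanity-check the example, not the SQP-GS solver from the talk, and the dimensions are arbitrary.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 64, 24, 4                          # signal length, measurements, nonzeros (illustrative)

# Orthonormal DCT-II matrix built directly from its cosine definition
j = np.arange(n)
D = np.sqrt(2.0/n)*np.cos(np.pi*(2.0*j[None, :] + 1.0)*j[:, None]/(2.0*n))
D[0, :] = 1.0/np.sqrt(n)
A = D[rng.choice(n, size=m, replace=False), :]   # random submatrix of DCT rows

x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

# min ||x||_1 s.t. Ax = b, as the LP: min sum(t) s.t. -t <= x <= t, Ax = b
c_lp = np.concatenate([np.zeros(n), np.ones(n)])
A_ub = np.block([[ np.eye(n), -np.eye(n)],
                 [-np.eye(n), -np.eye(n)]])
A_eq = np.hstack([A, np.zeros((m, n))])
res = linprog(c_lp, A_ub=A_ub, b_ub=np.zeros(2*n), A_eq=A_eq, b_eq=b,
              bounds=[(None, None)]*(2*n))
x_rec = res.x[:n]
print(np.max(np.abs(x_rec - x_true)))        # small if the sparse signal is recovered exactly
```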

85 Results Example 3(a): Compressed sensing (l 1 norm)

86 Results Example 3(b): Compressed sensing (l 0.5 norm) Recover a sparse signal by solving
min_x ‖x‖_{0.5} s.t. Ax = b
where A is a submatrix of a discrete cosine transform (DCT) matrix

87 Results Example 3(b): Compressed sensing (l 0.5 norm) Figure: k = 1

88 Results Example 3(b): Compressed sensing (l 0.5 norm) Figure: k = 10

89 Results Example 3(b): Compressed sensing (l 0.5 norm) Figure: k = 25

90 Results Example 3(b): Compressed sensing (l 0.5 norm) Figure: k = 50

91 Results Example 3(b): Compressed sensing (l 0.5 norm) Figure: k = 200

92 Results Example 3(b): Compressed sensing (l 0.5 norm)

93 Summary and future work Outline PDE-Constrained Optimization Introduction Newton's method Inexactness Results Summary and future work Nonsmooth Optimization Sequential quadratic programming (SQP) Gradient sampling (GS) SQP-GS Results Summary and future work Conclusion

94 Summary and future work Summary We have presented a globally convergent algorithm for the solution of constrained, nonsmooth, and nonconvex optimization problems. The algorithm follows a penalty-SQP framework and uses gradient sampling to make the search direction calculation robust. Preliminary results are encouraging

95 Summary and future work Future work Tune updates for ɛ and ρ Allow for special handling of smooth/convex/linear functions Investigate SLP vs. SQP Extensions for particular applications; e.g., specialized sampling

96 Thanks!!
