Large-Scale Nonlinear Optimization with Inexact Step Computations
Andreas Wächter, IBM T.J. Watson Research Center, Yorktown Heights, New York
IPAM Workshop on Numerical Methods for Continuous Optimization, October 14, 2010
(joint work with Frank Curtis, Johannes Huber, and Olaf Schenk)
Nonlinear Optimization

    min_{x ∈ R^n} f(x)   s.t.  c(x) = 0,  d(x) ≤ 0

x: variables; f: R^n → R objective function; c: R^n → R^m equality constraints; d: R^n → R^p inequality constraints.
The functions f(x), c(x), d(x) are smooth (C²) with sparse derivatives; we want to find a local solution.

Barrier method:

    min_{x ∈ R^n, s ∈ R^p}  f(x) − µ Σ_{i=1}^p ln(s^(i))   s.t.  c(x) = 0,  d(x) + s = 0

Solve a sequence of barrier problems. In the following, concentrate on NLPs with only equality constraints (fixed µ).
Motivation
Example PDE-constrained optimization problem: 3D inverse problem [Haber et al.]
- All-at-once approach; finite-difference discretization on a regular grid, N = 32 in each dimension
- Size of NLP: 229,376 variables; 196,608 equality and 65,536 inequality constraints
- Using Ipopt with the Pardiso linear solver (direct factorization): about 15 hours for the first iteration!
- Nonzeros in the sparse step-computation matrix: 6,329,867; nonzeros in the factor: 2,503,219,440; fill-in factor: 395 (!)
- Want to be able to use an iterative linear solver
- Looking for other applications with large fill-in (ideas?)
Outline
- Newton's method for F(x) = 0 with inexact step computation
- NLP algorithm with inexact step computation
- Global convergence result
- Numerical experiments (3D PDE problems)
Newton's Method
NLP:  min_{x ∈ R^n} f(x)  s.t.  c(x) = 0
Optimality conditions:  ∇f(x) + ∇c(x) y = 0,  c(x) = 0
Apply Newton's method:

    [ W_k        ∇c(x_k) ] ( Δx_k )     ( ∇f(x_k) + ∇c(x_k) y_k )
    [ ∇c(x_k)^T  0       ] ( Δy_k ) = − ( c(x_k)                )

W_k = ∇²_xx L(x_k, y_k), the Hessian of the Lagrangian L(x, y) = f(x) + y^T c(x) (or an approximation).
Simple approach: consider this a root-finding problem F(z) = 0, with z = (x, y).
Basic Line-Search Algorithm for F(z) = 0
Given: starting point z_0; set k ← 0.
1. Solve the linear system ∇F(z_k)^T Δz_k = −F(z_k) to compute the search direction.
2. Perform a backtracking line search to determine a step size α_k ∈ (0, 1] satisfying the Armijo condition (η ∈ (0, 1)):
       φ(z_k + α_k Δz_k) ≤ φ(z_k) + η α_k Dφ(z_k; Δz_k)
   Here the merit function is φ(z) = ‖F(z)‖²₂.
3. Update the iterates: z_{k+1} = z_k + α_k Δz_k.
4. Increase k ← k + 1 and go back to Step 1.
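The four steps above can be sketched on a tiny two-variable system; the concrete F, starting point, and constants (η, the backtracking factor) are illustrative choices, not from the talk.

```python
import math

def newton_armijo(F, J, z0, eta=1e-4, tol=1e-10, max_iter=50):
    """Damped Newton for F(z) = 0 with Armijo backtracking on phi(z) = ||F(z)||_2^2."""
    z = list(z0)
    for _ in range(max_iter):
        Fz = F(z)
        phi = sum(v * v for v in Fz)
        if math.sqrt(phi) < tol:
            break
        a, b, c, d = J(z)                       # 2x2 Jacobian, row-major
        det = a * d - b * c
        # Step 1: solve J(z) dz = -F(z) (Cramer's rule for the 2x2 case)
        dz = ((-Fz[0] * d + Fz[1] * b) / det,
              (-Fz[1] * a + Fz[0] * c) / det)
        # For the exact step, Dphi(z; dz) = 2 F(z)^T J(z)^T... = -2 ||F(z)||^2
        dphi = -2.0 * phi
        # Step 2: backtrack until the Armijo condition holds
        alpha = 1.0
        while True:
            z_trial = [z[i] + alpha * dz[i] for i in range(2)]
            phi_trial = sum(v * v for v in F(z_trial))
            if phi_trial <= phi + eta * alpha * dphi:
                break
            alpha *= 0.5
        z = z_trial                             # Steps 3-4: accept and repeat
    return z

# Illustrative system: x^2 + y^2 = 4, x = y  ->  root at (sqrt(2), sqrt(2))
F = lambda z: (z[0]**2 + z[1]**2 - 4.0, z[0] - z[1])
J = lambda z: (2.0*z[0], 2.0*z[1], 1.0, -1.0)
root = newton_armijo(F, J, (1.0, 0.5))
```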
Simple Global Convergence Proof
Step computation: ∇F(z_k)^T Δz_k = −F(z_k); merit function φ(z) = ‖F(z)‖²₂, so
    Dφ(z_k; Δz_k) = 2 F(z_k)^T ∇F(z_k)^T Δz_k = −2 ‖F(z_k)‖²₂ < 0
Proof outline:
1. Show the Δz_k are bounded (e.g., if ‖∇F(z_k)‖ and ‖∇F(z_k)^{-1}‖ are bounded).
2. Therefore the step sizes α_k ≥ ᾱ > 0 are bounded away from 0.
3. Then, since φ is bounded below,
       φ(z_{k+1}) − φ(z_0) = Σ_{i=0}^k (φ(z_{i+1}) − φ(z_i)) ≤ Σ_{i=0}^k η α_i Dφ(z_i; Δz_i) ≤ −2ηᾱ Σ_{i=0}^k ‖F(z_i)‖²₂
4. Therefore: lim_{k→∞} F(z_k) = 0.
Simple Global Convergence Proof (Inexact Steps)
Inexact step computation [Dembo, Eisenstat, Steihaug (1982)]:
    ∇F(z_k)^T Δz_k = −F(z_k) + r_k,  with  ‖r_k‖ ≤ κ ‖F(z_k)‖₂,  κ ∈ (0, 1)
With the merit function φ(z) = ‖F(z)‖²₂:
    Dφ(z_k; Δz_k) = 2 F(z_k)^T ∇F(z_k)^T Δz_k ≤ −2 ‖F(z_k)‖²₂ + 2 ‖F(z_k)‖₂ ‖r_k‖₂ ≤ −2(1 − κ) ‖F(z_k)‖²₂
The proof outline goes through as before, with the final telescoping bound becoming −2ηᾱ(1 − κ) Σ_{i=0}^k ‖F(z_i)‖²₂, so again lim_{k→∞} F(z_k) = 0.
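The inexact-Newton termination test can be sketched as follows: the linear system is solved by a simple stationary (Jacobi) iteration, stopped as soon as the residual satisfies ‖r_k‖ ≤ κ‖F(z_k)‖. The matrix, right-hand side, and κ are illustrative values, not from the talk.

```python
import math

def jacobi_until_inexact(Jmat, F, kappa):
    """Jacobi iterations on J*dz = -F until the residual ||J*dz + F|| <= kappa*||F||."""
    n = len(F)
    b = [-v for v in F]
    dz = [0.0] * n
    norm_F = math.sqrt(sum(v * v for v in F))
    while True:
        # inexact-Newton residual: r = J*dz + F
        r = [sum(Jmat[i][j] * dz[j] for j in range(n)) + F[i] for i in range(n)]
        if math.sqrt(sum(v * v for v in r)) <= kappa * norm_F:
            return dz, r
        # one Jacobi sweep on J*dz = b
        dz = [(b[i] - sum(Jmat[i][j] * dz[j] for j in range(n) if j != i)) / Jmat[i][i]
              for i in range(n)]

Jmat = [[4.0, 1.0], [1.0, 3.0]]     # diagonally dominant, so Jacobi converges
F = [1.0, 2.0]
kappa = 0.1
dz, r = jacobi_until_inexact(Jmat, F, kappa)
```

The returned step satisfies the residual condition, which by the slide's estimate makes it a descent direction for φ(z) = ‖F(z)‖²₂.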
Discussion
The root-finding algorithm for F(z) = 0:
- can easily incorporate inexactness of the linear solver
- natural consistency between step computation and merit function
- can be used for solving PDEs
But it requires that ‖∇F(z_k)^{-1}‖ is uniformly bounded, where

    ∇F(z_k) = [ W_k        ∇c(x_k) ]
              [ ∇c(x_k)^T  0       ]

This is violated, e.g., if ∇c(x_k) loses rank; in the barrier approach, this also corresponds to a violation of LICQ.
Also, maximizers and stationary points satisfy F(z) = 0; we would like to encourage convergence to minimizers.
Merit Function for Optimization
Classical penalty function: φ_ν(x) = f(x) + ν ‖c(x)‖
For ν sufficiently large: x* a local solution of the NLP ⇒ x* a local minimizer of φ_ν(x).
Algorithm:
1. Solve the linear system to compute the search direction:

       [ W_k        ∇c(x_k) ] ( Δx_k )     ( ∇f(x_k) + ∇c(x_k) y_k )
       [ ∇c(x_k)^T  0       ] ( Δy_k ) = − ( c(x_k)                )

2. Perform a backtracking line search to determine a step size α_k with φ(x_k + α_k Δx_k) ≤ φ(x_k) + η α_k Dφ(x_k; Δx_k).  [Need Dφ(x_k; Δx_k) < 0!]
3. Update the iterates: (x_{k+1}, y_{k+1}) = (x_k, y_k) + α_k (Δx_k, Δy_k).
4. Increase k ← k + 1 and go back to Step 1.
Ensuring Descent
If ∇c(x_k)^T Δx_k + c(x_k) = 0, then Dφ_{ν_k}(x_k; Δx_k) = ∇f(x_k)^T Δx_k − ν_k ‖c(x_k)‖.
To guarantee Dφ_{ν_k}(x_k; Δx_k) < 0: before the line search, possibly increase ν. Careful: need to avoid ν → ∞!
One option: convexify the local model,

    [ W_k + δI   ∇c(x_k) ] ( Δx_k )     ( ∇f(x_k) + ∇c(x_k) y_k )
    [ ∇c(x_k)^T  0       ] ( Δy_k ) = − ( c(x_k)                )

Make sure that v^T (W_k + δI) v > 0 for all v with ∇c(x_k)^T v = 0 (δ ≥ 0).
This can be verified by looking at the inertia (requires a factorization). E.g., in Ipopt: choose δ ≥ 0 so that the inertia is as desired.
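A sketch of the convexification loop: Ipopt tests the inertia of a factorization of the full KKT matrix, but for a toy problem the curvature can be checked directly on a null-space direction of the constraint Jacobian. The matrices and the doubling rule for δ below are illustrative assumptions.

```python
def convexify(W, v, delta0=1e-4, factor=10.0):
    """Increase delta until v^T (W + delta*I) v > 0 for the null-space direction v."""
    def reduced_curvature(delta):
        Wv = [W[0][0]*v[0] + W[0][1]*v[1] + delta*v[0],
              W[1][0]*v[0] + W[1][1]*v[1] + delta*v[1]]
        return v[0]*Wv[0] + v[1]*Wv[1]
    delta = 0.0
    while reduced_curvature(delta) <= 0.0:
        # standard trial-and-error update: start small, then grow geometrically
        delta = delta0 if delta == 0.0 else factor * delta
    return delta

# Indefinite Hessian; constraint gradient a = (1, 0), so the null space is spanned by v = (0, 1)
W = [[2.0, 0.0], [0.0, -3.0]]
v = [0.0, 1.0]
delta = convexify(W, v)
```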
Simple Convergence Proof
Show first (based on strongish assumptions):
- the steps Δx_k are bounded
- the step sizes α_k ≥ ᾱ > 0 are bounded away from 0
- ν_k = ν̄ for large k
- Dφ_ν̄(x_k; Δx_k) ≤ −θ · OptCond_k  (θ > 0)
Then, as before, using the Armijo condition and the fact that φ_ν̄ is bounded below:
    φ_ν̄(x_{k+1}) − φ_ν̄(x_0) = Σ_{i=0}^k (φ_ν̄(x_{i+1}) − φ_ν̄(x_i)) ≤ Σ_{i=0}^k η α_i Dφ_ν̄(x_i; Δx_i) ≤ −ηᾱθ Σ_{i=0}^k OptCond_i
Therefore: lim_{k→∞} OptCond_k = 0.
Discussion of Convergence
Required for this convergence proof:
- Dφ_{ν_k}(x_k; Δx_k) < 0
- ν_k = ν̄ for large k
- the steps Δx_k are bounded
- the step sizes α_k ≥ ᾱ > 0 are bounded away from 0
The standard algorithm/proof fails if:
- the inertia cannot be computed (iterative linear solver)
- the step does not satisfy the linear system exactly (iterative linear solver)
- the constraint gradients become linearly dependent (strong assumption)
For barrier methods: the fraction-to-the-boundary rule can force α_k → 0, a fundamental problem for line-search barrier methods [Wächter, Biegler, 2000].
Goal
Devise an algorithm that
- can work with an iterative linear solver
- can deal with a rank-deficient constraint Jacobian
- overcomes the convergence problems of line-search barrier methods
Main ingredients:
- Step decomposition: helps to bound the step (no need to assume full rank of ∇c(x_k)); handles nonconvexity
- Termination test for the linear solver based on a model of the merit function: consistency between the inexact solution and the merit function; does not ignore the optimization nature of the problem
Useful Equivalence

    [ W_k        ∇c(x_k) ] ( Δx_k )     ( ∇f(x_k) + ∇c(x_k) y_k )
    [ ∇c(x_k)^T  0       ] ( Δy_k ) = − ( c(x_k)                )

is equivalent to the QP

    min  ½ Δx_k^T W_k Δx_k + ∇f(x_k)^T Δx_k   s.t.  ∇c(x_k)^T Δx_k + c(x_k) = 0
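This equivalence can be checked numerically on a tiny instance: solve the KKT system in closed form and verify that the solution is feasible for the linearized constraint and stationary along the constraint null space. W, the gradients, and c below are illustrative values.

```python
# Toy data: min 0.5*dx^T W dx + g^T dx  s.t.  a^T dx + c = 0,
# with W = 2*I, a = (1, 1), g = (1, 0), c = 1.
W_diag = 2.0
a = (1.0, 1.0)
g = (1.0, 0.0)
c = 1.0

# KKT conditions: W dx + a*lam = -g  and  a^T dx = -c.
# With W = W_diag*I: dx = -(g + a*lam)/W_diag; substituting into a^T dx = -c gives lam:
lam = (W_diag * c - (a[0]*g[0] + a[1]*g[1])) / (a[0]*a[0] + a[1]*a[1])
dx = tuple(-(g[i] + a[i]*lam) / W_diag for i in range(2))

# Linearized-constraint feasibility: a^T dx + c should be 0
feasibility = a[0]*dx[0] + a[1]*dx[1] + c
# Stationarity along the null-space direction v = (1, -1) of a:
v = (1.0, -1.0)
grad_q = tuple(W_diag*dx[i] + g[i] for i in range(2))   # gradient of the QP objective at dx
tangential_derivative = v[0]*grad_q[0] + v[1]*grad_q[1]  # should be 0 at the QP minimizer
```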
Step Decomposition

    min  ½ Δx_k^T W_k Δx_k + ∇f(x_k)^T Δx_k   s.t.  ∇c(x_k)^T Δx_k + c(x_k) = 0

Decompose Δx_k = n_k + t_k:
- Normal step n_k:  ∇c(x_k)^T n_k + c(x_k) = 0
- Tangential step t_k:  min_t ½ t^T W_k t + (W_k n_k + ∇f(x_k))^T t   s.t.  ∇c(x_k)^T t = 0
Inexact Steps (Normal Component)
Trust-region subproblem:

    min_n  ‖∇c(x_k)^T n + c(x_k)‖²₂   s.t.  ‖n‖₂ ≤ ω ‖∇c(x_k) c(x_k)‖₂   (ω > 0 fixed, usually large)

- Controls the size of the normal component
- The trust region is inactive if ω ≥ [σ_min(∇c(x_k))]^{-1}
- ‖∇c(x_k) c(x_k)‖ goes to zero if x_k converges to a minimizer of ‖c(x)‖²₂
Inexact: make at least as much progress as the steepest-descent step (Cauchy step n^C).
In practice:
- Shortest-norm solution n^N from

      [ I          ∇c(x_k) ] ( n^N )     ( 0      )
      [ ∇c(x_k)^T  0       ] ( δ   ) = − ( c(x_k) )

  (solved only approximately)
- Choose the combination of n^C and n^N that satisfies the trust region and gives the best objective
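The Cauchy-step safeguard can be sketched for a single constraint row: take the steepest-descent step for q(n) = ‖∇c^T n + c‖², with the exact minimizing step length, and truncate it to the trust radius ω‖∇c(x) c(x)‖. All numbers below are illustrative.

```python
import math

def cauchy_normal_step(Jrow, c, omega):
    """Cauchy step for min ||J n + c||^2 s.t. ||n|| <= omega*||J^T c||, one constraint row J."""
    # Steepest-descent direction is -g, where g = J^T c (the gradient of q at n = 0 is 2g)
    g = [Jrow[i] * c for i in range(len(Jrow))]
    g_norm2 = sum(v * v for v in g)
    Jg = sum(Jrow[i] * g[i] for i in range(len(Jrow)))
    alpha = g_norm2 / (Jg * Jg)            # exact minimizer of q(-alpha*g)
    n = [-alpha * v for v in g]
    radius = omega * math.sqrt(g_norm2)
    n_norm = math.sqrt(sum(v * v for v in n))
    if n_norm > radius:                    # truncate to the trust region
        n = [v * radius / n_norm for v in n]
    return n

Jrow, c, omega = [1.0, 2.0], 3.0, 0.1
n = cauchy_normal_step(Jrow, c, omega)
q0 = c * c                                              # model value at n = 0
q_n = (sum(Jrow[i] * n[i] for i in range(2)) + c) ** 2  # model value at the Cauchy step
```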
Inexact Steps (Tangential Component)
The tangential component solves

    [ W_k        ∇c(x_k) ] ( t_k  )     ( ∇f(x_k) + ∇c(x_k) y_k + W_k n_k )
    [ ∇c(x_k)^T  0       ] ( Δy_k ) = − ( 0                               )
Inexact Tangential Component
Rewrite using Δx_k = n_k + t_k (Δx̄_k denotes the current inexact solution):

    [ W_k        ∇c(x_k) ] ( Δx̄_k )     ( ∇f(x_k) + ∇c(x_k) y_k )   ( ρ_k )
    [ ∇c(x_k)^T  0       ] ( Δȳ_k ) = − ( ∇c(x_k)^T n_k         ) + ( r_k )

Require (simplified):
1. Residual condition:
       ‖(ρ_k, r_k)‖ ≤ κ ‖(∇f(x_k) + ∇c(x_k) y_k, ∇c(x_k)^T n_k)‖,   κ ∈ (0, 1)
2. Tangential component condition: satisfy at least one of
       t_k^T W_k t_k ≥ θ ‖t_k‖²  (θ > 0)    or    ‖t_k‖ ≤ ψ ‖n_k‖  (ψ > 0)
Update W_k ← W_k + δI if both are violated.
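The two-part termination test can be sketched as a predicate on a candidate inexact solution. The constants κ, θ, ψ and all vectors are illustrative, and W is applied as a plain matrix-vector product.

```python
import math

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def termination_ok(residual, rhs, t, n, W, kappa=0.5, theta=1e-2, psi=10.0):
    """Accept an inexact tangential solution if the residual is small relative to the
    right-hand side AND the tangential part passes a curvature or size condition."""
    cond_residual = norm(residual) <= kappa * norm(rhs)
    Wt = matvec(W, t)
    cond_curvature = sum(t[i] * Wt[i] for i in range(len(t))) >= theta * norm(t) ** 2
    cond_size = norm(t) <= psi * norm(n)
    return cond_residual and (cond_curvature or cond_size)

W = [[2.0, 0.0], [0.0, 1.0]]    # positive definite, so the curvature test holds
rhs = [1.0, 1.0]
accept = termination_ok(residual=[0.1, 0.1], rhs=rhs, t=[0.5, 0.5], n=[0.05, 0.0], W=W)
reject = termination_ok(residual=[2.0, 2.0], rhs=rhs, t=[0.5, 0.5], n=[0.05, 0.0], W=W)
```

In the full algorithm, a rejected candidate triggers either more iterations of the linear solver or (if both tangential conditions fail) the Hessian modification W_k ← W_k + δI.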
Model Reduction Condition
Merit function: φ_{ν_k}(x) = f(x) + ν_k ‖c(x)‖
Linear model: m_{ν_k}(x_k, Δx) = f(x_k) + ∇f(x_k)^T Δx + ν_k ‖c(x_k) + ∇c(x_k)^T Δx‖
Predicted reduction: Δm_{ν_k}(x_k; Δx) = m_{ν_k}(x_k, 0) − m_{ν_k}(x_k, Δx)
Can show: Dφ_{ν_k}(x_k; Δx) ≤ −Δm_{ν_k}(x_k; Δx)
Require:
- Δm_{ν_k}(x_k; Δx_k) is sufficiently positive (⇒ guarantees a direction of descent for the current ν_k), or
- ‖c(x_k)‖ − ‖c(x_k) + ∇c(x_k)^T Δx_k‖ is sufficiently positive (⇒ controlled update of ν_k to get Dφ_{ν_k}(x_k; Δx_k) < 0)
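The penalty update implied by the second condition can be sketched as follows: if the predicted reduction for the current ν falls short of a fraction ρ of the predicted infeasibility reduction, raise ν just enough, plus a small margin. This rule and all numbers are a simplified illustration of this class of updates, not the exact rule of the talk.

```python
def update_penalty(grad_f_dx, c_norm, c_lin_norm, nu, rho=0.5, margin=1e-4):
    """Ensure Delta m_nu = -grad_f^T dx + nu*(||c|| - ||c + J dx||)
    is at least rho*nu*(||c|| - ||c + J dx||)."""
    infeas_red = c_norm - c_lin_norm        # predicted reduction in infeasibility (> 0 here)
    pred_red = -grad_f_dx + nu * infeas_red
    if pred_red < rho * nu * infeas_red:
        # smallest nu making the inequality hold, plus a margin
        nu = grad_f_dx / ((1.0 - rho) * infeas_red) + margin
    return nu

# Illustrative step: ascent direction for f (grad_f_dx > 0) but large infeasibility reduction
nu0 = 0.1
nu = update_penalty(grad_f_dx=1.0, c_norm=2.0, c_lin_norm=0.5, nu=nu0)
pred_red = -1.0 + nu * 1.5                  # predicted reduction with the updated nu
```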
Theoretical Convergence Result
Assumption: f(x), c(x), ∇f(x), ∇c(x) are bounded and Lipschitz continuous; the (unmodified) W_k are bounded.
Theorem (Curtis, Nocedal, Wächter, 2009): If all limit points of ∇c(x_k) have full rank, then ν_k is bounded and
    lim_{k→∞} ‖(∇f(x_k) + ∇c(x_k) y_k, c(x_k))‖ = 0.
Otherwise, lim_{k→∞} ‖∇c(x_k) c(x_k)‖ = 0 (stationarity for min ½‖c(x)‖²₂), and if ν_k is bounded,
    lim_{k→∞} ‖∇f(x_k) + ∇c(x_k) y_k‖ = 0.
A similar result holds for the interior-point method with inequalities (Curtis, Schenk, Wächter, 2010).
Inequality Constraints / Barrier Problem

    min_{x ∈ R^n, s ∈ R^p}  f(x) − µ Σ_{i=1}^p ln(s^(i))   s.t.  c(x) = 0,  d(x) + s = 0   (s > 0)

Primal-dual step computation:

    [ W_k        0          ∇c(x_k)  ∇d(x_k) ] ( Δx_k   )     ( ∇f(x_k) + ∇c(x_k) y_k^c + ∇d(x_k) y_k^d )
    [ 0          µS_k^{-2}  0        I       ] ( Δs_k   )     ( y_k^d − µS_k^{-1} e                      )
    [ ∇c(x_k)^T  0          0        0       ] ( Δy_k^c ) = − ( c(x_k)                                   )
    [ ∇d(x_k)^T  I          0        0       ] ( Δy_k^d )     ( d(x_k) + s_k                             )
Inequality Constraints / Barrier Problem: Slack Scaling
Trick: scale the slack variables, Δs̃_k = S_k^{-1} Δs_k:

    [ W_k        0    ∇c(x_k)  ∇d(x_k) ] ( Δx_k   )     ( ∇f(x_k) + ∇c(x_k) y_k^c + ∇d(x_k) y_k^d )
    [ 0          µI   0        S_k     ] ( Δs̃_k   )     ( S_k y_k^d − µe                           )
    [ ∇c(x_k)^T  0    0        0       ] ( Δy_k^c ) = − ( c(x_k)                                   )
    [ ∇d(x_k)^T  S_k  0        0       ] ( Δy_k^d )     ( d(x_k) + s_k                             )

Now all quantities in the linear system are uniformly bounded; much of the equality-only analysis applies, and the same termination tests are used.
Special considerations:
- Fraction to the boundary: s_k + α_k Δs_k ≥ (1 − τ) s_k > 0
- Slack reset: s_{k+1} ← max{s_{k+1}, −d(x_{k+1})}
Some Computational Results
Ipopt [Wächter, Laird, Biegler]:
- primal-dual interior-point method; originally a filter line-search method
- added the inexact algorithm with a penalty function
Pardiso [Schenk, Gärtner]:
- parallel direct linear solver; includes iterative linear solvers (e.g., SQMR)
- preconditioner: multi-level incomplete factorization with weighted graph matchings and inverse-based pivoting
Experiments:
- Robustness test with the CUTE/COPS collection (684 AMPL models): the inexact method solved 85% (original Ipopt: 94%)
- PDE-constrained optimization problems, implemented in Matlab; finite differences; uniform grids
Boundary Control

    min_{y,u}  ½ ∫_Ω (y(x) − y_t(x))² dx
    s.t.  −∇·(e^{y(x)} ∇y(x)) = 20   in Ω = [0,1]³
          y(x) = u(x)                on ∂Ω
          2.5 ≤ u(x) ≤ 3.5           on ∂Ω

Target profile: y_t(x) = x^(1)(x^(1) − 1) x^(2)(x^(2) − 1) sin(2π x^(3))
[Table: problem sizes (# variables, # equalities, # inequalities), iteration counts, and CPU seconds for three grid sizes N; per-iteration times 0.2165 s, 7.9731 s, and 380.88 s]
Original Ipopt with N = 32 requires 238 seconds per iteration.
Hyperthermia Treatment Planning

    min_{y,u}  ½ ∫_Ω (y(x) − y_t(x))² dx
    s.t.  −∆y(x) − 10(y(x) − 37) = u* M(x) u   in Ω = [0,1]³
          37.0 ≤ y(x) ≤ 37.5                   on ∂Ω
          42.0 ≤ y(x) ≤ 44.0                   in the tumor region Ω₀ ⊂ Ω

Controls: u_j = a_j e^{iφ_j}, j = 1,…,10; data: M_jk(x) = ⟨E_j(x), E_k(x)⟩ with E_j = sin(jπ x₁ x₂ x₃)
[Table: problem sizes, iteration counts, and CPU seconds for two grid sizes N; per-iteration times 0.3367 s and 59.920 s]
Original Ipopt with N = 32 requires 408 seconds per iteration.
Inverse Problem

    min_{y,u}  ½ Σ_{i=1}^6 ∫_Ω (y_i(x) − y_{i,t}(x))² dx + ½ ∫_Ω ( β (u(x) − u_t(x))² + ‖∇(u(x) − u_t(x))‖² ) dx
    s.t.  −∇·(e^{u(x)} ∇y_i(x)) = q_i(x)   in Ω = [0,1]³,  i = 1,…,6
          ∂y_i(x)/∂n = 0                   on ∂Ω,  i = 1,…,6
          ∫_Ω y_i(x) dx = 0,               i = 1,…,6
          −1 ≤ u(x) ≤ 1                    in Ω

[Table: problem sizes, iteration counts, and per-iteration CPU times for three grid sizes N]
Original Ipopt with N = 32 requires 15 hours for the first iteration (a reduction by a factor of 550).
PDE-Constrained Optimization
Very active research area. One often tries to use preconditioners for the original PDE, which are extremely efficient for solving the linear systems in the state variables.
Optimization methods are often based on step decomposition:
- projections onto the null space of the constraint Jacobian
- work with the reduced Hessian
- these often lead to triple iterative loops
- can handle constraints on the control variables
Potential difficulty: inequalities involving state variables. If an interior-point method is used, the reduced Hessian becomes very ill-conditioned, and it is not clear how to find an efficient preconditioner.
Our Ongoing Work
We hope that our approach can make a difference for problems with many state constraints when no efficient PDE preconditioner is known (e.g., some version of the Helmholtz equation).
- Find an interesting problem with many state constraints
- Utilize standard PDE techniques: discretization using meshes (e.g., finite elements); adaptive mesh refinement (e.g., [Ziems, Ulbrich]); scalable distributed-memory parallel implementation
- If successful, distribute the code
Example: Server Room Cooling
Problem: heat-generating equipment in a room must be cooled. We want to place and control air conditioners to minimize cooling costs.
Suggested by Henderson, Lopez, Mello (IBM Research).
Formulation of the Airflow Problem

We assume:
- Air flow has no internal friction (irrotational)
- Air speed is far below the speed of sound (incompressible)

Then the air flow can be described by a velocity potential Φ:
- Air velocity: v(x) = ∇Φ(x)
- The potential satisfies Laplace's equation:

  ∆Φ(x) = Σ_{i=1}^{n} ∂²Φ/∂x_i²(x) = 0   for x ∈ Ω

- Ω ⊂ R^n: interior of the server room without the equipment (n = 2, 3)
Boundary Conditions

∆Φ(x) = 0 for x ∈ Ω

- Walls and equipment: no airflow through the surface
  ∂Φ(x)/∂n = 0 for x ∈ Γ_wall   (∂/∂n: outward normal derivative)
- At AC locations in the walls: controllable fan speed prescribes the incoming flow
  ∂Φ(x)/∂n = v_ACi for x ∈ Γ_ACi
- At exhausts: assume a hole in the wall ("grounded potential")
  Φ(x) = 0 for x ∈ Γ_Exhj

Unique solution Φ ∈ H¹(Ω), depending continuously on v_ACi
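To make the model concrete, here is a minimal 2D finite-difference sketch of this potential-flow problem: Laplace's equation with no-flow walls, one AC inlet segment, and one grounded exhaust segment. The geometry, segment locations, and fan speed are all made up, and the actual code of the talk uses libmesh finite elements in 3D.

```python
# 2D potential flow on the unit square: no-flow walls, one inlet segment
# (left wall, dphi/dn = v_fan), one exhaust segment (right wall, phi = 0).
import numpy as np

N = 25                        # grid points per side (assumed)
h = 1.0 / (N - 1)
v_fan = 1.0                   # fan speed at the inlet (assumed)
lo, hi = N // 3, 2 * N // 3   # inlet/exhaust segment extent (assumed)
idx = lambda i, j: i * N + j

A = np.zeros((N * N, N * N))
b = np.zeros(N * N)
for i in range(N):
    for j in range(N):
        k = idx(i, j)
        if j == N - 1 and lo <= i <= hi:
            A[k, k] = 1.0                      # exhaust: grounded, phi = 0
        elif j == 0 and lo <= i <= hi:
            A[k, k], A[k, idx(i, 1)] = 1.0, -1.0
            b[k] = h * v_fan                   # inlet: dphi/dn = v_fan
        elif i in (0, N - 1) or j in (0, N - 1):
            ni = min(max(i, 1), N - 2)         # wall: dphi/dn = 0 via the
            nj = min(max(j, 1), N - 2)         # nearest interior neighbor
            A[k, k], A[k, idx(ni, nj)] = 1.0, -1.0
        else:
            A[k, k] = 4.0                      # interior: 5-point Laplacian
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                A[k, idx(i + di, j + dj)] = -1.0

phi = np.linalg.solve(A, b).reshape(N, N)
vx, vy = np.gradient(phi, h)                   # air velocity v = grad(phi)
```

The gradient field (vx, vy) is the air velocity on which a minimum-speed constraint at the hot spots would then act.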
Optimal Control Problem

min  Σ_i c_i v_ACi
s.t. ∆Φ(x) = 0           for x ∈ Ω
     ∂Φ(x)/∂n = 0        for x ∈ Γ_wall
     ∂Φ(x)/∂n = v_ACi    for x ∈ Γ_ACi
     Φ(x) = 0            for x ∈ Γ_Exhj
     ‖∇Φ(x)‖₂² ≥ v²_min  for x ∈ Γ_hot
     v_ACi ≥ 0

- Objective: minimize the cost of running the ACs
- Constraints: require a minimum air speed at the heat sources of the equipment (nonconvex)
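An all-at-once toy version of this control problem can be solved in a few lines: in 1D, with SciPy's SLSQP in place of the inexact Ipopt algorithm, and with a made-up grid size, minimum speed, and cost coefficient. The potential values and the fan speed are simultaneous unknowns, and the discretized PDE enters as equality constraints.

```python
# 1D all-at-once toy: min cost*v_fan s.t. discrete Laplace equation,
# minimum air speed at the "hot" midpoint, and v_fan >= 0.
import numpy as np
from scipy.optimize import minimize

n = 10                          # grid intervals on [0, 1] (assumed)
h, vmin, cost = 1.0 / n, 0.5, 1.0

def pde_residual(z):            # z = (phi_0, ..., phi_n, v_fan)
    phi, v = z[:-1], z[-1]
    r = [(phi[0] - phi[1]) / h - v]                     # inlet: dphi/dn = v_fan
    r += [phi[i - 1] - 2 * phi[i] + phi[i + 1] for i in range(1, n)]
    r += [phi[n]]                                       # exhaust: phi = 0
    return np.array(r)

def speed_margin(z):            # |phi'(x_mid)|^2 - vmin^2 >= 0
    phi, m = z[:-1], n // 2
    return ((phi[m - 1] - phi[m + 1]) / (2 * h)) ** 2 - vmin ** 2

z0 = np.append(1.0 - np.arange(n + 1) * h, 1.0)         # feasible start, v_fan = 1
res = minimize(lambda z: cost * z[-1], z0, method="SLSQP",
               constraints=[{"type": "eq", "fun": pde_residual},
                            {"type": "ineq", "fun": speed_margin},
                            {"type": "ineq", "fun": lambda z: z[-1]}])
print(round(res.x[-1], 4))      # optimal fan speed: the speed bound is active
```

In this 1D toy the potential is linear, the air speed equals the fan speed everywhere, and the optimizer drives the fan speed down to vmin, where the nonconvex speed constraint becomes active.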
Software

- Finite element library: libmesh [Kirk, Peterson, Stogner, Carey]
  - Manages the finite-element mesh
  - Interfaces to mesh generators (we use tetgen [Si, Gärtner] for 3D)
  - Handles mesh refinement (not yet used)
  - Distributed-memory implementation (uses PETSc)
- Optimization code: Ipopt (inexact algorithm)
  - Variation: step decomposition only when necessary
  - Uses the flexible penalty function [Curtis, Nocedal, 2008]
  - Uses the distributed-memory version of Ipopt [Dash, W]
    - handles linear algebra objects in parallel
    - supports parallel function evaluations
    - not (yet...) officially released
- Linear solver: Pardiso (iterative linear solver)
  - No distributed-memory implementation
  - In the future: use PSPIKE [Sameh, Sathe, Schenk]
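Because Pardiso is run here as an iterative solver, the algorithm must decide when a trial step from the linear solver is good enough to stop iterating. A minimal sketch of such an optimization-aware termination test, heavily simplified from the tests of Byrd, Curtis, and Nocedal (the acceptance constant sigma and the toy data are made up):

```python
# Accept an inexact step d once it predicts enough reduction in a model
# of the exact penalty function f + pi*||c||_2.
import numpy as np

def accepts(H, A, g, c, d, pi, sigma=0.1):
    """dm(d) = -g'd - 0.5 d'Hd + pi*(||c|| - ||c + Ad||)."""
    dm = (-g @ d - 0.5 * d @ (H @ d)
          + pi * (np.linalg.norm(c) - np.linalg.norm(c + A @ d)))
    return dm >= sigma * pi * np.linalg.norm(c)

# Tiny QP: the exact step passes the test, the zero step does not, so an
# iterative solver may stop well before the residual hits machine precision.
H = np.eye(2); A = np.array([[1.0, 1.0]])
g = np.array([1.0, 1.0]); c = np.array([1.0])
d_exact = np.array([-0.5, -0.5])                 # solves the KKT system exactly
print(accepts(H, A, g, c, d_exact, pi=1.0))      # True
print(accepts(H, A, g, c, np.zeros(2), pi=1.0))  # False
```

The actual termination tests of the talk are more involved (several alternative conditions, penalty-parameter updates), but they share this structure: the iterative linear solver is stopped by an optimization criterion rather than by a pure residual tolerance.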
3D Server Room Cooling Example

- 7 servers of different sizes
- 57 ACs located all around the walls
- One exhaust at the top front
- Constraint: minimum air speed at the front and back sides of the servers
Computational Results

Problem size (Ω = [0, 10] × [0, 5] × [0, 3]):
[Table: problem sizes for two mesh widths h; columns h, #var, #eq, #ineq, #nnz KKT]

Exact algorithm:
[Table: columns h, #iter, f(x*), CPU(algo), CPU(eval), fill-in]

Inexact algorithm:
[Table: columns h, #iter, f(x*), CPU(algo), CPU(eval)]

(different ACs active for h = 0.1 and h = 0.05)
Optimal Solution (h = 0.05) [figures]
Optimal Solution (Active Constraints) [figure]
Future Plans

These computations are still preliminary. Further improvements:
- Include temperature in the model
- Nonlinear PDEs
- Include adaptive mesh refinement within the optimization
- Improve the preconditioner
- Perform computations in parallel
  - Function evaluations (libmesh) and Ipopt are already parallel
  - Explore the distributed-memory iterative linear solver PSPIKE
- Potentially release the code as part of the Ipopt distribution
Conclusions

- Nonlinear optimization with inexact step computations
  - Termination tests for the iterative linear solver
    - Take the nature of the optimization problem into account
    - Based on a model of the merit function
  - Theoretical convergence properties under mild assumptions
- Encouraging results for PDE-constrained problems
  - Very good speedup on some examples
  - Promising preliminary results for a problem with state constraints
- A lot left to do
- Looking for other NLP applications with large fill-in; ideas?