An (exhaustive?) overview of optimization methods


C. Dutang, Summer 2010

Contents

1 Minimisation problems
  1.1 Continuous optimisation, uncountable set X
      Unconstrained optimisation
      Constrained optimisation
      Saddle point
2 Root problems
  2.1 General equation, uncountable set X
3 Fixed-point problems
4 Variational Inequality and Complementarity problems
  4.1 Examples and problem reformulation
      Examples
      Problem reformulation
  4.2 Algorithms for CPs
  4.3 Algorithms for VIPs
A Bibliography
B Websites

The following materials come mainly from Raydan & Svaiter (2001), Fletcher (2001), Madsen et al. (2004), Bonnans et al. (2006), Boggs & Tolle (1996), Dennis & Schnabel (1996), Conte & de Boor (1980), Ye (1996) and Lange (1994) for optimization and root problems, from Varadhan (2004) and Roland et al. (2007) for fixed-point problems, and from Facchinei & Pang (2003a,b) for variational inequality problems.

1 Minimisation problems

Minimisation problems consist in solving

   min_{x ∈ X ⊂ Rⁿ} f(x) such that c_j(x) = 0 for j ∈ E and c_j(x) ≤ 0 for j ∈ I.

Notations:
- gradient vector g(x) = ∇f(x), Hessian matrix H(x) = ∇²f(x),
- Jacobian matrix of the constraint function c: J_c = (∂c_i/∂x_j)_{ij},
- positive and negative parts of z: z⁺ = max(z, 0) and z⁻ = min(z, 0),
- c(x)⁻ is the vector such that (c(x)⁻)_i = c_i(x) if i ∈ E and c_i(x)⁺ if i ∈ I,
- the superscript T denotes the transpose.

1.1 Continuous optimisation, uncountable set X

Unconstrained optimisation (E = I = ∅)

1. Quadratic problems: f(x) = 1/2 xᵀMx + bᵀx + c.
   Prop: the solution is unique if the matrix M is symmetric positive definite; it then suffices to solve Mx + b = 0.
   (a) General descent scheme (x_{k+1} = x_k + t_k d_k with stepsize t_k and direction d_k such that f(x_{k+1}) < f(x_k)):
       i. d_k = −(Mx_k + b): steepest descent method,
       ii. d_k = −(Mx_k + b) and t_k = (x_k − x_{k−1})ᵀ(x_k − x_{k−1}) / ((x_k − x_{k−1})ᵀ M (x_k − x_{k−1})): Barzilai-Borwein method,
       iii. relaxed descent scheme (x_{k+1} = x_k + t_k θ_k d_k with relaxation parameters 0 < θ_k < 2): assuming the direction d_k = −g(x_k) and the optimal stepsize t_k = g(x_k)ᵀg(x_k) / (g(x_k)ᵀ M g(x_k)), the choice θ_k = 1 gives the steepest descent method while θ_k = 2 gives f(x_{k+1}) = f(x_k); if the sequence (θ_k)_k has an accumulation point in ]0, 2[, then the relaxed Cauchy method converges.
       An R sketch of the first two schemes is given after this item.
   (b) Conjugate gradient methods (x_{k+1} = x_k + t_k d_{k+1} given the full information at the kth iteration):
       i. classic CG method:
          Init: x_0, d_1 = −g_1.
          Iter: g_{k+1} = g_k + t_k M d_k with t_k = ||g_k||² / (d_kᵀ M d_k),
                d_{k+1} = −g_{k+1} + c_k d_k with c_k = g_{k+1}ᵀ M d_k / (d_kᵀ M d_k).
          NB: all directions (d_1, ..., d_k) are conjugate w.r.t. M.
       ii. preconditioned conjugate gradient method.
   (c) Gauss-Newton method for least-squares problems (f(x) = Σ_{j=1}^p f_j(x)²):
       the iteration is x_{k+1} = x_k + d_k with d_k = −G(x_k)⁻¹ ∇f(x_k), gradient ∇f(x) = Σ_j f_j(x) ∇f_j(x) and approximate Hessian G(x) = Σ_j ∇f_j(x) ∇f_j(x)ᵀ.
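To make the two descent schemes of item (a) concrete, here is a minimal R sketch; the matrix M, the vector b, the starting point and the iteration counts are arbitrary illustrative choices, not taken from the text.

```r
## minimise f(x) = 1/2 t(x) %*% M %*% x + t(b) %*% x for a SPD matrix M
M <- matrix(c(4, 1, 1, 3), 2, 2)          # symmetric positive definite (illustrative)
b <- c(-1, -2)
grad <- function(x) as.vector(M %*% x + b)

## steepest descent with the exact (Cauchy) stepsize t_k = g'g / g'Mg
x <- c(5, 5)
for (k in 1:50) {
  g <- grad(x)
  if (sum(g * g) < 1e-20) break           # stop once the residual Mx + b vanishes
  t <- sum(g * g) / sum(g * (M %*% g))
  x <- x - t * g
}
x_sd <- x

## Barzilai-Borwein stepsize t_k = s's / s'Ms with s = x_k - x_{k-1}
x_old <- c(5, 5)
x <- x_old - 0.1 * grad(x_old)            # one small initial step to get two iterates
for (k in 1:50) {
  g <- grad(x)
  if (sum(g * g) < 1e-20) break
  s <- x - x_old
  t <- sum(s * s) / sum(s * (M %*% s))
  x_old <- x
  x <- x - t * g
}
x_bb <- x

## both should approach the exact solution of Mx + b = 0
cbind(x_sd, x_bb, exact = solve(M, -b))
```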

2. Smooth nonlinear problems, f ∈ C¹.
   (a) General descent scheme (x_{k+1} = x_k + t_k d_k with stepsize t_k and direction d_k such that f(x_{k+1}) < f(x_k)).
       Direction rule: d_k = arg min_{||d|| ≤ δ} g(x_k)ᵀ d,
       i. for the L¹ norm, d_k is supported by the index of the largest gradient component (Gauss-Seidel / coordinate descent?),
       ii. for the L² norm, d_k = −g(x_k): steepest descent method.
       Stepsize rule:
       i. fixed stepsize: method of successive approximations,
       ii. optimal stepsize t_k = arg min_{t>0} f(x_k + t d_k) (useless in practice),
       iii. Wolfe's rule,
       iv. Goldstein and Price rule,
       v. Armijo rule (an R sketch with Armijo backtracking is given below).
       Direction + stepsize rule:
       i. d_k = −g(x_k) and t_k = d_{k−1}ᵀ d_{k−1} / (d_{k−1}ᵀ (g(x_k) − g(x_{k−1}))): Barzilai-Borwein method,
       ii. d_k = −g(x_k) and t_k = arg min_{t>0} f(x_k + t d_k): Cauchy method.
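A minimal R sketch of the descent scheme with an Armijo-type backtracking stepsize (rule v above), assuming the standard Rosenbrock test function and illustrative constants:

```r
## Rosenbrock function and its gradient (standard test problem)
f <- function(x) 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2
g <- function(x) c(-400 * x[1] * (x[2] - x[1]^2) - 2 * (1 - x[1]),
                    200 * (x[2] - x[1]^2))

## steepest descent direction d_k = -g(x_k), Armijo backtracking on t
x <- c(-1.2, 1)
gamma <- 1e-4                          # sufficient-decrease constant
for (k in 1:5000) {
  d <- -g(x)
  t <- 1
  while (f(x + t * d) > f(x) + gamma * t * sum(g(x) * d)) t <- t / 2
  x <- x + t * d
  if (sqrt(sum(g(x)^2)) < 1e-6) break
}
x   # slowly approaches the minimiser (1, 1)
```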

   (b) Conjugate gradient methods:
       i. classic CG:
          Init: d_1 = −g(x_1).
          Iter: d_k = −g(x_k) + β_k d_{k−1} for k > 1, with
          - β_k = g(x_k)ᵀ g(x_k) / (g(x_{k−1})ᵀ g(x_{k−1})): Fletcher-Reeves update,
          - β_k = g(x_k)ᵀ (g(x_k) − g(x_{k−1})) / (g(x_{k−1})ᵀ g(x_{k−1})): Polak-Ribière update,
          - Beale-Sorenson, Hestenes-Stiefel, Conjugate-Descent, ... updates.
          NB: all directions (d_1, ..., d_k) are conjugate with respect to the Hessian matrix H(x_k)?
       ii. preconditioned CG.
   (c) Newton methods (x_{k+1} = x_k + d_k, where the direction d_k minimises the quadratic model q_f(d) = f(x_k) + g(x_k)ᵀ d + 1/2 dᵀ H(x_k) d):
       i. exact Newton method: d_k solves g(x_k) + H(x_k) d = 0, i.e. it is the minimiser of the local quadratic approximation q_f,
       ii. quasi-Newton methods: d_k approximates the exact minimiser of q_f.
          Scheme:
          Init: x_0, W_0.
          Iter: while ||g(x_k)|| > ε,
            - update the approximate Hessian inverse W_k = W_{k−1} + B_k,
            - compute the direction d_k = −W_k g(x_k),
            - line search for t_k.
          Choice of W: the matrices must be symmetric positive definite and verify the quasi-Newton equation W_k (g_{k+1} − g_k) = x_{k+1} − x_k; known update formulas are Davidon-Fletcher-Powell (DFP) and Broyden-Fletcher-Goldfarb-Shanno (BFGS) (see the R sketch below),
       iii. inexact Newton or truncated Newton methods: they only require that d_k decreases the linear residual, ||H(x_k) d_k + g(x_k)|| ≤ η_c ||g(x_k)||, where 0 < η_c < 1 is the forcing term.
       NB: in the univariate case (n = 1), the quasi-Newton equation determines the update uniquely: this is the secant or regula falsi method.
   (d) Trust region algorithms (x_{k+1} = x_k + h_k with h_k a local minimiser of the model).
       Scheme: h_k(Δ_k) = arg min_{||h|| < Δ_k} f(x_k) + g(x_k)ᵀ h + 1/2 hᵀ H_k h, with H_k a positive definite matrix; if the actual decrease f(x_k) − f(x_k + h_k(Δ_k)) is at least a fraction m of the decrease predicted by the model, the step is accepted and the iteration terminates, otherwise Δ_k is decreased. NB: 0 < m < 1 is a fixed coefficient.
       Choice of the matrix H_k:
       - H_k is the Hessian H(x_k): positive definiteness is not guaranteed,
       - H_k = H(x_k) + λ I_n with λ > 0: Levenberg-Marquardt algorithm (derived with the KKT conditions),
       - quasi-Newton approximation,
       - Gauss-Newton approximation.
       NB: H_k = 0 gives the steepest descent.
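The quasi-Newton (BFGS), conjugate gradient and Newton-type algorithms listed above correspond to methods shipped with base R; a minimal sketch on the same Rosenbrock test function (the starting point is an arbitrary choice):

```r
f <- function(x) 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2
g <- function(x) c(-400 * x[1] * (x[2] - x[1]^2) - 2 * (1 - x[1]),
                    200 * (x[2] - x[1]^2))

optim(c(-1.2, 1), f, g, method = "BFGS")$par    # quasi-Newton with the BFGS update
optim(c(-1.2, 1), f, g, method = "CG")$par      # nonlinear conjugate gradient
nlm(f, c(-1.2, 1))$estimate                     # Newton-type algorithm
```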

3. Non-smooth problems
   (a) direct search (derivative-free) methods:
      i. Nelder-Mead algorithm,
      ii. Hooke-Jeeves algorithm,
      iii. multi-directional algorithm,
      iv. ...
   (b) metaheuristics:
      i. evolutionary algorithms,
      ii. stochastic algorithms: simulated annealing, Monte-Carlo methods, ant colony,
   (c) regularising techniques:
      i. filter algorithms,
      ii. noisy algorithms,
   (d) NSO (non-smooth optimisation) methods:
      i. subgradient methods,
      ii. bundle methods,
      iii. gradient sampling methods,
      iv. hybrid methods,
   (e) special problems:
      i. piecewise linear,
      ii. minmax,
      iii. partially separable.
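For the direct search and stochastic methods, base R's optim provides a Nelder-Mead simplex search and a simulated annealing variant; a minimal sketch with an illustrative test function and settings:

```r
f <- function(x) 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2

optim(c(-1.2, 1), f, method = "Nelder-Mead")$par            # derivative-free simplex search
optim(c(-1.2, 1), f, method = "SANN",
      control = list(maxit = 20000, temp = 10))$par         # simulated annealing (stochastic)
```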

Summary of unconstrained optimisation methods and their R implementations:
- quadratic functions: descent schemes (steepest descent, relaxed descent scheme, Barzilai-Borwein and Cauchy stepsizes: dfsane in package BB, optim in package stats), Gauss-Newton method, conjugate gradient (CG methods, preconditioned CG),
- smooth functions: descent schemes (steepest descent with Barzilai-Borwein or Cauchy stepsizes: dfsane in BB, optim in stats; Gauss-Seidel), conjugate gradient (CG methods, preconditioned CG: optim in stats), Newton methods (exact; quasi-Newton with DFP or BFGS updates: nlm and optim in stats; truncated Newton), trust-region methods (direct Hessian, Levenberg-Marquardt, quasi-Newton approximations: trust in package trust),
- non-smooth functions: direct search methods (Nelder-Mead and multidirectional algorithms: optim in stats), metaheuristics (evolutionary algorithms: genetic algorithm with genoud in packages rgenoud and mco, covariance matrix adaptation evolution strategy in package cmaes; stochastic algorithms: simulated annealing with optim in stats, Monte-Carlo methods, ant colony), regularising techniques (filter algorithms, noisy algorithms), NSO methods (subgradient, bundle and gradient sampling methods),
- large-scale problems: limited-memory algorithms (L-BFGS: optim in stats), Hessian-free methods (conjugate gradient: optim in stats).

Constrained optimisation (E, I ≠ ∅), also known as nonlinear programming

The Lagrangian function is defined as L(x, λ) = f(x) + λᵀ c(x), with λ the Lagrange multipliers. The Karush-Kuhn-Tucker (KKT) conditions are
- ∇_x L(x, λ) = 0,
- equality constraints j ∈ E: c_j(x) = 0 and λ_j ∈ R,
- active inequality constraints j ∈ I: c_j(x) = 0 and λ_j > 0,
- inactive inequality constraints j ∈ I: c_j(x) < 0 and λ_j = 0.

1. box constraints
   (a) projected Newton method,
   (b) limited-memory box-constrained BFGS method: L-BFGS-B.
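A minimal R sketch of the box-constrained case using the L-BFGS-B method available in optim; the objective and the bounds are illustrative choices:

```r
f <- function(x) 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2

## minimise subject to the box 1.5 <= x1 <= 3, 1.5 <= x2 <= 10 (illustrative bounds);
## the unconstrained minimiser (1, 1) is infeasible, so the solution lies on the boundary
optim(c(2, 2), f, method = "L-BFGS-B",
      lower = c(1.5, 1.5), upper = c(3, 10))$par
```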

2. linear constraints
   (a) linear programming (f is linear):
      i. simplex method,
      ii. dual simplex method,
      iii. Karmarkar algorithm,
   (b) interior point methods for linear constraints.
      Consider the problem min f(x) such that Ax = b and x ≥ 0. Let a log potential be π(x) = −Σ_i log(x_i). We want to minimise the penalised function f(x) + µπ(x), whose first-order optimality conditions are
         x.s = µ 1,   ∇f(x) + Aᵀλ = s,   Ax = b,   with x ≥ 0 and s ≥ 0,
      where x.s denotes the componentwise product. A central path or path-following algorithm computes a sequence of points (x_{µ_n}, λ_{µ_n}, s_{µ_n}) solving these conditions for a sequence (µ_n)_n decreasing to 0. The main algorithms are
      i. the potential reduction algorithm,
      ii. the primal-dual symmetric algorithm,
      iii. generic predictor-corrector or adaptive path-following algorithms: they solve a linear complementarity reformulation of the problem with an iterative method, namely a small-neighborhood algorithm, a predictor-corrector algorithm with modified field, or a large-neighborhood algorithm.
3. general constraints
   (a) Sequential linear programming: linearise both the objective and the constraint functions using Taylor series expansions.
   (b) Sequential quadratic programming
      i. Local methods for equality constraints. The KKT conditions reduce to ∇f(x) + J_c(x)ᵀλ = 0 and c(x) = 0. The Newton method is the iterative scheme x_{k+1} = x_k + d_k, λ_{k+1} = λ_k + µ_k, where (d_k, λ_k + µ_k) is computed from the linear system
         [ ∇²_xx L(x_k, λ_k)  J_c(x_k)ᵀ ; J_c(x_k)  0 ] (d_k ; λ_k + µ_k) = −(∇f(x_k) ; c(x_k)),
      the semicolon separating block rows. Variants:
      - full Hessian approximation: replace ∇²_xx L by an approximate Hessian B_k (possible schemes: PSB, BFGS),
      - augmented Lagrangian (add a constraint-penalty term) to guarantee positive definiteness,
      - reduced Hessian matrix: positive definiteness is only guaranteed on a subspace.
      ii. Local methods for (in)equality constraints. The SQP algorithm consists in solving a sequence of quadratic (Taylor) approximations of the Lagrangian.
      Scheme:
      Init: x_0, λ_0.
      Iterate while a termination criterion is not met: x_{k+1} = x_k + d_k where d_k solves
         min_d ∇f(x_k)ᵀ d + 1/2 dᵀ ∇²_xx L(x_k, λ_k) d such that c_E(x_k) + J_{c_E}(x_k) d = 0 and c_I(x_k) + J_{c_I}(x_k) d ≤ 0.
      Implementations: active-set strategies, interior-point algorithms, dual approaches.
      iii. Globalisation of the SQP method
         A. trust region method: iterative method based on a sequence of bounded quadratic sub-problems min_{||d|| < Δ_k} ∇f(x_k)ᵀ d + 1/2 dᵀ B_k d, where the trust region radius Δ_k is updated at each stage,
         B. line search method: iterative method x_{k+1} = x_k + t_k d_k, where the direction d_k is approximated by a solution of the quadratic sub-problem and t_k is chosen to ensure a reasonable decrease of a merit function (e.g. f(x) + σ ||c(x)⁻||_p); NB: the penalty parameter (σ_k)_k has to be updated along the iterations.
      NB: for both globalisation techniques, a particular approximation of the Hessian has to be chosen: quasi-Newton, reduced quasi-Newton, ...

   (c) Sequential unconstrained methods
      i. Exterior point methods. The general idea is to minimise the merit function f(x) + p(x) where p is a penalty term. During the optimisation process, an exterior-point method does not guarantee that all points stay in the feasible region (hence the name). Possible penalty terms are:
         - quadratic penalty: p(x) = σ/2 ||c(x)⁻||²₂ (see the R sketch at the end of this subsection),
         - Lagrangian: p(x) = µᵀ c(x),
         - augmented Lagrangian: p(x) = µᵀ c̃(x) + σ/2 ||c̃(x)||²₂ with c̃(x)_j = c_j(x) if j ∈ E and max(−µ_j/σ, c_j(x)) if j ∈ I,
         - non-differentiable (exact) penalty: p(x) = σ ||c(x)⁻||_p.
      Then either the minimisation of the associated merit function is carried out by an iterative method, or one solves the dual problem arg max_{u ≥ 0} θ(u) with θ(u) = min_x f(x) + u p(x).
      ii. Interior point methods (or adaptive barrier methods). Barrier methods operate in the interior of the feasible region: adapting the barrier iteratively permits convergence to the boundary by lowering the strength of the corresponding barrier. Let a log potential be π(x) = −Σ_i log(x_i). Using a convexity argument on f(x) + µπ(x), two majorants (surrogates) of f(x) can be found:
         - log surrogate function: g_l(x, y, µ) = f(x) − µ Σ_{j∈I} c_j(y) log c_j(x) + µ Σ_{j∈I} ∇c_j(y)ᵀ(x − y),
         - power surrogate function: g_p(x, y, µ) = f(x) + µ Σ_{j∈I} c_j(y)^{α+β} c_j(x)^{−α} / α + µ Σ_{j∈I} c_j(y)^β ∇c_j(y)ᵀ(x − y).
      Then an MM algorithm can be used with an adaptive barrier constant µ_k:
      Init: x_0, µ_0.
      Iterate while a termination criterion is not met:
         x_{k+1} = arg min_x g_l(x, x_k, µ_k) or arg min_x g_p(x, x_k, µ_k),
         adapt µ_{k+1}.
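A minimal R sketch of the exterior-point idea with a quadratic penalty; the constrained problem, the penalty schedule and the inner solver (optim) are illustrative choices, not a prescription from the text:

```r
## minimise (x1-2)^2 + (x2-1)^2 subject to x1 + x2 - 1 = 0 (illustrative problem)
f    <- function(x) (x[1] - 2)^2 + (x[2] - 1)^2
c_eq <- function(x) x[1] + x[2] - 1

## quadratic-penalty (exterior point) loop: minimise f(x) + sigma/2 * c(x)^2
## for an increasing sequence of penalty parameters sigma
x <- c(0, 0)
for (sigma in 10^(0:6)) {
  pen <- function(x) f(x) + sigma / 2 * c_eq(x)^2
  x <- optim(x, pen, method = "BFGS")$par
}
x          # approaches the constrained minimiser (1, 0) as sigma grows
c_eq(x)    # constraint violation shrinks but is never exactly zero
```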

Summary of constrained optimisation methods and their R implementations:
- box constraints: projected Newton method; L-BFGS-B (optim in package stats, Rvmmin),
- linear constraints: linear programming (simplex method, dual simplex method, Karmarkar algorithm); interior-point methods (potential reduction algorithm, primal-dual symmetric algorithm, adaptive path-following algorithms: small-neighborhood, predictor-corrector and large-neighborhood algorithms),
- general constraints: sequential linear programming; sequential quadratic programming (trust-region SQP, line-search SQP); sequential unconstrained programming (exterior-point algorithms, interior-point algorithms).

Min-max saddle point

A saddle point x* = (u*ᵀ, v*ᵀ)ᵀ satisfies the inequality ∀u, v: φ(u*, v) ≤ φ(u*, v*) ≤ φ(u, v*).
Grantham (2005) provides a unified presentation of most algorithms for finding saddle points. If we view the iterative algorithm as a function of time, then it can be defined as the solution of a differential equation. Let g be the gradient of φ: g(x) = ∂φ/∂x (x) = (g_u(x)ᵀ, g_v(x)ᵀ)ᵀ. Then a general algorithm solves the differential equation dx/dt = −P(x) g(x),

where P(x) denotes a matrix. Let H(x) be the Hessian matrix
   H(x) = [ H_uu(x)  H_uv(x) ; H_uv(x)ᵀ  H_vv(x) ].
We have the following algorithms:
1. steepest descent: P(x) = diag(I_u, −I_v),
2. Newton method: P(x) = H(x)⁻¹,
3. gradient enhanced min-max: P(x) = [ H_uu(x) + α_u I_u  H_uv(x) ; H_uv(x)ᵀ  H_vv(x) − α_v I_v ]⁻¹.
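A minimal R sketch of the steepest-descent choice P(x) = diag(I_u, −I_v), i.e. gradient descent in u and ascent in v, discretised with a small fixed step (explicit Euler); the test function φ(u, v) = u² − v² + uv and the step size are illustrative:

```r
## phi(u, v) = u^2 - v^2 + u*v : convex in u, concave in v, saddle point at (0, 0)
g_u <- function(u, v)  2 * u + v     # partial derivative of phi w.r.t. u
g_v <- function(u, v) -2 * v + u     # partial derivative of phi w.r.t. v

## explicit Euler discretisation of dx/dt = -P(x) g(x) with P = diag(I_u, -I_v):
## descent in u, ascent in v
u <- 3; v <- -2; eta <- 0.1
for (k in 1:200) {
  u_new <- u - eta * g_u(u, v)
  v_new <- v + eta * g_v(u, v)
  u <- u_new; v <- v_new
}
c(u, v)   # approaches the saddle point (0, 0)
```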

2 Root problems

Root problems consist in solving f(x) = 0 for x ∈ X ⊂ Rⁿ.

2.1 General equation, uncountable set X

1. f linear: Ax = b. The solution is unique if A is invertible, i.e. det(A) ≠ 0.
   (a) direct inversion: compute A⁻¹, then x = A⁻¹b.
   (b) Gaussian elimination: if A is not upper triangular, transform A into an upper triangular matrix, then compute the solution (if it exists) by back substitution. Note that this corresponds to a factorisation A = PLU where L is a lower triangular matrix, U an upper triangular matrix and P a permutation matrix.
      i. Gauss pivot method,
      ii. Gauss-Jordan method,
      iii. Grassmann algorithm.
   (c) Decomposition (splitting) methods: decompose A into three matrices as A = D − E − F, with D the diagonal part, E the opposite of the strictly lower part and F the opposite of the strictly upper part.
      i. Jacobi iterations: x_{k+1} = D⁻¹(E + F) x_k + D⁻¹ b,
      ii. Gauss-Seidel iterations: x_{k+1} = (D − E)⁻¹ F x_k + (D − E)⁻¹ b,
      iii. Successive Over-Relaxation: (D − ωE) x_{k+1} = (ωF + (1 − ω)D) x_k + ω b for ω ∈ ]0, 2[.
   (d) projection methods for large-scale problems.
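A minimal R sketch of the Jacobi and Gauss-Seidel splittings on a small diagonally dominant system; the matrix, right-hand side and iteration count are illustrative:

```r
A <- matrix(c( 4, -1,  0,
              -1,  4, -1,
               0, -1,  4), 3, 3, byrow = TRUE)   # diagonally dominant, so both schemes converge
b <- c(1, 2, 3)

D  <- diag(diag(A))                 # diagonal part
E  <- -(A * lower.tri(A))           # opposite of the strictly lower part
F_ <- -(A * upper.tri(A))           # opposite of the strictly upper part (F_ avoids masking FALSE)

x_j <- x_gs <- rep(0, 3)
for (k in 1:50) {
  x_j  <- solve(D,     (E + F_) %*% x_j + b)     # Jacobi iteration
  x_gs <- solve(D - E, F_ %*% x_gs + b)          # Gauss-Seidel iteration
}
cbind(jacobi = as.vector(x_j), gauss_seidel = as.vector(x_gs), exact = solve(A, b))
```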

2. f univariate
   (a) Newton methods
      i. Newton-Raphson method: assuming the derivative f′ is available, the algorithm is
         Init: x_0.
         Iter: x_{k+1} is the root of f(x_k) + f′(x_k)(x_{k+1} − x_k) = 0.
      ii. the secant method: it consists in replacing f′(x_k) in the Newton-Raphson method by the finite-difference approximation (f(x_k) − f(x_{k−1})) / (x_k − x_{k−1}).
      iii. the Muller method: it consists in approximating the function by the quadratic q(x) = f(x_k) + (x − x_k) f[x_k, x_{k−1}] + (x − x_k)(x − x_{k−1}) f[x_k, x_{k−1}, x_{k−2}], where f[., .] denotes divided differences. Of course, an analytical solution exists to find the next iterate x_{k+1}.
   (b) dichotomic search
      i. bisection method: assuming f is continuous, the procedure is
         Init: a_0 and b_0 such that f(a_0) f(b_0) < 0.
         Iter: let c = (a_k + b_k)/2. If f(c) f(a_k) > 0 then a_{k+1} = c and b_{k+1} = b_k, otherwise b_{k+1} = c and a_{k+1} = a_k.
      ii. regula falsi or false position method: a hybrid combining dichotomic search and the secant method; replace the point c of the bisection method (the middle of [a_k, b_k]) by the root c_k of f(b_k) + (f(b_k) − f(a_k))/(b_k − a_k) (c_k − b_k) = 0.
      An R sketch of the Newton-Raphson and bisection methods is given below.
3. f polynomial p
   (a) Bairstow method: for a polynomial with real coefficients, it consists in dividing the polynomial successively by a quadratic polynomial; we thus get pairs of conjugate zeros.
   (b) Bernoulli method: iterative method that uses a linear recurrence whose characteristic polynomial is p to compute the zeros one after another, deflating the polynomial at each stage.
   (c) Muller method: uses a quadratic approximation of the polynomial to find the zeros.
   (d) Newton method: iterates the Newton method on the polynomial; combined with the Muller method, it can be powerful for finding the zeros successively.
   (e) Laguerre method: uses the derivatives of log p in an iterative method.
   (f) Durand-Kerner or Weierstrass method: uses a fixed-point iteration to compute all the roots at once.
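A minimal R sketch of the univariate Newton-Raphson and bisection methods on the classical test equation x³ − 2x − 5 = 0 (tolerances and starting values are arbitrary); base R's uniroot is shown for comparison:

```r
f  <- function(x) x^3 - 2 * x - 5
fp <- function(x) 3 * x^2 - 2            # derivative f'(x)

## Newton-Raphson: x_{k+1} solves f(x_k) + f'(x_k)(x_{k+1} - x_k) = 0
x <- 2
for (k in 1:20) x <- x - f(x) / fp(x)
x_newton <- x

## bisection on [a, b] with f(a) f(b) < 0
a <- 2; b <- 3
while (b - a > 1e-10) {
  c_ <- (a + b) / 2
  if (f(c_) * f(a) > 0) a <- c_ else b <- c_
}
c(newton = x_newton, bisection = (a + b) / 2, uniroot = uniroot(f, c(2, 3))$root)
```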

4. f smooth (multivariate)
   (a) Newton-Raphson method: assuming the Jacobian ∇f is available, the algorithm is
      Init: x_0.
      Iter: x_{k+1} is the root of f(x_k) + ∇f(x_k)(x_{k+1} − x_k) = 0.
   (b) quasi-Newton methods: approximate the Jacobian by G_k with an update scheme:
      Init: x_0.
      Iter: x_{k+1} is the root of f(x_k) + G_k(x_{k+1} − x_k) = 0.
      Update schemes:
      - Broyden method: G_k = G_{k−1} + (y_{k−1} − G_{k−1} s_{k−1}) s_{k−1}ᵀ / (s_{k−1}ᵀ s_{k−1}), with y_{k−1} = f(x_k) − f(x_{k−1}) and s_{k−1} = x_k − x_{k−1},
      - DFP or BFGS: direct approximations of G_k⁻¹.
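A minimal R sketch of Broyden's update on a small illustrative nonlinear system; the initial Jacobian is approximated by finite differences, the starting point is chosen near the root, and a production code would add safeguards such as a line search:

```r
F_ <- function(x) c(x[1]^2 + x[2]^2 - 2,
                    exp(x[1] - 1) + x[2]^3 - 2)   # illustrative system with a root at (1, 1)

x  <- c(1.2, 0.9)
h  <- 1e-6
G  <- sapply(1:2, function(j) (F_(x + h * (1:2 == j)) - F_(x)) / h)   # finite-difference Jacobian at x0
Fx <- F_(x)

for (k in 1:100) {
  d     <- -solve(G, Fx)                       # Newton-like step with the approximate Jacobian
  x_new <- x + d
  F_new <- F_(x_new)
  s <- x_new - x; y <- F_new - Fx
  G <- G + ((y - G %*% s) %*% t(s)) / sum(s * s)   # Broyden rank-one update
  x <- x_new; Fx <- F_new
  if (sqrt(sum(Fx^2)) < 1e-10) break
}
x   # approaches the root (1, 1)
```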

Summary of root-finding methods:
- linear functions: direct inversion; Gaussian elimination (Gauss pivot method, Gauss-Jordan method, Grassmann method); decomposition (Jacobi iterations, Gauss-Seidel iterations, successive over-relaxation); projection methods,
- univariate functions: Newton methods (Newton-Raphson method, secant method, Muller method); dichotomic search (bisection method, regula falsi),
- polynomials: Bairstow, Bernoulli, Muller, Newton, Laguerre and Weierstrass methods,
- smooth functions: Newton-Raphson method; quasi-Newton methods (Broyden, BFGS).

3 Fixed-point problems

Fixed-point problems consist in finding x ∈ X ⊂ Rⁿ such that F(x) = x.

1. Direct method: x_{k+1} = F(x_k).
2. Polynomial methods: x_{k+1} = Σ_{i=0}^d γ_{i,k} F^i(x_k), with F^i the ith functional composition of F. Letting u^k_i = F^i(x_k), one iterate is x_k → u^k_0, ..., u^k_d → x_{k+1}; the sequence of the u^k_i is called a cycle or a restart.
   (a) 1st order methods: x_{k+1} can be rewritten as x_k − α_k r_k with r_k = F(x_k) − x_k,
      i. relaxation method: α_k independent of x_k, such as 1/(k+1) or a uniform random number in ]0, 2[,
      ii. Lemaréchal method or RRE1: α_k = ⟨v_k, r_k⟩ / ⟨v_k, v_k⟩ where v_k = F(F(x_k)) − 2F(x_k) + x_k,
      iii. Brezinski method or MPE1: α_k = ⟨r_k, r_k⟩ / ⟨r_k, v_k⟩,
   (b) dth order methods: the coefficients γ_{i,k} must satisfy the constraints Σ_{i=0}^d γ_{i,k} = 1 and Σ_{i=0}^d γ_{i,k} β_{i,j,k} = 0,
      i. Reduced Rank Extrapolation (RREd): β_{i,j,k} = ⟨Δ_{i,1} x_k, Δ_{j,2} x_k⟩,
      ii. Minimal Polynomial Extrapolation (MPEd): β_{i,j,k} = ⟨Δ_{i,1} x_k, Δ_{j,1} x_k⟩,
      with Δ_{i,j} x_k = Σ_{l=0}^j (−1)^{j−l} C_j^l F^{l+i}(x_k) and C_j^l the binomial coefficients.
   Squaring methods consist in applying a cycle step twice to get the next iterate, so that x_{k+1} = Σ_{i=0}^d Σ_{j=0}^d γ_{i,k} γ_{j,k} F^{i+j}(x_k).
   (a) squaring 1st order methods: x_{k+1} can be rewritten as x_k − 2α_k r_k + α_k² v_k,
      i. SqRRE1: α_k = ⟨v_k, r_k⟩ / ⟨v_k, v_k⟩,
      ii. SqMPE1: α_k = ⟨r_k, r_k⟩ / ⟨r_k, v_k⟩,
   (b) squaring dth order methods:
      i. SqRREd: β_{i,j,k} = ⟨Δ_{i,1} x_k, Δ_{j,2} x_k⟩,
      ii. SqMPEd: β_{i,j,k} = ⟨Δ_{i,1} x_k, Δ_{j,1} x_k⟩.
   NB: the RRE1 method is (sometimes) called the Richardson method when F is linear and the Lemaréchal method otherwise. The SqRRE1 method is called the Cauchy-Barzilai-Borwein method when F is linear. The MPE1 method is called the Cauchy method when F is linear and the Brezinski method otherwise.
3. Epsilon algorithms:
   (a) scalar ε-algorithm (SEA),
   (b) vector ε-algorithm (VEA),
   (c) topological ε-algorithm (TEA).
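A minimal R sketch contrasting the direct iteration with a squared first-order extrapolation step of SqMPE1 type (the idea behind SQUAREM); the linear contraction map and the cycle count are illustrative:

```r
## illustrative slowly converging linear fixed-point map F(x) = A x + b
A <- diag(c(0.99, 0.95))
b <- c(1, 2)
Fmap   <- function(x) as.vector(A %*% x + b)
x_star <- solve(diag(2) - A, b)            # exact fixed point

## direct iteration x_{k+1} = F(x_k)
x <- c(0, 0)
for (k in 1:50) x <- Fmap(x)
err_direct <- sqrt(sum((x - x_star)^2))

## squared first-order extrapolation (SqMPE1-type step)
x <- c(0, 0)
for (k in 1:50) {
  f1 <- Fmap(x); f2 <- Fmap(f1)
  r <- f1 - x                              # residual F(x) - x
  v <- f2 - 2 * f1 + x                     # second difference
  if (sqrt(sum(r^2)) < 1e-12) break
  alpha <- sum(r * r) / sum(r * v)
  x <- x - 2 * alpha * r + alpha^2 * v
}
err_sq <- sqrt(sum((x - x_star)^2))
c(direct = err_direct, squared = err_sq)   # the extrapolated cycles are far more accurate here
```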

4 Variational Inequality and Complementarity problems

Variational inequality problems VI(K, F) consist in finding x ∈ K such that

   ∀y ∈ K, (y − x)ᵀ F(x) ≥ 0,

where F : K → Rⁿ. We talk about quasi-variational inequality problems when the set itself depends on x: ∀y ∈ K(x), (y − x)ᵀ F(x) ≥ 0.

Complementarity problems CP(K, F) consist in finding x such that

   x ∈ K,   F(x) ∈ K*,   xᵀ F(x) = 0,

where K* denotes the dual cone of K, i.e. K* = {d ∈ Rⁿ, ∀k ∈ K, kᵀd ≥ 0}. Let us note that xᵀy = 0 means x is orthogonal to y, usually noted x ⊥ y. Furthermore, if K is a cone, then CP(K, F) is equivalent to VI(K, F).

4.1 Examples and problem reformulation

Examples

Here are a few examples of VI problems:
- classic complementarity problems: when K = Rⁿ₊, CP(K, F) reduces to 0 ≤ x ⊥ F(x) ≥ 0, i.e. ∀i, x_i ≥ 0, F_i(x) ≥ 0 and x_i F_i(x) = 0; each component of x and F(x) is complementary to the other,
- mixed complementarity problems: if K = R^m × R^{n−m}₊ and F(u, v) = (G(u, v)ᵀ, H(u, v)ᵀ)ᵀ, then the problem reads G(u, v) = 0 and 0 ≤ v ⊥ H(u, v) ≥ 0,
- linear variational inequality problems: F(x) = q + Mx and K is a polyhedral set, a closed rectangle or the positive orthant,
- link with optimisation problems: if we consider min_{x∈K} θ(x), a minimiser x satisfies ∀y ∈ K, (y − x)ᵀ ∇θ(x) ≥ 0, i.e. it solves VI(K, ∇θ). If θ(x) = qᵀx + 1/2 xᵀMx, then the VIP and the optimisation problem are equivalent when M is symmetric and K is a polyhedron,
- extended KKT system: let K be {x ∈ Rⁿ, h(x) = 0, g(x) ≤ 0} for h : Rⁿ → R^l and g : Rⁿ → R^m. If x solves VI(K, F) then there exist µ ∈ R^l and λ ∈ R^m such that

   L(x, µ, λ) = F(x) + Σ_j µ_j ∇h_j(x) + Σ_i λ_i ∇g_i(x) = 0,   h(x) = 0,   0 ≤ λ ⊥ −g(x) ≥ 0.

Problem reformulation

A necessary tool for VIPs and CPs is the complementarity function. We say ψ : R² → R is a complementarity function if ∀(a, b) ∈ R², ψ(a, b) = 0 ⟺ a ≥ 0, b ≥ 0, ab = 0. Here are some examples:
- ψ_min(x, y) = min(x, y),
- ψ_FB(x, y) = √(x² + y²) − (x + y).
A class of complementarity functions is the Mangasarian functions ψ_Man(a, b) = ϕ(|a − b|) − ϕ(a) − ϕ(b), where ϕ is a strictly increasing function from R to R; typically ϕ(t) = t or ϕ(t) = t³.

Using this tool, we can reformulate the above extended KKT system as the equation
   ( L(x, µ, λ), h(x), ψ_min(λ, −g(x)) ) = 0.

Another useful tool is the Euclidean projector on K, which maps a point x to the nearest point of K for the Euclidean distance, that is to say Π_K : x ↦ arg min_{y∈K} 1/2 (y − x)ᵀ(y − x).

There are two possibilities to reformulate VI(K, F) problems:
- natural mapping: x solves VI(K, F) ⟺ F_K^nat(x) = 0, with F_K^nat(x) = x − Π_K(x − F(x)),
- normal mapping: x solves VI(K, F) ⟺ there exists z ∈ Rⁿ with x = Π_K(z) and F_K^nor(z) = 0, with F_K^nor(z) = F(Π_K(z)) + z − Π_K(z).
For example, for CP(Rⁿ₊, F) we have F_K^nor(z) = F(z⁺) + z⁻.

The last tool we introduce is the merit function. A merit function for VI(K, F) on a closed set X ⊃ K is a function θ : X → R₊ such that x solves VI(K, F) ⟺ x ∈ X and θ(x) = 0 ⟺ x = arg min_{y∈X} θ(y) with min_{y∈X} θ(y) = 0.
Examples:
- for VI(Rⁿ₊, F), θ_ψ(x) = Σ_i ψ(x_i, F_i(x))², with ψ a complementarity function,
- for VI(K, F), the gap function θ_gap(x) = sup_{y∈K} F(x)ᵀ(x − y); if K is a cone, then θ_gap(x) becomes xᵀF(x).
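A minimal R sketch of the Fischer-Burmeister reformulation and its merit function for a small linear complementarity problem CP(Rⁿ₊, x ↦ q + Mx); the data M, q and the candidate points are illustrative:

```r
## Fischer-Burmeister complementarity function: zero iff a >= 0, b >= 0, a*b = 0
psi_fb <- function(a, b) sqrt(a^2 + b^2) - (a + b)

## linear complementarity problem CP(R^n_+, F) with F(x) = q + M x (illustrative data)
M <- matrix(c(2, 1, 1, 2), 2, 2)
q <- c(-3, -3)
Fcp <- function(x) as.vector(q + M %*% x)

## equation reformulation F_FB(x) = 0 and merit function theta_FB(x) = ||F_FB(x)||^2
F_fb     <- function(x) psi_fb(x, Fcp(x))
theta_fb <- function(x) sum(F_fb(x)^2)

theta_fb(c(1, 1))      # 0 at the solution x* = (1, 1), where F(x*) = 0
theta_fb(c(2, 0.5))    # positive away from the solution

## the merit function can also be handed to a generic minimiser
optim(c(2, 2), theta_fb, method = "BFGS")$par
```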

4.2 Algorithms for CPs

Problem types:

1. nonlinear CPs, CP(Rⁿ₊, F):
   (a) FB-based methods: the equation is F_FB(x) = 0 and the merit function is θ_FB(x) = F_FB(x)ᵀ F_FB(x), where F_FB(x) = (ψ_FB(x_1, F_1(x)), ..., ψ_FB(x_n, F_n(x)))ᵀ.
   Tools: for the linear Newton approximation scheme T, we choose a matrix H in the generalised Jacobian Jac F_FB as follows:
      - set B = {i ∈ {1, ..., n}, x_i = 0 = F_i(x)},
      - choose z ∈ Rⁿ such that z_i ≠ 0 for i ∈ B,
      - the ith row of H is
        (x_i/√(x_i² + F_i(x)²) − 1) e_iᵀ + (F_i(x)/√(x_i² + F_i(x)²) − 1) ∇F_i(x)ᵀ   if i ∉ B,
        (z_i/√(z_i² + (∇F_i(x)ᵀz)²) − 1) e_iᵀ + (∇F_i(x)ᵀz/√(z_i² + (∇F_i(x)ᵀz)²) − 1) ∇F_i(x)ᵀ   if i ∈ B,
      where e_i is the vector with a 1 at the ith position. NB: the linear CP corresponds to CP(K, x ↦ q + Mx).
   Algorithms:
   i. line-search methods: Algorithm(F_FB, θ_FB, T)
      Init: x_0 ∈ Rⁿ, ρ > 0, p > 1 and γ ∈ ]0, 1[.
      Iterate while x_k is not a stationary point of θ_FB:
         - select H_k in T(x_k),
         - find d_k root of F_FB(x_k) + H_k d = 0,
         - if the equation is not solvable or ∇θ_FB(x_k)ᵀ d_k > −ρ ||d_k||^p, then take d_k = −∇θ_FB(x_k),
         - find the smallest i_k ∈ N such that θ_FB(x_k + 2^{−i} d_k) ≤ θ_FB(x_k) + γ 2^{−i} ∇θ_FB(x_k)ᵀ d_k,
         - x_{k+1} = x_k + t_k d_k with t_k = 2^{−i_k}.
      NB: some variants include further checks on the direction d_k. An R sketch in this spirit is given at the end of this section.

   ii. trust region approach: Algorithm(F_FB, θ_FB, T)
      Init: x_0 ∈ Rⁿ, 0 < γ_1 < γ_2 < 1, Δ_0, Δ_min > 0.
      Iterate:
         - select H_k in T(x_k),
         - find d_k = arg min_{||d|| < Δ_k} q_FB(d, x_k) with q_FB(d, x) = ∇θ_FB(x)ᵀ d + 1/2 dᵀ H_kᵀ H_k d,
         - if q_FB(d_k, x_k) = 0 then stop,
         - if θ_FB(x_k + d_k) ≤ θ_FB(x_k) + γ_1 q_FB(d_k, x_k), then x_{k+1} = x_k + d_k and Δ_{k+1} = max(2Δ_k, Δ_min) if θ_FB(x_k + d_k) ≤ θ_FB(x_k) + γ_2 q_FB(d_k, x_k), Δ_{k+1} = max(Δ_k, Δ_min) otherwise;
           else x_{k+1} = x_k and Δ_{k+1} = Δ_k/2.
      NB: a variant includes an additional constraint on the direction d_k.
   iii. constrained methods: Algorithm(F_FB, θ_FB)
      Init: x_0 ∈ Rⁿ, ρ > 0, p > 1 and γ ∈ ]0, 1[.
      Iterate while x_k is not a stationary point of θ_FB:
         - find y_{k+1} solution of the linear CP(q_k, Jac F(x_k)) with q_k = F(x_k) − Jac F(x_k) x_k, and set d_k = y_{k+1} − x_k,
         - if the problem is not solvable or ∇θ_FB(x_k)ᵀ d_k > −ρ ||d_k||^p, then take d_k = −min(x_k, ∇θ_FB(x_k)),
         - find the smallest i_k ∈ N such that θ_FB(x_k + 2^{−i} d_k) ≤ θ_FB(x_k) + γ 2^{−i} ∇θ_FB(x_k)ᵀ d_k,
         - x_{k+1} = x_k + t_k d_k with t_k = 2^{−i_k}.
      NB: a variant consists in replacing the linear CP by a convex sub-program solved by a Levenberg-Marquardt method.
   (b) min-based methods: the equation is F_min(x) = 0 and the merit function is θ_min(x) = F_min(x)ᵀ F_min(x), where F_min(x) = (min(x_1, F_1(x)), ..., min(x_n, F_n(x)))ᵀ.
      i. line-search method: in the following, we use
         φ(x, d) = Σ_{i: x_i > F_i(x)} (F_i(x) + ∇F_i(x)ᵀ d)² + Σ_{i: x_i ≤ F_i(x)} (x_i + d_i)² + ρ(θ_min(x)) dᵀd / 2

         and σ(x, d) = Σ_{i: x_i > F_i(x)} F_i(x) ∇F_i(x)ᵀ d + Σ_{i: x_i ≤ F_i(x)} x_i d_i.
         Algorithm 9.2.2 (F_min, θ_min, φ, σ):
         Init: x_0 ∈ Rⁿ and γ ∈ ]0, 1[.
         Iterate:
            - find d_k solution of arg min_{x_k + d ≥ 0} φ(x_k, d),
            - if d_k = 0 then stop,
            - find the smallest i_k ∈ N such that θ_min(x_k + 2^{−i} d_k) ≤ θ_min(x_k) − γ 2^{−i} σ(x_k, d_k),
            - x_{k+1} = x_k + t_k d_k with t_k = 2^{−i_k}.
      ii. trust region approach: not possible, since the min function is not everywhere differentiable.
      iii. mixed complementarity-function method:
         Algorithm 9.2.3 (F_FB, θ_FB, F_min):
         Init: x_0 ∈ Rⁿ, ε > 0, p > 1, ρ > 0 and γ ∈ ]0, 1[.
         Iterate while x_k is not a stationary point of θ_FB:
            - select H_k in T_min(x_k),
            - d_k solves F_min(x_k) + H_k d = 0,
            - if the system is solvable and ||F_min(x_k + d_k)|| ≤ ||F_min(x_k)||, then x_{k+1} = x_k + t_k d_k with t_k = 1,
            - else, if ∇θ_FB(x_k)ᵀ d_k > −ρ ||d_k||^p then take d_k = −∇θ_FB(x_k); find the smallest i_k ∈ N such that θ_FB(x_k + 2^{−i} d_k) ≤ θ_FB(x_k) + γ 2^{−i} ∇θ_FB(x_k)ᵀ d_k and set x_{k+1} = x_k + t_k d_k with t_k = 2^{−i_k}.
   (c) extension to other complementarity functions: FB-based methods can be used with other complementarity functions, for instance
      - ψ_LT(a, b) = ||(a, b)||_q − (a + b) with q > 1,
      - ψ_KK(a, b) = (√((a − b)² + 2qab) − (a + b)) / (2 − q) with 0 ≤ q < 2,
      - ψ_CCK(a, b) = ψ_FB(a, b) − q a₊ b₊ with q > 0.
2. finite lower VIs,
3. finite upper VIs,
4. mixed CPs,
5. box constrained VIs.
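A minimal R sketch in the spirit of the FB-based line-search method of Section 4.2: the linear Newton approximation T(x_k) is replaced here by a plain finite-difference Jacobian of F_FB (a rough stand-in, not the generalised Jacobian of the text), and the LCP data and constants are illustrative:

```r
psi_fb <- function(a, b) sqrt(a^2 + b^2) - (a + b)
M <- matrix(c(2, 1, 1, 2), 2, 2); q <- c(-3, -3)           # illustrative LCP data, solution (1, 1)
Fcp   <- function(x) as.vector(q + M %*% x)
F_fb  <- function(x) psi_fb(x, Fcp(x))
theta <- function(x) sum(F_fb(x)^2)                        # merit function theta_FB

num_jac <- function(f, x, h = 1e-7)                        # finite-difference Jacobian (stand-in for T(x))
  sapply(seq_along(x), function(j) (f(x + h * (seq_along(x) == j)) - f(x)) / h)

x <- c(3, 0.2); gamma <- 1e-4; rho <- 1e-8; p <- 2.1
for (k in 1:50) {
  if (theta(x) < 1e-14) break
  H <- num_jac(F_fb, x)
  d <- tryCatch(-solve(H, F_fb(x)), error = function(e) NULL)
  grad_theta <- as.vector(2 * t(H) %*% F_fb(x))            # gradient of the merit function
  if (is.null(d) || sum(grad_theta * d) > -rho * sum(d^2)^(p / 2))
    d <- -grad_theta                                       # fall back to steepest descent
  t_k <- 1                                                 # Armijo-type backtracking on theta
  while (theta(x + t_k * d) > theta(x) + gamma * t_k * sum(grad_theta * d)) t_k <- t_k / 2
  x <- x + t_k * d
}
x   # approaches the LCP solution (1, 1)
```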

4.3 Algorithms for VIPs

A Bibliography

References

Boggs, P. T. & Tolle, J. W. (1996), Sequential quadratic programming, Acta Numerica.
Bonnans, J. F., Gilbert, J. C., Lemaréchal, C. & Sagastizábal, C. A. (2006), Numerical Optimization: Theoretical and Practical Aspects, second edition, Springer-Verlag.
Conte, S. D. & de Boor, C. (1980), Elementary Numerical Analysis: An Algorithmic Approach, International Series in Pure and Applied Mathematics.
Dennis, J. E. & Schnabel, R. B. (1996), Numerical Methods for Unconstrained Optimization and Nonlinear Equations, SIAM.
Facchinei, F. & Pang, J.-S. (2003a), Finite-Dimensional Variational Inequalities and Complementarity Problems, Volume I, Springer-Verlag New York, Inc.
Facchinei, F. & Pang, J.-S. (2003b), Finite-Dimensional Variational Inequalities and Complementarity Problems, Volume II, Springer-Verlag New York, Inc.
Fletcher, R. (2001), On the Barzilai-Borwein method, Numerical Analysis Report.
Grantham, W. J. (2005), Gradient transformation trajectory following algorithms for determining stationary min-max saddle points, working paper.
Lange, K. (1994), Optimization, Springer-Verlag.
Madsen, K., Nielsen, H. B. & Tingleff, O. (2004), Optimization with Constraints, lecture notes available on the Internet.
Raydan, M. & Svaiter, B. F. (2001), Relaxed steepest descent and Cauchy-Barzilai-Borwein method, preprint.
Roland, C., Varadhan, R. & Frangakis, C. E. (2007), Squared polynomial extrapolation methods with cycling: an application to the positron emission tomography problem, Numerical Algorithms 44(2).
Varadhan, R. (2004), Squared extrapolation methods (SQUAREM): a new class of simple and efficient numerical schemes for accelerating the convergence of the EM algorithm, working paper.
Ye, Y. (1996), Interior-point algorithm: theory and practice, online lecture notes.

B Websites

- Applied mathematics:
- Decision tree for optimization software:
- Optimization online:
- Interior-point algorithms: helmberg/sdp ip.html
