A NOTE ON A GLOBALLY CONVERGENT NEWTON METHOD FOR SOLVING MONOTONE VARIATIONAL INEQUALITIES

Patrice MARCOTTE, Jean-Pierre DUSSAULT

Résumé. It is well known that Newton's method, when applied to a strongly monotone variational inequality problem, converges locally to the solution of the inequality, and that the order of convergence is quadratic. In this paper we show that the Newton direction is a descent direction for a nondifferentiable and nonconvex objective, even in the absence of the strong monotonicity assumption. This result makes it possible to modify the method so as to render it globally convergent. Moreover, under the strong monotonicity assumption, the two methods are locally equivalent; it follows that the modified method inherits the convergence properties of Newton's method: implicit identification of the constraints active at the solution (under the strict complementarity assumption) and quadratic order of convergence.
A NOTE ON A GLOBALLY CONVERGENT NEWTON METHOD FOR SOLVING MONOTONE VARIATIONAL INEQUALITIES

Patrice MARCOTTE (*), Jean-Pierre DUSSAULT (**)

(*) Collège Militaire Royal de Saint-Jean, Saint-Jean-sur-Richelieu, Québec, Canada J0J 1R0, and GERAD, École des Hautes Études Commerciales, Montréal, Québec, Canada H3T 1V6

(**) Département de Mathématiques et Informatique, Université de Sherbrooke, Sherbrooke, Québec, Canada J1K 2R1

Abstract. It is well known (see Pang and Chan [7]) that Newton's method, applied to strongly monotone variational inequalities, is locally and quadratically convergent. In this paper we show that Newton's method yields a descent direction for a nonconvex, nondifferentiable merit function, even in the absence of strong monotonicity. This result is then used to modify Newton's method into a globally convergent algorithm by introducing a linesearch strategy. Furthermore, under strong monotonicity, (i) the optimal face is attained after a finite number of iterations and (ii) the stepsize is eventually fixed to the value one, resulting in the usual Newton step. Computational results are presented.

Keywords. Mathematical Programming. Variational Inequalities. Newton's method.

Research supported by NSERC grants A5789 and A
1. Problem formulation and basic definitions.

Let Ω be a nonempty, convex and compact subset of R^n. Consider the variational inequality problem consisting in finding x* in Ω such that:

$(x^* - x)^t F(x^*) \le 0 \qquad \forall x \in \Omega$    (VIP)

where F is a continuously differentiable, monotone mapping from Ω into R^n:

$(x - y)^t (F(x) - F(y)) \ge 0 \qquad \forall x, y \in \Omega$    (1)

and the compactness assumption ensures that the variational inequality possesses at least one solution.

To solve VIP, Newton's method generates a sequence {x^k}, where x^1 is any feasible point in Ω and x^{k+1} is the solution to the variational inequality problem obtained by linearizing F around the previous iterate x^k, i.e.:

$(x^{k+1} - x)^t (F(x^k) + F'(x^k)(x^{k+1} - x^k)) \le 0 \qquad \forall x \in \Omega$    (LVIP(x^k))

In the above expression, F'(x^k) denotes the (not necessarily symmetric) Jacobian matrix of F evaluated at x^k. In order that Newton's method be efficient, it is clear that the linearized problem LVIP(x^k) must be easier to solve than the original VIP. This might be the case if Ω possesses some simple (e.g. polyhedral) structure for which a finitely convergent algorithm is available.

In the remainder of the paper, the following characterizations of ordinary, strict and strong monotonicity will be used:

Monotonicity: F is monotone on Ω if:

$(x - y)^t (F(x) - F(y)) \ge 0 \qquad \forall x, y \in \Omega$    (2)

Strict monotonicity: F is strictly monotone on Ω if:

$(x - y)^t (F(x) - F(y)) > 0 \qquad \forall x, y \in \Omega, \; x \ne y$    (3)

Strong monotonicity: F is strongly monotone on Ω if there exists a positive constant α such that:

$(x - y)^t (F(x) - F(y)) \ge \alpha \|x - y\|^2 \qquad \forall x, y \in \Omega$    (4)

Finally, let us define some quantities associated with VIP:

Definition 1. $\Gamma(x) := \arg\min_{y \in \Omega} \; y^t F(x)$

Definition 2. The gap function associated with VIP is defined as:

$g(x) := \max_{y \in \Omega} (x - y)^t F(x) = (x - y)^t F(x)$ for any $y \in \Gamma(x)$
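When Ω is polyhedral, the gap function of Definition 2 can be evaluated by solving a single linear program, since maximizing (x − y)^t F(x) over y ∈ Ω amounts to minimizing y^t F(x) over Ω. The following Python sketch illustrates this for a set of the form Ω = {y : A_ub y ≤ b_ub}; the function name, the representation of Ω and the use of scipy.optimize.linprog are assumptions made for illustration, not part of the paper.

```python
import numpy as np
from scipy.optimize import linprog

def gap(x, F, A_ub, b_ub):
    """g(x) = max_{y in Omega} (x - y)^T F(x) over the polyhedron
    Omega = {y : A_ub y <= b_ub}, assumed bounded so the LP is finite."""
    Fx = F(x)
    # Maximizing (x - y)^T F(x) in y is the same as minimizing y^T F(x).
    res = linprog(c=Fx, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * len(x))
    y = res.x  # an element of Gamma(x) (Definition 1)
    return float((x - y) @ Fx)
```

The returned value is nonnegative and equals zero exactly when x solves VIP, which is the stopping test used by the algorithm of the next section.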
It is clear that x* is a solution to VIP if and only if g(x*) = 0. The gap function, though in general nondifferentiable and nonconvex, can be driven to zero in a monotone fashion by specialized algorithms (Marcotte [5], Marcotte and Dussault [6]). The term "gap function" has been used by Hearn [3] to denote the same function, although in an optimization framework, i.e. when F is the gradient of some convex function f.

Definition 3. If Ω is polyhedral and the solution set is a singleton, we say that strict complementarity holds at the solution x* if, for x in Ω, (x − x*)^t F(x*) = 0 implies that x lies in the optimal face T*, i.e. the minimal face of Ω containing x*.

2. A globally convergent Newton algorithm.

In this section we present algorithm GNEW, obtained by incorporating a linesearch into the basic method.

ALGORITHM GNEW

Let x^1 ∈ Ω, k ← 1
while convergence criterion not met do
  1. FIND DESCENT DIRECTION
     Let x̄ ∈ Ω satisfy LVIP(x^k), i.e.:
       $(\bar{x} - x)^t (F(x^k) + F'(x^k)(\bar{x} - x^k)) \le 0 \qquad \forall x \in \Omega$    (5)
     Set d^k ← x̄ − x^k
  2. LINE SEARCH
     if g(x^k + d^k) ≤ .5 g(x^k) then θ_k ← 1
     else let θ_k ∈ arg min_{θ ∈ [0,1]} g(x^k + θ d^k)
     endif
     Set x^{k+1} ← x^k + θ_k d^k
     k ← k + 1
endwhile

Remark. At step 2 (linesearch) of algorithm GNEW, the constant .5 could be replaced by any positive number strictly less than 1. Also, inexact linesearch techniques such as Armijo-Goldstein can be implemented.

Lemma. If x^k is not a solution to VIP, then the direction d^k generated by GNEW is a feasible descent direction for g at x^k.

Proof. We have, by Danskin's rule of differentiation of max-functions (see Danskin [2]):

$g'(x; d) = \max_{y \in \Gamma(x)} d^t \, \nabla_x \{(x - y)^t F(x)\}$
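A minimal Python sketch of the GNEW loop as stated above, assuming a routine solve_lvip(x) that returns a solution x̄ of LVIP(x) (e.g. via Lemke's method when Ω is polyhedral) and a routine gap(x) returning the scalar g(x) (such as the LP sketch given earlier); the coarse grid search over [0, 1] stands in for the exact linesearch of step 2, which the remark above notes can also be replaced by an Armijo-Goldstein rule.

```python
import numpy as np

def gnew(x, gap, solve_lvip, tol=1e-8, max_iter=100):
    """Globally convergent Newton method (algorithm GNEW), sketched.
    gap(x)        -- evaluates the merit function g at x
    solve_lvip(x) -- returns a solution of the linearized problem LVIP(x)
    """
    for _ in range(max_iter):
        if gap(x) <= tol:                    # convergence criterion
            return x
        x_bar = solve_lvip(x)                # step 1: find descent direction
        d = x_bar - x
        if gap(x_bar) <= 0.5 * gap(x):       # step 2: accept the full step...
            theta = 1.0
        else:                                # ...or search g over [0, 1]
            grid = np.linspace(0.0, 1.0, 21)
            theta = min(grid, key=lambda t: gap(x + t * d))
        x = x + theta * d
    return x
```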
Therefore:

$g'(x^k; d^k) = \max_{y \in \Gamma(x^k)} (\bar{x} - x^k)^t (F(x^k) + F'(x^k)^t (x^k - y))$
$= (\bar{x} - x^k)^t F(x^k) + \max_{y \in \Gamma(x^k)} (x^k - y)^t F'(x^k)(\bar{x} - x^k)$
$= (\bar{x} - x^k)^t F'(x^k)(x^k - \bar{x}) + (\bar{x} - x^k)^t (F'(x^k)\bar{x} + F(x^k) - F'(x^k)x^k)$
$\quad + \max_{y \in \Gamma(x^k)} [(y - x^k)^t F(x^k) + (x^k - y)^t (F'(x^k)\bar{x} + F(x^k) - F'(x^k)x^k)]$
$< \max_{y \in \Gamma(x^k)} (\bar{x} - y)^t (F(x^k) + F'(x^k)(\bar{x} - x^k))$

since the first term is nonpositive by monotonicity of F, and the third term is strictly negative, since x^k is not a solution of the variational inequality. The term on the last line has been obtained by adding the second and fourth terms. Hence:

$g'(x^k; d^k) < \max_{y \in \Gamma(x^k)} (\bar{x} - y)^t (F(x^k) + F'(x^k)(\bar{x} - x^k)) \le 0$

since x̄ is a solution to the linearized problem. QED

Theorem 1. (GLOBAL CONVERGENCE) Let {x^k} be a sequence generated by algorithm GNEW. Then lim_{k→∞} g(x^k) = 0 and the limit point of any convergent subsequence is a solution to VIP.

Proof. Following Luenberger ([4], section 6.5), global convergence will be obtained if the point-to-set mapping

x^k → D(x^k) = {d^k = x̄ − x^k with x̄ solution to LVIP(x^k)}

is a closed mapping. Let {z^n} be a convergent sequence of points in Ω and z its limit point. Let also {d^n} be a sequence converging to d and satisfying d^n ∈ D(z^n). Write d^n = w^n − z^n and d = w − z, where w^n is a solution to LVIP(z^n), i.e.:

$(w^n - x)^t (F(z^n) + F'(z^n)(w^n - z^n)) \le 0 \qquad \forall x \in \Omega$    (8)

Taking limits on both sides of (8), and from the continuity of F and F', we obtain:

$(w - x)^t (F(z) + F'(z)(w - z)) \le 0 \qquad \forall x \in \Omega$    (9)

i.e. w is a solution to LVIP(z). Thus d ∈ D(z) and the mapping D is closed. The continuity of g then ensures that the limit of any convergent subsequence (by compactness of Ω there exists at least one such subsequence) is a solution to VIP. QED
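The directional derivative used in the Lemma can also be checked numerically from the Danskin formula g'(x; d) = max over y ∈ Γ(x) of d^t (F(x) + F'(x)^t (x − y)). The helper below is an illustrative sketch that assumes Γ(x) is available as a finite list of points (its extreme points suffice, since the expression is linear in y); the function and argument names are not from the paper.

```python
import numpy as np

def directional_derivative(x, d, F, Fprime, gamma_points):
    """Danskin formula for the gap function:
        g'(x; d) = max_{y in Gamma(x)} d^T (F(x) + F'(x)^T (x - y)).
    gamma_points: finite list of points of Gamma(x); since the expression
    is linear in y, its extreme points are enough (often a single vertex)."""
    Fx, Jx = F(x), Fprime(x)
    return max(float(d @ (Fx + Jx.T @ (x - y))) for y in gamma_points)
```

For d^k = x̄ − x^k this value is negative whenever x^k is not a solution, and the finite-difference quotient (g(x + t d) − g(x)) / t approaches it as t decreases to 0, which gives a simple consistency test for an implementation.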
3. Local convergence results.

The next three results make precise the behaviour of the iterates generated in a neighborhood of the solution x*, when it is unique.

Theorem 2. If (i) F is strongly monotone on Ω with Lipschitz continuous Jacobian F', (ii) Ω is a polyhedron, and (iii) strict complementarity holds at the (unique) solution x*, then there exists an index K such that θ_k = 1 for all k ≥ K.

Proof. From the proof of Proposition 8 in Marcotte and Dussault [6] we have that ‖x − x*‖ = O(g(x)). From the quadratic convergence of Newton's method it then follows that g(x^{k+1}) ≤ c[g(x^k)]^2 for some constant c, whenever x^k lies in some neighborhood B(x*, ε) of x*. If K is chosen such that x^k ∈ B(x*, ε) and g(x^k) ≤ 1/(2c) for all k ≥ K, then g(x^{k+1}) ≤ c[g(x^k)]^2 ≤ .5 g(x^k) and θ_k will be set to 1 at step 2 of algorithm GNEW. QED

Corollary. Under the assumptions of Theorem 2, algorithm GNEW is quadratically convergent.

Proof. For k ≥ K, GNEW and Newton's method are equivalent. QED

Theorem 3. (IDENTIFICATION OF OPTIMAL FACE) If Ω is polyhedral, the solution x* is unique, and strict complementarity holds at the solution, then there exists an index L such that x^k lies in the optimal face T* whenever k ≥ L.

Proof. Assume the result does not hold. Then there exists a subsequence {x^k}, k ∈ I, and an extreme point y of the optimal face T^k associated with LVIP(x^{k−1}) such that y ∉ T*; in other "words":

$(x^k - y)^t (F(x^{k-1}) + F'(x^{k-1})(x^k - x^{k-1})) = 0 \qquad \forall k \in I$    (10)

but:

$(x^* - y)^t F(x^*) < 0$    (11)

Passing to the limit in (10) there comes:

$(x^* - y)^t F(x^*) = 0$

in contradiction with (11). QED

Remark. It is known (see Robinson [8]) that Newton's method does not require strong monotonicity for quadratic convergence. However, to get second-order convergence, invertibility of the Jacobian matrix restricted to the optimal face T* is usually assumed; since monotonicity of F on Ω is required for global convergence, this implies that the restriction of F to T* has to be strongly monotone in a neighborhood of x* (relative to T*). Since T* is not known a priori, the strong
monotonicity condition on all of Ω cannot be substantially weakened. Furthermore, in the proof of Theorem 2, the statement ‖x − x*‖ = O(g(x)) may fail to be valid if F is not strongly monotone in a neighborhood of the solution.

4. Numerical results.

The algorithm has been developed while investigating efficient methods for solving large-scale network equilibrium problems, using the restriction strategy of von Hohenbalken [9]. At each iteration, a variational inequality problem on the unit simplex is solved. For the restricted variational inequality, the gap function can be evaluated by inspecting its value at the extreme points defining the current restriction, rather than by solving a linear program over Ω. In our implementation, the linearized subproblems have been solved using Lemke's complementary pivot algorithm, while the linesearch followed Armijo's rule. (1)

The pseudo-random cost functions generated assumed the general form:

$F(x) = \rho (A - A^t)x + B^t B x + \gamma C(x) + b$

The entries of the matrices A and B are randomly generated from uniform variates; C(x) is a nonlinear diagonal mapping whose i-th component has the form:

$C_i(x) = \arctan(x_i)$

The constants ρ and γ can be used to vary the asymmetry level of the cost mapping and to increase the importance of the nonlinear term, making the problem more difficult to solve for Newton's method. The parameter b has been adjusted in order that the optimal solution be known a priori.

Five test problems of sizes 4, 5, 5, 15 and 25 have been solved. Data for the smallest problems are given in Tables 1 and 4. Numerical results are given in the remaining tables. As can readily be observed, a high value of the parameter γ can force Newton's method into an erratic behaviour (see Table 7), while small corrections (see Table 8) are sufficient to make the method convergent. In one instance (see Tables 2 and 3) a single stepsize of length less than one resulted in a reduction of the number of iterations from ten to five. Finally, it has been observed that both algorithms seem to be rather insensitive to variations in the asymmetry level of the cost mapping, which is monitored by the relative importance of the parameters ρ and γ.

(1) Armijo's stepsize rule usually requires that the objective be continuously differentiable. However, in our case, it can be proven that the directional derivative along Newton's direction varies continuously with x, owing to the differentiability of the cost mapping F. Hence it is unnecessary to use a more sophisticated linesearch strategy, such as the one proposed by Mifflin [10]. On our test examples, Armijo's rule performed very satisfactorily.
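A sketch of how test mappings of the above form could be generated in Python with NumPy; the uniform ranges, the seed handling, and the parameter names ρ (rho) and γ (gamma) follow the reconstruction given above and are illustrative assumptions rather than the authors' original implementation.

```python
import numpy as np

def make_test_mapping(n, rho=1.0, gamma=1.0, seed=0):
    """Pseudo-random cost mapping F(x) = rho*(A - A^T)x + B^T B x
    + gamma*arctan(x) + b.  The skew-symmetric part rho*(A - A^T) controls
    the asymmetry level, B^T B is symmetric positive semidefinite, and the
    componentwise arctan is a monotone nonlinear diagonal term, so F is
    monotone.  Returns F, its Jacobian, and b."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(-1.0, 1.0, (n, n))
    B = rng.uniform(-1.0, 1.0, (n, n))
    b = rng.uniform(-1.0, 1.0, n)

    def F(x):
        return rho * (A - A.T) @ x + B.T @ B @ x + gamma * np.arctan(x) + b

    def Fprime(x):
        # Jacobian: skew part + symmetric part + diag(1 / (1 + x_i^2))
        return rho * (A - A.T) + B.T @ B + gamma * np.diag(1.0 / (1.0 + x ** 2))

    return F, Fprime, b
```

Increasing gamma strengthens the nonlinear term, which is the regime in which the unmodified Newton method was observed above to behave erratically.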
References

[1] A. Auslender, Optimisation: méthodes numériques, Masson, Paris (1976).
[2] J.M. Danskin, "The theory of min-max, with applications", SIAM Journal on Applied Mathematics 14 (1966).
[3] D.W. Hearn, "The gap function of a convex program", Operations Research Letters 1 (1981).
[4] D.G. Luenberger, Introduction to Linear and Nonlinear Programming, Addison-Wesley, Reading, Mass. (1973).
[5] P. Marcotte, "A new algorithm for solving variational inequalities, with application to the traffic assignment problem", Mathematical Programming (1985).
[6] P. Marcotte and J.-P. Dussault, "A modified Newton method for solving variational inequalities", Proceedings of the 24th IEEE Conference on Decision and Control, Fort Lauderdale, December 1985.
[7] J.S. Pang and D. Chan, "Iterative methods for variational and complementarity problems", Mathematical Programming 24 (1982).
[8] S.M. Robinson, "Generalized equations", in Mathematical Programming: The State of the Art, Bachem, Grötschel and Korte, eds., Springer-Verlag, Berlin (1983).
[9] B. von Hohenbalken, "A finite algorithm to maximize certain pseudo-concave functions", Mathematical Programming 8 (1975).
[10] R. Mifflin, "A superlinearly convergent algorithm for one-dimensional minimization with convex functions", Mathematics of Operations Research 8 (1983).

Acknowledgments. We want to thank René Ferland for his careful programming of the test problems, and a referee for suggestions that helped improve the paper.