Applied Mathematical Sciences, Vol. 2, 2008, no. 42, 2095-2103

A Descent Algorithm for Solving Variational Inequality Problems

Zahira Kebaili and A. Krim Keraghel
Department of Mathematics, Sétif University, 19000 Sétif, Algeria
z kebaili@yahoo.fr, a keragheldz@yahoo.fr

Abstract

The variational inequality problem can be formulated as a differentiable optimization problem [5]. We propose a descent method for solving the equivalent optimization problem, inspired by the approach of Fukushima; we then give theoretical results and numerical experiments.

Mathematics Subject Classifications: 58E35, 90C51, 90C33, 65K05

Keywords: Variational inequality problem, Descent algorithms, Auslender's approach, Fukushima's methods

1 Introduction

We consider the problem of finding a point $x^* \in C$ such that

$$\langle F(x^*), x - x^* \rangle \geq 0 \quad \text{for all } x \in C, \qquad (1)$$

where $C$ is a nonempty closed convex subset of $\mathbb{R}^n$, $F$ is a continuously differentiable mapping from $\mathbb{R}^n$ into itself, and $\langle \cdot, \cdot \rangle$ denotes the inner product in $\mathbb{R}^n$. This problem is called the variational inequality problem.

The relation between this problem and optimization goes back to the seventies; at that time the main difficulty with the optimization problems obtained was the non-differentiability of the objective function. In the nineties, Fukushima and others established the equivalence between the variational inequality and a differentiable optimization problem, with sufficient conditions for global optimality. However, the resulting algorithms remain insufficient for a suitable treatment: they rely on restrictive assumptions and exhibit slow convergence, which is sometimes difficult to establish.
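To make definition (1) concrete: when $C$ is a box, the left-hand side of (1) is linear in $x$, so checking (1) at a candidate point reduces to one componentwise minimization. A minimal numerical sketch; the names and the test data are illustrative, not from the paper:

```python
import numpy as np

def is_vi_solution(F, x_star, a, b, tol=1e-10):
    """Check <F(x*), x - x*> >= 0 for all x in the box [a, b].
    The left-hand side is linear in x, so its minimum over the box is
    attained at the componentwise minimizer of <F(x*), x>."""
    g = F(x_star)
    x_min = np.where(g > 0, a, b)      # minimizes <g, x> over the box
    return g @ (x_min - x_star) >= -tol

# toy affine operator F(x) = Mx + q with positive definite (asymmetric) M
M = np.array([[2.0, 1.0], [-1.0, 2.0]])
q = np.array([1.0, 1.0])
F = lambda x: M @ x + q
a, b = np.zeros(2), np.ones(2)
print(is_vi_solution(F, np.zeros(2), a, b))   # True: F(0) = q >= 0
```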

In our study, we introduce some algorithmic modifications leading to an interesting implementation.

The paper is organized as follows. In Section 2, we introduce some definitions and the basic results concerning the connection between problem (1) and optimization. Section 3 is devoted to the description of Fukushima's method and the presentation of its related algorithm. In Section 4, we apply the algorithm to some variational inequality problems [4]. Finally, we give in Section 5 a conclusion and some remarks.

2 Preliminaries

Throughout, $\mathbb{R}^n$ denotes the space of $n$-dimensional vectors. Let $\|\cdot\|_G$ denote the norm on $\mathbb{R}^n$ defined by $\|x\|_G = \langle x, Gx \rangle^{1/2}$, where $G$ is a symmetric positive definite matrix. The projection of a point $x$ onto the closed convex set $C$, denoted $\mathrm{proj}_{C,G}(x)$, is defined as the (unique) solution of the problem

$$\min_{y \in C} \|y - x\|_G.$$

For a vector-valued mapping $F$ from $\mathbb{R}^n$ into itself we write $F(x) = (F_1(x), F_2(x), \ldots, F_n(x))^T$, $x \in \mathbb{R}^n$.

Definition 1. Let $F : \mathbb{R}^n \to \mathbb{R}^n$ be a mapping.
$F$ is monotone on $\mathbb{R}^n$ if, for all $x, y \in \mathbb{R}^n$, $(x-y)^T (F(x) - F(y)) \geq 0$.
$F$ is strictly monotone if, for all $x, y \in \mathbb{R}^n$ with $x \neq y$, $(x-y)^T (F(x) - F(y)) > 0$.
$F$ is strongly monotone if there exists $c > 0$ such that, for all $x, y \in \mathbb{R}^n$, $(x-y)^T (F(x) - F(y)) \geq c \|x - y\|^2$.
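A minimal sketch of the $G$-norm and of $\mathrm{proj}_{C,G}$ in the simplest setting used later in the paper ($G$ diagonal, $C$ a box); for a diagonal $G$ the minimization defining the projection separates by coordinate, so the projection is a componentwise clip. Names and data are illustrative:

```python
import numpy as np

def g_norm(x, G):
    """||x||_G = <x, Gx>^(1/2), G symmetric positive definite."""
    return np.sqrt(x @ (G @ x))

def proj_box_G_diag(x, a, b):
    """proj_{C,G}(x) for C = prod_i [a_i, b_i] and G diagonal: the
    objective ||y - x||_G^2 = sum_i G_ii (y_i - x_i)^2 separates, so the
    minimizer clips each coordinate independently (and does not depend
    on the diagonal entries themselves)."""
    return np.clip(x, a, b)

G = np.diag([1.0, 2.0, 4.0])
x = np.array([1.5, -0.3, 0.7])
a, b = np.zeros(3), np.ones(3)
print(g_norm(x, G))                 # weighted norm of x
print(proj_box_G_diag(x, a, b))     # [1.0, 0.0, 0.7]
```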

2.1 Resolution of problem (1) by optimization techniques: the traditional approach

When $F$ is the gradient of a differentiable convex function $h : \mathbb{R}^n \to \mathbb{R}$, i.e. $F(x) = \nabla h(x)$, then (1) is nothing other than the necessary and sufficient optimality condition ($\langle \nabla h(x^*), x - x^* \rangle \geq 0$ for all $x \in C$) for the convex optimization problem

$$\min h(x) \quad \text{subject to } x \in C. \qquad (2)$$

Conversely, if $F$ is differentiable and the Jacobian matrix $\nabla F(x)$ is symmetric for all $x \in C$, then we can associate with (1) an equivalent differentiable optimization problem of type (2). The difficulty arises in the case of asymmetric problems, which are very frequent in practice.

Auslender [1] was among the first to propose an encouraging answer, but under restrictive hypotheses, particularly regarding the numerical aspect, with the help of the gap function

$$g(x) = \max_{y \in C} \langle F(x), x - y \rangle.$$

Thus (1) is equivalent to

$$\min g(x) \quad \text{subject to } x \in C. \qquad (3)$$

Obviously $g$ is in general not differentiable. Auslender showed that it is differentiable if $C$ is strongly convex, i.e. for all $x_1, x_2 \in C$ with $x_1 \neq x_2$ and all $\lambda \in \,]0,1[$, there exists $r > 0$ such that $B((1-\lambda)x_1 + \lambda x_2, r) \subset C$, a property difficult to obtain in practice. Thus we can say that the contribution of Auslender is interesting in theory but certainly not in practice: the assumption of strong convexity of $C$ is too restrictive, which encouraged researchers to develop other gap functions whose differentiability depends on $F$ and not on $C$.

Let us note, however, that the problem of associating with the asymmetric variational inequality problem (1) a practical differentiable optimization problem remained an open question for several years. Also, even if the work of Auslender had little numerical impact, it constitutes in our view a starting reference for all the later developments, in particular the method of Fukushima, which recovers, and mainly improves, the results obtained by Auslender, especially in theory. This method is the object of the following section.
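Before moving on, note that for a box $C$ the inner maximization defining $g$ solves in closed form, which gives a cheap way to evaluate Auslender's gap function. A hedged sketch with illustrative data:

```python
import numpy as np

def gap_auslender(F, x, a, b):
    """g(x) = max_{y in C} <F(x), x - y> for C = prod_i [a_i, b_i].
    The maximum pushes each y_i to a bound: y_i = a_i if F_i(x) > 0,
    else y_i = b_i."""
    Fx = F(x)
    y = np.where(Fx > 0, a, b)
    return Fx @ (x - y)

M = np.array([[2.0, 0.5], [-0.5, 2.0]])   # toy monotone affine map
q = np.array([-1.0, 0.5])
F = lambda x: M @ x + q
a, b = np.zeros(2), np.ones(2)
print(gap_auslender(F, np.array([0.5, 0.5]), a, b))  # >= 0 on C, = 0 at a solution
```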

3 Method of Fukushima

The principal idea consists in building a differentiable optimization problem equivalent to (1), which constitutes the interest of this method.

3.1 Description of the method

Let $F : \mathbb{R}^n \to \mathbb{R}^n$ be a differentiable mapping, let $C \subset \mathbb{R}^n$ be a nonempty closed convex set, let $x \in \mathbb{R}^n$ be temporarily fixed, and consider the following problem:

$$\max_{y \in C} \left[ \langle F(x), x - y \rangle - \frac{1}{2} \|y - x\|_G^2 \right]. \qquad (4)$$

Proposition 3.1 The problem (4) is equivalent to the following problem:

$$\min_{y \in C} \frac{1}{2} \left\| y - \left( x - G^{-1} F(x) \right) \right\|_G^2. \qquad (5)$$

Proof. See [4].

Let us note that the solution $y_x$ of problem (4) is nothing other than the orthogonal projection of the point $x - G^{-1}F(x)$ onto the set $C$ with respect to $\|\cdot\|_G$, whence the existence and uniqueness of $y_x$ (classical projection theorem). Alternatively, we can prove the existence and uniqueness of the solution of (5) by noticing that the objective is strongly convex in $y$ and $C$ is nonempty and closed; the result is then a classical consequence of Weierstrass's theorem. Thus for each $x \in \mathbb{R}^n$ there exists a single $y_x$, which makes it possible to define the mapping

$$H : \mathbb{R}^n \to \mathbb{R}^n, \quad x \mapsto y_x = H(x) = \mathrm{proj}_{C,G}\left( x - G^{-1} F(x) \right).$$

Proposition 3.2 $x^*$ is a solution of the variational inequality problem (1) if and only if $x^* = H(x^*)$.
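A sketch of $H$ and of the fixed-point test of Proposition 3.2 for diagonal $G$ and a box $C$ (the setting of the experiments below); all problem data are illustrative:

```python
import numpy as np

def H(x, F, G_diag, a, b):
    """H(x) = proj_{C,G}(x - G^{-1}F(x)); with G diagonal and C a box,
    the G-projection is the componentwise clip onto [a, b]."""
    return np.clip(x - F(x) / G_diag, a, b)

M = np.array([[3.0, 1.0], [-1.0, 3.0]])   # positive definite, asymmetric
q = np.array([-2.0, 1.0])
F = lambda x: M @ x + q
G_diag = np.ones(2)                        # G = I
a, b = np.zeros(2), 2.0 * np.ones(2)

x = np.array([1.0, 0.5])
print(np.linalg.norm(x - H(x, F, G_diag, a, b)))  # = 0 iff x solves (1) (Prop. 3.2)
```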

We now state the optimization problem equivalent to the variational inequality problem (1), namely the optimization problem of Fukushima [5].

3.2 Equivalent optimization problem

Consider the following optimization problem:

$$\min f(x) \quad \text{subject to } x \in C, \qquad (6)$$

where $f : \mathbb{R}^n \to \mathbb{R}$ is defined by

$$f(x) = -\langle F(x), H(x) - x \rangle - \frac{1}{2} \|H(x) - x\|_G^2.$$

This is the objective function of (4) in which $y$ is replaced by its optimal value $H(x)$; it corresponds exactly to the function of Auslender augmented by a quadratic term. The following theorem establishes the equivalence between problem (1) and problem (6).

Theorem 3.1 [5] Let $f$ be the function defined in (6). Then $f(x) \geq 0$ for all $x \in C$, and $f(x) = 0$ if and only if $x$ solves the variational inequality problem (1); hence $x$ solves (1) if and only if it is a global solution of (6) with optimal value zero.

Proof. Again see [5].

Remark. The function of Fukushima inherits the properties of the operator $F$; in particular we have:

1. $F$ continuous $\Rightarrow$ $f$ continuous.
2. $F \in C^1(\mathbb{R}^n)$ $\Rightarrow$ $f \in C^1(\mathbb{R}^n)$, and in particular

$$\nabla f(x) = F(x) - \left( \nabla F(x) - G \right)^T \left( H(x) - x \right), \qquad (7)$$

where $\nabla F(x)$ denotes the Jacobian matrix of $F$ at $x$.

This favours the practical aspect. On the other hand, contrary to Auslender's setting, the function $f$ is not necessarily convex, which causes difficulties if problem (6) has local minima or mere stationary points. These difficulties can be overcome with the help of the following result [5].

Theorem 3.2 [5] Assume that the mapping $F : \mathbb{R}^n \to \mathbb{R}^n$ is continuously differentiable and that the Jacobian $\nabla F(x)$ is positive definite for all $x \in C$. If $x^*$ is a stationary point of problem (6), i.e.

$$\langle \nabla f(x^*), x - x^* \rangle \geq 0 \quad \text{for all } x \in C,$$

then $x^*$ is a global optimal solution of (6), and hence $x^*$ is a solution of problem (1).

Proof. See [5].

3.3 Principle of the method

The method solves (1) by applying a descent method to (6), where the direction at $x$ is given by

$$d = H(x) - x.$$
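The quantities of this section assemble into a few lines of code. A sketch of $f$, of the gradient formula (7) for an affine $F$ (whose Jacobian is the constant matrix M below), and of the direction $d$, in the same illustrative box/diagonal-$G$ setting as above:

```python
import numpy as np

M = np.array([[3.0, 1.0], [-1.0, 3.0]])   # Jacobian of the affine F below
q = np.array([-2.0, 1.0])
G = np.eye(2)
a, b = np.zeros(2), 2.0 * np.ones(2)

F = lambda x: M @ x + q
H = lambda x: np.clip(x - np.linalg.solve(G, F(x)), a, b)

def f(x):
    """Regularized gap function f(x) = -<F(x), H(x)-x> - (1/2)||H(x)-x||_G^2."""
    d = H(x) - x
    return -F(x) @ d - 0.5 * d @ (G @ d)

def grad_f(x):
    """Formula (7): grad f(x) = F(x) - (nabla F(x) - G)^T (H(x) - x)."""
    d = H(x) - x
    return F(x) - (M - G).T @ d

x = np.array([1.0, 0.5])
d = H(x) - x                       # descent direction of Section 3.3
print(f(x), grad_f(x) @ d)         # f >= 0 on C; <grad f, d> < 0 off the solution
```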

Proposition 3.3 Assume that the mapping $F : \mathbb{R}^n \to \mathbb{R}^n$ is continuously differentiable and that the Jacobian $\nabla F(x)$ is positive definite for all $x \in C$. Then, for each $x \in C$ which is not a solution of (1), the vector $d = H(x) - x$ satisfies the descent condition

$$\langle \nabla f(x), d \rangle < 0.$$

Proposition 3.4 If $F$ is strongly monotone, then $d = H(x) - x$ is a descent direction.

We now describe the basic algorithm.

3.4 Basic algorithm

Step 0 (data and initialization): let $\varepsilon > 0$, let $G$ be an $n \times n$ symmetric positive definite matrix, choose $x^0 \in C$, and set $k := 0$.

Step 1: compute the direction

$$d^k = H(x^k) - x^k, \qquad H(x^k) = \mathrm{proj}_{C,G}\left(x^k - G^{-1} F(x^k)\right).$$

While $\|d^k\| > \varepsilon$ do:

Step 2: compute the steplength $t_k = \arg\min_{t \in [0,1]} f(x^k + t d^k)$.

Step 3: set $x^{k+1} = x^k + t_k d^k$, $k := k + 1$, and go to Step 1.

For the computation of the steplength $t_k$, Fukushima considers the following Armijo-type rule: given $\lambda > 0$ and $0 < \beta < 1$, take $t_k$ as the first value $\beta^m$ ($m = 0, 1, 2, \ldots$) satisfying

$$f(x^k + \beta^m d^k) - f(x^k) \leq -\lambda \beta^m \|d^k\|^2.$$

Theorem 3.3 (convergence) [5] Suppose that $C$ is compact and that the mapping $F : \mathbb{R}^n \to \mathbb{R}^n$ is differentiable and strongly monotone; suppose also that $F(x)$ and $\nabla F(x)$ are Lipschitz continuous on $C$, and let the sequence $\{x^k\}$ be generated by the iteration $x^{k+1} = x^k + t_k d^k$, $k = 0, 1, 2, \ldots$. Then, for any starting point $x^0 \in C$, $\{x^k\}$ lies in $C$ and converges to the unique solution of problem (1).

3.5 Difficulties of the algorithm [4]

The method of Fukushima involves two operations that are particularly difficult to carry out for an arbitrary convex set $C$: the initialization required to start the algorithm, and the computation of the projection that enters the computation of the direction.
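Specializing the projection to a box makes the whole algorithm a short program. A runnable sketch of the basic algorithm with the Armijo-type rule; the parameter values and the test problem are illustrative choices, not the paper's:

```python
import numpy as np

def descent_vi(F, a, b, x0, G_diag, eps=1e-4, lam=1e-3, beta=0.5, max_iter=500):
    """Basic algorithm of Section 3.4 for C = prod_i [a_i, b_i], G diagonal."""
    H = lambda z: np.clip(z - F(z) / G_diag, a, b)    # proj_{C,G}(z - G^{-1}F(z))
    def f(z):                                         # regularized gap function
        d = H(z) - z
        return -F(z) @ d - 0.5 * d @ (G_diag * d)
    x = x0.copy()
    for k in range(max_iter):
        d = H(x) - x                                  # Step 1: direction
        if np.linalg.norm(d) <= eps:                  # stopping test
            return x, k
        t, fx = 1.0, f(x)                             # Step 2: Armijo-type rule
        while f(x + t * d) - fx > -lam * t * np.linalg.norm(d) ** 2 and t > 1e-12:
            t *= beta
        x = x + t * d                                 # Step 3: update
    return x, max_iter

M = np.array([[4.0, 1.0], [-1.0, 4.0]])               # strongly monotone, asymmetric
q = np.array([-3.0, 1.0])
F = lambda x: M @ x + q
x_sol, iters = descent_vi(F, np.zeros(2), 2.0 * np.ones(2),
                          np.array([1.0, 1.0]), np.ones(2))
print(x_sol, iters)   # approximate solution of (1) and iteration count
```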

4 Implementation and experimental results

In this section we present some numerical experiments on different classes of problems given in [4]. Our algorithms were programmed in Pascal 7, with the tolerance $\varepsilon = 10^{-4}$. We display the following quantities: Iter is the number of iterations, Size is the dimension of the problem, and Problem denotes the test problem. The choice of the matrix $G$ used in the projection and of the steplength has a great influence on the performance of the algorithm. Here $G$ is chosen as a diagonal matrix (the easiest case). For the steplength we adopted two strategies: a constant step of fixed-point type, which is less expensive (Const step), and the variable step of the Armijo rule (Var step).

1. Nonlinear systems of equations

In this case $C = \mathbb{R}^n$, and the descent direction at $x^k$ is given by

$$d^k = -G^{-1} F(x^k).$$

Table 1
Problem [4] | Size | Iter (Const step) | Iter (Var step) | Time (Const step) | Time (Var step)
1           | 4    | 5                 | 22              | 0.27              | 0.39
2           | 4    | 35                | 37              | 0.24              | 0.33
3           | 20   | 41                | 29              | 0.38              | 0.38

2. Complementarity problems (including the minimization of a differentiable function on $\mathbb{R}^n_+$)

If $C = \mathbb{R}^n_+$, then problem (1) reads: find $x \in \mathbb{R}^n_+$ such that $F(x) \geq 0$ and $x^T F(x) = 0$; it is called the complementarity problem. The descent direction at $x^k$ is then given componentwise ($G$ being diagonal) by

$$d_i^k = \begin{cases} -x_i^k & \text{if } x_i^k - \left(G^{-1} F(x^k)\right)_i \leq 0, \\ -\left(G^{-1} F(x^k)\right)_i & \text{otherwise.} \end{cases}$$
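In code, this case needs no projection routine at all. A sketch of the componentwise direction (toy values; the hypercube case below is identical with clips at $a_i$ and $b_i$ instead of $0$):

```python
import numpy as np

def direction_ncp(x, Fx, G_diag):
    """d = H(x) - x for C = R^n_+ with G diagonal, i.e.
    H(x) = max(0, x - G^{-1}F(x)) componentwise:
       d_i = -x_i              if x_i - (G^{-1}F(x))_i <= 0,
       d_i = -(G^{-1}F(x))_i   otherwise."""
    step = x - Fx / G_diag
    return np.where(step <= 0, -x, -Fx / G_diag)

x = np.array([0.5, 1.0])
Fx = np.array([2.0, -1.0])                # value of F at x (toy numbers)
print(direction_ncp(x, Fx, np.ones(2)))   # [-0.5, 1.0]
```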

Table 2
Problem | Size | Iter (Const step) | Iter (Var step) | Time (Const step) | Time (Var step)
1       | 3    | 15                | 27              | 0.22              | 0.24
2       | 4    | 34                | 36              | 0.22              | 0.33
3       | 6    | 6                 | 38              | 0.22              | 0.32
4       | 9    | 8                 | 8               | 0.38              | 0.54
5       | 20   | 19                | 47              | 0.33              | 0.44

3. Variational inequalities on a hypercube

When we take $C = \prod_{i=1}^n [a_i, b_i]$ (a hypercube), the descent direction at $x^k$ is computed componentwise as

$$d_i^k = \begin{cases} a_i - x_i^k & \text{if } x_i^k - \left(G^{-1} F(x^k)\right)_i \leq a_i, \\ -\left(G^{-1} F(x^k)\right)_i & \text{if } a_i < x_i^k - \left(G^{-1} F(x^k)\right)_i < b_i, \\ b_i - x_i^k & \text{if } x_i^k - \left(G^{-1} F(x^k)\right)_i \geq b_i. \end{cases}$$

Table 3
Problem [4] | Size | Iter (Const step) | Iter (Var step) | Time (Const step) | Time (Var step)
1           | 4    | 9                 | 29              | 0.22              | 0.25
2           | 4    | 141               | 47              | 0.33              | 0.39
3           | 9    | 19                | 40              | 0.38              | 0.39
4           | 20   | 39                | 40              | 0.39              | 0.43

5 General comments and conclusion

Optimization techniques constitute a suitable tool for solving the variational inequality problem, and they have the necessary ingredients for further progress on the numerical side.

References

[1] A. Auslender, Optimisation: Méthodes Numériques, Masson, Paris, 1976.

[2] R.S. Bipasha, K. Yamada, N. Yamashita and M. Fukushima, A linearly convergent descent method for strongly monotone variational inequality problems, Journal of Nonlinear and Convex Analysis, 4 (2003), 15-23.

[3] F. Facchinei, A. Fischer and C. Kanzow, Regularity properties of a semismooth reformulation of variational inequalities, SIAM Journal on Optimization, 8 (1998), no. 3, 850-869.

[4] F. Facchinei and J.-S. Pang, Finite-Dimensional Variational Inequalities and Complementarity Problems, Vols. I and II, Springer-Verlag, New York, 2003.

[5] M. Fukushima, Equivalent differentiable optimization problems and descent methods for asymmetric variational inequality problems, Mathematical Programming, 53 (1992), 99-110.

[6] C. Kanzow and M. Fukushima, Theoretical and numerical investigation of the D-gap function for box constrained variational inequalities, Mathematical Programming, 83 (1998), 55-87.

[7] Z. Kebaili, Résolution du problème d'inégalités variationnelles par les techniques d'optimisation, Thèse de Magister, Université Mohamed Boudiaf, M'sila, February 2000.

[8] A. Keraghel, Étude adaptative des principales variantes dans l'algorithme de Karmarkar, Thèse de Doctorat, Université Joseph Fourier, Grenoble, July 1989.

[9] J.-S. Pang and M. Fukushima, Quasi-variational inequalities, generalized Nash equilibria and multi-leader-follower games, Computational Management Science (2005), 21-56.

[10] K. Yamada, N. Yamashita and M. Fukushima, A new derivative-free descent method for the nonlinear complementarity problem, in: Nonlinear Optimization and Related Topics, G. Di Pillo and F. Giannessi, eds., Kluwer Academic Publishers, 2000, 463-487.

Received: January 15, 2008