Primal-Dual Bilinear Programming Solution of the Absolute Value Equation

Olvi L. Mangasarian

Abstract. We propose a finitely terminating primal-dual bilinear programming algorithm for the solution of the NP-hard absolute value equation (AVE): Ax − |x| = b, where A is an n × n square matrix. The algorithm, which makes no assumptions on the AVE other than solvability, consists of a finite number of linear programs terminating at a solution of the AVE or at a stationary point of the bilinear program. The proposed algorithm was tested on 500 consecutively generated random instances of the AVE with n = 10, 50, 100, 500 and 1,000. The algorithm solved 88.6% of the test problems to an accuracy of 1e−6.

Keywords: absolute value equation, bilinear programming, linear programming

1 INTRODUCTION

We consider the absolute value equation (AVE):

Ax − |x| = b, (1.1)

where A ∈ R^{n×n} and b ∈ R^n are given, and |·| denotes the componentwise absolute value. A slightly more general form of the AVE, Ax + B|x| = b, was introduced in [14] and investigated in a more general context in [9]. The AVE (1.1) was investigated in detail theoretically in [11], and a bilinear program in the primal space of the problem was prescribed there for the special case when the singular values of A are not less than one. No computational results were given in either [11] or [9]. In contrast, computational results were given in [8] for a linear-programming-based successive linearization algorithm utilizing a concave minimization model. As was shown in [11], the general NP-hard linear complementarity problem (LCP) [3, 4, 2], which subsumes many mathematical programming problems, can be formulated as an AVE (1.1). This implies that (1.1) is NP-hard in its general form. More recently, a generalized Newton method was proposed for solving the AVE [10], a uniqueness result for the AVE was presented in [15] and for a more general version of the AVE in [16], and existence and convexity results were given in [6].
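To make (1.1) concrete, the following sketch generates a solvable random AVE instance in the manner the paper describes in Section 4 (entries of A uniform on [−5, 5], a planted solution with components on [−0.5, 0.5]) and checks the AVE residual at the planted solution. The dimension and seed are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)  # illustrative seed
n = 10

# Random A with entries uniform on [-5, 5], planted solution xbar on
# [-0.5, 0.5], and right-hand side b = A xbar - |xbar|, as in Section 4.
A = rng.uniform(-5.0, 5.0, size=(n, n))
xbar = rng.uniform(-0.5, 0.5, size=n)
b = A @ xbar - np.abs(xbar)

# The AVE residual ||Ax - |x| - b|| vanishes at the planted solution.
residual = np.linalg.norm(A @ xbar - np.abs(xbar) - b)
print(residual)  # 0.0 up to floating point
```

By construction such an instance is always solvable, which is the only assumption the algorithm of Section 3 requires.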
Our point of departure here is to look at the AVE in its primal and dual spaces and to formulate an algorithm that minimizes a bilinear function (that is, the scalar product of two linear functions) in the combined primal-dual space; this bilinear function has a global minimum of zero that yields an exact solution of the AVE. In Section 2 we describe our bilinear formulation of the AVE and show that a zero minimum of the bilinear program yields a solution to the AVE. In Section 3 we state our algorithm for the bilinear program, consisting of a succession of linear programs that terminate at a global solution of the AVE or at a stationary point of the bilinear program. In Section 4 we give computational results that show the effectiveness of our approach by solving 88.6% of a sequence of 500 randomly generated consecutive AVEs in R^10 to R^1,000 to an accuracy of 1e−6. Section 5 concludes the paper.

(Author affiliation: Computer Sciences Department, University of Wisconsin, Madison, WI 53706 and Department of Mathematics, University of California at San Diego, La Jolla, CA 92093. olvi@cs.wisc.edu.)

We describe our notation now. All vectors will be column vectors unless transposed to a row vector by a prime. For a vector x ∈ R^n the notation x_j will signify the j-th component. The scalar (inner) product of two vectors x and y in the n-dimensional real space R^n will be denoted by x'y. For x ∈ R^n, ‖x‖ denotes the 2-norm: (Σ_{i=1}^n (x_i)^2)^{1/2}. The notation A ∈ R^{m×n} will signify a real m × n matrix. For such a matrix, A' will denote the transpose of A. A vector of ones in a real space of arbitrary dimension will be denoted by e. Thus for e ∈ R^m and y ∈ R^m the notation e'y will denote the sum of the components of y. A vector of zeros in a real space of arbitrary dimension will be denoted by 0. The abbreviation "s.t." stands for "subject to".

2 Bilinear Formulation of the Absolute Value Equation

We begin with the linear program:

min_{x,y} h'y  s.t.  Ax − y = b,  x + y ≥ 0,  −x + y ≥ 0, (2.2)

and its dual:

max_{u,v,w} b'u  s.t.  A'u + v − w = 0,  −u + v + w = h,  (v, w) ≥ 0, (2.3)

where h is some vector in R^n that will play a key role in our bilinear programming formulation. We now state the following simple lemma.

Lemma 2.1. Let (x, y) be a solution of the primal problem (2.2) and (u, v, w) a solution of the corresponding dual problem (2.3). Then:

v + w > 0  ⟹  Ax − |x| = b. (2.4)

Proof. From the complementarity condition we have that:

v'(x + y) + w'(−x + y) = 0. (2.5)

Hence, if v + w > 0, for each i = 1, ..., n either (x + y)_i = 0 or (−x + y)_i = 0, that is, y_i = −x_i or y_i = x_i; since the constraints also give y ≥ |x|, in either case y_i = |x_i|. Hence y = |x|, and from the constraint Ax − y = b it follows that Ax − |x| = b. □

Based on this lemma it follows that for a primal-dual optimal solution (x, y, u, v, w), if v + w ≥ εe for a positive ε, then Ax − |x| = b.
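The primal-dual pair (2.2)-(2.3) and the complementarity identity (2.5) can be checked numerically on a small solvable instance. The sketch below uses `scipy.optimize.linprog` (an assumption; the paper itself uses CPLEX) with the illustrative choice h = e, for which the primal objective e'y is bounded below by zero since feasibility forces y ≥ |x|.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)  # illustrative seed
n = 5
A = rng.uniform(-5.0, 5.0, size=(n, n))
xbar = rng.uniform(-0.5, 0.5, size=n)
b = A @ xbar - np.abs(xbar)  # solvable instance, so (2.2) is feasible
h = np.ones(n)               # illustrative choice of h
I = np.eye(n)

# Primal (2.2): min h'y s.t. Ax - y = b, x + y >= 0, -x + y >= 0,
# over z = (x, y); linprog wants A_ub z <= b_ub, so negate the >= rows.
c_p = np.concatenate([np.zeros(n), h])
A_eq = np.hstack([A, -I])
A_ub = np.vstack([np.hstack([-I, -I]), np.hstack([I, -I])])
p = linprog(c_p, A_ub=A_ub, b_ub=np.zeros(2 * n), A_eq=A_eq, b_eq=b,
            bounds=(None, None), method="highs")

# Dual (2.3): max b'u s.t. A'u + v - w = 0, -u + v + w = h, (v, w) >= 0,
# over (u, v, w), stated as minimization of -b'u.
c_d = np.concatenate([-b, np.zeros(2 * n)])
D_eq = np.vstack([np.hstack([A.T, I, -I]), np.hstack([-I, I, I])])
d_eq = np.concatenate([np.zeros(n), h])
d = linprog(c_d, A_eq=D_eq, b_eq=d_eq,
            bounds=[(None, None)] * n + [(0, None)] * (2 * n),
            method="highs")

x, y = p.x[:n], p.x[n:]
u, v, w = d.x[:n], d.x[n:2 * n], d.x[2 * n:]

# Strong duality: h'y = b'u; complementarity (2.5): v'(x+y) + w'(-x+y) = 0.
print(abs(h @ y - b @ u))               # ~0 up to solver tolerance
print(abs(v @ (x + y) + w @ (-x + y)))  # ~0 up to solver tolerance
```

Complementary slackness holds between any optimal primal solution and any optimal dual solution, so the two independently computed solutions satisfy (2.5) even though the solver returns only one optimum per problem.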
Furthermore, from the dual constraints we have that h = −u + v + w, and hence the difference between the primal and dual objective functions evaluated at a primal-dual feasible point becomes:

h'y − b'u = (−u + v + w)'y − b'u ≥ 0, (2.6)

where the inequality in (2.6) follows from the fact that at a primal-dual feasible point the primal objective function exceeds or equals the dual objective function. At a primal-dual optimal point this difference is zero. Hence, combining these statements with Lemma 2.1 and the extra imposed condition that v + w ≥ εe, we have the following proposition.

Proposition 2.2. Equivalence of AVE and Zero Minimum of the Bilinear Program. At a zero minimum of the following bilinear program:

min_{x,y,u,v,w}  y'(−u + v + w) − b'u
s.t.  Ax − y = b,
      x + y ≥ 0,
      −x + y ≥ 0,
      A'u + v − w = 0,
      v + w ≥ εe,
      (v, w) ≥ 0, (2.7)

we have that y = |x| and Ax − |x| = b for any solution point (x, y, u, v, w).

We now establish the existence of a zero-minimum solution to the bilinear program (2.7) under the assumption that the AVE (1.1) is solvable.

Proposition 2.3. Existence of a Zero-Minimum Solution to the Bilinear Program. Under the assumption that the absolute value equation (1.1) is solvable, the bilinear program (2.7) has a zero-minimum solution (x, y, u, v, w) such that x solves the absolute value equation (1.1).

Proof. Since the AVE (1.1) has a solution, say x̄, the feasible region of the bilinear program (2.7) is nonempty because the point (x̄, y = |x̄|, u = 0, v = w = εe/2) satisfies the constraints of (2.7). Hence the quadratic bilinear objective function of (2.7), which by Proposition 2.2 is bounded below by zero, must by [5] attain a solution. Since by Proposition 2.2 a zero-minimum solution solves the AVE, and the AVE is solvable by assumption, such a zero-minimum solution exists that solves the AVE. □

We now present a computational algorithm for solving the bilinear program (2.7) that consists of solving a finite number of linear programs.

3 Bilinear Programming Algorithm for the Absolute Value Equation

We begin by stating our bilinear algorithm as follows.

Algorithm 3.1. Choose a parameter value ε for the constraint of (2.7) (typically ε = 1e−2), a tolerance tol (typically tol = 1e−6), and a maximum number of iterations itmax (typically itmax = 40).

(I) Initialize the algorithm by determining an initial (x^0, y^0) by solving the following linear program:

min_{x,y}  e'y  s.t.  Ax − y = b,  x + y ≥ 0,  −x + y ≥ 0. (3.8)

Set iteration number i = 0.

(II) While ‖Ax^i − |x^i| − b‖ > tol, the bilinear objective function of (2.7) is decreasing, and i ≤ itmax, perform the following three steps.

(III) Solve the following linear program for (u^{i+1}, v^{i+1}, w^{i+1}):

min_{u,v,w}  (y^i)'(−u + v + w) − b'u  s.t.  A'u + v − w = 0,  v + w ≥ εe,  (v, w) ≥ 0. (3.9)

(IV) Solve the following linear program for (x^{i+1}, y^{i+1}):

min_{x,y}  (−u^{i+1} + v^{i+1} + w^{i+1})'y  s.t.  Ax − y = b,  x + y ≥ 0,  −x + y ≥ 0. (3.10)

(V) Set i = i + 1. Go to Step (II).

We now establish finite termination of our bilinear algorithm.

Proposition 3.2. Finite Termination of the Bilinear Algorithm. Under the assumption that the absolute value equation (1.1) is solvable and the maximum number of iterations itmax is sufficiently large, the Bilinear Algorithm 3.1 terminates in a finite number of iterations at a global zero-minimum point that solves the absolute value equation (1.1), or at an iteration i with a solution (x^{i+1}, y^{i+1}, u^{i+1}, v^{i+1}, w^{i+1}) that satisfies the following minimum principle necessary optimality condition for the bilinear program (2.7):

(−u^{i+1} + v^{i+1} + w^{i+1})'(y − y^{i+1}) − (y^{i+1} + b)'(u − u^{i+1}) + (y^{i+1})'(v − v^{i+1}) + (y^{i+1})'(w − w^{i+1}) ≥ 0,  ∀(x, y) ∈ X, (u, v, w) ∈ U, (3.11)

where

X = {(x, y) | Ax − y = b, x + y ≥ 0, −x + y ≥ 0}, (3.12)
U = {(u, v, w) | A'u + v − w = 0, v + w ≥ εe, (v, w) ≥ 0}. (3.13)

Proof. Note first that the sets X and U defined above are nonempty because, as pointed out earlier, under the assumption that the AVE has a solution x̄ we have (x̄, |x̄|) ∈ X and (0, εe/2, εe/2) ∈ U. To keep the proof simple we shall assume that neither X nor U contains straight lines going to infinity in both directions. This assumption, which allows us to utilize [13, Corollary 32.3.4], can easily be satisfied by defining x = x^I − x^II, x^I ≥ 0, x^II ≥ 0 and u = u^I − u^II, u^I ≥ 0, u^II ≥ 0. Hence the bilinear program (2.7), with an objective function bounded below by zero, which is equivalent to a concave function minimization [1, Proposition 2.2], has a vertex solution on the polyhedral set X × U.

If at some iteration i the bilinear objective function does not decrease, then the linear programs of steps (III) and (IV) of the algorithm must have returned (u^{i+1}, v^{i+1}, w^{i+1}) and (x^{i+1}, y^{i+1}) such that:

(y^{i+1})'(−u + v + w) − b'u ≥ (y^{i+1})'(−u^i + v^i + w^i) − b'u^i = (y^{i+1})'(−u^{i+1} + v^{i+1} + w^{i+1}) − b'u^{i+1},  ∀(u, v, w) ∈ U, (3.14)

and

(−u^{i+1} + v^{i+1} + w^{i+1})'y ≥ (−u^{i+1} + v^{i+1} + w^{i+1})'y^i = (−u^{i+1} + v^{i+1} + w^{i+1})'y^{i+1},  ∀(x, y) ∈ X. (3.15)

Combining the inequalities (3.14) and (3.15) gives the minimum principle necessary optimality condition (3.11). Since there are a finite number of vertices of the set X × U, and since each vertex visited by Algorithm 3.1 gives a smaller value of the bilinear objective than the previous vertex, no vertex is repeated. Thus our algorithm must terminate at either a global zero-minimum solution or at a point satisfying the minimum principle necessary optimality condition. □

We turn now to our computational results.
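Algorithm 3.1 can be sketched end to end with an off-the-shelf LP solver in place of the paper's CPLEX/MATLAB setup; `scipy.optimize.linprog` stands in here, and the instance generation follows Section 4. The solver choice, dimension, and seed are assumptions for illustration, and, like the algorithm itself, a given run may stop at a stationary point rather than a global zero minimum.

```python
import numpy as np
from scipy.optimize import linprog

def solve_ave(A, b, eps=1e-2, tol=1e-6, itmax=40):
    """Sketch of bilinear programming Algorithm 3.1 for Ax - |x| = b."""
    n = A.shape[0]
    I = np.eye(n)
    # Shared primal pieces over z = (x, y): Ax - y = b and y >= |x|,
    # written as -(x + y) <= 0 and x - y <= 0 for linprog.
    A_eq = np.hstack([A, -I])
    A_ub = np.vstack([np.hstack([-I, -I]), np.hstack([I, -I])])
    zeros2n = np.zeros(2 * n)

    def primal_lp(cost_y):
        # min cost_y'y over X (steps (I) and (IV))
        c = np.concatenate([np.zeros(n), cost_y])
        r = linprog(c, A_ub=A_ub, b_ub=zeros2n, A_eq=A_eq, b_eq=b,
                    bounds=(None, None), method="highs")
        return r.x[:n], r.x[n:]

    x, y = primal_lp(np.ones(n))  # step (I): min e'y, per (3.8)
    obj_prev = np.inf
    for _ in range(itmax):
        if np.linalg.norm(A @ x - np.abs(x) - b) <= tol:
            break  # AVE solved
        # Step (III), per (3.9): min y'(-u+v+w) - b'u over
        # U = {A'u + v - w = 0, v + w >= eps*e, (v, w) >= 0}.
        c = np.concatenate([-y - b, y, y])
        D_eq = np.hstack([A.T, I, -I])
        D_ub = np.hstack([np.zeros((n, n)), -I, -I])  # -(v+w) <= -eps*e
        r = linprog(c, A_ub=D_ub, b_ub=-eps * np.ones(n),
                    A_eq=D_eq, b_eq=np.zeros(n),
                    bounds=[(None, None)] * n + [(0, None)] * (2 * n),
                    method="highs")
        u, v, w = r.x[:n], r.x[n:2 * n], r.x[2 * n:]
        # Step (IV), per (3.10): min (-u+v+w)'y over X.
        x, y = primal_lp(-u + v + w)
        obj = y @ (-u + v + w) - b @ u
        if obj >= obj_prev:
            break  # bilinear objective no longer decreasing: stationary point
        obj_prev = obj
    return x

# Illustrative solvable instance generated as in Section 4.
rng = np.random.default_rng(2)
n = 10
A = rng.uniform(-5.0, 5.0, size=(n, n))
xbar = rng.uniform(-0.5, 0.5, size=n)
b = A @ xbar - np.abs(xbar)
x = solve_ave(A, b)
print(np.linalg.norm(A @ x - np.abs(x) - b))  # small if this run reaches a zero minimum
```

Both subproblem LPs are solvable whenever the AVE is: the set U always contains (0, εe/2, εe/2), and the weak-duality argument of (2.6) bounds each objective below, so the alternating linear programs are well defined at every iteration.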

4 Computational Results

We implemented our algorithm by solving 500 consecutively generated solvable random instances of the absolute value equation (1.1). Elements of the matrix A were random numbers picked from a uniform distribution on the interval [−5, 5]. A random solution x̄ with random components from [−0.5, 0.5] was generated, and the right-hand side b was computed as b = Ax̄ − |x̄|. All computation was performed on a 4-gigabyte machine running i386 rhel5 Linux. We utilized the CPLEX linear programming code [7] within MATLAB [12] to solve our linear programs. Of the 500 test problems, 88.6% were solved exactly to a tolerance set to tol = 1e−6. The maximum number of iterations was set at 40. The computational results are summarized in Table 1.

Problem Size n | Number of AVEs out of 100 with 2-norm error ≤ tol = 1e−6 | Time in Seconds for Solving 100 Equations
10             | 90                                                       | 1.805
50             | 87                                                       | 5.725
100            | 88                                                       | 20.605
500            | 88                                                       | 1,996.6
1,000          | 90                                                       | 19,008

Table 1: Computational Results for 500 Randomly Generated Consecutive AVEs

5 Conclusion and Outlook

We have proposed a bilinear programming formulation for solving the NP-hard absolute value equation. The bilinear program was solved by a finite succession of linear programs. The proposed algorithm solved 88.6% of the 500 solvable random test instances to an accuracy of 1e−6. Possible future work may consist of determining precise sufficient conditions under which the proposed formulation and solution method is guaranteed to solve this NP-hard problem exactly.

Acknowledgments

The research described in this Data Mining Institute Report 11-01, February 2011, was supported by the Microsoft Corporation and ExxonMobil.

References

[1] K. P. Bennett and O. L. Mangasarian. Bilinear separation of two sets in n-space. Computational Optimization and Applications, 2:207-227, 1993.

[2] S.-J. Chung. NP-completeness of the linear complementarity problem. Journal of Optimization Theory and Applications, 60:393-399, 1989.

[3] R. W. Cottle and G. Dantzig. Complementary pivot theory of mathematical programming. Linear Algebra and its Applications, 1:103-125, 1968.

[4] R. W. Cottle, J.-S. Pang, and R. E. Stone. The Linear Complementarity Problem. Academic Press, New York, 1992.

[5] M. Frank and P. Wolfe. An algorithm for quadratic programming. Naval Research Logistics Quarterly, 3:95-110, 1956.

[6] Sheng-Long Hu and Zheng-Hai Huang. A note on absolute value equations. Optimization Letters, 4:417-424, 2010.

[7] ILOG, Incline Village, Nevada. ILOG CPLEX 9.0 User's Manual, 2003. http://www.ilog.com/products/cplex/.

[8] O. L. Mangasarian. Absolute value equation solution via concave minimization. Optimization Letters, 1(1):3-8, 2007. ftp://ftp.cs.wisc.edu/pub/dmi/tech-reports/06-02.pdf.

[9] O. L. Mangasarian. Absolute value programming. Computational Optimization and Applications, 36(1):43-53, 2007. ftp://ftp.cs.wisc.edu/pub/dmi/tech-reports/05-04.ps.

[10] O. L. Mangasarian. A generalized Newton method for absolute value equations. Technical Report 08-01, Data Mining Institute, Computer Sciences Department, University of Wisconsin, May 2008. ftp://ftp.cs.wisc.edu/pub/dmi/tech-reports/08-01.pdf. Optimization Letters, 3(1):101-108, January 2009. Online version: http://www.springerlink.com/content/c076875254r7tn38/.

[11] O. L. Mangasarian and R. R. Meyer. Absolute value equations. Linear Algebra and Its Applications, 419:359-367, 2006. ftp://ftp.cs.wisc.edu/pub/dmi/tech-reports/05-06.pdf.

[12] MATLAB. User's Guide. The MathWorks, Inc., Natick, MA 01760, 1994-2006. http://www.mathworks.com.

[13] R. T. Rockafellar. Convex Analysis. Princeton University Press, Princeton, New Jersey, 1970.

[14] J. Rohn. A theorem of the alternatives for the equation Ax + B|x| = b. Linear and Multilinear Algebra, 52(6):421-426, 2004. http://www.cs.cas.cz/~rohn/publist/alternatives.pdf.

[15] J. Rohn. On unique solvability of the absolute value equation. Optimization Letters, 3:603-606, 2009.

[16] J. Rohn. A residual existence theorem for linear equations. Optimization Letters, 4:287-292, 2010.