APPROXIMATING THE COMPLEXITY MEASURE OF VAVASIS-YE ALGORITHM IS NP-HARD

Levent Tuncel*

November 10, 1998

C&O Research Report: 98-51

Abstract

Given an $m \times n$ integer matrix $A$ of full row rank, we consider the problem of computing the maximum of $\|B^{-1}A\|_2$, where $B$ varies over all bases of $A$. This quantity appears in various places in the mathematical programming literature. More recently, the logarithm of this number was the determining factor in the complexity bound of Vavasis and Ye's primal-dual interior-point algorithm. We prove that the problem of approximating this maximum norm, even within an exponential (in the dimension of $A$) factor, is NP-hard. Our proof is based on a closely related result of L. Khachiyan [1].

Keywords: linear programming, computational complexity, complexity measure

* Department of Combinatorics and Optimization, Faculty of Mathematics, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada (e-mail: ltuncel@math.uwaterloo.ca). Research supported in part by a research grant from NSERC of Canada.

1 Introduction and Preliminaries

Consider the primal-dual pair of linear programming (LP) problems expressed in the following form:

$$(P) \quad \text{minimize } c^T x \quad \text{subject to } Ax = b,\; x \geq 0;$$

$$(D) \quad \text{maximize } b^T y \quad \text{subject to } A^T y + s = c,\; s \geq 0;$$

where $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$, and $c \in \mathbb{R}^n$. In this note, all vectors are column vectors. Without loss of generality, we assume $\mathrm{rank}(A) = m$ and that $n > m$. For a matrix $M \in \mathbb{R}^{m \times n}$, $\|M\|_p$ denotes the matrix $p$-norm (induced by the vector $p$-norms in $\mathbb{R}^m$ and $\mathbb{R}^n$),

$$\|M\|_p := \max\{\|Mx\|_p : x \in \mathbb{R}^n,\ \|x\|_p = 1\}.$$

Vavasis and Ye [5] proposed a primal-dual interior-point algorithm for LP with the property that the number of Newton steps required by the algorithm is bounded by a function of only the coefficient matrix $A$. Based on the complexity measure

$$\bar{\chi}(A) := \sup\{\|A^T (A D A^T)^{-1} A D\|_2 : D \in \mathcal{D}\}$$

(where $\mathcal{D}$ is the set of $n \times n$, diagonal, positive definite matrices), they established the bound of $O\!\left(n^{3.5} (\log \bar{\chi}(A) + \log n)\right)$ on the number of Newton steps taken by their algorithm in the worst case.

There has been a significant amount of work in mathematical programming which involves or relates to $\bar{\chi}(A)$. Many of these works include characterizations of $\bar{\chi}(A)$; however, every known characterization seems to lead only to exponential-time algorithms for computing $\bar{\chi}(A)$. In this note, we are concerned with the computational complexity of computing this number. We will investigate the question in the context of the Turing Machine Model. Therefore, for the rest of the note, we assume that $A \in \mathbb{Z}^{m \times n}$. (The main result goes through for all $A$ with rational entries as well.)

A related condition number of $A$ is defined as

$$\chi(A) := \sup\{\|(A D A^T)^{-1} A D\|_2 : D \in \mathcal{D}\}.$$

It is not hard to show that

$$\chi(A) = \max\{\|B^{-1}\|_2 : B \in \mathcal{B}(A)\}, \tag{1}$$

where $\mathcal{B}(A)$ is the set of all bases ($m \times m$ non-singular sub-matrices) of $A$.
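As an illustration of why the basis characterizations only suggest exponential-time algorithms, the following minimal Python sketch (ours, not part of the paper) evaluates (1) directly by enumerating all $\binom{n}{m}$ candidate bases; the analogous evaluator for $\bar{\chi}(A)$ simply replaces $B^{-1}$ by $B^{-1}A$.

```python
# Brute-force evaluation of chi(A) = max{ ||B^{-1}||_2 : B a basis of A }.
# Illustrative sketch only: it enumerates all C(n, m) column subsets, so
# its running time is exponential in the dimensions of A.
import itertools
import numpy as np

def chi_brute_force(A: np.ndarray) -> float:
    m, n = A.shape
    best = 0.0
    for cols in itertools.combinations(range(n), m):
        B = A[:, cols]
        if abs(np.linalg.det(B)) > 1e-9:   # B is non-singular, i.e., a basis
            best = max(best, np.linalg.norm(np.linalg.inv(B), 2))
    return best

# Example: a 2 x 3 integer matrix of full row rank.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0]])
print(chi_brute_force(A))
```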

Let poly($n$) denote a polynomial function of $n$ of fixed degree. Khachiyan [1] proved (in addition to many related results):

Theorem 1.1 Approximating $\chi(A)$ within a factor of poly($n$) is NP-hard.

Khachiyan [2], and Vavasis and Ye [5], suspected that the statement of the above theorem would most likely apply to $\bar{\chi}(A)$ as well. Utilizing Khachiyan's theorem, we prove that their suspicions were well placed. The main result of this note follows.

Theorem 1.2 Approximating $\bar{\chi}(A)$ within a factor of poly($n$) is NP-hard.

Even though the paper [5] contains elementary ways of avoiding the accurate computation of $\bar{\chi}(A)$, and the modification of the Vavasis-Ye algorithm by Megiddo, Mizuno and Tsuchiya [3] also avoids this computation, our result adds to the relevance of these techniques. Moreover, our result provides further motivation for the probabilistic approaches to the subject, as taken by Todd, Tuncel and Ye [4].

2 Review of the Ingredients

$\bar{\chi}(A)$ also has a characterization in terms of the bases of $A$ (see, for instance, [4]):

$$\bar{\chi}(A) = \max\{\|B^{-1}A\|_2 : B \in \mathcal{B}(A)\}. \tag{2}$$

We use some elementary and very well-known facts from the complexity analyses of LP problems (Propositions 2.1 and 2.2). All logarithms in this note are of base 2. Given $z \in \mathbb{Z}$, $\mathrm{size}(z) := \lceil \log_2(|z| + 1) \rceil + 1$. Then

$$\mathrm{size}(A) := \sum_{i=1}^{m} \sum_{j=1}^{n} \mathrm{size}(a_{ij}).$$

We denote $\mathrm{size}(A)$ by $L$. Moreover, $\dim(M)$ denotes the dimension of the vector space $M$ lies in; in our case, the number of entries of $M$.

Proposition 2.1
(a) Let $d \in \mathbb{Z}^n$. Then $\|d\|_2 \leq 2^{\mathrm{size}(d) - \dim(d)}$.
(b) Let $C$ be a square sub-matrix of $A$. Then $|\det(C)| \leq 2^{\mathrm{size}(C) - \dim(C)} \leq 2^{L - mn}$.

Proof. The proof of (a) is straightforward. The proof of (b) is easily obtained by induction.

Proposition 2.2 Let $C$ be an $r \times r$ non-singular sub-matrix of $A$, and let $d$ be an $r$-vector whose entries are chosen from the entries of $A$. Then
(a) $\|C^{-1} d\|_\infty \leq 2^{L - mn}$ and $\|C^{-1} d\|_2 \leq 2^{L - n}$;
(b) $\|C\|_2 \leq 2^L$ and $\|C^{-1}\|_2 \geq 2^{-L}$.

Proof. (a) By Cramer's Rule, Proposition 2.1(b), and the fact that all entries of $C$ and $d$ are integers, we have $\|C^{-1} d\|_\infty \leq 2^{L - mn}$. The next inequality follows from the relationship between the vector norms. (b) Using Proposition 2.1(a) and the characterization of the operator infinity-norm, we have $\|C\|_\infty \leq 2^{L - n}$. Using the relationship between the operator infinity- and 2-norms, we arrive at $\|C\|_2 \leq 2^L$. Recall that the reciprocal of the largest singular value of $C$ is the smallest singular value of $C^{-1}$. We conclude $\|C^{-1}\|_2 \geq 2^{-L}$.
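These bit-size bounds are easy to check numerically. The following small sketch (ours; the function names are not from the paper) computes $\mathrm{size}(z)$ and $L = \mathrm{size}(A)$ exactly, using the fact that $\lceil \log_2(|z| + 1) \rceil$ equals the integer bit length, and verifies Proposition 2.1(a) on a sample vector.

```python
# Exact computation of size(z) = ceil(log2(|z| + 1)) + 1 and L = size(A),
# plus a numerical check of Proposition 2.1(a) on a sample integer vector.
import numpy as np

def size_int(z: int) -> int:
    # For z >= 0, ceil(log2(z + 1)) == z.bit_length(), so this is exact.
    return abs(z).bit_length() + 1

def size_matrix(A: np.ndarray) -> int:
    return sum(size_int(int(z)) for z in A.flatten())

A = np.array([[1, 0, 2],
              [0, 1, 3]])
print(size_matrix(A))                                     # L = size(A) = 12

# Proposition 2.1(a): ||d||_2 <= 2^(size(d) - dim(d)) for integer vectors d.
d = np.array([3, -7, 2])
bound = 2 ** (sum(size_int(int(z)) for z in d) - d.size)  # 2^(10 - 3) = 128
assert np.linalg.norm(d) <= bound                         # sqrt(62) <= 128
```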
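In algorithmic form, the reduction reads as follows; this is a hedged sketch of ours, where `approx_chibar` is a hypothetical poly($n$)-factor approximation oracle for $\bar{\chi}$, of the kind whose existence Theorem 1.2 rules out.

```python
# Sketch of the reduction behind Theorem 1.2 (the oracle `approx_chibar` is
# hypothetical): approximate chibar on [A | 2^{5L} I], then rescale by 2^{-5L}
# to approximate chi(A) within essentially the same factor.

def size_int(z: int) -> int:
    # size(z) = ceil(log2(|z| + 1)) + 1, computed exactly via bit_length().
    return abs(z).bit_length() + 1

def chi_via_chibar_oracle(A: list[list[int]], approx_chibar) -> float:
    m = len(A)
    L = sum(size_int(a) for row in A for a in row)    # L = size(A)
    mu = 2 ** (5 * L)                                 # exact; Python ints are unbounded
    A_aug = [row + [mu if i == j else 0 for j in range(m)]
             for i, row in enumerate(A)]              # the augmented matrix [A | mu*I]
    return approx_chibar(A_aug) / mu                  # approximates chi(A)
```

Note that the augmentation adds only $m$ columns whose entries have bit size $O(L)$, so the size of $[A \mid 2^{5L} I]$ is bounded by a polynomial in $L$; this is what the concluding argument of this section needs.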

Lemma 3.1 Let $B$ be a basis of $[A \mid \mu I]$ for $\mu := 2^{5L}$. Then
(a) $\|B^{-1}\|_2 \leq (1 + 2^{-L}) \|\hat{B}^{-1}\|_2$;
(b) $\|B^{-1} A\|_2 \leq 2^{2L} + 2^{-L}$;
(c) $\bar{\chi}([A \mid \mu I]) \geq 2^{4L}$.

Proof. If $B$ does not contain any column of $\mu I$, then the inequality in (a) clearly holds, and the inequality in (b) also holds (as can be checked using Proposition 2.2(a)). So, for proving (a) and (b), we assume, without loss of generality, that $B$ contains the first $k$ columns of $\mu I$. Then we write

$$B = \begin{bmatrix} \mu I & B_1 \\ 0 & B_2 \end{bmatrix}, \quad \text{thus} \quad B^{-1} = \begin{bmatrix} \mu^{-1} I & -\mu^{-1} B_1 B_2^{-1} \\ 0 & B_2^{-1} \end{bmatrix}.$$

Now, we prove (a):

$$\|B^{-1}\|_2 \leq \mu^{-1} \left\| \left[ I \mid -B_1 B_2^{-1} \right] \right\|_2 + \|B_2^{-1}\|_2 \leq \mu^{-1} \left( \sqrt{m} + \|B_1 B_2^{-1}\|_2 \right) + \|\hat{B}^{-1}\|_2 \leq \mu^{-1} \left( \sqrt{m} + 2^{2L} \right) + \|\hat{B}^{-1}\|_2 \leq (1 + 2^{-L}) \|\hat{B}^{-1}\|_2.$$

The second inequality above uses (3). The third inequality uses Proposition 2.2(a). The last inequality follows from Proposition 2.2(b).

Proof of (b): Write $A = \begin{bmatrix} A_1 \\ A_2 \end{bmatrix}$ according to the row partition of $B$. Then

$$B^{-1} A = \begin{bmatrix} \mu^{-1} (A_1 - B_1 B_2^{-1} A_2) \\ B_2^{-1} A_2 \end{bmatrix}.$$

Therefore,

$$\|B^{-1} A\|_2 \leq \mu^{-1} \left( \|A_1\|_2 + \|B_1 B_2^{-1}\|_2 \, \|A_2\|_2 \right) + \|B_2^{-1} A_2\|_2 \leq 2^{-5L} \left( 2^L + 2^{2L} \cdot 2^L \right) + 2^{2L} \leq 2^{2L} + 2^{-L}.$$

Proof of (c):

$$\bar{\chi}([A \mid \mu I]) \geq \left\| \left[ \hat{B}^{-1} A \mid \mu \hat{B}^{-1} \right] \right\|_2 \geq \mu \|\hat{B}^{-1}\|_2 \geq 2^{4L}.$$

We used Proposition 2.2(b).
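The mechanism the lemma captures can be watched numerically on a toy instance: $\bar{\chi}([A \mid \mu I])/\mu$ approaches $\chi(A)$ from above as $\mu$ grows. The following brute-force check (ours, feasible only for tiny instances) illustrates this.

```python
# Toy check (ours): chibar([A | mu*I]) / mu decreases toward chi(A) as mu
# grows, which is the mechanism behind Lemma 3.1 and Theorem 1.2.
import itertools
import numpy as np

def max_over_bases(A: np.ndarray, value) -> float:
    """Brute-force maximum of value(B) over all bases B of A."""
    m, n = A.shape
    best = 0.0
    for cols in itertools.combinations(range(n), m):
        B = A[:, cols]
        if abs(np.linalg.det(B)) > 1e-9:
            best = max(best, value(B))
    return best

A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0]])
chi_A = max_over_bases(A, lambda B: np.linalg.norm(np.linalg.inv(B), 2))

for mu in (1e2, 1e4, 1e6):
    A_aug = np.hstack([A, mu * np.eye(2)])
    chibar = max_over_bases(
        A_aug, lambda B: np.linalg.norm(np.linalg.solve(B, A_aug), 2))
    print(mu, chibar / mu, chi_A)   # the ratio stays >= chi_A and tends to it
```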

Proof of Theorem 1.2: Let $B$ be the basis attaining $\bar{\chi}([A \mid \mu I])$. Then

$$\bar{\chi}([A \mid \mu I]) = \left\| \left[ B^{-1} A \mid \mu B^{-1} \right] \right\|_2 \leq \|B^{-1} A\|_2 + \mu \|B^{-1}\|_2 \leq 2^{2L} + 2^{-L} + 2^{5L} (1 + 2^{-L}) \chi(A).$$

We used Lemma 3.1(a) and (b). Since $2^{5L} \chi(A) \leq \bar{\chi}([A \mid \mu I])$, we obtain

$$\left( 1 - 2^{-2L} - 2^{-5L} \right) \bar{\chi}([A \mid \mu I]) \leq 2^{5L} (1 + 2^{-L}) \chi(A) \leq (1 + 2^{-L}) \, \bar{\chi}([A \mid \mu I]).$$

Therefore, $2^{-5L} \bar{\chi}([A \mid \mu I])$ approximates $\chi(A)$ within a factor of

$$\frac{1 + 2^{-L}}{1 - 2^{-2L} - 2^{-5L}};$$

we used the fact that $\bar{\chi}([A \mid \mu I]) \geq 2^{4L}$ (by Lemma 3.1(c)). Since $n > m$ (so $L \geq 6$), this fraction is very close to 1 (it is bounded above by $1 + 2^{-4}$).

Clearly, if there were a polynomial-time algorithm which approximated $\bar{\chi}(A)$ within a factor of poly($n$), we could use it on $[A \mid 2^{5L} I]$, whose size is bounded by a polynomial function of the size $L$ of $A$ (and then divide the result by $2^{5L}$), to get a polynomial-time algorithm guaranteeing an approximation factor of, e.g., poly($n$) + 1 for $\chi(A)$. Therefore, the problem of approximating $\bar{\chi}(A)$ within a factor of poly($n$) is NP-hard.

References

[1] L. Khachiyan, On the complexity of approximating extremal determinants in matrices, Journal of Complexity 11 (1995) 138-153.

[2] L. Khachiyan, private communication, June 1997.

[3] N. Megiddo, S. Mizuno and T. Tsuchiya, A modified layered-step interior-point algorithm for linear programming, Mathematical Programming 82 (1998) 339-355.

[4] M. J. Todd, L. Tuncel and Y. Ye, Probabilistic analysis of two complexity measures for linear programming problems, MSRI Preprint 1998-054, Berkeley, CA, USA, October 1998.

[5] S. A. Vavasis and Y. Ye, A primal-dual interior point method whose running time depends only on the constraint matrix, Mathematical Programming 74 (1996) 79-120.