APPROXIMATING THE COMPLEXITY MEASURE OF VAVASIS-YE ALGORITHM IS NP-HARD

Levent Tuncel*

November 1998

C&O Research Report: 98-5

Abstract

Given an m×n integer matrix A of full row rank, we consider the problem of computing the maximum of ||B^{-1}A|| where B varies over all bases of A. This quantity appears in various places in the mathematical programming literature. More recently, the logarithm of this number was the determining factor in the complexity bound of Vavasis and Ye's primal-dual interior-point algorithm. We prove that the problem of approximating this maximum norm, even within an exponential (in the dimension of A) factor, is NP-hard. Our proof is based on a closely related result of L. Khachiyan [1].

Keywords: linear programming, computational complexity, complexity measure

*Department of Combinatorics and Optimization, Faculty of Mathematics, University of Waterloo, Waterloo, Ontario, N2L 3G1 Canada (e-mail: ltuncel@math.uwaterloo.ca). Research supported in part by a research grant from NSERC of Canada.
1 Introduction and Preliminaries

Consider the primal-dual pair of linear programming (LP) problems expressed in the following form:

(P)  minimize c^T x  subject to  Ax = b, x ≥ 0;
(D)  maximize b^T y  subject to  A^T y + s = c, s ≥ 0,

where A ∈ R^{m×n}, b ∈ R^m, and c ∈ R^n. In this note, all vectors are column vectors. Without loss of generality, we assume rank(A) = m and that n > m. For a matrix M ∈ R^{m×n}, ||M||_p denotes the matrix p-norm (induced by the vector p-norms on R^m and R^n),

||M||_p := max{ ||Mx||_p : x ∈ R^n, ||x||_p = 1 }.

Throughout, an unsubscripted norm || · || denotes the 2-norm.

Vavasis and Ye [5] proposed a primal-dual interior-point algorithm for LP with the property that the number of Newton steps required by the algorithm is bounded by a function of only the coefficient matrix A. Based on the complexity measure

χ̄(A) := sup{ ||A^T (A D A^T)^{-1} A D|| : D ∈ 𝒟 }

(where 𝒟 is the set of n×n, diagonal, positive definite matrices), they established the bound of O( n^{3.5} (log χ̄(A) + log n) ) on the number of Newton steps taken by their algorithm in the worst case.

There has been a significant amount of work in mathematical programming which involves or relates to χ̄(A). Many of these works include characterizations of χ̄(A). Every such known characterization seems to lead only to exponential-time algorithms for computing χ̄(A). In this note, we are concerned with the computational complexity of computing this number. We will investigate the question in the context of the Turing machine model. Therefore, for the rest of the note, we assume that A ∈ Z^{m×n}. (The main result goes through for all A with rational entries as well.)

A related condition number of A is defined as

χ(A) := sup{ ||(A D A^T)^{-1} A D|| : D ∈ 𝒟 }.

It is not hard to show that

χ(A) = max{ ||B^{-1}|| : B ∈ B(A) },   (1)

where B(A) is the set of all bases (m×m non-singular sub-matrices) of A. Let poly(n) denote a polynomial function of n of fixed degree. Khachiyan [1] proved (in addition to many related results) the following.
Theorem 1.1 Approximating χ(A) within a factor of 2^{poly(n)} is NP-hard.

Khachiyan [2], and Vavasis and Ye [5], suspected that the statement of the above theorem would most likely apply to χ̄(A) as well. Utilizing Khachiyan's theorem, we prove that their suspicions were well placed. The main result of this note follows.

Theorem 1.2 Approximating χ̄(A) within a factor of 2^{poly(n)} is NP-hard.

Even though the paper [5] contains elementary ways of avoiding the accurate computation of χ̄(A), and the modification of the Vavasis-Ye algorithm by Megiddo-Mizuno-Tsuchiya [3] also avoids this computation, our result adds to the relevance of these techniques. Moreover, our result provides further motivation for the probabilistic approaches to the subject, as done by Todd, Tuncel and Ye [4].

2 Review of the Ingredients

χ̄(A) also has a characterization in terms of the bases of A (see, for instance, [4]):

χ̄(A) = max{ ||B^{-1}A|| : B ∈ B(A) }.   (2)

We use some elementary and very well-known facts from the complexity analyses of LP problems (Propositions 2.1 and 2.2). All logarithms in this note are of base 2. Given z ∈ Z,

size(z) := ⌈log(|z| + 1)⌉ + 1.

Then size(A) := Σ_{i=1}^m Σ_{j=1}^n size(a_{ij}). We denote size(A) by L. dim(M) denotes the dimension of the vector space M lies in, in our case, the number of entries of M.

Proposition 2.1
(a) Let d ∈ Z^n. Then ||d|| ≤ 2^{size(d) − dim(d)}.
(b) Let C be a square sub-matrix of A. Then |det(C)| ≤ 2^{size(C) − dim(C)} ≤ 2^{L − mn}.

Proof. Proof of (a) is straightforward. Proof of (b) can be easily obtained by an induction.

Proposition 2.2 Let C be an r×r non-singular sub-matrix of A. Let d be an r-vector whose entries are chosen from the entries of A. Then
(a) ||C^{-1}d||_∞ ≤ 2^{L−mn} and ||C^{-1}d|| ≤ 2^{L−n};
(b) ||C|| ≤ 2^L and ||C^{-1}|| ≥ 2^{−L}.

Proof. (a) By Cramer's rule, Proposition 2.1(b), and the fact that all entries of C and d are integers, we have ||C^{-1}d||_∞ ≤ 2^{L−mn}. The next inequality follows from the relationship between the vector ∞- and 2-norms.
(b) Using Proposition 2.1(a) and the characterization of the operator ∞-norm, we have ||C||_∞ ≤ 2^{L−n}. Using the relationship between the operator ∞- and 2-norms, we arrive at ||C|| ≤ 2^L. Recall that the reciprocal of the largest singular value of C is the smallest singular value of C^{-1}. We conclude ||C^{-1}|| ≥ 2^{−L}.

3 The Main Result

Let B̂ denote a basis of A attaining χ(A) = ||B̂^{-1}||. Note that for any square, non-singular sub-matrix C of A, there exists a basis B ∈ B(A) containing C as a sub-matrix. For every such B, we have the interlacing property of the singular values of B and C. In particular, ||B|| ≥ ||C|| and ||B^{-1}|| ≥ ||C^{-1}||. Thus,

||C^{-1}|| ≤ ||B̂^{-1}||.   (3)

Our main idea is to exploit the characterizations (1) and (2) of χ and χ̄ in the following way. We consider the value χ̄([A | μI]) of the augmented matrix [A | μI] for a scalar μ > 0. We have

χ̄([A | μI]) ≤ ||B^{-1}A|| + μ ||B^{-1}||,

where B ∈ B([A | μI]) attains the maximum in (2). We observe that if we choose μ very large then the second term above should dominate, and we may be forced to choose B very close to B̂. Indeed, this is a very rough idea and we have to consider various issues and verify a few bounds. But in essence, in what follows, we prove that choosing μ := 2^{5L} works. Many of the constants in the analysis below can be improved (including 2^{5L}); however, the conclusion of the main theorem stays the same. Therefore, the estimations below are very generous for the ease of presentation.

Lemma 3.1 Let B be a basis of [A | μI] for μ := 2^{5L}. Then
(a) ||B^{-1}|| ≤ (1 + 2^{−2L}) ||B̂^{-1}||;
(b) ||B^{-1}A|| ≤ 2^L + 2^{−L};
(c) χ̄([A | μI]) ≥ 2^{4L}.

Proof. If B does not contain any column of μI, then the inequality in (a) clearly holds, and the inequality in (b) also holds (as can be checked using Proposition 2.2(a)). So, for proving (a) and (b), we assume, without loss of generality, that B contains the first k columns of μI. Then we write B as

B = [ μI  B_1 ]          thus   B^{-1} = [ μ^{-1}I   −μ^{-1}B_1 B_2^{-1} ]
    [ 0   B_2 ],                          [ 0          B_2^{-1}          ].

Now, we prove (a):

||B^{-1}|| ≤ ||[ μ^{-1}I | −μ^{-1}B_1 B_2^{-1} ]|| + ||[ 0 | B_2^{-1} ]||
           ≤ μ^{-1} ( √m + ||B_1 B_2^{-1}|| ) + ||B̂^{-1}||
           ≤ μ^{-1} ( √m + 2^L ) + ||B̂^{-1}||
           ≤ (1 + 2^{−2L}) ||B̂^{-1}||.

The second inequality above uses (3). The third inequality uses Proposition 2.2(a). The last inequality follows from Proposition 2.2(b).

Proof of (b): Write

A = [ A_1 ]
    [ A_2 ]

according to the row partition of B. Then

B^{-1}A = [ μ^{-1}A_1 − μ^{-1}B_1 B_2^{-1} A_2 ]
          [ B_2^{-1} A_2                       ].

Therefore,

||B^{-1}A|| ≤ μ^{-1} ||A_1|| + μ^{-1} ||B_1 B_2^{-1}|| ||A_2|| + ||B_2^{-1} A_2|| ≤ 2^{−4L} + 2^{−3L} + 2^L ≤ 2^L + 2^{−L}.

Proof of (c):

χ̄([A | μI]) ≥ || [ B̂^{-1}A | μ B̂^{-1} ] || ≥ μ ||B̂^{-1}|| ≥ 2^{5L} 2^{−L} = 2^{4L}.

We used Proposition 2.2(b).

Proof of Theorem 1.2: Let B be the basis attaining χ̄([A | μI]). Then

χ̄([A | μI]) = || [ B^{-1}A | μ B^{-1} ] || ≤ ||B^{-1}A|| + μ ||B^{-1}|| ≤ 2^{−L} + 2^L + 2^{5L} (1 + 2^{−2L}) χ(A).
We used Lemma 3.1(a) and (b). Since 2^{5L} χ(A) ≤ χ̄([A | μI]), we obtain

χ̄([A | μI]) (1 − 2^{−3L} − 2^{−5L}) ≤ 2^{5L} (1 + 2^{−2L}) χ(A) ≤ (1 + 2^{−2L}) χ̄([A | μI]).

Therefore, 2^{−5L} χ̄([A | μI]) approximates χ(A) within a factor of

(1 + 2^{−2L}) / (1 − 2^{−3L} − 2^{−5L});

we used the fact that χ̄([A | μI]) ≥ 2^{4L} (by Lemma 3.1(c)). Since n > m (hence L ≥ 6), this fraction is very close to 1 (bounded above by (1 + 2^{−12}) / (1 − 2^{−18} − 2^{−30})). Clearly, if there were a polynomial-time algorithm which approximated χ̄(A) within a factor of 2^{poly(n)}, we could use it on [A | 2^{5L} I], whose size is bounded by a polynomial function of the size L of A (then divide the result by 2^{5L}), to get a polynomial-time algorithm guaranteeing an approximation factor of, e.g., 2^{poly(n)+1} for χ(A). Therefore, the problem of approximating χ̄(A) within a factor of 2^{poly(n)} is NP-hard.

References

[1] L. Khachiyan, On the complexity of approximating extremal determinants in matrices, Journal of Complexity 11 (1995) 138-153.

[2] L. Khachiyan, private communication, June 1997.

[3] N. Megiddo, S. Mizuno and T. Tsuchiya, A modified layered-step interior-point algorithm for linear programming, Mathematical Programming 82 (1998) 339-355.

[4] M. J. Todd, L. Tuncel and Y. Ye, Probabilistic analysis of two complexity measures for linear programming problems, MSRI Preprint 1998-054, Berkeley, CA, USA, October 1998.

[5] S. A. Vavasis and Y. Ye, A primal-dual interior point method whose running time depends only on the constraint matrix, Mathematical Programming 74 (1996) 79-120.
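Remark (illustration, not part of the original analysis). The basis characterizations χ(A) = max{ ||B^{-1}|| : B ∈ B(A) } and χ̄(A) = max{ ||B^{-1}A|| : B ∈ B(A) } can be evaluated by brute force on tiny instances, which is exactly the exponential-time route the note refers to. The sketch below is ours, not from the paper: it uses exact Fraction arithmetic, and it substitutes the Frobenius norm for the spectral 2-norm (the Frobenius norm bounds the 2-norm from above within a factor of √m), so the values are norm-equivalent proxies only.

```python
# Brute-force evaluation of chi(A) and chibar(A) via the basis
# characterizations (1) and (2).  Exponential in n; illustrative only.
# Assumption: Frobenius norm used as a stdlib-only proxy for the 2-norm.
from fractions import Fraction
from itertools import combinations

def inverse(B):
    """Gauss-Jordan inverse of a square Fraction matrix; None if singular."""
    m = len(B)
    M = [row[:] + [Fraction(int(i == j)) for j in range(m)]
         for i, row in enumerate(B)]
    for col in range(m):
        piv = next((r for r in range(col, m) if M[r][col] != 0), None)
        if piv is None:
            return None                      # singular column choice
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        for r in range(m):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[m:] for row in M]

def matmul(X, Y):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def frob(X):
    """Frobenius norm (proxy upper bound for the spectral norm)."""
    return sum(x * x for row in X for x in row) ** 0.5

def chi_and_chibar(A):
    """Return (chi(A), chibar(A)) by enumerating all m x m bases of A."""
    m, n = len(A), len(A[0])
    A = [[Fraction(x) for x in row] for row in A]
    chi = chibar = 0.0
    for cols in combinations(range(n), m):
        Binv = inverse([[row[j] for j in cols] for row in A])
        if Binv is not None:                 # cols form a basis of A
            chi = max(chi, frob(Binv))
            chibar = max(chibar, frob(matmul(Binv, A)))
    return chi, chibar
```

For example, for A = [[1, 0, 1], [0, 1, 1]] the three column pairs are all bases, and the sketch returns chi(A) = √3 and chibar(A) = 2 under the Frobenius proxy.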