CONCENTRATION OF THE MIXED DISCRIMINANT OF WELL-CONDITIONED MATRICES. Alexander Barvinok


Abstract. We call an n-tuple $Q_1,\dots,Q_n$ of positive definite $n\times n$ real matrices $\alpha$-conditioned for some $\alpha \ge 1$ if for the corresponding quadratic forms $q_i: \mathbb{R}^n \to \mathbb{R}$ we have $q_i(x) \le \alpha q_i(y)$ for any two vectors $x, y \in \mathbb{R}^n$ of Euclidean unit length and $q_i(x) \le \alpha q_j(x)$ for all $1 \le i,j \le n$ and all $x \in \mathbb{R}^n$. An n-tuple is called doubly stochastic if the sum of the $Q_i$ is the identity matrix and the trace of each $Q_i$ is 1. We prove that for any fixed $\alpha \ge 1$ the mixed discriminant of an $\alpha$-conditioned doubly stochastic n-tuple is $n^{O(1)} e^{-n}$. As a corollary, for any $\alpha \ge 1$ fixed in advance, we obtain a polynomial time algorithm approximating the mixed discriminant of an $\alpha$-conditioned n-tuple within a polynomial in $n$ factor.

1. Introduction and main results

(1.1) Mixed discriminants. Let $Q_1,\dots,Q_n$ be $n\times n$ real symmetric matrices. The function $\det(t_1 Q_1 + \dots + t_n Q_n)$, where $t_1,\dots,t_n$ are real variables, is a homogeneous polynomial of degree $n$ in $t_1,\dots,t_n$ and its coefficient

(1.1.1) $D(Q_1,\dots,Q_n) = \dfrac{\partial^n}{\partial t_1 \cdots \partial t_n} \det(t_1 Q_1 + \dots + t_n Q_n)$

is called the mixed discriminant of $Q_1,\dots,Q_n$ (sometimes the normalizing factor of $1/n!$ is used). Mixed discriminants were introduced by A.D. Alexandrov in his work on mixed volumes [Al38], see also [Le93]. They also have some interesting combinatorial applications, see Chapter V of [BR97].

Mixed discriminants generalize permanents. If the matrices $Q_1,\dots,Q_n$ are diagonal, so that

$Q_i = \operatorname{diag}(a_{i1},\dots,a_{in})$ for $i = 1,\dots,n$,

then

(1.1.2) $D(Q_1,\dots,Q_n) = \operatorname{per} A$ where $A = (a_{ij})$

1991 Mathematics Subject Classification. 15A15, 15A45, 90C25, 68Q25.
Key words and phrases. mixed discriminant, scaling, algorithm, concentration.
This research was partially supported by NSF Grant DMS 1361541.

and $\operatorname{per} A = \sum_{\sigma \in S_n} \prod_{i=1}^n a_{i\sigma(i)}$ is the permanent of an $n\times n$ matrix $A$. Here the $i$-th row of $A$ is the diagonal of $Q_i$ and $S_n$ is the symmetric group of all $n!$ permutations of the set $\{1,\dots,n\}$.

(1.2) Doubly stochastic n-tuples. If $Q_1,\dots,Q_n$ are positive semidefinite matrices then $D(Q_1,\dots,Q_n) \ge 0$, see [Le93]. We say that the n-tuple $(Q_1,\dots,Q_n)$ is doubly stochastic if $Q_1,\dots,Q_n$ are positive semidefinite,

$Q_1 + \dots + Q_n = I$ and $\operatorname{tr} Q_1 = \dots = \operatorname{tr} Q_n = 1$,

where $I$ is the $n\times n$ identity matrix and $\operatorname{tr} Q$ is the trace of $Q$. We note that if $Q_1,\dots,Q_n$ are diagonal then the n-tuple $(Q_1,\dots,Q_n)$ is doubly stochastic if and only if the matrix $A$ in (1.1.2) is doubly stochastic, that is, non-negative with all row and column sums equal to 1.

In [Ba89] Bapat conjectured what should be the mixed discriminant version of the van der Waerden inequality for permanents: if $(Q_1,\dots,Q_n)$ is a doubly stochastic n-tuple then

(1.2.1) $D(Q_1,\dots,Q_n) \ge \dfrac{n!}{n^n}$,

where equality holds if and only if $Q_1 = \dots = Q_n = \frac{1}{n} I$. The conjecture was proved by Gurvits [Gu06], see also [Gu08] for a more general result with a simpler proof. In this paper, we prove that $D(Q_1,\dots,Q_n)$ remains close to $n!/n^n \approx e^{-n}$ if the n-tuple $(Q_1,\dots,Q_n)$ is doubly stochastic and well-conditioned.

(1.3) $\alpha$-conditioned n-tuples. For a symmetric matrix $Q$, let $\lambda_{\min}(Q)$ denote the minimum eigenvalue of $Q$ and let $\lambda_{\max}(Q)$ denote the maximum eigenvalue of $Q$. We say that a positive definite matrix $Q$ is $\alpha$-conditioned for some $\alpha \ge 1$ if $\lambda_{\max}(Q) \le \alpha \lambda_{\min}(Q)$. Equivalently, let $q: \mathbb{R}^n \to \mathbb{R}$ be the corresponding quadratic form defined by $q(x) = \langle Qx, x \rangle$ for $x \in \mathbb{R}^n$, where $\langle \cdot,\cdot \rangle$ is the standard inner product in $\mathbb{R}^n$. Then $Q$ is $\alpha$-conditioned if

$q(x) \le \alpha q(y)$ for all $x, y \in \mathbb{R}^n$ such that $\|x\| = \|y\| = 1$,

where $\|\cdot\|$ is the standard Euclidean norm in $\mathbb{R}^n$.

We say that an n-tuple $(Q_1,\dots,Q_n)$ is $\alpha$-conditioned if each matrix $Q_i$ is $\alpha$-conditioned and

$q_i(x) \le \alpha q_j(x)$ for all $1 \le i,j \le n$ and all $x \in \mathbb{R}^n$,

where $q_1,\dots,q_n: \mathbb{R}^n \to \mathbb{R}$ are the corresponding quadratic forms.
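The identity (1.1.2) is easy to test numerically for small $n$. The sketch below (an illustration, not part of the paper) extracts the coefficient of $t_1\cdots t_n$ by the standard finite-difference (inclusion-exclusion) formula for the multilinear coefficient of a degree-$n$ polynomial and compares it with a brute-force permanent:

```python
import itertools
import numpy as np

def mixed_discriminant(mats):
    # D(Q_1,...,Q_n): the coefficient of t_1*...*t_n in det(t_1 Q_1 +...+ t_n Q_n),
    # extracted by inclusion-exclusion over subsets S of {1,...,n}:
    #   D = sum over S of (-1)^(n-|S|) det(sum_{i in S} Q_i).
    n = len(mats)
    total = 0.0
    for r in range(n + 1):
        for S in itertools.combinations(range(n), r):
            M = sum((mats[i] for i in S), start=np.zeros((n, n)))
            total += (-1) ** (n - r) * np.linalg.det(M)
    return total

def permanent(A):
    # permanent by direct summation over all n! permutations (small n only)
    n = A.shape[0]
    return sum(np.prod([A[i, s[i]] for i in range(n)])
               for s in itertools.permutations(range(n)))

# (1.1.2): for diagonal Q_i = diag(a_i1,...,a_in), D(Q_1,...,Q_n) = per A
A = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.1, 0.5]])
Q = [np.diag(A[i]) for i in range(3)]
print(mixed_discriminant(Q), permanent(A))   # the two values agree (0.266)
```

The inclusion-exclusion evaluation takes $2^n$ determinants, which is fine for sanity checks but is of course not the polynomial-time approximation discussed below.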
The main result of this paper is the following inequality.

(1.4) Theorem. Let $(Q_1,\dots,Q_n)$ be an $\alpha$-conditioned doubly stochastic n-tuple of positive definite $n\times n$ matrices. Then

$D(Q_1,\dots,Q_n) \le n^{\alpha^2} e^{-(n-1)}$.

Combining the bound of Theorem 1.4 with (1.2.1), we conclude that for any $\alpha \ge 1$, fixed in advance, the mixed discriminant of an $\alpha$-conditioned doubly stochastic n-tuple is within a polynomial in $n$ factor of $e^{-n}$. If we allow $\alpha$ to vary with $n$ then the logarithmic order of the mixed discriminant is captured by $e^{-n}$ as long as $\alpha \ll \sqrt{n/\ln n}$. The estimate of Theorem 1.4 is unlikely to be precise. It can be considered as a (weak) mixed discriminant extension of the Bregman-Minc inequality for permanents (we discuss the connection in Section 1.7).

(1.5) Scaling. We say that an n-tuple $(P_1,\dots,P_n)$ of $n\times n$ positive definite matrices is obtained from an n-tuple $(Q_1,\dots,Q_n)$ of $n\times n$ positive definite matrices by scaling if for some invertible $n\times n$ matrix $T$ and real $\tau_1,\dots,\tau_n > 0$, we have

(1.5.1) $P_i = \tau_i T^* Q_i T$ for $i = 1,\dots,n$,

where $T^*$ is the transpose of $T$. As easily follows from (1.1.1),

(1.5.2) $D(P_1,\dots,P_n) = (\det T)^2 \left(\prod_{i=1}^n \tau_i\right) D(Q_1,\dots,Q_n)$,

provided (1.5.1) holds. This notion of scaling extends the notion of scaling for positive matrices by Sinkhorn [Si64] to n-tuples of positive definite matrices. Gurvits and Samorodnitsky proved in [GS02] that any n-tuple of $n\times n$ positive definite matrices can be obtained by scaling from a doubly stochastic n-tuple, and, moreover, this can be achieved in polynomial time, as it reduces to solving a convex optimization problem (the gist of their algorithm is given by Theorem 2.1 below). More generally, Gurvits and Samorodnitsky discuss when an n-tuple of positive semidefinite matrices can be scaled to a doubly stochastic n-tuple.
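The scaling identity (1.5.2) can be checked numerically on random data. The sketch below is illustrative only; it evaluates the mixed discriminant by the brute-force inclusion-exclusion formula for the coefficient in (1.1.1):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 3

def mixed_discriminant(mats):
    # coefficient of t_1*...*t_n in det(t_1 Q_1 + ... + t_n Q_n), computed by
    # inclusion-exclusion: sum_S (-1)^(n-|S|) det(sum_{i in S} Q_i)
    total = 0.0
    for r in range(n + 1):
        for S in itertools.combinations(range(n), r):
            M = sum((mats[i] for i in S), start=np.zeros((n, n)))
            total += (-1) ** (n - r) * np.linalg.det(M)
    return total

Q = [(lambda G: G @ G.T + np.eye(n))(rng.normal(size=(n, n))) for _ in range(n)]
T = rng.normal(size=(n, n))                      # generic, hence invertible
tau = rng.uniform(0.5, 2.0, size=n)
P = [tau[i] * T.T @ Q[i] @ T for i in range(n)]  # the scaled tuple (1.5.1)

lhs = mixed_discriminant(P)
rhs = np.linalg.det(T) ** 2 * np.prod(tau) * mixed_discriminant(Q)
print(abs(lhs - rhs) / abs(rhs))                 # ~ machine precision
```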
As is discussed in [GS02], the inequality (1.2.1), together with the scaling algorithm, the identity (1.5.2) and the inequality $D(Q_1,\dots,Q_n) \le 1$ for doubly stochastic n-tuples $(Q_1,\dots,Q_n)$, allow one to estimate within a factor of $n!/n^n \approx e^{-n}$ the mixed discriminant of any given n-tuple of $n\times n$ positive semidefinite matrices in polynomial time. In this paper, we prove that if a doubly stochastic n-tuple $(P_1,\dots,P_n)$ is obtained from an $\alpha$-conditioned n-tuple of positive definite matrices then the n-tuple $(P_1,\dots,P_n)$ is $\alpha^2$-conditioned (see Lemma 2.4 below). We also prove the following strengthening of Theorem 1.4.

(1.6) Theorem. Suppose that $(Q_1,\dots,Q_n)$ is an $\alpha$-conditioned n-tuple of $n\times n$ positive definite matrices and suppose that $(P_1,\dots,P_n)$ is a doubly stochastic n-tuple of positive definite matrices obtained from $(Q_1,\dots,Q_n)$ by scaling. Then

$D(P_1,\dots,P_n) \le n^{\alpha^2} e^{-(n-1)}$.

Together with the scaling algorithm of [GS02] and the inequality (1.2.1), Theorem 1.6 allows us to approximate in polynomial time the mixed discriminant $D(Q_1,\dots,Q_n)$ of an $\alpha$-conditioned n-tuple $(Q_1,\dots,Q_n)$ within a factor of $n^{\alpha^2}$. Note that the value of $D(Q_1,\dots,Q_n)$ may vary within a factor of $\alpha^n$.

(1.7) Connections to the Bregman-Minc inequality. The following inequality for permanents of 0-1 matrices was conjectured by Minc [Mi63] and proved by Bregman [Br73], see also [Sc78] for a much simplified proof: if $A$ is an $n\times n$ matrix with 0-1 entries and row sums $r_1,\dots,r_n$, then

(1.7.1) $\operatorname{per} A \le \prod_{i=1}^n (r_i!)^{1/r_i}$.

The author learned from A. Samorodnitsky [Sa00] the following restatement of (1.7.1), see also [So03]. Suppose that $B = (b_{ij})$ is an $n\times n$ stochastic matrix (that is, a non-negative matrix with row sums 1) such that

(1.7.2) $0 \le b_{ij} \le \dfrac{1}{r_i}$ for all $i,j$

and some positive integers $r_1,\dots,r_n$. Then

(1.7.3) $\operatorname{per} B \le \prod_{i=1}^n \dfrac{(r_i!)^{1/r_i}}{r_i}$.

Indeed, the function $B \mapsto \operatorname{per} B$ is linear in each row and hence its maximum value on the polyhedron of stochastic matrices satisfying (1.7.2) is attained at a vertex of the polyhedron, that is, where $b_{ij} \in \{0, 1/r_i\}$ for all $i,j$. Multiplying the $i$-th row of $B$ by $r_i$, we obtain a 0-1 matrix $A$ with row sums $r_1,\dots,r_n$ and hence (1.7.3) follows by (1.7.1).

Suppose now that $B$ is a doubly stochastic matrix whose entries do not exceed $\alpha/n$ for some $\alpha \ge 1$. Combining (1.7.3) with the van der Waerden lower bound, we obtain that

(1.7.4) $\operatorname{per} B = e^{-n} n^{O(\alpha)}$.

Ideally, we would like to obtain an estimate similar to (1.7.4) for the mixed discriminants $D(Q_1,\dots,Q_n)$ of doubly stochastic n-tuples of positive semidefinite matrices satisfying

(1.7.5) $\lambda_{\max}(Q_i) \le \dfrac{\alpha}{n}$ for $i = 1,\dots,n$.
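For tiny matrices the inequality (1.7.1) can be verified exhaustively. The following sketch (an illustration, not from the paper) checks all $3\times 3$ matrices with 0-1 entries:

```python
import itertools
import math
import numpy as np

def permanent(A):
    # brute-force permanent over all permutations (small n only)
    n = A.shape[0]
    return sum(np.prod([A[i, s[i]] for i in range(n)])
               for s in itertools.permutations(range(n)))

# Exhaustive check of (1.7.1) over all 3x3 matrices with 0-1 entries:
#   per A <= prod_i (r_i!)^(1/r_i),  r_i = i-th row sum.
# Matrices with a zero row are skipped: there per A = 0 and the inequality
# holds trivially.
n = 3
worst = 0.0
for bits in itertools.product((0, 1), repeat=n * n):
    A = np.array(bits).reshape(n, n)
    r = A.sum(axis=1)
    if (r == 0).any():
        continue
    bound = np.prod([math.factorial(k) ** (1.0 / k) for k in r])
    worst = max(worst, permanent(A) / bound)
print(worst)   # never exceeds 1 (up to rounding); tight for the all-ones matrix
```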

In Theorem 1.4 such an estimate is obtained under the stronger assumption that the n-tuple $(Q_1,\dots,Q_n)$, in addition to being doubly stochastic, is also $\alpha$-conditioned. This, of course, implies (1.7.5), but it also prohibits the $Q_i$ from having small (in particular, 0) eigenvalues. The question whether a bound similar to Theorem 1.4 can be proven under the weaker assumption (1.7.5) together with the assumption that $(Q_1,\dots,Q_n)$ is doubly stochastic remains open.

In Section 2 we collect various preliminaries and in Section 3 we prove Theorems 1.4 and 1.6.

2. Preliminaries

First, we restate a result of Gurvits and Samorodnitsky [GS02] that is at the heart of their algorithm to estimate the mixed discriminant. We state it in the particular case of positive definite matrices.

(2.1) Theorem. Let $Q_1,\dots,Q_n$ be $n\times n$ positive definite matrices, let $H \subset \mathbb{R}^n$ be the hyperplane

$H = \left\{(x_1,\dots,x_n): \sum_{i=1}^n x_i = 0\right\}$

and let $f: H \to \mathbb{R}$ be the function

$f(x_1,\dots,x_n) = \ln\det\left(\sum_{i=1}^n e^{x_i} Q_i\right)$.

Then $f$ is strictly convex on $H$ and attains its minimum on $H$ at a unique point $(\xi_1,\dots,\xi_n)$. Let $S$ be an $n\times n$, necessarily invertible, matrix such that

(2.1.1) $S^* S = \sum_{i=1}^n e^{\xi_i} Q_i$

(such a matrix exists since the matrix in the right hand side of (2.1.1) is positive definite). Let $\tau_i = e^{\xi_i}$ for $i = 1,\dots,n$, let $T = S^{-1}$ and let

$B_i = \tau_i T^* Q_i T$ for $i = 1,\dots,n$.

Then $(B_1,\dots,B_n)$ is a doubly stochastic n-tuple of positive definite matrices.

We will need the following simple observation regarding the matrices $B_1,\dots,B_n$ constructed in Theorem 2.1.
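The convex program of Theorem 2.1 is simple enough to solve numerically for small $n$. The following sketch is an illustration under the stated setup, using plain projected gradient descent rather than the polynomial-time method analyzed in [GS02]; it exploits the fact that the coordinates of the gradient of $f$ always sum to $n$, so projecting the gradient onto $H$ amounts to subtracting 1 from each coordinate:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
# a generic triple of positive definite matrices to be scaled
Q = [(lambda G: G @ G.T + np.eye(n))(rng.normal(size=(n, n))) for _ in range(n)]

# Minimize f(x) = ln det(sum_i e^{x_i} Q_i) over the hyperplane sum_i x_i = 0.
# With M = sum_i e^{x_i} Q_i we have grad_i f = e^{x_i} tr(M^{-1} Q_i), and the
# coordinates of the gradient sum to tr(M^{-1} M) = n.
x = np.zeros(n)
for _ in range(200000):
    M = sum(np.exp(x[i]) * Q[i] for i in range(n))
    Minv = np.linalg.inv(M)
    grad = np.array([np.exp(x[i]) * np.trace(Minv @ Q[i]) for i in range(n)])
    if np.linalg.norm(grad - 1.0) < 1e-10:
        break            # at the minimum each gradient coordinate equals 1
    x -= 0.1 * (grad - 1.0)

# Build the doubly stochastic tuple of Theorem 2.1: S*S = sum_i e^{xi_i} Q_i,
# T = S^{-1}, B_i = tau_i T* Q_i T with tau_i = e^{xi_i}.
M = sum(np.exp(x[i]) * Q[i] for i in range(n))
S = np.linalg.cholesky(M).T          # upper triangular factor, S.T @ S = M
T = np.linalg.inv(S)
B = [np.exp(x[i]) * T.T @ Q[i] @ T for i in range(n)]

print(np.linalg.norm(sum(B) - np.eye(n)))    # ~ 0: the B_i sum to I
print([float(np.trace(Bi)) for Bi in B])     # each trace ~ 1
```

Note that $\sum_i B_i = T^*\left(\sum_i e^{x_i} Q_i\right) T = I$ holds by construction for any $x$; it is the trace condition $\operatorname{tr} B_i = 1$ that is equivalent to stationarity of $f$ on $H$.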

(2.2) Lemma. Suppose that for the matrices $Q_1,\dots,Q_n$ in Theorem 2.1 we have $\sum_{i=1}^n \operatorname{tr} Q_i = n$. Then, for the matrices $B_1,\dots,B_n$ constructed in Theorem 2.1, we have

$D(B_1,\dots,B_n) \ge D(Q_1,\dots,Q_n)$.

Proof. We have

(2.2.1) $D(B_1,\dots,B_n) = (\det T)^2 \left(\prod_{i=1}^n \tau_i\right) D(Q_1,\dots,Q_n)$.

Now,

(2.2.2) $\prod_{i=1}^n \tau_i = \exp\left\{\sum_{i=1}^n \xi_i\right\} = 1$

and

(2.2.3) $(\det T)^2 = \det\left(\sum_{i=1}^n e^{\xi_i} Q_i\right)^{-1} = \exp\{-f(\xi_1,\dots,\xi_n)\}$.

Since $(\xi_1,\dots,\xi_n)$ is the minimum point of $f$ on $H$, we have

(2.2.4) $f(\xi_1,\dots,\xi_n) \le f(0,\dots,0) = \ln\det Q$ where $Q = \sum_{i=1}^n Q_i$.

We observe that $Q$ is a positive definite matrix with eigenvalues, say, $\lambda_1,\dots,\lambda_n$, such that

$\sum_{i=1}^n \lambda_i = \operatorname{tr} Q = \sum_{i=1}^n \operatorname{tr} Q_i = n$ and $\lambda_1,\dots,\lambda_n > 0$.

Applying the arithmetic-geometric mean inequality, we obtain

(2.2.5) $\det Q = \lambda_1 \cdots \lambda_n \le \left(\dfrac{\lambda_1 + \dots + \lambda_n}{n}\right)^n = 1$.

Combining (2.2.1)-(2.2.5), we complete the proof.

(2.3) From symmetric matrices to quadratic forms. As in Section 1.3, with an $n\times n$ symmetric matrix $Q$ we associate the quadratic form $q: \mathbb{R}^n \to \mathbb{R}$. We define the eigenvalues, the trace, and the determinant of $q$ as those of $Q$. Consequently, we define the mixed discriminant $D(q_1,\dots,q_n)$ of quadratic forms $q_1,\dots,q_n$.

An n-tuple of positive semidefinite quadratic forms $q_1,\dots,q_n: \mathbb{R}^n \to \mathbb{R}$ is doubly stochastic if

$\sum_{i=1}^n q_i(x) = \|x\|^2$ for all $x \in \mathbb{R}^n$ and $\operatorname{tr} q_1 = \dots = \operatorname{tr} q_n = 1$.

An n-tuple of quadratic forms $p_1,\dots,p_n: \mathbb{R}^n \to \mathbb{R}$ is obtained from an n-tuple $q_1,\dots,q_n: \mathbb{R}^n \to \mathbb{R}$ by scaling if for some invertible linear transformation $T: \mathbb{R}^n \to \mathbb{R}^n$ and real $\tau_1,\dots,\tau_n > 0$ we have

$p_i(x) = \tau_i q_i(Tx)$ for all $x \in \mathbb{R}^n$ and all $i = 1,\dots,n$.

One advantage of working with quadratic forms as opposed to matrices is that it is particularly easy to define the restriction of a quadratic form onto a subspace. We will use the following construction: suppose that $q_1,\dots,q_n: \mathbb{R}^n \to \mathbb{R}$ are positive definite quadratic forms and let $L \subset \mathbb{R}^n$ be an $m$-dimensional subspace for some $1 \le m \le n$. Then $L$ inherits the Euclidean structure of $\mathbb{R}^n$ and we can consider the restrictions $\tilde q_1,\dots,\tilde q_n: L \to \mathbb{R}$ of $q_1,\dots,q_n$ onto $L$. Thus we can define the mixed discriminant $D(\tilde q_1,\dots,\tilde q_m)$. Note that by choosing an orthonormal basis in $L$, we can associate $m\times m$ symmetric matrices $\tilde Q_1,\dots,\tilde Q_m$ with $\tilde q_1,\dots,\tilde q_m$. A different choice of an orthonormal basis results in the transformation $\tilde Q_i \mapsto U^* \tilde Q_i U$ for some $m\times m$ orthogonal matrix $U$ and $i = 1,\dots,m$, which does not change the mixed discriminant $D(\tilde Q_1,\dots,\tilde Q_m)$.

(2.4) Lemma. Let $q_1,\dots,q_n: \mathbb{R}^n \to \mathbb{R}$ be an $\alpha$-conditioned n-tuple of positive definite quadratic forms. Let $L \subset \mathbb{R}^n$ be an $m$-dimensional subspace, where $1 \le m \le n$, let $T: L \to \mathbb{R}^n$ be a linear transformation such that $\ker T = \{0\}$ and let $\tau_1,\dots,\tau_m > 0$ be reals. Let us define quadratic forms $p_1,\dots,p_m: L \to \mathbb{R}$ by

$p_i(x) = \tau_i q_i(Tx)$ for $x \in L$ and $i = 1,\dots,m$.

Suppose that

$\sum_{i=1}^m p_i(x) = \|x\|^2$ for all $x \in L$ and $\operatorname{tr} p_i = 1$ for $i = 1,\dots,m$.
Then the m-tuple of quadratic forms $p_1,\dots,p_m$ is $\alpha^2$-conditioned.

This version of Lemma 2.4 and the following proof were suggested by the anonymous referee. It replaces an earlier version with a weaker bound of $\alpha^4$ instead of $\alpha^2$.

Proof of Lemma 2.4. Let us define a quadratic form $q: \mathbb{R}^n \to \mathbb{R}$ by

$q(x) = \sum_{i=1}^m \tau_i q_i(x)$ for all $x \in \mathbb{R}^n$.

Then $q$ is $\alpha$-conditioned and $q(Tx) = \sum_{i=1}^m p_i(x) = \|x\|^2$ for all $x \in L$, so for each $x, y \in L$ such that $\|x\| = \|y\| = 1$ we have

$1 = q(Tx) \ge \lambda_{\min}(q) \|Tx\|^2$ and $1 = q(Ty) \le \lambda_{\max}(q) \|Ty\|^2$,

from which it follows that

$\|Tx\|^2 \le \dfrac{\lambda_{\max}(q)}{\lambda_{\min}(q)} \|Ty\|^2$

and hence

(2.4.1) $\|Tx\|^2 \le \alpha \|Ty\|^2$ for all $x, y \in L$ such that $\|x\| = \|y\| = 1$.

Applying (2.4.1) and using that the form $q_i$ is $\alpha$-conditioned, we obtain

(2.4.2) $p_i(x) = \tau_i q_i(Tx) \le \tau_i \lambda_{\max}(q_i) \|Tx\|^2 \le \alpha \tau_i \lambda_{\max}(q_i) \|Ty\|^2 \le \alpha^2 \tau_i \lambda_{\min}(q_i) \|Ty\|^2 \le \alpha^2 \tau_i q_i(Ty) = \alpha^2 p_i(y)$

for all $x, y \in L$ such that $\|x\| = \|y\| = 1$, and hence each form $p_i$ is $\alpha^2$-conditioned.

Let us define quadratic forms $r_i: L \to \mathbb{R}$, $i = 1,\dots,m$, by

$r_i(x) = q_i(Tx)$ for $x \in L$ and $i = 1,\dots,m$.

Then

$r_i(x) \le \alpha r_j(x)$ for all $1 \le i,j \le m$ and all $x \in L$.

Therefore,

$\operatorname{tr} r_i \le \alpha \operatorname{tr} r_j$ for all $1 \le i,j \le m$.

Since $1 = \operatorname{tr} p_i = \tau_i \operatorname{tr} r_i$, we conclude that $\tau_i = 1/\operatorname{tr} r_i$ and, therefore,

(2.4.3) $\tau_i \le \alpha \tau_j$ for all $1 \le i,j \le m$.

Applying (2.4.3) and using that the n-tuple $q_1,\dots,q_n$ is $\alpha$-conditioned, we obtain

(2.4.4) $p_i(x) = \tau_i q_i(Tx) \le \alpha \tau_j q_i(Tx) \le \alpha^2 \tau_j q_j(Tx) = \alpha^2 p_j(x)$ for all $x \in L$.

Combining (2.4.2) and (2.4.4), we conclude that the m-tuple $p_1,\dots,p_m$ is $\alpha^2$-conditioned.

(2.5) Lemma. Let $q_1,\dots,q_n: \mathbb{R}^n \to \mathbb{R}$ be positive semidefinite quadratic forms and suppose that $q_n(x) = \langle u, x\rangle^2$, where $u \in \mathbb{R}^n$ and $\|u\| = 1$. Let $H = u^\perp$ be the orthogonal complement of $u$ and let $\tilde q_1,\dots,\tilde q_{n-1}: H \to \mathbb{R}$ be the restrictions of $q_1,\dots,q_{n-1}$ onto $H$. Then

$D(q_1,\dots,q_n) = D(\tilde q_1,\dots,\tilde q_{n-1})$.

Proof. Let us choose an orthonormal basis of $\mathbb{R}^n$ for which $u$ is the last basis vector and let $Q_1,\dots,Q_n$ be the matrices of the forms $q_1,\dots,q_n$ in that basis. Then the only non-zero entry of $Q_n$ is a 1 in the lower right corner. Let $\tilde Q_1,\dots,\tilde Q_{n-1}$ be the upper left $(n-1)\times(n-1)$ submatrices of $Q_1,\dots,Q_{n-1}$. Then

$\det(t_1 Q_1 + \dots + t_n Q_n) = t_n \det\left(t_1 \tilde Q_1 + \dots + t_{n-1} \tilde Q_{n-1}\right) + \{\text{terms not containing } t_n\}$

and hence by (1.1.1) we have

$D(Q_1,\dots,Q_n) = D(\tilde Q_1,\dots,\tilde Q_{n-1})$.

On the other hand, $\tilde Q_1,\dots,\tilde Q_{n-1}$ are the matrices of $\tilde q_1,\dots,\tilde q_{n-1}$.

Finally, the last lemma before we embark on the proof of Theorems 1.4 and 1.6.

(2.6) Lemma. Let $q: \mathbb{R}^n \to \mathbb{R}$ be an $\alpha$-conditioned quadratic form such that $\operatorname{tr} q = 1$. Let $H \subset \mathbb{R}^n$ be a hyperplane and let $\tilde q$ be the restriction of $q$ onto $H$. Then

$\operatorname{tr} \tilde q \ge 1 - \dfrac{\alpha}{n}$.

Proof. Let $0 < \lambda_1 \le \dots \le \lambda_n$ be the eigenvalues of $q$. Then $\sum_{i=1}^n \lambda_i = 1$ and $\lambda_n \le \alpha \lambda_1$, from which it follows that $\lambda_n \le \alpha/n$ (since $\lambda_1 \le 1/n$). As is known, the eigenvalues of $\tilde q$ interlace the eigenvalues of $q$, see, for example, Section 1.3 of [Ta12], so for the eigenvalues $\mu_1,\dots,\mu_{n-1}$ of $\tilde q$ we have

$\lambda_1 \le \mu_1 \le \lambda_2 \le \dots \le \lambda_{n-1} \le \mu_{n-1} \le \lambda_n$.

Therefore,

$\operatorname{tr} \tilde q = \sum_{i=1}^{n-1} \mu_i \ge \sum_{i=1}^{n-1} \lambda_i = 1 - \lambda_n \ge 1 - \dfrac{\alpha}{n}$.
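Lemma 2.5 can also be confirmed numerically: restrict random forms to $u^\perp$ via an orthonormal basis of that hyperplane and compare mixed discriminants. The sketch below is illustrative only; it again evaluates the mixed discriminant by the brute-force inclusion-exclusion formula:

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
n = 3

def mixed_discriminant(mats):
    # inclusion-exclusion evaluation; expects k matrices of size k x k
    k = len(mats)
    d = mats[0].shape[0]
    total = 0.0
    for r in range(k + 1):
        for S in itertools.combinations(range(k), r):
            M = sum((mats[i] for i in S), start=np.zeros((d, d)))
            total += (-1) ** (k - r) * np.linalg.det(M)
    return total

Q = [(lambda G: G @ G.T)(rng.normal(size=(n, n))) for _ in range(n - 1)]
u = rng.normal(size=n)
u /= np.linalg.norm(u)
Qn = np.outer(u, u)                       # the rank-one form q_n(x) = <u,x>^2

# an orthonormal basis of H = u-perp: columns 2..n of a QR factor whose
# first column is parallel to u
V = np.linalg.qr(np.column_stack([u, rng.normal(size=(n, n - 1))]))[0][:, 1:]
Qres = [V.T @ Qi @ V for Qi in Q]         # matrices of the restrictions onto H

d_full = mixed_discriminant(Q + [Qn])
d_restr = mixed_discriminant(Qres)
print(d_full, d_restr)                    # the two values coincide
```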

3. Proof of Theorem 1.4 and Theorem 1.6

Clearly, Theorem 1.6 implies Theorem 1.4, so it suffices to prove the former.

(3.1) Proof of Theorem 1.6. As in Section 2.3, we associate quadratic forms with matrices. We prove the following statement by induction on $m = 1,\dots,n$.

Statement: Let $q_1,\dots,q_n: \mathbb{R}^n \to \mathbb{R}$ be an $\alpha$-conditioned n-tuple of positive definite quadratic forms. Let $L \subset \mathbb{R}^n$ be an $m$-dimensional subspace, $1 \le m \le n$, let $T: L \to \mathbb{R}^n$ be a linear transformation such that $\ker T = \{0\}$ and let $\tau_1,\dots,\tau_m > 0$ be reals. Let us define quadratic forms $p_i: L \to \mathbb{R}$, $i = 1,\dots,m$, by

$p_i(x) = \tau_i q_i(Tx)$ for $x \in L$ and $i = 1,\dots,m$,

and suppose that

$\sum_{i=1}^m p_i(x) = \|x\|^2$ for all $x \in L$ and $\operatorname{tr} p_i = 1$ for $i = 1,\dots,m$.

Then

(3.1.1) $D(p_1,\dots,p_m) \le \exp\left\{-(m-1) + \alpha^2 \sum_{k=2}^m \dfrac{1}{k}\right\}$.

In the case of $m = n$, we get the desired result (note that $\sum_{k=2}^n 1/k \le \ln n$).

The statement holds if $m = 1$, since in that case $D(p_1) = \det p_1 = 1$. Suppose that $m > 1$. Let $L \subset \mathbb{R}^n$ be an $m$-dimensional subspace and let the linear transformation $T$, the numbers $\tau_i$ and the forms $p_i$ for $i = 1,\dots,m$ be as above. By Lemma 2.4, the m-tuple $p_1,\dots,p_m$ is $\alpha^2$-conditioned. We write the spectral decomposition

$p_m(x) = \sum_{j=1}^m \lambda_j \langle u_j, x\rangle^2$,

where $u_1,\dots,u_m \in L$ are the unit eigenvectors of $p_m$ and $\lambda_1,\dots,\lambda_m > 0$ are the corresponding eigenvalues of $p_m$. Since $\operatorname{tr} p_m = 1$, we have $\lambda_1 + \dots + \lambda_m = 1$. Let $L_j = u_j^\perp \cap L$ be the orthogonal complement of $u_j$ in $L$ and let $\tilde p_{ij}: L_j \to \mathbb{R}$ for $i = 1,\dots,m$ and $j = 1,\dots,m$ be the restriction of $p_i$ onto $L_j$.

Using the linearity of the mixed discriminant in each argument and Lemma 2.5, we write

(3.1.2) $D(p_1,\dots,p_m) = \sum_{j=1}^m \lambda_j D\left(p_1,\dots,p_{m-1}, \langle u_j, x\rangle^2\right) = \sum_{j=1}^m \lambda_j D\left(\tilde p_{1j},\dots,\tilde p_{(m-1)j}\right)$,

where $\sum_{j=1}^m \lambda_j = 1$ and $\lambda_j > 0$ for $j = 1,\dots,m$.

Let

$\sigma_j = \operatorname{tr} \tilde p_{1j} + \dots + \operatorname{tr} \tilde p_{(m-1)j}$ for $j = 1,\dots,m$.

Since

$\sum_{i=1}^{m-1} \tilde p_{ij}(x) = \|x\|^2 - \tilde p_{mj}(x)$ for all $x \in L_j$ and $j = 1,\dots,m$,

and since the form $p_m$ is $\alpha^2$-conditioned with $\operatorname{tr} p_m = 1$, by Lemma 2.6 we have $\operatorname{tr} \tilde p_{mj} \ge 1 - \alpha^2/m$ and hence

(3.1.3) $\sigma_j \le m - 2 + \dfrac{\alpha^2}{m}$ for $j = 1,\dots,m$.

Let us define

$r_{ij} = \dfrac{m-1}{\sigma_j} \tilde p_{ij}$ for $i = 1,\dots,m-1$ and $j = 1,\dots,m$.

Then by (3.1.3),

(3.1.4) $D\left(\tilde p_{1j},\dots,\tilde p_{(m-1)j}\right) = \left(\dfrac{\sigma_j}{m-1}\right)^{m-1} D\left(r_{1j},\dots,r_{(m-1)j}\right) \le \left(1 - \dfrac{1}{m-1} + \dfrac{\alpha^2}{m(m-1)}\right)^{m-1} D\left(r_{1j},\dots,r_{(m-1)j}\right) \le \exp\left\{-1 + \dfrac{\alpha^2}{m}\right\} D\left(r_{1j},\dots,r_{(m-1)j}\right)$

for $j = 1,\dots,m$. In addition,

(3.1.5) $\operatorname{tr} r_{1j} + \dots + \operatorname{tr} r_{(m-1)j} = m - 1$ for $j = 1,\dots,m$.

For each $j = 1,\dots,m$, let $w_{1j},\dots,w_{(m-1)j}: L_j \to \mathbb{R}$ be a doubly stochastic $(m-1)$-tuple of quadratic forms obtained from $r_{1j},\dots,r_{(m-1)j}$ by scaling as described in Theorem 2.1. From (3.1.5) and Lemma 2.2, we have

(3.1.6) $D\left(r_{1j},\dots,r_{(m-1)j}\right) \le D\left(w_{1j},\dots,w_{(m-1)j}\right)$ for $j = 1,\dots,m$.

Finally, for each $j = 1,\dots,m$, we are going to apply the induction hypothesis to the $(m-1)$-tuple of quadratic forms $w_{1j},\dots,w_{(m-1)j}: L_j \to \mathbb{R}$. Since the $(m-1)$-tuple is doubly stochastic, we have

(3.1.7) $\sum_{i=1}^{m-1} w_{ij}(x) = \|x\|^2$ for all $x \in L_j$ and all $j = 1,\dots,m$

and

$\operatorname{tr} w_{ij} = 1$ for all $i = 1,\dots,m-1$ and $j = 1,\dots,m$.

Since the $(m-1)$-tuple $w_{1j},\dots,w_{(m-1)j}$ is obtained from the $(m-1)$-tuple $r_{1j},\dots,r_{(m-1)j}$ by scaling, there are invertible linear operators $S_j: L_j \to L_j$ and real numbers $\mu_{ij} > 0$ for $i = 1,\dots,m-1$ and $j = 1,\dots,m$ such that

$w_{ij}(x) = \mu_{ij} r_{ij}(S_j x)$ for all $x \in L_j$ and all $i = 1,\dots,m-1$ and $j = 1,\dots,m$.

In other words,

(3.1.8) $w_{ij}(x) = \mu_{ij} r_{ij}(S_j x) = \dfrac{\mu_{ij}(m-1)}{\sigma_j} \tilde p_{ij}(S_j x) = \dfrac{\mu_{ij}(m-1)}{\sigma_j} p_i(S_j x) = \dfrac{\mu_{ij}(m-1)\tau_i}{\sigma_j} q_i(T S_j x)$ for all $x \in L_j$

and all $i = 1,\dots,m-1$ and $j = 1,\dots,m$.

Since for each $j = 1,\dots,m$ the linear transformation $T S_j: L_j \to \mathbb{R}^n$ of the $(m-1)$-dimensional subspace $L_j \subset \mathbb{R}^n$ has zero kernel, from (3.1.7) and (3.1.8) we can apply the induction hypothesis to conclude that

(3.1.9) $D\left(w_{1j},\dots,w_{(m-1)j}\right) \le \exp\left\{-(m-2) + \alpha^2 \sum_{k=2}^{m-1} \dfrac{1}{k}\right\}$ for $j = 1,\dots,m$.

Combining (3.1.2) with the inequalities (3.1.4), (3.1.6) and (3.1.9), we obtain (3.1.1) and conclude the induction step.

Acknowledgment

I am grateful to the anonymous referee for the careful reading of the paper and for suggesting numerous improvements, the most significant of which are a less restrictive definition of an $\alpha$-conditioned n-tuple of positive definite matrices and a proof of Lemma 2.4 with a better bound.

References

[Al38] A.D. Alexandrov, On the theory of mixed volumes of convex bodies. IV. Mixed discriminants and mixed volumes (Russian), Matematicheskii Sbornik (Novaya Seriya) 3 (1938), 227-251.
[Ba89] R.B. Bapat, Mixed discriminants of positive semidefinite matrices, Linear Algebra and its Applications 126 (1989), 107-124.
[BR97] R.B. Bapat and T.E.S. Raghavan, Nonnegative Matrices and Applications, Encyclopedia of Mathematics and its Applications, 64, Cambridge University Press, Cambridge, 1997.
[Br73] L.M. Bregman, Certain properties of nonnegative matrices and their permanents (Russian), Doklady Akademii Nauk SSSR 211 (1973), 27-30.
[Gu06] L. Gurvits, The van der Waerden conjecture for mixed discriminants, Advances in Mathematics 200 (2006), no. 2, 435-454.
[Gu08] L. Gurvits, Van der Waerden/Schrijver-Valiant like conjectures and stable (aka hyperbolic) homogeneous polynomials: one theorem for all. With a corrigendum, Electronic Journal of Combinatorics 15 (2008), no. 1, Research Paper 66, 26 pp.
[GS02] L. Gurvits and A. Samorodnitsky, A deterministic algorithm for approximating the mixed discriminant and mixed volume, and a combinatorial corollary, Discrete & Computational Geometry 27 (2002), no. 4, 531-550.
[Le93] K. Leichtweiß, Convexity and differential geometry, Handbook of Convex Geometry, Vol. A, B, North-Holland, Amsterdam, 1993, pp. 1045-1080.
[Mi63] H. Minc, Upper bounds for permanents of (0,1)-matrices, Bulletin of the American Mathematical Society 69 (1963), 789-791.
[Sa00] A. Samorodnitsky, personal communication (2000).
[Sc78] A. Schrijver, A short proof of Minc's conjecture, Journal of Combinatorial Theory, Series A 25 (1978), no. 1, 80-83.
[Si64] R. Sinkhorn, A relationship between arbitrary positive matrices and doubly stochastic matrices, Annals of Mathematical Statistics 35 (1964), 876-879.
[So03] G.W. Soules, New permanental upper bounds for nonnegative matrices, Linear and Multilinear Algebra 51 (2003), no. 4, 319-337.
[Ta12] T. Tao, Topics in Random Matrix Theory, Graduate Studies in Mathematics, 132, American Mathematical Society, Providence, RI, 2012.

Department of Mathematics, University of Michigan, Ann Arbor, MI 48109-1043, USA
E-mail address: barvinok@umich.edu