Interlacing Families III: Sharper Restricted Invertibility Estimates


Adam W. Marcus (Princeton University), Daniel A. Spielman (Yale University), Nikhil Srivastava (UC Berkeley)

arXiv [math.FA], December 22, 2017

Abstract

We use the method of interlacing families of polynomials to derive a simple proof of Bourgain and Tzafriri's Restricted Invertibility Principle, and then to sharpen the result in two ways. We show that the stable rank can be replaced by the Schatten 4-norm stable rank, and that tighter bounds hold when the number of columns in the matrix under consideration does not greatly exceed its number of rows. Our bounds are derived from an analysis of the smallest zeros of Jacobi and associated Laguerre polynomials.

1 Introduction

The Restricted Invertibility Principle of Bourgain and Tzafriri [BT87] is a quantitative generalization of the assertion that the rank of a $d \times m$ matrix $B$ is the maximum number of linearly independent columns that it contains. It says that if a matrix $B$ has high stable rank,

$$\mathrm{srank}(B) \stackrel{\mathrm{def}}{=} \frac{\|B\|_F^2}{\|B\|^2},$$

then it must contain a large column submatrix $B_S$, of size $d \times |S|$, with large least singular value, defined as

$$\sigma_{\min}(B_S) \stackrel{\mathrm{def}}{=} \min_{x \neq 0} \frac{\|B_S x\|}{\|x\|}.$$

The least singular value of $B_S$ is a measure of how far the matrix is from being singular. Bourgain and Tzafriri's result was strengthened in the works [Ver01, SS12, You14, NY17], and has since been a useful tool in Banach space theory, data mining, and more recently theoretical computer science. Prior to this work, the sharpest result of this type was the following theorem of Spielman and Srivastava [SS12].

Many of the results in this paper were first announced in lectures by the authors in 2013. This research was partially supported by NSF grants CCF, CCF-257, CCF-56204, CCF-55375, DMS, DMS and DMS, a Sloan Research Fellowship to Nikhil Srivastava, a Simons Investigator Award to Daniel Spielman, and a MacArthur Fellowship.

Theorem 1.1. Suppose $B$ is a $d \times m$ matrix and $k \le \mathrm{srank}(B)$ is an integer. Then there exists a subset $S \subseteq [m]$ of size $k$ such that

$$\sigma_{\min}(B_S)^2 \ge \left(1 - \sqrt{\frac{k}{\mathrm{srank}(B)}}\right)^2 \frac{\|B\|_F^2}{m}. \tag{1}$$

Note that when $k$ is proportional to $\mathrm{srank}(B)$, Theorem 1.1 produces a submatrix whose squared least singular value is at least a constant times the average squared norm of the columns of $B$, a bound which cannot be improved even for $k = 1$. Thus, the theorem tells us that the columns of $B_S$ are almost orthogonal, in that they have least singular value comparable to the average squared norm of the vectors individually.

To understand the form of the bound in (1), consider the case when $BB^T = I_d$, which is sometimes called the isotropic case. In this situation we have $\mathrm{srank}(B) = d$, and the right hand side of (1) becomes

$$\left(1 - \sqrt{k/d}\right)^2 \frac{d}{m}. \tag{2}$$

The number $(1 - \sqrt{k/d})^2$ may seem familiar, and arises in the following two contexts.

1. It is an asymptotically sharp lower bound on the least zero of the associated Laguerre polynomial $L_k^{(d-k)}(x)$, after an appropriate scaling. In Section 3 we derive the isotropic case of Theorem 1.1 from this fact, using the method of interlacing families of polynomials.

2. It is the lower edge of the support of the Marchenko-Pastur distribution [MP67], which is the limiting spectral distribution of a sequence of random matrices $G_d G_d^T$, where $G_d$ is $k \times d$ with appropriately normalized i.i.d. Gaussian entries, as $d \to \infty$ with $k/d$ fixed [MP67]. This convergence result along with large deviation estimates may be used to show that it is not possible to obtain a bound of

$$\left(1 - \sqrt{\frac{k}{\mathrm{srank}(B)}} + \delta\right)^2 \frac{\|B\|_F^2}{m} \tag{3}$$

in Theorem 1.1 for any constant $\delta > 0$, when $m$ goes to infinity significantly faster than $d$. Thus, the bound of Theorem 1.1 is asymptotically sharp. See [Sri] for details.

In Section 3 we present a proof of Theorem 1.1 in the isotropic case using the method of interlacing polynomials. This proof considers the expected characteristic polynomial of $B_S B_S^T$ for a randomly chosen $S$.
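The Marchenko-Pastur lower edge mentioned in item 2 can be observed numerically. The following is a quick simulation sketch (assuming numpy is available; it is not from the paper): for a $k \times d$ Gaussian matrix with entries of variance $1/d$, the smallest eigenvalue of $GG^T$ should land near $(1 - \sqrt{k/d})^2$ once $d$ is large.

```python
import numpy as np

# Illustration of the Marchenko-Pastur lower edge (1 - sqrt(k/d))^2:
# for a k x d matrix G with i.i.d. N(0, 1/d) entries, the smallest
# eigenvalue of G G^T concentrates near the edge as d grows with k/d fixed.
rng = np.random.default_rng(0)
d, k = 2000, 500                      # aspect ratio k/d = 1/4
G = rng.standard_normal((k, d)) / np.sqrt(d)
lam_min = np.linalg.eigvalsh(G @ G.T).min()
edge = (1 - np.sqrt(k / d)) ** 2      # = 0.25 for this aspect ratio
assert abs(lam_min - edge) < 0.05
```

The tolerance here is loose; the fluctuation of the smallest eigenvalue around the edge shrinks as $d$ grows.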
In this first proof, we choose $S$ by sampling $k$ columns with replacement. This seems like a suboptimal thing to do, since it may select a column twice (corresponding to a trivial bound of $\sigma_{\min}(B_S) = 0$), but it allows us to easily prove that the expected characteristic polynomial is an associated Laguerre polynomial and that the family of polynomials that arise in the expectation form an interlacing family, which we define below. Because these polynomials form an interlacing family, there is some polynomial in the family whose $k$th largest zero is at least the $k$th largest zero of the expected polynomial. The bound (2) then follows from lower bounds on the zeros of associated Laguerre polynomials. A version of this proof appeared in the authors' survey paper [MSS14].

In Section 4, we extend this proof technique to show that Theorem 1.1 remains true in the nonisotropic case. In addition, we show a bound that replaces the stable rank with a Schatten 4-norm stable rank, defined by

$$\mathrm{srank}_4(B) \stackrel{\mathrm{def}}{=} \frac{\|B\|_2^4}{\|B\|_4^4},$$

where $\|B\|_p$ denotes the Schatten $p$-norm, i.e., the $\ell_p$ norm of the singular values of $B$. That is,

$$\mathrm{srank}_4(B) = \frac{\left(\sum_i \sigma_i^2\right)^2}{\sum_i \sigma_i^4},$$

where $\sigma_1 \ge \dots \ge \sigma_d$ are the singular values of $B$. As

$$\mathrm{srank}_4(B) = \frac{\left(\sum_i \sigma_i^2\right)^2}{\sum_i \sigma_i^4} \ge \frac{\left(\sum_i \sigma_i^2\right)^2}{\sigma_1^2 \sum_i \sigma_i^2} = \frac{\sum_i \sigma_i^2}{\sigma_1^2} = \mathrm{srank}(B),$$

this is a strict improvement on Theorem 1.1. The above inequality is far from tight when $B$ has many moderately large singular values. In Section 4.1 we give a polynomial time algorithm for finding the subset guaranteed to exist by Theorem 1.1.

In Section 5, we improve on Theorem 1.1 in the isotropic case by sampling the sets $S$ without replacement. We show that the resulting expected characteristic polynomials are scaled Jacobi polynomials. We then derive a new bound on the smallest zeros of Jacobi polynomials which implies that there exists a set $S$ of $k$ columns for which

$$\sigma_{\min}(B_S)^2 \ge \frac{\left(\sqrt{d(m-k)} - \sqrt{k(m-d)}\right)^2}{m^2}. \tag{4}$$

As

$$\frac{\left(\sqrt{d(m-k)} - \sqrt{k(m-d)}\right)^2}{m^2} \ge \left(1 + \frac{\sqrt{dk}}{m}\right)\left(1 - \sqrt{k/d}\right)^2 \frac{d}{m},$$

this improves on Theorem 1.1 by a constant factor when $m$ is a constant multiple of $d$. Note that this does not contradict the lower bound (3) from [Sri], which requires that $m \gg d$.

A number of the results in this paper require a bound on the smallest root of a polynomial. In order to be as self-contained as possible, we will either prove such bounds directly or take the best known bound directly from the literature. It is worth noting, however, that a more generic way of proving each of these bounds is provided by the framework of polynomial convolutions developed in [MSS15a]. The necessary inequalities in [MSS15a] are known to be asymptotically tight (as shown in [Mar15]) and in some cases improve on the bounds given here.

2 Preliminaries

2.1 Notation

We denote the Euclidean norm of a vector $x$ by $\|x\|$.
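The inequality $\mathrm{srank}_4(B) \ge \mathrm{srank}(B)$ above is easy to confirm numerically. A minimal sketch (assuming numpy; not part of the paper):

```python
import numpy as np

# srank_4(B) = ||B||_2^4 / ||B||_4^4 (Schatten norms) dominates
# srank(B) = ||B||_F^2 / ||B||^2, per the inequality chain above.
rng = np.random.default_rng(1)
B = rng.standard_normal((5, 12))
s = np.linalg.svd(B, compute_uv=False)       # singular values of B
srank = (s**2).sum() / s.max()**2            # ||B||_F^2 / ||B||^2
srank4 = (s**2).sum()**2 / (s**4).sum()      # (sum s_i^2)^2 / sum s_i^4
assert srank4 >= srank - 1e-12
```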
We denote the operator norm of $B$ by

$$\|B\| \stackrel{\mathrm{def}}{=} \max_{\|x\| = 1} \|Bx\|.$$

This also equals the largest singular value of the matrix $B$. The Frobenius norm of $B$, also known as the Hilbert-Schmidt norm and written $\|B\|_2$, is the square root of the sum of the squares of the singular values of $B$. It is also equal to the square root of the sum of the squares of the entries of $B$.

For a real rooted polynomial $p$, we let $\lambda_k(p)$ denote the $k$th largest zero of $p$. When we want to refer to the smallest zero of a polynomial $p$ without specifying its degree, we will call it $\lambda_{\min}(p)$.

We define the $l$th elementary symmetric function of a matrix $A$ with eigenvalues $\lambda_1, \dots, \lambda_d$ to be the $l$th elementary symmetric function of those eigenvalues:

$$e_l(A) = \sum_{S \subseteq [d],\, |S| = l}\; \prod_{i \in S} \lambda_i.$$

Thus, the characteristic polynomial of $A$ may be expressed as

$$\det[xI - A] = \sum_{l=0}^d (-1)^l e_l(A)\, x^{d-l}.$$

By inspecting the Leibniz expression for the determinant in terms of permutations, it is easy to see that the $e_l$ may also be expanded in terms of minors. The Cauchy-Binet identity says that for every $d \times m$ matrix $B$,

$$e_l(B^T B) = \sum_{T \in \binom{[m]}{l}} \det\left[B_T^T B_T\right],$$

where $T$ ranges over all subsets of size $l$ of indices in $[m] \stackrel{\mathrm{def}}{=} \{1, \dots, m\}$, and $B_T$ denotes the $d \times l$ matrix formed by the columns of $B$ specified by $T$.

We will use the following two formulas to calculate determinants and characteristic polynomials of matrices. You may prove them yourself, or find proofs in [Mey00, Chapter 6] or [Har97, Section 5.8].

Lemma 2.1. For any invertible matrix $A$ and vector $u$,

$$\det\left[A + uu^T\right] = \det[A]\left(1 + u^T A^{-1} u\right) = \det[A]\left(1 + \mathrm{Tr}\left[A^{-1} uu^T\right]\right).$$

Lemma 2.2 (Jacobi's formula). For any square matrices $A$, $B$,

$$\partial_x \det[xA + B] = \det[xA + B]\,\mathrm{Tr}\left[A(xA + B)^{-1}\right].$$

We also use the following consequence of these formulas that was derived in [MSS15c, Lemma 4.2].

Lemma 2.3. For every square matrix $A$ and random vector $r$,

$$\mathbb{E}\,\det\left[A - rr^T\right] = \left(1 - \partial_t\right)\det\left[A + t\,\mathbb{E}\,rr^T\right]\Big|_{t=0}.$$
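Lemma 2.1 and the Cauchy-Binet identity are both easy to verify numerically. A quick check (assuming numpy; not from the paper):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
d, m, l = 3, 5, 2

# Lemma 2.1 (matrix determinant lemma).
A = rng.standard_normal((d, d)); A = A @ A.T + np.eye(d)   # invertible
u = rng.standard_normal(d)
lhs = np.linalg.det(A + np.outer(u, u))
rhs = np.linalg.det(A) * (1 + u @ np.linalg.solve(A, u))
assert np.isclose(lhs, rhs)

# Cauchy-Binet: e_l(B^T B) = sum over l-subsets T of det(B_T^T B_T).
B = rng.standard_normal((d, m))
eig = np.linalg.eigvalsh(B.T @ B)
e_l = sum(np.prod([eig[i] for i in S]) for S in combinations(range(m), l))
cb = sum(np.linalg.det(B[:, T].T @ B[:, T]) for T in combinations(range(m), l))
assert np.isclose(e_l, cb)
```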

2.2 $\alpha$min

We bound the zeros of the polynomials we construct by using the barrier function arguments developed in [BSS12], [MSS15c], and [MSS15a]. For $\alpha > 0$, we define the $\alpha$min of a polynomial $p(x)$ to be the least root of $p(x) + \alpha\,\partial_x p(x)$, where $\partial_x$ indicates partial differentiation with respect to $x$. We sometimes write this in the compact form

$$\alpha\mathrm{min}(p(x)) \stackrel{\mathrm{def}}{=} \lambda_{\min}\left(p(x) + \alpha p'(x)\right).$$

We may also define this in terms of the lower barrier function

$$\Phi_p(x) \stackrel{\mathrm{def}}{=} -\frac{p'(x)}{p(x)}$$

by observing

$$\alpha\mathrm{min}(p) = \min\{z : \Phi_p(z) = 1/\alpha\}.$$

As

$$\Phi_p(x) = \sum_{i=1}^d \frac{1}{\lambda_i - x},$$

we see that $\alpha\mathrm{min}(p)$ is less than the least root of $p(x)$. The following claim is elementary.

Claim 2.4. For $\alpha > 0$, $\alpha\mathrm{min}(x^k) = -k\alpha$.

2.3 Interlacing Families

We use the method of interlacing families of polynomials developed in [MSS15b, MSS15c] to relate the zeros of sums of polynomials to individual polynomials in the sum. The results in Section 5 will require the following variant of the definition, which is more general than the ones we have used previously.

Definition 2.5. An interlacing family consists of a finite rooted tree $T$ and a labeling of the nodes $v \in T$ by monic real-rooted polynomials $f_v(x) \in \mathbb{R}[x]$, with two properties:

a. Every polynomial $f_v(x)$ corresponding to a non-leaf node $v$ is a convex combination of the polynomials corresponding to the children of $v$.

b. For all nodes $v_1, v_2 \in T$ with a common parent, all convex combinations of $f_{v_1}(x)$ and $f_{v_2}(x)$ are real-rooted.[2]

We say that a set of polynomials is an interlacing family if they are the labels of the leaves of such a tree.

[2] This condition implies that all convex combinations of all the children of a node are real-rooted; the equivalence is discussed in [MSS14].
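The $\alpha$min quantity and Claim 2.4 can be checked directly by computing roots of $p + \alpha p'$. A small sketch (assuming numpy; not from the paper):

```python
import numpy as np

def alpha_min(coeffs, alpha):
    """Least root of p + alpha * p', with p given by coefficients (highest first)."""
    p = np.poly1d(coeffs)
    return min((p + alpha * p.deriv()).roots.real)

alpha, k = 0.5, 4
# Claim 2.4: alpha_min(x^k) = -k * alpha.
assert np.isclose(alpha_min([1, 0, 0, 0, 0], alpha), -k * alpha)

# alpha_min(p) lies strictly below the least root of p.
p = np.poly1d([1.0, -6.0, 11.0, -6.0])   # (x-1)(x-2)(x-3)
assert alpha_min(p.coeffs, alpha) < 1.0
```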

In the applications in this paper, the leaves of the tree will naturally correspond to elements of a probability space, and the internal nodes will correspond to conditional expectations of the corresponding polynomials over this probability space. In Sections 3 and 4, as in [MSS15b, MSS15c], we consider interlacing families in which the nodes of the tree at distance $t$ from the root are indexed by sequences $(s_1, \dots, s_t) \in [m]^t$. We denote the empty sequence and the root node of the tree by $\emptyset$. The leaves of the tree correspond to sequences of length $k$, and each is labeled by a polynomial $f_{s_1,\dots,s_k}(x)$. Each intermediate node is labeled by the average of the polynomials labeling its children. So, for $t < k$,

$$f_{s_1,\dots,s_t}(x) = \frac{1}{m}\sum_{j=1}^m f_{s_1,\dots,s_t,j}(x) = \frac{1}{m^{k-t}}\sum_{s_{t+1},\dots,s_k} f_{s_1,\dots,s_k}(x),$$

and

$$f_{\emptyset}(x) = \frac{1}{m^k}\sum_{s_1,\dots,s_k \in [m]^k} f_{s_1,\dots,s_k}(x).$$

A fortunate choice of polynomials labeling the leaves yields an interlacing family.

Theorem 2.6 (Theorem 4.5 of [MSS15c]). Let $u_1, \dots, u_m$ be vectors in $\mathbb{R}^d$ and let

$$f_{s_1,\dots,s_k} \stackrel{\mathrm{def}}{=} \det\left[xI - \sum_{i=1}^k u_{s_i} u_{s_i}^T\right].$$

Then, these polynomials form an interlacing family.

Interlacing families are useful because they allow us to relate the zeros of the polynomial labeling the root to those labeling the leaves. In particular, we will prove the following slight generalization of Theorem 4.4 of [MSS15b].

Theorem 2.7. Let $f$ be an interlacing family of degree $d$ polynomials with root labeled by $f_{\emptyset}(x)$ and leaves by $\{f_l(x)\}_{l \in L}$. Then for all indices $1 \le j \le d$, there exist leaves $a$ and $b$ such that

$$\lambda_j(f_a) \ge \lambda_j(f_{\emptyset}) \ge \lambda_j(f_b). \tag{5}$$

To prove this theorem, we first explain why we call these families interlacing.

Definition 2.8. We say that a polynomial $g(x) = \prod_{i=1}^{d+1}(x - \alpha_i)$ interlaces a polynomial $f(x) = \prod_{i=1}^d (x - \beta_i)$ if

$$\alpha_1 \ge \beta_1 \ge \alpha_2 \ge \beta_2 \ge \alpha_3 \ge \dots \ge \alpha_d \ge \beta_d \ge \alpha_{d+1}.$$

We say that polynomials $f_1, \dots, f_m$ have a common interlacing if there is a single polynomial $g$ that interlaces $f_i$ for each $i$.

The common interlacing assertions in this paper stem from the following fundamental example.

Claim 2.9.
Let $M$ be a $d$-dimensional symmetric matrix and let $u_1, \dots, u_k$ be vectors in $\mathbb{R}^d$. Then the polynomials

$$f_j(x) = \det\left[xI - M - u_j u_j^T\right]$$

have a common interlacing.

Proof. Let $g_0(x) = \det[xI - M] = \prod_{i=1}^d (x - \alpha_i)$. For any $j$, let $f_j(x) = \prod_{i=1}^d (x - \beta_i)$. Cauchy's interlacing theorem tells us that

$$\beta_1 \ge \alpha_1 \ge \beta_2 \ge \alpha_2 \ge \dots \ge \beta_d \ge \alpha_d.$$

So, for sufficiently large $\alpha$, $(x - \alpha)g_0(x)$ interlaces each $f_j(x)$.

The connection between interlacing and the real-rootedness of convex combinations is given by the following theorem (see [Ded92], [Fel80, Theorem 2'], and [CS07, Theorem 3.6]).

Theorem 2.10. Let $f_1, \dots, f_m$ be real-rooted (univariate) polynomials of the same degree with positive leading coefficients. Then $f_1, \dots, f_m$ have a common interlacing if and only if $\sum_{i=1}^m \mu_i f_i$ is real-rooted for all convex combinations $\mu_i \ge 0$, $\sum_{i=1}^m \mu_i = 1$.

Theorem 2.7 follows from an inductive application of the following lemma, which generalizes the case $k = 1$ that was proven in [MSS15b, MSS15c].

Lemma 2.11. Let $f_1, \dots, f_m$ be real-rooted degree $d$ polynomials that have a common interlacing. Then for every index $1 \le j \le d$ and for every nonnegative $\mu_1, \dots, \mu_m$ such that $\sum_{i=1}^m \mu_i = 1$, there exist an $a$ and a $b$ so that

$$\lambda_j(f_a) \ge \lambda_j\left(\sum_i \mu_i f_i\right) \ge \lambda_j(f_b).$$

Proof. By restricting our attention to the polynomials $f_i$ for which $\mu_i$ is positive, we may assume without loss of generality that $\mu_i$ is positive for every $i$. Define $f_{\emptyset}(x) = \sum_i \mu_i f_i(x)$ and let $f_{\emptyset}(x) = \prod_{i=1}^d (x - \beta_i)$. We seek $a$ and $b$ for which $\lambda_j(f_a) \ge \beta_j \ge \lambda_j(f_b)$. Let

$$g(x) = \prod_{i=1}^{d+1}(x - \alpha_i)$$

be a polynomial that interlaces every $f_i$. As each $f_i$ has a positive leading coefficient, we know that $f_i(\alpha_k)$ is at least $0$ for $k$ odd and at most $0$ for $k$ even.

We first consider the case in which $f_1, \dots, f_m$ do not have any zero in common. In this case, $\alpha_1 > \alpha_2 > \dots > \alpha_{d+1}$: if some $\alpha_k = \alpha_{k+1}$, then $f_i(\alpha_k) = 0$ for all $i$. Moreover, there must be some $i$ for which $f_i(\alpha_k)$ is nonzero. As all the $\mu_i$ are positive, $f_{\emptyset}(\alpha_k)$ is positive for $k$ odd and negative for $k$ even. As there must be some $i$ for which $f_i(\beta_j) \neq 0$, there must be an $a$ for which $f_a(\beta_j) < 0$ and a $b$ for which $f_b(\beta_j) > 0$. We now show that if $j$ is odd, then $\lambda_j(f_a) \ge \beta_j \ge \lambda_j(f_b)$.
As $f_a(\alpha_j)$ is nonnegative and $f_a(\beta_j) < 0$, $f_a$ must have a zero between $\beta_j$ and $\alpha_j$. As $g$ interlaces $f_a$, this is the $j$th largest zero of $f_a$. Similarly, the nonpositivity of $f_b(\alpha_{j+1})$ implies that $f_b$ has a zero between $\alpha_{j+1}$ and $\beta_j$. This must be the $j$th largest zero of $f_b$.

The case of even $j$ is symmetric, except that we reverse the choice of $a$ and $b$.

We finish by observing that it suffices to consider the case in which $f_1, \dots, f_m$ do not have any zero in common. If they do, we let $f_0(x)$ be their greatest common divisor, define $\hat{f}_i(x) = f_i(x)/f_0(x)$, and observe that $\hat{f}_1, \dots, \hat{f}_m$ do not have any common zeros. Thus, we may apply the above argument to these polynomials. As multiplying all the polynomials by $f_0(x)$ adds the same zeros to $f_1, \dots, f_m$ and $f_{\emptyset}$, the theorem holds for these as well.

Proof of Theorem 2.7. For every node $v$ in the tree defining an interlacing family, the subtree rooted at $v$ and the polynomials on the nodes of that tree form an interlacing family of their own. Thus, we may prove the theorem by induction on the height of the tree. Lemma 2.11 handles trees of height $1$. For trees of greater height, Lemma 2.11 tells us that there are children of the root $v_{\hat{a}}$ and $v_{\hat{b}}$ that satisfy (5). If $v_{\hat{a}}$ is not a leaf, then it is the root of its own interlacing family, and induction tells us this family has a leaf node $v_a$ for which

$$\lambda_j(f_{v_a}) \ge \lambda_j(f_{v_{\hat{a}}}) \ge \lambda_j(f_{\emptyset}).$$

The same holds for $v_{\hat{b}}$.

3 The Isotropic Case with Replacement: Laguerre Polynomials

We now prove Theorem 1.1 in the isotropic case. Let the columns of $B$ be the vectors $u_1, \dots, u_m \in \mathbb{R}^d$. The condition $BB^T = I_d$ is equivalent to $\sum_i u_i u_i^T = I_d$. For a set $S$ of size $k < d$,

$$\sigma_{\min}(B_S)^2 = \lambda_k\left(B_S^T B_S\right) = \lambda_k\left(B_S B_S^T\right) = \lambda_k\left(\sum_{i \in S} u_i u_i^T\right).$$

We now consider the expected characteristic polynomial of the sum of the outer products of $k$ of these vectors chosen uniformly at random, with replacement. We indicate one such polynomial by a vector of indices $(s_1, \dots, s_k) \in [m]^k$, where we recall $[m] = \{1, \dots, m\}$. These are the leaves of the tree in the interlacing family:

$$f_{s_1,\dots,s_k}(x) \stackrel{\mathrm{def}}{=} \det\left[xI - \sum_{i=1}^k u_{s_i} u_{s_i}^T\right].$$
As in Section 2.3, the intermediate nodes in the tree are labeled by subsequences of this form, and the polynomial at the root of the tree is

$$f_{\emptyset}(x) = \frac{1}{m^k}\sum_{s_1,\dots,s_k \in [m]^k} f_{s_1,\dots,s_k}(x).$$

We now derive a formula for $f_{\emptyset}(x)$.

Lemma 3.1.

$$f_{\emptyset}(x) = \left(1 - \frac{\partial_x}{m}\right)^k x^d.$$

Proof. For every $0 \le t \le k$, define

$$g_t = \frac{1}{m^t}\sum_{s_1,\dots,s_t \in [m]^t} \det\left[xI - \sum_{i=1}^t u_{s_i} u_{s_i}^T\right].$$

We will prove by induction on $t$ that $g_t = (1 - \partial_x/m)^t x^d$. The base case of $t = 0$ is trivial. To establish the induction, we use Lemma 2.1, the identity $\sum_j u_j u_j^T = I$, and Lemma 2.2 to compute

$$g_{t+1}(x) = \frac{1}{m^{t+1}}\sum_{s_1,\dots,s_t}\sum_j \det\left[xI - \sum_{i=1}^t u_{s_i}u_{s_i}^T - u_j u_j^T\right]$$
$$= \frac{1}{m^{t+1}}\sum_{s_1,\dots,s_t}\sum_j \det\left[xI - \sum_{i=1}^t u_{s_i}u_{s_i}^T\right]\left(1 - \mathrm{Tr}\left[\left(xI - \sum_{i=1}^t u_{s_i}u_{s_i}^T\right)^{-1} u_j u_j^T\right]\right)$$
$$= \frac{1}{m^{t+1}}\sum_{s_1,\dots,s_t} \det\left[xI - \sum_{i=1}^t u_{s_i}u_{s_i}^T\right]\left(m - \mathrm{Tr}\left[\left(xI - \sum_{i=1}^t u_{s_i}u_{s_i}^T\right)^{-1}\right]\right)$$
$$= g_t(x) - \frac{1}{m^{t+1}}\sum_{s_1,\dots,s_t} \partial_x \det\left[xI - \sum_{i=1}^t u_{s_i}u_{s_i}^T\right]$$
$$= g_t(x) - \frac{1}{m}\partial_x g_t(x) = \left(1 - \frac{\partial_x}{m}\right)g_t(x).$$

For $d \ge k$ the polynomial $f_{\emptyset}(x)$ is divisible by $x^{d-k}$. So, the $k$th largest root of $f_{\emptyset}(x)$ is equal to the smallest root of

$$x^{-(d-k)} f_{\emptyset}(x) = x^{-(d-k)}\left(1 - \frac{\partial_x}{m}\right)^k x^d.$$

To bound the smallest root of this polynomial, we observe that it is a slight transformation of an associated Laguerre polynomial. We use the definition of the associated Laguerre polynomial of degree $n$ and parameter $\alpha$, $L_n^{(\alpha)}$, given by Rodrigues' formula [Sze39, (5.1.5)]:

$$L_n^{(\alpha)}(x) = \frac{e^x x^{-\alpha}}{n!}\,\partial_x^n\left(e^{-x} x^{n+\alpha}\right) = \frac{x^{-\alpha}}{n!}\left(\partial_x - 1\right)^n x^{n+\alpha}.$$

Thus,

$$x^{-(d-k)} f_{\emptyset}(x) = \frac{(-1)^k\, k!}{m^k}\, L_k^{(d-k)}(mx).$$
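Lemma 3.1 and the interlacing guarantee of Theorem 2.7 can be verified by brute force for small parameters. A sketch (assuming numpy; the orthonormal-column construction of an isotropic family is our own choice, not from the paper):

```python
import numpy as np
from numpy.polynomial import Polynomial
from itertools import product

# Brute-force check of Lemma 3.1: for an isotropic family u_1,...,u_m,
# the average characteristic polynomial over all m^k sequences equals
# (1 - (d/dx)/m)^k x^d.  We also check that some leaf has its kth largest
# zero at least that of the average (Theorem 2.7).
rng = np.random.default_rng(3)
d, m, k = 3, 5, 2
Q, _ = np.linalg.qr(rng.standard_normal((m, d)))   # orthonormal columns
U = Q                                              # rows satisfy sum u_i u_i^T = I_d
avg = np.zeros(d + 1)
leaf_kth = []
for s in product(range(m), repeat=k):
    A = sum(np.outer(U[i], U[i]) for i in s)
    avg += np.poly(A) / m**k                       # char poly, highest degree first
    leaf_kth.append(np.sort(np.linalg.eigvalsh(A))[-k])
p = Polynomial([0] * d + [1])                      # x^d
for _ in range(k):
    p = p - p.deriv() / m                          # apply (1 - (d/dx)/m)
assert np.allclose(avg[::-1], p.coef)
root_kth = np.sort(np.roots(avg).real)[-k]         # kth largest zero of f_empty
assert max(leaf_kth) >= root_kth - 1e-9            # some leaf beats the root
assert root_kth > (np.sqrt(d) - np.sqrt(k))**2 / m  # consistent with Corollary 3.3
```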

We now employ a lower bound on the smallest root of associated Laguerre polynomials due to Krasikov [Kra06, Theorem 1].

Theorem 3.2. For $\alpha > -1$,

$$\lambda_k\left(L_k^{(\alpha)}(x)\right) \ge V^2 + 3V^{4/3}\left(U^2 - V^2\right)^{-1/3},$$

where $V = \sqrt{k+\alpha+1} - \sqrt{k}$ and $U = \sqrt{k+\alpha+1} + \sqrt{k}$.

Corollary 3.3.

$$\lambda_k(f_{\emptyset}(x)) > \frac{1}{m}\left(\sqrt{d} - \sqrt{k}\right)^2.$$

Proof. Applying Theorem 3.2 with $\alpha = d - k$ and thus $V = \sqrt{d+1} - \sqrt{k}$ gives

$$\lambda_k(f_{\emptyset}(x)) = \frac{1}{m}\lambda_k\left(L_k^{(d-k)}(x)\right) \ge \frac{V^2}{m} > \frac{1}{m}\left(\sqrt{d} - \sqrt{k}\right)^2.$$

Lemma 3.1 and Corollary 3.3 are all one needs to establish Theorem 1.1 in the isotropic case. Theorems 2.6 and 2.7 tell us that there exists a sequence $s_1, \dots, s_k \in [m]^k$ for which

$$\lambda_k\left(f_{s_1,\dots,s_k}\right) \ge \lambda_k(f_{\emptyset}) > \frac{\left(\sqrt{d} - \sqrt{k}\right)^2}{m}.$$

As $f_{s_1,\dots,s_k}$ is the characteristic polynomial of $\sum_{i \le k} u_{s_i} u_{s_i}^T$, this sequence must consist of distinct elements. If not, the matrix in the sum would have rank at most $k - 1$ and thus $\lambda_k = 0$. So, we conclude that there exists a set $S \subseteq [m]$ of size $k$ for which

$$\sigma_{\min}(B_S)^2 = \lambda_k\left(\sum_{i \in S} u_i u_i^T\right) > \frac{\left(\sqrt{d} - \sqrt{k}\right)^2}{m} = \left(1 - \sqrt{k/d}\right)^2\frac{d}{m}.$$

4 The Nonisotropic Case and the Schatten 4-norm

In this section we prove the promised strengthening of Theorem 1.1 in terms of the Schatten 4-norm. In the proof it will be more natural to work with eigenvalues of $BB^T$ rather than singular values of $B$ (and its submatrices). For a symmetric matrix $A$, we define

$$\kappa_A \stackrel{\mathrm{def}}{=} \frac{\mathrm{Tr}(A)^2}{\mathrm{Tr}(A^2)}. \tag{6}$$

With this definition and the change of notation $A = BB^T = \sum_i u_i u_i^T$, the theorem may be stated as follows.

Theorem 4.1. Suppose $u_1, \dots, u_m$ are vectors with $\sum_{i=1}^m u_i u_i^T = A$. Then for every integer $k \le \kappa_A$, there exists a set $S \subseteq [m]$ of size $k$ with

$$\lambda_k\left(\sum_{i \in S} u_i u_i^T\right) \ge \left(1 - \sqrt{\frac{k}{\kappa_A}}\right)^2 \frac{\mathrm{Tr}(A)}{m}. \tag{7}$$
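Theorem 4.1 can be checked by brute force on a small, deliberately nonisotropic example. A sketch (assuming numpy; the specific vectors below are an illustrative choice of ours, not from the paper):

```python
import numpy as np
from itertools import combinations

# Brute-force check of Theorem 4.1 on six axis-aligned vectors,
# giving A = diag(2, 2, 1.62), which is not a multiple of the identity.
vs = [np.array(v, dtype=float) for v in
      [[1, 0, 0], [1, 0, 0], [0, 1, 0], [0, 1, 0], [0, 0, 0.9], [0, 0, 0.9]]]
m, k = len(vs), 2
A = sum(np.outer(v, v) for v in vs)
kappa = np.trace(A)**2 / np.trace(A @ A)
assert k <= kappa                        # hypothesis of the theorem
bound = (1 - np.sqrt(k / kappa))**2 * np.trace(A) / m
best = max(np.sort(np.linalg.eigvalsh(sum(np.outer(vs[i], vs[i]) for i in S)))[-k]
           for S in combinations(range(m), k))
assert best >= bound                     # some k-subset achieves the bound
```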

We prove this theorem by examining the same interlacing family as in the previous section. As we are no longer in the isotropic case, we need to re-calculate the polynomial at the root of the tree, which will not necessarily be a Laguerre polynomial. We give the formula for general random vectors with finite support, but will apply it to the special case in which each random vector is uniformly chosen from $u_1, \dots, u_m$.

Lemma 4.2. Let $r$ be a random $d$-dimensional vector with finite support. If $r_1, \dots, r_k$ are i.i.d. copies of $r$, then

$$\mathbb{E}\,\det\left[xI - \sum_{i \le k} r_i r_i^T\right] = x^{d-k}\prod_{i=1}^d\left(1 - \lambda_i\partial_x\right)x^k,$$

where $\lambda_1, \dots, \lambda_d$ are the eigenvalues of $\mathbb{E}\,rr^T$.

Proof. Let $M = \mathbb{E}\,rr^T$. By introducing variables $t_1, \dots, t_k$ and applying Lemma 2.3 $k$ times, we obtain

$$\mathbb{E}\,\det\left[xI - \sum_{i=1}^k r_i r_i^T\right] = \prod_{i=1}^k\left(1 - \partial_{t_i}\right)\det\left[xI + \left(\sum_{i=1}^k t_i\right)M\right]\Bigg|_{t_1 = \dots = t_k = 0}.$$

By computing

$$\det\left[xI + \left(\sum_i t_i\right)M\right] = \sum_{l=0}^d x^{d-l}\left(\sum_i t_i\right)^l e_l(M)$$

and

$$\prod_{i=1}^k\left(1 - \partial_{t_i}\right)\left(\sum_i t_i\right)^l\Bigg|_{t_1 = \dots = t_k = 0} = \begin{cases} (-1)^l\,\dfrac{k!}{(k-l)!} & \text{if } k \ge l,\\[4pt] 0 & \text{otherwise,}\end{cases}$$

we simplify the above expression to

$$\sum_{l=0}^k x^{d-l}(-1)^l\frac{k!}{(k-l)!}\,e_l(M).$$

Since $\partial_x^l x^k = x^{k-l}\,k!/(k-l)!$ for $l \le k$ and $\partial_x^l x^k = 0$ for $l > k$, we can rewrite this as

$$x^{d-k}\sum_{l=0}^k(-1)^l e_l(M)\,\partial_x^l\,x^k = x^{d-k}\sum_{l=0}^d(-1)^l e_l(M)\,\partial_x^l\,x^k = x^{d-k}\prod_{i=1}^d\left(1 - \lambda_i\partial_x\right)x^k,$$

as desired.

We now require a lower bound on the smallest zero of $\prod_{i=1}^d(1 - \lambda_i\partial_x)x^k$. We will use the following lemma, which tells us that the $\alpha$min of a polynomial grows in a controlled way as a function of $\lambda$ when we apply a $(1 - \lambda\partial_x)$ operator to it. This is similar to Lemma 3.4 of [BSS12], which was written in the language of random rank-one updates of matrices.
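Lemma 4.2 can be checked by brute force over a small finite support. A sketch (assuming numpy; the random support is our own illustrative choice):

```python
import numpy as np
from numpy.polynomial import Polynomial
from itertools import product

# Brute-force check of Lemma 4.2: r uniform over four arbitrary vectors
# in R^3, k = 2 i.i.d. copies.
rng = np.random.default_rng(5)
d, m, k = 3, 4, 2
V = rng.standard_normal((m, d))
M = sum(np.outer(v, v) for v in V) / m                 # E r r^T
avg = np.zeros(d + 1)
for s in product(range(m), repeat=k):
    A = sum(np.outer(V[i], V[i]) for i in s)
    avg += np.poly(A) / m**k                           # E det[xI - sum r_i r_i^T]
q = Polynomial([0] * k + [1])                          # x^k
for lam in np.linalg.eigvalsh(M):
    q = q - lam * q.deriv()                            # apply (1 - lambda_i d/dx)
rhs = Polynomial([0] * (d - k) + [1]) * q              # multiply by x^{d-k}
assert np.allclose(avg[::-1], rhs.coef)
```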

Lemma 4.3. If $p(x)$ is a real-rooted polynomial and $\lambda > 0$, then $(1 - \lambda\partial_x)p(x)$ is real-rooted and

$$\alpha\mathrm{min}\left((1 - \lambda\partial_x)p(x)\right) \ge \alpha\mathrm{min}(p(x)) + \frac{1}{1/\lambda + 1/\alpha}.$$

Proof. It is well known that $(1 - \lambda\partial_x)p(x)$ is real-rooted: see [PS76, Problem V.1.18], [Mar49, Corollary 18.1], or [MSS14, Lemma 3.7] for a proof. To see that $\lambda_{\min}(p - \lambda p') \ge \lambda_{\min}(p)$ for $\lambda > 0$, recall that $\lambda_{\min}(p') \ge \lambda_{\min}(p)$. So, both $p$ and $-\lambda p'$ have the same sign for all $x < \lambda_{\min}(p)$, and thus $p - \lambda p'$ cannot be zero there.

Let $p$ have degree $d$ and zeros $\mu_d \le \dots \le \mu_1$. Let $b = \alpha\mathrm{min}(p)$, so that $\Phi_p(b) = 1/\alpha$. To prove the claim it is enough to show that, for

$$\delta = \frac{1}{1/\lambda + 1/\alpha},$$

we have $b + \delta \le \lambda_d\left((1 - \lambda\partial_x)p\right)$ and

$$\Phi_{(1-\lambda\partial_x)p}(b + \delta) \le \Phi_p(b). \tag{8}$$

The first statement is true because $\mu_d - b \ge 1/\Phi_p(b) = \alpha$, so

$$b + \delta < b + \alpha \le \mu_d \le \lambda_d\left((1 - \lambda\partial_x)p\right).$$

We begin our proof of the second statement by expressing $\Phi_{(1-\lambda\partial_x)p}$ in terms of $\Phi_p$ and $\Phi_p'$:

$$\Phi_{(1-\lambda\partial_x)p} = -\frac{(p - \lambda p')'}{p - \lambda p'} = -\frac{\left(p(1 + \lambda\Phi_p)\right)'}{p(1 + \lambda\Phi_p)} = -\frac{p'}{p} - \frac{\lambda\Phi_p'}{1 + \lambda\Phi_p} = \Phi_p - \frac{\Phi_p'}{1/\lambda + \Phi_p}, \tag{9}$$

wherever all quantities are finite, which happens everywhere except at the zeros of $p$ and $(1 - \lambda\partial_x)p$. Since $b + \delta$ is strictly below the zeros of both, it follows that:

$$\Phi_{(1-\lambda\partial_x)p}(b + \delta) = \Phi_p(b + \delta) - \frac{\Phi_p'(b + \delta)}{1/\lambda + \Phi_p(b + \delta)}.$$

After replacing $1/\lambda$ by $1/\delta - 1/\alpha = 1/\delta - \Phi_p(b)$ and rearranging terms (noting the positivity of $\Phi_p(b+\delta) - \Phi_p(b)$), we see that (8) is equivalent to

$$\left(\Phi_p(b+\delta) - \Phi_p(b)\right)^2 \le \Phi_p'(b+\delta) - \frac{1}{\delta}\left(\Phi_p(b+\delta) - \Phi_p(b)\right).$$

We now finish the proof by expanding $\Phi_p$ and $\Phi_p'$ in terms of the zeros of $p$:

$$\left(\Phi_p(b+\delta) - \Phi_p(b)\right)^2 = \left(\sum_i \frac{1}{\mu_i - b - \delta} - \frac{1}{\mu_i - b}\right)^2 = \left(\sum_i \frac{\delta}{(\mu_i - b - \delta)(\mu_i - b)}\right)^2$$
$$\le \left(\sum_i \frac{\delta}{\mu_i - b}\right)\left(\sum_i \frac{\delta}{(\mu_i - b - \delta)^2(\mu_i - b)}\right), \quad \text{by Cauchy-Schwarz, as all terms are positive,}$$
$$\le \sum_i \frac{\delta}{(\mu_i - b - \delta)^2(\mu_i - b)}, \quad \text{as } \delta\,\Phi_p(b) \le 1,$$
$$= \sum_i \left(\frac{1}{(\mu_i - b - \delta)^2} - \frac{1}{(\mu_i - b - \delta)(\mu_i - b)}\right)$$
$$= \Phi_p'(b+\delta) - \frac{1}{\delta}\left(\Phi_p(b+\delta) - \Phi_p(b)\right).$$

Theorem 4.4. Let $r$ be a random $d$-dimensional vector with finite support such that $\mathbb{E}\,rr^T = M$, let $r_1, \dots, r_k$ be i.i.d. copies of $r$, and let

$$p(x) = \mathbb{E}\,\det\left[xI - \sum_{i \le k} r_i r_i^T\right].$$

Then

$$\lambda_k(p) \ge \left(1 - \sqrt{\frac{k}{\kappa_M}}\right)^2 \mathrm{Tr}(M),$$

where $\kappa_M$ is defined as in (6).

Proof. By multiplying $r$ through by a constant, we may assume without loss of generality that $\mathrm{Tr}[M] = 1$. In this case, we need to prove

$$\lambda_k(p) \ge \left(1 - \sqrt{k\,\mathrm{Tr}[M^2]}\right)^2.$$

Let $0 \le \lambda_d \le \dots \le \lambda_1$ be the eigenvalues of $M$, so that Lemma 4.2 implies

$$p(x) = x^{d-k}\prod_{i=1}^d\left(1 - \lambda_i\partial_x\right)x^k.$$

Applying Lemma 4.3 $d$ times for any $\alpha > 0$ yields

$$\lambda_k(p) = \lambda_k\left(\prod_{i=1}^d(1 - \lambda_i\partial_x)x^k\right) \ge \alpha\mathrm{min}\left(\prod_{i=1}^d(1 - \lambda_i\partial_x)x^k\right) \ge \alpha\mathrm{min}\left(x^k\right) + \sum_{i=1}^d\frac{1}{1/\lambda_i + 1/\alpha} = -k\alpha + \sum_{i=1}^d\frac{1}{1/\lambda_i + 1/\alpha},$$

by Claim 2.4. To lower bound this expression, observe that the function $y \mapsto \frac{1}{1 + y/\alpha}$ is convex for all $\alpha > 0$. Since $\sum_i \lambda_i = \mathrm{Tr}[M] = 1$, Jensen's inequality implies that

$$\sum_i \frac{1}{1/\lambda_i + 1/\alpha} = \sum_i \frac{\lambda_i}{1 + \lambda_i/\alpha} \ge \frac{1}{1 + \left(\sum_i \lambda_i^2\right)/\alpha} = \frac{1}{1 + \mathrm{Tr}(M^2)/\alpha}.$$

Thus, $\lambda_k(p(x))$ is at least

$$-k\alpha + \frac{1}{1 + \mathrm{Tr}(M^2)/\alpha},$$

for every $\alpha > 0$. Taking derivatives, we find that this expression is maximized when

$$\alpha = \mathrm{Tr}\left[M^2\right]\left(\frac{1}{\sqrt{k\,\mathrm{Tr}[M^2]}} - 1\right),$$

which may be substituted to obtain a bound of

$$\lambda_k(p) \ge \left(1 - \sqrt{k\,\mathrm{Tr}(M^2)}\right)^2,$$

as desired.

4.1 A Polynomial Time Algorithm

We now explain how to produce the subset $S$ guaranteed by Theorem 4.1 in polynomial time, up to a $1/n^c$ additive error in the value of $\sigma_k^2$, where $c$ can be chosen to be any constant. Selecting the $k$ elements of $S$ corresponds to an interlacing family of depth $k$, whose nodes are labeled by expected characteristic polynomials conditioned on partial assignments. Recall that the polynomial corresponding to a partial assignment $s_1, \dots, s_j \in [m]^j$ is

$$f_{s_1,\dots,s_j}(x) := \mathbb{E}\,\chi\left(\sum_{i=1}^j u_{s_i} u_{s_i}^T + \sum_{i=j+1}^k r_i r_i^T\right)(x).$$

To find a full assignment with $\lambda_k(f_{s_1,\dots,s_k}) \ge \lambda_k(f_{\emptyset})$, one has to solve $k$ subproblems of the following type: given a partial assignment $s_1, \dots, s_j$, find an index $s_{j+1} \in [m]$ such that $\lambda_k(f_{s_1,\dots,s_{j+1}}) \ge \lambda_k(f_{s_1,\dots,s_j})$.

We first show how to efficiently compute any partial assignment polynomial $f_{s_1,\dots,s_j}$. Letting $C = \sum_{i=1}^j u_{s_i} u_{s_i}^T$ and $\mathbb{E}\,rr^T = BB^T/m = M$, and applying Lemma 2.3 repeatedly, we have:

$$f_{s_1,\dots,s_j}(x) = \mathbb{E}\,\det\left[xI - C - \sum_{i=j+1}^k r_i r_i^T\right] = \prod_{i=j+1}^k\left(1 - \partial_{t_i}\right)\det\left[xI - C + \left(\sum_{i=j+1}^k t_i\right)M\right]\Bigg|_{t_i = 0}.$$

We now observe that the latter determinant is a polynomial in $t := t_{j+1} + \dots + t_k$. Since for any differentiable function of $t$ we have $\partial_t = \partial_{t_i}$ for every $i = j+1, \dots, k$, and the operator $\partial_t$ preserves the property of being a polynomial in $t$, we may rewrite this expression as:

$$f_{s_1,\dots,s_j}(x) = \left(1 - \partial_t\right)^{k-j}\det\left[xI - C + tM\right]\Big|_{t=0}.$$

The bivariate polynomial

$$\det[xI - C + tM] := \sum_{i=0}^n (-1)^i x^{n-i}\,e_i(C - tM)$$

has univariate polynomial coefficients $e_i(C - tM) \in \mathbb{R}[t]$ of degree $i \le n$. These can be computed in polynomial time by a number of methods. For example, we can compute the polynomials $e_i(C - tM)$ by exploiting the fact that the characteristic polynomial of a matrix can be computed in time $O(n^{\omega})$, where $\omega \le 3$ is the matrix multiplication exponent [KG85]. We first compute the characteristic polynomial of $C - tM$ for $t$ equal to each of the $n$th roots of unity, in time $O(n^{\omega+1})$. We then use fast polynomial interpolation via the discrete Fourier transform on the coefficients of these polynomials to recover the coefficients in $t$ of each $e_i(C - tM)$. This takes time $O(n\log n)$ per polynomial. Thus, in time $O(n^{\omega+1} + n^2\log n) = O(n^{\omega+1})$, we can compute the bivariate polynomial $\det[xI - C + tM]$. Applying the operator

$$\left(1 - \partial_t\right)^{k-j} = \sum_{i=0}^{k-j}(-1)^i\binom{k-j}{i}\partial_t^i$$

to each coefficient and setting $t$ to zero amounts to simply multiplying each coefficient of each $e_i(C - tM)$ by a binomial coefficient, which can be carried out in $O(n^2)$ time. Thus, we can compute $f_{s_1,\dots,s_j}(x)$ in $O(n^{\omega+1})$ time.
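The interpolation step above can be sketched in a few lines (assuming numpy; the paper samples at roots of unity and uses an FFT, while this illustration uses plain real-node interpolation, which exhibits the same structure):

```python
import numpy as np

# Recover the bivariate polynomial det[xI - C + tM] from characteristic
# polynomials of C - tM at n+1 sample values of t, then verify it at a
# fresh point t_star.
rng = np.random.default_rng(4)
n = 4
C = rng.standard_normal((n, n)); C = C @ C.T
M = rng.standard_normal((n, n)); M = M @ M.T
ts = np.arange(n + 1, dtype=float)                     # n+1 interpolation nodes
samples = np.array([np.poly(C - t * M) for t in ts])   # rows: char polys in x
# Fit each x-coefficient as a degree-n polynomial in t (exact: 5 nodes, deg 4).
coeff_polys = [np.polyfit(ts, samples[:, j], n) for j in range(n + 1)]
t_star = 2.5
recovered = np.array([np.polyval(c, t_star) for c in coeff_polys])
assert np.allclose(recovered, np.poly(C - t_star * M))
```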
Given this subroutine, the algorithm is straightforward: given a partial assignment $s_1, \dots, s_j$, extend it to the $s_1, \dots, s_{j+1}$ which maximizes $\lambda_k(f_{s_1,\dots,s_{j+1}})$. This may be done by enumerating over all $m$ possibilities for $s_{j+1}$ and computing an $\epsilon$-approximation to the smallest root of

$f_{s_1,\dots,s_{j+1}}(x)/x^{n-k}$ using the standard technique of binary search with a Sturm sequence (see, e.g., [BPR03]). This takes time $O(n^2\log(1/\epsilon))$ per polynomial, which is less than the time required to compute the polynomial when $\epsilon = 1/\mathrm{poly}(n)$. The total running time to find a complete assignment is therefore $O(kmn^{\omega+1})$. We have not made any attempt to optimize this running time, and suspect it can be improved using more sophisticated ideas.

5 The Isotropic Case without Replacement: Jacobi Polynomials

In this section we show how to improve Theorem 1.1 in the isotropic case by constructing an interlacing family using subsets of vectors instead of sequences.

Theorem 5.1. Suppose $u_1, \dots, u_m \in \mathbb{R}^d$ satisfy $\sum_{i \le m} u_i u_i^T = I$ and $k \le d$ is an integer. Then there exists a subset $S \subseteq [m]$ of size $k$ such that

$$\lambda_k\left(\sum_{i \in S} u_i u_i^T\right) \ge \frac{\left(\sqrt{d(m-k)} - \sqrt{k(m-d)}\right)^2}{m^2}. \tag{10}$$

The leaves of the tree in the interlacing family correspond to subsets of $[m]$ of size $k$. The root corresponds to the empty set, $\emptyset$, and the other internal nodes correspond to subsets of size less than $k$. The children of each internal node are its supersets of size one larger. For each $S \subseteq [m]$ we define

$$U_S = \sum_{i \in S} u_i u_i^T \quad \text{and} \quad p_S(x) = \det[xI - U_S].$$

We label the leaf nodes with the polynomials $p_S(x)$. For an internal node associated with a set $T$ of size less than $k$, we label that node by the polynomial

$$f_T(x) = \mathop{\mathbb{E}}_{S \supseteq T,\, |S| = k}\, p_S(x),$$

where the expectation is taken uniformly over sets $S$ of size $k$ containing $T$. All polynomials in the family are real and monic since they are averages of characteristic polynomials of Hermitian matrices.

We now derive expressions for these polynomials and prove that they satisfy the requirements of Definition 2.5. We give the connection between these polynomials and Jacobi polynomials in Section 5.1.

Lemma 5.2. For every subset $T$ of $[m]$ of size $t$,

$$\sum_{i \notin T} p_{T \cup \{i\}}(x) = (x-1)^{-(m-d-t-1)}\,\partial_x\,(x-1)^{m-d-t}\,p_T(x).$$

The expression on the left above is a sum of polynomials and thus is clearly a polynomial.
To make it clear that the term on the right is a polynomial, we observe that for all polynomials $p$ and all positive $k$, $\partial_x (x-1)^k p(x)$ is divisible by $(x-1)^{k-1}$. So, the expression on the right above is a polynomial when $m \ge d + t + 1$. It is also a polynomial when $d + t + 1 \ge m$, because in this case $p_T(x)$ is divisible by $(x-1)^{d+t-m}$.
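Lemma 5.2 can be sanity-checked numerically for a small isotropic family. A sketch (assuming numpy; parameters chosen so that $m - d - t - 1 = 0$ and the outer factor is trivial):

```python
import numpy as np
from numpy.polynomial import Polynomial

# Check of Lemma 5.2 with m = 5, d = 3, T = {0} (t = 1):
#   sum_{i not in T} p_{T+{i}}(x)  ==  d/dx [ (x-1)^{m-d-t} p_T(x) ].
rng = np.random.default_rng(6)
m, d = 5, 3
Q, _ = np.linalg.qr(rng.standard_normal((m, d)))   # rows u_i sum to I_d
U = Q
T = [0]
U_T = sum(np.outer(U[i], U[i]) for i in T)
p_T = Polynomial(np.poly(U_T)[::-1])               # ascending coefficients
lhs = sum(Polynomial(np.poly(U_T + np.outer(U[i], U[i]))[::-1])
          for i in range(m) if i not in T)
rhs = (Polynomial([-1, 1]) * p_T).deriv()          # (x-1)^1 p_T, then d/dx
assert np.allclose(lhs.coef, rhs.coef)
```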

Proof. We begin with a calculation analogous to that in the proof of Lemma 3.1:

$$\sum_{i \notin T} p_{T \cup \{i\}}(x) = \sum_{i \notin T}\det\left[xI - U_T - u_i u_i^T\right]$$
$$= \sum_{i \notin T}\det[xI - U_T]\left(1 - \mathrm{Tr}\left[(xI - U_T)^{-1}u_i u_i^T\right]\right), \quad \text{by Lemma 2.1,}$$
$$= p_T(x)(m - t) - p_T(x)\,\mathrm{Tr}\left[(xI - U_T)^{-1}(I - U_T)\right], \quad \text{as } \sum_{i \notin T} u_i u_i^T = I - U_T,$$
$$= p_T(x)(m - t) - p_T(x)\,\mathrm{Tr}\left[(xI - U_T)^{-1}\left((I - xI) + (xI - U_T)\right)\right]$$
$$= p_T(x)(m - t) - d\,p_T(x) - p_T(x)\,\mathrm{Tr}\left[(xI - U_T)^{-1}(I - xI)\right]$$
$$= p_T(x)(m - t - d) + (x - 1)\,p_T(x)\,\mathrm{Tr}\left[(xI - U_T)^{-1}\right]$$
$$= (m - t - d)\,p_T(x) + (x - 1)\,\partial_x p_T(x), \quad \text{by Lemma 2.2.}$$

We finish the proof of the lemma by observing that for every function $f(x)$ and every $h \in \mathbb{R}$,

$$\partial_x (x-1)^h f(x) = h(x-1)^{h-1}f(x) + (x-1)^h\,\partial_x f(x).$$

Thus,

$$(x-1)^{-(m-d-t-1)}\,\partial_x\,(x-1)^{m-d-t}\,p_T(x) = (m - t - d)\,p_T(x) + (x - 1)\,\partial_x p_T(x).$$

By applying this lemma many times, we obtain an expression for the polynomials labeling internal vertices of the tree.

Lemma 5.3. For every $T \subseteq [m]$ of size $t \le k$,

$$f_T(x) = \frac{(m-k)!}{(m-t)!}\,(x-1)^{-(m-d-k)}\,\partial_x^{k-t}\,(x-1)^{m-d-t}\,p_T(x).$$

In particular,

$$f_{\emptyset}(x) = \frac{(m-k)!}{m!}\,(x-1)^{-(m-d-k)}\,\partial_x^k\,(x-1)^{m-d}\,x^d.$$

Proof. We will prove by induction on $k$ that

$$\sum_{S \supseteq T,\,|S| = k} p_S(x) = \frac{1}{(k-t)!}\,(x-1)^{-(m-d-k)}\,\partial_x^{k-t}\,(x-1)^{m-d-t}\,p_T(x).$$

For $k = t$, the polynomial is simply $p_T(x)$. To establish the induction, observe that for $k > t$,

$$\sum_{S \supseteq T,\,|S| = k} p_S(x) = \frac{1}{k - t}\sum_{i \notin T}\;\sum_{S \supseteq T \cup \{i\},\,|S| = k} p_S(x)$$

and apply the inductive hypothesis and Lemma 5.2 to obtain

$$\sum_{S \supseteq T,\,|S| = k} p_S(x) = \frac{1}{k-t}\cdot\frac{1}{(k-t-1)!}\,(x-1)^{-(m-d-k)}\,\partial_x^{k-t-1}\,(x-1)^{m-d-t-1}\sum_{i \notin T} p_{T \cup \{i\}}(x)$$
$$= \frac{1}{(k-t)!}\,(x-1)^{-(m-d-k)}\,\partial_x^{k-t}\,(x-1)^{m-d-t}\,p_T(x).$$

Theorem 5.4. The polynomials $p_S(x)$ for $|S| = k$ are an interlacing family.

Proof. As explained above, the internal nodes of the tree are the polynomials $f_T(x)$ for $T \subseteq [m]$ and $|T| < k$. By definition these polynomials satisfy condition a of an interlacing family. We now show that they also satisfy condition b.

Let $T \subseteq [m]$ have size $t < k$. We must prove that for every $i$ and $j$ not in $T$ and all $0 \le \mu \le 1$,

$$f_{\mu}(x) \stackrel{\mathrm{def}}{=} \mu f_{T\cup\{i\}}(x) + (1-\mu)f_{T\cup\{j\}}(x)$$

is real-rooted. As Claim 2.9 tells us that $p_{T\cup\{i\}}$ and $p_{T\cup\{j\}}$ have a common interlacing, Theorem 2.10 implies that

$$p_{\mu}(x) \stackrel{\mathrm{def}}{=} \mu p_{T\cup\{i\}}(x) + (1-\mu)p_{T\cup\{j\}}(x)$$

is real-rooted. Lemma 5.3 implies

$$f_{\mu}(x) = \frac{(m-k)!}{(m-t-1)!}\,(x-1)^{-(m-d-k)}\,\partial_x^{k-t-1}\,(x-1)^{m-d-t-1}\,p_{\mu}(x).$$

So, we can see that $f_{\mu}(x)$ is real-rooted by observing that real-rootedness is preserved by multiplication by $(x-1)$, taking derivatives, and dividing by $(x-1)$ when $1$ is a root.

We begin proving a lower bound on the $k$th largest root of $f_{\emptyset}(x)$ by expressing it as the smallest root of a simpler polynomial.

Lemma 5.5. The $k$th largest root of $f_{\emptyset}(x)$ is equal to the smallest root of the polynomial

$$\partial_x^d\,(x-1)^{m-k}\,x^k. \tag{11}$$

Proof. As all the eigenvalues of $U_S$ for every subset $S$ are at most $1$ and because $U_S$ is positive semidefinite, all the zeros of $p_S(x)$ are between $0$ and $1$. As all polynomials $p_S(x)$ are monic, they are all positive for $x > 1$, and thus $f_{\emptyset}(x)$ is as well. This argument, and a symmetric one for $x < 0$, implies that all the zeros of $f_{\emptyset}(x)$ are between $0$ and $1$.

The polynomial $f_{\emptyset}(x)$ has at least $d - k$ zeros at $0$. So, its $k$th largest root is the smallest root of

$$x^{-(d-k)}\,(x-1)^{-(m-d-k)}\,\partial_x^k\,(x-1)^{m-d}\,x^d.$$

As all the zeros of $x^{-(d-k)}(x-1)^{-(m-d-k)} \partial_x^k (x-1)^{m-d} x^d$ are between $0$ and $1$, its smallest root is also the smallest root of

$$x^{-(d-k)} \partial_x^k (x-1)^{m-d} x^d.$$

We now show that this latter polynomial is a constant multiple of (1):

$$\partial_x^k (x-1)^{m-d} x^d = \sum_{i=0}^{k} \binom{k}{i} \left(\partial_x^{k-i}(x-1)^{m-d}\right)\left(\partial_x^{i} x^d\right)$$

$$= \sum_{i=0}^{d} \binom{k}{i} \left(\partial_x^{k-i}(x-1)^{m-d}\right)\left(\partial_x^{i} x^d\right), \quad \text{as } \partial_x^i x^d = 0 \text{ for } i > d,$$

$$= \sum_{i=0}^{d} \frac{k!}{i!(k-i)!} \frac{(m-d)!\,(x-1)^{m-d-k+i}}{(m-d-k+i)!} \frac{d!\,x^{d-i}}{(d-i)!}$$

$$= \frac{(m-d)!}{(m-k)!}\,x^{d-k} \sum_{i=0}^{d} \frac{d!}{i!(d-i)!} \frac{(m-k)!\,(x-1)^{m-d-k+i}}{(m-d-k+i)!} \frac{k!\,x^{k-i}}{(k-i)!}$$

$$= \frac{(m-d)!}{(m-k)!}\,x^{d-k}\, \partial_x^d (x-1)^{m-k} x^k.$$

We use the following lemma to prove a lower bound on the smallest zero of the polynomial in (1). This lemma may be found in [MSS15a, Lemma 4.2], or may be proved by applying Lemma 4.3 in the limit as $\lambda$ grows large.

Lemma 5.6. For every real-rooted polynomial $p(x)$ of degree at least two and every $\alpha > 0$,

$$\alpha\mathrm{min}(\partial_x p(x)) \geq \alpha\mathrm{min}(p(x)) + \alpha.$$

Theorem 5.7. For $m > d > k$,

$$\lambda_k(f_\emptyset(x)) > \frac{\left(\sqrt{d(m-k)} - \sqrt{k(m-d)}\right)^2}{m^2}. \tag{2}$$

Proof. One can check by substitution that

$$\alpha\mathrm{min}\left((x-1)^{m-k} x^k\right) = \frac{(1-\alpha m) - \sqrt{(1-\alpha m)^2 + 4\alpha k}}{2}.$$

Applying Lemma 5.6 $d$ times gives

$$\alpha\mathrm{min}\left(\partial_x^d (x-1)^{m-k} x^k\right) \geq \frac{(1-\alpha m) - \sqrt{(1-\alpha m)^2 + 4\alpha k}}{2} + d\alpha.$$

Define

$$u(\alpha) = \frac{(1-\alpha m) - \sqrt{(1-\alpha m)^2 + 4\alpha k}}{2} + d\alpha. \tag{3}$$
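The derivative identity from the proof of Lemma 5.5 can be confirmed numerically. The following sketch (illustrative, with arbitrary small parameters satisfying $m > d > k$) checks that $\partial_x^k (x-1)^{m-d} x^d = \frac{(m-d)!}{(m-k)!}\, x^{d-k}\, \partial_x^d (x-1)^{m-k} x^k$:

```python
import numpy as np
from math import factorial

m, d, k = 9, 5, 3

# Left side: d^k/dx^k [(x-1)^{m-d} x^d]; np.poly(np.ones(n)) gives (x-1)^n.
lhs = np.polyder(np.polymul(np.poly(np.ones(m - d)), [1] + [0] * d), k)
assert np.allclose(lhs[-(d - k):], 0)    # the result is divisible by x^{d-k}
lhs_reduced = lhs[: -(d - k)]            # strip the factor x^{d-k}

# Right side: d^d/dx^d [(x-1)^{m-k} x^k], scaled by (m-d)!/(m-k)!.
rhs = np.polyder(np.polymul(np.poly(np.ones(m - k)), [1] + [0] * k), d)
assert np.allclose(lhs_reduced, factorial(m - d) / factorial(m - k) * rhs)
```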

We now derive the value of $\alpha$ at which $u(\alpha)$ is maximized. Taking derivatives in $\alpha$ gives

$$u'(\alpha) = d - \frac{m}{2} - \frac{2k + \alpha m^2 - m}{2\sqrt{(1-\alpha m)^2 + 4\alpha k}}.$$

Notice that

$$u'(0) = d - k \geq 0 \quad \text{and} \quad \lim_{\alpha \to \infty} u'(\alpha) = d - m \leq 0.$$

By continuity, a maximum will occur at a point $\alpha^*$ at which $u'(\alpha^*) = 0$. The solution is given by

$$\alpha^* = \frac{m-2k}{m^2} - \frac{m-2d}{m^2}\sqrt{\frac{k(m-k)}{d(m-d)}},$$

which is positive for $m > d > k$. After observing that

$$\sqrt{(1-\alpha^* m)^2 + 4\alpha^* k} = \sqrt{\frac{k(m-k)}{d(m-d)}},$$

we may plug $\alpha^*$ into the definition of $u$ to obtain

$$u(\alpha^*) = \frac{d}{m} + \frac{k}{m} - \frac{2dk}{m^2} + \left(\frac{2d^2}{m^2} - \frac{2d}{m}\right)\sqrt{\frac{k(m-k)}{d(m-d)}}$$

$$= \frac{1}{m^2}\left(dm + km - 2dk - 2\sqrt{k(m-k)d(m-d)}\right)$$

$$= \frac{1}{m^2}\left(\sqrt{d(m-k)} - \sqrt{k(m-d)}\right)^2.$$

6 Jacobi polynomials

Rodrigues' formula for the degree $n$ Jacobi polynomial with parameters $\alpha$ and $\beta$ is [Sze39, (4.3.1)]:

$$P_n^{(\alpha,\beta)}(x) = \frac{(-1)^n}{2^n n!}\,(1-x)^{-\alpha}(1+x)^{-\beta}\,\partial_x^n (1-x)^{\alpha+n}(1+x)^{\beta+n}.$$

These are related to the polynomials $f_\emptyset(x)$ from the previous section by

$$P_k^{(m-d-k,\,d-k)}(2x-1) = \binom{m}{k}\, x^{-(d-k)} f_\emptyset(x).$$

Thus, lower bounds on the $k$th largest root of $f_\emptyset(x)$ translate into lower bounds on the smallest root of Jacobi polynomials. By using the following claim, we can improve the lower bound from Theorem 5.7 to

$$\frac{\left(\sqrt{(d+1)(m-k)} - \sqrt{k(m-d-1)}\right)^2}{m^2}. \tag{4}$$

Claim 5.8. For every real-rooted polynomial $p(x)$ and every $\alpha > 0$,

$$\alpha\mathrm{min}(p) + \alpha \leq \lambda_{\min}(p).$$
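Both the bound of Theorem 5.7 and Claim 5.8 can be checked numerically on the polynomial (1). The sketch below is illustrative only: it assumes, as in the barrier arguments of this line of work, that $\Phi_p(z) = \sum_i 1/(\lambda_i - z)$ for $z$ below the smallest root, and that $\alpha\mathrm{min}(p)$ is the unique $z < \lambda_{\min}(p)$ with $\Phi_p(z) = 1/\alpha$ (located here by bisection); the parameters $m, d, k$ are arbitrary subject to $m > d > k$.

```python
import numpy as np

m, d, k = 9, 5, 3

# The polynomial (1): d^d/dx^d [(x-1)^{m-k} x^k]; np.poly(np.ones(n)) gives (x-1)^n.
p = np.polyder(np.polymul(np.poly(np.ones(m - k)), [1] + [0] * k), d)
roots = np.sort(np.roots(p).real)        # all roots are real and lie in (0, 1)

# Theorem 5.7: the smallest root exceeds (sqrt(d(m-k)) - sqrt(k(m-d)))^2 / m^2.
bound = (np.sqrt(d * (m - k)) - np.sqrt(k * (m - d))) ** 2 / m ** 2
assert roots[0] > bound

# Claim 5.8 on the same polynomial: Phi_p increases from 0 to +infinity on
# (-inf, lambda_min), so Phi_p(z) = 1/alpha has a unique solution there.
alpha = 0.1
phi = lambda z: float(np.sum(1.0 / (roots - z)))
lo, hi = roots[0] - 1e6, roots[0] - 1e-12
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if phi(mid) < 1.0 / alpha:           # Phi too small: move toward the roots
        lo = mid
    else:
        hi = mid
alpha_min = 0.5 * (lo + hi)
assert alpha_min + alpha <= roots[0]     # the statement of Claim 5.8
```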

Proof. We first observe that there is a $z < \lambda_{\min}(p)$ for which $\Phi_p(z) = 1/\alpha$. This holds because $\Phi_p(z)$ is continuous for $z < \lambda_{\min}(p)$, approaches infinity as $z$ approaches $\lambda_{\min}(p)$ from below, and approaches zero as $z$ becomes very negative. For a $z < \lambda_{\min}(p)$ such that $\Phi_p(z) = 1/\alpha$, we have

$$1/\alpha = \Phi_p(z) \geq \frac{1}{\lambda_{\min}(p) - z}.$$

The claim follows.

The bound that (4) implies for Jacobi polynomials seems to be incomparable to the bound obtained by Krasikov [Kra06], although numerical evaluation suggests that Krasikov's bound is usually stronger.

References

[BPR03] Saugata Basu, Richard Pollack, and Marie-Françoise Roy. Algorithms in real algebraic geometry, volume 7. Springer, 2003.

[BSS12] Joshua Batson, Daniel A. Spielman, and Nikhil Srivastava. Twice-Ramanujan sparsifiers. SIAM Journal on Computing, 41(6):1704–1721, 2012.

[BT87] J. Bourgain and L. Tzafriri. Invertibility of "large" submatricies with applications to the geometry of Banach spaces and harmonic analysis. Israel Journal of Mathematics, 57:137–224, 1987.

[CS07] Maria Chudnovsky and Paul Seymour. The roots of the independence polynomial of a clawfree graph. Journal of Combinatorial Theory, Series B, 97(3):350–357, 2007.

[Ded92] Jean-Pierre Dedieu. Obreschkoff's theorem revisited: what convex sets are contained in the set of hyperbolic polynomials? Journal of Pure and Applied Algebra, 81(3):269–278, 1992.

[Fel80] H. J. Fell. Zeros of convex combinations of polynomials. Pacific J. Math., 89(1):43–50, 1980.

[Har97] David A. Harville. Matrix algebra from a statistician's perspective, volume 1. Springer, 1997.

[KG85] Walter Keller-Gehrig. Fast algorithms for the characteristic polynomial. Theoretical Computer Science, 36:309–317, 1985.

[Kra06] Ilia Krasikov. On extreme zeros of classical orthogonal polynomials. Journal of Computational and Applied Mathematics, 193(1):168–182, 2006.

[Mar49] Morris Marden. Geometry of polynomials. American Mathematical Society, 1949.

[Mar15] Adam W. Marcus. Polynomial convolutions and (finite) free probability. Preprint, 2015.

[Mey00] Carl D. Meyer. Matrix analysis and applied linear algebra, volume 2. SIAM, 2000.

[MP67] Vladimir Alexandrovich Marchenko and Leonid Andreevich Pastur. Distribution of eigenvalues for some sets of random matrices. Matematicheskii Sbornik, 114(4):507–536, 1967.

[MSS14] Adam W. Marcus, Daniel A. Spielman, and Nikhil Srivastava. Ramanujan graphs and the solution of the Kadison–Singer problem. In Proceedings of the International Congress of Mathematicians, volume III. Kyung Moon SA, 2014.

[MSS15a] Adam W. Marcus, Daniel A. Spielman, and Nikhil Srivastava. Finite free convolutions of polynomials. arXiv preprint, 2015.

[MSS15b] Adam W. Marcus, Daniel A. Spielman, and Nikhil Srivastava. Interlacing families I: Bipartite Ramanujan graphs of all degrees. Annals of Mathematics, 182:307–325, 2015.

[MSS15c] Adam W. Marcus, Daniel A. Spielman, and Nikhil Srivastava. Interlacing families II: Mixed characteristic polynomials and the Kadison–Singer problem. Annals of Mathematics, 182:327–350, 2015.

[NY17] Assaf Naor and Pierre Youssef. Restricted invertibility revisited. In A Journey Through Discrete Mathematics. Springer, 2017.

[PS76] George Pólya and Gabor Szegő. Problems and Theorems in Analysis II: Theory of Functions. Zeros. Polynomials. Determinants. Number Theory. Geometry. Springer, 1976.

[Sri] Nikhil Srivastava. Restricted invertibility by interlacing polynomials.

[SS12] Daniel A. Spielman and Nikhil Srivastava. An elementary proof of the restricted invertibility theorem. Israel Journal of Mathematics, 190(1):83–91, 2012.

[Sze39] Gabor Szegő. Orthogonal polynomials, volume 23. American Mathematical Society, 1939.

[Ver01] Roman Vershynin. John's decompositions: selecting a large part. Israel Journal of Mathematics, 122(1), 2001.

[You14] Pierre Youssef. Restricted invertibility and the Banach–Mazur distance to the cube. Mathematika, 60(1):201–218, 2014.


More information

Spanning and Independence Properties of Finite Frames

Spanning and Independence Properties of Finite Frames Chapter 1 Spanning and Independence Properties of Finite Frames Peter G. Casazza and Darrin Speegle Abstract The fundamental notion of frame theory is redundancy. It is this property which makes frames

More information

David Hilbert was old and partly deaf in the nineteen thirties. Yet being a diligent

David Hilbert was old and partly deaf in the nineteen thirties. Yet being a diligent Chapter 5 ddddd dddddd dddddddd ddddddd dddddddd ddddddd Hilbert Space The Euclidean norm is special among all norms defined in R n for being induced by the Euclidean inner product (the dot product). A

More information

Random sets of isomorphism of linear operators on Hilbert space

Random sets of isomorphism of linear operators on Hilbert space IMS Lecture Notes Monograph Series Random sets of isomorphism of linear operators on Hilbert space Roman Vershynin University of California, Davis Abstract: This note deals with a problem of the probabilistic

More information

Polynomials, Ideals, and Gröbner Bases

Polynomials, Ideals, and Gröbner Bases Polynomials, Ideals, and Gröbner Bases Notes by Bernd Sturmfels for the lecture on April 10, 2018, in the IMPRS Ringvorlesung Introduction to Nonlinear Algebra We fix a field K. Some examples of fields

More information

ARCS IN FINITE PROJECTIVE SPACES. Basic objects and definitions

ARCS IN FINITE PROJECTIVE SPACES. Basic objects and definitions ARCS IN FINITE PROJECTIVE SPACES SIMEON BALL Abstract. These notes are an outline of a course on arcs given at the Finite Geometry Summer School, University of Sussex, June 26-30, 2017. Let K denote an

More information

EXAMPLES OF PROOFS BY INDUCTION

EXAMPLES OF PROOFS BY INDUCTION EXAMPLES OF PROOFS BY INDUCTION KEITH CONRAD 1. Introduction In this handout we illustrate proofs by induction from several areas of mathematics: linear algebra, polynomial algebra, and calculus. Becoming

More information

arxiv: v2 [math.fa] 11 Nov 2016

arxiv: v2 [math.fa] 11 Nov 2016 PARTITIONS OF EQUIANGULAR TIGHT FRAMES JAMES ROSADO, HIEU D. NGUYEN, AND LEI CAO arxiv:1610.06654v [math.fa] 11 Nov 016 Abstract. We present a new efficient algorithm to construct partitions of a special

More information

On the concentration of eigenvalues of random symmetric matrices

On the concentration of eigenvalues of random symmetric matrices On the concentration of eigenvalues of random symmetric matrices Noga Alon Michael Krivelevich Van H. Vu April 23, 2012 Abstract It is shown that for every 1 s n, the probability that the s-th largest

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

18.5 Crossings and incidences

18.5 Crossings and incidences 18.5 Crossings and incidences 257 The celebrated theorem due to P. Turán (1941) states: if a graph G has n vertices and has no k-clique then it has at most (1 1/(k 1)) n 2 /2 edges (see Theorem 4.8). Its

More information

Graph coloring, perfect graphs

Graph coloring, perfect graphs Lecture 5 (05.04.2013) Graph coloring, perfect graphs Scribe: Tomasz Kociumaka Lecturer: Marcin Pilipczuk 1 Introduction to graph coloring Definition 1. Let G be a simple undirected graph and k a positive

More information

THE UNIVERSITY OF TORONTO UNDERGRADUATE MATHEMATICS COMPETITION. Sunday, March 14, 2004 Time: hours No aids or calculators permitted.

THE UNIVERSITY OF TORONTO UNDERGRADUATE MATHEMATICS COMPETITION. Sunday, March 14, 2004 Time: hours No aids or calculators permitted. THE UNIVERSITY OF TORONTO UNDERGRADUATE MATHEMATICS COMPETITION Sunday, March 1, Time: 3 1 hours No aids or calculators permitted. 1. Prove that, for any complex numbers z and w, ( z + w z z + w w z +

More information

Large Scale Data Analysis Using Deep Learning

Large Scale Data Analysis Using Deep Learning Large Scale Data Analysis Using Deep Learning Linear Algebra U Kang Seoul National University U Kang 1 In This Lecture Overview of linear algebra (but, not a comprehensive survey) Focused on the subset

More information

Math 0031, Final Exam Study Guide December 7, 2015

Math 0031, Final Exam Study Guide December 7, 2015 Math 0031, Final Exam Study Guide December 7, 2015 Chapter 1. Equations of a line: (a) Standard Form: A y + B x = C. (b) Point-slope Form: y y 0 = m (x x 0 ), where m is the slope and (x 0, y 0 ) is a

More information

Minimum number of non-zero-entries in a 7 7 stable matrix

Minimum number of non-zero-entries in a 7 7 stable matrix Linear Algebra and its Applications 572 (2019) 135 152 Contents lists available at ScienceDirect Linear Algebra and its Applications www.elsevier.com/locate/laa Minimum number of non-zero-entries in a

More information