UPPER AND LOWER BOUNDS FOR THE TAILS OF THE DISTRIBUTION OF THE CONDITION NUMBER OF A GAUSSIAN MATRIX

JEAN-MARC AZAÏS* AND MARIO WSCHEBOR†

Abstract. Let $A$ be an $m \times m$ real random matrix with i.i.d. standard Gaussian entries. We prove that there exist universal positive constants $c$ and $C$ such that the tail of the probability distribution of the condition number $\kappa(A)$ satisfies the inequalities

$c/x < P\{\kappa(A) > mx\} < C/x$

for every $x > 1$. The proof requires a new estimation of the joint density of the largest and the smallest eigenvalues of $A^T A$, which follows from a formula for the expectation of the number of zeros of a certain random field defined on a smooth manifold.

Key words. random matrices, condition number, eigenvalue distribution, Rice formulae

AMS subject classifications. 15A12, 60G15

1. Introduction and main result. Let $A$ be an $m \times m$ real matrix and denote by $\|A\| = \sup_{\|x\|=1} \|Ax\|$ its Euclidean operator norm; $\|x\|$ denotes the Euclidean norm of $x$ in $R^m$. If $A$ is nonsingular, its condition number $\kappa(A)$ is defined by $\kappa(A) = \|A\| \cdot \|A^{-1}\|$ (von Neumann and Goldstine [18], Turing [17]). The role of $\kappa(A)$ in a variety of numerical analysis problems is well established (see, for example, Wilkinson [20], Smale [14], Higham [9], Demmel [6]).

The purpose of the present paper is to prove the following theorem.

Theorem 1.1. Assume that $A = ((a_{ij}))_{i,j=1,\dots,m}$, $m \ge 3$, and that the $a_{ij}$'s are i.i.d. standard Gaussian random variables. Then there exist universal positive constants $c, C$ such that for $x > 1$:

$c/x < P\{\kappa(A) > mx\} < C/x.$   (1.1)

Remarks on the statement of Theorem 1.1.
1. It is well known that, as $m$ tends to infinity, the distribution of the random variable $\kappa(A)/m$ converges to a certain distribution (this follows easily, for example, from Edelman [7]). The interest of (1.1) lies in the fact that it holds true for all $m \ge 3$ and $x > 1$.
2. We will see below that $c = 0.013$, $C = 5.60$ satisfy (1.1) for every $m = 3, 4, \dots$ and $x > 1$. Using the same methods one can obtain more precise upper and lower bounds for each $m$, but we will not detail these calculations here. The simulation study of Section 3 (see also the sketch below) suggests that $c$ must be smaller than 0.87 and $C$ larger than 1.8.

*Laboratoire de Statistique et Probabilités, UMR-CNRS C5583, Université Paul Sabatier, 118 route de Narbonne, 31062 Toulouse Cedex 4, France (azais@cict.fr).
†Centro de Matemática, Facultad de Ciencias, Universidad de la República, Calle Iguá 4225, 11400 Montevideo, Uruguay (wschebor@cmat.edu.uy).
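The following small Matlab sketch is not part of the original paper; it estimates the tail probability of Theorem 1.1 by simulation and compares it with the constants of Remark 2, anticipating the Monte-Carlo experiment of Section 3 (the size m, the level x, and the replication count are illustrative choices; randn is used here in place of normrnd).

    % Sketch: Monte-Carlo estimate of P{kappa(A) > m*x} versus the
    % bounds c/x and C/x of Theorem 1.1.
    m = 10; x = 5; N = 1e4;
    hits = 0;
    for k = 1:N
        A = randn(m);                  % i.i.d. standard Gaussian entries
        hits = hits + (cond(A) > m*x);
    end
    p = hits / N;                      % estimate of P{kappa(A) > m*x}
    fprintf('x*p = %.3f (c = 0.013, C = 5.60)\n', x*p)

For these values, the product x*p typically falls strictly between the two constants, well below the upper bound.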

3. In Sankar, Spielman, and Teng [13] it was conjectured that $P\{\kappa(A) > x\} = O(m/(\sigma x))$ when the $a_{ij}$'s are independent Gaussian random variables having a common variance $\sigma^2$ and $\sup_{i,j} |E(a_{ij})| \le 1$. The upper-bound part of (1.1) implies that this conjecture holds true in the centered case. The lower bound shows that, up to a constant factor, this is the exact order of the behavior of the tail of the probability distribution of $\kappa(A)$. See Wschebor [22] for the noncentered case.

4. This theorem, and related ones, can be considered as results on the Wishart matrix $A^T A$ ($A^T$ denotes the transpose of $A$). Introducing some minor changes, it is possible to use the same methods to study the condition number of $A^T A$ for rectangular $n \times m$ matrices $A$ having i.i.d. standard Gaussian entries, $n > m$. This will be considered elsewhere.

Some examples of related results on the random variable $\kappa(A)$ are the following.

Theorem 1.2 ([7]). Under the same hypotheses as Theorem 1.1, one has

$E(\log \kappa(A)) = \log m + C_0 + \varepsilon_m,$

where $C_0$ is a known constant ($C_0 \approx 1.537$) and $\varepsilon_m \to 0$ as $m \to +\infty$.

Theorem 1.3 ([4]). Let $A = ((a_{ij}))_{i,j=1,\dots,m}$ and assume that the $a_{ij}$'s are independent Gaussian random variables with a common variance $\sigma^2$ and $m_{ij} = E(a_{ij})$. Denote by $M = ((m_{ij}))_{i,j=1,\dots,m}$ the nonrandom matrix of expectations. Then

$E(\log \kappa(A)) \le \log m + \log\left( \frac{\|M\|}{\sigma\sqrt m} + 1 \right) + C,$

where $C$ is a known constant.

Next, we introduce some notation. Given $A$, an $m \times m$ real matrix, we denote by $\lambda_1, \dots, \lambda_m$, $\lambda_1 \le \dots \le \lambda_m$, the eigenvalues of $A^T A$. If $X : S^{m-1} \to R$ is the quadratic polynomial $X(x) = x^T A^T A x$, then

$\lambda_m = \|A\|^2 = \max_{x \in S^{m-1}} X(x)$, and, in case $\lambda_1 > 0$, $\lambda_1 = \|A^{-1}\|^{-2} = \min_{x \in S^{m-1}} X(x)$.

It follows that $\kappa(A) = \sqrt{\lambda_m/\lambda_1}$ when $\lambda_1 > 0$. We put $\kappa(A) = +\infty$ if $\lambda_1 = 0$. Note also that $\kappa(A) \ge 1$ and $\kappa(rA) = \kappa(A)$ for any real $r$, $r \ne 0$.

There is an important difference between the proof of Theorem 1.1 and those of the two other theorems mentioned above. In the latter cases, one puts

$\log \kappa(A) = \tfrac12\left( \log \lambda_m - \log \lambda_1 \right),$

and if one takes expectations, the joint distribution of the random variables $\lambda_m, \lambda_1$ does not play any role; the proof only uses the individual distributions of $\lambda_m$ and $\lambda_1$. On the contrary, the proof below of Theorem 1.1 depends essentially on the joint distribution of the pair $(\lambda_m, \lambda_1)$.
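As a quick numerical illustration of the identity $\kappa(A) = \sqrt{\lambda_m/\lambda_1}$ (an added sketch, not in the original; the size m is arbitrary):

    % Sketch: kappa(A) = ||A||*||A^{-1}|| = sqrt(lambda_m/lambda_1),
    % checked on one Gaussian sample (agreement up to rounding error).
    m   = 20;
    A   = randn(m);
    lam = eig(A' * A);                    % spectrum of the Wishart matrix
    kap = sqrt(max(lam) / min(lam));
    fprintf('%g  %g  %g\n', kap, cond(A), norm(A) * norm(inv(A)))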

A general formula for the joint density of $\lambda_1, \dots, \lambda_m$ has been well known for a long time (see, for example, Wilks [21], Wigner [19], Krishnaiah and Chang [11], Kendall, Stuart, and Ord [10], and references therein), but it seems to be difficult to adapt it to our present requirements. In fact, we will use a different approach, based upon the expected value of the number of zeros of a random field parameterized on a smooth manifold. We have also applied this technique to give a new proof of the known result of Lemma 2.2, a lower bound for $P\{\lambda_1 < a\}$.

One can ask if Theorem 1.1 follows from the well-known exponential bounds for the concentration of the distribution of $\lambda_m$ together with known bounds for the distribution of $\lambda_1$ (see, for example, Szarek [15], Davidson and Szarek [5], and Ledoux [12] for these types of inequalities). More precisely, consider the upper bound in Theorem 1.1. For $\varepsilon > 0$ one has:

$P\{\kappa(A) > mx\} = P\left\{ \frac{\lambda_m}{\lambda_1} > m^2 x^2 \right\}$
$\le P\{\lambda_m > (4+\varepsilon)m\} + P\left\{ \lambda_m \le (4+\varepsilon)m,\ \frac{\lambda_m}{\lambda_1} > m^2 x^2 \right\}$
$\le P\{\lambda_m > (4+\varepsilon)m\} + P\left\{ \lambda_1 < \frac{4+\varepsilon}{m x^2} \right\}$
$\le C_1 \exp\left( -C_2\, m \varepsilon^2 \right) + C_3\, \frac{\sqrt{4+\varepsilon}}{x},$   (1.2)

where $C_1, C_2, C_3$ are positive constants. From (1.2), making an adequate choice of $\varepsilon$, one can get an upper bound for $P\{\kappa(A) > mx\}$ of the form $(\mathrm{const})\, x^{-1} (\log x/m)^{\alpha}$ for some $\alpha > 0$ and $x$ large enough. However, this kind of argument does not lead to the precise order given by our Theorem 1.1. On the other hand, using known results for the distribution of other functions of the spectrum (for example, the scaled condition number $((\lambda_1 + \dots + \lambda_m)/\lambda_1)^{1/2}$ as in Edelman [8]) one can get upper and lower bounds for the tails of the distribution of $\kappa(A)$ which again do not reach the precise behavior $(\mathrm{const})/x$.

2. Proof of Theorem 1.1. It is easy to see that, almost surely, the eigenvalues of $A^T A$ are pairwise different. We introduce the following additional notation.
$\langle \cdot, \cdot \rangle$ is the usual scalar product in $R^m$ and $\{e_1, \dots, e_m\}$ the canonical basis.
$I_k$ denotes the $k \times k$ identity matrix.
$B = A^T A = ((b_{ij}))_{i,j=1,\dots,m}$.
For $s$ in $R^m$, $s \ne 0$, $\pi_s : R^m \to R^m$ denotes the orthogonal projection onto $\{s\}^{\perp}$, the orthogonal complement of $s$ in $R^m$.
$M \succ 0$ (resp. $M \prec 0$) means that the symmetric matrix $M$ is positive definite (resp. negative definite).
If $\xi$ is a random vector, $p_{\xi}(\cdot)$ is the density of its distribution whenever it exists.
For a differentiable function $F$ defined on a smooth manifold embedded in some Euclidean space, $F'(s)$ and $F''(s)$ are the first and the second derivatives of $F$, which we will represent, in each case, with respect to an appropriate orthonormal basis of the tangent space.

Instead of (1.1) we prove the equivalent statement: for $x > m$,

$c\, m/x < P\{\kappa(A) > x\} < C\, m/x.$   (2.1)
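The bound $P\{\lambda_m > (4+\varepsilon)m\}$ in (1.2) reflects the fact that $\lambda_m$ concentrates around $4m$; the following sketch (an addition, with illustrative sizes) makes this visible.

    % Sketch: lambda_m = ||A||^2 concentrates near 4m for Gaussian A.
    m = 300; reps = 50;
    r = zeros(reps, 1);
    for k = 1:reps
        A = randn(m);
        r(k) = max(eig(A' * A));    % largest eigenvalue of A'A
    end
    fprintf('mean(lambda_m)/m = %.2f (close to 4)\n', mean(r) / m)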

We break the proof into several steps. Our main task is to estimate the joint density of the pair $(\lambda_m, \lambda_1)$; this will be done in Step 4.

Step 1. For $a, b \in R$, $a > b$, one has almost surely:

$\{\lambda_m \in (a, a+da),\ \lambda_1 \in (b, b+db)\}$
$= \{\exists\, s, t \in S^{m-1},\ \langle s, t \rangle = 0,\ X(s) \in (a, a+da),\ X(t) \in (b, b+db),$
$\qquad \pi_s(Bs) = 0,\ \pi_t(Bt) = 0,\ X''(s) \prec 0,\ X''(t) \succ 0\}.$   (2.2)

An instant reflection shows that almost surely the number $N_{a,b,da,db}$ of pairs $(s,t)$ belonging to the right-hand side of (2.2) is equal to 0 or to 4 (the four pairs $(\pm s, \pm t)$), so that:

$P\{\lambda_m \in (a, a+da),\ \lambda_1 \in (b, b+db)\} = \tfrac14\, E(N_{a,b,da,db}).$   (2.3)

Step 2. In this step we will give a bound for $E(N_{a,b,da,db})$ using what we call a Rice-type formula (see Azaïs and Wschebor [3] for some related problems and general tools). Let

$V = \{(s,t) : s, t \in S^{m-1},\ \langle s, t \rangle = 0\}.$

$V$ is a $C^{\infty}$-differentiable manifold without boundary, embedded in $R^{2m}$, $\dim(V) = 2m-3$. We will denote by $\tau = (s,t)$ a generic point in $V$ and by $\sigma_V(d\tau)$ the geometric measure on $V$. It is easy to see that $\sigma_V(V) = \sigma_{m-1}\,\sigma_{m-2}$, where $\sigma_{m-1}$ denotes the surface area of $S^{m-1} \subset R^m$, that is, $\sigma_{m-1} = 2\pi^{m/2}/\Gamma(m/2)$.

On $V$ we define the random field $Y : V \to R^{2m}$ by means of

$Y(s,t) = \begin{pmatrix} \pi_s(Bs) \\ \pi_t(Bt) \end{pmatrix}.$

For $\tau = (s,t)$ a given point in $V$, we have that $Y(\tau) \in \{(t,-s)\}^{\perp} \cap \left( \{s\}^{\perp} \times \{t\}^{\perp} \right) = W_{\tau}$ for any value of the matrix $B$, where $\{(t,-s)\}^{\perp}$ is the orthogonal complement of the point $(t,-s)$ in $R^{2m}$. In fact, $(t,-s) \in \{s\}^{\perp} \times \{t\}^{\perp}$ and

$\langle Y(s,t), (t,-s) \rangle_{R^{2m}} = \langle \pi_s(Bs), t \rangle - \langle \pi_t(Bt), s \rangle = \langle Bs - \langle s, Bs \rangle s,\ t \rangle - \langle Bt - \langle t, Bt \rangle t,\ s \rangle = 0,$

since $\langle s, t \rangle = 0$ and $B$ is symmetric. Notice that $\dim(W_{\tau}) = 2m-3$. We also set

$\Delta(\tau) = \left[ \det\left( (Y'(\tau))^T\, Y'(\tau) \right) \right]^{1/2}.$
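A one-line sanity check of the surface-area constant used above (an added sketch): $\sigma_{m-1} = 2\pi^{m/2}/\Gamma(m/2)$ reproduces the familiar low-dimensional values.

    % Sketch: sigma_{m-1} = 2*pi^(m/2)/gamma(m/2).
    sigma = @(m) 2 * pi^(m/2) / gamma(m/2);
    fprintf('%g (= 2*pi, circle length)\n', sigma(2))
    fprintf('%g (= 4*pi, sphere area)\n', sigma(3))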

For $\tau = (s,t) \in V$, $F_{\tau}$ denotes the event

$F_{\tau} = \{X(s) \in (a, a+da),\ X(t) \in (b, b+db),\ X''(s) \prec 0,\ X''(t) \succ 0\},$

and $p_{Y(\tau)}(\cdot)$ is the density of the random vector $Y(\tau)$ in the $(2m-3)$-dimensional subspace $W_{\tau}$ of $R^{2m}$. Put $N = \#\{\tau : \tau \in V,\ Y(\tau) = 0\}$.

Assume that $0$ is not a critical value of $Y$; that is, if $Y(\tau) = 0$ then $\Delta(\tau) \ne 0$. This holds true with probability 1. By compactness of $V$, this implies $N < \infty$. Assume $N \ne 0$ and denote by $\tau_1, \dots, \tau_N$ the roots of the equation $Y(\tau) = 0$. Because of the implicit function theorem, if $\delta > 0$ is small enough, one can find in $V$ open neighborhoods $U_1, \dots, U_N$ of the points $\tau_1, \dots, \tau_N$, respectively, so that:
$Y$ is a diffeomorphism between $U_j$ and $Y(V) \cap B_{2m}(0,\delta)$ ($B_{2m}(0,\delta)$ is the Euclidean ball of radius $\delta$ centered at the origin in $R^{2m}$);
$U_1, \dots, U_N$ are pairwise disjoint;
if $\tau \notin \bigcup_{j=1}^{N} U_j$, then $Y(\tau) \notin B_{2m}(0,\delta)$.

Using the change of variable formula, it follows that:

$\int_V \Delta(\tau)\, 1_{\{\|Y(\tau)\| < \delta\}}\, \sigma_V(d\tau) = \sum_{j=1}^{N} \int_{U_j} \Delta(\tau)\, \sigma_V(d\tau) = \sum_{j=1}^{N} \mu\left( Y(U_j) \right),$   (2.4)

where $\mu(Y(U_j))$ denotes the $(2m-3)$-dimensional geometric measure of $Y(U_j)$. As $\delta \downarrow 0$, $\mu(Y(U_j)) \simeq |B_{2m-3}(\delta)|$, where $|B_{2m-3}(\delta)|$ is the $(2m-3)$-dimensional Lebesgue measure of a ball of radius $\delta$ in $R^{2m-3}$. It follows from (2.4) that, almost surely,

$N = \lim_{\delta \downarrow 0} \frac{1}{|B_{2m-3}(\delta)|} \int_V \Delta(\tau)\, 1_{\{\|Y(\tau)\| < \delta\}}\, \sigma_V(d\tau).$

In exactly the same way, one can prove that

$N_{a,b,da,db} = \lim_{\delta \downarrow 0} \frac{1}{|B_{2m-3}(\delta)|} \int_V \Delta(\tau)\, 1_{F_{\tau}}\, 1_{\{\|Y(\tau)\| < \delta\}}\, \sigma_V(d\tau).$

Applying Fatou's lemma and Fubini's theorem:

$E(N_{a,b,da,db}) \le \liminf_{\delta \downarrow 0} \frac{1}{|B_{2m-3}(\delta)|} \int_V E\left( \Delta(\tau)\, 1_{F_{\tau}}\, 1_{\{\|Y(\tau)\| < \delta\}} \right) \sigma_V(d\tau)$
$= \liminf_{\delta \downarrow 0} \int_V \sigma_V(d\tau) \int_{B_{m,\delta,\tau}} E\left( \Delta(\tau)\, 1_{F_{\tau}} \,/\, Y(\tau) = y \right) p_{Y(\tau)}(y)\, \frac{dy}{|B_{2m-3}(\delta)|}$
$= \int_V E\left( \Delta(\tau)\, 1_{F_{\tau}} \,/\, Y(\tau) = 0 \right) p_{Y(\tau)}(0)\, \sigma_V(d\tau),$

where $B_{m,\delta,\tau} = B_{2m}(0,\delta) \cap W_{\tau}$. The validity of the last passage to the limit will become clear below, since it will follow from the calculations we will perform that the integrand in the inner integral is a continuous function of the pair $(\tau, y)$. Hence:

$E(N_{a,b,da,db}) \le \int_a^{a+da} dx \int_b^{b+db} dy \int_V E\left( \Delta(s,t)\, 1_{\{X''(s) \prec 0,\ X''(t) \succ 0\}} \,/\, C_{s,t,x,y} \right) p_{X(s),X(t),Y(s,t)}(x, y, 0)\, \sigma_V(d(s,t)),$   (2.5)

where $C_{s,t,x,y}$ is the condition $\{X(s) = x,\ X(t) = y,\ Y(s,t) = 0\}$.

The invariance of the law of $A$ with respect to isometries of $R^m$ implies that the integrand in (2.5) does not depend on $(s,t) \in V$. Hence, we have proved that the joint law of $\lambda_m$ and $\lambda_1$ has a density $g(a,b)$, $a > b$, and

$g(a,b) \le \tfrac14\, \sigma_{m-1}\, \sigma_{m-2}\, E\left( \Delta(e_1,e_2)\, 1_{\{X''(e_1) \prec 0,\ X''(e_2) \succ 0\}} \,/\, C_{e_1,e_2,a,b} \right) p_{X(e_1),X(e_2),Y(e_1,e_2)}(a, b, 0).$   (2.6)

In fact, using the method of Azaïs and Wschebor [3], it could be proved that (2.6) is an equality, but we do not need such a precise result here.

Step 3. Next, we compute the ingredients in the right-hand member of (2.6). We take as orthonormal basis for the subspace $W_{(e_1,e_2)}$:

$L = \left\{ (e_3,0), \dots, (e_m,0),\ (0,e_3), \dots, (0,e_m),\ \tfrac{1}{\sqrt 2}(e_2, e_1) \right\}.$

We have:

$X(e_1) = b_{11}$, $X(e_2) = b_{22}$, $X''(e_1) = 2\left( B_1 - b_{11} I_{m-1} \right)$, $X''(e_2) = 2\left( B_2 - b_{22} I_{m-1} \right)$,

where $B_1$ (resp. $B_2$) is the $(m-1) \times (m-1)$ matrix obtained by suppressing the first (resp. the second) row and column in $B$. Also

$Y(e_1,e_2) = (0, b_{21}, b_{31}, \dots, b_{m1},\ b_{12}, 0, b_{32}, \dots, b_{m2})^T,$

so that it has the following expression in the orthonormal basis $L$:

$Y(e_1,e_2) = \sum_{i=3}^{m} \left( b_{i1}\,(e_i,0) + b_{i2}\,(0,e_i) \right) + \sqrt 2\, b_{12}\; \tfrac{1}{\sqrt 2}(e_2,e_1).$

It follows that the joint density of $X(e_1), X(e_2), Y(e_1,e_2)$ appearing in (2.6), in the space $R \times R \times W_{(e_1,e_2)}$, is the joint density of the random variables $b_{11}, b_{22}, \sqrt 2\, b_{12}, b_{31}, \dots, b_{m1}, b_{32}, \dots, b_{m2}$ at the point $(a, b, 0, \dots, 0)$.

To compute this density, first compute the joint density $q$ of $b_{31}, \dots, b_{m1}, b_{32}, \dots, b_{m2}$ given $a_1, a_2$, where $a_j$ denotes the $j$th column of $A$ (which is standard Gaussian in $R^m$). $q$ is the normal density in $R^{2(m-2)}$, centered, with variance matrix

$\begin{pmatrix} \|a_1\|^2\, I_{m-2} & \langle a_1, a_2 \rangle\, I_{m-2} \\ \langle a_1, a_2 \rangle\, I_{m-2} & \|a_2\|^2\, I_{m-2} \end{pmatrix}.$

Set $\bar a_j = a_j/\|a_j\|$, $j = 1, 2$.
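The covariance structure just displayed can be checked quickly by simulation (an added sketch; sizes are illustrative): conditionally on $a_1, a_2$, the pairs $(b_{i1}, b_{i2})$, $i \ge 3$, are i.i.d. centered Gaussian with $2 \times 2$ covariance having entries $\|a_1\|^2$, $\langle a_1,a_2 \rangle$, $\|a_2\|^2$.

    % Sketch: empirical vs theoretical covariance of (b_i1, b_i2) given a1, a2.
    m  = 6;
    a1 = randn(m,1); a2 = randn(m,1);        % the conditioning columns, fixed
    N  = 1e5;
    ai = randn(m, N);                        % i.i.d. standard Gaussian columns
    b  = [a1' * ai; a2' * ai]';              % rows: samples of (b_i1, b_i2)
    disp(cov(b))                             % empirical covariance
    disp([a1'*a1, a1'*a2; a2'*a1, a2'*a2])   % theoretical covariance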

The density of the triplet $(b_{11}, b_{22}, \sqrt 2\, b_{12}) = \left( \|a_1\|^2,\ \|a_2\|^2,\ \sqrt 2\, \|a_1\|\, \|a_2\|\, \langle \bar a_1, \bar a_2 \rangle \right)$ at the point $(a, b, 0)$ can be computed as follows. Since $\langle \bar a_1, \bar a_2 \rangle$ and $(\|a_1\|, \|a_2\|)$ are independent, the density of the triplet at $(a, b, 0)$ is equal to

$\chi^2_m(a)\, \chi^2_m(b)\, (2ab)^{-1/2}\, p_{\langle \bar a_1, \bar a_2 \rangle}(0),$

where $\chi^2_m(\cdot)$ denotes the $\chi^2$ density with $m$ degrees of freedom. Let $\xi = (\xi_1, \dots, \xi_m)^T$ be Gaussian standard in $R^m$. Clearly, $\langle \bar a_1, \bar a_2 \rangle$ has the same distribution as $\xi_1/\|\xi\|$, because of the invariance under rotations. For $0 < t < 1$,

$\frac{\partial}{\partial t} P\left\{ \frac{\xi_1}{\|\xi\|} \le t \right\} = \frac{\partial}{\partial t} P\left\{ F_{1,m-1} \le \frac{(m-1)\,t^2}{1-t^2} \right\} = \frac{\partial}{\partial t} \int_0^{(m-1)t^2/(1-t^2)} f_{1,m-1}(x)\, dx,$

where $\chi^2_{m-1} = \xi_2^2 + \dots + \xi_m^2$ and $F_{1,m-1} = (m-1)\,\xi_1^2/\chi^2_{m-1}$ has the Fisher distribution with $(1, m-1)$ degrees of freedom and density $f_{1,m-1}$. Letting $t \downarrow 0$, we obtain

$p_{\langle \bar a_1, \bar a_2 \rangle}(0) = \frac{\Gamma(m/2)}{\sqrt{\pi}\; \Gamma\left( (m-1)/2 \right)}.$

Summing up, the density in (2.6) is equal to

$\frac{(2\pi)^{-(m-2)}}{2^{m+1/2}\, \sqrt{\pi}\; \Gamma(m/2)\, \Gamma\left( (m-1)/2 \right)}\; \frac{\exp\left( -(a+b)/2 \right)}{\sqrt{ab}}.$   (2.7)

We now consider the conditional expectation in (2.6). First, observe that the $(2m-3)$-dimensional tangent space to $V$ at the point $(s,t)$ is parallel to the orthogonal complement in $R^m \times R^m$ of the triplet of vectors $(s,0)$, $(0,t)$, $(t,s)$. This is immediate from the definition of $V$. To compute the matrix associated with $Y'(e_1,e_2)$, take the set

$L' = \left\{ (e_3,0), \dots, (e_m,0),\ (0,e_3), \dots, (0,e_m),\ \tfrac{1}{\sqrt 2}(e_2,-e_1) \right\}$

as orthonormal basis in the tangent space and the canonical basis in $R^{2m}$. A direct calculation gives:

$Y'(e_1,e_2) = \begin{pmatrix} -v^T & 0_{1,m-2} & -\tfrac{1}{\sqrt 2}\, b_{12} \\ w^T & 0_{1,m-2} & \tfrac{1}{\sqrt 2}\left( -b_{11}+b_{22} \right) \\ B' - b_{11} I_{m-2} & 0_{m-2,m-2} & \tfrac{1}{\sqrt 2}\, w \\ 0_{1,m-2} & v^T & \tfrac{1}{\sqrt 2}\left( -b_{11}+b_{22} \right) \\ 0_{1,m-2} & -w^T & \tfrac{1}{\sqrt 2}\, b_{12} \\ 0_{m-2,m-2} & B' - b_{22} I_{m-2} & -\tfrac{1}{\sqrt 2}\, v \end{pmatrix},$

where $v^T = (b_{13}, \dots, b_{1m})$, $w^T = (b_{23}, \dots, b_{2m})$, $0_{i,j}$ is a null matrix with $i$ rows and $j$ columns, and $B'$ is obtained from $B$ by suppressing the first and second rows and columns. The columns represent the derivatives in the directions of $L'$ at the point $(e_1,e_2)$; the first $m$ rows correspond to the components of $\pi_s(Bs)$, the last $m$ ones to those of $\pi_t(Bt)$.
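The value $p_{\langle \bar a_1, \bar a_2 \rangle}(0) = \Gamma(m/2)/(\sqrt\pi\, \Gamma((m-1)/2))$ obtained above can be checked by a small simulation (an added sketch; the window width h and the sample size are illustrative):

    % Sketch: density of <a1/|a1|, a2/|a2|> at 0, estimated by the fraction
    % of samples in a small window, vs gamma(m/2)/(sqrt(pi)*gamma((m-1)/2)).
    m = 8; N = 1e6; h = 0.01;
    u = randn(m, N); v = randn(m, N);
    t = sum(u .* v) ./ (sqrt(sum(u.^2)) .* sqrt(sum(v.^2)));
    p0 = mean(abs(t) < h) / (2*h);
    fprintf('%.3f vs %.3f\n', p0, gamma(m/2) / (sqrt(pi) * gamma((m-1)/2)))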

Thus, under the condition $C_{e_1,e_2,a,b}$ that is used in (2.6), $Y(e_1,e_2) = 0$; in the matrix above only the blocks $B' - aI_{m-2}$, $B' - bI_{m-2}$ and the two entries $\tfrac{1}{\sqrt 2}(b-a)$ of the last column survive, the last column being orthogonal to the remaining ones, so that

$\left[ \det\left( (Y'(e_1,e_2))^T\, Y'(e_1,e_2) \right) \right]^{1/2} = \left| \det\left( B' - aI_{m-2} \right) \right| \cdot \left| \det\left( B' - bI_{m-2} \right) \right| \cdot (a-b).$

Step 4. Note that $B_1 - aI_{m-1} \prec 0$ implies $B' - aI_{m-2} \prec 0$ and, similarly, $B_2 - bI_{m-1} \succ 0$ implies $B' - bI_{m-2} \succ 0$, and that for $a > b$, under $C_{e_1,e_2,a,b}$, there is equivalence in these relations. It is also clear that, since $B \succ 0$, one has

$\left| \det\left( B' - aI_{m-2} \right) \right|\, 1_{\{B' - aI_{m-2} \prec 0\}} \le a^{m-2},$

and, together with $a - b \le a$, it follows that the conditional expectation in (2.6) is bounded by

$a^{m-1}\, E\left( \left| \det\left( B' - bI_{m-2} \right) \right|\, 1_{\{B' - bI_{m-2} \succ 0\}} \,/\, C \right),$   (2.8)

where $C$ is the condition $\{b_{11} = a,\ b_{22} = b,\ b_{12} = 0,\ b_{i1} = b_{i2} = 0\ (i = 3, \dots, m)\}$. To compute the conditional expectation in (2.8) we further condition on the value of the random vectors $a_1$ and $a_2$. Since unconditionally $a_3, \dots, a_m$ are i.i.d. standard Gaussian vectors in $R^m$, under this new conditioning their joint law becomes the law of i.i.d. standard Gaussian vectors in $R^{m-2}$, independent of the condition. That is, (2.8) is equal to

$a^{m-1}\, E\left( \left| \det\left( M - bI_{m-2} \right) \right|\, 1_{\{M - bI_{m-2} \succ 0\}} \right),$   (2.9)

where $M$ is an $(m-2) \times (m-2)$ random matrix with entries $M_{ij} = \langle v_i, v_j \rangle$ $(i, j = 1, \dots, m-2)$ and the vectors $v_1, \dots, v_{m-2}$ are i.i.d. Gaussian standard in $R^{m-2}$. The expression in (2.9) is bounded by

$a^{m-1}\, E\left( \det(M) \right) = a^{m-1}\, (m-2)!\,.$

The last equality is contained in the following lemma, which is well known; see, for example, Edelman [7].

Lemma 2.1. Let $\xi_1, \dots, \xi_m$ be i.i.d. random vectors in $R^p$, $p \ge m$, their common distribution being Gaussian centered with variance $I_p$. Denote by $W_{m,p}$ the matrix

$W_{m,p} = ((\langle \xi_i, \xi_j \rangle))_{i,j=1,\dots,m}$

and by

$D(\lambda) = \det\left( W_{m,p} - \lambda I_m \right)$

its characteristic polynomial. Then

(i) $E\left( \det(W_{m,p}) \right) = p(p-1)\cdots(p-m+1)$;   (2.10)

(ii) $E\left( D(\lambda) \right) = \sum_{k=0}^{m} \binom{m}{k} (-1)^k\, \frac{p!}{(p-m+k)!}\, \lambda^k$.   (2.11)

Returning to the proof of the theorem and summing up this part, after replacing in (2.6), we get

$g(a,b) \le C_m\, \exp\left( -(a+b)/2 \right) (ab)^{-1/2}\, a^{m-1},$   (2.12)

where $C_m = \dfrac{1}{4\sqrt 2\,(m-2)!}$.

Step 5. Now we prove the upper-bound part in (2.1). One has, for $x > m$:

$P\{\kappa(A) > x\} = P\left\{ \frac{\lambda_m}{\lambda_1} > x^2 \right\} \le P\left\{ \lambda_1 < \frac{L^2 m}{x^2} \right\} + P\left\{ \frac{\lambda_m}{\lambda_1} > x^2,\ \lambda_1 \ge \frac{L^2 m}{x^2} \right\},$   (2.13)

where $L$ is a positive number to be chosen later on. For the first term in (2.13), we use Proposition 9 in Cuesta-Albertos and Wschebor [4], which is a slight modification of a theorem in Sankar, Spielman, and Teng [13]:

$P\left\{ \lambda_1 < \frac{L^2 m}{x^2} \right\} = P\left\{ \|A^{-1}\| > \frac{x}{L\sqrt m} \right\} \le C_1(m)\, \frac{Lm}{x}.$

Here $C_1(m)$ is an explicit constant, defined in [4] in terms of the tails of a random variable $t_{m-1}$ having Student's distribution with $m-1$ degrees of freedom; it satisfies $C_1(m) \le C_1(+\infty) \approx 2.3473$.

For the second term in (2.13), with

$G_m(y) = C_m \int_{L^2 m/y}^{+\infty} db \int_{by}^{+\infty} \frac{\exp\left( -(a+b)/2 \right)}{\sqrt{ab}}\, a^{m-1}\, da,$

using (2.12) we have

$P\left\{ \frac{\lambda_m}{\lambda_1} > x^2,\ \lambda_1 \ge \frac{L^2 m}{x^2} \right\} \le \int_{L^2 m/x^2}^{+\infty} db \int_{bx^2}^{+\infty} g(a,b)\, da \le G_m(x^2).$

Differentiating under the integral sign,

$G_m'(y) = C_m \left[ -\int_{L^2 m/y}^{+\infty} e^{-b/2}\, \sqrt b\; e^{-by/2}\, (by)^{m-3/2}\, db + \frac{(L^2 m)^{1/2}}{y^{3/2}} \int_{L^2 m}^{+\infty} \exp\left( -\frac12\left( a + \frac{L^2 m}{y} \right) \right) a^{m-3/2}\, da \right],$   (2.14)
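A Monte-Carlo check of Lemma 2.1(i) (an added sketch with illustrative sizes):

    % Sketch: E(det W_{m,p}) = p(p-1)...(p-m+1) for the Gram matrix of
    % m i.i.d. standard Gaussian vectors in R^p.
    m = 4; p = 6; N = 1e5;
    d = zeros(N, 1);
    for k = 1:N
        X = randn(p, m);          % columns xi_1,...,xi_m
        d(k) = det(X' * X);       % det of the Wishart (Gram) matrix
    end
    fprintf('%.1f vs %.1f\n', mean(d), prod(p-m+1:p))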

which implies

$|G_m'(y)| \le C_m\, y^{m-3/2} \left( \frac{2}{1+y} \right)^m \int_{L^2 m (1+y)/(2y)}^{+\infty} e^{-z} z^{m-1}\, dz + C_m\, \frac{(L^2 m)^{1/2}}{y^{3/2}}\; 2^{m-1/2} \int_{L^2 m/2}^{+\infty} e^{-z} z^{m-3/2}\, dz.$

Put

$I_m(a) = \int_a^{+\infty} e^{-z} z^{m-1}\, dz.$

Integrating by parts:

$I_m(a) = e^{-a}\left[ a^{m-1} + (m-1)a^{m-2} + (m-1)(m-2)a^{m-3} + \dots + (m-1)! \right],$

so that for $a \ge 2.5m$,

$I_m(a) \le \frac{5}{3}\, e^{-a} a^{m-1}.$

If $L^2 > 5$, bounding both integrals above by means of $I_m$, we obtain the bound $|G_m'(y)| \le D_m\, y^{-3/2}$ with

$D_m = \frac{5}{6}\; \frac{(L^2 m)^{m-1/2}}{(m-2)!}\; \exp\left( -\frac{L^2 m}{2} \right).$

We now apply Stirling's formula (Abramowitz and Stegun [1], 6.1.38), i.e.,

$\sqrt{2\pi x}\, x^x e^{-x} \le \Gamma(x+1) \le \sqrt{2\pi x}\, x^x \exp\left( -x + \frac{1}{12x} \right)$ for all $x > 0$,

to get

$D_m \le \frac{5}{6L} \sqrt{\frac{m^3}{2\pi(m-2)}}\; \exp\left( -\frac{m}{2}\left( L^2 - 4\log(L) - 2 \right) \right),$

and we choose for $L$ the only root larger than 1 of the equation $L^2 - 4\log(L) - 2 = 0$ (check that $L \approx 2.345$), so that the exponential factor equals 1. To finish,

$G_m(y) = -\int_y^{+\infty} G_m'(t)\, dt \le D_m \int_y^{+\infty} \frac{dt}{t^{3/2}} = \frac{2D_m}{\sqrt y}.$

Replacing $y$ by $x^2$ and performing the numerical evaluations, the upper bound in (2.1) follows, and we get for the constant $C$ the value 5.60.
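The integration-by-parts identity for $I_m(a)$ and the bound $(5/3)e^{-a}a^{m-1}$ for $a \ge 2.5m$ are easy to test numerically (an added sketch; m is an illustrative choice):

    % Sketch: I_m(a) = int_a^inf exp(-z) z^(m-1) dz equals the finite sum
    % exp(-a)*[a^(m-1) + (m-1)a^(m-2) + ... + (m-1)!]; for a >= 2.5m it
    % stays below (5/3)*exp(-a)*a^(m-1).
    m = 7; a = 2.5 * m;
    q = integral(@(z) exp(-z) .* z.^(m-1), a, Inf);
    s = exp(-a) * sum(a.^(m-1:-1:0) .* cumprod([1, m-1:-1:1]));
    fprintf('%.6g vs %.6g, ratio to bound: %.3f\n', ...
            q, s, q / ((5/3) * exp(-a) * a^(m-1)))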

Step 6. We consider now the lower bound in (2.1). For $\gamma > 0$ and $x > 0$ we have:

$P\{\kappa(A) > x\} = P\left\{ \frac{\lambda_m}{\lambda_1} > x^2 \right\} \ge P\left\{ \frac{\lambda_m}{\lambda_1} > x^2,\ \lambda_1 < \frac{\gamma m}{x^2} \right\}$
$= P\left\{ \lambda_1 < \frac{\gamma m}{x^2} \right\} - P\left\{ \frac{\lambda_m}{\lambda_1} \le x^2,\ \lambda_1 < \frac{\gamma m}{x^2} \right\}.$   (2.15)

A lower bound for the first term in the right-hand member of (2.15) is obtained using the following inequality, which we state as a separate lemma. In fact, this result is known; see, for example, Szarek [15], where it is proved without giving an explicit value for the constant. See also Edelman [7] for a related result.

Lemma 2.2. If $0 < a < 1/m$, then

$P\{\lambda_1 < a\} \ge \beta \sqrt{am},$

where we can choose $\beta = (2/3)^{3/2}\, e^{-1/3}$.

Proof. Define the index $i_X(t)$ of a critical point $t \in S^{m-1}$ of the function $X$ as the number of negative eigenvalues of $X''(t)$. For each $a > 0$ put

$N_i(a) = \#\{t \in S^{m-1} : X(t) = t^T B t < a,\ X'(t) = 0,\ i_X(t) = i\}$, for $i = 0, 1, \dots, m-1$.

One easily checks that if the eigenvalues of $B$ are $\lambda_1, \dots, \lambda_m$, $0 < \lambda_1 < \dots < \lambda_m$, then:
if $a \le \lambda_1$, then $N_i(a) = 0$ for $i = 0, 1, \dots, m-1$;
if $\lambda_i < a \le \lambda_{i+1}$ for some $i = 1, \dots, m-1$, then $N_k(a) = 2$ for $k = 0, \dots, i-1$ and $N_k(a) = 0$ for $k = i, \dots, m-1$;
if $\lambda_m < a$, then $N_i(a) = 2$ for $i = 0, 1, \dots, m-1$.

(Each unit eigenvector of $B$ associated with an eigenvalue $\lambda_k < a$ contributes the two antipodal critical points, of index $k-1$.) Now consider

$M(a) = \sum_{i=0}^{m-1} (-1)^i N_i(a).$

$M(a)$ is the Euler characteristic of the set $S_a = \{t \in S^{m-1} : X(t) < a\}$; see Adler [2]. It follows from the relations above that if $N_0(a) = 0$, then $N_i(a) = 0$ for $i = 1, \dots, m-1$, hence $M(a) = 0$; and if $N_0(a) = 2$, then $M(a) = 2$ or $0$. So in any case $M(a) \le N_0(a)$. Hence,

$P\{\lambda_1 < a\} = P\{N_0(a) = 2\} = \tfrac12\, E\left( N_0(a) \right) \ge \tfrac12\, E\left( M(a) \right).$   (2.16)

The expectation of $M(a)$ can be written using the following Rice-type formula (Azaïs and Wschebor [3] or Taylor and Adler [16]):

$E\left( M(a) \right) = \int_0^a dy \int_{S^{m-1}} E\left[ \det\left( X''(t) \right) \,/\, X(t) = y,\ X'(t) = 0 \right] p_{X(t),X'(t)}(y, 0)\, \sigma(dt)$
$= \int_0^a \sigma_{m-1}\, E\left[ \det\left( X''(e_1) \right) \,/\, X(e_1) = y,\ X'(e_1) = 0 \right] p_{X(e_1),X'(e_1)}(y, 0)\, dy,$

where we have used again the invariance under isometries. Applying a Gaussian regression similar to that of Step 4 to get rid of the conditioning, we obtain:

$E\left( M(a) \right) = \frac{\sqrt\pi}{2^{m-3/2}\, \Gamma^2(m/2)} \int_0^a E\left[ \det\left( Q - y I_{m-1} \right) \right] \frac{\exp(-y/2)}{\sqrt y}\, dy,$   (2.17)

where $Q$ is an $(m-1) \times (m-1)$ random matrix with entry $i,j$ equal to $\langle v_i, v_j \rangle$ and $v_1, \dots, v_{m-1}$ are i.i.d. Gaussian standard in $R^{m-1}$. We now use part (ii) of Lemma 2.1:

$E\left[ \det\left( Q - y I_{m-1} \right) \right] = (m-1)! \sum_{k=0}^{m-1} \binom{m-1}{k} \frac{(-y)^k}{k!}.$   (2.18)

Under the condition $0 < a < 1/m$, since $0 < y < a$, as $k$ increases the terms of the sum in the right-hand member of (2.18) have decreasing absolute value, so that:

$E\left[ \det\left( Q - y I_{m-1} \right) \right] \ge (m-1)! \left[ 1 - (m-1)y \right].$

Substituting into the right-hand member of (2.17), we get:

$E\left( M(a) \right) \ge \frac{\sqrt\pi\, (m-1)!}{2^{m-3/2}\, \Gamma^2(m/2)}\; J_m(a),$

where, using again $0 < a < 1/m$:

$J_m(a) = \int_0^a \left( 1 - (m-1)y \right) \frac{\exp(-y/2)}{\sqrt y}\, dy \ge \int_0^a \frac{1}{\sqrt y}\left( 1 - (m-1)y \right)\left( 1 - \frac y2 \right) dy \ge \frac43 \sqrt a,$

by an elementary computation. Going back to (2.17), applying Stirling's formula and remarking that $(1 + 1/n)^{n+1} \ge e$, we get

$P\{\lambda_1 < a\} \ge \left( \frac23 \right)^{3/2} e^{-1/3}\, \sqrt{am}.$

This proves the lemma.
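A quick Monte-Carlo check of Lemma 2.2 (an added sketch; the values of m, a and the sample size are illustrative, and beta is the constant as reconstructed above):

    % Sketch: empirical P{lambda_1 < a} versus the lower bound beta*sqrt(a*m).
    m = 10; a = 0.5/m; N = 1e4;
    hits = 0;
    for k = 1:N
        s = min(svd(randn(m)));       % smallest singular value of A
        hits = hits + (s^2 < a);      % lambda_1 = s^2
    end
    beta = (2/3)^(3/2) * exp(-1/3);
    fprintf('P_hat = %.3f >= %.3f\n', hits/N, beta * sqrt(a*m))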

End of the proof of Theorem 1.1. Using Lemma 2.2, the first term in the right-hand member of (2.15) is bounded below by $\beta \sqrt\gamma\, m/x$. To obtain a bound for the second term, we use again our upper bound (2.12) on the joint density $g(a,b)$, as follows:

$P\left\{ \frac{\lambda_m}{\lambda_1} \le x^2,\ \lambda_1 < \frac{\gamma m}{x^2} \right\} \le \int_0^{\gamma m/x^2} db \int_b^{bx^2} g(a,b)\, da$
$\le C_m \int_0^{\gamma m/x^2} \frac{db}{\sqrt b} \int_b^{bx^2} e^{-(a+b)/2}\, a^{m-3/2}\, da \le C_m \int_0^{\gamma m/x^2} b\,(x^2-1)\,(bx^2)^{m-3/2}\, \frac{db}{\sqrt b}$
$\le C_m\, \frac{\gamma^m m^{m-1}}{x} \le \frac{8}{\sqrt\pi}\, (\gamma e)^m\, \frac{\sqrt m}{x},$   (2.19)

on applying Stirling's formula. Choosing now $\gamma = 1/e^2$, we see that the hypothesis of Lemma 2.2 is satisfied and also

$P\left\{ \frac{\lambda_m}{\lambda_1} \le x^2,\ \lambda_1 < \frac{\gamma m}{x^2} \right\} \le \frac{8}{\sqrt\pi\, e^3}\, \frac{\sqrt m}{x}.$

Replacing into (2.15), we obtain the lower bound in (2.1) with

$c = \left( \frac23 \right)^{3/2} e^{-4/3} - \frac{8}{\sqrt{3\pi}\, e^3} \approx 0.013.$

3. Monte-Carlo experiment. To study the tail of the distribution of the condition number of Gaussian matrices of various sizes, we used the following Matlab functions:
normrnd, to simulate normal variables;
cond, to compute the condition number of the matrix $A$.
The results, over $10^4$ simulations using Matlab, are given in Table 3.1. This table suggests that the constants $c$ and $C$ should take values smaller than 0.87 and bigger than 1.8, respectively.

Table 3.1. Values of the estimations of $P\{\kappa(A) > mx\}$ for $x = 1, 2, 3, 5, 10, 15, 30, 50, 100$ and $m = 3, 5, 10, 30, 100, 300, 500$, by the Monte-Carlo method over $10^4$ simulations, together with the bounds 0.013/x (lower) and 5.60/x (upper). [Table entries not reproduced.]

Acknowledgments. The authors thank Professors G. Letac and F. Cucker for valuable discussions.

REFERENCES

[1] Milton Abramowitz and Irene A. Stegun, editors. Handbook of mathematical functions with formulas, graphs, and mathematical tables. A Wiley-Interscience Publication. John Wiley & Sons Inc., New York, 1984. Reprint of the 1972 edition, Selected Government Publications.
[2] Robert J. Adler. The geometry of random fields. John Wiley & Sons Ltd., Chichester, 1981. Wiley Series in Probability and Mathematical Statistics.
[3] J-M. Azaïs and M. Wschebor. On the distribution of the maximum of a Gaussian field with d parameters. Ann. Appl. Probab., to appear, 2004.
[4] J. Cuesta-Albertos and M. Wschebor. Condition numbers and the extrema of random fields. In Proc. IV Ascona Seminar. Birkhäuser, Boston, to appear, 2004.
[5] Kenneth R. Davidson and Stanislaw J. Szarek. Local operator theory, random matrices and Banach spaces. In Handbook of the geometry of Banach spaces, Vol. I, pages 317-366. North-Holland, Amsterdam, 2001.

Fig. 3.1. Values of $P\{\kappa(A) > mx\}$ as a function of $x$ for $m = 3, 10, 100$, and $500$. [Plot not reproduced.]

[6] James W. Demmel. Applied numerical linear algebra. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1997.
[7] Alan Edelman. Eigenvalues and condition numbers of random matrices. SIAM J. Matrix Anal. Appl., 9(4):543-560, 1988.
[8] Alan Edelman. On the distribution of a scaled condition number. Math. Comp., 58(197):185-190, 1992.
[9] Nicholas J. Higham. Accuracy and stability of numerical algorithms. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, second edition, 2002.
[10] Maurice Kendall, Alan Stuart, and J. Keith Ord. The advanced theory of statistics. Vol. 3. Macmillan Inc., New York, fourth edition, 1983. Design and analysis, and time-series.
[11] P. R. Krishnaiah and T. C. Chang. On the exact distributions of the extreme roots of the Wishart and MANOVA matrices. J. Multivariate Anal., 1(1):108-117, 1971.
[12] M. Ledoux. A remark on hypercontractivity and tail inequalities for the largest eigenvalues of random matrices. In Séminaire de Probabilités XXXVII, Lecture Notes in Mathematics. Springer, to appear, 2003.
[13] A. Sankar, D. A. Spielman, and S. H. Teng. Smoothed analysis of the condition number and growth factors of matrices. Preprint, 2002.
[14] Steve Smale. On the efficiency of algorithms of analysis. Bull. Amer. Math. Soc. (N.S.), 13(2):87-121, 1985.
[15] Stanisław J. Szarek. Condition numbers of random matrices. J. Complexity, 7(2):131-149, 1991.
[16] Jonathan E. Taylor and Robert J. Adler. Euler characteristics for Gaussian fields on manifolds. Ann. Probab., 31(2):533-563, 2003.
[17] A. M. Turing. Rounding-off errors in matrix processes. Quart. J. Mech. Appl. Math., 1:287-308, 1948.
[18] John von Neumann and H. H. Goldstine. Numerical inverting of matrices of high order. Bull. Amer. Math. Soc., 53:1021-1099, 1947.

[19] Eugene Wigner. Random matrices in physics. SIAM Rev., 9:1-23, 1967.
[20] J. H. Wilkinson. Rounding errors in algebraic processes. Prentice-Hall Inc., Englewood Cliffs, NJ, 1963.
[21] Samuel S. Wilks. Mathematical statistics. A Wiley Publication in Mathematical Statistics. John Wiley & Sons Inc., New York, 1962.
[22] Mario Wschebor. Smoothed analysis of κ(A). J. Complexity, 20(1):97-107, 2004.
