On singular values distribution of a large auto-covariance matrix in the ultra-dimensional regime


Title: On singular values distribution of a large auto-covariance matrix in the ultra-dimensional regime
Authors: Wang, Q; Yao, JJ
Citation: Random Matrices: Theory and Applications, 2015, v. 4, article no.
Issued Date: 2015
Rights: Electronic version of an article published as [Random Matrices: Theory and Applications, 2015, v. 4, article no.] [copyright World Scientific Publishing Company]. This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

On singular values distribution of a large auto-covariance matrix in the ultra-dimensional regime

Qinwen Wang
Department of Mathematics, Zhejiang University
wqw883@gmail.com

Jianfeng Yao
Department of Statistics and Actuarial Science, The University of Hong Kong, Pokfulam, Hong Kong
jeffyao@hku.hk

Abstract: Let $(\varepsilon_t)_{t>0}$ be a sequence of independent real random vectors of dimension $p$ and let $X_T = \frac{1}{T}\sum_{t=s+1}^{s+T}\varepsilon_t\varepsilon_{t-s}^*$ be the lag-$s$ ($s$ is a fixed positive integer) auto-covariance matrix of $(\varepsilon_t)$. This paper investigates the limiting behavior of the singular values of $X_T$ under the so-called ultra-dimensional regime where $p\to\infty$ and $T\to\infty$ in a related way such that $p/T\to 0$. First, we show that the singular value distribution of $X_T$, after a suitable normalization, converges to a nonrandom limit $G$ (quarter law) under the fourth-moment condition. Second, we establish the convergence of its largest singular value to the right edge of $G$. Both results are derived using the moment method.

AMS 2000 subject classifications: 15A52, 60F15.
Keywords and phrases: Auto-covariance matrix, Singular values, Limiting spectral distribution, Ultra-dimensional data, Largest eigenvalue, Moment method.

The research of J. Yao is partly supported by GRF Grant HKU 70543P.

1. Introduction

Let $s$ be a fixed positive integer and $(\varepsilon_t)_{1\le t\le T+s}$ a sequence of independent real random vectors, where $\varepsilon_t = (\varepsilon_{it})_{1\le i\le p}$ has independent coordinates satisfying $E\varepsilon_{it} = 0$ and $E\varepsilon_{it}^2 = 1$. Consider the so-called lag-$s$ sample auto-covariance matrix of $(\varepsilon_t)$ defined as
\[
X_T = \frac{1}{T}\sum_{t=s+1}^{s+T}\varepsilon_t\varepsilon_{t-s}^*. \tag{1.1}
\]
Motivated by their applications in high-dimensional statistical analysis, where the dimensions $p$ and $T$ are assumed large (tending to infinity), the spectral analysis of such sample auto-covariance matrices has attracted much attention in the recent random matrix literature. For example, perturbation theory on the matrix $X_T$ has been carried out in Lam and Yao (2012) and Li et al. (2014) for estimating the number of factors in a large-dimensional factor model of the type
\[
y_t = \Lambda f_t + \varepsilon_t + \mu, \tag{1.2}
\]
where $\{y_t\}$ is a $p$-dimensional sequence observed at time $t$, $\{f_t\}$ a sequence of $m$-dimensional latent factors ($m \ll p$) uncorrelated with the error process $\{\varepsilon_t\}$, and $\mu\in\mathbb{R}^p$ is the general mean. Since $X_T$ is not symmetric, its spectral distribution is given by the set of its singular values, which are by definition the square roots of the positive eigenvalues of
\[
A := X_T X_T^*. \tag{1.3}
\]
To the best of our knowledge, all the existing results on $X_T$ or $A$ are found under what we will refer to as the Marčenko-Pastur regime, or simply the MP regime, where
\[
p\to\infty, \qquad T\to\infty \qquad \text{and} \qquad p/T\to c > 0. \tag{1.4}
\]
For example, Jin et al. (2014) derives the limit of the eigenvalue distribution (ESD) of the symmetrized auto-covariance matrix $\tfrac12(X_T + X_T^*)$; and Wang et al. (2013) establishes the exact separation property of the ESD, which also implies the convergence of its extreme eigenvalues.

For the singular value distribution of $X_T$, the limit (LSD) has been established in Li et al. (2013) using the method of Stieltjes transform, and in Wang and Yao (2014) using the moment method. The latter paper also establishes the almost sure convergence of the largest singular value of $X_T$ to the right edge of the LSD, thanks to the moment method. Related results are also proposed in Liu et al. (2013), where the sequence $(\varepsilon_t)$ is replaced by a more general time series. In this paper, we investigate the same questions as in Wang and Yao (2014) but under a different asymptotic regime, the so-called ultra-dimensional regime where
\[
p\to\infty, \qquad T\to\infty \qquad \text{and} \qquad p/T\to 0. \tag{1.5}
\]
It is naturally expected that the limit under this regime will be much different from the one obtained under the MP regime above. The findings of the paper confirm this difference by providing a new limit of the singular value distribution of $X_T$ under the ultra-dimensional regime. In a related paper, Wang and Paul (2014) also adopted the ultra-dimensional regime to derive the LSD for a large class of separable sample covariance matrices. However, the auto-covariance matrix $X_T$ considered in this paper is very different from these separable sample covariance matrices.

Recalling the definition of $A$ in (1.3), we have
\[
A(i,j) = \frac{1}{T^2}\sum_{l=1}^{p}\sum_{m=1}^{T}\sum_{n=1}^{T}\varepsilon_{i,m+s}\,\varepsilon_{l,m}\,\varepsilon_{j,n+s}\,\varepsilon_{l,n}.
\]
It follows by simple calculations that
\[
EA(i,j) = \begin{cases} 0, & i\ne j,\\ p/T, & i = j,\end{cases}
\]
and, for $i\ne j$,
\[
\operatorname{Var} A(i,j) = EA^2(i,j) = \frac{p}{T^2}.
\]
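As a worked check of the diagonal case, using only $E\varepsilon_{it} = 0$, $E\varepsilon_{it}^2 = 1$ and the independence of the entries:
\[
EA(i,i) = \frac{1}{T^2}\sum_{l=1}^{p}\sum_{m=1}^{T}\sum_{n=1}^{T}E\big[\varepsilon_{i,m+s}\varepsilon_{l,m}\varepsilon_{i,n+s}\varepsilon_{l,n}\big]
= \frac{1}{T^2}\sum_{l=1}^{p}\sum_{m=1}^{T}E\big[\varepsilon_{i,m+s}^2\big]E\big[\varepsilon_{l,m}^2\big]
= \frac{pT}{T^2} = \frac{p}{T},
\]
since for $s>0$ a summand has nonzero expectation only when $m = n$ (the four factors then pair into two squares; even when $l = i$ the two squares are distinct entries because $m+s\ne m$).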

The row sum of the variances $\operatorname{Var} A(i,j)$ is thus of order $p^2/T^2$. Therefore, in order to have the spectrum of $A$ be of constant order when $p/T\to 0$, we should normalise it as
\[
\tilde A := \frac{A}{\sqrt{p^2/T^2}} = \frac{T}{p}X_T X_T^*. \tag{1.6}
\]
The main results of the paper are as follows. First, in Section 2, we derive the almost sure limit of the singular value distribution of $\sqrt{T/p}\,X_T$ under the ultra-dimensional regime, assuming that the fourth moments of the entries $\{\varepsilon_{it}\}$ are uniformly bounded. This limit (LSD) simply equals the image measure of the semi-circle law on $[-2,2]$ by the absolute value transformation $x\mapsto |x|$. Next, in Section 3, we establish the almost sure convergence of the largest singular value of $\sqrt{T/p}\,X_T$ to 2, assuming that the entries $\{\varepsilon_{it}\}$ have a uniformly bounded moment of order $4+\nu$ for some $\nu > 0$. Both results are derived using the moment method. Some technical details on the traditional truncation and renormalisation steps are postponed to the appendixes.

2. Limiting spectral distribution by the moment method

In this section, we show that when $p/T\to 0$, the ESD of the singular values of $\sqrt{T/p}\,X_T$ tends to a nonrandom limit, which is linked to the well-known semi-circle law.

Theorem 2.1. Suppose the following conditions hold:
(a) $(\varepsilon_t)_t$ is a sequence of independent $p$-dimensional real-valued random vectors with independent entries $\varepsilon_{it}$, $1\le i\le p$, satisfying
\[
E\varepsilon_{it} = 0, \qquad E\varepsilon_{it}^2 = 1, \qquad \sup_{i,t}E\varepsilon_{it}^4 < \infty. \tag{2.1}
\]
(b) Both $p$ and $T$ tend to infinity in a related way such that $p/T\to 0$.

Then, with probability one, the empirical distribution of the singular values of $\sqrt{T/p}\,X_T$ tends to the quarter law $G$ with density function
\[
g(x) = \frac{1}{\pi}\sqrt{4-x^2}, \qquad 0 < x \le 2. \tag{2.2}
\]
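The quarter law is easy to visualise by simulation. The following sketch is an illustration only (it is not from the paper); it assumes standard normal entries and hypothetical sizes p, T and lag s chosen so that p/T is small, and compares the singular values of $\sqrt{T/p}\,X_T$ with the density $g$:

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical illustration parameters (not from the paper): p << T so that p/T is small.
    p, T, s = 100, 20000, 1
    rng = np.random.default_rng(0)

    # Build the lag-s sample auto-covariance matrix X_T = (1/T) * sum_t eps_t eps_{t-s}^*.
    eps = rng.standard_normal((p, T + s))      # columns are eps_1, ..., eps_{T+s}
    X = (eps[:, s:] @ eps[:, :T].T) / T        # sum over t = s+1, ..., s+T

    # Singular values of sqrt(T/p) * X_T should approximately follow the quarter law g.
    sv = np.linalg.svd(np.sqrt(T / p) * X, compute_uv=False)

    x = np.linspace(0, 2, 200)
    plt.hist(sv, bins=40, density=True, alpha=0.5, label="singular values")
    plt.plot(x, np.sqrt(4 - x**2) / np.pi, label="quarter law g(x)")
    plt.legend()
    plt.show()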

Remark 2.1. Recall that the quarter law $G$ is the image measure of the semi-circle law by the absolute value transformation. It is also worth noticing that if there were no lag, i.e. $s = 0$, the matrix $X_T$ would be a standard sample covariance matrix; and in this case the spectral distribution of $\sqrt{T/p}\,(X_T - I_p)$ would converge to the semi-circle law, see Bai and Yin (1988). The case of an auto-covariance matrix $X_T$ with a positive lag $s > 0$ is thus very different.

Since the singular values of $\sqrt{T/p}\,X_T$ are the square roots of the eigenvalues of $\frac{T}{p}X_T X_T^*$, in the remainder of this paper we focus on the limiting behaviour of the eigenvalues of $\frac{T}{p}X_T X_T^*$. These properties can then be transferred to the singular values of $\sqrt{T/p}\,X_T$ by the square-root transformation $x\mapsto\sqrt{x}$.

Theorem 2.2. Under the same conditions as in Theorem 2.1, with probability one, the empirical spectral distribution $F^{\tilde A}$ of the matrix $\tilde A$ in (1.6) tends to a limiting distribution $F$, which is the image measure of the semi-circle law on $[-2,2]$ by the square transformation. In particular, its $k$-th moment is
\[
m_k = \frac{1}{k+1}\binom{2k}{k}, \tag{2.3}
\]
and its Stieltjes transform $s(z)$ and density function $f(x)$ are given by
\[
s(z) = \frac{-z+\sqrt{z(z-4)}}{2z}, \qquad z\notin(0,4], \tag{2.4}
\]
and
\[
f(x) = \frac{1}{2\pi}\sqrt{\frac{4-x}{x}}, \qquad 0 < x \le 4, \tag{2.5}
\]
respectively.

Remark 2.2. The $k$-th moment in (2.3) is exactly the $2k$-th moment of the LSD of a standard Wigner matrix, which is also the number of Dyck paths of length $2k$ (for the definition of Dyck paths, we refer to Tao (2012)). Notice also that the density function $f$ is unbounded at the origin.
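For concreteness, the first values of (2.3) are $m_1 = 1$, $m_2 = 2$ and $m_3 = 5$ (the Catalan numbers), and the density (2.5) follows from the semi-circle density by the change of variables $x = y^2$ (a short worked derivation, consistent with the statement of Theorem 2.2):
\[
f(x) = 2\cdot\frac{1}{2\pi}\sqrt{4-(\sqrt{x})^2}\cdot\frac{1}{2\sqrt{x}}
= \frac{1}{2\pi}\sqrt{\frac{4-x}{x}}, \qquad 0 < x\le 4,
\]
the factor 2 accounting for the two preimages $\pm\sqrt{x}$ of $x$ under the square transformation.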

The remainder of the section is devoted to the proof of Theorem 2.2 using the moment method. The $k$-th moment of the ESD $F^{\tilde A}$ of $\tilde A$ is
\[
m_k^{\tilde A} = \frac{1}{p}\operatorname{tr}\tilde A^{k}
= \frac{1}{p^{k+1}T^{k}}\sum_{\mathbf i}\sum_{\mathbf j}
\varepsilon_{j_1 i_1}\varepsilon_{j_1 i_2}\,\varepsilon_{j_2,s+i_2}\varepsilon_{j_2,s+i_3}\cdots
\varepsilon_{j_{2k-1} i_{2k-1}}\varepsilon_{j_{2k-1} i_{2k}}\,\varepsilon_{j_{2k},s+i_{2k}}\varepsilon_{j_{2k},s+i_1}. \tag{2.6}
\]
Here, the indexes in $\mathbf i = (i_1,\ldots,i_{2k})$ run over $1,2,\ldots,T$ and the indexes in $\mathbf j = (j_1,\ldots,j_{2k})$ run over $1,2,\ldots,p$. The core of the proof is to establish the following two assertions:
\[
\text{I.}\quad E m_k^{\tilde A}\to m_k = \frac{1}{k+1}\binom{2k}{k};
\qquad\qquad
\text{II.}\quad \sum_{p=1}^{\infty}\operatorname{Var} m_k^{\tilde A} < \infty.
\]
This is given in Subsections 2.1, 2.2 and 2.3 below. It follows from these assertions that, almost surely, $m_k^{\tilde A}\to m_k$ for all $k\ge 1$. Since the limiting moment sequence $(m_k)$ clearly satisfies Carleman's condition, i.e.
\[
\sum_{k\ge 1} m_{2k}^{-1/(2k)} = \infty,
\]
we deduce that, almost surely, the sequence of ESDs $F^{\tilde A}$ weakly converges to a probability measure $F$ whose moments are exactly $(m_k)$. Next, notice that $m_k$ is exactly the number of Dyck paths of length $2k$ (Tao, 2012), which is also the $2k$-th moment of the semi-circle law with support $[-2,2]$; it follows that the LSD $F$ equals the image of the semi-circle law by the square transformation $x\mapsto x^2$. The formulas in (2.4) and (2.5) are thus easily derived and the proof of Theorem 2.2 is complete.
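As a quick worked verification of Carleman's condition for this moment sequence, using only the elementary bound $\binom{2k}{k}\le 4^{k}$:
\[
m_{2k} = \frac{1}{2k+1}\binom{4k}{2k}\le 4^{2k},
\qquad\text{hence}\qquad
m_{2k}^{-1/(2k)}\ge \frac14
\quad\text{and}\quad
\sum_{k\ge 1} m_{2k}^{-1/(2k)} = \infty.
\]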

2.1. Preliminary steps and some graph concepts

We now introduce the proofs of Assertions I and II. First, we show that under a uniformly bounded fourth-order moment, the variables $\{\varepsilon_{it}\}$ can be truncated at the rate $\eta T^{1/4}$ for some vanishing sequence $\eta = \eta_T$. This is justified in Appendix A. After these truncation, centralisation and rescaling steps, we may assume in all the following that
\[
E\varepsilon_{ij} = 0, \qquad E\varepsilon_{ij}^2 = 1, \qquad |\varepsilon_{ij}|\le\eta T^{1/4}, \tag{2.7}
\]
where $\eta$ is chosen such that $\eta\to 0$ but $\eta T^{1/4}\to\infty$.

Now we introduce some basic concepts for the graphs associated with the big sum in (2.6). Let
\[
\psi(e_1,\ldots,e_m) := \text{number of distinct entities among } e_1,\ldots,e_m,
\]
\[
\mathbf i := (i_1,\ldots,i_{2k}), \qquad \mathbf j := (j_1,\ldots,j_{2k}), \qquad 1\le i_a\le T,\ 1\le j_b\le p,\ a,b = 1,\ldots,2k,
\]
\[
A(t,s) := \{(\mathbf i,\mathbf j) : \psi(\mathbf i) = t,\ \psi(\mathbf j) = s\}.
\]
Define the multigraph $Q(\mathbf i,\mathbf j)$ as follows. Let the I-line and the J-line be two parallel lines; plot $i_1,\ldots,i_{2k}$ on the I-line and $j_1,\ldots,j_{2k}$ on the J-line, called the I-vertexes and J-vertexes, respectively. Draw down edges from $i_{2u-1}$ to $j_{2u-1}$, down edges from $i_{2u}+s$ to $j_{2u}$, up edges from $j_{2u-1}$ to $i_{2u}$, up edges from $j_{2u}$ to $i_{2u+1}+s$ (all these up and down edges are called vertical edges), and horizontal edges from $i_{2u}$ to $i_{2u}+s$ and from $i_{2u+1}+s$ to $i_{2u+1}$, with the convention that $i_{2k+1} = i_1$, where all the $u$'s are in the region $1\le u\le k$. An example of the multigraph $Q(\mathbf i,\mathbf j)$ with $k = 3$ is presented in Figure 1 below.

[Figure 1: An example of the multigraph $Q(\mathbf i,\mathbf j)$ with $k = 3$.]

In the graph $Q(\mathbf i,\mathbf j)$, once an I-vertex $i_l$ is fixed, so is $i_l + s$. For this reason, we glue all the I-vertexes which are connected through horizontal edges and denote the resulting graph

as $M(A(t,s))$, where $A(t,s)$ is the index set that has $t$ distinct I-vertexes and $s$ distinct J-vertexes. An example of $M(A(3,4))$ that corresponds to the $Q(\mathbf i,\mathbf j)$ of Figure 1 is presented in Figure 2 below.

[Figure 2: An example of $M(A(3,4))$ that corresponds to the $Q(\mathbf i,\mathbf j)$ of Figure 1.]

2.2. Proof of Assertion I

Recalling the expression of $m_k^{\tilde A}$ in (2.6), we have
\[
E m_k^{\tilde A}
= \frac{1}{p^{k+1}T^{k}}\sum_{\mathbf i}\sum_{\mathbf j}
E\big[\varepsilon_{j_1 i_1}\varepsilon_{j_1 i_2}\,\varepsilon_{j_2,s+i_2}\varepsilon_{j_2,s+i_3}\cdots
\varepsilon_{j_{2k},s+i_{2k}}\varepsilon_{j_{2k},s+i_1}\big]
= \sum_{t,s}\frac{p(p-1)\cdots(p-s+1)\,T(T-1)\cdots(T-t+1)}{p^{k+1}T^{k}}
\sum_{M(A(t,s))}E\big[\,\cdot\,\big]
:= \sum_{t,s}S(t,s), \tag{2.8}
\]
where
\[
S(t,s) = \frac{p(p-1)\cdots(p-s+1)\,T(T-1)\cdots(T-t+1)}{p^{k+1}T^{k}}
\sum_{M(A(t,s))}E\big[\varepsilon_{j_1 i_1}\varepsilon_{j_1 i_2}\cdots\varepsilon_{j_{2k},s+i_{2k}}\varepsilon_{j_{2k},s+i_1}\big]. \tag{2.9}
\]
Then we assert a lemma stating that $S(t,s)\to 0$ except for one particular term.
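Before stating the lemma, it may help to see (2.6), (2.8) and (2.9) worked out in the simplest case $k = 1$ (a small illustration using only $s > 0$ and the moment conditions (2.7)):
\[
E m_1^{\tilde A} = \frac{1}{p^{2}T}\sum_{i_1,i_2=1}^{T}\sum_{j_1,j_2=1}^{p}
E\big[\varepsilon_{j_1 i_1}\varepsilon_{j_1 i_2}\,\varepsilon_{j_2,s+i_2}\varepsilon_{j_2,s+i_1}\big]
= \frac{T\,p^{2}}{p^{2}T} = 1 = m_1,
\]
since for $s > 0$ a summand has nonzero expectation exactly when $i_1 = i_2$, in which case it equals $E\varepsilon_{j_1 i_1}^2\,E\varepsilon_{j_2,s+i_1}^2 = 1$. The sub-family with $j_1 = j_2$ (the class $t = 1$, $s = 1$) alone contributes only $Tp/(p^2T) = 1/p\to 0$, in line with the lemma below; the dominant class is $t = 1 = k$, $s = 2 = k+1$.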

Lemma 2.1. $S(t,s)\to 0$ as $p\to\infty$ unless $t = k$ and $s = k+1$.

Suppose Lemma 2.1 holds true for a moment; then, according to (2.8) and (2.9), we have
\[
E m_k^{\tilde A} = S(k,k+1) + o(1) = E[\,\cdot\,]\cdot\#\{M(A(k,k+1))\} + o(1), \tag{2.10}
\]
where $E[\,\cdot\,]$ refers to the expectation part in (2.9) and $\#\{M(A(k,k+1))\}$ refers to the number of isomorphism classes that have $k$ distinct I-vertexes and $k+1$ distinct J-vertexes.

First, we show that the expectation part $E[\,\cdot\,]$ equals 1 when $t = k$ and $s = k+1$. Let $v_m$ denote the number of edges in $M(A(t,s))$ whose multiplicity is $m$. Then the total number of edges satisfies
\[
v_1 + 2v_2 + 3v_3 + \cdots + 4k\,v_{4k} = 4k. \tag{2.11}
\]
Since $E\varepsilon_{ij} = 0$ by (2.7), all the multiplicities of the edges in the graph $M(A(t,s))$ should be at least two, that is $v_1 = 0$. On the other hand, $M(A(t,s))$ is a connected graph with $t+s$ vertexes and $v_1 + \cdots + v_{4k}$ edges; hence, when $t = k$ and $s = k+1$,
\[
2k+1 = t+s \le v_1 + \cdots + v_{4k} + 1 = v_2 + \cdots + v_{4k} + 1
\le \frac{2v_2 + 3v_3 + \cdots + 4k\,v_{4k}}{2} + 1 = 2k+1, \tag{2.12}
\]
where the last equality is due to (2.11) with $v_1 = 0$. Then all the inequalities in (2.12) become equalities, that is,
\[
v_2 + \cdots + v_{4k} = 2k \qquad\text{and}\qquad 2v_2 + 3v_3 + \cdots + 4k\,v_{4k} = 2(v_2 + \cdots + v_{4k}) = 4k,
\]
which leads to the fact that
\[
v_3 = v_4 = \cdots = v_{4k} = 0, \qquad v_2 = 2k. \tag{2.13}
\]
This means that all the edges in the graph $M(A(k,k+1))$ are repeated exactly twice, so the expectation part
\[
E\big[\varepsilon_{j_1 i_1}\varepsilon_{j_1 i_2}\cdots\varepsilon_{j_{2k},s+i_{2k}}\varepsilon_{j_{2k},s+i_1}\big]
= \big(E\varepsilon_{j_1 i_1}^2\big)^{2k} = 1. \tag{2.14}
\]

Second, the number of isomorphism classes in $M(A(t,s))$ with each edge repeated at least twice in the original graph $Q(\mathbf i,\mathbf j)$ is given by the quantity $f_k(t)$ introduced in Wang and Yao (2014). Therefore, in this special case where $t = k$ and $s = k+1$, we have
\[
\#\{M(A(k,k+1))\} = f_k(k) = \frac{1}{k+1}\binom{2k}{k}. \tag{2.15}
\]
Finally, combining (2.10), (2.14) and (2.15), we have
\[
E m_k^{\tilde A} = \frac{1}{k+1}\binom{2k}{k} + o(1).
\]
Assertion I is then proved. It remains to prove Lemma 2.1.

Proof of Lemma 2.1. Denote by $b_l$ the degree associated with the I-vertex $i_l$ ($1\le l\le t$) in $M(A(t,s))$; then we have
\[
b_1 + \cdots + b_t = 4k,
\]
which is the total number of edges. On the other hand, since each edge in $M(A(t,s))$ is repeated at least twice (otherwise there would exist at least one single edge, so the expectation would be zero), each degree $b_l$ is at least four (recall that we glue the original I-vertexes $i_l$ and $i_l + s$ in $M(A(t,s))$). Therefore,
\[
4k = b_1 + \cdots + b_t \ge 4t,
\]
that is $t\le k$. Now, consider the following two cases separately.

Case 1: $s > k+1$. Recall the definition of $v_m$ in (2.11), which satisfies
\[
v_1 + 2v_2 + \cdots + 4k\,v_{4k} = 2v_2 + 3v_3 + \cdots + 4k\,v_{4k} = 4k
\]

and
\[
t + s \le v_1 + \cdots + v_{4k} + 1 = v_2 + \cdots + v_{4k} + 1.
\]
We can bound the expectation part as follows:
\[
\big|E\big[\varepsilon_{j_1 i_1}\varepsilon_{j_1 i_2}\cdots\varepsilon_{j_{2k},s+i_{2k}}\varepsilon_{j_{2k},s+i_1}\big]\big|
\le \big(E\varepsilon_{j_1 i_1}^2\big)^{v_2}\big(E\varepsilon_{j_1 i_1}^4\big)^{v_4}\cdots
\le \big(\eta T^{1/4}\big)^{v_3+2v_4+\cdots+(4k-2)v_{4k}}
= \big(\eta T^{1/4}\big)^{4k-2(v_2+v_3+\cdots+v_{4k})}
\le \big(\eta T^{1/4}\big)^{4k-2(t+s)+2}. \tag{2.16}
\]
Then we have, according to (2.9),
\[
S(t,s)\le \frac{p^{s}T^{t}}{p^{k+1}T^{k}}\big(\eta T^{1/4}\big)^{4k-2(t+s)+2}\,\#\{M(A(t,s))\}
= O\!\Big(\eta^{4k-2(t+s)+2}\,p^{s-k-1}\,T^{\frac{t-s+1}{2}}\Big), \tag{2.17}
\]
where the last equality is due to the fact that $\#\{M(A(t,s))\}$ is a function of $k$ only ($k$ is fixed), which can be bounded by a large enough constant. Since $s > k+1$ and $t+s\le 2k+1$, we have
\[
(s-k-1) + \frac{t-s+1}{2} = \frac{t+s}{2} - k - \frac12 \le 0,
\]
that is, $0 < s-k-1 \le \frac{s-t-1}{2}$. So (2.17) reduces to
\[
S(t,s)\le O\!\Big(\eta^{4k-2(t+s)+2}\Big(\frac{p}{T}\Big)^{s-k-1}\Big)\to 0, \tag{2.18}
\]
which is due to the facts that $s-k-1 > 0$ and $p/T\to 0$.

Case 2: $s\le k+1$, but not ($t = k$ and $s = k+1$). For the same reason as before, we have $t$ distinct I-vertexes, each of degree at least four, so we have another estimate for the expectation part:
\[
\big|E\big[\varepsilon_{j_1 i_1}\varepsilon_{j_1 i_2}\cdots\varepsilon_{j_{2k},s+i_{2k}}\varepsilon_{j_{2k},s+i_1}\big]\big|
\le \big(\eta T^{1/4}\big)^{4k-4t}. \tag{2.19}
\]
Therefore,
\[
S(t,s)\le \frac{p^{s}T^{t}}{p^{k+1}T^{k}}\big(\eta T^{1/4}\big)^{4k-4t}\,\#\{M(A(t,s))\}
= O\!\Big(\frac{\eta^{4k-4t}}{p^{k+1-s}}\Big), \tag{2.20}
\]
which is also due to the fact that $\#\{M(A(t,s))\} = O(1)$. Case 2 contains three situations:

1. $t = k$ and $s < k+1$: $S(t,s)\le O\big(\tfrac{1}{p^{k+1-s}}\big)\to 0$;
2. $t < k$ and $s = k+1$: $S(t,s)\le O\big(\eta^{4k-4t}\big)\to 0$;
3. $t < k$ and $s < k+1$: $S(t,s)\le O\big(\tfrac{\eta^{4k-4t}}{p^{k+1-s}}\big)\to 0$. (2.21)

Combining (2.18) and (2.21), we have $S(t,s)\to 0$ as $p\to\infty$ unless $t = k$ and $s = k+1$.  □

2.3. Proof of Assertion II

Recall that
\[
\operatorname{Var} m_k^{\tilde A}
= \frac{1}{p^{2k+2}T^{2k}}\sum_{\mathbf i_1,\mathbf j_1,\mathbf i_2,\mathbf j_2}
\Big[E\big(\varepsilon_{Q(\mathbf i_1,\mathbf j_1)}\varepsilon_{Q(\mathbf i_2,\mathbf j_2)}\big)
- E\big(\varepsilon_{Q(\mathbf i_1,\mathbf j_1)}\big)E\big(\varepsilon_{Q(\mathbf i_2,\mathbf j_2)}\big)\Big], \tag{2.22}
\]
where $\varepsilon_{Q(\mathbf i,\mathbf j)}$ denotes the product $\varepsilon_{j_1 i_1}\varepsilon_{j_1 i_2}\cdots\varepsilon_{j_{2k},s+i_1}$ of the $4k$ entries attached to the edges of $Q(\mathbf i,\mathbf j)$.

If $Q(\mathbf i_1,\mathbf j_1)$ has no edge coincident with an edge of $Q(\mathbf i_2,\mathbf j_2)$, then
\[
E\big(\varepsilon_{Q(\mathbf i_1,\mathbf j_1)}\varepsilon_{Q(\mathbf i_2,\mathbf j_2)}\big)
- E\big(\varepsilon_{Q(\mathbf i_1,\mathbf j_1)}\big)E\big(\varepsilon_{Q(\mathbf i_2,\mathbf j_2)}\big) = 0
\]
by the independence between $\varepsilon_{Q(\mathbf i_1,\mathbf j_1)}$ and $\varepsilon_{Q(\mathbf i_2,\mathbf j_2)}$. If $Q = Q(\mathbf i_1,\mathbf j_1)\cup Q(\mathbf i_2,\mathbf j_2)$ has an overall single edge, then
\[
E\big(\varepsilon_{Q(\mathbf i_1,\mathbf j_1)}\varepsilon_{Q(\mathbf i_2,\mathbf j_2)}\big)
= E\big(\varepsilon_{Q(\mathbf i_1,\mathbf j_1)}\big)E\big(\varepsilon_{Q(\mathbf i_2,\mathbf j_2)}\big) = 0,
\]
so in the above two cases the contribution to $\operatorname{Var} m_k^{\tilde A}$ is zero. Now suppose that $Q = Q(\mathbf i_1,\mathbf j_1)\cup Q(\mathbf i_2,\mathbf j_2)$ has no single edge and that $Q(\mathbf i_1,\mathbf j_1)$ and $Q(\mathbf i_2,\mathbf j_2)$ have common edges. Let the numbers of vertexes of $Q(\mathbf i_1,\mathbf j_1)$, $Q(\mathbf i_2,\mathbf j_2)$ and $Q$ on the I-line be $t_1$, $t_2$ and $t$, respectively, and the numbers of vertexes on the J-line be $s_1$, $s_2$ and $s$, respectively. Since $Q(\mathbf i_1,\mathbf j_1)$ and $Q(\mathbf i_2,\mathbf j_2)$ have common edges (and hence common vertexes), we must have
\[
t \le t_1 + t_2 - 1, \qquad s \le s_1 + s_2 - 1.
\]
Similarly to (2.16) and (2.19), we have two bounds for $E\big(\varepsilon_{Q(\mathbf i_1,\mathbf j_1)}\varepsilon_{Q(\mathbf i_2,\mathbf j_2)}\big)$:
\[
\big|E\big(\varepsilon_{Q(\mathbf i_1,\mathbf j_1)}\varepsilon_{Q(\mathbf i_2,\mathbf j_2)}\big)\big|
\le \big(\eta T^{1/4}\big)^{8k-2(t+s)+2}, \tag{2.23}
\]
or
\[
\big|E\big(\varepsilon_{Q(\mathbf i_1,\mathbf j_1)}\varepsilon_{Q(\mathbf i_2,\mathbf j_2)}\big)\big|
\le \big(\eta T^{1/4}\big)^{8k-4t}. \tag{2.24}
\]
For the same reason, we also have
\[
\big|E\big(\varepsilon_{Q(\mathbf i_1,\mathbf j_1)}\big)E\big(\varepsilon_{Q(\mathbf i_2,\mathbf j_2)}\big)\big|
\le \big(\eta T^{1/4}\big)^{4k-2(t_1+s_1)+2+4k-2(t_2+s_2)+2}
< \big(\eta T^{1/4}\big)^{8k-2(t+s)+2}, \tag{2.25}
\]
or
\[
\big|E\big(\varepsilon_{Q(\mathbf i_1,\mathbf j_1)}\big)E\big(\varepsilon_{Q(\mathbf i_2,\mathbf j_2)}\big)\big|
\le \big(\eta T^{1/4}\big)^{4k-4t_1+4k-4t_2}
< \big(\eta T^{1/4}\big)^{8k-4t}, \tag{2.26}
\]
where the last inequalities in (2.25) and (2.26) are due to the fact that $t\le t_1+t_2-1$ and $s\le s_1+s_2-1$.

Recall that
\[
\operatorname{Var} m_k^{\tilde A}
= \frac{1}{p^{2k+2}T^{2k}}\sum_{t,s}\sum_{M(A(t,s))}
\Big[E\big(\varepsilon_{Q(\mathbf i_1,\mathbf j_1)}\varepsilon_{Q(\mathbf i_2,\mathbf j_2)}\big)
- E\big(\varepsilon_{Q(\mathbf i_1,\mathbf j_1)}\big)E\big(\varepsilon_{Q(\mathbf i_2,\mathbf j_2)}\big)\Big]
:= \sum_{t,s}\tilde S(t,s). \tag{2.27}
\]
Using (2.23), (2.24), (2.25) and (2.26), we can bound the value of $\tilde S(t,s)$ as follows:
\[
\tilde S(t,s)\le \frac{T^{t}p^{s}}{p^{2k+2}T^{2k}}\,O\!\Big(\big(\eta T^{1/4}\big)^{8k-2(t+s)+2}\Big)
= O\!\Big(\eta^{8k-2(t+s)+2}\,p^{s-2k-2}\,T^{\frac{t-s+1}{2}}\Big), \tag{2.28}
\]
or
\[
\tilde S(t,s)\le \frac{T^{t}p^{s}}{p^{2k+2}T^{2k}}\,O\!\Big(\big(\eta T^{1/4}\big)^{8k-4t}\Big)
= O\!\Big(\eta^{8k-4t}\,p^{s-2k-2}\Big). \tag{2.29}
\]
Clearly, $t_1+s_1\le 2k+1$ and $t_2+s_2\le 2k+1$; we have thus $t+s\le t_1+t_2+s_1+s_2-2\le 4k$.

First, consider the case where $s > t+1$, for which we use the bound in (2.28). Since
\[
(s-2k-2) + \frac{t-s+1}{2} = \frac{s}{2} + \frac{t}{2} - 2k - \frac32 \le \frac{t+s}{2} - 2k - \frac32 \le -\frac32,
\]
which leads to
\[
s-2k-2 \le -\frac32 + \frac{s-t-1}{2}.
\]

Combining this with (2.28), we have
\[
\tilde S(t,s)\le O\!\Big(p^{-3/2}\Big(\frac{p}{T}\Big)^{\frac{s-t-1}{2}}\Big)\le O\!\big(p^{-3/2}\big). \tag{2.30}
\]
Second, we use the bound in (2.29) for the case $s\le t+1$. Recalling that $t+s\le 4k$, we have $2s\le s+t+1\le 4k+1$, hence $s\le 2k$. Then, from (2.29),
\[
\tilde S(t,s)\le O\!\big(p^{s-2k-2}\big)\le O\!\big(p^{-2}\big) = O\!\big(p^{-3/2}\big). \tag{2.31}
\]
Combining (2.27), (2.30) and (2.31), we have
\[
\operatorname{Var} m_k^{\tilde A}\le C\,p^{-3/2},
\]
which is summable with respect to $p$. Assertion II is then proved.
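A short worked justification of the almost sure convergence claimed after Assertions I and II (a standard Chebyshev plus Borel-Cantelli argument, spelled out under the bound just obtained): by Chebyshev's inequality, for any $\varepsilon > 0$,
\[
\sum_{p\ge 1}P\big(\big|m_k^{\tilde A} - E m_k^{\tilde A}\big| > \varepsilon\big)
\le \sum_{p\ge 1}\frac{\operatorname{Var} m_k^{\tilde A}}{\varepsilon^{2}}
\le \frac{C}{\varepsilon^{2}}\sum_{p\ge 1}p^{-3/2} < \infty,
\]
so the Borel-Cantelli lemma gives $m_k^{\tilde A} - E m_k^{\tilde A}\to 0$ almost surely, and Assertion I then gives $m_k^{\tilde A}\to m_k$ almost surely for each fixed $k$.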

3. Convergence of the largest eigenvalue of $\tilde A$

In this section, we aim to show that the largest eigenvalue of $\tilde A$ tends to 4 almost surely, which is the right edge of its LSD.

Theorem 3.1. Under the same conditions as in Theorem 2.1, with $\sup_{i,t}E\varepsilon_{it}^4 < \infty$ in (2.1) replaced by $\sup_{i,t}E|\varepsilon_{it}|^{4+\nu} < \infty$ for some $\nu > 0$, the largest eigenvalue of $\tilde A$ converges to 4 almost surely.

Recall that in the proof of Theorem 2.2, a main step is Lemma 2.1, which says that $S(t,s)\to 0$ except for one term, namely the one with $t = k$ and $s = k+1$. One thing to mention here is that, in order to prove this lemma, $k$ is assumed to be fixed. Then the number of isomorphism classes in $M(A(t,s))$ is a function of $k$ and can thus be bounded by a large enough constant, so actually we do not need to know the value of $\#\{M(A(t,s))\}$ exactly. In contrast, for deriving the convergence of the largest eigenvalue, $k$ should grow to infinity, so we cannot trivially guarantee that the number of isomorphism classes in $M(A(t,s))$ is still of constant order. Therefore, the main task in this section is to bound this value, making $S(t,s)$ ($t\ne k$ or $s\ne k+1$) still of smaller order compared with the main term $S(k,k+1)$ when $k\to\infty$.

Proposition 3.1. Let the conditions in Theorem 2.1 hold, with $\sup_{i,t}E\varepsilon_{it}^4 < \infty$ in (2.1) replaced by $\sup_{i,t}E|\varepsilon_{it}|^{4+\nu} < \infty$ for some $\nu > 0$, and let $k = k_p$ be an integer that tends to infinity and satisfies the following conditions:
\[
\frac{k}{\log p}\to\infty, \qquad \frac{kp}{T}\to 0, \qquad \frac{k}{p}\to 0. \tag{3.1}
\]
Then we have
\[
E m_k^{\tilde A} = \frac{1}{k+1}\binom{2k}{k}\,\big(1+o(1)\big).
\]
Now suppose the above Proposition 3.1 holds true. We first show that it leads to Theorem 3.1.

Proof of Theorem 3.1. Using Proposition 3.1, we have the estimate
\[
E m_k^{\tilde A} = \frac{1}{k+1}\binom{2k}{k}\,\big(1+o(1)\big); \tag{3.2}
\]
then, denoting by $l_1$ the largest eigenvalue of $\tilde A$, for any $\varepsilon > 0$ we have
\[
P\big(l_1 > 4+\varepsilon\big)
\le P\big(\operatorname{tr}\tilde A^{k}\ge (4+\varepsilon)^{k}\big)
\le \frac{E\operatorname{tr}\tilde A^{k}}{(4+\varepsilon)^{k}}
= \frac{p\,E m_k^{\tilde A}}{(4+\varepsilon)^{k}}
\le \frac{p\,4^{k}}{(4+\varepsilon)^{k}}\,\big(1+o(1)\big). \tag{3.3}
\]
The $k$-th root of the right-hand side tends to $\frac{4}{4+\varepsilon} < 1$, since $k/\log p\to\infty$ implies $p^{1/k}\to 1$. Once we fix this $\varepsilon > 0$, (3.3) is therefore summable, and the Borel-Cantelli lemma yields $\limsup l_1\le 4+\varepsilon$ almost surely; letting $\varepsilon\downarrow 0$ gives $\limsup l_1\le 4$. The lower bound for $l_1$ is trivial due to our Theorem 2.2.  □

It now remains to prove Proposition 3.1.
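Theorem 3.1, or equivalently the convergence of the largest singular value of $\sqrt{T/p}\,X_T$ to 2, is easy to probe numerically. The following sketch is an illustration only, with hypothetical sizes p, T and lag s that are not taken from the paper:

    import numpy as np

    def largest_singular_value(p, T, s=1, seed=0):
        # Largest singular value of sqrt(T/p) * X_T, where X_T is the lag-s
        # sample auto-covariance matrix built from independent N(0,1) entries.
        rng = np.random.default_rng(seed)
        eps = rng.standard_normal((p, T + s))
        X = (eps[:, s:] @ eps[:, :T].T) / T
        return np.linalg.norm(np.sqrt(T / p) * X, ord=2)

    # As p, T grow with p/T -> 0, the value should approach 2 (so that the
    # largest eigenvalue of the normalized matrix approaches 4).
    for p, T in [(50, 5000), (100, 20000), (200, 40000)]:
        print(p, T, largest_singular_value(p, T))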

Proof of Proposition 3.1. After truncation, centralisation and rescaling, we may assume that the $\varepsilon_{it}$'s satisfy
\[
E\varepsilon_{it} = 0, \qquad \operatorname{Var}\varepsilon_{it} = 1, \qquad |\varepsilon_{it}|\le\delta T^{1/2}, \tag{3.4}
\]
where $\delta = \delta_T$ is chosen such that
\[
\delta\to 0, \qquad \delta\,T^{1/2-\nu/4}\to 0, \qquad \frac{p}{T\delta^{2}}\to 0, \qquad \frac{k^{2}}{T\delta^{4}}\to 0. \tag{3.5}
\]
More detailed justifications of (3.4) are provided in Appendix B. From the proof of Theorem 2.2, for fixed $k$ we have
\[
E m_k^{\tilde A} = \sum_{t,s}S(t,s) = S(k,k+1) + o(1) = \frac{1}{k+1}\binom{2k}{k} + o(1),
\]
where $S(k,k+1)$ is the main term that contributes to $E m_k^{\tilde A}$, while all the other terms can be neglected. Therefore, it remains to prove that, when $k\to\infty$, we still have
\[
\sum_{t\ne k\ \text{or}\ s\ne k+1}S(t,s) = \frac{1}{k+1}\binom{2k}{k}\cdot o(1).
\]
We again consider two cases:

Case 1: $s > k+1$;
Case 2: $s\le k+1$, but not ($t = k$ and $s = k+1$).

Similarly to (2.16) and (2.19), we have two bounds for the expectation part, now with the truncation level $\delta T^{1/2}$ of (3.4) in place of $\eta T^{1/4}$:
\[
\big|E\big[\varepsilon_{j_1 i_1}\varepsilon_{j_1 i_2}\cdots\varepsilon_{j_{2k},s+i_{2k}}\varepsilon_{j_{2k},s+i_1}\big]\big|
\le \big(\delta T^{1/2}\big)^{4k-2(t+s)+2}, \tag{3.6}
\]
or
\[
\big|E\big[\varepsilon_{j_1 i_1}\varepsilon_{j_1 i_2}\cdots\varepsilon_{j_{2k},s+i_{2k}}\varepsilon_{j_{2k},s+i_1}\big]\big|
\le \big(\delta T^{1/2}\big)^{4k-4t}. \tag{3.7}
\]
Consider $t = 1$ first. From Wang et al. (2013), the number of isomorphism classes $\#\{M(A(1,s))\}$ admits an explicit combinatorial bound depending on $k$ and $s$; combining it with (2.9) and (3.6) and summing over $1\le s\le k+1$, consecutive terms of the resulting sum differ by factors of order $p/(T\delta^{2})$ (up to powers of $k$), so that, by (3.1) and (3.5), the sum is dominated by a single term, which is of order
\[
\frac{1}{k+1}\binom{2k}{k}\cdot o(1). \tag{3.10}
\]

Next, we consider Case 1 and Case 2 for $t > 1$ separately. According to Wang et al. (2013), the number of isomorphism classes in $M(A(t,s))$ with $t > 1$ is bounded by $f_k(t)$ times an explicit combinatorial factor in $k$, $s$ and $t$, where $f_k(t)$ is the quantity already used in (2.15).

Case 1 ($s > k+1$ and $t > 1$): the expectation part is bounded by (3.6); combining this with (2.9) and the counting bound just mentioned, and using $s\ge k+2$, $t\ge 2$ together with the trivial relation $t+s\le 2k+1$, we sum the resulting bound over all admissible pairs $(t,s)$. For each fixed $t$, the summation over $s$ has consecutive terms comparable up to factors of order $p/(T\delta^{2})$; since $p/(T\delta^{2})\to 0$ by (3.5), it is dominated by the term with the largest admissible $s$, and we are left with a single sum over $t$ of terms involving $f_k(t)$ and powers of $p/T$.

The terms of this remaining sum are, in turn, dominated by the one with the largest admissible $t$, thanks to (3.1) and (3.5); since $p/T\to 0$, this term is of order $\frac{1}{k+1}\binom{2k}{k}\cdot o(1)$. Therefore, in this case, we have
\[
\sum_{t,s}S(t,s) = \frac{1}{k+1}\binom{2k}{k}\cdot o(1). \tag{3.18}
\]
Case 2 ($2\le t\le k$ and $s\le k+1$): for the same reason, combining the bound (3.7) for the expectation part with (2.9) and the counting bound above, we obtain an estimate for $S(t,s)$ and, after summation over the admissible pairs, a bound (3.20) for $\sum_{t,s}S(t,s)$. We further distinguish the following three situations:

1. $t = k$ and $s < k+1$,

2. $1 < t < k$ and $s = k+1$,
3. $1 < t < k$ and $s < k+1$,

and we show that in all three situations (3.20) is bounded by $\frac{1}{k+1}\binom{2k}{k}\cdot o(1)$.

For situation 1, (3.20) reduces to a sum over $1\le s\le k$ whose terms decrease, up to powers of $k$, by a factor $1/p$ from one value of $s$ to the next; the dominating term is therefore the one with $s = k$, and it is of order $\frac{1}{k+1}\binom{2k}{k}\cdot o(1)$ thanks to the choice of $k$ in (3.1).

For situation 2, (3.20) reduces to a sum over $2\le t < k$ (3.22) whose consecutive terms are comparable up to factors of order $k^{2}/(T\delta^{4})$ (3.23).

Since $k^{2}/(T\delta^{4})\to 0$ by (3.5), the sum (3.22) is dominated by its term with the largest $t$, which is itself of order $\frac{1}{k+1}\binom{2k}{k}\cdot o(1)$; hence the same holds for the whole of (3.22).

For situation 3, (3.20) reduces to a double sum (3.24) over $2\le t < k$ and $1\le s < k+1$. The summation over $s$ is handled exactly as in situation 1 (its dominating term is the one with the largest admissible $s$), which leaves a single sum over $t$ of the same type as (3.22), denoted (3.25).

For the same reason, the right-hand side of (3.25) is dominated by its term with the largest $t$, since consecutive terms are comparable up to factors of order $k^{2}/(T\delta^{4})\to 0$. Therefore, (3.25) reduces to
\[
\frac{1}{k+1}\binom{2k}{k}\cdot O\!\Big(\frac{k^{3}}{T\delta^{4}p}\Big), \tag{3.26}
\]
and since
\[
\frac{k^{3}}{T\delta^{4}p} = \frac{k^{2}}{T\delta^{4}}\cdot\frac{k}{p}\to 0,
\]
we have that (3.26) equals $\frac{1}{k+1}\binom{2k}{k}\cdot o(1)$. Finally, in all three situations we have
\[
\sum_{t,s}S(t,s) = \frac{1}{k+1}\binom{2k}{k}\cdot o(1).
\]
The proof of Proposition 3.1 is complete.  □

Appendix A: Justification of truncation, centralisation and rescaling in (2.7)

A.1. Truncation

Define two $p\times T$ matrices
\[
E_1 := \big(\varepsilon_1,\ \varepsilon_2,\ \ldots,\ \varepsilon_{T}\big), \qquad
E_2 := \big(\varepsilon_{s+1},\ \varepsilon_{s+2},\ \ldots,\ \varepsilon_{s+T}\big); \tag{A.1}
\]
then
\[
X_T = \frac{1}{T}\sum_{t=s+1}^{s+T}\varepsilon_t\varepsilon_{t-s}^* = \frac{1}{T}E_2E_1^*. \tag{A.2}
\]
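As a quick check of (A.2), viewing $\varepsilon_t$ as the $t$-th column of $E_1$ and $\varepsilon_{s+t}$ as the $t$-th column of $E_2$:
\[
\big(E_2E_1^*\big)_{ij} = \sum_{t=1}^{T}(E_2)_{it}(E_1)_{jt} = \sum_{t=1}^{T}\varepsilon_{i,s+t}\,\varepsilon_{j,t}
= \Big(\sum_{t=s+1}^{s+T}\varepsilon_t\varepsilon_{t-s}^*\Big)_{ij},
\]
so dividing by $T$ indeed gives $X_T$.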

Our target matrix is then
\[
\tilde A = \frac{T}{p}X_TX_T^* = \frac{1}{pT}E_2E_1^*E_1E_2^*. \tag{A.3}
\]
Let
\[
\hat\varepsilon_{ij} = \varepsilon_{ij}\,1_{\{|\varepsilon_{ij}|\le\eta T^{1/4}\}},
\]
and let $\hat X_T$ and $\hat A$ be defined by replacing all the $\varepsilon_{ij}$ with $\hat\varepsilon_{ij}$ in (A.2) and (A.3). Using Theorem A.44 in Bai and Silverstein (2010) and the inequality $\operatorname{rank}(AB - CD)\le\operatorname{rank}(A - C)+\operatorname{rank}(B - D)$, we have
\[
\big\|F^{\tilde A} - F^{\hat A}\big\|
\le \frac{1}{p}\operatorname{rank}\big(X_T - \hat X_T\big)
= \frac{1}{p}\operatorname{rank}\big(E_2E_1^* - \hat E_2\hat E_1^*\big)
\le \frac{1}{p}\operatorname{rank}\big(E_2 - \hat E_2\big) + \frac{1}{p}\operatorname{rank}\big(E_1 - \hat E_1\big)
\le \frac{2}{p}\sum_{i=1}^{p}\sum_{j=1}^{T+s}1_{\{|\varepsilon_{ij}|>\eta T^{1/4}\}}. \tag{A.4}
\]
Since $\sup_{i,t}E\varepsilon_{it}^4 < \infty$, we have always
\[
\frac{1}{\eta^{4}pT}\sum_{i,j}E|\varepsilon_{ij}|^{4}\,1_{\{|\varepsilon_{ij}|>\eta T^{1/4}\}}\to 0
\quad\text{as } p,T\to\infty.
\]
Consider the expectation and the variance of $\frac{2}{p}\sum_{i}\sum_{j}1_{\{|\varepsilon_{ij}|>\eta T^{1/4}\}}$ in (A.4):
\[
E\Big[\frac{2}{p}\sum_{i}\sum_{j}1_{\{|\varepsilon_{ij}|>\eta T^{1/4}\}}\Big]
\le \frac{2}{p\,\eta^{4}T}\sum_{i}\sum_{j}E|\varepsilon_{ij}|^{4}\,1_{\{|\varepsilon_{ij}|>\eta T^{1/4}\}} = o(1),
\]
\[
\operatorname{Var}\Big[\frac{2}{p}\sum_{i}\sum_{j}1_{\{|\varepsilon_{ij}|>\eta T^{1/4}\}}\Big]
\le \frac{4}{p^{2}\eta^{4}T}\sum_{i}\sum_{j}E|\varepsilon_{ij}|^{4}\,1_{\{|\varepsilon_{ij}|>\eta T^{1/4}\}} = o\Big(\frac{1}{p}\Big).
\]

Applying Bernstein's inequality, for all small $\varepsilon > 0$ and large $p$, we have
\[
P\Big(\frac{2}{p}\sum_{i}\sum_{j}1_{\{|\varepsilon_{ij}|>\eta T^{1/4}\}}\ge\varepsilon\Big)\le 2e^{-\frac12\varepsilon^{2}p}. \tag{A.5}
\]
Finally, combining (A.4) and (A.5) with the Borel-Cantelli lemma, we have, with probability 1, $\|F^{\tilde A} - F^{\hat A}\|\to 0$.

A.2. Centralisation

Let $\breve\varepsilon_{ij} = \hat\varepsilon_{ij} - E\hat\varepsilon_{ij}$, and let $\breve X_T$ and $\breve A$ be defined by using the $\breve\varepsilon_{ij}$'s in (A.2) and (A.3). Similarly to (A.4), we have
\[
\big\|F^{\hat A} - F^{\breve A}\big\|
\le \frac{1}{p}\operatorname{rank}\big(\hat X_T - \breve X_T\big)
\le \frac{1}{p}\operatorname{rank}\big(\hat E_2 - \breve E_2\big) + \frac{1}{p}\operatorname{rank}\big(\hat E_1 - \breve E_1\big)
= \frac{2}{p}\operatorname{rank}\big(E\hat E_1\big)\le \frac{2}{p}\to 0,
\]
as $p\to\infty$. Therefore, $\|F^{\hat A} - F^{\breve A}\|\to 0$.

A.3. Rescaling

Let
\[
\sigma_{ij}^{2} = E\breve\varepsilon_{ij}^{2}, \qquad \check\varepsilon_{ij} := \breve\varepsilon_{ij}/\sigma_{ij},
\]
and let $\check X_T$ and $\check A$ be defined accordingly.

Then, similarly to (A.4), the difference $\|F^{\breve A} - F^{\check A}\|$ is controlled by $\max_{i,j}\big|1 - \sigma_{ij}^{-1}\big|$. Since
\[
\sigma_{ij}^{2} = E\breve\varepsilon_{ij}^{2} = E\big(\hat\varepsilon_{ij} - E\hat\varepsilon_{ij}\big)^{2}
= \operatorname{Var}\big(\varepsilon_{ij}1_{\{|\varepsilon_{ij}|\le\eta T^{1/4}\}}\big)\to\operatorname{Var}(\varepsilon_{ij}) = 1
\]
uniformly in $(i,j)$ as $T\to\infty$ (thanks to the uniform fourth-moment bound), we have $\|F^{\breve A} - F^{\check A}\|\to 0$.

Appendix B: Justification of truncation, centralisation and rescaling in (3.4)

B.1. Truncation

$E_1$, $E_2$, $X_T$ and $\tilde A$ are defined as in (A.1), (A.2) and (A.3). Let
\[
\hat\varepsilon_{ij} = \varepsilon_{ij}\,1_{\{|\varepsilon_{ij}|\le\delta T^{1/2}\}},
\]
and let $\hat X_T$ and $\hat A$ be defined by replacing all the $\varepsilon_{ij}$ with $\hat\varepsilon_{ij}$ in (A.2) and (A.3). With the assumption that $\sup_{i,t}E|\varepsilon_{it}|^{4+\nu} < \infty$, we have always
\[
\sup_{i,t}\frac{E\big[|\varepsilon_{it}|^{4+\nu}\,1_{\{|\varepsilon_{it}|>\delta T^{1/2}\}}\big]}{\delta^{4+\nu}}\to 0
\quad\text{as } p,T\to\infty. \tag{B.1}
\]
Note that the eigenvalues of
\[
\tilde A = \frac{1}{pT}E_2E_1^*E_1E_2^*
\]
are the same as those of
\[
B := \frac{1}{pT}E_1^*E_1E_2^*E_2,
\]
and similarly for the truncated versions $\hat A$ and $\hat B$.

We therefore have
\[
\big|\lambda_{\max}(\tilde A) - \lambda_{\max}(\hat A)\big|
= \big|\lambda_{\max}(B) - \lambda_{\max}(\hat B)\big|
\le \frac{1}{pT}\Big\|\big(E_1^*E_1 - \hat E_1^*\hat E_1\big)E_2^*E_2\Big\|_{op}
+ \frac{1}{pT}\Big\|\hat E_1^*\hat E_1\big(E_2^*E_2 - \hat E_2^*\hat E_2\big)\Big\|_{op}
:= J_1 + J_2. \tag{B.2}
\]
First, we have
\[
\big\|E_1^*E_1 - \hat E_1^*\hat E_1\big\|_{op}
= \max_{\|x\|=1}\big|x^*\big(E_1^*E_1 - \hat E_1^*\hat E_1\big)x\big|
\le \max_{\|x\|=1}\big|x^*\big(E_1 - \hat E_1\big)^*E_1x\big|
+ \max_{\|x\|=1}\big|x^*\hat E_1^*\big(E_1 - \hat E_1\big)x\big|
:= J_{11} + J_{12}, \tag{B.3}
\]
where, writing $\varepsilon_{\cdot i}$ and $\hat\varepsilon_{\cdot i}$ for the $i$-th columns of $E_1$ and $\hat E_1$, the Cauchy-Schwarz inequality gives
\[
J_{11} = \max_{\|x\|=1}\Big|\sum_{i,j=1}^{T}x_i x_j\,\big(\varepsilon_{\cdot i} - \hat\varepsilon_{\cdot i}\big)^*\varepsilon_{\cdot j}\Big|
\le \Big(\sum_{i=1}^{T}\big\|\varepsilon_{\cdot i} - \hat\varepsilon_{\cdot i}\big\|^{2}\Big)^{1/2}
\Big(\sum_{j=1}^{T}\big\|\varepsilon_{\cdot j}\big\|^{2}\Big)^{1/2}.
\]

The first factor involves only the truncated entries: by (B.1),
\[
\sum_{i=1}^{T}\big\|\varepsilon_{\cdot i} - \hat\varepsilon_{\cdot i}\big\|^{2}
= \sum_{i,t}\varepsilon_{it}^{2}\,1_{\{|\varepsilon_{it}|>\delta T^{1/2}\}}
= o\Big(\frac{\delta^{2}p}{T^{\nu/2}}\Big),
\]
while the second factor is $O\big(\sqrt{pT}\big)$. Hence
\[
J_{11} = o\big(\delta\,p\,T^{1/2-\nu/4}\big). \tag{B.4}
\]
For the same reason, $J_{12}$ is of the same order as (B.4). Therefore, we have
\[
\big\|E_1^*E_1 - \hat E_1^*\hat E_1\big\|_{op}\le o\big(\delta\,p\,T^{1/2-\nu/4}\big). \tag{B.5}
\]
Then, recalling the definition of $J_1$ in (B.2),
\[
J_1 = \frac{1}{pT}\Big\|\big(E_1^*E_1 - \hat E_1^*\hat E_1\big)E_2^*E_2\Big\|_{op}
\le \frac{1}{pT}\big\|E_1^*E_1 - \hat E_1^*\hat E_1\big\|_{op}\,\big\|E_2^*E_2\big\|_{op}
\le o\big(\delta\,T^{1/2-\nu/4}\big)\to 0, \tag{B.6}
\]
as $p,T\to\infty$, where the last inequality is due to (B.5), condition (3.5), and the fact that $\frac{1}{T}\|E_2^*E_2\|_{op}$ is the largest eigenvalue of the sample covariance matrix $\frac{1}{T}E_2E_2^*$, which is of constant order. For the same reason, $J_2$ is of the same order as $J_1$ and also tends to zero. Finally, according to (B.2), we have $\lambda_{\max}(\tilde A) - \lambda_{\max}(\hat A)\to 0$.

B.2. Centralisation and Rescaling

Let
\[
\sigma_{it}^{2} = \operatorname{Var}\hat\varepsilon_{it}, \qquad
\breve\varepsilon_{it} = \frac{\hat\varepsilon_{it} - E\hat\varepsilon_{it}}{\sigma_{it}},
\]

and let $\breve X_T$ and $\breve A$ be defined by replacing all the $\hat\varepsilon_{ij}$ with $\breve\varepsilon_{ij}$ in (A.2) and (A.3). In this subsection we will show that $\lambda_{\max}(\hat A) - \lambda_{\max}(\breve A)\to 0$, which is equivalent to showing $\lambda_{\max}(\hat B) - \lambda_{\max}(\breve B)\to 0$. First, since
\[
\sup_{i,t}\big|1 - \sigma_{it}^{2}\big|
= \sup_{i,t}\Big|E\varepsilon_{it}^{2} - E\hat\varepsilon_{it}^{2} + \big(E\hat\varepsilon_{it}\big)^{2}\Big|
\le 2\sup_{i,t}E\big[\varepsilon_{it}^{2}\,1_{\{|\varepsilon_{it}|>\delta T^{1/2}\}}\big]
\le \frac{2\sup_{i,t}E\big[|\varepsilon_{it}|^{4+\nu}\,1_{\{|\varepsilon_{it}|>\delta T^{1/2}\}}\big]}{(\delta T^{1/2})^{2+\nu}}
= o\Big(\frac{\delta^{2}}{T^{1+\nu/2}}\Big), \tag{B.7}
\]
where the last equality is due to (B.1), it follows that
\[
\sup_{i,t}\Big|1 - \frac{1}{\sigma_{it}}\Big|
= \sup_{i,t}\frac{\big|1 - \sigma_{it}^{2}\big|}{\sigma_{it}(1 + \sigma_{it})}
= O\Big(\sup_{i,t}\big|1 - \sigma_{it}^{2}\big|\Big)
= o\Big(\frac{\delta^{2}}{T^{1+\nu/2}}\Big). \tag{B.8}
\]
Second, we have another estimate, for the term $\sup_{i,t}|E\hat\varepsilon_{it}|$:
\[
\sup_{i,t}\big|E\hat\varepsilon_{it}\big|
= \sup_{i,t}\Big|E\big[\varepsilon_{it}\,1_{\{|\varepsilon_{it}|\le\delta T^{1/2}\}}\big]\Big|
= \sup_{i,t}\Big|E\big[\varepsilon_{it}\,1_{\{|\varepsilon_{it}|>\delta T^{1/2}\}}\big]\Big|
\le \frac{\sup_{i,t}E\big[|\varepsilon_{it}|^{4+\nu}\,1_{\{|\varepsilon_{it}|>\delta T^{1/2}\}}\big]}{(\delta T^{1/2})^{3+\nu}}
= o\Big(\frac{\delta}{T^{(3+\nu)/2}}\Big). \tag{B.9}
\]
Then, similarly to (B.2), we have

\[
\big|\lambda_{\max}(\hat B) - \lambda_{\max}(\breve B)\big|
\le \frac{1}{pT}\Big\|\big(\hat E_1^*\hat E_1 - \breve E_1^*\breve E_1\big)\hat E_2^*\hat E_2\Big\|_{op}
+ \frac{1}{pT}\Big\|\breve E_1^*\breve E_1\big(\hat E_2^*\hat E_2 - \breve E_2^*\breve E_2\big)\Big\|_{op}
:= J_3 + J_4.
\]
Also, similarly to (B.3) and (B.4), we have
\[
\big\|\hat E_1^*\hat E_1 - \breve E_1^*\breve E_1\big\|_{op}
\le \max_{\|x\|=1}\big|x^*\big(\hat E_1 - \breve E_1\big)^*\hat E_1x\big|
+ \max_{\|x\|=1}\big|x^*\breve E_1^*\big(\hat E_1 - \breve E_1\big)x\big|
:= J_{31} + J_{32}, \tag{B.10}
\]
with, entrywise,
\[
\hat\varepsilon_{it} - \breve\varepsilon_{it}
= \hat\varepsilon_{it}\Big(1 - \frac{1}{\sigma_{it}}\Big) + \frac{E\hat\varepsilon_{it}}{\sigma_{it}}. \tag{B.11}
\]
By the Cauchy-Schwarz inequality together with (B.7), (B.8) and (B.9), the quantity $\sum_{i,t}\big(\hat\varepsilon_{it} - \breve\varepsilon_{it}\big)^{2}$ is bounded, up to constants, by
\[
\max\Big\{\,O\Big(pT\sup_{i,t}\Big|1-\frac{1}{\sigma_{it}}\Big|^{2}\Big),\;
O\Big(pT\sup_{i,t}\big|E\hat\varepsilon_{it}\big|^{2}\Big),\;
O\Big(pT\sup_{i,t}\Big|1-\frac{1}{\sigma_{it}}\Big|\sup_{i,t}\big|E\hat\varepsilon_{it}\big|\Big)\Big\}
\le \max\Big\{\,o\Big(\frac{\delta^{4}p}{T^{1+\nu}}\Big),\;
o\Big(\frac{\delta^{2}p}{T^{2+\nu}}\Big),\;
o\Big(\frac{\delta^{3}p}{T^{3/2+\nu}}\Big)\Big\}, \tag{B.12}
\]
which, combined with $\big(\sum_{i,t}\hat\varepsilon_{it}^{2}\big)^{1/2} = O\big(\sqrt{pT}\big)$, gives the bound for the term $J_{31}$:
\[
J_{31}\le \max\Big\{\,o\Big(\frac{\delta^{2}p}{T^{\nu/2}}\Big),\;
o\Big(\frac{\delta\,p}{T^{(1+\nu)/2}}\Big),\;
o\Big(\frac{\delta^{3/2}p}{T^{1/4+\nu/2}}\Big)\Big\}. \tag{B.13}
\]

For the same reason, the term $J_{32}$ can be bounded by (B.13) as well. Therefore, we have
\[
J_3 = \frac{1}{pT}\Big\|\big(\hat E_1^*\hat E_1 - \breve E_1^*\breve E_1\big)\hat E_2^*\hat E_2\Big\|_{op}
\le \frac{1}{pT}\big\|\hat E_1^*\hat E_1 - \breve E_1^*\breve E_1\big\|_{op}\,\big\|\hat E_2^*\hat E_2\big\|_{op}
= O\Big(\frac{1}{p}\Big)\big(J_{31} + J_{32}\big)
\le \max\Big\{\,o\Big(\frac{\delta^{2}}{T^{\nu/2}}\Big),\;
o\Big(\frac{\delta}{T^{(1+\nu)/2}}\Big),\;
o\Big(\frac{\delta^{3/2}}{T^{1/4+\nu/2}}\Big)\Big\}\to 0.
\]
Similarly, we also have $J_4\to 0$, which leads to the fact that
\[
\lambda_{\max}(\hat B) - \lambda_{\max}(\breve B)\to 0. \tag{B.14}
\]

References

Bai, Z. D. and Yin, Y. Q. (1988). A convergence to the semicircle law. Ann. Probab. 16(2).
Bai, Z. D. and Silverstein, J. W. (2010). Spectral Analysis of Large Dimensional Random Matrices, 2nd edition. Springer.
Jin, B. S., Wang, C., Bai, Z. D., Nair, K. K. and Harding, M. C. (2014). Limiting spectral distribution of a symmetrized auto-cross covariance matrix. Ann. Appl. Probab. 24(3).
Lam, C. and Yao, Q. W. (2012). Factor modeling for high-dimensional time series: inference for the number of factors. Ann. Statist. 40.
Li, Z., Pan, G. M. and Yao, J. (2013). On singular value distribution of large-dimensional auto-covariance matrices. Preprint, available at arXiv.
Li, Z., Wang, Q. and Yao, J. (2014). Identifying the number of factors from singular values of a large sample auto-covariance matrix. Preprint, available at arXiv.
Liu, H. Y., Aue, A. and Paul, D. (2013). On the Marčenko-Pastur law for linear time series. Preprint, available at arXiv.

Tao, T. (2012). Topics in Random Matrix Theory. American Mathematical Society.
Wang, L. and Paul, D. (2014). Limiting spectral distribution of renormalized separable sample covariance matrices when p/n → 0. J. Multivariate Anal. 126.
Wang, C., Jin, B. S., Bai, Z. D., Nair, K. K. and Harding, M. C. (2013). Strong limit of the extreme eigenvalues of a symmetrized auto-cross covariance matrix. Preprint, available at arXiv.
Wang, Q. and Yao, J. (2014). Moment approach for singular values distribution of a large auto-covariance matrix. Preprint, available at arXiv.


Chapter 2. Limits and Continuity 2.6 Limits Involving Infinity; Asymptotes of Graphs 2.6 Limits Involving Infinity; Asymptotes of Graphs Chapter 2. Limits and Continuity 2.6 Limits Involving Infinity; Asymptotes of Graphs Definition. Formal Definition of Limits at Infinity.. We say that

More information

WEIGHTED SUMS OF SUBEXPONENTIAL RANDOM VARIABLES AND THEIR MAXIMA

WEIGHTED SUMS OF SUBEXPONENTIAL RANDOM VARIABLES AND THEIR MAXIMA Adv. Appl. Prob. 37, 510 522 2005 Printed in Northern Ireland Applied Probability Trust 2005 WEIGHTED SUMS OF SUBEXPONENTIAL RANDOM VARIABLES AND THEIR MAXIMA YIQING CHEN, Guangdong University of Technology

More information

Online Appendix. j=1. φ T (ω j ) vec (EI T (ω j ) f θ0 (ω j )). vec (EI T (ω) f θ0 (ω)) = O T β+1/2) = o(1), M 1. M T (s) exp ( isω)

Online Appendix. j=1. φ T (ω j ) vec (EI T (ω j ) f θ0 (ω j )). vec (EI T (ω) f θ0 (ω)) = O T β+1/2) = o(1), M 1. M T (s) exp ( isω) Online Appendix Proof of Lemma A.. he proof uses similar arguments as in Dunsmuir 979), but allowing for weak identification and selecting a subset of frequencies using W ω). It consists of two steps.

More information

1 of 7 7/16/2009 6:12 AM Virtual Laboratories > 7. Point Estimation > 1 2 3 4 5 6 1. Estimators The Basic Statistical Model As usual, our starting point is a random experiment with an underlying sample

More information

Random Toeplitz Matrices

Random Toeplitz Matrices Arnab Sen University of Minnesota Conference on Limits Theorems in Probability, IISc January 11, 2013 Joint work with Bálint Virág What are Toeplitz matrices? a0 a 1 a 2... a1 a0 a 1... a2 a1 a0... a (n

More information

MOMENT CONVERGENCE RATES OF LIL FOR NEGATIVELY ASSOCIATED SEQUENCES

MOMENT CONVERGENCE RATES OF LIL FOR NEGATIVELY ASSOCIATED SEQUENCES J. Korean Math. Soc. 47 1, No., pp. 63 75 DOI 1.4134/JKMS.1.47..63 MOMENT CONVERGENCE RATES OF LIL FOR NEGATIVELY ASSOCIATED SEQUENCES Ke-Ang Fu Li-Hua Hu Abstract. Let X n ; n 1 be a strictly stationary

More information

Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games

Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games Alberto Bressan ) and Khai T. Nguyen ) *) Department of Mathematics, Penn State University **) Department of Mathematics,

More information

Asymptotically optimal induced universal graphs

Asymptotically optimal induced universal graphs Asymptotically optimal induced universal graphs Noga Alon Abstract We prove that the minimum number of vertices of a graph that contains every graph on vertices as an induced subgraph is (1+o(1))2 ( 1)/2.

More information

Freeness and the Transpose

Freeness and the Transpose Freeness and the Transpose Jamie Mingo (Queen s University) (joint work with Mihai Popa and Roland Speicher) ICM Satellite Conference on Operator Algebras and Applications Cheongpung, August 8, 04 / 6

More information

AN EXAMPLE OF SPECTRAL PHASE TRANSITION PHENOMENON IN A CLASS OF JACOBI MATRICES WITH PERIODICALLY MODULATED WEIGHTS

AN EXAMPLE OF SPECTRAL PHASE TRANSITION PHENOMENON IN A CLASS OF JACOBI MATRICES WITH PERIODICALLY MODULATED WEIGHTS AN EXAMPLE OF SPECTRAL PHASE TRANSITION PHENOMENON IN A CLASS OF JACOBI MATRICES WITH PERIODICALLY MODULATED WEIGHTS SERGEY SIMONOV Abstract We consider self-adjoint unbounded Jacobi matrices with diagonal

More information

Maximizing the numerical radii of matrices by permuting their entries

Maximizing the numerical radii of matrices by permuting their entries Maximizing the numerical radii of matrices by permuting their entries Wai-Shun Cheung and Chi-Kwong Li Dedicated to Professor Pei Yuan Wu. Abstract Let A be an n n complex matrix such that every row and

More information

Inference For High Dimensional M-estimates. Fixed Design Results

Inference For High Dimensional M-estimates. Fixed Design Results : Fixed Design Results Lihua Lei Advisors: Peter J. Bickel, Michael I. Jordan joint work with Peter J. Bickel and Noureddine El Karoui Dec. 8, 2016 1/57 Table of Contents 1 Background 2 Main Results and

More information

Inhomogeneous circular laws for random matrices with non-identically distributed entries

Inhomogeneous circular laws for random matrices with non-identically distributed entries Inhomogeneous circular laws for random matrices with non-identically distributed entries Nick Cook with Walid Hachem (Telecom ParisTech), Jamal Najim (Paris-Est) and David Renfrew (SUNY Binghamton/IST

More information

and finally, any second order divergence form elliptic operator

and finally, any second order divergence form elliptic operator Supporting Information: Mathematical proofs Preliminaries Let be an arbitrary bounded open set in R n and let L be any elliptic differential operator associated to a symmetric positive bilinear form B

More information

ARTICLE IN PRESS European Journal of Combinatorics ( )

ARTICLE IN PRESS European Journal of Combinatorics ( ) European Journal of Combinatorics ( ) Contents lists available at ScienceDirect European Journal of Combinatorics journal homepage: www.elsevier.com/locate/ejc Proof of a conjecture concerning the direct

More information

UTILIZING PRIOR KNOWLEDGE IN ROBUST OPTIMAL EXPERIMENT DESIGN. EE & CS, The University of Newcastle, Australia EE, Technion, Israel.

UTILIZING PRIOR KNOWLEDGE IN ROBUST OPTIMAL EXPERIMENT DESIGN. EE & CS, The University of Newcastle, Australia EE, Technion, Israel. UTILIZING PRIOR KNOWLEDGE IN ROBUST OPTIMAL EXPERIMENT DESIGN Graham C. Goodwin James S. Welsh Arie Feuer Milan Depich EE & CS, The University of Newcastle, Australia 38. EE, Technion, Israel. Abstract:

More information

Operator-Valued Free Probability Theory and Block Random Matrices. Roland Speicher Queen s University Kingston

Operator-Valued Free Probability Theory and Block Random Matrices. Roland Speicher Queen s University Kingston Operator-Valued Free Probability Theory and Block Random Matrices Roland Speicher Queen s University Kingston I. Operator-valued semicircular elements and block random matrices II. General operator-valued

More information

Are There Sixth Order Three Dimensional PNS Hankel Tensors?

Are There Sixth Order Three Dimensional PNS Hankel Tensors? Are There Sixth Order Three Dimensional PNS Hankel Tensors? Guoyin Li Liqun Qi Qun Wang November 17, 014 Abstract Are there positive semi-definite PSD) but not sums of squares SOS) Hankel tensors? If the

More information

Spectral Properties of Matrix Polynomials in the Max Algebra

Spectral Properties of Matrix Polynomials in the Max Algebra Spectral Properties of Matrix Polynomials in the Max Algebra Buket Benek Gursoy 1,1, Oliver Mason a,1, a Hamilton Institute, National University of Ireland, Maynooth Maynooth, Co Kildare, Ireland Abstract

More information

Multivariate Distributions

Multivariate Distributions IEOR E4602: Quantitative Risk Management Spring 2016 c 2016 by Martin Haugh Multivariate Distributions We will study multivariate distributions in these notes, focusing 1 in particular on multivariate

More information

SMSTC (2007/08) Probability.

SMSTC (2007/08) Probability. SMSTC (27/8) Probability www.smstc.ac.uk Contents 12 Markov chains in continuous time 12 1 12.1 Markov property and the Kolmogorov equations.................... 12 2 12.1.1 Finite state space.................................

More information

Zero controllability in discrete-time structured systems

Zero controllability in discrete-time structured systems 1 Zero controllability in discrete-time structured systems Jacob van der Woude arxiv:173.8394v1 [math.oc] 24 Mar 217 Abstract In this paper we consider complex dynamical networks modeled by means of state

More information

A Short Course in Basic Statistics

A Short Course in Basic Statistics A Short Course in Basic Statistics Ian Schindler November 5, 2017 Creative commons license share and share alike BY: C 1 Descriptive Statistics 1.1 Presenting statistical data Definition 1 A statistical

More information

Non-Gaussian Maximum Entropy Processes

Non-Gaussian Maximum Entropy Processes Non-Gaussian Maximum Entropy Processes Georgi N. Boshnakov & Bisher Iqelan First version: 3 April 2007 Research Report No. 3, 2007, Probability and Statistics Group School of Mathematics, The University

More information

n! (k 1)!(n k)! = F (X) U(0, 1). (x, y) = n(n 1) ( F (y) F (x) ) n 2

n! (k 1)!(n k)! = F (X) U(0, 1). (x, y) = n(n 1) ( F (y) F (x) ) n 2 Order statistics Ex. 4. (*. Let independent variables X,..., X n have U(0, distribution. Show that for every x (0,, we have P ( X ( < x and P ( X (n > x as n. Ex. 4.2 (**. By using induction or otherwise,

More information

MATHEMATICAL ENGINEERING TECHNICAL REPORTS

MATHEMATICAL ENGINEERING TECHNICAL REPORTS MATHEMATICAL ENGINEERING TECHNICAL REPORTS Combinatorial Relaxation Algorithm for the Entire Sequence of the Maximum Degree of Minors in Mixed Polynomial Matrices Shun SATO (Communicated by Taayasu MATSUO)

More information

ESTIMATES FOR THE MONGE-AMPERE EQUATION

ESTIMATES FOR THE MONGE-AMPERE EQUATION GLOBAL W 2,p ESTIMATES FOR THE MONGE-AMPERE EQUATION O. SAVIN Abstract. We use a localization property of boundary sections for solutions to the Monge-Ampere equation obtain global W 2,p estimates under

More information