Optimal Multiple Decision Statistical Procedure for Inverse Covariance Matrix
Alexander P. Koldanov and Petr A. Koldanov

Abstract A multiple decision statistical problem for the elements of the inverse covariance matrix is considered. An associated optimal unbiased multiple decision statistical procedure is given. This procedure is constructed using the Lehmann theory of multiple decision statistical procedures and the conditional tests of the Neyman structure. The equations for threshold calculation for the tests of the Neyman structure are analyzed.

Keywords Inverse covariance matrix · Tests of the Neyman structure · Multiple decision statistical procedure · Generating hypothesis

1 Introduction

A market network is constructed by means of some similarity measure between every pair of stocks. The most popular measure of similarity between stocks of a market is the correlation between them [1-4]. The analysis of methods of market graph construction [2] from the statistical point of view was started in [5], where a multiple decision statistical procedure for market graph construction based on the Pearson test is suggested. The authors of [5] note that a procedure of this type can be made optimal in the class of unbiased multiple decision statistical procedures if one uses the tests of the Neyman structure for the generating hypotheses. In the present paper we use partial correlations as a measure of similarity between stocks. In this case the elements of the inverse covariance matrix are the weights of links between stocks in the market network [6].

A.P. Koldanov · P.A. Koldanov: National Research University Higher School of Economics, Bolshaya Pecherskaya 25/12, Nizhny Novgorod, 603155, Russia; akoldanov@hse.ru; pkoldanov@hse.ru

V.F. Demyanov et al. (eds.), Constructive Nonsmooth Analysis and Related Topics, Springer Optimization and Its Applications 87, DOI 10.1007/, Springer Science+Business Media New York
The main goal of the paper is the investigation of the problem of identification of the inverse covariance matrix as a multiple decision statistical problem. As a result, an optimal unbiased multiple decision statistical procedure for identification of the inverse covariance matrix is given. This procedure is constructed using the Lehmann theory of multiple decision statistical procedures and the conditional tests of the Neyman structure. In addition, the equations for threshold calculation for the tests of the Neyman structure are analyzed.

The paper is organized as follows. In Sect. 2 we briefly recall the Lehmann theory of multiple decision statistical procedures. In Sect. 3 we describe the tests of the Neyman structure. In Sect. 4 we formulate the multiple decision problem for identification of the inverse covariance matrix. In Sect. 5 we construct and study the optimal tests for testing the generating hypotheses for the elements of an inverse covariance matrix. In Sect. 6 we construct the multiple statistical procedure and consider some particular cases. In Sect. 7 we summarize the main results of the paper.

2 Lehmann Multiple Decision Theory

In this section we recall for the sake of completeness the basic idea of the Lehmann theory, following the paper [5]. Suppose that the distribution of a random vector $R$ is taken from the family $\{f(r,\theta) : \theta \in \Omega\}$, where $\theta$ is a parameter, $\Omega$ is the parametric space, and $r$ is an observation of $R$. We need to construct a statistical procedure for the selection of one from a set of $L$ hypotheses, which in the general case can be stated as:

$$H_i : \theta \in \Omega_i, \quad i = 1,\ldots,L; \qquad \Omega_i \cap \Omega_j = \emptyset,\ i \neq j; \qquad \bigcup_{i=1}^{L} \Omega_i = \Omega. \quad (1)$$

The most general theory of multiple decision procedures is the Lehmann theory [7]. The Lehmann theory is based on three concepts: generating hypotheses, generating hypothesis testing and compatibility conditions, and the additivity condition for the loss function.
The multiple decision problem (1) of selecting one from the set of $L$ hypotheses $H_i : \theta \in \Omega_i$, $i = 1,\ldots,L$, is equivalent to a family of $M$ two-decision problems:

$$H_j : \theta \in \omega_j \ \text{ vs } \ K_j : \theta \in \omega_j^1, \quad j = 1,\ldots,M, \quad (2)$$

with $\bigcup_{i=1}^{L} \Omega_i = \omega_j \cup \omega_j^1 = \Omega$.
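The bookkeeping behind this equivalence can be sketched in code. The following is an illustrative sketch, not from the paper: when, as in the identification problem studied later, every combination of generating-test outcomes is a distinct decision, the $L = 2^M$ hypotheses correspond exactly to the binary outcome patterns of the $M$ two-decision tests.

```python
from itertools import product

def decision_index(test_results):
    """Map M binary outcomes (0 = accept H_j, 1 = accept K_j) to a decision index."""
    idx = 0
    for bit in test_results:
        idx = (idx << 1) | bit  # interpret the outcome pattern as a binary number
    return idx

M = 3                                        # three generating hypotheses
patterns = list(product([0, 1], repeat=M))   # all outcome patterns chi
print(len(patterns), decision_index((0, 1, 0)))
```

With $M = 3$ there are $2^3 = 8$ patterns, matching the eight-hypothesis example worked out for $N = 3$ later in the paper.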
This equivalence is given by the relations

$$\Omega_i = \bigcap_{j=1}^{M} \omega_j^{\chi_{i,j}}, \qquad \omega_j^{\chi} = \bigcup_{\{i :\, \chi_{i,j} = \chi\}} \Omega_i, \qquad \chi_{i,j} = \begin{cases} 0, & \Omega_i \cap \omega_j \neq \emptyset, \\ 1, & \Omega_i \cap \omega_j = \emptyset, \end{cases} \quad (3)$$

where $\omega_j^0 = \omega_j$. Hypotheses $H_j$ ($j = 1,\ldots,M$) are called generating hypotheses for the problem (1). The equivalence between problem (1) and the family of problems (2) reduces the multiple decision problem to the testing of generating hypotheses. Any statistical procedure $\delta_j$ for testing the hypothesis $H_j$ can be written in the form

$$\delta_j(r) = \begin{cases} d_j, & r \in X_j, \\ d_j^1, & r \in X_j^1, \end{cases} \quad (4)$$

where $d_j$ is the decision of acceptance of $H_j$, $d_j^1$ is the decision of acceptance of $K_j$, $X_j$ is the acceptance region of $H_j$, and $X_j^1$ is the acceptance region of $K_j$ (the rejection region of $H_j$) in the sample space. One has $X_j \cap X_j^1 = \emptyset$ and $X_j \cup X_j^1 = X$, $X$ being the sample space. Define the acceptance region for $H_i$ by

$$D_i = \bigcap_{j=1}^{M} X_j^{\chi_{i,j}}, \quad (5)$$

where the $\chi_{i,j}$ are defined by (3), $X_j^0 = X_j$, and $X_j^1 = X \setminus X_j$. Note that $\bigcup_{i=1}^{L} D_i \subseteq X$, but it is possible that $\bigcup_{i=1}^{L} D_i \neq X$. Therefore, if $D_1, D_2, \ldots, D_L$ is a partition of the sample space $X$, then one can define the statistical procedure $\delta(r)$ by

$$\delta(r) = \begin{cases} d_1, & r \in D_1, \\ d_2, & r \in D_2, \\ \ \vdots \\ d_L, & r \in D_L. \end{cases} \quad (6)$$

According to [8, 9] we define the conditional risk for a multiple decision statistical procedure by

$$\operatorname{risk}(\theta,\delta) = E_\theta\, w(\theta,\delta(R)) = \sum_{k=1}^{L} w(\theta,d_k)\, P_\theta(\delta(R) = d_k), \quad (7)$$
where $E_\theta$ is the expectation with respect to the density function $f(x,\theta)$ and $w(\theta,d_k)$ is the loss from the decision $d_k$ under the condition that $\theta$ is true, $\theta \in \Omega$. Under the additivity condition on the loss function (see [5] for more details) the conditional risk can be written as

$$\operatorname{risk}(\theta,\delta) = \sum_{j=1}^{M} \operatorname{risk}(\theta,\delta_j), \quad \theta \in \Omega. \quad (8)$$

We call a statistical procedure optimal in a class of statistical procedures if it has minimal conditional risk for all $\theta \in \Omega$ in this class. The main result of the Lehmann theory (see [7]) states that if Eq. (8) is satisfied and the statistical procedures (4) are optimal in the class of unbiased statistical tests, then the associated multiple decision statistical procedure (6) is optimal in the class of unbiased multiple decision procedures.

3 Unbiasedness and Tests of the Neyman Structure

The class of unbiased multiple decision statistical procedures according to Lehmann [9, 10] is defined by:

$$E_\theta\, w(\theta',\delta(R)) \geq E_\theta\, w(\theta,\delta(R)) \quad \text{for any } \theta, \theta' \in \Omega.$$

Let $f(r;\theta)$ be the density of the exponential family

$$f(r;\theta) = c(\theta) \exp\Big(\sum_{j=1}^{M} \theta_j T_j(r)\Big) h(r), \quad (9)$$

where $c(\theta)$ is a function defined on the parameter space, $h(r)$, $T_j(r)$ are functions defined on the sample space, and $T_j(R)$ is the sufficient statistic for $\theta_j$, $j = 1,\ldots,M$. Suppose that the generating hypotheses (2) have the form

$$H_j : \theta_j = \theta_j^0 \ \text{ vs } \ K_j : \theta_j \neq \theta_j^0, \quad j = 1,2,\ldots,M, \quad (10)$$

where the $\theta_j^0$ are fixed. For a fixed $j$ the parameter $\theta_j$ is called the information or structural parameter, and the $\theta_k$, $k \neq j$, are called nuisance parameters. According to [9] the optimal unbiased tests for the generating hypotheses (10) are:

$$\delta_j = \begin{cases} d_j, & c_j^1(t_1,\ldots,t_{j-1},t_{j+1},\ldots,t_M) < t_j < c_j^2(t_1,\ldots,t_{j-1},t_{j+1},\ldots,t_M), \\ d_j^1, & \text{otherwise}, \end{cases} \quad (11)$$
where $t_i = T_i(r)$, $i = 1,\ldots,M$, and the constants $c_j^1(t_1,\ldots,t_{j-1},t_{j+1},\ldots,t_M)$, $c_j^2(t_1,\ldots,t_{j-1},t_{j+1},\ldots,t_M)$ are defined from the equations

$$\int_{c_j^1}^{c_j^2} f(t_j, \theta_j^0 \mid T_i = t_i,\ i = 1,\ldots,M;\ i \neq j)\, dt_j = 1 - \alpha_j \quad (12)$$

and

$$\int_{-\infty}^{c_j^1} t_j f(t_j, \theta_j^0 \mid T_i = t_i,\ i \neq j)\, dt_j + \int_{c_j^2}^{+\infty} t_j f(t_j, \theta_j^0 \mid T_i = t_i,\ i \neq j)\, dt_j = \alpha_j \int_{-\infty}^{+\infty} t_j f(t_j, \theta_j^0 \mid T_i = t_i,\ i \neq j)\, dt_j, \quad (13)$$

where $f(t_j, \theta_j^0 \mid T_i = t_i,\ i = 1,\ldots,M;\ i \neq j)$ is the density of the conditional distribution of the statistic $T_j$ and $\alpha_j$ is the level of significance of the test. A test satisfying (12) is said to have Neyman structure. Such a test is characterized by the fact that the conditional probability of rejection of $H_j$ (under the assumption that $H_j$ is true) is equal to $\alpha_j$ on each of the surfaces $T_k(x) = t_k$ ($k \neq j$). Therefore the multiple decision statistical procedure associated with the tests of the Neyman structure (11), (12), and (13) is optimal in the class of unbiased multiple decision procedures.

4 Problem of Identification of Inverse Covariance Matrix

In this section we formulate the multiple decision problem for the elements of the inverse covariance matrix. Let $N$ be the number of stocks on a financial market, and let $n$ be the number of observations. Denote by $r_i(t)$ the daily return of the stock $i$ for the day $t$ ($i = 1,\ldots,N$; $t = 1,\ldots,n$). We suppose $r_i(t)$ to be an observation of the random variable $R_i(t)$. We use the standard assumptions: the random variables $R_i(t)$, $t = 1,\ldots,n$, are independent and all have the same distribution as a random variable $R_i$ ($i = 1,\ldots,N$). The random vector $(R_1,R_2,\ldots,R_N)$ describes the joint behavior of the stocks. We assume that the vector $(R_1,R_2,\ldots,R_N)$ has a multivariate normal distribution with covariance matrix $\|\sigma_{ij}\|$, where $\sigma_{ij} = \operatorname{cov}(R_i,R_j) = E(R_i - E(R_i))(R_j - E(R_j))$, $\rho_{ij} = \sigma_{ij}/\sqrt{\sigma_{ii}\sigma_{jj}}$, $i,j = 1,\ldots,N$, and $E(R_i)$ is the expectation of the random variable $R_i$. We define the sample space as $R^{N \times n}$ with elements $(r_i(t))$.
The statistical estimation of $\sigma_{ij}$ is $s_{ij} = \sum_{t=1}^{n} (r_i(t) - \overline{r}_i)(r_j(t) - \overline{r}_j)$, where $\overline{r}_i = (1/n)\sum_{t=1}^{n} r_i(t)$. The sample correlation between the stocks $i$ and $j$ is defined by $r_{ij} = s_{ij}/\sqrt{s_{ii}s_{jj}}$.
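These estimators are straightforward to check numerically. The following sketch uses synthetic returns, not the paper's market data, and verifies the basic properties of the resulting sample correlation matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 3, 100
r = rng.normal(size=(N, n))            # r[i, t]: return of stock i on day t

rbar = r.mean(axis=1, keepdims=True)   # sample means \bar r_i
s = (r - rbar) @ (r - rbar).T          # s_ij = sum_t (r_i(t) - \bar r_i)(r_j(t) - \bar r_j)
d = np.sqrt(np.diag(s))
corr = s / np.outer(d, d)              # r_ij = s_ij / sqrt(s_ii s_jj)
print(np.round(corr, 3))
```

The matrix `corr` is symmetric with unit diagonal and entries in $[-1, 1]$, as the definition of $r_{ij}$ requires.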
It is known [11] that for a multivariate normal vector the statistics $(\overline{r}_1, \overline{r}_2, \ldots, \overline{r}_N)$ and $\|s_{ij}\|$ (the matrix of sample covariances) are sufficient. Let $\sigma^{ij}$ be the elements of the inverse covariance matrix $\|\sigma_{ij}\|^{-1}$. Then the problem of identification of the inverse covariance matrix can be formulated as a multiple decision problem of the selection of one from the following set of hypotheses:

$$\begin{aligned}
H_1 &: \sigma^{ij} = \sigma^{ij}_0,\ i,j = 1,\ldots,N,\ i < j, \\
H_2 &: \sigma^{12} \neq \sigma^{12}_0,\ \sigma^{ij} = \sigma^{ij}_0,\ i,j = 1,\ldots,N,\ (i,j) \neq (1,2),\ i < j, \\
H_3 &: \sigma^{13} \neq \sigma^{13}_0,\ \sigma^{ij} = \sigma^{ij}_0,\ i,j = 1,\ldots,N,\ (i,j) \neq (1,3),\ i < j, \\
H_4 &: \sigma^{12} \neq \sigma^{12}_0,\ \sigma^{13} \neq \sigma^{13}_0,\ \sigma^{ij} = \sigma^{ij}_0,\ i,j = 1,\ldots,N,\ (i,j) \neq (1,2),\ (i,j) \neq (1,3), \\
&\ \vdots \\
H_L &: \sigma^{ij} \neq \sigma^{ij}_0,\ i,j = 1,\ldots,N,\ i < j,
\end{aligned} \quad (14)$$

where $L = 2^M$ with $M = N(N-1)/2$. The multiple decision problem (14) is a particular case of the problem (1). The parameter space $\Omega$ is the space of positive semi-definite matrices $\|\sigma_{ij}\|$; $\Omega_k$ is the domain in the parameter space associated with the hypothesis $H_k$ from the set (14), $k = 1,\ldots,L$. For the multiple decision problem (14) we introduce the following set of generating hypotheses:

$$h_{i,j} : \sigma^{ij} = \sigma^{ij}_0 \ \text{ vs } \ k_{i,j} : \sigma^{ij} \neq \sigma^{ij}_0, \quad i,j = 1,2,\ldots,N,\ i < j. \quad (15)$$

We use the following notation: $d_{i,j}$ is the decision of acceptance of the hypothesis $h_{i,j}$ and $d_{i,j}^1$ is the decision of rejection of $h_{i,j}$.
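As noted in the introduction, the tested parameters $\sigma^{ij}$ are the link weights of the market network and are tied to partial correlations. The following sketch illustrates the standard relation $\rho_{ij\cdot\text{rest}} = -\sigma^{ij}/\sqrt{\sigma^{ii}\sigma^{jj}}$; the covariance matrix below is an assumed toy example, not data from the paper.

```python
import numpy as np

cov = np.array([[2.0, 0.8, 0.3],
                [0.8, 1.5, 0.4],
                [0.3, 0.4, 1.0]])        # assumed covariance matrix (positive definite)
prec = np.linalg.inv(cov)                # entries sigma^{ij} of the inverse covariance matrix
dd = np.sqrt(np.diag(prec))
partial = -prec / np.outer(dd, dd)       # rho_{ij.rest} = -sigma^{ij} / sqrt(sigma^{ii} sigma^{jj})
np.fill_diagonal(partial, 1.0)

# sigma^{ij} = 0 is equivalent to a zero partial correlation, i.e. no link
# between stocks i and j in the market network.
print(np.round(partial, 3))
```

Testing $\sigma^{ij} = \sigma^{ij}_0$ with $\sigma^{ij}_0 = 0$ therefore amounts to testing for the absence of a link between stocks $i$ and $j$.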
5 Tests of the Neyman Structure for Testing of Generating Hypotheses

Now we construct the optimal tests in the class of unbiased tests for the generating hypotheses (15). To construct these tests we use the sufficient statistics $s_{ij}$ with the following Wishart density function [11]:

$$f(\{s_{kl}\}) = \frac{[\det(\sigma^{kl})]^{n/2} [\det(s_{kl})]^{(n-N-2)/2} \exp\big[-\tfrac{1}{2}\sum_k \sum_l s_{kl}\sigma^{kl}\big]}{2^{Nn/2}\, \pi^{N(N-1)/4}\, \Gamma(n/2)\Gamma((n-1)/2)\cdots\Gamma((n-N+1)/2)}$$

if the matrix $(s_{kl})$ is positive definite, and $f(\{s_{kl}\}) = 0$ otherwise. One has, for a fixed $i < j$:

$$f(\{s_{kl}\}) = C(\{\sigma^{kl}\}) \exp\Big[-\sigma^{ij}s_{ij} - \tfrac{1}{2}\sum_{(k,l)\neq(i,j);\,(k,l)\neq(j,i)} s_{kl}\sigma^{kl}\Big] h(\{s_{kl}\}),$$

where

$$C(\{\sigma^{kl}\}) = \frac{1}{q}[\det(\sigma^{kl})]^{n/2}, \qquad q = 2^{Nn/2}\, \pi^{N(N-1)/4}\, \Gamma(n/2)\Gamma((n-1)/2)\cdots\Gamma((n-N+1)/2),$$

$$h(\{s_{kl}\}) = [\det(s_{kl})]^{(n-N-2)/2}.$$

Therefore this distribution belongs to the class of exponential distributions with parameters $\sigma^{kl}$. The optimal tests of the Neyman structure (11), (12), and (13) for the generating hypotheses (15) take the form:

$$\delta_{i,j}(\{s_{kl}\}) = \begin{cases} d_{i,j}, & c^1_{i,j}(\{s_{kl}\}) < s_{ij} < c^2_{i,j}(\{s_{kl}\}),\ (k,l)\neq(i,j), \\ d^1_{i,j}, & s_{ij} \leq c^1_{i,j}(\{s_{kl}\}) \text{ or } s_{ij} \geq c^2_{i,j}(\{s_{kl}\}),\ (k,l)\neq(i,j), \end{cases} \quad (16)$$

where the critical values are defined from the equations

$$\frac{\int_{I \cap [c^1_{i,j};\,c^2_{i,j}]} \exp[-\sigma^{ij}_0 s_{ij}][\det(s_{kl})]^{(n-N-2)/2}\, ds_{ij}}{\int_{I} \exp[-\sigma^{ij}_0 s_{ij}][\det(s_{kl})]^{(n-N-2)/2}\, ds_{ij}} = 1 - \alpha_{i,j} \quad (17)$$

and

$$\int_{I \cap (-\infty;\,c^1_{i,j}]} s_{ij}\exp[-\sigma^{ij}_0 s_{ij}][\det(s_{kl})]^{(n-N-2)/2}\, ds_{ij} + \int_{I \cap [c^2_{i,j};\,+\infty)} s_{ij}\exp[-\sigma^{ij}_0 s_{ij}][\det(s_{kl})]^{(n-N-2)/2}\, ds_{ij} = \alpha_{i,j} \int_{I} s_{ij}\exp[-\sigma^{ij}_0 s_{ij}][\det(s_{kl})]^{(n-N-2)/2}\, ds_{ij}, \quad (18)$$

where $I$ is the interval of values of $s_{ij}$ such that the matrix $(s_{kl})$ is positive definite, and $\alpha_{i,j}$ is the level of significance of the tests.

Consider Eqs. (17) and (18). Note that $\det(s_{kl})$ is a quadratic polynomial in $s_{ij}$. Let

$$\det(s_{kl}) = -C_1 s_{ij}^2 + C_2 s_{ij} + C_3 = C_1(-s_{ij}^2 + A s_{ij} + B), \quad C_1 > 0;$$

then the positive definiteness of the matrix $(s_{kl})$ for fixed $s_{kl}$, $(k,l) \neq (i,j)$, gives the following interval for the value of $s_{ij}$:

$$I = \Big\{x : \frac{A}{2} - \sqrt{\frac{A^2}{4} + B} < x < \frac{A}{2} + \sqrt{\frac{A^2}{4} + B}\Big\}. \quad (19)$$

Now we define the functions

$$\Psi_1(x) = \int^x (-t^2 + At + B)^K \exp(-\sigma^{ij}_0 t)\, dt, \quad (20)$$

$$\Psi_2(x) = \int^x t(-t^2 + At + B)^K \exp(-\sigma^{ij}_0 t)\, dt, \quad (21)$$
where $K = (n-N-2)/2$. One can calculate the critical values $c^1_{i,j}, c^2_{i,j}$ from the equations:

$$\Psi_1(c^2_{i,j}) - \Psi_1(c^1_{i,j}) = (1-\alpha_{i,j})\Big(\Psi_1\Big(\frac{A}{2} + \sqrt{\frac{A^2}{4}+B}\Big) - \Psi_1\Big(\frac{A}{2} - \sqrt{\frac{A^2}{4}+B}\Big)\Big), \quad (22)$$

$$\Psi_2(c^2_{i,j}) - \Psi_2(c^1_{i,j}) = (1-\alpha_{i,j})\Big(\Psi_2\Big(\frac{A}{2} + \sqrt{\frac{A^2}{4}+B}\Big) - \Psi_2\Big(\frac{A}{2} - \sqrt{\frac{A^2}{4}+B}\Big)\Big). \quad (23)$$

The test (16) can be written in terms of sample correlations. First note that $\det(s_{kl}) = \det(r_{kl})\, s_{11}s_{22}\cdots s_{NN}$, where the $r_{kl}$ are the sample correlations. One has

$$\frac{\int_{J \cap [e^1_{i,j};\,e^2_{i,j}]} \exp[-\sigma^{ij}_0 r_{ij}\sqrt{s_{ii}s_{jj}}][\det(r_{kl})]^{(n-N-2)/2}\, dr_{ij}}{\int_{J} \exp[-\sigma^{ij}_0 r_{ij}\sqrt{s_{ii}s_{jj}}][\det(r_{kl})]^{(n-N-2)/2}\, dr_{ij}} = 1 - \alpha_{i,j}, \quad (24)$$

$$\int_{J \cap (-\infty;\,e^1_{i,j}]} r_{ij}\exp[-\sigma^{ij}_0 r_{ij}\sqrt{s_{ii}s_{jj}}][\det(r_{kl})]^{(n-N-2)/2}\, dr_{ij} + \int_{J \cap [e^2_{i,j};\,+\infty)} r_{ij}\exp[-\sigma^{ij}_0 r_{ij}\sqrt{s_{ii}s_{jj}}][\det(r_{kl})]^{(n-N-2)/2}\, dr_{ij} = \alpha_{i,j} \int_{J} r_{ij}\exp[-\sigma^{ij}_0 r_{ij}\sqrt{s_{ii}s_{jj}}][\det(r_{kl})]^{(n-N-2)/2}\, dr_{ij}, \quad (25)$$

where $I = J\sqrt{s_{ii}s_{jj}}$ and $c^k_{i,j} = e^k_{i,j}\sqrt{s_{ii}s_{jj}}$, $k = 1,2$. Therefore the tests (16) take the form

$$\delta_{i,j}(\{r_{kl}\}) = \begin{cases} d_{i,j}, & e^1_{i,j}(s_{ii},s_{jj},\{r_{kl}\}) < r_{ij} < e^2_{i,j}(s_{ii},s_{jj},\{r_{kl}\}),\ (k,l)\neq(i,j), \\ d^1_{i,j}, & r_{ij} \leq e^1_{i,j}(s_{ii},s_{jj},\{r_{kl}\}) \text{ or } r_{ij} \geq e^2_{i,j}(s_{ii},s_{jj},\{r_{kl}\}),\ (k,l)\neq(i,j). \end{cases} \quad (26)$$

It means that the tests of the Neyman structure for the generating hypotheses (15) do not depend on $s_{kk}$, $k \neq i$, $k \neq j$. In particular, for $N = 3$, $(i,j) = (1,2)$ one has

$$\delta_{1,2}(\{r_{kl}\}) = \begin{cases} d_{1,2}, & e^1_{12}(s_{11},s_{22},r_{13},r_{23}) < r_{12} < e^2_{12}(s_{11},s_{22},r_{13},r_{23}), \\ d^1_{1,2}, & r_{12} \leq e^1_{12}(s_{11},s_{22},r_{13},r_{23}) \text{ or } r_{12} \geq e^2_{12}(s_{11},s_{22},r_{13},r_{23}). \end{cases} \quad (27)$$

To emphasize the peculiarity of the constructed tests we consider some interesting particular cases.

Case $n-N-2 = 0$, $\sigma^{ij}_0 \neq 0$. In this case expressions (20) and (21) can be simplified. Indeed, one has

$$\Psi_1(x) = \int_0^x \exp(-\sigma^{ij}_0 t)\, dt = \frac{1}{\sigma^{ij}_0}\big(1 - \exp(-\sigma^{ij}_0 x)\big),$$
$$\Psi_2(x) = \int_0^x t\exp(-\sigma^{ij}_0 t)\, dt = -\frac{x}{\sigma^{ij}_0}\exp(-\sigma^{ij}_0 x) - \frac{1}{(\sigma^{ij}_0)^2}\exp(-\sigma^{ij}_0 x) + \frac{1}{(\sigma^{ij}_0)^2}.$$

Finally, one has the system of two equations defining the constants $c^1_{i,j}, c^2_{i,j}$:

$$\exp(-\sigma^{ij}_0 c^1_{i,j}) - \exp(-\sigma^{ij}_0 c^2_{i,j}) = (1-\alpha_{i,j})\Big\{\exp\Big(-\sigma^{ij}_0\Big(\frac{A}{2} - \sqrt{\frac{A^2}{4}+B}\Big)\Big) - \exp\Big(-\sigma^{ij}_0\Big(\frac{A}{2} + \sqrt{\frac{A^2}{4}+B}\Big)\Big)\Big\}, \quad (28)$$

$$c^1_{i,j}\exp(-\sigma^{ij}_0 c^1_{i,j}) - c^2_{i,j}\exp(-\sigma^{ij}_0 c^2_{i,j}) = (1-\alpha_{i,j})\Big\{\Big(\frac{A}{2} - \sqrt{\frac{A^2}{4}+B}\Big)\exp\Big(-\sigma^{ij}_0\Big(\frac{A}{2} - \sqrt{\frac{A^2}{4}+B}\Big)\Big) - \Big(\frac{A}{2} + \sqrt{\frac{A^2}{4}+B}\Big)\exp\Big(-\sigma^{ij}_0\Big(\frac{A}{2} + \sqrt{\frac{A^2}{4}+B}\Big)\Big)\Big\}. \quad (29)$$

Case $\sigma^{ij}_0 = 0$. In this case the critical values are defined by the system of algebraic equations (22) and (23), where the functions $\Psi_1, \Psi_2$ are defined by

$$\Psi_1(x) = \int^x (-t^2 + At + B)^K\, dt, \qquad \Psi_2(x) = \int^x t(-t^2 + At + B)^K\, dt.$$

In this case the tests of the Neyman structure have the form

$$\delta_{i,j}(\{r_{kl}\}) = \begin{cases} d_{i,j}, & e^1_{i,j}(\{r_{kl}\}) < r_{ij} < e^2_{i,j}(\{r_{kl}\}),\ (k,l)\neq(i,j), \\ d^1_{i,j}, & r_{ij} \leq e^1_{i,j}(\{r_{kl}\}) \text{ or } r_{ij} \geq e^2_{i,j}(\{r_{kl}\}),\ (k,l)\neq(i,j). \end{cases} \quad (30)$$

Case $n-N-2 = 0$, $\sigma^{ij}_0 = 0$. In this case one has

$$c^1_{i,j} = \frac{A}{2} - (1-\alpha_{i,j})\sqrt{\frac{A^2}{4} + B}, \qquad c^2_{i,j} = \frac{A}{2} + (1-\alpha_{i,j})\sqrt{\frac{A^2}{4} + B}. \quad (31)$$

6 Multiple Statistical Procedure Based on the Tests of the Neyman Structure

Now it is possible to construct the multiple decision statistical procedure for problem (14) based on the tests of the Neyman structure. The multiple decision statistical procedure (6) takes the form:
$$\delta = \begin{cases} d_1, & c^1_{i,j}(\{s_{kl}\}) < s_{ij} < c^2_{i,j}(\{s_{kl}\}),\ i < j,\ (k,l)\neq(i,j), \\ d_2, & s_{12} \leq c^1_{1,2}(\{s_{kl}\}) \text{ or } s_{12} \geq c^2_{1,2}(\{s_{kl}\});\ c^1_{i,j}(\{s_{kl}\}) < s_{ij} < c^2_{i,j}(\{s_{kl}\}),\ (i,j)\neq(1,2),\ i < j, \\ \ \vdots \\ d_L, & s_{ij} \leq c^1_{i,j}(\{s_{kl}\}) \text{ or } s_{ij} \geq c^2_{i,j}(\{s_{kl}\}),\ i < j,\ (k,l)\neq(i,j), \end{cases} \quad (32)$$

where the $c^k_{i,j}(\{s_{kl}\})$ are defined from Eqs. (17) and (18). One has $D_k = \{r \in R^{N \times n} : \delta(r) = d_k\}$, $k = 1,2,\ldots,L$. It is clear that $\bigcup_{k=1}^{L} D_k = R^{N \times n}$. Then $D_1, D_2, \ldots, D_L$ is a partition of the sample space $R^{N \times n}$. The tests of the Neyman structure for the generating hypotheses (15) are optimal in the class of unbiased tests. Therefore, if the additivity condition (8) on the loss function is satisfied, then the associated multiple decision statistical procedure is optimal. For a discussion of the additivity of the loss function see [5].

We illustrate the statistical procedure (32) with an example. Let $N = 3$. In this case problem (14) is the problem of the selection of one from eight hypotheses:

$$\begin{aligned}
H_1 &: \sigma^{12} = \sigma^{12}_0,\ \sigma^{13} = \sigma^{13}_0,\ \sigma^{23} = \sigma^{23}_0, \\
H_2 &: \sigma^{12} \neq \sigma^{12}_0,\ \sigma^{13} = \sigma^{13}_0,\ \sigma^{23} = \sigma^{23}_0, \\
H_3 &: \sigma^{12} = \sigma^{12}_0,\ \sigma^{13} \neq \sigma^{13}_0,\ \sigma^{23} = \sigma^{23}_0, \\
H_4 &: \sigma^{12} = \sigma^{12}_0,\ \sigma^{13} = \sigma^{13}_0,\ \sigma^{23} \neq \sigma^{23}_0, \\
H_5 &: \sigma^{12} \neq \sigma^{12}_0,\ \sigma^{13} \neq \sigma^{13}_0,\ \sigma^{23} = \sigma^{23}_0, \\
H_6 &: \sigma^{12} = \sigma^{12}_0,\ \sigma^{13} \neq \sigma^{13}_0,\ \sigma^{23} \neq \sigma^{23}_0, \\
H_7 &: \sigma^{12} \neq \sigma^{12}_0,\ \sigma^{13} = \sigma^{13}_0,\ \sigma^{23} \neq \sigma^{23}_0, \\
H_8 &: \sigma^{12} \neq \sigma^{12}_0,\ \sigma^{13} \neq \sigma^{13}_0,\ \sigma^{23} \neq \sigma^{23}_0.
\end{aligned} \quad (33)$$

The generating hypotheses are:

$h_{1,2} : \sigma^{12} = \sigma^{12}_0$ vs $k_{1,2} : \sigma^{12} \neq \sigma^{12}_0$; $\sigma^{13}, \sigma^{23}$ are the nuisance parameters.
$h_{1,3} : \sigma^{13} = \sigma^{13}_0$ vs $k_{1,3} : \sigma^{13} \neq \sigma^{13}_0$; $\sigma^{12}, \sigma^{23}$ are the nuisance parameters.
$h_{2,3} : \sigma^{23} = \sigma^{23}_0$ vs $k_{2,3} : \sigma^{23} \neq \sigma^{23}_0$; $\sigma^{12}, \sigma^{13}$ are the nuisance parameters.
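For the particular case treated in this section with $n - N - 2 = 0$ and $\sigma^{ij}_0 = 0$ (i.e. $N = 3$, $n = 5$), the critical values have a closed form: the acceptance interval for each sample correlation is centered at the product of the other two, with half-width $(1-\alpha)\sqrt{(1-r_a^2)(1-r_b^2)}$. The sketch below encodes this rule; the input correlation values are assumed for illustration.

```python
import numpy as np

def acceptance_interval(ra, rb, alpha):
    """Closed-form acceptance interval for one sample correlation given the other two."""
    G = np.sqrt((1.0 - ra**2) * (1.0 - rb**2))
    c = ra * rb
    return c - (1.0 - alpha) * G, c + (1.0 - alpha) * G

def decide(r12, r13, r23, alpha=0.05):
    """Rejection pattern over the pairs (1,2), (1,3), (2,3): 0 = accept h_ij, 1 = reject."""
    triples = [(r12, r13, r23), (r13, r12, r23), (r23, r12, r13)]
    pattern = []
    for rij, ra, rb in triples:
        lo, hi = acceptance_interval(ra, rb, alpha)
        pattern.append(0 if lo < rij < hi else 1)
    return tuple(pattern)  # (0,0,0) corresponds to H_1, ..., (1,1,1) to H_8

print(decide(0.99, 0.0, 0.0))  # only the (1,2) link is detected
```

The three binary test outcomes jointly select one of the eight decisions $d_1,\ldots,d_8$ of the procedure below.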
In this case the multiple statistical procedure for problem (33) (if $\sigma^{ij}_0 \neq 0$) is:

$$\delta = \begin{cases} d_1, & c^1_{12} < r_{12} < c^2_{12},\ c^1_{13} < r_{13} < c^2_{13},\ c^1_{23} < r_{23} < c^2_{23}, \\ d_2, & r_{12} \leq c^1_{12} \text{ or } r_{12} \geq c^2_{12},\ c^1_{13} < r_{13} < c^2_{13},\ c^1_{23} < r_{23} < c^2_{23}, \\ d_3, & c^1_{12} < r_{12} < c^2_{12},\ r_{13} \leq c^1_{13} \text{ or } r_{13} \geq c^2_{13},\ c^1_{23} < r_{23} < c^2_{23}, \\ d_4, & c^1_{12} < r_{12} < c^2_{12},\ c^1_{13} < r_{13} < c^2_{13},\ r_{23} \leq c^1_{23} \text{ or } r_{23} \geq c^2_{23}, \\ d_5, & r_{12} \leq c^1_{12} \text{ or } r_{12} \geq c^2_{12},\ r_{13} \leq c^1_{13} \text{ or } r_{13} \geq c^2_{13},\ c^1_{23} < r_{23} < c^2_{23}, \\ d_6, & c^1_{12} < r_{12} < c^2_{12},\ r_{13} \leq c^1_{13} \text{ or } r_{13} \geq c^2_{13},\ r_{23} \leq c^1_{23} \text{ or } r_{23} \geq c^2_{23}, \\ d_7, & r_{12} \leq c^1_{12} \text{ or } r_{12} \geq c^2_{12},\ c^1_{13} < r_{13} < c^2_{13},\ r_{23} \leq c^1_{23} \text{ or } r_{23} \geq c^2_{23}, \\ d_8, & r_{12} \leq c^1_{12} \text{ or } r_{12} \geq c^2_{12},\ r_{13} \leq c^1_{13} \text{ or } r_{13} \geq c^2_{13},\ r_{23} \leq c^1_{23} \text{ or } r_{23} \geq c^2_{23}. \end{cases} \quad (34)$$

The critical values $c^k_{12} = c^k_{12}(r_{13},r_{23},s_{11}s_{22})$, $c^k_{13} = c^k_{13}(r_{12},r_{23},s_{11}s_{33})$, $c^k_{23} = c^k_{23}(r_{12},r_{13},s_{22}s_{33})$, $k = 1,2$, are defined from Eqs. (24) and (25). If $n = 5$ and $\sigma^{ij}_0 \neq 0$, $i,j = 1,2,3$, then the critical values $c^k_{i,j}$, $k = 1,2$, are defined from (28) and (29).

If $\sigma^{ij}_0 = 0$ for all $i,j$ and $n = 5$, then the tests (30) for the generating hypotheses depend on the sample correlations only. Therefore the corresponding multiple statistical procedure with $L$ decisions also depends only on the sample correlations. This procedure is (34), where the constants are $c^k_{12} = c^k_{12}(r_{13},r_{23})$, $c^k_{13} = c^k_{13}(r_{12},r_{23})$, $c^k_{23} = c^k_{23}(r_{12},r_{13})$, $k = 1,2$. In this case

$$I_{1,2} = (r_{13}r_{23} - G_{1,2};\ r_{13}r_{23} + G_{1,2}), \quad I_{1,3} = (r_{12}r_{23} - G_{1,3};\ r_{12}r_{23} + G_{1,3}), \quad I_{2,3} = (r_{12}r_{13} - G_{2,3};\ r_{12}r_{13} + G_{2,3}),$$

where

$$G_{1,2} = \sqrt{(1-r_{13}^2)(1-r_{23}^2)}, \quad G_{1,3} = \sqrt{(1-r_{12}^2)(1-r_{23}^2)}, \quad G_{2,3} = \sqrt{(1-r_{12}^2)(1-r_{13}^2)},$$

and the critical values are

$$c^1_{12} = r_{13}r_{23} - (1-\alpha_{12})G_{1,2}, \qquad c^2_{12} = r_{13}r_{23} + (1-\alpha_{12})G_{1,2},$$
$$c^1_{13} = r_{12}r_{23} - (1-\alpha_{13})G_{1,3}, \qquad c^2_{13} = r_{12}r_{23} + (1-\alpha_{13})G_{1,3},$$
$$c^1_{23} = r_{12}r_{13} - (1-\alpha_{23})G_{2,3}, \qquad c^2_{23} = r_{12}r_{13} + (1-\alpha_{23})G_{2,3}.$$

Note that in this case the procedure (34) has a very simple form.

7 Concluding Remarks

The statistical problem of identification of the elements of the inverse covariance matrix is investigated as a multiple decision problem. A solution of this problem is developed on the basis of the Lehmann theory of multiple decision procedures and the theory of tests of the Neyman structure. It is shown that this solution is optimal in the class of unbiased multiple decision statistical procedures. The obtained results can be applied to market network analysis with partial correlations as a measure of similarity between stock returns.

Acknowledgements The authors are partly supported by National Research University Higher School of Economics, Russian Federation Government grant N. 11.G

References

[1] Mantegna, R.N.: Hierarchical structure in financial markets. Eur. Phys. J. B 11, (1999)
[2] Boginsky, V., Butenko, S., Pardalos, P.M.: On structural properties of the market graph. In: Nagurney, A. (ed.) Innovations in Financial and Economic Networks. Edward Elgar Publishing Inc., Northampton (2003)
[3] Boginsky, V., Butenko, S., Pardalos, P.M.: Statistical analysis of financial networks. Comput. Stat. Data Anal. 48, (2005)
[4] Tumminello, M., Aste, T., Di Matteo, T., Mantegna, R.N.: A tool for filtering information in complex systems. Proc. Natl. Acad. Sci. USA 102(30), (2005)
[5] Koldanov, A.P., Koldanov, P.A., Kalyagin, V.A., Pardalos, P.M.: Statistical procedures for the market graph construction. Comput. Stat. Data Anal. 68, 17-29, DOI: 10.1016/j.csda (2013)
[6] Hero, A., Rajaratnam, B.: Hub discovery in partial correlation graphical models. IEEE Trans. Inform. Theor. 58(9), (2012)
[7] Lehmann, E.L.: A theory of some multiple decision procedures I. Ann. Math. Stat. 28, 1-25 (1957)
[8] Wald, A.: Statistical Decision Functions.
Wiley, New York (1950)
[9] Lehmann, E.L., Romano, J.P.: Testing Statistical Hypotheses. Springer, New York (2005)
[10] Lehmann, E.L.: A general concept of unbiasedness. Ann. Math. Stat. 22, (1951)
[11] Anderson, T.W.: An Introduction to Multivariate Statistical Analysis. 3rd edn. Wiley-Interscience, New York (2003)
More information2. What are the tradeoffs among different measures of error (e.g. probability of false alarm, probability of miss, etc.)?
ECE 830 / CS 76 Spring 06 Instructors: R. Willett & R. Nowak Lecture 3: Likelihood ratio tests, Neyman-Pearson detectors, ROC curves, and sufficient statistics Executive summary In the last lecture we
More informationInformation Theory and Hypothesis Testing
Summer School on Game Theory and Telecommunications Campione, 7-12 September, 2014 Information Theory and Hypothesis Testing Mauro Barni University of Siena September 8 Review of some basic results linking
More informationCS 6375 Machine Learning
CS 6375 Machine Learning Nicholas Ruozzi University of Texas at Dallas Slides adapted from David Sontag and Vibhav Gogate Course Info. Instructor: Nicholas Ruozzi Office: ECSS 3.409 Office hours: Tues.
More informationUniversity of Cambridge Engineering Part IIB Module 3F3: Signal and Pattern Processing Handout 2:. The Multivariate Gaussian & Decision Boundaries
University of Cambridge Engineering Part IIB Module 3F3: Signal and Pattern Processing Handout :. The Multivariate Gaussian & Decision Boundaries..15.1.5 1 8 6 6 8 1 Mark Gales mjfg@eng.cam.ac.uk Lent
More informationCS Lecture 19. Exponential Families & Expectation Propagation
CS 6347 Lecture 19 Exponential Families & Expectation Propagation Discrete State Spaces We have been focusing on the case of MRFs over discrete state spaces Probability distributions over discrete spaces
More informationMean-field equations for higher-order quantum statistical models : an information geometric approach
Mean-field equations for higher-order quantum statistical models : an information geometric approach N Yapage Department of Mathematics University of Ruhuna, Matara Sri Lanka. arxiv:1202.5726v1 [quant-ph]
More informationInvariant HPD credible sets and MAP estimators
Bayesian Analysis (007), Number 4, pp. 681 69 Invariant HPD credible sets and MAP estimators Pierre Druilhet and Jean-Michel Marin Abstract. MAP estimators and HPD credible sets are often criticized in
More informationLecture 11. Multivariate Normal theory
10. Lecture 11. Multivariate Normal theory Lecture 11. Multivariate Normal theory 1 (1 1) 11. Multivariate Normal theory 11.1. Properties of means and covariances of vectors Properties of means and covariances
More informationSTAT Financial Time Series
STAT 6104 - Financial Time Series Chapter 4 - Estimation in the time Domain Chun Yip Yau (CUHK) STAT 6104:Financial Time Series 1 / 46 Agenda 1 Introduction 2 Moment Estimates 3 Autoregressive Models (AR
More informationHypothesis Testing. 1 Definitions of test statistics. CB: chapter 8; section 10.3
Hypothesis Testing CB: chapter 8; section 0.3 Hypothesis: statement about an unknown population parameter Examples: The average age of males in Sweden is 7. (statement about population mean) The lowest
More informationRobustness of the Quadratic Discriminant Function to correlated and uncorrelated normal training samples
DOI 10.1186/s40064-016-1718-3 RESEARCH Open Access Robustness of the Quadratic Discriminant Function to correlated and uncorrelated normal training samples Atinuke Adebanji 1,2, Michael Asamoah Boaheng
More informationMultivariate Distributions
IEOR E4602: Quantitative Risk Management Spring 2016 c 2016 by Martin Haugh Multivariate Distributions We will study multivariate distributions in these notes, focusing 1 in particular on multivariate
More informationCalculation of Bayes Premium for Conditional Elliptical Risks
1 Calculation of Bayes Premium for Conditional Elliptical Risks Alfred Kume 1 and Enkelejd Hashorva University of Kent & University of Lausanne February 1, 13 Abstract: In this paper we discuss the calculation
More informationMATH5745 Multivariate Methods Lecture 07
MATH5745 Multivariate Methods Lecture 07 Tests of hypothesis on covariance matrix March 16, 2018 MATH5745 Multivariate Methods Lecture 07 March 16, 2018 1 / 39 Test on covariance matrices: Introduction
More informationDefinition 3.1 A statistical hypothesis is a statement about the unknown values of the parameters of the population distribution.
Hypothesis Testing Definition 3.1 A statistical hypothesis is a statement about the unknown values of the parameters of the population distribution. Suppose the family of population distributions is indexed
More informationThe regression model with one fixed regressor cont d
The regression model with one fixed regressor cont d 3150/4150 Lecture 4 Ragnar Nymoen 27 January 2012 The model with transformed variables Regression with transformed variables I References HGL Ch 2.8
More informationAn Algebraic and Geometric Perspective on Exponential Families
An Algebraic and Geometric Perspective on Exponential Families Caroline Uhler (IST Austria) Based on two papers: with Mateusz Micha lek, Bernd Sturmfels, and Piotr Zwiernik, and with Liam Solus and Ruriko
More informationChapter 4. Theory of Tests. 4.1 Introduction
Chapter 4 Theory of Tests 4.1 Introduction Parametric model: (X, B X, P θ ), P θ P = {P θ θ Θ} where Θ = H 0 +H 1 X = K +A : K: critical region = rejection region / A: acceptance region A decision rule
More informationVolatility. Gerald P. Dwyer. February Clemson University
Volatility Gerald P. Dwyer Clemson University February 2016 Outline 1 Volatility Characteristics of Time Series Heteroskedasticity Simpler Estimation Strategies Exponentially Weighted Moving Average Use
More informationApplication of Parametric Homogeneity of Variances Tests under Violation of Classical Assumption
Application of Parametric Homogeneity of Variances Tests under Violation of Classical Assumption Alisa A. Gorbunova and Boris Yu. Lemeshko Novosibirsk State Technical University Department of Applied Mathematics,
More informationSTAT 4385 Topic 01: Introduction & Review
STAT 4385 Topic 01: Introduction & Review Xiaogang Su, Ph.D. Department of Mathematical Science University of Texas at El Paso xsu@utep.edu Spring, 2016 Outline Welcome What is Regression Analysis? Basics
More information10708 Graphical Models: Homework 2
10708 Graphical Models: Homework 2 Due Monday, March 18, beginning of class Feburary 27, 2013 Instructions: There are five questions (one for extra credit) on this assignment. There is a problem involves
More information(2m)-TH MEAN BEHAVIOR OF SOLUTIONS OF STOCHASTIC DIFFERENTIAL EQUATIONS UNDER PARAMETRIC PERTURBATIONS
(2m)-TH MEAN BEHAVIOR OF SOLUTIONS OF STOCHASTIC DIFFERENTIAL EQUATIONS UNDER PARAMETRIC PERTURBATIONS Svetlana Janković and Miljana Jovanović Faculty of Science, Department of Mathematics, University
More informationLECTURE 10: NEYMAN-PEARSON LEMMA AND ASYMPTOTIC TESTING. The last equality is provided so this can look like a more familiar parametric test.
Economics 52 Econometrics Professor N.M. Kiefer LECTURE 1: NEYMAN-PEARSON LEMMA AND ASYMPTOTIC TESTING NEYMAN-PEARSON LEMMA: Lesson: Good tests are based on the likelihood ratio. The proof is easy in the
More informationAdmissible Significance Tests in Simultaneous Equation Models
Admissible Significance Tests in Simultaneous Equation Models T. W. Anderson Stanford University December 7, 203 Abstract Consider testing the null hypothesis that a single structural equation has specified
More informationApplication of Homogeneity Tests: Problems and Solution
Application of Homogeneity Tests: Problems and Solution Boris Yu. Lemeshko (B), Irina V. Veretelnikova, Stanislav B. Lemeshko, and Alena Yu. Novikova Novosibirsk State Technical University, Novosibirsk,
More informationChapter 2: Fundamentals of Statistics Lecture 15: Models and statistics
Chapter 2: Fundamentals of Statistics Lecture 15: Models and statistics Data from one or a series of random experiments are collected. Planning experiments and collecting data (not discussed here). Analysis:
More informationhttp://www.math.uah.edu/stat/hypothesis/.xhtml 1 of 5 7/29/2009 3:14 PM Virtual Laboratories > 9. Hy pothesis Testing > 1 2 3 4 5 6 7 1. The Basic Statistical Model As usual, our starting point is a random
More informationPOSITIVE DEFINITE FUNCTIONS AND MULTIDIMENSIONAL VERSIONS OF RANDOM VARIABLES
POSITIVE DEFINITE FUNCTIONS AND MULTIDIMENSIONAL VERSIONS OF RANDOM VARIABLES ALEXANDER KOLDOBSKY Abstract. We prove that if a function of the form f( K is positive definite on, where K is an origin symmetric
More informationMultiple orthogonal polynomials. Bessel weights
for modified Bessel weights KU Leuven, Belgium Madison WI, December 7, 2013 Classical orthogonal polynomials The (very) classical orthogonal polynomials are those of Jacobi, Laguerre and Hermite. Classical
More informationLecture 3. G. Cowan. Lecture 3 page 1. Lectures on Statistical Data Analysis
Lecture 3 1 Probability (90 min.) Definition, Bayes theorem, probability densities and their properties, catalogue of pdfs, Monte Carlo 2 Statistical tests (90 min.) general concepts, test statistics,
More informationThe Wiener Itô Chaos Expansion
1 The Wiener Itô Chaos Expansion The celebrated Wiener Itô chaos expansion is fundamental in stochastic analysis. In particular, it plays a crucial role in the Malliavin calculus as it is presented in
More informationMi-Hwa Ko. t=1 Z t is true. j=0
Commun. Korean Math. Soc. 21 (2006), No. 4, pp. 779 786 FUNCTIONAL CENTRAL LIMIT THEOREMS FOR MULTIVARIATE LINEAR PROCESSES GENERATED BY DEPENDENT RANDOM VECTORS Mi-Hwa Ko Abstract. Let X t be an m-dimensional
More informationc 2005 Society for Industrial and Applied Mathematics
SIAM J. MATRIX ANAL. APPL. Vol. XX, No. X, pp. XX XX c 005 Society for Industrial and Applied Mathematics DISTRIBUTIONS OF THE EXTREME EIGENVALUES OF THE COMPLEX JACOBI RANDOM MATRIX ENSEMBLE PLAMEN KOEV
More informationStatistical Inference On the High-dimensional Gaussian Covarianc
Statistical Inference On the High-dimensional Gaussian Covariance Matrix Department of Mathematical Sciences, Clemson University June 6, 2011 Outline Introduction Problem Setup Statistical Inference High-Dimensional
More informationAn introduction to Bayesian inference and model comparison J. Daunizeau
An introduction to Bayesian inference and model comparison J. Daunizeau ICM, Paris, France TNU, Zurich, Switzerland Overview of the talk An introduction to probabilistic modelling Bayesian model comparison
More informationSupport Vector Method for Multivariate Density Estimation
Support Vector Method for Multivariate Density Estimation Vladimir N. Vapnik Royal Halloway College and AT &T Labs, 100 Schultz Dr. Red Bank, NJ 07701 vlad@research.att.com Sayan Mukherjee CBCL, MIT E25-201
More information5.1 Uniformly Most Accurate Families of Confidence
Chapter 5 Confidence Estimation Let (X, B X, P ), P {P θ θ Θ} be sample space and {Θ, B Θ, µ} be measure space with some σ finite measure µ. 5.1 Uniformly Most Accurate Families of Confidence Sets Def.
More informationCombinatorial Dimension in Fractional Cartesian Products
Combinatorial Dimension in Fractional Cartesian Products Ron Blei, 1 Fuchang Gao 1 Department of Mathematics, University of Connecticut, Storrs, Connecticut 0668; e-mail: blei@math.uconn.edu Department
More information557: MATHEMATICAL STATISTICS II HYPOTHESIS TESTING: EXAMPLES
557: MATHEMATICAL STATISTICS II HYPOTHESIS TESTING: EXAMPLES Example Suppose that X,..., X n N, ). To test H 0 : 0 H : the most powerful test at level α is based on the statistic λx) f π) X x ) n/ exp
More informationtopics about f-divergence
topics about f-divergence Presented by Liqun Chen Mar 16th, 2018 1 Outline 1 f-gan: Training Generative Neural Samplers using Variational Experiments 2 f-gans in an Information Geometric Nutshell Experiments
More informationFDR-CONTROLLING STEPWISE PROCEDURES AND THEIR FALSE NEGATIVES RATES
FDR-CONTROLLING STEPWISE PROCEDURES AND THEIR FALSE NEGATIVES RATES Sanat K. Sarkar a a Department of Statistics, Temple University, Speakman Hall (006-00), Philadelphia, PA 19122, USA Abstract The concept
More informationARANDOM-MATRIX-THEORY-BASEDANALYSISOF STOCKS OF MARKETS FROM DIFFERENT COUNTRIES
Advances in Complex Systems, Vol. 11, No. 5 (2008) 655 668 c World Scientific Publishing Company ARANDOM-MATRIX-THEORY-BASEDANALYSISOF STOCKS OF MARKETS FROM DIFFERENT COUNTRIES RICARDO COELHO,PETERRICHMONDandSTEFANHUTZLER
More informationMultivariate Normal-Laplace Distribution and Processes
CHAPTER 4 Multivariate Normal-Laplace Distribution and Processes The normal-laplace distribution, which results from the convolution of independent normal and Laplace random variables is introduced by
More informationMore Powerful Tests for Homogeneity of Multivariate Normal Mean Vectors under an Order Restriction
Sankhyā : The Indian Journal of Statistics 2007, Volume 69, Part 4, pp. 700-716 c 2007, Indian Statistical Institute More Powerful Tests for Homogeneity of Multivariate Normal Mean Vectors under an Order
More informationGaussian representation of a class of Riesz probability distributions
arxiv:763v [mathpr] 8 Dec 7 Gaussian representation of a class of Riesz probability distributions A Hassairi Sfax University Tunisia Running title: Gaussian representation of a Riesz distribution Abstract
More informationSample Geometry. Edps/Soc 584, Psych 594. Carolyn J. Anderson
Sample Geometry Edps/Soc 584, Psych 594 Carolyn J. Anderson Department of Educational Psychology I L L I N O I S university of illinois at urbana-champaign c Board of Trustees, University of Illinois Spring
More informationResearch Article Optimal Portfolio Estimation for Dependent Financial Returns with Generalized Empirical Likelihood
Advances in Decision Sciences Volume 2012, Article ID 973173, 8 pages doi:10.1155/2012/973173 Research Article Optimal Portfolio Estimation for Dependent Financial Returns with Generalized Empirical Likelihood
More informationTheory and Methods of Statistical Inference. PART I Frequentist likelihood methods
PhD School in Statistics XXV cycle, 2010 Theory and Methods of Statistical Inference PART I Frequentist likelihood methods (A. Salvan, N. Sartori, L. Pace) Syllabus Some prerequisites: Empirical distribution
More informationCh. 5 Hypothesis Testing
Ch. 5 Hypothesis Testing The current framework of hypothesis testing is largely due to the work of Neyman and Pearson in the late 1920s, early 30s, complementing Fisher s work on estimation. As in estimation,
More informationHypothesis testing (cont d)
Hypothesis testing (cont d) Ulrich Heintz Brown University 4/12/2016 Ulrich Heintz - PHYS 1560 Lecture 11 1 Hypothesis testing Is our hypothesis about the fundamental physics correct? We will not be able
More informationResearch Article The Laplace Likelihood Ratio Test for Heteroscedasticity
International Mathematics and Mathematical Sciences Volume 2011, Article ID 249564, 7 pages doi:10.1155/2011/249564 Research Article The Laplace Likelihood Ratio Test for Heteroscedasticity J. Martin van
More informationUsing Hankel structured low-rank approximation for sparse signal recovery
Using Hankel structured low-rank approximation for sparse signal recovery Ivan Markovsky 1 and Pier Luigi Dragotti 2 Department ELEC Vrije Universiteit Brussel (VUB) Pleinlaan 2, Building K, B-1050 Brussels,
More informationStein s Method and Characteristic Functions
Stein s Method and Characteristic Functions Alexander Tikhomirov Komi Science Center of Ural Division of RAS, Syktyvkar, Russia; Singapore, NUS, 18-29 May 2015 Workshop New Directions in Stein s method
More informationStatistical Inference
Statistical Inference Classical and Bayesian Methods Class 6 AMS-UCSC Thu 26, 2012 Winter 2012. Session 1 (Class 6) AMS-132/206 Thu 26, 2012 1 / 15 Topics Topics We will talk about... 1 Hypothesis testing
More information