WEIGHTED-TYPE WISHART DISTRIBUTIONS WITH APPLICATION


REVSTAT Statistical Journal, Volume 15, Number 2, April 2017

Authors:
Mohammad Arashi — Department of Statistics, School of Mathematical Sciences, Shahrood University of Technology, Shahrood, Iran; Department of Statistics, Faculty of Natural and Agricultural Sciences, University of Pretoria, Pretoria, South Africa. m_arashi_stat@yahoo.com
Andriëtte Bekker — Department of Statistics, Faculty of Natural and Agricultural Sciences, University of Pretoria, Pretoria, South Africa. andriette.bekker@up.ac.za
Janet van Niekerk — Department of Statistics, Faculty of Natural and Agricultural Sciences, University of Pretoria, Pretoria, South Africa. janet.vanniekerk@up.ac.za

Received: March 2016    Revised: October 2016    Accepted: November 2016

Abstract: In this paper we consider a general framework for constructing new valid densities for a random matrix variate, focusing specifically on the Wishart distribution. The methodology couples the density function of the Wishart distribution with a Borel measurable function acting as a weight. We propose three different weights based on the trace and determinant operators on matrices. The characteristics of the proposed weighted-type Wishart distributions are studied and the enrichment offered by this approach is illustrated. A special case of this weighted-type distribution is applied in the Bayesian analysis of the normal model in the univariate and multivariate cases. It is shown that the performance of this new prior model is competitive under various measures.

Key-Words: Bayesian analysis; eigenvalues; Kummer gamma; Kummer Wishart; matrix variate; weight function; Wishart distribution.

AMS Subject Classification: 62E15, 60E05, 62H, 62F15.


1. INTRODUCTION

The modeling of real-world phenomena is constantly increasing in complexity, and standard statistical distributions cannot always model it adequately. The question arises whether we can introduce new models to compete with and enhance the standard approaches available in the literature. Various generalizations and extensions have been proposed for standard statistical models, since more complex models are needed to address the modeling complications of real data. To mention a few: Sutradhar et al. (1989) generalized the Wishart distribution to multivariate elliptical models, whereas Teng et al. (1989) considered matrix variate elliptical models in their study. Wong and Wang (1995) defined the Laplace-Wishart distribution, while Letac and Massam (2001) defined the normal quasi-Wishart distribution. In the context of graphical models, Roverato (2002) defined the hyper-inverse Wishart, Wang and West (2009) extended the inverse Wishart distribution using hyper-Markov properties (see Dawid and Lauritzen, 1993), and Bryc (2008) proposed the compound Wishart and q-Wishart in graphical models. Abul-Magd et al. (2009) proposed a generalization of Wishart-Laguerre ensembles. Adhikari (2010) generalized the Wishart distribution for probabilistic structural dynamics, and Díaz-García et al. (2011) extended the Wishart distribution to real normed division algebras. Munilla and Cantet (2012) also formulated a special structure of the Wishart distribution for application in the maternal animal model. These generalizations justify the speculative research to propose new models based on the concept of weighted distributions (Rao, 1965). Assuming special cases of these new models as priors for an underlying normal model in a Bayesian analysis exhibits interesting behaviour.

In this paper we propose a weighted-type Wishart distribution, making use of the mathematical mechanism frequently used for weighted-type distributions from the length-biased viewpoint, and consider its applications in Bayesian analysis. The building block of our contribution is the extension of the mathematical formulation of univariate weighted-type distributions to multivariate weighted-type distributions. Specifically, if $f(x;\sigma)$ is the main/natural probability density function (pdf), which is imposed upon by a scalar weight function $h(x;\phi)$ (not necessarily positive), then the weighted-type distribution is given by

(1.1)    $g(x;\theta) = C\,h(x;\phi)\,f(x;\sigma), \qquad \theta=(\sigma,\phi),$

where $C^{-1} = E_{\sigma}[h(X;\phi)]$ and the expectation $E_{\sigma}[\cdot]$ is taken over the same probability measure as $f(\cdot)$. The parameter $\phi$ can be seen as an enriching parameter. For the multivariate case, one can simply use the pdf of a multivariate random variable for $f(\cdot)$ in (1.1). Further, the parameter space can be multidimensional. However, the weight function $h(\cdot)$ should remain of scalar form. Thus the question that arises is: why not replace $f(\cdot)$ in (1.1) with the pdf of a matrix variate random variable?
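As an aside, the univariate mechanism (1.1) is easy to verify numerically. The following Python sketch is not from the paper; the gamma generator and the linear weight $h(x;\phi)=1+\phi x$ are illustrative choices, and the normalizing constant $E_{\sigma}[h(X;\phi)]$ is obtained by quadrature.

import numpy as np
from scipy import stats, integrate

def weighted_pdf(x, shape=3.0, scale=2.0, phi=0.5):
    # natural density f(x; sigma) and scalar weight h(x; phi) = 1 + phi*x
    f = stats.gamma(a=shape, scale=scale)
    h = lambda t: 1.0 + phi * t
    # C^{-1} = E_f[h(X; phi)], computed by quadrature over (0, inf)
    norm, _ = integrate.quad(lambda t: h(t) * f.pdf(t), 0.0, np.inf)
    return h(x) * f.pdf(x) / norm

x = np.linspace(0.01, 40.0, 800)
g = weighted_pdf(x)
print("total mass ~", integrate.trapezoid(g, x))   # close to 1

The same two-step recipe (weight, then renormalize) is what the matrix variate constructions below formalize.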

To address this issue, and using (1.1) as point of departure, we define matrix variate weighted-type distributions, from which new matrix variate distributions originate. Initially, let $S_m$ be the space of all positive definite matrices of dimension $m$. To set the platform for what we are proposing, consider a random matrix variate $X \in S_m$ having pdf $f(\cdot;\Psi)$ with parameter $\Psi$. We construct matrix variate distributions, with pdf $g(\cdot;\Theta)$, where $\Theta=(\Psi,\Phi)$ and enrichment parameter $\Phi \in S_m$, by utilizing one of the following mechanisms:

1. Loading with a weight of trace form:
(1.2)    $g(X;\Theta) = C_1\, h_1(\mathrm{tr}[X\Phi])\, f(X;\Psi), \qquad \Theta=(\Psi,\Phi).$

2. Loading with a weight of determinant form:
(1.3)    $g(X;\Theta) = C_2\, h_2(|X\Phi|)\, f(X;\Psi), \qquad \Theta=(\Psi,\Phi).$

3. Loading with a mixture of weights of trace and determinant forms:
(1.4)    $g(X;\Theta) = C_3\, h_1(\mathrm{tr}[X\Phi_1])\, h_2(|X\Phi_2|)\, f(X;\Psi), \qquad \Theta=(\Psi,\Phi_1,\Phi_2),$

where $h_i(\cdot)$, $i=1,2$, is a Borel measurable function (weight function) which admits a Taylor series expansion, $C_j$ is a normalizing constant, and $f(\cdot)$ can be referred to as a generator.

In this paper we take $f(\cdot)$ in (1.2)-(1.4) to be the pdf of the Wishart distribution with parameters $n$ and $\Sigma$, i.e. $\Psi=(n,\Sigma)$, given by

(1.5)    $\dfrac{|\Sigma|^{-n/2}}{2^{nm/2}\,\Gamma_m(n/2)}\, |X|^{(n-m-1)/2}\, \mathrm{etr}\!\left(-\tfrac{1}{2}\Sigma^{-1}X\right), \qquad X,\Sigma \in S_m,$

denoted by $W_m(n,\Sigma)$, and incorporate a weight function $h_i(\cdot)$ as in (1.2)-(1.4). Note that $\Gamma_m(\cdot)$ is the multivariate gamma function and $\mathrm{etr}(\cdot) = \exp(\mathrm{tr}(\cdot))$.

We organize the paper as follows. In Section 2 we discuss the weighted-type Wishart distribution that originates from (1.2) and derive some of its important properties; the enrichment offered by this approach is illustrated by graphical displays of the joint density function of the eigenvalues of the random matrix for certain cases. In Section 3 the weighted-type Wishart distributions emanating from (1.3) and (1.4) are proposed. The significance of this extension of the well-known Wishart distribution is demonstrated in Section 4 by assuming special cases as priors for the underlying univariate and multivariate normal models. Comparison of these cases with well-known priors paves the way for integrating these models in Bayesian analysis. Finally, some thoughts on other possible applications are given in Section 5.
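Before turning to Section 2, a quick sanity check of the generator (1.5): the sketch below (assumed parameter values, not from the paper) evaluates the Wishart log-density explicitly and compares it with scipy's implementation.

import numpy as np
from scipy.stats import wishart
from scipy.special import multigammaln

def wishart_logpdf(X, n, Sigma):
    # log of (1.5): -(n/2)log|Sigma| - (nm/2)log 2 - log Gamma_m(n/2)
    #               + ((n-m-1)/2)log|X| - (1/2)tr(Sigma^{-1} X)
    m = X.shape[0]
    _, logdet_S = np.linalg.slogdet(Sigma)
    _, logdet_X = np.linalg.slogdet(X)
    return (-0.5 * n * logdet_S - 0.5 * n * m * np.log(2.0) - multigammaln(n / 2, m)
            + 0.5 * (n - m - 1) * logdet_X - 0.5 * np.trace(np.linalg.solve(Sigma, X)))

m, n = 3, 7
Sigma = np.eye(m) + 0.2          # illustrative positive definite scale matrix
X = 2.0 * np.eye(m)
print(wishart_logpdf(X, n, Sigma), wishart(df=n, scale=Sigma).logpdf(X))  # equal values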

2. WEIGHTED-TYPE I WISHART DISTRIBUTION

In this section we consider the construction methodology of a weighted-type I Wishart distribution according to (1.2).

Definition 2.1. The random matrix $X \in S_m$ is said to have a weighted-type I Wishart distribution (W1WD) with parameters $\Psi$, $\Phi \in S_m$ and weight function $h_1(\cdot)$ if it has pdf

$g(X;\Theta) = \dfrac{h_1(\mathrm{tr}[X\Phi])\, f(X;\Psi)}{E\left[h_1(\mathrm{tr}[X\Phi])\right]}$

(2.1)    $\qquad\;\; = c_{n,m}(\Theta)\, |\Sigma|^{-n/2}\, |X|^{(n-m-1)/2}\, \mathrm{etr}\!\left(-\tfrac{1}{2}\Sigma^{-1}X\right) h_1(\mathrm{tr}[X\Phi]), \qquad \Theta=(\Psi,\Phi),$

with

(2.2)    $\{c_{n,m}(\Theta)\}^{-1} = 2^{nm/2}\,\Gamma_m\!\left(\tfrac n2\right) \sum_{k=0}^{\infty}\sum_{\kappa} \dfrac{h_1^{(k)}(0)}{k!}\, 2^{k}\left(\tfrac n2\right)_{\!\kappa} C_{\kappa}(\Phi\Sigma),$

written as $X \sim W^{I}_m(n,\Sigma,\Phi)$. In (2.1), $f(X;\Psi)$ is the pdf of the Wishart distribution $W_m(n,\Sigma)$ (see (1.5)), i.e. $\Psi=(n,\Sigma)$, $n>m$, $\Sigma\in S_m$, and $h_1(\cdot)$ is a Borel measurable function that admits a Taylor series expansion. Here $(a)_{\kappa} = \Gamma_m(a,\kappa)/\Gamma_m(a)$, where $\Gamma_m(a,\kappa)$ is the generalized gamma function. The parameters are restricted to those values for which the pdf is non-negative.

Remark 2.1. Using the Taylor series expansion of $h_1(\cdot)$ in (2.1), it follows that

(2.3)    $h_1(\mathrm{tr}[X\Phi]) = \sum_{k=0}^{\infty}\frac{h_1^{(k)}(0)}{k!}\,(\mathrm{tr}\,X\Phi)^{k} = \sum_{k=0}^{\infty}\frac{h_1^{(k)}(0)}{k!}\sum_{\kappa} C_{\kappa}(X\Phi),$

from Definition 7.2.1 of Muirhead (2005), where $h_1^{(k)}(0)$ is the $k$-th derivative of $h_1(\cdot)$ at zero. Therefore, using Theorem 7.2.7 of Muirhead (2005), it follows from Definition 2.1 that

$E\left[h_1(\mathrm{tr}[X\Phi])\right] = \int_{S_m} h_1(\mathrm{tr}[X\Phi])\,f(X;\Psi)\,dX = \frac{|\Sigma|^{-n/2}}{2^{nm/2}\Gamma_m(n/2)}\sum_{k=0}^{\infty}\frac{h_1^{(k)}(0)}{k!}\sum_{\kappa}\int_{S_m} |X|^{(n-m-1)/2}\,\mathrm{etr}\!\left(-\tfrac12\Sigma^{-1}X\right) C_{\kappa}(X\Phi)\,dX = \sum_{k=0}^{\infty}\frac{h_1^{(k)}(0)}{k!}\sum_{\kappa} 2^{k}\left(\tfrac n2\right)_{\!\kappa} C_{\kappa}(\Phi\Sigma),$

and (2.2) follows. Note that $C_{\kappa}(aX) = a^{k} C_{\kappa}(X)$ and that $C_{\kappa}(\cdot)$ is the zonal polynomial corresponding to $\kappa$ (Muirhead, 2005).
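The normalizing quantity $E[h_1(\mathrm{tr}[X\Phi])]$ can also be approximated without zonal polynomials. The hedged sketch below (illustrative parameter values) estimates it by plain Monte Carlo over ordinary Wishart draws and, for the linear weight $h_1(t)=1+t$, compares with the closed form $1 + n\,\mathrm{tr}(\Sigma\Phi)$, which follows from $E[X]=n\Sigma$.

import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(0)
m, n = 3, 9
Sigma = np.eye(m)
Phi = 0.2 * np.eye(m)
h1 = lambda t: 1.0 + t                      # e.g. the Kummer Wishart weight 1 + tr(X Phi)

X = wishart(df=n, scale=Sigma).rvs(size=20000, random_state=rng)
weights = h1(np.trace(X @ Phi, axis1=1, axis2=2))
print("Monte Carlo E[h1(tr(X Phi))] ~", weights.mean())
print("exact value for this weight  :", 1.0 + n * np.trace(Sigma @ Phi))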

6 0 Arashi, Bekker and van Niekerk Remark.. Here we consider some thoughts related to Definition. and.. As formerly noticed, the weight function should be a scalar function. In Definition., we used the trace operator, however any relevant operator can be used. The determinant operator will be discussed in Section 3. Another interesting operator can be the eigenvalue. In this respect one may use the result of Arashi 03 to get closed expression for the expected value of the weight function. For h tr[xφ] = etrxφ in. we obtain an enriched Wishart distribution with scale matrix Σ + Φ. 3 For h xφ = expxφ and m = in. the pdf simplifies to gx; θ = c n θσ n n.4 x exp σ φ x, which is the pdf of a gamma random variable with parameters n and φ, σ n σ with c n θ = φ, θ = σ Γ n, φ, written as Gα = n, β = φ. σ 4 For h x = x and m = in. the pdf simplifies to gx; θ = c n θσ n n x exp σ x x=c n θσ n n x exp σ x, n+ σ φ with c n θ = Γ n +, θ = σ, φ, hence X Gα = n +, β = φ. σ This is also called the length-biased or size-biased gamma distribution see Patil and Ord 976 with parameters n and φ. σ 5 For h tr[xφ] = + tr[xφ] in. the pdf simplifies to gx; Θ = c n,m Θ Σ n n X m+ etr.5 Σ X + trxφ, with c n,m Θ as in., Θ = n, Σ, Φ, which is defined as the Kummer Wishart distribution and denoted as KW m n, Σ, Φ. 6 For h xφ = + xφ γ, where γ is a known fixed constant, and m = in. the pdf simplifies to gx; θ = c n θσ n n x exp.6 σ x + φx γ with c n θ = n Γ n φ σ k γ! n γ k!k! κ κ, θ = σ, φ. If φ = then this is also known as the Kummer gamma or generalized gamma distribution, written as KGα = n, β =, γ, by expanding the term + φx γ σ see Pauw et al Various functional forms of h. are explored and the joint density of eigenvalues of the matrix variates are graphically illustrated to show the flexibility built in by this construction, see Table.

2.1. Characteristics

In this section some statistical properties of the W1WD (Definition 2.1) are derived. Most of the computations rely on (2.3) and Theorem 7.2.7 of Muirhead (2005), although we do not mention this every time.

Theorem 2.1. Let $X \sim W^{I}_m(n,\Sigma,\Phi)$. Then the $r$-th moment of $|X|$ is given by

$E\!\left[|X|^{r}\right] = \dfrac{c_{n,m}(\Theta)}{c_{n+2r,m}(\Theta)}\,|\Sigma|^{r},$

where $c_{n,m}(\Theta)$ and $c_{n+2r,m}(\Theta)$ are as in (2.2).

Proof: Similarly to Remark 2.1, by using (2.1),

$E\!\left[|X|^{r}\right] = c_{n,m}(\Theta)\,|\Sigma|^{-n/2}\int_{S_m}|X|^{\,r+(n-m-1)/2}\,\mathrm{etr}\!\left(-\tfrac12\Sigma^{-1}X\right)h_1(\mathrm{tr}[X\Phi])\,dX = c_{n,m}(\Theta)\,|\Sigma|^{-n/2}\sum_{k=0}^{\infty}\frac{h_1^{(k)}(0)}{k!}\sum_{\kappa}\int_{S_m}|X|^{\,r+(n-m-1)/2}\,\mathrm{etr}\!\left(-\tfrac12\Sigma^{-1}X\right)C_{\kappa}(X\Phi)\,dX,$

and the result follows.

In the following we give the exact expression for the moment generating function (MGF) of the W1WD, provided it exists.

Theorem 2.2. Let $X \sim W^{I}_m(n,\Sigma,\Phi)$. Then the moment generating function of $X$ is given by

$M_X(T) = c_{n,m}(\Theta)\, d_{n,m}\, |I_m - 2\Sigma T|^{-n/2},$

with $c_{n,m}(\Theta)$ as in (2.2) and

$d_{n,m} = 2^{nm/2}\,\Gamma_m\!\left(\tfrac n2\right)\sum_{k=0}^{\infty}\sum_{\kappa}\frac{h_1^{(k)}(0)}{k!}\,2^{k}\left(\tfrac n2\right)_{\!\kappa} C_{\kappa}\!\left(\Phi\,(\Sigma^{-1}-2T)^{-1}\right),$

for $T$ such that $\Sigma^{-1}-2T \in S_m$.

Proof: Using (2.1),

$M_X(T) = E\left[\mathrm{etr}(TX)\right] = c_{n,m}(\Theta)\,|\Sigma|^{-n/2}\int_{S_m}|X|^{(n-m-1)/2}\,\mathrm{etr}\!\left(-\tfrac12(\Sigma^{-1}-2T)X\right)h_1(\mathrm{tr}[X\Phi])\,dX.$

Expanding $h_1(\cdot)$ as in (2.3) and applying Theorem 7.2.7 of Muirhead (2005) term by term gives the stated expression, and the proof is complete.
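A hedged numerical cross-check of these moment results: under my reading of Remark 2.2(2), the exponential trace weight $h_1(\mathrm{tr}[X\Phi])=\mathrm{etr}(-X\Phi)$ turns the W1WD back into a Wishart with scale $(\Sigma^{-1}+2\Phi)^{-1}$, so its mean is $n(\Sigma^{-1}+2\Phi)^{-1}$. The sketch below verifies this by importance-weighting ordinary Wishart draws; parameter values are illustrative.

import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(1)
m, n = 2, 8
Sigma = np.array([[1.0, 0.3], [0.3, 0.5]])
Phi = 0.1 * np.eye(m)

X = wishart(df=n, scale=Sigma).rvs(size=50000, random_state=rng)
w = np.exp(-np.trace(X @ Phi, axis1=1, axis2=2))      # weight h1 = etr(-X Phi)
EX_weighted = (w[:, None, None] * X).sum(axis=0) / w.sum()

EX_exact = n * np.linalg.inv(np.linalg.inv(Sigma) + 2 * Phi)
print(np.round(EX_weighted, 3))
print(np.round(EX_exact, 3))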

Another important statistical characteristic is the joint pdf of the eigenvalues of $X$, which is given in the next theorem.

Theorem 2.3. Let $X \sim W^{I}_m(n,\Sigma,\Phi)$. Then the joint pdf of the eigenvalues $\lambda_1 > \lambda_2 > \dots > \lambda_m > 0$ of $X$ is

$c_{n,m}(\Theta)\,|\Sigma|^{-n/2}\,\frac{\pi^{m^2/2}}{\Gamma_m\!\left(\tfrac m2\right)}\prod_{i<j}^{m}(\lambda_i-\lambda_j)\;|\Lambda|^{(n-m-1)/2}\sum_{r,k=0}^{\infty}\sum_{\rho,\kappa}\sum_{\phi\in\rho\cdot\kappa}\frac{h_1^{(k)}(0)}{r!\,k!}\,\frac{C_{\phi}^{\rho,\kappa}(I_m,I_m)\;C_{\phi}^{\rho,\kappa}\!\left(-\tfrac12\Sigma^{-1},\Phi\right)}{\left[C_{\phi}(I_m)\right]^{2}}\;C_{\phi}(\Lambda).$

Proof: From Theorem 3.2.17 of Muirhead (2005), the pdf of $\Lambda=\mathrm{diag}(\lambda_1,\dots,\lambda_m)$ is

$\frac{\pi^{m^2/2}}{\Gamma_m\!\left(\tfrac m2\right)}\prod_{i<j}^{m}(\lambda_i-\lambda_j)\int_{O(m)} g(H\Lambda H';\Theta)\,dH,$

where $O(m)$ is the space of all orthogonal matrices $H$ of order $m$. Note that

$I \equiv \int_{O(m)} g(H\Lambda H';\Theta)\,dH = c_{n,m}(\Theta)\,|\Sigma|^{-n/2}\,|\Lambda|^{(n-m-1)/2}\int_{O(m)}\mathrm{etr}\!\left(-\tfrac12\Sigma^{-1}H\Lambda H'\right)h_1(\mathrm{tr}[H\Lambda H'\Phi])\,dH.$

By using (2.3) and expanding the exponential term in zonal polynomials, we get

$I = c_{n,m}(\Theta)\,|\Sigma|^{-n/2}\,|\Lambda|^{(n-m-1)/2}\sum_{r,k=0}^{\infty}\sum_{\rho,\kappa}\frac{h_1^{(k)}(0)}{r!\,k!}\int_{O(m)} C_{\rho}\!\left(-\tfrac12\Sigma^{-1}H\Lambda H'\right)C_{\kappa}(\Phi H\Lambda H')\,dH.$

Note that

$\int_{O(m)} C_{\rho}\!\left(-\tfrac12\Sigma^{-1}H\Lambda H'\right)C_{\kappa}(\Phi H\Lambda H')\,dH = \sum_{\phi\in\rho\cdot\kappa}\frac{C_{\phi}^{\rho,\kappa}(I_m,I_m)\;C_{\phi}^{\rho,\kappa}\!\left(-\tfrac12\Sigma^{-1},\Phi\right)}{\left[C_{\phi}(I_m)\right]^{2}}\,C_{\phi}(\Lambda)$

from Davis (1979, p. 468), and the result follows.

Remark 2.3. For $\Sigma = c_1 I$ and $\Phi = c_2 I$ the result can be obtained directly from Theorem 3.2.17 of Muirhead (2005), since the integrand is then orthogonally invariant:

(2.7)    $I = c_{n,m}(\Theta)\, c_1^{-mn/2}\, |\Lambda|^{(n-m-1)/2}\, \mathrm{etr}\!\left(-\tfrac{1}{2c_1}\Lambda\right) h_1(c_2\,\mathrm{tr}\,\Lambda).$

Based on Remark 2.3, Table 1 illustrates the joint pdf of the eigenvalues of $X$ for specific $c_1$, $c_2$ and $n$ and different weight functions, using (2.7). It is evident that the functional form of the weight function provides increased flexibility for the user. Negative and positive correlations amongst the eigenvalues can be obtained using different weight functions $h_1(\cdot)$.

Table 1: Joint pdf of the eigenvalues for $n=9$ and fixed $c_1$, with three values of $c_2$ (left, middle and right panels; the middle panel uses $c_2 = 0.5$), for the weight functions $h_1(c_2 x) = \exp(c_2 x)$, $h_1(c_2 x) = \exp(-c_2 x)$ and $h_1(c_2 x) = 1 + c_2 x$. [Surface plots not reproduced here.]
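The following sketch evaluates, under my reading of (2.7) for $m=2$, the joint eigenvalue density on a grid for an assumed weight; the parameter choices mirror the spirit of Table 1 but are otherwise arbitrary.

import numpy as np

def joint_eig_density(l1, l2, n=9, c1=2.0, c2=0.5, h=lambda t: 1.0 + t):
    # (2.7) with m = 2: (l1-l2) * (l1*l2)^{(n-3)/2} * exp(-(l1+l2)/(2*c1)) * h(c2*(l1+l2)),
    # restricted to the ordered region l1 > l2 > 0
    body = (l1 - l2) * (l1 * l2) ** ((n - 3) / 2) * np.exp(-(l1 + l2) / (2 * c1)) * h(c2 * (l1 + l2))
    return np.where(l1 > l2, body, 0.0)

grid = np.linspace(1e-3, 60.0, 400)
L1, L2 = np.meshgrid(grid, grid, indexing="ij")
dens = joint_eig_density(L1, L2)
dens /= dens.sum() * (grid[1] - grid[0]) ** 2          # normalize numerically on the grid
i, j = np.unravel_index(dens.argmax(), dens.shape)
print("approximate mode:", (round(grid[i], 2), round(grid[j], 2)))

Swapping the weight h (for example exp(-c2*t)) shifts and reshapes the density, which is the flexibility Table 1 illustrates graphically.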

3. FURTHER DEVELOPMENTS

3.1. Weighted-type II Wishart distribution

In this section we focus on the construction of a weighted-type Wishart distribution for which the weight function is of determinant form (see (1.3)). Before exploring this form, let $X \sim W_m(n,\Sigma)$ and let $\mu_k$ denote the $k$-th moment of $|X|$. Then, from Muirhead (2005),

$\mu_k = E\!\left[|X|^{k}\right] = 2^{mk}\,\frac{\Gamma_m\!\left(\tfrac n2 + k\right)}{\Gamma_m\!\left(\tfrac n2\right)}\,|\Sigma|^{k}.$

Thus, for any Borel measurable function $h_2(\cdot)$, making use of the Taylor series expansion, we have

(3.1)    $h_2(|X\Phi|) = \sum_{k=0}^{\infty}\frac{h_2^{(k)}(0)}{k!}\,|X\Phi|^{k}.$

Hence

$E\!\left[h_2(|X\Phi|)\right] = \sum_{k=0}^{\infty}\frac{h_2^{(k)}(0)}{k!}\,|\Phi|^{k}\mu_k = \sum_{k=0}^{\infty}\frac{h_2^{(k)}(0)}{k!}\,2^{mk}\,\frac{\Gamma_m\!\left(\tfrac n2 + k\right)}{\Gamma_m\!\left(\tfrac n2\right)}\,|\Phi\Sigma|^{k}.$

Accordingly, we have the following definition for a weighted-type Wishart distribution with weight of determinant form (see (1.3)).

Definition 3.1. The random matrix $X \in S_m$ is said to have a weighted-type II Wishart distribution (W2WD) with parameters $\Psi$, $\Phi \in S_m$ and weight function $h_2(\cdot)$ if it has pdf

$g(X;\Theta) = \frac{h_2(|X\Phi|)\,f(X;\Psi)}{E\left[h_2(|X\Phi|)\right]} = c_{n,m}(\Theta)\,|\Sigma|^{-n/2}\,|X|^{(n-m-1)/2}\,\mathrm{etr}\!\left(-\tfrac12\Sigma^{-1}X\right)h_2(|X\Phi|), \qquad \Theta=(\Psi,\Phi),$

with

$\{c_{n,m}(\Theta)\}^{-1} = |\Sigma|^{-n/2}\int_{S_m}|X|^{(n-m-1)/2}\,\mathrm{etr}\!\left(-\tfrac12\Sigma^{-1}X\right)h_2(|X\Phi|)\,dX = \sum_{k=0}^{\infty}\frac{h_2^{(k)}(0)}{k!}\,2^{\frac{(n+2k)m}{2}}\,\Gamma_m\!\left(\tfrac{n+2k}{2}\right)|\Phi\Sigma|^{k},$

and $f(X;\Psi)$ is the pdf of the Wishart distribution $W_m(n,\Sigma)$, i.e. $\Psi=(n,\Sigma)$, $n>m$, $\Sigma\in S_m$, and $h_2(\cdot)$ is a Borel measurable function that admits a Taylor series expansion. The parameters are restricted to those values for which the pdf is non-negative. We write this as $X \sim W^{II}_m(n,\Sigma,\Phi)$.
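The determinant moment $\mu_k$ driving this construction is easy to confirm by simulation. The sketch below (illustrative parameters) compares a Monte Carlo estimate of $E|X|^{k}$ with $2^{mk}\,\Gamma_m(n/2+k)|\Sigma|^{k}/\Gamma_m(n/2)$.

import numpy as np
from scipy.stats import wishart
from scipy.special import multigammaln

rng = np.random.default_rng(2)
m, n, k = 3, 10, 2
Sigma = np.diag([1.0, 0.5, 2.0])

X = wishart(df=n, scale=Sigma).rvs(size=100000, random_state=rng)
mc = np.linalg.det(X) ** k
exact = (2 ** (m * k) * np.exp(multigammaln(n / 2 + k, m) - multigammaln(n / 2, m))
         * np.linalg.det(Sigma) ** k)
print("Monte Carlo:", round(mc.mean()), " exact:", round(exact))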

3.2. Weighted-type III Wishart distribution

As before, in this section we give the definition of the weighted-type III Wishart distribution (W3WD). Utilizing a more extended version of (1.4), allowing more parameters, we have the following definition.

Definition 3.2. The random matrix $X\in S_m$ is said to have a weighted-type III Wishart distribution (W3WD) with parameters $\Psi$, $\Phi_1$ and $\Phi_2\in S_m$ and weight functions $h_1(\cdot)$ and $h_2(\cdot)$ if it has pdf

$g(X;\Theta) = \frac{h_1(\mathrm{tr}[X\Phi_1])\,h_2(|X\Phi_2|)\,f(X;\Psi)}{E\left[h_1(\mathrm{tr}[X\Phi_1])\,h_2(|X\Phi_2|)\right]} = c_{n,m}(\Theta)\,|\Sigma|^{-n/2}\,|X|^{(n-m-1)/2}\,\mathrm{etr}\!\left(-\tfrac12\Sigma^{-1}X\right)h_1(\mathrm{tr}[X\Phi_1])\,h_2(|X\Phi_2|), \qquad \Theta=(\Psi,\Phi_1,\Phi_2),$

with

$\{c_{n,m}(\Theta)\}^{-1} = 2^{nm/2}\sum_{k,t=0}^{\infty}\sum_{\kappa}\frac{h_1^{(k)}(0)\,h_2^{(t)}(0)}{k!\,t!}\,2^{mt+k}\,\Gamma_m\!\left(\tfrac n2+t\right)\left(\tfrac n2+t\right)_{\!\kappa}|\Phi_2\Sigma|^{t}\,C_{\kappa}(\Phi_1\Sigma),$

where $f(X;\Psi)$ is the pdf of the Wishart distribution $W_m(n,\Sigma)$, i.e. $\Psi=(n,\Sigma)$, $n>m$, $\Sigma\in S_m$, and $h_1(\cdot)$ and $h_2(\cdot)$ are Borel measurable functions that admit Taylor series expansions. We denote this by $X \sim W^{III}_m(n,\Sigma,\Phi_1,\Phi_2)$. The parameters are restricted to those values for which the pdf is non-negative.

4. APPLICATION

In this section special cases of Definition 2.1 are applied as priors for the normal model under the squared error loss function: first the Kummer gamma distribution (2.6) with $\phi=1$ as a prior for the variance of the univariate normal distribution, and secondly the Kummer Wishart distribution (2.5) as a prior for the covariance matrix of the matrix variate normal distribution. Bekker and Roux (1995) considered the Wishart prior as a competitor for the conjugate inverse Wishart prior for the covariance matrix of the matrix variate normal distribution, and Van Niekerk et al. (2016a) confirmed the value added by the latter in a numerical study. This is the stimulus for considering other possible priors.

4.1. Univariate Bayesian illustration

In this section a special univariate case of the weighted-type I Wishart distribution is applied as a prior for the variance of the normal model.

Consider a random sample of size $n$ from a univariate normal distribution with unknown mean and variance, i.e. $X_i \sim N(\mu,\sigma^2)$, $\mathbf{X}=(X_1,\dots,X_n)$. Let $h(x)=(1+x)^{\gamma}$, $m=1$ and $\Sigma=\sigma^2$ in Definition 2.1 (see (2.6)), and consider this distribution as a prior for $\sigma^2$, together with an objective prior for $\mu$. This prior model is compared with the well-known inverse gamma and gamma priors in terms of coverage and median credible interval width. The three priors under consideration are:

- the inverse gamma prior $IG(\alpha_1,\beta_1)$ with pdf $g(x;\alpha_1,\beta_1)=\frac{\beta_1^{\alpha_1}}{\Gamma(\alpha_1)}x^{-\alpha_1-1}\exp\!\left(-\frac{\beta_1}{x}\right)$, $x>0$;
- the gamma prior (2.4), $G(\alpha_2,\beta_2)$;
- the Kummer gamma prior (2.6), $KG(\alpha_3,\beta_3,\gamma)$.

The marginal posterior pdf and the Bayes estimator of $\sigma^2$ under the Kummer gamma prior are obtained using Remark 5 of Van Niekerk et al. (2016b): with the objective prior for $\mu$ integrated out, the marginal posterior satisfies

$q(\sigma^2\mid\mathbf{X}) \propto (\sigma^2)^{\alpha_3-\frac{n-1}{2}-1}\,(1+\phi\sigma^2)^{\gamma}\exp\!\left\{-\frac{\sigma^2}{\beta_3}-\frac{1}{2\sigma^2}\sum_{i=1}^{n}(X_i-\bar X)^2\right\},$

and the Bayes estimator of $\sigma^2$ under squared error loss is the corresponding posterior mean, which can be written as a ratio of expectations of the form $E\!\left[(\sigma_*^2)^{-\frac{n-1}{2}}(1+\phi\sigma_*^2)^{\gamma}\exp\!\left\{-\frac{1}{2\sigma_*^2}\sum_{i=1}^{n}(X_i-\bar X)^2\right\}\right]$, taken with respect to gamma distributions $G(\alpha_3+a,\beta_3)$ with shifted shape parameters; the precise expressions follow from Remark 5 of Van Niekerk et al. (2016b). A numerical sketch of this posterior is given below.

A normal sample of size 8 is simulated with mean $\mu=0$ and variance $\sigma^2=1$. The hyperparameters are chosen such that $E(\sigma^2)=\sigma_0^2=0.9$. Four combinations of hyperparameter values are investigated and summarized in Table 2. Note that in combination 4 the prior belief for the Kummer gamma is not 0.9 but 5.3, which is quite far from the target value, so the prior information is clearly misspecified. It is clear from Table 2 that the Kummer gamma prior with parameter combination 4 is very vague compared to the other two priors. To evaluate the performance of the new prior structure, 1000 independent samples are simulated and for each one the posterior densities and estimates are calculated. This enables the calculation of the coverage probabilities and median credible interval widths given in Table 3. The coverage probability obtained under the Kummer gamma prior is higher than for the inverse gamma and gamma priors, while the median width of the credible interval (indicated in brackets) is competitive. It is interesting to note that even under total misspecification (see combination 4 in Table 2) the Kummer gamma prior still performs well. The performance superiority of the Kummer gamma prior is clear from Table 3.
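To make the univariate illustration concrete, the following grid-based sketch computes a posterior mean and 95% credible interval for $\sigma^2$ under a Kummer-gamma-type prior of the form assumed above (gamma kernel times $(1+\phi\sigma^2)^{\gamma}$). The hyperparameter values are placeholders, not the paper's combinations, and the grid approximation stands in for the exact expressions of Van Niekerk et al. (2016b).

import numpy as np
from scipy.integrate import trapezoid

rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, size=8)                 # simulated N(mu, sigma^2) sample
ss = np.sum((x - x.mean()) ** 2)
n = x.size

def kg_posterior(grid, a3=1.2, b3=1.2, phi=1.0, gam=2):
    prior = grid ** (a3 - 1) * np.exp(-grid / b3) * (1 + phi * grid) ** gam
    lik = grid ** (-(n - 1) / 2) * np.exp(-ss / (2 * grid))   # mu integrated out (flat prior)
    post = prior * lik
    return post / trapezoid(post, grid)

grid = np.linspace(1e-3, 15.0, 4000)
post = kg_posterior(grid)
mean = trapezoid(post * grid, grid)
cdf = np.cumsum(post) * (grid[1] - grid[0])
lo, hi = grid[np.searchsorted(cdf, 0.025)], grid[np.searchsorted(cdf, 0.975)]
print("posterior mean:", round(mean, 3), " 95% credible interval:", (round(lo, 3), round(hi, 3)))

Repeating this over many simulated samples is how coverage probabilities and median interval widths of the kind reported in Table 3 are obtained.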

Table 2: Influence of the hyperparameters on the prior pdfs (inverse gamma, gamma and Kummer gamma priors).

Combination            1                      2                        3                      4
Inverse gamma prior    α₁ = 3. , β₁ =         α₁ = 4.33, β₁ = 3        α₁ = 3. , β₁ =         α₁ = 3. , β₁ =
Gamma prior            α₂ = .8, β₂ =          α₂ = .8, β₂ =            α₂ = .8, β₂ =          α₂ = .8, β₂ =
Kummer gamma prior     α₃ = . , β₃ = .        α₃ = 0.8, β₃ = .8        α₃ = .8, β₃ = .5       α₃ = 5.0, β₃ = .5

Table 3: Coverage probabilities (median credible interval width) calculated from the posterior density functions.

Combination    Inverse gamma prior    Gamma prior    Kummer gamma prior
1              74.5%                  %              % (0.85)
2              7 %                    %              % ( .5)
3              89. %                  %              % ( .05)
4              9 . %                  %              %

14 8 Arashi, Bekker and van Niekerk Wishart prior.5 W m p, Φ Kummer Wishart prior.5 KW m p 3, I, Φ. The conditional posterior pdf s of µ and Σ with a Kummer Wishart prior and objective prior for µ, necessary for the simulation of the posterior samples, are q µ Σ, X, V [ etr [ Σ V + n X µx µ ]], and q Σ µ, X, V Σ n p + n m+ etr [ [ Σ V + n X µx µ ]] etr Φ Σ + trσθ. with V = n i= X i XX i X. A sample of size 0 is simulated from a multivariate normal distribution p = with µ = 0 and Σ = I m. The hyperparameters are chosen as Φ = I m, m = 3, p = 9.5, p = p 3 = 3, according to the methodology of Van Niekerk et al. 06a. Posterior samples of size 5000, are simulated using a Gibbs sampling scheme with an additional Metropolis-Hastings algorithm, similarly to Van Niekerk et al. 06a. The estimates calculated for Σ under the three different priors as well as the MLE are Σ MLE = , Σ IW = Σ W = , Σ KW = The above estimates are obtained for one posterior sample. The Frobenius norm see Golub and Van Loan 996 of the errors, defined as Σ Σ F = tr Σ Σ Σ Σ, are calculated for each estimate and given in Table 4. The Kummer Wishart prior results in the smallest Frobenius norm of the error. For further investigation, this sampling scheme is repeated 00 times to obtain 00 estimates under each prior as well as the MLE for each of the 00 simulated samples. The Frobenius norm of the error for each estimate and every repetition is calculated and the empirical cumulative distribution function ecdf of each set of Frobenius norms calculated for each estimator is obtained and given in Figure. The ecdf which is most left in the figure is regarded as the best since for a specific value of the error norm, a higher proportion of estimates

Table 4: Frobenius norm of the error of the estimates calculated from the simulated sample.

Frobenius norm                 Value
‖Σ̂_MLE − Σ‖_F                  .336
‖Σ̂_IW − Σ‖_F
‖Σ̂_W − Σ‖_F                    .5766
‖Σ̂_KW − Σ‖_F

Figure 1: The empirical cumulative distribution function (ecdf) of the Frobenius norm of the estimation errors for n = 10 (left) and n = 100 (right). [Plots not reproduced here.]

It is evident from Figure 1 that the performance of the sample estimate improves as the sample size increases, which is to be expected, and that the performance of the Kummer Wishart prior remains competitive. From Figure 1 we conclude that the Kummer Wishart prior results in an estimate of $\Sigma$ with less error, for both the smaller and the larger sample size, and that preference should be given to this prior. To validate the graphical interpretation, a two-sample Kolmogorov-Smirnov test is performed for n = 10, pairwise, on the different ecdfs; the p-values for some pairs are given in Table 5.

Table 5: p-values of the Kolmogorov-Smirnov two-sample test based on samples (n = 10) of the Frobenius norms.

Pairwise comparison    p-value
MLE and IW             < 0.001
IW and KW              < 0.001
W and KW               < 0.001
MLE and KW             < 0.001

From Table 5 it is clear that the ecdf of the errors under the Kummer Wishart prior is significantly different from those under the other priors. Therefore, the assertion can be made that the Kummer Wishart prior structure produces an estimate that results in statistically significantly less error.
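A minimal sketch of this pairwise comparison: the two samples of Frobenius-norm errors below are synthetic stand-ins for two of the 100-repetition error sets, and the test itself is scipy's two-sample Kolmogorov-Smirnov test.

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(5)
errors_prior_a = rng.gamma(shape=3.0, scale=0.4, size=100)   # placeholder error sample, prior A
errors_prior_b = rng.gamma(shape=3.0, scale=0.3, size=100)   # placeholder error sample, prior B

stat, pvalue = ks_2samp(errors_prior_a, errors_prior_b)
print(f"KS statistic = {stat:.3f}, p-value = {pvalue:.4f}")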

5. DISCUSSION

In this paper we proposed a construction methodology for new matrix variate distributions, with specific focus on the Wishart distribution, obtained by weighting the Wishart density with different weight functions. It was shown from a Bayesian viewpoint, by simulation studies, that the Kummer gamma and Kummer Wishart priors, as special cases of the weighted-type I Wishart distribution, outperformed the well-known priors. The weighted-type III Wishart distribution gives rise to a Wishart distribution with larger degrees of freedom and a scaled covariance matrix, which might have application in missing value analysis. In the following we list some thoughts that might be considered as plausible applications of the proposed distributions.

(i) Let $N_1$ and $N_2$ observations be independently and identically derived from $Z_1 \sim N_m(0,\Sigma_1)$ and $Z_2 \sim N_m(0,\Sigma_2)$, respectively. Then the statistic $T = \sum_{j}(Z_{1j}+Z_{2j})(Z_{1j}+Z_{2j})'$ has the Wishart distribution $W_m(N_1+N_2,\Sigma_1+\Sigma_2)$. Suppose the focus is on the covariance structure $\Sigma_1+\Sigma_2$ (similar to standby systems); then, to reduce the cost of sampling, one may consider only $N_1$ observations from the W1WD and take $h_1(\cdot)$ to be of exponential form in (2.1).

(ii) A weight of the form $h_2(|X\Phi|) = |X|^{q}$, where $q$ is a known fixed constant, with $\Phi=I_m$ in Definition 3.1, has applications in missing value analysis. To see this, let $Y_1,\dots,Y_{n+2q}$ be a random sample from $N_m(0,\Sigma)$ and define $T = \sum_{i=1}^{n} Y_i Y_i'$. Then $T \sim W_m(n,\Sigma)$. Now, using the weight $h_2(|X|) = |X|^{q}$, one obtains the $W_m(n+2q,\Sigma)$ distribution, and without having the observations $n+1,\dots,n+2q$ we can find the distribution of the full sample and perform the related analysis.

(iii) Finally, one may ask what the sampling distribution regarding Definition 3.2 is. To answer this question, recall that if $Y \sim N(0, I_n\otimes\Sigma)$, then $X = Y'Y \sim W_m(n,\Sigma)$. Now, assume matrices $A\in S_m$ and $B\in S_m$ exist such that $\Sigma+\Phi = A+B$. Then, if we sample $Y \sim N(0, I_{n+\alpha}\otimes[\Sigma+\Phi])$, the quadratic form $X = Y'Y$ will have a distribution as in Definition 3.2, where $A=\Sigma$, $B=\Phi_1$ and $\Phi_2=I$. In other words, if we enlarge both the covariance and the number of samples in a normal population and consider the distribution of the quadratic form, we are indeed weighting a Wishart distribution with a Wishart.

ACKNOWLEDGMENTS

The authors hereby acknowledge the support of the StatDisT group. This work is based upon research supported by the UP Vice-Chancellor's post-doctoral fellowship programme, the National Research Foundation grant (Re: CPRR No 9497) and the vulnerable discipline: academic statistics (STAT) fund. The authors thank the reviewers and associate editor for their helpful recommendations.

REFERENCES

[1] Abul-Magd, A.Y., Akemann, G. and Vivo, P. (2009). Superstatistical generalizations of Wishart-Laguerre ensembles of random matrices, Journal of Physics A: Mathematical and Theoretical, 47.
[2] Adhikari, S. (2010). Generalized Wishart distribution for probabilistic structural dynamics, Computational Mechanics, 45(5).
[3] Arashi, M. (2013). On Selberg-type square matrices integrals, Journal of Algebraic Systems, 1(2).
[4] Bekker, A. and Roux, J.J.J. (1995). Bayesian multivariate normal analysis with a Wishart prior, Communications in Statistics – Theory and Methods, 24(10).
[5] Bryc, W. (2008). Compound real Wishart and q-Wishart matrices, International Mathematics Research Notices, rnn079.
[6] Davis, A.W. (1979). Invariant polynomials with two matrix arguments extending the zonal polynomials: applications to multivariate distribution theory, Annals of the Institute of Statistical Mathematics, 31.
[7] Dawid, A.P. and Lauritzen, S.L. (1993). Hyper Markov laws in the statistical analysis of decomposable graphical models, The Annals of Statistics, 21.
[8] Díaz-García, J.A. and Gutiérrez-Jáimez, R. (2011). On Wishart distribution: some extensions, Linear Algebra and its Applications, 435(6).
[9] Golub, G.H. and Van Loan, C.F. (1996). Matrix Computations, 3rd ed., Johns Hopkins University Press, Baltimore, MD.
[10] Letac, G. and Massam, H. (2001). The normal quasi-Wishart distribution, Contemporary Mathematics, 287.
[11] Muirhead, R.J. (1982). Aspects of Multivariate Statistical Theory, John Wiley and Sons, New York.
[12] Munilla, S. and Cantet, R.J.C. (2012). Bayesian conjugate analysis using a generalized inverted Wishart distribution accounts for differential uncertainty among the genetic parameters: an application to the maternal animal model, Journal of Animal Breeding and Genetics, 129(3).

[13] Patil, G.P. and Ord, J.K. (1976). On size-biased sampling and related form-invariant weighted distributions, Sankhyā, 38B.
[14] Pauw, J., Bekker, A. and Roux, J.J.J. (2010). Densities of composite Weibullized generalized gamma variables, South African Statistical Journal, 44.
[15] Rao, C.R. (1965). On discrete distributions arising out of methods of ascertainment. In: Classical and Contagious Discrete Distributions, Pergamon Press and Statistical Publishing Society.
[16] Rényi, A. (1961). On measures of information and entropy, Proceedings of the Fourth Berkeley Symposium on Mathematics, Statistics and Probability, 1960.
[17] Roverato, A. (2002). Hyper inverse Wishart distribution for non-decomposable graphs and its application to Bayesian inference for Gaussian graphical models, Scandinavian Journal of Statistics, 29(3).
[18] Shannon, C.E. (1948). A mathematical theory of communication, Bell System Technical Journal, 27(3).
[19] Sutradhar, B.C. and Ali, M.M. (1989). A generalization of the Wishart distribution for the elliptical model and its moments for the multivariate t model, Journal of Multivariate Analysis, 29.
[20] Teng, C., Fang, H. and Deng, W. (1989). The generalized noncentral Wishart distribution, Journal of Mathematical Research and Exposition, 9(4).
[21] Van Niekerk, J., Bekker, A., Arashi, M. and De Waal, D.J. (2016a). Estimation under the matrix variate elliptical model, South African Statistical Journal, 50.
[22] Van Niekerk, J., Bekker, A. and Arashi, M. (2016b). A gamma-mixture class of distributions with Bayesian application, Communications in Statistics – Simulation and Computation, accepted.
[23] Wang, H. and West, M. (2009). Bayesian analysis of matrix normal graphical models, Biometrika, 96(4).
[24] Wong, C.S. and Wang, T. (1995). Laplace-Wishart distributions and Cochran theorems, Sankhyā: The Indian Journal of Statistics, Series A.
