BAYESIAN INFERENCE ON MIXTURE OF GEOMETRIC WITH DEGENERATE DISTRIBUTION: ZERO INFLATED GEOMETRIC DISTRIBUTION
IJRRAS 3 (), October

Mayuri Pandya, Hardik Pandya & Smita Pandya
Department of Statistics, Bhavnagar University, University Campus, Near Gymkhana, Bhavnagar-364, India

ABSTRACT
Power series distributions form a useful subclass of one-parameter discrete exponential families suitable for modelling count data. A zero-inflated Geometric distribution is a mixture of a Geometric distribution and a degenerate distribution at zero, with mixing probability p for the degenerate component. This distribution is useful for modelling count data that may contain extra zeros. A sequence of independent count data X1, ..., Xm, Xm+1, ..., Xn is observed from a zero-inflated Geometric distribution with probability mass function f1(xi | p1, θ1), but it is later found that there was a change in the system at some point m, reflected in the sequence after Xm by a change in the probability mass function to f2(xi | p2, θ2). The Bayes estimators of m, θ1, θ2, p1 and p2 are derived under different asymmetric loss functions. The effects of correct and wrong prior information on the Bayes estimates are studied.

Keywords: Bayesian Inference, Estimation, Zero-Inflated Geometric, Degenerate, Mixture of Distributions, Change Point.

1. INTRODUCTION
Power series distributions form a useful subclass of one-parameter discrete exponential families suitable for modelling count data. A zero-inflated power series distribution is a mixture of a power series distribution and a degenerate distribution at zero, with mixing probability p for the degenerate component, and is useful for modelling count data that may contain extra zeros. Bhattacharya et al.
(2008) presented a Bayesian test for this problem based on recognizing that the parameter space can be expanded to allow p to be negative. They also found in simulations that using a posterior probability as a test statistic has slightly higher power than the score test and likelihood ratio test over the most important ranges of the sample size n and the parameter values. Models for count data often fail to fit in practice because the data contain more zeros than a standard model can explain. This situation is often called zero inflation, because the number of zeros is inflated relative to the baseline number of zeros that would be expected under, say, a one-parameter discrete exponential family. Zero inflation is a special case of overdispersion, which contradicts the mean-variance relationship of a one-parameter exponential family. One way to address this is to use a two-parameter distribution, so that the extra parameter permits a larger variance. Johnson, Kotz and Kemp (1992) discuss a simple modification of a power series (PS) distribution f(· | θ) to handle extra zeros. An extra proportion of zeros, p, is added to the proportion of zeros from the original discrete distribution, while the remaining proportions are decreased in an appropriate way. The zero-inflated PS distribution is thus defined as

f(y | p, θ) = p + (1 − p) f(0 | θ),   y = 0,
f(y | p, θ) = (1 − p) f(y | θ),   y > 0,   (1)

where θ ∈ Θ, the parameter space, and the mixing parameter p ranges over the interval

−f(0 | θ) / (1 − f(0 | θ)) < p < 1.

This allows the distribution to be well defined for certain negative values of p, depending on θ. Although the mixing interpretation is lost when p < 0, such values have a natural interpretation in terms of zero-deflation relative to a PS model. Correspondingly, p > 0 can be regarded as zero inflation relative to a PS model.
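The construction in (1) is easy to check numerically. Below is a minimal sketch (the function names are ours, not from the paper) using the Geometric pmf f(y | θ) = (1 − θ)θ^y as the PS member; note that a negative p inside the allowed interval still yields a valid, zero-deflated pmf:

```python
def zi_pmf(y, p, f):
    """Zero-inflated pmf (1): an extra proportion p of zeros is added,
    and the original pmf f is scaled by (1 - p)."""
    return p * (1 if y == 0 else 0) + (1 - p) * f(y)

theta = 0.5
geom = lambda y: (1 - theta) * theta ** y   # PS member: Geometric pmf

# Zero inflation (p > 0) and zero deflation (p < 0, but above the lower
# bound -f(0)/(1 - f(0)) = -1 for theta = 0.5): both remain valid pmfs.
for p in (0.3, -0.5):
    probs = [zi_pmf(y, p, geom) for y in range(200)]
    assert min(probs) >= 0 and abs(sum(probs) - 1) < 1e-9
```

For p = −0.5 the probability at zero is −0.5 + 1.5 × 0.5 = 0.25, i.e. fewer zeros than the baseline Geometric, which is exactly the zero-deflation reading of a negative p.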
Note that the PS family contains all discrete one-parameter exponential families, so an appropriate choice of PS model in (1) permits any desired interpretation for the data corresponding to the second term. The first term allows an extra proportion p of
zeros to be added to the discrete PS distribution. The zero-inflated Geometric distribution with parameters (p, θ) is given by

f(x | p, θ) = p + (1 − p)(1 − θ),   x = 0,
f(x | p, θ) = (1 − p)(1 − θ) θ^x,   x = 1, 2, ...,   (2)

with 0 < θ < 1 and −(1 − θ)/θ < p < 1, and mean

E(X | p, θ) = (1 − p) θ / (1 − θ).

A statistical model is specified to represent the distribution of changing count data that may contain extra zeros, and statistical inferences are made on the basis of this model. Count data are often subject to random fluctuations. It may happen that at some point in time the sequence of count data becomes unstable and the number of extra zeros changes. The problem of studying when and where this change started occurring is called the change point inference problem. Bayesian ideas may play an important role in the study of such change point problems and have often been proposed as a valid alternative to classical estimation procedures. The monograph of Broemeling and Tsurumi (1987) on structural change, Jani and Pandya (1999), Ebrahimi and Ghosh (2001), a survey by Zacks (1983), Pandya and Jani (2006), Pandya and Bhatt (2007), and Pandya and Jadav (2008, 2010) are useful references. In this paper we propose a change point model on the zero-inflated Geometric distribution to represent the distribution of count data with change point m, and we obtain Bayes estimates of m, θ1, θ2, p1 and p2.

2. PROPOSED CHANGE POINT MODEL ON ZERO-INFLATED GEOMETRIC DISTRIBUTION
Let X1, X2, ..., Xn (n ≥ 3) be a sequence of observed count data. Let the first m observations X1, ..., Xm come from the zero-inflated Geometric distribution with probability mass function

f1(xi | p1, θ1) = (p1 + (1 − p1)(1 − θ1))^I(xi) ((1 − p1)(1 − θ1))^(1 − I(xi)) θ1^(xi (1 − I(xi))),

and the later (n − m) observations Xm+1, Xm+2, ..., Xn come from the zero-inflated Geometric distribution with probability mass function

f2(xi | p2, θ2) = (p2 + (1 − p2)(1 − θ2))^I(xi) ((1 − p2)(1 − θ2))^(1 − I(xi)) θ2^(xi (1 − I(xi))),   (3)

where

I(xi) = 1 if xi = 0,   I(xi) = 0 if xi > 0,

m is the change point, and p1 and p2 are the mixing proportions.

3a.
POSTERIOR DISTRIBUTION FUNCTIONS USING INFORMATIVE PRIORS
The ML method, like other classical approaches, is based only on the empirical information provided by the data. When some technical knowledge about the parameters of the distribution is available, however, a Bayes procedure is an attractive inferential method. The Bayes procedure is based on a posterior density, say g1(θ1, θ2, p1, p2, m | X), which is proportional to the product of the likelihood function L(θ1, θ2, p1, p2, m | X) and a joint prior density, say g1(θ1, θ2, p1, p2, m), representing the uncertainty about the parameter values. The likelihood function of the parameters θ1, θ2, p1, p2 and m, given the sample information X = (X1, ..., Xm, Xm+1, ..., Xn), is

L(θ1, θ2, p1, p2, m | X) = (p1 + (1 − p1)(1 − θ1))^dm (1 − θ1)^(m − dm) (1 − p1)^(m − dm) θ1^Sm
× (p2 + (1 − p2)(1 − θ2))^(dn − dm) (1 − θ2)^(n − m − dn + dm) (1 − p2)^(n − m − dn + dm) θ2^(Sn − Sm),   (4)

where I(xi) = 1 if xi = 0, I(xi) = 0 if xi > 0, and

Σ_{i=1}^{m} I(xi) = dm,   Σ_{i=1}^{m} (1 − I(xi)) = m − dm,
Σ_{i=m+1}^{n} I(xi) = dn − dm,   Σ_{i=m+1}^{n} (1 − I(xi)) = n − m − dn + dm,
Σ_{i=1}^{m} xi (1 − I(xi)) = Sm,   Σ_{i=m+1}^{n} xi (1 − I(xi)) = Sn − Sm.   (5)
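To make (4)-(5) concrete, here is a small sketch (our own code and notation, not from the paper) that computes the counts dm, dn and the sums Sm, Sn, and evaluates the log-likelihood from them; the value agrees with summing the log-pmf (2) observation by observation over the two segments:

```python
import math

def zig_logpmf(x, p, theta):
    """log f(x | p, theta) for the zero-inflated Geometric pmf (2)."""
    if x == 0:
        return math.log(p + (1 - p) * (1 - theta))
    return math.log((1 - p) * (1 - theta)) + x * math.log(theta)

def sufficient_stats(xs, m):
    """dm, dn, Sm, Sn as defined in (5)."""
    zeros = [1 if x == 0 else 0 for x in xs]
    dm, dn = sum(zeros[:m]), sum(zeros)
    Sm, Sn = sum(xs[:m]), sum(xs)   # zeros contribute nothing to the sums
    return dm, dn, Sm, Sn

def loglik(xs, m, p1, t1, p2, t2):
    """Log of the likelihood (4) via the sufficient statistics."""
    n = len(xs)
    dm, dn, Sm, Sn = sufficient_stats(xs, m)
    return (dm * math.log(p1 + (1 - p1) * (1 - t1))
            + (m - dm) * math.log((1 - p1) * (1 - t1)) + Sm * math.log(t1)
            + (dn - dm) * math.log(p2 + (1 - p2) * (1 - t2))
            + (n - m - dn + dm) * math.log((1 - p2) * (1 - t2))
            + (Sn - Sm) * math.log(t2))
```

Checking `loglik` against the direct sum of `zig_logpmf` over both segments is a quick way to confirm that (dm, dn, Sm, Sn) really are sufficient for this model.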
As in Broemeling et al. (1987), we suppose the marginal prior distribution of m to be discrete uniform over the set {1, 2, 3, ..., n − 1}:

g(m) = 1/(n − 1).

Let the marginal prior distribution of θi, i = 1, 2, be a Beta distribution with mean μi and common standard deviation σ:

g(θi) = θi^(ai − 1) (1 − θi)^(bi − 1) / B(ai, bi),   ai, bi > 0,  i = 1, 2.

If the prior information is given in terms of the prior means μ1, μ2 and a common standard deviation σ, the Beta parameters ai, bi, i = 1, 2, can be obtained by solving (6):

ai = (μi / σ²) [μi (1 − μi) − σ²],   bi = ((1 − μi) / μi) ai,   i = 1, 2.   (6)

Let the marginal prior distribution of pj, j = 1, 2, be a Beta distribution with prior mean μi, i = 3, 4, and common standard deviation σ:

g(pj) = pj^(cj − 1) (1 − pj)^(dj − 1) / B(cj, dj),   cj, dj > 0,  j = 1, 2.

If the prior information is given in terms of the prior means μ3, μ4 and a common standard deviation σ, the Beta parameters cj, dj can be obtained by solving (6) in the same way:

cj = (μi / σ²) [μi (1 − μi) − σ²],   dj = ((1 − μi) / μi) cj,   i = 3, 4,  j = 1, 2.   (7)

We assume that θ1, θ2, p1, p2 and m are a priori independent. The joint prior density, say g1(θ1, θ2, p1, p2, m), is then

g1(θ1, θ2, p1, p2, m) = k θ1^(a1 − 1) (1 − θ1)^(b1 − 1) θ2^(a2 − 1) (1 − θ2)^(b2 − 1) p1^(c1 − 1) (1 − p1)^(d1 − 1) p2^(c2 − 1) (1 − p2)^(d2 − 1),   (8)

where

k = 1 / [(n − 1) B(a1, b1) B(a2, b2) B(c1, d1) B(c2, d2)].   (9)

Note 1. The Gauss hypergeometric function of parameters a, b and c, denoted 2F1, is defined by

2F1(a, b; c; x) = Σ_{m=0}^{∞} (a)_m (b)_m x^m / ((c)_m m!)   for |x| < 1,

with Pochhammer coefficients

(a)_m = Γ(a + m) / Γ(a)   for m ≥ 0, so that (a)_0 = 1.

This function has the integral representation

2F1(a, b; c; x) = (1 / B(a, c − a)) ∫_0^1 u^(a − 1) (1 − u)^(c − a − 1) (1 − xu)^(−b) du,

the symbols Γ(·) and B(·,·) denoting the usual Gamma and Beta functions. It is a solution of the hypergeometric differential equation, and its expansion is known as the Gauss series.

The joint posterior density of the parameters θ1, θ2, p1, p2 and m is obtained from the likelihood function (4) and the joint prior density (8):

g1(θ1, θ2, p1, p2, m | X) = (p1 + (1 − p1)(1 − θ1))^dm (1 − p1)^(m − dm + d1 − 1) p1^(c1 − 1) (1 − θ1)^(m − dm + b1 − 1) θ1^(Sm + a1 − 1)
× (p2 + (1 − p2)(1 − θ2))^(dn − dm) (1 − p2)^(n − m − dn + dm + d2 − 1) p2^(c2 − 1) (1 − θ2)^(n − m − dn + dm + b2 − 1) θ2^(Sn − Sm + a2 − 1) / h1(X),   (10)

where h1(X) is the marginal density of X,

h1(X) = ∫∫∫∫ Σ_{m=1}^{n−1} L(θ1, θ2, p1, p2, m | X) g1(θ1, θ2, p1, p2, m) dθ1 dθ2 dp1 dp2 = k Σ_{m=1}^{n−1} I1(m) I2(m),   (11)

where

I1(m) = B(c1, m − dm + d1) ∫_0^1 (1 − θ1)^(m + b1 − 1) θ1^(Sm + a1 − 1)
2F1(c1, −dm; c1 + m − dm + d1; −θ1/(1 − θ1)) dθ1,   (12)

and

I2(m) = B(c2, n − m − dn + dm + d2) ∫_0^1 (1 − θ2)^(n − m + b2 − 1) θ2^(Sn − Sm + a2 − 1) 2F1(c2, −(dn − dm); c2 + n − m − dn + dm + d2; −θ2/(1 − θ2)) dθ2,   (13)

where 2F1(c1, −dm; c1 + m − dm + d1; −θ1/(1 − θ1)) and 2F1(c2, −(dn − dm); c2 + n − m − dn + dm + d2; −θ2/(1 − θ2)) are hypergeometric functions as explained in Note 1.

The joint posterior density of (θ1, θ2) is obtained from the joint posterior density of θ1, θ2, p1, p2 and m given in (10) by integrating with respect to p1 and p2 and summing over m:

g1(θ1, θ2 | X) = k Σ_{m=1}^{n−1} B(c1, m − dm + d1) B(c2, n − m − dn + dm + d2) (1 − θ1)^(m + b1 − 1) θ1^(Sm + a1 − 1) 2F1(c1, −dm; c1 + m − dm + d1; −θ1/(1 − θ1)) (1 − θ2)^(n − m + b2 − 1) θ2^(Sn − Sm + a2 − 1) 2F1(c2, −(dn − dm); c2 + n − m − dn + dm + d2; −θ2/(1 − θ2)) / h1(X).   (14)

The marginal density of θ1 is obtained from the joint posterior density of (θ1, θ2) by integrating with respect to θ2:

g1(θ1 | X) = ∫_0^1 g1(θ1, θ2 | X) dθ2 = k Σ_{m=1}^{n−1} B(c1, m − dm + d1) B(c2, n − m − dn + dm + d2) (1 − θ1)^(m + b1 − 1) θ1^(Sm + a1 − 1) 2F1(c1, −dm; c1 + m − dm + d1; −θ1/(1 − θ1)) ∫_0^1 (1 − θ2)^(n − m + b2 − 1) θ2^(Sn − Sm + a2 − 1) 2F1(c2, −(dn − dm); c2 + n − m − dn + dm + d2; −θ2/(1 − θ2)) dθ2 / h1(X).   (15)

The marginal density of θ2 is obtained similarly:

g1(θ2 | X) = k Σ_{m=1}^{n−1} B(c1, m − dm + d1) B(c2, n − m − dn + dm + d2) (1 − θ2)^(n − m + b2 − 1) θ2^(Sn − Sm + a2 − 1) 2F1(c2, −(dn − dm); c2 + n − m − dn + dm + d2; −θ2/(1 − θ2)) ∫_0^1 (1 − θ1)^(m + b1 − 1) θ1^(Sm + a1 − 1) 2F1(c1, −dm; c1 + m − dm + d1; −θ1/(1 − θ1)) dθ1 / h1(X),   (16)

where the 2F1 factors are hypergeometric functions as explained in Note 1. The joint posterior density of (p1, p2) is obtained as
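Because the second numerator parameter −dm in the 2F1 of (12) is a non-positive integer, the Gauss series terminates after dm + 1 terms, so I1(m) reduces to a one-dimensional integral of a polynomial-weighted Beta kernel. The sketch below (our own code, with illustrative parameter values that are not from the paper) checks this reduction against brute-force two-dimensional integration of the joint posterior factor over (θ1, p1), using a simple midpoint rule:

```python
import math

def hyp2f1_terminating(a, n, c, x):
    """2F1(a, -n; c; x) for a non-negative integer n: a finite Gauss series."""
    total, term = 1.0, 1.0
    for k in range(n):
        term *= (a + k) * (-n + k) / ((c + k) * (k + 1)) * x
        total += term
    return total

def beta_fn(a, b):
    return math.exp(math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b))

# Illustrative values (ours, not the paper's data).
m, dm, Sm = 6, 2, 7
a1, b1, c1, d1 = 2.0, 3.0, 1.5, 2.0
N = 400   # midpoint-rule grid size

# I1(m) via the hypergeometric reduction (12).
I1 = 0.0
for i in range(N):
    t = (i + 0.5) / N
    I1 += (t ** (Sm + a1 - 1) * (1 - t) ** (m + b1 - 1)
           * hyp2f1_terminating(c1, dm, c1 + m - dm + d1, -t / (1 - t))) / N
I1 *= beta_fn(c1, m - dm + d1)

# The same quantity by direct integration over (theta1, p1).
I1_direct = 0.0
for i in range(N):
    t = (i + 0.5) / N
    for j in range(N):
        p = (j + 0.5) / N
        I1_direct += ((p + (1 - p) * (1 - t)) ** dm
                      * (1 - p) ** (m - dm + d1 - 1) * p ** (c1 - 1)
                      * (1 - t) ** (m - dm + b1 - 1)
                      * t ** (Sm + a1 - 1)) / N / N
```

The two numbers agree to within the quadrature error, which is a useful numerical check that the (1 − θ1)^dm factor absorbed into the exponent m + b1 − 1 of (12) is accounted for correctly.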
g1(p1, p2 | X) = Σ_{m=1}^{n−1} ∫_0^1 ∫_0^1 g1(θ1, θ2, p1, p2, m | X) dθ1 dθ2
= k Σ_{m=1}^{n−1} (1 − p1)^(m − dm + d1 − 1) p1^(c1 − 1) (1 − p2)^(n − m − dn + dm + d2 − 1) p2^(c2 − 1) B(Sm + a1, m − dm + b1) B(Sn − Sm + a2, n − m − dn + dm + b2) 2F1(−dm, Sm + a1; m − dm + a1 + Sm + b1; 1 − p1) 2F1(−(dn − dm), Sn − Sm + a2; n − m − dn + dm + a2 + Sn − Sm + b2; 1 − p2) / h1(X),   (17)

where the 2F1 factors are hypergeometric functions as explained in Note 1. The marginal densities of p1 and p2 are obtained from the joint posterior density of (p1, p2) and are given by

g1(p1 | X) = ∫_0^1 g1(p1, p2 | X) dp2
= k Σ_{m=1}^{n−1} (1 − p1)^(m − dm + d1 − 1) p1^(c1 − 1) B(Sm + a1, m − dm + b1) B(Sn − Sm + a2, n − m − dn + dm + b2) 2F1(−dm, Sm + a1; m − dm + a1 + Sm + b1; 1 − p1) ∫_0^1 (1 − p2)^(n − m − dn + dm + d2 − 1) p2^(c2 − 1) 2F1(−(dn − dm), Sn − Sm + a2; n − m − dn + dm + a2 + Sn − Sm + b2; 1 − p2) dp2 / h1(X),   (18)

g1(p2 | X) = k Σ_{m=1}^{n−1} (1 − p2)^(n − m − dn + dm + d2 − 1) p2^(c2 − 1) B(Sm + a1, m − dm + b1) B(Sn − Sm + a2, n − m − dn + dm + b2) 2F1(−(dn − dm), Sn − Sm + a2; n − m − dn + dm + a2 + Sn − Sm + b2; 1 − p2) ∫_0^1 (1 − p1)^(m − dm + d1 − 1) p1^(c1 − 1) 2F1(−dm, Sm + a1; m − dm + a1 + Sm + b1; 1 − p1) dp1 / h1(X),   (19)

where the 2F1 factors are hypergeometric functions as explained in Note 1. The marginal posterior density of the change point m, say g1(m | X), is obtained by integrating the joint posterior density (10) with respect to θ1, θ2, p1 and p2:

g1(m | X) = k I1(m) I2(m) / h1(X) = I1(m) I2(m) / Σ_{m=1}^{n−1} I1(m) I2(m).   (20)

3b. POSTERIOR DISTRIBUTION FUNCTIONS USING A NON-INFORMATIVE PRIOR
A non-informative prior is a prior that reflects indifference among all values of the parameter and adds no information to that contained in the empirical data. A Bayes inference based on a non-informative prior therefore generally has theoretical interest only, since from an engineering viewpoint the Bayes approach is attractive precisely because it allows expert opinion or technical knowledge to be incorporated into the estimation procedure.
However, such a Bayes inference acquires considerable interest in prediction problems for which it is extremely difficult, if possible at all, to find a classical solution, because classical prediction intervals are numerically equal to the Bayes intervals based on the non-informative prior density. Hence, the Bayes approach based on prior ignorance can be viewed as a mathematical device for obtaining classical prediction intervals. The joint prior density of the parameters using the non-informative prior is, say,

g2(θ1, θ2, p1, p2, m) ∝ 1 / [(1 − θ1) θ1 (1 − θ2) θ2 (1 − p1)(1 − p2)].   (21)
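Under (21) the posterior of m can also be evaluated numerically without any special functions, by integrating the likelihood times the prior over each segment's (θ, p) directly; the brute-force per-segment integrals below play the role of the I3(m) and I4(m) reductions derived next. This is our own sketch with toy data (not the paper's); it assumes every prefix and suffix of the sample contains at least one non-zero observation, so that the improper prior yields finite integrals:

```python
def block_integral(xs_block):
    """Midpoint-rule integral over (theta, p) of the ZIG likelihood of one
    segment times the non-informative factor 1/((1-theta) theta (1-p))."""
    d = sum(1 for x in xs_block if x == 0)   # number of zeros
    r = len(xs_block) - d                    # number of non-zeros (must be >= 1)
    S = sum(xs_block)                        # sum of observations (must be >= 1)
    N, total = 150, 0.0
    for i in range(N):
        t = (i + 0.5) / N
        for j in range(N):
            p = (j + 0.5) / N
            total += ((p + (1 - p) * (1 - t)) ** d
                      * ((1 - p) * (1 - t)) ** r * t ** S
                      / ((1 - t) * t * (1 - p))) / N / N
    return total

xs = [1, 0, 0, 1, 0, 4, 5, 6, 5, 7]          # toy data with a change near m = 5
n = len(xs)
w = {m: block_integral(xs[:m]) * block_integral(xs[m:]) for m in range(1, n)}
Z = sum(w.values())
posterior = {m: wm / Z for m, wm in w.items()}  # g2(m | X)
```

The normalized weights put far more mass on splits that separate the zero-heavy early segment from the zero-free late segment than on degenerate splits near the ends.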
The joint posterior density of the parameters θ1, θ2, p1, p2 and m under the non-informative prior is obtained from the likelihood function (4) and the joint prior density (21):

g2(θ1, θ2, p1, p2, m | X) = L(θ1, θ2, p1, p2, m | X) g2(θ1, θ2, p1, p2, m) / h2(X)   (22)
= (p1 + (1 − p1)(1 − θ1))^dm (1 − p1)^(m − dm − 1) (1 − θ1)^(m − dm − 1) θ1^(Sm − 1)
× (p2 + (1 − p2)(1 − θ2))^(dn − dm) (1 − p2)^(n − m − dn + dm − 1) (1 − θ2)^(n − m − dn + dm − 1) θ2^(Sn − Sm − 1) / h2(X),   (23)

where h2(X) is the marginal density of X under the non-informative prior,

h2(X) = Σ_{m=1}^{n−1} I3(m) I4(m),   (24)

with

I3(m) = B(1, m − dm) ∫_0^1 (1 − θ1)^(m − 1) θ1^(Sm − 1) 2F1(1, −dm; m − dm + 1; −θ1/(1 − θ1)) dθ1,   (25)

I4(m) = B(1, n − m − dn + dm) ∫_0^1 (1 − θ2)^(n − m − 1) θ2^(Sn − Sm − 1) 2F1(1, −(dn − dm); n − m − dn + dm + 1; −θ2/(1 − θ2)) dθ2,   (26)

where the 2F1 factors are hypergeometric functions as explained in Note 1. The joint posterior densities of (θ1, θ2) and of (p1, p2) are obtained as

g2(θ1, θ2 | X) = (1/h2(X)) Σ_{m=1}^{n−1} k2(m) (1 − θ1)^(m − 1) θ1^(Sm − 1) 2F1(1, −dm; m − dm + 1; −θ1/(1 − θ1)) (1 − θ2)^(n − m − 1) θ2^(Sn − Sm − 1) 2F1(1, −(dn − dm); n − m − dn + dm + 1; −θ2/(1 − θ2)),   (27)

g2(p1, p2 | X) = (1/h2(X)) Σ_{m=1}^{n−1} (1 − p1)^(m − dm − 1) (1 − p2)^(n − m − dn + dm − 1) B(Sm, m − dm) B(Sn − Sm, n − m − dn + dm) 2F1(−dm, Sm; m − dm + Sm; 1 − p1) 2F1(−(dn − dm), Sn − Sm; n − m − dn + dm + Sn − Sm; 1 − p2),   (28)

where

k2(m) = B(1, m − dm) B(1, n − m − dn + dm),   (29)

and the 2F1 factors are hypergeometric functions as explained in Note 1. The marginal posterior densities of θ1, θ2, p1 and p2 are obtained as

g2(θ1 | X) = (1/h2(X)) Σ_{m=1}^{n−1} k2(m) (1 − θ1)^(m − 1) θ1^(Sm − 1) 2F1(1, −dm; m − dm + 1; −θ1/(1 − θ1)) ∫_0^1 (1 − θ2)^(n − m − 1) θ2^(Sn − Sm − 1) 2F1(1, −(dn − dm); n − m − dn + dm + 1; −θ2/(1 − θ2)) dθ2,   (30)

g2(θ2 | X) = (1/h2(X)) Σ_{m=1}^{n−1} k2(m) (1 − θ2)^(n − m − 1) θ2^(Sn − Sm − 1)
2F1(1, −(dn − dm); n − m − dn + dm + 1; −θ2/(1 − θ2)) ∫_0^1 (1 − θ1)^(m − 1) θ1^(Sm − 1) 2F1(1, −dm; m − dm + 1; −θ1/(1 − θ1)) dθ1,   (31)

g2(p1 | X) = (1/h2(X)) Σ_{m=1}^{n−1} B(Sm, m − dm) B(Sn − Sm, n − m − dn + dm) (1 − p1)^(m − dm − 1) 2F1(−dm, Sm; m − dm + Sm; 1 − p1) ∫_0^1 (1 − p2)^(n − m − dn + dm − 1) 2F1(−(dn − dm), Sn − Sm; n − m − dn + dm + Sn − Sm; 1 − p2) dp2,   (32)

g2(p2 | X) = (1/h2(X)) Σ_{m=1}^{n−1} B(Sm, m − dm) B(Sn − Sm, n − m − dn + dm) (1 − p2)^(n − m − dn + dm − 1) 2F1(−(dn − dm), Sn − Sm; n − m − dn + dm + Sn − Sm; 1 − p2) ∫_0^1 (1 − p1)^(m − dm − 1) 2F1(−dm, Sm; m − dm + Sm; 1 − p1) dp1,   (33)

where the 2F1 factors are hypergeometric functions as explained in Note 1. Integrating the joint posterior density of θ1, θ2, p1, p2 and m with respect to θ1, θ2, p1 and p2 gives the marginal posterior density of the change point m:

g2(m | X) = I3(m) I4(m) / h2(X) = I3(m) I4(m) / Σ_{m=1}^{n−1} I3(m) I4(m).   (34)

4. BAYES ESTIMATES USING INFORMATIVE AND NON-INFORMATIVE PRIORS UNDER A SYMMETRIC LOSS FUNCTION
The Bayes estimate of a generic parameter (or function thereof) α based on the squared error loss (SEL) function

L1(α, d) = (α − d)²,

where d is a decision rule to estimate α, is the posterior mean. The Bayes estimates of m under SEL, for the informative and non-informative priors respectively, are

m* = Σ_{m=1}^{n−1} m I1(m) I2(m) / Σ_{m=1}^{n−1} I1(m) I2(m),   (35)

m** = Σ_{m=1}^{n−1} m I3(m) I4(m) / Σ_{m=1}^{n−1} I3(m) I4(m).   (36)

The Bayes estimates of θ1 and θ2 under SEL and the informative prior are

θ1* = k Σ_{m=1}^{n−1} B(c1, m − dm + d1) B(c2, n − m − dn + dm + d2) ∫_0^1 θ1 (1 − θ1)^(m + b1 − 1) θ1^(Sm + a1 − 1) 2F1(c1, −dm; c1 + m − dm + d1; −θ1/(1 − θ1)) dθ1 ∫_0^1 (1 − θ2)^(n − m + b2 − 1) θ2^(Sn − Sm + a2 − 1) 2F1(c2, −(dn − dm); c2 + n − m − dn + dm + d2; −θ2/(1 − θ2)) dθ2 / h1(X),   (37)

θ2* = k Σ_{m=1}^{n−1} B(c1, m − dm + d1) B(c2, n − m − dn + dm + d2) ∫_0^1 θ2 (1 − θ2)^(n − m + b2 − 1) θ2^(Sn − Sm + a2 − 1) 2F1(c2, −(dn − dm); c2 + n − m − dn + dm + d2; −θ2/(1 − θ2)) dθ2 ∫_0^1 (1 − θ1)^(m + b1 − 1) θ1^(Sm + a1 − 1) 2F1(c1, −dm; c1 + m − dm + d1; −θ1/(1 − θ1)) dθ1 / h1(X).   (38)
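The SEL estimates (35)-(36) are simply weighted means over m, with unnormalized weights I1(m)I2(m) (or I3(m)I4(m)). A minimal sketch (function name ours):

```python
def sel_estimate(weights):
    """Posterior mean under squared error loss, eqs. (35)-(36).
    `weights` maps each m to its unnormalized posterior mass, e.g. I1(m)*I2(m)."""
    Z = sum(weights.values())
    return sum(m * w for m, w in weights.items()) / Z

# Toy unnormalized weights concentrated around m = 5.
assert sel_estimate({4: 1.0, 5: 2.0, 6: 1.0}) == 5.0
```

The same function serves both priors, since only the weights change between (35) and (36).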
Similarly, the Bayes estimates of p1 and p2 under SEL and the informative prior are

p1* = k Σ_{m=1}^{n−1} B(Sm + a1, m − dm + b1) B(Sn − Sm + a2, n − m − dn + dm + b2) ∫_0^1 p1 (1 − p1)^(m − dm + d1 − 1) p1^(c1 − 1) 2F1(−dm, Sm + a1; m − dm + a1 + Sm + b1; 1 − p1) dp1 ∫_0^1 (1 − p2)^(n − m − dn + dm + d2 − 1) p2^(c2 − 1) 2F1(−(dn − dm), Sn − Sm + a2; n − m − dn + dm + a2 + Sn − Sm + b2; 1 − p2) dp2 / h1(X),   (39)

p2* = k Σ_{m=1}^{n−1} B(Sm + a1, m − dm + b1) B(Sn − Sm + a2, n − m − dn + dm + b2) ∫_0^1 p2 (1 − p2)^(n − m − dn + dm + d2 − 1) p2^(c2 − 1) 2F1(−(dn − dm), Sn − Sm + a2; n − m − dn + dm + a2 + Sn − Sm + b2; 1 − p2) dp2 ∫_0^1 (1 − p1)^(m − dm + d1 − 1) p1^(c1 − 1) 2F1(−dm, Sm + a1; m − dm + a1 + Sm + b1; 1 − p1) dp1 / h1(X).   (40)

5. BAYES ESTIMATES USING INFORMATIVE AND NON-INFORMATIVE PRIORS UNDER ASYMMETRIC LOSS FUNCTIONS
The loss function L(α, d) provides a measure of the financial consequences arising from a wrong decision rule d for estimating an unknown quantity α. The choice of an appropriate loss function depends on financial considerations only and is independent of the estimation procedure used. The use of a symmetric loss function was found to be generally inappropriate since, for example, an overestimate of a reliability function is usually much more serious than an underestimate. In this section we derive Bayes estimators of the change point m under different asymmetric loss functions, using both prior specifications explained in Section 3. A useful asymmetric loss, known as the Linex loss function, was introduced by Varian (1975). Under the assumption that the minimal loss occurs at d = α, the Linex loss function can be expressed as

L4(α, d) = exp[q1 (d − α)] − q1 (d − α) − 1,   q1 ≠ 0.

The sign of the shape parameter q1 reflects the direction of the asymmetry (q1 > 0 if overestimation is more serious than underestimation, and vice versa), and its magnitude reflects the degree of asymmetry. The posterior expectation of the Linex loss function is

E_α{L4(α, d)} = exp(q1 d) E_α{exp(−q1 α)} − q1 (d − E_α{α}) − 1,

where E_α{f(α)} denotes the expectation of f(α) with respect to the posterior density g(α | X). The Bayes estimate α_L is the value of d that minimizes E_α{L4(α, d)},

α_L = −(1/q1) ln [E_α{exp(−q1 α)}],

provided that E_α{exp(−q1 α)} exists and is finite.
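For a discrete parameter such as m, the Linex estimate α_L = −(1/q1) ln E[exp(−q1 α)] is a one-line computation once posterior weights are available. A sketch (names ours); by Jensen's inequality the estimate sits below the posterior mean for q1 > 0 and above it for q1 < 0:

```python
import math

def linex_estimate(weights, q1):
    """Bayes estimate under Linex loss: -(1/q1) * ln E[exp(-q1 * alpha)],
    with `weights` an unnormalized posterior over the parameter values."""
    Z = sum(weights.values())
    e = sum(w * math.exp(-q1 * a) for a, w in weights.items()) / Z
    return -math.log(e) / q1
```

With all posterior mass at one point the estimate equals that point; with q1 > 0 (overestimation penalized more heavily) it is pulled below the posterior mean.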
Minimizing the expected loss E_m[L4(m, d)] and using the posterior distributions (20) and (34), we get the Bayes estimates of m under the Linex loss function as

m_L = −(1/q1) ln[E(e^(−q1 m))] = −(1/q1) ln [ Σ_{m=1}^{n−1} e^(−q1 m) I1(m) I2(m) / Σ_{m=1}^{n−1} I1(m) I2(m) ],   (41)

m_L* = −(1/q1) ln [ Σ_{m=1}^{n−1} e^(−q1 m) I3(m) I4(m) / Σ_{m=1}^{n−1} I3(m) I4(m) ].   (42)

Minimizing the expected loss E_θ1[L4(θ1, d)] and using the posterior distributions (15) and (30), we get the Bayes estimates of θ1 under the Linex loss function as
θ1,L = −(1/q1) ln [ k Σ_{m=1}^{n−1} B(c1, m − dm + d1) B(c2, n − m − dn + dm + d2) ∫_0^1 e^(−q1 θ1) (1 − θ1)^(m + b1 − 1) θ1^(Sm + a1 − 1) 2F1(c1, −dm; c1 + m − dm + d1; −θ1/(1 − θ1)) dθ1 ∫_0^1 (1 − θ2)^(n − m + b2 − 1) θ2^(Sn − Sm + a2 − 1) 2F1(c2, −(dn − dm); c2 + n − m − dn + dm + d2; −θ2/(1 − θ2)) dθ2 / h1(X) ],   (43)

θ1,L* = −(1/q1) ln [ (1/h2(X)) Σ_{m=1}^{n−1} k2(m) ∫_0^1 e^(−q1 θ1) (1 − θ1)^(m − 1) θ1^(Sm − 1) 2F1(1, −dm; m − dm + 1; −θ1/(1 − θ1)) dθ1 ∫_0^1 (1 − θ2)^(n − m − 1) θ2^(Sn − Sm − 1) 2F1(1, −(dn − dm); n − m − dn + dm + 1; −θ2/(1 − θ2)) dθ2 ].   (44)

Minimizing the expected loss E_θ2[L4(θ2, d)] and using the posterior distributions (16) and (31), we get the Bayes estimates of θ2 under the Linex loss function as

θ2,L = −(1/q1) ln [ k Σ_{m=1}^{n−1} B(c1, m − dm + d1) B(c2, n − m − dn + dm + d2) ∫_0^1 e^(−q1 θ2) (1 − θ2)^(n − m + b2 − 1) θ2^(Sn − Sm + a2 − 1) 2F1(c2, −(dn − dm); c2 + n − m − dn + dm + d2; −θ2/(1 − θ2)) dθ2 ∫_0^1 (1 − θ1)^(m + b1 − 1) θ1^(Sm + a1 − 1) 2F1(c1, −dm; c1 + m − dm + d1; −θ1/(1 − θ1)) dθ1 / h1(X) ],   (45)

θ2,L* = −(1/q1) ln [ (1/h2(X)) Σ_{m=1}^{n−1} k2(m) ∫_0^1 e^(−q1 θ2) (1 − θ2)^(n − m − 1) θ2^(Sn − Sm − 1) 2F1(1, −(dn − dm); n − m − dn + dm + 1; −θ2/(1 − θ2)) dθ2 ∫_0^1 (1 − θ1)^(m − 1) θ1^(Sm − 1) 2F1(1, −dm; m − dm + 1; −θ1/(1 − θ1)) dθ1 ].   (46)

Minimizing the expected loss E_p1[L4(p1, d)] and using the posterior distributions (18) and (32), we get the Bayes estimates of p1 under the Linex loss function as

p1,L = −(1/q1) ln [ k Σ_{m=1}^{n−1} B(Sm + a1, m − dm + b1) B(Sn − Sm + a2, n − m − dn + dm + b2) ∫_0^1 e^(−q1 p1) (1 − p1)^(m − dm + d1 − 1) p1^(c1 − 1) 2F1(−dm, Sm + a1; m − dm + a1 + Sm + b1; 1 − p1) dp1 ∫_0^1 (1 − p2)^(n − m − dn + dm + d2 − 1) p2^(c2 − 1) 2F1(−(dn − dm), Sn − Sm + a2; n − m − dn + dm + a2 + Sn − Sm + b2; 1 − p2) dp2 / h1(X) ],   (47)

p1,L* = −(1/q1) ln [ (1/h2(X)) Σ_{m=1}^{n−1} B(Sm, m − dm) B(Sn − Sm, n − m − dn + dm) ∫_0^1 e^(−q1 p1) (1 − p1)^(m − dm − 1) 2F1(−dm, Sm; m − dm + Sm; 1 − p1) dp1
∫_0^1 (1 − p2)^(n − m − dn + dm − 1) 2F1(−(dn − dm), Sn − Sm; n − m − dn + dm + Sn − Sm; 1 − p2) dp2 ].   (48)

Minimizing the expected loss E_p2[L4(p2, d)] and using the posterior distributions (19) and (33), we get the Bayes estimates of p2 under the Linex loss function as

p2,L = −(1/q1) ln [ k Σ_{m=1}^{n−1} B(Sm + a1, m − dm + b1) B(Sn − Sm + a2, n − m − dn + dm + b2) ∫_0^1 e^(−q1 p2) (1 − p2)^(n − m − dn + dm + d2 − 1) p2^(c2 − 1) 2F1(−(dn − dm), Sn − Sm + a2; n − m − dn + dm + a2 + Sn − Sm + b2; 1 − p2) dp2 ∫_0^1 (1 − p1)^(m − dm + d1 − 1) p1^(c1 − 1) 2F1(−dm, Sm + a1; m − dm + a1 + Sm + b1; 1 − p1) dp1 / h1(X) ],   (49)

p2,L* = −(1/q1) ln [ (1/h2(X)) Σ_{m=1}^{n−1} B(Sm, m − dm) B(Sn − Sm, n − m − dn + dm) ∫_0^1 e^(−q1 p2) (1 − p2)^(n − m − dn + dm − 1) 2F1(−(dn − dm), Sn − Sm; n − m − dn + dm + Sn − Sm; 1 − p2) dp2 ∫_0^1 (1 − p1)^(m − dm − 1) 2F1(−dm, Sm; m − dm + Sm; 1 − p1) dp1 ].   (50)

Another loss function, called the General Entropy loss function (GEL), proposed by Calabria and Pulcini (1996), is given by

L5(α, d) = (d/α)^q3 − q3 ln(d/α) − 1.

The Bayes estimate α_E is the value of d that minimizes E_α[L5(α, d)],

α_E = [E_α(α^(−q3))]^(−1/q3),

provided that E_α(α^(−q3)) exists and is finite. Minimizing the expectation E_m[L5(m, d)] and using the posterior distributions (20) and (34), we get the Bayes estimates m_E and m_E* as

m_E = [E(m^(−q3))]^(−1/q3) = [ Σ_{m=1}^{n−1} m^(−q3) I1(m) I2(m) / Σ_{m=1}^{n−1} I1(m) I2(m) ]^(−1/q3),   (51)

m_E* = [ Σ_{m=1}^{n−1} m^(−q3) I3(m) I4(m) / Σ_{m=1}^{n−1} I3(m) I4(m) ]^(−1/q3).   (52)

Minimizing the expectation E_θ1[L5(θ1, d)] and using the posterior distributions (15) and (30), we get the Bayes estimates of θ1 under the General Entropy loss function as

θ1,E = [ k Σ_{m=1}^{n−1} B(c1, m − dm + d1) B(c2, n − m − dn + dm + d2) ∫_0^1 θ1^(−q3) (1 − θ1)^(m + b1 − 1) θ1^(Sm + a1 − 1) 2F1(c1, −dm; c1 + m − dm + d1; −θ1/(1 − θ1)) dθ1 ∫_0^1 (1 − θ2)^(n − m + b2 − 1) θ2^(Sn − Sm + a2 − 1) 2F1(c2, −(dn − dm); c2 + n − m − dn + dm + d2; −θ2/(1 − θ2)) dθ2 / h1(X) ]^(−1/q3),   (53)

θ1,E* = [ (1/h2(X)) Σ_{m=1}^{n−1} k2(m) ∫_0^1 θ1^(−q3) (1 − θ1)^(m − 1) θ1^(Sm − 1) 2F1(1, −dm; m − dm + 1; −θ1/(1 − θ1)) dθ1
∫_0^1 (1 − θ2)^(n − m − 1) θ2^(Sn − Sm − 1) 2F1(1, −(dn − dm); n − m − dn + dm + 1; −θ2/(1 − θ2)) dθ2 ]^(−1/q3).   (54)

Minimizing the expectation E_θ2[L5(θ2, d)] and using the posterior distributions (16) and (31), we get the Bayes estimates of θ2 under the General Entropy loss function as

θ2,E = [ k Σ_{m=1}^{n−1} B(c1, m − dm + d1) B(c2, n − m − dn + dm + d2) ∫_0^1 θ2^(−q3) (1 − θ2)^(n − m + b2 − 1) θ2^(Sn − Sm + a2 − 1) 2F1(c2, −(dn − dm); c2 + n − m − dn + dm + d2; −θ2/(1 − θ2)) dθ2 ∫_0^1 (1 − θ1)^(m + b1 − 1) θ1^(Sm + a1 − 1) 2F1(c1, −dm; c1 + m − dm + d1; −θ1/(1 − θ1)) dθ1 / h1(X) ]^(−1/q3),   (55)

θ2,E* = [ (1/h2(X)) Σ_{m=1}^{n−1} k2(m) ∫_0^1 θ2^(−q3) (1 − θ2)^(n − m − 1) θ2^(Sn − Sm − 1) 2F1(1, −(dn − dm); n − m − dn + dm + 1; −θ2/(1 − θ2)) dθ2 ∫_0^1 (1 − θ1)^(m − 1) θ1^(Sm − 1) 2F1(1, −dm; m − dm + 1; −θ1/(1 − θ1)) dθ1 ]^(−1/q3).   (56)

Minimizing the expectation E_p1[L5(p1, d)] and using the posterior distributions (18) and (32), we get the Bayes estimates of p1 under the General Entropy loss function as

p1,E = [ k Σ_{m=1}^{n−1} B(Sm + a1, m − dm + b1) B(Sn − Sm + a2, n − m − dn + dm + b2) ∫_0^1 p1^(−q3) (1 − p1)^(m − dm + d1 − 1) p1^(c1 − 1) 2F1(−dm, Sm + a1; m − dm + a1 + Sm + b1; 1 − p1) dp1 ∫_0^1 (1 − p2)^(n − m − dn + dm + d2 − 1) p2^(c2 − 1) 2F1(−(dn − dm), Sn − Sm + a2; n − m − dn + dm + a2 + Sn − Sm + b2; 1 − p2) dp2 / h1(X) ]^(−1/q3),   (57)

p1,E* = [ (1/h2(X)) Σ_{m=1}^{n−1} B(Sm, m − dm) B(Sn − Sm, n − m − dn + dm) ∫_0^1 p1^(−q3) (1 − p1)^(m − dm − 1) 2F1(−dm, Sm; m − dm + Sm; 1 − p1) dp1 ∫_0^1 (1 − p2)^(n − m − dn + dm − 1) 2F1(−(dn − dm), Sn − Sm; n − m − dn + dm + Sn − Sm; 1 − p2) dp2 ]^(−1/q3).   (58)

Minimizing the expectation E_p2[L5(p2, d)] and using the posterior distributions (19) and (33), we get the Bayes estimates of p2 under the General Entropy loss function as

p2,E = [ k Σ_{m=1}^{n−1} B(Sm + a1, m − dm + b1) B(Sn − Sm + a2, n − m − dn + dm + b2) ∫_0^1 p2^(−q3) (1 − p2)^(n − m − dn + dm + d2 − 1) p2^(c2 − 1) 2F1(−(dn − dm), Sn − Sm + a2; n − m − dn + dm + a2 + Sn − Sm + b2; 1 − p2) dp2
∫_0^1 (1 − p1)^(m − dm + d1 − 1) p1^(c1 − 1) 2F1(−dm, Sm + a1; m − dm + a1 + Sm + b1; 1 − p1) dp1 / h1(X) ]^(−1/q3),   (59)

p2,E* = [ (1/h2(X)) Σ_{m=1}^{n−1} B(Sm, m − dm) B(Sn − Sm, n − m − dn + dm) ∫_0^1 p2^(−q3) (1 − p2)^(n − m − dn + dm − 1) 2F1(−(dn − dm), Sn − Sm; n − m − dn + dm + Sn − Sm; 1 − p2) dp2 ∫_0^1 (1 − p1)^(m − dm − 1) 2F1(−dm, Sm; m − dm + Sm; 1 − p1) dp1 ]^(−1/q3).   (60)

6. NUMERICAL STUDY
We generated random observations from the proposed change point model explained in Section 2. The first observations are from the zero-inflated Geometric distribution with θ1 = .5 and p1 = .6, and the next observations are from the same distribution with parameters θ2 = .7 and p2 = .8. The values of p1 and p2 were themselves drawn from Beta distributions with means μ3 = .5 and μ4 = .7 and common standard deviation σ = ., resulting in c1 = .63, d1 = .8, c2 = .84 and d2 = .56. The observations are given in Table 1.

Table 1. Generated observations from the proposed change point model of the zero-inflated Geometric distribution
i | Xi | 3
i | Xi | 7

We calculated the posterior means of the proportions under the informative and non-informative priors using the results given in Section 4. The results are shown in Table 2.

Table 2. Bayes estimates of the proportions under SEL
Prior | p1* | p2*
Informative | .6 | .8
Non-informative | .59 | .8

We calculated the posterior means of m, θ1 and θ2 under the informative and non-informative priors using the results given in Section 4; the results are shown in Table 3.

Table 3. Bayes estimates of m, θ1 and θ2 under SEL
Prior | m (posterior mean) | θ1 (posterior mean) | θ2 (posterior mean)
Informative | | .5 | .7
Non-informative | | |
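The data generation used in this section can be sketched as follows (our own code; it assumes the mixture interpretation, i.e. 0 ≤ p < 1, and uses an inverse-cdf walk for the Geometric part with pmf (1 − θ)θ^x):

```python
import random

def rzig(p, theta, rng):
    """One draw from the zero-inflated Geometric distribution (2)."""
    if rng.random() < p:
        return 0                      # degenerate component at zero
    u, x = rng.random(), 0
    term = cdf = 1 - theta            # inverse-cdf walk through (1-theta)*theta^x
    while u > cdf:
        x += 1
        term *= theta
        cdf += term
    return x

def generate(n, m, p1, t1, p2, t2, seed=0):
    """First m observations from ZIG(p1, t1), the remaining n-m from ZIG(p2, t2)."""
    rng = random.Random(seed)
    return [rzig(p1, t1, rng) if i < m else rzig(p2, t2, rng) for i in range(n)]
```

As a sanity check, with p = .6 and θ = .5 the sample mean of many draws should settle near E(X) = (1 − p)θ/(1 − θ) = 0.4, and the proportion of zeros near p + (1 − p)(1 − θ) = 0.8.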
We also computed the Bayes estimators of p1 and p2 for both the informative and non-informative priors, using (47), (48), (49), (50), (57), (58), (59) and (60) respectively, for the data given in Table 1 and for different values of the shape parameters q1 and q3. The results are shown in Table 4.

Table 4. Bayes estimates of the proportions under asymmetric loss functions
Prior | q1 | q3 | p1,L | p1,E | p2,L | p2,E
Informative | | | | | |
Non-informative | | | | | |

We calculated the Bayes estimates of the change point m and of θ1 and θ2 under the informative prior, under the Linex and General Entropy loss functions respectively, using (41), (43), (45) and (51), (53), (55), for the data given in Table 1 and for different values of the shape parameters q1 and q3. The results are shown in Tables 5 and 6.

Table 5. Bayes estimates using the Linex loss function (informative prior)
q1 | m_L | θ1,L | θ2,L

Table 6. Bayes estimates using the General Entropy loss function (informative prior)
q3 | m_E | θ1,E | θ2,E

Table 5 shows that for small values of q1 (q1 = .9, ., .) the Linex loss function is almost symmetric and nearly quadratic, and the Bayes estimates under such a loss are not far from the posterior mean. Table 5 also shows that, for q1 = .5, ., the Bayes estimates are less than the actual value of m = , while for q1 = q3 = , the Bayes estimates are considerably larger than the actual value of m = . It can be seen from Tables 5 and 6 that a negative sign of the shape parameter of the loss function reflects that underestimation is more serious than overestimation; thus the problem of underestimation can be addressed by taking the shape parameters of the Linex and General Entropy loss functions to be negative. Table 6 shows that, for small values of q3 (q3 = .9, ., .) of the General Entropy loss function, the Bayes estimates are not far from the posterior mean. Table 6 also shows that, for q3 = .5, ., the Bayes estimates are less than the actual value of m = .
It can be seen from Tables 5 and 6 that a positive sign of the shape parameter of the loss functions reflects that overestimation is more serious than underestimation. Thus the problem of overestimation can be addressed by taking the shape parameter of the Linex and General Entropy loss functions to be positive and large.
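The General Entropy estimates of m reported in Table 6 have the closed form m_E = (E[m^(−q3)])^(−1/q3); a minimal sketch (names ours):

```python
def gel_estimate(weights, q3):
    """Bayes estimate under General Entropy loss: (E[alpha^-q3])^(-1/q3),
    with `weights` an unnormalized posterior over positive parameter values."""
    Z = sum(weights.values())
    moment = sum(w * a ** (-q3) for a, w in weights.items()) / Z
    return moment ** (-1.0 / q3)

# q3 = -1 recovers the posterior mean; q3 = 1 gives the harmonic mean,
# which sits below the mean and so guards against overestimation.
assert gel_estimate({4: 1.0, 6: 1.0}, -1) == 5.0
```

This makes the sign convention above concrete: increasing q3 pulls the estimate down, and negative q3 pushes it up toward (and beyond) the posterior mean.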
7. SIMULATION STUDY
In Sections 4 and 5 we obtained Bayes estimates of m and of the proportions p1 and p2 on the basis of the generated data given in Table 1, for given values of the parameters. To justify the results, we generated different random samples with m = , n = , p1 = .6, p2 = .8, θ1 = .5, θ2 = .7 and obtained the frequency distributions of the posterior mean, the posterior median, m_L* and m_E* under the correct prior specification. The results are shown in Tables 7 and 8. The value of the shape parameter of the General Entropy loss and the Linex loss used in the simulation study for the change point was taken as . We also simulated several mixed samples as explained in Section 2 with p1 = .5, .6, .7; p2 = .5, .8, .9; θ1 = .5, ., .; θ2 = .55, .45, .35. For each (p1, p2, θ1, θ2), pseudo-random samples with m = and n = were simulated, and the Bayes estimator of the change point m using q1 = .9 was computed for the same values of ai, bi, ci and di, i = 1, 2, for different prior means.

Table 7. Frequency distributions of the Bayes estimates of the change point
Bayes estimate | % frequency
Posterior mean | 7 7
Posterior median |
Posterior mode | 3 75
m_L* | 65 3
m_E* |

Table 8. Frequency distributions of the Bayes estimates of the proportions
Bayes estimate | % frequency
p1,E |
p2,E |

8. REFERENCES
[1]. Bhattacharya, A. (2008). A Bayesian test for excess zeros in a zero-inflated power series distribution. IMS Collections, Vol. .
[2]. Johnson, N. L., Kotz, S. and Kemp, A. W. (1992). Univariate Discrete Distributions, 2nd ed. Wiley, New York.
[3]. Broemeling, L. D. and Tsurumi, H. (1987). Econometrics and Structural Change. Marcel Dekker, New York.
[4]. Jani, P. N. and Pandya, M. (1999). Bayes estimation of shift point in left truncated exponential sequence. Communications in Statistics (Theory and Methods), 28().
[5]. Ebrahimi, N. and Ghosh, S. K. (2001). Bayesian and frequentist methods in change-point problems. Handbook of Statistics, Vol. 20, Advances in Reliability (Eds. N. Balakrishnan and C. R. Rao).
[6]. Zacks, S. (1983).
Survey of classical and Bayesian approaches to the change point problem: fixed sample and sequential procedures for testing and estimation. In: Recent Advances in Statistics: Papers in Honor of Herman Chernoff. Academic Press, New York.
[7]. Pandya, M. and Jani, P. N. (2006). Bayesian estimation of change point in inverse Weibull sequence. Communications in Statistics (Theory and Methods), 35().
[8]. Pandya, M. and Bhatt, S. (2007). Bayesian estimation of shift point in Weibull distribution. Journal of Indian Statistical Association, Vol. 45.
[9]. Pandya, M. and Jadav, P. (2008). Bayesian estimation of change point in inverse Weibull distribution. IAPQR Transactions, Vol. 33, No. , pp. -3.
[10]. Pandya, M. and Jadav, P. (2010). Bayesian estimation of change point in mixture of left truncated exponential and degenerate distribution. Communications in Statistics, Vol. 39, No. 5.
[11]. Calabria, R. and Pulcini, G. (1996). Point estimation under asymmetric loss functions for left-truncated exponential samples. Communications in Statistics (Theory and Methods), 25(3).
[12]. Varian, H. R. (1975). A Bayesian approach to real estate assessment. In: Studies in Bayesian Econometrics and Statistics in Honor of Leonard J. Savage (Fienberg and Zellner, Eds.). North-Holland, Amsterdam.
More informationTime Series and Dynamic Models
Time Series and Dynamic Models Section 1 Intro to Bayesian Inference Carlos M. Carvalho The University of Texas at Austin 1 Outline 1 1. Foundations of Bayesian Statistics 2. Bayesian Estimation 3. The
More informationCHAPTER 1 INTRODUCTION
CHAPTER 1 INTRODUCTION 1.0 Discrete distributions in statistical analysis Discrete models play an extremely important role in probability theory and statistics for modeling count data. The use of discrete
More informationCSC321 Lecture 18: Learning Probabilistic Models
CSC321 Lecture 18: Learning Probabilistic Models Roger Grosse Roger Grosse CSC321 Lecture 18: Learning Probabilistic Models 1 / 25 Overview So far in this course: mainly supervised learning Language modeling
More informationRELIABILITY ANALYSIS FOR THE TWO-PARAMETER PARETO DISTRIBUTION UNDER RECORD VALUES
J Appl Math & Informatics Vol 9(11) No 5-6 pp 1435-1451 Website: http://wwwkcambiz RELIABILITY ANALYSIS FOR THE TWO-PARAMETER PARETO DISTRIBUTION UNDER RECORD VALUES LIANG WANG YIMIN SHI PING CHANG Abstract
More informationDavid Giles Bayesian Econometrics
David Giles Bayesian Econometrics 1. General Background 2. Constructing Prior Distributions 3. Properties of Bayes Estimators and Tests 4. Bayesian Analysis of the Multiple Regression Model 5. Bayesian
More informationPrediction for future failures in Weibull Distribution under hybrid censoring
Prediction for future failures in Weibull Distribution under hybrid censoring A. Asgharzadeh a, R. Valiollahi b, D. Kundu c a Department of Statistics, School of Mathematical Sciences, University of Mazandaran,
More informationEstimation Under Multivariate Inverse Weibull Distribution
Global Journal of Pure and Applied Mathematics. ISSN 097-768 Volume, Number 8 (07), pp. 4-4 Research India Publications http://www.ripublication.com Estimation Under Multivariate Inverse Weibull Distribution
More informationGamma-admissibility of generalized Bayes estimators under LINEX loss function in a non-regular family of distributions
Hacettepe Journal of Mathematics Statistics Volume 44 (5) (2015), 1283 1291 Gamma-admissibility of generalized Bayes estimators under LINEX loss function in a non-regular family of distributions SH. Moradi
More informationIrr. Statistical Methods in Experimental Physics. 2nd Edition. Frederick James. World Scientific. CERN, Switzerland
Frederick James CERN, Switzerland Statistical Methods in Experimental Physics 2nd Edition r i Irr 1- r ri Ibn World Scientific NEW JERSEY LONDON SINGAPORE BEIJING SHANGHAI HONG KONG TAIPEI CHENNAI CONTENTS
More informationIntroduction to Bayesian Inference
University of Pennsylvania EABCN Training School May 10, 2016 Bayesian Inference Ingredients of Bayesian Analysis: Likelihood function p(y φ) Prior density p(φ) Marginal data density p(y ) = p(y φ)p(φ)dφ
More informationLecture 5. G. Cowan Lectures on Statistical Data Analysis Lecture 5 page 1
Lecture 5 1 Probability (90 min.) Definition, Bayes theorem, probability densities and their properties, catalogue of pdfs, Monte Carlo 2 Statistical tests (90 min.) general concepts, test statistics,
More informationStat 5101 Lecture Notes
Stat 5101 Lecture Notes Charles J. Geyer Copyright 1998, 1999, 2000, 2001 by Charles J. Geyer May 7, 2001 ii Stat 5101 (Geyer) Course Notes Contents 1 Random Variables and Change of Variables 1 1.1 Random
More informationClassical and Bayesian inference
Classical and Bayesian inference AMS 132 Claudia Wehrhahn (UCSC) Classical and Bayesian inference January 8 1 / 11 The Prior Distribution Definition Suppose that one has a statistical model with parameter
More informationConsideration of prior information in the inference for the upper bound earthquake magnitude - submitted major revision
Consideration of prior information in the inference for the upper bound earthquake magnitude Mathias Raschke, Freelancer, Stolze-Schrey-Str., 6595 Wiesbaden, Germany, E-Mail: mathiasraschke@t-online.de
More informationStatistical Methods in HYDROLOGY CHARLES T. HAAN. The Iowa State University Press / Ames
Statistical Methods in HYDROLOGY CHARLES T. HAAN The Iowa State University Press / Ames Univariate BASIC Table of Contents PREFACE xiii ACKNOWLEDGEMENTS xv 1 INTRODUCTION 1 2 PROBABILITY AND PROBABILITY
More informationDS-GA 1002 Lecture notes 11 Fall Bayesian statistics
DS-GA 100 Lecture notes 11 Fall 016 Bayesian statistics In the frequentist paradigm we model the data as realizations from a distribution that depends on deterministic parameters. In contrast, in Bayesian
More informationBayesian Inference. Chapter 9. Linear models and regression
Bayesian Inference Chapter 9. Linear models and regression M. Concepcion Ausin Universidad Carlos III de Madrid Master in Business Administration and Quantitative Methods Master in Mathematical Engineering
More informationBayesian Analysis of Simple Step-stress Model under Weibull Lifetimes
Bayesian Analysis of Simple Step-stress Model under Weibull Lifetimes A. Ganguly 1, D. Kundu 2,3, S. Mitra 2 Abstract Step-stress model is becoming quite popular in recent times for analyzing lifetime
More informationHybrid Censoring; An Introduction 2
Hybrid Censoring; An Introduction 2 Debasis Kundu Department of Mathematics & Statistics Indian Institute of Technology Kanpur 23-rd November, 2010 2 This is a joint work with N. Balakrishnan Debasis Kundu
More informationBAYESIAN ESTIMATION IN DAM MONITORING NETWORKS
BAYESIAN ESIMAION IN DAM MONIORING NEWORKS João CASACA, Pedro MAEUS, and João COELHO National Laboratory for Civil Engineering, Portugal Abstract: A Bayesian estimator with informative prior distributions
More informationDavid Giles Bayesian Econometrics
9. Model Selection - Theory David Giles Bayesian Econometrics One nice feature of the Bayesian analysis is that we can apply it to drawing inferences about entire models, not just parameters. Can't do
More informationEMPIRICAL BAYES ESTIMATION OF THE TRUNCATION PARAMETER WITH LINEX LOSS
Statistica Sinica 7(1997), 755-769 EMPIRICAL BAYES ESTIMATION OF THE TRUNCATION PARAMETER WITH LINEX LOSS Su-Yun Huang and TaChen Liang Academia Sinica and Wayne State University Abstract: This paper deals
More informationPlausible Values for Latent Variables Using Mplus
Plausible Values for Latent Variables Using Mplus Tihomir Asparouhov and Bengt Muthén August 21, 2010 1 1 Introduction Plausible values are imputed values for latent variables. All latent variables can
More informationPATTERN RECOGNITION AND MACHINE LEARNING CHAPTER 2: PROBABILITY DISTRIBUTIONS
PATTERN RECOGNITION AND MACHINE LEARNING CHAPTER 2: PROBABILITY DISTRIBUTIONS Parametric Distributions Basic building blocks: Need to determine given Representation: or? Recall Curve Fitting Binary Variables
More informationA Very Brief Summary of Statistical Inference, and Examples
A Very Brief Summary of Statistical Inference, and Examples Trinity Term 2008 Prof. Gesine Reinert 1 Data x = x 1, x 2,..., x n, realisations of random variables X 1, X 2,..., X n with distribution (model)
More informationOBJECTIVE BAYESIAN ESTIMATORS FOR THE RIGHT CENSORED RAYLEIGH DISTRIBUTION: EVALUATING THE AL-BAYYATI LOSS FUNCTION
REVSTAT Statistical Journal Volume 14, Number 4, October 2016, 433 454 OBJECTIVE BAYESIAN ESTIMATORS FOR THE RIGHT CENSORED RAYLEIGH DISTRIBUTION: EVALUATING THE AL-BAYYATI LOSS FUNCTION Author: J.T. Ferreira
More informationIntroduction to Systems Analysis and Decision Making Prepared by: Jakub Tomczak
Introduction to Systems Analysis and Decision Making Prepared by: Jakub Tomczak 1 Introduction. Random variables During the course we are interested in reasoning about considered phenomenon. In other words,
More informationBayesian Models in Machine Learning
Bayesian Models in Machine Learning Lukáš Burget Escuela de Ciencias Informáticas 2017 Buenos Aires, July 24-29 2017 Frequentist vs. Bayesian Frequentist point of view: Probability is the frequency of
More informationEstimation of reliability parameters from Experimental data (Parte 2) Prof. Enrico Zio
Estimation of reliability parameters from Experimental data (Parte 2) This lecture Life test (t 1,t 2,...,t n ) Estimate θ of f T t θ For example: λ of f T (t)= λe - λt Classical approach (frequentist
More informationStats 579 Intermediate Bayesian Modeling. Assignment # 2 Solutions
Stats 579 Intermediate Bayesian Modeling Assignment # 2 Solutions 1. Let w Gy) with y a vector having density fy θ) and G having a differentiable inverse function. Find the density of w in general and
More informationTABLE OF CONTENTS CHAPTER 1 COMBINATORIAL PROBABILITY 1
TABLE OF CONTENTS CHAPTER 1 COMBINATORIAL PROBABILITY 1 1.1 The Probability Model...1 1.2 Finite Discrete Models with Equally Likely Outcomes...5 1.2.1 Tree Diagrams...6 1.2.2 The Multiplication Principle...8
More informationFoundations of Statistical Inference
Foundations of Statistical Inference Julien Berestycki Department of Statistics University of Oxford MT 2016 Julien Berestycki (University of Oxford) SB2a MT 2016 1 / 20 Lecture 6 : Bayesian Inference
More informationFrequentist-Bayesian Model Comparisons: A Simple Example
Frequentist-Bayesian Model Comparisons: A Simple Example Consider data that consist of a signal y with additive noise: Data vector (N elements): D = y + n The additive noise n has zero mean and diagonal
More informationReview: Statistical Model
Review: Statistical Model { f θ :θ Ω} A statistical model for some data is a set of distributions, one of which corresponds to the true unknown distribution that produced the data. The statistical model
More informationHPD Intervals / Regions
HPD Intervals / Regions The HPD region will be an interval when the posterior is unimodal. If the posterior is multimodal, the HPD region might be a discontiguous set. Picture: The set {θ : θ (1.5, 3.9)
More informationCS 540: Machine Learning Lecture 2: Review of Probability & Statistics
CS 540: Machine Learning Lecture 2: Review of Probability & Statistics AD January 2008 AD () January 2008 1 / 35 Outline Probability theory (PRML, Section 1.2) Statistics (PRML, Sections 2.1-2.4) AD ()
More informationProbability and Statistics
Kristel Van Steen, PhD 2 Montefiore Institute - Systems and Modeling GIGA - Bioinformatics ULg kristel.vansteen@ulg.ac.be Chapter 3: Parametric families of univariate distributions CHAPTER 3: PARAMETRIC
More information7. Estimation and hypothesis testing. Objective. Recommended reading
7. Estimation and hypothesis testing Objective In this chapter, we show how the election of estimators can be represented as a decision problem. Secondly, we consider the problem of hypothesis testing
More informationA Discussion of the Bayesian Approach
A Discussion of the Bayesian Approach Reference: Chapter 10 of Theoretical Statistics, Cox and Hinkley, 1974 and Sujit Ghosh s lecture notes David Madigan Statistics The subject of statistics concerns
More informationCS 630 Basic Probability and Information Theory. Tim Campbell
CS 630 Basic Probability and Information Theory Tim Campbell 21 January 2003 Probability Theory Probability Theory is the study of how best to predict outcomes of events. An experiment (or trial or event)
More informationEstimation of Quantiles
9 Estimation of Quantiles The notion of quantiles was introduced in Section 3.2: recall that a quantile x α for an r.v. X is a constant such that P(X x α )=1 α. (9.1) In this chapter we examine quantiles
More informationIntroduction to Machine Learning. Lecture 2
Introduction to Machine Learning Lecturer: Eran Halperin Lecture 2 Fall Semester Scribe: Yishay Mansour Some of the material was not presented in class (and is marked with a side line) and is given for
More informationBayesian Inference and the Symbolic Dynamics of Deterministic Chaos. Christopher C. Strelioff 1,2 Dr. James P. Crutchfield 2
How Random Bayesian Inference and the Symbolic Dynamics of Deterministic Chaos Christopher C. Strelioff 1,2 Dr. James P. Crutchfield 2 1 Center for Complex Systems Research and Department of Physics University
More informationStatistical Theory MT 2007 Problems 4: Solution sketches
Statistical Theory MT 007 Problems 4: Solution sketches 1. Consider a 1-parameter exponential family model with density f(x θ) = f(x)g(θ)exp{cφ(θ)h(x)}, x X. Suppose that the prior distribution has the
More informationProbabilistic Graphical Models
Parameter Estimation December 14, 2015 Overview 1 Motivation 2 3 4 What did we have so far? 1 Representations: how do we model the problem? (directed/undirected). 2 Inference: given a model and partially
More informationON STATISTICAL INFERENCE UNDER ASYMMETRIC LOSS. Abstract. We introduce a wide class of asymmetric loss functions and show how to obtain
ON STATISTICAL INFERENCE UNDER ASYMMETRIC LOSS FUNCTIONS Michael Baron Received: Abstract We introduce a wide class of asymmetric loss functions and show how to obtain asymmetric-type optimal decision
More information10. Exchangeability and hierarchical models Objective. Recommended reading
10. Exchangeability and hierarchical models Objective Introduce exchangeability and its relation to Bayesian hierarchical models. Show how to fit such models using fully and empirical Bayesian methods.
More informationINVERTED KUMARASWAMY DISTRIBUTION: PROPERTIES AND ESTIMATION
Pak. J. Statist. 2017 Vol. 33(1), 37-61 INVERTED KUMARASWAMY DISTRIBUTION: PROPERTIES AND ESTIMATION A. M. Abd AL-Fattah, A.A. EL-Helbawy G.R. AL-Dayian Statistics Department, Faculty of Commerce, AL-Azhar
More informationBayesian Learning. HT2015: SC4 Statistical Data Mining and Machine Learning. Maximum Likelihood Principle. The Bayesian Learning Framework
HT5: SC4 Statistical Data Mining and Machine Learning Dino Sejdinovic Department of Statistics Oxford http://www.stats.ox.ac.uk/~sejdinov/sdmml.html Maximum Likelihood Principle A generative model for
More informationA Bayesian Nonparametric Approach to Monotone Missing Data in Longitudinal Studies with Informative Missingness
A Bayesian Nonparametric Approach to Monotone Missing Data in Longitudinal Studies with Informative Missingness A. Linero and M. Daniels UF, UT-Austin SRC 2014, Galveston, TX 1 Background 2 Working model
More informationEstimation of Operational Risk Capital Charge under Parameter Uncertainty
Estimation of Operational Risk Capital Charge under Parameter Uncertainty Pavel V. Shevchenko Principal Research Scientist, CSIRO Mathematical and Information Sciences, Sydney, Locked Bag 17, North Ryde,
More informationLecture 3. G. Cowan. Lecture 3 page 1. Lectures on Statistical Data Analysis
Lecture 3 1 Probability (90 min.) Definition, Bayes theorem, probability densities and their properties, catalogue of pdfs, Monte Carlo 2 Statistical tests (90 min.) general concepts, test statistics,
More informationNow consider the case where E(Y) = µ = Xβ and V (Y) = σ 2 G, where G is diagonal, but unknown.
Weighting We have seen that if E(Y) = Xβ and V (Y) = σ 2 G, where G is known, the model can be rewritten as a linear model. This is known as generalized least squares or, if G is diagonal, with trace(g)
More informationSequential Procedure for Testing Hypothesis about Mean of Latent Gaussian Process
Applied Mathematical Sciences, Vol. 4, 2010, no. 62, 3083-3093 Sequential Procedure for Testing Hypothesis about Mean of Latent Gaussian Process Julia Bondarenko Helmut-Schmidt University Hamburg University
More informationDensity Estimation. Seungjin Choi
Density Estimation Seungjin Choi Department of Computer Science and Engineering Pohang University of Science and Technology 77 Cheongam-ro, Nam-gu, Pohang 37673, Korea seungjin@postech.ac.kr http://mlg.postech.ac.kr/
More informationIntroduction into Bayesian statistics
Introduction into Bayesian statistics Maxim Kochurov EF MSU November 15, 2016 Maxim Kochurov Introduction into Bayesian statistics EF MSU 1 / 7 Content 1 Framework Notations 2 Difference Bayesians vs Frequentists
More informationPrinciples of Bayesian Inference
Principles of Bayesian Inference Sudipto Banerjee 1 and Andrew O. Finley 2 1 Biostatistics, School of Public Health, University of Minnesota, Minneapolis, Minnesota, U.S.A. 2 Department of Forestry & Department
More informationSociedad de Estadística e Investigación Operativa
Sociedad de Estadística e Investigación Operativa Test Volume 14, Number 2. December 2005 Estimation of Regression Coefficients Subject to Exact Linear Restrictions when Some Observations are Missing and
More informationOn a simple construction of bivariate probability functions with fixed marginals 1
On a simple construction of bivariate probability functions with fixed marginals 1 Djilali AIT AOUDIA a, Éric MARCHANDb,2 a Université du Québec à Montréal, Département de mathématiques, 201, Ave Président-Kennedy
More informationA Note on Lenk s Correction of the Harmonic Mean Estimator
Central European Journal of Economic Modelling and Econometrics Note on Lenk s Correction of the Harmonic Mean Estimator nna Pajor, Jacek Osiewalski Submitted: 5.2.203, ccepted: 30.0.204 bstract The paper
More informationCarl N. Morris. University of Texas
EMPIRICAL BAYES: A FREQUENCY-BAYES COMPROMISE Carl N. Morris University of Texas Empirical Bayes research has expanded significantly since the ground-breaking paper (1956) of Herbert Robbins, and its province
More informationStatistical Models with Uncertain Error Parameters (G. Cowan, arxiv: )
Statistical Models with Uncertain Error Parameters (G. Cowan, arxiv:1809.05778) Workshop on Advanced Statistics for Physics Discovery aspd.stat.unipd.it Department of Statistical Sciences, University of
More informationChapter 4 HOMEWORK ASSIGNMENTS. 4.1 Homework #1
Chapter 4 HOMEWORK ASSIGNMENTS These homeworks may be modified as the semester progresses. It is your responsibility to keep up to date with the correctly assigned homeworks. There may be some errors in
More informationOn Bayesian Inference with Conjugate Priors for Scale Mixtures of Normal Distributions
Journal of Applied Probability & Statistics Vol. 5, No. 1, xxx xxx c 2010 Dixie W Publishing Corporation, U. S. A. On Bayesian Inference with Conjugate Priors for Scale Mixtures of Normal Distributions
More informationStatistical Methods in Particle Physics
Statistical Methods in Particle Physics Lecture 11 January 7, 2013 Silvia Masciocchi, GSI Darmstadt s.masciocchi@gsi.de Winter Semester 2012 / 13 Outline How to communicate the statistical uncertainty
More informationBayesian Statistical Methods. Jeff Gill. Department of Political Science, University of Florida
Bayesian Statistical Methods Jeff Gill Department of Political Science, University of Florida 234 Anderson Hall, PO Box 117325, Gainesville, FL 32611-7325 Voice: 352-392-0262x272, Fax: 352-392-8127, Email:
More informationSubject CS1 Actuarial Statistics 1 Core Principles
Institute of Actuaries of India Subject CS1 Actuarial Statistics 1 Core Principles For 2019 Examinations Aim The aim of the Actuarial Statistics 1 subject is to provide a grounding in mathematical and
More informationMore on nuisance parameters
BS2 Statistical Inference, Lecture 3, Hilary Term 2009 January 30, 2009 Suppose that there is a minimal sufficient statistic T = t(x ) partitioned as T = (S, C) = (s(x ), c(x )) where: C1: the distribution
More informationIntroduction to Bayesian Methods. Introduction to Bayesian Methods p.1/??
to Bayesian Methods Introduction to Bayesian Methods p.1/?? We develop the Bayesian paradigm for parametric inference. To this end, suppose we conduct (or wish to design) a study, in which the parameter
More informationA NONINFORMATIVE BAYESIAN APPROACH FOR TWO-STAGE CLUSTER SAMPLING
Sankhyā : The Indian Journal of Statistics Special Issue on Sample Surveys 1999, Volume 61, Series B, Pt. 1, pp. 133-144 A OIFORMATIVE BAYESIA APPROACH FOR TWO-STAGE CLUSTER SAMPLIG By GLE MEEDE University
More informationTheory of Maximum Likelihood Estimation. Konstantin Kashin
Gov 2001 Section 5: Theory of Maximum Likelihood Estimation Konstantin Kashin February 28, 2013 Outline Introduction Likelihood Examples of MLE Variance of MLE Asymptotic Properties What is Statistical
More informationLearning Bayesian network : Given structure and completely observed data
Learning Bayesian network : Given structure and completely observed data Probabilistic Graphical Models Sharif University of Technology Spring 2017 Soleymani Learning problem Target: true distribution
More informationHybrid Censoring Scheme: An Introduction
Department of Mathematics & Statistics Indian Institute of Technology Kanpur August 19, 2014 Outline 1 2 3 4 5 Outline 1 2 3 4 5 What is? Lifetime data analysis is used to analyze data in which the time
More informationA Brief Review of Probability, Bayesian Statistics, and Information Theory
A Brief Review of Probability, Bayesian Statistics, and Information Theory Brendan Frey Electrical and Computer Engineering University of Toronto frey@psi.toronto.edu http://www.psi.toronto.edu A system
More informationCOS513 LECTURE 8 STATISTICAL CONCEPTS
COS513 LECTURE 8 STATISTICAL CONCEPTS NIKOLAI SLAVOV AND ANKUR PARIKH 1. MAKING MEANINGFUL STATEMENTS FROM JOINT PROBABILITY DISTRIBUTIONS. A graphical model (GM) represents a family of probability distributions
More informationPhysics 403. Segev BenZvi. Propagation of Uncertainties. Department of Physics and Astronomy University of Rochester
Physics 403 Propagation of Uncertainties Segev BenZvi Department of Physics and Astronomy University of Rochester Table of Contents 1 Maximum Likelihood and Minimum Least Squares Uncertainty Intervals
More informationBayesian Econometrics
Bayesian Econometrics Christopher A. Sims Princeton University sims@princeton.edu September 20, 2016 Outline I. The difference between Bayesian and non-bayesian inference. II. Confidence sets and confidence
More informationPractice Exam 1. (A) (B) (C) (D) (E) You are given the following data on loss sizes:
Practice Exam 1 1. Losses for an insurance coverage have the following cumulative distribution function: F(0) = 0 F(1,000) = 0.2 F(5,000) = 0.4 F(10,000) = 0.9 F(100,000) = 1 with linear interpolation
More informationUnivariate Discrete Distributions
Univariate Discrete Distributions Second Edition NORMAN L. JOHNSON University of North Carolina Chapel Hill, North Carolina SAMUEL KOTZ University of Maryland College Park, Maryland ADRIENNE W. KEMP University
More informationOutline. Binomial, Multinomial, Normal, Beta, Dirichlet. Posterior mean, MAP, credible interval, posterior distribution
Outline A short review on Bayesian analysis. Binomial, Multinomial, Normal, Beta, Dirichlet Posterior mean, MAP, credible interval, posterior distribution Gibbs sampling Revisit the Gaussian mixture model
More informationThe Bayesian Choice. Christian P. Robert. From Decision-Theoretic Foundations to Computational Implementation. Second Edition.
Christian P. Robert The Bayesian Choice From Decision-Theoretic Foundations to Computational Implementation Second Edition With 23 Illustrations ^Springer" Contents Preface to the Second Edition Preface
More informationOne-parameter models
One-parameter models Patrick Breheny January 22 Patrick Breheny BST 701: Bayesian Modeling in Biostatistics 1/17 Introduction Binomial data is not the only example in which Bayesian solutions can be worked
More informationFoundations of Statistical Inference
Foundations of Statistical Inference Julien Berestycki Department of Statistics University of Oxford MT 2016 Julien Berestycki (University of Oxford) SB2a MT 2016 1 / 32 Lecture 14 : Variational Bayes
More informationPreliminary Statistics Lecture 2: Probability Theory (Outline) prelimsoas.webs.com
1 School of Oriental and African Studies September 2015 Department of Economics Preliminary Statistics Lecture 2: Probability Theory (Outline) prelimsoas.webs.com Gujarati D. Basic Econometrics, Appendix
More informationParametric Techniques Lecture 3
Parametric Techniques Lecture 3 Jason Corso SUNY at Buffalo 22 January 2009 J. Corso (SUNY at Buffalo) Parametric Techniques Lecture 3 22 January 2009 1 / 39 Introduction In Lecture 2, we learned how to
More information7. Estimation and hypothesis testing. Objective. Recommended reading
7. Estimation and hypothesis testing Objective In this chapter, we show how the election of estimators can be represented as a decision problem. Secondly, we consider the problem of hypothesis testing
More information