BAYESIAN RELIABILITY ANALYSIS FOR THE GUMBEL FAILURE MODEL


CHRIS P. TSOKOS AND BRANKO MILADINOVIC
Department of Mathematics and Statistics, University of South Florida, Tampa, FL 33620

ABSTRACT. The Gumbel, or double exponential, probability distribution is used to characterize the failure times of a given system. Both ordinary and Bayesian estimates of the reliability function are studied. In the Bayesian reliability analysis, we utilize Jeffreys' non-informative prior to obtain estimates of the target time at which a system achieves a desired reliability. Lindley's approximation procedure is used to obtain numerical estimates of the target time.

1. Introduction

Extreme value probability distributions have been used effectively to model various problems in engineering, the environment, business, and elsewhere. Some key references are (Burton and Markopoulos, 1985; Naess, 1998; Osella et al., 1992; Ramachandran, 1982; Rao et al., 1997; Sastry and Pi, 1991; Silbergleit, 1996; Suzuki and Ozaha, 1994; Tsokos, 1999). The object of the present study is to use the classical Gumbel, or double exponential, probability distribution to characterize the failure times of a given system, in both the ordinary and Bayesian settings. In the Bayesian setting, we assume that the prior probability density function is Jeffreys' non-informative prior under the mean square error loss function. We are interested in obtaining ordinary and Bayesian estimates of a target time $t_\alpha$, subject to a desired, specified reliability. That is, for a given system, what is the time to failure $t_\alpha$ that is reached with at least $(1-\alpha)100\%$ assurance? For example, we may want to be at least 95% certain that the system will be operable up to time $t_{0.05}$. We develop both ordinary and Bayesian estimates of $t_\alpha$ and introduce Lindley's approximation procedure, which is used to obtain numerical results that illustrate the usefulness of the study.

2. The Gumbel Model

For the Gumbel model, the probability density function (p.d.f.) and the cumulative distribution function (c.d.f.) of the failure time $T$ are given, respectively, by

(2.1) $f(t) = \frac{1}{\sigma}\, e^{-\frac{t-\mu}{\sigma}} \exp\left\{-e^{-\frac{t-\mu}{\sigma}}\right\},$

$-\infty < t < \infty$, $-\infty < \mu < \infty$, $\sigma > 0$, and

(2.2) $F(t) = \exp\left\{-\exp\left(-\frac{t-\mu}{\sigma}\right)\right\},$

where $\mu$ and $\sigma$ are the location and scale parameters, respectively. This model has been used in fire protection, insurance problems, the prediction of earthquake magnitudes, carbon dioxide levels in the atmosphere, and high return levels of wind speeds in the design of structures, among others. In the present study, we shall apply the subject model in reliability analysis, more specifically, Bayesian reliability modeling.

3. Reliability Modeling

Let $t_1, t_2, t_3, \ldots, t_n$ be the failure times that follow the Gumbel p.d.f. given by (2.1). The likelihood function $L(\mu, \sigma)$ is given by

$L(\mu, \sigma) = \prod_{i=1}^{n} \frac{1}{\sigma}\exp\left\{-\frac{t_i-\mu}{\sigma} - \exp\left(-\frac{t_i-\mu}{\sigma}\right)\right\},$

and its logarithmic form is

(3.1) $\log L = -n\ln\sigma - \sum_{i=1}^{n}\frac{t_i-\mu}{\sigma} - \sum_{i=1}^{n}\exp\left(-\frac{t_i-\mu}{\sigma}\right).$

The maximum likelihood estimates (MLEs) of $\mu$ and $\sigma$ can be obtained from the likelihood function by solving the following equations:

(3.2) $\hat\sigma + \frac{\sum t_i\, e^{-t_i/\hat\sigma}}{\sum e^{-t_i/\hat\sigma}} = \bar t,$

(3.3) $\hat\mu = -\hat\sigma\,\ln\left\{\frac{1}{n}\sum e^{-t_i/\hat\sigma}\right\}.$

Equations (3.2) and (3.3) are not analytically tractable and must be solved numerically to obtain approximate MLEs of $\mu$ and $\sigma$, that is, $\hat\mu$ and $\hat\sigma$. By taking the natural logarithm of both sides of equation (2.2) and solving for $t$, we obtain the expression for the target time $t_\alpha$ under the desired reliability $1-\alpha$:

(3.4) $t_\alpha = \mu - \sigma\,\ln(-\ln\alpha).$

Thus, by the invariance property of the MLEs, we can obtain the MLE of the target time $t_\alpha$:

(3.5) $\hat t_\alpha = \hat\mu - \hat\sigma\,\ln(-\ln\alpha).$

Classical estimates and confidence intervals for $t_\alpha$ can be obtained using the method of maximum likelihood and the normal approximation for different extreme value models. In the present study, we shall examine the estimation of $t_\alpha$ for an extreme
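Equations (3.2)-(3.3) lend themselves to a simple fixed-point iteration on the scale parameter. The following sketch (ours, not part of the original paper; function names and the stopping tolerance are our choices) solves them numerically and then applies the invariance property:

```python
import numpy as np

def gumbel_mle(t, tol=1e-10, max_iter=1000):
    """Approximate MLEs (mu_hat, sigma_hat) of the Gumbel location and scale:
    fixed-point iteration on the scale equation, then back-substitution."""
    t = np.asarray(t, dtype=float)
    tbar, shift = t.mean(), t.min()          # shift guards against underflow
    sigma = t.std() * np.sqrt(6.0) / np.pi   # method-of-moments starting value
    for _ in range(max_iter):
        # the common factor exp(shift/sigma) cancels in the weighted mean below
        w = np.exp(-(t - shift) / sigma)
        sigma_new = tbar - (t * w).sum() / w.sum()
        if abs(sigma_new - sigma) < tol:
            sigma = sigma_new
            break
        sigma = sigma_new
    mu = shift - sigma * np.log(np.mean(np.exp(-(t - shift) / sigma)))
    return mu, sigma

def target_time(mu, sigma, alpha):
    """t_alpha = mu - sigma * ln(-ln(alpha)): time reached with reliability 1 - alpha."""
    return mu - sigma * np.log(-np.log(alpha))
```

By the invariance property, `target_time(*gumbel_mle(data), alpha)` then gives the maximum likelihood estimate $\hat t_\alpha$ of the target time.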

value model in a Bayesian setting under a specified prior and the mean square error loss function.

4. Bayesian Approach to the Gumbel Model

In the Bayesian approach we regard $\mu$ and $\sigma$ as random variables with a joint p.d.f. $\pi(\mu, \sigma)$. We shall investigate the point estimator of $t_\alpha$ under Jeffreys' non-informative prior.

4.1. Jeffreys' non-informative prior. Jeffreys' rule chooses the prior $\pi(\mu, \sigma)$ to be proportional to $\sqrt{\det I(\theta)}$, where $I(\theta)$ is the expected Fisher information matrix. That is,

$I(\hat\mu, \hat\sigma) = -E\begin{pmatrix} \dfrac{\partial^2 \ln L}{\partial\mu^2} & \dfrac{\partial^2 \ln L}{\partial\mu\,\partial\sigma} \\[4pt] \dfrac{\partial^2 \ln L}{\partial\sigma\,\partial\mu} & \dfrac{\partial^2 \ln L}{\partial\sigma^2} \end{pmatrix}.$

Using $\log L$ as given in (3.1), we obtain

(4.1) $I(\theta) = \begin{pmatrix} \dfrac{n}{\sigma^2} & \dfrac{n(1-\gamma)}{\sigma^2} \\[4pt] \dfrac{n(1-\gamma)}{\sigma^2} & \dfrac{n}{\sigma^2}\left\{\gamma^2 - 2\gamma + \eta(2,2)\right\} \end{pmatrix},$

where $\gamma$ is Euler's constant and

$\eta(p, q) = \frac{1}{\Gamma(p)} \int_0^\infty \frac{t^{p-1} e^{-qt}}{1 - e^{-t}}\, dt.$

Hence $\det I(\theta) = K/\sigma^4$ for a constant $K$ not depending on the parameters, implying that Jeffreys' non-informative prior is given by

(4.2) $\pi(\mu, \sigma) = \frac{1}{\sigma^2}.$

We remark that $\pi$ is an improper prior p.d.f. Consistent with the aim of the present study of identifying the target time $t_\alpha$, we proceed to obtain its analytical form.

4.2. Posterior distribution. The posterior probability density function of $(\mu, \sigma)$ given the failure times $t_1, \ldots, t_n$ is

$\pi(\mu, \sigma \mid t_1, t_2, \ldots, t_n) = \frac{L(\mu, \sigma \mid t_1, \ldots, t_n)\, \pi(\mu, \sigma)}{\int\!\!\int L(\mu, \sigma \mid t_1, \ldots, t_n)\, \pi(\mu, \sigma)\, d\mu\, d\sigma},$

where $L(\mu, \sigma \mid t_1, \ldots, t_n)$ is given by (3.1). We shall first compute the marginal probability density function, that is, $\int\!\!\int L(\mu, \sigma \mid t_1, \ldots, t_n)\, \pi(\mu, \sigma)\, d\mu\, d\sigma$. Using the prior $\pi(\mu, \sigma) = 1/\sigma^2$, $\sigma > 0$, and letting $x = \sum_{i=1}^{n} t_i$, we obtain
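As a quick numerical sanity check on the scaling behind (4.2) (our own illustration, not from the paper), the single-observation Fisher information matrix can be estimated by Monte Carlo from the outer product of the score vector; its determinant should scale like $1/\sigma^4$, so that $\sqrt{\det I} \propto 1/\sigma^2$:

```python
import numpy as np

rng = np.random.default_rng(0)

def score(t, mu, sigma):
    """Score vector (d log f/d mu, d log f/d sigma) of one Gumbel observation."""
    z = (t - mu) / sigma
    e = np.exp(-z)
    d_mu = (1.0 - e) / sigma
    d_sigma = (z - 1.0 - z * e) / sigma
    return np.stack([d_mu, d_sigma])

def fisher_det(mu, sigma, n=400_000):
    """Monte Carlo estimate of det I(theta) for one observation: E[score score^T]."""
    t = rng.gumbel(mu, sigma, size=n)
    s = score(t, mu, sigma)
    info = (s @ s.T) / n
    return np.linalg.det(info)

# det I scales like 1/sigma**4, so doubling sigma divides it by about 16
ratio = fisher_det(0.0, 1.0) / fisher_det(0.0, 2.0)
```

The ratio coming out near 16 confirms that the Jeffreys prior for this model depends on the parameters only through the factor $1/\sigma^2$.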

$\int\!\!\int L(\mu, \sigma \mid t_1, \ldots, t_n)\, \pi(\mu, \sigma)\, d\mu\, d\sigma$

(4.3) $= \int_0^\infty \int_{-\infty}^{\infty} \sigma^{-(n+2)}\, e^{-x/\sigma}\, e^{n\mu/\sigma} \exp\left\{-e^{\mu/\sigma} \sum_{i=1}^{n} e^{-t_i/\sigma}\right\} d\mu\, d\sigma.$

Letting $u = e^{\mu/\sigma}$ and $a = \sum_{i=1}^{n} e^{-t_i/\sigma}$, the expression (4.3) can be written as

(4.4) $\int\!\!\int L\,\pi\, d\mu\, d\sigma = \int_0^\infty \sigma^{-(n+1)}\, e^{-x/\sigma} \int_0^\infty u^{n-1} e^{-au}\, du\, d\sigma = \Gamma(n) \int_0^\infty \sigma^{-(n+1)}\, e^{-x/\sigma}\, a^{-n}\, d\sigma$
$= \Gamma(n) \int_0^\infty v^{n-1} e^{-xv} \left(\sum_{i=1}^{n} e^{-t_i v}\right)^{-n} dv = \Gamma(n) \int_0^\infty v^{n-1} \left(\sum_{i=1}^{n} e^{-v(t_i - \bar t)}\right)^{-n} dv,$

where $v = 1/\sigma$ and $x = \sum_{i=1}^{n} t_i = n\bar t$.

4.3. Bayesian estimation of $t_\alpha$ for Jeffreys' prior. The Bayes estimate of $t_\alpha = \mu - \sigma\ln(-\ln\alpha)$ under squared error loss is given by

$\hat t_B = E(t_\alpha \mid t_1, t_2, \ldots, t_n) = \frac{\int\!\!\int [\mu - \sigma\ln(-\ln\alpha)]\, L(\mu, \sigma \mid t_1, \ldots, t_n)\, \pi(\mu, \sigma)\, d\mu\, d\sigma}{\int\!\!\int L(\mu, \sigma \mid t_1, \ldots, t_n)\, \pi(\mu, \sigma)\, d\mu\, d\sigma},$

or

(4.5) $\hat t_B = \frac{\int\!\!\int [\mu - \sigma\ln(-\ln\alpha)]\, L(\mu, \sigma \mid t_1, \ldots, t_n)\, \pi(\mu, \sigma)\, d\mu\, d\sigma}{\Gamma(n) \int_0^\infty v^{n-1} \left(\sum_{i=1}^{n} e^{-v(t_i - \bar t)}\right)^{-n} dv}.$

Proceeding as we did before in obtaining the marginal probability distribution, we can write, up to the normalizing constant (4.4),

(4.6) $E(\mu \mid t) \propto \int_0^\infty \int_0^\infty v^{n-2}\, e^{-xv}\, (\ln u)\, u^{n-1} e^{-au}\, du\, dv,$

where $a = \sum_{i=1}^{n} e^{-t_i v}$, and

similarly the numerator of $E(\sigma \mid t)$ is

(4.7) $\Gamma(n) \int_0^\infty v^{n-2} \left(\sum_{i=1}^{n} e^{-v(t_i - \bar t)}\right)^{-n} dv.$

Hence,

$E(\hat t_\alpha \mid t_1, \ldots, t_n) = \frac{\int_0^\infty\!\int_0^\infty v^{n-2}\, e^{-xv}\, (\ln u)\, u^{n-1} e^{-au}\, du\, dv}{\Gamma(n)\int_0^\infty v^{n-1}\left(\sum_{i=1}^{n} e^{-v(t_i - \bar t)}\right)^{-n} dv} \; - \; \ln(-\ln\alpha)\, \frac{\int_0^\infty v^{n-2}\left(\sum_{i=1}^{n} e^{-v(t_i - \bar t)}\right)^{-n} dv}{\int_0^\infty v^{n-1}\left(\sum_{i=1}^{n} e^{-v(t_i - \bar t)}\right)^{-n} dv},$

where $a = \sum_{i=1}^{n} e^{-t_i v}$ and $\bar t = \frac{1}{n}\sum_{i=1}^{n} t_i$. To evaluate the above expressions and obtain approximate Bayesian estimates of $t_\alpha$, we shall use Lindley's approximation method.

4.4. The Lindley Approximation. Let

$I = \frac{\int u(\theta)\, v(\theta)\, e^{L(\theta)}\, d\theta}{\int v(\theta)\, e^{L(\theta)}\, d\theta},$

where $\theta = (\theta_1, \theta_2, \ldots, \theta_k)$ is a vector of parameters and $L = \log(\text{likelihood function})$. Note that $I$ is the posterior expectation of $u(\theta)$ given the failure data, for a prior $v(\theta)$. Denote

$u_1 = \frac{\partial u}{\partial\theta_1}, \quad u_2 = \frac{\partial u}{\partial\theta_2}, \quad u_{11} = \frac{\partial^2 u}{\partial\theta_1^2}, \quad u_{22} = \frac{\partial^2 u}{\partial\theta_2^2},$

$P = \pi(\theta_1, \theta_2), \quad P_1 = \frac{\partial P}{\partial\theta_1}, \quad P_2 = \frac{\partial P}{\partial\theta_2},$

$L_{20} = \frac{\partial^2 L}{\partial\theta_1^2}, \quad L_{02} = \frac{\partial^2 L}{\partial\theta_2^2}, \quad L_{30} = \frac{\partial^3 L}{\partial\theta_1^3}, \quad L_{03} = \frac{\partial^3 L}{\partial\theta_2^3},$

$\tau_{11} = (-L_{20})^{-1}, \quad \tau_{22} = (-L_{02})^{-1}.$

Then, for the two-parameter case,

$E(u(\theta) \mid t) \approx u(\hat\theta_1, \hat\theta_2) + \frac{1}{2}\left(u_{11}\tau_{11} + u_{22}\tau_{22}\right) + \frac{P_1}{P}\, u_1 \tau_{11} + \frac{P_2}{P}\, u_2 \tau_{22} + \frac{1}{2}\left(L_{30}\, u_1 \tau_{11}^2 + L_{03}\, u_2 \tau_{22}^2 + L_{21}\, u_2 \tau_{11}\tau_{22} + L_{12}\, u_1 \tau_{11}\tau_{22}\right),$
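The ratio of integrals defining the Bayes estimate can also be evaluated by brute-force numerical integration, which gives an independent check on Lindley's approximation. The sketch below is our own illustration, not the paper's method; the grid bounds are assumptions that must be adapted to the data at hand. It computes the posterior mean of $t_\alpha$ on a $(\mu, \sigma)$ grid under the Jeffreys prior $1/\sigma^2$:

```python
import numpy as np

def bayes_target_time(t, alpha, mu_grid, sigma_grid):
    """Posterior mean E[t_alpha | data] under pi(mu, sigma) = 1/sigma**2,
    computed by 2-D quadrature over a uniform parameter grid."""
    t = np.asarray(t, dtype=float)
    b = np.log(-np.log(alpha))                 # t_alpha = mu - sigma * b
    M, S = np.meshgrid(mu_grid, sigma_grid, indexing="ij")
    # Gumbel log-likelihood (3.1) evaluated at every grid point
    z = (t[None, None, :] - M[..., None]) / S[..., None]
    loglik = -t.size * np.log(S) - z.sum(axis=-1) - np.exp(-z).sum(axis=-1)
    logpost = loglik - 2.0 * np.log(S)         # add the log Jeffreys prior
    w = np.exp(logpost - logpost.max())        # unnormalized posterior weights
    return ((M - b * S) * w).sum() / w.sum()
```

On a uniform grid the cell areas cancel in the ratio, so no explicit quadrature weights are needed; the grid must be wide and fine enough to cover the bulk of the posterior.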

evaluated at $(\hat\theta_1, \hat\theta_2)$, where $\hat\theta_1$ and $\hat\theta_2$ are the MLEs of $\theta_1$ and $\theta_2$. The target time for the Gumbel model is given by $t_B = \mu - b\sigma = u(\mu, \sigma)$, where $\theta_1 = \mu$, $\theta_2 = \sigma$, and $b = \ln(-\ln\alpha)$. Also,

$u_1 = 1, \quad u_2 = -b, \quad u_{11} = u_{22} = 0.$

Thus, we can write

$P(\theta_1, \theta_2) = \pi(\mu, \sigma) = \frac{1}{\sigma^2}, \quad P_1 = 0, \quad P_2 = -\frac{2}{\sigma^3}.$

Let $\hat\mu$ and $\hat\sigma$ be the classical MLEs of $\mu$ and $\sigma$, respectively. Furthermore, we have

$L(\mu, \sigma) = \prod_{i=1}^{n} \frac{1}{\sigma}\exp\left\{-\frac{t_i-\mu}{\sigma} - \exp\left(-\frac{t_i-\mu}{\sigma}\right)\right\},$

so that

$\ln L = -n\ln\sigma - \sum_{i=1}^{n}\frac{t_i-\mu}{\sigma} - \sum_{i=1}^{n} e^{-\frac{t_i-\mu}{\sigma}}, \qquad \frac{\partial \ln L}{\partial\mu} = \frac{n}{\sigma} - \frac{1}{\sigma}\sum_{i=1}^{n} e^{-\frac{t_i-\mu}{\sigma}}.$

Thus,

$L_{20} = \frac{\partial^2 \ln L}{\partial\mu^2} = -\frac{1}{\sigma^2}\sum_{i=1}^{n} e^{-\frac{t_i-\mu}{\sigma}}.$

Also,

$L_{02} = \frac{\partial^2 \ln L}{\partial\sigma^2} = \frac{n}{\sigma^2} - \frac{2}{\sigma^3}\sum_{i=1}^{n}(t_i-\mu)\left[1 - e^{-\frac{t_i-\mu}{\sigma}}\right] - \frac{1}{\sigma^4}\sum_{i=1}^{n}(t_i-\mu)^2\, e^{-\frac{t_i-\mu}{\sigma}}.$

We proceed to find $L_{30}$, $L_{03}$, $L_{21}$, and $L_{12}$:

$L_{30} = \frac{\partial^3 \ln L}{\partial\mu^3} = -\frac{1}{\sigma^3}\sum_{i=1}^{n} e^{-\frac{t_i-\mu}{\sigma}},$

$L_{03} = \frac{\partial^3 \ln L}{\partial\sigma^3} = -\frac{2n}{\sigma^3} + \frac{6}{\sigma^4}\sum_{i=1}^{n}(t_i-\mu)\left[1 - e^{-\frac{t_i-\mu}{\sigma}}\right] + \frac{6}{\sigma^5}\sum_{i=1}^{n}(t_i-\mu)^2\, e^{-\frac{t_i-\mu}{\sigma}} - \frac{1}{\sigma^6}\sum_{i=1}^{n}(t_i-\mu)^3\, e^{-\frac{t_i-\mu}{\sigma}},$

$L_{21} = \frac{\partial}{\partial\sigma}\left(\frac{\partial^2 \ln L}{\partial\mu^2}\right) = \frac{2}{\sigma^3}\sum_{i=1}^{n} e^{-\frac{t_i-\mu}{\sigma}} - \frac{1}{\sigma^4}\sum_{i=1}^{n}(t_i-\mu)\, e^{-\frac{t_i-\mu}{\sigma}},$

$L_{12} = \frac{\partial}{\partial\mu}\left(\frac{\partial^2 \ln L}{\partial\sigma^2}\right) = \frac{2}{\sigma^3}\left(n - \sum_{i=1}^{n} e^{-\frac{t_i-\mu}{\sigma}}\right) + \frac{4}{\sigma^4}\sum_{i=1}^{n}(t_i-\mu)\, e^{-\frac{t_i-\mu}{\sigma}} - \frac{1}{\sigma^5}\sum_{i=1}^{n}(t_i-\mu)^2\, e^{-\frac{t_i-\mu}{\sigma}}.$

Thus, since $u_{11} = u_{22} = 0$ and $P_1 = 0$, an approximate Bayes estimate of $t_\alpha$ is given by

(4.8) $\hat t_B = \hat t_\alpha(\text{MLE}) + \frac{P_2}{P}\, u_2 \tau_{22} + \frac{1}{2}\left(L_{30}\, u_1 \tau_{11}^2 + L_{03}\, u_2 \tau_{22}^2 + L_{21}\, u_2 \tau_{11}\tau_{22} + L_{12}\, u_1 \tau_{11}\tau_{22}\right),$

evaluated at the MLEs $\hat\mu$ and $\hat\sigma$.

5. Numerical Analysis

In this section we present a numerical study that compares the maximum likelihood and Bayes estimates of the target time of the Gumbel failure model subject to a specified reliability. Our numerical simulation was conducted in the following manner:

1. Under the assumption that the location parameter $\mu$ and the scale parameter $\sigma$ behave randomly and independently, we simulated m (m = 5, 10, 20) location parameters from the normal distribution. In order to study the effect of the prior variance on our estimates, we simulated the location parameters from normal distributions with mean 25 and variances equal to 1, 4, and 9, respectively.

2. We assumed that the scale parameter $\sigma$ follows the uniform distribution. However, in order to see what effect an increase of variance has on our estimates, we let $\sigma$ equal 1, 2, and 4, respectively.

3. Using the obtained m pairs of $(\mu, \sigma)$, we generated n (n = 5, 10, 20) observations from the Gumbel p.d.f. and calculated both the maximum likelihood and Bayes estimates of the target time.

4. For comparison purposes, we calculated the absolute value of the difference between the true target time and the corresponding ML and Bayes estimates for 99% reliability.

A schematic diagram of the complete step-by-step process of the numerical analysis is presented in Figure 1.
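The data-generating side of steps 1-4 can be sketched as follows. This is our own illustration, not the paper's actual code: the uniform range for $\sigma$ and the replication count are assumptions, and `scipy.stats.gumbel_r.fit` stands in for solving the MLE equations (3.2)-(3.3); only the maximum likelihood column of the comparison is reproduced.

```python
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(42)
alpha = 0.01                              # 99% reliability
b = np.log(-np.log(alpha))                # t_alpha = mu - sigma * b

errors = []
for _ in range(200):
    mu = rng.normal(25.0, 1.0)            # step 1: location from its normal prior
    sigma = rng.uniform(1.0, 4.0)         # step 2: scale drawn uniformly (assumed range)
    data = rng.gumbel(mu, sigma, size=20) # step 3: n = 20 simulated failure times
    mu_hat, sigma_hat = gumbel_r.fit(data)  # maximum likelihood fit
    t_true = mu - sigma * b
    t_mle = mu_hat - sigma_hat * b
    errors.append(abs(t_true - t_mle))    # step 4: |true - ML estimate|

mean_abs_error = float(np.mean(errors))
```

The Bayes column of the tables would be produced analogously, replacing the ML target time with the Lindley-corrected estimate (4.8).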

Figure 1. Numerical Study of the Gumbel Failure Time

Table 1. Comparison between ML and Bayesian estimates of reliability time: $\mu \sim N(25, 1)$, $\sigma$ = 1, 2, 4, $\alpha = 0.01$. (Columns: m; n; $(\mu_B, \sigma_B)$; $(\hat\mu, \hat\sigma)$; $|t_\alpha - \hat t_\alpha|$; $|t_\alpha - \hat t_B|$.)

Due to the size of our simulation, only some of the numerical results are given in Tables 1-3, under 99% reliability. In each table we present the size of the prior sample m used to calculate the Bayes estimates $\mu_B$ and $\sigma_B$, while $\hat\mu$ and $\hat\sigma$ are the ML estimates of the location and scale parameters. $|t_\alpha - \hat t_\alpha|$ and $|t_\alpha - \hat t_B|$ represent the absolute value of the

difference between the true target time and the maximum likelihood and Bayes target time estimates, respectively.

As we can see from Table 1, keeping the prior sample size m = 5 and the prior variance fixed while varying the sample size of the failure model from n = 5 to n = 20, the absolute values of the differences $|t_\alpha - \hat t_\alpha|$ and $|t_\alpha - \hat t_B|$ decrease. This behavior is consistent as we increase n, except that we notice a significant improvement in the ML estimate.

Table 2. Comparison between ML and Bayesian estimates of reliability time: $\mu \sim N(25, 2)$, $\sigma$ = 1, 2, 4, $\alpha = 0.01$

Table 3. Comparison between ML and Bayesian estimates of reliability time: $\mu \sim N(25, 3)$, $\sigma$ = 1, 2, 4, $\alpha = 0.01$

In Tables 2 and 3 we increase the prior sample size m to 10 and 20 and the prior variance to 4 and 9, respectively, and again observe that the absolute values of the differences $|t_\alpha - \hat t_\alpha|$ and $|t_\alpha - \hat t_B|$ decrease. The increase in the prior variance has no effect on the behavior of our estimates. This is consistent as we increase n, and we again notice a significant

improvement in the ML estimate. In almost every case the Bayes estimate is closer to the true target time than its maximum likelihood counterpart.

6. Conclusion

As expected, the Monte Carlo simulation indicates that the Bayes estimate under the non-informative prior is closer to the true reliability time than its maximum likelihood counterpart. In addition, the following findings are in order:

1. An increase in the prior sample size for the location parameter has no effect on the behavior of the estimates.

2. An increase in the sample size of the simulated Gumbel data results in an improvement of both the maximum likelihood and Bayesian estimates.

3. When we increase the variance of the prior distribution from 1 to 4 to 9 and the variance of the simulated Gumbel data from 1 to 2 to 4, we notice a significant improvement in the maximum likelihood estimate.

We therefore conclude that for large sample sizes and high variance there is very little difference between the maximum likelihood and Bayes estimates of the target time subject to a specified reliability.

REFERENCES

[1] Burton, P. W. & Markopoulos, K. C. (1985). Seismic risk of circum-Pacific earthquakes, II. Extreme values using Gumbel's third distribution and the relationship with strain energy release. Pure and Applied Geophysics, 123.
[2] Naess, A. (1998). Estimation of long return period design values for wind speeds. Journal of Engineering Mechanics, American Society of Civil Engineers, 124.
[3] Osella, A. M., Sabbione, N. C., & Cernados, D. C. (1992). Statistical analysis of seismic data from north-western and western Argentina. Pure and Applied Geophysics, 139.
[4] Ramachandran, G. (1982). Properties of extreme order statistics and their application to fire protection and insurance problems. Fire Safety Journal, 5.
[5] Rao, N. M., Rao, P. P., & Kaila, K. L. (1997). The first and third asymptotic distributions of extremes as applied to the seismic source regions of India and adjacent areas. Geophysical Journal International, 128.
[6] Sastry, S. & Pi, J. I. (1991). Estimating the minimum of partitioning and floorplanning problems. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 10.
[7] Silbergleit, V. M. (1996). On the occurrence of geomagnetic storms with sudden commencements. Journal of Geomagnetism and Geoelectricity, 48.
[8] Suzuki, M. & Ozaha, Y. (1994). Seismic risk analysis based on strain-energy accumulations in focal region. Journal of Research of the National Institute of Standards and Technology, 99.
[9] Tsokos, C. P. & Nadarajah, S. (2003). Extreme value models for software reliability. Stochastic Analysis and Applications, 21.
[10] Tsokos, C. P. (1999). Ordinary Bayesian approach to life testing using the extreme value distributions. Frontiers in Reliability Analysis, IAPOQ.

Acknowledgment. The authors wish to acknowledge our late colleague Dr. A.N.V. Rao for fruitful discussions and assistance concerning the present study.


More information

Foundations of Statistical Inference

Foundations of Statistical Inference Foundations of Statistical Inference Julien Berestycki Department of Statistics University of Oxford MT 2016 Julien Berestycki (University of Oxford) SB2a MT 2016 1 / 32 Lecture 14 : Variational Bayes

More information

Statistical Inference on Constant Stress Accelerated Life Tests Under Generalized Gamma Lifetime Distributions

Statistical Inference on Constant Stress Accelerated Life Tests Under Generalized Gamma Lifetime Distributions Int. Statistical Inst.: Proc. 58th World Statistical Congress, 2011, Dublin (Session CPS040) p.4828 Statistical Inference on Constant Stress Accelerated Life Tests Under Generalized Gamma Lifetime Distributions

More information

Estimation Theory. as Θ = (Θ 1,Θ 2,...,Θ m ) T. An estimator

Estimation Theory. as Θ = (Θ 1,Θ 2,...,Θ m ) T. An estimator Estimation Theory Estimation theory deals with finding numerical values of interesting parameters from given set of data. We start with formulating a family of models that could describe how the data were

More information

Likelihoods. P (Y = y) = f(y). For example, suppose Y has a geometric distribution on 1, 2,... with parameter p. Then the pmf is

Likelihoods. P (Y = y) = f(y). For example, suppose Y has a geometric distribution on 1, 2,... with parameter p. Then the pmf is Likelihoods The distribution of a random variable Y with a discrete sample space (e.g. a finite sample space or the integers) can be characterized by its probability mass function (pmf): P (Y = y) = f(y).

More information

STAT 830 Bayesian Estimation

STAT 830 Bayesian Estimation STAT 830 Bayesian Estimation Richard Lockhart Simon Fraser University STAT 830 Fall 2011 Richard Lockhart (Simon Fraser University) STAT 830 Bayesian Estimation STAT 830 Fall 2011 1 / 23 Purposes of These

More information

INVERTED KUMARASWAMY DISTRIBUTION: PROPERTIES AND ESTIMATION

INVERTED KUMARASWAMY DISTRIBUTION: PROPERTIES AND ESTIMATION Pak. J. Statist. 2017 Vol. 33(1), 37-61 INVERTED KUMARASWAMY DISTRIBUTION: PROPERTIES AND ESTIMATION A. M. Abd AL-Fattah, A.A. EL-Helbawy G.R. AL-Dayian Statistics Department, Faculty of Commerce, AL-Azhar

More information

Statistical Theory MT 2006 Problems 4: Solution sketches

Statistical Theory MT 2006 Problems 4: Solution sketches Statistical Theory MT 006 Problems 4: Solution sketches 1. Suppose that X has a Poisson distribution with unknown mean θ. Determine the conjugate prior, and associate posterior distribution, for θ. Determine

More information

Math 152. Rumbos Fall Solutions to Assignment #12

Math 152. Rumbos Fall Solutions to Assignment #12 Math 52. umbos Fall 2009 Solutions to Assignment #2. Suppose that you observe n iid Bernoulli(p) random variables, denoted by X, X 2,..., X n. Find the LT rejection region for the test of H o : p p o versus

More information

Estimation of Quantiles

Estimation of Quantiles 9 Estimation of Quantiles The notion of quantiles was introduced in Section 3.2: recall that a quantile x α for an r.v. X is a constant such that P(X x α )=1 α. (9.1) In this chapter we examine quantiles

More information

Modern Methods of Data Analysis - WS 07/08

Modern Methods of Data Analysis - WS 07/08 Modern Methods of Data Analysis Lecture VIc (19.11.07) Contents: Maximum Likelihood Fit Maximum Likelihood (I) Assume N measurements of a random variable Assume them to be independent and distributed according

More information

Introduction to Bayesian Inference

Introduction to Bayesian Inference University of Pennsylvania EABCN Training School May 10, 2016 Bayesian Inference Ingredients of Bayesian Analysis: Likelihood function p(y φ) Prior density p(φ) Marginal data density p(y ) = p(y φ)p(φ)dφ

More information

inferences on stress-strength reliability from lindley distributions

inferences on stress-strength reliability from lindley distributions inferences on stress-strength reliability from lindley distributions D.K. Al-Mutairi, M.E. Ghitany & Debasis Kundu Abstract This paper deals with the estimation of the stress-strength parameter R = P (Y

More information

Module 22: Bayesian Methods Lecture 9 A: Default prior selection

Module 22: Bayesian Methods Lecture 9 A: Default prior selection Module 22: Bayesian Methods Lecture 9 A: Default prior selection Peter Hoff Departments of Statistics and Biostatistics University of Washington Outline Jeffreys prior Unit information priors Empirical

More information

ON THE FAILURE RATE ESTIMATION OF THE INVERSE GAUSSIAN DISTRIBUTION

ON THE FAILURE RATE ESTIMATION OF THE INVERSE GAUSSIAN DISTRIBUTION ON THE FAILURE RATE ESTIMATION OF THE INVERSE GAUSSIAN DISTRIBUTION ZHENLINYANGandRONNIET.C.LEE Department of Statistics and Applied Probability, National University of Singapore, 3 Science Drive 2, Singapore

More information

Estimation of Parameters of the Weibull Distribution Based on Progressively Censored Data

Estimation of Parameters of the Weibull Distribution Based on Progressively Censored Data International Mathematical Forum, 2, 2007, no. 41, 2031-2043 Estimation of Parameters of the Weibull Distribution Based on Progressively Censored Data K. S. Sultan 1 Department of Statistics Operations

More information

Suggested solutions to written exam Jan 17, 2012

Suggested solutions to written exam Jan 17, 2012 LINKÖPINGS UNIVERSITET Institutionen för datavetenskap Statistik, ANd 73A36 THEORY OF STATISTICS, 6 CDTS Master s program in Statistics and Data Mining Fall semester Written exam Suggested solutions to

More information

Lecture 25: Review. Statistics 104. April 23, Colin Rundel

Lecture 25: Review. Statistics 104. April 23, Colin Rundel Lecture 25: Review Statistics 104 Colin Rundel April 23, 2012 Joint CDF F (x, y) = P [X x, Y y] = P [(X, Y ) lies south-west of the point (x, y)] Y (x,y) X Statistics 104 (Colin Rundel) Lecture 25 April

More information

Moments of the Reliability, R = P(Y<X), As a Random Variable

Moments of the Reliability, R = P(Y<X), As a Random Variable International Journal of Computational Engineering Research Vol, 03 Issue, 8 Moments of the Reliability, R = P(Y

More information

Remarks on Improper Ignorance Priors

Remarks on Improper Ignorance Priors As a limit of proper priors Remarks on Improper Ignorance Priors Two caveats relating to computations with improper priors, based on their relationship with finitely-additive, but not countably-additive

More information

New Bayesian methods for model comparison

New Bayesian methods for model comparison Back to the future New Bayesian methods for model comparison Murray Aitkin murray.aitkin@unimelb.edu.au Department of Mathematics and Statistics The University of Melbourne Australia Bayesian Model Comparison

More information

The Jeffreys Prior. Yingbo Li MATH Clemson University. Yingbo Li (Clemson) The Jeffreys Prior MATH / 13

The Jeffreys Prior. Yingbo Li MATH Clemson University. Yingbo Li (Clemson) The Jeffreys Prior MATH / 13 The Jeffreys Prior Yingbo Li Clemson University MATH 9810 Yingbo Li (Clemson) The Jeffreys Prior MATH 9810 1 / 13 Sir Harold Jeffreys English mathematician, statistician, geophysicist, and astronomer His

More information

Patterns of Scalable Bayesian Inference Background (Session 1)

Patterns of Scalable Bayesian Inference Background (Session 1) Patterns of Scalable Bayesian Inference Background (Session 1) Jerónimo Arenas-García Universidad Carlos III de Madrid jeronimo.arenas@gmail.com June 14, 2017 1 / 15 Motivation. Bayesian Learning principles

More information

7. Estimation and hypothesis testing. Objective. Recommended reading

7. Estimation and hypothesis testing. Objective. Recommended reading 7. Estimation and hypothesis testing Objective In this chapter, we show how the election of estimators can be represented as a decision problem. Secondly, we consider the problem of hypothesis testing

More information

Bayesian Inference. Chapter 4: Regression and Hierarchical Models

Bayesian Inference. Chapter 4: Regression and Hierarchical Models Bayesian Inference Chapter 4: Regression and Hierarchical Models Conchi Ausín and Mike Wiper Department of Statistics Universidad Carlos III de Madrid Master in Business Administration and Quantitative

More information

Lecture 1: Introduction

Lecture 1: Introduction Principles of Statistics Part II - Michaelmas 208 Lecturer: Quentin Berthet Lecture : Introduction This course is concerned with presenting some of the mathematical principles of statistical theory. One

More information

Fall 2017 STAT 532 Homework Peter Hoff. 1. Let P be a probability measure on a collection of sets A.

Fall 2017 STAT 532 Homework Peter Hoff. 1. Let P be a probability measure on a collection of sets A. 1. Let P be a probability measure on a collection of sets A. (a) For each n N, let H n be a set in A such that H n H n+1. Show that P (H n ) monotonically converges to P ( k=1 H k) as n. (b) For each n

More information

Foundations of Statistical Inference

Foundations of Statistical Inference Foundations of Statistical Inference Julien Berestycki Department of Statistics University of Oxford MT 2016 Julien Berestycki (University of Oxford) SB2a MT 2016 1 / 20 Lecture 6 : Bayesian Inference

More information

32. STATISTICS. 32. Statistics 1

32. STATISTICS. 32. Statistics 1 32. STATISTICS 32. Statistics 1 Revised September 2009 by G. Cowan (RHUL). This chapter gives an overview of statistical methods used in high-energy physics. In statistics, we are interested in using a

More information

Approximate Bayesian computation for spatial extremes via open-faced sandwich adjustment

Approximate Bayesian computation for spatial extremes via open-faced sandwich adjustment Approximate Bayesian computation for spatial extremes via open-faced sandwich adjustment Ben Shaby SAMSI August 3, 2010 Ben Shaby (SAMSI) OFS adjustment August 3, 2010 1 / 29 Outline 1 Introduction 2 Spatial

More information

Introduction to Systems Analysis and Decision Making Prepared by: Jakub Tomczak

Introduction to Systems Analysis and Decision Making Prepared by: Jakub Tomczak Introduction to Systems Analysis and Decision Making Prepared by: Jakub Tomczak 1 Introduction. Random variables During the course we are interested in reasoning about considered phenomenon. In other words,

More information

STAT 730 Chapter 4: Estimation

STAT 730 Chapter 4: Estimation STAT 730 Chapter 4: Estimation Timothy Hanson Department of Statistics, University of South Carolina Stat 730: Multivariate Analysis 1 / 23 The likelihood We have iid data, at least initially. Each datum

More information

Approximating models. Nancy Reid, University of Toronto. Oxford, February 6.

Approximating models. Nancy Reid, University of Toronto. Oxford, February 6. Approximating models Nancy Reid, University of Toronto Oxford, February 6 www.utstat.utoronto.reid/research 1 1. Context Likelihood based inference model f(y; θ), log likelihood function l(θ; y) y = (y

More information

Bayesian Inference. Chapter 4: Regression and Hierarchical Models

Bayesian Inference. Chapter 4: Regression and Hierarchical Models Bayesian Inference Chapter 4: Regression and Hierarchical Models Conchi Ausín and Mike Wiper Department of Statistics Universidad Carlos III de Madrid Advanced Statistics and Data Mining Summer School

More information

BAYESIAN ESTIMATION OF THE EXPONENTI- ATED GAMMA PARAMETER AND RELIABILITY FUNCTION UNDER ASYMMETRIC LOSS FUNC- TION

BAYESIAN ESTIMATION OF THE EXPONENTI- ATED GAMMA PARAMETER AND RELIABILITY FUNCTION UNDER ASYMMETRIC LOSS FUNC- TION REVSTAT Statistical Journal Volume 9, Number 3, November 211, 247 26 BAYESIAN ESTIMATION OF THE EXPONENTI- ATED GAMMA PARAMETER AND RELIABILITY FUNCTION UNDER ASYMMETRIC LOSS FUNC- TION Authors: Sanjay

More information

Estimation for inverse Gaussian Distribution Under First-failure Progressive Hybird Censored Samples

Estimation for inverse Gaussian Distribution Under First-failure Progressive Hybird Censored Samples Filomat 31:18 (217), 5743 5752 https://doi.org/1.2298/fil1718743j Published by Faculty of Sciences and Mathematics, University of Niš, Serbia Available at: http://www.pmf.ni.ac.rs/filomat Estimation for

More information

Example: An experiment can either result in success or failure with probability θ and (1 θ) respectively. The experiment is performed independently

Example: An experiment can either result in success or failure with probability θ and (1 θ) respectively. The experiment is performed independently Chapter 3 Sufficient statistics and variance reduction Let X 1,X 2,...,X n be a random sample from a certain distribution with p.m/d.f fx θ. A function T X 1,X 2,...,X n = T X of these observations is

More information

Unbiased Estimation. Binomial problem shows general phenomenon. An estimator can be good for some values of θ and bad for others.

Unbiased Estimation. Binomial problem shows general phenomenon. An estimator can be good for some values of θ and bad for others. Unbiased Estimation Binomial problem shows general phenomenon. An estimator can be good for some values of θ and bad for others. To compare ˆθ and θ, two estimators of θ: Say ˆθ is better than θ if it

More information

Bayesian statistics, simulation and software

Bayesian statistics, simulation and software Module 10: Bayesian prediction and model checking Department of Mathematical Sciences Aalborg University 1/15 Prior predictions Suppose we want to predict future data x without observing any data x. Assume:

More information

Bayesian analysis of three parameter absolute. continuous Marshall-Olkin bivariate Pareto distribution

Bayesian analysis of three parameter absolute. continuous Marshall-Olkin bivariate Pareto distribution Bayesian analysis of three parameter absolute continuous Marshall-Olkin bivariate Pareto distribution Biplab Paul, Arabin Kumar Dey and Debasis Kundu Department of Mathematics, IIT Guwahati Department

More information

Statistical Methods in Particle Physics

Statistical Methods in Particle Physics Statistical Methods in Particle Physics Lecture 11 January 7, 2013 Silvia Masciocchi, GSI Darmstadt s.masciocchi@gsi.de Winter Semester 2012 / 13 Outline How to communicate the statistical uncertainty

More information

Fleet Maintenance Simulation With Insufficient Data

Fleet Maintenance Simulation With Insufficient Data Fleet Maintenance Simulation With Insufficient Data Zissimos P. Mourelatos Mechanical Engineering Department Oakland University mourelat@oakland.edu Ground Robotics Reliability Center (GRRC) Seminar 17

More information