SEQUENTIAL ESTIMATION OF A COMMON LOCATION PARAMETER OF TWO POPULATIONS


REVSTAT Statistical Journal, Volume 14, Number 3, June 2016

SEQUENTIAL ESTIMATION OF A COMMON LOCATION PARAMETER OF TWO POPULATIONS

Author: Agnieszka Stępień-Baran
Institute of Applied Mathematics and Mechanics, University of Warsaw, Banacha 2, Warszawa, Poland
A.Stepien-Baran@mimuw.edu.pl

Received: June 2014; Revised: January 2015; Accepted: January 2015

Abstract: The problem of sequentially estimating a common location parameter of two independent populations from the same distribution with an unknown location parameter and known but different scale parameters is considered in the case when the observations become available at random times. Certain classes of sequential estimation procedures are derived under a location invariant loss function and with the observation cost determined by convex functions of the stopping time and of the number of observations up to that time.

Key-Words: location parameter; location invariant loss function; minimum risk equivariant estimator; optimal stopping time; risk function.

AMS Subject Classification: 62L12, 62L15, 62F10.


1. INTRODUCTION

The paper concerns the invariant problem of sequentially estimating a common location parameter of two independent populations from the same distribution with an unknown location parameter and known but different scale parameters, in the special case when the observations arrive at random times. For example, in studying the effectiveness of experimental safety devices of mobile constructions, relevant data may become available only as a result of accidents. Medical data (such as data on drug abuse or an asymptomatic disease) can sometimes only be obtained when patients voluntarily seek help or are otherwise identified and examined, at random times. Other examples are data resulting from an undersea survey of containerized radioactive waste, from archeological discoveries, from market research, or from planning the assortment of production (when the orders come forward at random times).

The estimation problem of a common location of two independent populations has been extensively discussed in the literature. Rao and Reddy (1988) studied the estimation of the unknown common location parameter of two symmetric distributions with different scale parameters. They derived asymptotic distributions and the asymptotic relative efficiencies of the proposed estimators: the mean, the median, the average of the mean and the median, and the Hodges-Lehmann estimator. Baklizi (2004) considered estimation of the common location parameter of several exponentials; the proposed estimators were found to be effective in taking advantage of the available prior information. Farsipour and Asgharzadeh (2002) investigated the problem of estimating the common mean of two normal distributions. They derived a class of risk unbiased estimators which linearly combine the means of the two samples from both distributions.
Mitra and Sinha (2007) studied some aspects of the problem of estimation of a common mean of two normal populations from an asymptotic point of view. They also considered the Bayes estimate of the common mean under Jeffreys' prior. Chang et al. (2012) considered the problem of estimating the common mean of two normal distributions with unknown ordered variances. They gave a broad class of estimators which includes the estimators proposed by Nair (1982) and Elfessi and Pal (1992), and showed that these estimators stochastically dominate the estimators which do not take into account the order restriction on the variances, including the one given by Graybill and Deal (1959). Then they proposed a broad class of individual estimators of two ordered means when the unknown variances are ordered.

The problem of estimating with delayed observations was investigated by Starr et al. (1976), who considered the case of Bayes estimation of a mean of normally distributed observations with known variance. Some of their results were generalized by Magiera (1996), who dealt with estimation of the mean value

parameter of the exponential family of distributions. Jokiel-Rokita and Stępień (2009) studied the model with delayed observations for estimating a location parameter.

We consider the following model. Let the samples (X_1, ..., X_{n_1}) and (Y_1, ..., Y_{n_2}) be independent and have joint distributions P_θ with Lebesgue p.d.f.'s

    f((x_1 − θ)/σ_1, ..., (x_{n_1} − θ)/σ_1)   and   f((y_1 − θ)/σ_2, ..., (y_{n_2} − θ)/σ_2),

respectively, where f is known, σ_1, σ_2 > 0 are known and different scale parameters, and θ ∈ R is an unknown location parameter. The set of observations is bounded, i.e., the statistician can receive at most N = n_1 + n_2 observations. It is assumed that X_i is observed at time t_i, i = 1, ..., n_1, where t_1, ..., t_{n_1} are the values of the order statistics of positive i.i.d. random variables U_1, ..., U_{n_1}, which are realized before the observations X_1, ..., X_{n_1} are conducted and are independent of X_1, ..., X_{n_1}. Similarly, Y_i is observed at time s_i, i = 1, ..., n_2, where s_1, ..., s_{n_2} are the values of the order statistics of positive i.i.d. random variables V_1, ..., V_{n_2}, which are realized before the observations Y_1, ..., Y_{n_2} are conducted and are independent of Y_1, ..., Y_{n_2}. Furthermore, it is assumed that the samples (U_1, ..., U_{n_1}) and (V_1, ..., V_{n_2}) are independent. Let

(1.1)   k_1(t) = Σ_{i=1}^{n_1} 1_{[0,t]}(U_i)

and

(1.2)   k_2(t) = Σ_{i=1}^{n_2} 1_{[0,t]}(V_i)

denote the numbers of observations which have been made by time t ≥ 0 for the samples (X_1, ..., X_{n_1}) and (Y_1, ..., Y_{n_2}), respectively, and let

    F_{1,t} = σ{k_1(r), r ≤ t, X_1, ..., X_{k_1(t)}}   and   F_{2,t} = σ{k_2(r), r ≤ t, Y_1, ..., Y_{k_2(t)}}

be the information available at time t. The problem is to estimate the parameter θ. If observation is stopped at time t, the loss incurred is defined by

(1.3)   L_t(θ, d) := L(θ, d) + c_A k_1(t) + c_B k_2(t) + c_1(t) + c_2(t),

where L(θ, d) denotes the loss associated with estimation when θ is the true value of the parameter and d is the chosen estimate. The functions c_1(t) and c_2(t) represent the costs of observing the processes k_1(t) and k_2(t), respectively, up to time t; they are assumed to be differentiable, increasing and convex, with c_1(0) = 0 and c_2(0) = 0. The constants c_A ≥ 0 and c_B ≥ 0 are the costs of taking one observation X_i and Y_i, respectively.

The family {P_θ : θ ∈ R} is invariant under the location transformations x → x + α (y → y + α) with α ∈ R. Consequently, the decision problem is invariant under location transformations if and only if L(θ, a) = L(θ + α, a + α) for all α ∈ R, which is equivalent to

(1.4)   L(θ, a) = δ(a − θ)

for a Borel function δ(·) on R. An estimator d of the parameter θ is location equivariant if and only if

    d(X_1 + α, ..., X_{n_1} + α, Y_1 + α, ..., Y_{n_2} + α) = d(X_1, ..., X_{n_1}, Y_1, ..., Y_{n_2}) + α.

Suppose that we agree to take at least one observation. If we observe the process for t ≥ t_1 units of time, then the conditional expected loss, given k_1(t) and k_2(t), associated with an equivariant estimator d(X_{k_1(t)}, Y_{k_2(t)}) based on the random size samples X_{k_1(t)} = (X_1, ..., X_{k_1(t)}) and Y_{k_2(t)} = (Y_1, ..., Y_{k_2(t)}), is of the form

(1.5)   R_t(θ, d(X_{k_1(t)}, Y_{k_2(t)})) := E_θ[L_t(θ, d(X_{k_1(t)}, Y_{k_2(t)})) | k_1(t), k_2(t)]
                                          = h_1(k_1(t)) + h_2(k_2(t)) + c_1(t) + c_2(t),

where E_θ denotes expectation with respect to the conditional distribution given θ. The functions h_1 and h_2 depend only on the loss function δ. The form of the risk function R_t(θ, d) given by (1.5) follows from the fact that the risk of any equivariant estimator of the parameter θ in an invariant estimation problem is independent of θ (see, e.g., Lehmann and Casella (1998), Theorem 3.1.4).
Hence, if an equivariant estimator exists which minimizes this constant risk, it is called the minimum risk equivariant (MRE) estimator.

In Section 2 we present a method of finding a stopping time which minimizes the expected risk associated with an MRE estimator of the parameter θ over all stopping times. We consider the situation when the common distributions of the random variables U_1, ..., U_{n_1} and V_1, ..., V_{n_2}, respectively, which can be interpreted as the lifetimes of n_1 and n_2 objects, are known exactly. In Section 3 we apply the results of Section 2 to estimate a common location parameter of two normal distributions under the squared error loss and a LINEX loss function. Additionally, in Section 4 some illustrative simulations are given.
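The observation scheme above is easy to simulate. A minimal sketch, assuming Exp(1) lifetimes for both samples (an illustrative choice only; any positive i.i.d. lifetimes fit the model), which evaluates the counting processes (1.1) and (1.2) on a time grid:

```python
import numpy as np

def simulate_counting_processes(n1, n2, t_grid, rng):
    """Simulate the arrival model of Section 1: X_i becomes available at the
    i-th order statistic of U_1,...,U_{n1}, and Y_i at the i-th order
    statistic of V_1,...,V_{n2}.  Returns k_1(t) and k_2(t) of (1.1)-(1.2)
    evaluated on t_grid."""
    U = np.sort(rng.exponential(1.0, n1))  # arrival times of the X-sample
    V = np.sort(rng.exponential(1.0, n2))  # arrival times of the Y-sample
    k1 = np.searchsorted(U, t_grid, side="right")  # #{i : U_i <= t}
    k2 = np.searchsorted(V, t_grid, side="right")  # #{i : V_i <= t}
    return k1, k2

rng = np.random.default_rng(0)
t_grid = np.linspace(0.0, 5.0, 11)
k1, k2 = simulate_counting_processes(30, 50, t_grid, rng)
# k1 and k2 are non-decreasing step functions, bounded by n1 and n2
```

The processes are piecewise constant and jump by one at each arrival, which is what makes the Markov-chain analysis of Section 2 tractable.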

2. THE OPTIMAL STOPPING TIME

Suppose that in the problem of estimating the parameter θ with the loss function L(θ, d) there exists an MRE estimator, denoted by d*. We look for a stopping time τ* which minimizes the expected risk

(2.1)   E[R_τ(θ, d*(X_{k_1(τ)}, Y_{k_2(τ)}))] = E[h_1(k_1(τ)) + h_2(k_2(τ)) + c_1(τ) + c_2(τ)]

over all stopping times τ ≥ t_1, τ ∈ T, where T denotes the class of (F_{1,t}, F_{2,t})-measurable stopping times. Such a stopping time will be called an optimal stopping time. Then we construct an optimal sequential estimation procedure of the form (τ*, d*(X_{k_1(τ*)}, Y_{k_2(τ*)})).

Let the random variables U_1, ..., U_{n_1} be independent and have a common known distribution function G_1. Suppose that G_1(0) = 0, G_1(t) > 0 for t > 0, G_1 is absolutely continuous with density g_1, and g_1 is the right-hand derivative of G_1 on (0, ∞). Denote the class of such G_1 by 𝒢_1. Let ζ_1 = sup{t : G_1(t) < 1}, and let ρ_1(t) = g_1(t)[1 − G_1(t)]^{−1}, 0 ≤ t < ζ_1, denote the failure rate. Under the above assumptions the process k_1(t), given by (1.1), is a nonstationary Markov chain with respect to F_{1,t}, 0 ≤ t ≤ ζ_1 (see Starr et al. (1976)). The random variables V_1, ..., V_{n_2} satisfy the analogous assumptions. Namely, let the random variables V_1, ..., V_{n_2} be independent and have a common known distribution function G_2. Suppose that G_2(0) = 0, G_2(t) > 0 for t > 0, G_2 is absolutely continuous with density g_2, and g_2 is the right-hand derivative of G_2 on (0, ∞). Denote the class of such G_2 by 𝒢_2. Let ζ_2 = sup{t : G_2(t) < 1}, and let ρ_2(t) = g_2(t)[1 − G_2(t)]^{−1}, 0 ≤ t < ζ_2, denote the failure rate. Under the above assumptions the process k_2(t), given by (1.2), is a nonstationary Markov chain with respect to F_{2,t}, 0 ≤ t ≤ ζ_2.

The infinitesimal operator A_{1,t} of the process k_1(t) at h̄_1 is defined by

(2.2)   A_{1,t} h̄_1(k) := lim_{s→0+} s^{−1} E[h̄_1(k_1(t + s)) − h̄_1(k_1(t)) | k_1(t) = k].
The domain D_{A_{1,t}} of A_{1,t} is the set of all bounded Borel measurable functions h̄_1 on the set {0, 1, ..., n_1} for which the limit in (2.2) exists boundedly pointwise for every k ∈ {0, 1, ..., n_1}. The infinitesimal operator A_{2,t} of the process k_2(t) is defined analogously.

To determine an optimal stopping time we use the following lemma, which provides the form of the infinitesimal operator A_{1,t} of the process k_1(t), given by (1.1).

Lemma 2.1. Let h̄_1 be a given real-valued function on the set {0, 1, ..., n_1}. The infinitesimal operator A_{1,t} of the process k_1(t), given by (1.1), at h̄_1 is of the form

    A_{1,t} h̄_1(k) = (n_1 − k)[h̄_1(k + 1) − h̄_1(k)] ρ_1(t).
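Under Exp(1) lifetimes the failure rate is constant, ρ_1(t) = 1, and the closed form in Lemma 2.1 can be checked numerically. A minimal sketch (the function h and all parameter values below are arbitrary illustrations, not taken from the paper): by memorylessness, conditionally on k_1(t) = k the number of arrivals in (t, t+s] is Binomial(n_1 − k, 1 − e^{−s}).

```python
import numpy as np

def operator_formula(h, k, n1, rho_t):
    # Lemma 2.1: A_{1,t} h(k) = (n_1 - k) [h(k+1) - h(k)] rho_1(t)
    return (n1 - k) * (h(k + 1) - h(k)) * rho_t

def operator_mc(h, k, n1, s, reps, rng):
    """Monte Carlo estimate of s^{-1} E[h(k_1(t+s)) - h(k) | k_1(t) = k] for
    Exp(1) lifetimes (so rho_1(t) = 1): by memorylessness, each of the
    n_1 - k units still unobserved at t arrives in (t, t+s] independently
    with probability 1 - exp(-s)."""
    arrivals = rng.binomial(n1 - k, 1.0 - np.exp(-s), size=reps)
    increments = np.array([h(k + a) - h(k) for a in arrivals])
    return increments.mean() / s

h = lambda m: 1.0 / m   # an arbitrary illustrative h, defined for m >= 1
n1, k = 30, 10
exact = operator_formula(h, k, n1, rho_t=1.0)
approx = operator_mc(h, k, n1, s=1e-3, reps=200_000, rng=np.random.default_rng(1))
# for small s the Monte Carlo estimate is close to the closed form
```

For a decreasing h (as in the risk functions of Section 3) the operator is negative: further observations decrease the estimation-risk component.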

Proof: Fix k ∈ {0, 1, ..., n_1}. It is clear that

    E[h̄_1(k_1(t + s)) − h̄_1(k_1(t)) | k_1(t) = k]
      = Σ_{i=k+1}^{n_1} [h̄_1(i) − h̄_1(k)] P(k_1(t + s) = i | k_1(t) = k)
      = [h̄_1(k + 1) − h̄_1(k)] P(k_1(t + s) = k + 1 | k_1(t) = k) + Σ_{i=k+2}^{n_1} [h̄_1(i) − h̄_1(k)] P(k_1(t + s) = i | k_1(t) = k)
      = [h̄_1(k + 1) − h̄_1(k)] (n_1 − k) [(G_1(t + s) − G_1(t))/(1 − G_1(t))] [(1 − G_1(t + s))/(1 − G_1(t))]^{n_1 − k − 1}
        + Σ_{i=k+2}^{n_1} [h̄_1(i) − h̄_1(k)] P(k_1(t + s) = i | k_1(t) = k)
      ≤ [h̄_1(k + 1) − h̄_1(k)] (n_1 − k) [(G_1(t + s) − G_1(t))/(1 − G_1(t))] [(1 − G_1(t + s))/(1 − G_1(t))]^{n_1 − k − 1}
        + 2 sup_{i ≤ n_1} |h̄_1(i)| P(k_1(t + s) ≥ k + 2 | k_1(t) = k)
      = [h̄_1(k + 1) − h̄_1(k)] (n_1 − k) [(G_1(t + s) − G_1(t))/(1 − G_1(t))] [(1 − G_1(t + s))/(1 − G_1(t))]^{n_1 − k − 1}
        + 2 sup_{i ≤ n_1} |h̄_1(i)| {1 − [(1 − G_1(t + s))/(1 − G_1(t))]^{n_1 − k} − (n_1 − k) [G_1(t + s) − G_1(t)] [1 − G_1(t + s)]^{n_1 − k − 1} / [1 − G_1(t)]^{n_1 − k}}.

Now it is easy to see that

    lim_{s→0+} s^{−1} E[h̄_1(k_1(t + s)) − h̄_1(k_1(t)) | k_1(t) = k] = (n_1 − k)[h̄_1(k + 1) − h̄_1(k)] ρ_1(t),

and the lemma is proved.

The infinitesimal operator A_{2,t} of the process k_2(t) is calculated analogously, and we have

    A_{2,t} h̄_2(k) = (n_2 − k)[h̄_2(k + 1) − h̄_2(k)] ρ_2(t).

Let h̄_1(k) = h_1(k) for k = 1, ..., n_1 and h̄_1(0) = 0, and let h̄_2(k) = h_2(k) for k = 1, ..., n_2 and h̄_2(0) = 0. The following theorem determines the optimal stopping time τ* for a large class of possible h_1 and h_2.

Theorem 2.1. Suppose that G_1 ∈ 𝒢_1 has non-increasing failure rate ρ_1, G_2 ∈ 𝒢_2 has non-increasing failure rate ρ_2, and the functions h_1(k) and h_2(k) in

formula (1.5) are such that h_1(k) − h_1(k + 1) is non-increasing for k ∈ {1, ..., n_1 − 1} and h_2(k) − h_2(k + 1) is non-increasing for k ∈ {1, ..., n_2 − 1}. Then the stopping time

(2.3)   τ* = inf{t ≥ t_1 : A_{1,t} h̄_1(k_1(t)) + A_{2,t} h̄_2(k_2(t)) + c_1'(t) + c_2'(t) ≥ 0}
          = inf{t ≥ t_1 : (n_1 − k_1(t))[h_1(k_1(t)) − h_1(k_1(t) + 1)] ρ_1(t) + (n_2 − k_2(t))[h_2(k_2(t)) − h_2(k_2(t) + 1)] ρ_2(t) ≤ c_1'(t) + c_2'(t)}

minimizes the expected risk given by (2.1) over all stopping times τ ≥ t_1, τ ∈ T.

Proof: The proof follows Starr et al. (1976), Theorem 2.1. Using Dynkin's formula, we have

    E[h̄_1(k_1(τ)) + h̄_2(k_2(τ)) + c_1(τ) + c_2(τ)] = E{∫_0^τ [A_{1,t} h̄_1(k_1(t)) + A_{2,t} h̄_2(k_2(t)) + c_1'(t) + c_2'(t)] dt}

for all stopping times τ. In particular, for τ* ≥ t_1 we have

(2.4)   E[h_1(k_1(τ*)) + h_2(k_2(τ*)) + c_1(τ*) + c_2(τ*)] − E[h_1(k_1(τ)) + h_2(k_2(τ)) + c_1(τ) + c_2(τ)]
          = E[h̄_1(k_1(τ*)) + h̄_2(k_2(τ*)) + c_1(τ*) + c_2(τ*)] − E[h̄_1(k_1(τ)) + h̄_2(k_2(τ)) + c_1(τ) + c_2(τ)]
          = E{∫_τ^{τ*} [A_{1,t} h̄_1(k_1(t)) + A_{2,t} h̄_2(k_2(t)) + c_1'(t) + c_2'(t)] dt · 1(τ < τ*)}
            − E{∫_{τ*}^{τ} [A_{1,t} h̄_1(k_1(t)) + A_{2,t} h̄_2(k_2(t)) + c_1'(t) + c_2'(t)] dt · 1(τ > τ*)}.

Taking into account the assumptions concerning the functions h_1(k), h_2(k), c_1(t), c_2(t), ρ_1(t) and ρ_2(t), we obtain that (2.4) is less than or equal to zero. Thus, the stopping time τ* is optimal.

3. SPECIAL CASE

In this section we use the results of Section 2 to estimate a common location parameter of two normal distributions under the squared error loss

(3.1)   L(θ, d) = (d − θ)^2

and under a LINEX loss function

(3.2)   L(θ, d) = exp[a(d − θ)] − a(d − θ) − 1,

where a ≠ 0. Taking MRE estimators as optimal estimators of a location parameter of two normal distributions, we construct optimal sequential estimation procedures under the aforementioned loss functions in the model in which observations become available at random times.

Let X_i, i = 1, ..., n_1, be independent random variables from the normal distribution N(θ, σ_1^2) and Y_i, i = 1, ..., n_2, be independent random variables from the normal distribution N(θ, σ_2^2), where θ ∈ R is an unknown location parameter and σ_1, σ_2 > 0 are known. We assume that the samples (X_1, ..., X_{n_1}) and (Y_1, ..., Y_{n_2}) are independent and σ_1 ≠ σ_2. Let

    X̄_{k_1(t)} = (1/k_1(t)) Σ_{i=1}^{k_1(t)} X_i   and   Ȳ_{k_2(t)} = (1/k_2(t)) Σ_{i=1}^{k_2(t)} Y_i

denote the sample means based on the random size samples X_{k_1(t)} = (X_1, ..., X_{k_1(t)}) and Y_{k_2(t)} = (Y_1, ..., Y_{k_2(t)}), respectively, where k_1(t) is given by (1.1) and k_2(t) is given by (1.2). The following theorem provides the MRE estimator of the parameter θ and the corresponding risk function under the loss functions given by (3.1) and (3.2), respectively.

Theorem 3.1. For any stopping time t:

(a) If the loss function is given by (3.1), then the MRE estimator of the parameter θ is

    d_S*(X_{k_1(t)}, Y_{k_2(t)}) = ω X̄_{k_1(t)} + (1 − ω) Ȳ_{k_2(t)}

with ω ∈ (0, 1), and the risk function of the estimator d_S* has the form

    R_t(θ, d_S*) = ω^2 σ_1^2 / k_1(t) + (1 − ω)^2 σ_2^2 / k_2(t) + c_A k_1(t) + c_B k_2(t) + c_1(t) + c_2(t).

(b) If the loss function is given by (3.2), then the MRE estimator of the parameter θ is

    d_L*(X_{k_1(t)}, Y_{k_2(t)}) = ω [X̄_{k_1(t)} − a σ_1^2 / (2 k_1(t))] + (1 − ω) [Ȳ_{k_2(t)} − a σ_2^2 / (2 k_2(t))] + ω(1 − ω) (a/2) [σ_1^2 / k_1(t) + σ_2^2 / k_2(t)]

with ω ∈ (0, 1), and the risk function of the estimator d_L* has the form

    R_t(θ, d_L*) = ω^2 a^2 σ_1^2 / (2 k_1(t)) + (1 − ω)^2 a^2 σ_2^2 / (2 k_2(t)) + c_A k_1(t) + c_B k_2(t) + c_1(t) + c_2(t).
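Both estimators of Theorem 3.1 are cheap to compute once the k_1(t) and k_2(t) observations are in hand. A sketch (ω, a, θ and the scale parameters below are arbitrary illustrative values):

```python
import numpy as np

def d_S(x, y, omega):
    """MRE estimator under the squared error loss (3.1), Theorem 3.1(a)."""
    return omega * np.mean(x) + (1 - omega) * np.mean(y)

def d_L(x, y, omega, a, sigma1, sigma2):
    """MRE estimator under the LINEX loss (3.2), Theorem 3.1(b)."""
    v1 = sigma1**2 / len(x)   # sigma_1^2 / k_1(t)
    v2 = sigma2**2 / len(y)   # sigma_2^2 / k_2(t)
    return (omega * (np.mean(x) - a * v1 / 2)
            + (1 - omega) * (np.mean(y) - a * v2 / 2)
            + omega * (1 - omega) * (a / 2) * (v1 + v2))

rng = np.random.default_rng(2)
theta, s1, s2 = 1.0, 1.0, 5.0
x = rng.normal(theta, s1, size=30)   # k_1(t) = 30 observed X's
y = rng.normal(theta, s2, size=50)   # k_2(t) = 50 observed Y's
est_S = d_S(x, y, omega=0.5)
est_L = d_L(x, y, omega=0.5, a=2.0, sigma1=s1, sigma2=s2)
# algebraically, d_S - d_L = (a/2)[omega^2 v1 + (1-omega)^2 v2] > 0 for a > 0
```

Collecting the constant terms shows d_L* = d_S* − (a/2)[ω^2 σ_1^2/k_1(t) + (1 − ω)^2 σ_2^2/k_2(t)], so for a > 0 the LINEX estimator is shifted downward to guard against overestimation.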

Proof: The forms of the MRE estimators d_S* and d_L* are obtained from the general formula for the MRE estimators of the location parameter under the loss function (1.4) (see, e.g., Shao (2003), Theorem 4.5). The formulas for the risk functions R_t(θ, d_S*) and R_t(θ, d_L*) follow from straightforward calculations.

On the basis of Theorems 2.1 and 3.1 we construct optimal sequential estimation procedures of the form (τ*, d*(X_{k_1(τ*)}, Y_{k_2(τ*)})), where τ* is defined by (2.3) and d* is the corresponding sequential MRE estimator of θ based on the random size samples X_{k_1(τ*)} and Y_{k_2(τ*)}. The next theorem determines the optimal sequential estimation procedure under the loss function L(θ, d) given by (3.1) and (3.2), respectively.

Theorem 3.2. Suppose that G_1 ∈ 𝒢_1 has non-increasing failure rate ρ_1 and G_2 ∈ 𝒢_2 has non-increasing failure rate ρ_2.

(a) Under the loss function L_t(θ, d) given by (1.3) with L(θ, d) of the form (3.1), the sequential estimation procedure (τ_S*, d_S*(X_{k_1(τ_S*)}, Y_{k_2(τ_S*)})), where

    τ_S* = inf{t ≥ t_1 : (n_1 − k_1(t)) [ω^2 σ_1^2 / k_1(t) − ω^2 σ_1^2 / (k_1(t) + 1) − c_A] ρ_1(t)
                         + (n_2 − k_2(t)) [(1 − ω)^2 σ_2^2 / k_2(t) − (1 − ω)^2 σ_2^2 / (k_2(t) + 1) − c_B] ρ_2(t) ≤ c_1'(t) + c_2'(t)}

and

    d_S*(X_{k_1(τ_S*)}, Y_{k_2(τ_S*)}) = ω X̄_{k_1(τ_S*)} + (1 − ω) Ȳ_{k_2(τ_S*)},

is optimal.

(b) Under the loss function L_t(θ, d) given by (1.3) with L(θ, d) of the form (3.2), the sequential estimation procedure (τ_L*, d_L*(X_{k_1(τ_L*)}, Y_{k_2(τ_L*)})), where

    τ_L* = inf{t ≥ t_1 : (n_1 − k_1(t)) [ω^2 a^2 σ_1^2 / (2 k_1(t)) − ω^2 a^2 σ_1^2 / (2(k_1(t) + 1)) − c_A] ρ_1(t)
                         + (n_2 − k_2(t)) [(1 − ω)^2 a^2 σ_2^2 / (2 k_2(t)) − (1 − ω)^2 a^2 σ_2^2 / (2(k_2(t) + 1)) − c_B] ρ_2(t) ≤ c_1'(t) + c_2'(t)}

and

    d_L*(X_{k_1(τ_L*)}, Y_{k_2(τ_L*)}) = ω [X̄_{k_1(τ_L*)} − a σ_1^2 / (2 k_1(τ_L*))] + (1 − ω) [Ȳ_{k_2(τ_L*)} − a σ_2^2 / (2 k_2(τ_L*))] + ω(1 − ω) (a/2) [σ_1^2 / k_1(τ_L*) + σ_2^2 / k_2(τ_L*)],

is optimal.
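The rule τ_S* of Theorem 3.2(a) is straightforward to implement for concrete choices of G_1, G_2 and the cost functions. The sketch below assumes Exp(1) lifetimes for both samples (ρ_1 = ρ_2 = 1) and linear costs c_1(t) = c_2(t) = t, so c_1'(t) + c_2'(t) = 2; as a further simplification of this sketch, the scan starts once each sample has at least one observation, so that both sample means exist at the stopping time. All parameter values are illustrative.

```python
import numpy as np

def tau_S(U, V, omega, s1, s2, cA, cB):
    """Sketch of tau_S* from Theorem 3.2(a) under the assumptions stated
    above.  With constant failure rates and constant cost derivatives the
    left-hand side of the rule changes only at arrival epochs, so it is
    enough to scan those epochs."""
    n1, n2 = len(U), len(V)
    g1 = omega**2 * s1**2            # omega^2 sigma_1^2
    g2 = (1 - omega)**2 * s2**2      # (1-omega)^2 sigma_2^2
    Us, Vs = np.sort(U), np.sort(V)
    events = np.sort(np.concatenate([Us, Vs]))
    start = max(Us[0], Vs[0])        # at least one observation from each sample
    for t in events[events >= start]:
        k1 = int(np.searchsorted(Us, t, side="right"))
        k2 = int(np.searchsorted(Vs, t, side="right"))
        lhs = ((n1 - k1) * (g1 / k1 - g1 / (k1 + 1) - cA)
               + (n2 - k2) * (g2 / k2 - g2 / (k2 + 1) - cB))
        if lhs <= 2.0:               # c_1'(t) + c_2'(t) = 2
            return t, k1, k2
    return events[-1], n1, n2        # not reached: lhs = 0 at the last epoch

rng = np.random.default_rng(3)
U = rng.exponential(1.0, 30)
V = rng.exponential(1.0, 50)
t_stop, k1, k2 = tau_S(U, V, omega=0.5, s1=1.0, s2=5.0, cA=0.01, cB=0.01)
# the procedure then reports d_S* computed from the first k1 X's and k2 Y's
```

The left-hand side shrinks as observations accumulate while the marginal observation cost stays constant, so the rule stops once the expected risk reduction from waiting no longer covers the cost of waiting.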

Proof: We have to show that the assumptions of Theorem 2.1 are satisfied, i.e., that the functions h_1(k) − h_1(k + 1) and h_2(k) − h_2(k + 1) are non-increasing on the sets {1, ..., n_1 − 1} and {1, ..., n_2 − 1}, respectively. Hence, we need to verify the conditions 2 h_1(k + 1) − h_1(k) − h_1(k + 2) ≤ 0 and 2 h_2(k + 1) − h_2(k) − h_2(k + 2) ≤ 0, which are equivalent to h_1(k + 1) ≤ [h_1(k) + h_1(k + 2)]/2 and h_2(k + 1) ≤ [h_2(k) + h_2(k + 2)]/2. This can be reduced to the verification that h_1 and h_2 are convex on the intervals [1, n_1 − 1] and [1, n_2 − 1], respectively. It is easy to see that

(a) h_1''(k) = 2 ω^2 σ_1^2 / k^3, h_2''(k) = 2 (1 − ω)^2 σ_2^2 / k^3, and h_1''(k) > 0, h_2''(k) > 0 for k ≥ 1;

(b) h_1''(k) = ω^2 a^2 σ_1^2 / k^3, h_2''(k) = (1 − ω)^2 a^2 σ_2^2 / k^3, and h_1''(k) > 0, h_2''(k) > 0 for k ≥ 1.

4. SIMULATION RESULTS

In this section we present some results of the numerical study. The first table contains the results of the simulation study for X_1, ..., X_{n_1} ~ N(0, 1), n_1 = 30, and Y_1, ..., Y_{n_2} ~ N(0, 25), n_2 = 50: the means of τ_S*, d_S*, τ_L* and d_L* for a = 2, over 1000 replications, when ω = 0.5, ρ_1(t) = 1 (U_i ~ E(1)), c_1(t) = t, and ρ_2(t) = [2√(3t)]^{−1} (V_i ~ We(1/2, 3)), c_2(t) = e^t − 1.

[Table: columns c_A, c_B, Mean(τ_S*), Mean(d_S*), Mean(τ_L*), Mean(d_L*); numerical entries not recoverable from the source.]

The second table contains the results of the simulation study for X_1, ..., X_{n_1} ~ N(0, 1), n_1 = 30, and Y_1, ..., Y_{n_2} ~ N(0, 25), n_2 = 50: the means of τ_S*, d_S*, τ_L* and d_L* for a = 2, over 1000 replications, when ω = 0.5, ρ_1(t) = [2√(3t)]^{−1} (U_i ~ We(1/2, 3)), c_1(t) = e^t − 1, and ρ_2(t) = 1 (V_i ~ E(1)), c_2(t) = t.

[Table: columns c_A, c_B, Mean(τ_S*), Mean(d_S*), Mean(τ_L*), Mean(d_L*); numerical entries not recoverable from the source.]

The third table contains the results of the simulation study for X_1, ..., X_{n_1} ~ N(0, 1), n_1 = 30, and Y_1, ..., Y_{n_2} ~ N(0, 25), n_2 = 50: the means of τ_S*, d_S*, τ_L* and d_L* for a = 2, over 1000 replications, when ω = 0.75, ρ_1(t) = [2√(3t)]^{−1} (U_i ~ We(1/2, 3)), c_1(t) = t, and ρ_2(t) = 1 (V_i ~ E(1)), c_2(t) = e^t − 1.

[Table: columns c_A, c_B, Mean(τ_S*), Mean(d_S*), Mean(τ_L*), Mean(d_L*); numerical entries not recoverable from the source.]

The simulation results above are consistent with expectations: both procedures work properly. In the case of the LINEX loss function the decision function is biased; nevertheless, it is the MRE estimator because ω is fixed. The procedure is applicable especially when one sample is preferred over the other.

REFERENCES

[1] Rao, M.S. and Reddy, M.K. (1988). Estimation of a common location parameter of two distributions, Aligarh J. Statist., 8.
[2] Baklizi, A. (2004). Shrinkage estimation of the common location parameter of several exponentials, Comm. Statist. Simulation Comput., 33(2).
[3] Farsipour, N.S. and Asgharzadeh, A. (2002). On the admissibility of estimators of the common mean of two normal populations under symmetric and asymmetric loss functions, South African Statist. J., 36(1).
[4] Mitra, P.K. and Sinha, B.K. (2007). On some aspects of estimation of a common mean of two independent normal populations, J. Statist. Plann. Inference, 137(1).
[5] Chang, Y.; Oono, Y. and Shinozaki, N. (2012). Improved estimators for the common mean and ordered means of two normal distributions with ordered variances, J. Statist. Plann. Inference, 142(9).
[6] Nair, K.A. (1982). An estimator of the common mean of two normal populations, J. Statist. Plann. Inference, 6(2).
[7] Elfessi, A. and Pal, N. (1992). A note on the common mean of two normal populations with order restricted variances, Comm. Statist. Theory Methods, 21(11).
[8] Graybill, F.A. and Deal, R.B. (1959). Combining unbiased estimators, Biometrics, 15.
[9] Starr, N.; Wardrop, R. and Woodroofe, M. (1976). Estimating a mean from delayed observations, Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 35(2).

[10] Magiera, R. (1996). On a class of sequential estimation problems for one-parameter exponential families, Sankhyā Ser. A, 58(1).
[11] Jokiel-Rokita, A. and Stępień, A. (2009). Sequential estimation of a location parameter from delayed observations, Statist. Papers, 50(2).
[12] Lehmann, E.L. and Casella, G. (1998). Theory of Point Estimation, Springer-Verlag, New York.
[13] Shao, J. (2003). Mathematical Statistics, Springer-Verlag, New York.


More information

A NEW PROOF OF THE WIENER HOPF FACTORIZATION VIA BASU S THEOREM

A NEW PROOF OF THE WIENER HOPF FACTORIZATION VIA BASU S THEOREM J. Appl. Prob. 49, 876 882 (2012 Printed in England Applied Probability Trust 2012 A NEW PROOF OF THE WIENER HOPF FACTORIZATION VIA BASU S THEOREM BRIAN FRALIX and COLIN GALLAGHER, Clemson University Abstract

More information

Econometrics I, Estimation

Econometrics I, Estimation Econometrics I, Estimation Department of Economics Stanford University September, 2008 Part I Parameter, Estimator, Estimate A parametric is a feature of the population. An estimator is a function of the

More information

Let us first identify some classes of hypotheses. simple versus simple. H 0 : θ = θ 0 versus H 1 : θ = θ 1. (1) one-sided

Let us first identify some classes of hypotheses. simple versus simple. H 0 : θ = θ 0 versus H 1 : θ = θ 1. (1) one-sided Let us first identify some classes of hypotheses. simple versus simple H 0 : θ = θ 0 versus H 1 : θ = θ 1. (1) one-sided H 0 : θ θ 0 versus H 1 : θ > θ 0. (2) two-sided; null on extremes H 0 : θ θ 1 or

More information

Lecture 8: Information Theory and Statistics

Lecture 8: Information Theory and Statistics Lecture 8: Information Theory and Statistics Part II: Hypothesis Testing and I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw December 23, 2015 1 / 50 I-Hsiang

More information

1. Fisher Information

1. Fisher Information 1. Fisher Information Let f(x θ) be a density function with the property that log f(x θ) is differentiable in θ throughout the open p-dimensional parameter set Θ R p ; then the score statistic (or score

More information

McGill University. Faculty of Science. Department of Mathematics and Statistics. Part A Examination. Statistics: Theory Paper

McGill University. Faculty of Science. Department of Mathematics and Statistics. Part A Examination. Statistics: Theory Paper McGill University Faculty of Science Department of Mathematics and Statistics Part A Examination Statistics: Theory Paper Date: 10th May 2015 Instructions Time: 1pm-5pm Answer only two questions from Section

More information

Lecture 12 November 3

Lecture 12 November 3 STATS 300A: Theory of Statistics Fall 2015 Lecture 12 November 3 Lecturer: Lester Mackey Scribe: Jae Hyuck Park, Christian Fong Warning: These notes may contain factual and/or typographic errors. 12.1

More information

Lecture 7 Introduction to Statistical Decision Theory

Lecture 7 Introduction to Statistical Decision Theory Lecture 7 Introduction to Statistical Decision Theory I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw December 20, 2016 1 / 55 I-Hsiang Wang IT Lecture 7

More information

System Identification, Lecture 4

System Identification, Lecture 4 System Identification, Lecture 4 Kristiaan Pelckmans (IT/UU, 2338) Course code: 1RT880, Report code: 61800 - Spring 2016 F, FRI Uppsala University, Information Technology 13 April 2016 SI-2016 K. Pelckmans

More information

Classical and Bayesian inference

Classical and Bayesian inference Classical and Bayesian inference AMS 132 Claudia Wehrhahn (UCSC) Classical and Bayesian inference January 8 1 / 11 The Prior Distribution Definition Suppose that one has a statistical model with parameter

More information

David Giles Bayesian Econometrics

David Giles Bayesian Econometrics David Giles Bayesian Econometrics 1. General Background 2. Constructing Prior Distributions 3. Properties of Bayes Estimators and Tests 4. Bayesian Analysis of the Multiple Regression Model 5. Bayesian

More information

Information and Credit Risk

Information and Credit Risk Information and Credit Risk M. L. Bedini Université de Bretagne Occidentale, Brest - Friedrich Schiller Universität, Jena Jena, March 2011 M. L. Bedini (Université de Bretagne Occidentale, Brest Information

More information

Gamma-admissibility of generalized Bayes estimators under LINEX loss function in a non-regular family of distributions

Gamma-admissibility of generalized Bayes estimators under LINEX loss function in a non-regular family of distributions Hacettepe Journal of Mathematics Statistics Volume 44 (5) (2015), 1283 1291 Gamma-admissibility of generalized Bayes estimators under LINEX loss function in a non-regular family of distributions SH. Moradi

More information

Evaluating the Performance of Estimators (Section 7.3)

Evaluating the Performance of Estimators (Section 7.3) Evaluating the Performance of Estimators (Section 7.3) Example: Suppose we observe X 1,..., X n iid N(θ, σ 2 0 ), with σ2 0 known, and wish to estimate θ. Two possible estimators are: ˆθ = X sample mean

More information

Statistics & Data Sciences: First Year Prelim Exam May 2018

Statistics & Data Sciences: First Year Prelim Exam May 2018 Statistics & Data Sciences: First Year Prelim Exam May 2018 Instructions: 1. Do not turn this page until instructed to do so. 2. Start each new question on a new sheet of paper. 3. This is a closed book

More information

Statistical Measures of Uncertainty in Inverse Problems

Statistical Measures of Uncertainty in Inverse Problems Statistical Measures of Uncertainty in Inverse Problems Workshop on Uncertainty in Inverse Problems Institute for Mathematics and Its Applications Minneapolis, MN 19-26 April 2002 P.B. Stark Department

More information

Qualifying Exam in Probability and Statistics.

Qualifying Exam in Probability and Statistics. Part 1: Sample Problems for the Elementary Section of Qualifying Exam in Probability and Statistics https://www.soa.org/files/edu/edu-exam-p-sample-quest.pdf Part 2: Sample Problems for the Advanced Section

More information

Lecture 7 October 13

Lecture 7 October 13 STATS 300A: Theory of Statistics Fall 2015 Lecture 7 October 13 Lecturer: Lester Mackey Scribe: Jing Miao and Xiuyuan Lu 7.1 Recap So far, we have investigated various criteria for optimal inference. We

More information

Contents of this Document [ntc5]

Contents of this Document [ntc5] Contents of this Document [ntc5] 5. Random Variables: Applications Reconstructing probability distributions [nex14] Probability distribution with no mean value [nex95] Variances and covariances [nex20]

More information

The comparative studies on reliability for Rayleigh models

The comparative studies on reliability for Rayleigh models Journal of the Korean Data & Information Science Society 018, 9, 533 545 http://dx.doi.org/10.7465/jkdi.018.9..533 한국데이터정보과학회지 The comparative studies on reliability for Rayleigh models Ji Eun Oh 1 Joong

More information

27 Superefficiency. A. W. van der Vaart Introduction

27 Superefficiency. A. W. van der Vaart Introduction 27 Superefficiency A. W. van der Vaart 1 ABSTRACT We review the history and several proofs of the famous result of Le Cam that a sequence of estimators can be superefficient on at most a Lebesgue null

More information

arxiv: v2 [math.pr] 4 Feb 2009

arxiv: v2 [math.pr] 4 Feb 2009 Optimal detection of homogeneous segment of observations in stochastic sequence arxiv:0812.3632v2 [math.pr] 4 Feb 2009 Abstract Wojciech Sarnowski a, Krzysztof Szajowski b,a a Wroc law University of Technology,

More information

Admissible Estimation of a Finite Population Total under PPS Sampling

Admissible Estimation of a Finite Population Total under PPS Sampling Research Journal of Mathematical and Statistical Sciences E-ISSN 2320-6047 Admissible Estimation of a Finite Population Total under PPS Sampling Abstract P.A. Patel 1* and Shradha Bhatt 2 1 Department

More information

THE SHORTEST CLOPPER PEARSON RANDOM- IZED CONFIDENCE INTERVAL FOR BINOMIAL PROBABILITY

THE SHORTEST CLOPPER PEARSON RANDOM- IZED CONFIDENCE INTERVAL FOR BINOMIAL PROBABILITY REVSTAT Statistical Journal Volume 15, Number 1, January 2017, 141 153 THE SHORTEST CLOPPER PEARSON RANDOM- IZED CONFIDENCE INTERVAL FOR BINOMIAL PROBABILITY Author: Wojciech Zieliński Department of Econometrics

More information

A statement of the problem, the notation and some definitions

A statement of the problem, the notation and some definitions 2 A statement of the problem, the notation and some definitions Consider a probability space (X, A), a family of distributions P o = {P θ,λ,θ =(θ 1,...,θ M ),λ=(λ 1,...,λ K M ), (θ, λ) Ω o R K } defined

More information

Statistical Inference

Statistical Inference Statistical Inference Robert L. Wolpert Institute of Statistics and Decision Sciences Duke University, Durham, NC, USA Spring, 2006 1. DeGroot 1973 In (DeGroot 1973), Morrie DeGroot considers testing the

More information

Parametric Techniques Lecture 3

Parametric Techniques Lecture 3 Parametric Techniques Lecture 3 Jason Corso SUNY at Buffalo 22 January 2009 J. Corso (SUNY at Buffalo) Parametric Techniques Lecture 3 22 January 2009 1 / 39 Introduction In Lecture 2, we learned how to

More information

Key Words: first-order stationary autoregressive process, autocorrelation, entropy loss function,

Key Words: first-order stationary autoregressive process, autocorrelation, entropy loss function, ESTIMATING AR(1) WITH ENTROPY LOSS FUNCTION Małgorzata Schroeder 1 Institute of Mathematics University of Białystok Akademicka, 15-67 Białystok, Poland mszuli@mathuwbedupl Ryszard Zieliński Department

More information

STAT 331. Martingale Central Limit Theorem and Related Results

STAT 331. Martingale Central Limit Theorem and Related Results STAT 331 Martingale Central Limit Theorem and Related Results In this unit we discuss a version of the martingale central limit theorem, which states that under certain conditions, a sum of orthogonal

More information

Time Series and Dynamic Models

Time Series and Dynamic Models Time Series and Dynamic Models Section 1 Intro to Bayesian Inference Carlos M. Carvalho The University of Texas at Austin 1 Outline 1 1. Foundations of Bayesian Statistics 2. Bayesian Estimation 3. The

More information

Inference on reliability in two-parameter exponential stress strength model

Inference on reliability in two-parameter exponential stress strength model Metrika DOI 10.1007/s00184-006-0074-7 Inference on reliability in two-parameter exponential stress strength model K. Krishnamoorthy Shubhabrata Mukherjee Huizhen Guo Received: 19 January 2005 Springer-Verlag

More information

THE SHORTEST CONFIDENCE INTERVAL FOR PROPORTION IN FINITE POPULATIONS

THE SHORTEST CONFIDENCE INTERVAL FOR PROPORTION IN FINITE POPULATIONS APPLICATIOES MATHEMATICAE Online First version Wojciech Zieliński (Warszawa THE SHORTEST COFIDECE ITERVAL FOR PROPORTIO I FIITE POPULATIOS Abstract. Consider a finite population. Let θ (0, 1 denote the

More information

Stat 451 Lecture Notes Monte Carlo Integration

Stat 451 Lecture Notes Monte Carlo Integration Stat 451 Lecture Notes 06 12 Monte Carlo Integration Ryan Martin UIC www.math.uic.edu/~rgmartin 1 Based on Chapter 6 in Givens & Hoeting, Chapter 23 in Lange, and Chapters 3 4 in Robert & Casella 2 Updated:

More information

Fundamental Probability and Statistics

Fundamental Probability and Statistics Fundamental Probability and Statistics "There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don't know. But there are

More information

PRINCIPLES OF OPTIMAL SEQUENTIAL PLANNING

PRINCIPLES OF OPTIMAL SEQUENTIAL PLANNING C. Schmegner and M. Baron. Principles of optimal sequential planning. Sequential Analysis, 23(1), 11 32, 2004. Abraham Wald Memorial Issue PRINCIPLES OF OPTIMAL SEQUENTIAL PLANNING Claudia Schmegner 1

More information

Robustní monitorování stability v modelu CAPM

Robustní monitorování stability v modelu CAPM Robustní monitorování stability v modelu CAPM Ondřej Chochola, Marie Hušková, Zuzana Prášková (MFF UK) Josef Steinebach (University of Cologne) ROBUST 2012, Němčičky, 10.-14.9. 2012 Contents Introduction

More information

UNCERTAINTY FUNCTIONAL DIFFERENTIAL EQUATIONS FOR FINANCE

UNCERTAINTY FUNCTIONAL DIFFERENTIAL EQUATIONS FOR FINANCE Surveys in Mathematics and its Applications ISSN 1842-6298 (electronic), 1843-7265 (print) Volume 5 (2010), 275 284 UNCERTAINTY FUNCTIONAL DIFFERENTIAL EQUATIONS FOR FINANCE Iuliana Carmen Bărbăcioru Abstract.

More information

ON STATISTICAL INFERENCE UNDER ASYMMETRIC LOSS. Abstract. We introduce a wide class of asymmetric loss functions and show how to obtain

ON STATISTICAL INFERENCE UNDER ASYMMETRIC LOSS. Abstract. We introduce a wide class of asymmetric loss functions and show how to obtain ON STATISTICAL INFERENCE UNDER ASYMMETRIC LOSS FUNCTIONS Michael Baron Received: Abstract We introduce a wide class of asymmetric loss functions and show how to obtain asymmetric-type optimal decision

More information

Testing Statistical Hypotheses

Testing Statistical Hypotheses E.L. Lehmann Joseph P. Romano, 02LEu1 ttd ~Lt~S Testing Statistical Hypotheses Third Edition With 6 Illustrations ~Springer 2 The Probability Background 28 2.1 Probability and Measure 28 2.2 Integration.........

More information

Hypothesis Testing. 1 Definitions of test statistics. CB: chapter 8; section 10.3

Hypothesis Testing. 1 Definitions of test statistics. CB: chapter 8; section 10.3 Hypothesis Testing CB: chapter 8; section 0.3 Hypothesis: statement about an unknown population parameter Examples: The average age of males in Sweden is 7. (statement about population mean) The lowest

More information

Consistency of the maximum likelihood estimator for general hidden Markov models

Consistency of the maximum likelihood estimator for general hidden Markov models Consistency of the maximum likelihood estimator for general hidden Markov models Jimmy Olsson Centre for Mathematical Sciences Lund University Nordstat 2012 Umeå, Sweden Collaborators Hidden Markov models

More information

Part III. A Decision-Theoretic Approach and Bayesian testing

Part III. A Decision-Theoretic Approach and Bayesian testing Part III A Decision-Theoretic Approach and Bayesian testing 1 Chapter 10 Bayesian Inference as a Decision Problem The decision-theoretic framework starts with the following situation. We would like to

More information

Introduction to Systems Analysis and Decision Making Prepared by: Jakub Tomczak

Introduction to Systems Analysis and Decision Making Prepared by: Jakub Tomczak Introduction to Systems Analysis and Decision Making Prepared by: Jakub Tomczak 1 Introduction. Random variables During the course we are interested in reasoning about considered phenomenon. In other words,

More information

Sequential Procedure for Testing Hypothesis about Mean of Latent Gaussian Process

Sequential Procedure for Testing Hypothesis about Mean of Latent Gaussian Process Applied Mathematical Sciences, Vol. 4, 2010, no. 62, 3083-3093 Sequential Procedure for Testing Hypothesis about Mean of Latent Gaussian Process Julia Bondarenko Helmut-Schmidt University Hamburg University

More information

Almost Sure Convergence of Two Time-Scale Stochastic Approximation Algorithms

Almost Sure Convergence of Two Time-Scale Stochastic Approximation Algorithms Almost Sure Convergence of Two Time-Scale Stochastic Approximation Algorithms Vladislav B. Tadić Abstract The almost sure convergence of two time-scale stochastic approximation algorithms is analyzed under

More information

Theory and Applications of Stochastic Systems Lecture Exponential Martingale for Random Walk

Theory and Applications of Stochastic Systems Lecture Exponential Martingale for Random Walk Instructor: Victor F. Araman December 4, 2003 Theory and Applications of Stochastic Systems Lecture 0 B60.432.0 Exponential Martingale for Random Walk Let (S n : n 0) be a random walk with i.i.d. increments

More information

λ(x + 1)f g (x) > θ 0

λ(x + 1)f g (x) > θ 0 Stat 8111 Final Exam December 16 Eleven students took the exam, the scores were 92, 78, 4 in the 5 s, 1 in the 4 s, 1 in the 3 s and 3 in the 2 s. 1. i) Let X 1, X 2,..., X n be iid each Bernoulli(θ) where

More information

Classical and Bayesian inference

Classical and Bayesian inference Classical and Bayesian inference AMS 132 January 18, 2018 Claudia Wehrhahn (UCSC) Classical and Bayesian inference January 18, 2018 1 / 9 Sampling from a Bernoulli Distribution Theorem (Beta-Bernoulli

More information

Statistics Ph.D. Qualifying Exam: Part I October 18, 2003

Statistics Ph.D. Qualifying Exam: Part I October 18, 2003 Statistics Ph.D. Qualifying Exam: Part I October 18, 2003 Student Name: 1. Answer 8 out of 12 problems. Mark the problems you selected in the following table. 1 2 3 4 5 6 7 8 9 10 11 12 2. Write your answer

More information

Decomposability and time consistency of risk averse multistage programs

Decomposability and time consistency of risk averse multistage programs Decomposability and time consistency of risk averse multistage programs arxiv:1806.01497v1 [math.oc] 5 Jun 2018 A. Shapiro School of Industrial and Systems Engineering Georgia Institute of Technology Atlanta,

More information

Reliability of Coherent Systems with Dependent Component Lifetimes

Reliability of Coherent Systems with Dependent Component Lifetimes Reliability of Coherent Systems with Dependent Component Lifetimes M. Burkschat Abstract In reliability theory, coherent systems represent a classical framework for describing the structure of technical

More information

A central limit theorem for randomly indexed m-dependent random variables

A central limit theorem for randomly indexed m-dependent random variables Filomat 26:4 (2012), 71 717 DOI 10.2298/FIL120471S ublished by Faculty of Sciences and Mathematics, University of Niš, Serbia Available at: http://www.pmf.ni.ac.rs/filomat A central limit theorem for randomly

More information

Sequential Monte Carlo Methods for Bayesian Computation

Sequential Monte Carlo Methods for Bayesian Computation Sequential Monte Carlo Methods for Bayesian Computation A. Doucet Kyoto Sept. 2012 A. Doucet (MLSS Sept. 2012) Sept. 2012 1 / 136 Motivating Example 1: Generic Bayesian Model Let X be a vector parameter

More information

6.1 Variational representation of f-divergences

6.1 Variational representation of f-divergences ECE598: Information-theoretic methods in high-dimensional statistics Spring 2016 Lecture 6: Variational representation, HCR and CR lower bounds Lecturer: Yihong Wu Scribe: Georgios Rovatsos, Feb 11, 2016

More information

MIT Spring 2016

MIT Spring 2016 MIT 18.655 Dr. Kempthorne Spring 2016 1 MIT 18.655 Outline 1 2 MIT 18.655 3 Decision Problem: Basic Components P = {P θ : θ Θ} : parametric model. Θ = {θ}: Parameter space. A{a} : Action space. L(θ, a)

More information

Web Appendix for Hierarchical Adaptive Regression Kernels for Regression with Functional Predictors by D. B. Woodard, C. Crainiceanu, and D.

Web Appendix for Hierarchical Adaptive Regression Kernels for Regression with Functional Predictors by D. B. Woodard, C. Crainiceanu, and D. Web Appendix for Hierarchical Adaptive Regression Kernels for Regression with Functional Predictors by D. B. Woodard, C. Crainiceanu, and D. Ruppert A. EMPIRICAL ESTIMATE OF THE KERNEL MIXTURE Here we

More information

Statistical Models and Algorithms for Real-Time Anomaly Detection Using Multi-Modal Data

Statistical Models and Algorithms for Real-Time Anomaly Detection Using Multi-Modal Data Statistical Models and Algorithms for Real-Time Anomaly Detection Using Multi-Modal Data Taposh Banerjee University of Texas at San Antonio Joint work with Gene Whipps (US Army Research Laboratory) Prudhvi

More information

Exercises with solutions (Set D)

Exercises with solutions (Set D) Exercises with solutions Set D. A fair die is rolled at the same time as a fair coin is tossed. Let A be the number on the upper surface of the die and let B describe the outcome of the coin toss, where

More information

Parametric Techniques

Parametric Techniques Parametric Techniques Jason J. Corso SUNY at Buffalo J. Corso (SUNY at Buffalo) Parametric Techniques 1 / 39 Introduction When covering Bayesian Decision Theory, we assumed the full probabilistic structure

More information

STA205 Probability: Week 8 R. Wolpert

STA205 Probability: Week 8 R. Wolpert INFINITE COIN-TOSS AND THE LAWS OF LARGE NUMBERS The traditional interpretation of the probability of an event E is its asymptotic frequency: the limit as n of the fraction of n repeated, similar, and

More information

Optimal Uncertainty Quantification of Quantiles

Optimal Uncertainty Quantification of Quantiles Optimal Uncertainty Quantification of Quantiles Application to Industrial Risk Management. Merlin Keller 1, Jérôme Stenger 12 1 EDF R&D, France 2 Institut Mathématique de Toulouse, France ETICS 2018, Roscoff

More information

Multiple Random Variables

Multiple Random Variables Multiple Random Variables Joint Probability Density Let X and Y be two random variables. Their joint distribution function is F ( XY x, y) P X x Y y. F XY ( ) 1, < x

More information

Minimax and -minimax estimation for the Poisson distribution under LINEX loss when the parameter space is restricted

Minimax and -minimax estimation for the Poisson distribution under LINEX loss when the parameter space is restricted Statistics & Probability Letters 50 (2000) 23 32 Minimax and -minimax estimation for the Poisson distribution under LINEX loss when the parameter space is restricted Alan T.K. Wan a;, Guohua Zou b, Andy

More information

STATISTICS SYLLABUS UNIT I

STATISTICS SYLLABUS UNIT I STATISTICS SYLLABUS UNIT I (Probability Theory) Definition Classical and axiomatic approaches.laws of total and compound probability, conditional probability, Bayes Theorem. Random variable and its distribution

More information

Fall 2017 STAT 532 Homework Peter Hoff. 1. Let P be a probability measure on a collection of sets A.

Fall 2017 STAT 532 Homework Peter Hoff. 1. Let P be a probability measure on a collection of sets A. 1. Let P be a probability measure on a collection of sets A. (a) For each n N, let H n be a set in A such that H n H n+1. Show that P (H n ) monotonically converges to P ( k=1 H k) as n. (b) For each n

More information

Probability and Measure

Probability and Measure Probability and Measure Robert L. Wolpert Institute of Statistics and Decision Sciences Duke University, Durham, NC, USA Convergence of Random Variables 1. Convergence Concepts 1.1. Convergence of Real

More information

Lecture 2. We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales.

Lecture 2. We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales. Lecture 2 1 Martingales We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales. 1.1 Doob s inequality We have the following maximal

More information

Monte Carlo Integration

Monte Carlo Integration Monte Carlo Integration SCX5005 Simulação de Sistemas Complexos II Marcelo S. Lauretto www.each.usp.br/lauretto Reference: Robert CP, Casella G. Introducing Monte Carlo Methods with R. Springer, 2010.

More information