
SOME NEW ESTIMATORS OF THE BINOMIAL PARAMETER n

George Iliopoulos
Department of Mathematics, University of the Aegean, Karlovassi, Samos, Greece
geh@unipi.gr

Key Words: binomial parameter n; Bayes estimators; admissibility; Blyth's method; squared error loss.

ABSTRACT

On the basis of an observation from the binomial distribution B(n, p) with known probability of success p, we construct two classes of admissible estimators of the parameter n ∈ {1, 2, ...} when the loss function is the squared error. The estimators are either proper Bayes or limits of Bayes estimators.

1. INTRODUCTION

Let X be an observation from the binomial distribution B(n, p), where the probability of success p = 1 − q is a known constant in (0, 1) and n ∈ N_+ = {1, 2, ...} is an unknown parameter. This context arises in many situations, e.g., estimation of finite population size [cf. Mukhopadhyay (1)] or simple random sampling with replacement. Moreover, a practical application in an animal counting problem has been discussed by Rukhin (2). In this paper we consider decision theoretic estimation of n under the squared error loss,

L_1(δ, n) = (δ − n)^2,

and the scaled squared error loss,

L_2(δ, n) = n^{−1}(δ − n)^2.

[Footnote: Now at the Department of Statistics and Insurance Science, University of Piraeus, 80 Karaoli & Dimitriou str., 18534 Piraeus, Greece.]

According to Casella (3) the latter is the most natural loss function for estimating n. Of course, the admissibility problem of the estimators is not affected when L_1 is replaced by L_2. However, in our approach the scaled squared error loss leads to more ordinary estimators, that is, estimators of the form

δ(0) = c, δ(X) = aX + b, X ≥ 1, (1.1)

for some constants a, b, c = c(a, b). This is the usual form of the estimators of n that have appeared so far in the literature. When the parameter space is assumed to be N = {0, 1, 2, ...}, the natural estimator of n is δ_0 = X/p. Rukhin (2) and Ghosh and Meeden (4) showed that this estimator has optimal properties: it is the only unbiased estimator and it is admissible and minimax under quadratic loss. Other relevant work has been done by Feldman and Fox (5), Hamedani and Walter (6), Sadooghi-Alvandi and Parsian (7) (who consider estimation under the asymmetric LINEX loss function) and Yang (8) (who provides a characterization of the admissible linear estimators aX + b under squared error loss). In the more realistic case of the truncated parameter space, i.e., n ∈ N_+ = {1, 2, ...}, the estimator δ_0 is clearly inadmissible under any convex loss. Under squared error, Sadooghi-Alvandi (9) showed that this is also the case for its obvious modification, i.e., the estimator δ̃(0) = 1, δ̃(X) = X/p, X ≥ 1. In the same paper, Sadooghi-Alvandi proved the admissibility of the estimator

δ_1(0) = −q/(p log p), δ_1(X) = X/p, X ≥ 1, (1.2)

which is the generalized Bayes estimator of n under the improper prior π(n) = 1/n, n = 1, 2, ... [see also Ghosh and Meeden (4)]. Sadooghi-Alvandi and Parsian (7) obtained analogous results under LINEX loss. Recently, Zou and Wan (10) established an explicit result for estimators of the form (1.1). We restate it here because we will often refer to it in the sequel.
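The unbiasedness of the natural estimator δ_0 = X/p is easy to confirm numerically. The following sketch (the helper name is ours, not the paper's) sums the binomial pmf exactly:

```python
import math

def expected_delta0(n, p):
    # E_n[X/p] for X ~ B(n, p); the natural estimator delta_0 = X/p is unbiased for n
    return sum(math.comb(n, x) * p ** x * (1 - p) ** (n - x) * x / p
               for x in range(n + 1))

for n in (1, 5, 20):
    print(n, expected_delta0(n, 0.3))   # each expectation equals n (up to rounding)
```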

Theorem 1.1. [Zou and Wan (10)] The estimator

δ_{a,b}(0) = b(a − 1)a^b/(a^b − 1), δ_{a,b}(X) = aX + b(a − 1), X ≥ 1, (1.3)

is admissible if and only if one of the following three conditions is satisfied: (i) a ≤ 1; (ii) 1 < a < 1/p and b ≤ 1; (iii) a = 1/p and b ≤ 0.

(For b = 0 the value at X = 0 is interpreted as its limit, (a − 1)/log a.) The above class includes Sadooghi-Alvandi's (9) estimator δ_1 in (1.2) as a special case (a = 1/p and b = 0). In their paper, Zou and Wan (10) have also mentioned an estimator resulting from (1.3) by letting a → 1 and b → ∞ in such a way that b(a − 1) → log c (c > 1), that is,

δ_c(0) = c log c/(c − 1), δ_c(X) = X + log c, X ≥ 1. (1.4)

However, they have not proved its admissibility. Notice that most of the admissibility results of Zou and Wan described in Theorem 1.1 have been established by assigning negative binomial priors to n truncated in N_+ and obtaining either the corresponding (unique) Bayes estimators or their limits. In what follows we provide some new estimators of n when the parameter space is N_+. Letting n − 1 have a Poisson or negative binomial prior (rather than n having a truncated one) we obtain the corresponding Bayes estimators with respect to L_1 and L_2. In Section 2 we consider a Poisson prior which results in Bayes estimators of the form

T_c(0) = c + 1, T_c(X) = X + c + c/(X + c), X ≥ 1. (1.5)

These estimators are admissible for any c ≥ 0. Moreover, under L_2 it will be seen that the resulting Bayes estimator is essentially δ_c in (1.4), and this proves its admissibility. In Section 3 we consider a negative binomial prior and we get under L_1 the class of Bayes estimators

T_{r,θ}(0) = 1 + rθq/(1 − θq), T_{r,θ}(X) = (X + rθq)/(1 − θq) + (r − 1)θq/(X + (r − 1)θq), X ≥ 1, (1.6)
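The two limiting relations just described can be checked numerically. The sketch below (helper names ours; formulas as reconstructed in (1.3) and (1.4)) evaluates the X = 0 value of δ_{a,b} and confirms that it reproduces both δ_1(0) = −q/(p log p) (as b → 0 with a = 1/p) and δ_c(0) = c log c/(c − 1) (as a → 1, b → ∞ with b(a − 1) = log c):

```python
import math

def delta_ab_zero(a, b):
    # Value of delta_{a,b} at X = 0, as in (1.3): b(a-1)a^b / (a^b - 1)
    return b * (a - 1) * a ** b / (a ** b - 1)

p = 0.3
q = 1 - p

# b -> 0 with a = 1/p recovers Sadooghi-Alvandi's delta_1(0) = -q/(p log p)
lim1 = delta_ab_zero(1 / p, 1e-9)
print(lim1, -q / (p * math.log(p)))

# a -> 1, b -> infinity with b(a - 1) = log c recovers delta_c(0) = c log c/(c - 1)
c = 5.0
a = 1 + 1e-9
lim2 = delta_ab_zero(a, math.log(c) / (a - 1))
print(lim2, c * math.log(c) / (c - 1))
```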

where r > 0, 0 < θ < 1. These estimators are also admissible. Furthermore, we explore the limiting cases: when r → 0, T_{0,θ} is admissible, whereas when θ → 1, T_{r,1} is not.

2. BAYES ESTIMATORS UNDER A POISSON PRIOR

Let X ~ B(n, p), where p ∈ (0, 1) is a known constant and n ∈ N_+ = {1, 2, ...} is an unknown parameter. That is, the probability mass function of X is

f(x|n) = n!/(x!(n − x)!) p^x q^{n−x}, x = 0, 1, ..., n,

where q = 1 − p. Let the prior distribution of n be

π_λ(n) = e^{−λ} λ^{n−1}/(n − 1)!, n = 1, 2, ..., (2.1)

i.e., n − 1 ~ P(λ) (Poisson with mean λ), λ > 0. Then, the posterior distribution is

π_λ(n|x) = e^{−λq} (λq)^{n−1}/(n − 1)!, n = 1, 2, ..., for x = 0,
π_λ(n|x) = [n/(x + λq)] e^{−λq} (λq)^{n−x}/(n − x)!, n = x, x + 1, ..., for x ≥ 1.

It is well known that the Bayes estimator of n under L_1 is the posterior mean, E_λ(n|x). Obviously, E_λ(n|0) = 1 + λq, since conditionally on X = 0, n − 1 ~ P(λq). On the other hand, for x ≥ 1,

E_λ(n|x) = Σ_{n=x}^∞ n^2 e^{−λq} (λq)^{n−x}/[(x + λq)(n − x)!]
= [1/(x + λq)] Σ_{n=0}^∞ (x + n)^2 e^{−λq} (λq)^n/n!
= [x^2 + 2xλq + λq + λ^2 q^2]/(x + λq)
= x + λq + λq/(x + λq).

It can be seen that the Bayes estimator of n is T_c defined in (1.5) with c = λq > 0. Since it is the unique Bayes estimator under (2.1), it is admissible. Taking c = 0 we are led to the limiting estimator T_0(0) = 1, T_0(X) = X, X ≥ 1. The admissibility of T_0 follows by Theorem 1.1, since it is the same as δ_{1,0} in (1.3). Summarizing, we have the following proposition.
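As a numerical check of the posterior-mean computation (the function name is ours), one can evaluate E_λ(n|x) by direct summation over the Poisson prior and the binomial likelihood, and compare it with the closed form x + λq + λq/(x + λq) (and 1 + λq at x = 0):

```python
import math

def posterior_mean(x, lam, p, nmax=400):
    # E_lambda(n | x) by brute-force summation; prior: n - 1 ~ Poisson(lam)
    q = 1 - p
    num = den = 0.0
    for n in range(max(x, 1), nmax):
        # log prior + log binomial likelihood, computed in log scale to avoid overflow
        logw = (-lam + (n - 1) * math.log(lam) - math.lgamma(n)
                + math.lgamma(n + 1) - math.lgamma(x + 1) - math.lgamma(n - x + 1)
                + x * math.log(p) + (n - x) * math.log(q))
        w = math.exp(logw)
        num += n * w
        den += w
    return num / den

p, lam = 0.4, 3.0
c = lam * (1 - p)                      # c = lambda * q
for x in (0, 1, 2, 5):
    closed = 1 + c if x == 0 else x + c + c / (x + c)
    print(x, posterior_mean(x, lam, p), closed)
```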

Proposition 2.1. For any c ≥ 0, the estimator T_c in (1.5) is admissible for n under the squared error loss.

Consider now Bayesian estimation of n under the loss function L_2. Then, the Bayes estimator is the reciprocal of the posterior expectation of n^{−1}, i.e., 1/E_λ(n^{−1}|X). Under (2.1) it holds

E_λ(n^{−1}|0) = Σ_{n=1}^∞ n^{−1} e^{−λq} (λq)^{n−1}/(n − 1)! = P(Y ≥ 1)/λq = (1 − e^{−λq})/λq [where Y ~ P(λq)]

and

E_λ(n^{−1}|x) = [1/(x + λq)] Σ_{n=x}^∞ e^{−λq} (λq)^{n−x}/(n − x)! = 1/(x + λq), x ≥ 1.

Thus, the Bayes estimator of n with respect to L_2 is given by

T*_λ(0) = λq/(1 − e^{−λq}), T*_λ(X) = X + λq, X ≥ 1.

Since it is also the unique Bayes estimator and admissibility under L_2 is equivalent to admissibility under L_1, T*_λ is admissible. Setting λq = log c, c > 1, it is seen that T*_λ coincides with δ_c in (1.4). Thus,

Proposition 2.2. For any c > 1, the estimator δ_c in (1.4) is admissible for n under the squared error loss.

3. BAYES ESTIMATORS UNDER A NEGATIVE BINOMIAL PRIOR

Assume now that the prior distribution of n is

π_{r,θ}(n) = [Γ(r + n − 1)/(Γ(r)(n − 1)!)] (1 − θ)^r θ^{n−1}, n = 1, 2, ..., r > 0, 0 < θ < 1,
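The same brute-force check works for the L_2 Bayes estimator (helper names ours): invert the posterior expectation of n^{−1} and compare with λq/(1 − e^{−λq}) at x = 0 and with x + λq for x ≥ 1; with λq = log c this is exactly δ_c of (1.4):

```python
import math

def posterior_inv_mean(x, lam, p, nmax=400):
    # E_lambda(1/n | x) by brute-force summation; prior: n - 1 ~ Poisson(lam)
    q = 1 - p
    num = den = 0.0
    for n in range(max(x, 1), nmax):
        logw = (-lam + (n - 1) * math.log(lam) - math.lgamma(n)
                + math.lgamma(n + 1) - math.lgamma(x + 1) - math.lgamma(n - x + 1)
                + x * math.log(p) + (n - x) * math.log(q))
        w = math.exp(logw)
        num += w / n
        den += w
    return num / den

p, lam = 0.4, 3.0
lq = lam * (1 - p)                                  # lambda * q (= log c)
print(1 / posterior_inv_mean(0, lam, p), lq / (1 - math.exp(-lq)))
print(1 / posterior_inv_mean(2, lam, p), 2 + lq)
```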

i.e., n − 1 ~ NB(r, θ) (negative binomial with parameters r and θ). Then, the posterior distribution of n is

π*_{r,θ}(n|x) = (1 − θq)^r [Γ(r + n − 1)/(Γ(r)(n − 1)!)] (θq)^{n−1}, n = 1, 2, ..., for x = 0,
π*_{r,θ}(n|x) = [n(1 − θq)^{r+x}/(x + (r − 1)θq)] [Γ(r + n − 1)/(Γ(r + x − 1)(n − x)!)] (θq)^{n−x}, n = x, x + 1, ..., for x ≥ 1. (3.1)

The derivation of the posterior means follows. When x = 0, we have

E_{r,θ}(n|0) = 1 + rθq/(1 − θq), (3.2)

since conditionally on X = 0, n − 1 ~ NB(r, θq). For x ≥ 1,

E_{r,θ}(n|x) = Σ_{n=x}^∞ n^2 [(1 − θq)^{r+x}/(x + (r − 1)θq)] [Γ(r + n − 1)/(Γ(r + x − 1)(n − x)!)] (θq)^{n−x}
= [(1 − θq)^{r+x}/(x + (r − 1)θq)] Σ_{n=0}^∞ (x + n)^2 [Γ(r + n + x − 1)/(Γ(r + x − 1) n!)] (θq)^n
= [(1 − θq)/(x + (r − 1)θq)] { x^2 + (2x + 1)(r + x − 1)θq/(1 − θq) + (r + x − 1)(r + x)(θq)^2/(1 − θq)^2 }
= (x + rθq)/(1 − θq) + (r − 1)θq/(x + (r − 1)θq). (3.3)

From (3.2) and (3.3) we conclude that the Bayes estimator of n is T_{r,θ} in (1.6). Since T_{r,θ} is the unique Bayes estimator of n, it is admissible for any r > 0, 0 < θ < 1. Before stating the corresponding proposition, we explore the limiting cases r → 0 and θ → 1.

Lemma 3.1. The estimator

T_{0,θ}(0) = 1, T_{0,θ}(X) = X/(1 − θq) − θq/(X − θq), X ≥ 1,

is admissible for n under the squared error loss for any 0 < θ < 1.

Proof. For convenience, we will suppress θ from the subscript. The admissibility of T_0 will be established using a variant of the well-known limiting Bayes method due to Blyth (11) [see also Lehmann and Casella (12), p. 380]. Consider first the improper prior

π_r(n) = Γ(r + n − 1) θ^{n−1}/(n − 1)!, n = 1, 2, ....
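A brute-force check of (3.2) and (3.3) (helper names ours): evaluate the posterior mean under the negative binomial prior by direct summation and compare with the closed form of T_{r,θ} in (1.6):

```python
import math

def nb_posterior_mean(x, r, theta, p, nmax=600):
    # E_{r,theta}(n | x) by brute-force summation; prior: n - 1 ~ NB(r, theta)
    q = 1 - p
    num = den = 0.0
    for n in range(max(x, 1), nmax):
        logw = (math.lgamma(r + n - 1) - math.lgamma(r) - math.lgamma(n)
                + r * math.log(1 - theta) + (n - 1) * math.log(theta)
                + math.lgamma(n + 1) - math.lgamma(x + 1) - math.lgamma(n - x + 1)
                + x * math.log(p) + (n - x) * math.log(q))
        w = math.exp(logw)
        num += n * w
        den += w
    return num / den

p, r, theta = 0.4, 2.5, 0.7
s = theta * (1 - p)                     # s = theta * q
for x in (0, 1, 3):
    closed = (1 + r * s / (1 - s) if x == 0
              else (x + r * s) / (1 - s) + (r - 1) * s / (x + (r - 1) * s))
    print(x, nb_posterior_mean(x, r, theta, p), closed)
```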

Since Γ(t) attains its minimum over t ∈ (0, ∞) at t_0 = 1.4616... [cf. Abramowitz and Stegun (13), p. 259], it follows that π_r(n) ≥ Γ(t_0) θ^{n−1}/(n − 1)!, n ≥ 1, for any r > 0. The posterior distribution of n is π*_r(n|x) = π*_{r,θ}(n|x) in (3.1), and the Bayes estimator of n with respect to π_r is T_r (= T_{r,θ}). It tends to T_0 as r → 0. Let I(n ≥ x) denote the indicator function with the obvious meaning. The marginal distribution of X is

m_r(x) = Σ_n f(x|n) π_r(n) I(n ≥ x) = q Γ(r)/(1 − θq)^r for x = 0,
m_r(x) = p^x θ^{x−1} Γ(r + x − 1) [x + (r − 1)θq]/(x!(1 − θq)^{r+x}) for x ≥ 1.

Setting D_r(n, x) = [T_r(x) − n]^2 − [T_0(x) − n]^2, the difference of the Bayes risks can be expressed as

Δ(r) = Σ_{n=1}^∞ Σ_{x=0}^n D_r(n, x) f(x|n) I(n ≥ x) π_r(n)
= Σ_{n=1}^∞ D_r(n, 0) f(0|n) π_r(n) + Σ_{x=1}^∞ Σ_{n=x}^∞ D_r(n, x) π*_r(n|x) m_r(x). (3.4)

The last equality is obtained by changing the order of summation (this is permitted since the series converges for all r > 0) and using the identity f(x|n) π_r(n) = π*_r(n|x) m_r(x). The first term in (3.4) equals

Σ_{n=1}^∞ [ (rθq/(1 − θq))^2 − 2(rθq/(1 − θq))(n − 1) ] q^n Γ(r + n − 1) θ^{n−1}/(n − 1)!
= (rθq/(1 − θq))^2 qΓ(r)/(1 − θq)^r − 2(rθq/(1 − θq)) rΓ(r) θq^2/(1 − θq)^{r+1}
= −r^2 Γ(r) θ^2 q^3/(1 − θq)^{r+2},

which tends to 0 as r → 0

(it holds that r^2 Γ(r) = rΓ(r + 1) → 0 as r → 0). On the other hand, the inner sum of the second term is equal to

E_r{[n − T_r(x)]^2 | x} − E_r{[n − T_0(x)]^2 | x} = Var_r(n|x) − {Var_r(n|x) + [E_r(n|x) − T_0(x)]^2}
= −{T_r(x) − T_0(x)}^2 = −r^2 [θq/(1 − θq)]^2 [1 + x(1 − θq)/((x − θq)(x + (r − 1)θq))]^2,

since T_r(x) = E_r(n|x). Hence, the second term in (3.4) equals

−r^2 [θq/(1 − θq)]^2 Σ_{x=1}^∞ [1 + x(1 − θq)/((x − θq)(x + (r − 1)θq))]^2 p^x θ^{x−1} Γ(r + x − 1)[x + (r − 1)θq]/(x!(1 − θq)^{r+x})
= −[p r^2 Γ(r) (θq)^2/((1 − θ)^r (1 − θq)^3)] E_r[h(Y)], (3.5)

where

h(y) = [(y + 1 + (r − 1)θq)/(y + 1)] [1 + (y + 1)(1 − θq)/((y + 1 − θq)(y + 1 + (r − 1)θq))]^2

and Y has the negative binomial probability mass function P(Y = y) = [Γ(r + y)/(Γ(r) y!)] [(1 − θ)/(1 − θq)]^r [θp/(1 − θq)]^y, y = 0, 1, .... Now, for any r > 0 and y ≥ 0,

(y + 1)(1 − θq)/((y + 1 − θq)(y + 1 + (r − 1)θq)) ≤ (y + 1)(1 − θq)/(y + 1 − θq)^2 ≤ 1/(1 − θq),

and, when r < 1, (y + 1 + (r − 1)θq)/(y + 1) < 1. Hence, for 0 < r < 1, h(y) is bounded above by [(2 − θq)/(1 − θq)]^2. Therefore, the quantity in (3.5) is in absolute value less than C r^2 Γ(r), C being a positive constant not depending on r, and thus it tends to zero as r → 0. Since both terms in (3.4) tend to zero, Δ(r) → 0; combined with the uniform lower bound π_r(n) ≥ Γ(t_0) θ^{n−1}/(n − 1)!, this rules out the existence of an estimator dominating T_0, and consequently the estimator T_0 = T_{0,θ} is admissible for any 0 < θ < 1.

Lemma 3.2. For any r > 0, the estimator

T_{r,1}(0) = 1 + rq/p, T_{r,1}(X) = X/p + rq/p + (r − 1)q/(X + (r − 1)q), X ≥ 1,

is inadmissible for n under the squared error loss.

Proof. When r = 1, the estimator becomes

T_{1,1}(0) = 1/p, T_{1,1}(X) = X/p + q/p, X ≥ 1,

i.e., δ_{1/p,1} in (1.3), and by Theorem 1.1 it is inadmissible. Consider now the case r ≠ 1 (so that T_{r,1} can be written in the unified form X/p + rq/p + (r − 1)q/(X + (r − 1)q)) and define the alternative estimator

T*_{r,1,n_0}(X) = X/p + rq/p + (r − 1)q/(X + (r − 1)q) for 0 ≤ X ≤ n_0,
T*_{r,1,n_0}(X) = X/p + (r − 1)q/(X + (r − 1)q) for X > n_0.

Obviously, if n ≤ n_0 the two estimators do not differ and thus they have equal risks. On the contrary, for n > n_0, their risk difference is

Δ(n) = E_n{[T_{r,1}(X) − n]^2 I(X > n_0)} − E_n{[T*_{r,1,n_0}(X) − n]^2 I(X > n_0)}
= (rq/p)^2 P_n(X > n_0) + 2(rq/p) E_n{[X/p + (r − 1)q/(X + (r − 1)q) − n] I(X > n_0)}
= (rq/p){ (rq/p) P_n(X > n_0) + 2 E_n[(r − 1)q/(X + (r − 1)q) I(X > n_0)] + 2 E_n[(X/p − n) I(X > n_0)] }.

Recall that E_n(X/p − n) = 0. This implies that E_n[(X/p − n) I(X > n_0)] ≥ 0 for any 0 ≤ n_0 < n, the inequality being strict when n_0 ≥ 1. Suppose first that r > 1. Then, by taking n_0 = 0 it can be immediately seen that Δ(n) > 0 for all n ≥ 1. For the case r < 1, notice that (r − 1)q/(x + (r − 1)q) is strictly increasing in x. Therefore, when n > n_0,

Δ(n) > (rq/p){ (rq/p) P_n(X > n_0) + 2 E_n[(r − 1)q/(X + (r − 1)q) I(X > n_0)] }
> (rq/p) P_n(X > n_0) [ rq/p + 2(r − 1)q/(n_0 + (r − 1)q) ] > 0

for n_0 > (1 − r)(q + 2p/r) (and n > n_0). Thus, for suitably chosen n_0, the estimator T*_{r,1,n_0} dominates T_{r,1} and the statement of the lemma is proved.

Proposition 3.1. For any r ≥ 0 and 0 < θ < 1, the estimator T_{r,θ} in (1.6) is admissible for n under the squared error loss. If θ = 1, it is inadmissible for any r > 0.
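The domination in Lemma 3.2 can be confirmed by exact risk computation (the helper below is ours; n_0 is chosen above the proof's threshold (1 − r)(q + 2p/r)):

```python
import math

def risks(n, p, r, n0):
    # Exact squared-error risks of T_{r,1} and of the modified T*_{r,1,n0} from Lemma 3.2 (r != 1)
    q = 1 - p
    risk_t = risk_mod = 0.0
    for x in range(n + 1):
        f = math.comb(n, x) * p ** x * q ** (n - x)
        base = x / p + (r - 1) * q / (x + (r - 1) * q)
        t = base + r * q / p                   # T_{r,1}(x) in unified form
        t_mod = t if x <= n0 else base         # drop the rq/p term beyond n0
        risk_t += f * (t - n) ** 2
        risk_mod += f * (t_mod - n) ** 2
    return risk_t, risk_mod

p, r = 0.4, 0.5
q = 1 - p
n0 = math.ceil((1 - r) * (q + 2 * p / r)) + 1
diffs = [risks(n, p, r, n0)[0] - risks(n, p, r, n0)[1] for n in range(n0 + 1, n0 + 21)]
print(n0, min(diffs))   # every difference is positive: T* dominates beyond n0
```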

The case r = 0, θ = 1, corresponding to the estimator

T_{0,1}(0) = 1, T_{0,1}(X) = X/p − q/(X − q), X ≥ 1,

is not covered by Proposition 3.1. We feel that T_{0,1} is admissible but we were not able to prove it. In particular, Blyth's method cannot be applied (this is usually the case when a prior distribution depends on two hyperparameters and the generalized Bayes estimator results by taking both to their extremes). In closing, we comment that the Bayes estimator of n under L_2 has the form

T*_{r,θ}(0) = (r − 1)θq/{(1 − θq)[1 − (1 − θq)^{r−1}]}, T*_{r,θ}(X) = [X + (r − 1)θq]/(1 − θq), X ≥ 1.

Setting a = 1/(1 − θq), b = r − 1, it can be seen that the above estimator is in fact δ_{a,b} in (1.3) and thus its admissibility aspects are covered in detail by Theorem 1.1.

BIBLIOGRAPHY

(1) Mukhopadhyay, P. Small Area Estimation in Survey Sampling. Narosa Publishing House, London, 1998.
(2) Rukhin, A. L. Statistical decision about the total number of observable objects. Sankhyā A, 1975, 37.
(3) Casella, G. Stabilizing binomial n estimators. J. Amer. Statist. Assoc., 1986, 81.
(4) Ghosh, M.; Meeden, G. How many tosses of the coin? Sankhyā A, 1975, 37.
(5) Feldman, D.; Fox, M. Estimation of the parameter n in the binomial distribution. J. Amer. Statist. Assoc., 1968, 63.
(6) Hamedani, G. G.; Walter, G. G. Bayes estimation of the binomial parameter n. Commun. Statist. - Theory Meth., 1988, 17.
(7) Sadooghi-Alvandi, S. M.; Parsian, A. Estimation of the binomial parameter n using a LINEX loss function. Commun. Statist. - Theory Meth., 1992, 21.

(8) Yang, Y. Admissible linear estimation for the binomial parameter n. J. Math. Res. Expo., 1996, 16.
(9) Sadooghi-Alvandi, S. M. Admissible estimation of the binomial parameter n. Ann. Statist., 1986, 14.
(10) Zou, G.; Wan, A. T. K. Admissible and minimax estimation of the parameter n in the binomial distribution. J. Statist. Plann. Inference (to appear).
(11) Blyth, C. R. On minimax statistical decision procedures and their admissibility. Ann. Math. Statist., 1951, 22.
(12) Lehmann, E. L.; Casella, G. Theory of Point Estimation, 2nd ed. Springer-Verlag, New York, 1998.
(13) Abramowitz, M.; Stegun, I. A. Handbook of Mathematical Functions. Dover, New York, 1974.


Stanford Statistics 311/Electrical Engineering 377 I. Bayes risk in classification problems a. Recall definition (1.2.3) of f-divergence between two distributions P and Q as ( ) p(x) D f (P Q) : q(x)f dx, q(x) where f : R + R is a convex function satisfying

More information

Naïve Bayes classification

Naïve Bayes classification Naïve Bayes classification 1 Probability theory Random variable: a variable whose possible values are numerical outcomes of a random phenomenon. Examples: A person s height, the outcome of a coin toss

More information

Hyperparameter estimation in Dirichlet process mixture models

Hyperparameter estimation in Dirichlet process mixture models Hyperparameter estimation in Dirichlet process mixture models By MIKE WEST Institute of Statistics and Decision Sciences Duke University, Durham NC 27706, USA. SUMMARY In Bayesian density estimation and

More information

Part IA Probability. Definitions. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015

Part IA Probability. Definitions. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015 Part IA Probability Definitions Based on lectures by R. Weber Notes taken by Dexter Chua Lent 2015 These notes are not endorsed by the lecturers, and I have modified them (often significantly) after lectures.

More information

Roots and Coefficients Polynomials Preliminary Maths Extension 1

Roots and Coefficients Polynomials Preliminary Maths Extension 1 Preliminary Maths Extension Question If, and are the roots of x 5x x 0, find the following. (d) (e) Question If p, q and r are the roots of x x x 4 0, evaluate the following. pq r pq qr rp p q q r r p

More information

STAT 830 Bayesian Estimation

STAT 830 Bayesian Estimation STAT 830 Bayesian Estimation Richard Lockhart Simon Fraser University STAT 830 Fall 2011 Richard Lockhart (Simon Fraser University) STAT 830 Bayesian Estimation STAT 830 Fall 2011 1 / 23 Purposes of These

More information

BAYESIAN ESTIMATION OF THE EXPONENTI- ATED GAMMA PARAMETER AND RELIABILITY FUNCTION UNDER ASYMMETRIC LOSS FUNC- TION

BAYESIAN ESTIMATION OF THE EXPONENTI- ATED GAMMA PARAMETER AND RELIABILITY FUNCTION UNDER ASYMMETRIC LOSS FUNC- TION REVSTAT Statistical Journal Volume 9, Number 3, November 211, 247 26 BAYESIAN ESTIMATION OF THE EXPONENTI- ATED GAMMA PARAMETER AND RELIABILITY FUNCTION UNDER ASYMMETRIC LOSS FUNC- TION Authors: Sanjay

More information

Bayesian Decision Theory

Bayesian Decision Theory Bayesian Decision Theory 1/27 lecturer: authors: Jiri Matas, matas@cmp.felk.cvut.cz Václav Hlaváč, Jiri Matas Czech Technical University, Faculty of Electrical Engineering Department of Cybernetics, Center

More information

Fundamentals. CS 281A: Statistical Learning Theory. Yangqing Jia. August, Based on tutorial slides by Lester Mackey and Ariel Kleiner

Fundamentals. CS 281A: Statistical Learning Theory. Yangqing Jia. August, Based on tutorial slides by Lester Mackey and Ariel Kleiner Fundamentals CS 281A: Statistical Learning Theory Yangqing Jia Based on tutorial slides by Lester Mackey and Ariel Kleiner August, 2011 Outline 1 Probability 2 Statistics 3 Linear Algebra 4 Optimization

More information

D I S C U S S I O N P A P E R

D I S C U S S I O N P A P E R I N S T I T U T D E S T A T I S T I Q U E B I O S T A T I S T I Q U E E T S C I E N C E S A C T U A R I E L L E S ( I S B A ) UNIVERSITÉ CATHOLIQUE DE LOUVAIN D I S C U S S I O N P A P E R 2014/06 Adaptive

More information

Math 0230 Calculus 2 Lectures

Math 0230 Calculus 2 Lectures Math 00 Calculus Lectures Chapter 8 Series Numeration of sections corresponds to the text James Stewart, Essential Calculus, Early Transcendentals, Second edition. Section 8. Sequences A sequence is a

More information

arxiv: v1 [math.gm] 31 Dec 2015

arxiv: v1 [math.gm] 31 Dec 2015 On the Solution of Gauss Circle Problem Conjecture Revised arxiv:60.0890v [math.gm] 3 Dec 05 Nikolaos D. Bagis Aristotle University of Thessaloniki Thessaloniki, Greece email: nikosbagis@hotmail.gr Abstract

More information

Bayes and Empirical Bayes Estimation of the Scale Parameter of the Gamma Distribution under Balanced Loss Functions

Bayes and Empirical Bayes Estimation of the Scale Parameter of the Gamma Distribution under Balanced Loss Functions The Korean Communications in Statistics Vol. 14 No. 1, 2007, pp. 71 80 Bayes and Empirical Bayes Estimation of the Scale Parameter of the Gamma Distribution under Balanced Loss Functions R. Rezaeian 1)

More information

The Jeffreys Prior. Yingbo Li MATH Clemson University. Yingbo Li (Clemson) The Jeffreys Prior MATH / 13

The Jeffreys Prior. Yingbo Li MATH Clemson University. Yingbo Li (Clemson) The Jeffreys Prior MATH / 13 The Jeffreys Prior Yingbo Li Clemson University MATH 9810 Yingbo Li (Clemson) The Jeffreys Prior MATH 9810 1 / 13 Sir Harold Jeffreys English mathematician, statistician, geophysicist, and astronomer His

More information

Bayesian decision making

Bayesian decision making Bayesian decision making Václav Hlaváč Czech Technical University in Prague Czech Institute of Informatics, Robotics and Cybernetics 166 36 Prague 6, Jugoslávských partyzánů 1580/3, Czech Republic http://people.ciirc.cvut.cz/hlavac,

More information

Remarks on Improper Ignorance Priors

Remarks on Improper Ignorance Priors As a limit of proper priors Remarks on Improper Ignorance Priors Two caveats relating to computations with improper priors, based on their relationship with finitely-additive, but not countably-additive

More information

3. DISCRETE RANDOM VARIABLES

3. DISCRETE RANDOM VARIABLES IA Probability Lent Term 3 DISCRETE RANDOM VARIABLES 31 Introduction When an experiment is conducted there may be a number of quantities associated with the outcome ω Ω that may be of interest Suppose

More information

Sequential Decisions

Sequential Decisions Sequential Decisions A Basic Theorem of (Bayesian) Expected Utility Theory: If you can postpone a terminal decision in order to observe, cost free, an experiment whose outcome might change your terminal

More information

STAT215: Solutions for Homework 2

STAT215: Solutions for Homework 2 STAT25: Solutions for Homework 2 Due: Wednesday, Feb 4. (0 pt) Suppose we take one observation, X, from the discrete distribution, x 2 0 2 Pr(X x θ) ( θ)/4 θ/2 /2 (3 θ)/2 θ/4, 0 θ Find an unbiased estimator

More information

HAPPY BIRTHDAY CHARLES

HAPPY BIRTHDAY CHARLES HAPPY BIRTHDAY CHARLES MY TALK IS TITLED: Charles Stein's Research Involving Fixed Sample Optimality, Apart from Multivariate Normal Minimax Shrinkage [aka: Everything Else that Charles Wrote] Lawrence

More information

Moreover this binary operation satisfies the following properties

Moreover this binary operation satisfies the following properties Contents 1 Algebraic structures 1 1.1 Group........................................... 1 1.1.1 Definitions and examples............................. 1 1.1.2 Subgroup.....................................

More information

Lecture 8: Information Theory and Statistics

Lecture 8: Information Theory and Statistics Lecture 8: Information Theory and Statistics Part II: Hypothesis Testing and I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw December 23, 2015 1 / 50 I-Hsiang

More information

Precise Large Deviations for Sums of Negatively Dependent Random Variables with Common Long-Tailed Distributions

Precise Large Deviations for Sums of Negatively Dependent Random Variables with Common Long-Tailed Distributions This article was downloaded by: [University of Aegean] On: 19 May 2013, At: 11:54 Publisher: Taylor & Francis Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office: Mortimer

More information

Naïve Bayes classification. p ij 11/15/16. Probability theory. Probability theory. Probability theory. X P (X = x i )=1 i. Marginal Probability

Naïve Bayes classification. p ij 11/15/16. Probability theory. Probability theory. Probability theory. X P (X = x i )=1 i. Marginal Probability Probability theory Naïve Bayes classification Random variable: a variable whose possible values are numerical outcomes of a random phenomenon. s: A person s height, the outcome of a coin toss Distinguish

More information

A CONDITION TO OBTAIN THE SAME DECISION IN THE HOMOGENEITY TEST- ING PROBLEM FROM THE FREQUENTIST AND BAYESIAN POINT OF VIEW

A CONDITION TO OBTAIN THE SAME DECISION IN THE HOMOGENEITY TEST- ING PROBLEM FROM THE FREQUENTIST AND BAYESIAN POINT OF VIEW A CONDITION TO OBTAIN THE SAME DECISION IN THE HOMOGENEITY TEST- ING PROBLEM FROM THE FREQUENTIST AND BAYESIAN POINT OF VIEW Miguel A Gómez-Villegas and Beatriz González-Pérez Departamento de Estadística

More information

Mathematical Institute, University of Utrecht. The problem of estimating the mean of an observed Gaussian innite-dimensional vector

Mathematical Institute, University of Utrecht. The problem of estimating the mean of an observed Gaussian innite-dimensional vector On Minimax Filtering over Ellipsoids Eduard N. Belitser and Boris Y. Levit Mathematical Institute, University of Utrecht Budapestlaan 6, 3584 CD Utrecht, The Netherlands The problem of estimating the mean

More information

Loss functions, utility functions and Bayesian sample size determination.

Loss functions, utility functions and Bayesian sample size determination. Loss functions, utility functions and Bayesian sample size determination. Islam, A. F. M. Saiful The copyright of this thesis rests with the author and no quotation from it or information derived from

More information

Introduction to Machine Learning. Maximum Likelihood and Bayesian Inference. Lecturers: Eran Halperin, Lior Wolf

Introduction to Machine Learning. Maximum Likelihood and Bayesian Inference. Lecturers: Eran Halperin, Lior Wolf 1 Introduction to Machine Learning Maximum Likelihood and Bayesian Inference Lecturers: Eran Halperin, Lior Wolf 2014-15 We know that X ~ B(n,p), but we do not know p. We get a random sample from X, a

More information

COS513 LECTURE 8 STATISTICAL CONCEPTS

COS513 LECTURE 8 STATISTICAL CONCEPTS COS513 LECTURE 8 STATISTICAL CONCEPTS NIKOLAI SLAVOV AND ANKUR PARIKH 1. MAKING MEANINGFUL STATEMENTS FROM JOINT PROBABILITY DISTRIBUTIONS. A graphical model (GM) represents a family of probability distributions

More information

Integrated Objective Bayesian Estimation and Hypothesis Testing

Integrated Objective Bayesian Estimation and Hypothesis Testing Integrated Objective Bayesian Estimation and Hypothesis Testing José M. Bernardo Universitat de València, Spain jose.m.bernardo@uv.es 9th Valencia International Meeting on Bayesian Statistics Benidorm

More information

Charles Geyer University of Minnesota. joint work with. Glen Meeden University of Minnesota.

Charles Geyer University of Minnesota. joint work with. Glen Meeden University of Minnesota. Fuzzy Confidence Intervals and P -values Charles Geyer University of Minnesota joint work with Glen Meeden University of Minnesota http://www.stat.umn.edu/geyer/fuzz 1 Ordinary Confidence Intervals OK

More information

Formulas for probability theory and linear models SF2941

Formulas for probability theory and linear models SF2941 Formulas for probability theory and linear models SF2941 These pages + Appendix 2 of Gut) are permitted as assistance at the exam. 11 maj 2008 Selected formulae of probability Bivariate probability Transforms

More information

arxiv: v4 [math.nt] 31 Jul 2017

arxiv: v4 [math.nt] 31 Jul 2017 SERIES REPRESENTATION OF POWER FUNCTION arxiv:1603.02468v4 [math.nt] 31 Jul 2017 KOLOSOV PETRO Abstract. In this paper described numerical expansion of natural-valued power function x n, in point x = x

More information

WITH SWITCHING ARMS EXACT SOLUTION OF THE BELLMAN EQUATION FOR A / -DISCOUNTED REWARD IN A TWO-ARMED BANDIT

WITH SWITCHING ARMS EXACT SOLUTION OF THE BELLMAN EQUATION FOR A / -DISCOUNTED REWARD IN A TWO-ARMED BANDIT lournal of Applied Mathematics and Stochastic Analysis, 12:2 (1999), 151-160. EXACT SOLUTION OF THE BELLMAN EQUATION FOR A / -DISCOUNTED REWARD IN A TWO-ARMED BANDIT WITH SWITCHING ARMS DONCHO S. DONCHEV

More information

A statement of the problem, the notation and some definitions

A statement of the problem, the notation and some definitions 2 A statement of the problem, the notation and some definitions Consider a probability space (X, A), a family of distributions P o = {P θ,λ,θ =(θ 1,...,θ M ),λ=(λ 1,...,λ K M ), (θ, λ) Ω o R K } defined

More information

Semi Bayes Estimation of the Parameters of Binomial Distribution in the Presence of Outliers

Semi Bayes Estimation of the Parameters of Binomial Distribution in the Presence of Outliers Journal of Statistical Theory and Applications Volume 11, Number 2, 2012, pp. 143-163 ISSN 1538-7887 Semi Bayes Estimation of the Parameters of Binomial Distribution in the Presence of Outliers U. J. Dixit

More information

CS 591, Lecture 2 Data Analytics: Theory and Applications Boston University

CS 591, Lecture 2 Data Analytics: Theory and Applications Boston University CS 591, Lecture 2 Data Analytics: Theory and Applications Boston University Charalampos E. Tsourakakis January 25rd, 2017 Probability Theory The theory of probability is a system for making better guesses.

More information

Estimation of Operational Risk Capital Charge under Parameter Uncertainty

Estimation of Operational Risk Capital Charge under Parameter Uncertainty Estimation of Operational Risk Capital Charge under Parameter Uncertainty Pavel V. Shevchenko Principal Research Scientist, CSIRO Mathematical and Information Sciences, Sydney, Locked Bag 17, North Ryde,

More information

Hypothesis Testing. 1 Definitions of test statistics. CB: chapter 8; section 10.3

Hypothesis Testing. 1 Definitions of test statistics. CB: chapter 8; section 10.3 Hypothesis Testing CB: chapter 8; section 0.3 Hypothesis: statement about an unknown population parameter Examples: The average age of males in Sweden is 7. (statement about population mean) The lowest

More information

A noninformative Bayesian approach to domain estimation

A noninformative Bayesian approach to domain estimation A noninformative Bayesian approach to domain estimation Glen Meeden School of Statistics University of Minnesota Minneapolis, MN 55455 glen@stat.umn.edu August 2002 Revised July 2003 To appear in Journal

More information

NONTRIVIAL SOLUTIONS FOR SUPERQUADRATIC NONAUTONOMOUS PERIODIC SYSTEMS. Shouchuan Hu Nikolas S. Papageorgiou. 1. Introduction

NONTRIVIAL SOLUTIONS FOR SUPERQUADRATIC NONAUTONOMOUS PERIODIC SYSTEMS. Shouchuan Hu Nikolas S. Papageorgiou. 1. Introduction Topological Methods in Nonlinear Analysis Journal of the Juliusz Schauder Center Volume 34, 29, 327 338 NONTRIVIAL SOLUTIONS FOR SUPERQUADRATIC NONAUTONOMOUS PERIODIC SYSTEMS Shouchuan Hu Nikolas S. Papageorgiou

More information

Ordinary and Bayesian Shrinkage Estimation. Mohammad Qabaha

Ordinary and Bayesian Shrinkage Estimation. Mohammad Qabaha An - Najah Univ. J. Res. (N. Sc.) Vol., 007 Ordinary and ayesian Shrinkage Estimation التقدير باستخدام طريقتي التقلص العادية والتقلص لبييز Mohammad Qabaha محمد قبها Department of Mathematics, Faculty of

More information

Review Sheet on Convergence of Series MATH 141H

Review Sheet on Convergence of Series MATH 141H Review Sheet on Convergence of Series MATH 4H Jonathan Rosenberg November 27, 2006 There are many tests for convergence of series, and frequently it can been confusing. How do you tell what test to use?

More information

On the Behavior of Bayes Estimators, under Squared-error Loss Function

On the Behavior of Bayes Estimators, under Squared-error Loss Function Australian Journal of Basic and Applied Sciences, 3(4): 373-378, 009 ISSN 99-878 On the Behavior of Bayes Estimators, under Squared-error Loss Function Amir Teimour PayandehNajafabadi Department of Mathematical

More information

I. Bayesian econometrics

I. Bayesian econometrics I. Bayesian econometrics A. Introduction B. Bayesian inference in the univariate regression model C. Statistical decision theory Question: once we ve calculated the posterior distribution, what do we do

More information

Simulation of truncated normal variables. Christian P. Robert LSTA, Université Pierre et Marie Curie, Paris

Simulation of truncated normal variables. Christian P. Robert LSTA, Université Pierre et Marie Curie, Paris Simulation of truncated normal variables Christian P. Robert LSTA, Université Pierre et Marie Curie, Paris Abstract arxiv:0907.4010v1 [stat.co] 23 Jul 2009 We provide in this paper simulation algorithms

More information

The Distributions of Stopping Times For Ordinary And Compound Poisson Processes With Non-Linear Boundaries: Applications to Sequential Estimation.

The Distributions of Stopping Times For Ordinary And Compound Poisson Processes With Non-Linear Boundaries: Applications to Sequential Estimation. The Distributions of Stopping Times For Ordinary And Compound Poisson Processes With Non-Linear Boundaries: Applications to Sequential Estimation. Binghamton University Department of Mathematical Sciences

More information