STAT 512 sp 2018 Summary Sheet


Karl B. Gregory, Spring 2018

1. Transformations of a random variable

Let $X$ be a rv with support $\mathcal{X}$ and let $g$ be a function mapping $\mathcal{X}$ to $\mathcal{Y}$ with inverse mapping $g^{-1}(A) = \{x \in \mathcal{X} : g(x) \in A\}$ for all $A \subset \mathcal{Y}$.

If $X$ is a discrete rv, the rv $Y = g(X)$ is discrete and has pmf given by

$$p_Y(y) = \begin{cases} \sum_{x \in g^{-1}(y)} p_X(x) & \text{for } y \in \mathcal{Y} \\ 0 & \text{for } y \notin \mathcal{Y}. \end{cases}$$

If $X$ has pdf $f_X$, then the cdf of $Y = g(X)$ is given by

$$F_Y(y) = \int_{\{x\,:\,g(x) \leq y\}} f_X(x)\,dx, \quad -\infty < y < \infty.$$

If $X$ has pdf $f_X$ which is continuous on $\mathcal{X}$, and if $g$ is monotone and $g^{-1}$ has a continuous derivative, then the pdf of $Y = g(X)$ is given by

$$f_Y(y) = \begin{cases} f_X(g^{-1}(y)) \left|\frac{d}{dy} g^{-1}(y)\right| & \text{for } y \in \mathcal{Y} \\ 0 & \text{for } y \notin \mathcal{Y}. \end{cases}$$

If $X$ has mgf $M_X(t)$, the mgf of $Y = aX + b$ is given by $M_Y(t) = e^{bt} M_X(at)$.

If $X$ has continuous cdf $F_X$, the rv $U = F_X(X)$ has the uniform distribution on $(0,1)$.

To generate $X \sim F_X$, generate $U \sim \text{Uniform}(0,1)$ and set $X = F_X^{-1}(U)$, where $F_X^{-1}(u) = \inf\{x : F_X(x) \geq u\}$. If $F_X$ is strictly increasing, $F_X^{-1}$ satisfies

$$x = F_X^{-1}(u) \iff u = F_X(x)$$

for all $0 < u < 1$ and $-\infty < x < \infty$.
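The inverse-cdf recipe above is easy to try numerically. Here is a minimal sketch (my own illustration, not part of the original sheet, assuming NumPy is available), using the Exponential distribution with mean $\lambda$, for which $F_X(x) = 1 - e^{-x/\lambda}$ and $F_X^{-1}(u) = -\lambda\log(1-u)$:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0                        # mean of the Exponential target
u = rng.uniform(size=100_000)    # U ~ Uniform(0,1)
x = -lam * np.log(1.0 - u)       # X = F^{-1}(U) for F(x) = 1 - exp(-x/lam)

print(x.mean(), x.var())         # should be close to lam and lam^2
```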

2. Transformations of multiple random variables

Let $(X_1, X_2)$ be a pair of continuous random variables with pdf $f_{(X_1,X_2)}$ having joint support $\mathcal{X} = \{(x_1,x_2) : f_{(X_1,X_2)}(x_1,x_2) > 0\}$, and let $Y_1 = g_1(X_1,X_2)$ and $Y_2 = g_2(X_1,X_2)$, where $g_1$ and $g_2$ are one-to-one functions taking values in $\mathcal{X}$ and returning values in the set $\mathcal{Y} = \{(y_1,y_2) : y_1 = g_1(x_1,x_2),\ y_2 = g_2(x_1,x_2),\ (x_1,x_2) \in \mathcal{X}\}$. Then the joint pdf $f_{(Y_1,Y_2)}$ of $(Y_1,Y_2)$ is given by

$$f_{(Y_1,Y_2)}(y_1,y_2) = \begin{cases} f_{(X_1,X_2)}\big(g_1^{-1}(y_1,y_2),\, g_2^{-1}(y_1,y_2)\big)\,|J(y_1,y_2)| & \text{for } (y_1,y_2) \in \mathcal{Y} \\ 0 & \text{for } (y_1,y_2) \notin \mathcal{Y}, \end{cases}$$

where $g_1^{-1}$ and $g_2^{-1}$ are the functions satisfying

$$y_1 = g_1(x_1,x_2) \text{ and } y_2 = g_2(x_1,x_2) \iff x_1 = g_1^{-1}(y_1,y_2) \text{ and } x_2 = g_2^{-1}(y_1,y_2),$$

and

$$J(y_1,y_2) = \begin{vmatrix} \frac{\partial}{\partial y_1} g_1^{-1}(y_1,y_2) & \frac{\partial}{\partial y_2} g_1^{-1}(y_1,y_2) \\ \frac{\partial}{\partial y_1} g_2^{-1}(y_1,y_2) & \frac{\partial}{\partial y_2} g_2^{-1}(y_1,y_2) \end{vmatrix},$$

provided $J(y_1,y_2)$ is not equal to zero for all $(y_1,y_2) \in \mathcal{Y}$. For real numbers $a, b, c, d$,

$$\begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc.$$

Let $X_1,\ldots,X_n$ be mutually independent rvs such that $X_i$ has mgf $M_{X_i}(t)$ for $i = 1,\ldots,n$. Then the mgf of $Y = X_1 + \cdots + X_n$ is given by

$$M_Y(t) = \prod_{i=1}^n M_{X_i}(t).$$
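The product formula is often used to identify the distribution of a sum. As a quick numerical check (my own example, not part of the original sheet, assuming NumPy): for independent $X_1 \sim \text{Poisson}(2)$ and $X_2 \sim \text{Poisson}(3)$, $M_{X_1}(t)M_{X_2}(t) = e^{5(e^t - 1)}$, which is the Poisson(5) mgf, so $X_1 + X_2 \sim \text{Poisson}(5)$:

```python
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.poisson(2.0, size=200_000)
x2 = rng.poisson(3.0, size=200_000)
y = x1 + x2

# Poisson(5) has mean 5 and variance 5; the simulated sum should match both.
print(y.mean(), y.var())
```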

3. Random samples, sums of rvs, and order statistics

A random sample is a collection of mutually independent rvs all having the same distribution. This common distribution is called the population distribution, and quantities describing the population distribution are called population parameters. A sample statistic is a function of the rvs in the random sample, and its distribution is called its sampling distribution. What we can learn about population parameters from sample statistics has everything to do with their sampling distributions.

Random samples might be introduced in any of the following ways:

- Let $X_1,\ldots,X_n$ be a random sample from a population with such-and-such distribution.
- Let $X_1,\ldots,X_n$ be independent copies of the random variable $X$, where $X$ has such-and-such distribution.
- Let $X_1,\ldots,X_n$ be IID rvs from such-and-such distribution (IID means independent and identically distributed).
- Let $X_1,\ldots,X_n \overset{\text{iid}}{\sim}$ such-and-such distribution.

Let $X_1,\ldots,X_n$ be a random sample from a population with mean $\mu$ and variance $\sigma^2 < \infty$ and define the sample mean and sample variance as

$$\bar X_n = \frac{1}{n}(X_1 + \cdots + X_n), \qquad S_n^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar X_n)^2.$$

We have $E\bar X_n = \mu$, $\text{Var}\,\bar X_n = \sigma^2/n$, and $E S_n^2 = \sigma^2$.

For a random sample $X_1,\ldots,X_n$, the order statistics are the ordered data values, represented as

$$X_{(1)} < X_{(2)} < \cdots < X_{(n)}.$$

Let $X_{(1)} < \cdots < X_{(n)}$ be the order statistics of a random sample from a population with cdf $F_X$ and pdf $f_X$. Then the pdf of the $k$th order statistic $X_{(k)}$ is given by

$$f_{X_{(k)}}(x) = \frac{n!}{(k-1)!(n-k)!}\,[F_X(x)]^{k-1}[1 - F_X(x)]^{n-k} f_X(x)$$

for $k = 1,\ldots,n$. In particular, the minimum $X_{(1)}$ and the maximum $X_{(n)}$ have the pdfs

$$f_{X_{(1)}}(x) = n[1 - F_X(x)]^{n-1} f_X(x), \qquad f_{X_{(n)}}(x) = n[F_X(x)]^{n-1} f_X(x).$$

Moreover, the joint pdf of the $j$th and $k$th order statistics $(X_{(j)}, X_{(k)})$, with $1 \leq j < k \leq n$, is given by

$$f_{(X_{(j)},X_{(k)})}(u,v) = \frac{n!}{(j-1)!(k-j-1)!(n-k)!}\, f_X(u) f_X(v)\,[F_X(u)]^{j-1}[F_X(v) - F_X(u)]^{k-j-1}[1 - F_X(v)]^{n-k}$$

for $-\infty < u < v < \infty$. In particular, the joint pdf of the minimum and maximum $(X_{(1)}, X_{(n)})$ is

$$f_{(X_{(1)},X_{(n)})}(u,v) = n(n-1)\, f_X(u) f_X(v)\,[F_X(v) - F_X(u)]^{n-2}$$

for $-\infty < u < v < \infty$.
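The order-statistic densities are easy to sanity-check by simulation (my own check, not part of the original sheet, assuming NumPy). For a Uniform(0,1) sample, $f_{X_{(n)}}(x) = nx^{n-1}$ on $(0,1)$, so $E X_{(n)} = n/(n+1)$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 5, 200_000
x = rng.uniform(size=(reps, n))
mx = x.max(axis=1)               # X_(n) for each simulated sample

# For Uniform(0,1), E X_(n) = n/(n+1)
print(mx.mean(), n / (n + 1))    # the two should agree closely
```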

4. Pivot quantities and sampling from the Normal distribution

A pivot quantity is a function of sample statistics and population parameters which has a known distribution (not depending on unknown parameters).

A $(1-\alpha)100\%$ confidence interval (CI) for an unknown parameter $\theta \in \mathbb{R}$ is an interval $(L, U)$, where $L$ and $U$ are random variables such that $P_{(L,U)}(L < \theta < U) = 1 - \alpha$, for $\alpha \in (0,1)$.

Important distributions and upper quantile notation:

The Normal(0,1) distribution has pdf

$$\phi(z) = \frac{1}{\sqrt{2\pi}}\, e^{-z^2/2} \quad (-\infty < z < \infty),$$

and for $\xi \in (0,1)$ we denote by $z_\xi$ the value satisfying $P_Z(Z > z_\xi) = \xi$, where $Z \sim \text{Normal}(0,1)$.

The Chi-square distributions $\chi^2_\nu$, $\nu = 1, 2, \ldots$, have the pdfs

$$f_X(x;\nu) = \frac{1}{\Gamma(\nu/2)\,2^{\nu/2}}\, x^{\nu/2 - 1} e^{-x/2} \quad (x > 0),$$

and for $\xi \in (0,1)$ we denote by $\chi^2_{\nu,\xi}$ the value satisfying $P_X(X > \chi^2_{\nu,\xi}) = \xi$, where $X \sim \chi^2_\nu$. The parameter $\nu$ is called the degrees of freedom.

The t distributions $t_\nu$, $\nu = 1, 2, \ldots$, have the pdfs

$$f_T(t;\nu) = \frac{\Gamma\!\left(\frac{\nu+1}{2}\right)}{\Gamma\!\left(\frac{\nu}{2}\right)\sqrt{\nu\pi}} \left(1 + \frac{t^2}{\nu}\right)^{-(\nu+1)/2} \quad (-\infty < t < \infty),$$

and for $\xi \in (0,1)$ we denote by $t_{\nu,\xi}$ the value satisfying $P_T(T > t_{\nu,\xi}) = \xi$, where $T \sim t_\nu$. The parameter $\nu$ is called the degrees of freedom.

The F distributions $F_{\nu_1,\nu_2}$, $\nu_1, \nu_2 = 1, 2, \ldots$, have the pdfs

$$f_R(r;\nu_1,\nu_2) = \frac{\Gamma\!\left(\frac{\nu_1+\nu_2}{2}\right)}{\Gamma\!\left(\frac{\nu_1}{2}\right)\Gamma\!\left(\frac{\nu_2}{2}\right)} \left(\frac{\nu_1}{\nu_2}\right)^{\nu_1/2} r^{\nu_1/2 - 1} \left(1 + \frac{\nu_1}{\nu_2}\, r\right)^{-(\nu_1+\nu_2)/2} \quad (r > 0),$$

and for $\xi \in (0,1)$ we denote by $F_{\nu_1,\nu_2,\xi}$ the value satisfying $P_R(R > F_{\nu_1,\nu_2,\xi}) = \xi$, where $R \sim F_{\nu_1,\nu_2}$. The parameters $\nu_1$ and $\nu_2$ are called, respectively, the numerator degrees of freedom and the denominator degrees of freedom.

Let $Z \sim \text{Normal}(0,1)$ and $X \sim \chi^2_\nu$ be independent rvs. Then

$$T = \frac{Z}{\sqrt{X/\nu}} \sim t_\nu.$$

Let $X_1 \sim \chi^2_{\nu_1}$ and $X_2 \sim \chi^2_{\nu_2}$ be independent rvs. Then

$$R = \frac{X_1/\nu_1}{X_2/\nu_2} \sim F_{\nu_1,\nu_2}.$$

Let $X_1,\ldots,X_n$ be a random sample from the Normal$(\mu,\sigma^2)$ distribution. Then

$\sqrt{n}(\bar X_n - \mu)/\sigma \sim \text{Normal}(0,1)$, giving

$$P_{\bar X_n}\left(\bar X_n - z_{\alpha/2}\frac{\sigma}{\sqrt n} < \mu < \bar X_n + z_{\alpha/2}\frac{\sigma}{\sqrt n}\right) = 1 - \alpha.$$

$(n-1)S_n^2/\sigma^2 \sim \chi^2_{n-1}$, giving

$$P_{S_n^2}\left(\frac{(n-1)S_n^2}{\chi^2_{n-1,\alpha/2}} < \sigma^2 < \frac{(n-1)S_n^2}{\chi^2_{n-1,1-\alpha/2}}\right) = 1 - \alpha.$$
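A sketch of these two exact intervals in code (my own illustration, assuming NumPy and SciPy for the quantiles):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, mu, sigma = 25, 10.0, 2.0
x = rng.normal(mu, sigma, size=n)
alpha = 0.05

# z-interval for mu, sigma known
xbar = x.mean()
z = stats.norm.ppf(1 - alpha / 2)
print("mu CI:", xbar - z * sigma / np.sqrt(n), xbar + z * sigma / np.sqrt(n))

# chi-square interval for sigma^2; ppf(1 - alpha/2) is the upper-alpha/2 quantile
s2 = x.var(ddof=1)
lo = (n - 1) * s2 / stats.chi2.ppf(1 - alpha / 2, df=n - 1)
hi = (n - 1) * s2 / stats.chi2.ppf(alpha / 2, df=n - 1)
print("sigma^2 CI:", lo, hi)
```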

$\sqrt{n}(\bar X_n - \mu)/S_n \sim t_{n-1}$, giving

$$P_{(\bar X_n, S_n)}\left(\bar X_n - t_{n-1,\alpha/2}\frac{S_n}{\sqrt n} < \mu < \bar X_n + t_{n-1,\alpha/2}\frac{S_n}{\sqrt n}\right) = 1 - \alpha.$$

Consider two independent random samples $X_1,\ldots,X_{n_1}$ from the Normal$(\mu_1,\sigma_1^2)$ distribution and $Y_1,\ldots,Y_{n_2}$ from the Normal$(\mu_2,\sigma_2^2)$ distribution, with sample means $\bar X_{n_1}$ and $\bar Y_{n_2}$ and sample variances $S_1^2$ and $S_2^2$, respectively. Then

$$\frac{S_1^2/\sigma_1^2}{S_2^2/\sigma_2^2} \sim F_{n_1-1,\,n_2-1},$$

giving

$$P_{(S_1^2,S_2^2)}\left(\frac{S_1^2}{S_2^2}\, F_{n_2-1,n_1-1,1-\alpha/2} < \frac{\sigma_1^2}{\sigma_2^2} < \frac{S_1^2}{S_2^2}\, F_{n_2-1,n_1-1,\alpha/2}\right) = 1 - \alpha.$$

5. Parametric estimation and properties of estimators

Parametric framework: Let $X_1,\ldots,X_n$ have a joint distribution which depends on a finite number of parameters $\theta_1,\ldots,\theta_d$, the values of which are unknown and lie in the spaces $\Theta_1,\ldots,\Theta_d$, respectively, where $\Theta_k \subset \mathbb{R}$ for $k = 1,\ldots,d$. To know the joint distribution of $X_1,\ldots,X_n$, it is sufficient to know the values of $\theta_1,\ldots,\theta_d$, so we define estimators $\hat\theta_1,\ldots,\hat\theta_d$ of $\theta_1,\ldots,\theta_d$ based on $X_1,\ldots,X_n$.

Nonparametric framework: Let $X_1,\ldots,X_n$ have a joint distribution which depends on an infinite number of parameters. For example, suppose $X_1,\ldots,X_n$ is a random sample from a distribution with pdf $f_X$, where we do not specify any functional form for $f_X$. Then we may regard the value of the pdf $f_X(x)$ at every point $x \in \mathbb{R}$ as a parameter, and since there are an infinite number of values of $x \in \mathbb{R}$, there are infinitely many parameters.

The bias of an estimator $\hat\theta$ of a parameter $\theta$ is defined as $\text{Bias}\,\hat\theta = E\hat\theta - \theta$. An estimator is called unbiased if $\text{Bias}\,\hat\theta = 0$, that is, if $E\hat\theta = \theta$.

The standard error of an estimator $\hat\theta$ of a parameter $\theta$ is defined as $\text{SE}\,\hat\theta = \sqrt{\text{Var}\,\hat\theta}$, so the standard error of an estimator is simply its standard deviation.

The mean squared error (MSE) of an estimator $\hat\theta$ of a parameter $\theta$ is defined as $\text{MSE}\,\hat\theta = E(\hat\theta - \theta)^2$, so the MSE of $\hat\theta$ is the expected squared distance between $\hat\theta$ and $\theta$. We have

$$\text{MSE}\,\hat\theta = \text{Var}\,\hat\theta + (\text{Bias}\,\hat\theta)^2.$$

If $\hat\theta$ is an unbiased estimator of $\theta$, it is not generally true that $\tau(\hat\theta)$ is an unbiased estimator of $\tau(\theta)$.
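The bias-variance decomposition of the MSE can be seen numerically by comparing the usual sample variance (divisor $n-1$, unbiased) with the divisor-$n$ version (biased, but with smaller variance). This Monte Carlo comparison is my own illustration, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(4)
n, sigma2, reps = 10, 4.0, 100_000
x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))

s2_unbiased = x.var(axis=1, ddof=1)   # divisor n-1, unbiased for sigma^2
s2_divn = x.var(axis=1, ddof=0)       # divisor n, biased

for name, est in [("n-1", s2_unbiased), ("n", s2_divn)]:
    bias = est.mean() - sigma2
    mse = np.mean((est - sigma2) ** 2)
    # MSE should equal Var + Bias^2 (up to simulation error)
    print(name, "bias:", bias, "MSE:", mse, "Var+Bias^2:", est.var() + bias**2)
```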

6. Large-sample properties of estimators, consistency and WLLN

A sequence of estimators $\{\hat\theta_n\}_{n\geq1}$ of a parameter $\theta \in \Theta \subset \mathbb{R}$ is called consistent if for every $\epsilon > 0$ and every $\theta \in \Theta$,

$$\lim_{n\to\infty} P(|\hat\theta_n - \theta| < \epsilon) = 1.$$

The weak law of large numbers (WLLN): Let $X_1,\ldots,X_n$ be a random sample from a distribution with mean $\mu$ and variance $\sigma^2 < \infty$. Then $\bar X_n = \frac1n(X_1 + \cdots + X_n)$ is a consistent estimator of $\mu$.

A sequence of estimators $\{\hat\theta_n\}_{n\geq1}$ for $\theta$ is consistent if
(a) $\lim_{n\to\infty} \text{Var}\,\hat\theta_n = 0$ and
(b) $\lim_{n\to\infty} \text{Bias}\,\hat\theta_n = 0$.

Let $\{\hat\theta_{1,n}\}_{n\geq1}$ and $\{\hat\theta_{2,n}\}_{n\geq1}$ be sequences of estimators which are consistent for $\theta_1$ and $\theta_2$, respectively. Then
(a) $\hat\theta_{1,n} \pm \hat\theta_{2,n}$ is consistent for $\theta_1 \pm \theta_2$.
(b) $\hat\theta_{1,n}\hat\theta_{2,n}$ is consistent for $\theta_1\theta_2$.
(c) $\hat\theta_{1,n}/\hat\theta_{2,n}$ is consistent for $\theta_1/\theta_2$, so long as $\theta_2 \neq 0$.
(d) For any continuous function $\tau: \mathbb{R} \to \mathbb{R}$, $\tau(\hat\theta_{1,n})$ is consistent for $\tau(\theta_1)$.
(e) For any sequences $\{a_n\}_{n\geq1}$ and $\{b_n\}_{n\geq1}$ such that $\lim_{n\to\infty} a_n = 1$ and $\lim_{n\to\infty} b_n = 0$, $a_n\hat\theta_{1,n} + b_n$ is consistent for $\theta_1$.

If $X_1,\ldots,X_n$ is a random sample from a distribution with mean $\mu$, variance $\sigma^2 < \infty$, and 4th moment $\mu_4 < \infty$, the sample variance $S_n^2$ is consistent for $\sigma^2$.

If $X_1,\ldots,X_n$ is a random sample from the Bernoulli$(p)$ distribution and $\hat p = \frac1n(X_1 + \cdots + X_n)$, then $\hat p(1 - \hat p)$ is a consistent estimator of $p(1-p)$.

For a consistent sequence of estimators $\{\hat\theta_n\}_{n\geq1}$ of $\theta$, we often write $\hat\theta_n \overset{p}{\to} \theta$, where $\overset{p}{\to}$ denotes something called convergence in probability. We often say, "$\hat\theta_n$ converges in probability to $\theta$."
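A short simulation of the WLLN (my own illustration, assuming NumPy), showing $\bar X_n$ settling at $\mu$ as $n$ grows:

```python
import numpy as np

rng = np.random.default_rng(5)
mu = 1.5
for n in [10, 100, 10_000, 1_000_000]:
    x = rng.exponential(mu, size=n)   # population mean is mu
    print(n, x.mean())                # sample means approach mu
```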

7. Large-sample pivot quantities and central limit theorem

A sequence of random variables $Y_1, Y_2, \ldots$ with cdfs $F_{Y_1}, F_{Y_2}, \ldots$ is said to converge in distribution to the random variable $Y \sim F_Y$ if

$$\lim_{n\to\infty} F_{Y_n}(y) = F_Y(y)$$

for all $y \in \mathbb{R}$ at which $F_Y$ is continuous. We express convergence in distribution with the notation $Y_n \overset{D}{\to} Y$, and we refer to the distribution with cdf $F_Y$ as the asymptotic distribution of the sequence of random variables $Y_1, Y_2, \ldots$ Convergence in distribution is a sense in which a random variable (we can think of a quantity computed on larger and larger samples) behaves more and more like another random variable (as the sample size grows).

A large-sample pivot quantity is a function of sample statistics and population parameters for which the asymptotic distribution is fully known (not depending on unknown parameters).

The Central Limit Theorem (CLT): Let $X_1,\ldots,X_n$ be a random sample from any distribution and suppose the distribution has mean $\mu$ and variance $\sigma^2 < \infty$. Then

$$\frac{\bar X_n - \mu}{\sigma/\sqrt n} \overset{D}{\to} Z, \quad Z \sim \text{Normal}(0,1).$$

Application to CIs: The above means that for any $\alpha \in (0,1)$,

$$\lim_{n\to\infty} P\left(-z_{\alpha/2} < \frac{\bar X_n - \mu}{\sigma/\sqrt n} < z_{\alpha/2}\right) = 1 - \alpha,$$

giving that $\bar X_n \pm z_{\alpha/2}\frac{\sigma}{\sqrt n}$ is an approximate $(1-\alpha)100\%$ CI for $\mu$ as long as $n$ is large. A rule of thumb says $n \geq 30$ is large.

Slutsky's Theorem: Let $X_n \overset{D}{\to} X$ and $Y_n \overset{p}{\to} c$, where $c$ is a constant. Then
(a) $Y_n X_n \overset{D}{\to} cX$
(b) $X_n + Y_n \overset{D}{\to} X + c$

Corollary of Slutsky's Theorem: Let $X_1,\ldots,X_n$ be a random sample from any distribution and suppose the distribution has mean $\mu$ and variance $\sigma^2 < \infty$. Moreover, let $\hat\sigma_n$ be a consistent estimator of $\sigma$ based on $X_1,\ldots,X_n$. Then

$$\frac{\bar X_n - \mu}{\hat\sigma_n/\sqrt n} \overset{D}{\to} Z, \quad Z \sim \text{Normal}(0,1).$$

Application to CIs: The above means that for any $\alpha \in (0,1)$,

$$\lim_{n\to\infty} P\left(-z_{\alpha/2} < \frac{\bar X_n - \mu}{\hat\sigma_n/\sqrt n} < z_{\alpha/2}\right) = 1 - \alpha,$$

giving that $\bar X_n \pm z_{\alpha/2}\frac{\hat\sigma_n}{\sqrt n}$ is an approximate $(1-\alpha)100\%$ CI for $\mu$ as long as $n$ is large. A rule of thumb says $n \geq 30$ is large.

Let $X_1,\ldots,X_n$ be a random sample from any distribution and suppose the distribution has mean $\mu$, variance $\sigma^2 < \infty$, and 4th moment $\mu_4 < \infty$. Then

$$\frac{\bar X_n - \mu}{S_n/\sqrt n} \overset{D}{\to} Z, \quad Z \sim \text{Normal}(0,1),$$

so that for large $n$, $\bar X_n \pm z_{\alpha/2}\frac{S_n}{\sqrt n}$ is an approximate $(1-\alpha)100\%$ CI for $\mu$.
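The approximate interval's coverage can be checked by simulation. The sketch below (my own, assuming NumPy and SciPy) draws many samples of size $n = 50$ from a skewed (Exponential) population and records how often $\bar X_n \pm z_{\alpha/2} S_n/\sqrt n$ covers $\mu$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n, reps, alpha = 50, 20_000, 0.05
mu = 2.0                                   # mean of the Exponential population
x = rng.exponential(mu, size=(reps, n))

xbar = x.mean(axis=1)
s = x.std(axis=1, ddof=1)
half = stats.norm.ppf(1 - alpha / 2) * s / np.sqrt(n)

covered = (xbar - half < mu) & (mu < xbar + half)
print(covered.mean())   # near 0.95; typically slightly below for skewed populations
```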

Let $X_1,\ldots,X_n$ be a random sample from the Bernoulli$(p)$ distribution and let $\hat p_n = \bar X_n$. Then

$$\frac{\hat p_n - p}{\sqrt{\hat p_n(1 - \hat p_n)/n}} \overset{D}{\to} Z, \quad Z \sim \text{Normal}(0,1),$$

so that for large $n$ (a rule of thumb is to require $\min\{n\hat p_n,\, n(1 - \hat p_n)\} \geq 5$),

$$\hat p_n \pm z_{\alpha/2}\sqrt{\frac{\hat p_n(1 - \hat p_n)}{n}}$$

is an approximate $(1-\alpha)100\%$ CI for $p$.

Summary of $(1-\alpha)100\%$ CIs for $\mu$: Let $X_1,\ldots,X_n$ be independent rvs with the same distribution as the rv $X$, where $EX = \mu$ and $\text{Var}\,X = \sigma^2$.

- $X \sim \text{Normal}(\mu,\sigma^2)$:
  - $\sigma^2$ known: $\bar X \pm z_{\alpha/2}\frac{\sigma}{\sqrt n}$ (exact)
  - $\sigma^2$ unknown: $\bar X \pm t_{n-1,\alpha/2}\frac{S_n}{\sqrt n}$ (exact)
- $X$ non-Normal, $n \geq 30$:
  - $\sigma^2$ known: $\bar X \pm z_{\alpha/2}\frac{\sigma}{\sqrt n}$ (approx)
  - $\sigma^2$ unknown: $\bar X \pm z_{\alpha/2}\frac{S_n}{\sqrt n}$ (approx)
- $X$ non-Normal, $n < 30$: beyond the scope of this course.
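A sketch of the Bernoulli interval in code (my own illustration, assuming NumPy and SciPy; the data vector is made up):

```python
import numpy as np
from scipy import stats

x = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 1] * 6)   # 60 Bernoulli outcomes
n, alpha = len(x), 0.05
phat = x.mean()

# rule of thumb: need min(n*phat, n*(1 - phat)) >= 5
z = stats.norm.ppf(1 - alpha / 2)
half = z * np.sqrt(phat * (1 - phat) / n)
print(phat - half, phat + half)
```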

8. Classical two-sample results comparing means, proportions, and variances

Summary of $(1-\alpha)100\%$ CIs for $\mu_1 - \mu_2$: Let $X_1,\ldots,X_{n_1}$ be independent rvs with the same distribution as the rv $X$, where $EX = \mu_1$ and $\text{Var}\,X = \sigma_1^2$, and let $Y_1,\ldots,Y_{n_2}$ be independent rvs with the same distribution as the rv $Y$, where $EY = \mu_2$ and $\text{Var}\,Y = \sigma_2^2$.

Case (i): $\sigma_1^2 \neq \sigma_2^2$:

- Normal populations, $\min\{n_1,n_2\} < 30$: $\bar X - \bar Y \pm t_{\hat\nu,\alpha/2}\sqrt{\frac{S_1^2}{n_1} + \frac{S_2^2}{n_2}}$
- $\min\{n_1,n_2\} \geq 30$ (Normal or non-Normal): $\bar X - \bar Y \pm z_{\alpha/2}\sqrt{\frac{S_1^2}{n_1} + \frac{S_2^2}{n_2}}$
- Non-Normal populations, $\min\{n_1,n_2\} < 30$: beyond the scope of this course.

In the above,

$$\hat\nu = \frac{\left(\frac{S_1^2}{n_1} + \frac{S_2^2}{n_2}\right)^2}{\frac{1}{n_1-1}\left(\frac{S_1^2}{n_1}\right)^2 + \frac{1}{n_2-1}\left(\frac{S_2^2}{n_2}\right)^2}.$$

Case (ii): $\sigma_1^2 = \sigma_2^2$:

- Normal populations, $\min\{n_1,n_2\} < 30$: $\bar X - \bar Y \pm t_{n_1+n_2-2,\alpha/2}\, S_{\text{pooled}}\sqrt{\frac{1}{n_1} + \frac{1}{n_2}}$
- $\min\{n_1,n_2\} \geq 30$ (Normal or non-Normal): $\bar X - \bar Y \pm z_{\alpha/2}\, S_{\text{pooled}}\sqrt{\frac{1}{n_1} + \frac{1}{n_2}}$
- Non-Normal populations, $\min\{n_1,n_2\} < 30$: beyond the scope of this course.

Here $S_{\text{pooled}}^2 = \frac{(n_1-1)S_1^2 + (n_2-1)S_2^2}{n_1+n_2-2}$.
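A sketch of the Case (i) (unequal-variance, Welch) interval in code (my own illustration, assuming NumPy and SciPy; the two samples are simulated):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.normal(5.0, 1.0, size=20)   # sample 1
y = rng.normal(4.0, 3.0, size=25)   # sample 2, unequal variance
alpha = 0.05

v1, v2 = x.var(ddof=1) / len(x), y.var(ddof=1) / len(y)
nu_hat = (v1 + v2) ** 2 / (v1**2 / (len(x) - 1) + v2**2 / (len(y) - 1))

t = stats.t.ppf(1 - alpha / 2, df=nu_hat)
diff, half = x.mean() - y.mean(), t * np.sqrt(v1 + v2)
print(diff - half, diff + half)
```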

A $(1-\alpha)100\%$ CI for $p_1 - p_2$: Let $X_1,\ldots,X_{n_1} \overset{\text{iid}}{\sim} \text{Bernoulli}(p_1)$ and $Y_1,\ldots,Y_{n_2} \overset{\text{iid}}{\sim} \text{Bernoulli}(p_2)$, and let $\hat p_1 = \frac{1}{n_1}(X_1 + \cdots + X_{n_1})$ and $\hat p_2 = \frac{1}{n_2}(Y_1 + \cdots + Y_{n_2})$. Then for large $n_1$ and $n_2$ (the rule of thumb is $\min\{n_1\hat p_1,\, n_1(1 - \hat p_1)\} \geq 5$ and $\min\{n_2\hat p_2,\, n_2(1 - \hat p_2)\} \geq 5$),

$$\hat p_1 - \hat p_2 \pm z_{\alpha/2}\sqrt{\frac{\hat p_1(1 - \hat p_1)}{n_1} + \frac{\hat p_2(1 - \hat p_2)}{n_2}}$$

is an approximate $(1-\alpha)100\%$ CI for $p_1 - p_2$.

A $(1-\alpha)100\%$ CI for $\sigma_1^2/\sigma_2^2$: Consider two independent random samples $X_1,\ldots,X_{n_1}$ from the Normal$(\mu_1,\sigma_1^2)$ distribution and $Y_1,\ldots,Y_{n_2}$ from the Normal$(\mu_2,\sigma_2^2)$ distribution with sample variances $S_1^2$ and $S_2^2$, respectively. Then

$$\left(\frac{S_1^2}{S_2^2}\, F_{n_2-1,n_1-1,1-\alpha/2},\ \frac{S_1^2}{S_2^2}\, F_{n_2-1,n_1-1,\alpha/2}\right)$$

is a $(1-\alpha)100\%$ CI for $\sigma_1^2/\sigma_2^2$.

9. Sample size calculations

Let $\hat\theta$ be an estimator of a parameter $\theta$ and let $\hat\theta \pm \eta$ be a CI for $\theta$. Then the quantity $\eta$ is called the margin of error (ME) of the CI.

Let $X_1,\ldots,X_n$ be a random sample from a population with mean $\mu$ and variance $\sigma^2$. The smallest sample size $n$ such that the ME of the CI $\bar X_n \pm z_{\alpha/2}\frac{\sigma}{\sqrt n}$ is at most $M$ is

$$n = \left\lceil \left(\frac{z_{\alpha/2}}{M}\right)^2 \sigma^2 \right\rceil,$$

where $\lceil x \rceil$ is the smallest integer greater than or equal to $x$. Plug in an estimate $\hat\sigma^2$ of $\sigma^2$ from a previous study to make the calculation.

Let $X_1,\ldots,X_n$ be a random sample from the Bernoulli$(p)$ distribution. The smallest sample size $n$ such that the ME of the CI

$$\hat p \pm z_{\alpha/2}\sqrt{\frac{p(1-p)}{n}}$$

is at most $M$ is

$$n = \left\lceil \left(\frac{z_{\alpha/2}}{M}\right)^2 p(1-p) \right\rceil.$$

Plug in an estimate $\hat p$ of $p$ from a previous study to make the calculation, or use $p = 1/2$ to err on the side of larger $n$.
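The two one-sample formulas in code (my own sketch, assuming SciPy for the Normal quantile; math.ceil implements $\lceil\cdot\rceil$):

```python
import math
from scipy import stats

def n_for_mean(sigma, M, alpha=0.05):
    z = stats.norm.ppf(1 - alpha / 2)
    return math.ceil((z / M) ** 2 * sigma ** 2)

def n_for_proportion(M, p=0.5, alpha=0.05):
    # p = 1/2 errs on the side of a larger n
    z = stats.norm.ppf(1 - alpha / 2)
    return math.ceil((z / M) ** 2 * p * (1 - p))

print(n_for_mean(sigma=2.0, M=0.5))   # ME at most 0.5 when sigma is about 2
print(n_for_proportion(M=0.03))       # the familiar ~1068 for a 3-point ME
```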

Let $X_1,\ldots,X_{n_1}$ be iid rvs with mean $\mu_1$ and variance $\sigma_1^2$ and let $Y_1,\ldots,Y_{n_2}$ be iid rvs with mean $\mu_2$ and variance $\sigma_2^2$. To find the smallest $n = n_1 + n_2$ such that the ME of the CI

$$\bar X - \bar Y \pm z_{\alpha/2}\sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}$$

is at most $M$, compute

$$n = \left\lceil \left(\frac{z_{\alpha/2}}{M}\right)^2 (\sigma_1 + \sigma_2)^2 \right\rceil$$

and set

$$n_1 = \left\lceil \frac{\sigma_1}{\sigma_1 + \sigma_2}\, n \right\rceil \quad \text{and} \quad n_2 = \left\lceil \frac{\sigma_2}{\sigma_1 + \sigma_2}\, n \right\rceil.$$

Plug in estimates $\hat\sigma_1$ and $\hat\sigma_2$ for $\sigma_1$ and $\sigma_2$ from a previous study to make the calculation.

Let $X_1,\ldots,X_{n_1}$ be a random sample from the Bernoulli$(p_1)$ distribution and let $Y_1,\ldots,Y_{n_2}$ be a random sample from the Bernoulli$(p_2)$ distribution. To find the smallest $n = n_1 + n_2$ such that the ME of the CI

$$\hat p_1 - \hat p_2 \pm z_{\alpha/2}\sqrt{\frac{p_1(1-p_1)}{n_1} + \frac{p_2(1-p_2)}{n_2}}$$

is at most $M$, compute

$$n = \left\lceil \left(\frac{z_{\alpha/2}}{M}\right)^2 \left(\sqrt{p_1(1-p_1)} + \sqrt{p_2(1-p_2)}\right)^2 \right\rceil$$

and set

$$n_1 = \left\lceil \frac{\sqrt{p_1(1-p_1)}}{\sqrt{p_1(1-p_1)} + \sqrt{p_2(1-p_2)}}\, n \right\rceil \quad \text{and} \quad n_2 = \left\lceil \frac{\sqrt{p_2(1-p_2)}}{\sqrt{p_1(1-p_1)} + \sqrt{p_2(1-p_2)}}\, n \right\rceil.$$

Plug in estimates $\hat p_1$ and $\hat p_2$ for $p_1$ and $p_2$ from a previous study to make the calculation.

10. First principles of estimation, sufficiency and the Rao-Blackwell theorem

Let $X_1,\ldots,X_n$ have a joint distribution depending on the parameter $\theta \in \Theta \subset \mathbb{R}^d$. A statistic $T(X_1,\ldots,X_n)$ is a sufficient statistic for $\theta$ if it carries all the information about $\theta$ contained in $X_1,\ldots,X_n$. Specifically, $T(X_1,\ldots,X_n)$ is a sufficient statistic for $\theta$ if the joint distribution of $X_1,\ldots,X_n$ conditional on the value of $T(X_1,\ldots,X_n)$ does not depend on $\theta$.

We can establish sufficiency using the following results:

For discrete rvs $(X_1,\ldots,X_n) \sim p_{(X_1,\ldots,X_n)}(x_1,\ldots,x_n;\theta)$, $T(X_1,\ldots,X_n)$ is a sufficient statistic for $\theta$ if for all $(x_1,\ldots,x_n)$ in the support of $(X_1,\ldots,X_n)$, the ratio

$$\frac{p_{(X_1,\ldots,X_n)}(x_1,\ldots,x_n;\theta)}{p_T(T(x_1,\ldots,x_n);\theta)}$$

is free of $\theta$, where $p_T(t;\theta)$ is the pmf of $T(X_1,\ldots,X_n)$.

For continuous rvs $(X_1,\ldots,X_n) \sim f_{(X_1,\ldots,X_n)}(x_1,\ldots,x_n;\theta)$, $T(X_1,\ldots,X_n)$ is a sufficient statistic for $\theta$ if for all $(x_1,\ldots,x_n)$ in the support of $(X_1,\ldots,X_n)$, the ratio

$$\frac{f_{(X_1,\ldots,X_n)}(x_1,\ldots,x_n;\theta)}{f_T(T(x_1,\ldots,x_n);\theta)}$$

is free of $\theta$, where $f_T(t;\theta)$ is the pdf of $T(X_1,\ldots,X_n)$.

The factorization theorem gives another way to see whether a statistic is sufficient for $\theta$:

For discrete rvs $(X_1,\ldots,X_n) \sim p_{(X_1,\ldots,X_n)}(x_1,\ldots,x_n;\theta)$, $T(X_1,\ldots,X_n)$ is a sufficient statistic for $\theta$ if and only if there exist functions $g(t;\theta)$ and $h(x_1,\ldots,x_n)$ such that

$$p_{(X_1,\ldots,X_n)}(x_1,\ldots,x_n;\theta) = g(T(x_1,\ldots,x_n);\theta)\, h(x_1,\ldots,x_n)$$

for all $(x_1,\ldots,x_n)$ in the support of $(X_1,\ldots,X_n)$ and all $\theta \in \Theta$.

For continuous rvs $(X_1,\ldots,X_n) \sim f_{(X_1,\ldots,X_n)}(x_1,\ldots,x_n;\theta)$, $T(X_1,\ldots,X_n)$ is a sufficient statistic for $\theta$ if and only if there exist functions $g(t;\theta)$ and $h(x_1,\ldots,x_n)$ such that

$$f_{(X_1,\ldots,X_n)}(x_1,\ldots,x_n;\theta) = g(T(x_1,\ldots,x_n);\theta)\, h(x_1,\ldots,x_n)$$

for all $(x_1,\ldots,x_n)$ in the support of $(X_1,\ldots,X_n)$ and all $\theta \in \Theta$.

We also consider statistics with multiple components, which take the form $T(X_1,\ldots,X_n) = (T_1(X_1,\ldots,X_n),\ldots,T_K(X_1,\ldots,X_n))$ for some $K$. For example, $T(X_1,\ldots,X_n) = (X_{(1)},\ldots,X_{(n)})$ is the set of order statistics (which is always sufficient); here $K = n$.

Minimum-variance unbiased estimator (MVUE): Let $\hat\theta$ be an unbiased estimator of $\theta \in \Theta \subset \mathbb{R}$. If $\text{Var}\,\hat\theta \leq \text{Var}\,\tilde\theta$ for every unbiased estimator $\tilde\theta$ of $\theta$, then $\hat\theta$ is a MVUE.

Rao-Blackwell Theorem: Let $\tilde\theta$ be an estimator of $\theta \in \Theta \subset \mathbb{R}$ such that $E\tilde\theta = \theta$, and let $T$ be a sufficient statistic for $\theta$. Define $\hat\theta = E[\tilde\theta \mid T]$. Then $\hat\theta$ is an estimator of $\theta$ such that $E\hat\theta = \theta$ and $\text{Var}\,\hat\theta \leq \text{Var}\,\tilde\theta$.

Interpretation: An unbiased estimator of $\theta$ can always be improved by taking into account the value of a sufficient statistic for $\theta$. Therefore, a MVUE must be a function of a sufficient statistic.

The MVUE for a parameter $\theta$ is (essentially) unique (for all situations considered in this course); to find it, do the following:
(a) Find a sufficient statistic for $\theta$.
(b) Find a function of the sufficient statistic which is unbiased for $\theta$. This is the MVUE.

This also works for finding the MVUE of a function $\tau(\theta)$ of $\theta$. In step (b), just find a function of the sufficient statistic which is unbiased for $\tau(\theta)$.
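The Rao-Blackwell variance reduction can be seen in a small Monte Carlo study (my own illustration, assuming NumPy). For Bernoulli$(p)$, $\tilde\theta = X_1$ is unbiased for $p$, and conditioning on the sufficient statistic $T = \sum_{i=1}^n X_i$ gives $E[X_1 \mid T] = T/n = \bar X_n$, which is unbiased with variance $p(1-p)/n$ instead of $p(1-p)$:

```python
import numpy as np

rng = np.random.default_rng(8)
n, p, reps = 20, 0.3, 100_000
x = rng.binomial(1, p, size=(reps, n))

crude = x[:, 0]        # theta_tilde = X_1, unbiased for p
rb = x.mean(axis=1)    # E[X_1 | sum of X_i] = X_bar, the Rao-Blackwellization

print(crude.mean(), crude.var())   # mean near p, variance near p(1-p)
print(rb.mean(), rb.var())         # mean near p, variance near p(1-p)/n
```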

11. MoMs and MLEs

Let $X_1,\ldots,X_n$ be independent rvs with the same distribution as the rv $X$, where $X$ has a distribution depending on the parameters $\theta_1,\ldots,\theta_d$. If the first $d$ moments $EX^1,\ldots,EX^d$ of $X$ are finite, then the method of moments (MoM) estimators $\hat\theta_1,\ldots,\hat\theta_d$ are the values of $\theta_1,\ldots,\theta_d$ which solve the following system of equations:

$$m_1 := \frac1n\sum_{i=1}^n X_i = EX^1 =: \mu_1(\theta_1,\ldots,\theta_d)$$
$$\vdots$$
$$m_d := \frac1n\sum_{i=1}^n X_i^d = EX^d =: \mu_d(\theta_1,\ldots,\theta_d).$$

Let $X_1,\ldots,X_n$ be a random sample from a distribution with pdf $f_X(x;\theta_1,\ldots,\theta_d)$ or pmf $p_X(x;\theta_1,\ldots,\theta_d)$ which depends on some parameters $(\theta_1,\ldots,\theta_d) \in \Theta \subset \mathbb{R}^d$. Then the likelihood function is defined as

$$L(\theta_1,\ldots,\theta_d; X_1,\ldots,X_n) = \begin{cases} \prod_{i=1}^n f_X(X_i;\theta_1,\ldots,\theta_d) & \text{if } X_1,\ldots,X_n \overset{\text{iid}}{\sim} f_X(x;\theta_1,\ldots,\theta_d) \\ \prod_{i=1}^n p_X(X_i;\theta_1,\ldots,\theta_d) & \text{if } X_1,\ldots,X_n \overset{\text{iid}}{\sim} p_X(x;\theta_1,\ldots,\theta_d). \end{cases}$$

Moreover, the log-likelihood function is defined as $\ell(\theta_1,\ldots,\theta_d; X_1,\ldots,X_n) = \log L(\theta_1,\ldots,\theta_d; X_1,\ldots,X_n)$.

Let $X_1,\ldots,X_n$ be rvs with likelihood function $L(\theta_1,\ldots,\theta_d; X_1,\ldots,X_n)$ for some parameters $(\theta_1,\ldots,\theta_d) \in \Theta \subset \mathbb{R}^d$. Then the maximum likelihood estimators (MLEs) of $\theta_1,\ldots,\theta_d$ are the values $\hat\theta_1,\ldots,\hat\theta_d$ which maximize the likelihood function over all $(\theta_1,\ldots,\theta_d) \in \Theta$. That is,

$$(\hat\theta_1,\ldots,\hat\theta_d) = \operatorname*{argmax}_{(\theta_1,\ldots,\theta_d) \in \Theta} L(\theta_1,\ldots,\theta_d; X_1,\ldots,X_n) = \operatorname*{argmax}_{(\theta_1,\ldots,\theta_d) \in \Theta} \ell(\theta_1,\ldots,\theta_d; X_1,\ldots,X_n).$$

If $\ell(\theta_1,\ldots,\theta_d; X_1,\ldots,X_n)$ is differentiable and has a single maximum in the interior of $\Theta$, then we can find the MLEs $\hat\theta_1,\ldots,\hat\theta_d$ by solving the following system of equations:

$$\frac{\partial}{\partial\theta_1}\ell(\theta_1,\ldots,\theta_d; X_1,\ldots,X_n) = 0$$
$$\vdots$$
$$\frac{\partial}{\partial\theta_d}\ell(\theta_1,\ldots,\theta_d; X_1,\ldots,X_n) = 0.$$

The MLE is always a function of a sufficient statistic. If $\hat\theta$ is the MLE for $\theta$, then $\tau(\hat\theta)$ is the MLE for $\tau(\theta)$, for any function $\tau$.
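When the score equations have no closed-form solution, the log-likelihood can be maximized numerically. A sketch for the Gamma$(\alpha,\beta)$ model (my own illustration, assuming NumPy and SciPy; the MoM solution, obtained from $EX = \alpha\beta$ and $\text{Var}\,X = \alpha\beta^2$, makes a good starting value):

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(9)
x = rng.gamma(shape=3.0, scale=2.0, size=500)   # true alpha = 3, beta = 2

# Method of moments: beta = (m2 - m1^2)/m1, alpha = m1/beta
m1, m2 = x.mean(), (x**2).mean()
b_mom = (m2 - m1**2) / m1
a_mom = m1 / b_mom

# MLE: minimize the negative log-likelihood over (alpha, beta)
def negloglik(theta):
    a, b = theta
    return -np.sum(stats.gamma.logpdf(x, a, scale=b))

res = optimize.minimize(negloglik, x0=[a_mom, b_mom],
                        bounds=[(1e-6, None), (1e-6, None)])
print("MoM:", a_mom, b_mom)
print("MLE:", res.x)
```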

Table 1: Commonly encountered pmfs and pdfs along with their mgfs, expected values, and variances.

- Bernoulli$(p)$: $p_X(x;p) = p^x(1-p)^{1-x}$, $x = 0,1$; $M_X(t) = pe^t + (1-p)$; $EX = p$; $\text{Var}\,X = p(1-p)$.
- Binomial$(n,p)$: $p_X(x;n,p) = \binom{n}{x}p^x(1-p)^{n-x}$, $x = 0,1,\ldots,n$; $M_X(t) = [pe^t + (1-p)]^n$; $EX = np$; $\text{Var}\,X = np(1-p)$.
- Geometric$(p)$: $p_X(x;p) = (1-p)^{x-1}p$, $x = 1,2,\ldots$; $M_X(t) = \frac{pe^t}{1-(1-p)e^t}$; $EX = \frac{1}{p}$; $\text{Var}\,X = \frac{1-p}{p^2}$.
- Negative Binomial$(r,p)$: $p_X(x;p,r) = \binom{x-1}{r-1}(1-p)^{x-r}p^r$, $x = r, r+1, \ldots$; $M_X(t) = \left[\frac{pe^t}{1-(1-p)e^t}\right]^r$; $EX = \frac{r}{p}$; $\text{Var}\,X = \frac{r(1-p)}{p^2}$.
- Poisson$(\lambda)$: $p_X(x;\lambda) = e^{-\lambda}\lambda^x/x!$, $x = 0,1,\ldots$; $M_X(t) = e^{\lambda(e^t-1)}$; $EX = \lambda$; $\text{Var}\,X = \lambda$.
- Hypergeometric$(N,M,K)$: $p_X(x;N,M,K) = \binom{M}{x}\binom{N-M}{K-x}/\binom{N}{K}$, $x = 0,1,\ldots,K$; $M_X(t)$: no simple closed form; $EX = \frac{KM}{N}$; $\text{Var}\,X = \frac{KM}{N}\,\frac{(N-K)(N-M)}{N(N-1)}$.
- Discrete Uniform$(K)$: $p_X(x;K) = \frac1K$, $x = 1,\ldots,K$; $M_X(t) = \frac1K\sum_{x=1}^K e^{tx}$; $EX = \frac{K+1}{2}$; $\text{Var}\,X = \frac{(K+1)(K-1)}{12}$.
- Discrete uniform on data points $x_1,\ldots,x_n$: $p_X(x;x_1,\ldots,x_n) = \frac1n$, $x = x_1,\ldots,x_n$; $M_X(t) = \frac1n\sum_{i=1}^n e^{tx_i}$; $EX = \frac1n\sum_{i=1}^n x_i = \bar x$; $\text{Var}\,X = \frac1n\sum_{i=1}^n (x_i - \bar x)^2$.
- Normal$(\mu,\sigma^2)$: $f_X(x;\mu,\sigma^2) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$, $-\infty < x < \infty$; $M_X(t) = e^{\mu t + \sigma^2 t^2/2}$; $EX = \mu$; $\text{Var}\,X = \sigma^2$.
- Gamma$(\alpha,\beta)$: $f_X(x;\alpha,\beta) = \frac{1}{\Gamma(\alpha)\beta^\alpha}\, x^{\alpha-1}\exp\left(-\frac{x}{\beta}\right)$, $0 < x < \infty$; $M_X(t) = (1-\beta t)^{-\alpha}$; $EX = \alpha\beta$; $\text{Var}\,X = \alpha\beta^2$.
- Exponential$(\lambda)$: $f_X(x;\lambda) = \frac1\lambda\exp\left(-\frac{x}{\lambda}\right)$, $0 < x < \infty$; $M_X(t) = (1-\lambda t)^{-1}$; $EX = \lambda$; $\text{Var}\,X = \lambda^2$.
- Chi-square$(\nu)$: $f_X(x;\nu) = \frac{1}{\Gamma(\nu/2)\,2^{\nu/2}}\, x^{\nu/2-1}\exp\left(-\frac{x}{2}\right)$, $0 < x < \infty$; $M_X(t) = (1-2t)^{-\nu/2}$; $EX = \nu$; $\text{Var}\,X = 2\nu$.
- Beta$(\alpha,\beta)$: $f_X(x;\alpha,\beta) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\, x^{\alpha-1}(1-x)^{\beta-1}$, $0 < x < 1$; $M_X(t) = 1 + \sum_{k=1}^\infty \frac{t^k}{k!}\prod_{r=0}^{k-1}\frac{\alpha+r}{\alpha+\beta+r}$; $EX = \frac{\alpha}{\alpha+\beta}$; $\text{Var}\,X = \frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)}$.

Table 2: The multinoulli and multinomial pmfs and the bivariate Normal pdf.

- Multinoulli: $p_{(X_1,\ldots,X_K)}(x_1,\ldots,x_K; p_1,\ldots,p_K) = p_1^{x_1}\cdots p_K^{x_K}$ for $(x_1,\ldots,x_K) \in \{0,1\}^K$ with $\sum_{k=1}^K x_k = 1$.
- Multinomial: $p_{(Y_1,\ldots,Y_K)}(y_1,\ldots,y_K; n, p_1,\ldots,p_K) = \frac{n!}{y_1!\cdots y_K!}\, p_1^{y_1}\cdots p_K^{y_K}$ for $(y_1,\ldots,y_K) \in \{0,1,\ldots,n\}^K$ with $\sum_{k=1}^K y_k = n$.
- Bivariate Normal:

$$f_{(X,Y)}(x,y;\mu_X,\mu_Y,\sigma_X^2,\sigma_Y^2,\rho) = \frac{1}{2\pi\sigma_X\sigma_Y\sqrt{1-\rho^2}}\exp\left(-\frac{1}{2(1-\rho^2)}\left[\left(\frac{x-\mu_X}{\sigma_X}\right)^2 - 2\rho\left(\frac{x-\mu_X}{\sigma_X}\right)\left(\frac{y-\mu_Y}{\sigma_Y}\right) + \left(\frac{y-\mu_Y}{\sigma_Y}\right)^2\right]\right).$$
