Chapter 2. Poisson point processes


1 Chapter 2. Poisson point processes
Jean-François Coeurjolly, Laboratoire Jean Kuntzmann (LJK), Grenoble University

2 Setting for this chapter

To ease the presentation, assume $S \subseteq \mathbb{R}^d$. The intensity measure $\mu$ is absolutely continuous with respect to the Lebesgue measure; denote by $\rho$ its density. We assume that
- $\rho$ is locally integrable on $S$, i.e. for every bounded $B \subseteq S$ (i.e. $|B| < \infty$), $\mu(B) = \int_B \rho(\xi)\,\mathrm{d}\xi < \infty$;
- $X$ is a simple point process, i.e. $\mu(\{\xi\}) = 0$ for every $\xi \in S$.

3 Towards the Poisson point process

The simplest point process is a point process with exactly one point:
- one uniform point: $P(\xi \in B) = |B|/|S|$ for $B \subseteq S$;
- one point with a density $f$ on $S$: $P(\xi \in B) = \int_B f(\xi)\,\mathrm{d}\xi \big/ \int_S f(\xi)\,\mathrm{d}\xi$.

4 Binomial point process

Definition. Let $f$ be a density on $B \subseteq S$ and let $n \ge 1$. The point process corresponding to $n$ i.i.d. points with density $f$ is called the binomial point process on $B$ and is denoted by $\mathrm{binomial}(B, n, f)$.

Why binomial?
- Particular case: $|B| < \infty$ and $\xi_i \sim U(B)$. For any $A \subseteq B$,
\[ P(N(A) = k) = \binom{n}{k} \frac{|A|^k\, |B \setminus A|^{n-k}}{|B|^n}. \]
- More generally, with $f(\xi) = \rho(\xi)/\mu(B)$: for any $A \subseteq B$,
\[ P(N(A) = k) = \binom{n}{k} \frac{\mu(A)^k\, \mu(B \setminus A)^{n-k}}{\mu(B)^n}. \]

5 Poisson point process

Definition. A point process $X$ is called a Poisson point process on $S \subseteq \mathbb{R}^d$ with intensity function $\rho(\cdot)$ if
1. for every $B \subseteq S$ with $\mu(B) < \infty$, $N(B) \sim \mathcal{P}(\mu(B))$;
2. for every $n \ge 1$ and $B \subseteq S$ with $0 < \mu(B) < \infty$, conditionally on $N(B) = n$, $X_B \sim \mathrm{binomial}(B, n, f)$ with $f(\xi) = \rho(\xi)/\mu(B)$ (if $\mu(B) = 0$ then $X_B = \emptyset$).

We denote $X \sim \mathrm{Poisson}(S, \rho)$.

Remark: when $\rho(\cdot)$ is constant, $X$ is said to be a homogeneous Poisson point process. The homogeneous Poisson point process is stationary and isotropic.

6 Simulation on a bounded domain

The objective is to generate on a bounded domain $S$ a Poisson point process defined on $S'$ with $S \subseteq S'$. We assume that $S$ is a rectangle.

Homogeneous case:
1. Generate $N(S) \sim \mathcal{P}(\rho |S|)$; let $n$ be this realization.
2. Generate $n$ uniform points in $S$.

Inhomogeneous case: assume $\rho(\cdot) \le c$. The simulation is done by thinning (see Møller and Waagepetersen, 2004) a homogeneous Poisson point process:
1. Generate $\mathrm{Poisson}(S, c)$; let $y = \{y_1, \ldots, y_n\}$ be this realization.
2. For $i = 1, \ldots, n$, delete the point $y_i$ independently with probability $1 - \rho(y_i)/c$.
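The two schemes above can be sketched in a few lines of NumPy (a minimal sketch; the function names and the example intensity $\rho(\xi) = 100\,e^{-\xi_1}$ are our own choices, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(5)

def rpoispp_homog(rho, a, b):
    """Homogeneous Poisson on the rectangle [0, a] x [0, b]:
    draw N ~ Poisson(rho * |S|), then N i.i.d. uniform points."""
    n = rng.poisson(rho * a * b)
    return np.column_stack([rng.uniform(0, a, n), rng.uniform(0, b, n)])

def rpoispp_inhom(rho_fun, c, a, b):
    """Inhomogeneous Poisson by thinning, assuming rho_fun(.) <= c on the
    rectangle: simulate Poisson(S, c), then keep y_i with probability
    rho(y_i) / c."""
    y = rpoispp_homog(c, a, b)
    keep = rng.uniform(size=len(y)) < rho_fun(y) / c
    return y[keep]

# example: rho(xi) = 100 * exp(-xi_1) on [0,1]^2, dominated by c = 100,
# so E N(S) = 100 * (1 - 1/e), about 63.2 points on average
x = rpoispp_inhom(lambda y: 100 * np.exp(-y[:, 0]), 100.0, 1.0, 1.0)
print(len(x))
```

The retention probability $\rho(y_i)/c$ is exactly the thinning rule of step 2 above; the closer the bound $c$ is to $\sup \rho$, the fewer candidate points are wasted.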

7 Homogeneous Poisson point process. $S = [0,1]^2$, $\rho = 100$ (recall $E\,n(X_S) = E\,N(S) = \rho|S|$). [Figure: a realization with 110 points.]

8 Homogeneous Poisson point process. $S = [0,1]^2$, $\rho = 200$ (recall $E\,n(X_S) = E\,N(S) = \rho|S|$). [Figure: a realization with 184 points.]

9 Homogeneous Poisson point process. $S = [0,2] \times [0,1]$, $\rho = 100$ (recall $E\,n(X_S) = E\,N(S) = \rho|S|$). [Figure: a realization with 210 points.]

10 Inhomogeneous Poisson point process. $S = [-1,1]^2$, $\rho(\xi) = \beta \exp(-\xi_1 - \xi_2 - \xi_1^3)$. The parameter $\beta$ is adjusted to fix $E\,n(X_S) = E\,N(S) = \int_S \rho(\xi)\,\mathrm{d}\xi$. [Figure: a realization.]

11 Inhomogeneous Poisson point process. $S = [-1,1]^2$, $\rho(\xi) = \beta \exp(-2\xi_1^2 - \xi_2^2)$. The parameter $\beta$ is adjusted to fix $E\,n(X_S) = E\,N(S) = \int_S \rho(\xi)\,\mathrm{d}\xi$. [Figure: a realization.]

12 Inhomogeneous Poisson point process. $S = [-1,1]^2$, $\rho(\xi) = \beta \exp(2 \sin(4\pi \xi_1 \xi_2))$. The parameter $\beta$ is adjusted to fix $E\,n(X_S) = E\,N(S) = \int_S \rho(\xi)\,\mathrm{d}\xi$. [Figure: a realization.]

13 Probability distribution

Proposition.
(i) $X \sim \mathrm{Poisson}(S, \rho)$ if and only if for every $B \subseteq S$ with $\mu(B) < \infty$ and every $F \subseteq N_{lf}$,
\[ P(X_B \in F) = \sum_{n \ge 0} \frac{e^{-\mu(B)}}{n!} \int_B \cdots \int_B 1\big(\{x_1, \ldots, x_n\} \in F\big) \prod_{i=1}^n \rho(x_i)\,\mathrm{d}x_i, \]
where the term for $n = 0$ reads $e^{-\mu(B)}\, 1(\emptyset \in F)$.
(ii) If $X \sim \mathrm{Poisson}(S, \rho)$ then for any function $h : N_{lf} \to \mathbb{R}_+$ and any $B \subseteq S$ with $\mu(B) < \infty$,
\[ E\,h(X_B) = \sum_{n \ge 0} \frac{e^{-\mu(B)}}{n!} \int_B \cdots \int_B h\big(\{x_1, \ldots, x_n\}\big) \prod_{i=1}^n \rho(x_i)\,\mathrm{d}x_i, \]
where the term for $n = 0$ reads $e^{-\mu(B)}\, h(\emptyset)$.

Proof: (i) follows from $P(X_B \in F) = \sum_{n \ge 0} P(X_B \in F \mid N(B) = n)\, P(N(B) = n)$ and the definition of the Poisson point process; (ii) follows from standard measure theory.

14 Existence of a Poisson point process

If $|S| < \infty$, the existence is given by the previous proposition. If $|S| = \infty$ ($S = \mathbb{R}^d$ for example), the existence follows from the next result.

Theorem. $X \sim \mathrm{Poisson}(S, \rho)$ exists and is uniquely determined by its void probabilities
\[ v(B) = P(N(B) = 0) = e^{-\mu(B)}, \quad \text{for bounded } B \subseteq S. \]

Proof: see Møller and Waagepetersen (2004).

15 Independent scattering

Theorem. If $X \sim \mathrm{Poisson}(S, \rho)$ then $X_{B_1}, X_{B_2}, \ldots$ are independent for any disjoint sets $B_1, B_2, \ldots \subseteq S$.

Proof: let $B_1, B_2$ be disjoint bounded domains of $S$ (the general case is done by induction and since $S$ is a complete space). Let $F_1, F_2 \subseteq N_{lf}$, $h_j = 1(\cdot \in F_j)$ for $j = 1, 2$, $h(x) = h_1(x \cap B_1)\, h_2(x \cap B_2)$ and $B = B_1 \cup B_2$. Then
\begin{align*}
P(X_{B_1} \in F_1, X_{B_2} \in F_2)
&= E\big(h_1(X_{B_1})\, h_2(X_{B_2})\big) = E\,h(X_B) \\
&= e^{-\mu(B_1) - \mu(B_2)} \sum_{n \ge 0} \frac{1}{n!} \int_{(B_1 \cup B_2)^n} h\big(\{x_1, \ldots, x_n\}\big) \prod_{i=1}^n \rho(x_i)\,\mathrm{d}x_i \\
&= e^{-\mu(B_1) - \mu(B_2)} \sum_{n \ge 0} \frac{1}{n!} \sum_{k=0}^n \binom{n}{k} \int_{B_1^k} \int_{B_2^{n-k}} h_1\big(\{x_1, \ldots, x_k\}\big)\, h_2\big(\{x_{k+1}, \ldots, x_n\}\big) \prod_{i=1}^n \rho(x_i)\,\mathrm{d}x_i \\
&= e^{-\mu(B_1) - \mu(B_2)} \sum_{k_1 \ge 0} \sum_{k_2 \ge 0} \frac{1}{k_1!}\, \frac{1}{k_2!} \int_{B_1^{k_1}} \int_{B_2^{k_2}} h_1\big(\{x_1, \ldots, x_{k_1}\}\big)\, h_2\big(\{y_1, \ldots, y_{k_2}\}\big) \prod_{i=1}^{k_1} \rho(x_i)\,\mathrm{d}x_i \prod_{i=1}^{k_2} \rho(y_i)\,\mathrm{d}y_i \\
&= E\,h_1(X_{B_1})\, E\,h_2(X_{B_2}).
\end{align*}
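Independent scattering is easy to check empirically: for disjoint $B_1, B_2$ the counts $N(B_1)$ and $N(B_2)$ must be uncorrelated. A minimal Monte Carlo sketch in the homogeneous case (our own example, splitting $[0,1]^2$ into two halves; given $N(S) = n$ the points are i.i.d. uniform, so $N(B_1) \mid N(S) = n$ is binomial):

```python
import numpy as np

rng = np.random.default_rng(6)

# Homogeneous Poisson(100) on S = [0,1]^2, split into the disjoint halves
# B1 = [0, 1/2] x [0,1] and B2 = (1/2, 1] x [0,1].  Given N(S) = n the points
# are i.i.d. uniform, so N(B1) | N(S) = n ~ Binomial(n, 1/2).
m = 50000
n = rng.poisson(100.0, size=m)
n1 = rng.binomial(n, 0.5)   # N(B1)
n2 = n - n1                 # N(B2)

cov = np.cov(n1, n2)[0, 1]
print(cov, n1.mean(), n2.mean())  # covariance near 0, both means near 50
```

In fact the theorem gives much more than zero covariance (full independence of the restricted processes), but uncorrelated counts are the part one can check in one line.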

16 Slivnyak-Mecke Theorem

Theorem. Let $X \sim \mathrm{Poisson}(S, \rho)$ and let $h : S \times N_{lf} \to \mathbb{R}_+$ be a non-negative function. Then
\[ E \sum_{\xi \in X} h(\xi, X \setminus \xi) = \int_S E\,h(\xi, X)\, \rho(\xi)\,\mathrm{d}\xi. \]

17 Slivnyak-Mecke Theorem (2)

Exercise: a point $\xi$ of a configuration of points $x$ is said to be an $R$-close neighbour if $d(\xi, x \setminus \xi) \le R$. Let $X$ be a homogeneous Poisson point process on $\mathbb{R}^2$ with intensity $\rho$. Compute the mean number of $R$-close neighbours of $X$ in $B = [0,1]^2$.

18 Slivnyak-Mecke Theorem (3)

Answer:
\[ E \sum_{\xi \in X_B} 1\big(d(\xi, X \setminus \xi) \le R\big) = \rho\big(1 - e^{-\rho \pi R^2}\big). \]

19 Simulation

[Figure: two realizations with the $R$-close neighbours highlighted.]
- $\rho = 20$, $R = 0.1$: observed $\sum_{\xi \in x} 1(d(\xi, x \setminus \xi) \le R) = 9$; expected $\rho(1 - e^{-\rho \pi R^2}) \approx 9.33$.
- $\rho = 100$, $R = 0.05$: observed $\sum_{\xi \in x} 1(d(\xi, x \setminus \xi) \le R) = 60$; expected $\rho(1 - e^{-\rho \pi R^2}) \approx 54.41$.
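The exercise can also be checked by averaging over many realizations (a sketch under our own conventions: we simulate on the enlarged window $[-R, 1+R]^2$ so that every point of $B$ sees all of its potential neighbours within distance $R$):

```python
import numpy as np

rng = np.random.default_rng(7)

def mean_close_neighbours(rho, R, reps=300):
    """Monte Carlo estimate of E sum_{xi in X_B} 1(d(xi, X \\ xi) <= R) for
    B = [0,1]^2, simulating on the enlarged window [-R, 1+R]^2 so that the
    points of B see all neighbours within distance R."""
    lo, hi = -R, 1.0 + R
    area = (hi - lo) ** 2
    total = 0
    for _ in range(reps):
        n = rng.poisson(rho * area)
        pts = rng.uniform(lo, hi, size=(n, 2))
        in_B = np.all((pts >= 0.0) & (pts <= 1.0), axis=1)
        d2 = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
        np.fill_diagonal(d2, np.inf)  # a point is not its own neighbour
        total += np.count_nonzero(in_B & (d2.min(axis=1) <= R ** 2))
    return total / reps

rho, R = 100.0, 0.05
theory = rho * (1.0 - np.exp(-rho * np.pi * R ** 2))  # about 54.41
est = mean_close_neighbours(rho, R)
print(est, theory)
```

The Monte Carlo mean should settle close to 54.41, the value reported on the slide.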

20 Proof of the Slivnyak-Mecke Theorem

Proof: we only consider the case where $h(\xi, X \setminus \xi)$ depends only on $X_B$ for some $B \subseteq S$ with $\mu(B) < \infty$ (which of course covers the case $\mu(S) < \infty$). The general case is more awkward (see Møller and Waagepetersen, 2004).

Idea: apply the formula for the expectation to $g(X_B) = \sum_{\xi \in X} h(\xi, X \setminus \xi)$. Since $g(\emptyset) = 0$, we have
\begin{align*}
E\,g(X_B)
&= \sum_{n \ge 1} \frac{e^{-\mu(B)}}{n!} \int_B \cdots \int_B \sum_{i=1}^n h\big(x_i, \{x_1, \ldots, x_n\} \setminus x_i\big) \prod_{i=1}^n \rho(x_i)\,\mathrm{d}x_i \\
&= \sum_{n \ge 1} \frac{e^{-\mu(B)}}{(n-1)!} \int_B \cdots \int_B h\big(x_n, \{x_1, \ldots, x_{n-1}\}\big) \prod_{i=1}^n \rho(x_i)\,\mathrm{d}x_i \\
&= \int_B \Big( \sum_{m \ge 0} \frac{e^{-\mu(B)}}{m!} \int_{B^m} h\big(y, \{x_1, \ldots, x_m\}\big) \prod_{i=1}^m \rho(x_i)\,\mathrm{d}x_i \Big) \rho(y)\,\mathrm{d}y \\
&= \int_B E\,h(\xi, X)\, \rho(\xi)\,\mathrm{d}\xi,
\end{align*}
where the second equality uses the symmetry of the integrand in $x_1, \ldots, x_n$.

21 Generalized Slivnyak-Mecke Theorem

Theorem. Let $h : S^m \times N_{lf} \to \mathbb{R}_+$. Then
\[ E \sum_{\xi_1, \ldots, \xi_m \in X}^{\ne} h\big(\{\xi_1, \ldots, \xi_m\},\, X \setminus \{\xi_1, \ldots, \xi_m\}\big) = \int_S \cdots \int_S E\,h(x_1, \ldots, x_m, X) \prod_{i=1}^m \rho(x_i)\,\mathrm{d}x_i, \]
where the sum runs over pairwise distinct points $\xi_1, \ldots, \xi_m$ of $X$.

Proof: left as an exercise (proof by induction).

22 Second-order intensity of a Poisson point process

We recall that the second-order intensity $\rho^{(2)}$ (which is the density of the second-order factorial moment measure $\alpha^{(2)}$) of a $\mathrm{Poisson}(S, \rho)$ is such that $\rho^{(2)}(u, v)\,\mathrm{d}u\,\mathrm{d}v$ can be interpreted as the probability of observing a pair of distinct points of $X$ around $u$ and $v$.

Proposition. For any $u, v \in S$, $\rho^{(2)}(u, v) = \rho(u)\rho(v)$.

Proof: for any $B_1, B_2 \subseteq S$, we have from the generalized Slivnyak-Mecke theorem (with $m = 2$)
\[ \alpha^{(2)}(B_1 \times B_2) = E \sum_{u, v \in X}^{\ne} 1\big((u, v) \in B_1 \times B_2\big) = \int_{B_1} \int_{B_2} \rho(u)\rho(v)\,\mathrm{d}u\,\mathrm{d}v \overset{\mathrm{def}}{=} \int_{B_1} \int_{B_2} \rho^{(2)}(u, v)\,\mathrm{d}u\,\mathrm{d}v. \]

23 Density of a Poisson point process

This concept is useful when dealing with parametric estimation.

Proposition. If $\mu(S) < \infty$ then $X \sim \mathrm{Poisson}(S, \rho)$ is absolutely continuous with respect to $\mathrm{Poisson}(S, 1)$ and the density is given by
\[ f(x) = \exp\big(|S| - \mu(S)\big) \prod_{\xi \in x} \rho(\xi). \]

Proof: let $F \subseteq N_{lf}$ and let $Y$ denote a $\mathrm{Poisson}(S, 1)$. Then
\[ E\big(1(Y \in F)\, f(Y)\big) = \sum_{n \ge 0} \frac{e^{-|S|}}{n!} \int_S \cdots \int_S 1\big(\{x_1, \ldots, x_n\} \in F\big)\, \exp\big(|S| - \mu(S)\big) \prod_{i=1}^n \rho(x_i)\, \mathrm{d}x_1 \cdots \mathrm{d}x_n = P(X \in F). \]
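When $\rho$ is constant, the density depends on the pattern only through its cardinality, which gives a quick numerical sanity check that $E\,f(Y) = 1$ (a sketch with our own choices $S = [0,1]^2$ and $\rho \equiv 2$, so that $f(x) = e^{1-2}\, 2^{n(x)}$):

```python
import numpy as np

rng = np.random.default_rng(8)

# For S = [0,1]^2 and the constant intensity rho = 2:
#   f(x) = exp(|S| - mu(S)) * prod_{xi in x} rho(xi) = exp(1 - 2) * 2^{n(x)},
# so f depends on the pattern only through its cardinality n(x), and
# E f(Y) with Y ~ Poisson(S, 1) must equal 1 (f is a density).
m = 20000
counts = rng.poisson(1.0, size=m)          # n(Y) ~ Poisson(|S|) = Poisson(1)
f_vals = np.exp(1.0 - 2.0) * 2.0 ** counts
print(f_vals.mean())                       # should be close to 1
```

This is exactly the identity $E\big(1(Y \in F) f(Y)\big) = P(X \in F)$ of the proof, applied with $F = N_{lf}$.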

24 Homogeneous case

We consider here the problem of estimating the parameter $\rho$ of a homogeneous Poisson point process defined on $S$ and observed in a window $W \subseteq S$. Since $N(W) \sim \mathcal{P}(\rho |W|)$, the natural estimator of $\rho$ is
\[ \widehat{\rho} = N(W) / |W|. \]

Properties:
(i) $\widehat{\rho}$ corresponds to the maximum likelihood estimate.
(ii) $\widehat{\rho}$ is unbiased.
(iii) $\mathrm{Var}\,\widehat{\rho} = \rho / |W|$.

Proof: (i) follows from the definition of the density; (ii)-(iii) can be checked using the Campbell formulae.

28 Homogeneous case (2)

Asymptotic results:
- For large $N(W)$, $\widehat{\rho}\,|W| \approx N(\rho |W|, \rho |W|)$ and so $|W|^{1/2}(\widehat{\rho} - \rho) \approx N(0, \rho)$ (the approximation is actually a convergence in distribution as $W \to \mathbb{R}^d$).
- Variance-stabilizing transform: $2\,|W|^{1/2}\big(\sqrt{\widehat{\rho}} - \sqrt{\rho}\big) \approx N(0, 1)$.
- We deduce a $1 - \alpha$ ($\alpha \in (0, 1)$) confidence interval for $\rho$:
\[ IC_{1-\alpha}(\rho) = \Big( \sqrt{\widehat{\rho}} \pm \frac{z_{\alpha/2}}{2\,|W|^{1/2}} \Big)^2. \]
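A quick empirical check of this interval (a sketch; since only $N(W)$ enters $\widehat{\rho}$, we draw the count directly instead of simulating point locations):

```python
import numpy as np
from math import sqrt

rng = np.random.default_rng(1)

def ci_intensity(n_points, area, z=1.96):
    """rho_hat = N(W)/|W| and the variance-stabilized 95% interval
    (sqrt(rho_hat) +/- z / (2 |W|^{1/2}))^2."""
    rho_hat = n_points / area
    half = z / (2.0 * sqrt(area))
    lo = max(sqrt(rho_hat) - half, 0.0) ** 2
    hi = (sqrt(rho_hat) + half) ** 2
    return rho_hat, lo, hi

# empirical coverage over m replications with rho = 100 on W = [0,1]^2
rho, area, m = 100.0, 1.0, 2000
cover = 0
for _ in range(m):
    _, lo, hi = ci_intensity(rng.poisson(rho * area), area)
    cover += lo <= rho <= hi
print(cover / m)  # should be close to 0.95
```

For an observed count of 100 on $[0,1]^2$ the interval is roughly $(10 \pm 0.98)^2 \approx (81.4,\ 120.5)$.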

32 A simulation example

We generated $m$ replications of homogeneous Poisson point processes with intensity $\rho = 100$ on $[0,1]^2$ (black plots) and on $[0,2]^2$ (red plots).

[Figure: histograms of $\widehat{\rho}$, compared with the $N(100, 100)$ and $N(100, 100/4)$ densities, and histograms of $2\,|W|^{1/2}(\sqrt{\widehat{\rho}} - \sqrt{\rho})$, compared with the $N(0,1)$ density.]

33 A simulation example (2)

Same setting; empirical results:

                                        W = [0,1]^2    W = [0,2]^2
  Emp. mean of rho-hat                      ...            ...
  Emp. var. of rho-hat                      ...            ...
  Emp. coverage of 95% conf. intervals     95.31%         94.78%

34 Inhomogeneous case: parametric estimation

Assume that $\rho$ is parametrized by a vector $\theta \in \mathbb{R}^p$ ($p \ge 1$). The most well-known model is the log-linear one:
\[ \rho(\xi) = \rho(\xi; \theta) = \exp\big(\theta^\top z(\xi)\big), \]
where $z(\xi) = (z_1(\xi), z_2(\xi), \ldots, z_p(\xi))^\top$ corresponds to known spatial functions or spatial covariates. $\theta$ can be estimated by maximizing the log-likelihood on $W$:
\[ \sum_{\xi \in X \cap W} \log \rho(\xi; \theta) + \int_W \big(1 - \rho(\xi; \theta)\big)\,\mathrm{d}\xi = |W| + \underbrace{\sum_{\xi \in X \cap W} \theta^\top z(\xi) - \int_W \exp\big(\theta^\top z(\xi)\big)\,\mathrm{d}\xi}_{:=\, l_W(X, \theta)}. \]
In other words, $\widehat{\theta} = \mathrm{argmax}_\theta\ l_W(X, \theta)$.

35 Inhomogeneous case: parametric estimation (2)

Why would $\widehat{\theta}$ be a good estimate?
- Compute the score function
\[ s_W(X, \theta) = \nabla_\theta\, l_W(X, \theta) = \sum_{\xi \in X \cap W} z(\xi) - \int_W z(\xi) \underbrace{\exp\big(\theta^\top z(\xi)\big)}_{=\, \rho(\xi;\theta)}\,\mathrm{d}\xi. \]
- The true parameter $\theta_0$ (i.e. $X \sim P_{\theta_0}$) cancels the expectation of the score function. Indeed, from the Campbell formula,
\[ E\,s_W(X, \theta) = \int_W z(\xi)\, \Big( e^{\theta_0^\top z(\xi)} - e^{\theta^\top z(\xi)} \Big)\,\mathrm{d}\xi = 0 \quad \text{when } \theta = \theta_0. \]
- Rathbun and Cressie (1994) showed the strong consistency and the asymptotic normality of $\widehat{\theta}$ as $W \to \mathbb{R}^d$.
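The whole pipeline (simulate by thinning, then maximize $l_W$) can be sketched for a two-parameter log-linear model; everything here is our own toy choice: the covariate $z(\xi) = \xi_1^2 + \xi_2^2$, the true values $(\beta_0, \theta_0) = (5.9, 2)$, and a grid approximation of the integral term of $l_W$ on $W = [0,1]^2$:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

def z(p):                              # hypothetical covariate z(xi)
    return p[:, 0] ** 2 + p[:, 1] ** 2

beta0, theta0 = 5.9, 2.0               # true parameters: log rho = beta + theta*z
c = np.exp(beta0 + theta0 * 2.0)       # rho <= c on [0,1]^2 since z <= 2

# simulate by thinning a homogeneous Poisson(c) process on W = [0,1]^2
n = rng.poisson(c)
cand = rng.uniform(size=(n, 2))
x = cand[rng.uniform(size=n) < np.exp(beta0 + theta0 * z(cand)) / c]

# midpoint grid for the integral term of l_W (|W| = 1, so integral ~ mean)
g = (np.arange(100) + 0.5) / 100
gx, gy = np.meshgrid(g, g)
grid = np.column_stack([gx.ravel(), gy.ravel()])

def neg_loglik(par):
    beta, theta = par
    return -(beta + theta * z(x)).sum() + np.exp(beta + theta * z(grid)).mean()

res = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead",
               options={"maxiter": 2000, "xatol": 1e-4, "fatol": 1e-4})
beta_hat, theta_hat = res.x
print(beta_hat, theta_hat)  # should be near (5.9, 2.0)
```

The objective is concave in $(\beta, \theta)$ (it is a Poisson log-likelihood), so any reasonable optimizer works; in practice one would use dedicated tools such as `ppm` in the R package spatstat.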

40 A simulation example

As an example, consider the model $\log \rho(\xi) = \theta z(\xi)$ with $\theta = 2$ and $z(\xi) = \xi_1^2 + \xi_2^2$. We generated $m = 1000$ replications of this model and estimated $\theta$:

                              W = [0,1]^2       W = [0,2]^2
                              E N(W) = 200      E N(W) = 800
  Emp. mean of theta-hat          ...               ...
  Emp. var. of theta-hat          ...               ...

41 Data example: dataset bei

A point pattern giving the locations of 3605 trees in a tropical rain forest, accompanied by covariate data giving the elevation (altitude) ($z_1$) and the slope of elevation ($z_2$) in the study region.

[Figure: the point pattern and the two covariate maps, elevation $z_1$ and elevation gradient $z_2$.]

Assume an inhomogeneous Poisson point process (which is not true, see the next chapter) with intensity
\[ \log \rho(\xi) = \beta + \theta_1 z_1(\xi) + \theta_2 z_2(\xi). \]

Question: how can we prove that each covariate has a significant influence?

42 Data example: dataset bei (2)

Using the MLE we get the estimates $\widehat{\theta}_1$ and $\widehat{\theta}_2$.

Without any closed-form estimate of the standard errors of the estimates, one may use a parametric bootstrap approach:
1. Generate $B = 1000$ inhomogeneous Poisson point processes with intensity $\log \rho(\xi) = \widehat{\beta} + \widehat{\theta}_1 z_1(\xi) + \widehat{\theta}_2 z_2(\xi)$.
2. For each replication, estimate the parameters, which allows us to estimate the standard errors $\widehat{\sigma}_{\widehat{\theta}_1}$ and $\widehat{\sigma}_{\widehat{\theta}_2}$ of the initial estimates.

Using the asymptotic normality result, we can evaluate p-values of the tests with alternatives $H_1 : \theta_1 \ne 0$ and $H_1 : \theta_2 \ne 0$: we get
\[ \text{p-value}_1 \approx 0\%, \qquad \text{p-value}_2 \approx 35\%. \]
The elevation is clearly significant.
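The bootstrap step can be illustrated on the homogeneous case, where the true standard error is known to be $\sqrt{\rho/|W|}$ (a sketch of the principle only; the bei covariates are not reproduced here, and since $\widehat{\rho}$ depends only on the count, resampling the count suffices):

```python
import numpy as np

rng = np.random.default_rng(3)

# "observed" pattern: homogeneous Poisson(rho = 100) on W = [0,1]^2;
# only the count matters for rho_hat = N(W)/|W|
area = 1.0
rho_hat = rng.poisson(100.0 * area) / area

# parametric bootstrap: B resamples from the *fitted* model, re-estimated
B = 5000
rho_star = rng.poisson(rho_hat * area, size=B) / area
se_boot = rho_star.std(ddof=1)

print(rho_hat, se_boot, np.sqrt(rho_hat / area))
# se_boot should match the plug-in value sqrt(rho_hat / |W|)
```

For the log-linear model, step 1 would simulate by thinning with intensity $\exp(\widehat\beta + \widehat\theta^\top z)$ and step 2 would refit by maximizing $l_W$ on each replicate; the standard deviation of the $B$ re-estimated coefficients then estimates $\widehat{\sigma}_{\widehat{\theta}_j}$.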

44 Inhomogeneous case: nonparametric estimation (Diggle, 2003)

The idea is to mimic kernel density estimation to define a nonparametric estimator of the spatial function $\rho$. Let $k : \mathbb{R}^d \to \mathbb{R}_+$ be a symmetric kernel integrating to one. Examples of kernels:
- Gaussian kernel: $(2\pi)^{-d/2} \exp(-\|y\|^2/2)$;
- cylindric kernel ($d = 2$): $\pi^{-1}\, 1(\|y\| \le 1)$;
- Epanechnikov kernel ($d = 1$): $\tfrac{3}{4}\, 1(|y| < 1)(1 - y^2)$.

Let $h$ be a positive real number (which will play the role of a bandwidth). The nonparametric estimate (with border correction) at a location $\eta$ is defined as
\[ \widehat{\rho}_h(\eta) = K_h(\eta)^{-1} \sum_{\xi \in X \cap W} \frac{1}{h^d}\, k\Big( \frac{\eta - \xi}{h} \Big), \]
where $K_h(\eta) = \int_{(W - \eta)/h} k(\omega)\,\mathrm{d}\omega$ is the border-correction factor.

45 Intuitively, this works...

Indeed, using the Campbell formula and the change of variables $\omega = (\xi - \eta)/h$ (together with the symmetry of $k$), we obtain
\begin{align*}
E\,\widehat{\rho}_h(\eta)
&= K_h(\eta)^{-1}\, E \sum_{\xi \in X \cap W} \frac{1}{h^d}\, k\Big( \frac{\eta - \xi}{h} \Big)
= K_h(\eta)^{-1} \int_W \frac{1}{h^d}\, k\Big( \frac{\eta - \xi}{h} \Big) \rho(\xi)\,\mathrm{d}\xi \\
&= K_h(\eta)^{-1} \int_{(W - \eta)/h} k(\omega)\, \rho(\omega h + \eta)\,\mathrm{d}\omega
\overset{h \text{ small}}{\approx} K_h(\eta)^{-1} \int_{(W - \eta)/h} k(\omega)\, \rho(\eta)\,\mathrm{d}\omega = \rho(\eta),
\end{align*}
since $K_h(\eta) = \int_{(W - \eta)/h} k(\omega)\,\mathrm{d}\omega$.

More theoretical justifications and properties, and a discussion of the bandwidth parameter and edge corrections, can be found in Diggle (2003).
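A sketch of $\widehat{\rho}_h$ with the Gaussian kernel, the border-correction factor $K_h(\eta)$ being approximated on a grid (function name, window $W = [0,1]^2$ and all numerical choices are ours):

```python
import numpy as np

rng = np.random.default_rng(4)

def rho_kernel(eta, pts, h, grid_n=100):
    """Gaussian-kernel intensity estimate at eta on W = [0,1]^2, with the
    border correction K_h(eta) = int_W h^{-d} k((eta - xi)/h) d(xi)
    approximated on a grid_n x grid_n midpoint grid (d = 2)."""
    d = 2
    def k(u2):                                  # Gaussian kernel (squared norm)
        return (2.0 * np.pi) ** (-d / 2) * np.exp(-u2 / 2.0)
    raw = k(np.sum(((eta - pts) / h) ** 2, axis=1)).sum() / h ** d
    g = (np.arange(grid_n) + 0.5) / grid_n      # midpoints of grid cells
    gx, gy = np.meshgrid(g, g)
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    K = k(np.sum(((eta - grid) / h) ** 2, axis=1)).sum() / h ** d / grid_n ** 2
    return raw / K

# homogeneous Poisson(rho = 200) on [0,1]^2: at the centre K_h is close to 1
# and the estimate should be near 200
pts = rng.uniform(size=(rng.poisson(200.0), 2))
est = rho_kernel(np.array([0.5, 0.5]), pts, h=0.15)
print(est)
```

Near the boundary $K_h(\eta) < 1$, and dividing by it compensates for the kernel mass that falls outside $W$; without this correction the estimator would be biased downwards at the edges.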

46 Illustration on the bei dataset

Using a Gaussian kernel and a bandwidth parameter chosen by likelihood cross-validation. [Figure: estimated intensity surface.]


When is MLE appropriate

When is MLE appropriate When is MLE appropriate As a rule of thumb the following to assumptions need to be fulfilled to make MLE the appropriate method for estimation: The model is adequate. That is, we trust that one of the

More information

On robust and efficient estimation of the center of. Symmetry.

On robust and efficient estimation of the center of. Symmetry. On robust and efficient estimation of the center of symmetry Howard D. Bondell Department of Statistics, North Carolina State University Raleigh, NC 27695-8203, U.S.A (email: bondell@stat.ncsu.edu) Abstract

More information

which element of ξ contains the outcome conveys all the available information.

which element of ξ contains the outcome conveys all the available information. CHAPTER III MEASURE THEORETIC ENTROPY In this chapter we define the measure theoretic entropy or metric entropy of a measure preserving transformation. Equalities are to be understood modulo null sets;

More information

Estimation theory and information geometry based on denoising

Estimation theory and information geometry based on denoising Estimation theory and information geometry based on denoising Aapo Hyvärinen Dept of Computer Science & HIIT Dept of Mathematics and Statistics University of Helsinki Finland 1 Abstract What is the best

More information

Week 1 Quantitative Analysis of Financial Markets Distributions A

Week 1 Quantitative Analysis of Financial Markets Distributions A Week 1 Quantitative Analysis of Financial Markets Distributions A Christopher Ting http://www.mysmu.edu/faculty/christophert/ Christopher Ting : christopherting@smu.edu.sg : 6828 0364 : LKCSB 5036 October

More information

March 10, 2017 THE EXPONENTIAL CLASS OF DISTRIBUTIONS

March 10, 2017 THE EXPONENTIAL CLASS OF DISTRIBUTIONS March 10, 2017 THE EXPONENTIAL CLASS OF DISTRIBUTIONS Abstract. We will introduce a class of distributions that will contain many of the discrete and continuous we are familiar with. This class will help

More information

Generalized linear models

Generalized linear models Generalized linear models Søren Højsgaard Department of Mathematical Sciences Aalborg University, Denmark October 29, 202 Contents Densities for generalized linear models. Mean and variance...............................

More information

GIST 4302/5302: Spatial Analysis and Modeling Point Pattern Analysis

GIST 4302/5302: Spatial Analysis and Modeling Point Pattern Analysis GIST 4302/5302: Spatial Analysis and Modeling Point Pattern Analysis Guofeng Cao www.spatial.ttu.edu Department of Geosciences Texas Tech University guofeng.cao@ttu.edu Fall 2018 Spatial Point Patterns

More information

Ch.10 Autocorrelated Disturbances (June 15, 2016)

Ch.10 Autocorrelated Disturbances (June 15, 2016) Ch10 Autocorrelated Disturbances (June 15, 2016) In a time-series linear regression model setting, Y t = x tβ + u t, t = 1, 2,, T, (10-1) a common problem is autocorrelation, or serial correlation of the

More information

Yaming Yu Department of Statistics, University of California, Irvine Xiao-Li Meng Department of Statistics, Harvard University.

Yaming Yu Department of Statistics, University of California, Irvine Xiao-Li Meng Department of Statistics, Harvard University. Appendices to To Center or Not to Center: That is Not the Question An Ancillarity-Sufficiency Interweaving Strategy (ASIS) for Boosting MCMC Efficiency Yaming Yu Department of Statistics, University of

More information

Density Estimation. Seungjin Choi

Density Estimation. Seungjin Choi Density Estimation Seungjin Choi Department of Computer Science and Engineering Pohang University of Science and Technology 77 Cheongam-ro, Nam-gu, Pohang 37673, Korea seungjin@postech.ac.kr http://mlg.postech.ac.kr/

More information

Gibbs Voronoi tessellations: Modeling, Simulation, Estimation.

Gibbs Voronoi tessellations: Modeling, Simulation, Estimation. Gibbs Voronoi tessellations: Modeling, Simulation, Estimation. Frédéric Lavancier, Laboratoire Jean Leray, Nantes (France) Work with D. Dereudre (LAMAV, Valenciennes, France). SPA, Berlin, July 27-31,

More information

Max stable Processes & Random Fields: Representations, Models, and Prediction

Max stable Processes & Random Fields: Representations, Models, and Prediction Max stable Processes & Random Fields: Representations, Models, and Prediction Stilian Stoev University of Michigan, Ann Arbor March 2, 2011 Based on joint works with Yizao Wang and Murad S. Taqqu. 1 Preliminaries

More information

STAT 135 Lab 3 Asymptotic MLE and the Method of Moments

STAT 135 Lab 3 Asymptotic MLE and the Method of Moments STAT 135 Lab 3 Asymptotic MLE and the Method of Moments Rebecca Barter February 9, 2015 Maximum likelihood estimation (a reminder) Maximum likelihood estimation Suppose that we have a sample, X 1, X 2,...,

More information

Lecture 2: Basic Concepts of Statistical Decision Theory

Lecture 2: Basic Concepts of Statistical Decision Theory EE378A Statistical Signal Processing Lecture 2-03/31/2016 Lecture 2: Basic Concepts of Statistical Decision Theory Lecturer: Jiantao Jiao, Tsachy Weissman Scribe: John Miller and Aran Nayebi In this lecture

More information

Density Estimation (II)

Density Estimation (II) Density Estimation (II) Yesterday Overview & Issues Histogram Kernel estimators Ideogram Today Further development of optimization Estimating variance and bias Adaptive kernels Multivariate kernel estimation

More information

PCMI Introduction to Random Matrix Theory Handout # REVIEW OF PROBABILITY THEORY. Chapter 1 - Events and Their Probabilities

PCMI Introduction to Random Matrix Theory Handout # REVIEW OF PROBABILITY THEORY. Chapter 1 - Events and Their Probabilities PCMI 207 - Introduction to Random Matrix Theory Handout #2 06.27.207 REVIEW OF PROBABILITY THEORY Chapter - Events and Their Probabilities.. Events as Sets Definition (σ-field). A collection F of subsets

More information

Measurement Error and Linear Regression of Astronomical Data. Brandon Kelly Penn State Summer School in Astrostatistics, June 2007

Measurement Error and Linear Regression of Astronomical Data. Brandon Kelly Penn State Summer School in Astrostatistics, June 2007 Measurement Error and Linear Regression of Astronomical Data Brandon Kelly Penn State Summer School in Astrostatistics, June 2007 Classical Regression Model Collect n data points, denote i th pair as (η

More information

A Thinned Block Bootstrap Variance Estimation. Procedure for Inhomogeneous Spatial Point Patterns

A Thinned Block Bootstrap Variance Estimation. Procedure for Inhomogeneous Spatial Point Patterns A Thinned Block Bootstrap Variance Estimation Procedure for Inhomogeneous Spatial Point Patterns May 22, 2007 Abstract When modeling inhomogeneous spatial point patterns, it is of interest to fit a parametric

More information

1. Point Estimators, Review

1. Point Estimators, Review AMS571 Prof. Wei Zhu 1. Point Estimators, Review Example 1. Let be a random sample from. Please find a good point estimator for Solutions. There are the typical estimators for and. Both are unbiased estimators.

More information

Probability Models for Bayesian Recognition

Probability Models for Bayesian Recognition Intelligent Systems: Reasoning and Recognition James L. Crowley ENSIAG / osig Second Semester 06/07 Lesson 9 0 arch 07 Probability odels for Bayesian Recognition Notation... Supervised Learning for Bayesian

More information

The problem is to infer on the underlying probability distribution that gives rise to the data S.

The problem is to infer on the underlying probability distribution that gives rise to the data S. Basic Problem of Statistical Inference Assume that we have a set of observations S = { x 1, x 2,..., x N }, xj R n. The problem is to infer on the underlying probability distribution that gives rise to

More information

Chapters 9. Properties of Point Estimators

Chapters 9. Properties of Point Estimators Chapters 9. Properties of Point Estimators Recap Target parameter, or population parameter θ. Population distribution f(x; θ). { probability function, discrete case f(x; θ) = density, continuous case The

More information

Random sets. Distributions, capacities and their applications. Ilya Molchanov. University of Bern, Switzerland

Random sets. Distributions, capacities and their applications. Ilya Molchanov. University of Bern, Switzerland Random sets Distributions, capacities and their applications Ilya Molchanov University of Bern, Switzerland Molchanov Random sets - Lecture 1. Winter School Sandbjerg, Jan 2007 1 E = R d ) Definitions

More information

REGRESSION WITH SPATIALLY MISALIGNED DATA. Lisa Madsen Oregon State University David Ruppert Cornell University

REGRESSION WITH SPATIALLY MISALIGNED DATA. Lisa Madsen Oregon State University David Ruppert Cornell University REGRESSION ITH SPATIALL MISALIGNED DATA Lisa Madsen Oregon State University David Ruppert Cornell University SPATIALL MISALIGNED DATA 10 X X X X X X X X 5 X X X X X 0 X 0 5 10 OUTLINE 1. Introduction 2.

More information

MAS223 Statistical Inference and Modelling Exercises

MAS223 Statistical Inference and Modelling Exercises MAS223 Statistical Inference and Modelling Exercises The exercises are grouped into sections, corresponding to chapters of the lecture notes Within each section exercises are divided into warm-up questions,

More information

Estimation of Dynamic Regression Models

Estimation of Dynamic Regression Models University of Pavia 2007 Estimation of Dynamic Regression Models Eduardo Rossi University of Pavia Factorization of the density DGP: D t (x t χ t 1, d t ; Ψ) x t represent all the variables in the economy.

More information

STA216: Generalized Linear Models. Lecture 1. Review and Introduction

STA216: Generalized Linear Models. Lecture 1. Review and Introduction STA216: Generalized Linear Models Lecture 1. Review and Introduction Let y 1,..., y n denote n independent observations on a response Treat y i as a realization of a random variable Y i In the general

More information

MA 575 Linear Models: Cedric E. Ginestet, Boston University Midterm Review Week 7

MA 575 Linear Models: Cedric E. Ginestet, Boston University Midterm Review Week 7 MA 575 Linear Models: Cedric E. Ginestet, Boston University Midterm Review Week 7 1 Random Vectors Let a 0 and y be n 1 vectors, and let A be an n n matrix. Here, a 0 and A are non-random, whereas y is

More information

Variance Estimation for Statistics Computed from. Inhomogeneous Spatial Point Processes

Variance Estimation for Statistics Computed from. Inhomogeneous Spatial Point Processes Variance Estimation for Statistics Computed from Inhomogeneous Spatial Point Processes Yongtao Guan April 14, 2007 Abstract This paper introduces a new approach to estimate the variance of statistics that

More information

Estimation of the functional Weibull-tail coefficient

Estimation of the functional Weibull-tail coefficient 1/ 29 Estimation of the functional Weibull-tail coefficient Stéphane Girard Inria Grenoble Rhône-Alpes & LJK, France http://mistis.inrialpes.fr/people/girard/ June 2016 joint work with Laurent Gardes,

More information

Practice Problems Section Problems

Practice Problems Section Problems Practice Problems Section 4-4-3 4-4 4-5 4-6 4-7 4-8 4-10 Supplemental Problems 4-1 to 4-9 4-13, 14, 15, 17, 19, 0 4-3, 34, 36, 38 4-47, 49, 5, 54, 55 4-59, 60, 63 4-66, 68, 69, 70, 74 4-79, 81, 84 4-85,

More information

Lecture 2. (See Exercise 7.22, 7.23, 7.24 in Casella & Berger)

Lecture 2. (See Exercise 7.22, 7.23, 7.24 in Casella & Berger) 8 HENRIK HULT Lecture 2 3. Some common distributions in classical and Bayesian statistics 3.1. Conjugate prior distributions. In the Bayesian setting it is important to compute posterior distributions.

More information

MCMC simulation of Markov point processes

MCMC simulation of Markov point processes MCMC simulation of Markov point processes Jenny Andersson 28 februari 25 1 Introduction Spatial point process are often hard to simulate directly with the exception of the Poisson process. One example

More information

Variations. ECE 6540, Lecture 10 Maximum Likelihood Estimation

Variations. ECE 6540, Lecture 10 Maximum Likelihood Estimation Variations ECE 6540, Lecture 10 Last Time BLUE (Best Linear Unbiased Estimator) Formulation Advantages Disadvantages 2 The BLUE A simplification Assume the estimator is a linear system For a single parameter

More information

Maximum-Likelihood Estimation: Basic Ideas

Maximum-Likelihood Estimation: Basic Ideas Sociology 740 John Fox Lecture Notes Maximum-Likelihood Estimation: Basic Ideas Copyright 2014 by John Fox Maximum-Likelihood Estimation: Basic Ideas 1 I The method of maximum likelihood provides estimators

More information

SUFFICIENT STATISTICS

SUFFICIENT STATISTICS SUFFICIENT STATISTICS. Introduction Let X (X,..., X n ) be a random sample from f θ, where θ Θ is unknown. We are interested using X to estimate θ. In the simple case where X i Bern(p), we found that the

More information

Three hours. To be supplied by the Examinations Office: Mathematical Formula Tables and Statistical Tables THE UNIVERSITY OF MANCHESTER.

Three hours. To be supplied by the Examinations Office: Mathematical Formula Tables and Statistical Tables THE UNIVERSITY OF MANCHESTER. Three hours To be supplied by the Examinations Office: Mathematical Formula Tables and Statistical Tables THE UNIVERSITY OF MANCHESTER EXTREME VALUES AND FINANCIAL RISK Examiner: Answer QUESTION 1, QUESTION

More information

Bayes Decision Theory

Bayes Decision Theory Bayes Decision Theory Minimum-Error-Rate Classification Classifiers, Discriminant Functions and Decision Surfaces The Normal Density 0 Minimum-Error-Rate Classification Actions are decisions on classes

More information

ECE 275A Homework 7 Solutions

ECE 275A Homework 7 Solutions ECE 275A Homework 7 Solutions Solutions 1. For the same specification as in Homework Problem 6.11 we want to determine an estimator for θ using the Method of Moments (MOM). In general, the MOM estimator

More information

Maximum Likelihood Estimation

Maximum Likelihood Estimation Chapter 7 Maximum Likelihood Estimation 7. Consistency If X is a random variable (or vector) with density or mass function f θ (x) that depends on a parameter θ, then the function f θ (X) viewed as a function

More information

Lecture 6 Basic Probability

Lecture 6 Basic Probability Lecture 6: Basic Probability 1 of 17 Course: Theory of Probability I Term: Fall 2013 Instructor: Gordan Zitkovic Lecture 6 Basic Probability Probability spaces A mathematical setup behind a probabilistic

More information

Statistics & Data Sciences: First Year Prelim Exam May 2018

Statistics & Data Sciences: First Year Prelim Exam May 2018 Statistics & Data Sciences: First Year Prelim Exam May 2018 Instructions: 1. Do not turn this page until instructed to do so. 2. Start each new question on a new sheet of paper. 3. This is a closed book

More information