Lecture 2: Poisson point processes: properties and statistical inference
Jean-François Coeurjolly
Outline
- Definition, properties and simulation
- Statistical inference for the homogeneous Poisson point process
- Statistical inference for the inhomogeneous Poisson point process
Poisson point processes

1. Classical definition: $X \sim \mathrm{Poisson}(S, \rho)$, i.e. $X$ is a Poisson point process on $S \subseteq \mathbb{R}^d$ with intensity function $\rho$ (locally integrable) if for any $h: N_{lf} \to \mathbb{R}_+$ and any $B \subseteq S$ such that $\int_B \rho(u)\,du < \infty$,
$$E\,h(X_B) = \sum_{n \ge 0} \frac{e^{-\int_B \rho(u)\,du}}{n!} \int_B \cdots \int_B h(\{x_1, \dots, x_n\}) \prod_{i=1}^n \rho(x_i)\,dx_i.$$

2. Classical way of thinking of a Poisson point process: for $m \ge 1$ and bounded, disjoint $B_1, \dots, B_m \subseteq S$, the random variables $X_{B_1}, \dots, X_{B_m}$ are independent, and $N(B) \sim \mathcal{P}\big(\int_B \rho(u)\,du\big)$ for any bounded $B \subseteq S$.

Theorem: for $S \subseteq \mathbb{R}^d$, $X \sim \mathrm{Poisson}(S, \rho)$ exists and is uniquely determined by its void probabilities $v(B) = \exp\big(-\int_B \rho(u)\,du\big)$ for any bounded $B \subseteq S$.
A few properties of Poisson point processes

Proposition: if $X \sim \mathrm{Poisson}(S, \rho)$,
1. $E\,N(B) = \mathrm{Var}\,N(B) = \int_B \rho(u)\,du$, which equals $\rho|B|$ when $\rho(\cdot) \equiv \rho$ (homogeneous case, i.e. stationary and isotropic case if $S = \mathbb{R}^d$).
2. For any $u, v \in S$, $\rho^{(2)}(u, v) = \rho(u)\rho(v)$ (also valid for $\rho^{(k)}$, $k \ge 1$).
3. If $|S| < \infty$, $X$ admits a density w.r.t. $\mathrm{Poisson}(S, 1)$ given by $f(x) = e^{|S| - \int_S \rho(u)\,du} \prod_{u \in x} \rho(u)$.
4. Slivnyak-Mecke theorem: for any non-negative function $h: S \times N_{lf} \to \mathbb{R}_+$,
$$E \sum_{u \in X} h(u, X \setminus u) = \int_S E\big(h(u, X)\big)\,\rho(u)\,du.$$
Exercises

Exercise 1: proof of properties 2-4.
- For property 2: let $A, B \subseteq S$ and prove that $\alpha^{(2)}(A \times B) = \mu(A)\mu(B)$, where $\mu(B) = \int_B \rho(u)\,du$.
- For property 3: let $Y \sim \mathrm{Poisson}(S, 1)$ and prove that $E\,h(X) = E\big(f(Y)h(Y)\big)$ for any $h: N_{lf} \to \mathbb{R}_+$.
- For property 4: assume $|S| < \infty$ and then use the integral definition.

Exercise 2: let $X$ be a homogeneous Poisson point process on $S = [0, 1]^2$.
- Compute the average number of $R$-close neighbours of $X$ in $S$, i.e. the average number of points which have at least one distinct neighbour in $X$ within distance $R$.
- Extend the Slivnyak-Mecke formula to compute expectations of the form $E \sum_{u, v \in X} h(u, v, X \setminus \{u, v\})$.
- What does the expectation of $\sum_{u, v} 1\big(d(u, X \setminus \{u, v\}) \le R,\ d(v, X \setminus \{u, v\}) \le R\big)$ correspond to? Compute this expectation using the extended Slivnyak-Mecke formula.
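Before attacking Exercise 2, the Slivnyak-Mecke formula itself can be checked by Monte Carlo. The sketch below is illustrative Python (the course's applications use R), with $h(u, X \setminus u)$ taken as the number of points of $X \setminus u$ within distance $R$ of $u$; the formula then reduces both sides to $\rho^2 \int_S \int_S 1(\|u - v\| \le R)\,du\,dv$. The numeric choices ($\rho = 100$, $R = 0.05$, replication counts) are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
rho, R, reps = 100.0, 0.05, 2000

# LHS: E sum_{u in X} h(u, X\u), i.e. the expected number of ordered
# pairs of distinct points of X within distance R of each other.
lhs = 0.0
for _ in range(reps):
    n = rng.poisson(rho)            # N(S) ~ Poisson(rho |S|), |S| = 1
    x = rng.random((n, 2))          # given N(S) = n, points are i.i.d. uniform
    if n >= 2:
        d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
        lhs += (d <= R).sum() - n   # remove the n zero diagonal entries
lhs /= reps

# RHS: rho^2 * int_S int_S 1(|u - v| <= R) du dv, estimated by Monte Carlo
u = rng.random((10**6, 2))
v = rng.random((10**6, 2))
rhs = rho**2 * np.mean(np.linalg.norm(u - v, axis=1) <= R)

print(lhs, rhs)  # the two estimates agree up to Monte Carlo error
```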
Simulation of a Poisson point process on a bounded domain

Let $W$ be the simulation window.
- Homogeneous case (straightforward): draw $N(W) \sim \mathcal{P}(\rho|W|)$; let $n$ be the realization; generate $x_1, \dots, x_n$ independent uniform points in $W$; return $x = \{x_1, \dots, x_n\}$.
- Inhomogeneous case: a thinning procedure can be done efficiently if $\rho(u) \le c$: let $y$ be a realization of a homogeneous Poisson process with intensity $c$ on $W$; delete each point $u \in y$ independently with probability $1 - \rho(u)/c$.

Exercise: find the (most probable) related realization in $W = [-1, 1]^2$ for the intensities $\rho(u) = \beta e^{-u_1 - u_2 - 1.5 u_1^3}$; $\rho = 200$; $\rho(u) = \beta e^{2 \sin(4\pi u_1 u_2)}$ ($\beta$ is adjusted such that the mean number of points in $W$, $\int_W \rho(u)\,du$, equals 200).
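The two recipes above translate directly into code. This is an illustrative Python sketch (the course itself uses R/spatstat); the intensity function and the bound c = 50 in the example are hypothetical choices, not the exercise's intensities.

```python
import numpy as np

rng = np.random.default_rng(1)

def rpoispp_hom(rho, a, b):
    """Homogeneous Poisson process on the rectangle [a1,b1] x [a2,b2]."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    area = np.prod(b - a)
    n = rng.poisson(rho * area)              # N(W) ~ Poisson(rho |W|)
    return a + (b - a) * rng.random((n, 2))  # n i.i.d. uniform points in W

def rpoispp_thin(rho_fun, c, a, b):
    """Inhomogeneous Poisson process by thinning, assuming rho(u) <= c on W."""
    y = rpoispp_hom(c, a, b)                     # dominating homogeneous process
    keep = rng.random(len(y)) < rho_fun(y) / c   # keep u with prob. rho(u)/c
    return y[keep]

# Example on W = [-1,1]^2 with a hypothetical intensity bounded by c = 50
rho_fun = lambda u: 50 * np.exp(-u[:, 0]**2 - u[:, 1]**2)
x = rpoispp_thin(rho_fun, 50.0, [-1, -1], [1, 1])
```

Deleting each point independently with probability $1 - \rho(u)/c$ is exactly the thinning step of the slide; the retained pattern is Poisson with intensity $\rho$.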
Statistical inference for a Poisson point process

Inference consists in estimating:
- $\rho$: homogeneous case;
- $\rho(\cdot\,; \theta)$: parametric inhomogeneous case;
- $\rho(u)$: nonparametric estimation.
Note: all these estimates can be used even if the spatial point process is not Poisson (wait for a few slides...). Asymptotic properties are very simple to derive under the Poisson assumption.
Goodness-of-fit tests: tests based on quadrat counting, based on the void probability, ...
Homogeneous case

We consider here the problem of estimating the parameter $\rho$ of a homogeneous Poisson point process defined on $S$ and observed in a window $W \subseteq S$. Since $N(W) \sim \mathcal{P}(\rho|W|)$, the natural estimator of $\rho$ is $\hat\rho = N(W)/|W|$.

Properties
(i) $\hat\rho$ corresponds to the maximum likelihood estimate.
(ii) $\hat\rho$ is unbiased.
(iii) $\mathrm{Var}\,\hat\rho = \rho/|W|$.
Proof: (i) follows from the definition of the density; (ii)-(iii) can be checked using the Campbell formulae.
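Properties (ii)-(iii) are easy to check by simulation: $\hat\rho$ depends on the pattern only through $N(W) \sim \mathcal{P}(\rho|W|)$, so it suffices to simulate the counts. A minimal Python sketch (the choices $W = [0,2]^2$, $\rho = 100$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
rho, area, reps = 100.0, 4.0, 5000   # W = [0,2]^2, so |W| = 4

# rho_hat = N(W)/|W| with N(W) ~ Poisson(rho |W|)
rho_hat = rng.poisson(rho * area, size=reps) / area

print(rho_hat.mean())  # close to rho = 100   (unbiasedness)
print(rho_hat.var())   # close to rho/|W| = 25
```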
Homogeneous case (2)

Asymptotic results: for large $N(W)$, $\hat\rho|W| \approx N(\rho|W|, \rho|W|)$, and so $|W|^{1/2}(\hat\rho - \rho) \approx N(0, \rho)$ (the approximation is actually a convergence as $W \to \mathbb{R}^d$).
Variance stabilizing transform: $2|W|^{1/2}(\sqrt{\hat\rho} - \sqrt{\rho}) \approx N(0, 1)$.
We deduce a $1 - \alpha$ ($\alpha \in (0, 1)$) confidence interval for $\rho$:
$$IC_{1-\alpha}(\rho) = \Big(\sqrt{\hat\rho} \pm \frac{z_{\alpha/2}}{2|W|^{1/2}}\Big)^2.$$
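The coverage of this interval is easy to check by Monte Carlo (illustrative Python sketch; $\rho = 100$ and $W = [0,1]^2$ are arbitrary choices mirroring the simulation example that follows):

```python
import numpy as np

rng = np.random.default_rng(3)
rho, area, reps, z = 100.0, 1.0, 10000, 1.96   # z_{alpha/2} for alpha = 5%

rho_hat = rng.poisson(rho * area, size=reps) / area

# IC_{1-alpha}(rho) = ( sqrt(rho_hat) +/- z/(2 |W|^{1/2}) )^2
half = z / (2 * np.sqrt(area))
lo = (np.sqrt(rho_hat) - half)**2
hi = (np.sqrt(rho_hat) + half)**2
coverage = np.mean((lo <= rho) & (rho <= hi))
print(coverage)  # close to the nominal 0.95
```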
A simulation example

We generated $m$ replications of homogeneous Poisson point processes with intensity $\rho = 100$ on $[0, 1]^2$ (black plots) and on $[0, 2]^2$ (red plots).
(Figure: histograms of $\hat\rho$ compared with the $N(100, 100)$ and $N(100, 100/4)$ densities, and histograms of $2|W|^{1/2}(\sqrt{\hat\rho} - \sqrt{\rho})$ compared with the $N(0, 1)$ density.)
(Table: empirical mean and variance of $\hat\rho$ for $W = [0, 1]^2$ and $W = [0, 2]^2$; empirical coverage rates of the 95% confidence intervals: 95.31% and 94.78% respectively.)
Quadrat counting based test

Performed as follows:
1. Divide $W$ into quadrats $B_1, \dots, B_m$.
2. Count $n_j = n(X \cap B_j)$ for $j = 1, \dots, m$. Under the null hypothesis, the $n_j$ are realizations of $\mathcal{P}(\rho|B_j|)$.
3. Construct the $\chi^2$ statistic
$$T = \sum_j \frac{(n_j - e_j)^2}{e_j} = \sum_j \frac{(n_j - \hat\rho|B_j|)^2}{\hat\rho|B_j|}, \quad \text{where } \hat\rho = n/|W|.$$
4. Under the null hypothesis, $T \approx \chi^2(m - 1)$.
(Example for the swedishpines dataset.)
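The four steps can be sketched as follows. This is an illustrative Python version (in R this is essentially spatstat's quadrat.test); the 4x4 grid and the point counts are arbitrary choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def quadrat_test(x, m=4):
    """Chi-square quadrat test for homogeneity on the unit square, m x m quadrats."""
    counts, _, _ = np.histogram2d(x[:, 0], x[:, 1], bins=m, range=[[0, 1], [0, 1]])
    n = counts.sum()
    e = n / m**2                      # e_j = rho_hat |B_j| with rho_hat = n/|W|
    T = ((counts - e)**2 / e).sum()
    p = stats.chi2.sf(T, m**2 - 1)    # T ~ chi2(m^2 - 1) under the null
    return T, p

# A homogeneous realization: the test should typically not reject
x = rng.random((rng.poisson(200), 2))
T, p = quadrat_test(x)

# A strongly inhomogeneous pattern (all points in the left half): clear rejection
x2 = rng.random((200, 2)) * np.array([0.5, 1.0])
T2, p2 = quadrat_test(x2)
print(p, p2)
```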
Applications in R

Pines datasets. Consider the three unmarked datasets japanesepines, swedishpines, finpines.
- Plot the data and estimate the intensity parameter.
- Construct a confidence interval for each of them. Which one is significantly the most abundant?
- Judge the assumption of the Poisson model using a GoF test based on quadrats.

bei dataset. Show that the bei (tropical forest), qk (earthquakes) and lightning strikes (for each summer) datasets cannot be modelled by a homogeneous Poisson point process.
Inhomogeneous case: parametric estimation

Assume that $\rho$ is parametrized by a vector $\theta \in \mathbb{R}^p$ ($p \ge 1$). The most well-known model is the log-linear one:
$$\rho(u) = \rho(u; \theta) = \exp(\theta^\top z(u))$$
where $z(u) = (z_1(u), z_2(u), \dots, z_p(u))^\top$ corresponds to known spatial functions or spatial covariates. $\theta$ can be estimated by maximizing the log-likelihood on $W$:
$$\sum_{u \in X \cap W} \log \rho(u; \theta) + \int_W \big(1 - \rho(u; \theta)\big)\,du = |W| + \underbrace{\sum_{u \in X \cap W} \theta^\top z(u) - \int_W \exp(\theta^\top z(u))\,du}_{:= \ell_W(X; \theta)}.$$
In other words, $\hat\theta = \mathrm{Argmax}_\theta\, \ell_W(X; \theta)$, i.e. $\hat\theta$ is the solution of $\frac{d}{d\theta} \ell_W(X; \theta) = 0$.
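The maximization can be done numerically by replacing the integral in $\ell_W$ with a Riemann sum over a fine grid (in R, spatstat's ppm fits this model via a related quadrature device). Below, an illustrative Python sketch with a hypothetical one-covariate model $z(u) = (1, u_1)^\top$ and a simulated realization; neither the model nor the parameter values come from the slides.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# Hypothetical log-linear model on W = [0,1]^2: rho(u; theta) = exp(theta1 + theta2 u1)
z = lambda u: np.column_stack([np.ones(len(u)), u[:, 0]])
theta0 = np.array([4.0, 1.5])

# Simulate one realization by thinning a dominating homogeneous process
c = np.exp(theta0[0] + theta0[1])            # upper bound of rho on W
y = rng.random((rng.poisson(c), 2))
x = y[rng.random(len(y)) < np.exp(z(y) @ theta0) / c]

# Quadrature grid for the integral term of l_W
g = (np.stack(np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200)), -1)
       .reshape(-1, 2))
w = 1.0 / len(g)                             # weight of each grid cell

def neg_loglik(theta):
    # l_W(X; theta) = sum_{u in X} theta' z(u) - int_W exp(theta' z(u)) du
    return -(np.sum(z(x) @ theta) - np.sum(np.exp(z(g) @ theta)) * w)

theta_hat = minimize(neg_loglik, np.zeros(2), method="Nelder-Mead").x
print(theta_hat)  # should be near theta0 = (4.0, 1.5)
```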
A very short background on estimating equations

Let $\hat\theta$ be defined as the solution of $U_n(\theta) = 0$, and let $\theta_0$ denote the true parameter. $U_n(\theta)$ is said to define an unbiased estimating equation if $E\,U_n(\theta_0) = 0$. We usually define
(a) $\Sigma = \mathrm{Var}(U_n(\theta_0))$, the variance-covariance matrix of $U_n(\theta_0)$;
(b) $S = S(\theta_0)$ where $S(\theta) = -E\big(\frac{d}{d\theta} U_n(\theta)\big)$, the sensitivity matrix.
If $U_n(\theta_0) \approx N(0, \Sigma)$, then under some technical and regularity assumptions
$$\hat\theta - \theta_0 \approx N(0, S^{-1} \Sigma S^{-1}).$$
Sketch of the proof: by a Taylor expansion there exists $\tilde\theta$ between $\theta_0$ and $\hat\theta$ such that
$$U_n(\theta_0) - U_n(\hat\theta) = U_n(\theta_0) = (\hat\theta - \theta_0) S(\tilde\theta) \approx (\hat\theta - \theta_0) S,$$
so $\hat\theta - \theta_0 \approx S^{-1} U_n(\theta_0)$.
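A toy example (not from the slides) where every quantity is explicit: for i.i.d. $X_i \sim \mathcal{P}(\theta_0)$ observations and $U_n(\theta) = \sum_i (X_i - \theta)$, one gets $S = n$, $\Sigma = n\theta_0$, so the sandwich variance is $S^{-1}\Sigma S^{-1} = \theta_0/n$. A quick numeric check in illustrative Python:

```python
import numpy as np

rng = np.random.default_rng(6)

# U_n(theta) = sum_i (X_i - theta), X_i i.i.d. Poisson(theta0).
# Then S = n, Sigma = n * theta0, sandwich variance = theta0 / n.
theta0, n, reps = 5.0, 200, 4000
x = rng.poisson(theta0, size=(reps, n))
theta_hat = x.mean(axis=1)        # solves U_n(theta) = 0 for each replication

print(theta_hat.var())            # close to theta0 / n = 0.025
```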
Inhomogeneous case: parametric estimation (2)

The score function $s_W(X, \theta) = \frac{d}{d\theta} \ell_W(X, \theta)$ is an unbiased estimating equation! Indeed,
$$s_W(X, \theta) = \sum_{u \in X \cap W} z(u) - \int_W z(u) \underbrace{\exp(\theta^\top z(u))}_{:= \rho(u)}\,du,$$
and from the Campbell theorem,
$$E\,s_W(X, \theta) = \int_W z(u)\big(\exp(\theta_0^\top z(u)) - \exp(\theta^\top z(u))\big)\,du = 0 \quad \text{when } \theta = \theta_0.$$
Exercise: verify that $\Sigma = S = \int_W z(u) z(u)^\top \exp(\theta_0^\top z(u))\,du$. Hence $\hat\theta - \theta_0 \approx N(0, S^{-1})$, i.e. $S(\hat\theta)^{1/2}(\hat\theta - \theta_0) \approx N(0, I_p)$.
Asymptotic results formalized by Rathbun and Cressie (1994).
Data example: dataset bei

A point pattern giving the locations of 3605 trees in a tropical rain forest, accompanied by covariate data giving the elevation (altitude) ($z_1$) and the slope of elevation ($z_2$) in the study region.
(Figure: maps of the elevation $z_1$ and of the elevation gradient $z_2$.)
Assume an inhomogeneous Poisson point process (which is not true, see the next chapter) with intensity
$$\log \rho(u) = \beta + \theta_1 z_1(u) + \theta_2 z_2(u).$$
Question: how can we prove that each covariate has a significant and positive influence?
Nonparametric estimation (see e.g. Diggle 2003)

The idea is to mimic kernel density estimation to define a nonparametric estimator of the spatial function $\rho$. Let $k: \mathbb{R}^d \to \mathbb{R}_+$ be a symmetric kernel with integral one.
Examples of kernels:
- Gaussian kernel: $(2\pi)^{-d/2} \exp(-\|y\|^2/2)$;
- cylindric kernel: $\pi^{-1}\, 1(\|y\| \le 1)$ (for $d = 2$);
- Epanechnikov kernel: $\frac{3}{4}\, 1(|y| < 1)(1 - y^2)$ (for $d = 1$).
Let $h$ be a positive real number (which will play the role of a bandwidth), then the nonparametric estimate (with border correction) at the location $v$ is defined as
$$\hat\rho_h(v) = K_h(v)^{-1} \frac{1}{h^d} \sum_{u \in X \cap W} k\Big(\frac{v - u}{h}\Big), \quad K_h(v) = \frac{1}{h^d} \int_W k\Big(\frac{v - u}{h}\Big)\,du.$$
Analogy with heatmaps!
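The estimator fits in a few lines. Below, an illustrative Python sketch with the Gaussian kernel on $W = [0, 1]^2$, where the border-correction factor $K_h(v)$ is computed by a Riemann sum over a grid (the bandwidth $h = 0.15$ and the test pattern are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(7)

def rho_hat(v, x, h, grid=200):
    """Kernel intensity estimate at v, with border correction, on W = [0,1]^2."""
    d = 2
    k = lambda y: (2 * np.pi)**(-d / 2) * np.exp(-0.5 * np.sum(y**2, axis=-1))
    # edge correction K_h(v) = h^{-d} int_W k((v - u)/h) du, by a Riemann sum
    g = (np.stack(np.meshgrid(np.linspace(0, 1, grid), np.linspace(0, 1, grid)), -1)
           .reshape(-1, 2))
    K = np.sum(k((v - g) / h)) / (h**d * grid**2)
    # rho_hat_h(v) = K_h(v)^{-1} h^{-d} sum_{u in X} k((v - u)/h)
    return np.sum(k((v - x) / h)) / (h**d * K)

x = rng.random((rng.poisson(100), 2))      # homogeneous Poisson pattern, rho = 100
est = rho_hat(np.array([0.5, 0.5]), x, h=0.15)
print(est)  # fluctuates around the true intensity 100
```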
Intuitively, this works... Indeed, using the Campbell formula and a change of variables we can obtain
$$E\,\hat\rho_h(v) = K_h(v)^{-1} \frac{1}{h^d}\, E \sum_{u \in X \cap W} k\Big(\frac{v - u}{h}\Big) = K_h(v)^{-1} \frac{1}{h^d} \int_W k\Big(\frac{v - u}{h}\Big) \rho(u)\,du = K_h(v)^{-1} \int_{\frac{W - v}{h}} k(\omega)\, \rho(\omega h + v)\,d\omega \underset{h \text{ small}}{\approx} K_h(v)^{-1} \int_{\frac{W - v}{h}} k(\omega)\, \rho(v)\,d\omega = \rho(v).$$
More theoretical justifications and properties, and a discussion of the bandwidth parameter and edge corrections, can be found in Diggle (2003).
Application to the qk dataset (quakes)
(Figure: kernel intensity estimates for increasing bandwidths, sigma = .5, sigma = 1, ...)
More informationStatistics. Lecture 2 August 7, 2000 Frank Porter Caltech. The Fundamentals; Point Estimation. Maximum Likelihood, Least Squares and All That
Statistics Lecture 2 August 7, 2000 Frank Porter Caltech The plan for these lectures: The Fundamentals; Point Estimation Maximum Likelihood, Least Squares and All That What is a Confidence Interval? Interval
More informationIntroduction to Estimation Methods for Time Series models Lecture 2
Introduction to Estimation Methods for Time Series models Lecture 2 Fulvio Corsi SNS Pisa Fulvio Corsi Introduction to Estimation () Methods for Time Series models Lecture 2 SNS Pisa 1 / 21 Estimators:
More informationThe Delta Method and Applications
Chapter 5 The Delta Method and Applications 5.1 Local linear approximations Suppose that a particular random sequence converges in distribution to a particular constant. The idea of using a first-order
More informationMathematical statistics
October 1 st, 2018 Lecture 11: Sufficient statistic Where are we? Week 1 Week 2 Week 4 Week 7 Week 10 Week 14 Probability reviews Chapter 6: Statistics and Sampling Distributions Chapter 7: Point Estimation
More informationHypothesis Testing with the Bootstrap. Noa Haas Statistics M.Sc. Seminar, Spring 2017 Bootstrap and Resampling Methods
Hypothesis Testing with the Bootstrap Noa Haas Statistics M.Sc. Seminar, Spring 2017 Bootstrap and Resampling Methods Bootstrap Hypothesis Testing A bootstrap hypothesis test starts with a test statistic
More informationProbability and Distributions
Probability and Distributions What is a statistical model? A statistical model is a set of assumptions by which the hypothetical population distribution of data is inferred. It is typically postulated
More informationDensity Estimation. Seungjin Choi
Density Estimation Seungjin Choi Department of Computer Science and Engineering Pohang University of Science and Technology 77 Cheongam-ro, Nam-gu, Pohang 37673, Korea seungjin@postech.ac.kr http://mlg.postech.ac.kr/
More informationStatistics - Lecture One. Outline. Charlotte Wickham 1. Basic ideas about estimation
Statistics - Lecture One Charlotte Wickham wickham@stat.berkeley.edu http://www.stat.berkeley.edu/~wickham/ Outline 1. Basic ideas about estimation 2. Method of Moments 3. Maximum Likelihood 4. Confidence
More informationLecture 30. DATA 8 Summer Regression Inference
DATA 8 Summer 2018 Lecture 30 Regression Inference Slides created by John DeNero (denero@berkeley.edu) and Ani Adhikari (adhikari@berkeley.edu) Contributions by Fahad Kamran (fhdkmrn@berkeley.edu) and
More informationBickel Rosenblatt test
University of Latvia 28.05.2011. A classical Let X 1,..., X n be i.i.d. random variables with a continuous probability density function f. Consider a simple hypothesis H 0 : f = f 0 with a significance
More informationStat 535 C - Statistical Computing & Monte Carlo Methods. Arnaud Doucet.
Stat 535 C - Statistical Computing & Monte Carlo Methods Arnaud Doucet Email: arnaud@cs.ubc.ca 1 Suggested Projects: www.cs.ubc.ca/~arnaud/projects.html First assignement on the web: capture/recapture.
More informationLecture 3. Inference about multivariate normal distribution
Lecture 3. Inference about multivariate normal distribution 3.1 Point and Interval Estimation Let X 1,..., X n be i.i.d. N p (µ, Σ). We are interested in evaluation of the maximum likelihood estimates
More informationSome New Aspects of Dose-Response Models with Applications to Multistage Models Having Parameters on the Boundary
Some New Aspects of Dose-Response Models with Applications to Multistage Models Having Parameters on the Boundary Bimal Sinha Department of Mathematics & Statistics University of Maryland, Baltimore County,
More informationChapter 7. Confidence Sets Lecture 30: Pivotal quantities and confidence sets
Chapter 7. Confidence Sets Lecture 30: Pivotal quantities and confidence sets Confidence sets X: a sample from a population P P. θ = θ(p): a functional from P to Θ R k for a fixed integer k. C(X): a confidence
More informationLet us first identify some classes of hypotheses. simple versus simple. H 0 : θ = θ 0 versus H 1 : θ = θ 1. (1) one-sided
Let us first identify some classes of hypotheses. simple versus simple H 0 : θ = θ 0 versus H 1 : θ = θ 1. (1) one-sided H 0 : θ θ 0 versus H 1 : θ > θ 0. (2) two-sided; null on extremes H 0 : θ θ 1 or
More informationStatistics and econometrics
1 / 36 Slides for the course Statistics and econometrics Part 10: Asymptotic hypothesis testing European University Institute Andrea Ichino September 8, 2014 2 / 36 Outline Why do we need large sample
More informationA tour of kernel smoothing
Tarn Duong Institut Pasteur October 2007 The journey up till now 1995 1998 Bachelor, Univ. of Western Australia, Perth 1999 2000 Researcher, Australian Bureau of Statistics, Canberra and Sydney 2001 2004
More informationAsymptotics for posterior hazards
Asymptotics for posterior hazards Pierpaolo De Blasi University of Turin 10th August 2007, BNR Workshop, Isaac Newton Intitute, Cambridge, UK Joint work with Giovanni Peccati (Université Paris VI) and
More information