Some asymptotic expansions and distribution approximations outside a CLT context


6th St. Petersburg Workshop on Simulation (2009)

Manuel L. Esquível, João Tiago Mexia, João Lita da Silva, Luís P. C. Ramos
Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa (FCT/UNL), Portugal
mle@fct.unl.pt, jtm@fct.unl.pt, jfls@fct.unl.pt, lpcr@fct.unl.pt
This work was partially supported by Financiamento Base 2008 ISFL from FCT/MCTES/PT.

Abstract. Some asymptotic expansions not necessarily related to the central limit theorem are discussed. After observing that the smoothing inequality of Esseen implies the proximity, in the Kolmogorov distance sense, of the distributions of the random variables of two random sequences satisfying a sort of general asymptotic relation, two instances of this observation are presented. A first example, partially motivated by the statistical theory of high precision measurements, is given by a uniform asymptotic approximation to (g(X + µ_n))_{n∈IN}, where g is some smooth function, X is a random variable having a moment and a bounded density, and (µ_n)_{n∈IN} is a sequence going to infinity; the multivariate case, as well as the proofs and a complete set of references, will be published elsewhere. We next present a second class of examples, given by a randomization of the interesting parameter in some classical asymptotic formulas, namely a generic Laplace type integral, by the sequence (µ_n X)_{n∈IN}, X being a Gamma distributed random variable. Finally, a simulation study of this last example is presented in order to stress the quality of the asymptotic approximations proposed.

1. Asymptotics for random variables

The set-up for the generic question studied in this paper is given by [A], the following set of conditions.

A1 There is a real parameter sequence (µ_n)_{n∈IN} such that µ_+ := lim_{n→+∞} µ_n = +∞.

A2 We are given three sequences of random variables depending on the parameter sequence, (X_n)_{n∈IN}, (Y_n)_{n∈IN} and (Z_n)_{n∈IN}, with Y_n ≠ 0, verifying

\[
X_n = Y_n + Z_n \quad\text{or, equivalently,}\quad X_n = Y_n\!\left(1 + \frac{Z_n}{Y_n}\right),
\qquad\text{and}\qquad
\lim_{n\to+\infty}\frac{Z_n}{Y_n} = 0,
\tag{1}
\]

where the limit is taken in some prescribed sense: almost surely, in probability, in the mean, etc.

A3 Possibly, the values X := X(µ_+), Y := Y(µ_+), Z := Z(µ_+) are not defined.

The generic question we want to consider is the following. Given (1), can something be said about the asymptotic distribution of X_n knowing the distribution of Y_n? It is to be expected that, under a set of hypotheses including [A], we may approximate, for large values of the parameter µ_n, the distribution of X_n, which may be hard to compute, by the distribution of Y_n, that is, X_n ≈_d Y_n as µ_n → +∞, in a sense to be determined, preferably in the Kolmogorov distance sense.

Remark 1. The first equality in (1) is a sort of asymptotic equality between the random variables X_n and Y_n. To the best of our knowledge, a comprehensive theory of asymptotic relations for random variables is not yet available, although some authors have tackled particular points of this topic (see [1, p. 443]).

Remark 2. The answer to the question above is not, in general, a central limit theorem issue, due in particular to the fact that, possibly, the limiting values X, Y and Z are not defined.

Our approach is based on the following asymptotic results.

Lemma 1 (Esseen type estimate). Let (X_n)_{n∈IN}, (Y_n)_{n∈IN} and (Z_n)_{n∈IN} be sequences of random variables satisfying the hypotheses [A] above and formula (1). Suppose, furthermore, that Y_n admits a density F'_{Y_n} and that for some δ ∈ ]0, 1] we have IE[|Z_n|^δ] < +∞. Then, for C_δ = (1/π) 2^{(2−δ)/(1+δ)} 24^{δ/(1+δ)} (1 + 1/δ),

\[
\sup_x \left| F_{X_n}(x) - F_{Y_n}(x) \right|
\;\le\;
C_\delta \left[ \left( \operatorname{IE}\!\left[ |Z_n|^{\delta} \right] \right)^{1/\delta} \sup_x F'_{Y_n}(x) \right]^{\frac{\delta}{1+\delta}} .
\tag{2}
\]

Theorem 1. Let (X_n)_{n∈IN}, (Y_n)_{n∈IN} and (Z_n)_{n∈IN} be sequences of random variables satisfying the conditions [A] above and formula (1). Suppose, furthermore, that for each n ≥ 1 the random variable Y_n admits a density F'_{Y_n} and that for some δ ∈ ]0, 1] we have IE[|Z_n|^δ] < +∞. Then, if

\[
\lim_{n\to+\infty} \left( \operatorname{IE}\!\left[ |Z_n|^{\delta} \right] \right)^{1/\delta} \sup_x F'_{Y_n}(x) = 0,
\]

we have

\[
\lim_{n\to+\infty} \sup_x \left| F_{X_n}(x) - F_{Y_n}(x) \right| = 0,
\tag{3}
\]

that is, we have the uniform approximation, for large values of the parameter µ_n, of the distribution function of X_n by the distribution function of Y_n.
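The proofs are deferred to a later publication. As a rough sketch of where an estimate such as (2) comes from (our reading, not necessarily the authors' argument), Esseen's smoothing inequality gives, for every T > 0,

\[
\sup_x \big| F_{X_n}(x) - F_{Y_n}(x) \big|
\;\le\;
\frac{1}{\pi} \int_{-T}^{T} \frac{\big| \varphi_{X_n}(t) - \varphi_{Y_n}(t) \big|}{|t|}\, dt
\;+\; \frac{24}{\pi T}\, \sup_x F'_{Y_n}(x),
\]

where φ denotes the characteristic function; writing X_n = Y_n + Z_n and using |e^{iu} − 1| ≤ min(2, |u|) ≤ 2^{1−δ} |u|^δ for δ ∈ ]0, 1],

\[
\big| \varphi_{X_n}(t) - \varphi_{Y_n}(t) \big|
= \big| \operatorname{IE}\!\big[ e^{itY_n} \big( e^{itZ_n} - 1 \big) \big] \big|
\;\le\; 2^{1-\delta}\, |t|^{\delta}\, \operatorname{IE}\!\big[ |Z_n|^{\delta} \big] .
\]

Integrating in t over [−T, T] and optimizing the resulting bound over T > 0 produces a right-hand side of the form (2), and (3) then follows from (2) by continuity of s ↦ s^{δ/(1+δ)} at the origin.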

2. A linear transform approximation result

We now present some approximation results, consequences of Theorem 1, in which the first term of the asymptotic expansion is an affine transform of the initial random variable. The driving tool, both in the unidimensional and in the multidimensional case, is to consider asymptotic Taylor type expansions. A first idea to deal with the problem studied here would be to apply some form of the delta method (see [2]). As pointed out in Remark 2, however, the delta method relies on the central limit theorem, which, in general, is not applicable to the situation under scrutiny. The proof of the next result is a consequence of Taylor's theorem (writing X_n = g(µ_n) + g'(µ_n)X + Z_n, with Z_n a Lagrange type remainder) and of Theorem 1.

Theorem 2. Let X be a non-negative absolutely continuous r.v. such that IE[X^p] < +∞ for some p > 0, with d.f. F_X verifying sup_{x∈R} F'_X(x) ≤ C. Consider also a sequence (X_n)_{n∈N} of r.v.'s defined by X_n := g(X + µ_n), where (µ_n)_{n∈N} is a non-random real sequence verifying lim_{n→+∞} µ_n = +∞ and g : R → R is a C²(R) map such that:

(a) IE[|g(X)|] < +∞;
(b) g is convex on R;
(c) g''(t)/g'(t) is decreasing on ]t₀, +∞[, for some t₀, and tends to zero as t → +∞.

Then, with Y_n := g(µ_n) + g'(µ_n) X, the law of X_n is uniformly approximated by the law of Y_n for large values of n, that is,

\[
\lim_{n\to+\infty} \sup_{x\in\mathbb{R}} \left| F_{X_n}(x) - F_{Y_n}(x) \right| = 0 .
\]

Remark 3. This theorem can be extended to the cases where the support of X is ]a, +∞[ for some a ∈ R.

3. Non linear approximations

In this instance of an application of Theorem 1 we study an integral transform of a rescaled random variable. We will write X ∼ G(u, δ, λ) if the density of X is given by

\[
f_{(u,\delta,\lambda)}(x) = \frac{1}{\delta^{u}\,\Gamma(u)}\,(x-\lambda)^{u-1}\,e^{-\frac{x-\lambda}{\delta}}\;\mathbb{1}_{[\lambda,+\infty[}(x),
\qquad x \in \mathbb{R},
\]

where \mathbb{1}_{[\lambda,+\infty[} is the indicator function of the interval [λ, +∞[. Recall that for such a random variable we have IE[X] = uδ + λ, the variance V[X] = uδ², and

\[
\sup_{x\in\mathbb{R}} f_{(u,\delta,\lambda)}(x) = \sup_{x\in\mathbb{R}} f_{(u,\delta,0)}(x)
= \frac{1}{\delta\,\Gamma(u)}\,(u-1)^{u-1}\,e^{-(u-1)} .
\]

Proposition 1. Let X ∼ G(u, δ, λ) and let (µ_n)_{n≥1} be a non-random sequence satisfying lim_{n→+∞} µ_n = +∞. Consider also

\[
X_n := a \int_0^{+\infty} \frac{e^{-\mu_n X t}}{1+t^{2}}\, dt \qquad (a > 0).
\]

Then we have lim_{n→+∞} sup_{x∈R} |F_{X_n}(x) − F_{a/(µ_n X)}(x)| = 0, that is, the law of X_n is approximated by the law of a/(µ_n X) for n sufficiently large.
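The asymptotics behind Proposition 1 can be read pathwise. As a sketch (ours, under the assumption λ ≥ 0, so that µ_n X > 0 almost surely), expanding 1/(1 + t²) = 1 − t² + … and applying Watson's lemma to the Laplace integral gives

\[
a \int_0^{+\infty} \frac{e^{-\mu_n X t}}{1+t^{2}}\, dt
= \frac{a}{\mu_n X} - \frac{2a}{(\mu_n X)^{3}} + O\!\left((\mu_n X)^{-5}\right),
\qquad \mu_n \to +\infty,
\]

so that X_n = Y_n + Z_n with Y_n = a/(µ_n X) and Z_n/Y_n → 0 almost surely, which is exactly relation (1); Theorem 1 then upgrades this pathwise statement to the uniform, Kolmogorov distance, approximation of the proposition.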

Our second example has its source in the general theme of Laplace integrals. Often, the asymptotic behavior of models in the applied sciences is studied using this kind of integral. The example presented here amounts to a randomization of these models.

Proposition 2. Consider p, q > 0, a ∈ ]0, +∞], f ∈ C([0, a[) verifying f(0) ≠ 0, and X ∼ G(u, δ, λ). Suppose also a non-random sequence (µ_n)_{n≥1} satisfying lim_{n→+∞} µ_n = +∞ and let

\[
X_n := \int_0^{a} t^{p-1} f(t)\, e^{-\mu_n X t^{q}}\, dt
\qquad\text{and}\qquad
Y_n := \frac{f(0)\,\Gamma\!\left(\frac{p}{q}\right)}{q\,(\mu_n X)^{\frac{p}{q}}} .
\]

We then have lim_{n→+∞} sup_{x∈R} |F_{X_n}(x) − F_{Y_n}(x)| = 0, that is, the law of X_n is approximated by the law of Y_n for n sufficiently large.

4. A simulation study as a general practical approximation methodology

There follows a simulation study whose ultimate purpose is to sketch a methodology that allows the approximations proposed above to be used in practice. Let us suppose that we are faced with the problem of using a distribution given, for instance, by one of those treated in Propositions 1 and 2. A natural question is to determine the size of the parameter µ_n necessary for the asymptotic approximation to be usable instead of the true distribution. We propose the following protocol for a simulation study in order to determine the order of n for which it is safe to use the distribution of Y_n instead of the distribution of X_n. We present next a simulation study for the asymptotic approximation studied in Proposition 2 in the case where X ∼ G(u, δ, 0):

\[
X_n := \int_0^{+\infty} t^{p-1} \cos(t)\, e^{-\mu_n X t^{q}}\, dt
\qquad\text{and}\qquad
Y_n := \frac{\Gamma\!\left(\frac{p}{q}\right)}{q\,(\mu_n X)^{\frac{p}{q}}} .
\]

The simulation protocol, implemented in Wolfram's Mathematica, was the following.

1. For specific values of p and q, simulate samples of dimension 1000 of a Gamma distributed random variable with parameters u and δ, and the corresponding values of the random variables X_n and Y_n.
2. Compare the samples of the random variables X_n and Y_n by means, for instance, of the Kolmogorov-Smirnov test, determining the p-value.
3. Repeat the previous steps for different values of the parameters u, p and q, studying the p-value as a function of the shape parameter u.

A code sketch of these steps is given below.

In Figure 1 we show the variation of the p-value as a function of µ_n for three values of the shape parameter u of the random variable X. It is clear that, as the value of u increases, the variation becomes steeper. In Table 1 we present the size of µ_n necessary to achieve a p-value equal to one, with an error less than 10^{−10}, for different choices of the parameters p and q. For such a p-value the distributions of the samples of X_n and Y_n are impossible to differentiate by means of the Kolmogorov-Smirnov test. In Table 2 we present the size of µ_n necessary to achieve a p-value equal to 0.01 for different choices of the parameters p and q. For such a p-value the distributions of the samples of X_n and Y_n are impossible to differentiate, by the Kolmogorov-Smirnov test, up to a confidence level of 99%.
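The authors implemented the protocol in Mathematica, but no code is reproduced in the paper. The following is a minimal sketch, in Python with NumPy and SciPy, of steps 1 to 3 for the cosine example above; the parameter values q = 3, p = 1, δ = 2, the sample size 1000 and the shape values u = 2 and u = 5 follow the text and Figure 1, while the function name ks_pvalue, the random seed and the grid of values of µ_n are illustrative choices of this sketch, not of the authors. As in the protocol, a single Gamma sample is used to build both X_n and Y_n.

# Sketch of the simulation protocol (steps 1-3) for
#   X_n = int_0^inf t^(p-1) cos(t) exp(-mu_n X t^q) dt,
#   Y_n = Gamma(p/q) / (q (mu_n X)^(p/q)),  with X ~ G(u, delta, 0).
import numpy as np
from scipy import integrate, special, stats

rng = np.random.default_rng(2009)

def ks_pvalue(mu_n, u, delta=2.0, p=1.0, q=3.0, size=1000):
    # Step 1: one Gamma sample, used to build both X_n and Y_n.
    x = rng.gamma(shape=u, scale=delta, size=size)
    xn = np.array([integrate.quad(
            lambda t: t**(p - 1.0) * np.cos(t) * np.exp(-mu_n * xi * t**q),
            0.0, np.inf)[0] for xi in x])
    yn = special.gamma(p / q) / (q * (mu_n * x)**(p / q))
    # Step 2: two-sample Kolmogorov-Smirnov p-value.
    return stats.ks_2samp(xn, yn).pvalue

# Step 3: p-value as a function of mu_n for several shape parameters u.
for u in (2.0, 5.0):
    print(u, [round(ks_pvalue(mu, u), 3) for mu in (1, 2, 5, 10, 20)])

For µ_n large enough the two samples become indistinguishable and the p-value approaches one, which is the qualitative behavior reported in Figure 1 and in Tables 1 and 2.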

Figure 1: p-values as a function of µ_n for 3 values of u, among them u = 2 and u = 5 (q = 3, p = 1, δ = 2).

Table 1: First value of µ_n for which the p-value is equal to 1, for u = 2, …, 10 and different choices of p and q (rows for q = 2 and q = 3, with two values of p each; numerical entries not reproduced here).

Table 2: First value of µ_n for which the p-value is equal to 0.01, for u = 2, …, 10 and different choices of p and q (rows for q = 2 and q = 3, with two values of p each; numerical entries not reproduced here).

The simulations above confirm what was to be expected from the theoretical results presented. The method for practical use of the approximations may be summarized as follows. Given an asymptotic approximation, simulate samples of the two distributions, varying the parameter µ_n and performing, for instance, a Kolmogorov-Smirnov test, until the p-value reaches a prescribed value. For values of µ_n greater than the one so determined it is safe, up to the confidence level associated with the p-value chosen, to use the asymptotic approximation instead of the original distribution.
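A minimal sketch of this stopping rule, again in Python and again not the authors' code, is given below for the approximation of Proposition 1 with λ = 0 and a = 1; the threshold 0.01 mirrors the choice made for Table 2, while the grid of values of µ_n, the remaining parameter values and the helper name first_safe_mu are illustrative choices of this sketch.

# Scan increasing values of mu_n and return the first one for which the
# two-sample KS p-value between X_n and its approximation a/(mu_n X)
# reaches the prescribed threshold (setting of Proposition 1, lambda = 0).
import numpy as np
from scipy import integrate, stats

rng = np.random.default_rng(7)

def first_safe_mu(mu_grid, threshold=0.01, u=5.0, delta=2.0, a=1.0, size=1000):
    x = rng.gamma(shape=u, scale=delta, size=size)   # X ~ G(u, delta, 0)
    for mu in mu_grid:
        xn = np.array([a * integrate.quad(
                lambda t: np.exp(-mu * xi * t) / (1.0 + t**2),
                0.0, np.inf)[0] for xi in x])
        pval = stats.ks_2samp(xn, a / (mu * x)).pvalue
        if pval >= threshold:    # the test no longer separates the two samples
            return mu, pval
    return None                  # threshold not reached on this grid

print(first_safe_mu(mu_grid=[1, 2, 5, 10, 20, 50, 100]))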

5. Conclusion and final remarks

In this work we have shown that, under mild technical hypotheses, it is possible to prove the validity of asymptotic approximations, in the Kolmogorov distance sense, to distributions of random variables tied together by some non-trivial almost sure asymptotic relation. Other simulation results, not presented in this work, lead us to believe that the validity of these asymptotic approximations is much more comprehensive, thus indicating the need for further studies.

References

[1] Hoffmann-Jørgensen, J. (1994), Probability with a View Towards Statistics, volume II, Chapman & Hall.
[2] Oehlert, G. W. (1992), A Note on the Delta Method, The American Statistician, 46, no. 1.
