Asymptotic properties of the maximum likelihood estimator for a ballistic random walk in a random environment

1 Asymptotic properties of the maximum likelihood estimator for a ballistic random walk in a random environment
Catherine Matias
Joint works with F. Comets, M. Falconnet, D. & O. Loukianov
Currently: Laboratoire Statistique & Génome, Évry, FRANCE
Soon: Lab. Probabilités & Modèles Aléatoires, Paris, FRANCE

2 Outline Biophysical context Nearest-neighbour one-dimensional random walk in random environment MLE construction and properties RWRE and Branching process with immigration in random environment (BPIRE) Three examples Simulations

3 Outline Biophysical context Nearest-neighbour one-dimensional random walk in random environment MLE construction and properties RWRE and Branching process with immigration in random environment (BPIRE) Three examples Simulations

4 DNA unzipping
RWRE were introduced by [Chernov (67)] to model DNA replication. By the end of the 1990s, various DNA unzipping experiments appeared.
[Figure: a double-stranded DNA molecule being unzipped by a force f applied at its extremities.]
Goals:
- DNA sequencing (exploratory),
- study the structural properties of the molecule.

5 Outline Biophysical context Nearest-neighbour one-dimensional random walk in random environment MLE construction and properties RWRE and Branching process with immigration in random environment (BPIRE) Three examples Simulations

6 Model description I
Random environment on Z: ω = {ω_x}_{x∈Z} i.i.d. with ω_x ∈ (0,1) and ω_x ∼ ν_θ,
θ ∈ Θ unknown parameter, Θ ⊂ R^d a compact set,
P^θ = ν_θ^{⊗Z} the law of ω on (0,1)^Z and E^θ the corresponding expectation.
Markov process conditional on the environment: for fixed ω, let X = {X_t}_{t∈N} be the Markov chain on Z starting at X_0 = 0 and with transitions
P_ω(X_{t+1} = y | X_t = x) = ω_x if y = x + 1, 1 - ω_x if y = x - 1, 0 otherwise.
P_ω is the measure on the path space of X given ω (quenched law).
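
A minimal simulation sketch (illustrative, not from the talk) of the quenched walk, assuming a Beta environment on a finite window of sites and that the walk never leaves that window:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameter choice (cf. Example 3 below): Beta(5, 1) environment.
alpha, beta = 5.0, 1.0
half_window = 10_000
omega = rng.beta(alpha, beta, size=2 * half_window + 1)  # omega_x for x = -10^4, ..., 10^4

def run_until_hit(n, omega, offset=half_window):
    """Quenched walk started at X_0 = 0, stopped at T_n = inf{t : X_t = n}."""
    path = [0]
    while path[-1] != n:
        x = path[-1]
        # right step with probability omega_x, left step otherwise
        path.append(x + 1 if rng.random() < omega[x + offset] else x - 1)
    return np.array(path)

path = run_until_hit(1_000, omega)
print("T_n =", len(path) - 1)
```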

7 Model description II
Random walk in random environment (RWRE): the (unconditional) law of X is the annealed law
P^θ(·) = ∫ P_ω(·) dP^θ(ω).
Note that X is not a Markov process.
[Figure: from site x, the walk moves to x + 1 with probability ω_x and to x - 1 with probability 1 - ω_x.]

8 Limiting behaviour of X
Let ρ_x = (1 - ω_x)/ω_x, x ∈ Z. [Solomon (75)] proved the following classification:
(a) Recurrent case: if E^θ(log ρ_0) = 0, then
-∞ = liminf_t X_t < limsup_t X_t = +∞, P^θ-almost surely.
(b) Transient case: if E^θ(log ρ_0) < 0, then lim_t X_t = +∞, P^θ-almost surely.
If we moreover let T_n = inf{t ∈ N : X_t = n}, then
(b1) Ballistic case: if E^θ(ρ_0) < 1, then T_n/n → c < +∞, P^θ-almost surely.
(b2) Sub-ballistic case: if E^θ(ρ_0) ≥ 1, then T_n/n → +∞, P^θ-almost surely, as n tends to infinity.
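
For a concrete two-point environment (the parameter values are those used in the simulation section below), Solomon's conditions can be checked directly; a small illustrative sketch:

```python
import numpy as np

# Two-point environment nu = p*delta_{a1} + (1 - p)*delta_{a2} (cf. Example 1 below).
a1, a2, p = 0.4, 0.7, 0.3
rho = np.array([(1 - a1) / a1, (1 - a2) / a2])  # possible values of rho_0
w = np.array([p, 1 - p])                        # their probabilities

print("E[log rho_0] =", w @ np.log(rho))  # < 0 => transient to +infinity
print("E[rho_0]     =", w @ rho)          # < 1 => ballistic case
```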

9 Goal and context
Goal: estimate the parameter value θ* relying on the observation of X_{[0,T_n]}.
In a much more general setting, [Adelman & Enriquez (04)] provide a link between the RWRE and the environment, leading to moment estimators for the distribution ν_θ.
Drawback: one must estimate some moments first and then invert a function to recover the parameter θ, which may induce a loss of efficiency.
We focus on maximum likelihood estimation (MLE) and assume a transient ballistic random walk.

10 Outline Biophysical context Nearest-neighbour one-dimensional random walk in random environment MLE construction and properties RWRE and Branching process with immigration in random environment (BPIRE) Three examples Simulations

11 MLE construction I
Let L^n_x := Σ_{s=0}^{T_n - 1} 1{(X_s, X_{s+1}) = (x, x - 1)} be the number of left steps from site x, and R^n_x the number of right steps (defined similarly). We have
P_ω(X_{[0,T_n]}) = Π_{x∈Z} ω_x^{R^n_x} (1 - ω_x)^{L^n_x}
and (i.i.d. environment)
P^θ(X_{[0,T_n]}) = Π_{x∈Z} ∫_0^1 a^{R^n_x} (1 - a)^{L^n_x} dν_θ(a).
Note that:
- only the visited sites contribute to this product;
- the number of visited sites x < 0 is bounded;
- for x = 0, 1, ..., n - 1, R^n_x = L^n_{x+1} + 1.
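
A small helper sketch (not the authors' code) for extracting the step counts from an observed path stopped at T_n; the commented assertion checks the identity R^n_x = L^n_{x+1} + 1 stated above:

```python
import numpy as np

def step_counts(path, n):
    """Left- and right-step counts L^n_x, R^n_x for sites x = 0, ..., n."""
    path = np.asarray(path)
    L = np.zeros(n + 1, dtype=int)
    R = np.zeros(n + 1, dtype=int)
    for x, y in zip(path[:-1], path[1:]):
        if 0 <= x <= n:
            if y == x - 1:
                L[x] += 1  # left step from x
            else:
                R[x] += 1  # right step from x
    return L, R

# With `path` a trajectory stopped at T_n (see the simulation sketch above):
# L, R = step_counts(path, n)
# assert all(R[x] == L[x + 1] + 1 for x in range(n))
```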

12 MLE construction II
Let φ_θ be the function from N^2 to R given by
φ_θ(x, y) = log ∫_0^1 a^{x+1} (1 - a)^y dν_θ(a).   (1)
The criterion function θ ↦ ℓ_n(θ) is defined as
ℓ_n(θ) = Σ_{x=0}^{n-1} φ_θ(L^n_{x+1}, L^n_x),
and our estimator is
θ̂_n ∈ Argmax_{θ∈Θ} ℓ_n(θ).
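
A generic sketch of this criterion for an arbitrary ν_θ with a density on (0,1), evaluating the integral in (1) numerically; the one-parameter Beta(t, 1) family below is only a hypothetical example:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def phi(x, y, density):
    """phi_theta(x, y): log of the integral in (1) for a given density of nu_theta."""
    val, _ = quad(lambda a: a ** (x + 1) * (1 - a) ** y * density(a), 0.0, 1.0)
    return np.log(val)

def ell_n(theta, L, density_family):
    """Criterion l_n(theta) from the left-step counts L = (L^n_0, ..., L^n_n)."""
    dens = density_family(theta)
    return sum(phi(L[x + 1], L[x], dens) for x in range(len(L) - 1))

# Hypothetical one-parameter family: nu_t = Beta(t, 1), with density t * a^(t - 1).
beta_family = lambda t: (lambda a: t * a ** (t - 1))

# With L the observed left-step counts (see the previous sketch):
# fit = minimize_scalar(lambda t: -ell_n(t, L, beta_family),
#                       bounds=(1.5, 20.0), method="bounded")
# theta_hat = fit.x
```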

13 Results: consistency, asymptotic normality and efficiency
Under appropriate (and classical) assumptions, in the transient ballistic case, we establish that the MLE satisfies:
Consistency: lim_{n→+∞} θ̂_n = θ*, in P^{θ*}-probability.
Asymptotic normality: √n (θ̂_n - θ*) converges in P^{θ*}-distribution to N(0, Σ_{θ*}^{-1}).
Efficiency: Σ_{θ*} is the Fisher information matrix.
Francis Comets, Mikael Falconnet, Oleg Loukianov, Dasha Loukianova & Catherine Matias. Maximum likelihood estimator consistency for ballistic random walk in a parametric random environment. Stochastic Processes and their Applications, 124(1), 2014.
Mikael Falconnet, Dasha Loukianova & Catherine Matias. Asymptotic normality and efficiency of the maximum likelihood estimator for the parameter of a ballistic random walk in a random environment. Mathematical Methods of Statistics, 23(1):1-19, 2014.
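
A hedged sketch of how such a result is typically used downstream: with an estimate of the Fisher information Σ_{θ*}, the asymptotic normality statement yields coordinate-wise Wald-type confidence intervals (the construction below is generic and only illustrative):

```python
import numpy as np
from scipy.stats import norm

def wald_ci(theta_hat, sigma_hat, n, gamma=0.05):
    """Coordinate-wise (1 - gamma) asymptotic confidence intervals.

    sigma_hat estimates the Fisher information matrix; the asymptotic
    covariance of sqrt(n) * (theta_hat - theta*) is its inverse.
    """
    theta_hat = np.atleast_1d(np.asarray(theta_hat, dtype=float))
    sigma_hat = np.atleast_2d(np.asarray(sigma_hat, dtype=float))
    z = norm.ppf(1 - gamma / 2)
    half = z * np.sqrt(np.diag(np.linalg.inv(sigma_hat)) / n)
    return np.column_stack([theta_hat - half, theta_hat + half])

print(wald_ci(0.3, 4.0, n=10_000))  # hypothetical scalar example
```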

14 Outline Biophysical context Nearest-neighbour one-dimensional random walk in random environment MLE construction and properties RWRE and Branching process with immigration in random environment (BPIRE) Three examples Simulations

15 Underlying BPIRE I
Main property (Kesten, Kozlov & Spitzer, 75): under P^θ,
(L^n_n, L^n_{n-1}, ..., L^n_0) has the same distribution as (Z_0, Z_1, ..., Z_n),
where Z_0 = 0 and, for k = 0, ..., n - 1,
Z_{k+1} = Σ_{i=0}^{Z_k} ξ_{k+1,i},
with {ξ_{k,i}}_{k∈N, i∈N} independent and, for m ∈ N,
P_ω(ξ_{k,i} = m) = (1 - ω_k)^m ω_k.
Under the annealed law P^θ, {Z_n}_{n∈N} is an irreducible, positive recurrent, homogeneous Markov chain with transition kernel Q_θ.
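
A minimal sketch of this branching process under the annealed law, assuming (as in the i.i.d. setting above) that a fresh ω is drawn at each generation; the sum of Z_k + 1 geometric offspring counts is drawn in one shot as a negative binomial:

```python
import numpy as np

rng = np.random.default_rng(1)

def bpire_path(n, draw_omega):
    """Z_0 = 0 and Z_{k+1} = sum_{i=0}^{Z_k} xi_{k+1,i} with geometric offspring."""
    Z = np.zeros(n + 1, dtype=int)
    for k in range(n):
        omega_k = draw_omega()
        # Z_k + 1 i.i.d. Geometric(omega_k) counts (number of failures):
        # their sum is NegativeBinomial(Z_k + 1, omega_k).
        Z[k + 1] = rng.negative_binomial(Z[k] + 1, omega_k)
    return Z

print(bpire_path(10, lambda: rng.beta(5.0, 1.0)))  # hypothetical Beta environment
```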

16 Underlying BPIRE II
Consequence: we have an equality in distribution under P^θ,
ℓ_n(θ) = Σ_{k=0}^{n-1} φ_θ(Z_k, Z_{k+1}),
and the right-hand side is (up to a constant) the likelihood of a positive recurrent Markov process.
About the ballistic assumption: the stationary measure of (Z_n) has a finite first-order moment only in the ballistic case. In this case, ℓ_n/n converges to a finite limit ℓ.
The sub-ballistic case is studied in: Mikael Falconnet, Arnaud Gloter & Dasha Loukianova. Maximum likelihood estimation in the context of a sub-ballistic random walk in a parametric random environment. arXiv preprint.

17 Outline Biophysical context Nearest-neighbour one-dimensional random walk in random environment MLE construction and properties RWRE and Branching process with immigration in random environment (BPIRE) Three examples Simulations

18 Examples of environment distributions I
Example 1: finite and known support.
Fix a_1 < a_2 in (0,1) and let ν_p = p δ_{a_1} + (1 - p) δ_{a_2}, where δ_a is the Dirac mass located at value a.
Unknown parameter p ∈ Θ ⊂ (0,1) (namely θ = p).
Assume that a_1, a_2 and Θ are such that the process is transient and ballistic. Then the assumptions are satisfied and one can estimate p consistently and efficiently.
This may be generalised to K > 2 fixed and known support points and θ = (p_1, ..., p_{K-1}).
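
For this two-point example the integral in (1) reduces to a two-term sum, so the criterion is explicit in p; a one-dimensional sketch (illustrative only):

```python
import numpy as np
from scipy.optimize import minimize_scalar

a1, a2 = 0.4, 0.7  # known support points

def ell_n(p, L):
    """Explicit criterion for nu_p = p*delta_{a1} + (1 - p)*delta_{a2}."""
    L = np.asarray(L)
    terms = (p * a1 ** (L[1:] + 1) * (1 - a1) ** L[:-1]
             + (1 - p) * a2 ** (L[1:] + 1) * (1 - a2) ** L[:-1])
    return np.log(terms).sum()

# With L the observed left-step counts:
# p_hat = minimize_scalar(lambda p: -ell_n(p, L),
#                         bounds=(0.001, 0.999), method="bounded").x
```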

19 Examples of environment distributions II
Example 2: two unknown support points.
ν_θ = p δ_{a_1} + (1 - p) δ_{a_2} with unknown parameter θ = (p, a_1, a_2) ∈ Θ, where Θ is a compact subset of (0,1) × {(a_1, a_2) ∈ (0,1)^2 : a_1 < a_2} such that the process is transient and ballistic.
Then the assumptions are satisfied and one can estimate θ consistently. Moreover, if E^θ(ρ_0^3) < 1, the MLE is asymptotically normal and efficient.

20 Examples of environment distributions III
Example 3: Beta distribution.
dν(a) = (1/B(α, β)) a^{α-1} (1 - a)^{β-1} da, with unknown parameter θ = (α, β) ∈ Θ, where Θ is a compact subset of {(α, β) ∈ (0, +∞)^2 : α > β + 1}.
As E^θ(ρ_0) = β/(α - 1), the constraint α > β + 1 ensures that the process is transient and ballistic.
Then the assumptions are satisfied and one can estimate θ consistently and efficiently.
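
In the Beta case the integral in (1) is a ratio of Beta functions, φ_θ(x, y) = log B(α + x + 1, β + y) - log B(α, β), which gives a closed-form criterion; a sketch (the box constraints below do not enforce the ballistic condition α > β + 1 and are only illustrative):

```python
import numpy as np
from scipy.special import betaln
from scipy.optimize import minimize

def ell_n(theta, L):
    """Closed-form criterion for a Beta(alpha, beta) environment."""
    alpha, beta = theta
    L = np.asarray(L)
    return np.sum(betaln(alpha + L[1:] + 1, beta + L[:-1]) - betaln(alpha, beta))

# With L the observed left-step counts:
# theta_hat = minimize(lambda t: -ell_n(t, L), x0=np.array([4.0, 2.0]),
#                      bounds=[(1.0, 50.0), (0.1, 50.0)]).x
```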

21 Outline Biophysical context Nearest-neighbour one-dimensional random walk in random environment MLE construction and properties RWRE and Branching process with immigration in random environment (BPIRE) Three examples Simulations

22 Simulations protocol
Three models corresponding to the previous three examples, with θ* as in Table 1. In each model, 1,000 repeats of the following procedure:
- Generate a random environment according to the distribution ν_{θ*} on the set of sites {-10^4, ..., 10^4}.
- Run a random walk in this environment and stop it successively at the hitting times T_n, with n ∈ {10^3 k; 1 ≤ k ≤ 10}.
- For each value of n, estimate θ* with the MLE and with [Adelman & Enriquez (04)]'s procedure, estimate the Fisher information matrix Σ_{θ*} and compute a confidence interval for θ*.

Table 1: Parameter values for each experiment.
Simulation | Fixed parameter          | Estimated parameter
Example 1  | (a_1, a_2) = (0.4, 0.7)  | p* = 0.3
Example 2  | -                        | (a_1*, a_2*, p*) = (0.4, 0.7, 0.3)
Example 3  | -                        | (α*, β*) = (5, 1)
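
A small helper sketch for the "stop it successively at the hitting times T_n" step, reusing the hypothetical helpers from the earlier snippets:

```python
import numpy as np

def hitting_index(path, n):
    """T_n: index of the first visit of the path (started at 0) to level n."""
    hits = np.flatnonzero(np.asarray(path) == n)
    if hits.size == 0:
        raise ValueError(f"level {n} never reached")
    return int(hits[0])

# One repeat of the protocol (sketch):
# path = run_until_hit(10_000, omega)
# for n in range(1_000, 10_001, 1_000):
#     Tn = hitting_index(path, n)
#     L, _ = step_counts(path[:Tn + 1], n)
#     ...  # maximise ell_n over Theta, estimate Sigma_{theta*}, build the CI
```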

23 Boxplots of the MLE (white) and of [Adelman & Enriquez (04)]'s estimate (grey) - Ex. 1 (p̂) and Ex. 3 (α̂, β̂)

24 Boxplots of the MLE - Ex. 2 (p̂, â_1, â_2)

25 Empirical coverages of confidence regions
Table: Empirical coverages of the (1 - γ) asymptotic level confidence regions, for γ ∈ {0.01, 0.05, 0.1}, in Examples 1 to 3 and for each value of n, relying on 1,000 iterations.

26 Conclusions
Good performance of θ̂_n on simulated data:
- unbiased estimator (like [Adelman & Enriquez (04)]'s),
- less spread out than [Adelman & Enriquez (04)]'s (in fact, efficient),
- easier to compute (in Example 2, [Adelman & Enriquez (04)]'s estimate is out of reach).
Confidence regions built from θ̂_n have accurate empirical coverage.
Questions?

27 References
O. Adelman and N. Enriquez. Random walks in random environment: what a single trajectory tells. Israel J. Math., 142, 2004.
A. A. Chernov. Replication of a multicomponent chain by the lightning mechanism. Biofizika, 12, 1967.
F. Solomon. Random walks in a random environment. Ann. Probability, 3:1-31, 1975.
