On optimal stopping of autoregressive sequences
1 On optimal stopping of autoregressive sequences
Sören Christensen (joint work with A. Irle, A. Novikov)
Mathematisches Seminar, CAU Kiel
September 24, 2010
Sören Christensen (CAU Kiel) Optimal stopping of AR(1) sequences / 22
2 Outline
1. AR(1) processes
2. Basic results on optimal stopping
3. Threshold times as optimal stopping times
4. Phase-type distributions and overshoot
5. Continuous fit condition
6. Summary
7 AR(1) processes: Setting
AR(1) process: $X_n = \lambda X_{n-1} + Z_n$, $X_0 = x$, $\lambda \in (0,1)$, $Z_1, Z_2, \dots$ i.i.d.
Increment form: $X_n - X_{n-1} = -(1-\lambda) X_{n-1} + (L_n - L_{n-1})$, where $L_n = \sum_{i=1}^{n} Z_i$.
Explicit form: $X_n = \lambda^n X_0 + \sum_{i=0}^{n-1} \lambda^i Z_{n-i}$.
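The recursive and explicit forms of the AR(1) process can be checked against each other in a short simulation (a sketch; the value $\lambda = 0.7$ and standard normal innovations are illustrative choices, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, x0, n = 0.7, 1.0, 50           # illustrative parameters
Z = rng.standard_normal(n)          # iid innovations Z_1, ..., Z_n

# Recursive form: X_k = lam * X_{k-1} + Z_k
X = np.empty(n + 1)
X[0] = x0
for k in range(1, n + 1):
    X[k] = lam * X[k - 1] + Z[k - 1]

# Explicit form: X_n = lam^n * X_0 + sum_{i=0}^{n-1} lam^i * Z_{n-i}
explicit = lam**n * x0 + sum(lam**i * Z[n - 1 - i] for i in range(n))

assert np.isclose(X[n], explicit)
```

The explicit form is what makes the rescaling trick $X_\tau = \lambda^\tau x + \tilde{X}_\tau$ used later in the talk possible.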
8 AR(1) processes: Path of an AR(1) process (figure)
9 AR(1) processes: Threshold time, overshoot
In many applications, such as statistical surveillance, it is of interest to find the joint distribution of
$\tau_a = \inf\{n \in \mathbb{N}_0 : X_n \geq a\}$ (threshold time) and $X_{\tau_a} - a$ (overshoot).
(Figure: a path crossing the level $a$ at time $\tau_a$ with overshoot $X_{\tau_a} - a$.)
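The pair $(\tau_a, X_{\tau_a} - a)$ is easy to sample by Monte Carlo (a sketch; the level $a = 1$, Gaussian innovations, and all other parameters are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, x0, a, n_paths = 0.7, 0.0, 1.0, 5000   # illustrative parameters

taus, overshoots = [], []
for _ in range(n_paths):
    x, n = x0, 0
    while x < a:                    # run the AR(1) recursion until level a is crossed
        x = lam * x + rng.standard_normal()
        n += 1
    taus.append(n)                  # threshold time tau_a
    overshoots.append(x - a)        # overshoot X_{tau_a} - a

print(np.mean(taus), np.mean(overshoots))
```

For Gaussian innovations no closed form for this joint law is available, which is exactly what motivates the phase-type innovations considered later.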
13 Basic results on optimal stopping: Optimal stopping of Markov processes
Let $(X_n)_{n \in \mathbb{N}_0}$ be a Markov process on $E$, $g : E \to \mathbb{R}$, $\rho \in (0,1)$.
Find a stopping time $\tau^*$ that maximizes $E_x(\rho^{\tau} g(X_{\tau}))$.
Solution:
$v(x) := \sup_{\tau} E_x(\rho^{\tau} g(X_{\tau}))$ (value function),
$S := \{x \in E : v(x) = g(x)\}$ (optimal stopping set),
optimal stopping time: $\tau^* = \inf\{n \in \mathbb{N}_0 : X_n \in S\}$.
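For a concrete AR(1) example, $v$ and $S$ can be approximated by value iteration on a truncated grid (a sketch; the discretization, the reward $g(x) = x^+$, and all parameter values are my own illustrative choices, not from the talk):

```python
import numpy as np

lam, rho, sigma = 0.5, 0.9, 1.0               # illustrative parameters
x = np.linspace(-4.0, 4.0, 161)               # truncated state grid
g = np.maximum(x, 0.0)                        # reward g(x) = x^+

# Transition matrix: row i is the N(lam * x_i, sigma^2) density on the grid, renormalized
P = np.exp(-0.5 * ((x[None, :] - lam * x[:, None]) / sigma) ** 2)
P /= P.sum(axis=1, keepdims=True)

# Value iteration for v = max(g, rho * P v), started from v = g
v = g.copy()
for _ in range(2000):
    v_new = np.maximum(g, rho * (P @ v))
    if np.max(np.abs(v_new - v)) < 1e-12:
        break
    v = v_new

S = v - g < 1e-9                              # approximate stopping set {v = g}
a_star = x[S.argmax()]                        # left endpoint of the stopping region
print(a_star)
```

In this run the stopping set comes out as a half-line $[a^*, \infty)$, matching the threshold structure proved on the following slides.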
17 Threshold times as optimal stopping times: Elementary approach to optimal stopping of AR(1) processes
1. Reduce the set of potential optimal stopping times to threshold times.
2. Find the joint distribution of $(\tau_a, X_{\tau_a} - a)$ to obtain an expression for $E_x(\rho^{\tau_a} g(X_{\tau_a}))$.
3. Find the optimal boundary $a^*$.
19 Threshold times as optimal stopping times: Novikov–Shiryaev problem
Let $g(x) = (x^+)^{\alpha}$, $\alpha > 0$. Consider
$\sup_{\tau} E_x(\rho^{\tau} ((X_{\tau})^+)^{\alpha}) = \sup_{\tau} E(\rho^{\tau} ((\lambda^{\tau} x + \tilde{X}_{\tau})^+)^{\alpha})$,
where $\tilde{X}$ denotes the process started in $0$.
Proposition. If $(X_n)_{n \in \mathbb{N}_0}$ is an AR(1) process, then there exists $a^*$ such that the optimal stopping set $S$ is given by $S = [a^*, \infty)$.
22 Threshold times as optimal stopping times: Proof
For simplicity assume $Z_1 \geq 0$; the state space is then $[0, \infty)$. With $\tilde{X}$ the process started in $0$,
$\frac{v(x)}{g(x)} = \frac{\sup_{\tau} E(\rho^{\tau} (\lambda^{\tau} x + \tilde{X}_{\tau})^{\alpha})}{x^{\alpha}} = \sup_{\tau} E(\rho^{\tau} (\lambda^{\tau} + \tilde{X}_{\tau}/x)^{\alpha})$,
which is nonincreasing in $x$. Hence $S = \{x : v(x) = g(x)\}$ is of the form $[a^*, \infty)$.
24 Phase-type distributions and overshoot: Motivation
If $(X_n)_{n \in \mathbb{N}_0}$ is a random walk (or an AR(1) process) with Exp($\beta$)-distributed jumps, then $\tau_a$ and $X_{\tau_a} - a$ are independent and $X_{\tau_a} - a \stackrel{d}{=} \mathrm{Exp}(\beta)$.
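The memorylessness behind this fact shows up clearly in simulation. A sketch for a random walk with increments $Z = S - T$, $S \sim \mathrm{Exp}(\beta)$; the rates $\beta = 2$ (up) and $4$ (down), chosen so the drift is positive and the level is reached a.s., are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
beta, a, n_paths = 2.0, 5.0, 20000     # illustrative parameters

taus, overshoots = [], []
for _ in range(n_paths):
    x, n = 0.0, 0
    while x < a:
        # increment Z = S - T with S ~ Exp(beta), T ~ Exp(4);
        # drift 1/2 - 1/4 > 0, so the level a is crossed a.s.
        x += rng.exponential(1 / beta) - rng.exponential(1 / 4)
        n += 1
    taus.append(n)
    overshoots.append(x - a)

# the overshoot should again be Exp(beta) and independent of tau_a
print(np.mean(overshoots), 1 / beta)
print(np.corrcoef(taus, overshoots)[0, 1])
```

The crossing always happens through an exponential upward jump, and the excess of an exponential over any threshold is again exponential; that is the property phase-type distributions generalize.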
27 Phase-type distributions and overshoot: Distributions of phase-type
$(J_t)_{t \geq 0}$: Markov chain in continuous time on $\{1, \dots, m\} \cup \{\Delta\}$; the states $1, \dots, m$ are transient, $\Delta$ is absorbing.
Generator: $\hat{Q} = \begin{pmatrix} Q & q \\ 0 & 0 \end{pmatrix}$, $Q \in \mathbb{R}^{m \times m}$, $q = -Q \mathbf{1}$ (with $\mathbf{1} = (1, \dots, 1)^T$).
$\hat{\alpha} = (\alpha_1, \dots, \alpha_m, 0)$: initial distribution.
$\eta := \inf\{t \geq 0 : J_t = \Delta\}$.
$\mathrm{PH}(Q, \alpha)$, the distribution of $\eta$ under $P_{\hat{\alpha}}$, is called the phase-type distribution with parameters $Q$, $\alpha$.
29 Phase-type distributions and overshoot: Properties of phase-type distributions
Density: $f(t) = \alpha e^{Qt} q$.
Laplace transform: $E_{\hat{\alpha}}(e^{s\eta}) = \alpha(-sI - Q)^{-1} q$ for $s \in \mathbb{C}$ with $\Re(s) < \delta$.
Closed under mixtures and convolutions.
Phase-type distributions are dense in the set of all distributions on $(0, \infty)$.
Example: $m = 1$, $Q = (-\lambda)$: $\mathrm{PH}(Q, (1)) = \mathrm{Exp}(\lambda)$.
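Sampling from a phase-type law amounts to running the absorbing chain until it hits $\Delta$. A sketch, checked against the analytic mean $E\eta = \alpha(-Q)^{-1}\mathbf{1}$; the 2-phase generator is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(3)

# illustrative 2-phase generator restricted to the transient states,
# with exit rate vector q = -Q @ 1
Q = np.array([[-3.0, 1.0],
              [0.5, -2.0]])
q = -Q @ np.ones(2)
alpha = np.array([0.6, 0.4])

def sample_ph(Q, q, alpha, rng):
    """Run the absorbing chain and return the absorption time eta."""
    m = len(alpha)
    i = rng.choice(m, p=alpha)          # initial transient state
    t = 0.0
    while True:
        rate = -Q[i, i]
        t += rng.exponential(1 / rate)  # holding time in state i
        # jump to transient j with prob Q[i,j]/rate, absorb with prob q[i]/rate
        p = np.append(Q[i] / rate, q[i] / rate)
        p[i] = 0.0
        j = rng.choice(m + 1, p=p)
        if j == m:                      # absorbed: eta = t
            return t
        i = j

samples = [sample_ph(Q, q, alpha, rng) for _ in range(20000)]
mean_exact = alpha @ np.linalg.solve(-Q, np.ones(2))
print(np.mean(samples), mean_exact)
```

The same matrix-analytic expressions ($\alpha e^{Qt} q$ for the density, $\alpha(-Q)^{-1}\mathbf{1}$ for the mean) are what make the overshoot computations on the next slide explicit.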
32 Phase-type distributions and overshoot: Phase-type distribution and overshoot
Generalization: If $(X_n)_{n \in \mathbb{N}_0}$ is an AR(1) process with innovations $Z_n = S_n - T_n$, where $S_n, T_n \geq 0$ are independent and $S_n \sim \mathrm{PH}(Q, \alpha)$, then
$E_x(\rho^{\tau_a} g(X_{\tau_a})) = \sum_{i=1}^{m} E_x(\rho^{\tau_a} 1_{K_i}) \, E(g(a + R_i))$,
where $R_i \sim \mathrm{PH}(Q, e_i)$ and $K_1, \dots, K_m$ are suitable events.
How to find $E_x(\rho^{\tau_a} 1_{K_i})$? Answer: use the stationary distribution, complex martingales, and complex analysis.
35 Continuous fit condition: How to find the optimal threshold $a^*$?
For each $a \in \mathbb{R}$ we consider $v_a(x) = E_x(\rho^{\tau_a} g(X_{\tau_a}))$.
Choose $a^*$ such that $v_{a^*}(a^*) = g(a^*)$.
36 Continuous fit condition: Continuous fit
(Figure.) $a^* \approx 0.7$ is the unique solution to the transcendental equation $g(a^*) = v_{a^*}(a^*)$.
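Numerically, the optimal level can also be located directly by estimating $v_a$ at a fixed starting point over a grid of candidate thresholds (a Monte Carlo sketch; the reward $g(x) = x^+$, discount $\rho = 0.9$, Gaussian innovations, and the grid are illustrative assumptions, so the resulting threshold need not match the $a^* \approx 0.7$ from the slide's example):

```python
import numpy as np

rng = np.random.default_rng(4)
lam, rho, x0 = 0.5, 0.9, 0.0            # illustrative parameters
n_paths, horizon = 4000, 150            # rho**150 is negligible

def value_of_threshold(a):
    """Monte Carlo estimate of v_a(x0) = E_x0[rho^tau_a * g(X_tau_a)], g(x) = x^+."""
    total = 0.0
    for _ in range(n_paths):
        x = x0
        for n in range(1, horizon + 1):
            x = lam * x + rng.standard_normal()
            if x >= a:                  # level crossed: collect discounted reward
                total += rho**n * max(x, 0.0)
                break
    return total / n_paths

grid = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
values = np.array([value_of_threshold(a) for a in grid])
a_best = grid[values.argmax()]
print(dict(zip(grid, values.round(3))), a_best)
```

The estimated value is low both for very small $a$ (stopping too early) and very large $a$ (the discount $\rho^{\tau_a}$ eats the reward), with the maximum at an interior threshold; the continuous fit equation pins down the same point analytically.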
39 Continuous fit condition: Smooth and continuous fit condition
Principle for continuous-time processes. Let $g$ be differentiable; then the following can be expected:
1. If the process $X$ enters the interior of the optimal stopping set $S$ immediately after starting in $a \in \partial S$, then the value function $v$ is smooth at $a$.
2. If the process $X$ does not enter the interior of the optimal stopping set $S$ immediately after starting in $a \in \partial S$, then the value function $v$ is continuous at $a$.
The second principle holds in our situation.
41 Summary
We studied optimal stopping problems driven by autoregressive processes.
Elementary arguments reduce the problem to finding an optimal threshold time.
The joint distribution of $(\tau_a, X_{\tau_a} - a)$ can be obtained for phase-type innovations.
The optimal threshold can be found using the continuous fit principle.
More informationMATH 56A: STOCHASTIC PROCESSES CHAPTER 6
MATH 56A: STOCHASTIC PROCESSES CHAPTER 6 6. Renewal Mathematically, renewal refers to a continuous time stochastic process with states,, 2,. N t {,, 2, 3, } so that you only have jumps from x to x + and
More informationOn the principle of smooth fit in optimal stopping problems
1 On the principle of smooth fit in optimal stopping problems Amir Aliev Moscow State University, Faculty of Mechanics and Mathematics, Department of Probability Theory, 119992, Moscow, Russia. Keywords:
More informationRandom Walks Conditioned to Stay Positive
1 Random Walks Conditioned to Stay Positive Bob Keener Let S n be a random walk formed by summing i.i.d. integer valued random variables X i, i 1: S n = X 1 + + X n. If the drift EX i is negative, then
More informationOn detection of unit roots generalizing the classic Dickey-Fuller approach
On detection of unit roots generalizing the classic Dickey-Fuller approach A. Steland Ruhr-Universität Bochum Fakultät für Mathematik Building NA 3/71 D-4478 Bochum, Germany February 18, 25 1 Abstract
More informationPreliminary Exam: Probability 9:00am 2:00pm, Friday, January 6, 2012
Preliminary Exam: Probability 9:00am 2:00pm, Friday, January 6, 202 The exam lasts from 9:00am until 2:00pm, with a walking break every hour. Your goal on this exam should be to demonstrate mastery of
More informationConvex Optimization of Graph Laplacian Eigenvalues
Convex Optimization of Graph Laplacian Eigenvalues Stephen Boyd Stanford University (Joint work with Persi Diaconis, Arpita Ghosh, Seung-Jean Kim, Sanjay Lall, Pablo Parrilo, Amin Saberi, Jun Sun, Lin
More information2. Variance and Covariance: We will now derive some classic properties of variance and covariance. Assume real-valued random variables X and Y.
CS450 Final Review Problems Fall 08 Solutions or worked answers provided Problems -6 are based on the midterm review Identical problems are marked recap] Please consult previous recitations and textbook
More informationMaximum Likelihood Estimation
Maximum Likelihood Estimation Assume X P θ, θ Θ, with joint pdf (or pmf) f(x θ). Suppose we observe X = x. The Likelihood function is L(θ x) = f(x θ) as a function of θ (with the data x held fixed). The
More informationMarkov Switching Models
Applications with R Tsarouchas Nikolaos-Marios Supervisor Professor Sophia Dimelis A thesis presented for the MSc degree in Business Mathematics Department of Informatics Athens University of Economics
More informationarxiv: v2 [q-fin.cp] 9 Jun 2011
A METHOD FOR PRICING AMERICAN OPTIONS USING SEMI-INFINITE LINEAR PROGRAMMING Sören Christensen Christian-Albrechts-Universität Kiel arxiv:1103.4483v2 [q-fin.cp] 9 Jun 2011 We introduce a new approach for
More informationAnalytical properties of scale functions for spectrally negative Lévy processes
Analytical properties of scale functions for spectrally negative Lévy processes Andreas E. Kyprianou 1 Department of Mathematical Sciences, University of Bath 1 Joint work with Terence Chan and Mladen
More informationChapter 7. Markov chain background. 7.1 Finite state space
Chapter 7 Markov chain background A stochastic process is a family of random variables {X t } indexed by a varaible t which we will think of as time. Time can be discrete or continuous. We will only consider
More informationProbabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components
Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components John Z. Sun Massachusetts Institute of Technology September 21, 2011 Outline Automata Theory Error in Automata Controlling
More informationStability of the two queue system
Stability of the two queue system Iain M. MacPhee and Lisa J. Müller University of Durham Department of Mathematical Science Durham, DH1 3LE, UK (e-mail: i.m.macphee@durham.ac.uk, l.j.muller@durham.ac.uk)
More informationLecture 12. F o s, (1.1) F t := s>t
Lecture 12 1 Brownian motion: the Markov property Let C := C(0, ), R) be the space of continuous functions mapping from 0, ) to R, in which a Brownian motion (B t ) t 0 almost surely takes its value. Let
More informationIrregular Birth-Death process: stationarity and quasi-stationarity
Irregular Birth-Death process: stationarity and quasi-stationarity MAO Yong-Hua May 8-12, 2017 @ BNU orks with W-J Gao and C Zhang) CONTENTS 1 Stationarity and quasi-stationarity 2 birth-death process
More informationMath 6810 (Probability) Fall Lecture notes
Math 6810 (Probability) Fall 2012 Lecture notes Pieter Allaart University of North Texas September 23, 2012 2 Text: Introduction to Stochastic Calculus with Applications, by Fima C. Klebaner (3rd edition),
More informationABC methods for phase-type distributions with applications in insurance risk problems
ABC methods for phase-type with applications problems Concepcion Ausin, Department of Statistics, Universidad Carlos III de Madrid Joint work with: Pedro Galeano, Universidad Carlos III de Madrid Simon
More informationMFM Practitioner Module: Quantitiative Risk Management. John Dodson. October 14, 2015
MFM Practitioner Module: Quantitiative Risk Management October 14, 2015 The n-block maxima 1 is a random variable defined as M n max (X 1,..., X n ) for i.i.d. random variables X i with distribution function
More informationPerturbed Proximal Gradient Algorithm
Perturbed Proximal Gradient Algorithm Gersende FORT LTCI, CNRS, Telecom ParisTech Université Paris-Saclay, 75013, Paris, France Large-scale inverse problems and optimization Applications to image processing
More informationApproximation of Heavy-tailed distributions via infinite dimensional phase type distributions
1 / 36 Approximation of Heavy-tailed distributions via infinite dimensional phase type distributions Leonardo Rojas-Nandayapa The University of Queensland ANZAPW March, 2015. Barossa Valley, SA, Australia.
More informationThe G/GI/N Queue in the Halfin-Whitt Regime I: Infinite Server Queue System Equations
The G/GI/ Queue in the Halfin-Whitt Regime I: Infinite Server Queue System Equations J. E Reed School of Industrial and Systems Engineering Georgia Institute of Technology October 17, 27 Abstract In this
More informationLectures on Stochastic Stability. Sergey FOSS. Heriot-Watt University. Lecture 4. Coupling and Harris Processes
Lectures on Stochastic Stability Sergey FOSS Heriot-Watt University Lecture 4 Coupling and Harris Processes 1 A simple example Consider a Markov chain X n in a countable state space S with transition probabilities
More informationfor all f satisfying E[ f(x) ] <.
. Let (Ω, F, P ) be a probability space and D be a sub-σ-algebra of F. An (H, H)-valued random variable X is independent of D if and only if P ({X Γ} D) = P {X Γ}P (D) for all Γ H and D D. Prove that if
More information