A Central Limit Theorem for Truncating Stochastic Algorithms
Jérôme Lelong (CERMICS)
Tuesday September 5, 2006

General Framework

Let u : θ ∈ R^d ↦ u(θ) ∈ R^d be a continuous function defined as an expectation on a probability space (Ω, A, P):

    u : R^d → R^d,  θ ↦ E[U(θ, Z)],

where Z is a random variable in R^m and U is a measurable function defined on R^d × R^m.

Hypothesis 1 (convexity). There exists a unique θ⋆ ∈ R^d such that u(θ⋆) = 0, and for all θ ∈ R^d, θ ≠ θ⋆,

    ⟨θ − θ⋆, u(θ)⟩ > 0.

Remark: if u is the gradient of a strictly convex function, then u satisfies Hypothesis 1.

Problem: how to find the root θ⋆ of u?
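The Remark can be checked in one line. If u = ∇V with V strictly convex and differentiable, and θ⋆ denotes the minimiser of V, then strict monotonicity of the gradient of a strictly convex function gives, for θ ≠ θ⋆,

```latex
\langle \theta - \theta^\star,\, u(\theta) \rangle
  = \langle \theta - \theta^\star,\, \nabla V(\theta) - \nabla V(\theta^\star) \rangle > 0 ,
```

using ∇V(θ⋆) = 0; the same strict inequality shows that θ⋆ is the only root of u.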
For sub-linear functions

Assume that for all θ ∈ R^d,

    E[|U(θ, Z)|²] ≤ K(1 + |θ|²).   (1)

We define, for θ_0 ∈ R^d,

    θ_{n+1} = θ_n − γ_{n+1} U(θ_n, Z_{n+1}),   (2)

where (Z_n)_n is i.i.d. with the law of Z, and γ_n > 0, γ_n ↓ 0, ∑ γ_i = ∞ and ∑ γ_i² < ∞.

Theorem 1 (Robbins-Monro). Assume Hypothesis 1 and that Equation (1) holds. Then the sequence defined by (2) converges a.s. to θ⋆.

For fast-growing functions: an intuitive approach

Consider an increasing sequence of compact sets (K_j)_j such that ∪_{j≥0} K_j = R^d, and (Z_n)_n and (γ_n)_n as defined previously. To prevent the algorithm from blowing up, θ_n should remain at each step in a given compact set; if such is not the case, reset the algorithm and consider a larger compact set. This idea is due to Chen (see [Chen and Zhu, 1986]). Condition (1) is barely satisfied in practice.

For fast-growing functions: mathematical approach

For θ_0 ∈ K_0 and σ_0 = 0, we define (θ_n)_n and (σ_n)_n by

    θ_{n+1/2} = θ_n − γ_{n+1} U(θ_n, Z_{n+1}),
    if θ_{n+1/2} ∈ K_{σ_n}:  θ_{n+1} = θ_{n+1/2} and σ_{n+1} = σ_n,   (3)
    if θ_{n+1/2} ∉ K_{σ_n}:  θ_{n+1} = θ_0 and σ_{n+1} = σ_n + 1.

σ_n counts the number of truncations up to time n. Let F_n = σ(Z_k; k ≤ n); θ_n is F_n-measurable and Z_{n+1} is independent of F_n.
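As an illustration, here is a minimal sketch of scheme (3) in Python. This is not code from the talk: the mean field u(θ) = θ + θ³ and the compact sets K_j = {|θ| ≤ 2 + j} are toy choices of mine. The drift grows like θ³, so condition (1) fails, yet the truncated procedure still converges to the root θ⋆ = 0 with finitely many truncations, as Theorem 2 below predicts.

```python
import numpy as np

def chen_truncated_sa(sample_U, theta0, gamma, radius, n_iter, rng):
    """Chen's randomly truncated procedure, scheme (3), in dimension 1:
    move by -gamma_{n+1} U(theta_n, Z_{n+1}); keep the move if it stays in
    K_{sigma_n}, otherwise reset to theta_0 and enlarge the compact set."""
    theta, sigma = float(theta0), 0
    path = [theta]
    for n in range(n_iter):
        candidate = theta - gamma(n + 1) * sample_U(theta, rng)
        if abs(candidate) <= radius(sigma):
            theta = candidate                            # theta_{n+1/2} in K_{sigma_n}
        else:
            theta, sigma = float(theta0), sigma + 1      # truncate: reset, enlarge
        path.append(theta)
    return np.array(path), sigma

# Toy mean field u(theta) = theta + theta**3: fast growing, so (1) fails,
# but Hypothesis 1 holds with root theta* = 0.
rng = np.random.default_rng(0)
U = lambda theta, rng: theta + theta**3 + rng.normal()
path, sigma = chen_truncated_sa(U, theta0=1.0, gamma=lambda n: 1.0 / n,
                                radius=lambda j: 2.0 + j, n_iter=20_000, rng=rng)
print(round(path[-1], 3), sigma)   # last iterate near 0, finitely many truncations
```

After the first few steps the iterate settles inside a fixed compact set and the truncation counter stops growing, which is the behaviour stated in Theorem 2.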
It is often more convenient to rewrite (3) as

    θ_{n+1} = θ_n − γ_{n+1} u(θ_n) − γ_{n+1} δM_{n+1} + γ_{n+1} p_{n+1},   (4)

where θ_n − γ_{n+1} u(θ_n) is the Newton part of the standard Robbins-Monro algorithm, γ_{n+1} δM_{n+1} the noise term and γ_{n+1} p_{n+1} the truncation term, with

    δM_{n+1} = U(θ_n, Z_{n+1}) − u(θ_n),
    p_{n+1} = u(θ_n) + δM_{n+1} + γ_{n+1}^{-1}(θ_0 − θ_n)  if θ_{n+1/2} ∉ K_{σ_n},  and 0 otherwise.

δM_n is a martingale increment, p_n is the truncation term.

a.s. convergence of Chen's procedure

Hypothesis 2 (integrability). For all p > 0, the series ∑_n γ_{n+1} δM_{n+1} 1_{|θ_n| ≤ p} converges a.s.

Hypothesis 2 is satisfied as soon as u and θ ↦ E[|U(θ, Z)|²] are bounded on any compact set (or continuous).

Theorem 2. Under Hypotheses 1 and 2, the sequence (θ_n)_n defined by (3) converges a.s. to θ⋆ and the sequence (σ_n)_n is a.s. finite.

A proof of this theorem can be found in [Delyon, 1996].

General Problem

Option pricing problem in a Brownian-driven model (no jump): compute E[ψ(G)] by a Monte Carlo method, with G ~ N(0, I_d). One way of reducing the variance is to perform importance sampling. For all θ ∈ R^d,

    E[ψ(G)] = E[ψ(G + θ) e^{−θ·G − |θ|²/2}].   (5)

Minimise the second moment of the estimator,

    v(θ) = E[ψ²(G + θ) e^{−2θ·G − |θ|²}] = E[ψ²(G) e^{−θ·G + |θ|²/2}],   (6)

whose gradient is

    ∇v(θ) = E[(θ − G) ψ²(G) e^{−θ·G + |θ|²/2}].
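The whole approach can be condensed into a few lines of Python. This is a 1-d toy of my own, not the talk's implementation: run the truncated scheme (3) on u = ∇v, using (θ − G) ψ²(G) e^{−θG + θ²/2} as the noisy observation U(θ, G) suggested by (6). Note the e^{θ²/2} factor: U grows much faster than linearly in θ, which is exactly why the truncations are needed. The payoff ψ and the compact sets are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(4)
psi = lambda g: np.maximum(np.exp(g) - 1.5, 0.0)   # hypothetical payoff psi(G)

def grad_v_sample(theta, g):
    """One unbiased observation U(theta, G) of grad v(theta), from (6)."""
    return (theta - g) * psi(g) ** 2 * np.exp(-theta * g + theta**2 / 2)

theta, sigma, theta0 = 0.0, 0, 0.0
for n in range(1, 200_001):
    cand = theta - (1.0 / n) * grad_v_sample(theta, rng.standard_normal())
    if abs(cand) <= 2.0 + sigma:                 # K_j = [-(2 + j), 2 + j]
        theta = cand
    else:
        theta, sigma = theta0, sigma + 1         # truncate, enlarge the compact set
print(round(theta, 2), sigma)  # theta approximates the variance-minimising drift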
Analysis of the problem

To use the previous algorithms, we set u = ∇v and Z = G.
- v is strictly convex, hence Hypothesis 1 is satisfied.
- ∇v is not sub-linear: one cannot use a standard stochastic approximation scheme (Theorem 1 does not apply).
- Hypothesis 2 holds even if ψ is of exponential type.
- Theorem 2 holds and provides a way to compute θ⋆.

More details in [Arouna, 2004].

A CLT for Standard S.A. I

First, let us consider the standard algorithm

    θ_{n+1} = θ_n − γ_{n+1} u(θ_n) − γ_{n+1} δM_{n+1}

with γ_n = γ/(n+1)^α, 1/2 < α ≤ 1. For n ≥ 0, we define the renormalised error

    Δ_n = (θ_n − θ⋆)/√γ_n.

A CLT for Standard S.A. II

Hypothesis 3.
- There exist a function y : R^d → R^{d×d} satisfying lim_{x→0} y(x) = 0 and a symmetric positive definite matrix A such that u(θ) = A(θ − θ⋆) + y(θ − θ⋆)(θ − θ⋆).
- There exists a real number ρ > 0 such that κ = sup_n E(|δM_n|^{2+ρ}) < ∞.
- There exists a symmetric positive definite matrix Σ such that E(δM_n δM_n^T | F_{n−1}) → Σ in probability.
A CLT for Standard S.A. III

Under Hypotheses 1 and 3, and if the algorithm converges a.s., then

    Δ_n → N(0, V) in distribution.

A proof of this result can be found in [Kushner and Yin, 2003], [Benveniste et al., 1990], [Bouton, 1985] or [Duflo, 1997], for instance.

Why we cannot deduce a CLT for truncating S.A. from the CLT for standard S.A.

- The number of projections is a.s. finite: there exists an event A with P(A) = 1 such that for every ω ∈ A there is N(ω) with p_n(ω) = 0 for all n > N(ω). N is an a.s. finite r.v. but it is not bounded, hence one cannot use a time-shifting argument.
- Random time shifting does not preserve convergence in distribution: even if X_n → X in distribution and τ < ∞ a.s., one can build trivial examples where X_{n+τ} does not converge. Choose for instance τ and τ′ independent r.v. on {0, 1} with parameter 1/2 and set X_n := (−1)^n (τ − τ′).

A CLT for Randomly Truncating S.A.

Consider Chen's algorithm as defined by (3).

Theorem 3. Assume Hypotheses 1 and 3, and that there exists η > 0 such that, for all n, d(θ⋆, ∂K_n) > η (the root stays at distance η from the boundaries of the truncation sets). Then Δ_n → N(0, V) in distribution, with

    V = ∫_0^∞ e^{−At} Σ e^{−At} dt                              if 1/2 < α < 1,
    V = γ ∫_0^∞ e^{−(γA − I/2)t} Σ e^{−(γA − I/2)t} dt          if α = 1 and γA − I/2 > 0.

A functional CLT for Chen's algorithm I

We introduce a sequence of interpolating times {t_n(u); u ≥ 0, n ≥ 0}:

    t_n(u) = sup{ k ≥ 0 ; ∑_{i=n}^{n+k} γ_i ≤ u },   (7)

with the convention sup ∅ = 0. We set Δ^n(0) = Δ_n and Δ^n(t) = Δ_{n + t_n(t) + 1} for t > 0. For t ∈ [∑_{i=n}^{n+p} γ_i, ∑_{i=n}^{n+p+1} γ_i), one has t_n(t) = p and Δ^n(t) = Δ_{n+p+1}.
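Definition (7) is easy to see in action. A small sketch, with an arbitrary admissible step sequence of my choosing, computing t_n(u) and checking the bracketing property that underlies the interpolation Δ^n(t) = Δ_{n + t_n(t) + 1}:

```python
import numpy as np

gamma = lambda i: 1.0 / (i + 1) ** 0.7      # an arbitrary admissible step sequence

def t_n(u, n, n_terms=100_000):
    """t_n(u) = sup{k >= 0 : sum_{i=n}^{n+k} gamma_i <= u}, as in (7)."""
    csum = np.cumsum(gamma(np.arange(n, n + n_terms)))   # partial sums from index n
    k = int(np.searchsorted(csum, u, side="right")) - 1
    return max(k, 0)                                     # convention sup(empty set) = 0

n, u = 1_000, 3.0
k = t_n(u, n)
partial = gamma(np.arange(n, n + k + 1)).sum()
# partial <= u < partial + gamma_{n+k+1}: u falls in the k-th interpolation interval
print(partial <= u < partial + gamma(n + k + 1))   # -> True
```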
A functional CLT for Chen's algorithm II

We introduce W^n(·) defined by W^n(0) = 0 and

    W^n(t) = ∑_{i=n+1}^{n+t_n(t)+1} √γ_i δM_i   for t > 0.   (8)

Theorem 4. Assume the hypotheses of Theorem 3. Then

    (Δ^n(·), W^n(·)) ⟹ (Δ, W) in D × D

on any finite time interval, where Δ is a stationary Ornstein-Uhlenbeck process of initial law N(0, V) and W is a Wiener process, measurable with respect to F^{Δ,W}, with covariance matrix Σ.

Averaging Algorithms I

We restrict to γ_n = γ/(n+1)^α with 1/2 < α < 1. For any t > 0, we introduce a moving-window average of the iterates

    θ̂_n(t) = (γ_n / t) ∑_{i=n}^{n+⌊t/γ_n⌋} θ_i.   (9)

We study the renormalised error

    Δ̂_n(t) = (θ̂_n(t) − θ⋆)/√γ_n = (√γ_n / t) ∑_{i=n}^{n+⌊t/γ_n⌋} (θ_i − θ⋆).   (10)

We need to characterise the limit law of (Δ_n, Δ_{n+1}, …, Δ_{n+p}) when (n, p) → ∞.

A CLT for Averaging Algorithms

Theorem 5. Under the hypotheses of Theorem 3,

    Δ̂_n(t) → N(0, V̂) in distribution as n → ∞,

where

    V̂ = (1/t) A^{−1} Σ A^{−1} + (1/t²) [ A^{−2}(e^{−At} − I) V + V (e^{−At} − I) A^{−2} ].
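To make (9) concrete, here is a toy run of my own, u(θ) = θ − θ⋆ in dimension 1 with Gaussian noise: the moving-window average is simply a normalised partial sum of ⌊t/γ_n⌋ + 1 consecutive iterates.

```python
import numpy as np

rng = np.random.default_rng(2)
gamma = lambda n: 0.5 / (n + 1) ** 0.7     # gamma_n = gamma/(n+1)^alpha, 1/2 < alpha < 1
theta_star = 3.0

# Standard Robbins-Monro run on u(theta) = theta - theta_star with N(0,1) noise.
n_max = 60_000
theta = np.empty(n_max + 1)
theta[0] = 0.0
for n in range(n_max):
    U = (theta[n] - theta_star) + rng.standard_normal()
    theta[n + 1] = theta[n] - gamma(n + 1) * U

def window_average(theta, n, t):
    """theta_hat_n(t) = (gamma_n/t) * sum_{i=n}^{n+floor(t/gamma_n)} theta_i, as in (9)."""
    g = gamma(n)
    m = int(t / g)
    return g / t * theta[n : n + m + 1].sum()

n0, t = 30_000, 5.0
print(abs(theta[n0] - theta_star), abs(window_average(theta, n0, t) - theta_star))
```

Both errors are small; Theorem 5 quantifies how the window length t trades the (1/t) A^{−1} Σ A^{−1} term against the exponential terms in V̂.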
Major steps of the proof

Tightness
- Donsker's theorem for martingale increments: Δ_n is tight in R^d.
- Use a localisation technique to prove that (sup_{t∈[0,T]} |Δ^n(t)|)_n is tight.
- Δ^n(·) satisfies Aldous' criterion, hence tightness in D.

Theorem. (W^n(·), Δ^n(·))_n is tight in D × D and converges in law to (W, Δ), where W is a Wiener process with covariance matrix Σ with respect to the smallest σ-algebra that measures (W(·), Δ(·)), and Δ is the stationary solution of

    dΔ(t) = −QΔ(t) dt + dW(t).

Theorem 6. Let (M_n(t))_n be a sequence of martingales. Assume that
- (M_n(·))_n is tight in D and satisfies a C-tightness criterion,
- (M_n(t))_n is a uniformly square integrable family for each t,
- ⟨M_n⟩_t → t in probability.
Then M_n(·) ⟹ a Brownian motion in D[0, T]. W^n(t) satisfies Theorem 6.

Comments on Hypothesis 2

M_n = ∑_{i=1}^n γ_i δM_i 1_{|θ_{i−1}| ≤ p} is a martingale:

    E(M_n | F_{n−1}) = M_{n−1} + γ_n 1_{|θ_{n−1}| ≤ p} E(δM_n | F_{n−1}) = M_{n−1}.

If sup_n E[|M_n|²] < ∞, then M_n converges a.s. Moreover,

    ⟨M⟩_n = ∑_{i=1}^n γ_i² E(|δM_i|² | F_{i−1}) 1_{|θ_{i−1}| ≤ p},

with

    E(|δM_i|² | F_{i−1}) = E(|U(θ, Z)|²)|_{θ=θ_{i−1}} − |u(θ_{i−1})|² ≤ E(|U(θ, Z)|²)|_{θ=θ_{i−1}}.

The counterexample in detail

Let τ and τ′ be independent r.v. on {0, 1} with parameter 1/2, and set X_n := (−1)^n (τ − τ′). τ − τ′ is symmetric, hence X_n is constant in law. However,

    E(e^{iu X_{n+τ}}) = E(e^{iu(−1)^{n+τ}(τ−τ′)})
                      = (1/2) [ E(e^{iu(−1)^{n+τ} τ}) + E(e^{iu(−1)^{n+τ}(τ−1)}) ]
                      = (1/4) [ 1 + e^{iu(−1)^{n+1}} + e^{iu(−1)^n(−1)} + 1 ]
                      = (1/2) [ 1 + e^{iu(−1)^{n+1}} ].

Hence (X_{n+τ})_n does not converge in distribution: its characteristic function depends on the parity of n.
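The same parity effect can be checked by brute-force enumeration of the four equally likely values of (τ, τ′) (a small sketch; the function name is mine):

```python
from itertools import product
from collections import Counter

def law_of_X_shifted(n):
    """Exact law of X_{n+tau} = (-1)**(n+tau) * (tau - tau'),
    with tau, tau' independent Bernoulli(1/2)."""
    outcomes = Counter()
    for tau, tau_p in product((0, 1), repeat=2):        # four cases, prob 1/4 each
        outcomes[(-1) ** (n + tau) * (tau - tau_p)] += 0.25
    return dict(outcomes)

print(law_of_X_shifted(0))  # n even: mass 1/2 on 0, 1/2 on -1
print(law_of_X_shifted(1))  # n odd : mass 1/2 on 0, 1/2 on +1
```

The law of X_{n+τ} alternates with the parity of n, so the randomly shifted sequence has no limit in distribution, even though each X_n has the same (symmetric) law.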
Basket Option

Payoff: ((1/N) ∑_{i=1}^N S_T^i − K)_+.

[Figures: evolution of the drift vector; price estimates under importance sampling and under standard Monte Carlo against the number of samples, together with their empirical variances, for a basket of 6 stocks with maturity 1.]

Atlas Option

Consider a basket of 16 stocks. At maturity, remove the stocks with the three best and the three worst performances. The option pays off 15% of the average of the remaining stocks.

[Figures: evolution of the drift vector; price estimates under importance sampling and under standard Monte Carlo for the basket of 16 stocks with maturity 1, with their empirical variances (the importance-sampling one reads 0.7596).]
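The basket experiment is easy to reproduce in spirit. The following sketch uses made-up parameters (not those of the talk), independent Black-Scholes stocks, and a hand-picked constant drift rather than the θ⋆ computed by the truncated algorithm; it only illustrates identity (5) and the variance comparison shown in the figures.

```python
import numpy as np

# Illustrative parameters: d independent Black-Scholes stocks,
# payoff ((1/d) * sum S_T^i - K)_+, out of the money.
rng = np.random.default_rng(3)
d, S0, K, r, sigma, T, N = 5, 100.0, 120.0, 0.0, 0.2, 1.0, 200_000

G = rng.standard_normal((N, d))

def discounted_payoff(G):
    ST = S0 * np.exp((r - sigma**2 / 2) * T + sigma * np.sqrt(T) * G)
    return np.exp(-r * T) * np.maximum(ST.mean(axis=1) - K, 0.0)

# Standard Monte Carlo estimator.
plain = discounted_payoff(G)

# Importance sampling: shift every Gaussian by theta and reweight, identity (5).
theta = np.full(d, 1.0)        # hand-picked drift; Chen's algorithm would tune it
weights = np.exp(-G @ theta - theta @ theta / 2)
shifted = discounted_payoff(G + theta) * weights

print(plain.mean(), shifted.mean())  # same price, up to Monte Carlo error
print(plain.var(), shifted.var())    # the shifted estimator has smaller variance
```

Pushing the drift toward the payoff region makes the rare in-the-money event typical under the sampling law, which is why the reweighted estimator has a much smaller empirical variance, as in the slides' figures.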
References

Arouna, B. (Winter 2003/2004). Robbins-Monro algorithms and variance reduction in finance. The Journal of Computational Finance, 7.

Benveniste, A., Métivier, M., and Priouret, P. (1990). Adaptive Algorithms and Stochastic Approximations. Applications of Mathematics (New York). Springer-Verlag, Berlin. Translated from the French by Stephen S. Wilson.

Bouton, C. (1985). Approximation Gaussienne d'algorithmes stochastiques à dynamique Markovienne. PhD thesis, Université Pierre et Marie Curie - Paris 6.

Chen, H. and Zhu, Y. (1986). Stochastic approximation procedure with randomly varying truncations. Scientia Sinica Series.

Delyon, B. (1996). General results on the convergence of stochastic algorithms. IEEE Transactions on Automatic Control, 41(9).

Duflo, M. (1997). Random Iterative Models. Springer-Verlag, Berlin and New York.

Kushner, H. and Yin, G. (2003). Stochastic Approximation and Recursive Algorithms and Applications. Springer-Verlag, New York, second edition.
More information1 Brownian Local Time
1 Brownian Local Time We first begin by defining the space and variables for Brownian local time. Let W t be a standard 1-D Wiener process. We know that for the set, {t : W t = } P (µ{t : W t = } = ) =
More informationOther properties of M M 1
Other properties of M M 1 Přemysl Bejda premyslbejda@gmail.com 2012 Contents 1 Reflected Lévy Process 2 Time dependent properties of M M 1 3 Waiting times and queue disciplines in M M 1 Contents 1 Reflected
More informationConvergence at first and second order of some approximations of stochastic integrals
Convergence at first and second order of some approximations of stochastic integrals Bérard Bergery Blandine, Vallois Pierre IECN, Nancy-Université, CNRS, INRIA, Boulevard des Aiguillettes B.P. 239 F-5456
More informatione - c o m p a n i o n
OPERATIONS RESEARCH http://dx.doi.org/1.1287/opre.111.13ec e - c o m p a n i o n ONLY AVAILABLE IN ELECTRONIC FORM 212 INFORMS Electronic Companion A Diffusion Regime with Nondegenerate Slowdown by Rami
More informationBranching Processes II: Convergence of critical branching to Feller s CSB
Chapter 4 Branching Processes II: Convergence of critical branching to Feller s CSB Figure 4.1: Feller 4.1 Birth and Death Processes 4.1.1 Linear birth and death processes Branching processes can be studied
More informationStochastic integral. Introduction. Ito integral. References. Appendices Stochastic Calculus I. Geneviève Gauthier.
Ito 8-646-8 Calculus I Geneviève Gauthier HEC Montréal Riemann Ito The Ito The theories of stochastic and stochastic di erential equations have initially been developed by Kiyosi Ito around 194 (one of
More informationAn Almost Sure Approximation for the Predictable Process in the Doob Meyer Decomposition Theorem
An Almost Sure Approximation for the Predictable Process in the Doob Meyer Decomposition heorem Adam Jakubowski Nicolaus Copernicus University, Faculty of Mathematics and Computer Science, ul. Chopina
More informationMonte-Carlo MMD-MA, Université Paris-Dauphine. Xiaolu Tan
Monte-Carlo MMD-MA, Université Paris-Dauphine Xiaolu Tan tan@ceremade.dauphine.fr Septembre 2015 Contents 1 Introduction 1 1.1 The principle.................................. 1 1.2 The error analysis
More informationBeyond stochastic gradient descent for large-scale machine learning
Beyond stochastic gradient descent for large-scale machine learning Francis Bach INRIA - Ecole Normale Supérieure, Paris, France Joint work with Eric Moulines, Nicolas Le Roux and Mark Schmidt - CAP, July
More informationModel Counting for Logical Theories
Model Counting for Logical Theories Wednesday Dmitry Chistikov Rayna Dimitrova Department of Computer Science University of Oxford, UK Max Planck Institute for Software Systems (MPI-SWS) Kaiserslautern
More informationConfidence Intervals of Prescribed Precision Summary
Confidence Intervals of Prescribed Precision Summary Charles Stein showed in 1945 that by using a two stage sequential procedure one could give a confidence interval for the mean of a normal distribution
More informationVerona Course April Lecture 1. Review of probability
Verona Course April 215. Lecture 1. Review of probability Viorel Barbu Al.I. Cuza University of Iaşi and the Romanian Academy A probability space is a triple (Ω, F, P) where Ω is an abstract set, F is
More informationEconometrics II - EXAM Answer each question in separate sheets in three hours
Econometrics II - EXAM Answer each question in separate sheets in three hours. Let u and u be jointly Gaussian and independent of z in all the equations. a Investigate the identification of the following
More informationCONTROL SYSTEMS, ROBOTICS AND AUTOMATION Vol. XI Stochastic Stability - H.J. Kushner
STOCHASTIC STABILITY H.J. Kushner Applied Mathematics, Brown University, Providence, RI, USA. Keywords: stability, stochastic stability, random perturbations, Markov systems, robustness, perturbed systems,
More informationA numerical method for solving uncertain differential equations
Journal of Intelligent & Fuzzy Systems 25 (213 825 832 DOI:1.3233/IFS-12688 IOS Press 825 A numerical method for solving uncertain differential equations Kai Yao a and Xiaowei Chen b, a Department of Mathematical
More informationOn an Effective Solution of the Optimal Stopping Problem for Random Walks
QUANTITATIVE FINANCE RESEARCH CENTRE QUANTITATIVE FINANCE RESEARCH CENTRE Research Paper 131 September 2004 On an Effective Solution of the Optimal Stopping Problem for Random Walks Alexander Novikov and
More informationExercises. T 2T. e ita φ(t)dt.
Exercises. Set #. Construct an example of a sequence of probability measures P n on R which converge weakly to a probability measure P but so that the first moments m,n = xdp n do not converge to m = xdp.
More informationAverage-cost temporal difference learning and adaptive control variates
Average-cost temporal difference learning and adaptive control variates Sean Meyn Department of ECE and the Coordinated Science Laboratory Joint work with S. Mannor, McGill V. Tadic, Sheffield S. Henderson,
More informationMASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 7 9/25/2013
MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.65/15.070J Fall 013 Lecture 7 9/5/013 The Reflection Principle. The Distribution of the Maximum. Brownian motion with drift Content. 1. Quick intro to stopping times.
More informationGaussian vectors and central limit theorem
Gaussian vectors and central limit theorem Samy Tindel Purdue University Probability Theory 2 - MA 539 Samy T. Gaussian vectors & CLT Probability Theory 1 / 86 Outline 1 Real Gaussian random variables
More information