Stochastic Dynamical System for SDE driven by Nonlinear Brownian Motion

1 Stochastic Dynamical System for SDE driven by Nonlinear Brownian Motion. Peng Shige, Shandong University, China. School of Stochastic Dynamical Systems and Ergodicity, Loughborough University, Dec

3 Optimal Unbiased Estimation for Maximal Distribution. 1. Preliminary: sublinear expectation space (Ω, H, Ê). Let Ω be a given set and let H be a linear space of real-valued functions defined on Ω. We assume that c ∈ H for each constant c ∈ R, and that |X| ∈ H whenever X ∈ H.

4 Sublinear expectation space (Ω, H, Ê). Definition. A sublinear expectation is a functional Ê : H → R satisfying
(i) Monotonicity: Ê[X] ≥ Ê[Y] if X ≥ Y;
(ii) Constant preserving: Ê[c] = c for c ∈ R;
(iii) Sub-additivity: for each X, Y ∈ H, Ê[X + Y] ≤ Ê[X] + Ê[Y];
(iv) Positive homogeneity: Ê[λX] = λÊ[X] for λ ≥ 0.

5 (Ω, H, Ê) is called a sublinear expectation space. A sublinear expectation Ê[·] defined on (Ω, H) is said to be regular if Ê[X_i] ↓ 0 for each sequence {X_i}_{i=1}^∞ of random variables in H such that X_i(ω) ↓ 0 for each ω ∈ Ω. If (i) and (ii) are satisfied, Ê is called a nonlinear expectation and the triple (Ω, H, Ê) is called a nonlinear expectation space.

6 1. Sublinear Expectations. Ω: a given set. H: a linear space of real-valued functions defined on Ω such that a) c ∈ H for each constant c; b) X ∈ H implies |X| ∈ H.

7 (Ω, H, E): a sublinear expectation space. Definition 1.1. A sublinear expectation E is a functional E : H → R satisfying
Monotonicity: E[X] ≥ E[Y] if X ≥ Y;
Cash translatability: E[X + c] = E[X] + c for c ∈ R;
Sub-additivity: for each X, Y ∈ H, E[X + Y] ≤ E[X] + E[Y];
Positive homogeneity: E[λX] = λE[X] for λ ≥ 0.

8 Definition 1.2. Let E_1 and E_2 be two nonlinear expectations defined on (Ω, H). E_1 is said to be dominated by E_2 if

E_1[X] - E_1[Y] ≤ E_2[X - Y] for X, Y ∈ H. (1.1)

9 Remark 1.4. (iii)+(iv) is called sublinearity. Sublinearity implies
(v) Convexity: E[αX + (1 - α)Y] ≤ αE[X] + (1 - α)E[Y] for α ∈ [0, 1].
If a nonlinear expectation E satisfies convexity, we call it a convex expectation.

10 2. Representation of a Sublinear Expectation. Theorem (THM 2.1). Let E be a sublinear functional defined on (Ω, H). Then there exists a family of linear functionals {E_θ : θ ∈ Θ} on (Ω, H) such that

E[X] = max_{θ∈Θ} E_θ[X] for X ∈ H.

Furthermore, if E is a sublinear expectation, then each E_θ is a linear expectation.
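To make THM 2.1 concrete, here is a short numerical sketch (not from the lecture; the family P, the function names and all parameters are illustrative assumptions): on a finite sample space, a sublinear expectation is realized as the maximum of finitely many linear expectations, and sub-additivity, positive homogeneity and constant preservation can be checked directly.

```python
import numpy as np

# Illustrative sketch of THM 2.1 on a finite sample space (assumed setup).
# A sublinear expectation is the maximum of finitely many linear expectations.
rng = np.random.default_rng(0)

n_omega, n_theta = 6, 4
P = rng.dirichlet(np.ones(n_omega), size=n_theta)   # each row: a probability on Omega

def E_sub(X):
    """E[X] = max_theta E_theta[X], for X given as a vector on Omega."""
    return float(np.max(P @ X))

X = rng.normal(size=n_omega)
Y = rng.normal(size=n_omega)

print(E_sub(X + Y) <= E_sub(X) + E_sub(Y) + 1e-12)    # sub-additivity (iii)
print(np.isclose(E_sub(3.0 * X), 3.0 * E_sub(X)))     # positive homogeneity (iv)
print(np.isclose(E_sub(np.full(n_omega, 2.0)), 2.0))  # constant preserving (ii)
```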

11 3. Distributions, Independence and Product Spaces. Definition. Let X_1 and X_2 be two n-dimensional random vectors defined on nonlinear expectation spaces (Ω_1, H_1, E_1) and (Ω_2, H_2, E_2), respectively. They are called identically distributed, denoted by X_1 =d X_2, if E_1[ϕ(X_1)] = E_2[ϕ(X_2)] for each ϕ ∈ C_Lip(R^n).

12 Lemma. Let (Ω, H, E) be a sublinear expectation space and let F_X[ϕ] := E[ϕ(X)] be the sublinear distribution of X ∈ H^d. Then there exists a family of probability measures {F_θ}_{θ∈Θ} defined on (R^d, B(R^d)) such that

F_X[ϕ] = sup_{θ∈Θ} ∫_{R^d} ϕ(x) F_θ(dx), ϕ ∈ C_{l,Lip}(R^d). (1)

13 A simple and useful property. Proposition. Let (Ω, H, E) be a sublinear expectation space and let X, Y be two random variables such that E[Y] = -E[-Y], i.e., Y has no mean-uncertainty. Then we have

E[X + αY] = E[X] + αE[Y] for α ∈ R.

In particular, if E[Y] = E[-Y] = 0, then E[X + αY] = E[X].

14 Proof. We have E[αY] = α^+ E[Y] + α^- E[-Y] = α^+ E[Y] - α^- E[Y] = αE[Y] for α ∈ R. Thus

E[X + αY] ≤ E[X] + E[αY] = E[X] + αE[Y] = E[X] - E[-αY] ≤ E[X + αY]. Q.E.D.
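As a quick numerical illustration of the proposition (an assumed finite setup in the same spirit as the sketch after THM 2.1, not part of the lecture): in the representation E[·] = max_θ E_θ[·], a random variable Y has no mean-uncertainty exactly when all linear means E_θ[Y] coincide, and then E[X + αY] = E[X] + αE[Y] can be checked directly.

```python
import numpy as np

# Illustrative check of the no-mean-uncertainty proposition (assumed setup).
rng = np.random.default_rng(1)
n_omega, n_theta = 6, 4
P = rng.dirichlet(np.ones(n_omega), size=n_theta)   # representing family

def E_sub(Z):
    return float(np.max(P @ Z))

# Build a non-constant Y with P @ Y = 1, i.e. E_theta[Y] = 1 for every theta,
# by projecting a random vector onto the affine set {Y : P Y = 1}.
v = rng.normal(size=n_omega)
Y = v + P.T @ np.linalg.solve(P @ P.T, np.ones(n_theta) - P @ v)

X = rng.normal(size=n_omega)
print(np.isclose(E_sub(Y), -E_sub(-Y)))             # Y has no mean-uncertainty
for a in (-2.0, 0.7, 3.0):
    print(np.isclose(E_sub(X + a * Y), E_sub(X) + a * E_sub(Y)))
```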

15 A more general form of the above proposition is the following. Proposition. We make the same assumptions as in the previous proposition. Let Ẽ be a nonlinear expectation on (Ω, H) dominated by the sublinear expectation E. If E[Y] = -E[-Y], then we have

Ẽ[αY] = αẼ[Y] = αE[Y], α ∈ R, (2)

as well as

Ẽ[X + αY] = Ẽ[X] + αẼ[Y], X ∈ H, α ∈ R. (3)

In particular,

Ẽ[X + c] = Ẽ[X] + c for c ∈ R. (4)

16 Definition. A sequence of n-dimensional random vectors {η_i}_{i=1}^∞ defined on a sublinear expectation space (Ω, H, E) is said to converge in distribution (or converge in law) under E if, for each ϕ ∈ C_{b.Lip}(R^n), the sequence {E[ϕ(η_i)]}_{i=1}^∞ converges.

17 The following notion of independence plays a key role in nonlinear expectation theory. Definition. In a nonlinear expectation space (Ω, H, E), a random vector Y ∈ H^n is said to be independent from another random vector X ∈ H^m under E[·] if, for each test function ϕ ∈ C_Lip(R^{m+n}), we have

E[ϕ(X, Y)] = E[ E[ϕ(x, Y)]|_{x=X} ].

Remark. That Y is independent from X does not imply that X is independent from Y.

18 1. Maximal distribution and nonlinear normal distribution. Definition. A d-dimensional random vector η = (η_1, …, η_d) on (Ω, H, E) is called maximally distributed if there exists a bounded, closed and convex subset Γ ⊂ R^d such that

E[ϕ(η)] = max_{y∈Γ} ϕ(y).

19 Remark. A maximally distributed random vector η satisfies aη + bη̄ =d (a + b)η for a, b ≥ 0, where η̄ is an independent copy of η.

20 Remark. When d = 1 we have Γ = [µ, µ̄], where µ̄ = E[η] and µ = -E[-η]. The distribution of η is

F_η[ϕ] = E[ϕ(η)] = sup_{µ ≤ y ≤ µ̄} ϕ(y) for ϕ ∈ C_Lip(R).
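For concreteness, a small grid-based sketch (illustrative parameters and names, not from the lecture) evaluates the one-dimensional maximal distribution on a few test functions; taking ϕ(y) = y and ϕ(y) = -y recovers the upper mean µ̄ = E[η] and the lower mean µ = -E[-η].

```python
import numpy as np

# Toy evaluation of the one-dimensional maximal distribution M_[mu_low, mu_bar]
# (illustrative parameters; the sup is approximated on a grid).
mu_low, mu_bar = -1.0, 2.0

def F_eta(phi, n_grid=10_001):
    """F_eta[phi] = sup over mu_low <= y <= mu_bar of phi(y), via a grid."""
    y = np.linspace(mu_low, mu_bar, n_grid)
    return float(np.max(phi(y)))

print(F_eta(lambda y: y))           # upper mean: mu_bar = E[eta]
print(-F_eta(lambda y: -y))         # lower mean: mu_low = -E[-eta]
print(F_eta(lambda y: (y - 0.5) ** 2))
```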

21 Law of Large Numbers. g(p) := E[⟨p, Y_1⟩], p ∈ R^d. Theorem (Law of large numbers). Let {Y_i}_{i=1}^∞ be a sequence of real-valued i.i.d. random variables on a sublinear expectation space (Ω, H, E) such that Ê[(|Y_1| - c)^+] → 0 as c → ∞. Then the sequence {S̄_n}_{n=1}^∞ defined by S̄_n := (1/n) Σ_{i=1}^n Y_i converges in law to a maximal distribution:

lim_{n→∞} E[ϕ(S̄_n)] = max_{y∈[µ,µ̄]} ϕ(y), (5)

for all ϕ ∈ C(R), where µ = -Ê[-Y_1] and µ̄ = Ê[Y_1].
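The theorem can be illustrated by a classical simulation under model uncertainty (an informal illustration only; choosing the means of the Y_i from the interval at each step is an assumption, not the theorem's exact setting): different admissible scenarios drive the sample average S̄_n to different limits inside [µ, µ̄], which is what convergence to the maximal distribution in (5) expresses.

```python
import numpy as np

# Illustrative simulation (not part of the lecture): under mean uncertainty the
# scenario family contains, in particular, sequences whose per-step means lie
# anywhere in [mu_low, mu_bar]; the sample average then settles somewhere in
# that interval rather than at a single point.
rng = np.random.default_rng(2)
mu_low, mu_bar, n = -1.0, 2.0, 200_000

scenarios = {
    "means pushed up":   np.full(n, mu_bar),
    "means pushed down": np.full(n, mu_low),
    "means mixed":       rng.uniform(mu_low, mu_bar, size=n),
}
for name, means in scenarios.items():
    Y = means + rng.normal(scale=1.0, size=n)   # zero-mean classical noise
    print(name, Y.mean())                       # approximately in [mu_low, mu_bar]
```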

22 Let X_1, …, X_n be n copies of the same maximal distribution, X_i =d M_{[µ,µ̄]}, i = 1, …, n, with unknown parameters µ ≤ µ̄, where X_i is independent of (X_1, …, X_{i-1}). In short, we say that X_1, …, X_n are i.i.d. It is clear that the maximal distribution is completely determined by these two parameters. It is therefore important to construct statistics that estimate µ and µ̄ properly.

23 It is natural to consider the following statistics:

µ̄(X_1, …, X_n) := max{X_1, …, X_n}, (6)
µ(X_1, …, X_n) := min{X_1, …, X_n}. (7)
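A minimal computational sketch of the statistics (6)-(7) follows; the data-generating scenario (i.i.d. uniform on [µ, µ̄]) and all variable names are assumptions made only for illustration — under the sublinear expectation the sample could come from any scenario compatible with M_{[µ,µ̄]}.

```python
import numpy as np

# Sketch of the statistics (6)-(7) on simulated data from one admissible scenario.
rng = np.random.default_rng(3)
mu_low, mu_bar, n = -1.0, 2.0, 50

X = rng.uniform(mu_low, mu_bar, size=n)   # one admissible scenario (assumption)
mu_bar_hat = X.max()    # statistic (6): estimator of the upper mean
mu_low_hat = X.min()    # statistic (7): estimator of the lower mean
print(mu_low_hat, mu_bar_hat)   # quasi-surely mu_low <= min <= max <= mu_bar
```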

24 As in classical statistical theory, we want our estimators to be unbiased in some sense, which has to be redefined properly. Definition. Let f_n ∈ C(R^n). A statistic T_n = f_n(X_1, …, X_n) is called an unbiased estimator of µ̄ (resp. of µ) if

Ê[f_n(X_1, …, X_n)] = µ̄ (resp. -Ê[-f_n(X_1, …, X_n)] = µ), for all -∞ < µ ≤ µ̄ < ∞.

25 We have the following lemma. Lemma. If f_n ∈ C_Lip(R^n) and the estimator T_n = f_n(X_1, X_2, …, X_n) is unbiased for the upper mean µ̄ (resp. the lower mean µ), then for all µ ≤ µ̄ we have

max_{(x_1,…,x_n)∈[µ,µ̄]^n} f_n(x_1, …, x_n) = µ̄, (8)

(resp. min_{(x_1,…,x_n)∈[µ,µ̄]^n} f_n(x_1, …, x_n) = µ). (9)

Consequently, for all µ ≤ µ̄ and all (x_1, …, x_n) ∈ [µ, µ̄]^n,

f_n(x_1, …, x_n) ≤ µ̄, (10)

(resp. f_n(x_1, …, x_n) ≥ µ). (11)

26 Proof. Since X_n is independent of (X_1, …, X_{n-1}) and maximally distributed,

Ê[f_n(X_1, …, X_n)] = Ê[ Ê[f_n(x_1, …, x_{n-1}, X_n)]|_{x_1=X_1,…,x_{n-1}=X_{n-1}} ]
= Ê[ max_{µ ≤ x_n ≤ µ̄} f_n(X_1, …, X_{n-1}, x_n) ]
= … = max_{µ ≤ x_i ≤ µ̄, 1 ≤ i ≤ n} f_n(x_1, …, x_n).

Denote f_{n-1}(x_1, …, x_{n-1}) := max_{µ ≤ x_n ≤ µ̄} f_n(x_1, …, x_{n-1}, x_n). Then

|f_{n-1}(a_1, …, a_{n-1}) - f_{n-1}(b_1, …, b_{n-1})| ≤ max_{x_n∈[µ,µ̄]} |f_n(a_1, …, a_{n-1}, x_n) - f_n(b_1, …, b_{n-1}, x_n)|
≤ max_{x_n∈[µ,µ̄]} L |(a_1, …, a_{n-1}) - (b_1, …, b_{n-1})| = L |(a_1, …, a_{n-1}) - (b_1, …, b_{n-1})|,

which means f_{n-1} ∈ C_Lip(R^{n-1}); hence we can continue the equality:

27 Ê[f_n(X_1, …, X_n)] = Ê[f_{n-1}(X_1, …, X_{n-1})] = … = Ê[f_1(X_1)]
= max_{x_1∈[µ,µ̄]} f_1(x_1) = max_{x_1∈[µ,µ̄]} max_{x_2∈[µ,µ̄]} f_2(x_1, x_2) = max_{(x_1,x_2)∈[µ,µ̄]^2} f_2(x_1, x_2)
= … = max_{(x_1,…,x_n)∈[µ,µ̄]^n} f_n(x_1, …, x_n).

Then we have (8) and thus (10). The proofs of (9) and (11) are similar. Q.E.D.

28 Now we present our main result. Theorem. Let X_1, …, X_n be an i.i.d. sample of size n from the population of maximal distribution, X_i =d M_{[µ,µ̄]}, i = 1, …, n, with unknown parameters µ ≤ µ̄. Then we have, quasi-surely (i.e., P_θ-almost surely for every θ ∈ Θ),

µ ≤ min{X_1(ω), …, X_n(ω)} ≤ max{X_1(ω), …, X_n(ω)} ≤ µ̄.

Moreover, µ̄_n = max{X_1, …, X_n} is the largest unbiased estimator of the upper mean µ̄, and µ_n = min{X_1, …, X_n} is the smallest unbiased estimator of the lower mean µ.

29 PROOF. It is easy to check that µ̄_n = max{X_1, …, X_n} is an unbiased estimator of the unknown upper mean µ̄ and that µ_n = min{X_1, …, X_n} is an unbiased estimator of the unknown lower mean µ.

30 Let T_n = f_n(Y_1, …, Y_n) be a given unbiased estimator of the upper mean µ̄. For any y_1, …, y_n ∈ R, set µ̄ = max{y_1, …, y_n} and µ = min{y_1, …, y_n}, and consider the case Y_i =d M_{[µ,µ̄]}. According to the preceding Lemma, the unbiased estimator T_n = f_n(Y_1, …, Y_n) must satisfy (10), namely

f_n(y_1, …, y_n) ≤ µ̄ = max{y_1, …, y_n}.

Since y_1, …, y_n can be chosen arbitrarily, we obtain f_n(y_1, …, y_n) ≤ max{y_1, …, y_n} for all y_1, …, y_n ∈ R. Thus µ̄_n is the largest unbiased estimator of the upper mean. In the same way one proves that µ_n is the smallest unbiased estimator of the lower mean. [End of proof]

31 The next proposition tells us that the two estimators are both maximally distributed, with the same lower mean µ and upper mean µ̄. Proposition. Let X_1, …, X_n be a maximally distributed i.i.d. sample with X_1 =d M_{[µ,µ̄]}. Then

max{X_1, …, X_n} =d min{X_1, …, X_n} =d M_{[µ,µ̄]}.

32 Proof. It is clear that, for each ϕ ∈ C(R), we have

Ê[ϕ(max{X_1, …, X_n})] = max_{(x_1,…,x_n)∈[µ,µ̄]^n} ϕ(x_1 ∨ x_2 ∨ ⋯ ∨ x_n) = max_{x∈[µ,µ̄]} ϕ(x) = Ê[ϕ(X_1)].

Thus max{X_1, …, X_n} =d M_{[µ,µ̄]}. Similarly, one proves that min{X_1, …, X_n} =d M_{[µ,µ̄]}.

33 Remark. In fact, one can prove that, for any continuous function f ∈ C(R^n), we have

f(X_1, …, X_n) =d M_{[µ_f, µ̄_f]}, (12)

where

µ̄_f := max_{(x_1,…,x_n)∈[µ,µ̄]^n} f(x_1, …, x_n), µ_f := min_{(x_1,…,x_n)∈[µ,µ̄]^n} f(x_1, …, x_n).

Indeed, for each ϕ ∈ C(R),

Ê[ϕ(f(X_1, …, X_n))] = max_{(x_1,…,x_n)∈[µ,µ̄]^n} ϕ(f(x_1, …, x_n)) = max_{y∈[µ_f, µ̄_f]} ϕ(y),

which implies (12).
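As a small worked instance of (12) (the choice of f is illustrative), take f(x_1, x_2) = x_1 - x_2; then µ̄_f = µ̄ - µ and µ_f = µ - µ̄, and the grid computation below confirms the two endpoints numerically.

```python
import numpy as np

# Grid check of (12) for the illustrative choice f(x1, x2) = x1 - x2: the
# induced maximal distribution has upper mean mu_bar - mu_low and lower mean
# mu_low - mu_bar, obtained by maximizing/minimizing f over [mu_low, mu_bar]^2.
mu_low, mu_bar = -1.0, 2.0
g = np.linspace(mu_low, mu_bar, 401)
X1, X2 = np.meshgrid(g, g)
F = X1 - X2
print(F.max(), F.min())   # approx (mu_bar - mu_low, mu_low - mu_bar)
```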
