Law of Large Numbers and Central Limit Theorem under Uncertainty, the Related New Itô's Calculus and Applications to Risk Measures

1 Law of Large Numbers and Central Limit Theorem under Uncertainty, the Related New Itô's Calculus and Applications to Risk Measures. Shige Peng, Shandong University, Jinan, China. Presented at the 1st PRIMA Congress, July 6-10, 2009, UNSW, Sydney, Australia.

4 A Chinese Understanding of Beijing Opera. A popular point of view of Beijing opera among Chinese artists, in translation (with a certain degree of uncertainty): "The universe is a big opera stage; an opera stage is a small universe." Real world vs. modeling world.

6 Scientists' point of view of the real world after Newton: people strongly believed that our universe is deterministic, that it can be precisely calculated by (ordinary) differential equations. After a long period of exploration, scientists began to realize that our universe is full of uncertainty.

8 A fundamental tool to understand and treat uncertainty: probability theory (Pascal & Fermat, Huygens, de Moivre, Laplace, ...). $\Omega$: a set of events; only one event $\omega \in \Omega$ happens, but we do not know which $\omega$ (at least before it happens). The probability of $A \subset \Omega$: $P(A) \approx$ (number of times $A$ happens) / (total number of trials).

11 A fundamental revolution of probability theory: Kolmogorov's 1933 Foundations of Probability Theory, $(\Omega, \mathcal{F}, P)$. The basic rules of probability theory: $P(\Omega) = 1$, $P(\emptyset) = 0$, $P(A + B) = P(A) + P(B)$ for disjoint $A, B$, and countable additivity $P(\sum_{i=1}^{\infty} A_i) = \sum_{i=1}^{\infty} P(A_i)$. The basic idea of Kolmogorov: use the (possibly infinite-dimensional) Lebesgue integral to calculate the expectation of a random variable $X(\omega)$: $E[X] = \int_{\Omega} X(\omega)\,dP(\omega)$.

12 Von Neumann-Morgenstern Expected Utility Theory, the foundation of modern economic theory: $E[u(X)] \geq E[u(Y)]$ means the utility of $X$ is bigger than that of $Y$.

13 Probability theory feeds into: Markov processes; Itô's calculus and pathwise stochastic analysis; statistics for life science and the medical industry, insurance and politics; stochastic control; statistical physics; economics and finance; civil engineering; communications and the internet; forecasting of weather, pollution, ...

16 Probability measure $P(\cdot)$ vs. mathematical expectation $E[\cdot]$: which is more fundamental? We can also first define a linear functional $E : \mathcal{H} \to \mathbb{R}$ such that: $E[\alpha X + c] = \alpha E[X] + c$; $E[X] \geq E[Y]$ if $X \geq Y$; $E[X_i] \downarrow 0$ if $X_i \downarrow 0$. By the Daniell-Stone theorem there is a unique probability measure $(\Omega, \mathcal{F}, P)$ such that $P(A) = E[1_A]$ for each $A \in \mathcal{F}$. For a nonlinear expectation there is no such equivalence!

17 Why nonlinear expectation? The Allais paradox and the Ellsberg paradox in economic theory show that the expectation $E$ cannot be a linear operator. [Artzner-Delbaen-Eber-Heath] Coherent risk measures.

18 A well-known puzzle: why are normal distributions so widely used? When someone faces a random variable $X$ with an uncertain distribution, he will first try the normal distribution $N(\mu, \sigma^2)$.

19 The density of a (standard) normal distribution: $p(x) = \frac{1}{\sqrt{2\pi}} \exp(-\frac{x^2}{2})$.

22 The explanation for this phenomenon is the well-known central limit theorem. Theorem (Central Limit Theorem (CLT)). Let $\{X_i\}_{i=1}^{\infty}$ be i.i.d. with $\mu = E[X_1]$ and $\sigma^2 = E[(X_1 - \mu)^2]$. Then for each bounded and continuous function $\varphi \in C(\mathbb{R})$ we have $\lim_{n\to\infty} E[\varphi(\frac{1}{\sqrt{n}} \sum_{i=1}^{n} (X_i - \mu))] = E[\varphi(X)]$, $X \sim N(0, \sigma^2)$. The beauty and power of this result come from the fact that the normalized sum tends to $N(0, \sigma^2)$ regardless of the original distribution of the $X_i$, provided that $X_i \sim X_1$ for all $i = 2, 3, \ldots$ and that $X_1, X_2, \ldots$ are mutually independent.
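A quick Monte Carlo illustration of the classical statement; the exponential law, the sample sizes, and the test function $\cos$ are arbitrary choices, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo check of the classical CLT: centered, scaled sums of
# i.i.d. exponential(1) variables (mean 1, variance 1) approach N(0, 1),
# even though the exponential law itself is far from normal.
n, trials = 500, 20_000
mu, sigma = 1.0, 1.0
x = rng.exponential(1.0, size=(trials, n))
s = (x - mu).sum(axis=1) / np.sqrt(n)

phi = np.cos                                      # bounded continuous test function
lhs = phi(s).mean()                               # E[phi(S_n / sqrt(n))]
rhs = phi(rng.normal(0.0, sigma, trials)).mean()  # E[phi(X)], X ~ N(0, sigma^2)
print(round(lhs, 3), round(rhs, 3))               # the two values nearly coincide
```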

23 History of the CLT: Abraham de Moivre, 1733; Pierre-Simon Laplace, 1812, Théorie Analytique des Probabilités; Aleksandr Lyapunov, 1901; contributions by Cauchy, Bessel and Poisson; von Mises, Pólya, Lindeberg, Lévy, Cramér.

24 Dirty work through contaminated data? But in the real world of finance, as in many other human-science situations, it is not true that the above $\{X_i\}_{i=1}^{\infty}$ is really i.i.d. Many academics think that people in finance widely and deeply abuse this beautiful mathematical result, doing dirty work through contaminated data.

25 A new CLT under distribution uncertainty: 1. We do not know the distribution of $X_i$. 2. We do not assume $X_i \sim X_j$; they may have different unknown distributions. We only assume that the distributions of the $X_i$, $i = 1, 2, \ldots$, lie within some subset of distribution functions: $L(X_i) \in \{F_\theta(x) : \theta \in \Theta\}$.

26 In the situation of distributional uncertainty and/or probabilistic uncertainty (model uncertainty), the decision problem becomes more complicated. A well-accepted method is the following robust calculation: $\sup_{\theta\in\Theta} E_\theta[\varphi(\xi)]$, $\inf_{\theta\in\Theta} E_\theta[\varphi(\eta)]$, where $E_\theta$ represents the expectation under one possible probability in our uncertainty model.

27 But this turns out to be a new and basic mathematical tool: $\hat{E}[X] = \sup_{\theta\in\Theta} E_\theta[X]$. The operator $\hat{E}$ has the properties: (a) if $X \geq Y$ then $\hat{E}[X] \geq \hat{E}[Y]$; (b) $\hat{E}[c] = c$; (c) $\hat{E}[X + Y] \leq \hat{E}[X] + \hat{E}[Y]$; (d) $\hat{E}[\lambda X] = \lambda\hat{E}[X]$, $\lambda \geq 0$.
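The properties of the upper expectation can be checked directly on a toy finite model; the outcome set and the three candidate probability vectors below are illustrative choices only:

```python
import numpy as np

# A toy uncertainty set: Theta indexes three candidate laws for a
# scenario variable omega; E_theta is an ordinary expectation under each,
# and the upper expectation is the supremum over the set.
omega = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])          # possible outcomes
thetas = np.array([[0.10, 0.20, 0.40, 0.20, 0.10],     # three candidate
                   [0.30, 0.20, 0.00, 0.20, 0.30],     # probability vectors
                   [0.05, 0.15, 0.60, 0.15, 0.05]])    # (hypothetical data)

def E_hat(X):
    """Upper (sublinear) expectation of the payoff X over all candidate laws."""
    vals = X(omega)
    return max(p @ vals for p in thetas)

X = lambda w: w            # payoff X(omega) = omega
Y = lambda w: w**2         # payoff Y(omega) = omega^2

# (c) Sub-additivity:  E[X + Y] <= E[X] + E[Y]
print(E_hat(lambda w: X(w) + Y(w)) <= E_hat(X) + E_hat(Y))
# (d) Positive homogeneity:  E[2X] = 2 E[X]
print(np.isclose(E_hat(lambda w: 2 * X(w)), 2 * E_hat(X)))
```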

32 Sublinear Expectation. Definition. A nonlinear expectation is a functional $\hat{E} : \mathcal{H} \to \mathbb{R}$ satisfying (a) monotonicity: if $X \geq Y$ then $\hat{E}[X] \geq \hat{E}[Y]$; (b) constant preserving: $\hat{E}[c] = c$. A sublinear expectation satisfies in addition (c) sub-additivity (or the self-dominated property): $\hat{E}[X] - \hat{E}[Y] \leq \hat{E}[X - Y]$; (d) positive homogeneity: $\hat{E}[\lambda X] = \lambda\hat{E}[X]$, $\lambda \geq 0$.

33 Coherent Risk Measures and Sublinear Expectations. Let $\mathcal{H}$ be the set of bounded measurable random variables on $(\Omega, \mathcal{F})$. Then $\rho(X) := \hat{E}[-X]$ defines a coherent risk measure.

35 Robust representation of sublinear expectations (Huber, Robust Statistics, 1981; Artzner, Delbaen, Eber & Heath, 1999; Föllmer & Schied, 2004). Theorem. $\hat{E}[\cdot]$ is a sublinear expectation on $\mathcal{H}$ if and only if there exists a subset $\mathcal{P} \subset M_{1,f}$ (the collection of all finitely additive probability measures) such that $\hat{E}[X] = \sup_{P\in\mathcal{P}} E_P[X]$, $X \in \mathcal{H}$. (For interest-rate uncertainty, see Barrieu & El Karoui, 2005.)

36 Meaning of the robust representation: statistical model uncertainty. $\hat{E}[X] = \sup_{P\in\mathcal{P}} E_P[X]$, $X \in \mathcal{H}$. The size of the subset $\mathcal{P}$ represents the degree of model uncertainty; the stronger the $\hat{E}$, the more the uncertainty: $\hat{E}_1[X] \geq \hat{E}_2[X]$ for all $X \in \mathcal{H}$ $\iff$ $\mathcal{P}_1 \supset \mathcal{P}_2$.

39 The distribution of a random vector in $(\Omega, \mathcal{H}, \hat{E})$. Definition (Distribution of $X$). Given $X = (X_1, \ldots, X_n) \in \mathcal{H}^n$, we define $\hat{F}_X[\varphi] := \hat{E}[\varphi(X)]$, $\varphi \in C_b(\mathbb{R}^n)$. We call $\hat{F}_X[\cdot]$ the distribution of $X$ under $\hat{E}$. The sublinear distribution $\hat{F}_X[\cdot]$ forms a sublinear expectation on $(\mathbb{R}^n, C_b(\mathbb{R}^n))$. Briefly: $\hat{F}_X[\varphi] = \sup_{\theta\in\Theta} \int_{\mathbb{R}^n} \varphi(y)\,F_\theta(dy)$.

41 Definition. $X, Y$ are said to be identically distributed ($X \sim Y$, or $X$ is a copy of $Y$) if they have the same distribution: $\hat{E}[\varphi(X)] = \hat{E}[\varphi(Y)]$ for all $\varphi \in C_b(\mathbb{R}^n)$. The meaning of $X \sim Y$: $X$ and $Y$ have the same subset of uncertain distributions.

43 Independence under $\hat{E}$. Definition. $Y(\omega)$ is said to be independent of $X(\omega)$ if $\hat{E}[\varphi(X, Y)] = \hat{E}[\hat{E}[\varphi(x, Y)]_{x=X}]$ for each $\varphi(\cdot)$. Meaning: the realization of $X$ does not change (improve) the distribution uncertainty of $Y$.

44 The notion of independence can be subjective. Example (an extreme example). In reality $Y = X$, but we are at the very beginning and know nothing about the relation of $X$ and $Y$; the only information we have is $X, Y \in \Theta$. The robust expectation of $\varphi(X, Y)$ is $\hat{E}[\varphi(X, Y)] = \sup_{x,y\in\Theta} \varphi(x, y)$. Then $Y$ is independent of $X$, and $X$ is also independent of $Y$.

47 The notion of independence. Fact: "$Y$ is independent of $X$" DOES NOT IMPLY "$X$ is independent of $Y$". Example. Suppose $\bar\sigma^2 := \hat{E}[Y^2] > \underline\sigma^2 := -\hat{E}[-Y^2] > 0$ and $\hat{E}[X] = \hat{E}[-X] = 0$. If $Y$ is independent of $X$: $\hat{E}[XY^2] = \hat{E}[\hat{E}[xY^2]_{x=X}] = \hat{E}[X^+\bar\sigma^2 - X^-\underline\sigma^2] = \hat{E}[X^+](\bar\sigma^2 - \underline\sigma^2) > 0$. But if $X$ is independent of $Y$: $\hat{E}[XY^2] = \hat{E}[\hat{E}[X]Y^2] = 0$.
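The asymmetry can be reproduced on a finite toy model; the single candidate law for $X$ and the two candidate laws for $Y$ below are hypothetical choices:

```python
import numpy as np

# Discrete illustration of the asymmetry of independence under a
# sublinear expectation.
# X has ONE candidate law: +-1 with probability 1/2 (no uncertainty),
# so E[X] = E[-X] = 0.
# Y has TWO candidate laws: +-1 w.p. 1/2, or +-2 w.p. 1/2, so that
# E[Y^2] ranges over {1, 4}: variance uncertainty with sig2 = 1, SIG2 = 4.
laws_X = [np.array([( 1.0, 0.5), (-1.0, 0.5)])]
laws_Y = [np.array([( 1.0, 0.5), (-1.0, 0.5)]),
          np.array([( 2.0, 0.5), (-2.0, 0.5)])]

def E_sub(laws, f):
    """Sublinear expectation: sup over candidate laws of E[f]."""
    return max(sum(p * f(v) for v, p in law) for law in laws)

# "Y independent of X": evaluate the inner expectation in Y for each
# fixed outcome x, then the outer sublinear expectation over X.
y_indep_x = E_sub(laws_X, lambda x: E_sub(laws_Y, lambda y: x * y**2))
# "X independent of Y": inner expectation in X first.
x_indep_y = E_sub(laws_Y, lambda y: E_sub(laws_X, lambda x: x * y**2))

print(y_indep_x, x_indep_y)   # 1.5 and 0.0: the order matters
```

The first value equals $(\bar\sigma^2 - \underline\sigma^2)/2 = (4-1)/2$, matching the slide's formula $\hat{E}[X^+](\bar\sigma^2 - \underline\sigma^2)$ with $\hat{E}[X^+] = 1/2$.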

49 Theorem (CLT under distribution uncertainty). Let $\{X_i\}_{i=1}^{\infty}$ be i.i.d. in a sublinear expectation space $(\Omega, \mathcal{H}, \hat{E})$ in the following sense: (i) identically distributed: $\hat{E}[\varphi(X_i)] = \hat{E}[\varphi(X_1)]$, $i = 1, 2, \ldots$; (ii) independent: for each $i$, $X_{i+1}$ is independent of $(X_1, X_2, \ldots, X_i)$ under $\hat{E}$. We also assume that $\hat{E}[X_1] = \hat{E}[-X_1] = 0$, and set $S_n := X_1 + \cdots + X_n$. Then for each convex function $\varphi$ we have $\hat{E}[\varphi(S_n/\sqrt{n})] \to \hat{E}[\varphi(X)]$, $X \sim N(0, [\underline\sigma^2, \bar\sigma^2])$, where $\bar\sigma^2 = \hat{E}[X_1^2]$, $\underline\sigma^2 = -\hat{E}[-X_1^2]$.

52 Normal distribution under sublinear expectation: a fundamentally important sublinear distribution. Definition. A random variable $X$ in $(\Omega, \mathcal{H}, \hat{E})$ is called normally distributed if $aX + b\bar{X} \sim \sqrt{a^2 + b^2}\,X$ for all $a, b \geq 0$, where $\bar{X}$ is an independent copy of $X$. We have $\hat{E}[X] = \hat{E}[-X] = 0$. We denote $X \sim N(0, [\underline\sigma^2, \bar\sigma^2])$, where $\bar\sigma^2 := \hat{E}[X^2]$, $\underline\sigma^2 := -\hat{E}[-X^2]$.

54 G-normal distribution under the sublinear expectation $\hat{E}[\cdot]$. (1) For each convex $\varphi$ we have $\hat{E}[\varphi(X)] = \frac{1}{\sqrt{2\pi\bar\sigma^2}} \int_{-\infty}^{\infty} \varphi(y) \exp(-\frac{y^2}{2\bar\sigma^2})\,dy$. (2) For each concave $\varphi$ we have $\hat{E}[\varphi(X)] = \frac{1}{\sqrt{2\pi\underline\sigma^2}} \int_{-\infty}^{\infty} \varphi(y) \exp(-\frac{y^2}{2\underline\sigma^2})\,dy$.

55 Fact: if $\underline\sigma^2 = \bar\sigma^2$, then $N(0, [\underline\sigma^2, \bar\sigma^2]) = N(0, \sigma^2)$. Fact: the larger the interval $[\underline\sigma^2, \bar\sigma^2]$, the stronger the uncertainty. Fact: but the uncertainty subset of $N(0, [\underline\sigma^2, \bar\sigma^2])$ does not consist only of the laws $N(0, \sigma^2)$, $\sigma^2 \in [\underline\sigma^2, \bar\sigma^2]$!

56 Theorem. $X \sim N(0, [\underline\sigma^2, \bar\sigma^2])$ in $(\Omega, \mathcal{H}, \hat{E})$ iff for each $\varphi \in C_b(\mathbb{R})$ the function $u(t, x) := \hat{E}[\varphi(x + \sqrt{t}X)]$, $x \in \mathbb{R}$, $t \geq 0$, is the solution of the PDE $u_t = G(u_{xx})$, $t > 0$, $x \in \mathbb{R}$; $u|_{t=0} = \varphi$, where $G(a) = \frac{1}{2}(\bar\sigma^2 a^+ - \underline\sigma^2 a^-) \,(= \hat{E}[\frac{a}{2}X^2])$. This is the G-normal distribution.
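The PDE characterization suggests a direct way to compute $\hat{E}[\varphi(X)]$ numerically. Below is a minimal explicit finite-difference sketch of $u_t = G(u_{xx})$; the grid sizes, time horizon, interval $[\underline\sigma^2, \bar\sigma^2] = [1, 4]$, and the test function $\varphi$ are illustrative choices. For convex $\varphi$ the result should agree with the classical expectation under $N(0, \bar\sigma^2)$, as stated on the earlier slide:

```python
import numpy as np

# Explicit finite-difference scheme for the G-heat equation
#   u_t = G(u_xx),  G(a) = (sig_bar2 * a^+ - sig_low2 * a^-) / 2,
# which characterises the G-normal distribution N(0, [sig_low2, sig_bar2]).
sig_low2, sig_bar2 = 1.0, 4.0
L, nx, T = 10.0, 401, 1.0
x = np.linspace(-L, L, nx)
dx = x[1] - x[0]
dt = 0.2 * dx**2 / sig_bar2           # CFL-stable step for the explicit scheme

phi = lambda y: np.maximum(y, 0.0)    # a convex test function
u = phi(x)
t = 0.0
while t < T:
    uxx = np.zeros_like(u)
    uxx[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    G = 0.5 * (sig_bar2 * np.maximum(uxx, 0) - sig_low2 * np.maximum(-uxx, 0))
    u = u + dt * G
    t += dt

# For convex phi, E[phi(X)] should match the classical normal with the
# LARGEST variance: E[max(Z, 0)] = sig_bar / sqrt(2*pi) for Z ~ N(0, sig_bar^2).
print(u[nx // 2], np.sqrt(sig_bar2) / np.sqrt(2 * np.pi))
```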

59 Law of Large Numbers (LLN), Central Limit Theorem (CLT). Striking consequence of the LLN & CLT: the accumulation of independent and identically distributed random variables tends to a normally distributed random variable, whatever the original distribution. LLN under Choquet capacities: Marinacci, M., Limit laws for non-additive probabilities and their frequentist interpretation, Journal of Economic Theory 84. Nothing was found for a nonlinear CLT.

60 Definition. A sequence of random variables $\{\eta_i\}_{i=1}^{\infty}$ in $\mathcal{H}$ is said to converge in law under $\hat{E}$ if the limit $\lim_{i\to\infty} \hat{E}[\varphi(\eta_i)]$ exists for each $\varphi \in C_b(\mathbb{R})$.

61 Central Limit Theorem under Sublinear Expectation. Theorem. Let $\{X_i\}_{i=1}^{\infty}$ in $(\Omega, \mathcal{H}, \hat{E})$ be identically distributed, $X_i \sim X_1$, and let each $X_{i+1}$ be independent of $(X_1, \ldots, X_i)$. Assume furthermore that $\hat{E}[|X_1|^{2+\alpha}] < \infty$ and $\hat{E}[X_1] = \hat{E}[-X_1] = 0$, and set $S_n := X_1 + \cdots + X_n$. Then $S_n/\sqrt{n}$ converges in law to $N(0, [\underline\sigma^2, \bar\sigma^2])$: $\lim_{n\to\infty} \hat{E}[\varphi(S_n/\sqrt{n})] = \hat{E}[\varphi(X)]$ for all $\varphi \in C_b(\mathbb{R})$, where $X \sim N(0, [\underline\sigma^2, \bar\sigma^2])$, $\bar\sigma^2 = \hat{E}[X_1^2]$, $\underline\sigma^2 = -\hat{E}[-X_1^2]$.

63 Cases with mean uncertainty: what happens if $\hat{E}[X_1] > -\hat{E}[-X_1]$? Definition. A random variable $Y$ in $(\Omega, \mathcal{H}, \hat{E})$ is $N([\underline\mu, \bar\mu] \times \{0\})$-distributed ($Y \sim N([\underline\mu, \bar\mu] \times \{0\})$) if $aY + b\bar{Y} \sim (a + b)Y$ for all $a, b \geq 0$, where $\bar{Y}$ is an independent copy of $Y$ and $\bar\mu := \hat{E}[Y] > \underline\mu := -\hat{E}[-Y]$. We can prove that $\hat{E}[\varphi(Y)] = \sup_{y\in[\underline\mu, \bar\mu]} \varphi(y)$.
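The formula $\hat{E}[\varphi(Y)] = \sup_{y\in[\underline\mu, \bar\mu]} \varphi(y)$ is straightforward to evaluate numerically; a sketch with an illustrative mean interval (the bounds and test functions are arbitrary choices):

```python
import numpy as np

# For the maximal (mean-uncertain) distribution N([mu_low, mu_bar] x {0}),
# the slide's formula reduces E[phi(Y)] to optimising phi over the
# mean interval.  A direct grid-search sketch:
mu_low, mu_bar = -1.0, 2.0

def E_hat(phi, grid=np.linspace(mu_low, mu_bar, 10_001)):
    return phi(grid).max()            # sup_{y in [mu_low, mu_bar]} phi(y)

print(E_hat(lambda y: y))             # upper mean:  mu_bar = 2.0
print(E_hat(lambda y: -y))            # = -mu_low:   1.0
print(E_hat(lambda y: (y - 0.5)**2))  # maximum attained at an endpoint: 2.25
```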

64 Case with mean uncertainty. Definition. A pair of random variables $(X, Y)$ in $(\Omega, \mathcal{H}, \hat{E})$ is $N([\underline\mu, \bar\mu] \times [\underline\sigma^2, \bar\sigma^2])$-distributed if $(aX + b\bar{X},\, a^2 Y + b^2 \bar{Y}) \sim (\sqrt{a^2 + b^2}\,X,\, (a^2 + b^2)Y)$ for all $a, b \geq 0$, where $(\bar{X}, \bar{Y})$ is an independent copy of $(X, Y)$, $\bar\mu := \hat{E}[Y]$, $\underline\mu := -\hat{E}[-Y]$, $\bar\sigma^2 := \hat{E}[X^2]$, $\underline\sigma^2 := -\hat{E}[-X^2]$ (and $\hat{E}[X] = \hat{E}[-X] = 0$).

65 Theorem. $(X, Y) \sim N([\underline\mu, \bar\mu] \times [\underline\sigma^2, \bar\sigma^2])$ in $(\Omega, \mathcal{H}, \hat{E})$ iff for each $\varphi \in C_b(\mathbb{R})$ the function $u(t, x, y) := \hat{E}[\varphi(x + \sqrt{t}X,\, y + tY)]$, $x, y \in \mathbb{R}$, $t \geq 0$, is the solution of the PDE $u_t = G(u_y, u_{xx})$, $t > 0$; $u|_{t=0} = \varphi$, where $G(p, a) := \hat{E}[\frac{a}{2}X^2 + pY]$.

67 Central Limit Theorem under Sublinear Expectation. Theorem. Let $\{(X_i, Y_i)\}_{i=1}^{\infty}$ be an independent and identically distributed sequence. Assume furthermore that $\hat{E}[|X_1|^{2+\alpha}] + \hat{E}[|Y_1|^{1+\alpha}] < \infty$ and $\hat{E}[X_1] = \hat{E}[-X_1] = 0$, and set $S_n^X := X_1 + \cdots + X_n$, $S_n^Y := Y_1 + \cdots + Y_n$. Then $\lim_{n\to\infty} \hat{E}[\varphi(S_n^X/\sqrt{n} + S_n^Y/n)] = \hat{E}[\varphi(X + Y)]$ for all $\varphi \in C_b(\mathbb{R})$, where $(X, Y)$ is $N([\underline\mu, \bar\mu] \times [\underline\sigma^2, \bar\sigma^2])$-distributed.

71 Remarks. 1. As in the classical case, these new LLN and CLT play fundamental roles in situations with probability and distribution uncertainty. 2. According to our LLN and CLT, the uncertainty accumulates; in general there are infinitely many probability measures, and the best way to calculate $\hat{E}[X]$ is to use our theorem, instead of computing $\hat{E}[X] = \sup_{\theta\in\Theta} E_\theta[X]$ directly. 3. Many statistical methods must be revisited to clarify their meaning under uncertainty: it is very risky to treat this new notion in a classical way, which causes serious confusion. 4. The third revolution: deterministic point of view $\Rightarrow$ probabilistic point of view $\Rightarrow$ uncertainty of probabilities.

74 G-Brownian Motion. Definition. Under $(\Omega, \mathcal{F}, \hat{E})$, a process $B_t(\omega) = \omega_t$, $t \geq 0$, is called a G-Brownian motion if: (i) $B_{t+s} - B_s$ is $N(0, [\underline\sigma^2 t, \bar\sigma^2 t])$-distributed for all $s, t \geq 0$; (ii) for each $t_1 \leq \cdots \leq t_n$, $B_{t_n} - B_{t_{n-1}}$ is independent of $(B_{t_1}, \ldots, B_{t_{n-1}})$. For simplification we set $\bar\sigma^2 = 1$.

79 Theorem. If, under some $(\Omega, \mathcal{F}, \hat{E})$, a stochastic process $B_t(\omega)$, $t \geq 0$, satisfies: (1) for each $t_1 \leq \cdots \leq t_n$, $B_{t_n} - B_{t_{n-1}}$ is independent of $(B_{t_1}, \ldots, B_{t_{n-1}})$; (2) $B_t$ is identically distributed as $B_{s+t} - B_s$ for all $s, t \geq 0$; (3) $\hat{E}[|B_t|^3] = o(t)$; then $B$ is a G-Brownian motion.

80 G-Brownian motion. Fact: like the $N(0, [\underline\sigma^2, \bar\sigma^2])$ distribution, the G-Brownian motion $B_t(\omega) = \omega_t$, $t \geq 0$, can be strongly correlated under the unknown objective probability; it can even have very long memory. But it is i.i.d. under the robust expectation $\hat{E}$, by which we gain many advantages in analysis, calculus and computation, compared with, e.g., fractional Brownian motion.

82 Itô's integral of G-Brownian motion. For each process $(\eta_t)_{t\geq 0} \in L^{2,0}_F(0, T)$ of the form $\eta_t(\omega) = \sum_{j=0}^{N-1} \xi_j(\omega) 1_{[t_j, t_{j+1})}(t)$, $\xi_j \in L^2(\mathcal{F}_{t_j})$ (i.e. $\mathcal{F}_{t_j}$-measurable with $\hat{E}[|\xi_j|^2] < \infty$), we define $I(\eta) = \int_0^T \eta(s)\,dB_s := \sum_{j=0}^{N-1} \xi_j (B_{t_{j+1}} - B_{t_j})$. Lemma. We have $\hat{E}[\int_0^T \eta(s)\,dB_s] = 0$ and $\hat{E}[(\int_0^T \eta(s)\,dB_s)^2] \leq \int_0^T \hat{E}[(\eta(t))^2]\,dt$.

83 Definition. Under the Banach norm $\|\eta\|^2 := \int_0^T \hat{E}[(\eta(t))^2]\,dt$, the map $I(\eta) : L^{2,0}(0, T) \to L^2(\mathcal{F}_T)$ is a contraction mapping. We then extend $I(\eta)$ to $L^2(0, T)$ and define the stochastic integral $\int_0^T \eta(s)\,dB_s := I(\eta)$, $\eta \in L^2(0, T)$.

84 Lemma. We have: (i) $\int_s^t \eta_u\,dB_u = \int_s^r \eta_u\,dB_u + \int_r^t \eta_u\,dB_u$; (ii) $\int_s^t (\alpha\eta_u + \theta_u)\,dB_u = \alpha\int_s^t \eta_u\,dB_u + \int_s^t \theta_u\,dB_u$, $\alpha \in L^1(\mathcal{F}_s)$; (iii) $\hat{E}[X + \int_r^T \eta_u\,dB_u \mid \mathcal{H}_s] = \hat{E}[X]$, $X \in L^1(\mathcal{F}_s)$.

86 Quadratic variation process of G-Brownian motion. We denote $\langle B\rangle_t = B_t^2 - 2\int_0^t B_s\,dB_s = \lim_{\max(t^N_{k+1} - t^N_k)\to 0} \sum_{k=0}^{N-1} (B_{t^N_{k+1}} - B_{t^N_k})^2$. $\langle B\rangle$ is an increasing process, called the quadratic variation process of $B$. $\hat{E}[\langle B\rangle_t] = \bar\sigma^2 t$ but $-\hat{E}[-\langle B\rangle_t] = \underline\sigma^2 t$. Lemma. $B^s_t := B_{t+s} - B_s$, $t \geq 0$, is still a G-Brownian motion. We also have $\langle B\rangle_{t+s} - \langle B\rangle_s \sim \langle B^s\rangle_t \sim U([\underline\sigma^2 t, \bar\sigma^2 t])$.
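The bounds $\hat{E}[\langle B\rangle_t] = \bar\sigma^2 t$ and $-\hat{E}[-\langle B\rangle_t] = \underline\sigma^2 t$ can be illustrated by simulating a single scenario path whose instantaneous volatility stays in $[\underline\sigma, \bar\sigma]$; the particular volatility scenario below is an arbitrary (hypothetical) choice:

```python
import numpy as np

rng = np.random.default_rng(1)

# Under any scenario the instantaneous volatility stays in [sig_low, sig_bar],
# so the realized quadratic variation <B>_T must land in
# [sig_low^2 * T, sig_bar^2 * T].
sig_low, sig_bar = 1.0, 2.0
T, n = 1.0, 200_000
dt = T / n
sig = rng.uniform(sig_low, sig_bar, n)        # one adversarial volatility path
dB = sig * rng.normal(0.0, np.sqrt(dt), n)    # increments of the scenario path
qv = np.sum(dB**2)                            # realized <B>_T

print(sig_low**2 * T <= qv <= sig_bar**2 * T)
```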

87 We have the following isometry: $\hat{E}[(\int_0^T \eta(s)\,dB_s)^2] = \hat{E}[\int_0^T \eta^2(s)\,d\langle B\rangle_s]$, $\eta \in M^2_G(0, T)$.

89 Itô's formula for G-Brownian motion. Let $X_t = X_0 + \int_0^t \alpha_s\,ds + \int_0^t \eta_s\,d\langle B\rangle_s + \int_0^t \beta_s\,dB_s$. Theorem. Let $\alpha$, $\beta$ and $\eta$ be processes in $L^2_G(0, T)$. Then for each $t \geq 0$ we have, in $L^2_G(\mathcal{H}_t)$, $\Phi(X_t) = \Phi(X_0) + \int_0^t \Phi_x(X_u)\beta_u\,dB_u + \int_0^t \Phi_x(X_u)\alpha_u\,du + \int_0^t [\Phi_x(X_u)\eta_u + \frac{1}{2}\Phi_{xx}(X_u)\beta_u^2]\,d\langle B\rangle_u$.

91 Stochastic differential equations. Problem. We consider the following SDE: $X_t = X_0 + \int_0^t b(X_s)\,ds + \int_0^t h(X_s)\,d\langle B\rangle_s + \int_0^t \sigma(X_s)\,dB_s$, $t > 0$, where $X_0 \in \mathbb{R}^n$ is given and $b, h, \sigma : \mathbb{R}^n \to \mathbb{R}^n$ are given Lipschitz functions. The solution is a process $X \in M^2_G(0, T; \mathbb{R}^n)$ satisfying the above SDE. Theorem. There exists a unique solution $X \in M^2_G(0, T; \mathbb{R}^n)$ of this stochastic differential equation.
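Under one fixed scenario measure the G-SDE reduces to a classical SDE, so a standard Euler scheme gives a path-wise sketch; the coefficients $b$, $h$, $\sigma$ and the volatility scenario below are illustrative choices, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(2)

# Euler-type scheme for the SDE on the slide, simulated under ONE
# scenario: a volatility process sig_t in [sig_low, sig_bar] drives
# both dB_t and d<B>_t = sig_t^2 dt.
b     = lambda x: -x            # drift against ds       (illustrative)
h     = lambda x: 0.1 * x       # coefficient on d<B>_s  (illustrative)
sigma = lambda x: 0.5 * x       # coefficient on dB_s    (illustrative)

sig_low, sig_bar = 1.0, 2.0
T, n = 1.0, 10_000
dt = T / n
x = 1.0                         # initial condition X_0
for _ in range(n):
    s = rng.uniform(sig_low, sig_bar)       # scenario volatility on [t, t+dt]
    dB = s * rng.normal(0.0, np.sqrt(dt))   # scenario Brownian increment
    x += b(x) * dt + h(x) * (s**2 * dt) + sigma(x) * dB

print(x)    # one simulated value of X_T under this scenario
```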

95 Prospectives. Risk measures and pricing under dynamic volatility uncertainty ([A-L-P 1995], [Lyons 1995]) for path-dependent options. Stochastic (trajectory) analysis of sublinear and/or nonlinear Markov processes. A new Feynman-Kac formula for fully nonlinear PDEs, with a path interpretation: $u(t, x) = \hat{E}_x[\varphi(B_t)\exp(\int_0^t c(B_s)\,ds)]$ solves $\partial_t u = G(D^2 u) + c(x)u$, $u|_{t=0} = \varphi(x)$. Fully nonlinear Monte-Carlo simulation. BSDEs driven by G-Brownian motion: a challenge.

100 The talk is based on:
Peng, S. (2006) G-Expectation, G-Brownian Motion and Related Stochastic Calculus of Itô's Type, arXiv:math.PR (v2, 3 Jan 2006); to appear in Proceedings of the 2005 Abel Symposium, Springer.
Peng, S. (2006) Multi-Dimensional G-Brownian Motion and Related Stochastic Calculus under G-Expectation, arXiv:math.PR (v2, 28 Jan 2006).
Peng, S. (2007) Law of Large Numbers and Central Limit Theorem under Nonlinear Expectations, arXiv:math.PR (v1, 13 Feb 2007).
Peng, S. (2008) A New Central Limit Theorem under Sublinear Expectations, arXiv (v1 [math.PR], 18 Mar 2008).
Denis, L., Hu, M. and Peng, S. (2008) Function Spaces and Capacity Related to a Sublinear Expectation: Application to G-Brownian Motion Paths, arXiv (v1 [math.PR], 9 Feb 2008).
Song, Y. (2007) A General Central Limit Theorem under Peng's G-Normal Distribution, preprint.

101 Thank you,

102 In the classical period of Newtonian mechanics, up to and including A. Einstein, people believed that everything could be deterministically calculated. The last century's research affirmatively established the probabilistic behavior of our universe: God does play dice! Nowadays people believe that everything has its own probability distribution. But deep research into human behavior shows that for everything involving humans or life, such as finance, this may not be true: a person or a community may prepare many different probability distributions for your selection, and may change them, purposely or randomly, from time to time.


More information

Ergodic Theorems. Samy Tindel. Purdue University. Probability Theory 2 - MA 539. Taken from Probability: Theory and examples by R.

Ergodic Theorems. Samy Tindel. Purdue University. Probability Theory 2 - MA 539. Taken from Probability: Theory and examples by R. Ergodic Theorems Samy Tindel Purdue University Probability Theory 2 - MA 539 Taken from Probability: Theory and examples by R. Durrett Samy T. Ergodic theorems Probability Theory 1 / 92 Outline 1 Definitions

More information

Lecture 11. Probability Theory: an Overveiw

Lecture 11. Probability Theory: an Overveiw Math 408 - Mathematical Statistics Lecture 11. Probability Theory: an Overveiw February 11, 2013 Konstantin Zuev (USC) Math 408, Lecture 11 February 11, 2013 1 / 24 The starting point in developing the

More information

A MODEL FOR THE LONG-TERM OPTIMAL CAPACITY LEVEL OF AN INVESTMENT PROJECT

A MODEL FOR THE LONG-TERM OPTIMAL CAPACITY LEVEL OF AN INVESTMENT PROJECT A MODEL FOR HE LONG-ERM OPIMAL CAPACIY LEVEL OF AN INVESMEN PROJEC ARNE LØKKA AND MIHAIL ZERVOS Abstract. We consider an investment project that produces a single commodity. he project s operation yields

More information

4 Expectation & the Lebesgue Theorems

4 Expectation & the Lebesgue Theorems STA 205: Probability & Measure Theory Robert L. Wolpert 4 Expectation & the Lebesgue Theorems Let X and {X n : n N} be random variables on a probability space (Ω,F,P). If X n (ω) X(ω) for each ω Ω, does

More information

Minimal Supersolutions of Backward Stochastic Differential Equations and Robust Hedging

Minimal Supersolutions of Backward Stochastic Differential Equations and Robust Hedging Minimal Supersolutions of Backward Stochastic Differential Equations and Robust Hedging SAMUEL DRAPEAU Humboldt-Universität zu Berlin BSDEs, Numerics and Finance Oxford July 2, 2012 joint work with GREGOR

More information

1. Stochastic Process

1. Stochastic Process HETERGENEITY IN QUANTITATIVE MACROECONOMICS @ TSE OCTOBER 17, 216 STOCHASTIC CALCULUS BASICS SANG YOON (TIM) LEE Very simple notes (need to add references). It is NOT meant to be a substitute for a real

More information

1: PROBABILITY REVIEW

1: PROBABILITY REVIEW 1: PROBABILITY REVIEW Marek Rutkowski School of Mathematics and Statistics University of Sydney Semester 2, 2016 M. Rutkowski (USydney) Slides 1: Probability Review 1 / 56 Outline We will review the following

More information

Lecture 21 Representations of Martingales

Lecture 21 Representations of Martingales Lecture 21: Representations of Martingales 1 of 11 Course: Theory of Probability II Term: Spring 215 Instructor: Gordan Zitkovic Lecture 21 Representations of Martingales Right-continuous inverses Let

More information

Dynamic Risk Measures and Nonlinear Expectations with Markov Chain noise

Dynamic Risk Measures and Nonlinear Expectations with Markov Chain noise Dynamic Risk Measures and Nonlinear Expectations with Markov Chain noise Robert J. Elliott 1 Samuel N. Cohen 2 1 Department of Commerce, University of South Australia 2 Mathematical Insitute, University

More information

On continuous time contract theory

On continuous time contract theory Ecole Polytechnique, France Journée de rentrée du CMAP, 3 octobre, 218 Outline 1 2 Semimartingale measures on the canonical space Random horizon 2nd order backward SDEs (Static) Principal-Agent Problem

More information

JUSTIN HARTMANN. F n Σ.

JUSTIN HARTMANN. F n Σ. BROWNIAN MOTION JUSTIN HARTMANN Abstract. This paper begins to explore a rigorous introduction to probability theory using ideas from algebra, measure theory, and other areas. We start with a basic explanation

More information

Markov Chain BSDEs and risk averse networks

Markov Chain BSDEs and risk averse networks Markov Chain BSDEs and risk averse networks Samuel N. Cohen Mathematical Institute, University of Oxford (Based on joint work with Ying Hu, Robert Elliott, Lukas Szpruch) 2nd Young Researchers in BSDEs

More information

Introduction to Machine Learning

Introduction to Machine Learning Introduction to Machine Learning Introduction to Probabilistic Methods Varun Chandola Computer Science & Engineering State University of New York at Buffalo Buffalo, NY, USA chandola@buffalo.edu Chandola@UB

More information

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3 Brownian Motion Contents 1 Definition 2 1.1 Brownian Motion................................. 2 1.2 Wiener measure.................................. 3 2 Construction 4 2.1 Gaussian process.................................

More information

Introduction to Machine Learning

Introduction to Machine Learning What does this mean? Outline Contents Introduction to Machine Learning Introduction to Probabilistic Methods Varun Chandola December 26, 2017 1 Introduction to Probability 1 2 Random Variables 3 3 Bayes

More information

3. Review of Probability and Statistics

3. Review of Probability and Statistics 3. Review of Probability and Statistics ECE 830, Spring 2014 Probabilistic models will be used throughout the course to represent noise, errors, and uncertainty in signal processing problems. This lecture

More information

Backward Stochastic Differential Equations with Infinite Time Horizon

Backward Stochastic Differential Equations with Infinite Time Horizon Backward Stochastic Differential Equations with Infinite Time Horizon Holger Metzler PhD advisor: Prof. G. Tessitore Università di Milano-Bicocca Spring School Stochastic Control in Finance Roscoff, March

More information

Some Tools From Stochastic Analysis

Some Tools From Stochastic Analysis W H I T E Some Tools From Stochastic Analysis J. Potthoff Lehrstuhl für Mathematik V Universität Mannheim email: potthoff@math.uni-mannheim.de url: http://ls5.math.uni-mannheim.de To close the file, click

More information

Discretization of SDEs: Euler Methods and Beyond

Discretization of SDEs: Euler Methods and Beyond Discretization of SDEs: Euler Methods and Beyond 09-26-2006 / PRisMa 2006 Workshop Outline Introduction 1 Introduction Motivation Stochastic Differential Equations 2 The Time Discretization of SDEs Monte-Carlo

More information

A numerical method for solving uncertain differential equations

A numerical method for solving uncertain differential equations Journal of Intelligent & Fuzzy Systems 25 (213 825 832 DOI:1.3233/IFS-12688 IOS Press 825 A numerical method for solving uncertain differential equations Kai Yao a and Xiaowei Chen b, a Department of Mathematical

More information

An Introduction to Malliavin calculus and its applications

An Introduction to Malliavin calculus and its applications An Introduction to Malliavin calculus and its applications Lecture 3: Clark-Ocone formula David Nualart Department of Mathematics Kansas University University of Wyoming Summer School 214 David Nualart

More information

Origins of Probability Theory

Origins of Probability Theory 1 16.584: INTRODUCTION Theory and Tools of Probability required to analyze and design systems subject to uncertain outcomes/unpredictability/randomness. Such systems more generally referred to as Experiments.

More information

Probability and Measure

Probability and Measure Probability and Measure Robert L. Wolpert Institute of Statistics and Decision Sciences Duke University, Durham, NC, USA Convergence of Random Variables 1. Convergence Concepts 1.1. Convergence of Real

More information

Uniformly Uniformly-ergodic Markov chains and BSDEs

Uniformly Uniformly-ergodic Markov chains and BSDEs Uniformly Uniformly-ergodic Markov chains and BSDEs Samuel N. Cohen Mathematical Institute, University of Oxford (Based on joint work with Ying Hu, Robert Elliott, Lukas Szpruch) Centre Henri Lebesgue,

More information

Theoretical Tutorial Session 2

Theoretical Tutorial Session 2 1 / 36 Theoretical Tutorial Session 2 Xiaoming Song Department of Mathematics Drexel University July 27, 216 Outline 2 / 36 Itô s formula Martingale representation theorem Stochastic differential equations

More information

Lecture 2: Review of Basic Probability Theory

Lecture 2: Review of Basic Probability Theory ECE 830 Fall 2010 Statistical Signal Processing instructor: R. Nowak, scribe: R. Nowak Lecture 2: Review of Basic Probability Theory Probabilistic models will be used throughout the course to represent

More information

Risk Measures in non-dominated Models

Risk Measures in non-dominated Models Purpose: Study Risk Measures taking into account the model uncertainty in mathematical finance. Plan 1 Non-dominated Models Model Uncertainty Fundamental topological Properties 2 Risk Measures on L p (c)

More information

The Central Limit Theorem: More of the Story

The Central Limit Theorem: More of the Story The Central Limit Theorem: More of the Story Steven Janke November 2015 Steven Janke (Seminar) The Central Limit Theorem:More of the Story November 2015 1 / 33 Central Limit Theorem Theorem (Central Limit

More information

Lecture 4 Lebesgue spaces and inequalities

Lecture 4 Lebesgue spaces and inequalities Lecture 4: Lebesgue spaces and inequalities 1 of 10 Course: Theory of Probability I Term: Fall 2013 Instructor: Gordan Zitkovic Lecture 4 Lebesgue spaces and inequalities Lebesgue spaces We have seen how

More information

Introduction to self-similar growth-fragmentations

Introduction to self-similar growth-fragmentations Introduction to self-similar growth-fragmentations Quan Shi CIMAT, 11-15 December, 2017 Quan Shi Growth-Fragmentations CIMAT, 11-15 December, 2017 1 / 34 Literature Jean Bertoin, Compensated fragmentation

More information

Decision principles derived from risk measures

Decision principles derived from risk measures Decision principles derived from risk measures Marc Goovaerts Marc.Goovaerts@econ.kuleuven.ac.be Katholieke Universiteit Leuven Decision principles derived from risk measures - Marc Goovaerts p. 1/17 Back

More information

Obstacle problems for nonlocal operators

Obstacle problems for nonlocal operators Obstacle problems for nonlocal operators Camelia Pop School of Mathematics, University of Minnesota Fractional PDEs: Theory, Algorithms and Applications ICERM June 19, 2018 Outline Motivation Optimal regularity

More information

Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio ( )

Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio ( ) Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio (2014-2015) Etienne Tanré - Olivier Faugeras INRIA - Team Tosca November 26th, 2014 E. Tanré (INRIA - Team Tosca) Mathematical

More information

A Note on Robust Representations of Law-Invariant Quasiconvex Functions

A Note on Robust Representations of Law-Invariant Quasiconvex Functions A Note on Robust Representations of Law-Invariant Quasiconvex Functions Samuel Drapeau Michael Kupper Ranja Reda October 6, 21 We give robust representations of law-invariant monotone quasiconvex functions.

More information

Quasi-sure Stochastic Analysis through Aggregation

Quasi-sure Stochastic Analysis through Aggregation Quasi-sure Stochastic Analysis through Aggregation H. Mete Soner Nizar Touzi Jianfeng Zhang March 7, 21 Abstract This paper is on developing stochastic analysis simultaneously under a general family of

More information

Probability and Measure

Probability and Measure Part II Year 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2018 84 Paper 4, Section II 26J Let (X, A) be a measurable space. Let T : X X be a measurable map, and µ a probability

More information

Brownian Motion and Conditional Probability

Brownian Motion and Conditional Probability Math 561: Theory of Probability (Spring 2018) Week 10 Brownian Motion and Conditional Probability 10.1 Standard Brownian Motion (SBM) Brownian motion is a stochastic process with both practical and theoretical

More information

Lecture 6 Basic Probability

Lecture 6 Basic Probability Lecture 6: Basic Probability 1 of 17 Course: Theory of Probability I Term: Fall 2013 Instructor: Gordan Zitkovic Lecture 6 Basic Probability Probability spaces A mathematical setup behind a probabilistic

More information

ON THE FRACTIONAL CAUCHY PROBLEM ASSOCIATED WITH A FELLER SEMIGROUP

ON THE FRACTIONAL CAUCHY PROBLEM ASSOCIATED WITH A FELLER SEMIGROUP Dedicated to Professor Gheorghe Bucur on the occasion of his 7th birthday ON THE FRACTIONAL CAUCHY PROBLEM ASSOCIATED WITH A FELLER SEMIGROUP EMIL POPESCU Starting from the usual Cauchy problem, we give

More information

Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio ( )

Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio ( ) Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio (2014-2015) Etienne Tanré - Olivier Faugeras INRIA - Team Tosca October 22nd, 2014 E. Tanré (INRIA - Team Tosca) Mathematical

More information

IEOR 3106: Introduction to Operations Research: Stochastic Models. Fall 2011, Professor Whitt. Class Lecture Notes: Thursday, September 15.

IEOR 3106: Introduction to Operations Research: Stochastic Models. Fall 2011, Professor Whitt. Class Lecture Notes: Thursday, September 15. IEOR 3106: Introduction to Operations Research: Stochastic Models Fall 2011, Professor Whitt Class Lecture Notes: Thursday, September 15. Random Variables, Conditional Expectation and Transforms 1. Random

More information

Risk-Averse Dynamic Optimization. Andrzej Ruszczyński. Research supported by the NSF award CMMI

Risk-Averse Dynamic Optimization. Andrzej Ruszczyński. Research supported by the NSF award CMMI Research supported by the NSF award CMMI-0965689 Outline 1 Risk-Averse Preferences 2 Coherent Risk Measures 3 Dynamic Risk Measurement 4 Markov Risk Measures 5 Risk-Averse Control Problems 6 Solution Methods

More information

1 Presessional Probability

1 Presessional Probability 1 Presessional Probability Probability theory is essential for the development of mathematical models in finance, because of the randomness nature of price fluctuations in the markets. This presessional

More information

Stochastic Calculus (Lecture #3)

Stochastic Calculus (Lecture #3) Stochastic Calculus (Lecture #3) Siegfried Hörmann Université libre de Bruxelles (ULB) Spring 2014 Outline of the course 1. Stochastic processes in continuous time. 2. Brownian motion. 3. Itô integral:

More information

The Subdifferential of Convex Deviation Measures and Risk Functions

The Subdifferential of Convex Deviation Measures and Risk Functions The Subdifferential of Convex Deviation Measures and Risk Functions Nicole Lorenz Gert Wanka In this paper we give subdifferential formulas of some convex deviation measures using their conjugate functions

More information

In terms of measures: Exercise 1. Existence of a Gaussian process: Theorem 2. Remark 3.

In terms of measures: Exercise 1. Existence of a Gaussian process: Theorem 2. Remark 3. 1. GAUSSIAN PROCESSES A Gaussian process on a set T is a collection of random variables X =(X t ) t T on a common probability space such that for any n 1 and any t 1,...,t n T, the vector (X(t 1 ),...,X(t

More information

Stochastic Optimization One-stage problem

Stochastic Optimization One-stage problem Stochastic Optimization One-stage problem V. Leclère September 28 2017 September 28 2017 1 / Déroulement du cours 1 Problèmes d optimisation stochastique à une étape 2 Problèmes d optimisation stochastique

More information

Stochastic integration. P.J.C. Spreij

Stochastic integration. P.J.C. Spreij Stochastic integration P.J.C. Spreij this version: April 22, 29 Contents 1 Stochastic processes 1 1.1 General theory............................... 1 1.2 Stopping times...............................

More information

From Random Variables to Random Processes. From Random Variables to Random Processes

From Random Variables to Random Processes. From Random Variables to Random Processes Random Processes In probability theory we study spaces (Ω, F, P) where Ω is the space, F are all the sets to which we can measure its probability and P is the probability. Example: Toss a die twice. Ω

More information

Convergence of Particle Filtering Method for Nonlinear Estimation of Vortex Dynamics

Convergence of Particle Filtering Method for Nonlinear Estimation of Vortex Dynamics Convergence of Particle Filtering Method for Nonlinear Estimation of Vortex Dynamics Meng Xu Department of Mathematics University of Wyoming February 20, 2010 Outline 1 Nonlinear Filtering Stochastic Vortex

More information

Coherent and convex monetary risk measures for unbounded

Coherent and convex monetary risk measures for unbounded Coherent and convex monetary risk measures for unbounded càdlàg processes Patrick Cheridito ORFE Princeton University Princeton, NJ, USA dito@princeton.edu Freddy Delbaen Departement für Mathematik ETH

More information

Convexity in R n. The following lemma will be needed in a while. Lemma 1 Let x E, u R n. If τ I(x, u), τ 0, define. f(x + τu) f(x). τ.

Convexity in R n. The following lemma will be needed in a while. Lemma 1 Let x E, u R n. If τ I(x, u), τ 0, define. f(x + τu) f(x). τ. Convexity in R n Let E be a convex subset of R n. A function f : E (, ] is convex iff f(tx + (1 t)y) (1 t)f(x) + tf(y) x, y E, t [0, 1]. A similar definition holds in any vector space. A topology is needed

More information

Stochastic Calculus February 11, / 33

Stochastic Calculus February 11, / 33 Martingale Transform M n martingale with respect to F n, n =, 1, 2,... σ n F n (σ M) n = n 1 i= σ i(m i+1 M i ) is a Martingale E[(σ M) n F n 1 ] n 1 = E[ σ i (M i+1 M i ) F n 1 ] i= n 2 = σ i (M i+1 M

More information

A Class of Fractional Stochastic Differential Equations

A Class of Fractional Stochastic Differential Equations Vietnam Journal of Mathematics 36:38) 71 79 Vietnam Journal of MATHEMATICS VAST 8 A Class of Fractional Stochastic Differential Equations Nguyen Tien Dung Department of Mathematics, Vietnam National University,

More information

ON THE FIRST TIME THAT AN ITO PROCESS HITS A BARRIER

ON THE FIRST TIME THAT AN ITO PROCESS HITS A BARRIER ON THE FIRST TIME THAT AN ITO PROCESS HITS A BARRIER GERARDO HERNANDEZ-DEL-VALLE arxiv:1209.2411v1 [math.pr] 10 Sep 2012 Abstract. This work deals with first hitting time densities of Ito processes whose

More information

STA205 Probability: Week 8 R. Wolpert

STA205 Probability: Week 8 R. Wolpert INFINITE COIN-TOSS AND THE LAWS OF LARGE NUMBERS The traditional interpretation of the probability of an event E is its asymptotic frequency: the limit as n of the fraction of n repeated, similar, and

More information

Exact Simulation of Diffusions and Jump Diffusions

Exact Simulation of Diffusions and Jump Diffusions Exact Simulation of Diffusions and Jump Diffusions A work by: Prof. Gareth O. Roberts Dr. Alexandros Beskos Dr. Omiros Papaspiliopoulos Dr. Bruno Casella 28 th May, 2008 Content 1 Exact Algorithm Construction

More information

Numerical scheme for quadratic BSDEs and Dynamic Risk Measures

Numerical scheme for quadratic BSDEs and Dynamic Risk Measures Numerical scheme for quadratic BSDEs, Numerics and Finance, Oxford Man Institute Julien Azzaz I.S.F.A., Lyon, FRANCE 4 July 2012 Joint Work In Progress with Anis Matoussi, Le Mans, FRANCE Outline Framework

More information

B8.3 Mathematical Models for Financial Derivatives. Hilary Term Solution Sheet 2

B8.3 Mathematical Models for Financial Derivatives. Hilary Term Solution Sheet 2 B8.3 Mathematical Models for Financial Derivatives Hilary Term 18 Solution Sheet In the following W t ) t denotes a standard Brownian motion and t > denotes time. A partition π of the interval, t is a

More information

MATH MW Elementary Probability Course Notes Part I: Models and Counting

MATH MW Elementary Probability Course Notes Part I: Models and Counting MATH 2030 3.00MW Elementary Probability Course Notes Part I: Models and Counting Tom Salisbury salt@yorku.ca York University Winter 2010 Introduction [Jan 5] Probability: the mathematics used for Statistics

More information

Uncertainty in Kalman Bucy Filtering

Uncertainty in Kalman Bucy Filtering Uncertainty in Kalman Bucy Filtering Samuel N. Cohen Mathematical Institute University of Oxford Research Supported by: The Oxford Man Institute for Quantitative Finance The Oxford Nie Financial Big Data

More information

The Quantum Dice. V. S. Varadarajan. University of California, Los Angeles, CA, USA

The Quantum Dice. V. S. Varadarajan. University of California, Los Angeles, CA, USA The Quantum Dice V. S. Varadarajan University of California, Los Angeles, CA, USA UCLA, February 26, 2008 Abstract Albert Einstein once said that he does not believe in a God who plays dice. He was referring

More information

Stability of optimization problems with stochastic dominance constraints

Stability of optimization problems with stochastic dominance constraints Stability of optimization problems with stochastic dominance constraints D. Dentcheva and W. Römisch Stevens Institute of Technology, Hoboken Humboldt-University Berlin www.math.hu-berlin.de/~romisch SIAM

More information

Local vs. Nonlocal Diffusions A Tale of Two Laplacians

Local vs. Nonlocal Diffusions A Tale of Two Laplacians Local vs. Nonlocal Diffusions A Tale of Two Laplacians Jinqiao Duan Dept of Applied Mathematics Illinois Institute of Technology Chicago duan@iit.edu Outline 1 Einstein & Wiener: The Local diffusion 2

More information

Lecture 4: Introduction to stochastic processes and stochastic calculus

Lecture 4: Introduction to stochastic processes and stochastic calculus Lecture 4: Introduction to stochastic processes and stochastic calculus Cédric Archambeau Centre for Computational Statistics and Machine Learning Department of Computer Science University College London

More information

HJB equations. Seminar in Stochastic Modelling in Economics and Finance January 10, 2011

HJB equations. Seminar in Stochastic Modelling in Economics and Finance January 10, 2011 Department of Probability and Mathematical Statistics Faculty of Mathematics and Physics, Charles University in Prague petrasek@karlin.mff.cuni.cz Seminar in Stochastic Modelling in Economics and Finance

More information

L -uniqueness of Schrödinger operators on a Riemannian manifold

L -uniqueness of Schrödinger operators on a Riemannian manifold L -uniqueness of Schrödinger operators on a Riemannian manifold Ludovic Dan Lemle Abstract. The main purpose of this paper is to study L -uniqueness of Schrödinger operators and generalized Schrödinger

More information

On a class of stochastic differential equations in a financial network model

On a class of stochastic differential equations in a financial network model 1 On a class of stochastic differential equations in a financial network model Tomoyuki Ichiba Department of Statistics & Applied Probability, Center for Financial Mathematics and Actuarial Research, University

More information

A D VA N C E D P R O B A B I L - I T Y

A D VA N C E D P R O B A B I L - I T Y A N D R E W T U L L O C H A D VA N C E D P R O B A B I L - I T Y T R I N I T Y C O L L E G E T H E U N I V E R S I T Y O F C A M B R I D G E Contents 1 Conditional Expectation 5 1.1 Discrete Case 6 1.2

More information

Lecture Introduction

Lecture Introduction Lecture 1 1.1 Introduction The theory of Partial Differential Equations (PDEs) is central to mathematics, both pure and applied. The main difference between the theory of PDEs and the theory of Ordinary

More information

n E(X t T n = lim X s Tn = X s

n E(X t T n = lim X s Tn = X s Stochastic Calculus Example sheet - Lent 15 Michael Tehranchi Problem 1. Let X be a local martingale. Prove that X is a uniformly integrable martingale if and only X is of class D. Solution 1. If If direction:

More information

LECTURE 2: LOCAL TIME FOR BROWNIAN MOTION

LECTURE 2: LOCAL TIME FOR BROWNIAN MOTION LECTURE 2: LOCAL TIME FOR BROWNIAN MOTION We will define local time for one-dimensional Brownian motion, and deduce some of its properties. We will then use the generalized Ray-Knight theorem proved in

More information

Inference for Stochastic Processes

Inference for Stochastic Processes Inference for Stochastic Processes Robert L. Wolpert Revised: June 19, 005 Introduction A stochastic process is a family {X t } of real-valued random variables, all defined on the same probability space

More information

LMI Methods in Optimal and Robust Control

LMI Methods in Optimal and Robust Control LMI Methods in Optimal and Robust Control Matthew M. Peet Arizona State University Lecture 15: Nonlinear Systems and Lyapunov Functions Overview Our next goal is to extend LMI s and optimization to nonlinear

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 15. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

Least Squares Estimators for Stochastic Differential Equations Driven by Small Lévy Noises

Least Squares Estimators for Stochastic Differential Equations Driven by Small Lévy Noises Least Squares Estimators for Stochastic Differential Equations Driven by Small Lévy Noises Hongwei Long* Department of Mathematical Sciences, Florida Atlantic University, Boca Raton Florida 33431-991,

More information

Stochastic optimal control with rough paths

Stochastic optimal control with rough paths Stochastic optimal control with rough paths Paul Gassiat TU Berlin Stochastic processes and their statistics in Finance, Okinawa, October 28, 2013 Joint work with Joscha Diehl and Peter Friz Introduction

More information

Malliavin Calculus in Finance

Malliavin Calculus in Finance Malliavin Calculus in Finance Peter K. Friz 1 Greeks and the logarithmic derivative trick Model an underlying assent by a Markov process with values in R m with dynamics described by the SDE dx t = b(x

More information

Lecture 1. ABC of Probability

Lecture 1. ABC of Probability Math 408 - Mathematical Statistics Lecture 1. ABC of Probability January 16, 2013 Konstantin Zuev (USC) Math 408, Lecture 1 January 16, 2013 1 / 9 Agenda Sample Spaces Realizations, Events Axioms of Probability

More information

Short-time expansions for close-to-the-money options under a Lévy jump model with stochastic volatility

Short-time expansions for close-to-the-money options under a Lévy jump model with stochastic volatility Short-time expansions for close-to-the-money options under a Lévy jump model with stochastic volatility José Enrique Figueroa-López 1 1 Department of Statistics Purdue University Statistics, Jump Processes,

More information

Lecture 4: Ito s Stochastic Calculus and SDE. Seung Yeal Ha Dept of Mathematical Sciences Seoul National University

Lecture 4: Ito s Stochastic Calculus and SDE. Seung Yeal Ha Dept of Mathematical Sciences Seoul National University Lecture 4: Ito s Stochastic Calculus and SDE Seung Yeal Ha Dept of Mathematical Sciences Seoul National University 1 Preliminaries What is Calculus? Integral, Differentiation. Differentiation 2 Integral

More information

6.1 Moment Generating and Characteristic Functions

6.1 Moment Generating and Characteristic Functions Chapter 6 Limit Theorems The power statistics can mostly be seen when there is a large collection of data points and we are interested in understanding the macro state of the system, e.g., the average,

More information