Uniformly Uniformly-ergodic Markov chains and BSDEs

1 Uniformly Uniformly-ergodic Markov chains and BSDEs. Samuel N. Cohen, Mathematical Institute, University of Oxford. (Based on joint work with Ying Hu, Robert Elliott, Lukasz Szpruch.) Centre Henri Lebesgue, Rennes, May 2013. Research supported by the Oxford-Man Institute.

2 Outline: 1. Uniform Uniform-Ergodicity; 2. Markov chain BSDEs; 3. Markov chain Ergodic BSDEs; 4. Markov chain BSDEs to stopping times.

3 Outline (Part 1: Uniform Uniform-Ergodicity).

4 Markov chains. Consider a time-homogeneous finite/countable-state continuous-time Markov chain X. X is a process in $(\Omega, \mathcal F, \{\mathcal F_t\}_{t\ge 0}, P)$, where $\{\mathcal F_t\}$ is the natural filtration of X. X takes values in $\mathcal X$, the standard basis of $\mathbb R^N$, where N = number of states. X jumps from state $X_{t-}$ to state $e_i \in \mathcal X$ at rate $e_i^\top A X_{t-}$, so $X_t = X_0 + \int_{]0,t]} A X_u\,du + M_t$ for some $\mathbb R^N$-valued martingale M. A is an $\mathbb R^{N\times N}$ matrix, with $[A]_{ij} \ge 0$ for $i \ne j$, and $\mathbf 1^\top A e_i = \sum_j [A]_{ji} = 0$ (columns sum to zero).
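To make this setup concrete, here is a minimal simulation sketch (not from the slides; the 3-state rate matrix and the Gillespie-style sampler are illustrative assumptions), with the current state encoded by its index and jump rates read off the columns of A.

```python
# A minimal sketch (illustrative, not from the slides): simulating a
# continuous-time Markov chain with rate matrix A, following the slides'
# convention that [A]_{ij} >= 0 (i != j) is the rate of jumping *to* state i
# *from* state j, and that each column of A sums to zero.
import numpy as np

def simulate_chain(A, x0, T, seed=None):
    """Gillespie simulation on [0, T]; returns jump times and visited states."""
    rng = np.random.default_rng(seed)
    N = A.shape[0]
    times, states = [0.0], [x0]
    t, i = 0.0, x0
    while True:
        exit_rate = -A[i, i]                 # total rate of leaving state i
        if exit_rate <= 0:                   # absorbing state
            break
        t += rng.exponential(1.0 / exit_rate)
        if t > T:
            break
        rates = A[:, i].copy()               # column i = rates out of state i
        rates[i] = 0.0
        i = rng.choice(N, p=rates / exit_rate)
        times.append(t)
        states.append(i)
    return np.array(times), np.array(states)

# Example: a 3-state chain (columns sum to zero, off-diagonals nonnegative).
A = np.array([[-2.0, 1.0, 1.0],
              [ 1.0, -1.0, 0.0],
              [ 1.0, 0.0, -1.0]])
print(simulate_chain(A, x0=0, T=5.0, seed=0))
```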

5 Example. For N = 4, we can consider the Markov chain with a given 4x4 rate matrix A, which can be drawn as a graph. (The matrix and its graph are not reproduced in this transcription.)

6 Uniform ergodicity. If $X_0 \sim \mu$ for some probability measure μ on $\mathcal X$, we write $X_t \sim P_t\mu$. Writing μ as a vector, $P_t\mu = e^{At}\mu$. We say X is uniformly ergodic if $\|P_t\mu - \pi\|_{TV} \le R e^{-\rho t}$ for all μ, for some $R, \rho > 0$ and some probability measure π on $\mathcal X$. $\|\cdot\|_{TV}$ is the total variation norm (and, writing signed measures as vectors, $\|\cdot\|_{TV} = \|\cdot\|_{\ell_1}$). The measure π is the ergodic measure of the Markov chain and is unique. It also satisfies $P_t\pi = \pi$, so it is stationary. We call $(R,\rho)$ the parameters of ergodicity. Irreducible finite-state chains are always uniformly ergodic.
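A small numerical check of this definition, assuming the chain is small enough to exponentiate A directly (the matrix and the use of scipy are illustrative choices, not from the slides):

```python
# A minimal sketch (assumed numerics, not from the slides): computing
# P_t mu = exp(At) mu, the stationary measure pi, and the TV distance decay.
import numpy as np
from scipy.linalg import expm, null_space

A = np.array([[-2.0, 1.0, 1.0],
              [ 1.0, -1.0, 0.0],
              [ 1.0, 0.0, -1.0]])

# Stationary measure: the probability vector in the kernel of A (A pi = 0).
pi = null_space(A)[:, 0]
pi = pi / pi.sum()

mu = np.array([1.0, 0.0, 0.0])           # start in state 1
for t in [0.5, 1.0, 2.0, 4.0]:
    Pt_mu = expm(A * t) @ mu
    tv = np.abs(Pt_mu - pi).sum()        # slides identify TV with the l1 norm
    print(f"t = {t:4.1f}   ||P_t mu - pi||_TV = {tv:.4f}")
```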

7 Stability of Uniform Ergodicity. We can tie together the matrix A and the measure P: write $P^A$ for the measure under which X has rate matrix A, and say A is uniformly ergodic if X is uniformly ergodic under $P^A$. A change of measure then corresponds to using a different (possibly stochastic, time-varying) matrix B. Question: suppose the chain is uniformly ergodic under the measure $P^A$, with associated parameters $(R_A, \rho_A)$. If A and B are similar, can we say the chain is uniformly ergodic under $P^B$, and can we say anything about $(R_B, \rho_B)$? We wish to find a relationship between A and B under which we can get a uniform estimate of $\rho_B$.

8 Definition. For γ > 0, we say A γ-controls B (and write $A \succeq_\gamma B$) if $B - \gamma A$ is also a rate matrix. If $A \succeq_\gamma B$ and $B \succeq_\gamma A$, we write $A \approx_\gamma B$. Example: for a particular pair of 4x4 rate matrices A and B (not reproduced in this transcription), $A \approx_\gamma B$ for every $\gamma \le 1/2$.
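The γ-control condition is easy to test numerically. A minimal sketch (the 2x2 matrices below are hypothetical, not the slides' 4x4 example):

```python
# A minimal sketch (assumed helper, not from the slides): checking whether
# A gamma-controls B, i.e. whether B - gamma*A is again a rate matrix.
import numpy as np

def is_rate_matrix(Q, tol=1e-12):
    """Off-diagonal entries nonnegative and columns summing to zero."""
    off_diag_ok = np.all(Q - np.diag(np.diag(Q)) >= -tol)
    columns_ok = np.allclose(Q.sum(axis=0), 0.0, atol=tol)
    return off_diag_ok and columns_ok

def gamma_controls(A, B, gamma):
    """True if A gamma-controls B, i.e. B - gamma*A is a rate matrix."""
    return is_rate_matrix(B - gamma * A)

A = np.array([[-2.0, 1.0], [2.0, -1.0]])
B = np.array([[-1.0, 1.5], [1.0, -1.5]])
for gamma in (0.25, 0.5, 1.0):
    # mutual control, i.e. A approx_gamma B
    print(gamma, gamma_controls(A, B, gamma) and gamma_controls(B, A, gamma))
```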

9 Note: $\succeq_1$ is a partial order. If $A \approx_\gamma B$ for some γ > 0, then A and B have the same zero entries. Our key result is: Theorem. Suppose A is uniformly ergodic and $\gamma \in\, ]0,1[$. Then there exist constants $\tilde R, \tilde\rho$, dependent on γ and A, such that B is uniformly ergodic with parameters $(\tilde R, \tilde\rho)$ whenever $A \approx_\gamma B$. The key is that these constants are uniform in B, so these measures are uniformly uniformly-ergodic.

10 Outline (Part 2: Markov chain BSDEs).

11 Markov chain BSDEs. This result allows us to describe (uniformly) the long-run behaviour of Markov chains under a variety of measures. For infinite-horizon BSDEs, we can think of dynamically changing the measure, so such results are useful. Before we can make use of this connection, we need the basic theory of BSDEs when the noise is driven by a Markov chain.

12 Martingale Representation. Recall that $X_t = X_0 + \int_{]0,t]} AX_u\,du + M_t$. M is an $\mathbb R^N$-valued martingale with predictable quadratic variation $d\langle M\rangle_t = \psi_t\,dt = (\mathrm{diag}(AX_t) - AX_tX_t^\top - X_tX_t^\top A^\top)\,dt$. We have $E\big[(\int Z^\top dM)^2\big] = E\big[\int Z^\top\psi_t Z\,dt\big] =: E\big[\int \|Z\|^2_{M_t}\,dt\big]$. We can give a martingale representation theorem with respect to M, namely for any scalar martingale L, $L_t = L_0 + \int_{]0,t]} Z_u^\top dM_u$, and Z is unique up to equality in $\|\cdot\|_{M_t}$, where $\|z\|^2_{M_t} = z^\top\psi_t z$. Note $\|\mathbf 1\|^2_{M_t} \equiv 0$.
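A short sketch of these objects (the matrix and the function names are assumptions for illustration): it builds $\psi_t$ for the current state and verifies that the seminorm annihilates constant vectors, which is why Z is only unique up to $\|\cdot\|_{M_t}$.

```python
# A minimal sketch (illustrative, not from the slides): the predictable
# quadratic-variation matrix psi_t and the associated seminorm ||z||_{M_t}.
import numpy as np

def psi(A, x):
    """psi = diag(Ax) - A x x^T - x x^T A^T, for x a standard basis vector."""
    Ax = A @ x
    return np.diag(Ax) - np.outer(Ax, x) - np.outer(x, Ax)

def seminorm_sq(A, x, z):
    """||z||_{M_t}^2 = z^T psi_t z, a seminorm that kills constant vectors."""
    return z @ psi(A, x) @ z

A = np.array([[-2.0, 1.0, 1.0],
              [ 1.0, -1.0, 0.0],
              [ 1.0, 0.0, -1.0]])
x = np.array([1.0, 0.0, 0.0])             # chain currently in state e_1
z = np.array([0.3, -1.0, 2.0])
print(seminorm_sq(A, x, z))               # generally > 0
print(seminorm_sq(A, x, np.ones(3)))      # = 0: the constant vector 1 is killed
print(seminorm_sq(A, x, z + 5*np.ones(3)) - seminorm_sq(A, x, z))
# = 0 (up to rounding): adding a constant vector does not change the norm
```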

13 BSDE Existence and Uniqueness. Theorem. Let f be a predictable function with $|f(\omega,t,y,z) - f(\omega,t,y',z')|^2 \le c(|y-y'|^2 + \|z-z'\|^2_{M_t})$ for some c > 0, and $E[\int_{]0,T]} |f(\omega,t,0,0)|^2\,dt] < \infty$. Then for any $\xi \in L^2(\mathcal F_T)$ the BSDE $Y_t = \xi + \int_{]t,T]} f(\omega,u,Y_u,Z_u)\,du - \int_{]t,T]} Z_u^\top dM_u$ has a unique adapted solution (Y, Z) with appropriate integrability.

14 γ-balanced drivers. To get a comparison theorem, we need the following definition. Definition. f is γ-balanced if there exists a random field λ, with $\lambda(\cdot,\cdot,z,z')$ predictable and $\lambda(\omega,t,\cdot,\cdot)$ Borel measurable, such that $f(\omega,t,y,z) - f(\omega,t,y,z') = (z-z')^\top(\lambda(\omega,t,z,z') - AX_{t-})$; for each $e_i\in\mathcal X$, $\frac{e_i^\top\lambda(\omega,t,z,z')}{e_i^\top AX_{t-}} \in [\gamma,\gamma^{-1}]$ for some γ > 0 (where 0/0 := 1); $\mathbf 1^\top\lambda(\omega,t,z,z') = 0$; and $\lambda(\omega,t,z+\alpha\mathbf 1,z') = \lambda(\omega,t,z,z')$ for all $\alpha\in\mathbb R$.

15 Comparison theorem. Lemma. If f is γ-balanced for some γ > 0, then it is uniformly Lipschitz in z with respect to $\|\cdot\|_{M_t}$. Lemma. If $f(u;\cdot)$ is γ-balanced for each u, then $g(\cdot) := \operatorname{ess\,sup}_u\{f(u;\cdot)\}$ is γ-balanced (given it is always finite).

16 Comparison theorem. Lemma. If $B \approx_\gamma A$ then $f(\omega,t,y,z) = z^\top(B-A)X_{t-}$ is γ-balanced with $\lambda(\cdot) = BX_{t-}$, and $Y_t = E^B[\xi\mid\mathcal F_t]$. The purpose of this definition is to obtain: Theorem (Comparison theorem). Let f be γ-balanced for some γ > 0. Then if $\xi \ge \xi'$ and $f(\omega,t,y,z) \ge f'(\omega,t,y,z)$, the associated BSDE solutions satisfy $Y_t \ge Y'_t$ for all t, and $Y_t = Y'_t$ on $A\in\mathcal F_t$ iff $Y_s = Y'_s$ on A for all $s\ge t$.

17 Comparison theorem. The comparison theorem is easy to derive from: Lemma. Let f be γ-balanced. Then there exists a measure $Q \sim P$ such that $Y_t - Y'_t + \int_{]0,t]}\big(f(\omega,s,Y_s,Z_s) - f(\omega,s,Y'_s,Z'_s)\big)\,ds$ is a Q-martingale. Proof. Q is the measure where $\lambda(\omega,t,Z_t,Z'_t)$ is the compensator of X.

18 Markovian Solutions. We can also obtain a Feynman-Kac type result. Theorem. Let $f : \mathcal X\times[0,T]\times\mathbb R\times\mathbb R^N \to \mathbb R$, and consider the BSDE $Y_t = \phi(X_T) + \int_{]t,T]} f(X_u,u,Y_u,Z_u)\,du - \int_{]t,T]} Z_u^\top dM_u$ for some function $\phi : \mathcal X\to\mathbb R$. Then the solution satisfies $Y_t = u(t,X_t) = X_t^\top u_t$, $Z_t = u_t$, for u solving $\frac{du_t}{dt} = -(\mathbf f(t,u_t) + A^\top u_t)$; $e_i^\top u_T = \phi(e_i)$, where $e_i^\top\mathbf f(t,u_t) := f(e_i,t,e_i^\top u_t,u_t)$.
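Since the Markovian BSDE reduces to a terminal-value ODE for the vector $u_t$, it can be solved by any standard ODE routine. A minimal sketch, assuming a simple discounting driver $f = -0.5y$ so the answer can be checked against a matrix exponential (driver, matrix and horizon are illustrative, not from the slides):

```python
# A minimal sketch (assumed discretisation, not from the slides): the Markovian
# BSDE solution Y_t = X_t' u_t via the terminal-value ODE
#     du_t/dt = -(f(t, u_t) + A' u_t),   u_T = phi,
# integrated backwards from T with scipy's ODE solver.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

A = np.array([[-2.0, 1.0, 1.0],
              [ 1.0, -1.0, 0.0],
              [ 1.0, 0.0, -1.0]])
phi = np.array([1.0, 0.0, 0.0])           # terminal values phi(e_i)
T = 2.0

def driver(i, t, y, z):
    """Illustrative driver f(e_i, t, y, z); here a simple discounting term."""
    return -0.5 * y

def ode_rhs(t, u):
    f_vec = np.array([driver(i, t, u[i], u) for i in range(len(u))])
    return -(f_vec + A.T @ u)

# Integrate backwards: substitute s = T - t so the terminal condition becomes
# an initial condition at s = 0.
sol = solve_ivp(lambda s, v: -ode_rhs(T - s, v), (0.0, T), phi, dense_output=True)
u0 = sol.sol(T)                           # u at time t = 0
print("Y_0 given X_0 = e_i:", u0)
# Sanity check: with f = -0.5*y the solution is u_0 = exp(T (A' - 0.5 I)) phi.
print("closed form:        ", expm(T * (A.T - 0.5*np.eye(3))) @ phi)
```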

19 Application to Control. Consider the problem $\min_u E^u\big[e^{-rT}\phi(X_T) + \int_{]0,T]} e^{-rt}L(u_t;X_t,t)\,dt\big]$, where u is a predictable process with values in U, and $E^u$ is the expectation under which X has compensator $\lambda_t(u_t)\in\mathbb R^N$. Suppose $\lambda_t(\cdot)$ satisfies $\frac{e_i^\top\lambda_t(\cdot)}{e_i^\top AX_{t-}} \in [\gamma,\gamma^{-1}]$ for some γ > 0. Define the Hamiltonian $f(\omega,t,y,z) = -ry + \inf_{u\in U}\{L(u;X_t,t) + z^\top(\lambda_t(u) - AX_t(\omega))\}$. The dynamic value function $Y_t$ satisfies the BSDE with driver f and terminal value $\phi(X_T)$. By the comparison theorem, we have existence and uniqueness of an optimal feedback control.
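As a sketch of how such a Hamiltonian is evaluated in practice (the two-element control set, rate matrices and running cost below are hypothetical, not from the slides), one minimises over the finite control set pointwise:

```python
# A minimal sketch (hypothetical control problem, not from the slides): a
# Hamiltonian-type driver obtained by minimising over a finite control set U,
# where each control u picks a different rate matrix B_u, so lambda_t(u) = B_u x.
import numpy as np

A  = np.array([[-2.0, 1.0, 1.0],
               [ 1.0, -1.0, 0.0],
               [ 1.0, 0.0, -1.0]])
controls = {"slow": 0.5 * A, "fast": 2.0 * A}   # both are valid rate matrices
r = 0.05

def running_cost(u, x_index, t):
    """Illustrative running cost L(u; x, t)."""
    return 1.0 if u == "fast" else 0.2

def hamiltonian(x_index, t, y, z):
    """f(x, t, y, z) = -r*y + inf_u { L(u; x, t) + z'(B_u x - A x) }."""
    x = np.eye(A.shape[0])[x_index]
    values = [running_cost(u, x_index, t) + z @ ((B - A) @ x)
              for u, B in controls.items()]
    return -r * y + min(values)

print(hamiltonian(x_index=0, t=0.0, y=1.0, z=np.array([0.0, 1.0, -1.0])))
```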

20 Outline (Part 3: Markov chain Ergodic BSDEs).

21 Ergodic BSDEs. Hu, Fuhrman, Tessitore and collaborators have introduced ergodic BSDEs. These are known to relate to ergodic control problems. In a Brownian setting, existence/uniqueness of these equations depends on analysis of uniform ellipticity estimates of perturbations of the generator of the underlying X. These methods do not directly map to the Markov chain case. Instead we will use the uniform ergodicity estimates we have derived.

22 Infinite-Horizon BSDEs. The first step is to establish existence of infinite-horizon BSDEs. Theorem. Let α > 0, and consider the equation $dY_t = -(f(\omega,t,Z_t) - \alpha Y_t)\,dt + Z_t^\top dM_t$, $t\in[0,\infty[$. If $|f(\omega,t,z)| < C$, this admits a solution (Y, Z) with $|Y_t| < C/\alpha$ for $t\in[0,\infty[$, and this is the unique bounded solution. Proof. Let $Y^T_t$ solve the BSDE $dY_t = -(f(\omega,t,Z_t) - \alpha Y_t)\,dt + Z_t^\top dM_t$; $Y^T_T = 0$. Then show $|Y^T_t| \le C/\alpha$ for all T and $Y^T \to Y$ (uniformly on compacts).

23 EBSDE Existence/Uniqueness. Theorem (SC & Hu 2012). The ergodic BSDE $dY_t = -(f(X_t,Z_t) - \lambda)\,dt + Z_t^\top dM_t$ admits a bounded solution (Y, Z, λ) with $Y_t = u(X_t)$ and $u(\hat x) = 0$, whenever f is γ-balanced with $|f(x,0)| < C$. Any other bounded solution has the same λ; any other bounded Markovian solution has $Y'_t = Y_t + c$. Our Feynman-Kac type result gives, with $Y = u(X_t) = X_t^\top u$, $Z = u$, the finite system $\mathbf f(u) - \lambda\mathbf 1 = -A^\top u$, which is readily calculable for small dimensions.
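For a small chain, the system $\mathbf f(u) - \lambda\mathbf 1 = -A^\top u$ together with the normalisation $u(\hat x) = 0$ can be handed to a generic root-finder. A minimal sketch with an illustrative γ-balanced driver $f(x,z) = c(x) + z^\top(B-A)x$ (the matrices and cost are assumptions, not from the slides); for this linear driver, λ should come out as the long-run average of c under B:

```python
# A minimal sketch (assumed solver, not from the slides): solving
#   f(u) - lambda*1 = -A'u,   with u = 0 at the reference state x_hat.
import numpy as np
from scipy.optimize import root

A = np.array([[-2.0, 1.0, 1.0],
              [ 1.0, -1.0, 0.0],
              [ 1.0, 0.0, -1.0]])
B = np.array([[-1.0, 0.5, 2.0],
              [ 0.5, -0.5, 0.0],
              [ 0.5, 0.0, -2.0]])          # gamma-balanced w.r.t. A (gamma = 1/2)
cost = np.array([1.0, 0.0, 0.0])           # c(e_i)
N = A.shape[0]

def f_vec(u):
    # componentwise: f(e_i, u) = c(e_i) + u'(B - A)e_i
    return cost + (B - A).T @ u

def residual(w):
    u, lam = w[:N], w[N]
    return np.append(f_vec(u) - lam + A.T @ u, u[-1])  # last eq: u(x_hat) = 0

sol = root(residual, np.zeros(N + 1))
u, lam = sol.x[:N], sol.x[N]
print("u =", u, "lambda =", lam)           # lambda = long-run average of c under B
```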

24 Proof. Denote the infinite-horizon BSDE solution $Y^\alpha_t = u^\alpha(X_t)$. Then for some rate matrices $B^T \approx_\gamma A$, $u^\alpha(x) = \lim_{T\to\infty} E^{B^T}\big[\int_{]0,T]} e^{-\alpha u} f(X_u,0)\,du \,\big|\, X_0 = x\big]$. So, with $(\tilde R,\tilde\rho)$ the uniform convergence bounds on $P^B_u\mu$, $|u^\alpha(x) - u^\alpha(x')| \le C\lim_{T\to\infty}\int_{]0,T]} e^{-\alpha u}\,\|P^{B^T}_u\delta_x - P^{B^T}_u\delta_{x'}\|_{TV}\,du \le C\tilde R(\alpha+\tilde\rho)^{-1} \le C\tilde R/\tilde\rho < \infty$. A diagonal argument implies $u^\alpha(x) - u^\alpha(\hat x)\to u(x)$ and $\alpha u^\alpha(\hat x)\to\lambda$, as $\alpha\to 0$.

25 Properties. The comparison theorem does not hold for Y for EBSDEs. However, it does hold for λ. Theorem. Let f, f' be γ-balanced and $f \le f'$. Then we have $\lambda \le \lambda'$ in the EBSDE solutions. Theorem. Let $\pi^A$ denote the ergodic measure when X has rate matrix A. Then for some $B^u \approx_\gamma A$, $\lambda = \int_{\mathcal X} f(x,u)\,d\pi^A(x) = \int_{\mathcal X} f(x,0)\,d\pi^{B^u}(x)$ and $Y_0 + c = \int_{\mathcal X} f(x,u)\,d\mu^A_{X_0}(x) = \int_{\mathcal X} f(x,0)\,d\mu^{B^u}_{X_0}(x)$, where $\mu^A_x = \int_{\mathbb R^+}(P^A_u\delta_x - \pi^A)\,du$.

26 Applications. Application to ergodic control is similar to the finite-horizon case, with value $\lambda = \min_u \limsup_{T\to\infty} E^u\big[\frac 1T\int_{]0,T]} L(u_t;X_t)\,dt\big]$. Write f as the Hamiltonian; the comparison theorem gives an optimal feedback control, etc. Instead, we will look at creating spatially stable nonlinear probabilities on graphs. These attempt to generalize the ergodic distribution of a Markov chain to a nonlinear setting.

27 Applications. Consider the driver $f(x,z) = I_{\{x\in\Xi\}} + g(x,z)$. Then if $g \equiv 0$, the EBSDE solution will be $\lambda = \pi(\Xi)$. If g is convex, we will have a convex ergodic probability. If $g(x,z) = \sup_{B\in\mathcal B}\{z^\top(B-A)x\}$, when $\mathcal B$ has column-exchangeability, then we can show that $\lambda_\Xi = \sup_{B\in\mathcal B}\pi^B(\Xi)$. The global balance equation $A\pi = 0$ becomes the Isaacs condition $\sup_{B\in\mathcal B}\inf_{B'\in\mathcal B}\int_{\mathcal X} v^\top B'x\,d\pi^B(x) \ge 0$ for all $v\in\mathbb R^N$.

28 Nonlinear Pagerank. The Pagerank algorithm is modelled on a Markov chain randomly following links in a graph (e.g. the WWW) at a constant rate. Nodes are ranked according to the ergodic probability that the chain is at each node. What happens if we instead dynamically take the minimum over a range of rate matrices?

29 Pagerank example. Consider an undirected graph (not reproduced in this transcription). Our Markov chain moves between states randomly, at a constant (exit) rate.

30 Pagerank example. The ergodic probabilities/Pagerank under this basic model can be simply calculated; the node values shown on the graph are 11.1%, 5.6%, 11.1%, 16.7%, 11.1%, 16.7%, 16.7%, 11.1%. We have three states which are equally highest ranked, even though they have different positions in the graph.

31 Pagerank example. Now suppose a bias can be introduced dynamically, so that the relative rate of jumping to the right/left can be doubled. Then the EBSDE can be solved with drivers $f(x,z) = I_{\{x\in\Xi\}} + \min_B\{z^\top(B-A)x\}$ for each $\Xi\subseteq\mathcal X$. The corresponding values $\lambda_\Xi$ (for Ξ singletons) shown on the graph are 5.5%, 2.0%, 4.3%, 10.3%, 4.8%, 7.8%, 11.4%, 6.8%.
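A sketch of this "nonlinear pagerank" computation on a hypothetical 4-node cycle (the graph and the family of biased rate matrices are assumptions; the slides' 8-node example is not reproduced):

```python
# A minimal sketch (hypothetical graph and bias set, not from the slides):
# a "nonlinear pagerank" value lambda_Xi from the EBSDE system
#   f(u) - lambda*1 = -A'u,   f(x, z) = 1_{x in Xi} + min_B z'(B - A)x,
# where B ranges over a finite set of biased rate matrices.
import numpy as np
from scipy.optimize import root

# Random walk on a 4-cycle at constant exit rate 1.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
A = adj / adj.sum(axis=0) - np.eye(4)     # columns sum to zero

def biased(weights):
    """Double (or rescale) the relative rate of jumping towards weighted nodes."""
    Q = adj * weights[:, None]
    return Q / Q.sum(axis=0) - np.eye(4)

Bs = [A] + [biased(w) for w in np.eye(4) + 1.0]   # bias towards each node in turn

def solve_lambda(Xi):
    ind = np.array([1.0 if i in Xi else 0.0 for i in range(4)])
    def residual(w):
        u, lam = w[:4], w[4]
        f = ind + np.array([min(u @ ((B - A)[:, i]) for B in Bs) for i in range(4)])
        return np.append(f - lam + A.T @ u, u[-1])    # normalisation u(x_hat) = 0
    return root(residual, 0.01 * np.arange(5)).x[4]

for i in range(4):
    print(f"lambda for Xi = {{{i}}}: {solve_lambda({i}):.3f}")
```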

32 Pagerank example. We can also assess collections of states, e.g. Ξ = {1,3,4}, {2,3,4}, {3,4,5} and {4,5,6} each give a value $\lambda_\Xi$ (the numerical values are not reproduced in this transcription). Note all these collections have the same ergodic measure under the basic model. Peculiarly, $\lambda_{\{2\}} > \lambda_{\{5\}}$, but $\lambda_{\{2,3,4\}} < \lambda_{\{3,4,5\}}$. In this way we can assess the centrality of a collection of states.

33 Outline (Part 4: Markov chain BSDEs to stopping times).

34 BSDEs to stopping times. Another interesting class of BSDEs is where the terminal time is a stopping time. Here we want $Y_t = \xi + \int_{]t,\tau]} f(\omega,u,Y_u,Z_u)\,du - \int_{]t,\tau]} Z_u^\top dM_u$, where τ is a stopping time and ξ is $\mathcal F_\tau$-measurable. If $\frac{f(\omega,t,y,z) - f(\omega,t,y',z)}{y-y'} \le -\alpha$ for some α > 0, then we can use the discounted methods considered earlier. This does not cover the interesting case $f(x,z) = z^\top(B-A)x$.

35 BSDEs to stopping times. We will assume sufficient bounds on ξ and τ that our BSDE admits a unique solution with polynomial growth. This is similar to the backward heat equation admitting only one polynomial-growth solution. Of use will be the family $\mathcal Q_\gamma$ of measures where X has compensator λ with $\frac{e_i^\top\lambda(\omega,t)}{e_i^\top AX_{t-}} \in [\gamma,\gamma^{-1}]$ for all i. These measures are singular on $\mathcal F$ but equivalent on $\mathcal F_\tau$.

36 BSDEs to stopping times. Imagine that we can prove the following bounds. Assumption. For some nondecreasing $K, \bar K : \mathbb R^+ \to [1,\infty[$ and constants $\beta, \bar\beta > 0$: $E^Q[|\xi| \mid \mathcal F_t] \le K(t)$, $E^Q[(1+\tau)^{1+\beta} \mid \mathcal F_t] \le K(t)$, $E^Q[\bar K(\tau)^{1+\bar\beta} \mid \mathcal F_t] \le K(t)$, all P-a.s., for all $Q\in\mathcal Q_\gamma$ and all t.

37 BSDEs to stopping times. Theorem. Let ξ, τ satisfy the previous bounds, and suppose f is γ-balanced and such that, for any $y, y', z$, $|f(\omega,t,0,0)| \le c(1+t^{\hat\beta})$ and $\frac{f(\omega,t,y,z) - f(\omega,t,y',z)}{y-y'} \in [-c,0]$, for some $c\in\mathbb R$ and some $\hat\beta\in[0,\beta[$. Then the BSDE $Y_t = \xi + \int_{]t,\tau]} f(\omega,u,Y_u,Z_u)\,du - \int_{]t,\tau]} Z_u^\top dM_u$ admits a unique adapted solution satisfying the bound $|Y_t| \le (1+c)K(t)$.

38 BSDEs to stopping times. Proof. Approximate with the finite-horizon BSDE $Y_t = I_{\{\tau\le T\}}\xi + \int_{]t,\tau\wedge T]} f(\omega,u,Y_u,Z_u)\,du - \int_{]t,\tau\wedge T]} Z_u^\top dM_u$ and let $T\to\infty$, using the Hölder and Markov inequalities to prove convergence on compacts. We can also prove the comparison theorem, which implies... Theorem. If the conditions on f hold only locally, and ξ and $f(\omega,t,0,0)$ are bounded, then by the comparison theorem everything works, given the conditions on τ. This isn't too interesting if we can't verify the assumptions on τ.

39 Hitting times work! Theorem. If τ is the first hitting time of a set $\Xi\subseteq\mathcal X$, and $\xi = g(\tau,X_\tau)$ for some $|g(t,x)| \le c(1+t^{\hat\beta})$, then the assumptions are satisfied. Proof. As we have uniform ergodicity estimates, we can show that τ admits exponential moments, uniformly bounded under all Markovian measures in $\mathcal Q_\gamma$. By applying the comparison theorem to finite-time approximations, we can extend this to all measures in $\mathcal Q_\gamma$. By extension, this also holds for n-th hitting times, etc.

40 Markovian Structure. We can also give a Feynman-Kac type result. Theorem. Let τ be the first hitting time of a set $\Xi\subseteq\mathcal X$ and let f satisfy the conditions. Consider the BSDE $Y_t = \phi(X_\tau) + \int_{]t,\tau]} f(X_u,Y_u,Z_u)\,du - \int_{]t,\tau]} Z_u^\top dM_u$, where φ is a bounded function $\mathcal X\to\mathbb R$. Then there exists a bounded vector $u\in\mathbb R^N$ such that $Y_t = X_t^\top u$, $Z_t = u$, and $f(x, x^\top u, u) = -u^\top Ax$ for $x\in\mathcal X\setminus\Xi$, with boundary values $x^\top u = \phi(x)$ for $x\in\Xi$.
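Concretely, the hitting-time BSDE reduces to the finite nonlinear system in the theorem above. A minimal sketch (hypothetical 4-state chain, not from the slides); with f = 0 it recovers the classical probability of hitting one boundary state before another, as used in the network-reliability example that follows:

```python
# A minimal sketch (hypothetical example, not from the slides): the BSDE to a
# first hitting time reduces to
#   f(e_i, u_i, u) = -u'A e_i  for i not in Xi,    u_i = phi(e_i) for i in Xi,
# solved here with a root-finder (trivially, since f = 0 makes it linear).
import numpy as np
from scipy.optimize import root

A = np.array([[-2.0, 1.0, 0.0, 0.0],
              [ 1.0, -2.0, 1.0, 0.0],
              [ 1.0, 1.0, -2.0, 0.0],
              [ 0.0, 0.0, 1.0, 0.0]])     # last state absorbing
Xi = [0, 3]                               # stop on hitting state index 0 or 3
phi = {0: 0.0, 3: 1.0}                    # terminal value 1 at state 3, 0 at state 0

def driver(i, y, u):
    return 0.0                            # f = 0: plain hitting probability

def residual(u):
    res = np.empty(4)
    for i in range(4):
        res[i] = u[i] - phi[i] if i in Xi else driver(i, u[i], u) + u @ A[:, i]
    return res

u = root(residual, np.zeros(4)).x
print("P(hit state 3 before state 0 | start in state i) =", u)
```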

41 Applications: Network Reliability. Consider a model for transmission of messages over our graph. Messages are passed randomly between nodes; we wish to assess whether a message created at node 1 will reach node 8 before being killed at the faulty node 5. This probability corresponds to a terminal value $\phi(x) = I_{\{x=e_8\}}$ at the time $\tau = \inf\{t : X_t = e_5 \text{ or } e_8\}$.

42 Applications: Network Reliability. Without control, the probability (node by node, as shown on the graph) is 45%, 50%, 55%, 36%, 0%, 50%, 64%, 100%. With our control, we have 68%, 67%, 73%, 59%, 0%, 67%, 78%, 100%.

43 Applications: Non-Ohmic Circuits. Markov chains can be used to model electronic circuits of resistors. If $w_{ij} = 1/R_{ij}$ is the conductance of a connection (with resistance $R_{ij}$), then the voltage potential v satisfies $v(i) = \phi(i)$ for $i\in\Xi$ (Ξ a source set), and $v(i)\sum_j w_{ij} = \sum_j w_{ij}v(j)$ for $i\notin\Xi$. This depends on Ohm's law and Kirchhoff's current law. For finite circuits, the voltage potential is the expected value at the first hitting time of Ξ, for a Markov chain with $A_{ij} = w_{ji}$ for $i\ne j$.
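A minimal sketch of this correspondence for an ordinary (Ohmic) network, under an assumed 4-node circuit: the harmonic system below is exactly the linear hitting-time system of the previous slides.

```python
# A minimal sketch (hypothetical circuit, not from the slides): the voltage
# potential of a resistor network from the harmonic system
#   v(i) = phi(i) on the source set Xi,   v(i) sum_j w_ij = sum_j w_ij v(j) else,
# which is also E[phi(X_tau)] for the chain with A_ij = w_ji (i != j).
import numpy as np

# Conductances w_ij = 1/R_ij of a 4-node circuit (symmetric, 0 = no connection).
w = np.array([[0.0, 1.0, 2.0, 0.0],
              [1.0, 0.0, 1.0, 1.0],
              [2.0, 1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.0]])
Xi = [0, 3]
phi = {0: 5.0, 3: 0.0}            # node 1 held at 5V, node 4 grounded

N = w.shape[0]
L = np.zeros((N, N))
b = np.zeros(N)
for i in range(N):
    if i in Xi:
        L[i, i], b[i] = 1.0, phi[i]
    else:
        L[i, i] = w[i].sum()      # v(i) * sum_j w_ij ...
        L[i, :] -= w[i]           # ... minus sum_j w_ij v(j)
v = np.linalg.solve(L, b)
print("node voltages:", v)
```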

44 Applications: Non-Ohmic Circuits. Diodes, transistors, etc. do not satisfy Ohm's law. For a diode, the effective resistance is $R^v_{ij} = \frac{v(j)-v(i)}{c_i\big(\exp([v(j)-v(i)]/c_v) - 1\big)} > 0$. Write $B^v$ for the rate matrix with entries $B^v_{ij} = 1/R^v_{ji}$. We can write the voltage potential as solving the BSDE to the first hitting time of Ξ with driver $f(x,z) = z^\top(B^z - A)x$, where A is generated from a reference resistor circuit. One can show the required conditions are (locally) satisfied, so for any finite source potential we have unique solutions.

45 Applications: Non-Ohmic Circuits: Example. Consider an AND logic gate, with a +5V supply, inputs at +0V and +0.1V, and diodes with parameter $c_v = 0.1$. (The circuit diagram and the remaining component values are not reproduced in this transcription.)

46 Connection to EBSDE. The following theorem gives a relation between ergodic BSDEs and BSDEs to stopping times. Theorem. Let $\hat f(x,z) = f(x,z) - \int_{\mathcal X} f(x,z)\,d\pi^A(x)$. Then the BSDE to $\tau = \inf\{t : X_t = \hat x\}$ with driver $\hat f$ and terminal value ξ = 0 agrees with the EBSDE with driver f and $u(\hat x) = 0$. Hence the corresponding BSDEs to $\tau\wedge T$ converge uniformly to the EBSDE solution. This suggests an alternative Monte Carlo approach to numerical computation of EBSDE solutions, which is of importance for large/infinite-state chains.
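A sketch of the suggested Monte Carlo route in the simple case of a driver $f(x,z) = c(x)$ that does not depend on z (the chain, cost and sample size are assumptions, not from the slides): then u(x) is the expected integral of $c(X_s) - \lambda$ up to the hitting time of $\hat x$, which can be simulated directly and compared with the linear-algebra solution.

```python
# A minimal sketch (assumed Monte Carlo scheme, not from the slides): for a
# driver f(x, z) = c(x), the hitting-time BSDE gives
#   u(x) = E[ int_0^tau (c(X_s) - lambda) ds | X_0 = x ],  tau = hit time of x_hat.
import numpy as np
from scipy.linalg import null_space

A = np.array([[-2.0, 1.0, 1.0],
              [ 1.0, -1.0, 0.0],
              [ 1.0, 0.0, -1.0]])
c = np.array([1.0, 0.0, 0.0])
x_hat = 2                                 # normalise u at the third state

pi = null_space(A)[:, 0]; pi /= pi.sum()
lam = pi @ c                              # lambda = integral of f(., 0) d pi^A

def one_path(start, rng):
    i, total = start, 0.0
    while i != x_hat:
        rate = -A[i, i]
        dt = rng.exponential(1.0 / rate)
        total += (c[i] - lam) * dt
        p = A[:, i].copy(); p[i] = 0.0
        i = rng.choice(len(c), p=p / rate)
    return total

rng = np.random.default_rng(1)
u_mc = [np.mean([one_path(s, rng) for _ in range(10000)]) for s in range(3)]
print("Monte Carlo u:", u_mc)             # u(x_hat) = 0 by construction
# Direct solve of f(u) - lambda*1 = -A'u with u(x_hat) = 0, for comparison:
M = np.vstack([A.T, np.eye(3)[x_hat]])
b = np.append(lam - c, 0.0)
u_exact, *_ = np.linalg.lstsq(M, b, rcond=None)
print("direct u:      ", u_exact)
```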

47 Conclusions. We have seen that one can obtain uniform uniform-ergodicity estimates for Markov chains under perturbations of the rate matrix. With these, one can study solutions to ergodic BSDEs and BSDEs to hitting times. Solutions are readily calculable for small examples, and can be used to model various problems. Open problems include removing the γ-balanced requirement, including y in the driver of EBSDEs, better numerics...
