
ABSTRACT

WANG, TIANHENG. Hybrid Impulsive and Mean Field Control Systems. (Under the direction of Negash Medhin.)

This dissertation deals with hybrid impulsive optimal control problems and mean field game/control problems. In addition to continuous control, which is well studied in optimal control theory, the systems in our study are also influenced by impulsive controls, which change the state of the system at discrete times. Hybrid impulsive control problems receive considerable attention for their wide applications in epidemiology, economics and sociology. We will discuss impulsive optimal control in both its deterministic and stochastic versions. Perturbation methods will be used to derive the Pontryagin Minimum Principle, which characterizes the necessary conditions for the optimal control and trajectory. In stochastic problems the necessary conditions involve coupled forward-backward stochastic differential equations (FBSDEs). The solution to the coupled forward-backward stochastic differential equation with jump conditions will be provided. Dynamic programming is also discussed, and the comparison between the Hamilton-Jacobi-Bellman (HJB) equation and the Minimum Principle will be illustrated. A multi-group SIR with vaccination model will be studied as an example in detail, and numerical results will be given at the end of the chapters. Mean field game/control is an extension of stochastic optimal control which deals with a control problem involving a large number of interacting agents. Because the evolution of the system satisfies a measure-valued SDE, the Minimum Principle or HJB equation characterizing the optimal control will be coupled with the Fokker-Planck equation, which describes the evolution of the probability distribution of the state process. We will discuss an interesting problem of a mean field game with a dominating player, where the dominating player makes decisions based on the behavior of a representative player rather than individual minor players.
At the end we will discuss impulsive mean field control problems.

Copyright 2018 by Tianheng Wang

All Rights Reserved

Hybrid Impulsive and Mean Field Control Systems

by
Tianheng Wang

A dissertation submitted to the Graduate Faculty of North Carolina State University in partial fulfillment of the requirements for the Degree of Doctor of Philosophy

Applied Mathematics

Raleigh, North Carolina
2018

APPROVED BY:

Zhilin Li

Tao Pang

Reha Uzsoy

Negash Medhin
Chair of Advisory Committee

BIOGRAPHY

Tianheng Wang was born and raised in Wenzhou, a small city located on the east coast of China. In the year 2008, he went to Zhejiang University and started his study of mathematics. In the year 2012, he came to the United States to continue his relationship with mathematics at NC State University. He worked with Doctor Medhin on optimal control problems.

ACKNOWLEDGEMENTS

I would like to thank the Department of Mathematics of NC State University for providing financial support. I would like to express my gratitude to all the professors who gave me excellent lectures. They are Dr. Bociu, Dr. Campbell, Dr. Chertock, Dr. Chu, Dr. Haider, Dr. Kang, Dr. Medhin and Dr. Putcha. I would like to thank my advisor Dr. Medhin for being my mentor and inspiration. He provided me guidance and help through all these years. I cannot remember how many times he stayed up late revising my work. I would like to thank the members of my committee, Dr. Li, Dr. Pang and Dr. Uzsoy, for their help and encouragement. I would like to thank my father and mother. Though we are thousands of miles apart, their support and unconditional love is always with me.

TABLE OF CONTENTS

List of Tables
List of Figures

Chapter 1  Deterministic Impulsive Control Problems
    Deterministic Multi-group SIR Model
        Stability
        Vaccination
    Deterministic Impulsive Control Problem
        Necessary Conditions by Methods of Variation of Calculus
    Numerical Solution to SIR Model

Chapter 2  Stochastic Impulsive Control
    Stochastic Multi-group SIR Model
        Stability of Disease Free Equilibrium
        Stability of Endemic Equilibrium
        Vaccination
    Stochastic Impulsive Control Problems
        Necessary Conditions by Methods of Variation of Calculus
        Forward Backward Stochastic Differential Equations
        Necessary Conditions by Dynamic Programming Approach
    Numerical Solution to Stochastic SIR Model
    Proof of A Lemma

Chapter 3  Mean Field Control
    Master Equation for the SIR Model
    Stochastic Interacting System
        Terms and Notations
        Inter-Banks Lending and Borrowing
        Multi-Objective Problem
    Mean Field Game
        Limit of Empirical Distribution
        Methods of Variation of Calculus
        Solution to Multi-Objective Problem and the Representative Player
    Mean Field Game with a Dominating Player
        Representative Player
        Optimal Control for the Representative Player and the Dominating Player
        Numerical Experiment
    Impulsive Mean Field Control
        Mean Field Type Control
        Impulsive Mean Field Control
        Numerical Result
    Summary and Future Work
        Summary
        Future Work

References

LIST OF TABLES

Table 1.1  List of Costs Tested by Varying Controls
Table 2.1  List of Cost Tested by Varying Controls
Table 3.1  List of Cost of the Representative Agent
Table 3.2  List of Cost of the Dominating Agent
Table 3.3  List of Cost Tested by Varying Controls
Table 3.4  Value of Cost Function

LIST OF FIGURES

Figure 1.1   Populations of Susceptibles in All Groups under Optimal Controls
Figure 1.2   Populations of Infected in All Groups under Optimal Controls
Figure 1.3   Optimal Control of City 1: û_{1j} = (η_1 − ξ_1)S_1I_j
Figure 1.4   Optimal Control of City 2: û_{2j} = (η_2 − ξ_2)S_2I_j
Figure 1.5   Optimal Control of City 3: û_{3j} = (η_3 − ξ_3)S_3I_j
Figure 1.6   Populations of Susceptibles in All Groups under Zero Controls
Figure 1.7   Populations of Infected in All Groups under Zero Controls
Figure 2.1   Populations of Susceptible in Different Groups under Optimal Controls
Figure 2.2   Populations of Infected in Different Groups under Optimal Controls
Figure 2.3   Expected Populations of Susceptible in Different Groups under Zero Controls
Figure 2.4   Expected Populations of Infected in Different Groups under Zero Controls
Figure 2.5   Value Function V(S_1, S_2 = 0.75, I_1, I_2 = )
Figure 2.6   Value Function V(S_1, S_2, I_1 = 0.5, I_2 = )
Figure 2.7   Value Function V(S_1 = 0.75, S_2 = 0.5, I_1, I_2)
Figure 3.1   Path/State of Optimal Control of Repr. Player: û(t) = −C(P(t)x(t) + g(t))/R
Figure 3.2   Path/State of Optimal Control of Dominating Player: û(t) = −R^{−1}C p(t)
Figure 3.3   Optimal Path/State of Representative Player
Figure 3.4   Optimal Path/State of Dominating Player
Figure 3.5   Evolution of Distribution
Figure 3.6   Shift of Distribution
Figure 3.7   A Particular Path of Optimal Control: û(t) = α(E[X_t] − x)
Figure 3.8   An Optimal Path of State
Figure 3.9   Expected Optimal Control
Figure 3.10  Expected Optimal Trajectory: m(t) = E[X_t]

Chapter 1

Deterministic Impulsive Control Problems

In this chapter we study deterministic impulsive optimal control problems. In many control problems changes in the dynamics occur unexpectedly or are applied by a controller as needed. The times at which changes occur are not necessarily known a priori, or they are probabilistic. In manufacturing systems and flight operations, changes in the control system may be automatically implemented as needed in response to possibly unexpected external factors affecting the operations of the system. In health care it is necessary to launch a reasonably effective and timely policy to deal with infectious disease epidemics, long-term and short-term [2], [3], [12], [15], [18], [19], [25]. Thus, impulsive control problems have received considerable attention for their wide applications in engineering, epidemiology, economics and sociology. The multi-group SIR model is important in studying the spread of diseases. Taking migration into consideration, the rates of interaction between groups and within groups are important features of the model. The stability of the dynamical system and the reproduction number differ from those of the single-group SIR model [16], [26], [31]. The positivity of the state variables needs justification. In Section 1.1, we introduce a multi-group SIR model and discuss its stability properties. We will introduce a vaccination strategy and formulate the SIR model as a combined continuous and discrete control problem. In Section 1.2 we give a detailed statement of the general impulsive optimal control problem and obtain the necessary conditions characterizing the optimal control and trajectory. In Section 1.3, we give numerical results for the impulsive SIR model.

1.1 Deterministic Multi-group SIR Model

Let us begin this chapter by considering a situation where a disease is spreading in the Triangle region, i.e. Raleigh, Durham and Chapel Hill. One could get infected by contacting infectious people at work, at school, at restaurants, at grocery stores and all public places. Taking into account the large group of commuters, a person from Raleigh cannot ignore the possibility of getting infected by infectious people from the other two cities. We denote by S_k, I_k and R_k the sizes of the susceptible, infectious and recovered populations of city k, respectively. Let β_{kj} be the transmission rate between city k and city j, so that β_{kj}S_kI_j represents the new infections in city k caused by coming into contact with infectious people of city j. We let Λ_k, d_k, γ_k denote the birth rate, death rate and recovery rate of city k, respectively. Thus, we can describe the spread of the disease by the following SIR model:

    Ṡ_k = Λ_k − Σ_{j=1}^n β_{kj}S_kI_j − d_kS_k,
    İ_k = Σ_{j=1}^n β_{kj}S_kI_j − (d_k + γ_k)I_k,   (1.1)
    Ṙ_k = γ_kI_k − d_kR_k.

To simplify the model, we assume that once an infected individual recovers, he or she will be immune to this disease. Under this assumption the sizes of the susceptible and infectious populations will not be affected by the size of the recovered population. Thus, we will focus on the reduced SIR model

    Ṡ_k = Λ_k − Σ_{j=1}^n β_{kj}S_kI_j − d_kS_k,
    İ_k = Σ_{j=1}^n β_{kj}S_kI_j − (d_k + γ_k)I_k,   (1.2)

in the rest of the thesis.

1.1.1 Stability

In this section we will study the stability of the SIR system (1.2). It is clear that the system always has the disease-free solution {S_k = Λ_k/d_k, I_k = 0}_{k=1}^n. We call E_0 = [Λ_1/d_1, 0, Λ_2/d_2, 0, ..., Λ_n/d_n, 0] the disease-free equilibrium. In the study of epidemiology, the reproduction number represents the number of cases

generated by one infected person over the course of his infectious period. The reproduction number R_0 of system (1.2), which is defined to be the greatest eigenvalue of the matrix B, where

    B_{kj} = β_{kj}Λ_k / (d_k(d_k + γ_k)),   (1.3)

plays the role of a threshold in the long-term qualitative behavior of the SIR system. When R_0 > 1, Guo, Li and Shuai [16] proved that there exists an endemic equilibrium E* = [S_1*, I_1*, ..., S_n*, I_n*] other than the disease-free equilibrium, where {S_k* > 0, I_k* > 0}_{k=1}^n satisfies

    Λ_k − Σ_{j=1}^n β_{kj}S_k*I_j* − d_kS_k* = 0,   (1.4)
    Σ_{j=1}^n β_{kj}S_k*I_j* − (d_k + γ_k)I_k* = 0.   (1.5)

The stability regarding the reproduction number is described in the following theorem.

Theorem 1.1.1.
1. If R_0 < 1, then the disease-free equilibrium E_0 of the system (1.2) is globally stable.
2. If R_0 > 1, then the endemic equilibrium E* of the system (1.2) is globally stable.

We will use the following lemmas to prove the theorem.

Lemma 1.1.2. Let A be an n × n matrix in which each column sums to zero. Then
1. the cofactors C_{ij} of a given column are equal, i.e., C_{ij} = C_{kj}, 1 ≤ i, j, k ≤ n;
2. if the off-diagonal entries of A are all negative, then there exists a vector w with all positive entries which solves Aw = 0.

Lemma 1.1.3. The system (1.2) will have a nonnegative solution {S_k(t), I_k(t)}, t ∈ [0, ∞).
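As a concrete illustration of the reduced model (1.2) and of the reproduction number defined through (1.3), the following sketch integrates the system by forward Euler and computes R_0 as the spectral radius of B. All parameter values here are invented for illustration; they are not the values used in the numerical experiments later in the chapter.

```python
import numpy as np

def reproduction_number(beta, Lam, d, gamma):
    # B[k, j] = beta[k, j] * Lam[k] / (d[k] * (d[k] + gamma[k])), per (1.3);
    # R0 is the largest eigenvalue (spectral radius) of B.
    B = beta * (Lam / (d * (d + gamma)))[:, None]
    return max(abs(np.linalg.eigvals(B)))

def sir_step(S, I, Lam, d, beta, gamma, dt):
    # One forward-Euler step of the reduced model (1.2).
    new_inf = beta @ I * S              # new_inf[k] = S_k * sum_j beta_kj I_j
    return (S + dt * (Lam - new_inf - d * S),
            I + dt * (new_inf - (d + gamma) * I))

# Invented parameters for n = 3 groups.
Lam = np.array([0.02, 0.02, 0.02])
d = np.array([0.02, 0.02, 0.02])
gamma = np.array([0.10, 0.10, 0.10])
beta = 0.30 * np.eye(3) + 0.05 * (np.ones((3, 3)) - np.eye(3))
R0 = reproduction_number(beta, Lam, d, gamma)  # > 1 here, so an endemic equilibrium exists

S = np.array([0.90, 0.80, 0.85])
I = np.array([0.05, 0.10, 0.05])
for _ in range(int(50.0 / 0.01)):
    S, I = sir_step(S, I, Lam, d, beta, gamma, 0.01)
```

Consistent with Lemma 1.1.3, the trajectories stay nonnegative for a sufficiently small step size, and with R_0 > 1 the infected populations settle toward an endemic level rather than dying out.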

Proof of Lemma 1.1.2. 1. We will prove the result for the first column, and the same applies to the other columns. Denote by Ã_{ij} the submatrix of A with the ith row and jth column removed. It is obvious that Ã_{11} differs from Ã_{i1} only in the top i − 1 rows. We have

    C_{i1} = (−1)^{1+i} det Ã_{i1}
           = (−1)^{1+i} det [ a_{12} ... a_{1n} ; a_{22} ... a_{2n} ; ... ; a_{i−1,2} ... a_{i−1,n} ; a_{i+1,2} ... a_{i+1,n} ; ... ; a_{n2} ... a_{nn} ]
           = (−1)^{1+i} det [ −Σ_{k=2}^n a_{k2} ... −Σ_{k=2}^n a_{kn} ; a_{22} ... a_{2n} ; ... ; a_{i−1,2} ... a_{i−1,n} ; a_{i+1,2} ... a_{i+1,n} ; ... ; a_{n2} ... a_{nn} ]
           = (−1)^{i} det [ a_{i2} ... a_{in} ; a_{22} ... a_{2n} ; ... ; a_{i−1,2} ... a_{i−1,n} ; a_{i+1,2} ... a_{i+1,n} ; ... ; a_{n2} ... a_{nn} ]
           = det [ a_{22} ... a_{2n} ; ... ; a_{i−1,2} ... a_{i−1,n} ; a_{i2} ... a_{in} ; a_{i+1,2} ... a_{i+1,n} ; ... ; a_{n2} ... a_{nn} ]
           = det Ã_{11} = C_{11},   (1.7)

where rows are separated by semicolons. In the second step the first row is rewritten using the fact that each column of A sums to zero, so a_{1j} = −Σ_{k=2}^n a_{kj}; in the third step the determinant is expanded along this first row, every term with a repeated row vanishing; and in the fourth step the row (a_{i2}, ..., a_{in}) is moved down to its natural position at the cost of i − 2 row exchanges. By the same reasoning we have C_{1j} = C_{ij} for every column j.

2. For any i = 1, 2, ..., n, we have

    Σ_{j=1}^n a_{ij}C_{jj} = Σ_{j=1}^n a_{ij}C_{ij} = det A = 0,   (1.8)

where the second equality is the cofactor expansion of det A along the ith row, and the first uses part 1. So the vector w = [C_{11}, C_{22}, ..., C_{nn}]^T solves Aw = 0. If all the off-diagonal entries of A are negative, then each diagonal entry of A equals the sum of the absolute values of all the other entries of its column. We can see that Ã_{kk}^T is a diagonally dominant matrix, so

    C_{kk} = (−1)^{k+k} det Ã_{kk} = det Ã_{kk}^T > 0.   (1.9)

Proof of Lemma 1.1.3. Let τ_e = inf{t : S_k(t) = 0 or I_k(t) = 0 for some k} denote the explosion time; we know that the system (1.2) has a unique continuous solution {S_k(t), I_k(t)} on [0, τ_e). We define

    τ_m = inf{t : min{S_k(t), I_k(t), k = 1, ..., n} ≤ 1/m or max{S_k(t), I_k(t), k = 1, ..., n} ≥ m}.   (1.10)

We know that m ≤ n implies τ_m ≤ τ_n ≤ τ_e for sufficiently large m and n. Now we claim that lim_{m→∞} τ_m = ∞. Otherwise, we assume that lim_{m→∞} τ_m = T < ∞. Clearly it is true that τ_m ≤ T for all m. Using the function g(x) = x − 1 − ln x, we define

    V(t) = Σ_k [ a_k g(S_k/a_k) + g(I_k) ] = Σ_k [ S_k − a_k − a_k ln(S_k/a_k) + I_k − 1 − ln I_k ],   (1.11)

where the a_k's are nonnegative coefficients to be determined. Differentiating V(t) with respect to time, we have

16 dv dt = = 1 a Λ β j S I j d S S I j j Λ d S a Λ + a S j β j I j + a d d + γ I j β j S I j d + γ I β j S I j I + d + γ 1.12 Now we choose a satisfying a β j d j + γ j, yielding a β j I j j d + γ I = j a β j d j γ j I j, if I j. We integrate the equation 1.12 from to τ m, then we have τm V τ m = V + +a d d + γ I j τm V + Λ d S a Λ S + a β j I j β j S I j I + d + γ dt Λ + a d + d + γ dt = V + τ m Λ + a d + d + γ j V + T Λ + a d + d + γ. At time τ m, we now at least one of {S, I } has the value m or m 1, so we will have V τ m max{m a ln m a, m 1 a ln ma, m 1 ln m, m ln m} The right hand side of 1.13 could be arbitrarily large if m. Since V τ m V + T Λ + a d + d + γ, we will have a contradiction unless lim τ m = T =. Thus the m 7

solution {S_k(t), I_k(t)} to system (1.2) is nonnegative on [0, ∞).

Proof of Theorem 1.1.1. 1. First let us consider the system

    Ẋ_k = Λ_k − d_kX_k,   X_k(0) = S_k(0).   (1.14)

By the comparison principle we have S_k(t) ≤ X_k(t). Then, we consider the Lyapunov function V(t) as follows:

    V(t) = Σ_{k=1}^n (v_k/(d_k + γ_k)) I_k(t),   (1.15)

where v = [v_1, ..., v_n]^T is the left eigenvector of the matrix B defined in (1.3) corresponding to R_0. We need to justify that the {v_k}_{k=1}^n are positive numbers, so that the function defined in (1.15) is a valid Lyapunov function. The Perron-Frobenius theorem states that a matrix with all positive components has a positive eigenvalue equal to its spectral radius, and that the eigenvector corresponding to that eigenvalue has all positive components. Thus V(t) is a nonnegative function. Differentiating V(t) along the solution of (1.2), we have that

    V̇(t) = Σ_k (v_k/(d_k + γ_k)) [ Σ_j β_{kj}S_k(t)I_j(t) − (d_k + γ_k)I_k(t) ]
          ≤ Σ_k Σ_j (v_kβ_{kj}/(d_k + γ_k)) X_k(t)I_j(t) − Σ_k v_kI_k(t)
          = Σ_j Σ_k (v_kβ_{kj}/(d_k + γ_k)) (X_k(t) − Λ_k/d_k) I_j(t) + Σ_j Σ_k (v_kβ_{kj}Λ_k/(d_k(d_k + γ_k))) I_j(t) − Σ_k v_kI_k(t)
          = Σ_j Σ_k (v_kβ_{kj}/(d_k + γ_k)) (X_k(t) − Λ_k/d_k) I_j(t) + R_0 Σ_j v_jI_j(t) − Σ_k v_kI_k(t)
          = Σ_j [ Σ_k (v_kβ_{kj}/(d_k + γ_k)) (X_k(t) − Λ_k/d_k) + (R_0 − 1)v_j ] I_j(t),

where we used v^T B = R_0 v^T. We know that lim_{t→∞} X_k(t) = Λ_k/d_k. With the assumption that R_0 < 1, we will have V̇(t) ≤ 0 when t is sufficiently large, and the equality holds only if I_j = 0. Thus the disease-free equilibrium E_0 is globally stable.

2. Now we suppose that R_0 > 1, in which case there exists an endemic equilibrium E* = [S_1*, S_2*, ..., S_n*, I_1*, I_2*, ..., I_n*]. Define β̄_{kj} = β_{kj}S_k*I_j*, and

    B̄ = [ Σ_{j≠1} β̄_{1j}   −β̄_{21}          −β̄_{31}          ...   −β̄_{n1}
           −β̄_{12}          Σ_{j≠2} β̄_{2j}   −β̄_{32}          ...   −β̄_{n2}
           −β̄_{13}          −β̄_{23}          Σ_{j≠3} β̄_{3j}   ...   −β̄_{n3}
           ...
           −β̄_{1n}          −β̄_{2n}          −β̄_{3n}          ...   Σ_{j≠n} β̄_{nj} ].   (1.16)

We notice that each column of B̄ sums to zero and all the off-diagonal entries of B̄ are negative. By Lemma 1.1.2, the linear equation B̄w = 0 has a positive solution w = [w_1, ..., w_n]^T. The kth row of the equation B̄w = 0 is equivalent to

    Σ_{j=1}^n β̄_{kj}w_k = Σ_{j=1}^n β̄_{jk}w_j.   (1.17)

We define g(x) = x − 1 − ln x. By checking the derivative of g(x) we easily get g(x) ≥ 0 for x > 0 and g(1) = min_{x>0} g(x) = 0. We define the Lyapunov function

    V(t) = Σ_k w_k [ S_k* g(S_k(t)/S_k*) + I_k* g(I_k(t)/I_k*) ]
         = Σ_k w_k [ S_k(t) − S_k* − S_k* ln(S_k(t)/S_k*) + I_k(t) − I_k* − I_k* ln(I_k(t)/I_k*) ].   (1.18)

Differentiating V(t), and using the equilibrium conditions (1.4), (1.5), we have

    V̇(t) = Σ_k w_k [ (1 − S_k*/S_k)(Λ_k − Σ_j β_{kj}S_kI_j − d_kS_k) + (1 − I_k*/I_k)(Σ_j β_{kj}S_kI_j − (d_k + γ_k)I_k) ]
          = −Σ_k w_kd_k (S_k − S_k*)²/S_k + Σ_{j,k} w_kβ̄_{kj} [ 2 − S_k*/S_k + I_j/I_j* − I_k/I_k* − S_kI_jI_k*/(S_k*I_j*I_k) ]
          = −Σ_k w_kd_k (S_k − S_k*)²/S_k − Σ_{j,k} w_kβ̄_{kj} [ g(S_k*/S_k) + g(S_kI_jI_k*/(S_k*I_j*I_k)) ] + Σ_{j,k} w_kβ̄_{kj} [ g(I_j/I_j*) − g(I_k/I_k*) ]
          = −Σ_k w_kd_k (S_k − S_k*)²/S_k − Σ_{j,k} w_kβ̄_{kj} [ g(S_k*/S_k) + g(S_kI_jI_k*/(S_k*I_j*I_k)) ]
          ≤ 0,

where the second-to-last equality is a result of (1.17). So we have that V̇(t) ≤ 0, and equality holds if S_k(t) = S_k*, I_k(t) = I_k*. Therefore, the endemic equilibrium E* is globally stable.

1.1.2 Vaccination

In this subsection we introduce the vaccination strategy. Vaccination is provided periodically to a certain portion of the susceptible group. Those who receive vaccination will be immune to the disease. Introducing vaccination into the system (1.2), we have

    Ṡ_k = Λ_k − Σ_j β_{kj}S_kI_j − d_kS_k,
    İ_k = Σ_j β_{kj}S_kI_j − (d_k + γ_k)I_k,   t ∈ (t_i, t_{i+1}),   (1.19)

with impulse condition

    S_k(t_i^+) = S_k(t_i)(1 − c_{ki}),   I_k(t_i^+) = I_k(t_i),   (1.20)

where c_{ki} is the portion of the susceptibles from group k who receive vaccination at time t_i. We call the system (1.19), (1.20) the impulsive SIR model.

1.2 Deterministic Impulsive Control Problem

In this section we will discuss the main part of the first chapter. We will give a detailed statement of the impulsive optimal control problem, and study the necessary conditions that the optimal control must satisfy. We consider the system whose evolution satisfies the following ordinary differential equation:

    ẋ(t) = f_k(x(t), u(t), t),   t ∈ (t_k, t_{k+1}).   (1.21)

At time t = t_k, k = 1, 2, ..., N − 1, the system satisfies the following jump condition

    x(t_k^+) = g_k(x(t_k), c_k).   (1.22)

We call x(t) ∈ R^n the state variable, u(t) ∈ R^m the continuous control variable, and c_k ∈ R^M the impulsive control variable. The impulsive optimal control problem is to find a law for the controls u(t) and c_k such that the following cost functional

    J(u, c) = Σ_{k=1}^{N−1} φ_k(x(t_k), c_k) + Σ_{k=0}^{N−1} ∫_{t_k}^{t_{k+1}} L_k(x, u, t) dt + φ_N(x(t_N))   (1.23)

is minimized. We assume that

    f_k(x, u, t): R^n × R^m × R → R^n,   g_k(x, c): R^n × R^M → R^n,
    L_k(x, u, t): R^n × R^m × R → R,   φ_k(x, c): R^n × R^M → R

are smooth functions which have continuous derivatives of all orders.

1.2.1 Necessary Conditions by Methods of Variation of Calculus

In this subsection, we will study the necessary conditions for the impulsive optimal control problem. We introduce a small perturbation to the control and derive the variation of the cost functional. The adjoint variable is defined and the adjoint equation is obtained, which will lead us to the variational inequalities. We conclude this part with the maximum principle. We assume that {û, ĉ_k} is the optimal control set and x̂(t) is the state corresponding to {û, ĉ_k}. Let {ũ(·, θ), c_k(θ)} be another set of controls where

    ũ(t, θ) = û(t) + θv(t),   (1.24)
    c_k(θ) = ĉ_k + θc_k,   k = 1, ..., N − 1,   (1.25)

v, c_k are arbitrary perturbations, and 0 < θ ≪ 1. Let x_θ(t) be the state corresponding to {ũ(·, θ), c_k(θ)}. Define

    y(t) = (1/θ)(x_θ(t) − x̂(t)).   (1.26)

The function y(t) in the interval (t_k, t_{k+1}) satisfies

    ẏ(t) = (∂f_k/∂x)(x̂(t), û(t), t) y + (∂f_k/∂u)(x̂(t), û(t), t) v + θη(t)   (1.27)

and

    y(t_k^+) = (∂g_k/∂x)(x̂(t_k), ĉ_k) y(t_k) + (∂g_k/∂c)(x̂(t_k), ĉ_k) c_k + θζ_k.   (1.28)

Then,

    y(t) = Φ_k(t, t_k) [ (∂g_k/∂x)(x̂(t_k), ĉ_k) y(t_k) + (∂g_k/∂c)(x̂(t_k), ĉ_k) c_k ]
           + ∫_{t_k}^t Φ_k(t, s) (∂f_k/∂u)(x̂(s), û(s), s) v(s) ds
           + θ [ Φ_k(t, t_k)ζ_k + ∫_{t_k}^t Φ_k(t, s)η(s) ds ],   t ∈ (t_k, t_{k+1}),   (1.29)

where Φ_k(t, s) is the fundamental solution for the linear system

    ż = (∂f_k/∂x)(x̂(t), û(t), t) z,   t_k ≤ s ≤ t < t_{k+1},   z(s) = I.   (1.30)

We have the following fact.

Lemma 1.2.1. There is a function p(t) ∈ L(t_0, t_N; R^n) satisfying the differential equation

    ṗ(s) = −[ (∂L_k/∂x)(x̂(s), û(s), s) ]^T − [ (∂f_k/∂x)(x̂(s), û(s), s) ]^T p(s),   s ∈ (t_k, t_{k+1}),   (1.31)

and the jump condition

    p^T(t_k) = p^T(t_k^+)(∂g_k/∂x)(x̂(t_k), ĉ_k) + (∂φ_k/∂x)(x̂(t_k), ĉ_k);   (1.32)

then the variation of the cost functional has the following form:

    J(û + θv, ĉ + θc) − J(û, ĉ)
    = θ Σ_{k=1}^{N−1} α_k^T c_k + θ Σ_{k=0}^{N−1} ∫_{t_k}^{t_{k+1}} (∂H_k/∂u)(x̂(s), û(s), p(s), s) v(s) ds + o(θ),   (1.33)

where

    α_k^T = (∂φ_k/∂c)(x̂(t_k), ĉ_k) + p^T(t_k^+)(∂g_k/∂c)(x̂(t_k), ĉ_k),   (1.34)
    H_k(x, u, p, t) = L_k(x, u, t) + p^T f_k(x, u, t).   (1.35)

Proof. To prove the lemma, we compute the difference between the perturbed cost and the minimal cost:

    J(û + θv, ĉ + θc) − J(û, ĉ)
    = θ { Σ_{k=1}^{N−1} (∂φ_k/∂c) c_k + Σ_{k=0}^{N−1} [ (∂φ_{k+1}/∂x) y(t_{k+1}) + ∫_{t_k}^{t_{k+1}} ( (∂L_k/∂x) y + (∂L_k/∂u) v ) dt ] } + o(θ).

We substitute the representation (1.29) of y(t) on the last interval (t_{N−1}, t_N) into (∂φ_N/∂x) y(t_N) and into ∫_{t_{N−1}}^{t_N} (∂L_{N−1}/∂x) y dt, exchange the order of integration in the resulting double integral, and collect the terms multiplying c_{N−1}, v and y(t_{N−1}):

    J(û + θv, ĉ + θc) − J(û, ĉ)
    = θ { [ (∂φ_{N−1}/∂c) + ( (∂φ_N/∂x) Φ_{N−1}(t_N, t_{N−1}) + ∫_{t_{N−1}}^{t_N} (∂L_{N−1}/∂x) Φ_{N−1}(t, t_{N−1}) dt ) (∂g_{N−1}/∂c) ] c_{N−1}
        + ∫_{t_{N−1}}^{t_N} [ ( (∂φ_N/∂x) Φ_{N−1}(t_N, s) + ∫_s^{t_N} (∂L_{N−1}/∂x) Φ_{N−1}(t, s) dt ) (∂f_{N−1}/∂u) + (∂L_{N−1}/∂u) ] v(s) ds
        + ( (∂φ_N/∂x) Φ_{N−1}(t_N, t_{N−1}) + ∫_{t_{N−1}}^{t_N} (∂L_{N−1}/∂x) Φ_{N−1}(t, t_{N−1}) dt ) (∂g_{N−1}/∂x) y(t_{N−1})
        + Σ_{k=1}^{N−2} (∂φ_k/∂c) c_k + Σ_{k=0}^{N−2} [ (∂φ_{k+1}/∂x) y(t_{k+1}) + ∫_{t_k}^{t_{k+1}} ( (∂L_k/∂x) y + (∂L_k/∂u) v ) dt ] } + o(θ)
    = θ { α_{N−1}^T c_{N−1} + ∫_{t_{N−1}}^{t_N} [ p^T(s) (∂f_{N−1}/∂u) v(s) + (∂L_{N−1}/∂u) v(s) ] ds + β_{N−1}^T y(t_{N−1})
        + Σ_{k=1}^{N−2} (∂φ_k/∂c) c_k + Σ_{k=0}^{N−2} [ (∂φ_{k+1}/∂x) y(t_{k+1}) + ∫_{t_k}^{t_{k+1}} ( (∂L_k/∂x) y + (∂L_k/∂u) v ) dt ] } + o(θ).

The last equation above is obtained by defining

25 αn 1 T φ N 1 + φ N c x Φ N 1t N, t N 1 g tn N 1 + c p T s φ tn N x Φ N 1t N, s + s L N 1 βn 1 T φ N x Φ N 1t N, t N 1 g N 1 + x L N 1 t N 1 x Φ N 1t, t N 1 g N 1 dt, c 1.36 x Φ N 1t, sdt, s t N 1, t N, 1.37 tn L N 1 t N 1 x Φ N 1t, t N 1 dt g N 1 x, 1.38 Now we use induction to prove the following claim for l = N 1, N 2,..., 1: = θ Jû + θv, ĉ + θc Jû, ĉ 1.39 { N 1 =l l 1 + =1 t+1 α T c + p T f t u vs + L u vs ds + βl T yt l φ c c l 1 + = t+1 φ+1 x yt +1 + L t x yt + L u vtdt } + oθ. The l = N 1 case is already shown. Assuming the case for l = j, we compute βj T yt j + φ j 1 c { Φ j 1 t j, t j 1 = β T j + φ j x tj c j 1 + t j 1 c j 1 + φ tj j x yt j + { Lj 1 gj 1 x + φ j 1 c x t + Φ j 1 t j, s f ] j 1 t j 1 u vsds tj = αj 1c T j 1 + p T f j 1 t j 1 u L j 1 t j 1 x yt + L j 1 u vtdt yt j 1 + g j 1 c c j 1 + [ Φ j 1 t, t j 1 + L j 1 u vs gj 1 x } dt tj Φ j 1 t j, s f j 1 t j 1 u yt j 1 + g j 1 c c j 1 vs + L j 1 u vs ds + β T j 1yt j 1. } vsds 16

The last equation is obtained by defining

    α_{j−1}^T = { [ β_j^T + (∂φ_j/∂x) ] Φ_{j−1}(t_j, t_{j−1}) + ∫_{t_{j−1}}^{t_j} (∂L_{j−1}/∂x) Φ_{j−1}(t, t_{j−1}) dt } (∂g_{j−1}/∂c) + (∂φ_{j−1}/∂c),   (1.40)

    p^T(s) = [ β_j^T + (∂φ_j/∂x) ] Φ_{j−1}(t_j, s) + ∫_s^{t_j} (∂L_{j−1}/∂x) Φ_{j−1}(t, s) dt,   s ∈ (t_{j−1}, t_j),   (1.41)

    β_{j−1}^T = { [ β_j^T + (∂φ_j/∂x) ] Φ_{j−1}(t_j, t_{j−1}) + ∫_{t_{j−1}}^{t_j} (∂L_{j−1}/∂x) Φ_{j−1}(t, t_{j−1}) dt } (∂g_{j−1}/∂x).   (1.42)

Then (1.39) can be written as

    J(û + θv, ĉ + θc) − J(û, ĉ)
    = θ { Σ_{k=j−1}^{N−1} [ α_k^T c_k + ∫_{t_k}^{t_{k+1}} ( p^T (∂f_k/∂u) v(s) + (∂L_k/∂u) v(s) ) ds ] + β_{j−1}^T y(t_{j−1})
        + Σ_{k=1}^{j−2} (∂φ_k/∂c) c_k + Σ_{k=0}^{j−2} [ (∂φ_{k+1}/∂x) y(t_{k+1}) + ∫_{t_k}^{t_{k+1}} ( (∂L_k/∂x) y + (∂L_k/∂u) v ) dt ] } + o(θ),

which closes the induction. Then, we have

    J(û + θv, ĉ + θc) − J(û, ĉ)
    = θ { Σ_{k=1}^{N−1} [ α_k^T c_k + ∫_{t_k}^{t_{k+1}} ( p^T (∂f_k/∂u) v(s) + (∂L_k/∂u) v(s) ) ds ] + β_1^T y(t_1)
        + (∂φ_1/∂x) y(t_1) + ∫_{t_0}^{t_1} ( (∂L_0/∂x) y + (∂L_0/∂u) v ) dt } + o(θ)
    = θ { Σ_{k=1}^{N−1} [ α_k^T c_k + ∫_{t_k}^{t_{k+1}} ( p^T (∂f_k/∂u) v(s) + (∂L_k/∂u) v(s) ) ds ]
        + [ β_1^T + (∂φ_1/∂x) ] ∫_{t_0}^{t_1} Φ_0(t_1, s) (∂f_0/∂u) v(s) ds
        + ∫_{t_0}^{t_1} [ (∂L_0/∂x) ∫_{t_0}^t Φ_0(t, s) (∂f_0/∂u) v(s) ds + (∂L_0/∂u) v ] dt } + o(θ)
    = θ { Σ_{k=1}^{N−1} α_k^T c_k + Σ_{k=0}^{N−1} ∫_{t_k}^{t_{k+1}} ( p^T (∂f_k/∂u) v(s) + (∂L_k/∂u) v(s) ) ds } + o(θ),

where we used y(t_0) = 0 on the first interval. The last equation is obtained by defining

    p^T(s) = [ β_1^T + (∂φ_1/∂x) ] Φ_0(t_1, s) + ∫_s^{t_1} (∂L_0/∂x) Φ_0(t, s) dt,   s ∈ (t_0, t_1).   (1.43)

From (1.36)-(1.43), we will have that

    α_{j−1}^T = (∂φ_{j−1}/∂c) + p^T(t_{j−1}^+) (∂g_{j−1}/∂c),   (1.44)

    β_{j−1}^T = p^T(t_{j−1}^+) (∂g_{j−1}/∂x),   (1.45)

    ṗ(s) = −[ (∂L_{j−1}/∂x) ]^T − [ (∂f_{j−1}/∂x) ]^T p,   s ∈ (t_{j−1}, t_j),   (1.46)

    p^T(t_j) = β_j^T + (∂φ_j/∂x) = p^T(t_j^+) (∂g_j/∂x) + (∂φ_j/∂x).   (1.47)

By defining the Hamiltonian H_k:

    H_k(x, u, p, t) = L_k(x, u, t) + p^T f_k(x, u, t),   t ∈ (t_k, t_{k+1}),   (1.48)

we have the result

    J(û + θv, ĉ + θc) − J(û, ĉ)
    = θ Σ_{k=1}^{N−1} α_k^T c_k + θ Σ_{k=0}^{N−1} ∫_{t_k}^{t_{k+1}} (∂H_k/∂u)(x̂(s), û(s), p(s), s) v(s) ds + o(θ).

Assuming there is no constraint on the control variables, we have the following result.

Theorem 1.2.2. If x̂(t) is the solution of the impulsive optimal control problem stated as above, we have

    (∂φ_k/∂c)(x̂(t_k), ĉ_k) + p^T(t_k^+)(∂g_k/∂c)(x̂(t_k), ĉ_k) = 0,   (1.49)

    (∂H_k/∂u)(x̂(s), û(s), p(s), s) = 0,   s ∈ (t_k, t_{k+1}).   (1.50)

Remark. Although Pontryagin's maximum principle gives a stronger set of necessary conditions than Theorem 1.2.2, it is (1.49), (1.50) that we will use in the sequel. In the following we will prove Pontryagin's maximum principle by employing the spike variation method.

Theorem 1.2.3. Let {û, ĉ} be the optimal control pair, x̂ the optimal state variable corresponding to {û, ĉ}, p the adjoint variable defined by (1.31), and H_k(x, u, p, t) the Hamiltonian defined by (1.48). Then, we have

    H_k(x̂(τ), v, p(τ), τ) ≥ H_k(x̂(τ), û(τ), p(τ), τ),   τ ∈ (t_k, t_{k+1}),   (1.51)

for any admissible control v.
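Conditions (1.49) and (1.50), coupled with the state equation and the adjoint equation (1.31), suggest a forward-backward sweep: integrate the state forward under the current control, integrate the adjoint backward, and update the control from the stationarity condition ∂H/∂u = 0. The sketch below applies this idea to a toy one-dimensional problem without impulses, minimizing ∫_0^1 (x² + u²)/2 dt subject to ẋ = −x + u; the problem data are invented for illustration and are not the SIR problem of this chapter.

```python
import numpy as np

# Toy problem: min ∫_0^1 (x² + u²)/2 dt,  ẋ = -x + u,  x(0) = 1.
# Hamiltonian H = (x² + u²)/2 + p(-x + u), so ṗ = -H_x = -x + p with p(1) = 0,
# and the stationarity condition H_u = u + p = 0 gives the control update.
T, N = 1.0, 1000
dt = T / N
x0 = 1.0
u = np.zeros(N + 1)
for _ in range(100):
    # Forward sweep: state under the current control (forward Euler).
    x = np.empty(N + 1); x[0] = x0
    for k in range(N):
        x[k + 1] = x[k] + dt * (-x[k] + u[k])
    # Backward sweep: adjoint equation ṗ = -x + p with terminal value p(T) = 0.
    p = np.empty(N + 1); p[-1] = 0.0
    for k in range(N, 0, -1):
        p[k - 1] = p[k] - dt * (-x[k] + p[k])
    # Control update from H_u = u + p = 0, with relaxation for stability.
    u_new = -p
    if np.max(np.abs(u_new - u)) < 1e-8:
        u = u_new
        break
    u = 0.5 * u + 0.5 * u_new
```

On this short horizon the iteration is a contraction, so the sweep converges and the computed control satisfies the stationarity condition to within the tolerance.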

Proof. Let u_ε be defined as

    u_ε(t) = û(t),  t ∈ (t_k, τ);   v,  t ∈ [τ, τ + ε);   û(t),  t ∈ [τ + ε, t_{k+1});   û(t),  t ∈ (t_j, t_{j+1}), j ≠ k,   (1.52)

with 0 < ε ≪ 1, and let x_ε be the state variable corresponding to {u_ε, ĉ}. We consider z_ε defined by

    dz_ε/dt = (∂f_k/∂x)(x̂, û, t) z_ε,   t ∈ (τ, t_{k+1}),   (1.53)

    z_ε(τ) = f_k(x̂(τ), v, τ) − (1/ε) ∫_τ^{τ+ε} f_k(x̂(s), û(s), s) ds.   (1.54)

Then, letting ε → 0 and using Grönwall's inequality, we have that z_ε(t) remains bounded for t ∈ [τ, t_{k+1}]. Then we consider z_ε^j defined by

    dz_ε^j/dt = (∂f_j/∂x)(x̂, û, t) z_ε^j,   t ∈ (t_j, t_{j+1}),   (1.55)

    z_ε^j(t_j^+) = (∂g_j/∂x) z_ε^{j−1}(t_j).   (1.56)

It can be proved by induction that z_ε^j(t), t ∈ (t_j, t_{j+1}), remains bounded as ε → 0 for j = k + 1, ..., N − 1. We let y_ε(t) = (1/ε)(x_ε(t) − x̂(t) − ε z_ε(t)), t ∈ [τ + ε, t_{k+1}). Then, we have

    dy_ε/dt = [ ∫_0^1 (∂f_k/∂x)(x̂ + λ(x_ε − x̂), û, t) dλ ] y_ε + [ ∫_0^1 ( (∂f_k/∂x)(x̂ + λ(x_ε − x̂), û, t) − (∂f_k/∂x)(x̂, û, t) ) dλ ] z_ε,   (1.57)

    y_ε(τ + ε) = (1/ε) ∫_τ^{τ+ε} [ f_k(x_ε(t), v, t) − f_k(x̂(τ), v, τ) ] dt − (1/ε) ∫_τ^{τ+ε} (∂f_k/∂x)(x̂, û, t) ε z_ε dt.   (1.58)

By Grönwall's inequality again we have y_ε(t) → 0 for t ∈ [τ + ε, t_{k+1}] letting ε → 0. Then, let y_ε(t) = (1/ε)(x_ε(t) − x̂(t) − ε z_ε^j(t)), t ∈ (t_j, t_{j+1}), for j = k + 1, ..., N − 1, and we have

    dy_ε/dt = [ ∫_0^1 (∂f_j/∂x)(x̂ + λ(x_ε − x̂), û, t) dλ ] y_ε + [ ∫_0^1 ( (∂f_j/∂x)(x̂ + λ(x_ε − x̂), û, t) − (∂f_j/∂x)(x̂, û, t) ) dλ ] z_ε^j,   (1.59)

    y_ε(t_j^+) = ∫_0^1 (∂g_j/∂x)(x̂(t_j) + λ(x_ε(t_j) − x̂(t_j)), ĉ_j) y_ε(t_j) dλ   (1.60)
                 + ∫_0^1 [ (∂g_j/∂x)(x̂(t_j) + λ(x_ε(t_j) − x̂(t_j)), ĉ_j) − (∂g_j/∂x)(x̂(t_j), ĉ_j) ] z_ε^{j−1}(t_j) dλ.   (1.61)

By using Grönwall's inequality it can be proved by induction that y_ε(t) → 0, t ∈ (t_j, t_{j+1}), for j = k + 1, ..., N − 1, as ε → 0.

Then, we have

    (1/ε)(J(u_ε) − J(û))
    = (1/ε) ∫_τ^{τ+ε} [ L_k(x_ε, v, t) − L_k(x̂, û, t) ] dt
      + (1/ε) ∫_{τ+ε}^{t_{k+1}} [ L_k(x_ε, û, t) − L_k(x̂, û, t) ] dt + (1/ε) [ φ_{k+1}(x_ε(t_{k+1})) − φ_{k+1}(x̂(t_{k+1})) ]
      + Σ_{j=k+1}^{N−1} { (1/ε) ∫_{t_j}^{t_{j+1}} [ L_j(x_ε, û, t) − L_j(x̂, û, t) ] dt + (1/ε) [ φ_{j+1}(x_ε(t_{j+1})) − φ_{j+1}(x̂(t_{j+1})) ] }
    = L_k(x̂(τ), v, τ) − (1/ε) ∫_τ^{τ+ε} L_k(x̂, û, t) dt + (1/ε) ∫_τ^{τ+ε} [ L_k(x_ε, v, t) − L_k(x̂(τ), v, τ) ] dt
      + ∫_τ^{t_{k+1}} (∂L_k/∂x) z_ε dt − ∫_τ^{τ+ε} (∂L_k/∂x) z_ε dt
      + (1/ε) ∫_{τ+ε}^{t_{k+1}} [ L_k(x_ε, û, t) − L_k(x̂, û, t) − ε (∂L_k/∂x) z_ε ] dt
      + (∂φ_{k+1}/∂x) z_ε(t_{k+1}) + (1/ε) [ φ_{k+1}(x_ε(t_{k+1})) − φ_{k+1}(x̂(t_{k+1})) − ε (∂φ_{k+1}/∂x) z_ε(t_{k+1}) ]
      + Σ_{j=k+1}^{N−1} { ∫_{t_j}^{t_{j+1}} (∂L_j/∂x) z_ε^j dt + (1/ε) ∫_{t_j}^{t_{j+1}} [ L_j(x_ε, û, t) − L_j(x̂, û, t) − ε (∂L_j/∂x) z_ε^j ] dt
      + (∂φ_{j+1}/∂x) z_ε^j(t_{j+1}) + (1/ε) [ φ_{j+1}(x_ε(t_{j+1})) − φ_{j+1}(x̂(t_{j+1})) − ε (∂φ_{j+1}/∂x) z_ε^j(t_{j+1}) ] }.

Since

    (d/dt)(p^T(t) z_ε^j(t)) = −[ (∂L_j/∂x) + p^T (∂f_j/∂x) ] z_ε^j + p^T (∂f_j/∂x) z_ε^j = −(∂L_j/∂x) z_ε^j,   t ∈ (t_j, t_{j+1}),   (1.62)

we have

    ∫_τ^{t_{k+1}} (∂L_k/∂x)(x̂, û, t) z_ε dt = p^T(τ) z_ε(τ) − p^T(t_{k+1}) z_ε(t_{k+1})
    = p^T(τ) [ f_k(x̂(τ), v) − (1/ε) ∫_τ^{τ+ε} f_k(x̂, û) dt ] − [ p^T(t_{k+1}^+)(∂g_{k+1}/∂x) + (∂φ_{k+1}/∂x) ] z_ε(t_{k+1}),   (1.63)

    ∫_{t_j}^{t_{j+1}} (∂L_j/∂x)(x̂, û, t) z_ε^j dt = p^T(t_j^+) z_ε^j(t_j^+) − p^T(t_{j+1}) z_ε^j(t_{j+1})
    = p^T(t_j^+)(∂g_j/∂x) z_ε^{j−1}(t_j) − [ p^T(t_{j+1}^+)(∂g_{j+1}/∂x) + (∂φ_{j+1}/∂x) ] z_ε^j(t_{j+1}).   (1.64)

Therefore, we have

    (1/ε)(J(u_ε) − J(û))
    = L_k(x̂(τ), v, τ) + p^T(τ) f_k(x̂(τ), v, τ) − (1/ε) ∫_τ^{τ+ε} [ L_k(x̂, û, t) + p^T(t) f_k(x̂, û, t) ] dt + X_ε   (1.65)
    = H_k(x̂(τ), v, p(τ), τ) − (1/ε) ∫_τ^{τ+ε} H_k(x̂, û, p, t) dt + X_ε
    → H_k(x̂(τ), v, p(τ), τ) − H_k(x̂(τ), û(τ), p(τ), τ)   as ε → 0.

Here the first equality follows by substituting (1.63) and (1.64) into the expansion above, the boundary terms telescoping through the jump conditions (1.47), with X_ε collecting all the remaining terms; each of these terms tends to zero as ε → 0,

where we use the fact that

    (1/ε) [ L_j(x_ε, u, t) − L_j(x, u, t) − ε (∂L_j/∂x) z_ε^j ]
    = ∫_0^1 (∂L_j/∂x)(x + λ(x_ε − x), u, t) y_ε dλ + ∫_0^1 [ (∂L_j/∂x)(x + λ(x_ε − x), u, t) − (∂L_j/∂x)(x, u, t) ] z_ε^j dλ
    → 0   as ε → 0,   (1.66)

and that y_ε(t) → 0 while z_ε(t), z_ε^j(t) remain bounded. Since û is optimal, (1/ε)(J(u_ε) − J(û)) ≥ 0, and thus the necessary condition for the optimal control u is

    H_k(x(τ), v, p(τ), τ) ≥ H_k(x(τ), u(τ), p(τ), τ),   τ ∈ (t_k, t_{k+1}),   (1.67)

for all admissible v in the control domain.

1.3 Numerical Solution to SIR Model

In this section, we will apply the impulsive control methodology to the SIR model. Previous works [25] have provided continuous and impulsive vaccination strategies to hold the epidemic in a stable state. Here we consider the multi-group SIR model:

    Ṡ_i = Λ_i − d_iS_i − Σ_{j=1}^n β̃_{ij}S_iI_j,
    İ_i = Σ_{j=1}^n β̃_{ij}S_iI_j − (d_i + γ_i)I_i,   (1.68)
    S_i(t_k^+) = S_i(t_k)(1 − c_{ik}),   I_i(t_k^+) = I_i(t_k),

where S_i and I_i are the populations of susceptible and infected in group i, Λ_i and d_i represent the birth and death rates of the individuals in group i, γ_i represents the recovery rate of the infected individuals in group i, β̃_{ij} is the infection rate in group i caused by the infected population from group j, and c_{ik} is the proportion of the susceptible population in group i receiving the vaccination at time t_k. We know that c_{ik} takes values in [0, 1]. To keep the disease under control, each group will enforce a migration policy to restrict incoming populations from other groups,

at the expense of retarding economic growth. We will define the controlled infection rate β̃_{ij} = β_{ij} − u_{ij}, where u_{ij} ∈ [0, β_{ij}] represents the control of migration. Now, we consider the cost function

    J = (a/2) Σ_{k=0}^{N−1} Σ_{i=1}^n c_{ik}² S_i(t_k) + Σ_{k=0}^{N−1} Σ_{i=1}^n ∫_{t_k}^{t_{k+1}} [ (b/2) I_i² + (1/2) Σ_{j=1}^n u_{ij}² ] dt + Σ_{i=1}^n (e/2) I_i²(t_N).   (1.69)

From the necessary conditions (1.49), (1.50), we know that the optimal control has the form û_{ij} = (η_i − ξ_i)S_iI_j, and the optimal impulsive control has the form ĉ_{ik} = ξ_i(t_k^+)/a, where [ξ, η](t) is the adjoint variable corresponding to the SIR system. For the numerical experiment, we set the parameters of the model (the rates Λ_i, d_i, β_{ij}, γ_i, the cost weights a, b, e, the horizon T and the impulse times t_k) and the initial conditions S_i(0), I_i(0). Here is the table showing the cost values in the case of constant controls.

Table 1.1: List of Costs Tested by Varying Controls

Remark. Table 1.1 lists the cost of the system driven by different control pairs (u, c). The first row shows the cost when no controls are applied to the system. The second row shows the cost when the migration rate is reduced by 10%. The last row shows the cost when the system is driven by the optimal control pair {û, ĉ}, which is derived by solving the necessary conditions (1.49), (1.50). Table 1.1 shows that the cost of the system under the optimal control pair is lower than the cost when the system is driven by the other controls. Figure 1.1 shows the susceptible populations when the system is driven by the optimal control. The jumps of the curves represent the effect of vaccination: those receiving vaccination are removed from the susceptible group, as they are immune to the disease. Figures 1.3, 1.4 and 1.5 display the optimal migration restrictions of the three cities. It is reasonable that the restrictions decrease as the sizes of the infected populations come under control. Figures 1.6 and 1.7

show that the size of the infected population increases for a period of time when no control is applied.

Figure 1.1: Populations of Susceptibles in All Groups under Optimal Controls

Figure 1.2: Populations of Infected in All Groups under Optimal Controls

Figure 1.3: Optimal Control of City 1: û_{1j} = (η_1 − ξ_1)S_1I_j

Figure 1.4: Optimal Control of City 2: û_{2j} = (η_2 − ξ_2)S_2I_j

Figure 1.5: Optimal Control of City 3: û_{3j} = (η_3 − ξ_3)S_3I_j

Figure 1.6: Populations of Susceptibles in All Groups under Zero Controls

Figure 1.7: Populations of Infected in All Groups under Zero Controls
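Experiments of the kind illustrated in Figures 1.1-1.7 rest on integrating the impulsive system (1.68): the continuous dynamics are advanced between vaccination times and the jump S_i(t_k^+) = S_i(t_k)(1 − c_{ik}) is applied at each t_k. The following sketch shows that pattern with invented parameters (two groups, three vaccination times, constant vaccination fractions); it is not the solver used for the figures above.

```python
import numpy as np

def impulsive_sir(S, I, Lam, d, beta, gamma, c, t_imp, T, dt):
    # Forward-Euler integration of (1.68): apply the vaccination jump
    # S_i(t_k^+) = S_i(t_k) * (1 - c[i, k]) at each impulse time t_imp[k],
    # integrating the continuous dynamics in between.
    S, I = S.copy(), I.copy()
    k = 0
    for step in range(int(T / dt)):
        t = step * dt
        if k < len(t_imp) and t >= t_imp[k]:
            S = S * (1.0 - c[:, k])     # vaccinated susceptibles become immune
            k += 1
        new_inf = beta @ I * S          # new_inf[i] = S_i * sum_j beta_ij I_j
        S = S + dt * (Lam - new_inf - d * S)
        I = I + dt * (new_inf - (d + gamma) * I)
    return S, I

# Invented parameters for two groups and three vaccination times.
Lam = np.array([0.02, 0.02]); d = np.array([0.02, 0.02])
gamma = np.array([0.10, 0.10])
beta = np.array([[0.30, 0.05], [0.05, 0.30]])
c = np.full((2, 3), 0.20)               # vaccinate 20% of susceptibles each time
S0 = np.array([0.90, 0.80]); I0 = np.array([0.05, 0.10])
S_vac, I_vac = impulsive_sir(S0, I0, Lam, d, beta, gamma, c, [5.0, 10.0, 15.0], 20.0, 0.01)
S_no, I_no = impulsive_sir(S0, I0, Lam, d, beta, gamma, 0 * c, [5.0, 10.0, 15.0], 20.0, 0.01)
```

The vaccinated run shows the characteristic downward jumps in the susceptible curves seen in Figure 1.1, and the impulses lower the infected load relative to the uncontrolled run.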

Chapter 2

Stochastic Impulsive Control

In this chapter we study stochastic impulsive optimal control problems. In practice, the real world is a world of uncertainty. We receive weather forecasts telling the chance of rain, we gain or lose money as stock prices go up and down, and we hear the sounds of the radio, which receives signals contaminated by noise. From the perspective of quantum physics, every particle in the world behaves in a random manner. Stochastic models are more interesting and receive more attention than deterministic models due to their wider applicability. There are not many papers dealing with stochastic multi-group SIR models. The stability properties depend on the reproduction number, as is the case in the deterministic model. In fact, the diffusion coefficients are also critical to the stability [36]. We will give details of, and corrections to, some of the arguments in [36]. Stochastic optimal control is one of the active topics in applied mathematics research [4], [5], [35], [37]. The necessary conditions for the optimal control involve solving a coupled system of forward-backward stochastic differential equations (FBSDEs). Solvability and explicit schemes for FBSDEs were discussed in [27], [30], [35]. In general, FBSDEs might not necessarily have a solution. There are solutions when the FBSDEs are derived as necessary conditions of an optimal control problem. We follow the same plan of action as in the first chapter. In Section 2.1, we discuss the stochastic multi-group SIR model and its stability properties. In Section 2.2, we give the statement of the stochastic impulsive optimal control problem and derive the necessary conditions from two directions. Solutions to the forward-backward stochastic differential equations will be studied, and the relation between the maximum principle and dynamic programming will be discussed. In Section 2.3 we give numerical results for the stochastic impulsive SIR model. At the end of the chapter,

we give a proof of a lemma which is used in the discussion of stability.

2.1 Stochastic Multi-group SIR Model

In this section we will include randomness in the SIR model. One option is to replace the death rate term $d_k\,dt$ with $d_k\,dt + \sigma_k\,dB_k(t)$. It is reasonable to consider this kind of replacement, since there are unpredictable natural disasters, such as earthquakes and tsunamis, that cause an unpredictable number of deaths. Let us now consider the stochastic multi-group SIR model:

$$dS_k = \Big(\Lambda_k - \sum_{j=1}^n \beta_{kj} S_k I_j - d_k S_k\Big)dt + \rho_k S_k\,dW_k(t),$$
$$dI_k = \Big(\sum_{j=1}^n \beta_{kj} S_k I_j - (d_k+\gamma_k) I_k\Big)dt + \theta_k I_k\,dB_k(t), \qquad (2.1)$$

where $W_k(t), B_k(t)$, $1 \le k \le n$, are independent Brownian motions, and $\rho_k, \theta_k$, $1 \le k \le n$, are nonnegative numbers describing the volatility.

Because of the presence of random noise, the study of the long-term behavior of the SIR model becomes more complicated. In the rest of the section, we will address two questions. First, does there exist a limit for the susceptible population and the infected population? Second, if there exists a limit, in what sense does the population converge to it: almost surely, in $L^2$, weakly, or in probability?

As we have seen in Theorem 1.1.1, the reproduction number $R_0$ is the threshold for long-term stability. One reasonable guess is that, in the case of low volatility, the stochastic process will converge to the equilibrium of the deterministic model. The following lemma is the stochastic version of Lemma 1.1.3, which will be used in the discussion of stability of the stochastic SIR model.

Lemma. The system (2.1) almost surely has a nonnegative solution $\{S_k(t), I_k(t)\}$ on $t \in [0, \infty)$.

Proof. Let $\tau_e$ denote the explosion time; we know that the system (2.1) has a unique solution $\{S_k(t), I_k(t)\}$ on $t \in [0, \tau_e)$. We define the stopping time

$$\tau_m = \inf\Big\{t : \min_k\{S_k(t), I_k(t)\} \le m^{-1} \ \text{or} \ \max_k\{S_k(t), I_k(t)\} \ge m\Big\}.$$

For any $\omega$ in the sample space, we have $\tau_m(\omega) \le \tau_n(\omega)$ if $m \le n$. So the limit of $\tau_m$ exists, and we define $T_\infty = \lim_{m\to\infty} \tau_m$. We claim that $T_\infty = \infty$ almost surely. Otherwise, assuming $P(T_\infty < \infty) > 0$, we will have

$$P(T_\infty < \infty) = P\Big(\bigcup_{K=1}^\infty \{T_\infty < K\}\Big) > 0. \qquad (2.3)$$

There exist $K_0$ and $\varepsilon_0 > 0$ such that $P(T_\infty < K_0) \ge \varepsilon_0$. We define the set $A_{K_0} = \{\omega : T_\infty(\omega) < K_0\}$; then we have $\tau_m(\omega) < K_0$ for $\omega \in A_{K_0}$ and all $m$. We define

$$V(t) = \sum_k \Big(S_k - a_k - a_k \ln\frac{S_k}{a_k}\Big) + \sum_k \big(I_k - 1 - \ln I_k\big), \qquad (2.4)$$

where the $a_k$'s are chosen in the same manner as in Lemma 1.1.3. By Itô's lemma, we have

$$\begin{aligned}
dV &= \sum_k \Big(1 - \frac{a_k}{S_k}\Big)\Big(\Lambda_k - \sum_j \beta_{kj} S_k I_j - d_k S_k\Big)dt + \sum_k \frac{a_k \rho_k^2}{2}\,dt \\
&\quad + \sum_k \Big(1 - \frac{1}{I_k}\Big)\Big(\sum_j \beta_{kj} S_k I_j - (d_k+\gamma_k) I_k\Big)dt + \sum_k \frac{\theta_k^2}{2}\,dt + (\cdots)dW + (\cdots)dB \\
&\le \sum_k \Big(\Lambda_k + a_k d_k + d_k + \gamma_k + \frac{a_k\rho_k^2}{2} + \frac{\theta_k^2}{2}\Big)dt + (\cdots)dW + (\cdots)dB, \qquad (2.5)
\end{aligned}$$

where the choice of the $a_k$'s is used to cancel the terms $a_k\sum_j\beta_{kj}I_j$ against $\sum_k(d_k+\gamma_k)I_k$. We integrate the above equation from $0$ to $K_0 \wedge \tau_m$, and we will have

$$\begin{aligned}
V(K_0\wedge\tau_m) &\le V(0) + \int_0^{K_0\wedge\tau_m} \sum_k\Big(\Lambda_k + a_kd_k + d_k+\gamma_k + \frac{a_k\rho_k^2}{2} + \frac{\theta_k^2}{2}\Big)dt + M(K_0\wedge\tau_m) \\
&\le V(0) + K_0 \sum_k\Big(\Lambda_k + a_kd_k + d_k+\gamma_k + \frac{a_k\rho_k^2}{2} + \frac{\theta_k^2}{2}\Big) + M(K_0\wedge\tau_m), \qquad (2.6)
\end{aligned}$$

where $M$ denotes the martingale part. Taking the expectation of the inequality (2.6), we will have

$$E\,V(K_0\wedge\tau_m) \le E\,V(0) + K_0\sum_k\Big(\Lambda_k + a_kd_k + d_k+\gamma_k + \frac{a_k\rho_k^2}{2} + \frac{\theta_k^2}{2}\Big). \qquad (2.7)$$

Since the function $V(t)$ is always nonnegative, and on $A_{K_0}$ some component of the state equals $m$ or $m^{-1}$ at time $\tau_m$, we have

$$\begin{aligned}
E\,V(K_0\wedge\tau_m) &\ge P(\tau_m < K_0)\,E\big[V(\tau_m)\,\big|\,\tau_m < K_0\big] \\
&\ge P(T_\infty < K_0)\,E\big[V(\tau_m)\,\big|\,\tau_m < K_0\big] \\
&\ge \varepsilon_0 \min_k\Big\{m - a_k - a_k\ln\frac{m}{a_k},\ \frac{1}{m} - a_k + a_k\ln(a_k m),\ m - 1 - \ln m,\ \frac{1}{m} - 1 + \ln m\Big\}. \qquad (2.8)
\end{aligned}$$

Combining (2.7) and (2.8), we have

$$\varepsilon_0 \min_k\Big\{m - a_k - a_k\ln\frac{m}{a_k},\ \frac{1}{m} - a_k + a_k\ln(a_k m),\ m - 1 - \ln m,\ \frac{1}{m} - 1 + \ln m\Big\} \le E\,V(0) + K_0\sum_k\Big(\Lambda_k + a_kd_k + d_k+\gamma_k + \frac{a_k\rho_k^2}{2} + \frac{\theta_k^2}{2}\Big), \qquad (2.9)$$

which leads to a contradiction, since the left-hand side of (2.9) can be made arbitrarily large by taking $m \to \infty$. Therefore we have $\lim_{m\to\infty}\tau_m = \infty$, and the solution $\{S_k(t), I_k(t)\}$ to system (2.1) is nonnegative on $t \in [0, \infty)$.
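The lemma can be checked numerically: an Euler–Maruyama discretization of (2.1) with small volatilities keeps the populations nonnegative and bounded along a path. All parameter values below are illustrative placeholders, not values from the text; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters for a two-group instance of system (2.1).
n = 2
Lam  = np.array([2.0, 1.5])            # recruitment rates Lambda_k
d    = np.array([0.1, 0.1])            # death rates d_k
gam  = np.array([0.2, 0.3])            # recovery rates gamma_k
beta = np.array([[0.02, 0.01],         # transmission matrix beta_kj
                 [0.01, 0.03]])
rho  = np.array([0.05, 0.05])          # volatilities of S_k
tht  = np.array([0.05, 0.05])          # volatilities of I_k

def euler_maruyama(S0, I0, T=20.0, dt=1e-3):
    """One Euler-Maruyama path of the stochastic multi-group SIR model (2.1)."""
    S, I = S0.astype(float).copy(), I0.astype(float).copy()
    for _ in range(int(T / dt)):
        infection = (beta @ I) * S     # k-th entry: sum_j beta_kj S_k I_j
        dW = rng.normal(0.0, np.sqrt(dt), n)
        dB = rng.normal(0.0, np.sqrt(dt), n)
        S = S + (Lam - infection - d * S) * dt + rho * S * dW
        I = I + (infection - (d + gam) * I) * dt + tht * I * dB
        # the discrete scheme can overshoot below zero even though the SDE
        # itself stays nonnegative; clip as a crude numerical guard
        S, I = np.maximum(S, 0.0), np.maximum(I, 0.0)
    return S, I

S_end, I_end = euler_maruyama(np.array([15.0, 10.0]), np.array([1.0, 1.0]))
```

The clipping step is purely a numerical guard: the lemma concerns the exact solution, while the explicit scheme can momentarily overshoot for a finite step size.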

2.1.1 Stability of the Disease-Free Equilibrium

In this part we consider the case $R_0 < 1$. Consider the system

$$dX_k = (\Lambda_k - d_k X_k)\,dt + \rho_k X_k\,dW_k, \qquad X_k(0) = S_k(0). \qquad (2.10)$$

By the comparison principle, we have $S_k(t) \le X_k(t)$ almost surely. Let $w = [w_1, \ldots, w_n]^T$ be the same vector defined in Theorem 1.1.1, which satisfies

$$\sum_{k=1}^n w_k\,\beta_{kj}\,\frac{\Lambda_k}{d_k(d_k+\gamma_k)} = R_0\, w_j.$$

We consider the Lyapunov function $V(t) = \sum_{k=1}^n e_k I_k$, where $e_k = \dfrac{w_k}{d_k+\gamma_k}$. By Itô's formula, we have

$$\begin{aligned}
d\log V &= \frac{1}{V}\sum_{k,j} e_k\beta_{kj} S_k I_j\,dt - \frac{1}{V}\sum_k w_k I_k\,dt - \frac{1}{2V^2}\sum_k e_k^2\theta_k^2 I_k^2\,dt + \frac{1}{V}\sum_k e_k\theta_k I_k\,dB_k \\
&\le \frac{1}{V}\sum_{k,j} e_k\beta_{kj} X_k I_j\,dt - \frac{1}{V}\sum_k w_k I_k\,dt - \frac{1}{2V^2}\sum_k e_k^2\theta_k^2 I_k^2\,dt + \frac{1}{V}\sum_k e_k\theta_k I_k\,dB_k \\
&= \frac{1}{V}\sum_{k,j} e_k\beta_{kj}\frac{\Lambda_k}{d_k} I_j\,dt - \frac{1}{V}\sum_k w_k I_k\,dt - \frac{1}{2V^2}\sum_k e_k^2\theta_k^2 I_k^2\,dt \\
&\qquad + \frac{1}{V}\sum_{k,j} e_k\beta_{kj}\Big(X_k - \frac{\Lambda_k}{d_k}\Big) I_j\,dt + \frac{1}{V}\sum_k e_k\theta_k I_k\,dB_k \\
&= (R_0-1)\frac{1}{V}\sum_k w_k I_k\,dt - \frac{1}{2V^2}\sum_k e_k^2\theta_k^2 I_k^2\,dt + \frac{1}{V}\sum_{k,j} e_k\beta_{kj}\Big(X_k - \frac{\Lambda_k}{d_k}\Big) I_j\,dt + \frac{1}{V}\sum_k e_k\theta_k I_k\,dB_k. \qquad (2.11)
\end{aligned}$$

Integrating the above equation, then computing the time average, we have

$$\frac{\log V(T)}{T} \le \frac{\log V(0)}{T} + \frac{1}{T}\int_0^T (R_0-1)\frac{1}{V}\sum_k w_k I_k\,dt - \frac{1}{T}\int_0^T \frac{1}{2V^2}\sum_k e_k^2\theta_k^2 I_k^2\,dt + \frac{1}{T}\int_0^T \frac{1}{V}\sum_k e_k\theta_k I_k\,dB_k + \frac{1}{T}\int_0^T \frac{1}{V}\sum_{k,j} e_k\beta_{kj}\Big(X_k-\frac{\Lambda_k}{d_k}\Big)I_j\,dt. \qquad (2.12)$$

Now we look at the first integral on the right-hand side. Notice that

$$\frac{\sum_k w_k I_k}{V} = \frac{\sum_k w_k I_k}{\sum_j e_j I_j} = \frac{\sum_k w_k I_k}{\sum_j \frac{w_j}{d_j+\gamma_j} I_j} \le \frac{\sum_k w_k I_k}{\frac{1}{\max_j\{d_j+\gamma_j\}}\sum_k w_k I_k} = \max_j\{d_j+\gamma_j\}. \qquad (2.13)$$

So the first integral is bounded by

$$\frac{1}{T}\int_0^T (R_0-1)\frac{1}{V}\sum_k w_k I_k\,dt \le \begin{cases} (R_0-1)\max_j\{d_j+\gamma_j\}, & R_0 \ge 1, \\ 0, & R_0 < 1. \end{cases} \qquad (2.14)$$

Looking at the second integral, we compute, using the Cauchy–Schwarz inequality,

$$\frac{\sum_k e_k^2\theta_k^2 I_k^2}{V^2} = \frac{\sum_k e_k^2\theta_k^2 I_k^2}{\big(\sum_j e_j I_j\big)^2} = \frac{\sum_k e_k^2\theta_k^2 I_k^2}{\big(\sum_j e_j\theta_j I_j\cdot\theta_j^{-1}\big)^2} \ge \frac{\sum_k e_k^2\theta_k^2 I_k^2}{\sum_j e_j^2\theta_j^2 I_j^2\cdot\sum_j\theta_j^{-2}} = \frac{1}{\sum_j \theta_j^{-2}}. \qquad (2.15)$$

The second integral is therefore bounded below by

$$\frac{1}{T}\int_0^T \frac{1}{2V^2}\sum_k e_k^2\theta_k^2 I_k^2\,dt \ge \frac{1}{2\sum_j\theta_j^{-2}}. \qquad (2.16)$$

Now we look at the third integral. Define the martingale $M(t) = \int_0^t \frac{1}{V}\sum_k e_k\theta_k I_k\,dB_k$. We will show that

$$\lim_{T\to\infty}\frac{1}{T}M(T) = 0 \quad \text{a.s.} \qquad (2.17)$$

We define

$$A_m = \Big\{\omega : \limsup_{T\to\infty}\frac{1}{T}|M(T)| > \frac{1}{m}\Big\}, \qquad (2.18)$$
$$A = \bigcup_{m=1}^\infty A_m, \qquad (2.19)$$
$$B = \Big\{\omega : \lim_{T\to\infty}\frac{1}{T}M(T) = 0\Big\}. \qquad (2.20)$$

Let $\omega \notin A$; then it is true for all $m \in \mathbb{N}$ that

$$\limsup_{T\to\infty}\frac{1}{T}|M(T)| \le \frac{1}{m}, \qquad (2.21)$$

which implies $\lim_{T\to\infty}\frac{1}{T}M(T) = 0$. Then we have $A^c = B$. To show the almost sure convergence, we need to show

$$P(B) = 1 - P(A) = 1, \quad \text{i.e.,} \quad P(A_m) = 0 \ \text{for every } m. \qquad (2.22)$$

For a fixed $m$, we define

$$A_{m,l} = \Big\{\omega : \sup_{T > 2^l}\frac{1}{T}|M(T)| > \frac{1}{m}\Big\}; \qquad (2.23)$$

then we will have $A_m \subseteq \bigcap_{l=1}^\infty A_{m,l}$. Noticing that

$$A_{m,l} \supseteq A_{m,l+1}, \qquad (2.24)$$

we have

$$P(A_m) \le \lim_{l\to\infty} P(A_{m,l}). \qquad (2.25)$$

By Doob's martingale inequality, we have

$$P\Big(\sup_{2^{r-1} < T \le 2^r} |M(T)| > \varepsilon\Big) \le P\Big(\sup_{0\le T\le 2^r} |M(T)| > \varepsilon\Big) \le \frac{1}{\varepsilon^2}\,E|M(2^r)|^2 = \frac{1}{\varepsilon^2}\,E\int_0^{2^r} \frac{1}{V^2}\sum_k e_k^2\theta_k^2 I_k^2\,dt. \qquad (2.26)$$

Notice that

$$\frac{1}{V^2}\sum_k e_k^2\theta_k^2 I_k^2 = \frac{\sum_k e_k^2\theta_k^2 I_k^2}{\big(\sum_j e_j I_j\big)^2} \le \sum_k \frac{e_k^2\theta_k^2 I_k^2}{e_k^2 I_k^2} = \sum_k \theta_k^2. \qquad (2.27)$$

We have that

$$E|M(2^r)|^2 \le 2^r \sum_k\theta_k^2. \qquad (2.28)$$

We choose $\varepsilon = \dfrac{2^{r-1}}{m}$; then we have that

$$P\Big(\sup_{2^{r-1}<T\le 2^r}\frac{1}{T}|M(T)| > \frac{1}{m}\Big) \le P\Big(\sup_{2^{r-1}<T\le 2^r}|M(T)| > \frac{2^{r-1}}{m}\Big) \le \frac{2^r\sum_k\theta_k^2}{(2^{r-1}/m)^2} = \frac{4m^2\sum_k\theta_k^2}{2^r}. \qquad (2.29)$$

Then we have that

$$P(A_{m,l}) \le \sum_{r=l+1}^\infty P\Big(\sup_{2^{r-1}<T\le 2^r}\frac{1}{T}|M(T)| > \frac{1}{m}\Big) \le \sum_{r=l+1}^\infty \frac{4m^2\sum_k\theta_k^2}{2^r} \longrightarrow 0 \quad \text{as } l\to\infty. \qquad (2.30)$$

Thus, the almost sure convergence is proved. Now we look at the fourth integral under the limit $T\to\infty$.

50 = 1 T lim T T,j 1 T lim T T,j,j,j,j e β j e j e β j e j e β j e j 1 V e β j I j X Λ d dt 2.31 e β j I j V 1 T lim T T [ X Λ dt d X Λ d dt x Λ νxdx d x Λ ] 1 2νxdx 2 d In the fourth line of the we used the ergodic property of the process X t, i.e. for any measurable function fx, we have 1 T lim fx tdt = fxνxdx, a.s., 2.32 T T R where νx is the stationary distribution of X t. We will provide the proof of the ergodic property in the end of this chapter. Using this property of X with function fx = x Λ d m, we have [ x Λ 2 ] 1 T [ m νxdx = lim X Λ d T T 1 T = lim E T T 1 lim T T T 2 ] m dt 2.33 d [ X Λ d 2 m ] dt E X Λ 2dt d We can solve X t in explicit form as 41

$$X_k(t) = e^{-\left(d_k+\frac{\rho_k^2}{2}\right)t+\rho_k W_k(t)}\Big(S_k(0) + \int_0^t e^{\left(d_k+\frac{\rho_k^2}{2}\right)s-\rho_k W_k(s)}\,\Lambda_k\,ds\Big).$$

By a cumbersome calculation, which will be provided at the end of this chapter, we have

$$\lim_{T\to\infty} E\Big(X_k(T) - \frac{\Lambda_k}{d_k}\Big)^2 = \frac{\Lambda_k^2\rho_k^2}{d_k^2(2d_k-\rho_k^2)}. \qquad (2.36)$$

Then, letting $m\to\infty$ in (2.33), we have

$$\int_{\mathbb{R}}\Big(x - \frac{\Lambda_k}{d_k}\Big)^2\nu_k(x)\,dx \le \frac{\Lambda_k^2\rho_k^2}{d_k^2(2d_k-\rho_k^2)}.$$

Then, the inequality (2.31) can be rewritten as

$$\begin{aligned}
\lim_{T\to\infty}\frac{1}{T}\int_0^T \frac{1}{V}\sum_{k,j} e_k\beta_{kj} I_j\Big(X_k-\frac{\Lambda_k}{d_k}\Big)dt
&\le \sum_{k,j}\frac{e_k\beta_{kj}}{e_j}\cdot\frac{\Lambda_k\rho_k}{d_k\sqrt{2d_k-\rho_k^2}} \\
&\le \max_l\Big\{\frac{\rho_l}{\sqrt{2d_l-\rho_l^2}}\Big\}\sum_{k,j}\frac{w_k\beta_{kj}\Lambda_k}{d_k(d_k+\gamma_k)}\cdot\frac{1}{e_j} \\
&= \max_l\Big\{\frac{\rho_l}{\sqrt{2d_l-\rho_l^2}}\Big\}\sum_j \frac{R_0\,w_j}{e_j} \\
&= \max_l\Big\{\frac{\rho_l}{\sqrt{2d_l-\rho_l^2}}\Big\}\,R_0\sum_j (d_j+\gamma_j). \qquad (2.37)
\end{aligned}$$

Therefore, we have the estimate

$$\limsup_{T\to\infty}\frac{1}{T}\log V(T) \le (R_0-1)\max_j\{d_j+\gamma_j\} - \frac{1}{2\sum_j\theta_j^{-2}} + \max_l\Big\{\frac{\rho_l}{\sqrt{2d_l-\rho_l^2}}\Big\}\,R_0\sum_j(d_j+\gamma_j). \qquad (2.38)$$

Remark. From the above proof we can see that the limit in the estimate (2.38) is taken in the almost sure sense. The first two terms on the right-hand side of (2.38) are negative, and the third term is a small positive number under the assumption that the volatilities $\rho_k$ are small. The estimate (2.38) then states that in the case $R_0 < 1$, the Lyapunov function $V(t)$, which is equivalent to the infected population, decreases to zero exponentially fast, almost surely.

2.1.2 Stability of the Endemic Equilibrium

Now we consider the case $R_0 > 1$. From Chapter 1 we know that the deterministic multi-group SIR system (1.2) has an endemic equilibrium $E^* = [S_1^*, S_2^*, \ldots, S_n^*, I_1^*, I_2^*, \ldots, I_n^*]$, and $E^*$ is globally stable. We have the equilibrium conditions

$$\Lambda_k - \sum_{j=1}^n \beta_{kj} S_k^* I_j^* - d_k S_k^* = 0, \qquad (2.39)$$
$$\sum_{j=1}^n \beta_{kj} S_k^* I_j^* - (d_k+\gamma_k) I_k^* = 0. \qquad (2.40)$$

Let $B$ and $w = [w_1, \ldots, w_n]$ be defined as in Theorem 1.1.1, where, writing $\bar\beta_{kj} = \beta_{kj}S_k^*I_j^*$,

$$B = \begin{pmatrix}
\sum_{j\ne 1}\bar\beta_{1j} & -\bar\beta_{21} & -\bar\beta_{31} & \cdots & -\bar\beta_{n1} \\
-\bar\beta_{12} & \sum_{j\ne 2}\bar\beta_{2j} & -\bar\beta_{32} & \cdots & -\bar\beta_{n2} \\
-\bar\beta_{13} & -\bar\beta_{23} & \sum_{j\ne 3}\bar\beta_{3j} & \cdots & -\bar\beta_{n3} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
-\bar\beta_{1n} & -\bar\beta_{2n} & -\bar\beta_{3n} & \cdots & \sum_{j\ne n}\bar\beta_{nj}
\end{pmatrix}, \qquad (2.41)$$

and $w$ satisfies

$$\sum_{j=1}^n \bar\beta_{kj}\, w_k = \sum_{j=1}^n \bar\beta_{jk}\, w_j. \qquad (2.42)$$

We define $g(x) = 1 + \ln x$, and the fact that $x \ge g(x)$ will be used repeatedly. We consider the function

$$V_1 = \sum_{k=1}^N w_k\Big[S_k(t) - S_k^*\,g\Big(\frac{S_k(t)}{S_k^*}\Big) + I_k(t) - I_k^*\,g\Big(\frac{I_k(t)}{I_k^*}\Big)\Big]. \qquad (2.43)$$

Differentiating (2.43), we have

54 dv 1 = w [Λ d S d + γ I S S Λ j I I j β j S I j d + γ I ρ2 S θ2 I β j S I j d S ] +...db +...dw = [ w β j S I j + d S d S S β j S S I j + d S + β j S I j j j j +d S I β j S I j + S I I j ρ2 S + 1 ] 2 θ2 I +...db +...dw j j = [ w d S 2 S S S + β j S S I j 2 I I S + I j j S Ij I S I j I S I j = ρ2 S θ2 I w d S +,j ] +...db +...dw 2 S S S + S,j w β j S I j w β j S I j g I I I I g S S + g I S I j S S I S I S I j I j I S + I j g I j Ij + I j Ij w 1 2 ρ2 S θ2 I +...db +...dw From 2.42 we have w β j S I j,j g I I I I =,j w j β j S j I g I I I I, 2.45 Therefore, we have dv 1 w d S 2 S S S + S w 1 2 ρ2 S θ2 I +... db +... dw Consider the function 45

$$V_2 = \sum_{k=1}^N w_kI_k^*\Big[\frac{I_k}{I_k^*}-g\Big(\frac{I_k}{I_k^*}\Big)\Big]. \qquad (2.47)$$

Then we have

$$\begin{aligned}
dV_2 &= \sum_k w_k\Big(1-\frac{I_k^*}{I_k}\Big)\Big(\sum_j\beta_{kj}S_kI_j-(d_k+\gamma_k)I_k\Big)dt+\sum_k\frac{w_k\theta_k^2I_k^*}{2}\,dt+(\cdots)dB+(\cdots)dW\\
&= \sum_{k,j}w_k\beta_{kj}S_k^*I_j^*\Big(\frac{S_kI_j}{S_k^*I_j^*}-\frac{I_k}{I_k^*}-\frac{S_kI_jI_k^*}{S_k^*I_j^*I_k}+1\Big)dt+\sum_k\frac{w_k\theta_k^2I_k^*}{2}\,dt+(\cdots)dB+(\cdots)dW\\
&\le \sum_{k,j}w_k\beta_{kj}S_k^*I_j^*\Big[\Big(\frac{S_k}{S_k^*}-1\Big)\frac{I_j}{I_j^*}+\ln\frac{S_k^*}{S_k}+\Big(\frac{I_j}{I_j^*}-\ln\frac{I_j}{I_j^*}\Big)-\Big(\frac{I_k}{I_k^*}-\ln\frac{I_k}{I_k^*}\Big)\Big]dt+\sum_k\frac{w_k\theta_k^2I_k^*}{2}\,dt+(\cdots)dB+(\cdots)dW,
\end{aligned}$$

where we used $-\frac{S_kI_jI_k^*}{S_k^*I_j^*I_k} \le -g\big(\frac{S_kI_jI_k^*}{S_k^*I_j^*I_k}\big)$. Canceling the last two bracketed terms by the identity derived from (2.42), we obtain

$$dV_2 \le \Big[\sum_{k,j}w_k\beta_{kj}\big(S_k-S_k^*\big)I_j+\sum_{k,j}w_k\beta_{kj}S_k^*I_j^*\ln\frac{S_k^*}{S_k}+\sum_k\frac{w_k\theta_k^2I_k^*}{2}\Big]dt+(\cdots)dB+(\cdots)dW. \qquad (2.48)$$

Consider the function

$$V_3 = \sum_{k=1}^N \frac{w_k}{2S_k^*}\big(S_k - S_k^*\big)^2. \qquad (2.49)$$

Then we have

$$dV_3 = \Big[\sum_k \frac{w_k}{S_k^*}\big(S_k-S_k^*\big)\Big(\Lambda_k - \sum_j\beta_{kj}S_kI_j - d_kS_k\Big) + \sum_k \frac{w_k\rho_k^2S_k^2}{2S_k^*}\Big]dt + (\cdots)dB + (\cdots)dW. \qquad (2.50)$$

Substituting $\Lambda_k = \sum_j\beta_{kj}S_k^*I_j^* + d_kS_k^*$ and writing $S_kI_j - S_k^*I_j^* = I_j(S_k-S_k^*) + S_k^*(I_j-I_j^*)$, we get

$$\begin{aligned}
dV_3 &= \Big[-\sum_k \frac{w_k}{S_k^*}\Big(d_k + \sum_j\beta_{kj}I_j\Big)\big(S_k-S_k^*\big)^2 - \sum_{k,j} w_k\beta_{kj}\big(S_k-S_k^*\big)\big(I_j-I_j^*\big) + \sum_k \frac{w_k\rho_k^2S_k^2}{2S_k^*}\Big]dt + (\cdots)dB + (\cdots)dW\\
&\le \Big[-\sum_k \frac{w_k(d_k-\rho_k^2)}{S_k^*}\big(S_k-S_k^*\big)^2 - \sum_{k,j} w_k\beta_{kj}\big(S_k-S_k^*\big)\big(I_j-I_j^*\big) + \sum_k w_k\rho_k^2S_k^*\Big]dt + (\cdots)dB + (\cdots)dW, \qquad (2.51)
\end{aligned}$$

where we used $S_k^2 \le 2(S_k-S_k^*)^2 + 2S_k^{*2}$. Consider the function

57 dv 4 = 2S S + I I Λ d S d + γ I + ρ 2 S2 + θ2 I db +...dw = 2S S + I I d + γ I + d S d S d + γ I +ρ 2 S2 + θ2 I2 +...db +...dw = 2d S S 2 2d + γ I I 2 22d + γ S S I I +ρ 2 S2 + θ2 I2 +...db +...dw. Note that 22d + γ S S I I d + γ I I 2 + 2d + γ 2 d + γ S S 2, 2.54 Then, we have dv 4 2d + γ 2 d + γ 2d + 2ρ 2 S S 2 d + γ 2θ 2 I I ρ 2 S 2 + 2θ2 I db +...dw. Choose λ = max { j β jij /d } and ε min { d ρ 2 S 2d +γ 2 d +γ compute 2d + 2ρ 2 1 }. Now we dλv 1 + V 2 + V 3 + εv A S S 2 B I I 2 + C ρ 2 + D θ db +...dw, 48

where

$$A_k = \frac{w_k(d_k-\rho_k^2)}{S_k^*} - \varepsilon\Big(\frac{(2d_k+\gamma_k)^2}{d_k+\gamma_k} - 2d_k + 2\rho_k^2\Big), \qquad (2.57)$$
$$B_k = \varepsilon\big(d_k+\gamma_k-2\theta_k^2\big), \qquad (2.58)$$
$$C_k = \frac{1}{2}\lambda w_kS_k^* + w_kS_k^* + 2\varepsilon S_k^{*2}, \qquad (2.59)$$
$$D_k = \frac{1}{2}\big(\lambda w_kI_k^* + w_kI_k^*\big) + 2\varepsilon I_k^{*2}. \qquad (2.60)$$

Integrating (2.56) from $0$ to $T$ and then taking the expectation, we have

$$E\big(\lambda V_1+V_2+V_3+\varepsilon V_4\big)(T) \le E\big(\lambda V_1+V_2+V_3+\varepsilon V_4\big)(0) + E\int_0^T \sum_k\Big(-A_k\big(S_k-S_k^*\big)^2 - B_k\big(I_k-I_k^*\big)^2\Big)dt + \sum_k\big(C_k\rho_k^2 + D_k\theta_k^2\big)\,T. \qquad (2.61)$$

Dividing (2.61) by $T$, we will have

$$\frac{1}{T}E\big(\lambda V_1+V_2+V_3+\varepsilon V_4\big)(T) \le \frac{1}{T}E\big(\lambda V_1+V_2+V_3+\varepsilon V_4\big)(0) + E\,\frac{1}{T}\int_0^T \sum_k\Big(-A_k\big(S_k-S_k^*\big)^2 - B_k\big(I_k-I_k^*\big)^2\Big)dt + \sum_k\big(C_k\rho_k^2 + D_k\theta_k^2\big). \qquad (2.62)$$

Then we have

$$E\,\frac{1}{T}\int_0^T\sum_k\Big(A_k\big(S_k-S_k^*\big)^2 + B_k\big(I_k-I_k^*\big)^2\Big)dt \le \sum_k\big(C_k\rho_k^2 + D_k\theta_k^2\big) + \frac{1}{T}E\big(\lambda V_1+V_2+V_3+\varepsilon V_4\big)(0) - \frac{1}{T}E\big(\lambda V_1+V_2+V_3+\varepsilon V_4\big)(T) \le \sum_k\big(C_k\rho_k^2+D_k\theta_k^2\big) + \frac{1}{T}E\big(\lambda V_1+V_2+V_3+\varepsilon V_4\big)(0). \qquad (2.63)$$

Assuming $T \ge 1$, we know that the right-hand side of (2.63) is bounded by a constant. We then take $T\to\infty$ on both sides of (2.63). By the dominated convergence theorem, it is valid to exchange the limit operator and the expectation, so we will have

$$E\,\limsup_{T\to\infty}\frac{1}{T}\int_0^T\sum_k\Big(A_k\big(S_k-S_k^*\big)^2 + B_k\big(I_k-I_k^*\big)^2\Big)dt \le \sum_k\big(C_k\rho_k^2 + D_k\theta_k^2\big). \qquad (2.64)$$

Remark. From the estimate (2.64) we can see that the limit is taken in the $L^2$ sense. Under the assumption that the volatilities $\{\rho_k, \theta_k\}$ are small, the susceptible and infected populations $[S_1, I_1, \ldots, S_n, I_n]$ will stay close to the endemic equilibrium $[S_1^*, I_1^*, \ldots, S_n^*, I_n^*]$.

2.1.3 Vaccination

We will apply a pulse vaccination strategy to the stochastic SIR model. We have the following system:

$$dS_k = \Big(\Lambda_k - \sum_{j=1}^n\beta_{kj}S_kI_j - d_kS_k\Big)dt + \sigma_kS_k\,dW_k(t),$$
$$dI_k = \Big(\sum_{j=1}^n\beta_{kj}S_kI_j - (d_k+\gamma_k)I_k\Big)dt + \rho_kI_k\,dB_k(t), \qquad t\in(t_i, t_{i+1}), \qquad (2.65)$$

with the vaccination condition

$$S_k(t_i^+) = S_k(t_i)\big(1-c_i\big), \qquad I_k(t_i^+) = I_k(t_i). \qquad (2.66)$$

We will study the optimal strategy for vaccination and give numerical results in the later sections.

2.2 Stochastic Impulsive Control Problems

In this section we give a detailed description of the impulsive optimal control problem and study the necessary conditions that the optimal controls must satisfy. We have two approaches: the calculus of variations and dynamic programming. In addition, we will discuss solutions to coupled forward backward stochastic differential equations.
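The vaccination condition (2.66) is a simple multiplicative reset of the susceptible groups at each pulse time, with the infected groups passing through the pulse unchanged. A minimal sketch, with hypothetical two-group values and coverage fractions not taken from the text:

```python
import numpy as np

def apply_pulse_vaccination(S, I, c):
    """Vaccination condition (2.66): at a pulse time t_i, the fraction c of
    each susceptible group is removed by vaccination; the infected groups
    are continuous across the pulse."""
    return S * (1.0 - c), I.copy()

# Illustrative two-group state and coverage fractions (placeholders).
S = np.array([100.0, 80.0])
I = np.array([5.0, 3.0])
c = np.array([0.3, 0.5])      # vaccinate 30% of group 1, 50% of group 2

S_plus, I_plus = apply_pulse_vaccination(S, I, c)
# S jumps down to [70.0, 40.0]; I is unchanged
```

A full simulation of (2.65)–(2.66) alternates an SDE integration over each interval $(t_i, t_{i+1})$ with one such reset at $t_{i+1}$.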

We look at a system whose evolution satisfies the following stochastic differential equation:

$$dx = f_k(x, u, t)\,dt + \sum_{j=1}^d \sigma_k^j(x, u, t)\,dW_j(t), \qquad t\in(t_k, t_{k+1}), \qquad (2.67)$$

where $W = (W_1, \ldots, W_d)^T$ is a standard $d$-dimensional Wiener process defined on a complete probability space $(\Omega, \mathcal{F}, P)$ with filtration $\mathcal{F}_t = \sigma\{W(s);\ s\le t\}$. Impulsive controls $c_k$ are applied to the system at times $t_k$, $k = 1, \ldots, N-1$, and the state variable satisfies the following jump conditions:

$$x(t_k^+) = g_k\big(x(t_k), c_k\big). \qquad (2.68)$$

The stochastic impulsive optimal control problem is to find a continuous control $u(t)$ adapted to $\mathcal{F}_t$ and impulses $c_k$ such that the cost functional

$$J(u, c) = E\Big\{\sum_{k=1}^{N-1}\phi_k\big(x(t_k), c_k\big) + \sum_{k=0}^{N-1}\int_{t_k}^{t_{k+1}} L_k(x, u, t)\,dt + \phi_N\big(x(t_N)\big)\Big\} \qquad (2.69)$$

is minimized. We assume that

$$f_k(x,u,t): \mathbb{R}^n\times\mathbb{R}^m\times\mathbb{R} \to \mathbb{R}^n, \qquad g_k(x,c): \mathbb{R}^n\times\mathbb{R}^M \to \mathbb{R}^n, \qquad \sigma_k^j(x,u,t): \mathbb{R}^n\times\mathbb{R}^m\times\mathbb{R} \to \mathbb{R}^n,$$
$$L_k(x,u,t): \mathbb{R}^n\times\mathbb{R}^m\times\mathbb{R} \to \mathbb{R}, \qquad \phi_k(x,c): \mathbb{R}^n\times\mathbb{R}^M \to \mathbb{R}$$

are smooth functions with continuous derivatives of all orders.
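The structure of the cost functional (2.69) — running costs between impulse times, an impulse cost at each $t_k$, and a terminal cost — can be made concrete with a brute-force Monte Carlo evaluation for fixed controls. Everything below (the scalar dynamics, the quadratic costs, the single interior impulse time) is a hypothetical instance chosen for illustration, not the problem treated in the text:

```python
import numpy as np

rng = np.random.default_rng(1)

# A scalar, hypothetical instance of (2.67)-(2.69):
#   dynamics  f(x, u, t) = -x + u,      sigma(x, u, t) = 0.2 x,
#   jump      g_1(x, c)  = x - c   at the single interior time t_1,
#   costs     L = x^2 + u^2,  phi_1(x, c) = c^2,  phi_N(x) = x^2.
t_grid = [0.0, 1.0, 2.0]        # t_0 < t_1 < t_2 = t_N
dt = 2e-3

def cost_estimate(u, c1, x0=1.0, n_paths=200):
    """Monte Carlo estimate of the cost functional J(u, c) in (2.69)
    for a constant continuous control u and one impulse c1 at t_1."""
    total = 0.0
    for _ in range(n_paths):
        x, J = x0, 0.0
        for k in range(len(t_grid) - 1):
            for _ in range(int((t_grid[k + 1] - t_grid[k]) / dt)):
                J += (x**2 + u**2) * dt                       # running cost L
                x += (-x + u) * dt + 0.2 * x * rng.normal(0.0, np.sqrt(dt))
            if k == 0:                                        # impulse at t_1
                J += c1**2                                    # impulse cost phi_1
                x = x - c1                                    # jump condition (2.68)
        J += x**2                                             # terminal cost phi_N
        total += J
    return total / n_paths

J_hat = cost_estimate(u=0.0, c1=0.5)
```

A crude grid search over $(u, c_1)$ with this estimator already exhibits the trade-off that the necessary conditions below formalize: a larger impulse reduces the remaining running and terminal costs but is charged through $\phi_1$.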

2.2.1 Necessary Conditions via the Calculus of Variations

As in the deterministic version, we derive the variational equation by adding a small perturbation to the optimal control. An adjoint variable is defined by a backward stochastic differential equation; the optimal continuous control is found by minimizing the Hamiltonian, and the optimal impulse is determined in the process.

Assume $\{\hat u, \hat c_k\}$ is the optimal control pair and $\hat x$ is the state variable of the system corresponding to the control $\{\hat u, \hat c_k\}$. Let us define another set of controls $\{u^\theta, c_k^\theta\}$ by

$$u^\theta(t) \equiv \hat u(t) + \theta v(t), \qquad c_k^\theta \equiv \hat c_k + \theta \bar c_k, \qquad k = 1,\ldots,N-1,$$

where $v, \bar c_k$ are arbitrary perturbations and $0 < \theta \ll 1$. Let $x^\theta$ be the state variable corresponding to $\{u^\theta(t), c_k^\theta\}$. Let us consider $z_k(t)$, $k = 0, \ldots, N-1$, which solves the system

$$dz_k = \big(f_k^x(\hat x(t), \hat u(t), t)\,z_k + f_k^u\,v\big)dt + \sum_{j=1}^d\big(\sigma_k^{j,x}\,z_k + \sigma_k^{j,u}\,v\big)dW_j, \qquad t\in(t_k, t_{k+1}), \qquad (2.70)$$
$$z_k(t_k^+) = g_k^x\big(\hat x(t_k), \hat c_k\big)\,z_{k-1}(t_k) + g_k^c\,\bar c_k, \qquad (2.71)$$

for $k = 0, \ldots, N-1$, where we set $z_{-1}(t_0) = 0$ and $\bar c_0 = 0$ just to ease the notation. Then we will have the following estimate.

Lemma. Let $\hat x$, $x^\theta$ and $z_k$ be defined as above. Then we have

1. $E\{x^\theta(t) - \hat x(t) - \theta z_k(t)\} = O(\theta^2)$, $t\in(t_k, t_{k+1})$, and that


More information

I forgot to mention last time: in the Ito formula for two standard processes, putting

I forgot to mention last time: in the Ito formula for two standard processes, putting I forgot to mention last time: in the Ito formula for two standard processes, putting dx t = a t dt + b t db t dy t = α t dt + β t db t, and taking f(x, y = xy, one has f x = y, f y = x, and f xx = f yy

More information

6.207/14.15: Networks Lectures 4, 5 & 6: Linear Dynamics, Markov Chains, Centralities

6.207/14.15: Networks Lectures 4, 5 & 6: Linear Dynamics, Markov Chains, Centralities 6.207/14.15: Networks Lectures 4, 5 & 6: Linear Dynamics, Markov Chains, Centralities 1 Outline Outline Dynamical systems. Linear and Non-linear. Convergence. Linear algebra and Lyapunov functions. Markov

More information

Properties of an infinite dimensional EDS system : the Muller s ratchet

Properties of an infinite dimensional EDS system : the Muller s ratchet Properties of an infinite dimensional EDS system : the Muller s ratchet LATP June 5, 2011 A ratchet source : wikipedia Plan 1 Introduction : The model of Haigh 2 3 Hypothesis (Biological) : The population

More information

Delay SIR Model with Nonlinear Incident Rate and Varying Total Population

Delay SIR Model with Nonlinear Incident Rate and Varying Total Population Delay SIR Model with Nonlinear Incident Rate Varying Total Population Rujira Ouncharoen, Salinthip Daengkongkho, Thongchai Dumrongpokaphan, Yongwimon Lenbury Abstract Recently, models describing the behavior

More information

Solution of Stochastic Optimal Control Problems and Financial Applications

Solution of Stochastic Optimal Control Problems and Financial Applications Journal of Mathematical Extension Vol. 11, No. 4, (2017), 27-44 ISSN: 1735-8299 URL: http://www.ijmex.com Solution of Stochastic Optimal Control Problems and Financial Applications 2 Mat B. Kafash 1 Faculty

More information

FE 5204 Stochastic Differential Equations

FE 5204 Stochastic Differential Equations Instructor: Jim Zhu e-mail:zhu@wmich.edu http://homepages.wmich.edu/ zhu/ January 20, 2009 Preliminaries for dealing with continuous random processes. Brownian motions. Our main reference for this lecture

More information

ERRATA: Probabilistic Techniques in Analysis

ERRATA: Probabilistic Techniques in Analysis ERRATA: Probabilistic Techniques in Analysis ERRATA 1 Updated April 25, 26 Page 3, line 13. A 1,..., A n are independent if P(A i1 A ij ) = P(A 1 ) P(A ij ) for every subset {i 1,..., i j } of {1,...,

More information

Research Article Existence and Uniqueness Theorem for Stochastic Differential Equations with Self-Exciting Switching

Research Article Existence and Uniqueness Theorem for Stochastic Differential Equations with Self-Exciting Switching Discrete Dynamics in Nature and Society Volume 211, Article ID 549651, 12 pages doi:1.1155/211/549651 Research Article Existence and Uniqueness Theorem for Stochastic Differential Equations with Self-Exciting

More information

Math Ordinary Differential Equations

Math Ordinary Differential Equations Math 411 - Ordinary Differential Equations Review Notes - 1 1 - Basic Theory A first order ordinary differential equation has the form x = f(t, x) (11) Here x = dx/dt Given an initial data x(t 0 ) = x

More information

Numerical Integration of SDEs: A Short Tutorial

Numerical Integration of SDEs: A Short Tutorial Numerical Integration of SDEs: A Short Tutorial Thomas Schaffter January 19, 010 1 Introduction 1.1 Itô and Stratonovich SDEs 1-dimensional stochastic differentiable equation (SDE) is given by [6, 7] dx

More information

Introduction to Nonlinear Control Lecture # 3 Time-Varying and Perturbed Systems

Introduction to Nonlinear Control Lecture # 3 Time-Varying and Perturbed Systems p. 1/5 Introduction to Nonlinear Control Lecture # 3 Time-Varying and Perturbed Systems p. 2/5 Time-varying Systems ẋ = f(t, x) f(t, x) is piecewise continuous in t and locally Lipschitz in x for all t

More information

6. Age structure. for a, t IR +, subject to the boundary condition. (6.3) p(0; t) = and to the initial condition

6. Age structure. for a, t IR +, subject to the boundary condition. (6.3) p(0; t) = and to the initial condition 6. Age structure In this section we introduce a dependence of the force of infection upon the chronological age of individuals participating in the epidemic. Age has been recognized as an important factor

More information

Backward martingale representation and endogenous completeness in finance

Backward martingale representation and endogenous completeness in finance Backward martingale representation and endogenous completeness in finance Dmitry Kramkov (with Silviu Predoiu) Carnegie Mellon University 1 / 19 Bibliography Robert M. Anderson and Roberto C. Raimondo.

More information

SMSTC (2007/08) Probability.

SMSTC (2007/08) Probability. SMSTC (27/8) Probability www.smstc.ac.uk Contents 12 Markov chains in continuous time 12 1 12.1 Markov property and the Kolmogorov equations.................... 12 2 12.1.1 Finite state space.................................

More information

A Concise Course on Stochastic Partial Differential Equations

A Concise Course on Stochastic Partial Differential Equations A Concise Course on Stochastic Partial Differential Equations Michael Röckner Reference: C. Prevot, M. Röckner: Springer LN in Math. 1905, Berlin (2007) And see the references therein for the original

More information

Lecture 15 Perron-Frobenius Theory

Lecture 15 Perron-Frobenius Theory EE363 Winter 2005-06 Lecture 15 Perron-Frobenius Theory Positive and nonnegative matrices and vectors Perron-Frobenius theorems Markov chains Economic growth Population dynamics Max-min and min-max characterization

More information

The multidimensional Ito Integral and the multidimensional Ito Formula. Eric Mu ller June 1, 2015 Seminar on Stochastic Geometry and its applications

The multidimensional Ito Integral and the multidimensional Ito Formula. Eric Mu ller June 1, 2015 Seminar on Stochastic Geometry and its applications The multidimensional Ito Integral and the multidimensional Ito Formula Eric Mu ller June 1, 215 Seminar on Stochastic Geometry and its applications page 2 Seminar on Stochastic Geometry and its applications

More information

1 Lyapunov theory of stability

1 Lyapunov theory of stability M.Kawski, APM 581 Diff Equns Intro to Lyapunov theory. November 15, 29 1 1 Lyapunov theory of stability Introduction. Lyapunov s second (or direct) method provides tools for studying (asymptotic) stability

More information

Mean-Field optimization problems and non-anticipative optimal transport. Beatrice Acciaio

Mean-Field optimization problems and non-anticipative optimal transport. Beatrice Acciaio Mean-Field optimization problems and non-anticipative optimal transport Beatrice Acciaio London School of Economics based on ongoing projects with J. Backhoff, R. Carmona and P. Wang Robust Methods in

More information

A Class of Fractional Stochastic Differential Equations

A Class of Fractional Stochastic Differential Equations Vietnam Journal of Mathematics 36:38) 71 79 Vietnam Journal of MATHEMATICS VAST 8 A Class of Fractional Stochastic Differential Equations Nguyen Tien Dung Department of Mathematics, Vietnam National University,

More information

Linearization of Differential Equation Models

Linearization of Differential Equation Models Linearization of Differential Equation Models 1 Motivation We cannot solve most nonlinear models, so we often instead try to get an overall feel for the way the model behaves: we sometimes talk about looking

More information

Chapter 2 SOME ANALYTICAL TOOLS USED IN THE THESIS

Chapter 2 SOME ANALYTICAL TOOLS USED IN THE THESIS Chapter 2 SOME ANALYTICAL TOOLS USED IN THE THESIS 63 2.1 Introduction In this chapter we describe the analytical tools used in this thesis. They are Markov Decision Processes(MDP), Markov Renewal process

More information

Applied Mathematics Letters. Stationary distribution, ergodicity and extinction of a stochastic generalized logistic system

Applied Mathematics Letters. Stationary distribution, ergodicity and extinction of a stochastic generalized logistic system Applied Mathematics Letters 5 (1) 198 1985 Contents lists available at SciVerse ScienceDirect Applied Mathematics Letters journal homepage: www.elsevier.com/locate/aml Stationary distribution, ergodicity

More information

p 1 ( Y p dp) 1/p ( X p dp) 1 1 p

p 1 ( Y p dp) 1/p ( X p dp) 1 1 p Doob s inequality Let X(t) be a right continuous submartingale with respect to F(t), t 1 P(sup s t X(s) λ) 1 λ {sup s t X(s) λ} X + (t)dp 2 For 1 < p

More information

n E(X t T n = lim X s Tn = X s

n E(X t T n = lim X s Tn = X s Stochastic Calculus Example sheet - Lent 15 Michael Tehranchi Problem 1. Let X be a local martingale. Prove that X is a uniformly integrable martingale if and only X is of class D. Solution 1. If If direction:

More information

A Barrier Version of the Russian Option

A Barrier Version of the Russian Option A Barrier Version of the Russian Option L. A. Shepp, A. N. Shiryaev, A. Sulem Rutgers University; shepp@stat.rutgers.edu Steklov Mathematical Institute; shiryaev@mi.ras.ru INRIA- Rocquencourt; agnes.sulem@inria.fr

More information

Nonlinear representation, backward SDEs, and application to the Principal-Agent problem

Nonlinear representation, backward SDEs, and application to the Principal-Agent problem Nonlinear representation, backward SDEs, and application to the Principal-Agent problem Ecole Polytechnique, France April 4, 218 Outline The Principal-Agent problem Formulation 1 The Principal-Agent problem

More information

Verona Course April Lecture 1. Review of probability

Verona Course April Lecture 1. Review of probability Verona Course April 215. Lecture 1. Review of probability Viorel Barbu Al.I. Cuza University of Iaşi and the Romanian Academy A probability space is a triple (Ω, F, P) where Ω is an abstract set, F is

More information

Point Process Control

Point Process Control Point Process Control The following note is based on Chapters I, II and VII in Brémaud s book Point Processes and Queues (1981). 1 Basic Definitions Consider some probability space (Ω, F, P). A real-valued

More information

A Central Limit Theorem for Fleming-Viot Particle Systems Application to the Adaptive Multilevel Splitting Algorithm

A Central Limit Theorem for Fleming-Viot Particle Systems Application to the Adaptive Multilevel Splitting Algorithm A Central Limit Theorem for Fleming-Viot Particle Systems Application to the Algorithm F. Cérou 1,2 B. Delyon 2 A. Guyader 3 M. Rousset 1,2 1 Inria Rennes Bretagne Atlantique 2 IRMAR, Université de Rennes

More information

Uniformly Uniformly-ergodic Markov chains and BSDEs

Uniformly Uniformly-ergodic Markov chains and BSDEs Uniformly Uniformly-ergodic Markov chains and BSDEs Samuel N. Cohen Mathematical Institute, University of Oxford (Based on joint work with Ying Hu, Robert Elliott, Lukas Szpruch) Centre Henri Lebesgue,

More information

Lecture 17 Brownian motion as a Markov process

Lecture 17 Brownian motion as a Markov process Lecture 17: Brownian motion as a Markov process 1 of 14 Course: Theory of Probability II Term: Spring 2015 Instructor: Gordan Zitkovic Lecture 17 Brownian motion as a Markov process Brownian motion is

More information

(2m)-TH MEAN BEHAVIOR OF SOLUTIONS OF STOCHASTIC DIFFERENTIAL EQUATIONS UNDER PARAMETRIC PERTURBATIONS

(2m)-TH MEAN BEHAVIOR OF SOLUTIONS OF STOCHASTIC DIFFERENTIAL EQUATIONS UNDER PARAMETRIC PERTURBATIONS (2m)-TH MEAN BEHAVIOR OF SOLUTIONS OF STOCHASTIC DIFFERENTIAL EQUATIONS UNDER PARAMETRIC PERTURBATIONS Svetlana Janković and Miljana Jovanović Faculty of Science, Department of Mathematics, University

More information

Existence and uniqueness of solutions for a continuous-time opinion dynamics model with state-dependent connectivity

Existence and uniqueness of solutions for a continuous-time opinion dynamics model with state-dependent connectivity Existence and uniqueness of solutions for a continuous-time opinion dynamics model with state-dependent connectivity Vincent D. Blondel, Julien M. Hendricx and John N. Tsitsilis July 24, 2009 Abstract

More information

2tdt 1 y = t2 + C y = which implies C = 1 and the solution is y = 1

2tdt 1 y = t2 + C y = which implies C = 1 and the solution is y = 1 Lectures - Week 11 General First Order ODEs & Numerical Methods for IVPs In general, nonlinear problems are much more difficult to solve than linear ones. Unfortunately many phenomena exhibit nonlinear

More information

Wiener Measure and Brownian Motion

Wiener Measure and Brownian Motion Chapter 16 Wiener Measure and Brownian Motion Diffusion of particles is a product of their apparently random motion. The density u(t, x) of diffusing particles satisfies the diffusion equation (16.1) u

More information

Robust control and applications in economic theory

Robust control and applications in economic theory Robust control and applications in economic theory In honour of Professor Emeritus Grigoris Kalogeropoulos on the occasion of his retirement A. N. Yannacopoulos Department of Statistics AUEB 24 May 2013

More information

MATRIX REPRESENTATIONS FOR MULTIPLICATIVE NESTED SUMS. 1. Introduction. The harmonic sums, defined by [BK99, eq. 4, p. 1] sign (i 1 ) n 1 (N) :=

MATRIX REPRESENTATIONS FOR MULTIPLICATIVE NESTED SUMS. 1. Introduction. The harmonic sums, defined by [BK99, eq. 4, p. 1] sign (i 1 ) n 1 (N) := MATRIX REPRESENTATIONS FOR MULTIPLICATIVE NESTED SUMS LIN JIU AND DIANE YAHUI SHI* Abstract We study the multiplicative nested sums which are generalizations of the harmonic sums and provide a calculation

More information

Stochastic modelling of epidemic spread

Stochastic modelling of epidemic spread Stochastic modelling of epidemic spread Julien Arino Department of Mathematics University of Manitoba Winnipeg Julien Arino@umanitoba.ca 19 May 2012 1 Introduction 2 Stochastic processes 3 The SIS model

More information

Research Article An Impulse Model for Computer Viruses

Research Article An Impulse Model for Computer Viruses Discrete Dynamics in Nature and Society Volume 2012, Article ID 260962, 13 pages doi:10.1155/2012/260962 Research Article An Impulse Model for Computer Viruses Chunming Zhang, Yun Zhao, and Yingjiang Wu

More information

LogFeller et Ray Knight

LogFeller et Ray Knight LogFeller et Ray Knight Etienne Pardoux joint work with V. Le and A. Wakolbinger Etienne Pardoux (Marseille) MANEGE, 18/1/1 1 / 16 Feller s branching diffusion with logistic growth We consider the diffusion

More information

Short-time expansions for close-to-the-money options under a Lévy jump model with stochastic volatility

Short-time expansions for close-to-the-money options under a Lévy jump model with stochastic volatility Short-time expansions for close-to-the-money options under a Lévy jump model with stochastic volatility José Enrique Figueroa-López 1 1 Department of Statistics Purdue University Statistics, Jump Processes,

More information

Stochastic integration. P.J.C. Spreij

Stochastic integration. P.J.C. Spreij Stochastic integration P.J.C. Spreij this version: April 22, 29 Contents 1 Stochastic processes 1 1.1 General theory............................... 1 1.2 Stopping times...............................

More information