Markov Chain Models in Economics, Management and Finance


1 Markov Chain Models in Economics, Management and Finance. Intensive lecture course at the Higher School of Economics, Moscow, Russia. Alexander S. Poznyak, Cinvestav, Mexico. April 2017.

2 Contact Data. Alexander S. Poznyak, CINVESTAV-IPN, Mexico.

3 Resume. This course introduces a newly developed optimization technique for a wide class of discrete- and continuous-time finite Markov chain models. Along with a coherent introduction to the description of Markov models (classification of controllable Markov chains, the ergodicity property, the rate of convergence to a stationary distribution), several optimization methods (such as Lagrange multipliers, penalty functions and the extra-proximal scheme) are discussed. Based on these models and numerical methods, Marketing, Portfolio Optimization and Transfer Pricing, as well as Stackelberg-Nash Games, Bargaining and Other Conflict Situations, are considered in depth. While all required statements are proved systematically, the emphasis is on understanding and applying the theory to real-world situations.

4 Structure of the course.
1-st Lecture Day: Basic Notions on Controllable Markov Chain Models, Decision Making and the Production Optimization Problem.
2-nd Lecture Day: The Mean-Variance Customer Portfolio Problem: Bank Credit Policy Optimization.
3-rd Lecture Day: Conflict Situation Resolution: Multi-Participant Problems, Pareto and Nash Concepts, Stackelberg Equilibrium.
4-th Lecture Day: Bargaining (Negotiation).
5-th Lecture Day: Partially Observable Markov Chain Models and the Traffic Optimization Problem.

5 Recommended Bibliography. Books (1). [Book list shown as images on the slide; not recoverable from the transcription.]

6 Recommended Bibliography. Books (2). [Book list shown as images on the slide; not recoverable from the transcription.]

7 Recommended Bibliography. Papers recently published (1).
Julio B. Clempner and Alexander S. Poznyak. Simple computing of the customer lifetime value: a fixed local-optimal policy approach. J. Syst. Sci. Syst. Eng., December 2014, Vol. 23, Issue 4.
Kristal K. Trejo, Julio B. Clempner, Alexander S. Poznyak. A Stackelberg security game with random strategies based on the extraproximal theoretic approach. Engineering Applications of Artificial Intelligence, 37 (2015).
Kristal K. Trejo, Julio B. Clempner, Alexander S. Poznyak. Computing the Stackelberg/Nash equilibria using the extraproximal method. Int. J. Appl. Math. Comput. Sci., 2015, Vol. 25, No. 2.
Julio B. Clempner, Alexander S. Poznyak. Modeling the multi-traffic signal-control synchronization: a Markov chains game theory approach. Engineering Applications of Artificial Intelligence, 43 (2015).
Julio B. Clempner, Alexander S. Poznyak. Stackelberg security games: computing the shortest-path equilibrium. Expert Systems with Applications, 42 (2015).

8 Recommended Bibliography. Papers recently published (2).
Emma M. Sanchez, Julio B. Clempner and Alexander S. Poznyak. Solving the mean-variance customer portfolio in Markov chains using iterated quadratic/Lagrange programming: a credit-card customer limits approach. Expert Systems with Applications, 42 (2015).
Emma M. Sanchez, Julio B. Clempner and Alexander S. Poznyak. A priori-knowledge/actor-critic reinforcement learning architecture for computing the mean-variance customer portfolio: the case of bank marketing campaigns. Engineering Applications of Artificial Intelligence, Vol. 46, Part A, November 2015.
Julio B. Clempner, Alexander S. Poznyak. Computing the strong Nash equilibrium for Markov chains games. Applied Mathematics and Computation, Vol. 265, 15 August 2015.
Julio B. Clempner, Alexander S. Poznyak. Convergence analysis for pure stationary strategies in repeated potential games: Nash, Lyapunov and correlated equilibria. Expert Systems with Applications, Vol. 46, 15 March 2016.

9 Recommended Bibliography. Papers recently published (3).
Julio B. Clempner, Alexander S. Poznyak. Solving the Pareto front for multiobjective Markov chains using the minimum Euclidean distance gradient-based optimization method. Mathematics and Computers in Simulation, Vol. 119, January 2016.
Julio B. Clempner and Alexander S. Poznyak. Constructing the Pareto front for multi-objective Markov chains handling a strong Pareto policy approach. Comp. Appl. Math.
Julio B. Clempner, Alexander S. Poznyak. Multiobjective Markov chains optimization problem with strong Pareto frontier: principles of decision making. Expert Systems With Applications, 68 (2017).
J. Clempner and A. Poznyak. Analyzing an optimistic attitude for the leader firm in duopoly models: a strong Stackelberg equilibrium based on a Lyapunov game theory approach. Economic Computation and Economic Cybernetics Studies and Research, 4 (2017), 50.

10 1-st Lecture Day. Basic Notions on Controllable Markov Chain Models, Decision Making and the Production Optimization Problem.

11 Markov Processes. Classical definition. PART 1: MARKOV CHAINS AND DECISION MAKING.
Definition. A stochastic dynamic system satisfies the Markov property (this definition was introduced by A. A. Markov in 1906) "if the probable (future) state of the system at any time t > s is independent of the (past) behavior of the system at times t < s, given the present state at time s".
This property is nicely illustrated by the classical motion of a particle whose trajectory after time s depends only on its coordinates (position) and velocity at time s, so that its behavior before time s has absolutely no effect on its dynamics after time s.

12 Markov Processes. Stochastic process: rigorous mathematical definition.
Definition. $x(t,\omega) \in \mathbb{R}^n$ is said to be a stochastic process defined on the probability space $(\Omega, \mathcal{F}, P)$ with state space $\mathbb{R}^n$ and index time-set $J := [t_0, T] \subseteq [0, \infty)$. Here $\omega \in \Omega$ labels an individual trajectory of the process, $\Omega$ is the set of elementary events, $\mathcal{F}$ is the collection ($\sigma$-algebra) of all possible events arising from $\Omega$, and $P$ is a probabilistic measure (probability) defined for any event $A \in \mathcal{F}$. The time set $J$ may be
- discrete, i.e., $J = \{t_0, t_1, \ldots, t_n, \ldots\}$ — then we speak of a discrete-time stochastic process $x(t_n, \omega)$;
- continuous, i.e., $J = [t_0, T)$ — then we speak of a continuous-time stochastic process $x(t, \omega)$.

13 Markov Processes. Stochastic process: illustrative figures. [Figures: a discrete-time process and a continuous-time process.]

14 Markov Processes. Markov property.
Definition. $\{x(t,\omega)\}_{t \in J}$ is called a Markov process (MP) if the following Markov property holds: for any $t_0 \le \tau \le t \le T$ and all $A \in \mathcal{B}^n$
$$P\{x(t,\omega) \in A \mid \mathcal{F}_{[t_0,\tau]}\} \stackrel{a.s.}{=} P\{x(t,\omega) \in A \mid x(\tau,\omega)\}.$$

15 Finite Markov Chains. Main definition.
Let the phase space of a Markov process $\{x(t,\omega)\}_{t \in T}$ be discrete, that is, $x(t,\omega) \in X := \{1, 2, \ldots, N\}$ (or $\{1, 2, \ldots, N\} \cup \{0\}$), $N = 1, 2, \ldots$ — a countable set, or a finite one.
Definition. A Markov process $\{x(t,\omega)\}_{t \in T}$ with a discrete phase space $X$ is said to be a Markov chain (or a Finite Markov Chain if $N$ is finite)
a) in continuous time if $T := [t_0, T)$, where $T$ is admitted to be $\infty$;
b) in discrete time if $T := \{t_0, t_1, \ldots, t_T\}$, where $T$ is admitted to be $\infty$.

16 Finite Markov Chains. Markov property for Markov chains.
Corollary. The main Markov property for this particular case looks as follows:
in continuous time: for any $i, j \in X$ and any $s_1 < \cdots < s_m < s \le t \in T$
$$P\{x(t,\omega) = j \mid x(s_1,\omega) = i_1, \ldots, x(s_m,\omega) = i_m,\ x(s,\omega) = i\} \stackrel{a.s.}{=} P\{x(t,\omega) = j \mid x(s,\omega) = i\};$$
in discrete time: for any $i, j \in X$ and any $n = 0, 1, 2, \ldots$
$$P\{x(t_{n+1},\omega) = j \mid x(t_0,\omega) = i_0, \ldots, x(t_m,\omega) = i_m,\ x(t_n,\omega) = i\} \stackrel{a.s.}{=} P\{x(t_{n+1},\omega) = j \mid x(t_n,\omega) = i\} := \pi_{j|i}(n).$$

17 Finite Markov Chains. Homogeneous (Stationary) Markov Chains.
Definition. A Markov chain is said to be Homogeneous (Stationary) if the transition probabilities are constant, that is, $\pi_{j|i}(n) = \pi_{j|i} = \mathrm{const}$ for all $n = 0, 1, 2, \ldots$

18 Finite Markov Chains. Transition Matrix.
Transition matrix $\Pi$:
$$\Pi = \begin{pmatrix} \pi_{1|1} & \pi_{2|1} & \cdots & \pi_{N|1} \\ \pi_{1|2} & \pi_{2|2} & \cdots & \pi_{N|2} \\ \vdots & & & \vdots \\ \pi_{1|N} & \pi_{2|N} & \cdots & \pi_{N|N} \end{pmatrix} = \left\| \pi_{j|i} \right\|_{i,j=1,\ldots,N}.$$
Stochastic property: $\sum_{j=1}^{N} \pi_{j|i} = 1$ for all $i = 1, \ldots, N$.
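The stochastic property is easy to verify numerically. A minimal sketch in Python/NumPy (the 3-state matrix below is a hypothetical stand-in, since the slides carry no code; the convention is rows = current state $i$, columns = next state $j$):

```python
import numpy as np

# Hypothetical 3-state transition matrix; entry [i, j] = pi_{j|i}.
Pi = np.array([
    [0.2, 0.5, 0.3],
    [0.1, 0.6, 0.3],
    [0.4, 0.4, 0.2],
])

# Stochastic property: every row is nonnegative and sums to one.
assert np.all(Pi >= 0)
assert np.allclose(Pi.sum(axis=1), 1.0)
```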

19 Finite Markov Chains. Dynamic Model of Finite Markov Chains.
By the Bayes formula $P\{A\} = \sum_i P\{A \mid B_i\}\, P\{B_i\}$ it follows that
$$P\{x(t_{n+1},\omega) = j\} = \sum_{i=1}^{N} \underbrace{P\{x(t_{n+1},\omega) = j \mid x(t_n,\omega) = i\}}_{\pi_{j|i}}\, P\{x(t_n,\omega) = i\}.$$
Defining $p_i(n) := P\{x(t_n,\omega) = i\}$, we can write the Dynamic Model of a Finite Markov Chain as
$$p_j(n+1) = \sum_{i=1}^{N} \pi_{j|i}\, p_i(n).$$

20 Finite Markov Chains. Dynamic Model of Finite Markov Chains: the vector format.
In vector format, the Dynamic Model of a Finite Markov Chain can be represented as
$$p(n+1) = \Pi^{\top} p(n), \quad p(n) := (p_1(n), \ldots, p_N(n))^{\top}.$$
Iterating back implies
$$p(n+1) = \Pi^{\top} p(n) = (\Pi^{\top})^{n+1} p(0).$$
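The vector recursion translates directly into code. A short sketch, reusing the hypothetical matrix convention above (rows = current state $i$):

```python
import numpy as np

Pi = np.array([[0.2, 0.5, 0.3],
               [0.1, 0.6, 0.3],
               [0.4, 0.4, 0.2]])   # hypothetical pi_{j|i}

p = np.array([1.0, 0.0, 0.0])      # start in state 1 with probability 1

# p(n+1) = Pi^T p(n); after n iterations, p(n) = (Pi^T)^n p(0).
for n in range(50):
    p = Pi.T @ p

print(p)  # approaches the stationary distribution for an ergodic chain
```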

21 Finite Markov Chains. Ergodic Finite Markov Chains: definition.
Definition. A Markov chain is called ergodic if all its states are returnable.
The result below shows that homogeneous ergodic Markov chains possess an additional property: after a long time such chains "forget" the initial states from which they started.

22 Finite Markov Chains. Ergodic Theorem.
Theorem (the ergodic theorem). Let, for some state $j_0 \in X$ of a homogeneous Markov chain, some $n_0 > 0$ and $\delta \in (0,1)$,
$$(\Pi^{n_0})_{j_0|i} \ge \delta > 0 \quad \text{for all } i \in (1, \ldots, N),$$
i.e., after multiplying $\Pi$ by itself $n_0$ times, at least one column of the matrix $\Pi^{n_0}$ has all nonzero elements. Then for any initial state distribution $P\{x(t_0,\omega) = i\}$ and any $i, j \in (1, \ldots, N)$ there exists the limit
$$p_j^{*} := \lim_{n \to \infty} (\Pi^{n})_{j|i} > 0,$$
and this limit is reached at an exponential rate, namely,
$$\left| (\Pi^{n})_{j|i} - p_j^{*} \right| \le (1-\delta)^{\lfloor n/n_0 \rfloor} = e^{-\alpha \lfloor n/n_0 \rfloor}, \quad \alpha := |\ln(1-\delta)|.$$

23 Finite Markov Chains. Ergodic property: example.
Show that the finite Markov chain with the given $3 \times 3$ transition matrix $\Pi$ is ergodic. Indeed, already after two steps ($n_0 = 2$) the matrix power $\Pi^{1+n_0}$ contains the column $j = 2$ with strictly positive elements. [The numerical entries of $\Pi$, $\Pi^2$ and $\Pi^3$ did not survive the transcription.]
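Since the slide's numerical matrices are lost, the check can still be illustrated: the sketch below searches for the smallest $n_0$ such that $\Pi^{n_0}$ has a strictly positive column, using an assumed stand-in matrix:

```python
import numpy as np

def ergodic_n0(Pi, n_max=100):
    """Smallest n0 with a strictly positive column in Pi**n0 (the
    condition of the ergodic theorem), or None if none is found."""
    P = np.eye(Pi.shape[0])
    for n0 in range(1, n_max + 1):
        P = P @ Pi
        if np.any(np.all(P > 0, axis=0)):  # some column positive for all rows i
            return n0
    return None

# Stand-in for the slide's lost 3x3 matrix:
Pi = np.array([[0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0],
               [0.5, 0.25, 0.25]])
print(ergodic_n0(Pi))  # -> 2: the third column of Pi**2 is strictly positive
```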

24 Finite Markov Chains. Ergodicity coefficient.
Corollary. If for a homogeneous finite Markov chain with transition matrix $\Pi$ the ergodicity coefficient $k_{\mathrm{erg}}(n_0)$ is strictly positive, that is,
$$k_{\mathrm{erg}}(n_0) := 1 - \frac{1}{2} \max_{i,j=1,\ldots,N} \sum_{m=1}^{N} \left| (\Pi^{n_0})_{m|i} - (\Pi^{n_0})_{m|j} \right| > 0,$$
then this chain is ergodic. The following simple estimate holds:
$$k_{\mathrm{erg}}(n_0) \ge \max_{j=1,\ldots,N} \min_{i=1,\ldots,N} (\Pi^{n_0})_{j|i} =: \underline{k}_{\mathrm{erg}}(n_0).$$
Corollary. So, if $\underline{k}_{\mathrm{erg}}(n_0) > 0$, then the chain is ergodic.

25 Finite Markov Chains. Main Ergodic Property.
Corollary. For any $j \in (1, 2, \ldots, N)$ of an ergodic homogeneous finite Markov chain, the components $p_j^{*}$ of the stationary distribution satisfy the following ergodicity relations:
$$p_j^{*} = \sum_{i \in X} \pi_{j|i}\, p_i^{*}, \quad \sum_{i \in X} p_i^{*} = 1, \quad p_i^{*} > 0 \ (i = 1, 2, \ldots, N),$$
or, equivalently, in vector format
$$p^{*} = \Pi^{\top} p^{*}, \quad p^{*} := (p_1^{*}, \ldots, p_N^{*})^{\top}, \quad \Pi := \left\| \pi_{j|i} \right\|_{i,j=1,\ldots,N},$$
that is, the positive vector $p^{*}$ is the eigenvector of the matrix $\Pi^{\top}$ corresponding to its eigenvalue equal to 1.
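Numerically, $p^{*}$ can be obtained exactly this way: as the eigenvector of $\Pi^{\top}$ associated with the eigenvalue 1. A minimal sketch:

```python
import numpy as np

def stationary_distribution(Pi):
    """p* solving p* = Pi^T p*: the eigenvector of Pi^T for the
    eigenvalue 1, normalized to a probability vector."""
    w, V = np.linalg.eig(Pi.T)
    p = np.real(V[:, np.argmin(np.abs(w - 1.0))])
    return p / p.sum()

Pi = np.array([[0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0],
               [0.5, 0.25, 0.25]])
print(stationary_distribution(Pi))  # fixed point of p -> Pi^T p
```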

26 Controllable Markov Chains. Transition matrices for controllable finite Markov chain processes.
Let $\Pi^{(k)}(n) := \left\| \pi_{j|i,k}(n) \right\|_{i,j=1,\ldots,N}$ be the transition matrix with the elements
$$\pi_{j|i,k}(n) := P\{x(t_{n+1},\omega) = j \mid x(t_n,\omega) = i,\ a(t_n,\omega) = k\}, \quad k = 1, \ldots, K,$$
where the variable $a(t_n,\omega)$ is associated with a control action (decision making) from the given set of possible controls $(1, \ldots, K)$. Each control action $a(t_n,\omega) = k$ may be selected (realized) in state $x(t_n,\omega) = i$ with the probability
$$d_{k|i}(n) := P\{a(t_n,\omega) = k \mid x(t_n,\omega) = i\}$$
fulfilling the stochastic constraints
$$d_{k|i}(n) \ge 0, \quad \sum_{k=1}^{K} d_{k|i}(n) = 1 \quad \text{for all } i = 1, \ldots, N.$$

27 Controllable Markov Chains. What are control strategies (decision making) for finite Markov chain processes?
Definitions. A sequence $\{d(0), d(1), \ldots\}$ of stochastic matrices $d(n) := \left\| d_{k|i}(n) \right\|_{i=1,\ldots,N;\,k=1,\ldots,K}$ with elements satisfying the stochastic constraints is called a control strategy or a decision-making process. If $d(n) = d$ is a constant stochastic matrix, such a strategy is called stationary.

28 Controllable Markov Chains. Pure and mixed strategies.
Definition. If each row of the matrix $d$ contains one element equal to 1 and the others equal to zero, i.e., $d_{k|i} = \delta_{k,k_0(i)}$, where $\delta_{k,k_0}$ is the Kronecker symbol ($\delta_{k,k_0} := 1$ if $k = k_0$, $0$ if $k \ne k_0$), then the strategy is referred to as pure; if in at least one row this is not true, then the strategy is called mixed.
Example. For instance (the slide's numerical matrices did not survive the transcription), $d = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ is a pure strategy, while $d = \begin{pmatrix} 1/2 & 1/2 \\ 0 & 1 \end{pmatrix}$ is a mixed strategy.

29 Controllable Markov Chains. Structure of a controllable Markov Chain. [Figure: Structure of a controllable Markov Chain.]

30 Controllable Markov Chains. Transition matrix for controllable Markov chains.
Again, by the Bayes formula $P\{A\} = \sum_i P\{A \mid B_i\}\, P\{B_i\}$, we have
$$\pi_{j|i}(n) := P\{x(t_{n+1},\omega) = j \mid x(t_n,\omega) = i\} = \sum_{k=1}^{K} \underbrace{P\{x(t_{n+1}) = j \mid x(t_n) = i,\ a(t_n) = k\}}_{\pi_{j|i,k}(n)}\, \underbrace{P\{a(t_n) = k \mid x(t_n) = i\}}_{d_{k|i}(n)},$$
so
$$\pi_{j|i}(n) = \sum_{k=1}^{K} \pi_{j|i,k}(n)\, d_{k|i}(n).$$
For homogeneous finite Markov models under stationary strategies $d_{k|i}(n) = d_{k|i}$ one has
$$\pi_{j|i}(d) = \sum_{k=1}^{K} \pi_{j|i,k}\, d_{k|i}.$$

31 Controllable Markov Chains. Dynamics of state probabilities.
For a stationary strategy $d = \left\| d_{k|i} \right\|_{i=1,\ldots,N;\,k=1,\ldots,K}$ we have
$$p_j(n+1) := P\{x(t_{n+1},\omega) = j\} = \sum_{i=1}^{N} \pi_{j|i}(d)\, p_i(n) = \sum_{i=1}^{N} \left( \sum_{k=1}^{K} \pi_{j|i,k}\, d_{k|i} \right) p_i(n),$$
which represents the Dynamic Model of a Controllable Finite Markov Chain under a stationary strategy $d$. If for each $d$ the chain is ergodic, then $p_j(n) \to p_j^{*}$ as $n \to \infty$, satisfying
$$p_j^{*} = \sum_{i=1}^{N} \sum_{k=1}^{K} \pi_{j|i,k}\, d_{k|i}\, p_i^{*}.$$
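A compact way to form $\Pi(d)$ from the per-action matrices and a stationary strategy, sketched with assumed array shapes (actions along the leading axis):

```python
import numpy as np

def controlled_transition_matrix(Pi_k, d):
    """Pi(d)[i, j] = sum_k pi_{j|i,k} * d_{k|i}.

    Pi_k: (K, N, N) array, Pi_k[k, i, j] = pi_{j|i,k}
    d:    (K, N) array,    d[k, i]       = d_{k|i}
    """
    return np.einsum('kij,ki->ij', Pi_k, d)

# With Pi_d = controlled_transition_matrix(Pi_k, d), the state dynamics
# is p(n+1) = Pi_d.T @ p(n), exactly as in the uncontrolled case.
```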

32 Controllable Markov Chains. Convergence illustration. [Figure: Convergence to the stationary distribution.]

33 Controllable Markov Chains. Dynamics of state probabilities: the vector form.
In vector form, the Dynamic Model of a Controllable Finite Markov Chain (or decision-making process) under a stationary strategy $d$ looks as
$$p^{*} = \Pi^{\top}(d)\, p^{*}, \quad \Pi(d) = \left\| \sum_{k=1}^{K} \pi_{j|i,k}\, d_{k|i} \right\|_{i=1,\ldots,N;\,j=1,\ldots,N}.$$
Fact. So, the final distribution $p^{*}$ depends also on the strategy $d$, that is, $p^{*} = p^{*}(d)$, so that
$$p^{*}(d) = \Pi^{\top}(d)\, p^{*}(d).$$

34 Simplest Production Optimization Problem. Problem formulation (1).
PART 2: SIMPLEST PRODUCTION OPTIMIZATION PROBLEM.
Suppose that some company obtains, for the transition $x(t_n,\omega) = i,\ a(t_n,\omega) = k \ \to\ x(t_{n+1},\omega) = j$ from state $i$ to state $j$ applying the control $k$, the income
$$W_{j|i,k}, \quad i, j = 1, \ldots, N, \quad k = 1, \ldots, K.$$
Then the average income of this company in the stationary state is
$$J(d) := \sum_{i=1}^{N} \sum_{k=1}^{K} \sum_{j=1}^{N} W_{j|i,k}\, \pi_{j|i,k}\, d_{k|i}\, p_i(d),$$
where the components $p_i(d)$ satisfy the ergodicity condition
$$p_j(d) = \sum_{i=1}^{N} \sum_{k=1}^{K} \pi_{j|i,k}\, d_{k|i}\, p_i(d).$$

35 Simplest Production Optimization Problem. Problem formulation (2).
The rigorous mathematical problem formulation is as follows:
Problem.
$$J(d) = \sum_{i=1}^{N} \sum_{k=1}^{K} \sum_{j=1}^{N} W_{j|i,k}\, \pi_{j|i,k}\, d_{k|i}\, p_i(d) \ \to\ \max_{d \in D_{\mathrm{adm}}}$$
under the constraints
$$D_{\mathrm{adm}} := \left\{ d_{k|i} :\ p_j(d) = \sum_{i=1}^{N} \sum_{k=1}^{K} \pi_{j|i,k}\, d_{k|i}\, p_i(d),\ j = 1, \ldots, N;\quad d_{k|i} \ge 0,\ \sum_{k=1}^{K} d_{k|i} = 1,\ i = 1, \ldots, N \right\}.$$

36 Simplest Production Optimization Problem. Best-reply strategy.
Definition. The matrix $d^{\mathrm{br}}$ is called the best-reply strategy if
$$d^{\mathrm{br}}_{\beta|\alpha} = \begin{cases} 1 & \text{if } \beta = \arg\max_{k} \sum_{j=1}^{N} W_{j|\alpha,k}\, \pi_{j|\alpha,k} \\ 0 & \text{if not} \end{cases}$$
Indeed, the upper bound for $J(d)$ can be estimated as
$$J(d) = \sum_{i=1}^{N} \sum_{k=1}^{K} \sum_{j=1}^{N} W_{j|i,k}\, \pi_{j|i,k}\, d_{k|i}\, p_i(d) \ \le\ \sum_{i=1}^{N} \max_{k} \left( \sum_{j=1}^{N} W_{j|i,k}\, \pi_{j|i,k} \right) p_i(d),$$
which is reachable for $d = d^{\mathrm{br}}$. It is optimal if and only if
$$\max_{k} \sum_{j=1}^{N} W_{j|i,k}\, \pi_{j|i,k} = \max_{k} \sum_{j=1}^{N} W_{j|s,k}\, \pi_{j|s,k} \quad \forall i, s. \tag{1}$$
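The best-reply strategy is a per-state argmax and is straightforward to compute. A sketch under the same assumed array conventions as above:

```python
import numpy as np

def best_reply(W, Pi_k):
    """Pure best-reply strategy: in each state i, choose the action k
    maximizing sum_j W_{j|i,k} * pi_{j|i,k}.

    W, Pi_k: (K, N, N) arrays with entries W[k, i, j], Pi_k[k, i, j].
    Returns d of shape (K, N) with d[k, i] = 1 for the best k, else 0.
    """
    gain = (W * Pi_k).sum(axis=2)            # gain[k, i] = sum_j W * pi
    K, N = gain.shape
    d = np.zeros((K, N))
    d[gain.argmax(axis=0), np.arange(N)] = 1.0
    return d
```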

37 Simplest Production Optimization Problem. State and action spaces interpretation (1).
Example (state and action spaces interpretation). Let the state $x(t_n,\omega) = i$ be associated with the number of working units (staff places); the action $a(t_n,\omega) = k$ is related to the financial schedule (possible wage increase, decrease or no change): $k \in (-1, 0, +1)$; the incomes for these actions may be calculated as
$$W_{j|i,k} = \left[ v_0 - (v + \Delta v\, k) - v_1 \right] (j - i),$$
where $v_0$ is the price of the product produced by a single working unit with salary $v$, $\Delta v$ is its adjustment, and $v_1$ is the production cost supporting this process.

38 Simplest Production Optimization Problem. State and action spaces interpretation (2).
Example (continuation 1). For instance, for $N = 3$, $i \in (10, 20, 30)$ and $v_0 = 400$, $v_1 = 20$, $v = 80$, $\Delta v = 5$, the three income matrices $W_{j|i,k=-1}$, $W_{j|i,k=0}$, $W_{j|i,k=+1}$ follow from the formula above. [Their numerical entries did not survive the transcription.]
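Assuming the surviving values $v_0 = 400$, $v = 80$, $\Delta v = 5$, $v_1 = 20$ are indeed the slide's data, the lost income matrices can be regenerated from the formula of the previous slide:

```python
import numpy as np

v0, v, dv, v1 = 400.0, 80.0, 5.0, 20.0   # values that survived on the slide
states = np.array([10, 20, 30])          # i = (10, 20, 30), N = 3
actions = (-1, 0, 1)                     # wage decrease / no change / increase

diff = states[None, :] - states[:, None]              # diff[i, j] = j - i
W = np.array([(v0 - (v + dv * k) - v1) * diff for k in actions])
print(W[1])  # income matrix for k = 0: entries 300 * (j - i)
```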

39 Simplest Production Optimization Problem. State and action spaces interpretation (3).
Example (continuation 2). Let the transition matrices $\pi_{j|i,k}$ for $k = -1$, $k = 0$ and $k = +1$ be as given on the slide. [The numerical entries did not survive the transcription.]

40 Simplest Production Optimization Problem. State and action spaces interpretation (4).
Example (continuation 3). Then the matrix $\sum_{j=1}^{N} W_{j|i,k}\, \pi_{j|i,k}$, participating in the average income, can be computed, and the best-reply strategy (which here is non-optimal) selects $k = 0$ in every state $i$: no changes are recommended. [The numerical matrices did not survive the transcription.]

41 Optimization Problem with Additional Constraints. Problem formulation with additional constraints.
Problem.
$$J(d) = \sum_{i=1}^{N} \sum_{k=1}^{K} \sum_{j=1}^{N} W_{j|i,k}\, \pi_{j|i,k}\, d_{k|i}\, p_i(d) \ \to\ \max_{d \in D_{\mathrm{adm}}}$$
under the constraints
$$D_{\mathrm{adm}} := \left\{ d_{k|i} :\ p_j(d) = \sum_{i=1}^{N} \sum_{k=1}^{K} \pi_{j|i,k}\, d_{k|i}\, p_i(d),\ j = 1, \ldots, N;\ d_{k|i} \ge 0,\ \sum_{k=1}^{K} d_{k|i} = 1,\ i = 1, \ldots, N \right\}$$
and, additionally,
$$\sum_{i=1}^{N} \sum_{k=1}^{K} \left( \sum_{j=1}^{N} A^{(l)}_{j|i,k}\, \pi_{j|i,k} \right) d_{k|i}\, p_i(d) \le b_l, \quad l = 1, \ldots, L.$$
The additional constraints may be interpreted as some financial limitations.

42 Optimization Problem with Additional Constraints. What can we do in this complex situation?
Fact. The best-reply strategy in general is non-optimal: it may not satisfy condition (1) and the additional constraints. The functional
$$J(d) = \sum_{i=1}^{N} \sum_{k=1}^{K} \sum_{j=1}^{N} W_{j|i,k}\, \pi_{j|i,k}\, d_{k|i}\, p_i(d),$$
as well as the constraints
$$p_j(d) = \sum_{i=1}^{N} \sum_{k=1}^{K} \pi_{j|i,k}\, d_{k|i}\, p_i(d), \quad \sum_{i=1}^{N} \sum_{k=1}^{K} \left( \sum_{j=1}^{N} A^{(l)}_{j|i,k}\, \pi_{j|i,k} \right) d_{k|i}\, p_i(d) \le b_l,$$
are extremely nonlinear functions of $d$ (the products $d_{k|i}\, p_i(d)$, with $p(d)$ itself depending on $d$, make them nonlinear).
The question is: what can we do in this complex situation?

43 Optimization Problem with Additional Constraints. What can we do in this complex situation? Answer: c-variables!
Definition. Define the new variables
$$c_{ik} := d_{k|i}\, p_i(d).$$
Then the Production Optimization Problem can be expressed as a Linear Programming Problem, solvable by a standard Matlab toolbox:
$$J(c) = \sum_{i=1}^{N} \sum_{k=1}^{K} \underbrace{\left( \sum_{j=1}^{N} W_{j|i,k}\, \pi_{j|i,k} \right)}_{W^{\pi}_{ik}} c_{ik} \ \to\ \max_{c \in C_{\mathrm{adm}}}$$
$$C_{\mathrm{adm}} := \left\{ c_{ik} :\ \sum_{k=1}^{K} c_{jk} = \sum_{i=1}^{N} \sum_{k=1}^{K} \pi_{j|i,k}\, c_{ik},\ j = 1, \ldots, N;\ c_{ik} \ge 0;\ \sum_{i=1}^{N} \sum_{k=1}^{K} c_{ik} = 1;\ \sum_{i=1}^{N} \sum_{k=1}^{K} \left( \sum_{j=1}^{N} A^{(l)}_{j|i,k}\, \pi_{j|i,k} \right) c_{ik} \le b_l,\ l = 1, \ldots, L \right\}.$$
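A sketch of this LPP with SciPy's linprog (the slides mention a standard Matlab toolbox; this Python version is an assumed equivalent, not the course's own code). It also recuperates $p_i$ and $d_{k|i}$ from the optimal $c$, as the next slide explains:

```python
import numpy as np
from scipy.optimize import linprog

def solve_c_lpp(W, Pi_k, A=None, b=None):
    """c-variable LPP: maximize sum_{i,k} Wpi[i,k] * c[i,k] subject to
    balance, normalization, nonnegativity and optional extra constraints.
    W, Pi_k: (K, N, N) arrays; A: (L, K, N, N); b: (L,).
    Variables are flattened as c[i, k] -> x[i * K + k]."""
    K, N, _ = Pi_k.shape
    Wpi = (W * Pi_k).sum(axis=2).T.ravel()       # Wpi[i*K + k] = sum_j W * pi

    # Balance: sum_k c[j,k] = sum_{i,k} pi_{j|i,k} c[i,k], one row per j,
    # plus the normalization row sum_{i,k} c[i,k] = 1.
    A_eq = np.zeros((N + 1, N * K))
    for j in range(N):
        for i in range(N):
            for k in range(K):
                A_eq[j, i * K + k] = float(i == j) - Pi_k[k, i, j]
    A_eq[N, :] = 1.0
    b_eq = np.zeros(N + 1)
    b_eq[N] = 1.0

    A_ub = b_ub = None
    if A is not None:
        A_ub = np.stack([(Al * Pi_k).sum(axis=2).T.ravel() for Al in A])
        b_ub = np.asarray(b, dtype=float)

    res = linprog(-Wpi, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None))              # maximize via minimizing -J
    c = res.x.reshape(N, K)
    p = c.sum(axis=1)                            # recuperated p_i(d*), > 0 if ergodic
    d = (c / p[:, None]).T                       # recuperated d_{k|i}, shape (K, N)
    return c, p, d
```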

44 Optimization Problem with Additional Constraints. Important properties of c-variables.
Corollary. For the c-variables defined as $c_{ik} := d_{k|i}\, p_i(d)$ and found as the solution $c^{*}$ of the LPP above:
1) we can recuperate the state distribution $p_i(d^{*})$ as $p_i(d^{*}) = \sum_{k=1}^{K} c^{*}_{ik} > 0$ (by the ergodicity property);
2) the optimal control strategy (decision making) $d^{*}_{k|i}$ can be recuperated as
$$d^{*}_{k|i} = \frac{c^{*}_{ik}}{p_i(d^{*})} = \frac{c^{*}_{ik}}{\sum_{k'=1}^{K} c^{*}_{ik'}}.$$

45 Solution of the LPP. Numerical solution of the LPP with the data of the previous example.
The optimal $c^{*}_{ik}$, the state distribution $p_i(d^{*}) = (0.0003, \ldots)$ and the strategy $d^{*}_{k|i}$ were computed numerically. [Most of the numerical values did not survive the transcription.]

46 Continuous-time Controllable Markov Chains. Main definition.
Definition. A controllable continuous-time Markov chain is a 4-tuple CTMC = (S, A, K, Q), where S is the finite state space $\{s^{(1)}, \ldots, s^{(N)}\}$; A is the set of actions: for each $s \in S$, $A(s) \subseteq A$ is the non-empty set of admissible actions at state $s \in S$; $K = \{(s, a) \mid s \in S,\ a \in A(s)\}$ is the class of admissible state-action pairs; and $Q = \left\| q_{(j|i,k)} \right\|$ is the matrix of transition rates with the elements defined by
$$q_{(j|i,k)} \ge 0 \ \text{ if } i \ne j, \qquad q_{(i|i,k)} = -\sum_{j \ne i} q_{(j|i,k)} \le 0.$$

47 Continuous-time Controllable Markov Chains. Properties of the transition rates.
Fact. For each fixed $k$, the matrix of transition rates is assumed to be
conservative, i.e., from the definition above it follows that $\sum_{j=1}^{N} q_{(j|i,k)} = 0$, and
stable, which means that $q^{(i)} := \max_{a^{(k)} \in A(s^{(i)})} \left| q_{(i|i,k)} \right| < \infty$ for all $i$.
Example. [The slide's example rate matrix $q_{(j|i,k=1)}$ did not survive the transcription.]

48 Continuous-time Controllable Markov Chains. Transition probabilities.
Definition. Let $X_s := \{i \in X : P\{x(s,\omega) = i\} \ne 0\}$, $s \in T$. For $s \le t$ ($s, t \in T$) and $i \in X_s$, $k \in A$, $j \in X$, define the conditional probabilities
$$\pi_{j|i,k}(s, t) := P\{x(t,\omega) = j \mid x(s,\omega) = i,\ a(s) = k\},$$
which we call the transition probabilities of the given Markov chain: the conditional probability for a process $\{x(t,\omega)\}_{t \in T}$ to be in state $j$ at time $t$ under the condition that it was in state $i$ at time $s < t$ and at the same time the decision $a(s) = k$ was made. The transition probability to be in state $j$ at time $t$, given state $i$ at time $s < t$, under a strategy $d$ is then
$$\pi_{ij}(s, t \mid d) := \sum_{k=1}^{K} \pi_{j|i,k}(s, t)\, \underbrace{P\{a(s) = k \mid x(s,\omega) = i\}}_{d_{k|i}} = \sum_{k=1}^{K} \pi_{j|i,k}(s, t)\, d_{k|i}.$$

49 Continuous-time Controllable Markov Chains. Properties of transition probabilities.
The function $\pi_{ij}(s, t \mid d)$ for any $i \in X_s$, $j \in X$ and any $s \le t$ ($s, t \in T$) should satisfy the following four conditions:
1) $\pi_{ij}(s, t \mid d)$ is a conditional probability, and hence is nonnegative, that is, $\pi_{ij}(s, t \mid d) \ge 0$;
2) starting from any state $i \in X_s$, the Markov chain will necessarily occur in some state $j \in X_t$, i.e., $\sum_{j \in X_t} \pi_{ij}(s, t \mid d) = 1$;
3) if there are no transitions, the chain remains in its starting state with probability one, that is, $\pi_{ij}(s, s \mid d) = \delta_{ij}$ for any $i, j \in X_s$ and any $s \in T$;
4) the chain can occur in state $j \in X_t$ passing through any intermediate state $k \in X_u$ ($s \le u \le t$), i.e.,
$$\pi_{ij}(s, t \mid d) = \sum_{k \in X_u} \pi_{ik}(s, u \mid d)\, \pi_{kj}(u, t \mid d).$$
This relation is known as the Markov (or Chapman-Kolmogorov) equation.

50 Continuous-time Controllable Markov Chains. Properties of transition probabilities for homogeneous Markov chains.
Corollary. Since for homogeneous Markov chains the transition probabilities $\pi_{ij}(s, t \mid d)$ depend only on the difference $(t - s)$, below we use the notation
$$\pi_{ij}(t - s \mid d) := \pi_{ij}(s, t \mid d). \tag{2}$$
In this case the Markov equation becomes
$$\pi_{ij}(h_1 + h_2 \mid d) = \sum_{k \in X} \pi_{ik}(h_1 \mid d)\, \pi_{kj}(h_2 \mid d), \tag{3}$$
valid for any $h_1, h_2 \ge 0$.

51 Continuous-time Controllable Markov Chains. Distribution function of the time just before changing the current state.
Consider now the time $\tau$ (after the time $s$) just before changing the current state $i$, i.e., $\tau > s$. By the homogeneity property, the distribution function of the time $\tau_1$ (after the time $s_1 := s + u$, $x(s + u, \omega) = i$) is the same as that of $\tau$ (after the time $s$, $x(s,\omega) = i$), which leads to the identity
$$P\{\tau > v \mid x(s,\omega) = i\} = P\{\tau_1 > v \mid x(s_1,\omega) = i\} = P\{\tau > v + u \mid x(s + u,\omega) = i\} = P\{\tau > u + v \mid x(s,\omega) = i,\ \tau > u\},$$
since the event $\{x(s,\omega) = i,\ \tau > u\}$ includes as a subset the event $\{x(s + u,\omega) = i\}$.

52 Continuous-time Controllable Markov Chains. Lemma on the expectation time before changing a state.
Lemma. The time $\tau$ that the homogeneous Markov chain $\{x(t,\omega)\}_{t \in T}$ with a discrete phase space $X$ stays in the current state $x(s,\omega) = i$ before changing it has the exponential distribution
$$P\{\tau > v \mid x(s,\omega) = i\} = e^{-\lambda_i v}, \tag{4}$$
where $\lambda_i$ is a nonnegative constant whose inverse characterizes the average waiting time before the state $x(s,\omega) = i$ changes, namely,
$$\frac{1}{\lambda_i} = E\{\tau \mid x(s,\omega) = i\}, \quad \lambda_i = -q_{(i|i,k)} = \sum_{j \ne i} q_{(j|i,k)}. \tag{5}$$
The constant $\lambda_i$ is usually called the "exit density".
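A quick numerical illustration of (4)-(5): sampling exponential holding times with an assumed exit density and checking that the sample mean is close to $1/\lambda_i$:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
lam_i = 2.0                                # hypothetical exit density of state i
tau = rng.exponential(scale=1.0 / lam_i, size=100_000)
print(tau.mean())                          # ~ 1/lam_i = 0.5, as in (5)
```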

53 Continuous-time Controllable Markov Chains. Ideas of the proof (1).
Proof. Define the function $f_i(u) := P\{\tau > u \mid x(s,\omega) = i\}$. By the Bayes formula,
$$f_i(u + v) := P\{\tau > u + v \mid x(s,\omega) = i\} = P\{\tau > u + v \mid x(s,\omega) = i,\ \tau > u\}\, P\{\tau > u \mid x(s,\omega) = i\} = P\{\tau > u + v \mid x(s,\omega) = i,\ \tau > u\}\, f_i(u).$$
By the homogeneity property one has
$$f_i(u + v) = f_i(v)\, f_i(u),$$
which means that
$$\ln f_i(u + v) = \ln f_i(u) + \ln f_i(v), \quad f_i(0) = P\{\tau > 0 \mid x(s,\omega) = i\} = 1.$$

54 Continuous-time Controllable Markov Chains. Ideas of the proof (2).
Proof (continuation). Differentiating the logarithmic identity with respect to $u$ gives
$$\frac{f_i'(u + v)}{f_i(u + v)} = \frac{f_i'(u)}{f_i(u)},$$
which for $u = 0$ becomes
$$\frac{f_i'(v)}{f_i(v)} = \frac{f_i'(0)}{f_i(0)} = f_i'(0) =: -\lambda_i \ \Rightarrow\ f_i(v) = e^{-\lambda_i v}.$$
To prove (5) it suffices to notice that
$$E\{\tau \mid x(s,\omega) = i\} = \int_{0}^{\infty} t\, d\left[ -f_i(t) \right] = \left. -t\, e^{-\lambda_i t} \right|_{t=0}^{\infty} + \int_{0}^{\infty} e^{-\lambda_i t}\, dt = \int_{0}^{\infty} e^{-\lambda_i t}\, dt = \lambda_i^{-1}.$$
The lemma is proven.

55 Continuous-time Controllable Markov Chains. The Kolmogorov forward equations.
For homogeneous Markov chains, $\pi_{ij}(s, t \mid d) = \pi_{ij}(t - s \mid d)$, and for stationary strategies $P\{a(s) = k \mid x(s,\omega) = i\} = d_{k|i}$, the Markov equation yields (taking $s = 0$) the Kolmogorov forward equations
$$\frac{d}{dt}\pi_{ij}(t \mid d) = \sum_{l=1}^{N} \pi_{il}(t \mid d)\, q_{lj}(d), \quad q_{lj}(d) := \sum_{k=1}^{K} q_{(j|l,k)}\, d_{k|l},$$
which can be written as the matrix differential equation
$$\Pi'(t \mid d) = \Pi(t \mid d)\, Q(d), \quad \Pi(0) = I, \quad \Pi(t \mid d) = \left\| \pi_{ij}(t \mid d) \right\| \in \mathbb{R}^{N \times N}, \quad Q(d) = \left\| q_{lj}(d) \right\|.$$
This system is solved by
$$\Pi(t \mid d) = \Pi(0)\, e^{Q(d) t} = e^{Q(d) t} := \sum_{n=0}^{\infty} \frac{t^n Q^n(d)}{n!}. \tag{6}$$
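In practice one evaluates (6) with a matrix-exponential routine rather than the series. A sketch with an assumed 3-state generator:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical conservative, stable generator Q(d) for a 3-state chain:
# nonnegative off-diagonal rates, each row summing to zero.
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -1.0,  0.0],
              [ 0.5,  0.5, -1.0]])

Pi_t = expm(Q * 0.7)    # Pi(t|d) = exp(Q(d) t) solves Pi' = Pi Q, Pi(0) = I
assert np.allclose(Pi_t.sum(axis=1), 1.0)   # rows stay probability vectors
```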

56 Continuous-time Controllable Markov Chains. Stationary distribution.
In the stationary state, the probability transition matrix is $\Pi^{*}(d) = \lim_{t \to \infty} \Pi(t \mid d)$.
Definition. The vector $P^{*} \in \mathbb{R}^{N}$ with $\sum_{i=1}^{N} P_i^{*} = 1$ is called the stationary distribution vector if
$$\Pi^{*\top}(d)\, P^{*} = P^{*}.$$
Claim. This vector can be seen as the long-run proportion of time that the process spends in state $s^{(i)} \in S$.

57 Continuous-time Controllable Markov Chains. Additional linear constraint.
Theorem (Xianping Guo, Onesimo Hernandez-Lerma, 2009). The stationary distribution vector $P^{*}$ satisfies the linear equation
$$Q^{\top}(d)\, P^{*} = 0. \tag{7}$$
Fact. The Production Optimization Problem, described by a continuous-time controllable Markov chain in stationary states, is the same Linear Programming Problem (LPP) as for a discrete-time model but with the additional linear constraint (7).
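Equation (7) together with the normalization $\sum_i P_i^{*} = 1$ pins down $P^{*}$; a minimal sketch solving the stacked linear system:

```python
import numpy as np

def ctmc_stationary(Q):
    """P* with Q^T P* = 0 and sum_i P*_i = 1, via least squares on the
    stacked system (the normalization row makes the solution unique)."""
    N = Q.shape[0]
    A = np.vstack([Q.T, np.ones((1, N))])
    b = np.zeros(N + 1)
    b[-1] = 1.0
    P, *_ = np.linalg.lstsq(A, b, rcond=None)
    return P
```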

58 Conclusion. Which topics have we discussed today?
1-st Lecture Day: Basic Notions on Controllable Markov Chain Models, Decision Making and the Production Optimization Problem.
Markov Processes: classical definition (Markov), mathematical definition (Kolmogorov), the Markov property in a general format.
Finite Markov Chains: main definition, Homogeneous (Stationary) Markov Chains, the transition matrix, the Dynamic Model of Finite Markov Chains, ergodic Markov chains, the Ergodic Theorem, the ergodicity coefficient.
Controllable Markov Chains: transition matrices for controllable finite Markov chain processes, pure and mixed strategies.
Simplest Production Optimization Problem: c-variables, the Linear Programming Problem.
Continuous-time controllable Markov chains: distribution function of the time just before changing the current state, the transition rates, expectation time, the additional linear constraint and the LPP.

59 Conclusion. Next lecture.
Next Lecture Day: The Mean-Variance Customer Portfolio Problem: Bank Credit Policy Optimization.
Thank you for your attention! See you soon!

ECON2285: Mathematical Economics

ECON2285: Mathematical Economics ECON2285: Mathematical Economics Yulei Luo Economics, HKU September 17, 2018 Luo, Y. (Economics, HKU) ME September 17, 2018 1 / 46 Static Optimization and Extreme Values In this topic, we will study goal

More information

Lecture 1- The constrained optimization problem

Lecture 1- The constrained optimization problem Lecture 1- The constrained optimization problem The role of optimization in economic theory is important because we assume that individuals are rational. Why constrained optimization? the problem of scarcity.

More information

Stochastic Processes

Stochastic Processes Introduction and Techniques Lecture 4 in Financial Mathematics UiO-STK4510 Autumn 2015 Teacher: S. Ortiz-Latorre Stochastic Processes 1 Stochastic Processes De nition 1 Let (E; E) be a measurable space

More information

Stochastic Processes

Stochastic Processes Stochastic Processes A very simple introduction Péter Medvegyev 2009, January Medvegyev (CEU) Stochastic Processes 2009, January 1 / 54 Summary from measure theory De nition (X, A) is a measurable space

More information

Learning and Risk Aversion

Learning and Risk Aversion Learning and Risk Aversion Carlos Oyarzun Texas A&M University Rajiv Sarin Texas A&M University January 2007 Abstract Learning describes how behavior changes in response to experience. We consider how

More information

Nash Bargaining Equilibria for Controllable Markov Chains Games

Nash Bargaining Equilibria for Controllable Markov Chains Games Preprints of the 0th World Congress The International Federation of Automatic Control Toulouse, France, July 9-4, 07 Nash Bargaining Equilibria for Controllable Markov Chains Games Kristal K. Trejo Julio

More information

Introduction to Linear Algebra. Tyrone L. Vincent

Introduction to Linear Algebra. Tyrone L. Vincent Introduction to Linear Algebra Tyrone L. Vincent Engineering Division, Colorado School of Mines, Golden, CO E-mail address: tvincent@mines.edu URL: http://egweb.mines.edu/~tvincent Contents Chapter. Revew

More information

Statistics 992 Continuous-time Markov Chains Spring 2004

Statistics 992 Continuous-time Markov Chains Spring 2004 Summary Continuous-time finite-state-space Markov chains are stochastic processes that are widely used to model the process of nucleotide substitution. This chapter aims to present much of the mathematics

More information

Markov Chains and Stochastic Sampling

Markov Chains and Stochastic Sampling Part I Markov Chains and Stochastic Sampling 1 Markov Chains and Random Walks on Graphs 1.1 Structure of Finite Markov Chains We shall only consider Markov chains with a finite, but usually very large,

More information

Lecture 11: Introduction to Markov Chains. Copyright G. Caire (Sample Lectures) 321

Lecture 11: Introduction to Markov Chains. Copyright G. Caire (Sample Lectures) 321 Lecture 11: Introduction to Markov Chains Copyright G. Caire (Sample Lectures) 321 Discrete-time random processes A sequence of RVs indexed by a variable n 2 {0, 1, 2,...} forms a discretetime random process

More information

MATH 56A: STOCHASTIC PROCESSES CHAPTER 1

MATH 56A: STOCHASTIC PROCESSES CHAPTER 1 MATH 56A: STOCHASTIC PROCESSES CHAPTER. Finite Markov chains For the sake of completeness of these notes I decided to write a summary of the basic concepts of finite Markov chains. The topics in this chapter

More information

ECON0702: Mathematical Methods in Economics

ECON0702: Mathematical Methods in Economics ECON0702: Mathematical Methods in Economics Yulei Luo SEF of HKU January 14, 2009 Luo, Y. (SEF of HKU) MME January 14, 2009 1 / 44 Comparative Statics and The Concept of Derivative Comparative Statics

More information

Macroeconomics II Dynamic macroeconomics Class 1: Introduction and rst models

Macroeconomics II Dynamic macroeconomics Class 1: Introduction and rst models Macroeconomics II Dynamic macroeconomics Class 1: Introduction and rst models Prof. George McCandless UCEMA Spring 2008 1 Class 1: introduction and rst models What we will do today 1. Organization of course

More information

STOCHASTIC PROCESSES Basic notions

STOCHASTIC PROCESSES Basic notions J. Virtamo 38.3143 Queueing Theory / Stochastic processes 1 STOCHASTIC PROCESSES Basic notions Often the systems we consider evolve in time and we are interested in their dynamic behaviour, usually involving

More information

Stochastic solutions of nonlinear pde s: McKean versus superprocesses

Stochastic solutions of nonlinear pde s: McKean versus superprocesses Stochastic solutions of nonlinear pde s: McKean versus superprocesses R. Vilela Mendes CMAF - Complexo Interdisciplinar, Universidade de Lisboa (Av. Gama Pinto 2, 1649-3, Lisbon) Instituto de Plasmas e

More information

The Kuhn-Tucker Problem

The Kuhn-Tucker Problem Natalia Lazzati Mathematics for Economics (Part I) Note 8: Nonlinear Programming - The Kuhn-Tucker Problem Note 8 is based on de la Fuente (2000, Ch. 7) and Simon and Blume (1994, Ch. 18 and 19). The Kuhn-Tucker

More information

Oscillatory Mixed Di erential Systems

Oscillatory Mixed Di erential Systems Oscillatory Mixed Di erential Systems by José M. Ferreira Instituto Superior Técnico Department of Mathematics Av. Rovisco Pais 49- Lisboa, Portugal e-mail: jferr@math.ist.utl.pt Sra Pinelas Universidade

More information

2 Interval-valued Probability Measures

2 Interval-valued Probability Measures Interval-Valued Probability Measures K. David Jamison 1, Weldon A. Lodwick 2 1. Watson Wyatt & Company, 950 17th Street,Suite 1400, Denver, CO 80202, U.S.A 2. Department of Mathematics, Campus Box 170,

More information

Solving Extensive Form Games

Solving Extensive Form Games Chapter 8 Solving Extensive Form Games 8.1 The Extensive Form of a Game The extensive form of a game contains the following information: (1) the set of players (2) the order of moves (that is, who moves

More information

ECON0702: Mathematical Methods in Economics

ECON0702: Mathematical Methods in Economics ECON0702: Mathematical Methods in Economics Yulei Luo SEF of HKU January 12, 2009 Luo, Y. (SEF of HKU) MME January 12, 2009 1 / 35 Course Outline Economics: The study of the choices people (consumers,

More information

Stochastic integral. Introduction. Ito integral. References. Appendices Stochastic Calculus I. Geneviève Gauthier.

Stochastic integral. Introduction. Ito integral. References. Appendices Stochastic Calculus I. Geneviève Gauthier. Ito 8-646-8 Calculus I Geneviève Gauthier HEC Montréal Riemann Ito The Ito The theories of stochastic and stochastic di erential equations have initially been developed by Kiyosi Ito around 194 (one of

More information

4. Duality and Sensitivity

4. Duality and Sensitivity 4. Duality and Sensitivity For every instance of an LP, there is an associated LP known as the dual problem. The original problem is known as the primal problem. There are two de nitions of the dual pair

More information

Discrete-Time Market Models

Discrete-Time Market Models Discrete-Time Market Models 80-646-08 Stochastic Calculus I Geneviève Gauthier HEC Montréal I We will study in depth Section 2: The Finite Theory in the article Martingales, Stochastic Integrals and Continuous

More information

Answer Key for M. A. Economics Entrance Examination 2017 (Main version)

Answer Key for M. A. Economics Entrance Examination 2017 (Main version) Answer Key for M. A. Economics Entrance Examination 2017 (Main version) July 4, 2017 1. Person A lexicographically prefers good x to good y, i.e., when comparing two bundles of x and y, she strictly prefers

More information

SOME THEORY AND PRACTICE OF STATISTICS by Howard G. Tucker CHAPTER 2. UNIVARIATE DISTRIBUTIONS

SOME THEORY AND PRACTICE OF STATISTICS by Howard G. Tucker CHAPTER 2. UNIVARIATE DISTRIBUTIONS SOME THEORY AND PRACTICE OF STATISTICS by Howard G. Tucker CHAPTER. UNIVARIATE DISTRIBUTIONS. Random Variables and Distribution Functions. This chapter deals with the notion of random variable, the distribution

More information

Volume 30, Issue 3. Monotone comparative statics with separable objective functions. Christian Ewerhart University of Zurich

Volume 30, Issue 3. Monotone comparative statics with separable objective functions. Christian Ewerhart University of Zurich Volume 30, Issue 3 Monotone comparative statics with separable objective functions Christian Ewerhart University of Zurich Abstract The Milgrom-Shannon single crossing property is essential for monotone

More information

The Transition Probability Function P ij (t)

The Transition Probability Function P ij (t) The Transition Probability Function P ij (t) Consider a continuous time Markov chain {X(t), t 0}. We are interested in the probability that in t time units the process will be in state j, given that it

More information

Lecture Notes on Bargaining

Lecture Notes on Bargaining Lecture Notes on Bargaining Levent Koçkesen 1 Axiomatic Bargaining and Nash Solution 1.1 Preliminaries The axiomatic theory of bargaining originated in a fundamental paper by Nash (1950, Econometrica).

More information

Markov Chains CK eqns Classes Hitting times Rec./trans. Strong Markov Stat. distr. Reversibility * Markov Chains

Markov Chains CK eqns Classes Hitting times Rec./trans. Strong Markov Stat. distr. Reversibility * Markov Chains Markov Chains A random process X is a family {X t : t T } of random variables indexed by some set T. When T = {0, 1, 2,... } one speaks about a discrete-time process, for T = R or T = [0, ) one has a continuous-time

More information

Game Theory. Bargaining Theory. ordi Massó. International Doctorate in Economic Analysis (IDEA) Universitat Autònoma de Barcelona (UAB)

Game Theory. Bargaining Theory. ordi Massó. International Doctorate in Economic Analysis (IDEA) Universitat Autònoma de Barcelona (UAB) Game Theory Bargaining Theory J International Doctorate in Economic Analysis (IDEA) Universitat Autònoma de Barcelona (UAB) (International Game Theory: Doctorate Bargainingin Theory Economic Analysis (IDEA)

More information

Columbia University. Department of Economics Discussion Paper Series. The Knob of the Discord. Massimiliano Amarante Fabio Maccheroni

Columbia University. Department of Economics Discussion Paper Series. The Knob of the Discord. Massimiliano Amarante Fabio Maccheroni Columbia University Department of Economics Discussion Paper Series The Knob of the Discord Massimiliano Amarante Fabio Maccheroni Discussion Paper No.: 0405-14 Department of Economics Columbia University

More information

Finite-Horizon Statistics for Markov chains

Finite-Horizon Statistics for Markov chains Analyzing FSDT Markov chains Friday, September 30, 2011 2:03 PM Simulating FSDT Markov chains, as we have said is very straightforward, either by using probability transition matrix or stochastic update

More information

Stochastic process for macro

Stochastic process for macro Stochastic process for macro Tianxiao Zheng SAIF 1. Stochastic process The state of a system {X t } evolves probabilistically in time. The joint probability distribution is given by Pr(X t1, t 1 ; X t2,

More information

Advanced Economic Growth: Lecture 21: Stochastic Dynamic Programming and Applications

Advanced Economic Growth: Lecture 21: Stochastic Dynamic Programming and Applications Advanced Economic Growth: Lecture 21: Stochastic Dynamic Programming and Applications Daron Acemoglu MIT November 19, 2007 Daron Acemoglu (MIT) Advanced Growth Lecture 21 November 19, 2007 1 / 79 Stochastic

More information

Stochastic modelling of epidemic spread

Stochastic modelling of epidemic spread Stochastic modelling of epidemic spread Julien Arino Department of Mathematics University of Manitoba Winnipeg Julien Arino@umanitoba.ca 19 May 2012 1 Introduction 2 Stochastic processes 3 The SIS model

More information

Chapter 16 focused on decision making in the face of uncertainty about one future

Chapter 16 focused on decision making in the face of uncertainty about one future 9 C H A P T E R Markov Chains Chapter 6 focused on decision making in the face of uncertainty about one future event (learning the true state of nature). However, some decisions need to take into account

More information

Averaging Operators on the Unit Interval

Averaging Operators on the Unit Interval Averaging Operators on the Unit Interval Mai Gehrke Carol Walker Elbert Walker New Mexico State University Las Cruces, New Mexico Abstract In working with negations and t-norms, it is not uncommon to call

More information

Boolean Inner-Product Spaces and Boolean Matrices

Boolean Inner-Product Spaces and Boolean Matrices Boolean Inner-Product Spaces and Boolean Matrices Stan Gudder Department of Mathematics, University of Denver, Denver CO 80208 Frédéric Latrémolière Department of Mathematics, University of Denver, Denver

More information

Chapter -1: Di erential Calculus and Regular Surfaces

Chapter -1: Di erential Calculus and Regular Surfaces Chapter -1: Di erential Calculus and Regular Surfaces Peter Perry August 2008 Contents 1 Introduction 1 2 The Derivative as a Linear Map 2 3 The Big Theorems of Di erential Calculus 4 3.1 The Chain Rule............................

More information

N.G.Bean, D.A.Green and P.G.Taylor. University of Adelaide. Adelaide. Abstract. process of an MMPP/M/1 queue is not a MAP unless the queue is a

N.G.Bean, D.A.Green and P.G.Taylor. University of Adelaide. Adelaide. Abstract. process of an MMPP/M/1 queue is not a MAP unless the queue is a WHEN IS A MAP POISSON N.G.Bean, D.A.Green and P.G.Taylor Department of Applied Mathematics University of Adelaide Adelaide 55 Abstract In a recent paper, Olivier and Walrand (994) claimed that the departure

More information

Snell envelope and the pricing of American-style contingent claims

Snell envelope and the pricing of American-style contingent claims Snell envelope and the pricing of -style contingent claims 80-646-08 Stochastic calculus I Geneviève Gauthier HEC Montréal Notation The The riskless security Market model The following text draws heavily

More information

Advanced Microeconomics

Advanced Microeconomics Advanced Microeconomics Cooperative game theory Harald Wiese University of Leipzig Harald Wiese (University of Leipzig) Advanced Microeconomics 1 / 45 Part D. Bargaining theory and Pareto optimality 1

More information

Essential Mathematics for Economists

Essential Mathematics for Economists Ludwig von Auer (Universität Trier) Marco Raphael (Universität Trier) October 2014 1 / 343 1 Introductory Topics I: Algebra and Equations Some Basic Concepts and Rules How to Solve Simple Equations Equations

More information

THE MINIMAL OVERLAP COST SHARING RULE

THE MINIMAL OVERLAP COST SHARING RULE THE MINIMAL OVERLAP COST SHARING RULE M. J. ALBIZURI*, J. C. SANTOS Abstract. In this paper we introduce a new cost sharing rule-the minimal overlap cost sharing rule-which is associated with the minimal

More information

Lecture 2: Review of Prerequisites. Table of contents

Lecture 2: Review of Prerequisites. Table of contents Math 348 Fall 217 Lecture 2: Review of Prerequisites Disclaimer. As we have a textbook, this lecture note is for guidance and supplement only. It should not be relied on when preparing for exams. In this

More information

Course 495: Advanced Statistical Machine Learning/Pattern Recognition

Course 495: Advanced Statistical Machine Learning/Pattern Recognition Course 495: Advanced Statistical Machine Learning/Pattern Recognition Lecturer: Stefanos Zafeiriou Goal (Lectures): To present discrete and continuous valued probabilistic linear dynamical systems (HMMs

More information

Math 413/513 Chapter 6 (from Friedberg, Insel, & Spence)

Math 413/513 Chapter 6 (from Friedberg, Insel, & Spence) Math 413/513 Chapter 6 (from Friedberg, Insel, & Spence) David Glickenstein December 7, 2015 1 Inner product spaces In this chapter, we will only consider the elds R and C. De nition 1 Let V be a vector

More information

Chapter 4. Maximum Theorem, Implicit Function Theorem and Envelope Theorem

Chapter 4. Maximum Theorem, Implicit Function Theorem and Envelope Theorem Chapter 4. Maximum Theorem, Implicit Function Theorem and Envelope Theorem This chapter will cover three key theorems: the maximum theorem (or the theorem of maximum), the implicit function theorem, and

More information

Max-Min Problems in R n Matrix

Max-Min Problems in R n Matrix Max-Min Problems in R n Matrix 1 and the essian Prerequisite: Section 6., Orthogonal Diagonalization n this section, we study the problem of nding local maxima and minima for realvalued functions on R

More information

Lecture # 1 - Introduction

Lecture # 1 - Introduction Lecture # 1 - Introduction Mathematical vs. Nonmathematical Economics Mathematical Economics is an approach to economic analysis Purpose of any approach: derive a set of conclusions or theorems Di erences:

More information

Stochastic Processes Calcul stochastique. Geneviève Gauthier. HEC Montréal. Stochastic processes. Stochastic processes.

Stochastic Processes Calcul stochastique. Geneviève Gauthier. HEC Montréal. Stochastic processes. Stochastic processes. Processes 80-646-08 Calcul stochastique Geneviève Gauthier HEC Montréal Let (Ω, F) be a measurable space. A stochastic process X = fx t : t 2 T g is a family of random variables, all built on the same

More information

Notes on Time Series Modeling

Notes on Time Series Modeling Notes on Time Series Modeling Garey Ramey University of California, San Diego January 17 1 Stationary processes De nition A stochastic process is any set of random variables y t indexed by t T : fy t g

More information

Stochastic modelling of epidemic spread

Stochastic modelling of epidemic spread Stochastic modelling of epidemic spread Julien Arino Centre for Research on Inner City Health St Michael s Hospital Toronto On leave from Department of Mathematics University of Manitoba Julien Arino@umanitoba.ca

More information

Simultaneous Choice Models: The Sandwich Approach to Nonparametric Analysis

Simultaneous Choice Models: The Sandwich Approach to Nonparametric Analysis Simultaneous Choice Models: The Sandwich Approach to Nonparametric Analysis Natalia Lazzati y November 09, 2013 Abstract We study collective choice models from a revealed preference approach given limited

More information

Lecture 20: Reversible Processes and Queues

Lecture 20: Reversible Processes and Queues Lecture 20: Reversible Processes and Queues 1 Examples of reversible processes 11 Birth-death processes We define two non-negative sequences birth and death rates denoted by {λ n : n N 0 } and {µ n : n

More information

Math 443/543 Graph Theory Notes 5: Graphs as matrices, spectral graph theory, and PageRank

Math 443/543 Graph Theory Notes 5: Graphs as matrices, spectral graph theory, and PageRank Math 443/543 Graph Theory Notes 5: Graphs as matrices, spectral graph theory, and PageRank David Glickenstein November 3, 4 Representing graphs as matrices It will sometimes be useful to represent graphs

More information

Discrete Markov Chain. Theory and use

Discrete Markov Chain. Theory and use Discrete Markov Chain. Theory and use Andres Vallone PhD Student andres.vallone@predoc.uam.es 2016 Contents 1 Introduction 2 Concept and definition Examples Transitions Matrix Chains Classification 3 Empirical

More information

Lecture 6: Contraction mapping, inverse and implicit function theorems

Lecture 6: Contraction mapping, inverse and implicit function theorems Lecture 6: Contraction mapping, inverse and implicit function theorems 1 The contraction mapping theorem De nition 11 Let X be a metric space, with metric d If f : X! X and if there is a number 2 (0; 1)

More information

LIMITING PROBABILITY TRANSITION MATRIX OF A CONDENSED FIBONACCI TREE

LIMITING PROBABILITY TRANSITION MATRIX OF A CONDENSED FIBONACCI TREE International Journal of Applied Mathematics Volume 31 No. 18, 41-49 ISSN: 1311-178 (printed version); ISSN: 1314-86 (on-line version) doi: http://dx.doi.org/1.173/ijam.v31i.6 LIMITING PROBABILITY TRANSITION

More information

Stochastic Processes (Master degree in Engineering) Franco Flandoli

Stochastic Processes (Master degree in Engineering) Franco Flandoli Stochastic Processes (Master degree in Engineering) Franco Flandoli Contents Preface v Chapter. Preliminaries of Probability. Transformation of densities. About covariance matrices 3 3. Gaussian vectors

More information

Quantitative Model Checking (QMC) - SS12

Quantitative Model Checking (QMC) - SS12 Quantitative Model Checking (QMC) - SS12 Lecture 06 David Spieler Saarland University, Germany June 4, 2012 1 / 34 Deciding Bisimulations 2 / 34 Partition Refinement Algorithm Notation: A partition P over

More information

NBER WORKING PAPER SERIES A NOTE ON REGIME SWITCHING, MONETARY POLICY, AND MULTIPLE EQUILIBRIA. Jess Benhabib

NBER WORKING PAPER SERIES A NOTE ON REGIME SWITCHING, MONETARY POLICY, AND MULTIPLE EQUILIBRIA. Jess Benhabib NBER WORKING PAPER SERIES A NOTE ON REGIME SWITCHING, MONETARY POLICY, AND MULTIPLE EQUILIBRIA Jess Benhabib Working Paper 14770 http://www.nber.org/papers/w14770 NATIONAL BUREAU OF ECONOMIC RESEARCH 1050

More information

Irreducibility. Irreducible. every state can be reached from every other state For any i,j, exist an m 0, such that. Absorbing state: p jj =1

Irreducibility. Irreducible. every state can be reached from every other state For any i,j, exist an m 0, such that. Absorbing state: p jj =1 Irreducibility Irreducible every state can be reached from every other state For any i,j, exist an m 0, such that i,j are communicate, if the above condition is valid Irreducible: all states are communicate

More information

Chapter 2 SOME ANALYTICAL TOOLS USED IN THE THESIS

Chapter 2 SOME ANALYTICAL TOOLS USED IN THE THESIS Chapter 2 SOME ANALYTICAL TOOLS USED IN THE THESIS 63 2.1 Introduction In this chapter we describe the analytical tools used in this thesis. They are Markov Decision Processes(MDP), Markov Renewal process

More information

Widely applicable periodicity results for higher order di erence equations

Widely applicable periodicity results for higher order di erence equations Widely applicable periodicity results for higher order di erence equations István Gy½ori, László Horváth Department of Mathematics University of Pannonia 800 Veszprém, Egyetem u. 10., Hungary E-mail: gyori@almos.uni-pannon.hu

More information

University of Toronto

University of Toronto A Limit Result for the Prior Predictive by Michael Evans Department of Statistics University of Toronto and Gun Ho Jang Department of Statistics University of Toronto Technical Report No. 1004 April 15,

More information

Alvaro Rodrigues-Neto Research School of Economics, Australian National University. ANU Working Papers in Economics and Econometrics # 587

Alvaro Rodrigues-Neto Research School of Economics, Australian National University. ANU Working Papers in Economics and Econometrics # 587 Cycles of length two in monotonic models José Alvaro Rodrigues-Neto Research School of Economics, Australian National University ANU Working Papers in Economics and Econometrics # 587 October 20122 JEL:

More information

ECONOMET RICS P RELIM EXAM August 24, 2010 Department of Economics, Michigan State University

ECONOMET RICS P RELIM EXAM August 24, 2010 Department of Economics, Michigan State University ECONOMET RICS P RELIM EXAM August 24, 2010 Department of Economics, Michigan State University Instructions: Answer all four (4) questions. Be sure to show your work or provide su cient justi cation for

More information

LECTURE 12 UNIT ROOT, WEAK CONVERGENCE, FUNCTIONAL CLT

LECTURE 12 UNIT ROOT, WEAK CONVERGENCE, FUNCTIONAL CLT MARCH 29, 26 LECTURE 2 UNIT ROOT, WEAK CONVERGENCE, FUNCTIONAL CLT (Davidson (2), Chapter 4; Phillips Lectures on Unit Roots, Cointegration and Nonstationarity; White (999), Chapter 7) Unit root processes

More information

8 Periodic Linear Di erential Equations - Floquet Theory

8 Periodic Linear Di erential Equations - Floquet Theory 8 Periodic Linear Di erential Equations - Floquet Theory The general theory of time varying linear di erential equations _x(t) = A(t)x(t) is still amazingly incomplete. Only for certain classes of functions

More information

Developing an Algorithm for LP Preamble to Section 3 (Simplex Method)

Developing an Algorithm for LP Preamble to Section 3 (Simplex Method) Moving from BFS to BFS Developing an Algorithm for LP Preamble to Section (Simplex Method) We consider LP given in standard form and let x 0 be a BFS. Let B ; B ; :::; B m be the columns of A corresponding

More information

ECONOMETRICS FIELD EXAM Michigan State University May 9, 2008

ECONOMETRICS FIELD EXAM Michigan State University May 9, 2008 ECONOMETRICS FIELD EXAM Michigan State University May 9, 2008 Instructions: Answer all four (4) questions. Point totals for each question are given in parenthesis; there are 00 points possible. Within

More information

MA 8101 Stokastiske metoder i systemteori

MA 8101 Stokastiske metoder i systemteori MA 811 Stokastiske metoder i systemteori AUTUMN TRM 3 Suggested solution with some extra comments The exam had a list of useful formulae attached. This list has been added here as well. 1 Problem In this

More information

Treball final de grau GRAU DE MATEMÀTIQUES Facultat de Matemàtiques Universitat de Barcelona MARKOV CHAINS

Treball final de grau GRAU DE MATEMÀTIQUES Facultat de Matemàtiques Universitat de Barcelona MARKOV CHAINS Treball final de grau GRAU DE MATEMÀTIQUES Facultat de Matemàtiques Universitat de Barcelona MARKOV CHAINS Autor: Anna Areny Satorra Director: Dr. David Márquez Carreras Realitzat a: Departament de probabilitat,

More information

Time is discrete and indexed by t =0; 1;:::;T,whereT<1. An individual is interested in maximizing an objective function given by. tu(x t ;a t ); (0.

Time is discrete and indexed by t =0; 1;:::;T,whereT<1. An individual is interested in maximizing an objective function given by. tu(x t ;a t ); (0. Chapter 0 Discrete Time Dynamic Programming 0.1 The Finite Horizon Case Time is discrete and indexed by t =0; 1;:::;T,whereT

More information

Stochastic stability analysis for fault tolerant control systems with multiple failure processes. M. Mahmoud, J. Jiang and Y. M. ..., International Journal of Systems Science, number 1.

Economics Bulletin, 2012, Vol. 32, No. 1. Contents: 1. Introduction; 2. The preliminaries. In this paper we reconsider the problem of axiomatizing scoring rules. Early results on this problem are due to Smith (1973) and Young (1975). They characterized social welfare and social ...

EIGENVALUES AND EIGENVECTORS. 1. Motivation. 1.1. Diagonal matrices. Perhaps the simplest type of linear transformation is one whose matrix is diagonal (in some basis). Consider for example the matrices ...
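Numerically, "diagonal in some basis" means computing the eigendecomposition A = P D P^{-1}; a quick numpy check on an invented symmetric matrix:

    # Diagonalization A = P D P^{-1}: columns of P are eigenvectors,
    # D carries the eigenvalues on its diagonal.
    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    eigvals, P = np.linalg.eig(A)
    D = np.diag(eigvals)
    assert np.allclose(A, P @ D @ np.linalg.inv(P))
    print("eigenvalues:", eigvals)   # -> [3. 1.]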

SMSTC (2007/08) Probability, www.smstc.ac.uk. Contents: 12. Markov chains in continuous time; 12.1 Markov property and the Kolmogorov equations; 12.1.1 Finite state space ...
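For a continuous-time Markov chain on a finite state space with generator Q, the Kolmogorov equations P'(t) = P(t)Q are solved by the matrix exponential P(t) = exp(Qt). A minimal sketch, assuming SciPy; the two-state generator is an invented example:

    # Transition probabilities of a finite-state CTMC: P(t) = expm(Q t),
    # the solution of the Kolmogorov forward equation P'(t) = P(t) Q.
    import numpy as np
    from scipy.linalg import expm

    Q = np.array([[-1.0,  1.0],    # generator: rows sum to zero
                  [ 2.0, -2.0]])
    for t in (0.1, 1.0, 10.0):
        print(f"t={t}: P(t) =\n{expm(Q * t)}")
    # As t grows, every row approaches the stationary distribution (2/3, 1/3).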

Lecture 8: Basic convex analysis. 1. Convex sets. Both convex sets and functions have general importance in economic theory, not only in optimization. Given two points x, y ∈ R^n and λ ∈ [0, 1], the weighted average λx + (1 − λ)y is called a convex combination of x and y ...
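A tiny numeric illustration of the convex combination just defined, using the unit ball as my own choice of convex set: every weighted average λx + (1 − λ)y of two points in the ball stays in the ball.

    # Convex combinations lam*x + (1-lam)*y of points in a convex set
    # (here the unit ball) remain in the set.
    import numpy as np

    rng = np.random.default_rng(1)
    x, y = rng.normal(size=2), rng.normal(size=2)
    x = x / max(1, np.linalg.norm(x))   # project into the unit ball
    y = y / max(1, np.linalg.norm(y))
    for lam in np.linspace(0.0, 1.0, 5):
        z = lam * x + (1 - lam) * y
        assert np.linalg.norm(z) <= 1.0 + 1e-12
    print("all convex combinations stayed inside the unit ball")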

Spatial Competition and Collaboration Networks. Yasunori Okumura, Kanagawa University, May 2009. Abstract: In this paper, we discuss the formation of collaboration networks among firms that are located in a ...

Applied cooperative game theory: The gloves game. Harald Wiese (Chair of Microeconomics), University of Leipzig, April 2010. Overview part B: ...

Initial condition issues on iterative learning control for non-linear systems with time delay. Mingxuan Sun and Danwei Wang, International Journal of Systems Science, number 11.

1.3 Convergence of Regular Markov Chains (from Markov Chains and Random Walks on Graphs). Applying the same argument to A^T, which has the same λ_0 as A, yields the row sum bounds. Corollary: Let P ≥ 0 be the transition matrix of a regular Markov chain ...
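The convergence result can be watched directly: for a regular chain, the powers P^n approach a rank-one matrix whose identical rows are the stationary distribution π with πP = π. A minimal numpy sketch with an invented two-state chain:

    # For a regular Markov chain, P^n converges to a matrix with identical
    # rows equal to the stationary distribution pi (pi P = pi).
    import numpy as np

    P = np.array([[0.9, 0.1],
                  [0.4, 0.6]])
    print(np.linalg.matrix_power(P, 50))   # each row ~ [0.8, 0.2]

    # Cross-check: pi is the left eigenvector of P for eigenvalue 1.
    w, V = np.linalg.eig(P.T)
    pi = np.real(V[:, np.argmax(np.real(w))])
    pi = pi / pi.sum()
    print("stationary distribution:", pi)  # -> [0.8 0.2]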

An Introduction to Entropy and Subshifts of Finite Type. Abby Pekoske, Department of Mathematics, Oregon State University (pekoskea@math.oregonstate.edu), August 4, 2015. Abstract: This work gives an overview ...

Appendix for "Offshoring in a Ricardian World". This Appendix presents the proofs of Propositions 1-6 and the derivations of the results in Section IV. Proof of Proposition 1: We want to show that T_m L_m ...

Stochastic process: X, a series of random variables indexed by t. X = {X(t), t ≥ 0} is a continuous-time stochastic process; X = {X(t), t = 0, 1, ...} is a discrete-time stochastic process. X(t) is the state at time t, ...
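A discrete-time Markov chain, the simplest such process, is simulated by repeatedly drawing the next state from the row of the transition matrix indexed by the current state. A minimal sketch; the transition matrix and horizon are invented:

    # Simulate a path of a discrete-time Markov chain X = {X(t), t = 0, 1, ...}.
    import numpy as np

    P = np.array([[0.5, 0.5, 0.0],
                  [0.25, 0.5, 0.25],
                  [0.0, 0.5, 0.5]])
    rng = np.random.default_rng(42)

    state = 0
    path = [state]
    for _ in range(20):
        state = rng.choice(len(P), p=P[state])  # next state ~ row of P
        path.append(state)
    print(path)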

Banks, depositors and liquidity shocks: long term vs short term interest rates in a model of adverse selection. Geethanjali Selvaretnam. Abstract: This model takes into consideration the fact that depositors ...

Optimal Target Criteria for Stabilization Policy. Marc P. Giannoni (Columbia University) and Michael Woodford (Columbia University), February 7, ... Abstract: This paper considers a general class of nonlinear rational-expectations ...

A linear programming approach to constrained nonstationary infinite-horizon Markov decision processes. Ilbin Lee, Marina A. Epelman, H. Edwin Romeijn and Robert L. Smith. Technical Report 13-01, March 6, 2013, University ...
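For orientation only: in the classical unconstrained, stationary, discounted case, the optimal value function of a finite MDP already solves a linear program, minimize Σ_s v(s) subject to v(s) ≥ r(s, a) + γ Σ_{s'} P(s'|s, a) v(s') for every state-action pair. A minimal SciPy sketch of that standard formulation (rewards and transition kernel invented; the report above treats the harder constrained nonstationary case):

    # LP formulation of a discounted MDP: min sum(v) s.t.
    # v(s) >= r(s,a) + gamma * sum_s' P(s'|s,a) v(s') for all (s, a).
    import numpy as np
    from scipy.optimize import linprog

    gamma = 0.9
    r = np.array([[1.0, 0.0], [0.0, 2.0]])          # r[s, a]
    P = np.array([[[0.8, 0.2], [0.2, 0.8]],          # P[s, a, s']
                  [[0.5, 0.5], [0.1, 0.9]]])
    nS, nA = r.shape

    # Constraints -(I - gamma * P_a) v <= -r_a, stacked over actions a.
    A_ub = np.vstack([-(np.eye(nS) - gamma * P[:, a, :]) for a in range(nA)])
    b_ub = np.concatenate([-r[:, a] for a in range(nA)])
    res = linprog(c=np.ones(nS), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * nS)
    print("optimal value function:", res.x)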

4.2 Arbitrage Pricing Model (APM): R = µ + Bf. Empirical evidence indicates that the CAPM beta does not completely explain the cross section of expected asset returns. This suggests that additional factors may be required.
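The factor equation R = µ + Bf is easy to simulate once loadings are fixed; a minimal sketch with invented parameters and an added idiosyncratic noise term:

    # Simulate asset returns from a linear factor model R = mu + B f + eps.
    import numpy as np

    rng = np.random.default_rng(7)
    mu = np.array([0.02, 0.03, 0.01])        # expected returns
    B = np.array([[1.0, 0.2],                # loadings: 3 assets, 2 factors
                  [0.8, 0.5],
                  [0.3, 1.1]])
    f = rng.normal(0.0, 0.05, size=(10000, 2))    # zero-mean factor draws
    eps = rng.normal(0.0, 0.01, size=(10000, 3))  # idiosyncratic noise
    R = mu + f @ B.T + eps
    print("sample mean returns:", R.mean(axis=0))  # ~ mu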

Central Limit Theorem for Non-stationary Markov Chains. Magda Peligrad, University of Cincinnati, April 2011. Plan of talk: Markov processes with nonhomogeneous transition probabilities ...

Addendum to: International Trade, Technology, and the Skill Premium. Ariel Burstein (UCLA and NBER) and Jonathan Vogel (Columbia and NBER), April. Abstract: In this Addendum we set up a perfectly competitive version ...

MAS275 Probability Modelling, Chapter 1: Introduction and Markov chains (stochastic processes, continuous time, the Markov property). We study stochastic processes, which are families of random variables describing the evolution of a quantity with time. In some situations, we can treat ...

Strategy-proof allocation of indivisible goods. Lars-Gunnar Svensson, Department of Economics, Lund University, P.O. Box 7082, SE-220 07 Lund, Sweden (e-mail: lars-gunnar.svensson@nek.lu.se). Soc Choice Welfare (1999) 16: 557-567.

Chapter 5: Linear Programming (LP). General constrained optimization problem: minimize f(x) subject to x ∈ Ω, where Ω ⊆ R^n is called the constraint set or feasible set; any point x ∈ Ω is called a feasible point. We consider ...
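In computational practice the feasible set of such a problem is expressed through constraint functions handed to a solver. A minimal SciPy sketch; the quadratic objective and unit-disk constraint are my own toy example, not from the chapter:

    # Minimize f(x) over a feasible set given by an inequality constraint.
    import numpy as np
    from scipy.optimize import minimize

    f = lambda x: (x[0] - 2) ** 2 + (x[1] - 1) ** 2
    cons = [{"type": "ineq", "fun": lambda x: 1 - x[0] ** 2 - x[1] ** 2}]
    res = minimize(f, x0=np.zeros(2), constraints=cons)
    print("feasible minimizer:", res.x)  # (2, 1) pulled back onto the disk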

A fair rule in minimum cost spanning tree problems. Gustavo Bergantiños, Research Group in Economic Analysis, Universidade de Vigo, and Juan J. Vidal-Puga, Research Group in Economic Analysis, Universidade de ...

Negative Advertisement Strategies. Changrong Deng and Saša Pekeč, The Fuqua School of Business, Duke University. Preliminary draft, July (please do not circulate). Abstract: We develop a multi-period model of dynamically ...

Lecture Notes on Game Theory. Levent Koçkesen. Strategic Form Games: In this part we will analyze games in which the players choose their actions simultaneously (or without the knowledge of other players' ...
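For a finite strategic form game of this kind, pure-strategy Nash equilibria can be enumerated by brute-force best-response checks. A minimal sketch for a two-player bimatrix game; the Prisoner's Dilemma payoffs are a standard illustration, not from the notes:

    # Brute-force pure Nash equilibria of a 2-player strategic form game.
    import numpy as np

    A = np.array([[3, 0], [5, 1]])   # row player's payoffs
    B = np.array([[3, 5], [0, 1]])   # column player's payoffs

    equilibria = [
        (i, j)
        for i in range(A.shape[0])
        for j in range(A.shape[1])
        if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max()
    ]
    print("pure Nash equilibria:", equilibria)   # -> [(1, 1)]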

Review of Calculus Tools. © 2007 Jeffrey A. Miron. Outline: 1. Derivatives; 2. Optimization; 3. Partial Derivatives; 4. Optimization again; 5. Optimization subject to constraints. Derivatives: the basic tool we need ...