Optimal exit time from casino gambling: strategies of pre-committed and naive gamblers


Xue Dong He, Sang Hu, Jan Obłój, Xun Yu Zhou

December 22, 2015

Abstract

We employ the casino gambling model in He et al. (2015) to study the strategies of a pre-committed gambler, who commits her future selves to the strategy she sets up today, and of a naive gambler, who fails to do so and thus keeps changing plans at every time. We identify conditions under which the pre-committed gambler, asymptotically, adopts a stop-loss strategy, exhibits the behavior of the disposition effect, or does not exit. For a specific parameter setting in which the utility function is piece-wise power and the probability weighting functions are concave power, we derive the optimal strategy of the pre-committed gambler in closed form whenever it exists. Finally, we study the actual behavior of the naive gambler and highlight its marked differences from that of the pre-committed gambler.

Key words: casino gambling, cumulative prospect theory, optimal stopping, pre-committed gamblers, naive gamblers, optimal strategies

Department of Industrial Engineering and Operations Research, Columbia University, Room 35, Mudd Building, 500 W. 120th Street, New York, NY 10027, US. xh240@columbia.edu. The research of this author was supported by a start-up fund at Columbia University.

Risk Management Institute, National University of Singapore, 21 Heng Mui Keng Terrace, Singapore. rmihsa@nus.edu.sg.

Mathematical Institute, the Oxford-Man Institute of Quantitative Finance and St John's College, University of Oxford, Oxford, UK. Jan.Obloj@maths.ox.ac.uk. Part of this research was completed while this author was visiting CUHK in March 2013 and he is grateful for the support from the host. He also gratefully acknowledges support from the ERC Starting Grant RobustFinMath.

Mathematical Institute, The University of Oxford, Woodstock Road, OX2 6GG Oxford, UK, and Oxford-Man Institute of Quantitative Finance, The University of Oxford. zhouxy@maths.ox.ac.uk. The research of this author was supported by a start-up fund of the University of Oxford, and research grants from the Oxford-Man Institute of Quantitative Finance and the Oxford Nie Financial Big Data Lab.

1 Introduction

We analyse theoretically, and compare, the behavior of different types of gamblers in the casino gambling model formulated in our recent paper He et al. (2015). This model was in turn a generalization of the one initially proposed by Barberis (2012). At time 0, a gambler is offered a fair bet in a casino: win or lose $1 with equal probability. If the gambler takes the bet, the bet is played out and she either gains $1 or loses $1 at time 1. Then the gambler is offered the same bet and she decides whether to play. The same bet is offered and played out repeatedly until the first time the gambler declines the bet. At that time, the gambler exits the casino with all her prior gains and losses. The gambler needs to determine the best time to stop playing, with the objective of maximizing her preference value of the final wealth, where the preferences are represented by cumulative prospect theory (CPT, Tversky and Kahneman, 1992). As shown in Barberis (2012), when the gambler uses the same CPT preferences to revisit the gambling problem in the future, e.g., at time t > 0, the optimal gambling strategy solved at time 0 is typically no longer optimal. This time-inconsistency arises from the probability weighting in CPT. If the gambler is pre-committed, she commits her future selves to the strategy solved at time 0. If she fails to pre-commit, namely she keeps changing her strategy in the future, then the gambler is said to be naive. The actual strategy implemented by a naive gambler will often be very different from the strategy viewed as optimal at time 0, which is the one actually implemented by a pre-committed gambler. In our recent paper, He et al. (2015), we illustrate how and why the CPT objective can in general be strictly improved by considering path-dependent strategies or by tossing independent coins along the way. Moreover, we show that any path-dependent strategy is equivalent to a randomization of path-independent strategies.
These results justify the enlargement of the set of feasible strategies to include the randomized ones. (The observation that using randomized strategies may be strictly beneficial for the gambler was made independently in Henderson et al. (2015).) Then, in the setting of an infinite time horizon, we develop a systematic and analytical approach to finding a pre-committed gambler's optimal (randomized) strategy; this includes a change of decision variable from the stopping time to the

stopped distribution, the formulation of an infinite-dimensional program for the latter, and the application of the Skorokhod embedding theory to retrieve the optimal stopping time from the optimal stopped distribution. In summary, He et al. (2015) establishes a theoretical foundation for solving the casino model, although some of the technical components were left for further research. Most are tackled in this paper and its online appendix. In this paper, we continue to study the optimal strategies of a pre-committed gambler, as well as the actual strategies implemented by a naive gambler, for the case of an infinite time horizon. Barberis (2012) considered a five-period model and found these strategies by exhaustive search. Such an approach quickly becomes very involved and is clearly impossible to use when there is no end date. Instead, in this paper we derive these strategies analytically, which allows us to build a general theoretical comparison. As far as we know, our paper is the first to analytically study the strategies of a pre-committed gambler and compare them with those of a naive gambler in a general discrete-time casino gambling setting with CPT preferences. A similar approach was recently adopted by Henderson et al. (2015) in a continuous-time setting and used to complement and counter earlier findings in Ebert and Strack (2012). Both of these papers, see Section 5 for a discussion, rely on the crucial feature of their setting that allows the gambler to construct arbitrarily small random payoffs. This feature stands in clear distinction to our setting, and their results are not applicable here. We first investigate when the optimal strategy of the pre-committed gambler does not exist and find asymptotically optimal strategies in this case. We further divide these asymptotically optimal strategies into three types: loss-exit, gain-exit, and non-exit strategies.
In the original terms of Barberis (2012) (for a five-period model), a loss-exit strategy is to exit at a fixed loss level and not to exit at any gain level, a gain-exit strategy is the opposite, and a non-exit strategy is not to stop at any gain or loss level. These three types of strategies mirror, respectively, the stop-loss, disposition effect (Shefrin and Statman, 1985), and buy-and-hold strategies or phenomena in stock trading. In the present paper, as the time horizon is infinite and the optimal CPT value may not be attained, a loss-exit strategy is interpreted as exiting at a certain fixed loss level or at an asymptotically large gain level (so essentially it only stops at a loss; hence the name).

Similar interpretations apply to gain-exit and non-exit strategies. We find conditions under which the pre-committed gambler takes loss-exit, gain-exit, and non-exit strategies, respectively, and our theoretical results are consistent with the numerical results in Barberis (2012). We then assume a piece-wise power utility function and concave, power probability weighting functions. The concave, power weighting functions, although not typically used in CPT, are still able to model the overweighting of very small probabilities of large gains and losses. With this specification of the utility and weighting functions, we are able to derive the optimal strategy of the pre-committed gambler completely and analytically. Finally, we study the actual behavior of a naive gambler and compare it with that of the pre-committed gambler. We find the conditions under which the naive gambler stays in the casino forever. In particular, we prove that under some conditions, while the pre-committed gambler will stop for sure before the loss hits a certain level, the naive gambler continues to play with a positive probability at any loss level. The remainder of this paper is organized as follows: In Section 2, we recall the casino gambling model formulated in He et al. (2015). In Section 3, we study the existence of the optimal strategies and discuss loss-exit, gain-exit, and non-exit strategies. In Section 4, we solve the gambling problem completely when the utility function is piece-wise power and the probability weighting functions are power. We then discuss the strategies taken by a naive gambler in Section 5. Finally, Section 6 concludes. Some of the proofs are placed in Appendices at the end of the paper and the others in an online appendix.

2 The Model

We consider the casino gambling problem formulated in He et al. (2015). At time 0, a gambler is offered a fair bet, e.g., on a roulette wheel, in a casino: win or lose one dollar with equal probability.
If the gambler takes the bet, the bet outcome is played out at time 1. Then the gambler is offered the same bet and she decides whether to play. The same bet is offered and played out repeatedly until the first time the gambler declines the bet. At that time, the gambler leaves the casino with all her prior gains and losses. The gambler needs to decide the time τ to exit the casino.

Denote by S_t, t = 0, 1, ..., the cumulative gain or loss of the gambler in the casino at time t. Then {S_t : t ≥ 0} is a symmetric random walk. The gambler chooses the exit time τ to maximize her preference value of S_τ, where the preferences are modeled by CPT, with the reference point taken to be her initial wealth at the time she enters the casino. The gambler's preference value of S_τ is

V(S_τ) = Σ_{n=1}^∞ u_+(n) ( w_+(P(S_τ ≥ n)) − w_+(P(S_τ > n)) ) − Σ_{n=1}^∞ u_−(n) ( w_−(P(S_τ ≤ −n)) − w_−(P(S_τ < −n)) ),    (1)

where u_+(·) and u_−(·) are the utility function of gains and the disutility function of losses, respectively, and w_+(·) and w_−(·) are the probability weighting functions for gains and losses, respectively. Assume that u_±(·) are continuous, increasing, and concave, and satisfy u_±(0) = 0. In addition, w_±(·) are increasing mappings from [0, 1] onto [0, 1]. We adopt the convention that ∞ − ∞ = −∞, so that V(S_τ) := −∞ if the second sum of infinite series in (1) is +∞. Let T be the set of stopping times that can be chosen by the gambler at time 0. Then the gambler faces the following optimization problem at time 0:

max_{τ ∈ T} V(S_τ).    (2)

Following He et al. (2015), we allow the gambler to use the following randomized strategies: at each time t, the gambler can toss an independent (and possibly biased) coin to decide whether to stay in the casino: stay if the outcome is heads and leave otherwise. The probability that the coin turns up tails may depend on the time t and on the cumulative gain/loss S_t at that time, and this probability is decided by the gambler as part of the strategy. More precisely, the gambler can take any strategy in the following set:

A_C := { τ : τ = inf{t ≥ 0 : ξ_{t,S_t} = 0} for some {ξ_{t,x}}_{t ≥ 0, x ∈ Z} that are independent 0-1 random variables and are independent of {S_t}_{t ≥ 0} }.

We comment below why there is essentially no loss of generality in considering only such stopping

times. Finally, we require {S_{τ∧t}}, t = 0, 1, ..., to be uniformly integrable so as to, among other things, exclude doubling strategies; so

T = { τ ∈ A_C : {S_{τ∧t}} is uniformly integrable }.

In the binomial tree representation of {S_t}, each node of the tree stands for the current level of cumulative gain/loss (t, S_t) and the path leading to this node stands for the historical levels of cumulative gain/loss; see Figure 1.

Figure 1: Gain/loss process represented as a binomial tree, with nodes (0,0); (1,1), (1,−1); (2,2), (2,0), (2,−2). Each node is marked on its right by a pair (t, x), where t stands for the time and x for the amount of (cumulative) gain/loss at that time. For example, the node (2, −2) signifies a total loss, relative to the initial wealth, of $2 at time 2.

For instance, following the path starting from node (0,0), through node (1,1), and leading to node (2,0), the gambler wins $1 in the first bet, loses $1 in the second bet, and thus ends up with $0 cumulative gain/loss at time 2. At each node, the gambler can toss a biased coin to decide whether to stay in the casino. Therefore, the gambler's decision depends not only on the current level of cumulative gain/loss but also on the outcome of a random coin that is independent of the bet outcomes in the casino. In our previous paper, He et al. (2015), we offer an explanation of why randomized strategies are in general preferred by a CPT gambler. Moreover, it is shown there that, although path-dependent stopping times, both randomized and non-randomized, are not directly included in A_C, they are equivalent to certain stopping times in A_C in terms of the distribution of S_t at the stopping time and thus lead to the same CPT preference value. One of the main contributions of He et al. (2015)

is to establish that the set of the distributions of S_τ for τ ∈ T is

M_0(Z) = { probability measures μ on Z : Σ_{n ∈ Z} |n| μ({n}) < ∞, Σ_{n ∈ Z} n μ({n}) = 0 }.    (3)

Therefore, once the optimal stopped distribution is found in M_0(Z), the corresponding optimal stopping time can be recovered via the Skorokhod embedding. This result is the theoretical foundation for solving (2); see the discussion in He et al. (2015) and the solution in Section 4 of the present paper. Here, we make two comments on the set T of feasible exit times. First, any exit strategy without randomization and satisfying the uniform integrability condition belongs to T. In particular, strategies such as stopping when S_t hits either a lower barrier b or an upper barrier a are feasible. Secondly, randomized strategies in A_C are path-independent: the random coin tossed at each time t, and thus the decision of whether to stay, depends only on the level of cumulative gain/loss at that time. Note, however, that in general the gambler needs to toss a random coin at every time in order to implement a strategy in A_C. An alternative stopping strategy realizing the same optimal distribution of S_τ is provided in Appendix B: the gambler only needs to toss a coin occasionally, but she needs to keep track of the historical maximum of the cumulative gain and loss (hence it is path-dependent). This alternative strategy resembles the so-called trailing stop-loss strategy in stock trading. See the discussions in Example 2 and Appendix B. As shown in Barberis (2012), when the gambler uses the same CPT preferences to revisit the gambling problem in the future, e.g., at time t > 0, the optimal strategy of problem (2), which is solved at time 0, is no longer optimal. This arises from the probability weighting functions. If the gambler commits her future selves to the strategy solved at time 0, the optimal strategy of problem (2) is the one that is implemented by the gambler. In this case, the gambler is sophisticated and pre-committed.
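To make the preference value V(S_τ) and the feasible set M_0(Z) concrete, here is a small numerical sketch (ours, not from the paper): it evaluates the CPT value of an integer-valued stopped distribution, truncating the two infinite sums, and checks membership in M_0(Z).

```python
def cpt_value(pmf, u_plus, u_minus, w_plus, w_minus, n_max=1000):
    """CPT value (1) of a stopped gain/loss with probability mass function pmf
    (a dict mapping integers to probabilities); sums truncated at n_max."""
    def p_geq(n):  # P(S_tau >= n)
        return sum(p for k, p in pmf.items() if k >= n)
    def p_leq(n):  # P(S_tau <= n)
        return sum(p for k, p in pmf.items() if k <= n)
    gains = sum(u_plus(n) * (w_plus(p_geq(n)) - w_plus(p_geq(n + 1)))
                for n in range(1, n_max + 1))
    losses = sum(u_minus(n) * (w_minus(p_leq(-n)) - w_minus(p_leq(-n - 1)))
                 for n in range(1, n_max + 1))
    return gains - losses

def in_M0(pmf, tol=1e-12):
    """Membership in M_0(Z): total mass one and mean zero (the first moment
    of a finitely supported pmf is trivially finite)."""
    return (abs(sum(pmf.values()) - 1) < tol
            and abs(sum(n * p for n, p in pmf.items())) < tol)

# The stopped distribution of the exit-at-(+2-or--1) strategy is feasible:
dist = {2: 1/3, -1: 2/3}
assert in_M0(dist)
# Without utility curvature or weighting, the CPT value reduces to E[S_tau] = 0:
assert abs(cpt_value(dist, lambda x: x, lambda x: x, lambda p: p, lambda p: p)) < 1e-9
```

With genuinely concave utilities and weighting functions, the same call gives the value that problem (2) maximizes over feasible stopped distributions.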
If at each time the gambler fails to commit her future selves to the strategy solved at that time, she keeps changing her strategy in the future. In this case, the gambler is naive, and the actual strategy implemented by the gambler is different from the optimal strategy of problem (2). In the following, we first study the pre-committing strategy, i.e., the optimal strategy of the pre-committed gambler, in Sections 3 and 4. Then, we study the actual behavior of a naive gambler

in Section 5. The third type of gambler discussed by Barberis (2012) is the sophisticated gambler without pre-commitment, who knows her future selves will not follow today's strategy and thus adjusts today's strategy accordingly. In the five-period model considered by Barberis (2012), the optimal strategy of such a gambler (also called an equilibrium strategy in the economics literature) is defined through a backward induction: starting from the end of the time horizon, when the only choice of the gambler is to leave the casino, at the beginning of each period the gambler decides her strategy knowing what her action will be in the following periods. In our model, such a definition is impossible because the end of the infinite time horizon does not exist. Therefore, we opt not to discuss this type of gambler in our model. The following usual notations will be used throughout the paper: for any real number x, ⌊x⌋ stands for the floor of x, i.e., the largest integer dominated by x, and ⌈x⌉ stands for the ceiling of x, i.e., the smallest integer dominating x. For real numbers x and y, x ∧ y := min(x, y).

3 Stop Loss, Disposition Effect, and Non-Exit

In this section, we study various behaviors of pre-committed gamblers via a class of simple two-level strategies. For two integers b < 0 < a, we denote the first exit time from [b, a] by τ_{a,b} := inf{t ≥ 0 : S_t = a or S_t = b}.

Theorem 1 Let V* be the value of the optimization problem (2).

(i) Suppose there exists s > 0 such that

liminf_{x → +∞} u_+(sx) w_+(1/x) = +∞.    (4)

Then V* = +∞ and there exists b < 0 such that lim_{a → +∞} V(S_{τ_{a,b}}) = +∞.

(ii) Suppose there exists ε > 0 such that

limsup_{x → +∞} u_−(x^{1+ε}) w_−(1/x) = 0.    (5)

Then V* = sup_{x ≥ 0} u_+(x). Moreover, the optimal solution to problem (2) does not exist, and V(S_{τ_{a,b}}) with a = ⌈k^{ε/(1+ε)}⌉ and b = −k converges to V* as k goes to infinity.

(iii) Suppose lim_{x → +∞} u_+(x) = +∞ and there exists p ∈ (0, 1) such that

limsup_{x → +∞} u_−( (p/(1−p)) x ) / u_+(x) < w_+(p) / w_−(1−p).    (6)

Then V* = +∞ and there exists c > 0 such that lim_{k → +∞} V(S_{τ_{a,b}}) = +∞, where a = ⌈ck⌉ and b = −k.

Theorem 1 provides three cases in which the optimal solution to problem (2) does not exist. In the first case, (i), the optimal value of the gambling problem (2) is infinite. The strategy τ_{a,b} the gambler takes, as a → +∞, is of the stop-loss-and-let-profit-run type: to exit once reaching a fixed loss level b and not to stop at any gain. It is essentially a loss-exit strategy, in Barberis' (2012) term. Such a strategy caps the loss and allows infinite gain, producing a highly skewed distribution which is favored due to probability weighting. Indeed, consider the following lottery: winning sx dollars with probability 1/x and winning nothing otherwise, where x is a sufficiently large number. The expected payoff of the lottery is s and the CPT value of the lottery is u_+(sx) w_+(1/x). Therefore, condition (4) implies that the gambler has strong preferences for this type of highly skewed lottery, so that with limited wealth s she can achieve an infinite CPT value by purchasing these lotteries. In case (ii), the optimal value is finite or infinite depending on whether or not u_+ is uniformly bounded; either way, however, this value is not attained by any stopping strategy. The gambler likes the following strategy: leave the casino when losing more than k dollars or when winning more than k^{ε/(1+ε)} dollars, for sufficiently large k. Note that k^{ε/(1+ε)} is much smaller than k when ε is small. For instance, when ε = 0.25 and k = 100000, the strategy taken by the gambler is

to stop gambling when winning $10 or when losing $100000. This type of strategy exhibits the so-called disposition effect, namely, to sell the winners too soon and hold the losers too long (Shefrin and Statman, 1985). To understand the preferences underlying this sort of behavior, consider the following random loss: losing x^{1+ε} dollars with probability 1/x and losing nothing otherwise, where x is sufficiently large. The expected loss is as large as x^ε. However, condition (5) implies that the CPT value of this random loss, u_−(x^{1+ε}) w_−(1/x), is nearly zero. Hence, the disposition effect is a result of the gambler not being willing to pay anything to hedge against the loss, even though the expected loss is large. It corresponds to an essentially gain-exit strategy. In case (iii) of Theorem 1, condition (6) is related to the large-loss aversion degree (LLAD), defined as lim_{x → +∞} u_−(x)/u_+(x), in He and Zhou (2011). Indeed, by setting p = 1/2, condition (6) is implied by the condition that LLAD is strictly less than w_+(1/2)/w_−(1/2), i.e., that LLAD is sufficiently low. In this case, condition (6) implies that probability weighting on large gains dominates a combination of weighting on large losses and loss aversion. The corresponding strategy taken by the gambler is to exit when either the gain reaches a sufficiently high level (⌈ck⌉) or the loss reaches another sufficiently high level (k), and these two levels are of the same magnitude. Therefore, we can consider it to be an essentially non-exit strategy. We can make the above discussion precise mathematically. For any stopping time τ ∈ T, define

R(τ) := E[S_τ | S_τ ≥ 1] / E[−S_τ | S_τ ≤ −1]

as the conditional gain-loss ratio. For the stopping time τ_{a,b}, we have R(τ_{a,b}) = a/(−b). Thus, Theorem 1-(i) corresponds to an arbitrarily large conditional gain-loss ratio, while Theorem 1-(ii) corresponds to an arbitrarily small conditional gain-loss ratio.
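For power utilities and weighting functions, both conditions can be checked by direct computation. The sketch below is our illustration with made-up parameter values: it evaluates the skewed-lottery value u_+(sx)w_+(1/x) behind condition (4), the skewed-loss value u_−(x^{1+ε})w_−(1/x) behind condition (5), and the gain-exit threshold ⌈k^{ε/(1+ε)}⌉ of case (ii).

```python
import math

# Power specification: u_+(x) = x**a_p, u_-(x) = lam * x**a_m, w_±(p) = p**d_±.
# Condition (4): u_+(s*x) * w_+(1/x) = s**a_p * x**(a_p - d_p) -> infinity iff a_p > d_p.
# Condition (5): u_-(x**(1+eps)) * w_-(1/x) = lam * x**(a_m*(1+eps) - d_m) -> 0
#                for some eps > 0 iff a_m < d_m.
def lottery_value(x, s=1.0, a_p=0.9, d_p=0.6):
    return (s * x) ** a_p * (1 / x) ** d_p

def loss_value(x, eps=0.25, lam=2.25, a_m=0.5, d_m=0.7):
    return lam * (x ** (1 + eps)) ** a_m * (1 / x) ** d_m

# Gain-exit threshold of case (ii): exit at a gain of ceil(k**(eps/(1+eps)))
# or at a loss of k.  The round(..., 9) guards against floating-point error
# before taking the ceiling; eps = 0.25 and k = 100000 give a gain target of $10.
gain_target = math.ceil(round(100000 ** (0.25 / 1.25), 9))
```

With these (illustrative) parameters, `lottery_value` grows without bound in x, while `loss_value` vanishes, mirroring the loss-exit and gain-exit cases respectively.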
Finally, for τ_{a,b} with a = ⌈ck⌉, b = −k, and k as in Theorem 1-(iii), we can see that R(τ_{a,b}) is approximately the positive constant c while the conditional expected gain E[S_τ | S_τ ≥ 1] is arbitrarily large. Next, we consider the following specification of the utility function and probability weighting

functions introduced by Tversky and Kahneman (1992):

u_+(x) = x^{α_+}, u_−(x) = λ x^{α_−},    (7)
w_±(p) = p^{δ_±} / ( p^{δ_±} + (1−p)^{δ_±} )^{1/δ_±},    (8)

where α_+, α_− ∈ (0, 1] and δ_+, δ_− ∈ (0, 1] are such that w_±(·) are increasing. With this specification, the results of Theorem 1 are summarized in Table 1.

Table 1: Existence of optimal solutions. The utility functions are given as in (7); the weighting functions are given as in (8), as in (9), or as power functions w_±(p) = p^{δ_±}.

Parameter Values                                                          | Exist. Opt. Solution | Strategy
α_+ > δ_+                                                                 | No                   | loss-exit
α_− < δ_−                                                                 | No                   | gain-exit
α_+ > α_−                                                                 | No                   | non-exit
α_+ = α_−, λ < sup_{0<p<1} [w_+(p)/p^{α_+}] / [w_−(1−p)/(1−p)^{α_+}]      | No                   | non-exit

These results also hold in the case of power probability weighting functions w_±(p) = p^{δ_±} and in the case of the probability weighting functions proposed by Tversky and Fox (1995):

w_±(p) = a_± p^{δ_±} / ( a_± p^{δ_±} + (1−p)^{δ_±} )    (9)

with a_± ∈ (0, 1]. Barberis (2012) assumes (7)-(8) with α_+ = α_− = α and δ_+ = δ_− = δ in his five-period gambling model. By enumerating all possible path-independent strategies, Barberis (2012) finds the best strategy of the pre-committed gambler and distinguishes two types of strategies: loss-exit strategies (to leave early if the gambler is losing but to stay longer if she is winning) and gain-exit strategies (to leave early if the gambler is winning but to stay longer if she is losing). Note that these two types of strategies are similar to ours. Barberis (2012) finds that with some parameter specifications the best strategy is a loss-exit one, and with others the best strategy is a gain-exit one; see Barberis (2012, Figure 3). In our model, we show theoretically that the gambler takes loss-exit strategies when α > δ and gain-exit strategies when α < δ; see Table 1. These theoretical results coincide with the numerical results in Barberis (2012, Figure 3).
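The parameter conditions in Table 1 are mechanical to check. Below is a small helper, a sketch of ours (the function name and the grid approximation of the supremum are our own devices, not from the paper), that reports which non-existence case, if any, a parameter choice falls into.

```python
def table1_strategy(a_p, a_m, d_p, d_m, lam, w_p, w_m, grid=2000):
    """Classify a parameter choice according to Table 1.

    a_p, a_m, d_p, d_m, lam: alpha_+, alpha_-, delta_+, delta_-, lambda.
    w_p, w_m: the weighting functions w_+ and w_-.
    Returns the asymptotic strategy type when Table 1 rules out an optimal
    solution, and None when none of the listed conditions applies."""
    if a_p > d_p:
        return "loss-exit"
    if a_m < d_m:
        return "gain-exit"
    if a_p > a_m:
        return "non-exit"
    if a_p == a_m:
        # sup over 0 < p < 1 of [w_+(p)/p^a_p] / [w_-(1-p)/(1-p)^a_p],
        # approximated (from below) on a grid
        sup_ratio = max((w_p(p) / p ** a_p) / (w_m(1 - p) / (1 - p) ** a_p)
                        for i in range(1, grid) for p in [i / grid])
        if lam < sup_ratio:
            return "non-exit"
    return None

w = lambda p: p ** 0.65   # concave power weighting, illustrative
print(table1_strategy(0.9, 0.9, 0.65, 0.65, 2.25, w, w))   # loss-exit
```

Since the grid only approximates the supremum from below, the last test can fail to flag a non-exit case near the boundary; it never produces a false positive.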

For the weighting functions proposed by Prelec (1998),

w_±(p) = e^{−a_±(−ln p)^{δ_±}},    (10)

with a_± > 0 and δ_± ∈ (0, 1], condition (4) is satisfied for any δ_+ < 1, or for δ_+ = 1 and α_+ > a_+. In this case, the gambler takes an asymptotical loss-exit strategy. Condition (5), however, is not satisfied for any δ_− < 1.

4 Pre-committed Gamblers with Power Utility and Weighting Functions

In this section, we solve problem (2) for a pre-committed gambler when the utility function is piece-wise power and the probability weighting functions are power functions, i.e., when

u_+(x) = x^{α_+}, u_−(x) = λ x^{α_−}, w_+(p) = p^{δ_+}, w_−(p) = p^{δ_−}    (11)

with α_± ∈ (0, 1], δ_± ∈ (0, 1], and λ > 0. The parameters α_+ and α_− measure the diminishing sensitivity of the utility of gains and losses, respectively; the smaller α_+ and α_− are, the more risk averse regarding gains and the more risk seeking regarding losses the gambler is, respectively. The parameter λ is related to loss aversion, a phenomenon whereby individuals are more sensitive to losses than to gains; see for instance Kahneman et al. (1990). Various definitions of the loss aversion degree, a measure of the extent to which individuals are averse to losses, have been proposed in the literature, including the definition u_−(x)/u_+(x), x > 0, by Kahneman and Tversky (1979), u_−(1)/u_+(1) by Tversky and Kahneman (1992), u′_−(x)/u′_+(x), x > 0, by Wakker and Tversky (1993), lim_{x↓0} u′_−(x)/u′_+(x) by Köbberling and Wakker (2005), and lim_{x → +∞} u_−(x)/u_+(x) by He and Zhou (2011). Save for the definition by Tversky and Kahneman (1992), the parameter λ in (11) does not coincide with any of the other definitions unless α_+ = α_−. In addition, the definition of the loss aversion degree by Tversky and Kahneman (1992) depends on the choice of the unit of monetary payoffs; e.g., using dollars or euros as the unit leads to different loss aversion degrees by

this definition. Therefore, λ, although related to loss aversion, can hardly be regarded as the loss aversion degree. However, with α_+ and α_− being fixed, the larger λ is, the more loss averse and hence the more overall risk averse the gambler is. Note that a typical probability weighting function is inverse-S-shaped, i.e., concave in its low end and convex in its high end. The power probability weighting functions in (11), however, are concave. We consider power instead of inverse-S-shaped probability weighting functions for two reasons. First, even with power probability weighting functions, the casino gambling problem is extremely involved to solve. With a general inverse-S-shaped probability weighting function, tractable solutions seem hardly possible. Secondly, the two probability weighting functions in CPT are used to model individuals' tendency to overweight the tails of payoff distributions. For a mixed gamble, i.e., a gamble with both possible gains and possible losses, the tails refer to significant gains and losses of small probability, so this tendency is captured by the concave parts in the low ends of the two probability weighting functions; see the CPT preference value (1). In our gambling problem, the cumulative gain or loss represents a mixed gamble. As a result, the two power probability weighting functions are able to capture the tendency to overweight the tails of payoff distributions. We can see that the smaller δ_+ and δ_− are, the more the gambler overweights large gains and large losses of small probability, respectively. In the remainder of this section, we always assume (11) for the utility and probability weighting functions. Because of the results in Table 1, we also assume α_+ ≤ δ_+, α_− ≥ δ_−, and α_+ ≤ α_− throughout this section.
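Under this piece-wise power specification, the CPT value of a stopped distribution can be computed directly from its tail probabilities, with x_n = P(S_τ ≥ n) and y_n = P(S_τ ≤ −n). The sketch below is our illustration (function name and truncation are ours); it evaluates exactly the kind of objective that the infinite-dimensional program of the next subsection maximizes.

```python
def U_power(x, y, a_p, a_m, d_p, d_m, lam):
    """CPT value of a stopped distribution under the power specification:
    u_+(x) = x**a_p, u_-(x) = lam * x**a_m, w_+(p) = p**d_p, w_-(p) = p**d_m.

    x[n-1] = P(S_tau >= n) and y[n-1] = P(S_tau <= -n), given here as finite
    (truncated) nonincreasing sequences."""
    gains = sum(((n + 1) ** a_p - n ** a_p) * xn ** d_p
                for n, xn in enumerate(x))
    losses = lam * sum(((n + 1) ** a_m - n ** a_m) * yn ** d_m
                       for n, yn in enumerate(y))
    return gains - losses

# Exit at +2 or -1: x = (1/3, 1/3), y = (2/3,).  With linear utility and
# weighting (all parameters 1) the value reduces to E[S_tau] = 0.
assert abs(U_power([1/3, 1/3], [2/3], 1, 1, 1, 1, 1)) < 1e-9
```

Whether the gambler plays at all is then the question of whether some feasible pair of tail sequences makes this value positive.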

4.1 Infinite-Dimensional Programming

As shown in He et al. (2015), problem (2) is equivalent to the following infinite-dimensional optimization problem:

max_{x,y} U(x, y)
subject to 1 ≥ x_1 ≥ x_2 ≥ ... ≥ x_n ≥ ... ≥ 0,
1 ≥ y_1 ≥ y_2 ≥ ... ≥ y_n ≥ ... ≥ 0,    (12)
x_1 + y_1 ≤ 1, Σ_{n=1}^∞ x_n = Σ_{n=1}^∞ y_n,

where we denote x := (x_1, x_2, ...), y := (y_1, y_2, ...), and the objective function is

U(x, y) := Σ_{n=1}^∞ (u_+(n) − u_+(n−1)) w_+(x_n) − Σ_{n=1}^∞ (u_−(n) − u_−(n−1)) w_−(y_n),

with U(x, y) := −∞ if the second sum of infinite series is infinite. Here, x and y stand for the decumulative distribution of the gain and the cumulative distribution of the loss, respectively, of S_τ for some stopping time τ; i.e., x_n = P(S_τ ≥ n) and y_n = P(S_τ ≤ −n), n ≥ 1. Once we solve (12) to obtain the distribution of S_τ, we derive the optimal stopping time by Skorokhod embedding; see He et al. (2015) for details. Since S_0 = 0, we can see that the exit time τ = 0, which means not to gamble at all, corresponds to x = y = 0, with objective value U(x, y) = 0. Therefore, the gambler chooses to gamble if and only if there exist x and y such that U(x, y) > 0.

4.2 Some Notations

Problem (12) is difficult to solve for two main reasons: First, the objective function is neither concave nor convex in the decision variables because both w_+(·) and w_−(·) are concave. Secondly, there are infinitely many constraints, so the standard Lagrange dual method is not directly applicable even if the objective function were concave. After a detailed analysis, we are able to solve problem

(12) completely. The analysis is highly technical and involved, so we opt to present it in the online appendix. Here, we only present the optimal solution. To this end, we need to introduce some notations. For any x and y of problem (12) corresponding to some stopping time τ, introduce s_+ := (Σ_{n=2}^∞ x_n)/x_1 and s_− := (Σ_{n=2}^∞ y_n)/y_1. Then,

s_+ = ( Σ_{n=2}^∞ P(S_τ ≥ n) ) / P(S_τ ≥ 1) = E[S_τ | S_τ ≥ 1] − 1.

Similarly, s_− = E[−S_τ | S_τ ≤ −1] − 1. On the other hand, define z_+ = (z_{+,1}, z_{+,2}, ...) with z_{+,n} := x_{n+1}/x_1, n ≥ 1, and z_− = (z_{−,1}, z_{−,2}, ...) with z_{−,n} := y_{n+1}/y_1, n ≥ 1. Note that Σ_{n=1}^∞ z_{±,n} = s_±. In order to find the optimal x and y, we only need first to find the optimal z_+ and z_− under the constraint Σ_{n=1}^∞ z_{±,n} = s_±, and then to find the optimal x_1, y_1, and s_±. Furthermore, s_± determine the conditional expected gain and loss of S_τ and the conditional gain-loss ratio; i.e.,

E[S_τ | S_τ ≥ 1] = s_+ + 1, E[−S_τ | S_τ ≤ −1] = s_− + 1, R(τ) = (s_+ + 1)/(s_− + 1).

Therefore, the (asymptotically) optimal values s_± will dictate whether the gambler takes a loss-exit, gain-exit, or non-exit strategy. Finally, after we find the optimal z_±, x_1, y_1, and s_±, we can compute the optimal x and y as x = (x_1, x_1 z_+) and y = (y_1, y_1 z_−), respectively. Next, we present the optimal z_± given s_± = s and the corresponding optimal value functions; the details can be found in Propositions C.2 and C.4 of the online appendix. For each s > 0, define z*_+(s) := (z*_{+,1}(s), z*_{+,2}(s), ...) and v_+(s) as follows: If α_+ ≤ δ_+ = 1, define

z*_{+,n}(s) = 1, n = 1, ..., ⌊s⌋, z*_{+,⌊s⌋+1}(s) = s − ⌊s⌋, z*_{+,n}(s) = 0, n ≥ ⌊s⌋ + 2,    (13)

v_+(s) = (1 − (s − ⌊s⌋)) (⌊s⌋ + 1)^{α_+} + (s − ⌊s⌋) (⌊s⌋ + 2)^{α_+} − 1.    (14)

If α_+ < δ_+ < 1, define

z*_{+,n}(s) = [ μ(s)^{−1} ( (n+1)^{α_+} − n^{α_+} ) ]^{1/(1−δ_+)}, n = 1, 2, ...,    (15)

v_+(s) = Σ_{n=1}^∞ [ μ(s)^{−1} ( (n+1)^{α_+} − n^{α_+} ) ]^{δ_+/(1−δ_+)} ( (n+1)^{α_+} − n^{α_+} ),    (16)

where μ(s) > 0 is the number such that Σ_{n=1}^∞ z*_{+,n}(s) = s.

Assume δ_− ≤ α_−. For each s > 0, define z*_−(s) := (z*_{−,1}(s), z*_{−,2}(s), ...) and v_−(s) as follows: Define B_1 := 1. For each integer m ≥ 2, define B_m = m when α_− = 1, and define B_m to be the unique number in (m−1, m) determined by

( (m+1)^{α_−} − 1 ) (B_m/m)^{δ_−} = (m^{α_−} − 1) + ( (m+1)^{α_−} − m^{α_−} ) (B_m − m + 1)^{δ_−}    (17)

when α_− < 1. For s ∈ (m−1, B_m], define

z*_{−,n}(s) = 1, n = 1, ..., m−1, z*_{−,m}(s) = s − m + 1, z*_{−,n}(s) = 0, n ≥ m + 1,    (18)

v_−(s) = (m^{α_−} − 1) + ( (m+1)^{α_−} − m^{α_−} ) (s − m + 1)^{δ_−}.    (19)

For s ∈ (B_m, m], define

z*_{−,n}(s) = s/m, n = 1, ..., m, z*_{−,n}(s) = 0, n ≥ m + 1,    (20)

v_−(s) = ( (m+1)^{α_−} − 1 ) (s/m)^{δ_−}.    (21)

Finally, for s = 0, define v_+(s) = v_−(s) = 0. The definition of z*_+(s) and z*_−(s) is irrelevant in this case.

4.3 Optimal Solution

Now, we are ready to present the optimal solution to problem (12) and compute its optimal value, denoted as U*. Recall that we assume α_+ ≤ δ_+, δ_− ≤ α_−, and α_+ ≤ α_−. The gambler does not play in the casino if and only if the optimal solution to problem (12) is x = y = 0, and the

gambler chooses to play if and only if there exist x and y such that U(x, y) > 0.

Theorem 2 Assume α_+ ≤ δ_+, δ_− ≤ α_−, and α_+ ≤ α_−. Define

f(y, s_+, s_−) := (v_+(s_+) + 1) ( (s_− + 1)/(s_+ + 1) )^{δ_+} y^{δ_+} − λ (v_−(s_−) + 1) y^{δ_−}, y, s_+, s_− ≥ 0.    (22)

(i) Suppose α_+ = δ_+ = α_− = δ_− = 1. If λ ≥ 1, the optimal solution to (12) is x = y = 0, i.e., the gambler does not play in the casino. If λ < 1, then U* = +∞.

(ii) Suppose α_+ = δ_+ < 1. Then U* = +∞ and there exist s_+ > 0, x_1 > 0, y, and z^{(n)}_+, n ≥ 1, satisfying Σ_{i=1}^∞ z^{(n)}_{+,i} = s_+, such that (x^{(n)}, y) with x^{(n)} := (x_1, x_1 z^{(n)}_+) is feasible to problem (12) and U(x^{(n)}, y) diverges to infinity as n → ∞.

(iii) Suppose δ_− ≤ δ_+ and α_+ < δ_+. Define

M_1 := sup_{s_+ ≥ 0, s_− ≥ 0} [ (v_+(s_+) + 1)(s_− + 1)^{δ_+} ] / [ (v_−(s_−) + 1)(s_+ + 1)^{δ_−} (s_+ + s_− + 2)^{δ_+ − δ_−} ],

M_2 := limsup_{s_+ + s_− → +∞} [ (v_+(s_+) + 1)(s_− + 1)^{δ_+} ] / [ (v_−(s_−) + 1)(s_+ + 1)^{δ_−} (s_+ + s_− + 2)^{δ_+ − δ_−} ].    (23)

Then 0 < M_1 < +∞ and 0 ≤ M_2 ≤ M_1. Furthermore, M_2 > 0 if and only if α_− = α_+ or α_− = δ_−, and M_2 = M_1 if α_− = δ_−. When λ ≥ M_1, the optimal solution to (12) is x = y = 0, i.e., the gambler does not play in the casino, and when λ < M_1, the gambler plays in the casino. When M_2 < λ < M_1, U* < ∞ and the optimal solution exists and is given as x* = (x*_1, x*_1 z*_+(s*_+)), y* = (y*_1, y*_1 z*_−(s*_−)), where

(s*_+, s*_−) ∈ argmax_{s_+ ≥ 0, s_− ≥ 0} f(ŷ(s_+, s_−), s_+, s_−), ŷ(s_+, s_−) := (s_+ + 1)/(s_+ + s_− + 2),
y*_1 = ŷ(s*_+, s*_−), x*_1 = ( (s*_− + 1)/(s*_+ + 1) ) y*_1.    (24)

When λ < M_2, the optimal solution does not exist, U* = +∞ if α_+ = α_− > δ_− and U* < +∞ if α_+ < α_− = δ_−, and there exist (s^{(n)}_+, s^{(n)}_−) with lim_{n → ∞} s^{(n)}_− = +∞ such that the objective

value of the pair x^{(n)} := (x^{(n)}_1, x^{(n)}_1 z*_+(s^{(n)}_+)), y^{(n)} := (y^{(n)}_1, y^{(n)}_1 z*_−(s^{(n)}_−)), where

y^{(n)}_1 := (s^{(n)}_+ + 1)/(s^{(n)}_+ + s^{(n)}_− + 2), x^{(n)}_1 = ((s^{(n)}_− + 1)/(s^{(n)}_+ + 1)) y^{(n)}_1, (25)

converges to U* as n → +∞. Moreover, s^{(n)}_+/s^{(n)}_− converges to zero when α_− = δ_− and is bounded away from zero and from infinity when α_− = α_+ > δ_−.

(iv) Suppose δ_+ < δ_−. Then the gambler plays in the casino. When α_− > δ_−, U* < +∞ and the optimal solution is x* = (x*_1, x*_1 z*_+(s*_+)), y* = (y*_1, y*_1 z*_−(s*_−)), where

(s*_+, s*_−) ∈ argmax_{s_+ ≥ 0, s_− ≥ 0} f(ȳ(s_+, s_−), s_+, s_−), y*_1 = ȳ(s*_+, s*_−), x*_1 = ((s*_− + 1)/(s*_+ + 1)) y*_1,

ȳ(s_+, s_−) := min{[δ_+(v_+(s_+) + 1)(s_− + 1)^{δ_+} / (δ_− λ(v_−(s_−) + 1)(s_+ + 1)^{δ_+})]^{1/(δ_− − δ_+)}, (s_+ + 1)/(s_+ + s_− + 2)}. (26)

Moreover, if λ ≥ M_3, then we can take s*_− = 0 and s*_+ = s̄, where

s̄ := Σ_{n=1}^∞ ((n+1)^{α_+} − n^{α_+})^{1/(1−δ_+)}, M_3 := δ_+(s̄ + 2)^{δ_− − δ_+} / (δ_−(s̄ + 1)^{δ_− − 1}). (27)

When α_− = δ_− and λ > M_4, where

M_4 := δ_+ / (δ_−(s̄ + 1)^{δ_− − 1}) < M_3, (28)

the optimal solution exists and is given by (26) with s*_+ = s̄ and s*_− = m for some nonnegative integer m. If α_− = δ_− and λ ≤ M_4, then U* < +∞, the optimal solution does not exist, and there exists s_+ such that the objective value of the pair x^{(n)} := (x^{(n)}_1, x^{(n)}_1 z*_+(s_+)), y^{(n)} := (y^{(n)}_1, y^{(n)}_1 z*_−(s^{(n)}_−)) with s^{(n)}_− = n, y^{(n)}_1 := (s_+ + 1)/(s_+ + s^{(n)}_− + 2), x^{(n)}_1 = ((s^{(n)}_− + 1)/(s_+ + 1)) y^{(n)}_1 converges to U* as n → +∞.

The proof of Theorem 2 is presented in Section C of the Online Appendix. Except for the case δ_+ > α_+ = α_− > δ_−, λ = M_2 < M_1, in which we do not know whether

5 Available at

the optimal solution exists, we have solved problem (2) completely. When the optimal solution exists, it is provided in closed form. Otherwise, a sequence of asymptotically optimal solutions is provided.

In the case α_+ = δ_+ = α_− = δ_− = 1, the gambler does not play in the casino (i.e., the optimal strategy is x = y = 0) when λ ≥ 1 and chooses to play (because the optimal value is infinite and thus larger than the value of not gambling) when λ < 1.

In the case α_+ = δ_+ < 1, the optimal value is infinite. We find a sequence of asymptotically optimal solutions (x^{(n)}, y). Note that the conditional expected gain is fixed to be s_+ + 1 and the conditional expected loss is also fixed. Thus, an infinite CPT value is achieved by taking strategies that have fixed and finite conditional expected gain and loss.

In the case δ_− ≤ δ_+ and α_+ < δ_+, whether the optimal solution exists and whether the gambler plays in the casino depend on λ. When λ ≥ M_1, the gambler chooses not to play; otherwise she plays. When M_2 < λ < M_1, the optimal solution exists. Finally, when λ < M_2, which can happen only when α_− = α_+ or when α_− = δ_−, the optimal solution does not exist. In this case, a sequence of asymptotically optimal solutions is given with s^{(n)}_− going to infinity and s^{(n)}_+/s^{(n)}_− converging to zero when α_− = δ_−, or bounded away from zero and from infinity when α_− = α_+ > δ_−. Recall that s^{(n)}_+ + 1 and s^{(n)}_− + 1 stand for the conditional gain and loss, respectively. Therefore, when λ < M_2, the gambler takes a gain-exit strategy if α_− = δ_− and a non-exit strategy if α_− = α_+ > δ_−.

In the case δ_+ < δ_−, the gambler plays in the casino. When α_− > δ_−, the optimal solution exists regardless of the value of λ. When α_− = δ_−, the optimal solution exists if and only if λ > M_4. When the optimal solution does not exist, a sequence of asymptotically optimal solutions is given with s^{(n)}_+ fixed and s^{(n)}_− going to infinity. Therefore, the gambler takes a gain-exit strategy.

Finally, when the optimal solution exists, it must take the form x* = (x*_1, x*_1 z*_+(s*_+)), y* = (y*_1, y*_1 z*_−(s*_−)) for some x*_1 ≥ 0, y*_1 ≥ 0, and s*_± ≥ 0. Moreover, s*_± can be found by solving a two-dimensional optimization problem (e.g., (24) and (26)). Recall z*_+(s) as defined in (13) and (15). When δ_+ = 1, z*_{+,n}(s) = 0 for n ≥ s + 2, showing that the gambler plans to stop when reaching

6 It is even possible to reduce it to a one-dimensional optimization problem, which is easy to solve. We do not provide the details here because of space limits.

a certain level of gains. When δ_+ < 1, z*_{+,n}(s) > 0 for any n, showing that the gambler chooses to continue with positive probability at any level of gains. On the other hand, recall z*_−(s) as defined in (18) and (20). We can see that z*_{−,n}(s) = 0 for n ≥ s + 1, showing that the gambler plans to stop for sure when reaching a certain level of losses. Furthermore, in the optimal solution y* = (y*_1, y*_1 z*_−(s*_−)), because y* is the cumulative distribution of the optimal strategy, the gambler chooses to stop at no more than two loss levels (levels ⌈s*_−⌉ and ⌈s*_−⌉ + 1 when z*_−(s*_−) is in the form (18), and levels 1 and ⌈s*_−⌉ + 1 when z*_−(s*_−) is in the form (20)).

Table 2 summarizes the results in Theorems 1 and 2 regarding the solution to problem (2).

4.4 Examples

We consider two numerical examples to illustrate the optimal solution (x*, y*) to (2) and the corresponding optimal strategy τ* of the pre-committed gambler. Denote by {p*_n} the optimal distribution, i.e., the distribution of S_{τ*}. Obviously, p*_n = x*_n − x*_{n+1}, n ≥ 1, and p*_{−n} = y*_n − y*_{n+1}, n ≥ 1.

Example 1 We set α_+ = 0.5, α_− = 0.9, δ_± = 0.52, and λ = 2.25, which are reasonable values. According to Theorem 2, the optimal solution to (2) exists: the optimal distribution satisfies

p*_n ∝ (n^{0.5} − (n−1)^{0.5})^{1/0.48} − ((n+1)^{0.5} − n^{0.5})^{1/0.48}, n ≥ 2, p*_{−1} = 0.885, p*_n = 0 for n ≤ −2,

with a positive mass also at n = 1. We construct a randomized, path-independent strategy τ* as in He et al. (2015) to achieve this distribution: at each time t, given the gambler's cumulative gain/loss level j, the gambler tosses a random coin whose probability of showing tails is r_j. If the coin shows heads, the gambler continues; otherwise, the gambler stops. Using the explicit construction in He et al. (2015,

7 This example is also reported in He et al. (2015).

Table 2: Optimal solution to problem (2) when the utility function is piece-wise power and the probability weighting functions are power. The third column shows whether the gambler enters the casino to gamble. The fourth column shows whether the optimal solution to problem (2) exists. The last column gives the optimal solution when it exists and the strategy (such as gain-exit, loss-exit, and non-exit strategies) the gambler takes when it does not. For the optimal solution, only s*_+, s*_−, and y*_1 are listed. The optimal solution is then x* = (x*_1, x*_1 z*_+(s*_+)), y* = (y*_1, y*_1 z*_−(s*_−)), where x*_1 = ((s*_− + 1)/(s*_+ + 1)) y*_1 and z*_+(s) and z*_−(s) are given as in (13), (15), (18), and (20), respectively. The functions f, ŷ, and ȳ are given as in (22), (24), and (26), respectively. The blank cells indicate the only case in which problem (2) is not fully solved.

Conditions | λ | Gamb. | Exist. | Strategies
α_+ = δ_+ = α_− = δ_− = 1 | λ ≥ 1 | No | Yes | not gamble
  | λ < 1 | Yes | No | non-exit
α_+ > δ_+ | any λ | Yes | No | loss-exit
α_− < δ_− | any λ | Yes | No | gain-exit
α_− < α_+ | any λ | Yes | No | non-exit
α_+ = δ_+ < 1 | any λ | Yes | No | certain asymptotic strategy
δ_− ≤ δ_+, α_+ < δ_+, and α_− > max(α_+, δ_−) | λ ≥ M_1 | No | Yes | not gamble
  | λ < M_1 | Yes | Yes | (s*_+, s*_−) ∈ argmax f(ŷ(s_+, s_−), s_+, s_−), y*_1 = ŷ(s*_+, s*_−)
δ_− ≤ δ_+, α_+ < δ_+, and α_− = δ_− > α_+ | λ ≥ M_1 | No | Yes | not gamble
  | λ < M_1 | Yes | No | gain-exit
δ_− ≤ δ_+, α_+ < δ_+, and α_− = α_+ > δ_− | λ ≥ M_1 | No | Yes | not gamble
  | M_2 < λ < M_1 | Yes | Yes | (s*_+, s*_−) ∈ argmax f(ŷ(s_+, s_−), s_+, s_−), y*_1 = ŷ(s*_+, s*_−)
  | λ = M_2 < M_1 | Yes | |
  | λ < M_2 | Yes | No | non-exit
α_− > δ_− > δ_+ > α_+ | λ ≥ M_3 | Yes | Yes | s*_+ = s̄, s*_− = 0, y*_1 = ȳ(s*_+, s*_−)
  | λ < M_3 | Yes | Yes | (s*_+, s*_−) ∈ argmax f(ȳ(s_+, s_−), s_+, s_−), y*_1 = ȳ(s*_+, s*_−)
α_− = δ_− > δ_+ > α_+ | λ > M_4 | Yes | Yes | s*_+ = s̄, s*_− = m for some nonnegative integer m, y*_1 = ȳ(s*_+, s*_−)
  | λ ≤ M_4 | Yes | No | gain-exit
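Several entries of Table 2 depend on the loss-side value v_−, which is defined piecewise on each interval (m − 1, m] with a switch point B_m solving a scalar equation that has no closed form in general. The sketch below locates B_m by bisection; the helper names are ours, and the two branch formulas follow the reconstruction of v_− used in this section, so treat them as assumptions of the sketch.

```python
def v_minus_low(s, m, a, d):
    # branch of v_- on (m-1, B_m]: mass at the two adjacent loss levels
    return (m**a - 1) + ((m + 1)**a - m**a) * (s - m + 1)**d

def v_minus_high(s, m, a, d):
    # branch of v_- on (B_m, m]: mass at loss levels 1 and m+1
    return ((m + 1)**a - 1) * (s / m)**d

def b_m(m, a, d, tol=1e-12):
    """Bisection for the switch point B_m in (m-1, m), assuming a < 1.
    The branch difference is positive near m-1 and negative just below m."""
    g = lambda s: v_minus_high(s, m, a, d) - v_minus_low(s, m, a, d)
    lo, hi = m - 1.0, m - 1e-9
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For Example 1's loss-side parameters (α_− = 0.9, δ_− = 0.52), b_m(2, 0.9, 0.52) returns a point strictly inside (1, 2) at which the two branches paste continuously.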

Appendix A.2), we obtain r_n = 1 for n ≤ −1, r_0 = 0, r_1 = 0.057, r_2 = 0.006, r_4 = 0.004, ... Therefore, the gambler's strategy is as follows: stop once reaching −1; never stop at 0; toss a coin with tails probability 0.057 when reaching 1 and stop only if the toss outcome is tails; toss a coin with tails probability 0.006 when reaching 2 and stop only if the toss outcome is tails; and so on.

Example 2 To employ a randomized, path-independent strategy as in He et al. (2015), the gambler needs to toss a random coin at every time step. Other types of randomized strategies are possible, since there can well be many different stopping times embedding the same stopped distribution. In particular, we construct the so-called randomized Azema-Yor (AY) strategies in Appendix B. The gambler needs fewer random coins to follow a randomized AY strategy than to use a randomized path-independent strategy, at the cost of the strategy being path-dependent.

We use an example to compare the randomized path-independent and randomized AY strategies. To this end, we set α_+ = 0.6, δ_+ = 0.7, α_− = 0.8, δ_− = 0.7, and λ = 2.25. We solve (2) to get the optimal distribution

p*_n ∝ (n^{0.6} − (n−1)^{0.6})^{1/0.3} − ((n+1)^{0.6} − n^{0.6})^{1/0.3}, n ≥ 2, p*_{−1} = 0.626, p*_n = 0 for n ≤ −2, (29)

with a positive mass also at n = 1. Figure 2 illustrates the randomized path-independent strategy as constructed in He et al. (2015) and the randomized AY strategy as constructed in Appendix B of the present paper, both of which lead to the same optimal distribution {p*_n}. For the randomized path-independent strategy (left panel), black nodes stand for stop, white nodes stand for continue, and grey nodes stand for the cases in which a random coin is tossed and the gambler stops if and only if the coin turns tails. The

8 For illustrative purposes, we use parameter values different from those in Example 1.

probability that the random coin turns tails is computed following the construction in He et al. (2015), and the probability is shown on top of each grey node. This strategy is path-independent, since the tails probability of the coin at each node depends only on the cumulative gain/loss level at that node. However, to implement this path-independent randomized strategy, the gambler needs to toss random coins most of the time; see the left panel of Figure 2.

The right panel of Figure 2 illustrates the randomized AY strategy. Again, black nodes stand for stop, white nodes stand for continue, and grey nodes stand for the cases in which a random coin is tossed and the gambler stops if and only if the coin turns tails. The probability that the random coin turns tails is computed following the construction given in Appendix B of the present paper and is shown on top of each grey node. We can see that this strategy is path-dependent. For instance, consider time 3 and cumulative gain/loss level 1. If the gambler reaches this node along the path from (0,0), through (1,1) and (2,2), to (3,1), then she stops. If the gambler reaches this node along the path from (0,0), through (1,1) and (2,0), to (3,1), then she continues. This exhibits a feature of the so-called trailing stop-loss strategy in stock trading, one that stops loss at a percentage below a moving target (which can be the market price or the historical high): one exits at (3,1) from (2,2) simply because it is a stop-loss from the historical high. The continuation at (3,1) from (2,0) is based on the symmetric idea. On the other hand, we can see that in the first six periods the gambler needs to use at most one random coin; i.e., there is only one grey node in the right panel of Figure 2. Therefore, compared to the randomized path-independent strategy, the randomized AY strategy involves less randomization, at the cost of being path-dependent.
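The trailing stop-loss behavior described above can be imitated by a deterministic drawdown rule: stop as soon as the level falls a fixed distance below the running maximum, provided the maximum has risen far enough above the start. This is a simplified illustration of the path-dependence, not the randomized AY construction itself; the function name and thresholds are ours.

```python
def trailing_stop(path, drawdown=1):
    """Return the time index at which the drawdown rule stops, or None.

    Stops at time t when (running max - current level) >= drawdown and the
    running max exceeds the starting level by at least drawdown + 1."""
    start = path[0]
    running_max = start
    for t, level in enumerate(path):
        running_max = max(running_max, level)
        if running_max - start >= drawdown + 1 and running_max - level >= drawdown:
            return t
    return None
```

On the two paths discussed above, the rule stops at (3,1) after 0, 1, 2, 1 (a drop from the historical high 2) but continues at (3,1) after 0, 1, 0, 1.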
5 Naive Gamblers

Barberis (2012) finds in his five-period casino gambling model that, due to probability weighting, if the gambler revisits the gambling problem with the same CPT preferences in the future, she may find that the initial strategy decided at time 0 is no longer optimal. As a result, if she cannot commit herself to following the initial strategy, she may change to a new strategy that can be totally different from the initial one. Gamblers of this type are called naive gamblers.

9 Black and white nodes are special cases where the tails probabilities of the coins turn out to be 1 and 0, respectively.

Figure 2: Randomized path-independent strategy (left panel) and randomized AY strategy (right panel) generating distribution (29). Black nodes stand for stop, white nodes stand for continue, and grey nodes stand for the cases in which a random coin is tossed and the gambler stops if and only if the coin turns tails. Each node is marked on the right by a pair (t, x) representing time t and the cumulative gain/loss level x at that time. Each grey node is marked on the top by a number showing the probability that the random coin tossed at that node turns up tails.
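Both examples share the same gain-side pattern: the decumulative gain probabilities are proportional to power differences of the form (n^{α_+} − (n−1)^{α_+})^{1/(1−δ_+)}. The sketch below (our own helper; the normalizing constant is omitted, so only ratios are meaningful) checks that the implied gain probabilities are positive and strictly decreasing, i.e., the gambler continues with a positive but shrinking probability at every gain level.

```python
def gain_probs(alpha, delta, n_max):
    """Unnormalized gain probabilities p_n = w_n - w_{n+1}, where
    w_n = (n**alpha - (n-1)**alpha)**(1/(1-delta)) is the (unnormalized)
    weight of the stopped gain reaching at least n."""
    e = 1.0 / (1.0 - delta)
    w = [(n**alpha - (n - 1)**alpha)**e for n in range(1, n_max + 2)]
    return [w[i] - w[i + 1] for i in range(n_max)]
```

With Example 2's parameters (α_+ = 0.6 and δ_+ = 0.7, so the exponent is 1/0.3), the sequence decays roughly like a power law.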

Formally, suppose that after T rounds of play the gambler has arrived at some node, has not yet stopped, and has an accumulated gain/loss equal to H. A naive gambler will reconsider the choice between continuing and leaving at that node. Following Barberis (2012), we assume the reference point in the CPT preferences of the gambler at any time to be the initial wealth level. As a result, the cumulative gain or loss of the gambler at any time n ≥ T is S_n. Conditioning on S_T = H, the gambling problem faced by the naive gambler is

max_{τ ∈ T_{T,H}} V_{T,H}(S_τ) (30)

where T_{T,H} := {τ ≥ T : τ ∈ A_C and {(S_{τ∧t} − S_T) : t ≥ T} is uniformly integrable} and

V_{T,H}(S_τ) = Σ_{n=1}^∞ u_+(n)(w_+(P(S_τ ≥ n | S_T = H)) − w_+(P(S_τ > n | S_T = H)))
− Σ_{n=1}^∞ u_−(n)(w_−(P(S_τ ≤ −n | S_T = H)) − w_−(P(S_τ < −n | S_T = H))),

where, as before, V_{T,H}(S_τ) := −∞ if the second infinite series is infinite. One can see that the CPT value of the strategy of leaving the casino at T is u(H) := u_+(H)1_{H≥0} − u_−(−H)1_{H<0}. As a result, the naive gambler decides to play at time T with cumulative gain/loss level H if and only if the optimal value of problem (30) is strictly larger than u(H). Mathematically, (30) can be solved using exactly the same method as for problem (2). The following theorem is parallel to Theorem 1, and its proof is omitted.

For two integers b < 0 < a, we denote the relative first exit time after T by τ^T_{a,b} := inf{n ≥ T : S_n − S_T = a or S_n − S_T = b}.

Theorem 3 Fix a time T > 0 and a level H, −T ≤ H ≤ T. Let V*_{T,H} denote the value of the optimization problem (30).

(i) Suppose (4) holds. Then V*_{T,H} = +∞ and there exists b < 0 such that lim_{a→+∞} V_{T,H}(S_{τ^T_{a,b}}) = +∞.

(ii) Suppose (5) holds. Then V*_{T,H} = sup_{x ≥ 0} u_+(x). Moreover, the optimal solution to problem (30) does not exist, and V_{T,H}(S_{τ^T_{a,b}}) with a = k^{ε/(1+ε)} and b = −k converges to V*_{T,H} as k goes to infinity.

(iii) Assume lim_{x→+∞} u_+(x) = +∞ and suppose condition (6) holds. Then V*_{T,H} = +∞ and there exists c > 0 such that lim_{k→+∞} V_{T,H}(S_{τ^T_{a,b}}) = +∞, where a = ck and b = −k.

Theorem 3 shows that when one of the conditions (4)-(6) is satisfied, the naive gambler, who adjusts her strategy after each bet, always finds continuing to gamble more attractive. So, in effect, she never stops gambling. In addition, this is true even when we restrict the gambler to the simple exit strategies τ^T_{a,b}, which are in particular path-independent. With the piece-wise power utility function (7) and weighting functions (8), or weighting functions (9), or power weighting functions w_±(p) = p^{δ_±}, the naive gambler never stops gambling if one of the conditions in Table 1 is satisfied. For weighting functions (10), the naive gambler never stops gambling if δ_+ < 1, or if δ_+ = 1 and α_+ > a_+.

Ebert and Strack (2012) also study the conditions under which a naive gambler does not stop gambling. They assume that the naive gambler can construct strategies with arbitrarily small random payoffs and then show that the naive gambler prefers skewness in the small and thus does not stop gambling. Recently, in a stylized continuous-time example, Henderson et al. (2015) employ the methodology of Xu and Zhou (2012) to obtain analytically the optimal strategy of a pre-committed gambler. They then use it to study the actual optimal behaviours of a naive gambler. They show that if a naive gambler is allowed to use randomized strategies and prefers natural strategies that are time-homogeneous and Markovian, he/she may stop instantly with a positive probability or even for sure. We note that our approach to comparing pre-committed and naive gamblers is similar in spirit to Henderson et al. (2015).
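For the simple exit strategies τ^T_{a,b}, the stopped conditional walk is a two-point lottery: a gain of a with the gambler's-ruin probability −b/(a − b), and a loss of −b otherwise. Its CPT value (relative to the level at time T, i.e., the H = 0 case) is explicit under piece-wise power utility and power weighting. The helper below is a sketch with our own naming, defaulting to Example 1's parameters.

```python
def cpt_exit_value(a, b, alpha_p=0.5, alpha_m=0.9,
                   delta_p=0.52, delta_m=0.52, lam=2.25):
    """CPT value of S_{tau_{a,b}} - S_T for integers b < 0 < a:
    u_+(x) = x**alpha_p, u_-(x) = lam * x**alpha_m, power weighting."""
    p_win = -b / (a - b)                        # P(hit a before b), symmetric walk
    gain = a**alpha_p * p_win**delta_p          # u_+(a) * w_+(p_win)
    loss = lam * (-b)**alpha_m * (1 - p_win)**delta_m
    return gain - loss
```

For instance, a fair ±1 bet has negative CPT value under loss aversion λ = 2.25, so a one-shot even bet is rejected, while the long-shot values of τ_{a,−1} for large a govern the skewness-in-the-large effect.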
However, in our model the naive gambler cannot construct strategies with arbitrarily small random payoffs because the stake size is fixed at $1. Consequently, the results in Ebert and Strack (2012) and Henderson et al. (2015) do not apply in our setting. In fact, here we show that the naive gambler prefers skewness in the large. Indeed, in Theorem 3, for

10 A more elaborate setting with a geometric Brownian motion and CPT preferences is addressed numerically.

the stopping time τ^T_{a,b} we send either a, or b, or both to infinity, and thus the payoff of τ^T_{a,b} can be very large.

The following result compares the optimal strategy of the pre-committed gambler and the actual behavior of the naive gambler when the utility function is given as in (7) and the weighting functions are w_±(p) = p^{δ_±}.

Theorem 4 (i) Assume δ_+ < 1. For any H ≥ 0, there exists a feasible (x, y) for problem (30) such that U(x, y) > H^{α_+}.

(ii) Assume δ_+ < δ_−. Then, for any H < 0, there exists a feasible (x, y) for problem (30) such that U(x, y) > −λ(−H)^{α_−}.

The proof of Theorem 4 is provided in Section D.1 of the Online Appendix. Theorem 4-(i) shows that the naive gambler continues with a positive probability at any gain level, which is the same as the pre-committed gambler. On the other hand, Theorem 4-(ii) indicates that if δ_+ < δ_−, then at each time, with a positive probability, the naive gambler does not stop. More precisely, either she simply continues, or else she might want to toss a coin to decide whether to continue to play or not. In particular, if α_+ < δ_+ < δ_− < α_−, then according to Theorem 2 the pre-committed gambler stops for sure if the loss hits a certain level. In sharp contrast, the naive gambler's actual behavior is to play with a positive probability at any loss level.

The reason why the naive gambler behaves differently from the pre-committed gambler is the same as explained in Barberis (2012): at time 0, the pre-committed gambler decides to stop when the loss reaches a certain level, say L, in the future, because the probability of having losses strictly larger than L is small from time 0's perspective, and this small probability is exaggerated due to probability weighting. For the naive gambler, when she actually reaches loss level L at some time t, the probability of having losses strictly larger than L is no longer small from time t's perspective.
Consequently, loss L is not overweighted by the naive gambler at time t, so she chooses to take a chance and not to stop gambling.

Finally, we provide a numerical example to show the actual behavior of the naive gambler.

11 In Appendix W2 of Ebert and Strack (2012), the authors also consider preference for skewness in the large when the utility function is piece-wise power.
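The naive gambler's decision rule, play on at level H whenever some continuation strategy has CPT value above u(H), can be illustrated with the two-point exits τ^T_{a,b}. The sketch below (our own helper names, Example 1's parameters as defaults) evaluates such an exit from level H with the reference point kept at 0; since only two-point exits are tried, it understates the true continuation value.

```python
def u(x, alpha_p=0.5, alpha_m=0.9, lam=2.25):
    # piece-wise power utility of a final gain/loss x, reference point 0
    return x**alpha_p if x >= 0 else -lam * (-x)**alpha_m

def v_exit(H, a, b, alpha_p=0.5, alpha_m=0.9,
           delta_p=0.52, delta_m=0.52, lam=2.25):
    """CPT value of exiting at level H+a or H+b (b < 0 < a), starting from H."""
    p = -b / (a - b)                   # gambler's-ruin probability of the up-exit
    hi, lo = H + a, H + b
    if lo >= 0:                        # both outcomes are gains
        return u(hi, alpha_p, alpha_m, lam) * p**delta_p \
            + u(lo, alpha_p, alpha_m, lam) * (1 - p**delta_p)
    if hi <= 0:                        # both outcomes are losses
        return u(hi, alpha_p, alpha_m, lam) * (1 - (1 - p)**delta_m) \
            + u(lo, alpha_p, alpha_m, lam) * (1 - p)**delta_m
    # mixed case: the gain is weighted by w_+(p), the loss by w_-(1-p)
    return u(hi, alpha_p, alpha_m, lam) * p**delta_p \
        + u(lo, alpha_p, alpha_m, lam) * (1 - p)**delta_m
```

For example, at H = −2 the exit τ_{3,−1} is worse than stopping under Example 1's parameters (where δ_+ = δ_−); Theorem 4-(ii) needs δ_+ < δ_− and richer randomized strategies to make continuing attractive at loss levels.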


More information

Sequential Decisions

Sequential Decisions Sequential Decisions A Basic Theorem of (Bayesian) Expected Utility Theory: If you can postpone a terminal decision in order to observe, cost free, an experiment whose outcome might change your terminal

More information

University of Warwick, EC9A0 Maths for Economists Lecture Notes 10: Dynamic Programming

University of Warwick, EC9A0 Maths for Economists Lecture Notes 10: Dynamic Programming University of Warwick, EC9A0 Maths for Economists 1 of 63 University of Warwick, EC9A0 Maths for Economists Lecture Notes 10: Dynamic Programming Peter J. Hammond Autumn 2013, revised 2014 University of

More information

RANDOM WALKS AND THE PROBABILITY OF RETURNING HOME

RANDOM WALKS AND THE PROBABILITY OF RETURNING HOME RANDOM WALKS AND THE PROBABILITY OF RETURNING HOME ELIZABETH G. OMBRELLARO Abstract. This paper is expository in nature. It intuitively explains, using a geometrical and measure theory perspective, why

More information

Introduction to Game Theory: Simple Decisions Models

Introduction to Game Theory: Simple Decisions Models Introduction to Game Theory: Simple Decisions Models John C.S. Lui Department of Computer Science & Engineering The Chinese University of Hong Kong John C.S. Lui (CUHK) Advanced Topics in Network Analysis

More information

Topic 3: Introduction to Probability

Topic 3: Introduction to Probability Topic 3: Introduction to Probability 1 Contents 1. Introduction 2. Simple Definitions 3. Types of Probability 4. Theorems of Probability 5. Probabilities under conditions of statistically independent events

More information

Simple Consumption / Savings Problems (based on Ljungqvist & Sargent, Ch 16, 17) Jonathan Heathcote. updated, March The household s problem X

Simple Consumption / Savings Problems (based on Ljungqvist & Sargent, Ch 16, 17) Jonathan Heathcote. updated, March The household s problem X Simple Consumption / Savings Problems (based on Ljungqvist & Sargent, Ch 16, 17) subject to for all t Jonathan Heathcote updated, March 2006 1. The household s problem max E β t u (c t ) t=0 c t + a t+1

More information

LEM. New Results on Betting Strategies, Market Selection, and the Role of Luck. Giulio Bottazzi Daniele Giachini

LEM. New Results on Betting Strategies, Market Selection, and the Role of Luck. Giulio Bottazzi Daniele Giachini LEM WORKING PAPER SERIES New Results on Betting Strategies, Market Selection, and the Role of Luck Giulio Bottazzi Daniele Giachini Institute of Economics, Scuola Sant'Anna, Pisa, Italy 2018/08 February

More information

OPTIMAL STOPPING OF A BROWNIAN BRIDGE

OPTIMAL STOPPING OF A BROWNIAN BRIDGE OPTIMAL STOPPING OF A BROWNIAN BRIDGE ERIK EKSTRÖM AND HENRIK WANNTORP Abstract. We study several optimal stopping problems in which the gains process is a Brownian bridge or a functional of a Brownian

More information

3 Intertemporal Risk Aversion

3 Intertemporal Risk Aversion 3 Intertemporal Risk Aversion 3. Axiomatic Characterization This section characterizes the invariant quantity found in proposition 2 axiomatically. The axiomatic characterization below is for a decision

More information

Lecture 23. Random walks

Lecture 23. Random walks 18.175: Lecture 23 Random walks Scott Sheffield MIT 1 Outline Random walks Stopping times Arcsin law, other SRW stories 2 Outline Random walks Stopping times Arcsin law, other SRW stories 3 Exchangeable

More information

Limits and Continuity

Limits and Continuity Chapter Limits and Continuity. Limits of Sequences.. The Concept of Limit and Its Properties A sequence { } is an ordered infinite list x,x,...,,... The n-th term of the sequence is, and n is the index

More information

Competitive Equilibria in a Comonotone Market

Competitive Equilibria in a Comonotone Market Competitive Equilibria in a Comonotone Market 1/51 Competitive Equilibria in a Comonotone Market Ruodu Wang http://sas.uwaterloo.ca/ wang Department of Statistics and Actuarial Science University of Waterloo

More information

Introduction. 1 University of Pennsylvania, Wharton Finance Department, Steinberg Hall-Dietrich Hall, 3620

Introduction. 1 University of Pennsylvania, Wharton Finance Department, Steinberg Hall-Dietrich Hall, 3620 May 16, 2006 Philip Bond 1 Are cheap talk and hard evidence both needed in the courtroom? Abstract: In a recent paper, Bull and Watson (2004) present a formal model of verifiability in which cheap messages

More information

Appendix (For Online Publication) Community Development by Public Wealth Accumulation

Appendix (For Online Publication) Community Development by Public Wealth Accumulation March 219 Appendix (For Online Publication) to Community Development by Public Wealth Accumulation Levon Barseghyan Department of Economics Cornell University Ithaca NY 14853 lb247@cornell.edu Stephen

More information

Data Abundance and Asset Price Informativeness. On-Line Appendix

Data Abundance and Asset Price Informativeness. On-Line Appendix Data Abundance and Asset Price Informativeness On-Line Appendix Jérôme Dugast Thierry Foucault August 30, 07 This note is the on-line appendix for Data Abundance and Asset Price Informativeness. It contains

More information

DECISIONS UNDER UNCERTAINTY

DECISIONS UNDER UNCERTAINTY August 18, 2003 Aanund Hylland: # DECISIONS UNDER UNCERTAINTY Standard theory and alternatives 1. Introduction Individual decision making under uncertainty can be characterized as follows: The decision

More information

Chapter 2 Random Variables

Chapter 2 Random Variables Stochastic Processes Chapter 2 Random Variables Prof. Jernan Juang Dept. of Engineering Science National Cheng Kung University Prof. Chun-Hung Liu Dept. of Electrical and Computer Eng. National Chiao Tung

More information

arxiv: v1 [math.pr] 26 Mar 2008

arxiv: v1 [math.pr] 26 Mar 2008 arxiv:0803.3679v1 [math.pr] 26 Mar 2008 The game-theoretic martingales behind the zero-one laws Akimichi Takemura 1 takemura@stat.t.u-tokyo.ac.jp, http://www.e.u-tokyo.ac.jp/ takemura Vladimir Vovk 2 vovk@cs.rhul.ac.uk,

More information

How Much Evidence Should One Collect?

How Much Evidence Should One Collect? How Much Evidence Should One Collect? Remco Heesen October 10, 2013 Abstract This paper focuses on the question how much evidence one should collect before deciding on the truth-value of a proposition.

More information

Optimality, Duality, Complementarity for Constrained Optimization

Optimality, Duality, Complementarity for Constrained Optimization Optimality, Duality, Complementarity for Constrained Optimization Stephen Wright University of Wisconsin-Madison May 2014 Wright (UW-Madison) Optimality, Duality, Complementarity May 2014 1 / 41 Linear

More information

Game Theory and its Applications to Networks - Part I: Strict Competition

Game Theory and its Applications to Networks - Part I: Strict Competition Game Theory and its Applications to Networks - Part I: Strict Competition Corinne Touati Master ENS Lyon, Fall 200 What is Game Theory and what is it for? Definition (Roger Myerson, Game Theory, Analysis

More information

SIMPLE RANDOM WALKS: IMPROBABILITY OF PROFITABLE STOPPING

SIMPLE RANDOM WALKS: IMPROBABILITY OF PROFITABLE STOPPING SIMPLE RANDOM WALKS: IMPROBABILITY OF PROFITABLE STOPPING EMILY GENTLES Abstract. This paper introduces the basics of the simple random walk with a flair for the statistical approach. Applications in biology

More information

Risk Aversion over Incomes and Risk Aversion over Commodities. By Juan E. Martinez-Legaz and John K.-H. Quah 1

Risk Aversion over Incomes and Risk Aversion over Commodities. By Juan E. Martinez-Legaz and John K.-H. Quah 1 Risk Aversion over Incomes and Risk Aversion over Commodities By Juan E. Martinez-Legaz and John K.-H. Quah 1 Abstract: This note determines the precise connection between an agent s attitude towards income

More information

Utility and Decision Making 1

Utility and Decision Making 1 Utility and Decision Making 1 Suppose that a casino offers to keep flipping a coin until it comes up heads for the first time. If it comes up heads on the 1st flip, you win $1. If it comes up heads for

More information

Three contrasts between two senses of coherence

Three contrasts between two senses of coherence Three contrasts between two senses of coherence Teddy Seidenfeld Joint work with M.J.Schervish and J.B.Kadane Statistics, CMU Call an agent s choices coherent when they respect simple dominance relative

More information

2.3. The heat equation. A function u(x, t) solves the heat equation if (with σ>0)

2.3. The heat equation. A function u(x, t) solves the heat equation if (with σ>0) 12 VOLKER BETZ 2.3. The heat equation. A function u(x, t) solves the heat equation if (with σ>) (2.1) t u 1 2 σ2 u =. In the simplest case, (2.1) is supposed to hold for all x R n, and all t>, and there

More information

Chapter 7 Probability Basics

Chapter 7 Probability Basics Making Hard Decisions Chapter 7 Probability Basics Slide 1 of 62 Introduction A A,, 1 An Let be an event with possible outcomes: 1 A = Flipping a coin A = {Heads} A 2 = Ω {Tails} The total event (or sample

More information

Explaining the harmonic sequence paradox. by Ulrich Schmidt, and Alexander Zimper

Explaining the harmonic sequence paradox. by Ulrich Schmidt, and Alexander Zimper Explaining the harmonic sequence paradox by Ulrich Schmidt, and Alexander Zimper No. 1724 August 2011 Kiel Institute for the World Economy, Hindenburgufer 66, 24105 Kiel, Germany Kiel Working Paper No.

More information

STOCHASTIC MODELS LECTURE 1 MARKOV CHAINS. Nan Chen MSc Program in Financial Engineering The Chinese University of Hong Kong (ShenZhen) Sept.

STOCHASTIC MODELS LECTURE 1 MARKOV CHAINS. Nan Chen MSc Program in Financial Engineering The Chinese University of Hong Kong (ShenZhen) Sept. STOCHASTIC MODELS LECTURE 1 MARKOV CHAINS Nan Chen MSc Program in Financial Engineering The Chinese University of Hong Kong (ShenZhen) Sept. 6, 2016 Outline 1. Introduction 2. Chapman-Kolmogrov Equations

More information

Lecture 4 An Introduction to Stochastic Processes

Lecture 4 An Introduction to Stochastic Processes Lecture 4 An Introduction to Stochastic Processes Prof. Massimo Guidolin Prep Course in Quantitative Methods for Finance August-September 2017 Plan of the lecture Motivation and definitions Filtrations

More information

Notes and Solutions #6 Meeting of 21 October 2008

Notes and Solutions #6 Meeting of 21 October 2008 HAVERFORD COLLEGE PROBLEM SOLVING GROUP 008-9 Notes and Solutions #6 Meeting of October 008 The Two Envelope Problem ( Box Problem ) An extremely wealthy intergalactic charitable institution has awarded

More information

Near-Potential Games: Geometry and Dynamics

Near-Potential Games: Geometry and Dynamics Near-Potential Games: Geometry and Dynamics Ozan Candogan, Asuman Ozdaglar and Pablo A. Parrilo January 29, 2012 Abstract Potential games are a special class of games for which many adaptive user dynamics

More information

Problem Sheet 1. You may assume that both F and F are σ-fields. (a) Show that F F is not a σ-field. (b) Let X : Ω R be defined by 1 if n = 1

Problem Sheet 1. You may assume that both F and F are σ-fields. (a) Show that F F is not a σ-field. (b) Let X : Ω R be defined by 1 if n = 1 Problem Sheet 1 1. Let Ω = {1, 2, 3}. Let F = {, {1}, {2, 3}, {1, 2, 3}}, F = {, {2}, {1, 3}, {1, 2, 3}}. You may assume that both F and F are σ-fields. (a) Show that F F is not a σ-field. (b) Let X :

More information

Ex Post Cheap Talk : Value of Information and Value of Signals

Ex Post Cheap Talk : Value of Information and Value of Signals Ex Post Cheap Talk : Value of Information and Value of Signals Liping Tang Carnegie Mellon University, Pittsburgh PA 15213, USA Abstract. Crawford and Sobel s Cheap Talk model [1] describes an information

More information

Game Theory and Algorithms Lecture 2: Nash Equilibria and Examples

Game Theory and Algorithms Lecture 2: Nash Equilibria and Examples Game Theory and Algorithms Lecture 2: Nash Equilibria and Examples February 24, 2011 Summary: We introduce the Nash Equilibrium: an outcome (action profile) which is stable in the sense that no player

More information

Random Models. Tusheng Zhang. February 14, 2013

Random Models. Tusheng Zhang. February 14, 2013 Random Models Tusheng Zhang February 14, 013 1 Introduction In this module, we will introduce some random models which have many real life applications. The course consists of four parts. 1. A brief review

More information

Inference for Stochastic Processes

Inference for Stochastic Processes Inference for Stochastic Processes Robert L. Wolpert Revised: June 19, 005 Introduction A stochastic process is a family {X t } of real-valued random variables, all defined on the same probability space

More information

Recursive Ambiguity and Machina s Examples

Recursive Ambiguity and Machina s Examples Recursive Ambiguity and Machina s Examples David Dillenberger Uzi Segal May 0, 0 Abstract Machina (009, 0) lists a number of situations where standard models of ambiguity aversion are unable to capture

More information

Considering the Pasadena Paradox

Considering the Pasadena Paradox MPRA Munich Personal RePEc Archive Considering the Pasadena Paradox Robert William Vivian University of the Witwatersrand June 2006 Online at http://mpra.ub.uni-muenchen.de/5232/ MPRA Paper No. 5232, posted

More information

2. Transience and Recurrence

2. Transience and Recurrence Virtual Laboratories > 15. Markov Chains > 1 2 3 4 5 6 7 8 9 10 11 12 2. Transience and Recurrence The study of Markov chains, particularly the limiting behavior, depends critically on the random times

More information

A technical appendix for multihoming and compatibility

A technical appendix for multihoming and compatibility A technical appendix for multihoming and compatibility Toker Doganoglu and Julian Wright July 19, 2005 We would like to thank two anonymous referees and our editor, Simon Anderson, for their very helpful

More information

Repeated Downsian Electoral Competition

Repeated Downsian Electoral Competition Repeated Downsian Electoral Competition John Duggan Department of Political Science and Department of Economics University of Rochester Mark Fey Department of Political Science University of Rochester

More information

Certainty Equivalent Representation of Binary Gambles That Are Decomposed into Risky and Sure Parts

Certainty Equivalent Representation of Binary Gambles That Are Decomposed into Risky and Sure Parts Review of Economics & Finance Submitted on 28/Nov./2011 Article ID: 1923-7529-2012-02-65-11 Yutaka Matsushita Certainty Equivalent Representation of Binary Gambles That Are Decomposed into Risky and Sure

More information

arxiv: v2 [math.pr] 4 Sep 2017

arxiv: v2 [math.pr] 4 Sep 2017 arxiv:1708.08576v2 [math.pr] 4 Sep 2017 On the Speed of an Excited Asymmetric Random Walk Mike Cinkoske, Joe Jackson, Claire Plunkett September 5, 2017 Abstract An excited random walk is a non-markovian

More information

Do Shareholders Vote Strategically? Voting Behavior, Proposal Screening, and Majority Rules. Supplement

Do Shareholders Vote Strategically? Voting Behavior, Proposal Screening, and Majority Rules. Supplement Do Shareholders Vote Strategically? Voting Behavior, Proposal Screening, and Majority Rules Supplement Ernst Maug Kristian Rydqvist September 2008 1 Additional Results on the Theory of Strategic Voting

More information

Senior Math Circles November 19, 2008 Probability II

Senior Math Circles November 19, 2008 Probability II University of Waterloo Faculty of Mathematics Centre for Education in Mathematics and Computing Senior Math Circles November 9, 2008 Probability II Probability Counting There are many situations where

More information

Choice under uncertainty

Choice under uncertainty Choice under uncertainty Expected utility theory The agent chooses among a set of risky alternatives (lotteries) Description of risky alternatives (lotteries) a lottery L = a random variable on a set of

More information

Robert Wilson (1977), A Bidding Model of Perfect Competition, Review of Economic

Robert Wilson (1977), A Bidding Model of Perfect Competition, Review of Economic Econ 805 Advanced Micro Theory I Dan Quint Fall 2008 Lecture 10 Oct 7 2008 No lecture on Thursday New HW, and solutions to last one, online Today s papers: Robert Wilson (1977), A Bidding Model of Perfect

More information

S n = x + X 1 + X X n.

S n = x + X 1 + X X n. 0 Lecture 0 0. Gambler Ruin Problem Let X be a payoff if a coin toss game such that P(X = ) = P(X = ) = /2. Suppose you start with x dollars and play the game n times. Let X,X 2,...,X n be payoffs in each

More information

SF2972 Game Theory Exam with Solutions March 15, 2013

SF2972 Game Theory Exam with Solutions March 15, 2013 SF2972 Game Theory Exam with s March 5, 203 Part A Classical Game Theory Jörgen Weibull and Mark Voorneveld. (a) What are N, S and u in the definition of a finite normal-form (or, equivalently, strategic-form)

More information

Expectations and Variance

Expectations and Variance 4. Model parameters and their estimates 4.1 Expected Value and Conditional Expected Value 4. The Variance 4.3 Population vs Sample Quantities 4.4 Mean and Variance of a Linear Combination 4.5 The Covariance

More information

arxiv: v2 [cs.ni] 8 Apr 2014

arxiv: v2 [cs.ni] 8 Apr 2014 Network Non-Neutrality on the Internet: Content Provision Under a Subscription Revenue Model Mohammad Hassan Lotfi ESE Department University of Pennsylvania Philadelphia, PA, 1910 lotfm@seas.upenn.edu

More information

Near-Potential Games: Geometry and Dynamics

Near-Potential Games: Geometry and Dynamics Near-Potential Games: Geometry and Dynamics Ozan Candogan, Asuman Ozdaglar and Pablo A. Parrilo September 6, 2011 Abstract Potential games are a special class of games for which many adaptive user dynamics

More information

Lecture 1. Stochastic Optimization: Introduction. January 8, 2018

Lecture 1. Stochastic Optimization: Introduction. January 8, 2018 Lecture 1 Stochastic Optimization: Introduction January 8, 2018 Optimization Concerned with mininmization/maximization of mathematical functions Often subject to constraints Euler (1707-1783): Nothing

More information

CH 3 P1. Two fair dice are rolled. What is the conditional probability that at least one lands on 6 given that the dice land on different numbers?

CH 3 P1. Two fair dice are rolled. What is the conditional probability that at least one lands on 6 given that the dice land on different numbers? CH 3 P1. Two fair dice are rolled. What is the conditional probability that at least one lands on 6 given that the dice land on different numbers? P7. The king comes from a family of 2 children. what is

More information