Distribution of winners in truel games


R. Toral and P. Amengual
Instituto Mediterráneo de Estudios Avanzados (IMEDEA), CSIC-UIB, Ed. Mateu Orfila, Campus UIB, E-07122 Palma de Mallorca, Spain

Abstract. In this work we present a detailed analysis, using Markov chain theory, of some versions of the truel game, in which three players try to eliminate each other in a series of one-to-one competitions, using the rules of the game. Besides reproducing some known expressions for the winning probability of each player, including the equilibrium points, we give expressions for the actual distribution of winners in a truel competition. We also introduce a variation of the game that can be used as a model of opinion formation.

Keywords: Game theory, stochastic processes.

Proceedings of the 8th Granada Seminar on Computational Physics "Modeling cooperative behavior in the social sciences", P.L. Garrido, M.A. Muñoz and J. Marro, eds. AIP Conf. Proc. 779, 128 (2005).

INTRODUCTION

A truel is a game in which three players aim to eliminate each other in a series of one-to-one competitions. The mechanics of the game is as follows: at each time step, one of the players is chosen and he decides who will be his target. He then aims at this person and, with a given probability, he might achieve the goal of eliminating him from the game (this is usually expressed as the players "shooting" and "killing" each other, although possible applications of this simple game need not be so violent). Whatever the result, a new player is chosen amongst the survivors and the process repeats until only one of the three players remains. The paradox is that the player who has the highest probability of annihilating competitors need not be the winner of this game. This surprising result was already present in the early literature on truels; see the bibliography in the excellent review of reference [1]. According to this reference, the first mention of truels was in the compendium of mathematical puzzles by Kinnaird [2], although the name "truel" was coined by Shubik [3] in the 1960s. Different versions of the truel vary in the way the players are chosen (randomly, in a fixed sequence, or shooting simultaneously), whether they are allowed to "pass", i.e. to miss the shot on purpose ("shooting into the air"), the number of tries (or "bullets") available to each player, etc. The strategy of each player consists in choosing the appropriate target when it is his turn to shoot. Rational players will use the strategy that maximizes their own probability of winning and hence they will choose the strategy

given by the Nash equilibrium point. In a series of seminal papers [4, 5, 6], Kilgour has analyzed the games and determined the equilibrium points under a variety of conditions. In this paper, we analyze the games from the point of view of Markov chain theory. Besides being able to reproduce some of the results by Kilgour, we obtain the probability distribution for the winners of the games. We restrict our study to the case in which there is an infinite number of bullets and consider two different versions of the truel: random and fixed sequential choosing of the shooting player. These two cases are presented in the sections on random and sequential firing, respectively. In the section on convincing opinion we consider a variation of the game in which, instead of eliminating the competitors from the game, the objective is to convince them on a topic, making the truel suitable as a model of opinion formation. Some conclusions and directions for future work are presented in the conclusions, whereas the most technical parts of our work are left for the final appendixes.

RANDOM FIRING

Let us first fix the notation. The three players are labeled A, B and C. We denote by a, b and c, respectively, their marksmanships, defined as the probability that a player has of eliminating from the game the player he has aimed at. The strategy of a player is the set of probabilities he uses in order to aim at a particular player or to shoot into the air. Obviously, when only two players remain, the only meaningful strategy is to shoot at the other player. If three players are still active, we denote by P_AB, P_AC and P_A0 the probabilities of player A shooting at player B, at player C, or into the air, respectively, with equivalent definitions for players B and C. These probabilities satisfy P_AB + P_AC + P_A0 = 1. A "pure" strategy for player A corresponds to the case where one of these three probabilities is taken equal to 1 and the other two equal to 0, whereas a "mixed" strategy takes two or more of these probabilities strictly greater than 0. Finally, we denote by π(a;b,c) the probability that the player with marksmanship a wins the game when he plays against two players of marksmanships b and c. The definition implies π(a;b,c) = π(a;c,b) and π(a;b,c) + π(b;a,c) + π(c;a,b) = 1.

In the particular case considered in this section, at each time step one of the players is chosen randomly, with equal probability, amongst the survivors. There are 7 possible states of the system, labeled ABC, AB, AC, BC, A, B, C, according to the players who remain in the game. The game can be thought of as a Markov chain with seven states, three of them being absorbing states. The details of the calculation of the winning probabilities of A, B and C, as well as a diagram of the allowed transitions between states, are left for the appendix. We now discuss the results in different cases.

Imagine that the players do not adopt any thought-out strategy and each one shoots randomly at either of the other two players. Clearly, this is equivalent to setting P_AB = P_AC = P_BA = P_BC = P_CA = P_CB = 1/2. The winning probabilities in this case are:

π(a;b,c) = a/(a+b+c),   π(b;a,c) = b/(a+b+c),   π(c;a,b) = c/(a+b+c),   (1)

a logical result that indicates that the player with the higher marksmanship possesses the higher probability of winning. An identical result is obtained if the players include shooting into the air as one of their equally likely possibilities.
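As a quick sanity check of Eq. (1), the following minimal Python sketch (the code and its names are ours, purely illustrative) runs a Monte Carlo simulation of the random-firing truel with random targets and compares the winning frequencies with a/(a+b+c), b/(a+b+c) and c/(a+b+c):

```python
import random

def random_truel(marks, rng=random):
    """One random-firing truel in which every player shoots at a randomly chosen
    surviving opponent (the 'no strategy' case of Eq. (1)).
    marks maps a player label to his marksmanship, e.g. {"A": 1.0, "B": 0.8, "C": 0.5}."""
    alive = list(marks)
    while len(alive) > 1:
        shooter = rng.choice(alive)                 # random selection of the shooting player
        target = rng.choice([p for p in alive if p != shooter])
        if rng.random() < marks[shooter]:           # hit with probability = marksmanship
            alive.remove(target)
    return alive[0]

marks = {"A": 1.0, "B": 0.8, "C": 0.5}
n = 200_000
wins = {p: 0 for p in marks}
for _ in range(n):
    wins[random_truel(marks)] += 1
total = sum(marks.values())
for p, m in marks.items():
    print(p, round(wins[p] / n, 3), "theory:", round(m / total, 3))   # Eq. (1): m/(a+b+c)
```

The estimated frequencies should agree with Eq. (1) up to Monte Carlo error.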

It is conceivable, though, that players will not choose their targets randomly, but will use some strategy in order to maximize their winning probability. Completely rational players will choose strategies that are best responses (i.e. strategies that are utility maximizing) to the strategies used by the other players. This defines an equilibrium point when all the players are better off keeping their actual strategy than changing to another one. Accordingly, this equilibrium point can be defined as the set of probabilities P_αβ (with α = A, B, C and β = A, B, C, 0) such that the winning probabilities have a maximum. This set can be found from the expressions in the appendix, with the result that the equilibrium point in the case a > b > c is given by P_AB = P_CA = P_BA = 1 and P_AC = P_A0 = P_BC = P_B0 = P_CB = P_C0 = 0. This is the strongest-opponent strategy, in which each player aims at the strongest of his opponents [1]. With this strategy, the winning probabilities are:

π(a;b,c) = a^2/[(a+c)(a+b+c)],   π(b;a,c) = b/(a+b+c),   π(c;a,b) = c(c+2a)/[(a+c)(a+b+c)]   (2)

(notice that these expressions assume a > b > c; other cases can easily be obtained by a convenient redefinition of a, b and c). An analysis of these probabilities leads to the paradoxical result that, when all players use their best strategy, the player with the worst marksmanship can become the player with the highest winning probability. For example, if a = 1.0, b = 0.8, c = 0.5, the probabilities of A, B and C winning the game are 0.290, 0.348 and 0.362, respectively, precisely in inverse order of their marksmanships. The paradox is explained when one realizes that all players set as primary target either player A or player B, leaving player C as the last option, and so he might have the largest winning probability. In Fig. 1 we plot the regions in parameter space (b,c) (after setting a = 1) representing the player with the highest winning probability.

Imagine that we set up a truel competition. Sets of three players are chosen randomly amongst a population whose marksmanships are uniformly distributed in the interval (0,1). The distribution of winners is characterized by a probability density function, f(x), such that f(x)dx is the proportion of winners whose marksmanship lies in the interval (x, x+dx). This distribution is obtained as:

f(x) = ∫_0^1 da ∫_0^1 db ∫_0^1 dc [π(a;b,c)δ(x-a) + π(b;a,c)δ(x-b) + π(c;a,b)δ(x-c)]   (3)

or

f(x) = 3 ∫_0^1 db ∫_0^1 dc π(x;b,c).   (4)

If players use the random strategy, Eq. (1), the distribution of winners is f(x) = 3x[x ln x - 2(1+x) ln(1+x) + (2+x) ln(2+x)]. In figure 2 we observe that, as expected, the function f(x) attains its maximum at x = 1, indicating that the players with the best marksmanship are the ones who win most often.
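Both the numbers quoted above and the shape of f(x) under the equilibrium strategy can be checked with a short numerical sketch along the following lines (our own code; the ordering of the arguments is handled by sorting, since the closed forms of Eq. (2) assume a > b > c):

```python
import numpy as np

def pi_optimal(a, b, c):
    """Winning probabilities of the players with marksmanships a, b, c (in that order)
    under the strongest-opponent strategy, Eq. (2). The closed forms assume the first
    argument is the largest, so we sort and map the result back."""
    m = np.array([a, b, c])
    order = np.argsort(m)[::-1]            # player indices from best to worst
    x, y, z = m[order]
    probs = np.empty(3)
    probs[order[0]] = x**2 / ((x + z) * (x + y + z))
    probs[order[1]] = y / (x + y + z)
    probs[order[2]] = z * (z + 2*x) / ((x + z) * (x + y + z))
    return probs

# paradox example of the text: a = 1.0, b = 0.8, c = 0.5
print(np.round(pi_optimal(1.0, 0.8, 0.5), 3))        # -> approximately [0.29, 0.348, 0.362]

# distribution of winners, Eq. (4): f(x) = 3 * double integral of pi(x; b, c) over (0,1)^2
grid = np.linspace(0.005, 0.995, 100)                # simple midpoint grid for b and c
for x in (0.25, 0.5, 0.75, 1.0):
    f_x = 3 * np.mean([pi_optimal(x, b, c)[0] for b in grid for c in grid])
    print("f(%.2f) ~ %.3f" % (x, f_x))
```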

FIGURE 1. In the parameter space (b,c) with c < b < a = 1, we indicate by black (resp. dark gray, light gray) the regions in which player A (resp. B, C) has the largest probability of winning the truel in the case of random selection of the shooting player and the use of the optimal strategy, as given by Eq. (2).

We consider now a variation of the competition in which the winner of one game keeps on playing against two other randomly chosen players. The resulting distribution of winners, f̃(x), can be computed as the steady-state solution of the recursion relation:

f̃(x,t+1) = ∫_0^1 da ∫_0^1 db ∫_0^1 dc [π(a;b,c)δ(x-a) + π(b;a,c)δ(x-b) + π(c;a,b)δ(x-c)] f̃(a,t)   (5)

or, in the steady state,

f̃(x) = (1/3) f(x) f̃(x) + 2 ∫_0^1 db ∫_0^1 dc π(x;b,c) f̃(b).   (6)

In the case of using the probabilities of Eq. (1) the distribution of winners is¹ f̃(x) = 2x. For players adopting the equilibrium-point strategy, Eq. (2), the resulting expression for f(x) is too cumbersome to be reproduced here, but the result has been plotted in Fig. 3. Notice that, despite the paradoxical result mentioned before, the distribution of winners still has its maximum at x = 1, indicating that the players with the best marksmanship are nevertheless the ones who win most often. In the same figure we have also plotted the distribution f̃(x) of the competition in which the winner of a game keeps on playing; in this case, the integral relation Eq. (6) has been solved numerically.

¹ The result is more general: if π(a;b,c) = G(a)/[G(a)+G(b)+G(c)] for an arbitrary function G(x), the solution is f̃(x) = G(x)/∫_0^1 G(y) dy.
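To make Eq. (6) concrete, here is a small fixed-point iteration on a grid (our own sketch, with names of our choosing), using the random-strategy probabilities of Eq. (1), for which the stationary solution is known to be f̃(x) = 2x:

```python
import numpy as np

n = 100
x = (np.arange(n) + 0.5) / n                      # midpoint grid on (0, 1)
w = 1.0 / n                                       # quadrature weight

def pi(a, b, c):                                  # Eq. (1): pi(a; b, c) = a / (a + b + c)
    return a / (a + b + c)

X, A, C = np.meshgrid(x, x, x, indexing="ij")
g = (pi(X, A, C) * w * w).sum(axis=(1, 2))        # g(x)    = integral of pi(x; b, c) over b, c
M = (pi(X, A, C) * w).sum(axis=2)                 # M[x, a] = integral of pi(x; a, c) over c

f = np.ones(n)                                    # start from the uniform density
for _ in range(200):
    f = f * g + 2.0 * M @ (f * w)                 # one sweep of the recursion, cf. Eq. (6)
    f /= (f * w).sum()                            # keep the density normalised

print(np.max(np.abs(f - 2 * x)))                  # close to zero, up to discretisation error
```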

FIGURE 2. Distribution function f(x) for the winners of truels of randomly chosen triplets (solid line) in the case of players using random strategies, Eq. (1); distribution f̃(x) of winners in the case where the winner of a truel remains in the competition (dashed line).

FIGURE 3. Similar to Fig. 2 in the case of the competition where players use the rational strategy of the equilibrium point given by Eq. (2).

SEQUENTIAL FIRING

In this version of the truel there is an established order of firing. The players shoot in increasing order of their marksmanship, i.e. if a > b > c the first player to shoot is player C, followed by player B, and the last to shoot is player A. The sequence repeats until only one player remains. Again, we have left for the appendix the details of the calculation of the winning probabilities. Our analysis of the optimal strategies reproduces that obtained by the detailed study of Kilgour [5]. The result is that there are two equilibrium points, depending on the sign of the function g(a,b,c) = a^2(1-b)^2(1-c) - b^2 c - ab(1-bc): if g(a,b,c) > 0 the equilibrium point is the strongest-opponent strategy P_AB = P_BA = P_CA = 1, while for g(a,b,c) < 0 it turns out that the equilibrium-point strategy is P_AB = P_BA = P_C0 = 1, where the worst player C is better off shooting into the air and hoping that the second-best player B succeeds in eliminating the best player A from the game.

FIGURE 4. Same as Fig. 1 in the case that players play sequentially in increasing order of their marksmanship.

The winning probabilities for this case, assuming a > b > c, are:

π(a;b,c) = (1-c)(1-b)a^2 / {[c(1-a)+a][b(1-a)+a]},
π(b;a,c) = (1-c)b^2 / {[c(1-b)+b][b(1-a)+a]},
π(c;a,b) = c[bc + a[b(2 + b(-1+c) - 3c) + c]] / {[c+a(1-c)][b+c(1-b)][a+b(1-a)]},   (7)

if g(a,b,c) < 0 (the case in which C shoots into the air), and

π(a;b,c) = a^2(1-b)(1-c)^2 / {[a+(1-a)c][a+b(1-a)+c(1-a)(1-b)]},
π(b;a,c) = b[b(1-c)^2 + c] / {[b+(1-b)c][a+b(1-a)+c(1-a)(1-b)]},
π(c;a,b) = { ac(1-b)(1-c)/[a+c(1-a)] + c[b+c(1-2b)]/[b+c(1-b)] } / [a+b(1-a)+c(1-a)(1-b)],   (8)

if g(a,b,c) > 0 (the strongest-opponent case). Again, as in the case of random firing, the paradoxical result appears that the player with the smallest marksmanship might have the largest probability of winning the game. In figure 4 we summarize the results, indicating the regions in parameter space (b,c) (with a = 1) where each player has the highest probability of winning. Notice that the best player A has a much smaller winning region than in the case of random firing. In figure 5 we plot the distributions of winners f(x) and f̃(x) in a competition as defined in the previous section.
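For concreteness, a small Monte Carlo sketch of the sequential truel (our own code, with illustrative names) lets one see the fire-into-the-air effect directly; here we use the example marksmanships quoted earlier in the text, a = 1.0, b = 0.8, c = 0.5, for which C fares noticeably better by passing:

```python
import random

def sequential_truel(a, b, c, c_passes, rng=random):
    """One sequential truel with firing order C, B, A (weakest first). A and B always
    aim at their strongest surviving opponent; while all three are alive, C either
    aims at A (c_passes=False) or fires into the air (c_passes=True)."""
    marks = {"A": a, "B": b, "C": c}
    alive = dict(marks)
    while len(alive) > 1:
        for shooter in ("C", "B", "A"):
            if len(alive) == 1:
                break
            if shooter not in alive:
                continue
            if shooter == "C" and c_passes and len(alive) == 3:
                continue                                     # C deliberately misses
            opponents = [p for p in alive if p != shooter]
            target = max(opponents, key=marks.get)           # strongest surviving opponent
            if rng.random() < marks[shooter]:
                del alive[target]
    return next(iter(alive))

def win_prob(player, a, b, c, c_passes, n=100_000):
    return sum(sequential_truel(a, b, c, c_passes) == player for _ in range(n)) / n

a, b, c = 1.0, 0.8, 0.5
print("C wins when C aims at A:         ", win_prob("C", a, b, c, c_passes=False))
print("C wins when C fires into the air:", win_prob("C", a, b, c, c_passes=True))
```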

FIGURE 5. Same as Fig. 2 in the case that players play sequentially in increasing order of their marksmanship. Notice that now both distributions of winners present maxima for x < 1, indicating that the a priori best players do not win the game in the majority of the cases.

Notice that now the distribution of winners f(x) has a maximum at x ≈ 0.57, indicating that the players with the best marksmanship do not win in the majority of cases.

CONVINCING OPINION

We reinterpret the truel as a game in which three people holding different opinions, A, B and C, on a topic aim to convince each other in a series of one-to-one discussions. The marksmanships a (resp. b, c) are now interpreted as the probabilities that a player holding opinion A (resp. B or C) has of convincing another player to adopt this opinion. The main difference with the previous sections is that now there are always three players present in the game, and the different states of the Markov chain are ABC, AAB, ABB, AAC, ACC, BBC, BCC, AAA, BBB and CCC. The analysis of the transition probabilities is left for the appendix. We consider only the random case, in which the person who tries to convince another one is chosen randomly amongst the three players. The equilibrium point corresponds to the best-opponent strategy, the set of probabilities in which each player tries to convince the opponent with the highest marksmanship. The probabilities that the final consensus opinion is A, B or C, respectively, assuming a > b > c, are given by

π(a;b,c) = a^2 [2cb^2 + a((a+b)^2 + 2(a+2b)c)] / [(a+b)^2 (a+c)^2 (a+b+c)],
π(b;a,c) = b^2 (b+3c) / [(b+c)^2 (a+b+c)],
π(c;a,b) = c^2 [c^3 + 3(a+b)c^2 + a(a+8b)c + ab(3a+b)] / [(a+c)^2 (b+c)^2 (a+b+c)].   (9)
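The consensus probabilities of Eq. (9) can be cross-checked against a direct simulation of the convincing-opinion dynamics; the following sketch (our code, with names of our choosing) estimates them by Monte Carlo and evaluates the closed forms at an arbitrary test point a = 0.9, b = 0.6, c = 0.3. The two printed dictionaries should agree up to Monte Carlo error.

```python
import random

def opinion_game(a, b, c, rng=random):
    """One run of the convincing-opinion model with the best-opponent strategy:
    a random player is chosen to argue, and he addresses a holder of the opinion
    with the highest convincing power among his opponents."""
    power = {"A": a, "B": b, "C": c}
    opinions = ["A", "B", "C"]                               # current opinions of the three players
    while len(set(opinions)) > 1:
        i = rng.randrange(3)                                 # random choice of the convincer
        mine = opinions[i]
        other_ops = {op for op in opinions if op != mine}
        target_op = max(other_ops, key=power.get)            # best-opponent strategy
        j = rng.choice([k for k, op in enumerate(opinions) if op == target_op])
        if rng.random() < power[mine]:
            opinions[j] = mine                               # the target adopts the convincer's opinion
    return opinions[0]

def eq9(a, b, c):
    """Closed forms of Eq. (9), assuming a > b > c."""
    s = a + b + c
    pa = a**2 * (2*c*b**2 + a*((a+b)**2 + 2*(a+2*b)*c)) / ((a+b)**2 * (a+c)**2 * s)
    pb = b**2 * (b + 3*c) / ((b+c)**2 * s)
    pc = c**2 * (c**3 + 3*(a+b)*c**2 + a*(a+8*b)*c + a*b*(3*a+b)) / ((a+c)**2 * (b+c)**2 * s)
    return {"A": pa, "B": pb, "C": pc}

a, b, c = 0.9, 0.6, 0.3
n = 100_000
counts = {"A": 0, "B": 0, "C": 0}
for _ in range(n):
    counts[opinion_game(a, b, c)] += 1
print({k: v / n for k, v in counts.items()})
print(eq9(a, b, c))
```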

FIGURE 6. Same as Fig. 1 for the convincing-opinion model.

FIGURE 7. Same as Fig. 2 for the convincing-opinion model.

As shown in Fig. 6, there is still a set of parameter values (a,b,c) for which opinion C has the highest winning probability, although it is smaller than in the versions considered in the previous sections. Similarly to the other versions, we plot in figure 7 the distribution of winning opinions, f(x). Notice that, as in the random-firing case, it attains its maximum at x = 1, showing that the most convincing players win the game most often. We have also plotted in the same figure the distribution f̃(x) which results when the winner of a truel is kept on to discuss with two randomly chosen players in the next round.

CONCLUSIONS

As discussed in the review of reference [1], truels are of interest in many areas of the social and biological sciences. In this work, we have presented a detailed analysis of truels using the methods of Markov chain theory. We are able to reproduce, in a language which is more familiar to the physics community, most of the results of the alternative analysis by Kilgour [5]. Besides computing the optimal rational strategy, we have focused on computing the distribution of winners in a truel competition. We have shown that in the random case the distribution of winners still has its maximum at the highest possible marksmanship, x = 1, despite the fact that sometimes players with a lower marksmanship have a higher probability of winning the game. In the sequential-firing case the paradox is more pronounced, since even the distribution of winners has its maximum at x < 1. It would be interesting to determine mechanisms by which players could, in an evolutionary scheme, adapt themselves to the optimal values.

APPENDIX: CALCULATION OF THE PROBABILITIES

Random firing

In this game there are seven possible states according to the remaining players. These are labeled 0, 1, ..., 6. There are transitions between those states, as shown in the diagram in Fig. 8, where p_{ij} denotes the transition probability from state i to state j (the self-transition probability p_{ii} is denoted by r_i).

State | Remaining players
0 | ABC
1 | AB
2 | AC
3 | BC
4 | A
5 | B
6 | C

FIGURE 8. Table with the description of all the possible states for the random-firing game, and diagram representing the allowed transitions between the states shown in the table.

From Markov chain theory [7] we can evaluate the probability u_i^j that, starting from

state i, we eventually end up in state j after a sufficiently large number of steps. In particular, if we start from state 0 (with the three players active), the nature of the game is such that the only non-vanishing probabilities are u_0^4, u_0^5 and u_0^6, corresponding to the winning of the game by players A, B and C, respectively. The relevant set of equations is²:

u_0^4 = p_{01} u_1^4 + p_{02} u_2^4 + p_{03} u_3^4 + r_0 u_0^4,   u_0^5 = p_{01} u_1^5 + p_{02} u_2^5 + p_{03} u_3^5 + r_0 u_0^5,
u_1^4 = p_{14} u_4^4 + r_1 u_1^4,   u_1^5 = p_{15} u_5^5 + r_1 u_1^5,
u_2^4 = p_{24} u_4^4 + r_2 u_2^4,   u_2^5 = r_2 u_2^5,
u_3^4 = r_3 u_3^4,   u_3^5 = r_3 u_3^5 + p_{35} u_5^5.

² There is no need to write down the equations for u_0^6, since it suffices to notice that u_0^4 + u_0^5 + u_0^6 = 1.

Solving for u_0^4, u_0^5 and u_0^6 we obtain:

u_0^4 = p_{01} p_{14} / [(1-r_0)(1-r_1)] + p_{02} p_{24} / [(1-r_0)(1-r_2)],
u_0^5 = p_{01} p_{15} / [(1-r_0)(1-r_1)] + p_{03} p_{35} / [(1-r_0)(1-r_3)],   (10)
u_0^6 = p_{02} p_{26} / [(1-r_0)(1-r_2)] + p_{03} p_{36} / [(1-r_0)(1-r_3)].

We can now derive the expressions for the transition probabilities p_{ij}. Remember that we denote by a the probability that player A eliminates from the game the player he has aimed at (and similarly for b and c), and by P_αβ (α = A, B, C and β = A, B, C, 0) the probability of player α choosing player β as a target (or shooting into the air if β = 0) when it is his turn to play (a situation that only appears when the three players are still active). We then have:

r_0 = 1 - (1/3)[a(1-P_A0) + b(1-P_B0) + c(1-P_C0)],
p_{01} = (1/3)(a P_AC + b P_BC),   p_{02} = (1/3)(a P_AB + c P_CB),   p_{03} = (1/3)(b P_BA + c P_CA),
p_{14} = p_{24} = a/2,   p_{15} = p_{35} = b/2,   p_{26} = p_{36} = c/2,
r_1 = 1 - (a+b)/2,   r_2 = 1 - (a+c)/2,   r_3 = 1 - (b+c)/2.   (11)
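Equations (10)-(11) can also be evaluated without algebra by assembling the full transition matrix and solving the standard absorbing-chain linear system U = (I - Q)^{-1} R. A short sketch (our own code, with illustrative names) is:

```python
import numpy as np

def winning_probs(a, b, c, P):
    """Absorption probabilities of the 7-state random-firing chain (states
    0=ABC, 1=AB, 2=AC, 3=BC, 4=A, 5=B, 6=C), built from Eq. (11).
    P is a dict of targeting probabilities, e.g. P['AB'] = P_AB."""
    T = np.zeros((7, 7))
    T[0, 1] = (a*P["AC"] + b*P["BC"]) / 3
    T[0, 2] = (a*P["AB"] + c*P["CB"]) / 3
    T[0, 3] = (b*P["BA"] + c*P["CA"]) / 3
    T[0, 0] = 1 - T[0, 1] - T[0, 2] - T[0, 3]      # equals r_0 of Eq. (11)
    T[1, 4], T[1, 5], T[1, 1] = a/2, b/2, 1 - (a+b)/2
    T[2, 4], T[2, 6], T[2, 2] = a/2, c/2, 1 - (a+c)/2
    T[3, 5], T[3, 6], T[3, 3] = b/2, c/2, 1 - (b+c)/2
    for s in (4, 5, 6):
        T[s, s] = 1.0                               # absorbing states A, B, C
    Q, R = T[:4, :4], T[:4, 4:]                     # absorbing-chain decomposition
    U = np.linalg.solve(np.eye(4) - Q, R)           # U[i, j] = u_i^{j+4}
    return U[0]                                     # (u_0^4, u_0^5, u_0^6) = win probs of A, B, C

# strongest-opponent strategy: A and B aim at each other, C aims at A
P = {"AB": 1, "AC": 0, "BA": 1, "BC": 0, "CA": 1, "CB": 0}
print(np.round(winning_probs(1.0, 0.8, 0.5, P), 3))  # approx (0.290, 0.348, 0.362), cf. Eq. (2)
```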

Sequential firing

As in the random-firing case, we describe this game as a Markov chain, now composed of 12 different states, again with three absorbing states: 9, 10 and 11. In Fig. 9 we can see the corresponding diagram for this game, together with a table describing all possible states. Based on this diagram, we can write down the relevant set of equations for the probabilities u_i^j:

State | Remaining players (shooter)
0 | A B C (C shoots)
1 | A B C (B shoots)
2 | A B C (A shoots)
3 | B C (B shoots)
4 | A C (A shoots)
5 | B C (C shoots)
6 | A B (A shoots)
7 | A C (C shoots)
8 | A B (B shoots)
9 | C
10 | B
11 | A

FIGURE 9. Table: description of the different states of the game for the case of sequential firing; for each state the shooter is indicated in parentheses. Diagram: scheme representing all the allowed transitions between the states shown in the table, for a truel with sequential firing in the order C, B, A with a > b > c.

u_0^9 = p_{03} u_3^9 + p_{01} u_1^9 + p_{04} u_4^9,   u_0^{10} = p_{03} u_3^{10} + p_{01} u_1^{10},   u_0^{11} = p_{01} u_1^{11} + p_{04} u_4^{11},
u_1^{10} = p_{12} u_2^{10} + p_{15} u_5^{10} + p_{16} u_6^{10},   u_1^9 = p_{12} u_2^9 + p_{15} u_5^9,   u_1^{11} = p_{12} u_2^{11} + p_{16} u_6^{11},
u_2^{11} = p_{28} u_8^{11} + p_{27} u_7^{11} + p_{20} u_0^{11},   u_2^9 = p_{27} u_7^9 + p_{20} u_0^9,   u_2^{10} = p_{28} u_8^{10} + p_{20} u_0^{10},
u_3^9 = p_{35} u_5^9,   u_3^{10} = p_{35} u_5^{10} + p_{3,10},
u_4^9 = p_{47} u_7^9,   u_4^{11} = p_{47} u_7^{11} + p_{4,11},
u_5^9 = p_{53} u_3^9 + p_{59},   u_5^{10} = p_{53} u_3^{10},
u_6^{10} = p_{68} u_8^{10},   u_6^{11} = p_{68} u_8^{11} + p_{6,11},
u_7^9 = p_{74} u_4^9 + p_{79},   u_7^{11} = p_{74} u_4^{11},
u_8^{10} = p_{86} u_6^{10} + p_{8,10},   u_8^{11} = p_{86} u_6^{11}.   (12)

The general solutions for the probabilities u_0^9, u_0^{10} and u_0^{11} are given by

u_0^9 = { p_{59}(p_{03} p_{35} + p_{01} p_{15}) / (1 - p_{35} p_{53}) + p_{79}(p_{04} p_{47} + p_{01} p_{12} p_{27}) / (1 - p_{47} p_{74}) } / (1 - p_{01} p_{12} p_{20}),
u_0^{10} = { p_{3,10}(p_{03} + p_{01} p_{15} p_{53}) / (1 - p_{35} p_{53}) + p_{01} p_{8,10}(p_{16} p_{68} + p_{12} p_{28}) / (1 - p_{68} p_{86}) } / (1 - p_{01} p_{12} p_{20}),   (13)
u_0^{11} = { p_{4,11}(p_{04} + p_{01} p_{12} p_{27} p_{74}) / (1 - p_{47} p_{74}) + p_{01} p_{6,11}(p_{16} + p_{12} p_{28} p_{86}) / (1 - p_{68} p_{86}) } / (1 - p_{01} p_{12} p_{20}),

with transition probabilities given by

p_{01} = (1-c) + c P_C0,   p_{03} = c P_CA,   p_{04} = c P_CB,
p_{12} = (1-b) + b P_B0,   p_{15} = b P_BA,   p_{16} = b P_BC,
p_{20} = (1-a) + a P_A0,   p_{27} = a P_AB,   p_{28} = a P_AC,
p_{35} = p_{86} = 1-b,   p_{3,10} = p_{8,10} = b,
p_{47} = p_{68} = 1-a,   p_{4,11} = p_{6,11} = a,
p_{53} = p_{74} = 1-c,   p_{59} = p_{79} = c.
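The same absorbing-chain approach gives the probabilities of Eq. (13) numerically; the sketch below (our own code, parameter names ours) builds the 12-state chain of Fig. 9 from the transition probabilities above and compares C's prospects when he aims at A versus firing into the air:

```python
import numpy as np

def sequential_chain_probs(a, b, c, PAB=1, PAC=0, PBA=1, PBC=0, PCA=1, PCB=0):
    """Solve the 12-state sequential-firing chain (states as in Fig. 9, absorbing
    states 9 = C, 10 = B, 11 = A). Pass probabilities are the complements, e.g.
    P_C0 = 1 - PCA - PCB. Returns (u_0^9, u_0^10, u_0^11)."""
    PC0, PB0, PA0 = 1 - PCA - PCB, 1 - PBA - PBC, 1 - PAB - PAC
    T = np.zeros((12, 12))
    T[0, 1], T[0, 3], T[0, 4] = (1 - c) + c*PC0, c*PCA, c*PCB
    T[1, 2], T[1, 5], T[1, 6] = (1 - b) + b*PB0, b*PBA, b*PBC
    T[2, 0], T[2, 7], T[2, 8] = (1 - a) + a*PA0, a*PAB, a*PAC
    T[3, 5], T[3, 10] = 1 - b, b          # BC, B shoots
    T[5, 3], T[5, 9] = 1 - c, c           # BC, C shoots
    T[4, 7], T[4, 11] = 1 - a, a          # AC, A shoots
    T[7, 4], T[7, 9] = 1 - c, c           # AC, C shoots
    T[6, 8], T[6, 11] = 1 - a, a          # AB, A shoots
    T[8, 6], T[8, 10] = 1 - b, b          # AB, B shoots
    for s in (9, 10, 11):
        T[s, s] = 1.0
    Q, R = T[:9, :9], T[:9, 9:]
    U = np.linalg.solve(np.eye(9) - Q, R)
    return U[0]                           # probabilities that C, B, A win, starting from state 0

a, b, c = 1.0, 0.8, 0.5
print("C aims at A:        ", np.round(sequential_chain_probs(a, b, c), 3))
print("C fires into the air:", np.round(sequential_chain_probs(a, b, c, PCA=0), 3))
```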

Convincing opinion

For this model we show in Fig. 10 the diagram of all the allowed states and transitions, together with a table describing the possible states. The corresponding set of equations describing the convincing-opinion model, as derived from the diagram, is

u_0^1 = r_0 u_0^1 + p_{06} u_6^1 + p_{04} u_4^1 + p_{05} u_5^1 + p_{07} u_7^1,
u_0^2 = r_0 u_0^2 + p_{04} u_4^2 + p_{05} u_5^2 + p_{08} u_8^2 + p_{09} u_9^2,
u_0^3 = r_0 u_0^3 + p_{08} u_8^3 + p_{09} u_9^3 + p_{07} u_7^3 + p_{06} u_6^3,
u_4^1 = r_4 u_4^1 + p_{45} u_5^1 + p_{41},   u_4^2 = r_4 u_4^2 + p_{45} u_5^2,
u_5^1 = r_5 u_5^1 + p_{54} u_4^1,   u_5^2 = r_5 u_5^2 + p_{54} u_4^2 + p_{52},
u_6^1 = r_6 u_6^1 + p_{67} u_7^1 + p_{61},   u_6^3 = r_6 u_6^3 + p_{67} u_7^3,
u_7^1 = r_7 u_7^1 + p_{76} u_6^1,   u_7^3 = r_7 u_7^3 + p_{76} u_6^3 + p_{73},
u_8^2 = r_8 u_8^2 + p_{89} u_9^2 + p_{82},   u_8^3 = r_8 u_8^3 + p_{89} u_9^3,
u_9^2 = r_9 u_9^2 + p_{98} u_8^2,   u_9^3 = r_9 u_9^3 + p_{98} u_8^3 + p_{93},   (14)

and the general solution for the probabilities u_0^1, u_0^2 and u_0^3 is

u_0^1 = { p_{41}[p_{04}(1-r_5) + p_{05} p_{54}] / [(1-r_4)(1-r_5) - p_{45} p_{54}] + p_{61}[p_{06}(1-r_7) + p_{07} p_{76}] / [(1-r_6)(1-r_7) - p_{67} p_{76}] } / (1 - r_0),
u_0^2 = { p_{52}[p_{04} p_{45} + p_{05}(1-r_4)] / [(1-r_4)(1-r_5) - p_{45} p_{54}] + p_{82}[p_{08}(1-r_9) + p_{09} p_{98}] / [(1-r_8)(1-r_9) - p_{89} p_{98}] } / (1 - r_0),   (15)
u_0^3 = { p_{73}[p_{06} p_{67} + p_{07}(1-r_6)] / [(1-r_6)(1-r_7) - p_{67} p_{76}] + p_{93}[p_{09}(1-r_8) + p_{08} p_{89}] / [(1-r_8)(1-r_9) - p_{89} p_{98}] } / (1 - r_0),

State | Opinions
0 | A B C
1 | A A A
2 | B B B
3 | C C C
4 | A A B
5 | A B B
6 | A A C
7 | A C C
8 | B B C
9 | B C C

FIGURE 10. Table: description of the different states of the opinion model. Diagram: scheme representing the allowed transitions between the states.

where the transition probabilities are given by

p_{04} = (1/3) a P_AC,   p_{06} = (1/3) a P_AB,   p_{08} = (1/3) b P_BA,
p_{05} = (1/3) b P_BC,   p_{07} = (1/3) c P_CB,   p_{09} = (1/3) c P_CA,
p_{41} = p_{61} = (2/3) a,   p_{45} = p_{98} = (1/3) b,   p_{54} = p_{76} = (1/3) a,
p_{52} = p_{82} = (2/3) b,   p_{67} = p_{89} = (1/3) c,   p_{73} = p_{93} = (2/3) c,
r_0 = (1/3)[3 - a - b - c],
r_4 = (2/3)(1-a) + (1/3)(1-b),   r_5 = (1/3)(1-a) + (2/3)(1-b),
r_6 = (2/3)(1-a) + (1/3)(1-c),   r_7 = (1/3)(1-a) + (2/3)(1-c),
r_8 = (2/3)(1-b) + (1/3)(1-c),   r_9 = (1/3)(1-b) + (2/3)(1-c).   (16)

ACKNOWLEDGMENTS

We thank Cesáreo Hernández for bringing this problem to our attention. This work is supported by MCyT (Spain) and FEDER (EU) projects FIS24-573-C4-3 and FIS24-953; P.A. acknowledges support from the Govern Balear, Spain.

REFERENCES

1. Kilgour, D. M. and Brams, S. J., The truel. Mathematics Magazine, 1997, 70, 315.
2. Kinnaird, C., Encyclopedia of Puzzles and Pastimes, 1946, Citadel, Secaucus, NJ (USA).
3. Shubik, M., Game Theory in the Social Sciences, 1982, MIT Press, Cambridge, MA (USA).
4. Kilgour, D. M., The simultaneous truel. Int. Journal of Game Theory, 1972, 1, 229-242.
5. Kilgour, D. M., The sequential truel. Int. Journal of Game Theory, 1975, 4, 151-174.
6. Kilgour, D. M., Equilibrium points of infinite sequential truels. Int. Journal of Game Theory, 1977, 6, 167-180.
7. Karlin, S., A First Course in Stochastic Processes. Academic Press, New York, 1973.