ECE 541 Project Report: Modeling the Game of RISK Using Markov Chains
Stochastic Signals and Systems, Rutgers University, Fall 2014
Sijie Xiong, RUID:

Contents

1 The Game of RISK
2 Modeling RISK via Markov Chain
  2.1 State Space
  2.2 A Single Round of Rolling Dice
  2.3 Transition Probability Matrix
3 The Probability that the Attacker Wins
References

1 The Game of RISK

Tan [2] presented the detailed rules of the game of RISK. Here, the rules are slightly different. The basic idea is that the attacking country and the defending country initially have $a_0$ and $d_0$ armies, respectively, and whether they lose armies depends on the results of rolling dice. At each round, the attacker rolls $i = \min(a - 1, 3)$ dice, instead of $\min(a, 3)$ as in Tan [2], if it has $a$ armies remaining, and the defender rolls $j = \min(d, 2)$ dice if it has $d$ armies remaining. The highest and second highest rolls of the attacker and defender are compared sequentially. When the attacker's roll is strictly greater, the defender loses 1 army; otherwise the attacker loses 1 army. The battle ends when either side loses all of its armies; equivalently, the attacker wins if it has at least 1 army while the defender has 0 armies, and vice versa. Tan's Table 1, which gives an example of a battle, is reproduced with some modifications; it shows the attacker winning the battle at the 4th round. Since each outcome of rolling dice is random, the game can be modeled as a random process. In this report, the quantity of interest is the probability that the attacker wins.

Table 1: An example of a battle, reproduced from Tan [2] with modifications (columns: round number, and the number of armies, number of dice rolled, dice outcomes, and number of losses for the attacker and the defender).
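The rules above fully determine each round, so they are easy to cross-check by simulation. The following is a minimal Monte Carlo sketch, not part of the original report; the function names are illustrative, and it adopts the convention used later in the report that a state $(1, d)$ immediately becomes $(0, d)$, since an attacker with one army rolls no dice.

```python
import random

def roll_round(a, d):
    """One round under the modified rules: the attacker rolls
    min(a - 1, 3) dice, the defender rolls min(d, 2); the sorted
    rolls are compared pairwise, and ties favor the defender.
    Returns the updated state (a, d)."""
    i, j = min(a - 1, 3), min(d, 2)
    att = sorted((random.randint(1, 6) for _ in range(i)), reverse=True)
    dfn = sorted((random.randint(1, 6) for _ in range(j)), reverse=True)
    for x, y in zip(att, dfn):      # min(i, j) comparisons per round
        if x > y:
            d -= 1                  # attacker's roll strictly greater
        else:
            a -= 1                  # tie or lower: attacker loses an army
    return a, d

def attacker_wins(a, d):
    """Simulate one battle to completion; True if the defender is wiped out."""
    while a > 1 and d > 0:          # with a == 1 the attacker rolls no dice
        a, d = roll_round(a, d)     # and, per the convention above, loses
    return d == 0

# Example: estimate P(attacker wins) from the initial state (5, 5).
trials = 100_000
print(sum(attacker_wins(5, 5) for _ in range(trials)) / trials)
```

The estimate from this sketch can later be compared against the exact absorbing-chain computation of Section 3.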
2 Modeling RISK via Markov Chain

2.1 State Space

Let $X_n = (a_n, d_n)$ denote the state of the system at the $n$-th round:

$$X_n = (a_n, d_n), \quad 0 \le a_n \le a_0, \; 0 \le d_n \le d_0, \; n \ge 0, \qquad (1)$$

where $a_n$ and $d_n$ are the remaining armies of the attacker and the defender, respectively. The initial state of the system is $X_0 = (a_0, d_0)$. The probability of the system changing from one state at the $n$-th round to another state at the $(n+1)$-th round depends only on $X_n$:

$$P\big[X_{n+1} = (a_{n+1}, d_{n+1}) \mid X_n = (a_n, d_n), \ldots, X_0 = (a_0, d_0)\big] = P\big[X_{n+1} = (a_{n+1}, d_{n+1}) \mid X_n = (a_n, d_n)\big]. \qquad (2)$$

Therefore, $\{X_n, n = 0, 1, \ldots\}$ can be characterized as a Markov chain. Obviously, $(0, 0)$ is not a valid state. The total number of armies that the attacker and the defender lose in each round is either 1 or 2. More specifically,

$$\Delta a + \Delta d = \min(i, j) \in \{1, 2\}, \quad 0 \le \Delta a \le 2, \; 0 \le \Delta d \le 2, \; 0 \le i \le 3, \; 0 \le j \le 2, \qquad (3)$$

where $\Delta a$ denotes the number of armies the attacker loses and $\Delta d$ denotes the number of armies the defender loses.

The possible states can be separated into two groups. Intuitively, the states in which either side has lost all of its armies indicate the end of a battle, and Tan [2] referred to these as absorbing states. We can order these states and construct a vector of absorbing states, which has $(a_0 + d_0)$ entries,

$$A = \big[(0, 1), (0, 2), \ldots, (0, d_0), (1, 0), (2, 0), \ldots, (a_0, 0)\big]^T. \qquad (4)$$

On the other hand, the states in which both the attacker and the defender have at least 1 army are transient, since the system loses at least 1 army every round and changes to another state according to equation (3). We can likewise obtain a vector of transient states, which has $(a_0 d_0)$ entries,

$$T = \big[(1, 1), (1, 2), \ldots, (1, d_0), (2, 1), (2, 2), \ldots, (2, d_0), \ldots, (a_0, 1), (a_0, 2), \ldots, (a_0, d_0)\big]^T. \qquad (5)$$

The process of the game RISK can then be characterized as follows: the system starts from the initial state $(a_0, d_0)$, which is itself a transient state, then jumps among the transient states in $T$ until it reaches an absorbing state in $A$.

2.2 A Single Round of Rolling Dice

We are only concerned with the probability that the attacker wins, i.e., that the defender loses all of its armies while the attacker still has at least 1 army. These states correspond to the $(1 + d_0)$-th through the $(a_0 + d_0)$-th entries of the absorbing states vector $A$, namely $(1, 0), (2, 0), \ldots, (a_0, 0)$. Let $P^{ij}_d$ denote the probability that the defender loses $d$ armies when the attacker and the defender roll $i$ and $j$ dice, respectively.
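Each $P^{ij}_d$ can be obtained by brute-force enumeration of the equally likely joint dice outcomes. Below is a small sketch along those lines (illustrative code, not the report's; `loss_probs` is a hypothetical name), whose output reproduces the 14 values in Table 2:

```python
from itertools import product

def loss_probs(i, j):
    """Exact distribution of the defender's losses d when the attacker
    rolls i dice and the defender rolls j, found by enumerating all
    6**(i + j) equally likely joint outcomes."""
    counts = {}
    for rolls in product(range(1, 7), repeat=i + j):
        att = sorted(rolls[:i], reverse=True)
        dfn = sorted(rolls[i:], reverse=True)
        d = sum(1 for x, y in zip(att, dfn) if x > y)   # defender's losses
        counts[d] = counts.get(d, 0) + 1
    total = 6 ** (i + j)
    return {d: c / total for d, c in sorted(counts.items())}

# All 14 distinct values of P^{ij}_d:
for i in (1, 2, 3):
    for j in (1, 2):
        print(i, j, loss_probs(i, j))
```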
According to Osborne [3], there are 14 distinct values of $P^{ij}_d$ in total, computed from the marginal and joint probability distributions of rolling 2 or 3 dice. Osborne's Table 2 is presented here as a reference. Note that the probabilities under each pair of $(i, j)$ sum to 1.

Table 2: The 14 distinct values of $P^{ij}_d$

  i   j   d   $P^{ij}_d$      Value
  1   1   0   $P^{11}_0$      21/36    (0.5833)
  1   1   1   $P^{11}_1$      15/36    (0.4167)
  1   2   0   $P^{12}_0$      161/216  (0.7454)
  1   2   1   $P^{12}_1$      55/216   (0.2546)
  2   1   0   $P^{21}_0$      91/216   (0.4213)
  2   1   1   $P^{21}_1$      125/216  (0.5787)
  2   2   0   $P^{22}_0$      581/1296 (0.4483)
  2   2   1   $P^{22}_1$      420/1296 (0.3241)
  2   2   2   $P^{22}_2$      295/1296 (0.2276)
  3   1   0   $P^{31}_0$      441/1296 (0.3403)
  3   1   1   $P^{31}_1$      855/1296 (0.6597)
  3   2   0   $P^{32}_0$      2275/7776 (0.2926)
  3   2   1   $P^{32}_1$      2611/7776 (0.3358)
  3   2   2   $P^{32}_2$      2890/7776 (0.3717)

2.3 Transition Probability Matrix

Based on the state vectors $A$ and $T$ from Section 2.1 and on Table 2, we can construct the transition probability matrix of the system,

$$P = \begin{bmatrix} Q & R \\ 0 & I \end{bmatrix}, \qquad (6)$$

where $Q \in \mathbb{R}^{(a_0 d_0) \times (a_0 d_0)}$ contains the probabilities of the system going from one transient state to another transient state, and $R \in \mathbb{R}^{(a_0 d_0) \times (a_0 + d_0)}$ contains the probabilities of the system going from a transient state to an absorbing state. By the discussion in Section 2.1, the system changes state at every round until it reaches, and then stays in, an absorbing state; therefore the diagonal entries of $Q$ are all 0. Since states $(1, 1), (1, 2), \ldots, (1, d_0)$ go to $(0, 1), (0, 2), \ldots, (0, d_0)$, respectively, with probability 1, the first $d_0$ diagonal entries of $R$ are all 1. The identity matrix $I \in \mathbb{R}^{(a_0 + d_0) \times (a_0 + d_0)}$ expresses the fact that once the system enters an absorbing state, it stays in that state with probability 1. The remaining nonzero, less-than-one entries of $Q$ and $R$ are drawn from Table 2, and each row of the transition probability matrix $P$ sums to 1.
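The construction of $Q$ and $R$ described above can be sketched as follows, reusing the `loss_probs` enumeration from Section 2.2 and the state orderings of equations (4) and (5). This is again illustrative code under the report's conventions, assuming NumPy is available; `build_QR` is a hypothetical name:

```python
import numpy as np

def build_QR(a0, d0):
    """Assemble Q (transient -> transient) and R (transient -> absorbing)
    in the orderings of equations (5) and (4), respectively."""
    transient = [(a, d) for a in range(1, a0 + 1) for d in range(1, d0 + 1)]
    absorbing = [(0, d) for d in range(1, d0 + 1)] + \
                [(a, 0) for a in range(1, a0 + 1)]
    t_idx = {s: k for k, s in enumerate(transient)}
    a_idx = {s: k for k, s in enumerate(absorbing)}
    Q = np.zeros((len(transient), len(transient)))
    R = np.zeros((len(transient), len(absorbing)))
    for k, (a, d) in enumerate(transient):
        if a == 1:                       # attacker rolls no dice:
            R[k, a_idx[(0, d)]] = 1.0    # (1, d) -> (0, d) w.p. 1
            continue
        i, j = min(a - 1, 3), min(d, 2)
        for d_loss, p in loss_probs(i, j).items():
            a_loss = min(i, j) - d_loss          # equation (3)
            nxt = (a - a_loss, d - d_loss)
            if 0 in nxt:
                R[k, a_idx[nxt]] += p            # absorbed
            else:
                Q[k, t_idx[nxt]] += p            # still transient
    return Q, R, transient, absorbing
```

With this ordering, the first $d_0$ diagonal entries of `R` come out as 1 and the diagonal of `Q` as 0, matching the observations above.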
3 The Probability that the Attacker Wins

Yates [1] showed that the $n$-step transition matrix $P^n$ completely describes the evolution of probabilities in a Markov chain. Similarly, let the $(a_0 d_0) \times (a_0 + d_0)$ matrix $F_n$ denote the transition probability matrix of the system's final visit to an absorbing state at the $n$-th round from its transient state at the $(n-1)$-th round. We have

$$F_n = Q^{n-1} R. \qquad (7)$$

This means that the system must be in transient states during the first $(n - 1)$ rounds, and the $n$-th transition must be from a transient state to an absorbing state. Following Osborne [3], since the system eventually reaches an absorbing state after enough rounds, the transition probability matrix from the initial state to the final absorbing state is

$$F = \sum_{n=1}^{\infty} F_n = \sum_{n=1}^{\infty} Q^{n-1} R = (I - Q)^{-1} R. \qquad (8)$$

It follows that if the attacker wins, the system goes from the initial state $(a_0, d_0)$ to one of the $a_0$ absorbing states $(1, 0), (2, 0), \ldots, (a_0, 0)$. These transitions correspond to the last $a_0$ columns in the $(a_0 d_0)$-th (last) row of $F$. Let $A$ denote the event that the attacker wins the battle; then

$$P\big(A \mid X_0 = (a_0, d_0)\big) = \sum_{j = d_0 + 1}^{d_0 + a_0} F(a_0 d_0, j). \qquad (9)$$

Table 3: Probability that the attacker wins under different initial states $(a_0, d_0)$.

Table 3 gives some numerical results of $P(A \mid X_0 = (a_0, d_0))$, and Figure 1 shows the relationship between $P(A)$ and the initial state $X_0 = (a_0, d_0)$ in more detail. Three conclusions can be drawn: a) with the initial number of armies of one side fixed ($a_0$ or $d_0$ fixed), the probability that the other side wins increases as that side's initial number of armies increases; b) if both sides have an equal number of armies ($a_0 = d_0$, with at least 10 each), the chance that the attacker wins increases as $a_0$ increases and exceeds 50%; c) if the attacker outnumbers the defender by a fixed margin ($a_0 = d_0 + c$ for a positive constant $c$), the probability that the attacker wins first decreases and then increases as $d_0$ increases.
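As a sketch of equations (8) and (9), the win probability can be computed by solving the linear system $(I - Q)F = R$ rather than inverting $(I - Q)$ explicitly. This builds on the hypothetical `build_QR` sketch above:

```python
def attacker_win_prob(a0, d0):
    """P(attacker wins | X0 = (a0, d0)): sum the last a0 columns of the
    row of F = (I - Q)^{-1} R corresponding to state (a0, d0)."""
    Q, R, transient, absorbing = build_QR(a0, d0)
    F = np.linalg.solve(np.eye(len(transient)) - Q, R)   # equation (8)
    row = transient.index((a0, d0))      # the (a0 d0)-th (last) row of F
    return F[row, d0:].sum()             # columns for (1, 0), ..., (a0, 0)

# Should agree, up to sampling error, with the Monte Carlo sketch in Section 1:
print(attacker_win_prob(5, 5))
```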
Figure 1: The relationship between $P(A)$ and the initial states $(a_0, d_0)$.

References

[1] Yates, Roy D., and David J. Goodman. Probability and Stochastic Processes. 2003.
[2] Tan, Baris. "Markov Chains and the RISK Board Game." Mathematics Magazine (1997).
[3] Osborne, Jason A. "Markov Chains for the RISK Board Game Revisited." Mathematics Magazine (2003).