Problem Set 9 Solutions. Due: April 27, 2005


1. (a) First note that spam messages, invitations, and other e-mail are all independent Poisson processes, with rates pλ, qλ, and (1 − p − q)λ, respectively. The event that the time T at which you decide about your popularity is greater than some value t is the same as the event of having fewer than 10 messages in each of the IMPORTANT and OTHER folders at time t. Since arrivals into the IMPORTANT and OTHER folders are independent, the probability that neither has had 10 arrivals by time t is the product of the probabilities of each receiving fewer than 10 by that time. Since for each folder the number of messages received in [0, t] is a Poisson random variable, we have

P(T > t) = \left( \sum_{i=0}^{9} \frac{(\lambda q t)^i e^{-\lambda q t}}{i!} \right) \left( \sum_{j=0}^{9} \frac{(\lambda (1-p-q) t)^j e^{-\lambda (1-p-q) t}}{j!} \right)
         = e^{-\lambda (1-p) t} \sum_{i=0}^{9} \sum_{j=0}^{9} \frac{(\lambda q t)^i (\lambda (1-p-q) t)^j}{i! \, j!},

and the CDF is F_T(t) = 1 − P(T > t).

(b) At least 10 of the first 19 non-spam messages must be party invitations. Note that if I receive my 10th invitation by the time I get the 19th non-spam message, then, since there will be fewer than 10 messages collected in the OTHER folder by that time, I will conclude that I am very popular. Conversely, if I had fewer than 10 items in the INVITATION folder by the time I have received 19 non-spam messages, then I must have at least 10 in the OTHER folder and would conclude that I am not very popular.

(c) Having shown in part (b) that getting at least 10 invitations among the first 19 non-spam messages is necessary and sufficient to conclude "very popular" overall, we are looking for the probability of that event. The number of invitations in the first 19 non-spam messages is a binomial random variable, since each non-spam message is independently an invitation with probability q/(1 − p). So, the probability of concluding "very popular" is

P(\text{very popular}) = \sum_{i=10}^{19} \binom{19}{i} \left( \frac{q}{1-p} \right)^i \left( \frac{1-p-q}{1-p} \right)^{19-i}.

Here is an alternative solution: the probability that I conclude that I am very popular upon receiving the kth non-spam message, 10 ≤ k ≤ 19, is the probability that exactly 9 out of the first k − 1 are invitations and the kth is also an invitation, i.e.,

\binom{k-1}{9} \left( \frac{q}{1-p} \right)^{10} \left( \frac{1-p-q}{1-p} \right)^{k-10}.

Summing over k,

P(\text{very popular}) = \sum_{k=10}^{19} \binom{k-1}{9} \left( \frac{q}{1-p} \right)^{10} \left( \frac{1-p-q}{1-p} \right)^{k-10}.

It can be shown that this is equal to the first answer found above.
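The formulas above can be cross-checked numerically. This is a minimal sketch; the values p = 0.5, q = 0.3, and λ = 1 are illustrative only and do not come from the problem statement.

```python
from math import comb, exp, factorial

def p_T_greater(t, lam, p, q):
    # Part (a): P(T > t) = P(both folders hold fewer than 10 messages at time t).
    imp = sum(exp(-lam * q * t) * (lam * q * t)**i / factorial(i)
              for i in range(10))
    oth = sum(exp(-lam * (1 - p - q) * t) * (lam * (1 - p - q) * t)**j / factorial(j)
              for j in range(10))
    return imp * oth

def p_popular_binomial(p, q):
    # Part (c), first form: at least 10 invitations among 19 non-spam messages.
    r = q / (1 - p)
    return sum(comb(19, i) * r**i * (1 - r)**(19 - i) for i in range(10, 20))

def p_popular_negbinomial(p, q):
    # Part (c), alternative form: the 10th invitation arrives at or before the
    # 19th non-spam message (exactly 9 among the first k - 1, then one more).
    r = q / (1 - p)
    return sum(comb(k - 1, 9) * r**10 * (1 - r)**(k - 10) for k in range(10, 20))

p, q, lam = 0.5, 0.3, 1.0   # illustrative values only
print(p_T_greater(5.0, lam, p, q) > p_T_greater(50.0, lam, p, q))   # survival fn decreases
print(abs(p_popular_binomial(p, q) - p_popular_negbinomial(p, q)) < 1e-12)
```

The agreement of the two part-(c) sums reflects the standard identity between the binomial tail and the negative-binomial CDF.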

2. (a) We have modeled the fouls as a Poisson process with rate λ = 1/8 (fouls per minute). This implies that, neglecting fouling out, the number of fouls in the interval [0, t] is a Poisson random variable with parameter λt = t/8. All we have to do is adjust for the fact that more than 6 fouls cannot be committed, by assigning the probabilities of 7, 8, ... fouls to 6. We obtain

p_{X_t}(k) = \frac{e^{-t/8} (t/8)^k}{k!},  for k = 0, 1, ..., 5,
p_{X_t}(6) = \sum_{l=6}^{\infty} \frac{e^{-t/8} (t/8)^l}{l!} = 1 - \sum_{l=0}^{5} \frac{e^{-t/8} (t/8)^l}{l!}.

In particular,

p_{X_{48}}(6) = 1 - \sum_{l=0}^{5} \frac{e^{-6} 6^l}{l!}.

(b) Wallace has fouled out if at the end of the game he has six fouls. Thus the probability of fouling out is

p_{X_{48}}(6) = 1 - e^{-6} \left( 1 + 6 + \frac{6^2}{2} + \frac{6^3}{6} + \frac{6^4}{24} + \frac{6^5}{120} \right) \approx 0.5543.

(c) When Wallace does not foul out, his team wins by (0.25 points/minute) × (48 minutes) = 12 points. When Wallace fouls out at time t, the score differential in favor of Wallace's team is

0.25t (Wallace playing) − 0.5(48 − t) (Wallace not playing) − 6 (Wallace's three technical fouls) = 0.75t − 30.

Thus Wallace's team wins whenever 0.75t − 30 > 0, or t > 40. In terms of our model for Wallace's fouls, we are interested in the sixth arrival time. Without restricting the game length to 48 minutes for the moment, this sixth arrival time has the Erlang PDF of order 6 and parameter λ = 1/8. Denoting the sixth arrival time by U, we have

f_U(u) = \frac{\lambda^6 u^5 e^{-\lambda u}}{5!},  u ≥ 0.

The event of interest is {U > 40}, because this captures both Wallace not fouling out and Wallace fouling out late enough in the game that his team wins. Then

P(\{\text{Wallace's team wins}\}) = P(U > 40) = \int_{40}^{\infty} \frac{\lambda^6 u^5 e^{-\lambda u}}{5!} \, du = \frac{1}{8^6 \cdot 5!} \int_{40}^{\infty} u^5 e^{-u/8} \, du = \frac{1097}{12} e^{-5} \approx 0.616,

where the integral can be computed by repeated integration by parts.
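The numerical values in parts (b) and (c) can be verified with a short computation. This sketch relies only on the standard equivalence between the Erlang tail P(U > t) and the event of fewer than 6 Poisson arrivals in [0, t]:

```python
from math import exp, factorial

def poisson_cdf(n, mu):
    # P(N <= n) for N ~ Poisson(mu)
    return sum(exp(-mu) * mu**k / factorial(k) for k in range(n + 1))

# (b) P(fouling out) = p_{X_48}(6) = 1 - P(fewer than 6 fouls in 48 minutes).
p_foul_out = 1 - poisson_cdf(5, 48 / 8)
print(round(p_foul_out, 4))   # 0.5543

# (c) P(team wins) = P(U > 40) = P(fewer than 6 arrivals by t = 40),
# with arrival rate 1/8 per minute, so mu = 40/8 = 5.
p_win = poisson_cdf(5, 40 / 8)
print(round(p_win, 3))        # 0.616
```

The same function also covers part (d): P(V > 32) is the probability of fewer than 5 arrivals by t = 32, i.e. `poisson_cdf(4, 32 / 8)`.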

(d) This is very similar to the previous part. When Wallace commits his fifth foul at time t, the score differential in favor of Wallace's team is

0.25t (Wallace playing) − 0.5(48 − t) (Wallace not playing) = 0.75t − 24.

Thus Wallace's team wins whenever 0.75t − 24 > 0, or t > 32. We are now interested in the fifth arrival time in the Wallace-foul process. This fifth arrival time has the Erlang PDF of order 5 and parameter λ = 1/8. Denoting the fifth arrival time by V, we have

f_V(v) = \frac{\lambda^5 v^4 e^{-\lambda v}}{4!},  v ≥ 0,

and

P(\{\text{Wallace's team wins}\}) = P(V > 32) = \int_{32}^{\infty} \frac{\lambda^5 v^4 e^{-\lambda v}}{4!} \, dv = \frac{1}{8^5 \cdot 4!} \int_{32}^{\infty} v^4 e^{-v/8} \, dv = \frac{103}{3} e^{-4} \approx 0.629,

where again the integral can be computed by repeated integration by parts.

Incidentally, the strategy proposed in part (d) is clearly not optimal for maximizing the probability of victory, because the decision of whether or not to play Wallace with five fouls should also depend on the time remaining in the game. For example, assuming the present model, removing Wallace less than 32 minutes into the game does not make sense, because it leads to certain defeat.

3. (a) The state diagram of the Markov chain has states 1 through 7. [The original figure, showing the 0.5 transition probabilities among these states, is not reproduced here.]

(b) State 5 is reachable from state 1 in a minimum of three transitions. Paths from state 1 to state 5 also include paths with a loop from 1 back to 1 (of length 3) and/or a loop from 5 back to 5 by way of state 7 (of either length 2 or length 3). Therefore the potential path lengths are 3 + 2m + 3n, for m, n ≥ 0. Therefore, r_{15}(n) > 0 for n = 3 or n ≥ 5.

(c) From states 1, 2, and 3, all states are accessible, because there is a nonzero-probability path from each of these states, by way of state 3, to any other state. From states 4, 5, 6, and 7, paths only exist to states 5, 6, and 7.

(d) States 5, 6, and 7 are recurrent: by the logic in (c), only states 5, 6, and 7 are accessible from them, so once the chain enters this set it never leaves. States 1 through 4 are transient; once the system has transitioned out of state 4, it cannot return to any state other than states 5, 6, or 7. States 5, 6, and 7 form a recurrent class. Because the chain can be traversed from state 5 back to state 5 in either 2 or 3 steps (as discussed in (b)), the system can return to state 5 after n steps for any n ≥ 2; therefore the class is aperiodic.

(e) One transition must be added to create a single recurrent class: for example, adding a transition from state 5 to state 1 would allow every state to be reached from every other state. Any transition from the recurrent-class states 5, 6, or 7 to any of the states 1, 2, or 3 would work.

4. (a) Given L_{n-1}, the history of the process (i.e., L_{n-2}, L_{n-3}, ...) is irrelevant for determining the probability distribution of L_n, the number of remaining unlocked doors at time n. Therefore, L_n is Markov. More precisely,

P(L_n = j | L_{n-1} = i, L_{n-2} = k, ..., L_1 = q) = P(L_n = j | L_{n-1} = i) = p_{ij}.

Clearly, in one step the number of unlocked doors can only decrease by one or stay constant. So, for 1 ≤ i ≤ d, if j = i − 1, then

p_{ij} = P(selecting an unlocked door on day n + 1 | L_n = i) = i/d.

For 0 ≤ i ≤ d, if j = i, then

p_{ij} = P(selecting a locked door on day n + 1 | L_n = i) = (d − i)/d.

Otherwise, p_{ij} = 0. To summarize, for 0 ≤ i, j ≤ d, we have the following:

p_{ij} = \begin{cases} i/d, & j = i - 1 \\ (d-i)/d, & j = i \\ 0, & \text{otherwise} \end{cases}

(b) The state with 0 unlocked doors is the only recurrent state. All other states are transient, because from each there is a positive probability of eventually reaching state 0, from which it is not possible to return.

(c) Note that once all the doors are locked, none will ever be unlocked again. So state 0 is an absorbing state: there is a positive probability that the system will enter it, and once it does, it will remain there forever. Then, clearly, lim_{n→∞} r_{i0}(n) = 1 and lim_{n→∞} r_{ij}(n) = 0 for all j ≠ 0 and all i.

(d) Now, if I choose a locked door, the number of unlocked doors will increase by one the next day. Similarly, the number of unlocked doors will decrease by one if and only if I choose an unlocked door. Hence,

p_{ij} = \begin{cases} (d-i)/d, & j = i + 1 \\ i/d, & j = i - 1 \\ 0, & \text{otherwise} \end{cases}

Clearly, from each state one can go to any other state and return with positive probability; hence all the states in this Markov chain communicate and thus form one recurrent class. There are no transient states or absorbing states. Note, however, that from an even-numbered state (states 0, 2, 4, etc.) one can only go to an odd-numbered state in one step, and similarly all one-step transitions from odd-numbered states lead to even-numbered states. Since the states can be grouped into two groups such that all transitions from one group lead to the other and vice versa, the chain is periodic with period 2. This will lead to r_{ij}(n) oscillating and not converging as n → ∞. For example, r_{11}(n) = 0 for all odd n, but positive for even n.

(e) In this case L_n is not a Markov process. To see this, note that

P(L_n = i + 1 | L_{n-1} = i, L_{n-2} = i - 1) = 0,

since according to my strategy I do not unlock doors two days in a row. But clearly,

P(L_n = i + 1 | L_{n-1} = i) > 0  for i < d,

since in general it is possible to go from a state of i unlocked doors to a state of i + 1 unlocked doors. Thus

P(L_n = i + 1 | L_{n-1} = i, L_{n-2} = i - 1) ≠ P(L_n = i + 1 | L_{n-1} = i),

which shows that L_n does not have the Markov property.
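The claims in parts (c) and (d) of Problem 4 can be checked by raising the transition matrices to a power, since the n-step probabilities r_{ij}(n) are the entries of P^n. This is a sketch with an illustrative chain size d = 4 (the value of d is an assumption for the example, not from the problem):

```python
def matmul(A, B):
    # Plain-Python matrix product; sufficient for these small chains.
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(P, n):
    # Identity matrix, multiplied by P n times.
    R = [[float(i == j) for j in range(len(P))] for i in range(len(P))]
    for _ in range(n):
        R = matmul(R, P)
    return R

d = 4  # illustrative number of doors

# Part (a) chain: a randomly chosen door is locked if it was unlocked,
# otherwise the count stays the same.
P_lock = [[0.0] * (d + 1) for _ in range(d + 1)]
for i in range(d + 1):
    if i > 0:
        P_lock[i][i - 1] = i / d        # picked an unlocked door
    P_lock[i][i] = (d - i) / d          # picked a locked door

# Part (c): state 0 is absorbing, so r_i0(n) -> 1 for every start state i.
R = matpow(P_lock, 200)
print(all(abs(R[i][0] - 1.0) < 1e-9 for i in range(d + 1)))

# Part (d) chain: locked doors get unlocked, unlocked doors get locked.
P_flip = [[0.0] * (d + 1) for _ in range(d + 1)]
for i in range(d + 1):
    if i > 0:
        P_flip[i][i - 1] = i / d
    if i < d:
        P_flip[i][i + 1] = (d - i) / d

# Period 2: r_11(n) is zero for odd n and positive for even n >= 2.
print(matpow(P_flip, 5)[1][1] == 0.0 and matpow(P_flip, 6)[1][1] > 0.0)
```

The part-(d) check makes the oscillation concrete: odd powers of P_flip place zero mass on a return to state 1, while even powers place positive mass there, so r_{11}(n) cannot converge.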