Homework Solution 4 for APPM 4/5560 Markov Processes
- Jasper Richardson
9. Reflecting random walk on the line. Consider the points 1, 2, 3, 4 to be marked on a straight line. Let X_n be a Markov chain that moves to the right with probability 2/3 and to the left with probability 1/3, but subject this time to the rule that if X_n tries to go to the left from 1 or to the right from 4 it stays put. Find (a) the transition probability for the chain, and (b) the limiting amount of time the chain spends at each site.

(a) The transition probability is

        1     2     3     4
  1    1/3   2/3    0     0
  2    1/3    0    2/3    0
  3     0    1/3    0    2/3
  4     0     0    1/3   2/3

(b) Detailed balance requires π(x)(2/3) = π(x+1)(1/3), i.e. π(x+1) = 2π(x). The limiting distribution of time spent in each state is therefore (1/15)(1, 2, 4, 8).

Let X_n be the number of days since Mickey Markov last shaved, calculated at 7:30 AM when he is trying to decide if he wants to shave today. Suppose that X_n is a Markov chain with transition matrix

        1     2     3     4
  1    1/2   1/2    0     0
  2    2/3    0    1/3    0
  3    3/4    0     0    1/4
  4     1     0     0     0

In words, if he last shaved k days ago, he will not shave with probability 1/(k+1). However, when he has not shaved for 4 days his wife orders him to shave, and he does so with probability 1. (a) What is the long-run fraction of time Mickey shaves? (b) Does the stationary distribution for this chain satisfy the detailed balance condition?

(a) Solving πP = π gives π(2) = π(1)/2, π(3) = π(1)/6, π(4) = π(1)/24, so the limiting distribution of time spent in each state is (1/41)(24, 12, 4, 1). Mickey shaves exactly on the days the chain enters state 1, so the long-run fraction of days that Mickey shaves is π(1) = 24/41.

(b) No. In particular,

12/41 = π(1)p(1,2) ≠ π(2)p(2,1) = (12/41)(2/3) = 8/41.

9.6(b) The Markov chain associated with a manufacturing process may be described as follows: A part to be manufactured will begin the process by entering step 1. After step 1, 20% of the parts must be reworked, i.e., returned to step 1, 10% of the parts are thrown away, and 70% proceed to step 2. After step 2, 5% of the parts must be returned to step 1, 10% to step 2, 5% are scrapped, and 80% emerge to be sold for a profit. (a) Formulate a four-state Markov chain with states 1, 2, 3, and 4 where 3 = a part that was scrapped and 4 = a part that was sold for a profit. (b) Compute the probability a part is scrapped in the production process.

(b) There are several ways to do this problem. The following way is more intuitive.
The transition probability of the Markov chain is

        1      2      3      4
  1    0.2    0.7    0.1     0
  2    0.05   0.1    0.05   0.8
  3     0      0      1      0
  4     0      0      0      1

By simple calculation we know π(1) = π(2) = 0 (states 1 and 2 are transient) while π(3) and π(4) are undetermined. From the problem we know a part always starts from step 1 and ends at either state 3 or state 4. In other words we only need to know the limit of p^n(1,3) as n → ∞. Since π(1) = π(2) = 0, we have p^n(i,1) → 0 and p^n(i,2) → 0 for i = 1, 2, 3, 4. By the Markov property,

p^{n+1}(k,3) = Σ_{j=1}^{4} p(k,j) p^n(j,3).

Let a_{ij} = lim_{n→∞} p^n(i,j) as a brief notation, and write a_i for a_{i3}. If these limits exist, then letting n → ∞ in the above equation gives

a_1 = Σ_{j=1}^{4} p(1,j) a_j = 0.2 a_1 + 0.7 a_2 + 0.1 a_3 + 0 · a_4,
a_2 = Σ_{j=1}^{4} p(2,j) a_j = 0.05 a_1 + 0.1 a_2 + 0.05 a_3 + 0.8 a_4.

It is obvious that a_3 = 1 and a_4 = 0, since 3 and 4 are absorbing. So we have

a_1 = 0.2 a_1 + 0.7 a_2 + 0.1,
a_2 = 0.05 a_1 + 0.1 a_2 + 0.05.

Solving the system of equations gives a_1 ≈ 0.1825 and a_2 ≈ 0.0657, so the probability a part is scrapped is about 0.1825.

Six children (Dick, Helen, Joni, Mark, Sam, and Tony) play catch. If Dick has the ball he is equally likely to throw it to Helen, Mark, Sam, and Tony. If Helen has the ball she is equally likely to throw it to Dick, Joni, Sam, and Tony. If Sam has the ball he is equally likely to throw it to Dick, Helen, Mark, and Tony. If either Joni or Tony gets the ball, they keep throwing it to each other. If Mark gets the ball he runs away with it. (a) Find the transition probability and classify the states of the chain. (b) Suppose Dick has the ball at the beginning of the game. What is the probability Mark will end up with it?

(a) The transition probability of the Markov chain is

        D     H     J     M     S     T
  D     0    1/4    0    1/4   1/4   1/4
  H    1/4    0    1/4    0    1/4   1/4
  J     0     0     0     0     0     1
  M     0     0     0     1     0     0
  S    1/4   1/4    0    1/4    0    1/4
  T     0     0     1     0     0     0

{M} and {J, T} are closed recurrent classes, while D, H, and S are transient.

(b) We need to find the value of a_D, where a_i = P_i(Mark ends up with the ball). Conditioning on the first step we obtain

a_k = Σ_{j=1}^{6} p(k,j) a_j   for k = 1, ..., 6.
After simplification, using a_M = 1 and a_J = a_T = 0, we have

a_D = (1/4) a_H + (1/4) a_S + 1/4,
a_H = (1/4) a_D + (1/4) a_S,
a_S = (1/4) a_D + (1/4) a_H + 1/4,

and solving gives a_D = 0.4, a_H = 0.2, a_S = 0.4. So the probability Mark will end up with the ball is 0.4.

Coupon collector's problem. We are interested now in the time it takes to collect a set of N baseball cards. Let T_k be the number of cards we have to buy before we have k that are distinct. Clearly, T_1 = 1. A little more thought reveals that if each time we get a card chosen at random from all N possibilities, then for k ≥ 1, T_{k+1} − T_k has a geometric distribution with success probability (N − k)/N. Use this to show that the mean time to collect a set of N baseball cards is approximately N log N, while the variance is approximately N² Σ_{k=1}^{∞} 1/k².

Since T_{k+1} − T_k has a geometric distribution with success probability (N − k)/N,

E(T_{k+1} − T_k) = N/(N − k).

So

E(T_N) = E(T_1) + Σ_{k=1}^{N−1} E(T_{k+1} − T_k) = 1 + Σ_{k=1}^{N−1} N/(N − k) = N Σ_{j=1}^{N} 1/j ≈ N log N.

Since the increments T_{k+1} − T_k are independent of each other,

Var(T_N) = Σ_{k=1}^{N−1} Var(T_{k+1} − T_k).

A geometric with success probability p has variance (1 − p)/p², so

Var(T_{k+1} − T_k) = (1 − (N − k)/N) / ((N − k)/N)² = kN/(N − k)².

Substituting j = N − k,

Var(T_N) = Σ_{k=1}^{N−1} kN/(N − k)² = Σ_{j=1}^{N−1} (N − j)N/j² = N² Σ_{j=1}^{N−1} 1/j² − N Σ_{j=1}^{N−1} 1/j ≈ N² Σ_{k=1}^{∞} 1/k²,

since the second term is of smaller order than the first.
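As a quick numerical sanity check (not part of the original assignment; a sketch using numpy), the catch-game absorption probabilities and the reflecting-walk stationary distribution can be recomputed:

```python
import numpy as np

# Transition matrix for the catch game, states ordered D, H, J, M, S, T.
P = np.array([
    [0,   1/4, 0,   1/4, 1/4, 1/4],  # Dick
    [1/4, 0,   1/4, 0,   1/4, 1/4],  # Helen
    [0,   0,   0,   0,   0,   1  ],  # Joni always throws to Tony
    [0,   0,   0,   1,   0,   0  ],  # Mark keeps the ball
    [1/4, 1/4, 0,   1/4, 0,   1/4],  # Sam
    [0,   0,   1,   0,   0,   0  ],  # Tony always throws to Joni
])

# Absorption into {Mark}: restrict to the transient states D, H, S and
# solve (I - Q) a = r, where r holds one-step probabilities of hitting M.
trans = [0, 1, 4]
Q = P[np.ix_(trans, trans)]
r = P[trans, 3]
a = np.linalg.solve(np.eye(3) - Q, r)
print(a)  # a_D, a_H, a_S

# Reflecting random walk on {1,2,3,4} with p = 2/3: power iteration
# should reproduce the stationary distribution (1/15)(1, 2, 4, 8).
R = np.array([[1/3, 2/3, 0,   0  ],
              [1/3, 0,   2/3, 0  ],
              [0,   1/3, 0,   2/3],
              [0,   0,   1/3, 2/3]])
pi = np.linalg.matrix_power(R, 200)[0]
print(pi * 15)
```

Both outputs agree with the hand computations above: (a_D, a_H, a_S) = (0.4, 0.2, 0.4) and π proportional to (1, 2, 4, 8).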
9.8 A criminal named Xavier and a policeman named Yakov move between three possible hideouts according to Markov chains X_n and Y_n with transition probabilities

  Xavier:                 Yakov:
   .6   .2   .2             0   .5   .5
   .2   .6   .2            .5    0   .5
   .2   .2   .6            .5   .5    0

At time T = min{n : X_n = Y_n} the game is over and the criminal is caught. (a) Suppose X_0 = i and Y_0 = j ≠ i. Find the expected value of T. (b) Suppose that the two players generalize their strategies to

  Xavier:                       Yakov:
   1−2p    p     p               1−2q    q     q
    p    1−2p    p                q    1−2q    q
    p     p    1−2p               q     q    1−2q

If Yakov uses q = 1/2 as he did in part (a), what value of p should Xavier choose to maximize the expected time to get caught? Answer the last question again for q = 1/3.

(a) Observe that the entries in the transition matrices for X and Y are constant along the diagonal and constant off the diagonal, so by relabeling states we may assume, without loss of generality, that X is at state 1 and Y is at state 2 at time k. At time k+1 the joint outcomes are

  outcome       probability         result
  X=1, Y=1      .6 × .5 = .30       caught
  X=2, Y=2      .2 ×  0 =  0        caught
  X=3, Y=3      .2 × .5 = .10       caught
  X=1, Y=3      .6 × .5 = .30       not caught
  X=2, Y=1      .2 × .5 = .10       not caught
  X=2, Y=3      .2 × .5 = .10       not caught
  X=3, Y=1      .2 × .5 = .10       not caught

(outcomes with probability 0 are omitted). We can therefore collapse to a new Markov chain with states "caught" and "not caught", with transition probability

                caught   not caught
  caught          1          0
  not caught     .4         .6

The expected catch time is therefore

ET = Σ_{k=0}^{∞} P(T > k) = Σ_{k=0}^{∞} (.6)^k = 1/(1 − .6) = 2.5.

(b) Since the states are still symmetric, we can generalize the table (suppose X = 1 and Y = 2 at time k):
  outcome       probability        result
  X=1, Y=1      (1−2p)q            caught
  X=2, Y=2      p(1−2q)            caught
  X=3, Y=3      pq                 caught
  X=1, Y=2      (1−2p)(1−2q)       not caught
  X=1, Y=3      (1−2p)q            not caught
  X=2, Y=1      pq                 not caught
  X=2, Y=3      pq                 not caught
  X=3, Y=1      pq                 not caught
  X=3, Y=2      p(1−2q)            not caught

Counting the probabilities:

P(caught) = (1−2p)q + p(1−2q) + pq = p + q − 3pq,
P(not caught) = (1−2p)(1−2q) + (1−2p)q + 3pq + p(1−2q) = 1 − (p + q − 3pq).

The transition matrix of the collapsed chain is

                caught           not caught
  caught          1                  0
  not caught   p + q − 3pq     1 − (p + q − 3pq)

When q = 1/2 we get p + q − 3pq = (1 − p)/2, so the "not caught" row is ((1 − p)/2, (1 + p)/2). The expected catch time would be

ET = Σ_{k=0}^{∞} ((1 + p)/2)^k = 1/(1 − (1 + p)/2) = 2/(1 − p).

To maximize the time to be caught, we look at the first derivative of ET:

(ET)' = 2/(1 − p)² > 0,

so ET is increasing in p. The maximum value p can have is 1/2, so max ET = 2/(1 − 1/2) = 4.

When q = 1/3, p + q − 3pq = p + 1/3 − p = 1/3 for every p, so ET = 1/(1/3) = 3 no matter what Xavier chooses.

9.4 A warehouse has a capacity to hold four items. If the warehouse is neither full nor empty, the number of items in the warehouse changes whenever a new item is produced or an item is sold. Suppose that (no matter when we look) the probability that the next event is "a new item is produced" is 2/3 and that the next event is "a sale" is 1/3. If there is currently one item in the warehouse, what is the probability that the warehouse will become full before it becomes empty?

We create a Markov chain on {0, 1, 2, 3, 4} with absorbing states at 0 (empty) and 4 (full). That is,

        0     1     2     3     4
  0     1     0     0     0     0
  1    1/3    0    2/3    0     0
  2     0    1/3    0    2/3    0
  3     0     0    1/3    0    2/3
  4     0     0     0     0     1
Let P(i) be the probability that, starting from state i, the warehouse becomes full before becoming empty. Then using that P(0) = 0 and P(4) = 1, first-step analysis gives

P(1) = (2/3)P(2),
P(2) = (1/3)P(1) + (2/3)P(3),
P(3) = (1/3)P(2) + 2/3.

This linear system is AP = b where

  A = (   1    −2/3    0
        −1/3     1    −2/3
          0    −1/3     1  ),    P = (P(1), P(2), P(3))^T,    b = (0, 0, 2/3)^T.

Solving the system gives P = (2/15)(4, 6, 7) = (8/15, 12/15, 14/15). In particular P(1) = 8/15.

General birth and death chains. The state space is {0, 1, 2, ...} and the transition probability has

p(x, x+1) = p_x,
p(x, x−1) = q_x   for x > 0,
p(x, x) = r_x     for x ≥ 0,

while the other p(x, y) = 0. Let V_y = min{n ≥ 0 : X_n = y} be the time of the first visit to y and let h(x) = P_x(V_a < V_0). By considering what happens on the first step, we can write

h(x) = p_x h(x+1) + r_x h(x) + q_x h(x−1).

Set h(1) = c and solve this equation to conclude that 0 is recurrent if and only if

Σ_{y=1}^{∞} Π_{x=1}^{y} q_x/p_x = ∞,

where by convention Π_{x=1}^{0} = 1.

First, h(0) = P_0(V_a < V_0) = 0 from the definition. Second, we find the solution of h(x). From the one-step equation, using 1 − r_x = p_x + q_x,

h(x+1) = (1/p_x)[(1 − r_x)h(x) − q_x h(x−1)]
        = (1/p_x)[(p_x + q_x)h(x) − q_x h(x−1)]
        = h(x) + (q_x/p_x)(h(x) − h(x−1)).

We will prove

h(x) = c Σ_{i=0}^{x−1} Π_{j=1}^{i} q_j/p_j

by induction. The equality holds when x = 1. Suppose it holds up through x = t; then since h(t) − h(t−1) = c Π_{j=1}^{t−1} q_j/p_j, for x = t + 1,

h(t+1) = h(t) + (q_t/p_t)(h(t) − h(t−1))
        = c Σ_{i=0}^{t−1} Π_{j=1}^{i} q_j/p_j + c (q_t/p_t) Π_{j=1}^{t−1} q_j/p_j
        = c Σ_{i=0}^{t} Π_{j=1}^{i} q_j/p_j.
Thus h(x) = c Σ_{i=0}^{x−1} Π_{j=1}^{i} q_j/p_j is the solution. Third, we find the form of c. Since h(a) = P_a(V_a < V_0) = 1 by definition,

h(a) = c Σ_{i=0}^{a−1} Π_{j=1}^{i} q_j/p_j = 1,   so   c = 1 / Σ_{i=0}^{a−1} Π_{j=1}^{i} q_j/p_j,

and c = h(1) is the probability that we start at 1 and visit a before visiting 0. Define

c_∞ = lim_{a→∞} c = 1 / Σ_{i=0}^{∞} Π_{j=1}^{i} q_j/p_j

to be the probability that, starting from 1, we visit 2, 3, 4, ... in turn before ever visiting 0.

Fourth, we make the proof. If Σ_{i=0}^{∞} Π_{j=1}^{i} q_j/p_j = ∞, then c_∞ = 0, so starting from 1 it is certain that we will visit 0. Since from 0 the chain can only stay at 0 or step to 1, the chain returns to 0 with probability one and hence visits 0 infinitely often, so 0 is recurrent. If Σ_{i=0}^{∞} Π_{j=1}^{i} q_j/p_j < ∞, then c_∞ > 0: with positive probability starting from 1 we visit every state before visiting 0, which means we never return to 0, so 0 is transient.

Consider the Markov chain with state space {0, 1, 2, ...} and transition probability

p(m, m+1) = (1/2)(1 − 1/(m+2))   for m ≥ 0,
p(m, m−1) = (1/2)(1 + 1/(m+2))   for m ≥ 1,

and p(0, 0) = 1 − p(0, 1) = 3/4. Find the stationary distribution π.

Since this is a birth and death chain, we exploit the detailed balance condition:

v_{m+1} = [p(m, m+1)/p(m+1, m)] v_m = [(m+1)(m+3)] / [(m+2)(m+4)] v_m.

Iterating the recursion leads to

v_1 = (1·3)/(2·4) v_0;   v_2 = (2·4)/(3·5) v_1;   v_3 = (3·5)/(4·6) v_2;

and so on, which telescopes,
from which the following pattern is apparent (you can easily check that it solves the detailed balance condition):

v_m = 3 v_0 / ((m+1)(m+3)).

Now to determine the value of v_0 by imposing that Σ_{n=0}^{∞} π_n = 1:

1 = Σ_{n=0}^{∞} 3v_0/((n+1)(n+3)) = (3v_0/2) Σ_{n=0}^{∞} (1/(n+1) − 1/(n+3)) = (3v_0/2)(1 + 1/2) = (9/4) v_0,

so v_0 = 4/9 and

π_n = 4 / (3(n+1)(n+3))   for all n.

9.5 Consider a branching process as defined in Example 7., in which each family has a number of children that follows a shifted geometric distribution: p_k = p(1−p)^k for k ≥ 0, which counts the number of failures before the first success when success has probability p. Compute the probability that starting from one individual the chain will be absorbed at 0.

Let e_1 be the extinction probability starting from one individual. Conditioning on the number of children in the first generation,

e_1 = Σ_{k=0}^{∞} p(1−p)^k e_1^k = p Σ_{k=0}^{∞} ((1−p)e_1)^k = p / (1 − (1−p)e_1).

It follows that (1−p)e_1² − e_1 + p = 0, whose roots are 1 and p/(1−p). Hence, when p < 1/2 (so that the mean number of children, (1−p)/p, exceeds 1), the probability that the chain will be absorbed at 0 starting with one individual is e_1 = p/(1−p); when p ≥ 1/2 the extinction probability is 1.

7. Suppose that the time to repair a machine is an exponentially distributed random variable with mean 2. (a) What is the probability the repair takes more than 2 hours? (b) What is the probability that the repair takes more than 5 hours given that it takes more than 3 hours?

(a) Since the mean is 2, the rate is 1/2, so P(T > 2) = e^{−2/2} = e^{−1}. (b) By the memoryless property, P(T > 5 | T > 3) = P(T > 2) = e^{−1}.

7. The lifetime of a radio is exponentially distributed with mean 5 years. If Ted buys a 7-year-old radio, what is the probability it will be working 3 years later?

The rate is 1/5. By the memoryless property, P(T > 10 | T > 7) = P(T > 3) = e^{−3/5}.
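Two of the later answers are also easy to machine-check. The sketch below (Python, not part of the original solutions; the sample value p = 0.3 is an illustrative choice) verifies in exact rational arithmetic that π_n = 4/(3(n+1)(n+3)) satisfies the stationarity equations of the birth and death chain, and iterates the offspring generating function φ(s) = p/(1 − (1−p)s) from 0 to recover the extinction probability p/(1−p):

```python
from fractions import Fraction as F

# Birth and death chain: p(m, m+1) = (1/2)(1 - 1/(m+2)),
#                        p(m, m-1) = (1/2)(1 + 1/(m+2)).
def up(m):
    return F(1, 2) * (1 - F(1, m + 2))

def down(m):
    return F(1, 2) * (1 + F(1, m + 2))

def pi(n):
    return F(4, 3) / ((n + 1) * (n + 3))

# For m >= 1 there is no holding probability, and p(0,0) = 1 - up(0) = 3/4.
for n in range(1, 100):
    assert up(n) + down(n) == 1
    # Stationarity at n: inflow from n-1 and n+1 equals pi(n).
    assert pi(n - 1) * up(n - 1) + pi(n + 1) * down(n + 1) == pi(n)
assert pi(0) * F(3, 4) + pi(1) * down(1) == pi(0)  # stationarity at 0

# Branching process: the extinction probability is the smallest fixed point
# of phi(s) = p/(1 - (1-p)s); iterating from 0 converges to p/(1-p) for p < 1/2.
p = 0.3
e = 0.0
for _ in range(200):
    e = p / (1 - (1 - p) * e)
print(e)  # converges to p/(1-p) = 3/7 = 0.42857...
```

Using Fraction for the stationarity check makes the equalities exact rather than floating-point approximate, so the asserts confirm π literally solves πP = π.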
More informationStatics and dynamics: some elementary concepts
1 Statics and dynamics: some elementary concets Dynamics is the study of the movement through time of variables such as heartbeat, temerature, secies oulation, voltage, roduction, emloyment, rices and
More informationSolutions to In Class Problems Week 15, Wed.
Massachusetts Institute of Technology 6.04J/18.06J, Fall 05: Mathematics for Comuter Science December 14 Prof. Albert R. Meyer and Prof. Ronitt Rubinfeld revised December 14, 005, 1404 minutes Solutions
More informationHAUSDORFF MEASURE OF p-cantor SETS
Real Analysis Exchange Vol. 302), 2004/2005,. 20 C. Cabrelli, U. Molter, Deartamento de Matemática, Facultad de Cs. Exactas y Naturales, Universidad de Buenos Aires and CONICET, Pabellón I - Ciudad Universitaria,
More informationVarious Proofs for the Decrease Monotonicity of the Schatten s Power Norm, Various Families of R n Norms and Some Open Problems
Int. J. Oen Problems Comt. Math., Vol. 3, No. 2, June 2010 ISSN 1998-6262; Coyright c ICSRS Publication, 2010 www.i-csrs.org Various Proofs for the Decrease Monotonicity of the Schatten s Power Norm, Various
More informationRESOLUTIONS OF THREE-ROWED SKEW- AND ALMOST SKEW-SHAPES IN CHARACTERISTIC ZERO
RESOLUTIONS OF THREE-ROWED SKEW- AND ALMOST SKEW-SHAPES IN CHARACTERISTIC ZERO MARIA ARTALE AND DAVID A. BUCHSBAUM Abstract. We find an exlicit descrition of the terms and boundary mas for the three-rowed
More informationSums of independent random variables
3 Sums of indeendent random variables This lecture collects a number of estimates for sums of indeendent random variables with values in a Banach sace E. We concentrate on sums of the form N γ nx n, where
More informationThe cost/reward formula has two specific widely used applications:
Applications of Absorption Probability and Accumulated Cost/Reward Formulas for FDMC Friday, October 21, 2011 2:28 PM No class next week. No office hours either. Next class will be 11/01. The cost/reward
More informationarxiv:math/ v2 [math.nt] 21 Oct 2004
SUMS OF THE FORM 1/x k 1 + +1/x k n MODULO A PRIME arxiv:math/0403360v2 [math.nt] 21 Oct 2004 Ernie Croot 1 Deartment of Mathematics, Georgia Institute of Technology, Atlanta, GA 30332 ecroot@math.gatech.edu
More informationRecursive Estimation of the Preisach Density function for a Smart Actuator
Recursive Estimation of the Preisach Density function for a Smart Actuator Ram V. Iyer Deartment of Mathematics and Statistics, Texas Tech University, Lubbock, TX 7949-142. ABSTRACT The Preisach oerator
More information1 Riesz Potential and Enbeddings Theorems
Riesz Potential and Enbeddings Theorems Given 0 < < and a function u L loc R, the Riesz otential of u is defined by u y I u x := R x y dy, x R We begin by finding an exonent such that I u L R c u L R for
More informationOn Wald-Type Optimal Stopping for Brownian Motion
J Al Probab Vol 34, No 1, 1997, (66-73) Prerint Ser No 1, 1994, Math Inst Aarhus On Wald-Tye Otimal Stoing for Brownian Motion S RAVRSN and PSKIR The solution is resented to all otimal stoing roblems of
More informationBirth-death chain models (countable state)
Countable State Birth-Death Chains and Branching Processes Tuesday, March 25, 2014 1:59 PM Homework 3 posted, due Friday, April 18. Birth-death chain models (countable state) S = We'll characterize the
More informationPractice Final Solutions
Practice Final Solutions 1. True or false: (a) If a is a sum of three squares, and b is a sum of three squares, then so is ab. False: Consider a 14, b 2. (b) No number of the form 4 m (8n + 7) can be written
More informationSplit the integral into two: [0,1] and (1, )
. A continuous random variable X has the iecewise df f( ) 0, 0, 0, where 0 is a ositive real number. - (a) For any real number such that 0, rove that the eected value of h( X ) X is E X. (0 ts) Solution:
More informationStochastic integration II: the Itô integral
13 Stochastic integration II: the Itô integral We have seen in Lecture 6 how to integrate functions Φ : (, ) L (H, E) with resect to an H-cylindrical Brownian motion W H. In this lecture we address the
More informationMODELING THE RELIABILITY OF C4ISR SYSTEMS HARDWARE/SOFTWARE COMPONENTS USING AN IMPROVED MARKOV MODEL
Technical Sciences and Alied Mathematics MODELING THE RELIABILITY OF CISR SYSTEMS HARDWARE/SOFTWARE COMPONENTS USING AN IMPROVED MARKOV MODEL Cezar VASILESCU Regional Deartment of Defense Resources Management
More informationElementary theory of L p spaces
CHAPTER 3 Elementary theory of L saces 3.1 Convexity. Jensen, Hölder, Minkowski inequality. We begin with two definitions. A set A R d is said to be convex if, for any x 0, x 1 2 A x = x 0 + (x 1 x 0 )
More informationGENERICITY OF INFINITE-ORDER ELEMENTS IN HYPERBOLIC GROUPS
GENERICITY OF INFINITE-ORDER ELEMENTS IN HYPERBOLIC GROUPS PALLAVI DANI 1. Introduction Let Γ be a finitely generated grou and let S be a finite set of generators for Γ. This determines a word metric on
More information