ANALYTICAL MODEL OF A VIRTUAL BACKBONE STABILITY IN MOBILE ENVIRONMENT


(The 4th New York Metro Area Networking Workshop, New York City, Sept. 2004)

Ibrahim Hökelek 1, Mariusz A. Fecko 2, M. Ümit Uyar 1
1 Electrical Engineering Dept., CCNY, The City University of New York, NY, USA
2 Applied Research Area, Telcordia Technologies, Inc., Piscataway, NJ, USA

1 INTRODUCTION

Reliable server pooling (RSP) [4] provides a client application (also called a Pool User, PU) with a range of reliability services, from server selection to an automatic session-failover capability. Servers with the same application functionality, called Pool Elements (PEs), are grouped into server pools identified by a pool handle. A PU accesses the PEs by querying its Primary Name Server (PNS). Recently, we defined, implemented, and demonstrated a new RSP architecture called Dynamic Survivable Resource Pooling (DSRP) [1], which deploys name servers (NSs) on a dynamic virtual backbone (VB) for ad hoc networks. The DSRP architecture consists of two independent parts: (1) formation and maintenance of the backbone, where the most stable nodes are dynamically selected as backbone nodes, and (2) distribution of resource registrations, requests, and replies over the mesh of backbone nodes. This paper focuses on modeling the first part of DSRP.

The base model captures the dynamics of the nodes (NS, PE, PU) and of the VB driven by node movement, where link creation and failure are modeled via a random walk with a probabilistic state-transition matrix [3]. Because the backbone formation algorithm prefers nodes with a small number of link changes and a high degree, the link arrivals and departures determine the probability (and thus the expected time) for an NS to leave, join, or remain in the backbone, i.e., the stability of the dynamic structure of NSs.

We developed an initial model for the DSRP to calculate the following end-user metric: what is the expected delay until a service request is resolved? The stability and accessibility of the virtual backbone is a major contributing factor to this delay. We thus obtain several related metrics: the probability of a PE/PU not having an operational PNS; the stability of an NS, i.e., the expected time for an NS to leave the backbone; and the expected delay for a PE/PU to find another PNS when the previous one becomes unavailable.

2 BASE MODEL

In our approach, we construct a state transition matrix M such that each element M_{i,j} represents the probability of a transition from the i-th to the j-th link state within one time unit. Using the steady-state probabilities of M, we derive a new Markov chain whose states represent the number of available links for a specific node. There may be multiple link arrivals and multiple link departures in one time unit, characterized by a pair of random variables. Based on these random variables, we obtain a stochastic point process that defines the probabilities of a node's degree dropping below or exceeding the threshold as the node moves.
The expected time of this event determines when an NS turns into a non-backbone node (and vice versa). Ref. [3] uses a discrete-time random-walk model to predict route lifetime in multi-hop mobile ad hoc networks. Both mobile stations can move within one time unit, and a vector representing a wireless link between two mobile stations is called a link state. For example, if two nodes are located in cells (0,0) and (x,y), respectively, then the link state between these nodes is <x, y> (x >= 0, y >= 0). A random-walk model is applied to formulate how a wireless link changes states. Consider any wireless link <x, y> connecting two mobile nodes: after one time unit, each mobile node moves into one of its six neighbor cells with probability 1/6 for each direction. Therefore, there are 36 possible combinations for the next link state. A state-reduction technique decreases this number to only 19 possible combinations. Finally, a state transition matrix M is constructed such that each element M_{i,j} represents the probability of a transition from the i-th state to the j-th state; M thus captures the state transition probabilities after one time unit. Similar to Ref. [3], we further partition the cells into layers, where cell (0,0) is on layer 0, the six cells surrounding cell (0,0) are on layer 1, and the cells surrounding the cells of layer i are on layer i+1. We followed the procedure in Ref. [3] to construct matrix M.
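To make the state-reduction step concrete, the short sketch below (plain Python; hexagonal moves expressed in axial coordinates, which is an assumption about the representation, not something prescribed in [3]) enumerates the 36 joint moves of the two link endpoints and confirms that they collapse to only 19 distinct relative displacements, and hence at most 19 candidate next link states.

```python
from itertools import product

# The six hexagonal neighbor moves, written in axial (q, r) coordinates.
HEX_MOVES = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

# Each endpoint of a link moves to one of its six neighbor cells with
# probability 1/6, giving 6 x 6 = 36 joint moves per time unit.  The link
# state depends only on the relative position of the two nodes, so only
# the relative displacement (move of node 1 minus move of node 2) matters.
relative = [(a[0] - b[0], a[1] - b[1]) for a, b in product(HEX_MOVES, HEX_MOVES)]

print(len(relative))        # 36 joint move combinations
print(len(set(relative)))   # 19 distinct relative displacements
```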

However, in our case, M contains more outer states because we have to consider all available links as well as all unavailable links that may become available. Moreover, to approximate a limited geographic area, our matrix M is constructed so that the outer states in the last two layers bounce back to both inner and outer states.

3 ANALYTICAL MODEL

For N mobile nodes and a given link with state i, which can be either available or unavailable, let P_a(i) and P_u(i) denote the probabilities that the link will be available or unavailable, respectively, in the next time unit. Given k available links, we want to calculate the probability P_{k,k+1} that there will be k+1 available links in the next time unit. This is possible only if l links disappear and l+1 links appear in the next time unit, for l = 0, 1, 2, ..., k. Let P_dap(k, l) denote the probability that l of the k available links will disappear, and let P_ap(K_u, l+1) denote the probability that l+1 of the K_u unavailable links will appear in one time unit. If we use the steady-state values of the state transition matrix M_{i,j}, then P_a(i) and P_u(i) are the same for every link state i, independent of the initial link state. Note that for some initial outer (i.e., link unavailable) states, the probability of reaching any inner (i.e., link available) state in one time unit is zero when P_a(i) and P_u(i) are used without the steady-state values (P_a(i) = 0 for some outer link states i). However, a link in an outer state may still reach an inner state after some time; when we use the stationary distribution of M_{i,j}, this probability is accounted for in the model. The steady-state values of P_a(i) and P_u(i) are then calculated as follows:

    P_a(i) = \sum_{j=0}^{s_a} M_{i,j} = P_a = 1 - P_u    (1)

    P_u(i) = \sum_{j=s_a+1}^{S_T} M_{i,j} = 1 - \sum_{j=0}^{s_a} M_{i,j} = 1 - P_a = P_u    (2)

where j = 0, 1, 2, ..., s_a are the available link states (inner states) and j = s_a+1, s_a+2, ..., S_T are the unavailable link states (outer states).
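A minimal numerical sketch of Eqs. (1)-(2), under the reading that the one-step rows of M are replaced by its stationary distribution: given M as a row-stochastic NumPy array whose first s_a + 1 indices are the inner (available) states, the steady-state P_a and P_u follow directly. The array layout and function name are illustrative assumptions, not part of the paper.

```python
import numpy as np

def steady_state_link_probs(M, s_a):
    """Steady-state probabilities (P_a, P_u) of a link being available or
    unavailable, per Eqs. (1)-(2).  M is the row-stochastic link-state
    transition matrix; states 0..s_a are the inner (available) states."""
    n = M.shape[0]
    pi = np.full(n, 1.0 / n)           # start from a uniform distribution
    for _ in range(100000):            # power iteration: pi <- pi M
        nxt = pi @ M
        if np.allclose(nxt, pi, atol=1e-12):
            break
        pi = nxt
    P_a = pi[: s_a + 1].sum()          # Eq. (1): mass on the inner states
    P_u = 1.0 - P_a                    # Eq. (2)
    return P_a, P_u
```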
When we use the steady-state probabilities of the matrix M, P_dap(k, l) and P_ap(K_u, l+1) become:

    P_dap(k, l) = \binom{k}{l} P_u^l P_a^{k-l}    (3)

    P_ap(K_u, l+1) = \binom{K_u}{l+1} P_a^{l+1} P_u^{K_u-l-1}    (4)

where P_a and P_u denote the steady-state probabilities that a given link will be available and unavailable, respectively. Then P_{k,k+1} can be calculated as:

    P_{k,k+1} = \sum_{l=0}^{k} P_dap(k, l) \, P_ap(K_u, l+1)    (5)

The above formulation gives only the probability of a transition from the k-th state to the (k+1)-th state. However, there may be a transition from the k-th state to the (k+h)-th state in one time unit for any possible k with 0 <= k+h <= K. Given k available links, we want to calculate the probability that there will be k+h available links in the next time unit. This is possible only if l of the k available links disappear and l+h of the K_u unavailable links appear in the next time unit, for l = 0, 1, 2, ..., k and 0 <= l+h <= K_u. Let P_ap(K_u, l+h) denote the probability that l+h links will appear in one time unit; P_dap(k, l) has been formulated above. Using the steady-state values P_a(i) = P_a and P_u(i) = P_u,

    P_ap(K_u, l+h) = \binom{K_u}{l+h} P_a^{l+h} P_u^{K_u-l-h}    (6)

Given k available links, let P_{k,k+h} denote the probability that there will be k+h available links in the next time unit, where 0 <= k <= K and 0 <= k+h <= K:

    P_{k,k+h} = \sum_{l} P_dap(k, l) \, P_ap(K_u, l+h)    (7)

By substituting Eqs. (3) and (6) into Eq. (7), we obtain:

    P_{k,k+h} = \sum_{l} \binom{k}{l} \binom{K_u}{l+h} P_a^{k+h} P_u^{K_u-h}    (8)

where 0 <= k <= K, 0 <= k+h <= K, and 0 <= l+h <= K_u. We then obtain a new finite-state Markov chain using the stationary distribution of the state transition matrix M_{i,j}. In the new chain, there is a transition probability from any state to every other state: for each k = 0, 1, 2, ..., K and k+h = 0, 1, 2, ..., K, there exists a certain probability P_{k,k+h} of going from the k-th state to the (k+h)-th state. Let us denote the transition matrix of this new Markov chain by P. Since this Markov chain is ergodic (finite, connected, and aperiodic), it has a stationary distribution: \pi_k = Pr(state = k) for k = 0, 1, 2, ..., K.

Our aim is now to find the probability mass function of the number of link changes in one time unit. In other words, the new process describes multiple link arrivals and multiple link departures within one time unit.
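The degree chain of Eqs. (3)-(8) can be built numerically as sketched below (function names and the use of NumPy are my own choices, not the paper's). Each entry P[k, k+h] sums, over the feasible values of l, the probability that l available links disappear and l+h unavailable links appear; the stationary distribution pi_k and the mean number of neighbors then follow by power iteration.

```python
import numpy as np
from math import comb

def degree_chain_matrix(K, P_a, P_u):
    """Transition matrix of the node-degree chain, Eqs. (3)-(8).
    P[k, k + h] is the probability of going from k to k + h available
    links (out of K possible links) in one time unit."""
    P = np.zeros((K + 1, K + 1))
    for k in range(K + 1):
        K_u = K - k                                   # currently unavailable links
        for h in range(-k, K_u + 1):                  # net change in degree
            prob = 0.0
            for l in range(k + 1):                    # l available links disappear
                if 0 <= l + h <= K_u:                 # l + h unavailable links appear
                    prob += (comb(k, l) * P_u**l * P_a**(k - l)          # Eq. (3)
                             * comb(K_u, l + h) * P_a**(l + h)
                             * P_u**(K_u - l - h))                        # Eq. (6)
            P[k, k + h] = prob                        # Eq. (7)/(8)
    return P

def stationary_distribution(P, iters=50000):
    """Stationary distribution pi_k of the degree chain (power iteration)."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        nxt = pi @ P
        if np.allclose(nxt, pi, atol=1e-14):
            break
        pi = nxt
    return pi

# Example use for the setting of Section 4 (K = 105 links per node):
# P = degree_chain_matrix(105, P_a, P_u)
# pi = stationary_distribution(P)
# mean_neighbors = pi @ np.arange(106)   # N(n_tot, n_av)
```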

The number of link changes is equal to the number of link arrivals minus the number of link departures, both within one time unit. We want to calculate the probability mass function of the number of link changes, which can be positive or negative. Let Z be the random variable denoting the number of link changes in one time unit. The probability distribution of Z is:

    p_Z(l) = Pr(Z = l) = \sum_{k=0}^{K} P_{k,k+l} \, \pi_k    (9)

where l is an integer between -K and K. We can now obtain the probability mass function of the number of link changes for a node in one time unit.

The actual metric that we would like to calculate is the expected time T for a non-NS node to become an NS node, or vice versa. Let us concentrate on the former case, since the latter can be handled with a similar procedure. Define a new set of random variables Z_1, ..., Z_m representing the number of link changes in the 1st, 2nd, ..., m-th step, respectively. The total net number of link changes from the initial time to the m-th step is then the sum of the link changes in each step. Let S_m denote this total net number of link changes up to the m-th step: S_m = Z_1 + Z_2 + ... + Z_m. The variable thr_0 represents the difference between the threshold number of available links and the initial number of available links: thr_0 = d_thr - d_0. We first calculate the probability A_m that the degree of the node becomes equal to or greater than the threshold for the first time at the m-th step. This means that the degree of the node is less than the threshold at all steps before the m-th step and equal to or greater than the threshold at the m-th step. Having derived A_m, we then obtain the sought metric T:

    A_m = Pr(S_m \geq thr_0 \;\wedge\; (\forall j \in [0, m)) \; S_j < thr_0)    (10)

    T = \sum_{m=0}^{\infty} m \, A_m    (11)

3.1 First passage time analysis for the nonNS-to-NS case

The original problem is to calculate the expected time for a non-NS node, which initially has a smaller number of available links (d_0) than the threshold (d_thr), to become an NS. This is possible only when the number of available links for this node becomes equal to or greater than the threshold.

Table 1: Transition matrix Q, nonNS-to-NS case
  Q(i, j) = P(i, j)                          if i < d_thr, j < d_thr
  Q(i, d_thr) = \sum_{j=d_thr}^{K} P(i, j)   if i < d_thr
  Q(d_thr, j) = 0                            if j < d_thr
  Q(d_thr, d_thr) = 1

Table 2: Transition matrix Q, NS-to-nonNS case
  Q(i, j) = P(i, j)                          if i > d_thr, j > d_thr
  Q(i, d_thr) = \sum_{j=0}^{d_thr} P(i, j)   if i > d_thr
  Q(d_thr, j) = 0                            if j > d_thr
  Q(d_thr, d_thr) = 1

The metric T above can be obtained as the solution to Eq. (11), for which we use first passage time analysis. First passage time analysis finds the number of transitions made by the process in going from one state to another for the first time; the expected first passage time is the expected number of such transitions. Typically, first passage time analysis is performed for a pair of states. In our case, however, given that a node has fewer available links than the threshold (state i with i = d_0 < d_thr), we want to find the expected first time at which the number of available links becomes equal to the threshold (j = d_thr) or exceeds it (j = d_thr+1, d_thr+2, ..., K).
In other words, we want to calculate the expected first time at which the cumulative number of link changes (arrivals) equals or exceeds a certain number (thr_0 = d_thr - d_0). In the original chain, we combine all states representing a number of available links equal to or greater than the threshold into a single state (d_thr); the entries of the resulting transition matrix are given in Table 1. This new state transition matrix Q is a (d_thr+1) x (d_thr+1) matrix, and the corresponding Markov chain has d_thr+1 states. Let X_0, X_1, X_2, ..., X_m be random variables representing the number of available links at the initial, 1st, 2nd, ..., m-th time units, respectively. The number of transitions made by the process in going from state i to state j for the first time is

    T_{ij} = \min\{m \geq 1 : X_m = j \mid X_0 = i\}    (12)

Let f_{ij}^{(m)} denote the probability that T_{ij} = m. Then

    f_{ij}^{(1)} = q_{ij}    (13)

    f_{ij}^{(m)} = \sum_{k \neq j} q_{ik} \, f_{kj}^{(m-1)}, \quad m \geq 2    (14)

where q_{ij} is the element in the i-th row and j-th column of the matrix Q, i.e., the state transition probability of going from state i to state j in one time unit. The expected first passage time can then be calculated as follows:

    \mu_{ij} = \infty                                   if \sum_{m=1}^{\infty} f_{ij}^{(m)} < 1
    \mu_{ij} = \sum_{m=1}^{\infty} m \, f_{ij}^{(m)}    if \sum_{m=1}^{\infty} f_{ij}^{(m)} = 1    (15)
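As an illustration, the Table 1 construction and the expected first passage time of Eqs. (12)-(15) can be computed as sketched below, assuming the degree-chain matrix P from the previous section is available as a NumPy array. Instead of truncating the infinite sum over the f^{(m)}, the sketch uses the equivalent absorbing-chain linear system mu = (I - Q_t)^{-1} 1, where Q_t is Q restricted to the transient states 0..d_thr-1; this is a standard closed form, not the exact summation procedure spelled out above.

```python
import numpy as np

def q_matrix_nonNS_to_NS(P, d_thr):
    """Table 1: merge every state with >= d_thr available links into a
    single absorbing state labeled d_thr, keeping states 0..d_thr-1."""
    Q = np.zeros((d_thr + 1, d_thr + 1))
    for i in range(d_thr):
        Q[i, :d_thr] = P[i, :d_thr]          # Q(i, j) = P(i, j), j < d_thr
        Q[i, d_thr] = P[i, d_thr:].sum()     # Q(i, d_thr) = sum_{j >= d_thr} P(i, j)
    Q[d_thr, d_thr] = 1.0                    # absorbing threshold state
    return Q

def expected_first_passage_times(Q, d_thr):
    """Expected number of steps to reach the absorbing state d_thr from
    each transient state 0..d_thr-1 (closed form of Eqs. (12)-(15))."""
    Q_t = Q[:d_thr, :d_thr]                  # transitions among transient states
    return np.linalg.solve(np.eye(d_thr) - Q_t, np.ones(d_thr))

# Example: T_NS for initial degree d_0 = 0 and a given threshold d_thr:
# Q = q_matrix_nonNS_to_NS(P, d_thr)
# T_NS = expected_first_passage_times(Q, d_thr)[0]
```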

3.2 First passage time analysis for the NS-to-nonNS case

All the steps applied in the previous section (nonNS-to-NS) remain valid in this case. The only difference is that the P and Q matrices are modified as shown in Table 2.

4 NUMERICAL RESULTS

In our calculations, the total number of layers (n_tot) is 20 and the number of layers representing the available (inner) link states (n_av) is 5. The total number of layers gives the maximum distance, in hexagonal cells, between two mobile nodes in the network. If the distance between two nodes is at most 5 layers, we assume that these two nodes can communicate with each other over an available wireless link. Each layer may contain a different number of link states. For example, layer 0 has only one state <0, 0>, layer 1 has only one state <1, 0>, layer 2 has two states <2, 0> and <1, 1>, and layer 20 has 11 states <20, 0>, <19, 1>, ..., <10, 10>. The total number of states is 121 for all layers (n_tot = 20). The total number of inner states is 12 for the five inner layers (n_av = 5), and thus the total number of outer states is 109 (121 - 12 = 109). In our calculations, the number of mobile nodes is 106, so there are 5,565 bi-directional links in the network in total and 105 bi-directional links for a single node; each of these links can be available or unavailable.

As numerical results, we report the expected time for a non-NS node to become an NS (T_NS(n_tot, n_av)), the expected time for an NS node to become a non-NS (T_nonNS(n_tot, n_av)), and the mean number of neighbors (N(n_tot, n_av)) for a particular node. The condition for being an NS is to have a number of neighbors equal to or greater than the threshold; the condition for being a non-NS is to have a number of neighbors less than the threshold. The expected times and the mean number of neighbors are functions of n_tot and n_av for a fixed N.

4.1 The expected first times from nonNS to NS

The expected first passage times (in time units) are shown in Table 3.

Table 3: Expected times for the nonNS-to-NS case (columns: d_0, d_thr, T_NS(20, 5))

By examining Table 3, it is clear that the expected times from any initial state to a given threshold state are independent of the initial state (the initial number of available links). This result is expected because we used the steady-state probabilities for the availability and unavailability of a given link: regardless of its initial link state, a given link will be available with probability P_a and unavailable with probability P_u in the next time unit. Therefore, the state transition matrix Q has identical elements within each column, and, for a given threshold, the probability mass functions of the first passage times from all states to the threshold state are the same. Since the expected times from any initial state to a given threshold are independent of the initial state, as shown in Table 3, we set the initial number of available links to 0 and obtained the expected times for different threshold values. Various cases for different parameter values are analyzed below.
The model parameters are the number of mobile nodes N, the total number of layers n_tot, and the number of available layers n_av. For all numerical results, N is fixed at 106 and n_av at 5, while n_tot is varied to obtain networks of different density. Here, the density of a network is defined as the number of mobile nodes per hexagonal cell, and the physical area of the network is determined by the total number of layers n_tot. Since N and n_av are fixed, the density depends only on n_tot; we assume that the size of a hexagonal cell is the same in all cases. For n_tot = 20, we call the network medium in terms of its density; we call it sparse for n_tot = 30, sparsest for n_tot = 40, dense for n_tot = 15, and densest for n_tot = 10. Below we present numerical results for these cases, with the comparison of expected times shown in Fig. 1.
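The layer and state counts quoted in the setup above (121 states in total for n_tot = 20, 12 inner states for n_av = 5, 11 states on layer 20) follow from the canonical link states <x, y> with x >= y >= 0 and x + y equal to the layer index. A small sketch reproduces them; the enumeration below is an assumption consistent with the examples given, not code from the paper.

```python
def states_on_layer(n):
    """Canonical link states <x, y> on layer n: x + y = n with x >= y >= 0."""
    return [(n - i, i) for i in range(n // 2 + 1)]

def total_states(n_layers):
    """Number of canonical link states on layers 0..n_layers."""
    return sum(len(states_on_layer(n)) for n in range(n_layers + 1))

print(total_states(20))            # 121 states for n_tot = 20
print(total_states(5))             # 12 inner states for n_av = 5
print(len(states_on_layer(20)))    # 11 states on layer 20: <20,0>, ..., <10,10>
```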

4.1.1 Medium Network I (n_tot = 20 and n_av = 5)

When the threshold is set to one, there are only two states (0, 1) representing the number of available links for a particular node. Since the initial state is zero, at least one time unit is needed to go from state 0 to state 1, plus a small additional expected time; the additional component (0.0002) comes from the non-zero probability of remaining in state 0 for further time units. When the threshold is set to two, there are three possible states (0, 1, 2). The probabilities of staying in states 0 or 1 before entering state 2 are higher in this case, and therefore the expected time is larger than in the case where the threshold is one. From the P matrix, N(20, 5) is calculated as 8.33. We observed that when the threshold is less than or around 10, the expected first times are small; this indicates that the process tends to fluctuate around its expected value. However, when the threshold is greater than 10, the expected first times increase exponentially.

4.1.2 Sparsest Network (n_tot = 40 and n_av = 5)

We used 40 as the total number of layers and 5 as the number of layers representing the available (inner) link states. The steady-state probability P_a that a link is available in the next time unit is much lower than the P_a values of the medium and densest networks. From the P matrix, N(40, 5) is calculated as 2.08, compared to N(20, 5) = 8.33 and N(10, 5) = 34.90. Therefore, T_NS(40, 5) is expected to be greater than T_NS(20, 5) and T_NS(10, 5). Moreover, we could not obtain the expected first times for threshold values greater than 11: since N(40, 5) is about 2, it takes much longer for the process to visit states greater than 11.

Figure 1: The expected times vs. threshold to become an NS in different density networks (curves: densest, dense, normal, sparse, sparsest).

4.1.3 Densest Network (n_tot = 10 and n_av = 5)

We used 10 as the total number of layers and 5 as the number of layers representing the available (inner) link states. The steady-state probability P_a that a link is available in the next time unit is much higher than the P_a values of the medium and sparsest networks. From the P matrix, N(10, 5) is calculated as 34.90, compared to N(20, 5) = 8.33 and N(40, 5) = 2.08. Therefore, T_NS(10, 5) is much lower than T_NS(20, 5) and T_NS(40, 5). Moreover, the expected first times are very close to one for threshold values lower than N(10, 5) = 34.90: one time unit is necessary to leave the initial state, and the probability of reaching the threshold values is very high, as can be seen from the P_a value and the number of nodes in the network.
Since N(10, 5) is about 35, it is expected to take much less time to visit these states than in the medium and sparsest networks.

5 FUTURE WORK

The presented work allows the modeling of ad hoc network schemes that depend on either the degree of a node or the rate of change in the set of a node's links. Thus far we have focused on the former, but the model can easily be extended to calculate the expected stability of a node, e.g., the number of links transitioning between the available and unavailable states. This extension should help model the DSRP more accurately (the stability of a node's links is one of the criteria for joining the backbone), and it should also help analyze the convergence of routing protocols or bandwidth estimation techniques that depend on link stability.

References

[1] M.A. Fecko, U.C. Kozat, S. Samtani, M.U. Uyar, and I. Hökelek. Dynamic survivable resource pooling in mobile ad hoc networks. In Proc. IEEE Int'l Symp. Comput. Commun. (ISCC), Alexandria, Egypt.
[2] U.C. Kozat and L. Tassiulas. Service discovery in mobile ad hoc networks: An overall perspective on architectural choices and network layer support issues. Elsevier Ad Hoc Networks, 2(1).
[3] Y.-C. Tseng, Y.-F. Li, and Y.-C. Chang. On route lifetime in multihop mobile ad hoc networks. IEEE Trans. Mobile Comput., 2(4).
[4] M.U. Uyar, J. Zheng, M.A. Fecko, S. Samtani, and P.T. Conrad. Evaluation of architectures for reliable server pooling in wired and wireless environments. In Li et al., eds., Recent Advances in Service Overlay Networks (special issue), IEEE J. Select. Areas Commun., 22(1).

Prepared through collaborative participation in the Communications & Networks Consortium sponsored by the U.S. Army Research Lab under the Collaborative Technology Alliance Program, Cooperative Agreement DAAD. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation thereon. Copyright (c) 2004 Telcordia Technologies, Inc. and City University of New York. All rights reserved. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Lab or the U.S. Government.
