Model Reduction Via Aggregation for Markovian Models with Application to Occupancy Estimation

Joseph S. Niedbalski, Prashant G. Mehta, and Sean Meyn

Abstract. This paper is concerned with model reduction methods for large Markovian models. The motivation arises from applications involving estimation problems for a random walk on a large graph. The model reduction objective is to aggregate states into super-states to obtain a Markov model on the aggregated state space. Motivated by the success of Hankel norm ideas from linear systems, the proposed methodology is based on the concept of the potential matrix, obtained from a linear equation that may be regarded as a generalization of the Lyapunov equation used in linear systems theory. For model reduction, the entropy metric is proposed as a counterpart of the l2 energy used for linear systems. Several examples are discussed, together with an application of reduced order models to estimation.

I. INTRODUCTION

In many applications the system of interest can be modeled by simple local rules even though the overall system is highly complex. In this paper we consider the simplest such model, described as a random walk on a large graph. Although special, such models arise in a range of applications, including queueing networks, sensor networks, building systems, and molecular systems. The focus of this paper is on a Markovian model of people movement in a large building, with much of the discussion focused on egress. That is, we consider a transient model in which the occupants leave the building following a transient period. Egress may be due to an emergency or the end of a work day. In this case, the Markov model contains a single absorbing state that represents an exit. Such models are commonly used to describe evacuation of buildings, planes, or outdoor walkways [1], [2], [3], [4]. Model reduction is essential in this application since common models explode in complexity with building size and topology.
For example, a natural Markov model for the simple building illustrated in Fig. 1 has 400 states. This may grow into the billions for a large office building model with similar local dynamics. The particular approach to model reduction chosen depends of course on the purpose of the model. In this paper the focus is estimation. For example, during a fire it is valuable to have estimates of occupancy before and during egress to decide where to allocate resources for rescue as well as fire control. Such estimates can be constructed based on sensors scattered throughout the building, and a model of the behavior of the occupants. In the applications envisioned here the number of sensors is far less than the number of states, so estimation error is inevitable. The goal is to carry out model reduction so that any performance degradation in the estimation error is benign. In the Markovian setting of this paper, subject to conditions on the observation process, the system can be described as a hidden Markov model (HMM) [5], [6], [7], [8], [9], [10]. The most natural estimator in this context is the Bayesian estimator, which is optimal with respect to an l2 criterion [7]. A direct application of Bayesian techniques is not practical in many cases due to complexity. A realistic model of a building will contain billions of states to capture congestion and dynamics of individuals.

Fig. 1. Layout of the office model: the sample path of the agent and the sensors are marked, the heavy black lines show the divisions between the corridors, red grid spaces are walls, and the blue grid space is the exit. The agent starts in the bottom-right corner and proceeds toward the exit according to simple probabilistic rules.

The authors are with the Coordinated Science Laboratory, University of Illinois at Urbana-Champaign, 1308 West Main Street, Urbana, IL. niedbals@uiuc.edu, mehtapg@uiuc.edu, meyn@control.csl.uiuc.edu
However, it may be acceptable to estimate occupancy with respect to a coarse performance criterion. For example, it is important to have estimates of how many people are in a corridor, but not of the exact location of the individuals. In this case it is reasonable to search for a simpler model that can be used to construct a state estimator. Perhaps the most natural approach to model reduction is to aggregate states into super-states to obtain a Markov model on the aggregated state space. This approach can be justified using the notion of nearly completely decomposable Markov chains introduced by Simon and Ando [11]. This is the background for the decomposition results using singular perturbation in [12] (see also [13], [14]). In recent literature, multi-scale approximations of dynamical systems have been justified using stochastic averaging [15], [16], [17]. For a certain restricted class of Markov chains, aggregation has also been considered using the concept of lumpability [18]. This approach can offer substantial computational savings in estimation [19], [20], as well as in control.

An important new tool for understanding multi-scale phenomena is based on the spectral theory of Markov models. This technique is closely related to spectral graph theory, which has a longer history [21], [22], [23]. Spectral graph theory is a set of techniques to decompose a large graph for various purposes. One approach is based on the construction of a Markov transition matrix. Upon computing its associated second eigenvector, states with similar eigenvector values are aggregated to obtain a partition of the graph. Because of the enormous flexibility of graphical models, this technique has broad application. A recent example is the work of [24], in which a database of photographs is treated as a graph for the purposes of computer-aided facial recognition. Although a graph is an inherently static object, spectral theory is a well-known component of the dynamical theory of Markov models. In particular, for a finite state-space ergodic Markov chain the second eigenvalue is precisely the rate of convergence to stationarity. A more recent contribution to the theory of Markov chains is the use of the second eigenvector (or eigenfunction) to obtain intuition regarding dynamics, as well as methods for aggregation in complex models. This technique was introduced as a heuristic in [25], [26], [27], [28] (see also [15], [16]) to obtain a state-space decomposition based on an analysis of the Perron cluster of eigenvalues for the generator of a Markov process. The technique has been applied in diverse settings: [27] considers analysis of the nonlinear chaotic dynamics of Chua's circuit model, [25], [26], [29] concern molecular models, [28] considers model validation for combustion dynamics, and [30] treats transport phenomena in building systems. In each of these papers it is shown through numerical examples that the associated eigenvectors carry significant information regarding dynamics. In particular, their sign structure can be used to obtain a decomposition for defining super-states.
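To make the sign-structure heuristic concrete, the following sketch (our own illustration, not code from the paper) partitions states by the sign of the second right eigenvector of a transition matrix; the toy matrix below, consisting of two weakly coupled 2-state blocks, is an assumption chosen for illustration.

```python
import numpy as np

def sign_partition(P):
    """Aggregate states by the sign of the second right eigenvector of P
    (eigenvalues sorted by decreasing real part)."""
    w, V = np.linalg.eig(P)
    order = np.argsort(-w.real)           # largest eigenvalue first
    v2 = V[:, order[1]].real              # second eigenvector
    return v2 >= 0.0                      # boolean super-state membership

# two weakly coupled 2-state blocks; the second eigenvector separates them
P = np.array([[0.89, 0.10, 0.01, 0.00],
              [0.10, 0.89, 0.00, 0.01],
              [0.01, 0.00, 0.89, 0.10],
              [0.00, 0.01, 0.10, 0.89]])
groups = sign_partition(P)
```

Any global sign flip of the eigenvector flips the labels but not the resulting two-way partition.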
Theory to support this aggregation technique is contained in [29], based on a change of measure similar to the one used to establish large deviations results. In this way, these results may be regarded as an extension of the classical Wentzell-Freidlin theory [31], [32], [33]. The first goal of this paper is to extend this prior work to the application considered here: the main result of [29] requires exponential ergodicity of the Markov model, while the model considered here is transient. Next we consider refinements of this theory to treat state estimation. We find that a natural metric is based on entropy.

The aggregation techniques proposed in this paper are related to the classical Hankel norm based approach for model reduction in linear systems [34], [35]. In fact, a counterpart of this approach is proposed for model reduction of HMMs in the recent thesis work of Kotsalis [36] (see also [37]). Kotsalis proposes a balanced truncation algorithm that uses Hankel norm ideas to eliminate the states contributing least to the system. In order to apply these ideas, the HMM is associated with a jump linear system (JLS). After reducing the JLS, an HMM of the same order is obtained by solving a non-convex optimization problem. For the Markov model considered in this paper, model reduction is based on the potential matrix. This is the solution to a linear equation that may be regarded as a generalization of the Lyapunov equation used in linear systems theory. The potential matrix is a general-purpose tool. However, to evaluate and refine a particular algorithm for estimation, which is a central motivation of this paper, we must choose a particular metric. We argue that just as energy (captured using an l2 norm) is used as a metric to reduce the number of states in linear systems, uncertainty (captured using entropy) is a natural metric for model reduction in Markov models.
We find that entropy is closely tied to the estimation problems of interest here, and that this metric is conveniently analyzed through the potential matrix. Both the potential matrix and the entropy metric are very different from the considerations of [36], where the idea was to map an HMM into a JLS with i.i.d. input.

The remainder of the paper is organized as follows. In Sec. II, we describe model reduction using generalizations of observability ideas from linear systems. In Sec. III, we present several examples with Markov models. Sec. IV presents the results on reduced order estimation for the problem described in Fig. 1. Finally, in Sec. V, conclusions and directions for future research are summarized.

II. MODEL REDUCTION

A. Observability of Linear Systems

Consider a stable discrete-time linear system,

x(t + 1) = Ax(t),  y(t) = Cx(t),  (1)

where x ∈ R^m is the state, y ∈ R^l is the output (with l ≤ m), and A has eigenvalues strictly inside the unit circle. We briefly summarize the concept of observability that is used for model reduction purposes (see, e.g., [34]). In particular, starting from an arbitrary initial condition x_0 ∈ R^m, one considers the output sequence:

x_0 → {y(t)}_{t=0}^∞.  (2)

By considering the l2 energy of this sequence:

E_{x_0}({y(t)}) = Σ_{t=0}^∞ ⟨CA^t x_0, CA^t x_0⟩,  (3)

one can assign a measure of observability to the initial condition. In particular, (suitably normalized) initial conditions x_0 with lower energy E_{x_0} are deemed less observable than those with higher energy. Using Lyapunov theory, this can be made precise as follows. The energy is easily seen to be

E_{x_0} = ⟨x_0, G x_0⟩,  (4)

where G is the solution to the Lyapunov equation

A^T G A − G = −C^T C.  (5)

Since G is positive definite and symmetric, Eq. (4) defines the so-called observability ellipsoid, whose principal axes can be used to visualize energy. These axes also provide coordinates for carrying out model reduction.
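As a numerical aside (our own sketch; the function name and the small test system are assumptions, not from the paper), the Gramian G of Eq. (5) can be obtained by vectorizing the Lyapunov equation, after which the energy (4) is a quadratic form:

```python
import numpy as np

def observability_gramian(A, C):
    """Solve A' G A - G = -C' C by vectorization:
    (I - A'⊗A') vec(G) = vec(C' C)."""
    m = A.shape[0]
    M = np.eye(m * m) - np.kron(A.T, A.T)
    vecG = np.linalg.solve(M, (C.T @ C).reshape(-1, order="F"))
    return vecG.reshape((m, m), order="F")

# energy of Eq. (4) for a small stable system
A = np.array([[0.5, 0.1],
              [0.0, 0.3]])
C = np.array([[1.0, 0.0]])
G = observability_gramian(A, C)
x0 = np.array([1.0, 2.0])
energy = float(x0 @ G @ x0)   # equals the series (3)
```

The quadratic form ⟨x_0, G x_0⟩ agrees with a long truncation of the series (3), since the spectral radius of A here is 0.5.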

B. Potential Matrix for a Markov Model

Consider a random walk on a large graph with nodes and a single absorbing state. This model is used to describe the motion of a single agent in a building whose position at time t is denoted by a random variable X(t). A sequence of random variables {X(1), X(2), ..., X(n)} is denoted by X_1^n. The initial condition is denoted by x_0. The transition matrix is defined for each i, j via

P_{ij} = Prob(X(t + 1) = d_j | X(t) = d_i),  (6)

where D = {d_i}_{i=1}^M are the nodes of the graph. It is assumed that the absorbing state d belongs to D, where by absorbing we mean that P(d, d) = 1. When considering model reduction there are two different settings of interest: 1) when there are no sensors in the building, so the description is a Markov chain; 2) when there are sensors in the building, so the description is a hidden Markov model, for which a suitable emission matrix O is defined. In this paper, we focus on a model reduction framework without sensors.

Suppose that x_0 and x_0' are two initial conditions for this model. In our application to egress, the initial conditions correspond to two different possibilities for the initial position of the agent in the building. These initial conditions are similar if their resulting probabilistic behavior is similar. One way to quantify this is through the potential matrix,

G = Σ_{t=0}^∞ P^t,  (7)

where P^t is the t-fold matrix product of P, restricted to the transient (non-absorbing) states so that the sum converges. We have P^t_{ij} = Prob(X(t) = d_j | X(0) = d_i), and hence the potential matrix has the following interpretation:

G_{ij} = E[number of times X(t) = d_j prior to exit | X(0) = d_i].

The potential matrix is the solution to a linear equation analogous to the Lyapunov equation (5),

G − PG = I.  (8)

This suggests a natural approach to decomposition of the model. Let µ_{x_0}(j) denote G_{ij} for i = x_0 and j = 1, ..., M. µ_{x_0} is also the solution to the equation

µ_{x_0} − µ_{x_0} P = δ_{x_0},  (9)

where δ_{x_0} is the Dirac delta measure supported on the initial condition x_0.
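Numerically, Eqs. (7)-(8) amount to a single linear solve, G = (I − Q)^{-1}, where Q is P restricted to the transient states. A minimal sketch (the two-state toy chain below is our own assumption, not from the paper):

```python
import numpy as np

def potential_matrix(Q):
    """G = sum_t Q^t = (I - Q)^{-1}; G[i, j] is the expected number of
    visits to state j before absorption, starting from state i (Eq. (7))."""
    n = Q.shape[0]
    return np.linalg.solve(np.eye(n) - Q, np.eye(n))

# toy chain: from state 0 stay w.p. 0.5 and exit w.p. 0.5 (exit row dropped);
# from state 1 move to state 0 w.p. 1
Q = np.array([[0.5, 0.0],
              [1.0, 0.0]])
G = potential_matrix(Q)
# row x0 of G is the occupation measure mu_{x0} = delta_{x0} G of Eq. (9)
```

Starting in state 0, the number of visits to state 0 is geometric with mean 1/(1 − 0.5) = 2, which matches G[0, 0].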
It is not difficult to see that the solution is obtained using the potential matrix: with δ_{x_0} and µ_{x_0} regarded as row vectors, we have µ_{x_0} = δ_{x_0} G. If µ_{x_0} ≈ µ_{x_0'}, then from these two initial conditions the mean behavior of the process is very similar. In building examples we find that initial conditions in separate corridors give rise to approximately singular measures µ_{x_0}, µ_{x_0'}, while initial conditions in a common room result in nearly equal values. Examples are provided in the next subsection, following the introduction of entropy, which is used as an enhancement to aggregation and to evaluate estimation performance.

C. Entropy of Markov Sequences

As noted in the introduction, to evaluate an approximate model or an estimation algorithm we require an appropriate metric. We argue that entropy is an analogue of energy that generalizes the Hankel norm approach to model reduction. In particular, much like energy is used for reduction (initial states with low energy are discarded), entropy can be used for aggregation (initial states with a similar level of long-term uncertainty are aggregated). For a given initial condition we denote the entropy of the distribution of the Markov sequence X_1^t by

H(X_1^t | x_0) = −E[log p(X_1^t | x_0)],  (10)

where p(X_1^t | x_0) denotes the joint probability distribution [38]. The limit as t → ∞ is the infinite-horizon entropy considered here,

H_∞(x_0) = lim_{t→∞} H(X_1^t | x_0).  (11)

The limit is finite since the Markov chain is assumed to be absorbed at d with probability one from each initial condition. Eq. (11) can be simplified to

H_∞(x_0) = lim_{t→∞} Σ_{i=1}^t H(X(i) | X_1^{i−1}, x_0) = lim_{t→∞} Σ_{i=1}^t H(X(i) | X(i−1)),  (12)

where the first step is due to the chain rule for entropy and the second step is a result of the Markov assumption [38]. Writing the recursion π_{t+1} = π_t P with π_0 = δ_{x_0}, we have a formula for this entropy:

H_∞(x_0) = −Σ_{t=0}^∞ Σ_{i,j} π_t(i) P_{ij} log(P_{ij})  (13)
         = −Σ_{i,j} µ_{x_0}(i) P_{ij} log(P_{ij}).  (14)

D. Outline of Numerical Examples

In the following two sections, we discuss numerical examples of aggregation and its use in estimation. In Sec. III, we first consider a 1-D example of a symmetric and an asymmetric random walk with a single absorbing state. Next, a small-scale example consisting of two offices with a single exit (representing the absorbing state) is considered. Finally, aggregation for the model described in Fig. 1 is discussed. In Sec. IV, we employ the reduced order models obtained for the purposes of estimation. Performance comparisons between full and reduced order estimation are given.
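As a preview of the 1-D example of Sec. III, Eq. (14) can be evaluated with one linear solve for µ_{x_0}. The sketch below is our own (not code from the paper); it assumes the absorbing exit is dropped, so that Q over the 20 transient cells is substochastic, and restores the exit transition when summing −P log P:

```python
import numpy as np

def line_chain(n=20, eps=0.0):
    """1-D example: from cell i move left w.p. 0.5+eps, right w.p. 0.5-eps;
    the end cells exit instead (the absorbing state's row is dropped)."""
    Q = np.zeros((n, n))
    for i in range(n):
        if i > 0:
            Q[i, i - 1] = 0.5 + eps
        if i < n - 1:
            Q[i, i + 1] = 0.5 - eps
    return Q

def infinite_horizon_entropy(Q, x0):
    """Eq. (14): -sum_{i,j} mu(i) P_ij log P_ij, with mu = delta_{x0}(I-Q)^{-1}
    as in Eq. (9); the exit column is restored as 1 - sum_j Q_ij."""
    n = Q.shape[0]
    mu = np.linalg.solve((np.eye(n) - Q).T, np.eye(n)[x0])
    rows = np.column_stack([Q, 1.0 - Q.sum(axis=1)])   # append exit column
    plogp = np.where(rows > 0, rows * np.log(rows), 0.0)
    return -float(mu @ plogp.sum(axis=1))

H = [infinite_horizon_entropy(line_chain(20, 0.0), x0) for x0 in range(20)]
# for eps = 0: symmetric about the centerline and peaked at the middle (cf. Fig. 3)
```

For ε = 0 every row of the full chain has entropy log 2, so H_∞(x_0) reduces to log 2 times the expected absorption time, the classical gambler's-ruin duration.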

Fig. 2. The 1-D example consists of 20 cells, each of which transitions in one step only to its closest neighbors to the left and right. The leftmost and rightmost cells are the only states that can move directly into the exit, which is an absorbing state.

Fig. 3. The entropy calculated for the 1-D example with varying values of ε. Notice that the point of highest entropy shifts from left to right as ε is increased.

Fig. 4. µ_{x_0} calculated for the 1-D example with ε = 0 (left) and ε = 0.1 (right). Notice that µ_{x_0} is symmetric about the middle of the line in the ε = 0 case.

III. EXAMPLES OF AGGREGATION METHOD

A. 1-D Example

The 1-D example is depicted in Fig. 2, with 20 states and 1 absorbing state. The transition probability from the ith state to the (i−1)th ((i+1)th) state is given by P_{i,i−1} = 0.5 + ε (P_{i,i+1} = 0.5 − ε). The leftmost and rightmost cells are the only states that can move directly into the exit, where the exit is an absorbing state. With ε = 0, the problem is symmetric about the centerline. H_∞(x_0) is calculated by varying x_0 through all of the grid locations; the results are shown in Fig. 3 for various values of ε. For ε = 0, the entropy is largest at the centermost cell and, furthermore, is symmetric about this maximum-entropy point. For ε = 0.1 (biased to move left), the entropy is maximum right of center, and the leftmost cells have smaller entropy than the rightmost cells. Finally, the ε = −0.1 case is a mirror image of the ε = 0.1 case.

Next, we compare the µ_{x_0} vectors for the different initial conditions; see Eq. (9). Fig. 4 depicts µ_{x_0} as a color plot for each possible choice of x_0. For ε = 0, µ_{x_0} is symmetric about the centerline, as seen in Fig. 4. The leftmost cells have the same entropy as the rightmost cells (see Fig. 3), and µ_{x_0} for the leftmost cells is aligned with that of the other leftmost cells and orthogonal to that of the rightmost cells. This suggests aggregating cells about the centerline. Fig. 4 also depicts that with ε = 0.1 (biased to move left), more mass of µ_{x_0} is concentrated to the left of the center of the line. For this case, one would think that just as the uncertainty peak has moved to the right, so should the boundary between the two partitions: aggregate cells 1-12 and 13-20. For ε = −0.1, µ_{x_0} would be a mirror image of the right portion of Fig. 4 about the diagonal.

Fig. 5. The second (left) and third (right) eigenfunctions for the 1-D example.

In addition to the entropy and µ_{x_0}, we also computed the (right) eigenfunctions of the Markov chain P. Fig. 5 displays the second right eigenfunction. The entropy in Fig. 3 and the second eigenfunction have a similar shape. As there are many methods known that use the second eigenfunction to create a partition, this similarity is promising. Additionally, the third eigenfunction, also depicted in Fig. 5, shows a clear sign structure. It is curious that the sign structure is independent of the value of ε.

B. Small-Scale Office

We next consider two adjacent 2×2 grids (offices) with a common exit and the layout depicted in Fig. 6. The motion within each office is described by a Markov chain, whereby the common exit is reached with a small positive probability from the grid locations closest to the exit (top-right for the left office and top-left for the right office). The exit is the absorbing state. The states in the right office do not communicate with the left; however, state 2 (in the left office) can transition to state 5 (in the right office) with a small probability. Table I displays H_∞(x_0) for each grid location. The obvious observation is that the entropy increases as the grid locations get farther from the exit. This is expected because entropy is a measure of uncertainty; with increased distance from the exit comes more uncertainty. Also, one sees that the entropy is higher on average for the left office. This is due to the small amount of leakage from the left office into

the right office. Due to the small probability that an agent could move from the left office into the right office, and thus the increased number of possible paths, there is an increase in entropy. Finally, the support of µ_{x_0} indicates that the correct aggregation would be to aggregate the states in the right office (the support of µ_{x_0} for x_0 = 5, 6, 7, 8).

Fig. 6. Two adjacent offices that share an exit (top-middle grid location). The heavy, blue, vertical line represents the division between the two offices.

TABLE I. Values of H_∞(x_0) and µ_{x_0} for varying x_0.

C. Large-Scale Office Building

In this example, we consider a large-scale problem with the 400 cells seen in Fig. 1 (a model of a single floor in a building). Fig. 7 depicts the entropy plot (H_∞(x_0) for initial conditions on the grid). The only exit in this model (the absorbing state) is located in the top-left cell, and locations with zero entropy correspond to walls. As with the small-scale example, the entropy increases with distance from the exit.

Fig. 8. µ_{x_0} on the geometry of the building model.

Fig. 8 shows µ_{x_0}, where state x_0 is the leftmost cell in the second row. This figure depicts all the possible cells that an agent can visit from the given starting position x_0. Higher values of µ_{x_0}(j) indicate that cell j is visited more often along the possible trajectories. Note that, due to the way in which the Markov matrix defining the dynamics of the system is formed, an agent starting from the given x_0 does not enter the offices along the corridor when attempting to exit the building. In this large building example, entropy can be used to aggregate cells supported by µ_{x_0} in a similar fashion to the previous two examples. The entropies of the cells on the support of µ_{x_0} are displayed in ascending order in Fig. 9.

Fig. 9. The entropy on the support of µ_{x_0}, sorted in ascending order.
By assigning a threshold, one can aggregate the cells into two groups. The graphical interpretation of this aggregation technique is displayed in Fig. 10. For a single threshold, at the mean entropy value, the partition gives 2 super-states. To acquire more super-states, one uses multiple threshold values to split the entropy of the cells on the support of µ_{x_0}; Fig. 10 uses 3 and 8 threshold values for 4 and 9 aggregated states, respectively. One interesting observation regarding the partition with 9 super-states is that cells near the exits of the individual offices are grouped together. Because of this grouping of cells, the super-states would be expected to approximate the system relatively well.

Fig. 7. The entropy defined by Eq. (14) for each grid location. Notice the increased uncertainty at larger distances from the exit.

IV. FULL AND REDUCED ORDER ESTIMATION

In this section, we use aggregation for the purposes of reduced order estimation.
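The threshold-based grouping above, together with the reduced order matrices and the filter recursion of Sec. IV (Eqs. (16)-(18)), can be sketched in a few lines. The helper names and the toy 4-state chain below are our own assumptions, not code from the paper:

```python
import numpy as np

def aggregate_by_entropy(entropy, thresholds):
    """Bracket each cell's entropy between successive thresholds:
    k thresholds -> k+1 super-state labels."""
    return np.searchsorted(np.sort(np.asarray(thresholds)), entropy)

def reduce_markov(P, labels):
    """Eqs. (17)-(18): block-sum P over super-state pairs, then row-normalize
    (a uniform smearing of probability within each super-state)."""
    k = int(labels.max()) + 1
    A = np.zeros((k, P.shape[0]))
    A[labels, np.arange(P.shape[0])] = 1.0   # indicator of super-state membership
    M = A @ P @ A.T                          # sum_{u in c_i, v in c_j} P_uv
    return M / M.sum(axis=1, keepdims=True)  # divide by K_i

def bayes_step(nu, P, O, y):
    """One step of the non-normalized filter recursion, Eq. (16)."""
    return (nu @ P) * O[y, :]

labels = aggregate_by_entropy(np.array([0.2, 1.1, 0.9, 2.5]), [1.0, 2.0])
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.2, 0.2, 0.6, 0.0],
              [0.0, 0.0, 0.5, 0.5],
              [0.0, 0.0, 0.0, 1.0]])
P_red = reduce_markov(P, np.array([0, 0, 1, 1]))
```

For this toy chain, P_red works out to [[0.7, 0.3], [0.0, 1.0]]: transitions within and between the two super-states are pooled and renormalized by the super-state size.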

Fig. 10. Depictions of 2, 4, and 9 aggregated states grouped using entropy on the support of µ_{x_0}. The bold lines indicate the different super-states.

A. Problem Statement

A Markov transition function for the building example in Sec. III-C describes the probability of a single agent moving toward the exit. The objective is to use the sensor observation data to estimate the correct corridor of the agent (see Fig. 1). Within the Markovian modeling framework, sensors are modeled using the observation matrix:

O_{ij} = Prob(s_i | X(t) = d_j),  (15)

where {s_i}_{i=1}^N are the observation states corresponding to N sensors and s_0 corresponds to no observation. O_{ij} represents the conditional probability of observing s_i (the ith sensor firing) given that the agent occupies node d_j. In the following, we discuss Bayesian estimation using full and reduced order Markov models and compare their respective performances.

B. Bayesian Estimation

The non-normalized recursive estimation equation is given by

ν̂_{t+1} = ν̂_t P diag(O_{Y_{t+1}}),  (16)

where ν̂_t is the current estimate, P is the Markov matrix, and diag(O_{Y_{t+1}}) is a diagonal matrix whose diagonal is the row of the observation matrix corresponding to the observation at time t + 1 [7]. A normalized estimate is obtained by normalizing ν̂_{t+1} to be a probability vector; it represents the conditional probability of the agent occupying each node given all of the observations.

C. Reduced Order Estimation

For the purposes of reduced order estimation, super-states are created by aggregating states, as described in Sec. III. Aggregation leads to a coarse graph with nodes {c_i}, for which reduced order Markov and observation matrices are defined. The reduced order Markov matrix, denoted P^red, is obtained using

P^red_{ij} = ( Σ_{u∈c_i} Σ_{v∈c_j} P_{uv} ) / K_i,  (17)

where u ∈ c_i denotes the cells (nodes) of the original graph that have been aggregated to form the super-state c_i, and

K_i = Σ_j Σ_{u∈c_i} Σ_{v∈c_j} P_{uv}  (18)

is a normalization constant. The rationale for using (17)-(18) is that they represent a uniform smearing of probability within any node of the coarse graph. To define a reduced order observation matrix, we follow the same heuristic of uniformly smearing the probability. As a result, there is a probability of detection

o = (# of sensors in c_j) / (# of cells in c_j)  (19)

for an agent in (one of the cells contained in) c_j. This probability is used to construct a reduced order observation matrix O^red with respect to the sensors present. We note that the number of observation states for a single agent can be at most the number of aggregated states, plus the additional unobserved state. Once P^red and O^red are defined, the reduced order estimation is carried out following Eq. (16).

Fig. 11. Estimated probabilities of being located in Corridors 3 or 4. Full order estimation as well as reduced order estimation using 4, 9, and 16 super-states are displayed.

D. Estimation Results

We applied the reduced order model to estimate the correct corridor for the agent as she exits the building, starting from

the location depicted in Fig. 1. The results of this estimation are summarized in Fig. 11. The full order estimation used the original 400-state model, while reduced order estimation was carried out with models consisting of 4, 9, and 16 aggregated states. The full order estimate predicts the correct corridor for almost all time steps. The one exception is where the agent moves at the boundary between corridors 1 and 4 (see Fig. 1). The reduced order estimators, though not as good as the full order estimator, also predict the corridor correctly. Using simulations, it was seen that a 16-state model provided nearly as good an estimate as the 400-state model. A couple of remarks concerning the fine-scale performance of the reduced order estimator: 1) The super-states define the resolution of estimation. In particular, one would not be able to distinguish two corridors if the agent is predicted to be in a super-state that comprises cells from both. 2) The reduced order estimator was observed to fail in its prediction whenever the full order estimator failed. For example, the agent moves at the boundary of two corridors at time t = 18 and none of the estimators is able to pick the correct corridor.

V. CONCLUSION

In this paper, we proposed an aggregation-based model reduction method for large Markovian models. There are natural connections between this method and the Hankel norm based model reduction ideas in linear systems. In particular, the potential matrix is obtained from a generalization of the Lyapunov equation, and the entropy metric was proposed as a counterpart of the l2 energy for linear systems. Several examples were used to demonstrate the intuitive appeal of the proposed method. Finally, the aggregation method was used for the purposes of reduced order estimation of an agent's location as she exits a large building. The initial simulation results presented here are promising.
Future work will focus on performance analysis of the estimator vis-à-vis the proposed aggregation algorithm. Several extensions and modifications of the basic algorithm to account for the presence of 1) sensors and 2) a large number of agents will be considered. We note that the former is closely tied to the HMM model reduction problem, while the explosion of complexity in the presence of a large number of agents represents a key barrier in the application of these methods.

REFERENCES

[1] S. Gwynne, E. Galea, M. Owen, P. Lawrence, and L. Filippidis, A systematic comparison of building EXODUS predictions with experimental data from the Stapelfeldt trials and the Milburn House evacuation, Applied Mathematical Modelling, vol. 29, pp ,
[2] D. Helbing, L. Buzna, A. Johansson, and T. Werner, Self-organized pedestrian crowd dynamics: Experiments, simulations, and design solutions, Transportation Science, vol. 39, no. 1, pp. 1-24, February
[3] S. Sharma and S. Gifford, Using RFID to evaluate evacuation behavior models, in Annual Conference of the North American Fuzzy Information Processing Society (NAFIPS), 2005, pp
[4] T.-S. Shen, ESM: a building evacuation simulation model, Building and Environment, vol. 40, pp ,
[5] L. R. Rabiner, A tutorial on hidden Markov models and selected applications in speech recognition, Proceedings of the IEEE, vol. 77, no. 2, pp , February
[6] D. Bertsekas and J. N. Tsitsiklis, Neuro-Dynamic Programming. Cambridge, MA: Athena Scientific,
[7] R. J. Elliott, L. Aggoun, and J. B. Moore, Hidden Markov Models: Estimation and Control, 1st ed. New York, NY: Springer-Verlag,
[8] Y. Ephraim, Hidden Markov processes, IEEE Transactions on Information Theory, vol. 48, no. 6,
[9] J. Grace and J. Baillieul, Stochastic strategies for autonomous robotic surveillance, IEEE Conference on Decision and Control, December
[10] D. Culler, D. Estrin, and M. Srivastava, Overview of sensor networks, IEEE Computer Society, pp , August
[11] H. Simon and A. Ando, Aggregation of variables in dynamic systems, Econometrica, vol. 29, pp ,
[12] R. G. Phillips and P. V. Kokotovic, A singular perturbation approach to modeling and control of Markov chains, IEEE Transactions on Automatic Control, vol. 26, no. 5,
[13] Z. Pan and T. Basar, H-infinity control of large-scale jump linear systems via averaging and aggregation, International Journal of Control, vol. 72, pp ,
[14] G. G. Yin and Q. Zhang, Discrete-time Markov Chains, 1st ed. Springer,
[15] M. Dellnitz and O. Junge, On the approximation of complicated dynamical behavior, SIAM Journal on Numerical Analysis, vol. 36, pp ,
[16] M. Dellnitz and O. Junge, Set oriented numerical methods for dynamical systems, in Handbook of Dynamical Systems II: Towards Applications, G. I. B. Fiedler and N. Kopell, Eds. World Scientific, 2002, pp
[17] G. Froyland, Extracting dynamical behaviour via Markov models, submitted to Nonlinear Dynamics and Statistics; manuscript available at citeseer.ist.psu.edu/ html.
[18] L. B. White, R. Mahony, and G. D. Brushe, Lumpable hidden Markov models: model reduction and reduced complexity filtering, IEEE Transactions on Automatic Control, vol. 45, no. 12,
[19] M. Huang and S. Dey, Distributed state estimation for hidden Markov models by sensor networks with dynamic quantization, in Proceedings of the 2004 Intelligent Sensors, Sensor Networks and Information Processing Conference (ISSNIP 04), pp ,
[20] S. Dey and I. Mareels, Reduced-complexity estimation for large-scale hidden Markov models, IEEE Transactions on Signal Processing, vol. 52, no. 5,
[21] F. Chung, Spectral Graph Theory. Providence, RI: American Mathematical Society Press,
[22] R. Preis, Analyses and design of efficient graph partitioning methods, Ph.D. dissertation, University of Paderborn, Germany,
[23] B. Hendrickson and R. Leland, An improved spectral graph partitioning algorithm for mapping parallel computations, SIAM Journal on Scientific Computing, vol. 16, no. 2, pp ,
[24] R. Coifman, S. Lafon, A. Lee, M. Maggioni, B. Nadler, F. Warner, and S. Zucker, Geometric diffusions as a tool for harmonic analysis and structure definition of data. Part I: Diffusion maps, Proceedings of the National Academy of Sciences, vol. 102, no. 21, pp ,
[25] C. Schütte, A. Fischer, W. Huisinga, and P. Deuflhard, A direct approach to conformational dynamics based on hybrid Monte Carlo, J. Comput. Phys., Special Issue on Computational Biophysics, vol. 151, pp ,
[26] C. Schütte, W. Huisinga, and P. Deuflhard, Transfer operator approach to conformational dynamics in biomolecular systems, in Ergodic Theory, Analysis, and Efficient Simulation of Dynamical Systems, B. Fiedler, Ed. Springer,
[27] O. Junge and M. Dellnitz, Almost invariant sets in Chua's circuit, Int. J. Bif. and Chaos, vol. 7, pp ,
[28] I. Mezic and A. Banaszuk, Comparison of systems with complex behavior, Physica D, vol. 197, pp ,
[29] W. Huisinga, S. Meyn, and C. Schütte, Phase transitions and metastability in Markovian and molecular systems, Ann. Appl. Probab., vol. 14, no. 1, pp , 2004; presented at the 11th INFORMS Applied Probability Society Conference, New York City, July 25-27.

8 [30] P. G. Mehta, M. Dorobantu, and A. Banaszuk, Graph-based multiscale analysis of building system transport models, in Procs. of American Controls Conference, Minneapolis, 2006, pp [31] A. Bovier, M. Eckhoff, V. Gayrard, and M. Klein, Metastability in stochastic dynamics of disordered mean-field models, Probab. Theory Related Fields, vol. 119, no. 1, pp , [32] A. Bovier and F. Manzo, Metastability in Glauber dynamics in the low-temerature limit: beyond exponential asymptotics, November 2001, Preprint, Weierstrass-Institute Für Angewandte Analysis und Stochastik, Mohrenstrasse 39, Berlin, Germany [33] L. Rey-Bellet and L. E. Thomas, Asymptotic behavior of thermal nonequilibrium steady states for a driven chain of anharmonic oscillators, Comm. Math. Phys., vol. 215, no. 1, pp. 1 24, [34] M. G. Safonov, R. Y. Chiang, and D. Limebeer, Optimal Hankel model reduction for nonminimal systems, IEEE Transactions on Automatic Control, vol. 35, no. 4, pp , [35] S. Gugercin and A. C. Antoulas, Model reduction of large-scale systems by least squares, Linear Algebra and its Applications, vol. 415, pp , [36] G. Kotsails, Model reduction for hidden Markov models, Ph.D. dissertation, Massachusetts Institute of Technology, [37] G. Kotsalis, A. Megretski, and M. A. Dahleh, A model reduction algorithm for hidden Markov models, Proceedings of the 45 th IEEE Conference on Decision & Control, pp , [38] T. M. Cover and J. A. Thomas, Elements of Information Theory, 1st ed. John Wiley & Sons, Inc., 1991.

More information

898 IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, VOL. 17, NO. 6, DECEMBER X/01$ IEEE

898 IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, VOL. 17, NO. 6, DECEMBER X/01$ IEEE 898 IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, VOL. 17, NO. 6, DECEMBER 2001 Short Papers The Chaotic Mobile Robot Yoshihiko Nakamura and Akinori Sekiguchi Abstract In this paper, we develop a method

More information

COMPUTER-AIDED MODELING AND SIMULATION OF ELECTRICAL CIRCUITS WITH α-stable NOISE

COMPUTER-AIDED MODELING AND SIMULATION OF ELECTRICAL CIRCUITS WITH α-stable NOISE APPLICATIONES MATHEMATICAE 23,1(1995), pp. 83 93 A. WERON(Wroc law) COMPUTER-AIDED MODELING AND SIMULATION OF ELECTRICAL CIRCUITS WITH α-stable NOISE Abstract. The aim of this paper is to demonstrate how

More information

Markov Chain Monte Carlo The Metropolis-Hastings Algorithm

Markov Chain Monte Carlo The Metropolis-Hastings Algorithm Markov Chain Monte Carlo The Metropolis-Hastings Algorithm Anthony Trubiano April 11th, 2018 1 Introduction Markov Chain Monte Carlo (MCMC) methods are a class of algorithms for sampling from a probability

More information

The Important State Coordinates of a Nonlinear System

The Important State Coordinates of a Nonlinear System The Important State Coordinates of a Nonlinear System Arthur J. Krener 1 University of California, Davis, CA and Naval Postgraduate School, Monterey, CA ajkrener@ucdavis.edu Summary. We offer an alternative

More information

Machine Learning for OR & FE

Machine Learning for OR & FE Machine Learning for OR & FE Hidden Markov Models Martin Haugh Department of Industrial Engineering and Operations Research Columbia University Email: martin.b.haugh@gmail.com Additional References: David

More information

WE consider finite-state Markov decision processes

WE consider finite-state Markov decision processes IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 54, NO. 7, JULY 2009 1515 Convergence Results for Some Temporal Difference Methods Based on Least Squares Huizhen Yu and Dimitri P. Bertsekas Abstract We consider

More information

Hybrid HMM/MLP models for time series prediction

Hybrid HMM/MLP models for time series prediction Bruges (Belgium), 2-23 April 999, D-Facto public., ISBN 2-649-9-X, pp. 455-462 Hybrid HMM/MLP models for time series prediction Joseph Rynkiewicz SAMOS, Université Paris I - Panthéon Sorbonne Paris, France

More information

Towards inference for skewed alpha stable Levy processes

Towards inference for skewed alpha stable Levy processes Towards inference for skewed alpha stable Levy processes Simon Godsill and Tatjana Lemke Signal Processing and Communications Lab. University of Cambridge www-sigproc.eng.cam.ac.uk/~sjg Overview Motivation

More information

Definition A finite Markov chain is a memoryless homogeneous discrete stochastic process with a finite number of states.

Definition A finite Markov chain is a memoryless homogeneous discrete stochastic process with a finite number of states. Chapter 8 Finite Markov Chains A discrete system is characterized by a set V of states and transitions between the states. V is referred to as the state space. We think of the transitions as occurring

More information

Almost Sure Convergence of Two Time-Scale Stochastic Approximation Algorithms

Almost Sure Convergence of Two Time-Scale Stochastic Approximation Algorithms Almost Sure Convergence of Two Time-Scale Stochastic Approximation Algorithms Vladislav B. Tadić Abstract The almost sure convergence of two time-scale stochastic approximation algorithms is analyzed under

More information

CS145: Probability & Computing Lecture 18: Discrete Markov Chains, Equilibrium Distributions

CS145: Probability & Computing Lecture 18: Discrete Markov Chains, Equilibrium Distributions CS145: Probability & Computing Lecture 18: Discrete Markov Chains, Equilibrium Distributions Instructor: Erik Sudderth Brown University Computer Science April 14, 215 Review: Discrete Markov Chains Some

More information

TOWARDS AUTOMATED CHAOS VERIFICATION

TOWARDS AUTOMATED CHAOS VERIFICATION TOWARDS AUTOMATED CHAOS VERIFICATION SARAH DAY CDSNS, Georgia Institute of Technology, Atlanta, GA 30332 OLIVER JUNGE Institute for Mathematics, University of Paderborn, 33100 Paderborn, Germany KONSTANTIN

More information