A Brief Introduction to Markov Chains and Hidden Markov Models
Allen B. MacKenzie
Notes for December 1, 3, & 8, 2015

Discrete-Time Markov Chains

You may recall that when we first introduced random processes, we defined a Markov process to be a process with the following property: for arbitrary times t_1 < t_2 < ... < t_k < t_{k+1},

P[X(t_{k+1}) = x_{k+1} | X(t_1) = x_1, X(t_2) = x_2, ..., X(t_k) = x_k] = P[X(t_{k+1}) = x_{k+1} | X(t_k) = x_k]

In other words, given the present, the future of the process is independent of the past.[1] The consequence of this, the Markov property, is that given any conditional probability mass function or probability density function that is conditioned on several time instants, we can always reduce to a conditional pmf/pdf that is only conditioned on the most recent time instant. Because all information about the future evolution of the process is summarized by the current value, X(t), we refer to X(t) as the state of the process. We have seen many Markov processes already: the sum process, the Poisson process, the random telegraph process, and the Wiener process are all examples of Markov processes.[2]

In this set of notes, we will focus exclusively on Markov processes in which the values of X(t) come from a discrete set, which will usually be mapped to the integers. Such a Markov process is commonly called a Markov chain. In some cases, X(t) may take on values from a finite set, in which case we say that it is a finite-state Markov chain. Furthermore, our interest will be on discrete-time processes, hence we will adopt the notation {X_n} to refer to the process from this point forward. For Markov chains, we typically assume that time starts at n = 0.

[1] When we presented this property before, we were focused primarily on continuous-valued processes, hence we used pdfs. In this development, we will mostly focus on discrete-valued processes, hence we will use pmfs.
[2] The iid process is a trivial example of a Markov process, too.

In general, a Markov chain can be characterized by giving the initial pmf, p_j(0) = P[X_0 = j], and
the one-step state-transition probabilities, p_ij(n) = P[X_{n+1} = j | X_n = i]. However, we will assume that p_ij(n) = p_ij is constant for all times n. A Markov chain with this property is said to be time homogeneous. These one-step transition probabilities are often represented as a
matrix, known as the transition probability matrix,

        [ p_00  p_01  p_02  ... ]
    P = [ p_10  p_11  p_12  ... ]
        [  ...                  ]
        [ p_i0  p_i1  p_i2  ... ]
        [  ...                  ]

Note that each row of this matrix must add up to 1, as the ith row represents the conditional pmf over the next state, given that the current state is i. If the Markov chain is finite with n states, then P will be an n x n matrix.

Some Probability Computations

From this point, it is easy to evaluate the joint pmf for the first n time instants of the process:

P[X_0 = i_0, X_1 = i_1, ..., X_{n-1} = i_{n-1}]
    = P[X_0 = i_0] P[X_1 = i_1 | X_0 = i_0] ... P[X_{n-1} = i_{n-1} | X_{n-2} = i_{n-2}]
    = p_{i_0}(0) p_{i_0 i_1} p_{i_1 i_2} ... p_{i_{n-2} i_{n-1}}

However, to find the joint pmf for arbitrary time instants, we need to find the transition probability for an arbitrary number of steps. Let p_ij(n) be the probability that the chain moves from state i to state j in n steps, that is, p_ij(n) = P[X_{k+n} = j | X_k = i]. First, consider the two-step transition probability, p_ij(2). By the theorem on total probability, we have

p_ij(2) = P[X_{k+2} = j | X_k = i]
        = ∑_l P[X_{k+2} = j, X_{k+1} = l | X_k = i]
        = ∑_l P[X_{k+2} = j | X_{k+1} = l, X_k = i] P[X_{k+1} = l | X_k = i]
        = ∑_l P[X_{k+2} = j | X_{k+1} = l] P[X_{k+1} = l | X_k = i]
        = ∑_l p_lj p_il

This is all well and good, but it is much more convenient to write this in matrix form. Let P(2) be the matrix of two-step transition probabilities. Then the equation above implies that P(2) = P · P = P^2. A similar argument can be constructed to find P(n), the n-step transition probability matrix. Following the same argument as above, we
have that

p_ij(m + n) = P[X_{k+m+n} = j | X_k = i]
            = ∑_l P[X_{k+m+n} = j, X_{k+m} = l | X_k = i]
            = ∑_l p_il(m) p_lj(n)

This equation, which is extremely important in the analysis of Markov chains, is called the Chapman-Kolmogorov equation. Note that in matrix form, it tells us that P(m + n) = P(m) · P(n). We can combine this, using induction, with the fact that P(1) = P, by definition, to obtain P(n) = P^n. For finite-state Markov chains, P^n can be computed numerically to determine the n-step transition behavior.

We can now find the pmf for arbitrary time instants. If n_1 < n_2 < ... < n_k, then

P[X_{n_1} = i_1, X_{n_2} = i_2, ..., X_{n_k} = i_k] = p_{i_1}(n_1) p_{i_1 i_2}(n_2 - n_1) p_{i_2 i_3}(n_3 - n_2) ... p_{i_{k-1} i_k}(n_k - n_{k-1})

where p_{i_1}(n_1) is the state probability at time n_1. Often, it is these state probabilities at time n, p_i(n), in which we are most interested. Given the initial state distribution, ~p(0), these are easy to compute as

p_i(n) = ∑_k p_k(0) p_ki(n)

Moreover, in matrix notation, the probability distribution over states at time n is given by ~p(n) = ~p(0) P^n.

For example, Figure 1 shows a two-state Markov chain. This simple chain has a surprisingly large number of applications, particularly in communications. It is commonly used to represent channel conditions (which are modeled as switching between a "good" state and a "bad" state) and primary user activity (in which a primary user switches between "active" and "idle" states). It has also been used to model the activity of a human speaker on a telephone call, who moves between periods of speaking and periods of silence.

Figure 1: A simple two-state Markov chain (transition probability a from state 0 to state 1 and b from state 1 to state 0, with self-loops 1-a and 1-b).
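The state-distribution computation above, ~p(n) = ~p(0) P^n, is easy to try numerically. The sketch below uses Python with NumPy (these notes use Matlab; Python is just an equivalent here), with the two-state chain of Figure 1, illustrative values a = 0.1 and b = 0.2, and an assumed start in state 0:

```python
import numpy as np

# Two-state chain of Figure 1; a, b, and the initial state are illustrative.
a, b = 0.1, 0.2
P = np.array([[1 - a, a],
              [b, 1 - b]])      # each row is a conditional pmf and sums to 1

p0 = np.array([1.0, 0.0])       # assume the chain starts in state 0

# State distribution at time n: p(n) = p(0) P^n
n = 50
pn = p0 @ np.linalg.matrix_power(P, n)
print(pn)                       # approaches (2/3, 1/3) as n grows
```

For this chain the dependence on the initial distribution decays geometrically (the second eigenvalue of P is 1 - a - b), so even moderate n gives essentially the limiting distribution.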
The transition probability matrix of this Markov chain is given by:

    P = [ 1-a   a  ]
        [  b   1-b ]

As discussed above, we can find the n-step transition probability matrix by finding P^n. And, moreover, if we know the initial state distribution, ~p(0), then we can find the state probability distribution at time n as ~p(n) = ~p(0) P^n.

For example, if we take a = 0.1 and b = 0.2, then we can use Matlab to compute the relevant probabilities above. In particular, in this case the one-step transition probability matrix is

    P = [ 0.9  0.1 ]
        [ 0.2  0.8 ]

Using Matlab (or equivalent), we can find arbitrary powers of P. If we do, we will see that P(n) = P^n appears to be converging to a matrix in which each row is (2/3, 1/3). More on this shortly.

Stationary and Limiting Probability Distributions

Suppose that we are given a Markov chain with transition probability matrix P. A stationary distribution is a distribution ~p over the state space with the property that ~p = ~p P. Such a distribution is very special because if the initial distribution is ~p, that is, if ~p(0) = ~p, then we will have ~p(n) = ~p at every time instant n. In other words, if the state distribution of the Markov chain is a stationary distribution, then the state distribution will remain the same forevermore. Indeed, a Markov chain whose initial state distribution is a stationary distribution of the chain will be a stationary random process.

For a finite-state Markov chain with n states, we can find the stationary distribution(s) by solving the system of n linear equations given by ~p = ~p P. However, this system of equations will be underconstrained.[3] We need one additional equation, ∑_k p_k = 1, in order to find a solution of ~p = ~p P that is a probability distribution.

[3] Specifically, it is a system of homogeneous linear equations.

A homogeneous linear system of equations, such as the one given by ~p = ~p P, always has a trivial solution, which is the all-zero solution,
but this obviously cannot be scaled to be a probability distribution. If we can find a nonzero solution ~p to the system of homogeneous equations ~p = ~p P with ∑_k p_k < ∞, then q_k = p_k / ∑_j p_j will be a stationary distribution of the Markov chain.

Returning to the example two-state chain above, the homogeneous system of equations for a stationary distribution is given in matrix form as

    [ p_0  p_1 ] = [ p_0  p_1 ] [ 1-a   a  ]
                                [  b   1-b ]

Or, if we write out the system of homogeneous equations, then we have

p_0 = (1-a) p_0 + b p_1
p_1 = a p_0 + (1-b) p_1

These two equations are redundant (both can be rewritten as a p_0 = b p_1). To solve the system, we need the additional equation p_0 + p_1 = 1. Solving the system with this additional equation produces:

p_0 = b / (a + b)
p_1 = a / (a + b)

If we substitute in the values a = 0.1 and b = 0.2, then we find the solution is p_0 = 2/3 and p_1 = 1/3. Recall that these appeared to be the values to which the rows of P(n) = P^n were converging.

A reasonable question to ask at this point is: When will a Markov chain with an arbitrary initial distribution converge to a stationary state distribution? That is, when can we say that lim_{n→∞} ~p(n) = ~p for arbitrary initial distributions ~p(0)? This turns out to be a somewhat complicated question to answer in general. So, in the interest of time, we will answer it for a special case.[4]

To answer this question, we first need to define some additional properties of Markov chains. We say that state j is accessible from state i, written i → j, if for some n ≥ 0, p_ij(n) > 0. That is, if there is some sequence of transitions from i to j that has nonzero probability. We say that states i and j communicate, written i ↔ j, if they are accessible to each other, that is, if i → j and j →
i. A Markov chain in which every state communicates with every other state is said to be irreducible.[5]

[4] This special case turns out to be the core of the general solution, too.
[5] In the event that a Markov chain is not irreducible, it can be decomposed into communicating classes. The analysis of general Markov chains begins with such a decomposition.
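The stationary-distribution recipe described earlier, solving the underconstrained homogeneous system ~p = ~p P together with the normalization ∑_k p_k = 1, can be sketched as follows (in Python/NumPy rather than Matlab; the chain is the two-state example with its a = 0.1, b = 0.2):

```python
import numpy as np

# Stationary distribution: solve pi = pi P together with sum(pi) = 1.
a, b = 0.1, 0.2
P = np.array([[1 - a, a],
              [b, 1 - b]])

n = P.shape[0]
# pi (P - I) = 0 is equivalent to (P - I)^T pi^T = 0.  The rows of this
# system are redundant, so replace one equation with sum(pi) = 1.
A = (P - np.eye(n)).T
A[-1, :] = 1.0
rhs = np.zeros(n)
rhs[-1] = 1.0
pi = np.linalg.solve(A, rhs)
print(pi)                       # (b/(a+b), a/(a+b)) = (2/3, 1/3)
```

The same replace-one-row trick works for any finite-state chain with a unique stationary distribution.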
Suppose that we start a Markov chain in state i. State i is said to be recurrent if the probability that the process returns to state i is 1. That is, state i is recurrent if

f_i = P[ever returning to state i | X_0 = i] = 1

It is easy to show that a state is recurrent if and only if

∑_{n=1}^∞ p_ii(n) = ∞

So, of course, a state is transient if and only if ∑_{n=1}^∞ p_ii(n) < ∞. Furthermore, we can show that if i ↔ j, then i is recurrent if and only if j is recurrent. Thus, we can conclude that if a Markov chain is irreducible, then either all of its states are recurrent or all of its states are transient. Moreover, if an irreducible Markov chain is finite, then all states must be recurrent.

Suppose that we start a Markov chain in a recurrent state i at time 0, X_0 = i. Let T_i(1) be the first time n_1 > 0 that the chain returns to state i. That is, T_i(1) = n_1 is the smallest number such that X_{n_1} = i. Subsequently, let T_i(k) be the time between the (k-1)th return to state i and the kth return to state i. In other words, the chain returns to state i at times T_i(1), T_i(1) + T_i(2), T_i(1) + T_i(2) + T_i(3), .... The T_i form an iid sequence, since each return time is independent of previous return times.

The proportion of time spent in state i after k returns to the state is given by

k / (T_i(1) + T_i(2) + T_i(3) + ... + T_i(k))

By the strong law of large numbers, though, we know that

(T_i(1) + T_i(2) + T_i(3) + ... + T_i(k)) / k → E[T_i]

Hence, the long-term proportion of time spent in state i is given by q_i = 1/E[T_i]. If q_i > 0, then we say that state i is positive recurrent. If q_i = 0, then we say that the state is null recurrent. Note that null recurrent states are a bit odd: the probability of returning to them is 1, but the expected time between visits is infinite!
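The renewal identity q_i = 1/E[T_i] can be checked by simulation. A small sketch, again using the two-state chain with illustrative parameters (a fixed seed keeps the run reproducible):

```python
import numpy as np

# Estimate the long-run fraction of time spent in state 0; by the renewal
# argument this should approach 1/E[T_0], which equals b/(a+b) = 2/3 here.
rng = np.random.default_rng(0)
a, b = 0.1, 0.2
P = np.array([[1 - a, a],
              [b, 1 - b]])

steps = 100_000
state = 0
visits0 = 0
for _ in range(steps):
    visits0 += (state == 0)
    state = rng.choice(2, p=P[state])   # draw the next state from row `state`

frac0 = visits0 / steps
print(frac0)                            # close to 2/3
```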
If i ↔ j, then i is positive recurrent if and only if j is positive recurrent. Thus, if {X_n} is an irreducible Markov chain, we can say that the long-term proportion of time spent in state i is q_i. Further, for any irreducible Markov chain, ~q will satisfy ~q = ~q P. And, if the Markov chain is positive recurrent, then we will have ∑_i q_i = 1, and so ~q will be a stationary distribution of the chain.

The structure of the state transition probabilities can impose periodicity on the Markov chain. We say that a state i has period d if it can
only reoccur at times that are multiples of d, and d is the largest integer with this property. That is, p_ii(n) = 0 whenever n is not a multiple of d. We say that state i is aperiodic if it has period 1 (and periodic if it has period d > 1). Further, we can show that if i ↔ j, then i and j will have the same period. Thus, we say that an irreducible Markov chain is aperiodic if any of its states is aperiodic (because this will imply that all of its states are aperiodic).

This brings us to the key result of this section, and arguably the key result of Markov chain theory: If {X_n} is an irreducible, aperiodic Markov chain, then exactly one of the following assertions holds:

1. All states are transient or all states are null recurrent; p_ij(n) → 0 as n → ∞ for all i and j; and no stationary distribution over the states exists.

2. All states are positive recurrent; there exists a unique stationary distribution ~p; and p_ij(n) → p_j as n → ∞ for all i and j.

Returning to our prior example, the two-state example is an irreducible, aperiodic Markov chain. Because it is irreducible and finite, we can immediately conclude that it is positive recurrent. In this case, the theorem tells us that there exists a unique stationary distribution, ~p, which we already found to be ~p = (2/3, 1/3), and, moreover, that P(n) converges to the matrix in which each row is ~p. This is what our numerical example (with a = 0.1 and b = 0.2) suggested earlier.

For a more complex example, consider the one-sided random walk shown in Figure 2. This chain is clearly irreducible and aperiodic, thus it is appropriate to apply the theorem above. We proceed by first attempting to find a stationary distribution. In particular, we can show that a solution to the system of homogeneous equations ~p = ~p P must satisfy

p_k = (p/(1-p))^k p_0 for k = 1, 2, 3, ...

For ~p to be a stationary distribution, though, it must also be the case that ∑_{k=0}^∞ p_k = 1. But we have already seen that {p_k} forms a
geometric sequence. We can conclude that it is possible to find a stationary distribution only if p/(1-p) < 1; that is, only if p < 1/2. Thus, by applying the theorem, we have that when p < 1/2, the Markov chain is positive recurrent and the limiting distribution is the unique stationary distribution. When p ≥ 1/2, according to the theorem, the Markov chain is either transient or null recurrent.[6]

[6] It is possible to show that the chain is null recurrent if and only if p = 1/2. This represents the somewhat general conclusion that null recurrence occurs on the knife edge between positive recurrence and transience.
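For the one-sided random walk, the geometric stationary distribution can be verified numerically. The sketch below assumes the common convention that the walk holds at state 0 with probability 1-p (the figure is not reproduced here, so this boundary behavior is an assumption), takes p = 0.3 < 1/2, and checks that pi_k = (1-r) r^k with r = p/(1-p) satisfies ~pi = ~pi P on a truncated state space:

```python
import numpy as np

# One-sided random walk, truncated to states 0..K-1 for a numerical check.
# Assumption: from state 0 the walk stays put with probability 1-p.
p = 0.3
r = p / (1 - p)                     # r < 1, so a stationary distribution exists
K = 200
pi = (1 - r) * r ** np.arange(K)    # geometric candidate, pi_k = (1-r) r^k

P = np.zeros((K, K))
P[0, 0] = 1 - p
for k in range(K - 1):
    P[k, k + 1] = p                 # step up with probability p
    if k > 0:
        P[k, k - 1] = 1 - p         # step down with probability 1-p
P[K - 1, K - 1] = p                 # reflect at the artificial truncation boundary
P[K - 1, K - 2] = 1 - p

print(np.max(np.abs(pi - pi @ P)))  # essentially zero
```

The truncation introduces only an O(r^K) error, which is negligible here.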
Figure 2: A one-sided random walk (upward transitions occur with probability p).

Hidden Markov Models (HMMs)

Suppose that {X_n} is a time-homogeneous, discrete-time Markov chain, like those discussed in the previous two sections, with transition probability matrix P and initial probability distribution ~p(0). However, suppose that instead of observing the Markov chain itself, we observe a sequence of signals {Y_n}, which are correlated with the underlying chain; we are not able to observe the chain directly. In particular, suppose that the Markov chain has N_s states and that there are N_o different possible signals (observations) that can be observed. Furthermore, suppose that the conditional probability of observing a particular signal is P(Y_n = l | X_n = x) = b_{x,l}, where B is an N_s x N_o observation matrix and where Y_n is conditionally independent of X_k and Y_k for all k ≠ n.

The parameters of this model are ~p(0), P, and B. If these parameters are known, then it is possible to write down a complete pmf for the model. In particular,

p(~x, ~y | ~p(0), P, B) = p_{x_0}(0) ∏_{t=0}^{T-1} p_{x_t, x_{t+1}} ∏_{t=0}^{T} b_{x_t, y_t}

The only thing about this pmf that is even moderately interesting is the fact that we have explicitly written the dependence on the parameters; otherwise, it is exactly what you would expect given the description of the model above.

There are three primary problems that are of interest with regard to HMMs:

1. Given the observed data and the parameters ~p(0), P, and B, compute the conditional distribution of the state.

2. Given the observed data and the parameters ~p(0), P, and B, compute the most likely sequence of hidden states.

3. Given the observed data, compute the maximum likelihood (ML) estimate of ~p(0), P, and B.[7]

[7] In this problem, it is assumed that N_s and N_o are known.

The first and second problems are basic probability computations. They could be solved using conventional methods for computing
probabilities that we have discussed all semester. However, such approaches would have high computational complexity. Well-known recursive algorithms provide a much more efficient solution.

In the case of the first problem, the objective is to compute, given the observations, either the probability distribution over states or the probability associated with a particular state transition from time t to time t+1. These probabilities may be computed either based on past observations (similar to the case of causal filtering, seen previously) or based on both past and future observations (similar to the case of smoothing, described previously). Interestingly, the computation in this case can be carried out using a well-known algorithm known as the forward-backward algorithm or (especially in coding theory, where this problem also arises) as the BCJR algorithm.

The second problem asks a slightly different question. Namely, it asks what the most likely sequence of states {X_n} is, given the observations {Y_n}. The Viterbi algorithm is a well-known, computationally efficient solution to this problem. Note that, although this problem is related to the first, the desired end result is quite different: one wants to know the single most likely sequence of states that led to a given set of observations.

The third problem is actually the most interesting (and the one most associated with HMMs). In this case, it is assumed that ~p(0), P, and B are unknown. The objective is to use the data to find the best estimate of these system parameters. The algorithm proceeds in a very interesting way, and makes use of the solution to the first problem. Suppose that we could observe X_n, in addition to Y_n. Then, one approach to estimating ~p(0), P, and B would be to find the values of ~p(0), P, and B that maximize p(~x, ~y | ~p(0), P, B). This is maximum likelihood estimation, as we discussed earlier in the term. Since the values of X_n are not actually available, we estimate this probability (actually, the log of this
probability, the log-likelihood) using the conditional expectation, given an estimate of the parameters. That is, given estimates ~p^(k)(0), P^(k), and B^(k), we compute[8]

Q(~p(0), P, B | ~p^(k)(0), P^(k), B^(k)) = E[log p(x, y | ~p(0), P, B) | ~p^(k)(0), P^(k), B^(k)]

[8] Obviously, if we can compute the probability distribution over states at time t given the observations, ~p(t), then the probability of the transition from state i to state j from time t to time t+1 will be simply p_i(t) p_ij.

Then, we find the values of ~p(0), P, and B that maximize Q(~p(0), P, B | ~p^(k)(0), P^(k), B^(k)), and we let ~p^(k+1)(0), P^(k+1), and B^(k+1) equal these values. Then, we iterate. Finding Q(~p(0), P, B | ~p^(k)(0), P^(k), B^(k)) essentially involves using the forward-backward algorithm to compute the state and transition probabilities that we discussed with respect to the first problem above. Once those probabilities are computed, the form of
Q(~p(0), P, B | ~p^(k)(0), P^(k), B^(k)) is then actually quite simple, so that finding ~p^(k+1)(0), P^(k+1), and B^(k+1) is easy. This particular algorithm for finding the parameters of an HMM is known as the Baum-Welch algorithm. Although it is computationally quite efficient, it can still be quite onerous to compute if N_s or N_o is large or if the number of observations T is large. On the other hand, it is difficult to get good estimates for the parameters if T is small.

For more information, see Chapter 5 of the course notes of Bruce Hajek, available on the web. (See syllabus for details.) His notation is not identical to mine, but he works out the algorithms here in some further detail.
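The first problem above (the conditional state distribution) is where the forward-backward recursions come in. The sketch below is a minimal Python version with illustrative parameters (it is not Hajek's notation), and it double-checks the recursions by brute-force enumeration of all state sequences:

```python
import numpy as np
from itertools import product

# Minimal forward-backward sketch: smoothed posteriors P[X_t = i | y_0..y_{T-1}].
# All parameter values below are illustrative, not from the notes.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])       # transition matrix
B = np.array([[0.8, 0.2],
              [0.3, 0.7]])       # B[x, l] = P(Y_n = l | X_n = x)
p0 = np.array([2/3, 1/3])        # initial state distribution
y = [0, 1, 1, 0]                 # observed signal sequence
T = len(y)

# Forward pass: alpha[t, i] = P[y_0..y_t, X_t = i]
alpha = np.zeros((T, 2))
alpha[0] = p0 * B[:, y[0]]
for t in range(1, T):
    alpha[t] = (alpha[t - 1] @ P) * B[:, y[t]]

# Backward pass: beta[t, i] = P[y_{t+1}..y_{T-1} | X_t = i]
beta = np.ones((T, 2))
for t in range(T - 2, -1, -1):
    beta[t] = P @ (B[:, y[t + 1]] * beta[t + 1])

post = alpha * beta
post /= post.sum(axis=1, keepdims=True)   # smoothed state posteriors

# Brute-force check: enumerate all state sequences and marginalize directly.
brute = np.zeros((T, 2))
for xs in product(range(2), repeat=T):
    pr = p0[xs[0]] * B[xs[0], y[0]]
    for t in range(1, T):
        pr *= P[xs[t - 1], xs[t]] * B[xs[t], y[t]]
    for t in range(T):
        brute[t, xs[t]] += pr
brute /= brute.sum(axis=1, keepdims=True)

print(np.max(np.abs(post - brute)))  # ~0: the recursion matches enumeration
```

The recursions cost O(T N_s^2), versus O(N_s^T) for enumeration, which is the "much more efficient solution" the notes refer to.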
: Spine Approach to Discrete Surviva Anaysis November 4, 005 1 Introduction Athough continuous surviva anaysis differs much from the discrete surviva anaysis, there is certain ink between the two modeing
More informationMaximum likelihood decoding of trellis codes in fading channels with no receiver CSI is a polynomial-complexity problem
1 Maximum ikeihood decoding of treis codes in fading channes with no receiver CSI is a poynomia-compexity probem Chun-Hao Hsu and Achieas Anastasopouos Eectrica Engineering and Computer Science Department
More information6 Wave Equation on an Interval: Separation of Variables
6 Wave Equation on an Interva: Separation of Variabes 6.1 Dirichet Boundary Conditions Ref: Strauss, Chapter 4 We now use the separation of variabes technique to study the wave equation on a finite interva.
More information17 Lecture 17: Recombination and Dark Matter Production
PYS 652: Astrophysics 88 17 Lecture 17: Recombination and Dark Matter Production New ideas pass through three periods: It can t be done. It probaby can be done, but it s not worth doing. I knew it was
More informationarxiv:hep-ph/ v1 15 Jan 2001
BOSE-EINSTEIN CORRELATIONS IN CASCADE PROCESSES AND NON-EXTENSIVE STATISTICS O.V.UTYUZH AND G.WILK The Andrzej So tan Institute for Nucear Studies; Hoża 69; 00-689 Warsaw, Poand E-mai: utyuzh@fuw.edu.p
More informationData Discovery and Anomaly Detection Using Atypicality: Theory
Data Discovery and Anomay Detection Using Atypicaity: Theory Anders Høst-Madsen, Feow, IEEE, Eyas Sabeti, Member, IEEE, Chad Waton Abstract A centra question in the era of big data is what to do with the
More informationPartial permutation decoding for MacDonald codes
Partia permutation decoding for MacDonad codes J.D. Key Department of Mathematics and Appied Mathematics University of the Western Cape 7535 Bevie, South Africa P. Seneviratne Department of Mathematics
More informationMat 1501 lecture notes, penultimate installment
Mat 1501 ecture notes, penutimate instament 1. bounded variation: functions of a singe variabe optiona) I beieve that we wi not actuay use the materia in this section the point is mainy to motivate the
More informationRate-Distortion Theory of Finite Point Processes
Rate-Distortion Theory of Finite Point Processes Günther Koiander, Dominic Schuhmacher, and Franz Hawatsch, Feow, IEEE Abstract We study the compression of data in the case where the usefu information
More informationDavid Eigen. MA112 Final Paper. May 10, 2002
David Eigen MA112 Fina Paper May 1, 22 The Schrodinger equation describes the position of an eectron as a wave. The wave function Ψ(t, x is interpreted as a probabiity density for the position of the eectron.
More information<C 2 2. λ 2 l. λ 1 l 1 < C 1
Teecommunication Network Contro and Management (EE E694) Prof. A. A. Lazar Notes for the ecture of 7/Feb/95 by Huayan Wang (this document was ast LaT E X-ed on May 9,995) Queueing Primer for Muticass Optima
More informationCoded Caching for Files with Distinct File Sizes
Coded Caching for Fies with Distinct Fie Sizes Jinbei Zhang iaojun Lin Chih-Chun Wang inbing Wang Department of Eectronic Engineering Shanghai Jiao ong University China Schoo of Eectrica and Computer Engineering
More informationProvisions estimation for portfolio of CDO in Gaussian financial environment
Technica report, IDE1123, October 27, 2011 Provisions estimation for portfoio of CDO in Gaussian financia environment Master s Thesis in Financia Mathematics Oeg Maximchuk and Yury Vokov Schoo of Information
More informationIntegrating Factor Methods as Exponential Integrators
Integrating Factor Methods as Exponentia Integrators Borisav V. Minchev Department of Mathematica Science, NTNU, 7491 Trondheim, Norway Borko.Minchev@ii.uib.no Abstract. Recenty a ot of effort has been
More informationHomework 5 Solutions
Stat 310B/Math 230B Theory of Probabiity Homework 5 Soutions Andrea Montanari Due on 2/19/2014 Exercise [5.3.20] 1. We caim that n 2 [ E[h F n ] = 2 n i=1 A i,n h(u)du ] I Ai,n (t). (1) Indeed, integrabiity
More informationSEMINAR 2. PENDULUMS. V = mgl cos θ. (2) L = T V = 1 2 ml2 θ2 + mgl cos θ, (3) d dt ml2 θ2 + mgl sin θ = 0, (4) θ + g l
Probem 7. Simpe Penduum SEMINAR. PENDULUMS A simpe penduum means a mass m suspended by a string weightess rigid rod of ength so that it can swing in a pane. The y-axis is directed down, x-axis is directed
More informationThroughput Optimal Scheduling for Wireless Downlinks with Reconfiguration Delay
Throughput Optima Scheduing for Wireess Downinks with Reconfiguration Deay Vineeth Baa Sukumaran vineethbs@gmai.com Department of Avionics Indian Institute of Space Science and Technoogy. Abstract We consider
More informationApplied Nuclear Physics (Fall 2006) Lecture 7 (10/2/06) Overview of Cross Section Calculation
22.101 Appied Nucear Physics (Fa 2006) Lecture 7 (10/2/06) Overview of Cross Section Cacuation References P. Roman, Advanced Quantum Theory (Addison-Wesey, Reading, 1965), Chap 3. A. Foderaro, The Eements
More informationASummaryofGaussianProcesses Coryn A.L. Bailer-Jones
ASummaryofGaussianProcesses Coryn A.L. Baier-Jones Cavendish Laboratory University of Cambridge caj@mrao.cam.ac.uk Introduction A genera prediction probem can be posed as foows. We consider that the variabe
More informationUniprocessor Feasibility of Sporadic Tasks with Constrained Deadlines is Strongly conp-complete
Uniprocessor Feasibiity of Sporadic Tasks with Constrained Deadines is Strongy conp-compete Pontus Ekberg and Wang Yi Uppsaa University, Sweden Emai: {pontus.ekberg yi}@it.uu.se Abstract Deciding the feasibiity
More informationCombining reaction kinetics to the multi-phase Gibbs energy calculation
7 th European Symposium on Computer Aided Process Engineering ESCAPE7 V. Pesu and P.S. Agachi (Editors) 2007 Esevier B.V. A rights reserved. Combining reaction inetics to the muti-phase Gibbs energy cacuation
More informationLimits on Support Recovery with Probabilistic Models: An Information-Theoretic Framework
Limits on Support Recovery with Probabiistic Modes: An Information-Theoretic Framewor Jonathan Scarett and Voan Cevher arxiv:5.744v3 cs.it 3 Aug 6 Abstract The support recovery probem consists of determining
More informationAlgorithms to solve massively under-defined systems of multivariate quadratic equations
Agorithms to sove massivey under-defined systems of mutivariate quadratic equations Yasufumi Hashimoto Abstract It is we known that the probem to sove a set of randomy chosen mutivariate quadratic equations
More informationCS 331: Artificial Intelligence Propositional Logic 2. Review of Last Time
CS 33 Artificia Inteigence Propositiona Logic 2 Review of Last Time = means ogicay foows - i means can be derived from If your inference agorithm derives ony things that foow ogicay from the KB, the inference
More informationSmoothness equivalence properties of univariate subdivision schemes and their projection analogues
Numerische Mathematik manuscript No. (wi be inserted by the editor) Smoothness equivaence properties of univariate subdivision schemes and their projection anaogues Phiipp Grohs TU Graz Institute of Geometry
More informationFFTs in Graphics and Vision. Spherical Convolution and Axial Symmetry Detection
FFTs in Graphics and Vision Spherica Convoution and Axia Symmetry Detection Outine Math Review Symmetry Genera Convoution Spherica Convoution Axia Symmetry Detection Math Review Symmetry: Given a unitary
More informationInductive Bias: How to generalize on novel data. CS Inductive Bias 1
Inductive Bias: How to generaize on nove data CS 478 - Inductive Bias 1 Overfitting Noise vs. Exceptions CS 478 - Inductive Bias 2 Non-Linear Tasks Linear Regression wi not generaize we to the task beow
More informationSUPPLEMENTARY MATERIAL TO INNOVATED SCALABLE EFFICIENT ESTIMATION IN ULTRA-LARGE GAUSSIAN GRAPHICAL MODELS
ISEE 1 SUPPLEMENTARY MATERIAL TO INNOVATED SCALABLE EFFICIENT ESTIMATION IN ULTRA-LARGE GAUSSIAN GRAPHICAL MODELS By Yingying Fan and Jinchi Lv University of Southern Caifornia This Suppementary Materia
More informationPreconditioned Locally Harmonic Residual Method for Computing Interior Eigenpairs of Certain Classes of Hermitian Matrices
MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.mer.com Preconditioned Locay Harmonic Residua Method for Computing Interior Eigenpairs of Certain Casses of Hermitian Matrices Vecharynski, E.; Knyazev,
More informationApproximation and Fast Calculation of Non-local Boundary Conditions for the Time-dependent Schrödinger Equation
Approximation and Fast Cacuation of Non-oca Boundary Conditions for the Time-dependent Schrödinger Equation Anton Arnod, Matthias Ehrhardt 2, and Ivan Sofronov 3 Universität Münster, Institut für Numerische
More informationSequential Decoding of Polar Codes with Arbitrary Binary Kernel
Sequentia Decoding of Poar Codes with Arbitrary Binary Kerne Vera Miosavskaya, Peter Trifonov Saint-Petersburg State Poytechnic University Emai: veram,petert}@dcn.icc.spbstu.ru Abstract The probem of efficient
More informationAge of Information: The Gamma Awakening
Age of Information: The Gamma Awakening Eie Najm and Rajai Nasser LTHI, EPFL, Lausanne, Switzerand Emai: {eie.najm, rajai.nasser}@epf.ch arxiv:604.086v [cs.it] 5 Apr 06 Abstract Status update systems is
More informationarxiv: v1 [math.ca] 6 Mar 2017
Indefinite Integras of Spherica Besse Functions MIT-CTP/487 arxiv:703.0648v [math.ca] 6 Mar 07 Joyon K. Boomfied,, Stephen H. P. Face,, and Zander Moss, Center for Theoretica Physics, Laboratory for Nucear
More informationBASIC NOTIONS AND RESULTS IN TOPOLOGY. 1. Metric spaces. Sets with finite diameter are called bounded sets. For x X and r > 0 the set
BASIC NOTIONS AND RESULTS IN TOPOLOGY 1. Metric spaces A metric on a set X is a map d : X X R + with the properties: d(x, y) 0 and d(x, y) = 0 x = y, d(x, y) = d(y, x), d(x, y) d(x, z) + d(z, y), for a
More informationAsymptotic Properties of a Generalized Cross Entropy Optimization Algorithm
1 Asymptotic Properties of a Generaized Cross Entropy Optimization Agorithm Zijun Wu, Michae Koonko, Institute for Appied Stochastics and Operations Research, Caustha Technica University Abstract The discrete
More informationPhysicsAndMathsTutor.com
. Two points A and B ie on a smooth horizonta tabe with AB = a. One end of a ight eastic spring, of natura ength a and moduus of easticity mg, is attached to A. The other end of the spring is attached
More informationDistributed average consensus: Beyond the realm of linearity
Distributed average consensus: Beyond the ream of inearity Usman A. Khan, Soummya Kar, and José M. F. Moura Department of Eectrica and Computer Engineering Carnegie Meon University 5 Forbes Ave, Pittsburgh,
More informationTwo-sample inference for normal mean vectors based on monotone missing data
Journa of Mutivariate Anaysis 97 (006 6 76 wwweseviercom/ocate/jmva Two-sampe inference for norma mean vectors based on monotone missing data Jianqi Yu a, K Krishnamoorthy a,, Maruthy K Pannaa b a Department
More informationHaar Decomposition and Reconstruction Algorithms
Jim Lambers MAT 773 Fa Semester 018-19 Lecture 15 and 16 Notes These notes correspond to Sections 4.3 and 4.4 in the text. Haar Decomposition and Reconstruction Agorithms Decomposition Suppose we approximate
More informationhttps://doi.org/ /epjconf/
HOW TO APPLY THE OPTIMAL ESTIMATION METHOD TO YOUR LIDAR MEASUREMENTS FOR IMPROVED RETRIEVALS OF TEMPERATURE AND COMPOSITION R. J. Sica 1,2,*, A. Haefee 2,1, A. Jaai 1, S. Gamage 1 and G. Farhani 1 1 Department
More informationLECTURE NOTES 9 TRACELESS SYMMETRIC TENSOR APPROACH TO LEGENDRE POLYNOMIALS AND SPHERICAL HARMONICS
MASSACHUSETTS INSTITUTE OF TECHNOLOGY Physics Department Physics 8.07: Eectromagnetism II October 7, 202 Prof. Aan Guth LECTURE NOTES 9 TRACELESS SYMMETRIC TENSOR APPROACH TO LEGENDRE POLYNOMIALS AND SPHERICAL
More informationOn the Goal Value of a Boolean Function
On the Goa Vaue of a Booean Function Eric Bach Dept. of CS University of Wisconsin 1210 W. Dayton St. Madison, WI 53706 Lisa Heerstein Dept of CSE NYU Schoo of Engineering 2 Metrotech Center, 10th Foor
More informationMath 124B January 17, 2012
Math 124B January 17, 212 Viktor Grigoryan 3 Fu Fourier series We saw in previous ectures how the Dirichet and Neumann boundary conditions ead to respectivey sine and cosine Fourier series of the initia
More informationA Solution to the 4-bit Parity Problem with a Single Quaternary Neuron
Neura Information Processing - Letters and Reviews Vo. 5, No. 2, November 2004 LETTER A Soution to the 4-bit Parity Probem with a Singe Quaternary Neuron Tohru Nitta Nationa Institute of Advanced Industria
More informationGauss Law. 2. Gauss s Law: connects charge and field 3. Applications of Gauss s Law
Gauss Law 1. Review on 1) Couomb s Law (charge and force) 2) Eectric Fied (fied and force) 2. Gauss s Law: connects charge and fied 3. Appications of Gauss s Law Couomb s Law and Eectric Fied Couomb s
More information14 Separation of Variables Method
14 Separation of Variabes Method Consider, for exampe, the Dirichet probem u t = Du xx < x u(x, ) = f(x) < x < u(, t) = = u(, t) t > Let u(x, t) = T (t)φ(x); now substitute into the equation: dt
More informationu(x) s.t. px w x 0 Denote the solution to this problem by ˆx(p, x). In order to obtain ˆx we may simply solve the standard problem max x 0
Bocconi University PhD in Economics - Microeconomics I Prof M Messner Probem Set 4 - Soution Probem : If an individua has an endowment instead of a monetary income his weath depends on price eves In particuar,
More informationFrom Margins to Probabilities in Multiclass Learning Problems
From Margins to Probabiities in Muticass Learning Probems Andrea Passerini and Massimiiano Ponti 2 and Paoo Frasconi 3 Abstract. We study the probem of muticass cassification within the framework of error
More informationMulti-server queueing systems with multiple priority classes
Muti-server queueing systems with mutipe priority casses Mor Harcho-Bater Taayui Osogami Aan Scheer-Wof Adam Wierman Abstract We present the first near-exact anaysis of an M/PH/ queue with m > 2 preemptive-resume
More informationAbsolute Value Preconditioning for Symmetric Indefinite Linear Systems
MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.mer.com Absoute Vaue Preconditioning for Symmetric Indefinite Linear Systems Vecharynski, E.; Knyazev, A.V. TR2013-016 March 2013 Abstract We introduce
More information