Stationary Priors for Bayesian Target Tracking


Brian R. La Cour
Applied Research Laboratories, The University of Texas at Austin
Austin, Texas, U.S.A.

Abstract — A dynamically-based birth/death process is proposed for use in Bayesian single-target tracking. The underlying dynamics are based on an Integrated Ornstein-Uhlenbeck process, which is stationary in velocity but not in position. Target birth and death events are defined in terms of a given spatial region of interest. Target death (or escape) occurs when the dynamics move the target outside this region. Target birth (or entrance) is specified statistically to achieve a desired stationary probability distribution, which then serves as a Bayesian prior. An implementation of this process for both grid-based and particle-based distribution representations is outlined, and numerical examples are provided to illustrate convergence.

Keywords: Bayes procedures, stochastic processes, target detection, target tracking

I. INTRODUCTION

Accurate tracking relies upon modeling the underlying dynamics of moving targets [1], [2]. Typically, such models take the form of stochastic differential equations, with the stochasticity representing a diffusive process noise added to the target's otherwise deterministic motion. The deterministic portion of the model gives rise to dispersion. Together, they allow the tracker to anticipate target motion between temporally separated measurement updates while allowing for additional flexibility in light of inevitable model mismatch. Due to both the dispersive and diffusive nature of the motion, target state uncertainties tend to increase with the time between measurement updates. If such updates occur frequently enough, this uncertainty remains bounded within reasonable limits. In the absence of such measurements, however, the uncertainty tends to grow without bound, and the track will tend to follow random clutter.
Now, in the general Bayesian tracking problem, one is interested in estimating both the number of targets present and their collective kinematic state. The formal Bayes procedure for doing so consists of a series of motion and measurement updates applied to a given initial prior distribution to obtain the current posterior distribution. The choice of prior is often given short shrift and is usually made merely on grounds of convenience. Justification for a particular choice of prior, when sought, is often made by appeal to information theory [3]. Hill and Spall [4], for example, argue that the maximally uninformative prior for Bayesian inference over a sequence of measurements is that which maximizes the Kullback-Leibler entropy between the final posterior and the original prior. Of course, if one considers any other state variable in one-to-one correspondence with the kinematic variables but connected via a non-Lebesgue-measure-preserving mapping, then an altogether different prior may be obtained. Physically, one would expect that the time-evolved posterior, in the absence of measurement updates, should revert back to the starting prior. It is perhaps more natural, then, to think of the prior distribution as instead having a dynamical origin [5]. This paper sets out to develop a target motion model which combines the continuous kinematic evolution with the discrete target number distribution. The corresponding stationary distribution, a mixture of both discrete and continuous components, is then taken to be the prior for the purposes of Bayesian inference. Specifically, consider a target tracking scenario in which, at any time, there is at most one target present in a given spatial region, R. The target's kinematic state at time t, S(t), is taken to be composed of a d-dimensional position, X(t), and velocity, V(t), component; thus, S(t) = [X(t), V(t)].
This is similar in form to a jump Markov system for an Interacting Multiple Model (IMM) tracker [2], [6], wherein target birth and death are treated as random transitions between different target models according to preassigned probabilities. The key difference here is that the transition probabilities are derived from the target dynamics; hence, the process need not be strictly Markovian. Let p_t(m) denote the probability that there are m targets present (m = 0, 1), and let ρ_t(s) denote the probability density that, given a target is present, S(t) = s. Given an arbitrary initial distribution characterized by p_0(m) and ρ_0(s), we seek a physically reasonable stochastic model for the target dynamics such that p_t(m) and ρ_t(s) approach a unique stationary distribution as t → ∞ with no measurement updates. It is this stationary distribution which we shall take to be the Bayesian prior. The outline of the paper is as follows. In Sec. II the preliminary (i.e., without birth and death) target dynamics are defined. Section III extends this model to incorporate a stochastic, but dynamically induced, birth and death process. A solution for the stationary distribution, given the birth statistics, is outlined in Sec. IV. Alternatively, it is shown that, for a desired stationary distribution, the required birth statistics are easily determined. A specific example is worked out wherein the stationary kinematic distribution is uniform in position and Gaussian in velocity. Section V

concerns asymptotic convergence to the stationary distribution, with some rate estimates. Finally, Secs. VI and VII consider specific implementations of the motion model and the results of convergence through simulation. A summary of findings and conclusions is given in Sec. VIII.

II. PRELIMINARY TARGET DYNAMICS

Suppose the target dynamics are given by a set of linear stochastic differential equations of the form

dS(t) = Γ S(t) dt + Σ dW(t),  (1)

where Γ and Σ are 2d × 2d matrices and W(t) is a 2d-dimensional Wiener (Brownian motion) process. (This is an example of a multidimensional Langevin equation.) The matrix Γ gives the basic kinematic relation, giving rise to dispersion, while Σ describes the diffusion process. The solution to this equation gives the time-evolved probability density function (PDF) ρ_t(s). It can be shown that, if S(0) is independent of {W(t) : t ≥ 0}, then the solution is given by

S(t) = e^{Γt} S(0) + ∫_0^t e^{Γ(t−t′)} Σ dW(t′),  (2)

where the distribution of S(0) is given by ρ_0(s). The target is taken to follow an Integrated Ornstein-Uhlenbeck (IOU) process [7], with

Γ = [0, I; 0, −γI] and Σ = [0, 0; 0, σI],  (3)

where I is the d × d identity matrix and the matrices are written in d × d blocks. Conditioned on S(0), the above Itô integral is an independent, zero-mean Gaussian random variable, so that S(t) is Gaussian with mean A(t) S(0) and covariance Q(t), where A(t) = e^{Γt} and Q(t) are of the following form:

A(t) = [I, γ⁻¹(1 − e^{−γt}) I; 0, e^{−γt} I]  (4)
Q(t) = [q₁₁(t) I, q₁₂(t) I; q₁₂(t) I, q₂₂(t) I],  (5)

respectively. The elements of A(t) shall be denoted a₁₁(t), a₁₂(t), ..., while those of Q(t) are given by

q₁₁(t) = (σ²/2γ³) [2γt − 4(1 − e^{−γt}) + (1 − e^{−2γt})]  (6a)
q₁₂(t) = (σ²/2γ²) (1 − e^{−γt})² = q₂₁(t)  (6b)
q₂₂(t) = (σ²/2γ) (1 − e^{−2γt}).  (6c)

Being the sum of two independent random variables, S(t) therefore has probability density

ρ_t(s) = ∫ K_t(s|s′) ρ_0(s′) ds′,  (7)

where the Markov motion kernel, K_t(s|s′), is a Gaussian probability density with mean A(t)s′ and covariance Q(t). For large t, the velocity components of A(t) and Q(t) approach 0 and (σ²/2γ) I, respectively.
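As a concrete aid, Eqns. (4)-(6) may be evaluated numerically; the sketch below (the function name and the NumPy interface are illustrative assumptions, not from the paper) builds A(t) and Q(t) and could be used to sample the motion kernel K_t.

```python
import numpy as np

def iou_transition(t, gamma, sigma, d=2):
    """A(t) and Q(t) for the Integrated Ornstein-Uhlenbeck process,
    Eqns. (4)-(6), with state s = [x, v] in R^(2d)."""
    e = np.exp(-gamma * t)
    a12 = (1.0 - e) / gamma                       # position-velocity coupling
    A = np.block([[np.eye(d), a12 * np.eye(d)],
                  [np.zeros((d, d)), e * np.eye(d)]])
    q11 = sigma**2 / (2 * gamma**3) * (2 * gamma * t - 4 * (1 - e) + (1 - e**2))
    q12 = sigma**2 / (2 * gamma**2) * (1 - e)**2
    q22 = sigma**2 / (2 * gamma) * (1 - e**2)
    Q = np.block([[q11 * np.eye(d), q12 * np.eye(d)],
                  [q12 * np.eye(d), q22 * np.eye(d)]])
    return A, Q

# One-minute update at illustrative submarine-tracking values.
A, Q = iou_transition(60.0, gamma=2e-6, sigma=0.01, d=2)
# Long-time limit: the velocity block of Q tends to (sigma^2 / 2 gamma) I.
A_inf, Q_inf = iou_transition(1e9, gamma=2e-6, sigma=0.01, d=1)
```

The long-time value of the velocity variance recovers the stationary σ²/(2γ) noted above.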
Thus, the velocity distribution of the target asymptotically approaches a stationary Gaussian distribution with zero mean and a root-mean-square speed of v̄ := σ/√(2γ). The linear diffusion term, (σ/γ)² t, in q₁₁(t), however, causes the position to become unbounded, with no stationary distribution possible. In the next section we will consider ways to achieve a truly stationary distribution.

III. TARGET BIRTH/DEATH PROCESS

Since the motion is spatially unbounded, we need to account for the target possibly leaving the area of interest, R. To achieve stationarity, we must also suppose that a new target may enter this area. These considerations suggest that the target population should be modeled as a stochastic birth/death process, which is characterized as follows. Let Φ_t(∅|∅) denote the probability of a target remaining absent at time t, given that none was initially present. If a new target appears at time t, where none was present initially, then the probability density for state s is Φ_t(s|∅). In a similar manner, let Φ_t(∅|s′) be the probability that a target is not present at time t, given that it was present in state s′ initially. Finally, let Φ_t(s|s′) denote the probability density for the target to remain present and be in state s at time t, given that it was present in state s′ initially. (Note that this is not the same as K_t(s|s′) above.) Considering the preliminary target dynamics of the previous section, the probability that a target in state s′ at time 0 will still be in the region of interest at time t > 0 is

1 − Φ_t(∅|s′) = ∫_{R×ℝᵈ} K_t(s|s′) ds.  (8)

Since no more than one target can be present, the population distribution at time t is

p_t(0) = p_0(0) (1 − β_t) + p_0(1) ∫ ε_t(s′) ρ_0(s′) ds′  (9a)
p_t(1) = p_0(0) β_t + p_0(1) ∫ [1 − ε_t(s′)] ρ_0(s′) ds′  (9b)
p_t(m) = 0, m > 1,  (9c)

where β_t := 1 − Φ_t(∅|∅) is the target birth probability and ε_t(s′) := Φ_t(∅|s′) is the conditional target escape probability.
Given that a target is present, the kinematic distribution is given by an IOU process, conditioned on the target being present in R at time t; thus,

ρ_t(s) = p_0(0) b_t(s) + p_0(1) ∫ Φ̃_t(s|s′) ρ_0(s′) ds′,  (10)

where b_t(s) := Φ_t(s|∅) is the kinematic PDF for a newly entered target,

Φ̃_t(s|s′) = K_t(s|s′) 1_{R×ℝᵈ}(s) / [1 − ε_t(s′)]  (11)

is the conditional motion-update kernel, and 1_{R×ℝᵈ}(·) is the indicator function on the set R × ℝᵈ.
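Anticipating the particle implementation of Sec. VI, one coupled update of Eqns. (9) and (10) can be sketched as follows; the interface (propagate, in_region, sample_birth) and all names are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

def birth_death_update(p0, particles, propagate, in_region, sample_birth, beta_t):
    """One coupled update of Eqns. (9)-(10) in particle form: propagate,
    discard escapees, then mix survivors (weight p0[1]) with fresh birth
    samples (weight p0[0])."""
    s = propagate(particles)
    alive = in_region(s)
    esc = 1.0 - alive.mean()                   # Monte Carlo estimate of the escape term
    p1 = p0[0] * beta_t + p0[1] * (1.0 - esc)  # Eqn. (9b)
    survivors = s[alive]                       # assumes at least one survivor
    n = len(particles)
    n1 = int(round(p0[1] * n))                 # survivor share of the new sample
    if len(survivors) >= n1:
        keep = survivors[:n1]
    else:                                      # top up by resampling survivors
        extra = survivors[rng.integers(0, len(survivors), n1 - len(survivors))]
        keep = np.concatenate([survivors, extra])
    births = sample_birth(n - n1)              # birth share, drawn from b_t
    return (1.0 - p1, p1), np.concatenate([keep, births])

# Toy 1-d demo: position-only state on [0, 1) with small diffusion.
p_new, parts = birth_death_update(
    (0.5, 0.5), rng.uniform(0.0, 1.0, 1000),
    propagate=lambda x: x + 0.01 * rng.standard_normal(x.shape),
    in_region=lambda x: (x >= 0.0) & (x < 1.0),
    sample_birth=lambda k: rng.uniform(0.0, 1.0, k),
    beta_t=0.02)
```

Half the new sample is drawn afresh from the birth distribution here only because the mixture weights in Eqn. (10) are p_0(0) = p_0(1) = 1/2 in this toy case.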

IV. STATIONARY DISTRIBUTION

An initial distribution p_0 = p and ρ_0 = ρ will be stationary if and only if the following relations are satisfied:

p(1) = p(0) β_t + p(1) ∫ [1 − ε_t(s′)] ρ(s′) ds′  (12)
ρ(s) = p(0) b_t(s) + p(1) ∫ Φ̃_t(s|s′) ρ(s′) ds′.  (13)

Given β_t and b_t, one may, in theory, solve for p and ρ. Equation (13) is recognized as a Fredholm integral equation of the second kind [8]. By the Fredholm Alternative, a unique solution exists for any b_t, provided p(1) < 1. This solution, in turn, may be approximated via a Neumann series which, to first order in p(1), is

ρ(s) ≈ p(0) [ b_t(s) + p(1) ∫ Φ̃_t(s|s′) b_t(s′) ds′ ].  (14)

Substituting this result into Eqn. (12) and retaining terms that are first order in p(1) gives

p(1) ≈ p(0) β_t + p(1) p(0) ∫ [1 − ε_t(s′)] b_t(s′) ds′.  (15)

The resulting quadratic equation is readily solved, giving p. If, instead, a desired p and ρ are given, we may readily solve for β_t and b_t(s) to obtain

β_t = (p(1)/p(0)) ∫ ε_t(s′) ρ(s′) ds′  (16)
b_t(s) = (1/p(0)) [ ρ(s) − p(1) ∫ Φ̃_t(s|s′) ρ(s′) ds′ ].  (17)

Thus, a desired stationary distribution, characterized by a given p and ρ, may be achieved by selecting the characteristics of newly entering targets, described by β_t and b_t(s), in the manner prescribed by Eqns. (16) and (17). The choice of p and ρ is not entirely arbitrary, however. The value of β_t, for example, is guaranteed to fall within the unit interval only for p(1) ≤ 1/2. Similarly, a large value for p(1) could result in b_t(s) taking on negative values. On the other hand, for p(1) ≪ 1/2 we may approximate b_t(s) by ρ(s). More directly, taking b_t = ρ and β_t as per Eqn. (16) implies a new stationary distribution, characterized by p̃ and ρ̃ and given implicitly by

p̃(1) = p̃(0) β_t + p̃(1) ∫ [1 − ε_t(s′)] ρ̃(s′) ds′  (18)
ρ̃(s) = p̃(0) ρ(s) + p̃(1) ∫ Φ̃_t(s|s′) ρ̃(s′) ds′.  (19)

For small p(1), the true stationary distribution, given by p̃ and ρ̃, will be close to the nominal stationary distribution, given by p and ρ. In practice, then, it is not necessary to determine p̃ and ρ̃ explicitly.

A. Uniform/Gaussian Case

Let p(1) ≤ 1/2 be given and consider the following nominal stationary density:
ρ(s) = c 1_R(x) g(v; 0, v̄² I),  (20)

where c is a normalization constant and g(·; 0, v̄² I) is a Gaussian PDF with mean 0 and covariance v̄² I. The value for β_t will be given by

β_t = (p(1)/p(0)) (1 − α_t),  (21)

where the survival probability, α_t, is determined by

α_t := ∫ [1 − ε_t(s′)] ρ(s′) ds′ = ∫ [∫_{R×ℝᵈ} K_t(s|s′) ds] ρ(s′) ds′
     = ∫_R ∫_R [∫∫ g(s; As′, Q) dv g(v′; 0, v̄² I) dv′] dx′/(2R)^d dx.  (22)

The integral over the velocities reduces as follows:

∫∫ g(s; As′, Q) dv g(v′; 0, v̄² I) dv′ = ∫ g(x; x′ + a₁₂ v′, q₁₁ I) g(v′; 0, v̄² I) dv′ = g(x; x′, r(t)² I),  (23)

where

r(t)² := q₁₁(t) + v̄² a₁₂(t)².  (24)

Now suppose R = ∏_{i=1}^d [ξ_i − R, ξ_i + R], where ξ_i ∈ ℝ and R > 0 for each i = 1, ..., d. Then

α_t = ∏_{i=1}^d (1/2R) ∫_{ξ_i−R}^{ξ_i+R} ∫_{ξ_i−R}^{ξ_i+R} g(x_i; x_i′, r(t)²) dx_i′ dx_i
    = { erf(√2 R/r(t)) − [1 − exp(−2R²/r(t)²)] / [√(2π) R/r(t)] }^d.  (25)

A plot of α_t versus t is shown in Fig. 1 for typical parameter values appropriate for submarine tracking. Since r(t) increases with t, for small t we may approximate α_t using the asymptotic expansion of the error function [9]. Thus, for t ≪ min{1/γ, R/v̄},

α_t ≈ [1 − r(t)/(√(2π) R)]^d ≈ [1 − v̄t/(√(2π) R)]^d.  (26)

This shows that the survival probability is primarily a function of the ratio of the mean drift distance to the region size, as given by v̄t and R, respectively. The target birth PDF, b_t(s), can be determined in a similar manner, using Eqn. (17), but the integral therein does not lend itself to an analytic solution. Nevertheless, one can show that

the PDF separates into the product of a Gaussian PDF in velocity, identical to that of Eqn. (20), and a marginal PDF in position; thus, the position and velocity of the birth particle are statistically independent. Monte Carlo estimates of this marginal for several values of t are shown in Fig. 2. As p(1) → 0, the marginal PDF tends to a uniform distribution, corresponding to the nominal prior. For p(1) > 0 and t small, the PDF tends to weight regions near the boundary more heavily, corresponding intuitively to a target entering the region.

Figure 1. Plot of α_t versus t for d = 2, γ = 2×10⁻⁶ s⁻¹, v̄ = 5 m/s, and several values of R. The solid curves are the exact values, while the dashed curves are the approximation from Eqn. (26).

Figure 2. Plot of Monte Carlo estimates of the marginal birth PDF, ∫ b_t(x, v) dv, versus x for d = 1, p(1) = 0.5, γ = 2×10⁻⁶ s⁻¹, v̄ = 5 m/s, and several values of t. The black line is the nominal stationary PDF, which is uniform.

V. CONVERGENCE

Figure 3. Plot of λ_τ versus p(1) and α_τ. The black curve indicates the locus of points for which λ_τ = 0, where convergence is most rapid.

The previous section showed that p and ρ form a stationary distribution for the choices of β_t and b_t given in Eqns. (16) and (17). It remains, then, to be shown whether, for an arbitrary choice of p_0 and ρ_0, it is true that p_t and ρ_t converge to p and ρ, respectively. Consider the case in which ρ_0 = ρ. Considering p_t to be a column vector, we may write p_t = M_t p_0, where

M_t := [1 − β_t, 1 − α_t; β_t, α_t].  (27)

We immediately note that, since α_t → 0 as t → ∞,

lim_{t→∞} p_t(0) = 1 − (p(1)/p(0)) p_0(0)  (28a)
lim_{t→∞} p_t(1) = (p(1)/p(0)) p_0(0).  (28b)

Clearly, p_t does not converge to p unless p_0 = p. If, however, the motion model is applied iteratively with, say, t = nτ and M_{nτ} approximated by M_τⁿ, a different result is found.
An eigenvalue decomposition of M_τ yields two eigenvalues: unity, corresponding to the stationary state, and

λ_τ := α_τ − β_τ,  (29)

which, since |λ_τ| < 1 for α_τ < 1, gives the rate of convergence to the stationary distribution. Specifically, the time scale for convergence will be about −τ/log λ_τ. Convergence will be fastest when λ_τ = 0; i.e., when p(1) = α_τ. Figure 3 illustrates the dependence of λ_τ on the prior target probability, p(1), and the survival probability, α_τ. Note that there is no convergence (i.e., |λ_τ| = 1) when p(1) = (1 + α_τ)/2. Of course, the true time evolution of p_t is complicated by the fact that it is coupled to that of ρ_t, which varies in time even in the case that ρ_0 = ρ. Thus, λ_τ should be viewed only as a rough guide to the rate of convergence, valid in the case that ρ_t ≈ ρ.

VI. IMPLEMENTATION

For the practitioner, it is useful to detail how one might efficiently implement the above theoretical formulation. We

suppose two possible representations of ρ_t(s). The first is a grid-based representation, wherein the PDF is approximated on a set of discrete points. The second is a particle representation, wherein the PDF is represented by a set of state values and associated weights.

A. Grid-based Representation

In the grid-based representation, the PDF is approximated on a set {C_j} of disjoint grid cells covering the spatial region of interest and the velocity state space. Thus, we write

ρ_t(s) ≈ Σ_j ρ̄_t(j) 1_{C_j}(s) / ∫_{C_j} ds,  (30)

where the probability mass function (PMF), ρ̄_t(·), is

ρ̄_t(j) := ∫_{C_j} ρ_t(s) ds.  (31)

In terms of the initial PMF and the target birth distribution, we find

ρ̄_t(j) = p_0(0) b̄_t(j) + p_0(1) Σ_i Φ̄_t(j|i) ρ̄_0(i),  (32)

where

b̄_t(j) := ∫_{C_j} b_t(s) ds  (33)
Φ̄_t(j|i) := ∫_{C_j} ∫_{C_i} Φ̃_t(s|s′) ds′ ds / ∫_{C_i} ds′.  (34)

Suppose the grid cells take the following form:

C = ∏_{i=1}^d [a_i, b_i) × [u_i, v_i),  (35)

where a_i < b_i and u_i < v_i are the bounds in position and velocity, respectively, for the given cell. For the uniform/Gaussian stationary distribution defined in Sec. IV-A, the birth PMF takes the following form:

b̄_t(j) ≈ ∫_{C_j} ρ(s) ds = ∏_{i=1}^d (1/2R) ∫_{a_i}^{b_i} dx_i ∫_{u_i}^{v_i} g(v; 0, v̄²) dv
       = ∏_{i=1}^d [(b_i − a_i)/4R] [erf(v_i/(√2 v̄)) − erf(u_i/(√2 v̄))],  (36)

where we have assumed that b_i > ξ_i − R/2 and a_i < ξ_i + R/2. The motion update for targets which do not escape is given by the transition probability Φ̄_t(j|i). Although this may be computed directly via numerical integration, the discrete form of the transition probability suggests the following Monte Carlo approach. First, generate a uniformly random set {s′_n}_{n=1}^{N_i} of points in C_i. Next, apply the motion model to each point in this set, resulting in s_n = A(t) s′_n + Q(t)^{1/2} z_n, where the z_n are independent and identically distributed standard Gaussian random variables. Let N_ij denote the number of points in {s_n}_{n=1}^{N_i} contained in C_j. Then N_i′ := Σ_j N_ij will be the number of points that do not escape from the region of interest.
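The Monte Carlo procedure just described can be sketched as follows; the (lo, hi) box representation of a cell and the function name are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_transition(cell_i, cells, A, Q, n=20000):
    """Monte Carlo estimate of the conditional transition probabilities out
    of one source cell.  A cell is a (lo, hi) pair of arrays bounding a box
    in the joint position-velocity space; 'cells' lists every C_j, so a
    point landing outside all of them has escaped the region of interest."""
    lo, hi = cell_i
    s = rng.uniform(lo, hi, size=(n, len(lo)))          # uniform sample of C_i
    L = np.linalg.cholesky(Q)                           # Q(t)^(1/2)
    s = s @ A.T + rng.standard_normal(s.shape) @ L.T    # s_n = A s'_n + Q^(1/2) z_n
    N_ij = np.array([np.all((s >= lo_j) & (s < hi_j), axis=1).sum()
                     for lo_j, hi_j in cells])
    N_surv = N_ij.sum()                                 # survivors, N_i'
    return N_ij / max(N_surv, 1), 1.0 - N_surv / n      # transition PMF, escape prob.

# Two cells partitioning [0, 1) x [-1, 1) in a 1-d position/velocity state.
cells = [(np.array([0.0, -1.0]), np.array([0.5, 1.0])),
         (np.array([0.5, -1.0]), np.array([1.0, 1.0]))]
probs, esc = mc_transition(cells[0], cells, np.eye(2), 1e-8 * np.eye(2))
```

With a near-identity motion model (A = I, tiny Q), essentially all of the mass remains in the source cell, as expected.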
Since some points may escape from the spatial region of interest, it will generally be true that N_i′ ≤ N_i. For N_i sufficiently large, the motion transition probability is

Φ̄_t(j|i) ≈ N_ij / N_i′,  (37)

and the escape probability is

∫ ε_t(s′) ρ_0(s′) ds′ ≈ 1 − Σ_{ij} (N_ij/N_i) ρ̄_0(i).  (38)

B. Particle Representation

In the particle representation, the PDF is approximated by a set of N target states, s_n, and associated weights, w_n. Thus,

ρ_t(s) ≈ Σ_{n=1}^N w_n δ(s − s_n),  (39)

where δ(·) is the impulse function. In what follows we shall assume that the particles have been renormalized so that the weights are, in fact, uniform (i.e., w_n = 1/N). A straightforward application of the motion model to the original particle set, {s′_n}_{n=1}^N, leads to a new set, {s_n}_{n=1}^N. Of these, a subset {s_n}_{n=1}^{N′} will remain in the spatial region of interest. The escape probability is therefore

∫ ε_t(s′) ρ_0(s′) ds′ ≈ 1 − N′/N.  (40)

Removal of the escaped particles gives a sample representative of the conditional motion model; i.e., of ∫ Φ̃_t(s|s′) ρ_0(s′) ds′. To obtain a sample representative of ρ_t(s), we must draw from this latter sample in proportion to p_0(1) and augment with samples from the birth distribution in proportion to p_0(0). This may be achieved as follows. Let N₁ := p_0(1) N and N₀ := N − N₁. If N′ ≥ N₁, then remove the last N′ − N₁ particles from {s_n}_{n=1}^{N′}, resulting in {s_n}_{n=1}^{N₁}. If N′ < N₁, then draw N₁ − N′ particles randomly from {s_n}_{n=1}^{N′} and add these to the sample, resulting in {s_n}_{n=1}^{N′} ∪ {s_{n_k}}_{k=1}^{N₁−N′}. Finally, draw N₀ samples from the birth distribution and add the resulting sample, {ŝ_n}_{n=1}^{N₀}, to the aforementioned sample. The final sample is therefore

{s_n}_{n=1}^N = {s_n}_{n=1}^{N₁} ∪ {ŝ_n}_{n=1}^{N₀}  (41)

if N′ ≥ N₁, and otherwise

{s_n}_{n=1}^N = {s_n}_{n=1}^{N′} ∪ {s_{n_k}}_{k=1}^{N₁−N′} ∪ {ŝ_n}_{n=1}^{N₀}.  (42)

VII. SIMULATION EXAMPLE

As an illustration of convergence to the stationary distribution, consider the following example. Let p(1) = 0.5 and let ρ be given by Eqn. (20) with d = 2, R = 20 km, and v̄ = 5 m/s. The initial distribution is taken to represent a post-measurement detection, with p_0(1) = 1.
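The convergence prediction of Sec. V can be checked for these parameters. The sketch below is illustrative: σ is inferred from v̄ = σ/√(2γ) = 5 m/s with γ = 2×10⁻⁶ s⁻¹, and R = 20 km is taken as the value consistent with the quoted α_τ ≈ 0.988.

```python
import math

def r_of_t(t, gamma, sigma):
    """Position spread r(t) of Eqn. (24): r^2 = q11(t) + vbar^2 a12(t)^2."""
    e = math.exp(-gamma * t)
    a12 = (1.0 - e) / gamma
    q11 = sigma**2 / (2 * gamma**3) * (2 * gamma * t - 4 * (1 - e) + (1 - e**2))
    return math.sqrt(q11 + (sigma**2 / (2 * gamma)) * a12**2)

def alpha_t(t, gamma, sigma, R, d=2):
    """Survival probability of Eqn. (25) for a box of half-width R per axis."""
    r = r_of_t(t, gamma, sigma)
    per_axis = (math.erf(math.sqrt(2.0) * R / r)
                - (1.0 - math.exp(-2.0 * R**2 / r**2)) * r
                / (math.sqrt(2.0 * math.pi) * R))
    return per_axis ** d

tau, gamma, sigma, R, p1 = 60.0, 2e-6, 0.01, 20e3, 0.5
a = alpha_t(tau, gamma, sigma, R)     # survival probability per step
lam = a - p1 / (1 - p1) * (1 - a)     # Eqn. (29), with beta from Eqn. (21)
t_conv = -tau / math.log(lam)         # convergence time scale, in seconds
```

This gives α_τ ≈ 0.988, λ_τ ≈ 0.976, and a time scale of roughly forty minutes, the same order as the rough estimate in the text.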
The density ρ_0 is taken as ρ but with R = 2 km and v̄ = 0.5 m/s. The time-evolved distribution, p_t

and ρ_t, will be determined iteratively with intervals of length τ = 60 sec. Several values of σ will be considered. When ρ_t is in equilibrium, convergence of p_t(1) will occur on the time scale −τ/log λ_τ. For the above parameters, α_τ = 0.988, so λ_τ = 0.977. This implies a convergence time scale on the order of 30 min. Convergence is expected to occur after about three times this value, i.e., in about 90 min. To quantify the convergence of ρ_t, the Boltzmann-Shannon entropy is used, where

H[ρ_t] := −∫ ρ_t(s) log ρ_t(s) ds.  (43)

For the equilibrium case, it can be shown that

H[ρ] = d log(2R) + (d/2) log(2π v̄²) + d/2.  (44)

Convergence will imply that H[ρ_t] → H[ρ] as t → ∞. The results of a simulation using 10⁵ particles over a four-hour period are shown in Figures 4 and 5. Three values of σ were used: 0.05, 0.1, and 0.2 (in units of m/s^{3/2}), which are shown in green, blue, and red, respectively. Figures 4 and 5 show H[ρ_t] and p_t(1), respectively, versus time. From the figures we see that the convergence of p_t lags that of ρ_t, with convergence of the latter occurring in the first hour of elapsed time. The target probability, p_t(1), converged about 90 min later, in rough agreement with the theoretical prediction. Increasing the value of σ, which controls the rate of diffusion, tended to increase the rate of convergence for the entropy and, hence, for the target probability as well.

Figure 4. Plot of the entropy, H[ρ_t], as a function of time. The green, blue, and red curves correspond to σ = 0.05, 0.1, and 0.2 m/s^{3/2}, respectively. The black curve is the equilibrium value.

Figure 5. Plot of the probability that a target is present, p_t(1), as a function of time. The green, blue, and red curves correspond to σ = 0.05, 0.1, and 0.2 m/s^{3/2}, respectively. The black curve is the equilibrium value.

VIII.
CONCLUSION

This paper introduced a single-target stochastic birth/death model based on the Integrated Ornstein-Uhlenbeck process and the notion that target presence is defined in terms of a given spatial region of interest. The probability of a new target entering the region, and its initial kinematic distribution, were chosen so as to obtain a specified stationary prior distribution. The resulting coupled birth/death and kinematic time-evolution process is, in general, not Markovian, but theoretical and numerical analysis suggests that the stationary distribution, for typical model parameters, is indeed asymptotically stable. These investigations suggest that it is necessary for the kinematic portion of the distribution to converge, through the motion-induced dispersion, before the target probability converges to its equilibrium (i.e., prior) value.

ACKNOWLEDGMENT

This work was supported by the Office of Naval Research under Contract No. N4-6-G.

REFERENCES

[1] Y. Bar-Shalom and T. E. Fortmann, Tracking and Data Association. San Diego: Academic Press, 1988.
[2] Y. Bar-Shalom and X.-R. Li, Multitarget-Multisensor Tracking: Principles and Techniques. Storrs, CT: YBS Publishing, 1995.
[3] E. T. Jaynes, "Information theory and statistical mechanics," Physical Review, vol. 106, no. 4, pp. 620-630, 1957.
[4] S. D. Hill and J. C. Spall, "Noninformative Bayesian priors for large samples based on Shannon information theory," in Proceedings of the 26th Conference on Decision and Control. Los Angeles, CA: IEEE, December 1987.
[5] A. Lasota and M. C. Mackey, Chaos, Fractals, and Noise. New York: Springer-Verlag, 1994.
[6] C. Andrieu, M. Davy, and A. Doucet, "Efficient particle filtering for jump Markov systems: Application to time-varying autoregressions," IEEE Transactions on Signal Processing, vol. 51, no. 7, 2003.
[7] L. D. Stone, C. A. Barlow, and T. L. Corwin, Bayesian Multiple Target Tracking. Boston: Artech House, 1999.
[8] D. Porter and D. S. G. Stirling, Integral Equations. Cambridge: Cambridge University Press, 1990.
[9] M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, 9th ed. New York: Dover, 1964.


More information

Introduction to Machine Learning CMU-10701

Introduction to Machine Learning CMU-10701 Introduction to Machine Learning CMU-10701 Markov Chain Monte Carlo Methods Barnabás Póczos & Aarti Singh Contents Markov Chain Monte Carlo Methods Goal & Motivation Sampling Rejection Importance Markov

More information

Computational statistics

Computational statistics Computational statistics Markov Chain Monte Carlo methods Thierry Denœux March 2017 Thierry Denœux Computational statistics March 2017 1 / 71 Contents of this chapter When a target density f can be evaluated

More information

On the Nature of Random System Matrices in Structural Dynamics

On the Nature of Random System Matrices in Structural Dynamics On the Nature of Random System Matrices in Structural Dynamics S. ADHIKARI AND R. S. LANGLEY Cambridge University Engineering Department Cambridge, U.K. Nature of Random System Matrices p.1/20 Outline

More information

Hilbert Space Methods for Reduced-Rank Gaussian Process Regression

Hilbert Space Methods for Reduced-Rank Gaussian Process Regression Hilbert Space Methods for Reduced-Rank Gaussian Process Regression Arno Solin and Simo Särkkä Aalto University, Finland Workshop on Gaussian Process Approximation Copenhagen, Denmark, May 2015 Solin &

More information

Optimal Fusion Performance Modeling in Sensor Networks

Optimal Fusion Performance Modeling in Sensor Networks Optimal Fusion erformance Modeling in Sensor Networks Stefano Coraluppi NURC a NATO Research Centre Viale S. Bartolomeo 400 926 La Spezia, Italy coraluppi@nurc.nato.int Marco Guerriero and eter Willett

More information

LANGEVIN THEORY OF BROWNIAN MOTION. Contents. 1 Langevin theory. 1 Langevin theory 1. 2 The Ornstein-Uhlenbeck process 8

LANGEVIN THEORY OF BROWNIAN MOTION. Contents. 1 Langevin theory. 1 Langevin theory 1. 2 The Ornstein-Uhlenbeck process 8 Contents LANGEVIN THEORY OF BROWNIAN MOTION 1 Langevin theory 1 2 The Ornstein-Uhlenbeck process 8 1 Langevin theory Einstein (as well as Smoluchowski) was well aware that the theory of Brownian motion

More information

MEASUREMENT UNCERTAINTY AND SUMMARISING MONTE CARLO SAMPLES

MEASUREMENT UNCERTAINTY AND SUMMARISING MONTE CARLO SAMPLES XX IMEKO World Congress Metrology for Green Growth September 9 14, 212, Busan, Republic of Korea MEASUREMENT UNCERTAINTY AND SUMMARISING MONTE CARLO SAMPLES A B Forbes National Physical Laboratory, Teddington,

More information

Lecture 2: From Linear Regression to Kalman Filter and Beyond

Lecture 2: From Linear Regression to Kalman Filter and Beyond Lecture 2: From Linear Regression to Kalman Filter and Beyond January 18, 2017 Contents 1 Batch and Recursive Estimation 2 Towards Bayesian Filtering 3 Kalman Filter and Bayesian Filtering and Smoothing

More information

A new iterated filtering algorithm

A new iterated filtering algorithm A new iterated filtering algorithm Edward Ionides University of Michigan, Ann Arbor ionides@umich.edu Statistics and Nonlinear Dynamics in Biology and Medicine Thursday July 31, 2014 Overview 1 Introduction

More information

Self Adaptive Particle Filter

Self Adaptive Particle Filter Self Adaptive Particle Filter Alvaro Soto Pontificia Universidad Catolica de Chile Department of Computer Science Vicuna Mackenna 4860 (143), Santiago 22, Chile asoto@ing.puc.cl Abstract The particle filter

More information

Gaussian processes for inference in stochastic differential equations

Gaussian processes for inference in stochastic differential equations Gaussian processes for inference in stochastic differential equations Manfred Opper, AI group, TU Berlin November 6, 2017 Manfred Opper, AI group, TU Berlin (TU Berlin) inference in SDE November 6, 2017

More information

Nonparametric Bayesian Methods - Lecture I

Nonparametric Bayesian Methods - Lecture I Nonparametric Bayesian Methods - Lecture I Harry van Zanten Korteweg-de Vries Institute for Mathematics CRiSM Masterclass, April 4-6, 2016 Overview of the lectures I Intro to nonparametric Bayesian statistics

More information

Stochastic process for macro

Stochastic process for macro Stochastic process for macro Tianxiao Zheng SAIF 1. Stochastic process The state of a system {X t } evolves probabilistically in time. The joint probability distribution is given by Pr(X t1, t 1 ; X t2,

More information

Interest Rate Models:

Interest Rate Models: 1/17 Interest Rate Models: from Parametric Statistics to Infinite Dimensional Stochastic Analysis René Carmona Bendheim Center for Finance ORFE & PACM, Princeton University email: rcarmna@princeton.edu

More information

Bayesian Estimation of Input Output Tables for Russia

Bayesian Estimation of Input Output Tables for Russia Bayesian Estimation of Input Output Tables for Russia Oleg Lugovoy (EDF, RANE) Andrey Polbin (RANE) Vladimir Potashnikov (RANE) WIOD Conference April 24, 2012 Groningen Outline Motivation Objectives Bayesian

More information

Probabilistic Graphical Models

Probabilistic Graphical Models Probabilistic Graphical Models Brown University CSCI 2950-P, Spring 2013 Prof. Erik Sudderth Lecture 12: Gaussian Belief Propagation, State Space Models and Kalman Filters Guest Kalman Filter Lecture by

More information

Derivation of Itô SDE and Relationship to ODE and CTMC Models

Derivation of Itô SDE and Relationship to ODE and CTMC Models Derivation of Itô SDE and Relationship to ODE and CTMC Models Biomathematics II April 23, 2015 Linda J. S. Allen Texas Tech University TTU 1 Euler-Maruyama Method for Numerical Solution of an Itô SDE dx(t)

More information

CHAPTER V. Brownian motion. V.1 Langevin dynamics

CHAPTER V. Brownian motion. V.1 Langevin dynamics CHAPTER V Brownian motion In this chapter, we study the very general paradigm provided by Brownian motion. Originally, this motion is that a heavy particle, called Brownian particle, immersed in a fluid

More information

Human Pose Tracking I: Basics. David Fleet University of Toronto

Human Pose Tracking I: Basics. David Fleet University of Toronto Human Pose Tracking I: Basics David Fleet University of Toronto CIFAR Summer School, 2009 Looking at People Challenges: Complex pose / motion People have many degrees of freedom, comprising an articulated

More information

CIS 390 Fall 2016 Robotics: Planning and Perception Final Review Questions

CIS 390 Fall 2016 Robotics: Planning and Perception Final Review Questions CIS 390 Fall 2016 Robotics: Planning and Perception Final Review Questions December 14, 2016 Questions Throughout the following questions we will assume that x t is the state vector at time t, z t is the

More information

Bayesian Inference and the Symbolic Dynamics of Deterministic Chaos. Christopher C. Strelioff 1,2 Dr. James P. Crutchfield 2

Bayesian Inference and the Symbolic Dynamics of Deterministic Chaos. Christopher C. Strelioff 1,2 Dr. James P. Crutchfield 2 How Random Bayesian Inference and the Symbolic Dynamics of Deterministic Chaos Christopher C. Strelioff 1,2 Dr. James P. Crutchfield 2 1 Center for Complex Systems Research and Department of Physics University

More information

Bayesian Estimation of DSGE Models 1 Chapter 3: A Crash Course in Bayesian Inference

Bayesian Estimation of DSGE Models 1 Chapter 3: A Crash Course in Bayesian Inference 1 The views expressed in this paper are those of the authors and do not necessarily reflect the views of the Federal Reserve Board of Governors or the Federal Reserve System. Bayesian Estimation of DSGE

More information

NPTEL

NPTEL NPTEL Syllabus Nonequilibrium Statistical Mechanics - Video course COURSE OUTLINE Thermal fluctuations, Langevin dynamics, Brownian motion and diffusion, Fokker-Planck equations, linear response theory,

More information

If we want to analyze experimental or simulated data we might encounter the following tasks:

If we want to analyze experimental or simulated data we might encounter the following tasks: Chapter 1 Introduction If we want to analyze experimental or simulated data we might encounter the following tasks: Characterization of the source of the signal and diagnosis Studying dependencies Prediction

More information

9 Multi-Model State Estimation

9 Multi-Model State Estimation Technion Israel Institute of Technology, Department of Electrical Engineering Estimation and Identification in Dynamical Systems (048825) Lecture Notes, Fall 2009, Prof. N. Shimkin 9 Multi-Model State

More information

Stochastic Stokes drift of a flexible dumbbell

Stochastic Stokes drift of a flexible dumbbell Stochastic Stokes drift of a flexible dumbbell Kalvis M. Jansons arxiv:math/0607797v4 [math.pr] 22 Mar 2007 Department of Mathematics, University College London, Gower Street, London WC1E 6BT, UK January

More information

Sequential Monte Carlo Methods for Bayesian Computation

Sequential Monte Carlo Methods for Bayesian Computation Sequential Monte Carlo Methods for Bayesian Computation A. Doucet Kyoto Sept. 2012 A. Doucet (MLSS Sept. 2012) Sept. 2012 1 / 136 Motivating Example 1: Generic Bayesian Model Let X be a vector parameter

More information

Memory and hypoellipticity in neuronal models

Memory and hypoellipticity in neuronal models Memory and hypoellipticity in neuronal models S. Ditlevsen R. Höpfner E. Löcherbach M. Thieullen Banff, 2017 What this talk is about : What is the effect of memory in probabilistic models for neurons?

More information

Bayesian room-acoustic modal analysis

Bayesian room-acoustic modal analysis Bayesian room-acoustic modal analysis Wesley Henderson a) Jonathan Botts b) Ning Xiang c) Graduate Program in Architectural Acoustics, School of Architecture, Rensselaer Polytechnic Institute, Troy, New

More information

Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio ( )

Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio ( ) Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio (2014-2015) Etienne Tanré - Olivier Faugeras INRIA - Team Tosca November 26th, 2014 E. Tanré (INRIA - Team Tosca) Mathematical

More information

The Diffusion Model of Speeded Choice, from a Rational Perspective

The Diffusion Model of Speeded Choice, from a Rational Perspective The Diffusion Model of Speeded Choice, from a Rational Perspective Matt Jones, University of Colorado August 7, 017 1 Binary Decision Tasks This chapter considers tasks in which an experimental subject

More information

Information geometry for bivariate distribution control

Information geometry for bivariate distribution control Information geometry for bivariate distribution control C.T.J.Dodson + Hong Wang Mathematics + Control Systems Centre, University of Manchester Institute of Science and Technology Optimal control of stochastic

More information

Dynamic System Identification using HDMR-Bayesian Technique

Dynamic System Identification using HDMR-Bayesian Technique Dynamic System Identification using HDMR-Bayesian Technique *Shereena O A 1) and Dr. B N Rao 2) 1), 2) Department of Civil Engineering, IIT Madras, Chennai 600036, Tamil Nadu, India 1) ce14d020@smail.iitm.ac.in

More information

Hmms with variable dimension structures and extensions

Hmms with variable dimension structures and extensions Hmm days/enst/january 21, 2002 1 Hmms with variable dimension structures and extensions Christian P. Robert Université Paris Dauphine www.ceremade.dauphine.fr/ xian Hmm days/enst/january 21, 2002 2 1 Estimating

More information

ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME

ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME SAUL D. JACKA AND ALEKSANDAR MIJATOVIĆ Abstract. We develop a general approach to the Policy Improvement Algorithm (PIA) for stochastic control problems

More information

Lecture 6: Multiple Model Filtering, Particle Filtering and Other Approximations

Lecture 6: Multiple Model Filtering, Particle Filtering and Other Approximations Lecture 6: Multiple Model Filtering, Particle Filtering and Other Approximations Department of Biomedical Engineering and Computational Science Aalto University April 28, 2010 Contents 1 Multiple Model

More information

Dirac Mixture Density Approximation Based on Minimization of the Weighted Cramér von Mises Distance

Dirac Mixture Density Approximation Based on Minimization of the Weighted Cramér von Mises Distance Dirac Mixture Density Approximation Based on Minimization of the Weighted Cramér von Mises Distance Oliver C Schrempf, Dietrich Brunn, and Uwe D Hanebeck Abstract This paper proposes a systematic procedure

More information

Markov Chain Monte Carlo

Markov Chain Monte Carlo Markov Chain Monte Carlo Recall: To compute the expectation E ( h(y ) ) we use the approximation E(h(Y )) 1 n n h(y ) t=1 with Y (1),..., Y (n) h(y). Thus our aim is to sample Y (1),..., Y (n) from f(y).

More information

1. Introduction Systems evolving on widely separated time-scales represent a challenge for numerical simulations. A generic example is

1. Introduction Systems evolving on widely separated time-scales represent a challenge for numerical simulations. A generic example is COMM. MATH. SCI. Vol. 1, No. 2, pp. 385 391 c 23 International Press FAST COMMUNICATIONS NUMERICAL TECHNIQUES FOR MULTI-SCALE DYNAMICAL SYSTEMS WITH STOCHASTIC EFFECTS ERIC VANDEN-EIJNDEN Abstract. Numerical

More information

Development of Stochastic Artificial Neural Networks for Hydrological Prediction

Development of Stochastic Artificial Neural Networks for Hydrological Prediction Development of Stochastic Artificial Neural Networks for Hydrological Prediction G. B. Kingston, M. F. Lambert and H. R. Maier Centre for Applied Modelling in Water Engineering, School of Civil and Environmental

More information

Hairer /Gubinelli-Imkeller-Perkowski

Hairer /Gubinelli-Imkeller-Perkowski Hairer /Gubinelli-Imkeller-Perkowski Φ 4 3 I The 3D dynamic Φ 4 -model driven by space-time white noise Let us study the following real-valued stochastic PDE on (0, ) T 3, where ξ is the space-time white

More information

Bayesian Methods for Machine Learning

Bayesian Methods for Machine Learning Bayesian Methods for Machine Learning CS 584: Big Data Analytics Material adapted from Radford Neal s tutorial (http://ftp.cs.utoronto.ca/pub/radford/bayes-tut.pdf), Zoubin Ghahramni (http://hunch.net/~coms-4771/zoubin_ghahramani_bayesian_learning.pdf),

More information

University of Toronto Department of Statistics

University of Toronto Department of Statistics Norm Comparisons for Data Augmentation by James P. Hobert Department of Statistics University of Florida and Jeffrey S. Rosenthal Department of Statistics University of Toronto Technical Report No. 0704

More information

ICES REPORT Model Misspecification and Plausibility

ICES REPORT Model Misspecification and Plausibility ICES REPORT 14-21 August 2014 Model Misspecification and Plausibility by Kathryn Farrell and J. Tinsley Odena The Institute for Computational Engineering and Sciences The University of Texas at Austin

More information

Integrating Correlated Bayesian Networks Using Maximum Entropy

Integrating Correlated Bayesian Networks Using Maximum Entropy Applied Mathematical Sciences, Vol. 5, 2011, no. 48, 2361-2371 Integrating Correlated Bayesian Networks Using Maximum Entropy Kenneth D. Jarman Pacific Northwest National Laboratory PO Box 999, MSIN K7-90

More information

Parameter Estimation. William H. Jefferys University of Texas at Austin Parameter Estimation 7/26/05 1

Parameter Estimation. William H. Jefferys University of Texas at Austin Parameter Estimation 7/26/05 1 Parameter Estimation William H. Jefferys University of Texas at Austin bill@bayesrules.net Parameter Estimation 7/26/05 1 Elements of Inference Inference problems contain two indispensable elements: Data

More information

Nonparametric Drift Estimation for Stochastic Differential Equations

Nonparametric Drift Estimation for Stochastic Differential Equations Nonparametric Drift Estimation for Stochastic Differential Equations Gareth Roberts 1 Department of Statistics University of Warwick Brazilian Bayesian meeting, March 2010 Joint work with O. Papaspiliopoulos,

More information

Regularity of the density for the stochastic heat equation

Regularity of the density for the stochastic heat equation Regularity of the density for the stochastic heat equation Carl Mueller 1 Department of Mathematics University of Rochester Rochester, NY 15627 USA email: cmlr@math.rochester.edu David Nualart 2 Department

More information

Fuzzy Inference-Based Dynamic Determination of IMM Mode Transition Probability for Multi-Radar Tracking

Fuzzy Inference-Based Dynamic Determination of IMM Mode Transition Probability for Multi-Radar Tracking Fuzzy Inference-Based Dynamic Determination of IMM Mode Transition Probability for Multi-Radar Tracking Yeonju Eun, Daekeun Jeon CNS/ATM Team, CNS/ATM and Satellite Navigation Research Center Korea Aerospace

More information

A SCALED STOCHASTIC NEWTON ALGORITHM FOR MARKOV CHAIN MONTE CARLO SIMULATIONS

A SCALED STOCHASTIC NEWTON ALGORITHM FOR MARKOV CHAIN MONTE CARLO SIMULATIONS A SCALED STOCHASTIC NEWTON ALGORITHM FOR MARKOV CHAIN MONTE CARLO SIMULATIONS TAN BUI-THANH AND OMAR GHATTAS Abstract. We propose a scaled stochastic Newton algorithm ssn) for local Metropolis-Hastings

More information

EVALUATING SYMMETRIC INFORMATION GAP BETWEEN DYNAMICAL SYSTEMS USING PARTICLE FILTER

EVALUATING SYMMETRIC INFORMATION GAP BETWEEN DYNAMICAL SYSTEMS USING PARTICLE FILTER EVALUATING SYMMETRIC INFORMATION GAP BETWEEN DYNAMICAL SYSTEMS USING PARTICLE FILTER Zhen Zhen 1, Jun Young Lee 2, and Abdus Saboor 3 1 Mingde College, Guizhou University, China zhenz2000@21cn.com 2 Department

More information

A Short Introduction to Diffusion Processes and Ito Calculus

A Short Introduction to Diffusion Processes and Ito Calculus A Short Introduction to Diffusion Processes and Ito Calculus Cédric Archambeau University College, London Center for Computational Statistics and Machine Learning c.archambeau@cs.ucl.ac.uk January 24,

More information

Variational inference

Variational inference Simon Leglaive Télécom ParisTech, CNRS LTCI, Université Paris Saclay November 18, 2016, Télécom ParisTech, Paris, France. Outline Introduction Probabilistic model Problem Log-likelihood decomposition EM

More information

3.4 Linear Least-Squares Filter

3.4 Linear Least-Squares Filter X(n) = [x(1), x(2),..., x(n)] T 1 3.4 Linear Least-Squares Filter Two characteristics of linear least-squares filter: 1. The filter is built around a single linear neuron. 2. The cost function is the sum

More information

The Brownian Oscillator

The Brownian Oscillator Chapter The Brownian Oscillator Contents.One-DimensionalDiffusioninaHarmonicPotential...38 The one-dimensional Smoluchowski equation, in case that a stationary flux-free equilibrium state p o (x) exists,

More information

HJB equations. Seminar in Stochastic Modelling in Economics and Finance January 10, 2011

HJB equations. Seminar in Stochastic Modelling in Economics and Finance January 10, 2011 Department of Probability and Mathematical Statistics Faculty of Mathematics and Physics, Charles University in Prague petrasek@karlin.mff.cuni.cz Seminar in Stochastic Modelling in Economics and Finance

More information

THE ROYAL STATISTICAL SOCIETY 2009 EXAMINATIONS SOLUTIONS GRADUATE DIPLOMA MODULAR FORMAT MODULE 3 STOCHASTIC PROCESSES AND TIME SERIES

THE ROYAL STATISTICAL SOCIETY 2009 EXAMINATIONS SOLUTIONS GRADUATE DIPLOMA MODULAR FORMAT MODULE 3 STOCHASTIC PROCESSES AND TIME SERIES THE ROYAL STATISTICAL SOCIETY 9 EXAMINATIONS SOLUTIONS GRADUATE DIPLOMA MODULAR FORMAT MODULE 3 STOCHASTIC PROCESSES AND TIME SERIES The Society provides these solutions to assist candidates preparing

More information

FUNDAMENTAL FILTERING LIMITATIONS IN LINEAR NON-GAUSSIAN SYSTEMS

FUNDAMENTAL FILTERING LIMITATIONS IN LINEAR NON-GAUSSIAN SYSTEMS FUNDAMENTAL FILTERING LIMITATIONS IN LINEAR NON-GAUSSIAN SYSTEMS Gustaf Hendeby Fredrik Gustafsson Division of Automatic Control Department of Electrical Engineering, Linköpings universitet, SE-58 83 Linköping,

More information

Handbook of Stochastic Methods

Handbook of Stochastic Methods C. W. Gardiner Handbook of Stochastic Methods for Physics, Chemistry and the Natural Sciences Third Edition With 30 Figures Springer Contents 1. A Historical Introduction 1 1.1 Motivation I 1.2 Some Historical

More information

Stochastic Volatility and Correction to the Heat Equation

Stochastic Volatility and Correction to the Heat Equation Stochastic Volatility and Correction to the Heat Equation Jean-Pierre Fouque, George Papanicolaou and Ronnie Sircar Abstract. From a probabilist s point of view the Twentieth Century has been a century

More information

Bayesian estimation of the discrepancy with misspecified parametric models

Bayesian estimation of the discrepancy with misspecified parametric models Bayesian estimation of the discrepancy with misspecified parametric models Pierpaolo De Blasi University of Torino & Collegio Carlo Alberto Bayesian Nonparametrics workshop ICERM, 17-21 September 2012

More information

Bayesian Inference for DSGE Models. Lawrence J. Christiano

Bayesian Inference for DSGE Models. Lawrence J. Christiano Bayesian Inference for DSGE Models Lawrence J. Christiano Outline State space-observer form. convenient for model estimation and many other things. Bayesian inference Bayes rule. Monte Carlo integation.

More information

Preliminary statistics

Preliminary statistics 1 Preliminary statistics The solution of a geophysical inverse problem can be obtained by a combination of information from observed data, the theoretical relation between data and earth parameters (models),

More information

April 20th, Advanced Topics in Machine Learning California Institute of Technology. Markov Chain Monte Carlo for Machine Learning

April 20th, Advanced Topics in Machine Learning California Institute of Technology. Markov Chain Monte Carlo for Machine Learning for for Advanced Topics in California Institute of Technology April 20th, 2017 1 / 50 Table of Contents for 1 2 3 4 2 / 50 History of methods for Enrico Fermi used to calculate incredibly accurate predictions

More information

Time Series Prediction by Kalman Smoother with Cross-Validated Noise Density

Time Series Prediction by Kalman Smoother with Cross-Validated Noise Density Time Series Prediction by Kalman Smoother with Cross-Validated Noise Density Simo Särkkä E-mail: simo.sarkka@hut.fi Aki Vehtari E-mail: aki.vehtari@hut.fi Jouko Lampinen E-mail: jouko.lampinen@hut.fi Abstract

More information

ELEMENTS OF PROBABILITY THEORY

ELEMENTS OF PROBABILITY THEORY ELEMENTS OF PROBABILITY THEORY Elements of Probability Theory A collection of subsets of a set Ω is called a σ algebra if it contains Ω and is closed under the operations of taking complements and countable

More information

Incorporating Track Uncertainty into the OSPA Metric

Incorporating Track Uncertainty into the OSPA Metric 14th International Conference on Information Fusion Chicago, Illinois, USA, July 5-8, 211 Incorporating Trac Uncertainty into the OSPA Metric Sharad Nagappa School of EPS Heriot Watt University Edinburgh,

More information

Variational Principal Components

Variational Principal Components Variational Principal Components Christopher M. Bishop Microsoft Research 7 J. J. Thomson Avenue, Cambridge, CB3 0FB, U.K. cmbishop@microsoft.com http://research.microsoft.com/ cmbishop In Proceedings

More information

Time Series Analysis. James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY

Time Series Analysis. James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY Time Series Analysis James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY & Contents PREFACE xiii 1 1.1. 1.2. Difference Equations First-Order Difference Equations 1 /?th-order Difference

More information

A Tree Search Approach to Target Tracking in Clutter

A Tree Search Approach to Target Tracking in Clutter 12th International Conference on Information Fusion Seattle, WA, USA, July 6-9, 2009 A Tree Search Approach to Target Tracking in Clutter Jill K. Nelson and Hossein Roufarshbaf Department of Electrical

More information