Nonlinear and non-Gaussian state-space modelling by means of hidden Markov models
Nonlinear and non-Gaussian state-space modelling by means of hidden Markov models
University of Göttingen
St Andrews, 13 December 2010
Outline: 1. …  2. …  Glacial varve thickness
(General) state-space model (SSM):

$\dots \quad y_{t-1} \qquad y_t \qquad y_{t+1} \quad \dots$  (observable)
$\dots \to g_{t-1} \to g_t \to g_{t+1} \to \dots$  (non-observable), with each $y_t$ generated from $g_t$

$y_t = a(g_t, \epsilon_t)$
$g_t = b(g_{t-1}, \eta_t)$

$a$, $b$: known functions (not necessarily linear)
$\epsilon_t, \eta_t$ iid (not necessarily normal)
Example 1. Stochastic volatility model:

$y_t = \epsilon_t \, \beta \exp(g_t/2)$
$g_t = \phi g_{t-1} + \sigma \eta_t$
$\epsilon_t \overset{\text{iid}}{\sim} t_\nu$ or $N(0,1)$, $\eta_t \overset{\text{iid}}{\sim} N(0,1)$

$g_t$ determines the variance (volatility) of $y_t$.
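To make the specification concrete, here is a minimal simulation sketch of this stochastic volatility SSM; the function name simulate_sv and all default parameter values are illustrative assumptions, not values from the talk.

```python
import numpy as np

def simulate_sv(n, beta=1.0, phi=0.95, sigma=0.2, nu=None, seed=1):
    """Simulate the stochastic volatility SSM: y_t = eps_t * beta * exp(g_t / 2),
    g_t = phi * g_{t-1} + sigma * eta_t. Parameter values are illustrative only."""
    rng = np.random.default_rng(seed)
    g = np.zeros(n)
    # start the AR(1) state process from its stationary distribution
    g[0] = rng.normal(0.0, sigma / np.sqrt(1.0 - phi**2))
    for t in range(1, n):
        g[t] = phi * g[t - 1] + sigma * rng.normal()
    # epsilon_t is either t_nu or standard normal
    eps = rng.standard_t(nu, size=n) if nu is not None else rng.normal(size=n)
    y = eps * beta * np.exp(g / 2.0)
    return y, g

y, g = simulate_sv(1000)
```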
Example 2. Poisson autoregression:

$y_t \sim \text{Poisson}\big(\beta \exp(g_t)\big)$
$g_t = \phi g_{t-1} + \sigma \eta_t$
$\eta_t \overset{\text{iid}}{\sim} N(0,1)$

$g_t$ determines the mean (and variance) of $y_t$.
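A corresponding simulation sketch for the Poisson autoregression, again with a hypothetical function name and illustrative parameter values:

```python
import numpy as np

def simulate_poisson_ar(n, beta=2.0, phi=0.8, sigma=0.3, seed=1):
    """Simulate the Poisson autoregression SSM: y_t ~ Poisson(beta * exp(g_t)),
    g_t = phi * g_{t-1} + sigma * eta_t. Parameter values are illustrative only."""
    rng = np.random.default_rng(seed)
    g = np.zeros(n)
    g[0] = rng.normal(0.0, sigma / np.sqrt(1.0 - phi**2))
    for t in range(1, n):
        g[t] = phi * g[t - 1] + sigma * rng.normal()
    y = rng.poisson(beta * np.exp(g))
    return y, g
```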
Desired: parameter estimation, state decoding, forecasts, model checking.

SSM likelihood:
$L(y) = \underbrace{\int \cdots \int}_{n\text{-fold}} f(y, g)\, dg$
cannot be evaluated directly...
(If the SSM is linear and Gaussian, the Kalman filter is optimal.)
Parameter estimation in the case of nonlinearity/non-Gaussianity:

Extended Kalman filter
+ simple implementation
− in general a poor approximation

(Generalized) method of moments
+ simple implementation
− low efficiency, no state decoding

Monte Carlo methods
+ high efficiency
− computer-intensive; nonstandard models require nontrivial modifications
Hidden Markov model (HMM):

$\dots \quad y_{t-1} \qquad y_t \qquad y_{t+1} \quad \dots$  (observable)
$\dots \to g_{t-1} \to g_t \to g_{t+1} \to \dots$  (non-observable)

Non-observable process: $N$-state Markov chain $g_t$
- initial distribution $\delta_i = P(g_1 = i)$
- transition probabilities $\gamma_{ij} = P(g_t = j \mid g_{t-1} = i)$

Observable process: $y_t$ with state-dependent density $f(y_t \mid g_t)$
Key idea: HMMs have the same two-process structure as SSMs.

In an SSM, $g_t$ is continuous-valued; discretizing $g_t$ yields an approximation of the SSM by an HMM.
Benefit: the whole HMM methodology becomes applicable.
Split the essential range of $g_t$ into $m$ equidistant intervals $B_i := [b_{i-1}, b_i]$; let $b_i^*$ denote the midpoint of $B_i$.

$L(y) = \int \cdots \int f(y, g)\, dg = \int \cdots \int f(g_1) f(y_1 \mid g_1) \prod_{t=2}^{n} f(g_t \mid g_{t-1}) f(y_t \mid g_t)\; dg_n \cdots dg_1$

$\approx \sum_{i_1=1}^{m} \cdots \sum_{i_n=1}^{m} P(g_1 \in B_{i_1}) f(y_1 \mid g_1 = b_{i_1}^*) \prod_{t=2}^{n} P(g_t \in B_{i_t} \mid g_{t-1} = b_{i_{t-1}}^*) f(y_t \mid g_t = b_{i_t}^*) =: L_{\text{approx}}(y)$
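When the state process is the Gaussian AR(1) used in both examples, the interval probabilities appearing in $L_{\text{approx}}$ reduce to differences of normal cdfs. The sketch below (hypothetical name discretize_ar1, illustrative defaults) computes the midpoints, the $m \times m$ matrix of transition probabilities and the initial distribution from the stationary law; it is one possible implementation under these assumptions, not the author's code.

```python
import numpy as np
from scipy.stats import norm

def discretize_ar1(m=100, b_lo=-3.0, b_hi=3.0, phi=0.95, sigma=0.2):
    """Discretize the AR(1) state process g_t = phi*g_{t-1} + sigma*eta_t onto m
    equidistant intervals B_i = [b_{i-1}, b_i] covering the essential range [b_lo, b_hi].
    Returns the midpoints b_star, the m x m matrix Gamma with
    Gamma[i, j] = P(g_t in B_j | g_{t-1} = b_star[i]), and the initial distribution
    delta[i] = P(g_1 in B_i) taken from the stationary N(0, sigma^2/(1-phi^2)) law."""
    b = np.linspace(b_lo, b_hi, m + 1)            # interval boundaries b_0, ..., b_m
    b_star = 0.5 * (b[:-1] + b[1:])               # interval midpoints b_i^*
    # conditional on g_{t-1} = b_star[i], g_t is N(phi * b_star[i], sigma^2)
    loc = phi * b_star[:, None]
    Gamma = norm.cdf(b[None, 1:], loc=loc, scale=sigma) - norm.cdf(b[None, :-1], loc=loc, scale=sigma)
    sd_stat = sigma / np.sqrt(1.0 - phi**2)
    delta = norm.cdf(b[1:], scale=sd_stat) - norm.cdf(b[:-1], scale=sd_stat)
    return b_star, Gamma, delta
```

If the chosen range is wide enough, the row sums of Gamma are essentially one; otherwise they can be renormalized.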
Consider an HMM whose $m$-state Markov chain takes the midpoints $b_i^*$ as its possible outcomes:
- transition probabilities: $\gamma_{ij} := P(g_t \in B_j \mid g_{t-1} = b_i^*)$
- transition probability matrix: $\Gamma = (\gamma_{ij})$
- initial distribution: $\delta_i := P(g_1 \in B_i)$
- observable process: state-dependent density $f(y_t \mid g_t = b_i^*)$
- $P(y_t)$: diagonal matrix with $i$th diagonal entry $f(y_t \mid g_t = b_i^*)$

Then
$L_{\text{approx}}(y) = \delta P(y_1) \Gamma P(y_2) \Gamma \cdots \Gamma P(y_{n-1}) \Gamma P(y_n) \mathbf{1}^t,$
i.e. the HMM $(\delta, \Gamma, f(y_t))$ approximates the SSM.
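A minimal sketch of evaluating this matrix product is the usual HMM forward recursion with scaling to avoid numerical underflow; the names log_L_approx and state_dens are hypothetical, and the stochastic-volatility density at the end is only an illustrative plug-in.

```python
import numpy as np
from scipy.stats import norm

def log_L_approx(y, b_star, Gamma, delta, state_dens):
    """Evaluate log L_approx(y) = log[ delta P(y_1) Gamma P(y_2) ... Gamma P(y_n) 1' ]
    by the scaled forward recursion; state_dens(y_t, b_star) must return the vector of
    state-dependent densities f(y_t | g_t = b_i^*) over all m midpoints."""
    alpha = delta * state_dens(y[0], b_star)                # corresponds to delta P(y_1)
    scale = alpha.sum()
    loglik = np.log(scale)
    alpha = alpha / scale
    for t in range(1, len(y)):
        alpha = (alpha @ Gamma) * state_dens(y[t], b_star)  # multiply by Gamma P(y_t)
        scale = alpha.sum()
        loglik += np.log(scale)
        alpha = alpha / scale
    return loglik

# illustrative plug-in: stochastic volatility model with Gaussian epsilon_t, assumed beta = 1.0
beta = 1.0
sv_dens = lambda yt, b_star: norm.pdf(yt, scale=beta * np.exp(b_star / 2.0))
```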
Pros and cons of the HMM method:
+ likelihood directly available
+ extensions straightforward
+ simple formulae for residuals, forecasts, decoding
− $m$ and the range of $g_t$ have to be chosen
− only feasible for one-dimensional state spaces
Glacial varve thickness
Applications considered in Langrock (2010):
- stochastic volatility
- earthquake counts
- polio counts (seasonal)
- daily rainfall occurrence (seasonal)
- glacial varve thickness
Varves: layers of sediment deposited by melting glaciers; they can be useful for long-term climate research.
Source: Shumway and Stoffer (Time Series Analysis and Its Applications, 2006)

Figure: Series of glacial varve thicknesses (in mm) for a location in Massachusetts.
Model for the varve series:

$y_t = \epsilon_t \, \beta \exp(g_t)$
$g_t = \phi g_{t-1} + \sigma \eta_t$
$\epsilon_t \sim \text{Gamma}\big(\text{shape} = c_v^{-2}, \text{scale} = c_v^{2}\big)$

Properties:
- $E(y_t \mid g_t) = \beta \exp(g_t)$
- (conditional) coefficient of variation: $\mathrm{sd}(y_t \mid g_t) / E(y_t \mid g_t) = c_v$
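Scaling a Gamma(shape $c_v^{-2}$, scale $c_v^{2}$) variable by $\beta \exp(g_t)$ gives a gamma distribution with the same shape and scale $c_v^{2}\,\beta \exp(g_t)$, so the state-dependent density needed for the HMM approximation is available in closed form. A sketch, with a hypothetical function name and illustrative default values for $\beta$ and $c_v$:

```python
import numpy as np
from scipy.stats import gamma

def varve_dens(yt, b_star, beta=25.0, cv=0.4):
    """State-dependent density f(y_t | g_t = b_i^*) for the gamma SSM
    y_t = eps_t * beta * exp(g_t), eps_t ~ Gamma(shape = cv**-2, scale = cv**2),
    so that E(y_t | g_t) = beta * exp(g_t) and sd(y_t | g_t)/E(y_t | g_t) = cv.
    The default beta and cv are illustrative values only."""
    # y_t | g_t = b_i^* is Gamma with shape cv**-2 and scale cv**2 * beta * exp(b_i^*)
    return gamma.pdf(yt, a=cv**-2, scale=cv**2 * beta * np.exp(b_star))
```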
Table: Estimated model parameters and bootstrap 95% confidence intervals (400 replications).

para.   estimate   95% c.i.
φ                  [0.90, 0.97]
σ                  [0.11, 0.19]
β                  [19.1, 31.1]
c_v                [0.37, 0.42]

Resolution: $m = 200$; range of $g_t$: $b_0 = -3$, $b_m = 3$.
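Assuming the sketches above have been run, a hypothetical end-to-end check with the settings reported here ($m = 200$, $g_t$ range $[-3, 3]$) could look as follows; the data are simulated rather than the actual varve series, and all parameter values are illustrative, chosen near the middle of the reported intervals.

```python
import numpy as np

# reuses the sketches discretize_ar1, log_L_approx and varve_dens defined above
rng = np.random.default_rng(1)
phi, sigma, beta, cv, n = 0.93, 0.15, 25.0, 0.4, 500   # illustrative values
g = np.zeros(n)
g[0] = rng.normal(0.0, sigma / np.sqrt(1.0 - phi**2))
for t in range(1, n):
    g[t] = phi * g[t - 1] + sigma * rng.normal()
y_sim = rng.gamma(shape=cv**-2, scale=cv**2, size=n) * beta * np.exp(g)

# HMM approximation with the settings reported on the slide: m = 200, range [-3, 3]
b_star, Gamma, delta = discretize_ar1(m=200, b_lo=-3.0, b_hi=3.0, phi=phi, sigma=sigma)
ll = log_L_approx(y_sim, b_star, Gamma, delta,
                  lambda yt, b: varve_dens(yt, b, beta=beta, cv=cv))
# maximum likelihood estimation would maximize this quantity over (phi, sigma, beta, cv),
# e.g. by applying scipy.optimize.minimize to the negative log-likelihood
```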
Figure: Series of glacial varve thicknesses (solid grey line) and decoded mean sequence of the fitted gamma SSM (crosses).
Conclusions:
- the HMM approximation is convenient in the SSM context
- the whole HMM methodology becomes applicable
- simple implementation of standard and nonstandard models

References:
Langrock, R., MacDonald, I. L., Zucchini, W. (2010). Estimating standard and nonstandard stochastic volatility models using structured hidden Markov models. (submitted)
Langrock, R. (2010). Some applications of nonlinear and non-Gaussian state-space modeling by means of hidden Markov models. (submitted)