Computer Intensive Methods in Mathematical Statistics

Lecture 7: Sequential Monte Carlo methods III
Department of Mathematics, johawes@kth.se
7 April 2017

Plan of today's lecture

1. Last time: sequential importance sampling (SIS)
2. IS with resampling (ISR)
3. SIS with resampling (SISR)
   - Alternative selection strategies
   - Further properties of SISR
4. HA1


Last time: sequential MC problems

In the framework of SMC the principal goal is to generate sequences of weighted samples $(X_{0:n}^i, \omega_n^i)_{i=1}^N$ that can be used for estimating expectations
$$\tau_n = \mathbb{E}_{f_n}(\phi(X_{0:n})) = \int_{\mathsf{X}^n} \phi(x_{0:n}) f_n(x_{0:n}) \, dx_{0:n}$$
over spaces $\mathsf{X}^n$ of increasing dimension. The densities $(f_n)_{n \ge 0}$ are supposed to be known up to normalizing constants only; i.e. for every $n \ge 0$,
$$f_n(x_{0:n}) = \frac{z_n(x_{0:n})}{c_n},$$
where $c_n$ is an unknown constant and $z_n$ is a known positive function on $\mathsf{X}^n$.

Last time: sequential importance sampling (SIS)

In order to derive such a procedure we proceeded recursively. Assume that we have generated particles $(X_{0:n}^i)_{i=1}^N$ from $g_n(x_{0:n})$ such that
$$\sum_{i=1}^N \frac{\omega_n^i}{\sum_{l=1}^N \omega_n^l}\, \phi(X_{0:n}^i) \approx \mathbb{E}_{f_n}(\phi(X_{0:n})),$$
where, as usual, $\omega_n^i = \omega_n(X_{0:n}^i) = z_n(X_{0:n}^i)/g_n(X_{0:n}^i)$.

Key trick: use a sequence $(g_k(x_{k+1} \mid x_{0:k}))_{k \ge 0}$ of conditional distributions to specify the instrumental distribution:
$$g_n(x_{0:n}) = g_0(x_0) \prod_{k=0}^{n-1} g_k(x_{k+1} \mid x_{0:k}).$$

Last time: SIS (cont.)

Consequently, $X_{0:n+1}^i$ and $\omega_{n+1}^i$ can be generated by
1. keeping the previous $X_{0:n}^i$,
2. simulating $X_{n+1}^i \sim g_n(x_{n+1} \mid X_{0:n}^i)$,
3. setting $X_{0:n+1}^i = (X_{0:n}^i, X_{n+1}^i)$, and
4. computing
$$\omega_{n+1}^i = \frac{z_{n+1}(X_{0:n+1}^i)}{g_{n+1}(X_{0:n+1}^i)}
= \frac{z_{n+1}(X_{0:n+1}^i)}{z_n(X_{0:n}^i)\, g_n(X_{n+1}^i \mid X_{0:n}^i)} \cdot \frac{z_n(X_{0:n}^i)}{g_n(X_{0:n}^i)}
= \frac{z_{n+1}(X_{0:n+1}^i)}{z_n(X_{0:n}^i)\, g_n(X_{n+1}^i \mid X_{0:n}^i)}\, \omega_n^i.$$
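For concreteness, this recursion can be written as a short, model-agnostic Matlab routine. The sketch below is not part of the lecture code; it assumes hypothetical function handles g_sample (draws $X_{n+1}^i$ given each path), z_ratio (evaluates $z_{n+1}(x_{0:n+1})/z_n(x_{0:n})$) and g_dens (evaluates $g_n(x_{n+1} \mid x_{0:n})$), with the paths stored as rows of a matrix.

% One SIS step (sketch): extend each path by one component and update its weight.
% part : N-by-(n+1) matrix whose row i is the path X_{0:n}^i
% w    : N-by-1 vector of unnormalized weights omega_n^i
function [part, w] = sis_step(part, w, g_sample, z_ratio, g_dens)
    x_new = g_sample(part);                                   % mutation: X_{n+1}^i ~ g_n(. | X_{0:n}^i)
    w = w .* z_ratio([part, x_new]) ./ g_dens(x_new, part);   % recursive weight update
    part = [part, x_new];                                     % append the new component
end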

Last time: SIS (cont.)

So, by running the SIS algorithm we have updated $(\omega_n^i, X_{0:n}^i)_{i=1}^N \to (\omega_{n+1}^i, X_{0:n+1}^i)_{i=1}^N$ by only adding a component $X_{n+1}^i$ to $X_{0:n}^i$ and updating the weights recursively. At time zero the algorithm is initialized by standard importance sampling.

We note that for each $n$, an unbiased estimate of $c_n$ can, as usual, be obtained through the average weight:
$$\frac{1}{N} \sum_{i=1}^N \omega_n^i \approx c_n.$$

Filtering in HMMs

Graphically, the observations $\ldots, Y_{k-1}, Y_k, Y_{k+1}, \ldots$ are attached to the hidden Markov chain $\ldots, X_{k-1}, X_k, X_{k+1}, \ldots$, with
$$Y_k \mid X_k = x_k \sim p(y_k \mid x_k) \quad \text{(observation density)},$$
$$X_{k+1} \mid X_k = x_k \sim q(x_{k+1} \mid x_k) \quad \text{(transition density)},$$
$$X_0 \sim \chi(x_0) \quad \text{(initial distribution)}.$$

Filtering in HMMs

Given a sequence of observed values $(y_n)_{n \ge 0}$, we want to estimate sequentially the filter means
$$\tau_n = \mathbb{E}(X_n \mid Y_{0:n} = y_{0:n})
= \int \underbrace{x_n}_{\phi(x_{0:n})}\, \underbrace{\frac{\chi(x_0) p(y_0 \mid x_0) \prod_{k=0}^{n-1} q(x_{k+1} \mid x_k)\, p(y_{k+1} \mid x_{k+1})}{L_n(y_{0:n})}}_{z_n(x_{0:n})/c_n} \, dx_{0:n}$$
in a single sweep of the data, as new observations become available.

Filtering in HMMs

To obtain a SIS implementation, we simply set
$$g_{n+1}(x_{n+1} \mid x_{0:n}) = q(x_{n+1} \mid x_n).$$
This implies
$$\omega_{n+1}^i = \frac{z_{n+1}(X_{0:n+1}^i)}{z_n(X_{0:n}^i)\, g_{n+1}(X_{n+1}^i \mid X_{0:n}^i)}\, \omega_n^i
= \frac{\chi(X_0^i) p(y_0 \mid X_0^i) \prod_{k=0}^{n} q(X_{k+1}^i \mid X_k^i)\, p(y_{k+1} \mid X_{k+1}^i)}{\chi(X_0^i) p(y_0 \mid X_0^i) \left\{\prod_{k=0}^{n-1} q(X_{k+1}^i \mid X_k^i)\, p(y_{k+1} \mid X_{k+1}^i)\right\} q(X_{n+1}^i \mid X_n^i)}\, \omega_n^i
= p(y_{n+1} \mid X_{n+1}^i)\, \omega_n^i.$$

Linear/Gaussian HMM, SIS implementation

Consider a linear Gaussian HMM:
$$Y_k = X_k + S \varepsilon_k \qquad (p(y_k \mid x_k)),$$
$$X_{k+1} = A X_k + R \epsilon_{k+1} \qquad (q(x_{k+1} \mid x_k)),$$
$$X_0 = \frac{R}{\sqrt{1 - A^2}}\, \epsilon_0 \qquad (\chi(x_0)),$$
where $|A| < 1$ and $(\epsilon_k)$ and $(\varepsilon_k)$ are standard Gaussian noise sequences. Then each update $(X_n^i, \omega_n^i) \to (X_{n+1}^i, \omega_{n+1}^i)$ involves
1. drawing $X_{n+1}^i \sim N(A X_n^i, R^2)$,
2. setting $\omega_{n+1}^i = N(Y_{n+1}; X_{n+1}^i, S^2)\, \omega_n^i$.

Linear/Gaussian HMM, SIS implementation (cont'd)

In Matlab (assuming the model parameters A, R, S and the observation vector y are defined):

N = 1000; n = 60;
tau = zeros(1,n + 1);                      % vector of estimates
p = @(x,y) normpdf(y,x,S);                 % observation density, for weights
part = R*sqrt(1/(1 - A^2))*randn(N,1);     % initialization
w = p(part,y(1));                          % initial weighting
tau(1) = sum(part.*w)/sum(w);              % initial estimate
for k = 1:n                                % main loop
    part = A*part + R*randn(N,1);          % mutation
    w = w.*p(part,y(k + 1));               % weighting
    tau(k + 1) = sum(part.*w)/sum(w);      % estimation
end

Linear/Gaussian HMM, SIS implementation (cont.)

[Figure: "Filtered means for Kalman filter and SIS", filter estimates plotted against the time step. Comparison of SIS with the exact values provided by the Kalman filter (possible only for linear/Gaussian models).]

Linear/Gaussian HMM, SIS implementation (cont.)

[Figure: Distribution of the importance weights (base-10 logarithm) at n = 1, n = 5, and n = 15, showing the weights spreading over ever more orders of magnitude as n grows.]


Interlude: IS with resampling (ISR)

Having at hand a weighted sample $(X^i, \omega(X^i))_{i=1}^N$ approximating $f$, a uniformly weighted sample can be formed by resampling, with replacement, new variables $(\tilde{X}^i)_{i=1}^N$ from $(X^i)_{i=1}^N$ according to the weights $(\omega(X^i))_{i=1}^N$. This can be achieved using the following algorithm (a Matlab sketch follows the pseudo-code).

Data: $(X^i, \omega(X^i))_{i=1}^N$
Result: $(\tilde{X}^i)_{i=1}^N$
for $i = 1, \ldots, N$ do
    draw $I^i = j$ with probability $\omega(X^j) / \sum_{l=1}^N \omega(X^l)$;
    set $\tilde{X}^i \leftarrow X^{I^i}$;
end
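As a concrete illustration (not part of the lecture code), multinomial resampling can be written in a few lines of Matlab; the sketch assumes the particles are stored as rows of a matrix part and the unnormalized weights in a vector w.

% Multinomial resampling: draw N ancestor indices with probabilities
% proportional to the weights and copy the corresponding particles.
function part_tilde = multinomial_resample(part, w)
    N = size(part, 1);
    prob = w(:) / sum(w);                   % normalized weights
    ind = randsample(N, N, true, prob);     % I^1, ..., I^N, drawn with replacement
    part_tilde = part(ind, :);              % uniformly weighted sample
end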

Interlude: ISR (cont.)

The ISR algorithm replaces
$$\sum_{i=1}^N \frac{\omega(X^i)}{\sum_{l=1}^N \omega(X^l)}\, \phi(X^i) \quad \text{by} \quad \frac{1}{N} \sum_{i=1}^N \phi(\tilde{X}^i).$$
The resampling step does not add bias to the estimator in the following sense.

Theorem (unbiasedness, multinomial resampling). For all $N \ge 1$ it holds that
$$\mathbb{E}\left( \frac{1}{N} \sum_{i=1}^N \phi(\tilde{X}^i) \right) = \mathbb{E}\left( \sum_{i=1}^N \frac{\omega(X^i)}{\sum_{l=1}^N \omega(X^l)}\, \phi(X^i) \right).$$

SIS with resampling (SISR)

SIS + ISR

A simple, but revolutionary, idea: duplicate/kill particles with large/small weights! (Gordon et al., 1993)

The most natural approach to such selection is to simply draw, as in the ISR algorithm, new particles $(\tilde{X}_{0:n}^i)_{i=1}^N$ among the SIS-produced particles $(X_{0:n}^i)_{i=1}^N$ with probabilities given by the normalized importance weights. Formally, this amounts to setting, for $i = 1, 2, \ldots, N$,
$$\tilde{X}_{0:n}^i = X_{0:n}^j \quad \text{with probability} \quad \frac{\omega_n^j}{\sum_{l=1}^N \omega_n^l}.$$

SIS + ISR (cont.)

After this, the resampled particles $(\tilde{X}_{0:n}^i)_{i=1}^N$ are assigned equal weights $\tilde{\omega}_n^i = 1$, and we replace
$$\sum_{i=1}^N \frac{\omega_n^i}{\sum_{l=1}^N \omega_n^l}\, \phi(X_{0:n}^i) \quad \text{by} \quad \frac{1}{N} \sum_{i=1}^N \phi(\tilde{X}_{0:n}^i).$$
As established above, resampling does not add bias:

Corollary. For all $N \ge 1$ and $n \ge 0$,
$$\mathbb{E}\left( \frac{1}{N} \sum_{i=1}^N \phi(\tilde{X}_{0:n}^i) \right) = \mathbb{E}\left( \sum_{i=1}^N \frac{\omega_n^i}{\sum_{l=1}^N \omega_n^l}\, \phi(X_{0:n}^i) \right).$$

... = SIS with resampling (SISR)

After selection, we proceed with standard SIS and move the selected particles $(\tilde{X}_{0:n}^i)_{i=1}^N$ according to $g_n(x_{n+1} \mid x_{0:n})$. The full scheme goes as follows. Given $(X_{0:n}^i, \omega_n^i)_{i=1}^N$,
1. (selection) draw, with replacement, $(\tilde{X}_{0:n}^i)_{i=1}^N$ among $(X_{0:n}^i)_{i=1}^N$ according to the probabilities $(\omega_n^i / \sum_{l=1}^N \omega_n^l)_{i=1}^N$,
2. (mutation) draw, for all $i$, $X_{n+1}^i \sim g_n(x_{n+1} \mid \tilde{X}_{0:n}^i)$,
3. set, for all $i$, $X_{0:n+1}^i = (\tilde{X}_{0:n}^i, X_{n+1}^i)$, and
4. set, for all $i$,
$$\omega_{n+1}^i = \frac{z_{n+1}(X_{0:n+1}^i)}{z_n(\tilde{X}_{0:n}^i)\, g_n(X_{n+1}^i \mid \tilde{X}_{0:n}^i)}.$$
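Using the same hypothetical handles as in the illustrative SIS sketch earlier, one SISR update can be written as follows (again only a sketch, not the course code). Note that, in line with step 4, the new weight is not multiplied by the old one, since the resampled particles carry equal weights.

% One SISR step (sketch): selection, mutation, and weighting.
function [part, w] = sisr_step(part, w, g_sample, z_ratio, g_dens)
    N = size(part, 1);
    ind = randsample(N, N, true, w(:) / sum(w));          % selection
    part = part(ind, :);                                  % resampled paths, equal weights
    x_new = g_sample(part);                               % mutation
    w = z_ratio([part, x_new]) ./ g_dens(x_new, part);    % new unnormalized weights
    part = [part, x_new];                                 % extended paths
end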

SISR (cont.)

At every time step $n$, both
$$\sum_{i=1}^N \frac{\omega_n^i}{\sum_{l=1}^N \omega_n^l}\, \phi(X_{0:n}^i) \quad \text{and} \quad \frac{1}{N} \sum_{i=1}^N \phi(\tilde{X}_{0:n}^i)$$
are valid estimators of $\tau_n$. The former, however, incorporates somewhat less randomness and thus has slightly lower variance.

SISR, estimation of $c_n$

When the particles are resampled, the weight average $\frac{1}{N} \sum_{i=1}^N \omega_n^i$ is no longer a valid MC estimator of the normalizing constant $c_n$. Instead, one takes
$$c_{N,n}^{\mathrm{SISR}} = \prod_{k=0}^{n} \left( \frac{1}{N} \sum_{i=1}^N \omega_k^i \right).$$
Remarkably, the estimator $c_{N,n}^{\mathrm{SISR}}$ is unbiased!

Theorem. For all $n \ge 0$ and $N \ge 1$,
$$\mathbb{E}\left( c_{N,n}^{\mathrm{SISR}} \right) = c_n.$$
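In code, this product is most safely accumulated on the log scale. A small illustrative helper (not part of the lecture code), assuming the unnormalized weights of a SISR run have been stored column-wise in a matrix W:

% Sketch: log of c^SISR_{N,n}, where column k+1 of W holds the unnormalized
% weights omega_k^i, i = 1, ..., N, produced by the SISR recursion.
function logc = sisr_log_normconst(W)
    logc = sum(log(mean(W, 1)));    % sum over k of log( (1/N) sum_i omega_k^i )
end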

Linear/Gaussian HMM, SISR implementation

N = 1000; n = 60;
tau = zeros(1,n + 1);                      % vector of filter means
w = zeros(N,1);                            % weight vector
p = @(x,y) normpdf(y,x,S);                 % observation density, for weights
part = R*sqrt(1/(1 - A^2))*randn(N,1);     % initialization
w = p(part,y(1));                          % initial weighting
tau(1) = sum(part.*w)/sum(w);              % initial estimate
for k = 1:n                                % main loop
    ind = randsample(N,N,true,w);          % selection, using the current weights
    part = part(ind);                      % resampled particles (equal weights)
    part = A*part + R*randn(N,1);          % mutation
    w = p(part,y(k + 1));                  % weighting
    tau(k + 1) = sum(part.*w)/sum(w);      % estimation
end

Linear/Gaussian HMM, SISR implementation (cont'd)

[Figure: "Filtered means for Kalman filter, SIS, and SISR", filter estimates plotted against the time step. Comparison of SIS and SISR (blue) with the exact values (red) provided by the Kalman filter.]

Film time!


Alternative selection strategies

Residual resampling

There are several alternatives to multinomial selection. In the residual resampling scheme the number $N_n^i$ of offspring of particle $i$ is semi-deterministically set to
$$N_n^i = \left\lfloor N\, \frac{\omega_n^i}{\sum_{l=1}^N \omega_n^l} \right\rfloor + \tilde{N}_n^i,$$
where the $\tilde{N}_n^i$'s are random integers obtained by randomly distributing the remaining $N - R$ offspring, with
$$R \overset{\mathrm{def}}{=} \sum_{j=1}^N \left\lfloor N\, \frac{\omega_n^j}{\sum_{l=1}^N \omega_n^l} \right\rfloor,$$
among the ancestors as follows.

Residual resampling, pseudo-code

for $i = 1, \ldots, N$ do
    set $\tilde{N}_n^i \leftarrow 0$;
    set $\bar{\omega}_n^i \leftarrow \frac{1}{N - R} \left( N\, \frac{\omega_n^i}{\sum_{l=1}^N \omega_n^l} - \left\lfloor N\, \frac{\omega_n^i}{\sum_{l=1}^N \omega_n^l} \right\rfloor \right)$;
end
for $r = 1, \ldots, N - R$ do
    draw $I_r = j$ with probability $\bar{\omega}_n^j$;
    set $\tilde{N}_n^{I_r} \leftarrow \tilde{N}_n^{I_r} + 1$;
end
return $(\tilde{N}_n^i)_{i=1}^N$
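A minimal Matlab sketch of the full residual resampling scheme, returning the offspring counts $(N_n^i)_{i=1}^N$, is given below; it is illustrative only and not part of the lecture code.

% Residual resampling: floor(N * normalized weight) deterministic offspring per
% particle, the remaining N - R offspring drawn multinomially from the residuals.
function Ncount = residual_resample(w)
    N = numel(w);
    wnorm = w(:) / sum(w);                       % normalized weights
    Ncount = floor(N * wnorm);                   % deterministic part
    R = sum(Ncount);                             % total deterministic offspring
    if R < N
        resid = N * wnorm - Ncount;              % fractional parts
        prob = resid / sum(resid);               % residual weights, summing to one
        ind = randsample(N, N - R, true, prob);  % distribute remaining offspring
        Ncount = Ncount + accumarray(ind, 1, [N 1]);
    end
end

The resampled particle set is then obtained by repeating particle i Ncount(i) times, e.g. part = part(repelem(1:N, Ncount.'), :).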

Residual resampling, unbiasedness

Consequently, residual resampling replaces
$$\sum_{i=1}^N \frac{\omega_n^i}{\sum_{l=1}^N \omega_n^l}\, \phi(X_{0:n}^i) \quad \text{by} \quad \frac{1}{N} \sum_{i=1}^N N_n^i\, \phi(X_{0:n}^i).$$
Also residual resampling is unbiased (exercise!):

Theorem. For all $N \ge 1$ and $n \ge 0$,
$$\mathbb{E}\left( \frac{1}{N} \sum_{i=1}^N N_n^i\, \phi(X_{0:n}^i) \right) = \mathbb{E}\left( \sum_{i=1}^N \frac{\omega_n^i}{\sum_{l=1}^N \omega_n^l}\, \phi(X_{0:n}^i) \right).$$
Residual selection can be proven to have lower variance than multinomial selection.

Other selection strategies

Other selection strategies are
- stratified resampling,
- Bernoulli branching,
- Poisson branching,
- ...

(A sketch of stratified resampling is given below.)
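As an example of the first item, stratified resampling draws exactly one uniform in each of the $N$ strata $((i-1)/N, i/N]$ and inverts the CDF of the normalized weights; the Matlab sketch below is illustrative and not part of the lecture code.

% Stratified resampling: one uniform draw per stratum, inverted through the
% cumulative distribution of the normalized weights; returns ancestor indices.
function ind = stratified_resample(w)
    N = numel(w);
    u = ((0:N-1).' + rand(N, 1)) / N;    % one uniform in each stratum
    cdf = cumsum(w(:)) / sum(w);         % CDF of the normalized weights
    cdf(N) = 1;                          % guard against round-off
    ind = zeros(N, 1);
    j = 1;
    for i = 1:N                          % invert the CDF (u is already sorted)
        while u(i) > cdf(j)
            j = j + 1;
        end
        ind(i) = j;
    end
end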

Further properties of SISR

SISR: Some theoretical results

Even though the theory of SISR is hard, there are a number of results establishing the convergence of the algorithm as $N$ tends to infinity. For instance,
$$\sqrt{N} \left( \sum_{i=1}^N \frac{\omega_n^i}{\sum_{l=1}^N \omega_n^l}\, \phi(X_{0:n}^i) - \tau_n \right) \overset{d}{\longrightarrow} N(0, \sigma_n^2(\phi)),$$
where $\sigma_n^2(\phi)$ generally lacks a closed-form expression. Note that the convergence rate is still $\sqrt{N}$.

Open problem: find an effective and numerically stable online estimator of $\sigma_n^2(\phi)$!

SISR: long-term numerical stability

The dependence of $\sigma_n^2(\phi)$ on $n$ is crucial. However, for filtering in HMMs one may prove, under weak assumptions, that there is a constant $c < \infty$ such that
$$\sigma_n^2(\phi) \le c, \quad n \ge 0.$$
Thus, the SISR estimates are, in agreement with empirical observations, indeed numerically stable in $n$. Otherwise the method would be practically useless!

SISR: particle path degeneracy

[Figure: particle path trajectories, state value plotted against the time index. When $m \ll n$, the set $(X_{0:n}^i(m))_{i=1}^N$ is composed of only a few distinct elements, yielding poor estimation of the marginal of $f_n$ w.r.t. $x_m$. However, estimation w.r.t. $x_n$ is always efficient.]

SISR: particle path degeneracy (cont.)

As the particle smoother systematically resamples the particles, it will always hold that, for a fixed $m \le n$,
$$\{X_{0:n+1}^i(m)\}_{i=1}^N \subseteq \{X_{0:n}^i(m)\}_{i=1}^N.$$
Thus, as time increases, the effective number of (different) particle components $(X_{0:n}^i(m))_{i=1}^N$ at time $m$ will decrease. After a while, all particle trajectories have collapsed into a single one. Consequently, the estimates will suffer from large variance for large $n$. This is a universal problem when applying the SISR algorithm to the path space.

SISR: particle path degeneracy (cont.)

Still, in its most basic form, SISR works well
- in the important case where $\phi$ is a function of the last component $x_n$ only, i.e.,
  $$\tau_n = \int_{\mathsf{X}^n} \phi(x_n) f_n(x_{0:n}) \, dx_{0:n},$$
  which is the case for, e.g., filtering in HMMs;
- for estimation of the normalizing constant $c_n$ through $c_{N,n}^{\mathrm{SISR}}$.

SISR: particle path degeneracy (cont.)

The path degeneracy may be held back somewhat by resampling the particles only when the skewness of the weights, as measured, e.g., by the coefficient of variation
$$\mathrm{CV}_N = \sqrt{\frac{1}{N} \sum_{i=1}^N \left( \frac{N \omega_n^i}{\sum_{l=1}^N \omega_n^l} - 1 \right)^2},$$
exceeds some threshold. $\mathrm{CV}_N$ is minimal (zero) when all weights are equal and maximal ($\sqrt{N - 1}$) when only one particle has non-zero weight. A related criterion is the so-called effective sample size, given by $N/(1 + \mathrm{CV}_N^2)$.
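As an illustration (not part of the lecture code), both criteria are cheap to compute from the unnormalized weights, and the selection step can be made conditional on them; the threshold is a user choice, with N/2 a common rule of thumb. When resampling is only triggered occasionally, the weighting step must multiply the new likelihood factor into the carried-over weights, as in plain SIS.

% Sketch: resample (multinomially) only when the effective sample size drops
% below a given threshold; otherwise keep the particles and their weights.
function [part, w] = adaptive_resample(part, w, threshold)
    N = numel(w);
    wnorm = w(:) / sum(w);                    % normalized weights
    CV = sqrt(mean((N * wnorm - 1).^2));      % coefficient of variation
    ESS = N / (1 + CV^2);                     % effective sample size
    if ESS < threshold                        % e.g. threshold = N/2
        ind = randsample(N, N, true, wnorm);  % multinomial selection
        part = part(ind, :);
        w = ones(N, 1);                       % equal weights after resampling
    end
end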

Coping with path degeneracy

Recently, advanced extensions of the standard SISR algorithm have been proposed in order to cope with the particle path degeneracy. Most of these methods, which are beyond the scope of the course, apply backward simulation on the grid of particle locations formed by the SISR algorithm. Efficient online implementation is generally possible in the important case of additive objective functions of the form
$$\phi(x_{0:n}) = \sum_{l=0}^{n-1} \phi_l(x_{l:l+1}).$$

A few references on particle filtering

- Cappé, O., Moulines, E., and Rydén, T. (2005). Inference in Hidden Markov Models. Springer.
- Doucet, A., de Freitas, N., and Gordon, N. (2001). Sequential Monte Carlo Methods in Practice. Springer.
- Fearnhead, P. (1998). Sequential Monte Carlo Methods in Filter Theory. Ph.D. thesis, University of Oxford.


HA1: sequential Monte Carlo methods

HA1 deals with SMC-based mobility tracking in cellular networks. The model is described by an HMM, and the assignment covers
- SIS-based optimal filtering of a target's position,
- an SISR-based approach to the same problem,
- model calibration via SMC-based maximum likelihood estimation.

The class E3 after Easter will be used as question time for HA1 (or other questions related to the course). Bring your laptop if you want!

Submission of HA1

The following is to be submitted:
- An email containing the report files as well as all your m-files, with a file <group number>-ha1-matlab.m that runs your analysis. This email has to be sent to johawes@kth.se by Friday 28 April, 13:00:00.
- A report, named <group number>-ha1-report.pdf, in PDF format (no MS Word files), of at most 7 pages (with names). One printed and stapled copy of the report is brought (physically!) to the lecture on Friday 28 April.
- A report, named HA1-report.pdf, identical to the above report but without group number or names. This is used for the peer review, which will be sent out through email.