
Auxiliary variable based particle filters

Michael K. Pitt & Neil Shephard

Prepared for a book on "Sequential Monte Carlo Methods in Practice", edited by Nando de Freitas, Arnaud Doucet and Neil Gordon, to be published by Chapman and Hall, London, 1999.

1 Introduction

We model a time series $\{y_t, t = 1, \ldots, n\}$ using a state space framework, with the $y_t \mid \alpha_t$ being independent and with the state $\{\alpha_t\}$ assumed to be Markovian. The task will be to use simulation to estimate $f(\alpha_t \mid F_t)$, $t = 1, \ldots, n$, where $F_t$ is contemporaneously available information. We assume a known 'measurement' density $f(y_t \mid \alpha_t)$ and the ability to simulate from the 'transition' density $f(\alpha_{t+1} \mid \alpha_t)$. Sometimes we will also assume that we can evaluate $f(\alpha_{t+1} \mid \alpha_t)$. Filtering can be thought of as the repeated application of the iteration

$$f(\alpha_{t+1} \mid F_{t+1}) \propto f(y_{t+1} \mid \alpha_{t+1}) \int f(\alpha_{t+1} \mid \alpha_t)\, dF(\alpha_t \mid F_t). \qquad (1)$$

This implies the data can be processed in a single sweep, updating our knowledge about the states as we receive more information. This is straightforward if $\alpha_{t+1} \mid \alpha_t$ has a finite set of known discrete points of support, as (1) can then be computed exactly. When the support is continuous and the integrals cannot be solved analytically, numerical methods have to be used. There have been numerous attempts to provide algorithms which approximate the filtering densities. Important recent work includes Kitagawa (1987), West (1992), Gerlach, Carter, and Kohn (1999) and the papers reviewed in West and Harrison (1997, Ch. 13 and 15). Here we use simulation to perform filtering, following an extensive recent literature. Our approach is to extend the particle filter using an auxiliary variable, an idea which first appeared in Pitt and Shephard (1999a). The literature on particle filtering is reviewed extensively in previous chapters of this book and will not be repeated here.

The outline of the paper is as follows. In Section 2 we analyse the statistical basis of particle filters and focus on their weaknesses. In Section 3 we review the main focus of the chapter, which is the auxiliary particle filter method. Section 4 discusses fixed lag filtering, while Section 5 uses stratified sampling to improve the performance of the algorithm.

2 Particle filters

2.1 The definition of particle filters

Particle filters are the class of simulation filters which recursively approximate the filtering random variable $\alpha_t \mid F_t$, where $F_t = (y_1, \ldots, y_t)'$, by 'particles' $\alpha_t^1, \ldots, \alpha_t^M$, with associated discrete probability masses $\pi_t^1, \ldots, \pi_t^M$. Hence a continuous variable is approximated by a discrete one with random support. These discrete points are thought of as samples from $f(\alpha_t \mid F_t)$. In our work we will always set the $\pi_t^j$ equal to $1/M$, where $M$ is taken to be very large. Then we require that as $M \to \infty$ the particles can be used to approximate the density of $\alpha_t \mid F_t$ increasingly well.

Particle filters treat the discrete support generated by the particles as the true filtering density; this is then chained with (1) to produce a new density,

$$\hat{f}(\alpha_{t+1} \mid F_{t+1}) \propto f(y_{t+1} \mid \alpha_{t+1}) \sum_{j=1}^{M} f(\alpha_{t+1} \mid \alpha_t^j), \qquad (2)$$

the 'empirical filtering density', as an approximation to the true filtering density. Generically, particle filters then sample from this density to produce new particles $\alpha_{t+1}^1, \ldots, \alpha_{t+1}^M$. This procedure can then be iterated through the data. We will call a particle filter 'fully adapted' if it produces independent and identically distributed samples from (2). There may be advantages in deliberately inducing negative correlations amongst the particles; this was first explicitly pointed out by Carpenter, Clifford, and Fearnhead (1998).

2.2 Sampling the empirical prediction density

One way of sampling from the empirical prediction density is to think of

$$\hat{f}(\alpha_{t+1} \mid F_t) = \frac{1}{M} \sum_{j=1}^{M} f(\alpha_{t+1} \mid \alpha_t^j)$$

as a 'prior' density which is combined with the 'likelihood' $f(y_{t+1} \mid \alpha_{t+1})$ to produce a posterior. We can sample from $\hat{f}(\alpha_{t+1} \mid F_t)$ by choosing $\alpha_t^j$ with probability $1/M$ and then drawing from $f(\alpha_{t+1} \mid \alpha_t^j)$; a small code sketch of this is given below. If we can also evaluate $f(y_{t+1} \mid \alpha_{t+1})$ up to proportionality, this leaves us with three sampling methods to draw from $f(\alpha_{t+1} \mid F_{t+1})$:

1. Sampling/importance resampling (SIR).
2. Acceptance sampling.
3. Markov chain Monte Carlo (MCMC).

In the rest of this section we write the prior as $f(\alpha)$ and the likelihood as $f(y \mid \alpha)$, abstracting from subscripts and conditioning arguments, in order to briefly describe these methods in this context.
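To make the mixture sampling above concrete, here is a minimal NumPy sketch. The Gaussian AR(1) transition used to close the example is an illustrative assumption (its parameters are borrowed from the autoregression of Section 5.2), not part of the general method.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_prediction(particles, R, propagate):
    """Draw R samples from the empirical prediction density
    (1/M) * sum_j f(alpha_{t+1} | alpha_t^j): pick a particle
    uniformly, then push it through the transition density."""
    idx = rng.integers(0, particles.size, size=R)   # alpha_t^j chosen w.p. 1/M
    return propagate(particles[idx])

# Illustrative Gaussian AR(1) transition (an assumption for the demo):
propagate = lambda a: 0.9702 * a + 0.178 * rng.standard_normal(a.size)
draws = sample_prediction(rng.standard_normal(1000), R=2000, propagate=propagate)
```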

2.2.1 Sampling/importance resampling (SIR)

This method (Rubin 1987; Smith and Gelfand 1992) draws $\alpha^1, \ldots, \alpha^R$ from $f(\alpha)$ and then associates with each of these draws the weights

$$w_j = f(y \mid \alpha^j), \qquad \pi_j = \frac{w_j}{\sum_{i=1}^{R} w_i}, \qquad j = 1, \ldots, R.$$

The weighted sample will converge, as $R \to \infty$, to a non-random sample from the desired posterior $f(\alpha \mid y)$, since $R^{-1} \sum_{i=1}^{R} w_i \stackrel{p}{\to} f(y)$. The non-random sample can be converted into a random sample of size $M$ by resampling the $\alpha^1, \ldots, \alpha^R$ using the weights $\pi^1, \ldots, \pi^R$. This requires $R \to \infty$ and $R \gg M$. The use of this method in the particle filter framework has been suggested by Gordon, Salmond, and Smith (1993), Kitagawa (1996), Berzuini, Best, Gilks, and Larizza (1997) and Isard and Blake (1996).

To understand the efficiency of the SIR method it is useful to think of SIR as an approximation to the importance sampling estimator of the moment

$$E\{h(\alpha) \mid y\} = \int h(\alpha)\pi(\alpha)\, dF(\alpha) \qquad \text{by} \qquad \frac{1}{R}\sum_{j=1}^{R} h(\alpha^j)\pi(\alpha^j),$$

where $\alpha^j \sim f(\alpha)$ and $\pi(\alpha) = f(y \mid \alpha)/f(y)$. Liu (1996) suggested that the variance of this estimator is approximately (for slowly varying $h(\alpha)$) proportional to $E_f\{\pi(\alpha)^2\}/R$. Hence the SIR method will become very imprecise when the $\pi_j$ become very variable. This will happen if the likelihood is highly peaked compared to the prior.

2.2.2 Adaption

The above SIR algorithm samples from $f(\alpha \mid y)$ by making blind proposals $\alpha^1, \ldots, \alpha^R$ from the prior, ignoring the fact that we know the value of $y$; a sketch of this blind update appears below. This is the main feature of the initial particle filter proposed by Gordon, Salmond, and Smith (1993). We say that a particle filter is adapted if we make proposals which take into account the value of $y$. An adapted SIR based particle filter has the following general structure.

1. Draw $\alpha^1, \ldots, \alpha^R \sim g(\alpha \mid y)$.
2. Evaluate $w_j = f(y \mid \alpha^j) f(\alpha^j) / g(\alpha^j \mid y)$, $j = 1, \ldots, R$.
3. Resample amongst the $\{\alpha^j\}$ using weights proportional to $\{w_j\}$ to produce a sample of size $M$.

Although this looks attractive, for a particle filter $f(\alpha) = M^{-1}\sum_{j=1}^{M} f(\alpha_{t+1} \mid \alpha_t^j)$, which implies we have to evaluate at least $M \times R$ densities in order to generate $M$ samples from $f(\alpha \mid y)$. Given that $M$ and $R$ are typically very large, adaption is not generally feasible for SIR based particle filters.
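The blind SIR update is short enough to state in full. The sketch below is hedged in the same way as before: `propose` stands for any routine returning draws from the empirical prediction density (such as `sample_prediction` above), and `lik` for a vectorised evaluation of $f(y \mid \alpha)$.

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_step(propose, lik, y, R, M):
    """One blind SIR update in the style of Gordon, Salmond and Smith
    (1993): R proposals from the empirical prediction density, weighted
    by the measurement density and resampled down to M particles."""
    proposals = propose(R)              # draws from (1/M) sum_j f(a'|a_t^j)
    w = lik(y, proposals)               # w_j = f(y | alpha^j)
    keep = rng.choice(R, size=M, p=w / w.sum())   # multinomial resampling
    return proposals[keep]
```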

2.2.3 Rejection and MCMC sampling

Exactly the same remarks hold for rejection sampling. A blind rejection sampling based particle filter will simulate from $f(\alpha)$ and accept with probability $\pi(\alpha) = f(y \mid \alpha)/f(y \mid \alpha_{\max})$, where $\alpha_{\max} = \arg\max_\alpha f(y \mid \alpha)$. This has been proposed by Hürzeler and Künsch (1995) and used on a univariate log-normal stochastic volatility model by Kim, Shephard, and Chib (1998). Again the rejection becomes worse if $\mathrm{var}_f\{\pi(\alpha)\}$ is high, and adaption is difficult as it will again typically involve evaluating $f(\alpha)$ and so is computationally infeasible.

Another alternative to SIR is the use of a blind MCMC method (see Gilks, Richardson, and Spiegelhalter 1996 for a review). In this context the MCMC accepts a move from a current state $\alpha^i$ to $\alpha^{i+1} \sim f(\alpha)$ with probability

$$\min\left(1, \frac{f(y \mid \alpha^{i+1})}{f(y \mid \alpha^i)}\right),$$

otherwise it sets $\alpha^{i+1} = \alpha^i$; a sketch of this chain closes the section. Again, if the likelihood is highly peaked there may be a large amount of rejection, which will mean the Markov chain has a great deal of dependence. This suggests adapting, when this is possible, the MCMC method to draw from $g(\alpha \mid y)$ and then accept these draws with probability

$$\min\left(1, \frac{f(y \mid \alpha^{i+1})\, f(\alpha^{i+1})}{f(y \mid \alpha^{i})\, f(\alpha^{i})} \cdot \frac{g(\alpha^{i} \mid y)}{g(\alpha^{i+1} \mid y)}\right).$$

The problem with this, as for the previous adapted SIR, is that evaluating $f(\alpha)$ is very expensive.

2.3 Particle filters' weaknesses

The particle filter based on SIR has two basic weaknesses. The first is well known: when there is an outlier, the weights $\pi_j$ will be very unevenly distributed, and so an extremely large value of $R$ will be required for the draws to be close to samples from the empirical filtering density. This is of particular concern if the measurement density $f(y_{t+1} \mid \alpha_{t+1})$ is highly sensitive to $\alpha_{t+1}$. Notice this is not a problem of having too small a value of $M$. Instead the difficulty is, given that degree of accuracy, how to sample efficiently from (2). We will show how to do this in the next section.

The second weakness holds in general for particle filters. As $R \to \infty$, the weighted samples can be used to approximate (2) arbitrarily well. However, the tails of $M^{-1}\sum_{j=1}^{M} f(\alpha_{t+1} \mid \alpha_t^j)$ usually only poorly approximate the true tails of $\alpha_{t+1} \mid F_t$, due to the use of the mixture approximation. As a result, (2) can only ever poorly approximate the true $f(\alpha_{t+1} \mid F_{t+1})$ when there is an outlier. Hence the second question is how to improve the empirical prediction density's behaviour in the tails. This will be discussed in Section 4.
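For completeness, here is a hedged sketch of the blind MCMC alternative. Running $M$ independent one-dimensional chains in parallel is our own vectorisation device for illustration; the text above describes a single chain.

```python
import numpy as np

rng = np.random.default_rng(0)

def blind_mcmc(propose, lik, y, M, n_sweeps=10):
    """Blind independence MCMC: proposals come from the empirical
    prediction density and are accepted with probability
    min(1, f(y|a') / f(y|a)); otherwise the old state is kept."""
    state = propose(M)
    for _ in range(n_sweeps):
        prop = propose(M)
        # Accept where u * f(y|a) < f(y|a'); guard against zero likelihoods.
        accept = rng.random(M) * np.maximum(lik(y, state), 1e-300) < lik(y, prop)
        state = np.where(accept, prop, state)
    return state
```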

3 Auxiliary variable

3.1 The basics

A fundamental problem with conventional particle filters is that their mixture structure means it is difficult to adapt the SIR, rejection or MCMC sampling methods without massively slowing the running of the filter. Pitt and Shephard (1999a) have argued that many of these problems are reduced when we perform particle filtering in a higher dimension. In this section we review this argument.

Our task will be to sample from the joint density $f(\alpha_{t+1}, k \mid F_{t+1})$, where $k$ is an index on the mixture in (2). Let us define

$$f(\alpha_{t+1}, k \mid F_{t+1}) \propto f(y_{t+1} \mid \alpha_{t+1})\, f(\alpha_{t+1} \mid \alpha_t^k), \qquad k = 1, \ldots, M. \qquad (3)$$

If we draw from this joint density and then discard the index, we produce a sample from the empirical filtering density (2) as required. We call $k$ an auxiliary variable, as it is present simply to aid the task of the simulation. Generic particle filters of this type will be labelled auxiliary particle filters.

We can now sample from $f(\alpha_{t+1}, k \mid F_{t+1})$ using SIR, rejection sampling or MCMC. The SIR idea will be to make $R$ proposals $(\alpha_{t+1}^j, k^j) \sim g(\alpha_{t+1}, k \mid F_{t+1})$ and then construct resampling weights

$$w_j = \frac{f(y_{t+1} \mid \alpha_{t+1}^j)\, f(\alpha_{t+1}^j \mid \alpha_t^{k^j})}{g(\alpha_{t+1}^j, k^j \mid F_{t+1})}, \qquad \pi_j = \frac{w_j}{\sum_{i=1}^{R} w_i}, \qquad j = 1, \ldots, R.$$

We have complete control over the design of $g(\cdot)$, which can depend on $y_{t+1}$ and $\alpha_t^k$, in order to make the weights even. Thus this method is adaptable and extremely flexible. In the next subsection we give a convenient generic suggestion for the choice of $g(\cdot)$. Rejection sampling for auxiliary particle filtering could also be used in this context; an example of this appears in Section 3.3.4. We can also make proposals for an MCMC variant of the auxiliary particle filter from $(\alpha_{t+1}^{i+1}, k^{i+1}) \sim g(\alpha_{t+1}, k \mid F_{t+1})$, where $g(\alpha_{t+1}, k \mid F_{t+1})$ is some arbitrary density; these moves are then accepted with probability

$$\min\left(1, \frac{w^{i+1}}{w^{i}}\right) = \min\left(1, \frac{f(y_{t+1} \mid \alpha_{t+1}^{i+1})\, f(\alpha_{t+1}^{i+1} \mid \alpha_t^{k^{i+1}})}{f(y_{t+1} \mid \alpha_{t+1}^{i})\, f(\alpha_{t+1}^{i} \mid \alpha_t^{k^{i}})} \cdot \frac{g(\alpha_{t+1}^{i}, k^{i} \mid F_{t+1})}{g(\alpha_{t+1}^{i+1}, k^{i+1} \mid F_{t+1})}\right).$$

A special case of this argument has appeared in Berzuini, Best, Gilks, and Larizza (1997), who put $g(\alpha_{t+1}, k \mid F_{t+1}) \propto f(\alpha_{t+1} \mid \alpha_t^k)$, which means their method is again blind.

3.2 A generic SIR based auxiliary proposal

3.2.1 The method

Here we will give a generic $g(\cdot)$ which can be broadly applied; a code sketch follows at the end of this subsection. We will base our discussion on the SIR algorithm, although we could equally have used an MCMC method. We approximate (3) by

$$g(\alpha_{t+1}, k \mid F_{t+1}) \propto f(y_{t+1} \mid \mu_{t+1}^k)\, f(\alpha_{t+1} \mid \alpha_t^k), \qquad k = 1, \ldots, M,$$

where $\mu_{t+1}^k$ is the mean, the mode, a draw, or some other likely value associated with the density of $\alpha_{t+1} \mid \alpha_t^k$. The form of the approximating density is designed so that

$$g(k \mid F_{t+1}) \propto \int f(y_{t+1} \mid \mu_{t+1}^k)\, dF(\alpha_{t+1} \mid \alpha_t^k) = f(y_{t+1} \mid \mu_{t+1}^k).$$

Thus we can sample from $g(\alpha_{t+1}, k \mid F_{t+1})$ by simulating the index with probability $\lambda_k \propto g(k \mid F_{t+1})$, and then sampling from the transition density given the mixture, $f(\alpha_{t+1} \mid \alpha_t^k)$. We call the $\lambda_k$ the first stage weights. The implication is that we will simulate from particles which are associated with large predictive likelihoods. Having sampled the joint density $g(\alpha_{t+1}, k \mid F_{t+1})$ $R$ times, we perform a reweighting, putting on the draw $(\alpha_{t+1}^j, k^j)$ weights proportional to the so-called second stage weights

$$w_j = \frac{f(y_{t+1} \mid \alpha_{t+1}^j)}{f(y_{t+1} \mid \mu_{t+1}^{k^j})}, \qquad \pi_j = \frac{w_j}{\sum_{i=1}^{R} w_i}, \qquad j = 1, \ldots, R.$$

The hope is that these second stage weights are much less variable than for the original SIR method. We might resample from this discrete distribution to produce a sample of size $M$. By making proposals which have high conditional likelihoods, we reduce the cost of sampling many times from particles which have very low likelihoods and so will not be resampled at the second stage of the process. This improves the statistical efficiency of the sampling procedure and means that we can reduce the value of $R$ substantially.

To measure the statistical efficiency of these procedures, we argued earlier that we could look at minimizing $E\{\pi(\alpha)^2\}$. Here we compare a standard SIR with a SIR based on our auxiliary variable. For a standard SIR based particle filter, for large $R$,

$$E\{\pi(\alpha)^2\} = \frac{M \sum_{k=1}^{M} \lambda_k^2 f_k}{\left(\sum_{k=1}^{M} \lambda_k \bar{f}_k\right)^2}, \qquad \text{where } \lambda_k \propto f(y_{t+1} \mid \mu_{t+1}^k),$$

$$f_k = \int \left\{\frac{f(y_{t+1} \mid \alpha_{t+1})}{f(y_{t+1} \mid \mu_{t+1}^k)}\right\}^2 dF(\alpha_{t+1} \mid \alpha_t^k) \quad \text{and} \quad \bar{f}_k = \int \frac{f(y_{t+1} \mid \alpha_{t+1})}{f(y_{t+1} \mid \mu_{t+1}^k)}\, dF(\alpha_{t+1} \mid \alpha_t^k).$$

The same calculation for a SIR based auxiliary variable particle filter gives

$$E\{\pi^*(\alpha)^2\} = \frac{\sum_{k=1}^{M} \lambda_k f_k}{\left(\sum_{k=1}^{M} \lambda_k \bar{f}_k\right)^2},$$

which shows an efficiency gain if

$$\sum_{k=1}^{M} \lambda_k f_k < M \sum_{k=1}^{M} \lambda_k^2 f_k.$$

If $f_k$ does not vary over $k$ then the auxiliary variable particle filter will be more efficient, as $\sum_{k=1}^{M} \lambda_k = 1 \le M \sum_{k=1}^{M} \lambda_k^2$. More likely is that $f_k$ will depend on $k$, but only mildly, as $f(\alpha_{t+1} \mid \alpha_t^k)$ will typically be quite tightly peaked (much more tightly peaked than $f(\alpha_{t+1} \mid F_t)$) compared to the conditional likelihood.
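A minimal sketch of this generic two-stage scheme, under the same hedges as before: `mu` returns the chosen likely value $\mu_{t+1}^k$ (for instance the conditional mean), while `propagate` and `lik` are the user-supplied transition sampler and measurement density.

```python
import numpy as np

rng = np.random.default_rng(0)

def asir_step(particles, y, mu, propagate, lik, R, M):
    """Generic auxiliary SIR step of Section 3.2.1."""
    likely = mu(particles)                      # mu_{t+1}^k for each particle
    lam = lik(y, likely)                        # first stage weights lambda_k
    k = rng.choice(particles.size, size=R, p=lam / lam.sum())
    prop = propagate(particles[k])              # draw from f(a_{t+1} | a_t^k)
    w = lik(y, prop) / lik(y, likely[k])        # second stage weights
    keep = rng.choice(R, size=M, p=w / w.sum())
    return prop[keep]
```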

3.2.2 Example: a time series of angles

The model. In this section we will compare the performance of the particle and auxiliary particle filter methods for an angular time series model, the bearings-only model. We consider the simple scenario described by Gordon, Salmond, and Smith (1993). The observer is considered stationary at the origin of the $(x, z)$ plane, and the ship is assumed to gradually accelerate or decelerate randomly over time. We use the following discretisation of this system, where $\alpha_t = (x_t, vx_t, z_t, vz_t)'$:

$$\alpha_{t+1} = \begin{pmatrix} 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{pmatrix} \alpha_t + \sigma_\eta \begin{pmatrix} 1/2 & 0 \\ 1 & 0 \\ 0 & 1/2 \\ 0 & 1 \end{pmatrix} u_t, \qquad u_t \sim \mathrm{NID}(0, I_2). \qquad (4)$$

In obvious notation, $x_t$, $z_t$ represent the ship's horizontal and vertical position at time $t$, and $vx_t$, $vz_t$ represent the corresponding velocities. The state evolution is thus a VAR of the form $\alpha_{t+1} = T\alpha_t + Hu_t$. The model indicates that the source of state evolution error is the accelerations being white noise. The initial state describes the ship's starting positions and velocities, $\alpha_1 \sim \mathrm{NID}(a_1, P_1)$. This prior, together with the state evolution (4), describes the overall prior for the states.

Our model will be based on a mean direction $\theta_t = \tan^{-1}(z_t/x_t)$. The measured angle will be assumed to be wrapped Cauchy, whose density is (see for example Fisher 1993, p. 46)

$$f(y_t \mid \theta_t) = \frac{1}{2\pi} \cdot \frac{1-\rho^2}{1+\rho^2-2\rho\cos(y_t-\theta_t)}, \qquad 0 \le y_t < 2\pi, \quad 0 \le \rho \le 1. \qquad (5)$$

$\rho$ is termed the mean resultant length.

The simulated scenario. In order to assess the relative efficiency of the particle filter and the basic auxiliary method discussed in Section 3.2, we have closely followed the setup described by Gordon, Salmond, and Smith (1993). They consider $\sigma_\eta = 0.001$ and $\sigma_\varepsilon = 0.005$, where $z_t \mid \theta_t \sim \mathrm{NID}(\theta_t, \sigma_\varepsilon^2)$. We choose $\rho = 1-\sigma_\varepsilon^2$, yielding the same circular dispersion for our wrapped Cauchy density. The actual initial starting vector is taken to be $\alpha_1 = (-0.05, 0.001, 0.2, -0.055)'$. By contrast with Gordon, Salmond, and Smith (1993), we wish to have an extremely accurate and tight prior for the initial state. This is because we want the variance of quantities arising from the filtered posterior density to be small, enabling reasonably conclusive evidence to be formulated about the relative efficiency of the auxiliary method compared to the standard method. We therefore take $a_1 = \alpha_1$ and a diagonal initial variance $P_1$ with the elements $0.01 \times (0.5^2, 0.005^2, 0.3^2, 0.01^2)$ on the diagonal.

Figure 1 illustrates a realization of the model for the above scenario with $T = 10$. The ship is moving in a south-easterly direction over time. The trajectories given by the posterior filtered means from the particle (SIR) method and the auxiliary (SIR) method ($M = 300$, $R = 500$ in both cases) are both fairly close to the true path despite the small amount of simulation used.
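The following sketch simulates this scenario. The `arctan2` call is a quadrant-aware version of $\tan^{-1}(z/x)$, and the wrapped Cauchy draw uses the standard fact that wrapping a Cauchy with scale $\gamma = -\log\rho$ gives mean resultant length $\rho$; the seed and variable names are our own.

```python
import numpy as np

rng = np.random.default_rng(1)

# State alpha = (x, vx, z, vz)'; evolution alpha' = T a + H u, u ~ N(0, I_2).
T_mat = np.array([[1., 1, 0, 0], [0, 1, 0, 0], [0, 0, 1, 1], [0, 0, 0, 1]])
H = 0.001 * np.array([[0.5, 0.], [1, 0], [0, 0.5], [0, 1]])  # sigma_eta = 0.001

def simulate_bearings(alpha1, n, rho=1 - 0.005**2):
    """Simulate states and wrapped Cauchy bearings for model (4)-(5)."""
    a, states, ys = alpha1.astype(float), [], []
    gamma = -np.log(rho)                      # Cauchy scale giving m.r.l. rho
    for _ in range(n):
        theta = np.arctan2(a[2], a[0])        # mean direction tan^{-1}(z/x)
        ys.append((theta + gamma * rng.standard_cauchy()) % (2 * np.pi))
        states.append(a.copy())
        a = T_mat @ a + H @ rng.standard_normal(2)
    return np.array(states), np.array(ys)

states, ys = simulate_bearings(np.array([-0.05, 0.001, 0.2, -0.055]), n=10)
```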

Figure 1: Plot of the angular measurements from the origin, the true trajectory (solid line, crosses), the particle filtered mean trajectory (dashed line, boxes) and the auxiliary particle filtered mean trajectory (dotted line, circles). Ship moving south-east. $T = 10$, $M = 300$, $R = 500$.

Monte Carlo comparison. The two methods are now compared using a Monte Carlo study of the above scenario with $T = 10$. The 'true' filtered mean is calculated for each replication by using the auxiliary method with $M = 100{,}000$ and $R = 120{,}000$. Within each replication the mean squared error for the particle method, for each component of the state over time, is evaluated by running the method, with a different random number seed, $S$ times and recording the average of the resulting squared differences between the particle filter's estimated mean and the 'true' filtered mean. Hence for replication $i$, state component $j$, at time $t$, we calculate

$$\mathrm{MSE}^P_{i,j,t} = \frac{1}{S} \sum_{s=1}^{S} \left(\hat{\alpha}^i_{t,j,s} - \tilde{\alpha}^i_{t,j}\right)^2,$$

where $\hat{\alpha}^i_{t,j,s}$ is the particle mean for replication $i$, state component $j$, at time $t$, for simulation $s$, and $\tilde{\alpha}^i_{t,j}$ is the 'true' filtered mean for replication $i$, state component $j$, at time $t$. The log mean squared error for component $j$ at time $t$ is obtained as

$$\mathrm{LMSE}^P_{j,t} = \log \left\{ \frac{1}{\mathrm{REP}} \sum_{i=1}^{\mathrm{REP}} \mathrm{MSE}^P_{i,j,t} \right\}. \qquad (6)$$

The same operation is performed for the auxiliary method to deliver the corresponding quantity $\mathrm{LMSE}^{AM}_{j,t}$. For this study we use $\mathrm{REP} = 40$ and $S = 20$. We allow $M = 4{,}000$ or $8{,}000$, and for each of these values we set $R = M$ or $2M$.
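A hedged sketch of how these criteria can be computed from stored filtered means (the array layout is our own convention):

```python
import numpy as np

def lmse(est_means, true_means):
    """LMSE of (6): est_means has shape (REP, S, T, d) holding filtered
    means over replications and seeds; true_means has shape (REP, T, d).
    Returns an array of shape (T, d) with LMSE_{j,t}."""
    sq = (est_means - true_means[:, None]) ** 2   # squared errors
    mse = sq.mean(axis=1)                         # average over the S seeds
    return np.log(mse.mean(axis=0))               # average over REP, then log
```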

Figure 2: Plot of the relative mean square error performance, on the log scale, of the particle filter and the auxiliary based particle filter for the bearings-only tracking problem. Numbers below zero indicate a superior performance by the auxiliary particle filter. In these graphs $M = 4{,}000$ or $8{,}000$, while $R = M$ or $R = 2M$. Throughout, SIR is used as the sampling mechanism. Top left: $\alpha_{t1} = x_t$; bottom left: $\alpha_{t3} = z_t$; top right: $\alpha_{t2} = vx_t$; bottom right: $\alpha_{t4} = vz_t$.

Figure 2 shows the relative performance of the two methods for each component of the state vector over time. For each component $j$, the quantity $\mathrm{LMSE}^{AM}_{j,t} - \mathrm{LMSE}^P_{j,t}$ is plotted against time. Values close to 0 indicate that the two methods are broadly equivalent in performance, whilst negative values indicate that the auxiliary method performs better than the standard particle filter. The graphs give the expected result, with the auxiliary particle filter typically being more precise, but with the difference between the two methods falling as $R$ increases.

3.3 Examples of adaption

3.3.1 Basics

Although the above generic scheme can usually reduce the variability of the second stage weights, there are sometimes other adaption schemes, exploiting the specific structure of the time series model, which allow us to achieve yet more even weights. If we can achieve exactly equal weights then we say that we have fully adapted the procedure to the model, for we can then produce i.i.d. samples from (2). This situation is particularly interesting as we are then close to the assumptions made by Kong, Liu, and Wong (1994) for their sequential importance sampler.

3.3.2 Non-linear Gaussian measurement model

In the Gaussian measurement case, the absorption of the measurement density into the transition equation is particularly convenient. Consider a non-linear transition density with $\alpha_{t+1} \mid \alpha_t \sim N\{\mu(\alpha_t), \sigma^2(\alpha_t)\}$ and $y_{t+1} \mid \alpha_{t+1} \sim N(\alpha_{t+1}, 1)$. Then

$$f(\alpha_{t+1}, k \mid F_{t+1}) \propto f(y_{t+1} \mid \alpha_{t+1})\, f(\alpha_{t+1} \mid \alpha_t^k) = g_k(y_{t+1})\, f(\alpha_{t+1} \mid \alpha_t^k, y_{t+1}),$$

where

$$\sigma_k^{*2} = \left\{1 + \sigma^{-2}(\alpha_t^k)\right\}^{-1}, \qquad f(\alpha_{t+1} \mid \alpha_t^k, y_{t+1}) = N(\mu_k^*, \sigma_k^{*2}), \qquad \mu_k^* = \sigma_k^{*2} \left\{ \frac{\mu(\alpha_t^k)}{\sigma^2(\alpha_t^k)} + y_{t+1} \right\}.$$

This implies that the first stage weights are

$$g_k(y_{t+1}) \propto \frac{\sigma_k^*}{\sigma(\alpha_t^k)} \exp\left\{ \frac{\mu_k^{*2}}{2\sigma_k^{*2}} - \frac{\mu^2(\alpha_t^k)}{2\sigma^2(\alpha_t^k)} \right\}.$$

The Gaussian measurement density implies that the second stage weights are all equal. An example of this is a Gaussian ARCH model (see, for example, Bollerslev, Engle, and Nelson 1994) observed with independent Gaussian error, so that

$$y_t \mid \alpha_t \sim N(\alpha_t, \sigma^2), \qquad \alpha_{t+1} \mid \alpha_t \sim N(0, \beta_0 + \beta_1 \alpha_t^2).$$

This model is fully adaptable. It has received a great deal of attention in the econometric literature, as it has some attractive multivariate generalizations: see the work by Diebold and Nerlove (1989), Harvey, Ruiz, and Sentana (1992) and King, Sentana, and Wadhwani (1994). As far as we know, no likelihood methods exist in the literature for the analysis of this type of model and its various generalizations, although a number of very good approximations have been suggested.
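A minimal sketch of a fully adapted step for this ARCH-observed-with-error model, written for a general measurement variance $\sigma^2$ (the derivation above takes $\sigma^2 = 1$); since the second stage weights are all equal, no second resampling pass is needed.

```python
import numpy as np

rng = np.random.default_rng(0)

def fully_adapted_step(particles, y, b0, b1, sigma2, M):
    """Fully adapted update for y|a ~ N(a, sigma2),
    a'|a ~ N(0, b0 + b1 a^2) (Section 3.3.2)."""
    s2k = b0 + b1 * particles**2                 # transition variance
    post_var = 1.0 / (1.0 / sigma2 + 1.0 / s2k)  # sigma*_k^2
    post_mean = post_var * y / sigma2            # mu*_k (the prior mean is 0)
    # First stage weights: the predictive density N(y; 0, sigma2 + s2k).
    g = np.exp(-0.5 * y**2 / (sigma2 + s2k)) / np.sqrt(sigma2 + s2k)
    k = rng.choice(particles.size, size=M, p=g / g.sum())
    return post_mean[k] + np.sqrt(post_var[k]) * rng.standard_normal(M)
```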

3.3.3 Log-concave measurement densities

Suppose again that $f(\alpha_{t+1} \mid \alpha_t^k)$ is Gaussian, but the measurement density is log-concave as a function of $\alpha_{t+1}$. Then we might extend the above argument by Taylor expanding $\log f(y_{t+1} \mid \alpha_{t+1})$ to a second order term, again around $\mu_{t+1}^k$, to give the approximation

$$\log g(y_{t+1} \mid \alpha_{t+1}; \mu_{t+1}^k) = \log f(y_{t+1} \mid \mu_{t+1}^k) + (\alpha_{t+1} - \mu_{t+1}^k)\, \frac{\partial \log f(y_{t+1} \mid \mu_{t+1}^k)}{\partial \alpha_{t+1}} + \frac{1}{2}(\alpha_{t+1} - \mu_{t+1}^k)^2\, \frac{\partial^2 \log f(y_{t+1} \mid \mu_{t+1}^k)}{\partial \alpha_{t+1}^2},$$

and then

$$g(\alpha_{t+1}, k \mid F_{t+1}) \propto g(y_{t+1} \mid \alpha_{t+1}; \mu_{t+1}^k)\, f(\alpha_{t+1} \mid \alpha_t^k).$$

Rearranging, we can express this as

$$g(\alpha_{t+1}, k \mid F_{t+1}) \propto g(y_{t+1} \mid \mu_{t+1}^k)\, g(\alpha_{t+1} \mid \alpha_t^k, y_{t+1}; \mu_{t+1}^k),$$

which means we could simulate the index with probability proportional to $g(y_{t+1} \mid \mu_{t+1}^k)$ and then draw from $g(\alpha_{t+1} \mid \alpha_t^k, y_{t+1}; \mu_{t+1}^k)$. The resulting reweighted sample's second stage weights are proportional to the (hopefully fairly even) weights, for $j = 1, \ldots, R$,

$$w_j = \frac{f(y_{t+1} \mid \alpha_{t+1}^j)\, f(\alpha_{t+1}^j \mid \alpha_t^{k^j})}{g(y_{t+1} \mid \mu_{t+1}^{k^j})\, g(\alpha_{t+1}^j \mid \alpha_t^{k^j}, y_{t+1}; \mu_{t+1}^{k^j})} = \frac{f(y_{t+1} \mid \alpha_{t+1}^j)}{g(y_{t+1} \mid \alpha_{t+1}^j; \mu_{t+1}^{k^j})}, \qquad \pi_j = \frac{w_j}{\sum_{i=1}^{R} w_i}.$$

Thus we can exploit the special structure of the model, if available, to improve upon the auxiliary particle filter.

3.3.4 Stochastic volatility and rejection sampling

The same argument carries over when we use a first order Taylor expansion to construct $g(y_{t+1} \mid \alpha_{t+1}; \mu_{t+1}^k)$, but in this case we know that $g(y_{t+1} \mid \alpha_{t+1}; \mu_{t+1}^k) \ge f(y_{t+1} \mid \alpha_{t+1})$ for any value of $\mu_{t+1}^k$, due to the assumed log-concavity of the measurement density. Thus

$$f(\alpha_{t+1}, k \mid F_{t+1}) \propto f(y_{t+1} \mid \alpha_{t+1})\, f(\alpha_{t+1} \mid \alpha_t^k) \le g(y_{t+1} \mid \alpha_{t+1}; \mu_{t+1}^k)\, f(\alpha_{t+1} \mid \alpha_t^k) = g(y_{t+1} \mid \mu_{t+1}^k)\, g(\alpha_{t+1} \mid \alpha_t^k, y_{t+1}; \mu_{t+1}^k) \propto g(\alpha_{t+1}, k \mid F_{t+1}).$$

Thus we can perform rejection sampling from $f(\alpha_{t+1}, k \mid F_{t+1})$ by simply sampling $k$ with probability proportional to $g(y_{t+1} \mid \mu_{t+1}^k)$ and then drawing $\alpha_{t+1}$ from $g(\alpha_{t+1} \mid \alpha_t^k, y_{t+1}; \mu_{t+1}^k)$. This pair is then accepted with probability $f(y_{t+1} \mid \alpha_{t+1})/g(y_{t+1} \mid \alpha_{t+1}; \mu_{t+1}^k)$.

This argument applies to the stochastic volatility (SV) model

$$y_t = \varepsilon_t\, \beta \exp(\alpha_t/2), \qquad \alpha_{t+1} = \phi \alpha_t + \eta_t, \qquad (7)$$

where $\varepsilon_t$ and $\eta_t$ are independent Gaussian processes with variances of $1$ and $\sigma_\eta^2$ respectively. Here $\beta$ has the interpretation as the modal volatility, $\phi$ the persistence in the volatility shocks, and $\sigma_\eta^2$ is the volatility of the volatility. This model has attracted much recent attention in the econometrics literature as a way of generalizing the Black-Scholes option pricing formula to allow volatility clustering in asset returns; see, for instance, Hull and White (1987). MCMC methods have been used on this model by, for instance, Jacquier, Polson, and Rossi (1994), Shephard and Pitt (1997) and Kim, Shephard, and Chib (1998).

For this model, $\log f(y_{t+1} \mid \alpha_{t+1})$ is concave in $\alpha_{t+1}$, so that, for $\mu_{t+1}^k = \phi \alpha_t^k$,

$$\log g(y_{t+1} \mid \alpha_{t+1}; \mu_{t+1}^k) = \mathrm{const} - \frac{\alpha_{t+1}}{2} - \frac{y_{t+1}^2}{2\beta^2} \exp(-\mu_{t+1}^k) \left\{ 1 - (\alpha_{t+1} - \mu_{t+1}^k) \right\}.$$

The implication is that

$$g(\alpha_{t+1} \mid \alpha_t^k, y_{t+1}; \mu_{t+1}^k) = N\!\left[ \mu_{t+1}^k + \frac{\sigma_\eta^2}{2} \left\{ \frac{y_{t+1}^2}{\beta^2} \exp(-\mu_{t+1}^k) - 1 \right\},\; \sigma_\eta^2 \right] = N(\mu_{t+1}^{*k}, \sigma_\eta^2).$$

Likewise,

$$g(y_{t+1} \mid \mu_{t+1}^k) \propto \exp\left\{ \frac{(\mu_{t+1}^{*k})^2 - (\mu_{t+1}^{k})^2}{2\sigma_\eta^2} \right\} \exp\left\{ -\frac{y_{t+1}^2}{2\beta^2} \exp(-\mu_{t+1}^k)\, (1 + \mu_{t+1}^k) \right\}.$$

Finally, the log-probability of acceptance is

$$-\frac{y_{t+1}^2}{2\beta^2} \left[ \exp(-\alpha_{t+1}) - \exp(-\mu_{t+1}^k) \left\{ 1 - (\alpha_{t+1} - \mu_{t+1}^k) \right\} \right].$$

Notice that as $\sigma_\eta^2$ falls to zero, the acceptance probability goes to one. Finally, the same argument goes through when we use a SIR algorithm instead of rejection sampling. The proposals are made in exactly the same way, but instead of computing log-probabilities of acceptance these become log second stage weights.
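A hedged sketch of the SIR variant of this adapted scheme for model (7); the rejection variant would instead accept each draw with probability $\exp(\log w)$ rather than reweighting.

```python
import numpy as np

rng = np.random.default_rng(0)

def sv_adapted_step(particles, y, phi, sig_eta, beta, R, M):
    """Adapted SIR step for the SV model (7) using the first order
    Taylor bound of Section 3.3.4."""
    mu = phi * particles                           # mu_{t+1}^k
    c = (y**2 / (2 * beta**2)) * np.exp(-mu)       # (y^2 / 2 beta^2) e^{-mu}
    mu_star = mu + sig_eta**2 * (c - 0.5)          # proposal means
    # First stage weights g(y | mu_k), computed in logs for stability.
    log_g = (mu_star**2 - mu**2) / (2 * sig_eta**2) - c * (1 + mu)
    g = np.exp(log_g - log_g.max())
    k = rng.choice(particles.size, size=R, p=g / g.sum())
    a = mu_star[k] + sig_eta * rng.standard_normal(R)
    # Second stage: log f - log g, which is always <= 0 by log-concavity.
    logw = -(y**2 / (2 * beta**2)) * (np.exp(-a)
            - np.exp(-mu[k]) * (1 - (a - mu[k])))
    w = np.exp(logw - logw.max())
    return a[rng.choice(R, size=M, p=w / w.sum())]
```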

Simulation experiment. The basic SV model was defined in (7) of Section 3.3.4. We construct 100 times the compound daily returns on the US Dollar against UK Sterling from the first day of trading in 1997 and for the next 200 days of active trading. This data is discussed in more detail in Pitt and Shephard (1999b), where the parameters of the model were estimated using Bayesian methods. Throughout we take $\phi = 0.9702$, $\sigma_\eta = 0.178$ and $\beta = 0.5992$, the posterior means of the model for a long time series of returns up until the end of 1996. Figure 3 graphs these daily returns against time. The figure also displays the estimated quantiles of the filtering density $f\{\beta \exp(\alpha_t/2) \mid F_t\}$, computed using an auxiliary particle filter. Throughout the series we set $M = 5{,}000$ and $R = 6{,}000$. We have also displayed the posterior mean of the filtering random variable. This is always very slightly above the posterior median, as $\alpha_t \mid F_t$ is very close to being symmetric. The picture shows that the filtered volatility jumps up more quickly than it tends to go down. This reflects the fact that the volatility is modelled on the log scale.

To compare the efficiency of the simple particle filter, our basic auxiliary particle filter and the rejection based fully adapted particle filter discussed in Section 3.3.4, we again conducted a simulation experiment measuring the mean square error for each value of $t$, using the above model and again taking $n = 50$. The data was simulated using the model parameters discussed above. The results are reported using a log scale in Figure 4. To make the problem slightly more realistic and challenging we set $\varepsilon_{n/2}^2 = 2.5$ for each series, so there is a significant outlier at that point. For this study we use $\mathrm{REP} = 40$ and $S = 20$. We allow $M = 2{,}000$ or $4{,}000$, and for each of these values we set $R = M$ or $2M$. For the rejection based particle filter algorithm it only makes sense to take $M = R$, and so when $R > M$ we repeat the calculations as if $M = R$. Finally, the rejection based method takes around twice the time of the SIR based particle filter when $M = R$.

Figure 4 shows that the fully adapted particle filter is considerably more accurate than the other particle filters. It also has the advantage that it does not depend on $R$. The auxiliary particle filter is more efficient than the plain particle filter, but the difference is small, reflecting the fact that for the SV model the conditional likelihood is not very sensitive to the state.

Figure 3: The bottom graph shows the daily returns on the Dollar against UK Sterling from the first day of trading in 1997 for 200 trading days. The top graph displays the posterior filtered mean (heavy line) of $\beta \exp(\alpha_t/2) \mid F_t$, together with the 5, 20, 50, 80 and 95 percentage points of the distribution. Notice the median is always below the mean. $M = 5{,}000$, $R = 6{,}000$.

3.3.5 Mixtures of normals

Suppose $f(\alpha_{t+1} \mid \alpha_t)$ is Gaussian, but the measurement density is a discrete mixture of normals, $\sum_{j=1}^{P} \lambda_j f_j(y_{t+1} \mid \alpha_{t+1})$. Then we can perfectly sample from $f(\alpha_{t+1}, k \mid F_{t+1})$ by working with

$$f(\alpha_{t+1}, k, j \mid F_{t+1}) \propto \lambda_j f_j(y_{t+1} \mid \alpha_{t+1})\, f(\alpha_{t+1} \mid \alpha_t^k) = w_{j,k}\, f_j(\alpha_{t+1} \mid \alpha_t^k, y_{t+1}).$$

We sample from $f(\alpha_{t+1}, k, j \mid F_{t+1})$ by selecting the index pair $(k, j)$ with probability proportional to $w_{j,k}$ and then drawing from $f_j(\alpha_{t+1} \mid \alpha_t^k, y_{t+1})$. The disadvantage of this approach is that the complete enumeration and storage of the $w_{j,k}$ involves $PM$ calculations. This approach can be trivially extended to cover the case where $f(\alpha_{t+1} \mid \alpha_t)$ is a mixture of normals. MCMC smoothing methods for state space models with mixtures have been studied by, for example, Carter and Kohn (1994) and Shephard (1994).
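A sketch of this fully adapted mixture case under an assumed illustrative parameterisation: a Gaussian AR(1) transition $\alpha_{t+1} \mid \alpha_t \sim N(\phi\alpha_t, \sigma^2)$ and a scale mixture measurement $y \mid \alpha \sim \sum_j \lambda_j N(\alpha, v_j)$.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixture_adapted_step(particles, y, lam, v, phi, sig2, M):
    """Fully adapted step of Section 3.3.5: enumerate all P*M weights
    w_{j,k}, draw the pair (k, j), then sample the conjugate posterior."""
    lam, v = np.asarray(lam), np.asarray(v)
    mu = phi * particles                          # prior means, shape (M,)
    s2 = sig2 + v[:, None]                        # predictive variances (P, M)
    w = lam[:, None] * np.exp(-0.5 * (y - mu[None, :])**2 / s2) / np.sqrt(s2)
    pick = rng.choice(w.size, size=M, p=(w / w.sum()).ravel())
    j, k = np.unravel_index(pick, w.shape)        # component and particle
    post_var = 1.0 / (1.0 / sig2 + 1.0 / v[j])    # conjugate update
    post_mean = post_var * (mu[k] / sig2 + y / v[j])
    return post_mean + np.sqrt(post_var) * rng.standard_normal(M)
```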

Figure 4: Plot of the mean square error performance, on the log scale, of the particle filter against the auxiliary based particle filter and an adapted particle filter. The lower the number, the more efficient the method. Top graphs have $M = 2{,}000$, the bottom have $M = 4{,}000$. The left graphs have $R = M$, while the right ones have $R = 2M$.

4 Fixed lag filtering

The auxiliary particle filter method can also be used when we update the estimates of the states not by a single observation, but by a block of observations. This idea appeared in the initial working paper version of Pitt and Shephard (1999a), which was circulated in 1997, but was taken out of the paper which appeared in the journal. Again suppose that we approximate the density of $\alpha_t \mid F_t$, $F_t = (y_1, \ldots, y_t)'$, by a distribution with discrete support at the points $\alpha_t^1, \ldots, \alpha_t^M$, each with mass $1/M$. The task will then be to update this distribution to provide a sample from $\alpha_{t+1}, \ldots, \alpha_{t+p} \mid F_{t+p}$.

At first sight this result seems specialized, as it is not often that we have to update after the arrival of a block of observations. However, as well as solving this problem, it also suggests a way of reducing the bias caused by using the empirical prediction density as an approximation to $f(\alpha_{t+1} \mid F_t)$. Suppose that instead of updating $p$ future observations simultaneously, we store $p-1$ observations and update those observations together with an empirical prediction density for $f(\alpha_{t-p+2} \mid F_{t-p+1})$. This would provide us with draws from $f(\alpha_{t+1} \mid F_{t+1})$ as required. We call this fixed lag filtering. The hope is that the influence of the empirical prediction density will be reduced, as it will have been propagated $p$ times through the transition density. This may reduce the influence of outliers on the auxiliary method.

This can be carried out by using a straightforward particle filter (using SIR, rejection sampling or MCMC), or by building in an auxiliary variable so that we sample from $\alpha_{t+1}, \ldots, \alpha_{t+p}, k \mid F_{t+p}$. Typically the gains from using the auxiliary approach are greater here, for as $p$ grows, naive implementations of the particle filter will become less and less efficient due to not being able to adapt the sampler to the measurement density.

To illustrate this general setup, consider the use of an auxiliary particle filter where we take

$$g(k \mid F_{t+p}) \propto \int f(y_{t+p} \mid \mu_{t+p}^k) \cdots f(y_{t+1} \mid \mu_{t+1}^k)\, dF(\alpha_{t+p} \mid \alpha_{t+p-1}) \cdots dF(\alpha_{t+1} \mid \alpha_t^k) = f(y_{t+p} \mid \mu_{t+p}^k) \cdots f(y_{t+1} \mid \mu_{t+1}^k),$$

and then sample the index $k$ with weights proportional to $g(k \mid F_{t+p})$. Having selected the index $k^j$, we then propagate the transition equation $p$ steps to produce a draw $\alpha_{t+1}^j, \ldots, \alpha_{t+p}^j$, $j = 1, \ldots, R$. These draws are then reweighted according to the ratio

$$\frac{f(y_{t+p} \mid \alpha_{t+p}^j) \cdots f(y_{t+1} \mid \alpha_{t+1}^j)}{f(y_{t+p} \mid \mu_{t+p}^{k^j}) \cdots f(y_{t+1} \mid \mu_{t+1}^{k^j})}.$$

This approach has three main difficulties (a code sketch follows below). First, it requires us to store $p$ sets of observations and $p \times M$ mixture components. This is more expensive than the previous method, as well as being slightly harder to implement. Second, each auxiliary variable draw now involves $3p$ density evaluations and the generation of $p$ simulated propagation steps. Third, the auxiliary variable method is based on approximating the true density of $f(k, \alpha_{t-p+1}, \ldots, \alpha_t \mid F_t)$, and this approximation is likely to deteriorate as $p$ increases. This suggests that the more sophisticated adaptive sampling schemes discussed above may be particularly useful at this point. Again, however, this complicates the implementation of the algorithm. We will illustrate the use of this sampler at the end of the next section on an outlier problem.
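A hedged sketch of this fixed lag auxiliary update for a Gaussian AR(1) transition, with $\mu_{t+s}^k$ taken to be the conditional mean propagated without noise (these are our own concrete choices; `lik` is the measurement density):

```python
import numpy as np

rng = np.random.default_rng(0)

def fixed_lag_asir(particles, ys, phi, sig_eta, lik, R, M):
    """Fixed lag auxiliary update of Section 4 for the observation
    block ys = (y_{t+1}, ..., y_{t+p}).  Returns a (p, M) block."""
    # First stage: g(k) = prod_s f(y_{t+s} | mu_{t+s}^k).
    mu, g = particles.copy(), np.ones(particles.size)
    for y in ys:
        mu = phi * mu                       # noiseless propagation of mu^k
        g *= lik(y, mu)
    k = rng.choice(particles.size, size=R, p=g / g.sum())
    # Propagate p steps with noise and accumulate second stage weights.
    a, mu_k, logw, path = particles[k], particles[k], np.zeros(R), []
    for y in ys:
        a = phi * a + sig_eta * rng.standard_normal(R)
        mu_k = phi * mu_k
        logw += np.log(lik(y, a)) - np.log(lik(y, mu_k))
        path.append(a)
    w = np.exp(logw - logw.max())
    keep = rng.choice(R, size=M, p=w / w.sum())
    return np.stack(path)[:, keep]
```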

5 Reduced random sampling

5.1 Basic ideas

The generic auxiliary particle filter given in Section 3.2 has two sets of weighted bootstraps, and this introduces a large degree of randomness into the procedure. This is most stark in the case where $f(y_{t+1} \mid \alpha_{t+1})$ does not depend on $\alpha_{t+1}$, so that $y_{t+1}$ is uninformative about $\alpha_{t+1}$. For such a problem the first stage weights $\lambda_k \propto g(k \mid F_{t+1})$ are all equal, as are the second stage weights $w_j$. The implication is that it would have been better simply to propagate every $\alpha_t^k$ through $f(\alpha_{t+1} \mid \alpha_t^k)$ once to produce a new $\alpha_{t+1}^k$. This would produce a more efficient sample than our method, which samples with replacement from these populations, twice killing interesting particles for no good reason.

This observation has appeared in the particle filtering literature on a number of occasions. Liu and Chen (1995) discuss carrying weights forward instead of resampling, in order to keep alive particles which would otherwise die. Carpenter, Clifford, and Fearnhead (1998) think about the same issue using stratification ideas taken from sampling theory. Here we use a method which is similar to an idea discussed by Liu and Chen (1998) in this context. We resample from a population $\alpha_t^1, \ldots, \alpha_t^M$ with weights $\pi_t^1, \ldots, \pi_t^M$ to produce a sample of size $R$ in the following way. We produce stratified uniforms $\tilde{u}_t^1, \ldots, \tilde{u}_t^R$ by writing

$$\tilde{u}_t^k = \frac{k - 1 + u_t^k}{R}, \qquad \text{where } u_t^k \stackrel{iid}{\sim} \mathrm{UID}(0, 1).$$

This is the scheme suggested by Carpenter, Clifford, and Fearnhead (1998). Then we compute the cumulative probabilities

$$\tilde{\pi}_t^r = \sum_{s=1}^{r} \pi_t^s, \qquad r = 1, \ldots, M.$$

We allocate $n_k$ copies of the particle $\alpha_t^k$ to the new population, where $n_k$ is the number of $\tilde{u}_t^1, \ldots, \tilde{u}_t^R$ in the interval

$$\left( \sum_{s=1}^{k-1} \pi_t^s, \; \sum_{s=1}^{k} \pi_t^s \right].$$

The computation of each of the $\{\tilde{u}_t^k\}$, the $\{\tilde{\pi}_t^r\}$ and the $\{n_k\}$ is straightforward, and so this type of stratified sampling is very fast; in fact it is much faster than simple random sampling (a sketch is given below). Although, as we noted above, this idea is not new, it is particularly useful in the context of our generic auxiliary particle filter suggestion, which has two weighted bootstraps, while a typical SIR based particle filter has only one. Hence we might expect the gains over the original suggestion in Pitt and Shephard (1999a) to be particularly large.
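The stratified scheme is only a few lines:

```python
import numpy as np

rng = np.random.default_rng(0)

def stratified_resample(particles, pi, R):
    """Stratified resampling of Section 5.1: one uniform per stratum
    ((k-1)/R, k/R], placed against the cumulative weights of pi."""
    u = (np.arange(R) + rng.random(R)) / R     # stratified uniforms
    cum = np.cumsum(pi)                        # cumulative probabilities
    idx = np.searchsorted(cum, u)              # yields n_k copies of particle k
    return particles[idx]
```

Because `searchsorted` is a single vectorised pass over sorted cumulative weights, this is typically faster than drawing $R$ independent multinomial indices, in line with the speed claim above.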

5.2 Simple outlier example

We tried random and stratified sampling, using fixed lag versions of SIR based particle and auxiliary particle filters, on a difficult outlier problem where the analytic solution is available via the Kalman filter (a sketch of which is given at the end of this section). We assume the observations arise from an autoregression observed with noise,

$$y_t = \alpha_t + \varepsilon_t, \qquad \varepsilon_t \sim \mathrm{NID}(0, 0.707^2), \qquad \alpha_{t+1} = 0.9702\, \alpha_t + \eta_t, \qquad \eta_t \sim \mathrm{NID}(0, 0.178^2), \qquad (8)$$

where $\varepsilon_t$ and $\eta_t$ are independent processes. The model is initialised by $\alpha_t$'s stationary prior. We added to the simulated $y_{n/2}$ a shock of $6.5 \times 0.707$, which represents a very significant outlier. Throughout we set $M = R = 500$ and measure the precision of the filter by the log mean square error criterion (6), taking $\mathrm{REP} = 30$ and $S = 20$. As the problem is Gaussian, the Kalman filter's MSE divided by $M$ provides a lower bound on the mean square error criterion.

Figure 5 shows the results from the experiment, recording the mean square errors and bias of the various particle filters. It is important to note that the mean square errors are drawn on the log10 scale. The main features of the graphs are that: (i) when there is no outlier, all the particle filters are basically unbiased, with stratification being important. The use of the auxiliary variable does not have much impact in this situation, although it is still better. (ii) During an outlier, the ASIR methods dominate both in terms of bias and MSE. Stratification makes very little difference in this situation. (iii) After the outlier, stratified ASIR continues to work well, while ASIR returns to being less effective than stratified SIR. (iv) The introduction of fixed lag filtering reduces the bias of all methods by an order of magnitude, while the MSE reduces quite considerably.

Figure 5: The mean square error (MSE), using a log10 scale, and bias of four different particle filters using no and two fixed lags. The x-axis is always time, but we only graph results for $t = T/4, T/4+1, \ldots, 3T/4$ in order to focus on the crucial aspects. The four particle filters are: SIR, ASIR, stratified SIR and stratified ASIR. The results are grouped according to the degree of fixed lag filtering. In particular: (a) shows the MSE when $p = 0$, (b) shows the MSE when $p = 2$, (c) shows the bias when $p = 0$, while (d) indicates the bias with $p = 2$. Throughout we have taken $M = R = 500$.

In order to benchmark these results, we have repeated the same experiment but now with $M = R = 2{,}500$. The results are given in Figure 6. This picture is remarkably similar to Figure 5, but with smaller bias and MSE. An important feature of this experiment is that the reduction in bias and MSE from a five fold increase in $M$ and $R$ produces around the same impact as the introduction of fixed lag filtering.

Figure 6: Repeat of Figure 5 but with $M = R = 2{,}500$.
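For reference, a sketch of the Kalman filter for model (8), whose filtered variances give the lower bound used above:

```python
import numpy as np

def kalman_filter(ys, phi=0.9702, sig_eta=0.178, sig_eps=0.707):
    """Kalman filter for the AR(1)-plus-noise model (8).  Returns the
    filtered means and variances; variance / M bounds the MSE criterion."""
    a, P = 0.0, sig_eta**2 / (1 - phi**2)         # stationary prior
    means, varis = [], []
    for y in ys:
        K = P / (P + sig_eps**2)                  # Kalman gain
        a, P = a + K * (y - a), (1 - K) * P       # measurement update
        means.append(a); varis.append(P)
        a, P = phi * a, phi**2 * P + sig_eta**2   # time update
    return np.array(means), np.array(varis)
```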

6 Conclusion

This chapter has studied the weaknesses of the very attractive particle filtering method originally proposed by Gordon, Salmond, and Smith (1993). The SIR implementation of this method is not robust to outliers for two different reasons: sampling efficiency and the unreliability of the empirical prediction density in the tails of the distribution. We introduce an auxiliary variable into the particle filter to overcome the first of these problems, providing a powerful framework which is as simple as SIR, but more flexible and reliable. We show that when it is possible to adapt the particle filter, this can bring about large efficiency gains. The fixed lag filter partially tackles the second problem, which suggests a possible real improvement in the reliability of these methods.

7 Acknowledgements

Michael K. Pitt thanks the ESRC for their financial support through the grant "Modelling and analysis of econometric and financial time series".

References

Berzuini, C., N. G. Best, W. R. Gilks, and C. Larizza (1997). Dynamic conditional independence models and Markov chain Monte Carlo methods. J. American Statistical Association 92, 1403-1412.

Bollerslev, T., R. F. Engle, and D. B. Nelson (1994). ARCH models. In R. F. Engle and D. McFadden (Eds.), The Handbook of Econometrics, Volume 4, pp. 2959-3038. North-Holland.

Carpenter, J. R., P. Clifford, and P. Fearnhead (1998). An improved particle filter for non-linear problems. Working paper, January 1998, Dept. of Statistics, University of Oxford, OX1 3TG, UK.

Carter, C. K. and R. Kohn (1994). On Gibbs sampling for state space models. Biometrika 81, 541-553.

Diebold, F. X. and M. Nerlove (1989). The dynamics of exchange rate volatility: a multivariate latent factor ARCH model. J. Applied Econometrics 4, 1-21.

Fisher, N. I. (1993). Statistical Analysis of Circular Data. Cambridge: Cambridge University Press.

Gerlach, R., C. Carter, and R. Kohn (1999). Diagnostics for time series analysis. J. Time Series Analysis. Forthcoming.

Gilks, W. R., S. Richardson, and D. J. Spiegelhalter (1996). Markov Chain Monte Carlo in Practice. London: Chapman & Hall.

Gordon, N. J., D. J. Salmond, and A. F. M. Smith (1993). A novel approach to non-linear and non-Gaussian Bayesian state estimation. IEE Proceedings F 140, 107-113.

Harvey, A. C., E. Ruiz, and E. Sentana (1992). Unobserved component time series models with ARCH disturbances. J. Econometrics 52, 129-158.

Hull, J. and A. White (1987). The pricing of options on assets with stochastic volatilities. J. Finance 42, 281-300.

Hürzeler, M. and H. R. Künsch (1995). Monte Carlo approximations for general state space models. Unpublished paper: Seminar für Statistik, ETH.

Isard, M. and A. Blake (1996). Contour tracking by stochastic propagation of conditional density. Proceedings of the European Conference on Computer Vision, Cambridge, 343-356.

Jacquier, E., N. G. Polson, and P. E. Rossi (1994). Bayesian analysis of stochastic volatility models (with discussion). J. Business and Economic Statist. 12, 371-417.

Kim, S., N. Shephard, and S. Chib (1998). Stochastic volatility: likelihood inference and comparison with ARCH models. Rev. Economic Studies 65, 361-393.

King, M., E. Sentana, and S. Wadhwani (1994). Volatility and links between national stock markets. Econometrica 62, 901-933.

Kitagawa, G. (1987). Non-Gaussian state space modelling of non-stationary time series. J. American Statistical Association 82, 1032-1063.

Kitagawa, G. (1996). Monte Carlo filter and smoother for non-Gaussian nonlinear state space models. J. Computational and Graphical Statistics 5, 1-25.

Kong, A., J. S. Liu, and W. H. Wong (1994). Sequential imputations and Bayesian missing data problems. J. American Statistical Association 89, 278-288.

Liu, J. (1996). Metropolized independent sampling with comparison to rejection sampling and importance sampling. Statistics and Computing 6, 113-119.

Liu, J. and R. Chen (1995). Blind deconvolution via sequential imputation. J. American Statistical Association 90, 567-576.

Liu, J. and R. Chen (1998). Sequential Monte Carlo methods for dynamic systems. J. American Statistical Association 93, 1032-1044.

Pitt, M. K. and N. Shephard (1999a). Filtering via simulation: auxiliary particle filters. J. American Statistical Association 94. Forthcoming.

Pitt, M. K. and N. Shephard (1999b). Time varying covariances: a factor stochastic volatility approach (with discussion). In J. M. Bernardo, J. O. Berger, A. P. Dawid, and A. F. M. Smith (Eds.), Bayesian Statistics 6. Oxford: Oxford University Press. Forthcoming.

Rubin, D. B. (1987). A noniterative sampling/importance resampling alternative to the data augmentation algorithm for creating a few imputations when the fraction of missing information is modest: the SIR algorithm. Discussion of Tanner and Wong (1987). J. American Statistical Association 82, 543-546.

Shephard, N. (1994). Partial non-Gaussian state space. Biometrika 81, 115-131.

Shephard, N. and M. K. Pitt (1997). Likelihood analysis of non-Gaussian measurement time series. Biometrika 84, 653-667.

Smith, A. F. M. and A. E. Gelfand (1992). Bayesian statistics without tears: a sampling-resampling perspective. American Statistician 46, 84-88.

Tanner, M. A. and W. H. Wong (1987). The calculation of posterior distributions by data augmentation (with discussion). J. American Statistical Association 82, 528-550.

West, M. (1992). Mixture models, Monte Carlo, Bayesian updating and dynamic models. Computer Science and Statistics 24, 325-333.

West, M. and J. Harrison (1997). Bayesian Forecasting and Dynamic Models (2nd ed.). New York: Springer-Verlag.


More information

Surveying the Characteristics of Population Monte Carlo

Surveying the Characteristics of Population Monte Carlo International Research Journal of Applied and Basic Sciences 2013 Available online at www.irjabs.com ISSN 2251-838X / Vol, 7 (9): 522-527 Science Explorer Publications Surveying the Characteristics of

More information

Figure : Learning the dynamics of juggling. Three motion classes, emerging from dynamical learning, turn out to correspond accurately to ballistic mot

Figure : Learning the dynamics of juggling. Three motion classes, emerging from dynamical learning, turn out to correspond accurately to ballistic mot Learning multi-class dynamics A. Blake, B. North and M. Isard Department of Engineering Science, University of Oxford, Oxford OX 3PJ, UK. Web: http://www.robots.ox.ac.uk/vdg/ Abstract Standard techniques

More information

Bagging During Markov Chain Monte Carlo for Smoother Predictions

Bagging During Markov Chain Monte Carlo for Smoother Predictions Bagging During Markov Chain Monte Carlo for Smoother Predictions Herbert K. H. Lee University of California, Santa Cruz Abstract: Making good predictions from noisy data is a challenging problem. Methods

More information

Learning Static Parameters in Stochastic Processes

Learning Static Parameters in Stochastic Processes Learning Static Parameters in Stochastic Processes Bharath Ramsundar December 14, 2012 1 Introduction Consider a Markovian stochastic process X T evolving (perhaps nonlinearly) over time variable T. We

More information

Bootstrap tests of multiple inequality restrictions on variance ratios

Bootstrap tests of multiple inequality restrictions on variance ratios Economics Letters 91 (2006) 343 348 www.elsevier.com/locate/econbase Bootstrap tests of multiple inequality restrictions on variance ratios Jeff Fleming a, Chris Kirby b, *, Barbara Ostdiek a a Jones Graduate

More information

Quantile Filtering and Learning

Quantile Filtering and Learning Quantile Filtering and Learning Michael Johannes, Nicholas Polson and Seung M. Yae October 5, 2009 Abstract Quantile and least-absolute deviations (LAD) methods are popular robust statistical methods but

More information

MONTE CARLO METHODS. Hedibert Freitas Lopes

MONTE CARLO METHODS. Hedibert Freitas Lopes MONTE CARLO METHODS Hedibert Freitas Lopes The University of Chicago Booth School of Business 5807 South Woodlawn Avenue, Chicago, IL 60637 http://faculty.chicagobooth.edu/hedibert.lopes hlopes@chicagobooth.edu

More information

Filtering and Likelihood Inference

Filtering and Likelihood Inference Filtering and Likelihood Inference Jesús Fernández-Villaverde University of Pennsylvania July 10, 2011 Jesús Fernández-Villaverde (PENN) Filtering and Likelihood July 10, 2011 1 / 79 Motivation Introduction

More information

Approximate Bayesian Computation and Particle Filters

Approximate Bayesian Computation and Particle Filters Approximate Bayesian Computation and Particle Filters Dennis Prangle Reading University 5th February 2014 Introduction Talk is mostly a literature review A few comments on my own ongoing research See Jasra

More information

Cross-sectional space-time modeling using ARNN(p, n) processes

Cross-sectional space-time modeling using ARNN(p, n) processes Cross-sectional space-time modeling using ARNN(p, n) processes W. Polasek K. Kakamu September, 006 Abstract We suggest a new class of cross-sectional space-time models based on local AR models and nearest

More information

CS242: Probabilistic Graphical Models Lecture 7B: Markov Chain Monte Carlo & Gibbs Sampling

CS242: Probabilistic Graphical Models Lecture 7B: Markov Chain Monte Carlo & Gibbs Sampling CS242: Probabilistic Graphical Models Lecture 7B: Markov Chain Monte Carlo & Gibbs Sampling Professor Erik Sudderth Brown University Computer Science October 27, 2016 Some figures and materials courtesy

More information

ECONOMICS 7200 MODERN TIME SERIES ANALYSIS Econometric Theory and Applications

ECONOMICS 7200 MODERN TIME SERIES ANALYSIS Econometric Theory and Applications ECONOMICS 7200 MODERN TIME SERIES ANALYSIS Econometric Theory and Applications Yongmiao Hong Department of Economics & Department of Statistical Sciences Cornell University Spring 2019 Time and uncertainty

More information

Spatio-temporal precipitation modeling based on time-varying regressions

Spatio-temporal precipitation modeling based on time-varying regressions Spatio-temporal precipitation modeling based on time-varying regressions Oleg Makhnin Department of Mathematics New Mexico Tech Socorro, NM 87801 January 19, 2007 1 Abstract: A time-varying regression

More information

SUPPLEMENT TO MARKET ENTRY COSTS, PRODUCER HETEROGENEITY, AND EXPORT DYNAMICS (Econometrica, Vol. 75, No. 3, May 2007, )

SUPPLEMENT TO MARKET ENTRY COSTS, PRODUCER HETEROGENEITY, AND EXPORT DYNAMICS (Econometrica, Vol. 75, No. 3, May 2007, ) Econometrica Supplementary Material SUPPLEMENT TO MARKET ENTRY COSTS, PRODUCER HETEROGENEITY, AND EXPORT DYNAMICS (Econometrica, Vol. 75, No. 3, May 2007, 653 710) BY SANGHAMITRA DAS, MARK ROBERTS, AND

More information

Bayesian Inference for DSGE Models. Lawrence J. Christiano

Bayesian Inference for DSGE Models. Lawrence J. Christiano Bayesian Inference for DSGE Models Lawrence J. Christiano Outline State space-observer form. convenient for model estimation and many other things. Bayesian inference Bayes rule. Monte Carlo integation.

More information

Why do we care? Measurements. Handling uncertainty over time: predicting, estimating, recognizing, learning. Dealing with time

Why do we care? Measurements. Handling uncertainty over time: predicting, estimating, recognizing, learning. Dealing with time Handling uncertainty over time: predicting, estimating, recognizing, learning Chris Atkeson 2004 Why do we care? Speech recognition makes use of dependence of words and phonemes across time. Knowing where

More information

Inferring biological dynamics Iterated filtering (IF)

Inferring biological dynamics Iterated filtering (IF) Inferring biological dynamics 101 3. Iterated filtering (IF) IF originated in 2006 [6]. For plug-and-play likelihood-based inference on POMP models, there are not many alternatives. Directly estimating

More information

Monte Carlo Approximation of Monte Carlo Filters

Monte Carlo Approximation of Monte Carlo Filters Monte Carlo Approximation of Monte Carlo Filters Adam M. Johansen et al. Collaborators Include: Arnaud Doucet, Axel Finke, Anthony Lee, Nick Whiteley 7th January 2014 Context & Outline Filtering in State-Space

More information

Why do we care? Examples. Bayes Rule. What room am I in? Handling uncertainty over time: predicting, estimating, recognizing, learning

Why do we care? Examples. Bayes Rule. What room am I in? Handling uncertainty over time: predicting, estimating, recognizing, learning Handling uncertainty over time: predicting, estimating, recognizing, learning Chris Atkeson 004 Why do we care? Speech recognition makes use of dependence of words and phonemes across time. Knowing where

More information

Sequential Monte Carlo Methods (for DSGE Models)

Sequential Monte Carlo Methods (for DSGE Models) Sequential Monte Carlo Methods (for DSGE Models) Frank Schorfheide University of Pennsylvania, PIER, CEPR, and NBER October 23, 2017 Some References These lectures use material from our joint work: Tempered

More information

STA 4273H: Statistical Machine Learning

STA 4273H: Statistical Machine Learning STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Statistics! rsalakhu@utstat.toronto.edu! http://www.utstat.utoronto.ca/~rsalakhu/ Sidney Smith Hall, Room 6002 Lecture 11 Project

More information

Nonlinear and/or Non-normal Filtering. Jesús Fernández-Villaverde University of Pennsylvania

Nonlinear and/or Non-normal Filtering. Jesús Fernández-Villaverde University of Pennsylvania Nonlinear and/or Non-normal Filtering Jesús Fernández-Villaverde University of Pennsylvania 1 Motivation Nonlinear and/or non-gaussian filtering, smoothing, and forecasting (NLGF) problems are pervasive

More information

STAT 425: Introduction to Bayesian Analysis

STAT 425: Introduction to Bayesian Analysis STAT 425: Introduction to Bayesian Analysis Marina Vannucci Rice University, USA Fall 2017 Marina Vannucci (Rice University, USA) Bayesian Analysis (Part 2) Fall 2017 1 / 19 Part 2: Markov chain Monte

More information

Computational statistics

Computational statistics Computational statistics Markov Chain Monte Carlo methods Thierry Denœux March 2017 Thierry Denœux Computational statistics March 2017 1 / 71 Contents of this chapter When a target density f can be evaluated

More information

A radial basis function artificial neural network test for ARCH

A radial basis function artificial neural network test for ARCH Economics Letters 69 (000) 5 3 www.elsevier.com/ locate/ econbase A radial basis function artificial neural network test for ARCH * Andrew P. Blake, George Kapetanios National Institute of Economic and

More information

DAG models and Markov Chain Monte Carlo methods a short overview

DAG models and Markov Chain Monte Carlo methods a short overview DAG models and Markov Chain Monte Carlo methods a short overview Søren Højsgaard Institute of Genetics and Biotechnology University of Aarhus August 18, 2008 Printed: August 18, 2008 File: DAGMC-Lecture.tex

More information

Supplement to A Hierarchical Approach for Fitting Curves to Response Time Measurements

Supplement to A Hierarchical Approach for Fitting Curves to Response Time Measurements Supplement to A Hierarchical Approach for Fitting Curves to Response Time Measurements Jeffrey N. Rouder Francis Tuerlinckx Paul L. Speckman Jun Lu & Pablo Gomez May 4 008 1 The Weibull regression model

More information

Do Markov-Switching Models Capture Nonlinearities in the Data? Tests using Nonparametric Methods

Do Markov-Switching Models Capture Nonlinearities in the Data? Tests using Nonparametric Methods Do Markov-Switching Models Capture Nonlinearities in the Data? Tests using Nonparametric Methods Robert V. Breunig Centre for Economic Policy Research, Research School of Social Sciences and School of

More information

Recursive Kernel Density Estimation of the Likelihood for Generalized State-Space Models

Recursive Kernel Density Estimation of the Likelihood for Generalized State-Space Models Recursive Kernel Density Estimation of the Likelihood for Generalized State-Space Models A.E. Brockwell February 28, 2005 Abstract In time series analysis, the family of generalized state-space models

More information

LS-N-IPS: AN IMPROVEMENT OF PARTICLE FILTERS BY MEANS OF LOCAL SEARCH

LS-N-IPS: AN IMPROVEMENT OF PARTICLE FILTERS BY MEANS OF LOCAL SEARCH LS--IPS: A IMPROVEMET OF PARTICLE FILTERS BY MEAS OF LOCAL SEARCH Péter Torma Csaba Szepesvári Eötvös Loránd University, Budapest Mindmaker Ltd. H-1121 Budapest, Konkoly Thege Miklós út 29-33. Abstract:

More information

Machine Learning for OR & FE

Machine Learning for OR & FE Machine Learning for OR & FE Hidden Markov Models Martin Haugh Department of Industrial Engineering and Operations Research Columbia University Email: martin.b.haugh@gmail.com Additional References: David

More information

Rao-Blackwellized Particle Filter for Multiple Target Tracking

Rao-Blackwellized Particle Filter for Multiple Target Tracking Rao-Blackwellized Particle Filter for Multiple Target Tracking Simo Särkkä, Aki Vehtari, Jouko Lampinen Helsinki University of Technology, Finland Abstract In this article we propose a new Rao-Blackwellized

More information

Introduction to Machine Learning CMU-10701

Introduction to Machine Learning CMU-10701 Introduction to Machine Learning CMU-10701 Markov Chain Monte Carlo Methods Barnabás Póczos Contents Markov Chain Monte Carlo Methods Sampling Rejection Importance Hastings-Metropolis Gibbs Markov Chains

More information

A Note on Bayesian Inference After Multiple Imputation

A Note on Bayesian Inference After Multiple Imputation A Note on Bayesian Inference After Multiple Imputation Xiang Zhou and Jerome P. Reiter Abstract This article is aimed at practitioners who plan to use Bayesian inference on multiplyimputed datasets in

More information

Basic Sampling Methods

Basic Sampling Methods Basic Sampling Methods Sargur Srihari srihari@cedar.buffalo.edu 1 1. Motivation Topics Intractability in ML How sampling can help 2. Ancestral Sampling Using BNs 3. Transforming a Uniform Distribution

More information

Sequential Bayesian Updating

Sequential Bayesian Updating BS2 Statistical Inference, Lectures 14 and 15, Hilary Term 2009 May 28, 2009 We consider data arriving sequentially X 1,..., X n,... and wish to update inference on an unknown parameter θ online. In a

More information

Numerically Accelerated Importance Sampling for Nonlinear Non-Gaussian State Space Models

Numerically Accelerated Importance Sampling for Nonlinear Non-Gaussian State Space Models Numerically Accelerated Importance Sampling for Nonlinear Non-Gaussian State Space Models Siem Jan Koopman (a) André Lucas (a) Marcel Scharth (b) (a) VU University Amsterdam and Tinbergen Institute, The

More information

Introduction to Machine Learning CMU-10701

Introduction to Machine Learning CMU-10701 Introduction to Machine Learning CMU-10701 Markov Chain Monte Carlo Methods Barnabás Póczos & Aarti Singh Contents Markov Chain Monte Carlo Methods Goal & Motivation Sampling Rejection Importance Markov

More information

Statistical Practice

Statistical Practice Statistical Practice A Note on Bayesian Inference After Multiple Imputation Xiang ZHOU and Jerome P. REITER This article is aimed at practitioners who plan to use Bayesian inference on multiply-imputed

More information

eqr094: Hierarchical MCMC for Bayesian System Reliability

eqr094: Hierarchical MCMC for Bayesian System Reliability eqr094: Hierarchical MCMC for Bayesian System Reliability Alyson G. Wilson Statistical Sciences Group, Los Alamos National Laboratory P.O. Box 1663, MS F600 Los Alamos, NM 87545 USA Phone: 505-667-9167

More information

Markov chain Monte Carlo methods for visual tracking

Markov chain Monte Carlo methods for visual tracking Markov chain Monte Carlo methods for visual tracking Ray Luo rluo@cory.eecs.berkeley.edu Department of Electrical Engineering and Computer Sciences University of California, Berkeley Berkeley, CA 94720

More information

EVALUATING SYMMETRIC INFORMATION GAP BETWEEN DYNAMICAL SYSTEMS USING PARTICLE FILTER

EVALUATING SYMMETRIC INFORMATION GAP BETWEEN DYNAMICAL SYSTEMS USING PARTICLE FILTER EVALUATING SYMMETRIC INFORMATION GAP BETWEEN DYNAMICAL SYSTEMS USING PARTICLE FILTER Zhen Zhen 1, Jun Young Lee 2, and Abdus Saboor 3 1 Mingde College, Guizhou University, China zhenz2000@21cn.com 2 Department

More information

Session 5B: A worked example EGARCH model

Session 5B: A worked example EGARCH model Session 5B: A worked example EGARCH model John Geweke Bayesian Econometrics and its Applications August 7, worked example EGARCH model August 7, / 6 EGARCH Exponential generalized autoregressive conditional

More information

Department of Economics, UCSB UC Santa Barbara

Department of Economics, UCSB UC Santa Barbara Department of Economics, UCSB UC Santa Barbara Title: Past trend versus future expectation: test of exchange rate volatility Author: Sengupta, Jati K., University of California, Santa Barbara Sfeir, Raymond,

More information

Robert Collins CSE586, PSU Intro to Sampling Methods

Robert Collins CSE586, PSU Intro to Sampling Methods Robert Collins Intro to Sampling Methods CSE586 Computer Vision II Penn State Univ Robert Collins A Brief Overview of Sampling Monte Carlo Integration Sampling and Expected Values Inverse Transform Sampling

More information

The Bias-Variance dilemma of the Monte Carlo. method. Technion - Israel Institute of Technology, Technion City, Haifa 32000, Israel

The Bias-Variance dilemma of the Monte Carlo. method. Technion - Israel Institute of Technology, Technion City, Haifa 32000, Israel The Bias-Variance dilemma of the Monte Carlo method Zlochin Mark 1 and Yoram Baram 1 Technion - Israel Institute of Technology, Technion City, Haifa 32000, Israel fzmark,baramg@cs.technion.ac.il Abstract.

More information

Markov Chain Monte Carlo in Practice

Markov Chain Monte Carlo in Practice Markov Chain Monte Carlo in Practice Edited by W.R. Gilks Medical Research Council Biostatistics Unit Cambridge UK S. Richardson French National Institute for Health and Medical Research Vilejuif France

More information