Ensemble Kalman filters, Sequential Importance Resampling and beyond


Peter Jan van Leeuwen
Institute for Marine and Atmospheric research Utrecht (IMAU)
Utrecht University, P.O. Box 80005, 3508 TA Utrecht, The Netherlands

ABSTRACT

Data assimilation in high-resolution atmosphere or ocean models is complicated because of the nonlinearity of the problem. Several methods to solve the problem have been presented, each with its own advantages and disadvantages. In this paper so-called particle methods are discussed, with emphasis on Sequential Importance Resampling (SIR) and a new variant of that method. Reference is made to related methods, in particular to the Ensemble Kalman filter (EnKF). A detailed comparison between the EnKF and the SIR is made using the nonlinear KdV equation. It is shown that the SIR produces good results in a highly nonlinear multi-layer quasi-geostrophic ocean model. Since the method needs at least 500 ensemble members or particles with the present-day observing system, new variants have to be studied to reduce that number in order to make the method feasible for real applications, like (seasonal) weather forecasting. In the new variant discussed here the number of members can be reduced by at least a factor of 10 by guiding the ensemble to future observations. In this way the method starts to resemble a smoother. It is shown that the new method gives promising results, and the potential and drawbacks of the new method are discussed.

1 Introduction

The highly nonlinear nature of atmospheric and oceanic flows, together with the relatively sparse observation network (especially in oceanography), makes the data-assimilation problem nonlinear too. By inspecting a probability density function (pdf) of a multi-layer quasi-geostrophic model of the ocean we immediately see that the nonlinearity is substantial: when starting from a Gaussian, the pdf becomes multimodal in a matter of days (see figure 1).
The pdf was constructed by running an ensemble of models, each with a slightly perturbed initial condition, and adding model noise at each time step (see section 4). The message from such a figure is that data-assimilation methods that contain linearizations of the true dynamics are expected not to work too well. Indeed, methods like the Extended Kalman filter (EKF) will not work in problems like these, because they assume that the errors in the optimal estimate evolve with linear dynamics. It is expected that when the resolution of the numerical models increases this problem becomes even more pressing, because the dynamics tend to become more nonlinear. A possible way to circumvent this problem is to let the errors evolve with the nonlinear model equations by performing an ensemble of model runs. This approach leads to the well-known Ensemble Kalman filter (EnKF), as presented by Evensen (1994; see also Burgers et al., 1998). A practical advantage of this method over the original EKF is the fact that only a limited number of ensemble members is needed, compared to the equivalent of N ensemble runs (with linear dynamics) in the EKF, in which N is the size of the state space, which can easily be 1 million. One problem remains, however, and that is the analysis step, when the model is confronted with observations. In all Kalman-filter-like methods it is assumed at analysis time that either the model is linear or that the pdf of

Figure 1: Example of a probability density function at a certain model point, obtained from a data-assimilation run with a multi-layer quasi-geostrophic model.

the model is Gaussian. For the observations the assumption is that the pdf is Gaussian, and that the measurement operator (which produces the model equivalent of an observation) is linear. In methods like the EnKF, the Kalman-filter update equations are used even when the above-mentioned assumptions are not fulfilled. What is done is that the variance of the ensemble is used in the Kalman-filter equation. So, it is assumed that this equation does hold, but the rest of the method is as nonlinear as possible. By the latter I mean that due to the ensemble nature of the method it is possible to use nonlinear measurement operators, and each ensemble member is updated individually. One can show that, under the assumptions of the original Kalman filter, the updated ensemble has the correct covariance. (Note that the covariance is never explicitly calculated in ensemble methods.) However, since these assumptions are relaxed to some extent, it is in fact not entirely clear what is done statistically. For instance, the prior ensemble will in general have a third-order moment (skewness) that is unequal to zero. So, extra information on the true pdf is present, which is a good thing. However, the update is done independently of this skewness. So, the updated ensemble will have a skewness that is deformed by the update process, and it is unclear what its meaning is. It might be completely wrong, driving the estimated pdf away from the true one. So-called adjoint methods take the nonlinearity of the dynamics into account directly. A serious problem is, however, that such a method does not provide an error estimate in a highly nonlinear context. Note that the inverse of the Hessian, which is sometimes claimed to provide this error estimate, only does so in a linear context.
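To make the ensemble analysis step discussed above concrete, here is a minimal perturbed-observation EnKF analysis in Python. It is a sketch with made-up numbers, not the implementation used in this paper; the covariance is never formed in state space, only the small innovation matrix is inverted.

```python
import numpy as np

# Minimal perturbed-observation EnKF analysis step (illustrative sketch).
# State dimension n, m ensemble members, p observations with linear
# measurement operator H and observation-error variance r.
def enkf_analysis(E, H, d, r, rng):
    """E: (n, m) ensemble; H: (p, n); d: (p,) observations; r: obs variance."""
    n, m = E.shape
    p = H.shape[0]
    # Ensemble perturbations around the mean; the full (n, n) covariance
    # is never built explicitly.
    A = E - E.mean(axis=1, keepdims=True)
    HA = H @ A
    # Innovation covariance H P H^T + R: only a (p, p) matrix is inverted.
    S = (HA @ HA.T) / (m - 1) + r * np.eye(p)
    # Perturbed observations give the updated ensemble the correct spread.
    D = d[:, None] + rng.normal(0.0, np.sqrt(r), size=(p, m))
    # Each updated member is a linear combination of prior members --
    # exactly the step where imbalance can be introduced.
    K = (A @ HA.T) / (m - 1) @ np.linalg.inv(S)
    return E + K @ (D - H @ E)

rng = np.random.default_rng(0)
E = rng.normal(1.0, 0.5, size=(4, 200))       # 4 state variables, 200 members
H = np.array([[1.0, 0.0, 0.0, 0.0]])          # observe the first variable only
Ea = enkf_analysis(E, H, np.array([2.0]), 0.01, rng)
print(Ea[0].mean())   # analysis mean of the observed variable moves toward 2.0
```

Because the prior spread (0.5) is large compared to the observation error, the analysis mean ends up close to the observation, mirroring the EnKF behavior discussed in section 3.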
Another problem is the assumption that the model dynamics and the so-called physics (that is, the diabatic processes) are free of errors. Unfortunately, we are a long way from that assumption. Finally, the error-free model dynamics make the final estimate extremely dependent on the initial conditions in the (nearly) chaotic system that the true atmosphere or ocean is. This latter problem can be avoided by keeping assimilation time intervals short, as is done in incremental 4DVAR. An advantage of the latter methods is the fact that the model evolution is always balanced. The Kalman-filter-like methods can produce unbalanced states at analysis times because each updated ensemble member is just a linear combination of the prior ensemble. For instance, negative concentrations or unbalanced dynamics can occur. This is due to the Gaussian assumption on the pdfs, while e.g. a concentration does not have a Gaussian pdf. Another interesting fact of all methods presented above is that they have to perform matrix inversions. The size of the matrix depends on the number of observations and can become rather large. So, it seems wise to

try to keep the number of observations low for optimal numerical performance, while, from a statistical point of view, we want to use as many observations as possible. Is it possible to create a data-assimilation method that can handle non-Gaussian pdfs and nonlinear measurement functionals, provides error estimates, and has no problems with millions of observations? To answer this question one has to return to the most general description of the data-assimilation problem. Bayes has shown that the probability density of a model state ψ given a new set of observations d is given by:

f_m(ψ|d) = f_d(d|ψ) f_m(ψ) / ∫ f_d(d|ψ) f_m(ψ) dψ.   (1)

In words, the pdf of the model given the observations is given by the product of the prior pdf of the model and that of the observations given the model. The denominator is a normalization constant. Note that, because we need the pdf of the observations given the model, we are not troubled with the true value of the observed variable, only with its measurement and the model estimate. A close look at the equation shows that the posterior pdf can be obtained by just multiplying the densities of model and observations. So, data assimilation is that simple. In its purest form it has nothing to do with matrix inversions; it is not an inverse problem in that sense. At the same time we see how ensemble methods can be used. The ensemble members, or particles, are a certain representation of the prior model pdf, and the posterior pdf is represented by a reweighting of these ensemble members. This weighting depends on the value of the observation density given each ensemble member. Sequential Importance Sampling does just that: it creates an ensemble of models, runs that ensemble forward until observations become available (so far the method is identical to other ensemble methods like the EnKF), weights each ensemble member with these observations, and continues the integration.
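The multiplicative nature of (1) can be demonstrated on a one-dimensional grid: the posterior is just the pointwise product of prior and likelihood, renormalized, with no matrix inversion anywhere. The numbers below are purely illustrative.

```python
import numpy as np

# Bayes' rule (1) on a 1-D grid: posterior = prior * likelihood, renormalized.
psi = np.linspace(-5.0, 5.0, 2001)
dpsi = psi[1] - psi[0]

# A bimodal, non-Gaussian prior f_m(psi): mixture of two Gaussians.
prior = 0.5 * np.exp(-0.5 * (psi + 2) ** 2) + 0.5 * np.exp(-0.5 * (psi - 2) ** 2)
prior /= prior.sum() * dpsi

# Likelihood f_d(d | psi) for an observation d = 1.5 with unit error variance.
like = np.exp(-0.5 * (1.5 - psi) ** 2)

posterior = prior * like
posterior /= posterior.sum() * dpsi       # the denominator of (1)

mean_post = (psi * posterior).sum() * dpsi
print(mean_post)   # posterior mass shifts toward the prior mode near psi = +2
```

The observation effectively removes the mode near ψ = −2 and leaves a near-Gaussian posterior around ψ ≈ 1.75, all by multiplication alone.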
This procedure is depicted in figure 2, and the actual calculation is presented in the next section.

Figure 2: Sequential Importance Resampling: the prior pdf, as represented by an ensemble, is multiplied with the observation pdf (not necessarily a Gaussian) to obtain the posterior pdf as represented by the ensemble. This posterior pdf is resampled to give each member equal weight again. The horizontal axis denotes one model variable.

A practical problem is evident in figure 2, namely that some ensemble members have nothing to do with the observations: they are just way off. These members get a very low weight compared to the others. The ratio

of these weights can easily be a million or more. Clearly, these members have no influence on the first few moments of the pdf; they contain no real information anymore. This has led people to so-called resampling (Gordon et al., 1993). In the original formulation one just draws the complete ensemble randomly from the weighted posterior ensemble. Clearly, posterior ensemble members with very low weight have a very low probability to be drawn, while members with large weights can be drawn more than once. This is nothing more than abandoning those members that contain no information, and duplicating those that do. In the present implementation members with large weights are copied, and the number of copies is related to their weight. This copying and abandoning is done such that the total number of members remains the same as before the update. This describes, in words, Sequential Importance Resampling; we elaborate on it in the next section. In Van Leeuwen (2003) it was shown that this method works in a highly nonlinear corner of the World Ocean, where the size of the state space was of order 10^5. Unfortunately, the size of the ensemble needed was rather large; good results were obtained with a size of 495. This is considered too large for operational purposes. The idea to reduce this number further arose from communications with Kevin Judd. He suggests using a so-called shadowing trajectory (or rather a pseudo-orbit) to first detect where the observations are before an ensemble is created to follow that trajectory. He proposes to use 4DVAR to generate the shadowing trajectory, but the drawback is that this is rather expensive (of the order of 100 ensemble integrations), so that the gain is lost in generating this first trajectory. Approximate ways to construct the trajectory are under development, as are extensions to pseudo-orbits for models including dynamical errors.
However, instead of generating such a trajectory one could let the ensemble know beforehand where the observations are. This results in a SIR algorithm in which the SIR is applied with the same set of observations before the actual observation time. This leads to the Guided Sequential Importance Resampling (GSIR). It can be shown that this is a valid procedure as long as the weighting is done properly. In this paper it is shown that this method can reduce the number of ensemble members needed to about 30 for the problem under consideration. In the present paper the methods are presented in the next section. In section 3 a comparison of the performance of the EnKF and the SIR is presented to highlight the differences in a strongly nonlinear environment (clearly not to judge which method is best). Section 4 discusses the SIR and especially the GSIR for a real-sized problem with real observations. The paper closes with a discussion of the relative merits.

2 Sequential Importance Resampling

In this section we discuss the algorithms for the SIR and the GSIR. What both methods have in common is the reweighting of ensemble members as soon as observations are available. Before the observations are taken into account, the pdf of the model is described by a swarm of particles or ensemble members. Each of them contains the same amount of information on this pdf; each has weight 1/N, in which N is the size of the ensemble. When observations become available these members are reweighted. Bayes tells us (see (1)) that this new weight should be:

w_i = f_d(d|ψ_i) / Σ_{j=1}^N f_d(d|ψ_j),   (2)

in which f_d is the pdf of the observations. When the observations are Gaussian distributed this leads to:

w_i = (1/A) exp[ −(d − Hψ_i)² / (2σ²) ],   (3)

in which H is the measurement operator (which can be nonlinear), and in which the normalization A is of no importance because we know that the sum of the weights of all members should equal 1. The next two sections describe the two SIR filters in detail.
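The weighting (2)–(3), together with a resampling in which heavy members are copied an integer number of times and the remainder of the ensemble is drawn from the leftover weights, can be sketched as follows. This is an illustrative scalar toy, not the paper's implementation; all numbers are made up.

```python
import numpy as np

# Sketch of the SIR weighting (2)-(3) and a residual-style resampling.
rng = np.random.default_rng(1)

def gaussian_weights(ensemble, d, H, sigma):
    """Normalized weights (2)-(3) for a (possibly nonlinear) operator H."""
    logw = -0.5 * ((d - H(ensemble)) / sigma) ** 2
    w = np.exp(logw - logw.max())        # subtract max for numerical stability
    return w / w.sum()                   # normalization: weights sum to 1

def residual_resample(ensemble, w, rng):
    """Copy members floor(N*w) times; draw the rest from the residual weights."""
    N = len(w)
    counts = np.floor(N * w).astype(int)     # integer parts -> direct copies
    residual = N * w - counts                # leftover weight per member
    n_rest = N - counts.sum()
    if n_rest > 0:
        extra = rng.choice(N, size=n_rest, p=residual / residual.sum())
        counts += np.bincount(extra, minlength=N)
    return np.repeat(ensemble, counts)       # ensemble size is preserved

members = rng.normal(0.0, 2.0, size=100)     # scalar "states" for illustration
w = gaussian_weights(members, d=1.0, H=lambda x: x, sigma=1.0)
new = residual_resample(members, w, rng)
print(len(new), new.mean())   # size preserved; mean pulled toward d = 1.0
```

Subtracting the maximum log-weight before exponentiating implements the irrelevance of the normalization A in (3) while avoiding underflow when weight ratios reach a million or more.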

2.1 The SIR algorithm

The algorithm is explained rather easily. For details I refer to Van Leeuwen (2003). The procedure that I followed in this paper closely resembles importance resampling, as mentioned above. In the original SIR new members are drawn randomly from the weighted posterior ensemble. Here, instead of choosing randomly from the distribution determined by the weights, members with large weights are chosen directly from the distribution in the following way. First, this density is multiplied by the total number of ensemble members. For each so-obtained weight that is larger than 1, the integer part of that weight determines the number of identical copies of the corresponding ensemble member. So, if the original weight of member i was w_i = 0.115, with an ensemble size of 100, the new weight becomes 100 × 0.115 = 11.5. This results in 11 copies of the ensemble member, while a residual weight of 0.5 remains. This procedure is followed for all weights. Finally, all remaining residual weights form a new density from which the rest of the ensemble is drawn, according to the rules of stochastic importance resampling described above. The reason for the deviation from basic importance resampling is described in Van Leeuwen (2003) and is of little consequence for what follows.

2.2 The GSIR algorithm

Figure 3: Guided Sequential Importance Resampling: a SIR is performed at t2′ before the actual measurement time t2 to guide the ensemble towards the observations at t2. The ellipses denote the observations with their standard deviation.

Now that the SIR algorithm is given, the GSIR is easily explained. Figure 3 shows the method. When the SIR has been applied at the previous observation time, we wait until the next observations come in. Then we integrate the ensemble forward in time for a few time steps. We now pretend that the new observations are taken at this time and perform a SIR step. Obviously, we make an error here, because the observations are not to be used yet.
However, we want to know already at this stage which members are going in the right direction, and which members are shooting off to remote areas in state space not supported by the observations at all. To avoid being too strict we increase the measurement error by a large factor, say 100. The system is so nonlinear (or the model is so bad) that a number of ensemble members are unable to obtain a reasonable weight at this stage. By using the SIR we abandon these members at this early stage. After performing the SIR at this intermediate time step before the actual observations, we continue integrating the ensemble, again for a few time steps. Then the procedure described above is repeated, but the error in the

observations is increased by a smaller factor than in the previous step, say a factor of 10. Again the SIR is performed at this intermediate step. This is repeated a few times if necessary. Then we integrate the ensemble up to the true measurement time. Again we perform a SIR step, but because the ensemble members are already relatively close to the observations (guided towards them, as it were), the weights will not differ too greatly. This allows one to greatly reduce the number of members. In the application described in section 4 the reduction is more than a factor of 30! This brings the number of members needed to about 30, which is sensible for real applications, given the ensemble forecasts and the number of iterations performed in 4DVAR at this moment. It is even cheaper.

3 Nonlinear filtering: A simple example with the KdV equation

In this section the SIR method is applied to the Korteweg-de Vries equation to study its behavior in nonlinear systems. For comparison the Ensemble Kalman filter is applied to the same problem. In this way the difference between the conventional Kalman update and a variance-minimizing solution can be investigated. Details of this comparison are presented in Van Leeuwen (2003). The Korteweg-de Vries (KdV) equation describes the nonlinear evolution of a field u subject to advection and dispersion. It reads:

u_t + 6u u_x + u_xxx = 0.   (4)

Shape-conserving solutions called solitons are allowed by a balance between the steepening of the wave form due to nonlinear advection and the dispersion from the third spatial derivative. We start with a field of the form:

u(x,0) = 0.5a [cosh( (√a/2)(x − x_0) )]⁻²,   (5)

in which x_0 is the position of the maximum of the wave form, and 0.5a its amplitude (see Fig. 4a). The KdV equation will move the wave form towards positive x values with a speed a, while conserving its shape. Important for the following is that the soliton is stable to small perturbations.

Figure 4: Left: True solution of the KdV equation at time 0, 10 and 20.
Right: Prior (dotted), EnKF (dashed) and SIR (solid) ensemble mean solutions at t=10.

Several experiments have been performed with the new filter and the Ensemble Kalman filter. We present one experiment in detail here that highlights the differences between the two methods. We first form a true solution by integrating this form with a = 1 over a domain of length 50, with periodic boundary conditions, x_0 = 20 and grid spacing Δx = 0.5. The generation of the ensemble and the integration details are described in Van Leeuwen (2003).
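The soliton (5) with speed a can be verified numerically against (4): evaluating u(x,t) = 0.5a cosh⁻²((√a/2)(x − x₀ − at)) and inserting finite-difference derivatives should leave only truncation error. The grid values below are illustrative, not those of the experiment.

```python
import numpy as np

# Check that the travelling soliton satisfies the KdV equation (4),
# u_t + 6 u u_x + u_xxx = 0, up to finite-difference truncation error.
a, x0 = 1.0, 20.0

def soliton(x, t):
    return 0.5 * a / np.cosh(0.5 * np.sqrt(a) * (x - x0 - a * t)) ** 2

h, dt = 0.01, 1e-4                      # fine grid for the derivative checks
x = np.arange(10.0, 30.0, h)
u = soliton(x, 0.0)

u_t = (soliton(x, dt) - soliton(x, -dt)) / (2 * dt)       # centred in time
u_x = (np.roll(u, -1) - np.roll(u, 1)) / (2 * h)          # centred in space
u_xxx = (np.roll(u, -2) - 2 * np.roll(u, -1)              # 5-point third
         + 2 * np.roll(u, 1) - np.roll(u, 2)) / (2 * h ** 3)  # derivative

residual = u_t + 6 * u * u_x + u_xxx
print(np.abs(residual[5:-5]).max())     # small: the soliton solves (4)
```

The maximum of u is 0.5a at x₀, consistent with the amplitude stated below (5), and the residual of (4) is at the level of the O(h²) stencil error.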

This solution is observed six times: at t = 10 at x = 37, x = 40 and x = 43, and at t = 20 at x = 57, x = 60 and x = 63. To these pseudo-observations random Gaussian noise with zero mean and standard deviation 0.05 was added. Note that the observations are taken around the peak values of the wave form. In figure 4b the mean of the ensemble before the update at t = 10 is given. The decrease of the amplitude can be attributed to the spread in amplitudes a, leading to an ensemble of soliton-like waves with different propagation speeds. In the same figure the mean of the ensemble after the analysis at t = 10 is given for the Ensemble Kalman filter and for the Sequential Importance Resampling filter. The measurements are indicated by the crosses. The first thing that strikes the eye is that the EnKF solution comes much closer to the observations than the SIRF. The EnKF solution is too close to the observations: because the variance in the prior ensemble is very large, the EnKF assumes there is little information in the prior ensemble. In Fig. 5a the prior, posterior and observational densities are given at the peak of the true soliton, at t = 10 and x = 40. The densities were created using the frequency interpretation on pre-specified intervals. Varying the intervals within reasonable bands showed that the features visible are robust. Clearly, the prior is non-Gaussian because the solitons are always nonnegative. Several ensemble members moved too slowly or too fast to have a significant value for the solution at x = 40. The SIRF gives the exact solution because the ensemble has converged.

Figure 5: Left: Prior (dashed), posterior (solid) and observational (dotted) pdf at t=10 and x=40. Right: prior (dashed) and posterior (solid) variance at t=10.

Variance estimates for a truly variance-minimizing solution like the SIR also show unfamiliar behavior. Figure 5b shows the prior and posterior variance estimates for the SIRF at t = 10.
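The variance-versus-entropy distinction that matters for interpreting Fig. 5 can be shown with a toy calculation (hypothetical densities, chosen only for illustration): a density can have a larger variance and yet a smaller entropy, i.e. less uncertainty.

```python
import numpy as np

# Toy illustration: a bimodal density with LARGER variance but SMALLER
# entropy than a unit Gaussian, so the second moment alone can overstate
# the remaining uncertainty.
x = np.linspace(-6.0, 6.0, 4001)
dx = x[1] - x[0]

def normalize(p):
    return p / (p.sum() * dx)

def variance(p):
    m = (x * p).sum() * dx
    return ((x - m) ** 2 * p).sum() * dx

def entropy(p):
    q = p[p > 0]                         # skip exact zeros in the tails
    return -(q * np.log(q)).sum() * dx   # discrete differential entropy

broad = normalize(np.exp(-0.5 * x ** 2))                     # N(0, 1)
bimodal = normalize(np.exp(-0.5 * ((x - 2) / 0.2) ** 2)      # two narrow
                    + np.exp(-0.5 * ((x + 2) / 0.2) ** 2))   # modes at +-2

print(variance(broad), variance(bimodal))   # bimodal variance is larger
print(entropy(broad), entropy(bimodal))     # but its entropy is smaller
```

The bimodal density "knows" the state is near one of two values, so its entropy is low even though its variance (≈4) far exceeds that of the unit Gaussian.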
Interestingly, the posterior variance is higher than the prior variance at the measurement point x = 40. So, in a conventional way of thinking, the uncertainty in the estimate at that point has increased due to the measurement. A more accurate inspection of the full prior and posterior densities shows that the uncertainty, defined here as the entropy (see Van Leeuwen, 2003), has decreased, but the second moment of the density does not show it. One can easily show that the entropy of a Gaussian distributed variable is a monotonically increasing function of its variance. For probability densities other than the Gaussian this is not necessarily true. For instance, for bimodal densities the variance loses its meaning. In that case the entropy is a more sensible estimate, as Shannon (1948) showed. In our case we find that the entropy of the posterior density is smaller than that of the prior density. So, the entropy has decreased. The analysis at t = 20 (not shown here) gives a SIR estimate very close to the observations. This has to do with the correct update of the ensemble at t = 10, leading to a relatively large part of the ensemble being rather close to the truth. The EnKF analysis behaves very wildly. This is due to the fact that the relatively strong update at t = 10 generated negative values, and the KdV solution is very sensitive to such values, leading to large waves that travel in the wrong direction. This shows another feature of nonlinear filtering: truly nonlinear filters are

property conserving; that is, they take into account the conservation laws that are in the governing equations, because ensemble members are not mixed but merely weighted. It should be mentioned that modifications to the standard EnKF can be used to overcome the problems presented here, but the point we want to make is that nonlinear filtering sometimes gives unexpected results, and needs no adaptations for specific problems.

4 The SIR and its Guided variant

In this section the SIR and the GSIR are applied to a large-scale problem to study their behavior in such a setting. We study the ocean area around South Africa. The Agulhas Current runs along the east coast of South Africa southward and retroflects just after leaving the continent at its most southern tip, back into the Indian Ocean. At the retroflection point large Agulhas rings are shed, moving into the South Atlantic. The area is an ideal test bed for data-assimilation methods due to the highly nonlinear dynamics and the availability of high-quality satellite altimeter measurements of the height of the sea surface. Since this height is directly related to the pressure, it is an important dynamical constraint on the flow. The area is modeled by a 5-layer quasi-geostrophic ocean model with a horizontal resolution of 10 km, with 251 × 140 grid points. The first baroclinic Rossby deformation radius is about 35 km in this area. The layer depths are 300, 300, 400, 100 and 3000 m, respectively, with densities increasing downward from 1026 kg/m³. The time stepping was done with leap-frog, with an Euler step every 67th step to suppress the computational mode. A time step of 1 hour was chosen, close to the CFL limit, for optimal accuracy. Small-scale noise was reduced by a Shapiro filter of order 8. Boundary conditions are such that features leave the domain with twice the speed of the fastest wave modes. Only the inflow of the Agulhas Current at the eastern side of South Africa was prescribed.
Because this inflow is supercritical for all baroclinic (Kelvin) waves, this last condition is well posed for those waves. Problems with barotropic waves did not arise. (Note that the values at the boundaries are part of the data-assimilation problem.) The model is able to produce realistic ring-shedding events and general meso-scale motion in the area, as compared to satellite observations and in situ ship measurements.

4.1 Statistics

The initial streamfunction error (uncertainty) was taken space-independent, with values of 4000, 3000, 2000, 1000 and 1000 m²/s for the five layers. Every day a random error of 0.05 times these values was added to describe the model error. The spatial correlation of the errors was Gaussian with a decorrelation length of twice the Rossby radius of deformation. The state space, consisting of the streamfunction fields for the 5 layers, has a dimension of about 175,000 (251 × 140 grid points × 5 layers). The ensemble size was 1024, for which the SIR seems to have converged (see Van Leeuwen, 2003), and 32 for the GSIR.

4.2 Observations

The observations were satellite altimeter height data from the TOPEX/Poseidon and ERS-2 satellites. These two satellites cover the model area with tracks that are about 150 km (T/P) and about 70 km (ERS) apart. T/P has a repeat orbit of 10 days, ERS one of 35 days. The along-track resolution is 7 km, of which we used every fifth point. The observations used in the data-assimilation experiment are collected over 1 day and the resulting batches are offered to the model. So, each batch has only a partial coverage of the

domain (a few tracks) that differs from day to day. The observational error was specified as 5000 m²/s, which corresponds to about 4 cm in sea-surface height. The shape of the probability density of the altimeter observations is a difficult matter. Due to the weighting procedure the SIRF is very sensitive to the tails of that density. In general, we know little of those tails. It has been suggested, however, that the tails of a Gaussian are too small: the square in the exponent cuts off large deviations from the observations very drastically. So, outliers, meaning bad measurements here, may have a tremendous effect on the behavior of the filter. The Lorentz density is used in this paper, but better alternatives can probably be found quite easily. An advantage of this density is that it has a shape very similar to a Gaussian near the peak (symmetric and quadratic), but it is much broader away from the peak. The observational error is taken equal to σ, half the full width at half maximum, so equal to a Gaussian in this respect. The SIR and GSIR filters were implemented on an Origin 3800 parallel computer using up to 256 processors, with MPI. The parallel efficiency was close to 100%, while the f90 code is extremely simple.

4.3 Results

We concentrate here on the comparison between the SIR and the GSIR. The prior density shown in figure 1 is from the SIR experiment, showing that Kalman-filter-like methods might not do a good job. Figure 6a shows the streamfunction evolution of the upper layer for days 2, 10 and 20, as determined from the SIR with 1024 members. Figure 6b shows the same for the GSIR with 32 members. Comparing these two figures shows that the two methods produce nearly identical results. The differences are within the ensemble standard deviations of the SIR and the GSIR (not shown). Also these second moments show a close resemblance. The conclusion from these results is that the GSIR works in this highly nonlinear environment with only 32 ensemble members.
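The outlier sensitivity discussed above for the observation density can be quantified by comparing the weight a Gaussian and a Lorentz (Cauchy-type) density assign to a large observation-minus-model misfit, both with half width at half maximum σ. The sketch below uses σ = 5000 m²/s from the text; everything else is illustrative.

```python
import numpy as np

# Relative weight given to a misfit r under a Gaussian and under a Lorentz
# density with the same half-width-at-half-maximum parameter sigma.
sigma = 5000.0   # observational error, m^2/s, as specified in the text

def gaussian(r):
    return np.exp(-0.5 * (r / sigma) ** 2)

def lorentz(r):
    # Cauchy-type shape: quadratic (Gaussian-like) near the peak,
    # but with much fatter tails.
    return 1.0 / (1.0 + (r / sigma) ** 2)

for k in (1, 3, 5):                     # misfits of k "standard" widths
    r = k * sigma
    print(k, gaussian(r), lorentz(r))
# At 5 sigma the Gaussian weight is ~3.7e-6 while the Lorentz weight is
# still 1/26, so one outlying measurement no longer annihilates members
# that fit all other observations well.
```

This is exactly the robustness argument made above: with Gaussian tails a single bad altimeter point can drive all weights to a single member, while the Lorentz tails keep the weight ratios bounded.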
5 Summary and discussion

A new data-assimilation method, a variant of Sequential Importance Resampling, has been presented. It is truly variance minimizing for nonlinear dynamics, unlike schemes based on the (Ensemble) Kalman filter. It is argued that due to its very nature data assimilation is not an inverse problem, or a minimization problem, although it is usually cast in that form. The method is based on importance resampling, a well-known method in control theory (see e.g. Doucet et al., 2001). The SIR filter has been shown to work well in large-scale problems in Van Leeuwen (2003), but needed at least 500 ensemble members. New in this paper is a new variant, the Guided SIR, in which the number of ensemble members can be decreased by at least a factor of 30 by using observations before their actual measurement time. First, the special effects in nonlinear filtering were highlighted in an example with the KdV equation, where the EnKF and the SIR were compared. We found that the Gaussian assumption made in the EnKF at analysis time can lead to incorrect updating. Furthermore, it is possible that the variance of the posterior density is larger than that of the prior density at some locations, as the new filter showed. This is impossible when the Gaussian assumption is made. The entropy of the pdf, a measure of the uncertainty, did decrease, however. Finally, it was shown that the new filter avoids unbalanced states. So, to sum up, the SIR is truly variance minimizing, no matrix inversions are needed, the observations can be non-Gaussian distributed, and the measurement functionals can be nonlinear without any problem. Furthermore, it preserves prior model constraints like positive definiteness, unlike Kalman-filter-like methods that mix states at analysis time, and it provides error estimates, unlike 4DVAR-like methods. Finally, it is easy to implement and

Figure 6: Upper-layer streamfunction for SIR with 1024 members (left) and GSIR with 32 members (right) on days 2, 10 and 20.

parallel, by its very nature. Second, the GSIR was compared to the SIR in a large-scale problem. Because of the high dimension of the state space in real-size applications, care has to be taken when formulating the probability density of the observations. It was argued that because of outliers a Gaussian density is too narrow, i.e., its tails are too thin. One measurement can degrade the solution if this is not taken into account. A Lorentz profile was used in this paper, leading to a more robust filter that seemed to work quite well. It was found that a SIR with 1024 members produces the same ensemble mean as a GSIR with 32 members, that is, within the error bound of the SIR. Also the standard deviations of both methods are of comparable size. This leads us to the conclusion that the GSIR is a promising method for real operational problems because it combines all advantages of the SIR with a relatively low ensemble size. One disadvantage compared to the SIR is that two new parameters have to be chosen: the frequency of intermittent SIR steps and the variance inflation of the observations at these steps. Preliminary experiments show that the outcome is not too sensitive to these choices, as long as they are in a range around the values presented in this paper. However, more research, probably with trial and error, is needed to make definite statements about these

matters.

References

[1] Burgers, G., P. J. van Leeuwen, and G. Evensen, On the analysis scheme of the Ensemble Kalman Filter, Mon. Weather Rev., 126, 1719-1724, 1998.
[2] Doucet, A., N. de Freitas, and N. J. Gordon (Eds.), Sequential Monte Carlo Methods in Practice, Springer, New York, 581 pp., 2001.
[3] Evensen, G., Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics, J. Geophys. Res., 99(C5), 10,143-10,162, 1994.
[4] Gordon, N. J., D. J. Salmond, and A. F. M. Smith, Novel approach to nonlinear/non-Gaussian Bayesian state estimation, IEE Proceedings-F, 140, 107-113, 1993.
[5] Shannon, C. E., A mathematical theory of communication, Bell Syst. Tech. J., 27, 379-423, 1948.
[6] Van Leeuwen, P. J., A variance-minimizing filter for large-scale applications, Mon. Weather Rev., 131, 2071-2084, 2003.


Kalman Filter and Ensemble Kalman Filter Kalman Filter and Ensemble Kalman Filter 1 Motivation Ensemble forecasting : Provides flow-dependent estimate of uncertainty of the forecast. Data assimilation : requires information about uncertainty

More information

Analysis Scheme in the Ensemble Kalman Filter

Analysis Scheme in the Ensemble Kalman Filter JUNE 1998 BURGERS ET AL. 1719 Analysis Scheme in the Ensemble Kalman Filter GERRIT BURGERS Royal Netherlands Meteorological Institute, De Bilt, the Netherlands PETER JAN VAN LEEUWEN Institute or Marine

More information

Ensemble Kalman Filter

Ensemble Kalman Filter Ensemble Kalman Filter Geir Evensen and Laurent Bertino Hydro Research Centre, Bergen, Norway, Nansen Environmental and Remote Sensing Center, Bergen, Norway The Ensemble Kalman Filter (EnKF) Represents

More information

Particle filtering in geophysical systems

Particle filtering in geophysical systems Generated using version 3.0 of the official AMS L A TEX template Particle filtering in geophysical systems Peter Jan van Leeuwen Institute for Marine and Atmospheric Research Utrecht, Utrecht University

More information

Ensembles and Particle Filters for Ocean Data Assimilation

Ensembles and Particle Filters for Ocean Data Assimilation DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. Ensembles and Particle Filters for Ocean Data Assimilation Robert N. Miller College of Oceanic and Atmospheric Sciences

More information

The Ensemble Kalman Filter:

The Ensemble Kalman Filter: p.1 The Ensemble Kalman Filter: Theoretical formulation and practical implementation Geir Evensen Norsk Hydro Research Centre, Bergen, Norway Based on Evensen 23, Ocean Dynamics, Vol 53, No 4 p.2 The Ensemble

More information

6 Sequential Data Assimilation for Nonlinear Dynamics: The Ensemble Kalman Filter

6 Sequential Data Assimilation for Nonlinear Dynamics: The Ensemble Kalman Filter 6 Sequential Data Assimilation for Nonlinear Dynamics: The Ensemble Kalman Filter GEIR EVENSEN Nansen Environmental and Remote Sensing Center, Bergen, Norway 6.1 Introduction Sequential data assimilation

More information

Gaussian Filtering Strategies for Nonlinear Systems

Gaussian Filtering Strategies for Nonlinear Systems Gaussian Filtering Strategies for Nonlinear Systems Canonical Nonlinear Filtering Problem ~u m+1 = ~ f (~u m )+~ m+1 ~v m+1 = ~g(~u m+1 )+~ o m+1 I ~ f and ~g are nonlinear & deterministic I Noise/Errors

More information

Particle Filters. Outline

Particle Filters. Outline Particle Filters M. Sami Fadali Professor of EE University of Nevada Outline Monte Carlo integration. Particle filter. Importance sampling. Degeneracy Resampling Example. 1 2 Monte Carlo Integration Numerical

More information

Lagrangian Data Assimilation and Its Application to Geophysical Fluid Flows

Lagrangian Data Assimilation and Its Application to Geophysical Fluid Flows Lagrangian Data Assimilation and Its Application to Geophysical Fluid Flows Laura Slivinski June, 3 Laura Slivinski (Brown University) Lagrangian Data Assimilation June, 3 / 3 Data Assimilation Setup:

More information

Data Assimilation with the Ensemble Kalman Filter and the SEIK Filter applied to a Finite Element Model of the North Atlantic

Data Assimilation with the Ensemble Kalman Filter and the SEIK Filter applied to a Finite Element Model of the North Atlantic Data Assimilation with the Ensemble Kalman Filter and the SEIK Filter applied to a Finite Element Model of the North Atlantic L. Nerger S. Danilov, G. Kivman, W. Hiller, and J. Schröter Alfred Wegener

More information

Sensor Fusion: Particle Filter

Sensor Fusion: Particle Filter Sensor Fusion: Particle Filter By: Gordana Stojceska stojcesk@in.tum.de Outline Motivation Applications Fundamentals Tracking People Advantages and disadvantages Summary June 05 JASS '05, St.Petersburg,

More information

Ergodicity in data assimilation methods

Ergodicity in data assimilation methods Ergodicity in data assimilation methods David Kelly Andy Majda Xin Tong Courant Institute New York University New York NY www.dtbkelly.com April 15, 2016 ETH Zurich David Kelly (CIMS) Data assimilation

More information

arxiv: v1 [physics.ao-ph] 23 Jan 2009

arxiv: v1 [physics.ao-ph] 23 Jan 2009 A Brief Tutorial on the Ensemble Kalman Filter Jan Mandel arxiv:0901.3725v1 [physics.ao-ph] 23 Jan 2009 February 2007, updated January 2009 Abstract The ensemble Kalman filter EnKF) is a recursive filter

More information

2D Image Processing (Extended) Kalman and particle filter

2D Image Processing (Extended) Kalman and particle filter 2D Image Processing (Extended) Kalman and particle filter Prof. Didier Stricker Dr. Gabriele Bleser Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz

More information

Tropical Pacific Ocean model error covariances from Monte Carlo simulations

Tropical Pacific Ocean model error covariances from Monte Carlo simulations Q. J. R. Meteorol. Soc. (2005), 131, pp. 3643 3658 doi: 10.1256/qj.05.113 Tropical Pacific Ocean model error covariances from Monte Carlo simulations By O. ALVES 1 and C. ROBERT 2 1 BMRC, Melbourne, Australia

More information

Convergence of the Ensemble Kalman Filter in Hilbert Space

Convergence of the Ensemble Kalman Filter in Hilbert Space Convergence of the Ensemble Kalman Filter in Hilbert Space Jan Mandel Center for Computational Mathematics Department of Mathematical and Statistical Sciences University of Colorado Denver Parts based

More information

Addressing the nonlinear problem of low order clustering in deterministic filters by using mean-preserving non-symmetric solutions of the ETKF

Addressing the nonlinear problem of low order clustering in deterministic filters by using mean-preserving non-symmetric solutions of the ETKF Addressing the nonlinear problem of low order clustering in deterministic filters by using mean-preserving non-symmetric solutions of the ETKF Javier Amezcua, Dr. Kayo Ide, Dr. Eugenia Kalnay 1 Outline

More information

Aspects of the practical application of ensemble-based Kalman filters

Aspects of the practical application of ensemble-based Kalman filters Aspects of the practical application of ensemble-based Kalman filters Lars Nerger Alfred Wegener Institute for Polar and Marine Research Bremerhaven, Germany and Bremen Supercomputing Competence Center

More information

Workshop on Heterogeneous Computing, 16-20, July No Monte Carlo is safe Monte Carlo - more so parallel Monte Carlo

Workshop on Heterogeneous Computing, 16-20, July No Monte Carlo is safe Monte Carlo - more so parallel Monte Carlo Workshop on Heterogeneous Computing, 16-20, July 2012 No Monte Carlo is safe Monte Carlo - more so parallel Monte Carlo K. P. N. Murthy School of Physics, University of Hyderabad July 19, 2012 K P N Murthy

More information

Sequential parameter estimation for stochastic systems

Sequential parameter estimation for stochastic systems Sequential parameter estimation for stochastic systems G. A. Kivman To cite this version: G. A. Kivman. Sequential parameter estimation for stochastic systems. Nonlinear Processes in Geophysics, European

More information

4DEnVar. Four-Dimensional Ensemble-Variational Data Assimilation. Colloque National sur l'assimilation de données

4DEnVar. Four-Dimensional Ensemble-Variational Data Assimilation. Colloque National sur l'assimilation de données Four-Dimensional Ensemble-Variational Data Assimilation 4DEnVar Colloque National sur l'assimilation de données Andrew Lorenc, Toulouse France. 1-3 décembre 2014 Crown copyright Met Office 4DEnVar: Topics

More information

PATTERN RECOGNITION AND MACHINE LEARNING CHAPTER 13: SEQUENTIAL DATA

PATTERN RECOGNITION AND MACHINE LEARNING CHAPTER 13: SEQUENTIAL DATA PATTERN RECOGNITION AND MACHINE LEARNING CHAPTER 13: SEQUENTIAL DATA Contents in latter part Linear Dynamical Systems What is different from HMM? Kalman filter Its strength and limitation Particle Filter

More information

Smoothers: Types and Benchmarks

Smoothers: Types and Benchmarks Smoothers: Types and Benchmarks Patrick N. Raanes Oxford University, NERSC 8th International EnKF Workshop May 27, 2013 Chris Farmer, Irene Moroz Laurent Bertino NERSC Geir Evensen Abstract Talk builds

More information

4. DATA ASSIMILATION FUNDAMENTALS

4. DATA ASSIMILATION FUNDAMENTALS 4. DATA ASSIMILATION FUNDAMENTALS... [the atmosphere] "is a chaotic system in which errors introduced into the system can grow with time... As a consequence, data assimilation is a struggle between chaotic

More information

Learning Static Parameters in Stochastic Processes

Learning Static Parameters in Stochastic Processes Learning Static Parameters in Stochastic Processes Bharath Ramsundar December 14, 2012 1 Introduction Consider a Markovian stochastic process X T evolving (perhaps nonlinearly) over time variable T. We

More information

A Spectral Approach to Linear Bayesian Updating

A Spectral Approach to Linear Bayesian Updating A Spectral Approach to Linear Bayesian Updating Oliver Pajonk 1,2, Bojana V. Rosic 1, Alexander Litvinenko 1, and Hermann G. Matthies 1 1 Institute of Scientific Computing, TU Braunschweig, Germany 2 SPT

More information

Data assimilation in high dimensions

Data assimilation in high dimensions Data assimilation in high dimensions David Kelly Courant Institute New York University New York NY www.dtbkelly.com February 12, 2015 Graduate seminar, CIMS David Kelly (CIMS) Data assimilation February

More information

Accelerating the spin-up of Ensemble Kalman Filtering

Accelerating the spin-up of Ensemble Kalman Filtering Accelerating the spin-up of Ensemble Kalman Filtering Eugenia Kalnay * and Shu-Chih Yang University of Maryland Abstract A scheme is proposed to improve the performance of the ensemble-based Kalman Filters

More information

Human Pose Tracking I: Basics. David Fleet University of Toronto

Human Pose Tracking I: Basics. David Fleet University of Toronto Human Pose Tracking I: Basics David Fleet University of Toronto CIFAR Summer School, 2009 Looking at People Challenges: Complex pose / motion People have many degrees of freedom, comprising an articulated

More information

Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics

Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics JOURNAL OF GEOPHYSICAL RESEARCH, VOL. 99, NO. C, PAGES,143,162, MAY, 1994 Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics Geir

More information

The Ensemble Kalman Filter:

The Ensemble Kalman Filter: p.1 The Ensemble Kalman Filter: Theoretical formulation and practical implementation Geir Evensen Norsk Hydro Research Centre, Bergen, Norway Based on Evensen, Ocean Dynamics, Vol 5, No p. The Ensemble

More information

A HYBRID ENSEMBLE KALMAN FILTER / 3D-VARIATIONAL ANALYSIS SCHEME

A HYBRID ENSEMBLE KALMAN FILTER / 3D-VARIATIONAL ANALYSIS SCHEME A HYBRID ENSEMBLE KALMAN FILTER / 3D-VARIATIONAL ANALYSIS SCHEME Thomas M. Hamill and Chris Snyder National Center for Atmospheric Research, Boulder, Colorado 1. INTRODUCTION Given the chaotic nature of

More information

EVALUATING SYMMETRIC INFORMATION GAP BETWEEN DYNAMICAL SYSTEMS USING PARTICLE FILTER

EVALUATING SYMMETRIC INFORMATION GAP BETWEEN DYNAMICAL SYSTEMS USING PARTICLE FILTER EVALUATING SYMMETRIC INFORMATION GAP BETWEEN DYNAMICAL SYSTEMS USING PARTICLE FILTER Zhen Zhen 1, Jun Young Lee 2, and Abdus Saboor 3 1 Mingde College, Guizhou University, China zhenz2000@21cn.com 2 Department

More information

EnKF Review. P.L. Houtekamer 7th EnKF workshop Introduction to the EnKF. Challenges. The ultimate global EnKF algorithm

EnKF Review. P.L. Houtekamer 7th EnKF workshop Introduction to the EnKF. Challenges. The ultimate global EnKF algorithm Overview 1 2 3 Review of the Ensemble Kalman Filter for Atmospheric Data Assimilation 6th EnKF Purpose EnKF equations localization After the 6th EnKF (2014), I decided with Prof. Zhang to summarize progress

More information

Forecasting and data assimilation

Forecasting and data assimilation Supported by the National Science Foundation DMS Forecasting and data assimilation Outline Numerical models Kalman Filter Ensembles Douglas Nychka, Thomas Bengtsson, Chris Snyder Geophysical Statistics

More information

The Canadian approach to ensemble prediction

The Canadian approach to ensemble prediction The Canadian approach to ensemble prediction ECMWF 2017 Annual seminar: Ensemble prediction : past, present and future. Pieter Houtekamer Montreal, Canada Overview. The Canadian approach. What are the

More information

Ensemble Data Assimilation and Uncertainty Quantification

Ensemble Data Assimilation and Uncertainty Quantification Ensemble Data Assimilation and Uncertainty Quantification Jeff Anderson National Center for Atmospheric Research pg 1 What is Data Assimilation? Observations combined with a Model forecast + to produce

More information

Fundamentals of Data Assimilation

Fundamentals of Data Assimilation National Center for Atmospheric Research, Boulder, CO USA GSI Data Assimilation Tutorial - June 28-30, 2010 Acknowledgments and References WRFDA Overview (WRF Tutorial Lectures, H. Huang and D. Barker)

More information

Particle filters, the optimal proposal and high-dimensional systems

Particle filters, the optimal proposal and high-dimensional systems Particle filters, the optimal proposal and high-dimensional systems Chris Snyder National Center for Atmospheric Research Boulder, Colorado 837, United States chriss@ucar.edu 1 Introduction Particle filters

More information

How does 4D-Var handle Nonlinearity and non-gaussianity?

How does 4D-Var handle Nonlinearity and non-gaussianity? How does 4D-Var handle Nonlinearity and non-gaussianity? Mike Fisher Acknowledgements: Christina Tavolato, Elias Holm, Lars Isaksen, Tavolato, Yannick Tremolet Slide 1 Outline of Talk Non-Gaussian pdf

More information

DART_LAB Tutorial Section 2: How should observations impact an unobserved state variable? Multivariate assimilation.

DART_LAB Tutorial Section 2: How should observations impact an unobserved state variable? Multivariate assimilation. DART_LAB Tutorial Section 2: How should observations impact an unobserved state variable? Multivariate assimilation. UCAR 2014 The National Center for Atmospheric Research is sponsored by the National

More information

(Extended) Kalman Filter

(Extended) Kalman Filter (Extended) Kalman Filter Brian Hunt 7 June 2013 Goals of Data Assimilation (DA) Estimate the state of a system based on both current and all past observations of the system, using a model for the system

More information

ESTIMATING CORRELATIONS FROM A COASTAL OCEAN MODEL FOR LOCALIZING AN ENSEMBLE TRANSFORM KALMAN FILTER

ESTIMATING CORRELATIONS FROM A COASTAL OCEAN MODEL FOR LOCALIZING AN ENSEMBLE TRANSFORM KALMAN FILTER ESTIMATING CORRELATIONS FROM A COASTAL OCEAN MODEL FOR LOCALIZING AN ENSEMBLE TRANSFORM KALMAN FILTER Jonathan Poterjoy National Weather Center Research Experiences for Undergraduates, Norman, Oklahoma

More information

Localization and Correlation in Ensemble Kalman Filters

Localization and Correlation in Ensemble Kalman Filters Localization and Correlation in Ensemble Kalman Filters Jeff Anderson NCAR Data Assimilation Research Section The National Center for Atmospheric Research is sponsored by the National Science Foundation.

More information

Handling nonlinearity in Ensemble Kalman Filter: Experiments with the three-variable Lorenz model

Handling nonlinearity in Ensemble Kalman Filter: Experiments with the three-variable Lorenz model Handling nonlinearity in Ensemble Kalman Filter: Experiments with the three-variable Lorenz model Shu-Chih Yang 1*, Eugenia Kalnay, and Brian Hunt 1. Department of Atmospheric Sciences, National Central

More information

A new Hierarchical Bayes approach to ensemble-variational data assimilation

A new Hierarchical Bayes approach to ensemble-variational data assimilation A new Hierarchical Bayes approach to ensemble-variational data assimilation Michael Tsyrulnikov and Alexander Rakitko HydroMetCenter of Russia College Park, 20 Oct 2014 Michael Tsyrulnikov and Alexander

More information

A FEASIBILITY STUDY OF PARTICLE FILTERS FOR MOBILE STATION RECEIVERS. Michael Lunglmayr, Martin Krueger, Mario Huemer

A FEASIBILITY STUDY OF PARTICLE FILTERS FOR MOBILE STATION RECEIVERS. Michael Lunglmayr, Martin Krueger, Mario Huemer A FEASIBILITY STUDY OF PARTICLE FILTERS FOR MOBILE STATION RECEIVERS Michael Lunglmayr, Martin Krueger, Mario Huemer Michael Lunglmayr and Martin Krueger are with Infineon Technologies AG, Munich email:

More information

The Unscented Particle Filter

The Unscented Particle Filter The Unscented Particle Filter Rudolph van der Merwe (OGI) Nando de Freitas (UC Bereley) Arnaud Doucet (Cambridge University) Eric Wan (OGI) Outline Optimal Estimation & Filtering Optimal Recursive Bayesian

More information

Accepted in Tellus A 2 October, *Correspondence

Accepted in Tellus A 2 October, *Correspondence 1 An Adaptive Covariance Inflation Error Correction Algorithm for Ensemble Filters Jeffrey L. Anderson * NCAR Data Assimilation Research Section P.O. Box 3000 Boulder, CO 80307-3000 USA Accepted in Tellus

More information

Adaptive Data Assimilation and Multi-Model Fusion

Adaptive Data Assimilation and Multi-Model Fusion Adaptive Data Assimilation and Multi-Model Fusion Pierre F.J. Lermusiaux, Oleg G. Logoutov and Patrick J. Haley Jr. Mechanical Engineering and Ocean Science and Engineering, MIT We thank: Allan R. Robinson

More information

Data assimilation in high dimensions

Data assimilation in high dimensions Data assimilation in high dimensions David Kelly Kody Law Andy Majda Andrew Stuart Xin Tong Courant Institute New York University New York NY www.dtbkelly.com February 3, 2016 DPMMS, University of Cambridge

More information

Maximum Likelihood Ensemble Filter Applied to Multisensor Systems

Maximum Likelihood Ensemble Filter Applied to Multisensor Systems Maximum Likelihood Ensemble Filter Applied to Multisensor Systems Arif R. Albayrak a, Milija Zupanski b and Dusanka Zupanski c abc Colorado State University (CIRA), 137 Campus Delivery Fort Collins, CO

More information

Four-Dimensional Ensemble Kalman Filtering

Four-Dimensional Ensemble Kalman Filtering Four-Dimensional Ensemble Kalman Filtering B.R. Hunt, E. Kalnay, E.J. Kostelich, E. Ott, D.J. Patil, T. Sauer, I. Szunyogh, J.A. Yorke, A.V. Zimin University of Maryland, College Park, MD 20742, USA Ensemble

More information

Why do we care? Measurements. Handling uncertainty over time: predicting, estimating, recognizing, learning. Dealing with time

Why do we care? Measurements. Handling uncertainty over time: predicting, estimating, recognizing, learning. Dealing with time Handling uncertainty over time: predicting, estimating, recognizing, learning Chris Atkeson 2004 Why do we care? Speech recognition makes use of dependence of words and phonemes across time. Knowing where

More information

New Fast Kalman filter method

New Fast Kalman filter method New Fast Kalman filter method Hojat Ghorbanidehno, Hee Sun Lee 1. Introduction Data assimilation methods combine dynamical models of a system with typically noisy observations to obtain estimates of the

More information

Application of the Ensemble Kalman Filter to History Matching

Application of the Ensemble Kalman Filter to History Matching Application of the Ensemble Kalman Filter to History Matching Presented at Texas A&M, November 16,2010 Outline Philosophy EnKF for Data Assimilation Field History Match Using EnKF with Covariance Localization

More information

Model error and parameter estimation

Model error and parameter estimation Model error and parameter estimation Chiara Piccolo and Mike Cullen ECMWF Annual Seminar, 11 September 2018 Summary The application of interest is atmospheric data assimilation focus on EDA; A good ensemble

More information

Introduction to Particle Filters for Data Assimilation

Introduction to Particle Filters for Data Assimilation Introduction to Particle Filters for Data Assimilation Mike Dowd Dept of Mathematics & Statistics (and Dept of Oceanography Dalhousie University, Halifax, Canada STATMOS Summer School in Data Assimila5on,

More information

Local Ensemble Transform Kalman Filter

Local Ensemble Transform Kalman Filter Local Ensemble Transform Kalman Filter Brian Hunt 11 June 2013 Review of Notation Forecast model: a known function M on a vector space of model states. Truth: an unknown sequence {x n } of model states

More information

Sequential Monte Carlo Samplers for Applications in High Dimensions

Sequential Monte Carlo Samplers for Applications in High Dimensions Sequential Monte Carlo Samplers for Applications in High Dimensions Alexandros Beskos National University of Singapore KAUST, 26th February 2014 Joint work with: Dan Crisan, Ajay Jasra, Nik Kantas, Alex

More information

Ensemble prediction and strategies for initialization: Tangent Linear and Adjoint Models, Singular Vectors, Lyapunov vectors

Ensemble prediction and strategies for initialization: Tangent Linear and Adjoint Models, Singular Vectors, Lyapunov vectors Ensemble prediction and strategies for initialization: Tangent Linear and Adjoint Models, Singular Vectors, Lyapunov vectors Eugenia Kalnay Lecture 2 Alghero, May 2008 Elements of Ensemble Forecasting

More information

A State Space Model for Wind Forecast Correction

A State Space Model for Wind Forecast Correction A State Space Model for Wind Forecast Correction Valrie Monbe, Pierre Ailliot 2, and Anne Cuzol 1 1 Lab-STICC, Université Européenne de Bretagne, France (e-mail: valerie.monbet@univ-ubs.fr, anne.cuzol@univ-ubs.fr)

More information

Data Assimilation Research Testbed Tutorial

Data Assimilation Research Testbed Tutorial Data Assimilation Research Testbed Tutorial Section 2: How should observations of a state variable impact an unobserved state variable? Multivariate assimilation. Single observed variable, single unobserved

More information

FUNDAMENTAL FILTERING LIMITATIONS IN LINEAR NON-GAUSSIAN SYSTEMS

FUNDAMENTAL FILTERING LIMITATIONS IN LINEAR NON-GAUSSIAN SYSTEMS FUNDAMENTAL FILTERING LIMITATIONS IN LINEAR NON-GAUSSIAN SYSTEMS Gustaf Hendeby Fredrik Gustafsson Division of Automatic Control Department of Electrical Engineering, Linköpings universitet, SE-58 83 Linköping,

More information

Superparameterization and Dynamic Stochastic Superresolution (DSS) for Filtering Sparse Geophysical Flows

Superparameterization and Dynamic Stochastic Superresolution (DSS) for Filtering Sparse Geophysical Flows SP and DSS for Filtering Sparse Geophysical Flows Superparameterization and Dynamic Stochastic Superresolution (DSS) for Filtering Sparse Geophysical Flows Presented by Nan Chen, Michal Branicki and Chenyue

More information

Ensemble square-root filters

Ensemble square-root filters Ensemble square-root filters MICHAEL K. TIPPETT International Research Institute for climate prediction, Palisades, New Yor JEFFREY L. ANDERSON GFDL, Princeton, New Jersy CRAIG H. BISHOP Naval Research

More information

Data Assimilation Research Testbed Tutorial

Data Assimilation Research Testbed Tutorial Data Assimilation Research Testbed Tutorial Section 3: Hierarchical Group Filters and Localization Version 2.: September, 26 Anderson: Ensemble Tutorial 9//6 Ways to deal with regression sampling error:

More information

Numerical Weather prediction at the European Centre for Medium-Range Weather Forecasts (2)

Numerical Weather prediction at the European Centre for Medium-Range Weather Forecasts (2) Numerical Weather prediction at the European Centre for Medium-Range Weather Forecasts (2) Time series curves 500hPa geopotential Correlation coefficent of forecast anomaly N Hemisphere Lat 20.0 to 90.0

More information

Theory and Practice of Data Assimilation in Ocean Modeling

Theory and Practice of Data Assimilation in Ocean Modeling Theory and Practice of Data Assimilation in Ocean Modeling Robert N. Miller College of Oceanic and Atmospheric Sciences Oregon State University Oceanography Admin. Bldg. 104 Corvallis, OR 97331-5503 Phone:

More information

Assimilation des Observations et Traitement des Incertitudes en Météorologie

Assimilation des Observations et Traitement des Incertitudes en Météorologie Assimilation des Observations et Traitement des Incertitudes en Météorologie Olivier Talagrand Laboratoire de Météorologie Dynamique, Paris 4èmes Rencontres Météo/MathAppli Météo-France, Toulouse, 25 Mars

More information

Bayesian Inverse problem, Data assimilation and Localization

Bayesian Inverse problem, Data assimilation and Localization Bayesian Inverse problem, Data assimilation and Localization Xin T Tong National University of Singapore ICIP, Singapore 2018 X.Tong Localization 1 / 37 Content What is Bayesian inverse problem? What is

More information

Relationship between Singular Vectors, Bred Vectors, 4D-Var and EnKF

Relationship between Singular Vectors, Bred Vectors, 4D-Var and EnKF Relationship between Singular Vectors, Bred Vectors, 4D-Var and EnKF Eugenia Kalnay and Shu-Chih Yang with Alberto Carrasi, Matteo Corazza and Takemasa Miyoshi 4th EnKF Workshop, April 2010 Relationship

More information

Markov Chain Monte Carlo Methods for Stochastic Optimization

Markov Chain Monte Carlo Methods for Stochastic Optimization Markov Chain Monte Carlo Methods for Stochastic Optimization John R. Birge The University of Chicago Booth School of Business Joint work with Nicholas Polson, Chicago Booth. JRBirge U of Toronto, MIE,

More information

Stability of Ensemble Kalman Filters

Stability of Ensemble Kalman Filters Stability of Ensemble Kalman Filters Idrissa S. Amour, Zubeda Mussa, Alexander Bibov, Antti Solonen, John Bardsley, Heikki Haario and Tuomo Kauranne Lappeenranta University of Technology University of

More information

Particle Filtering Approaches for Dynamic Stochastic Optimization

Particle Filtering Approaches for Dynamic Stochastic Optimization Particle Filtering Approaches for Dynamic Stochastic Optimization John R. Birge The University of Chicago Booth School of Business Joint work with Nicholas Polson, Chicago Booth. JRBirge I-Sim Workshop,

More information

Ocean data assimilation for reanalysis

Ocean data assimilation for reanalysis Ocean data assimilation for reanalysis Matt Martin. ERA-CLIM2 Symposium, University of Bern, 14 th December 2017. Contents Introduction. On-going developments to improve ocean data assimilation for reanalysis.

More information

Particle Filtering for Data-Driven Simulation and Optimization

Particle Filtering for Data-Driven Simulation and Optimization Particle Filtering for Data-Driven Simulation and Optimization John R. Birge The University of Chicago Booth School of Business Includes joint work with Nicholas Polson. JRBirge INFORMS Phoenix, October

More information

Data Assimilation: Finding the Initial Conditions in Large Dynamical Systems. Eric Kostelich Data Mining Seminar, Feb. 6, 2006

Data Assimilation: Finding the Initial Conditions in Large Dynamical Systems. Eric Kostelich Data Mining Seminar, Feb. 6, 2006 Data Assimilation: Finding the Initial Conditions in Large Dynamical Systems Eric Kostelich Data Mining Seminar, Feb. 6, 2006 kostelich@asu.edu Co-Workers Istvan Szunyogh, Gyorgyi Gyarmati, Ed Ott, Brian

More information

Relationship between Singular Vectors, Bred Vectors, 4D-Var and EnKF
