Applications of information theory in ensemble data assimilation


QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY Q. J. R. Meteorol. Soc. 133 (2007). Published online in Wiley InterScience.

Applications of information theory in ensemble data assimilation

Dusanka Zupanski,a* Arthur Y. Hou,b Sara Q. Zhang,b Milija Zupanski,a Christian D. Kummerowa and Samson H. Cheungc
a Colorado State University, Fort Collins, Colorado, USA; b NASA Goddard Space Flight Center, Greenbelt, Maryland, USA; c University of California, Davis, California, USA

ABSTRACT: We apply information theory within an ensemble-based data assimilation approach and define an information matrix in ensemble subspace. The information matrix in ensemble subspace employs a flow-dependent forecast error covariance and is of relatively small dimension (equal to the ensemble size). The information matrix in ensemble subspace can be directly linked to the information matrix typically used in non-ensemble-based data assimilation methods, such as the Kalman Filter (KF) and three-dimensional variational (3D-Var) methods, which provides a framework for consistent comparisons of information measures between different data assimilation methods. We evaluate information measures, such as degrees of freedom for signal, within the Maximum Likelihood Ensemble Filter (MLEF) data assimilation approach and compare them with those obtained using the KF approach and the 3D-Var approach. We assimilate model-simulated observations and use the Goddard Earth Observing System Single Column Model (GEOS-5 SCM) as a dynamical forecast model. The experimental results demonstrate that the proposed framework is useful for comparing information measures obtained in different data assimilation approaches. These comparisons indicate that using a flow-dependent forecast error covariance matrix (e.g.
as in the KF and the MLEF experiments) is fundamentally important for adequately describing prior knowledge about the true model state when calculating information measures of assimilated observations. We also demonstrate that data assimilation results obtained using the KF and the MLEF approach (when the ensemble size is larger than 10 ensemble members) are superior to the results of the 3D-Var approach. Copyright 2007 Royal Meteorological Society

KEY WORDS ensemble data assimilation; information theory; Maximum Likelihood Ensemble Filter; Kalman filter; 3D-Var

Received 30 December 2006; Revised 7 May 2007; Accepted June 2007.

1. Introduction

It has been recognized that information theory (e.g. Shannon and Weaver, 1949; Rodgers, 2000) and predictability are inherently related (e.g. Schneider and Griffies, 1999; Kleeman, 2002; Roulston and Smith, 2002; DelSole, 2004; Abramov et al., 2005). Information theory has also come to the attention of data assimilation, where it has been used to calculate the information content of various observations (e.g. Wahba, 1985; Purser and Huang, 1993; Wahba et al., 1995; Rodgers, 2000; Rabier et al., 2002; Fisher, 2003; Johnson, 2003; Engelen and Stephens, 2004; L'Ecuyer et al., 2006). The information content of observations can potentially have many applications, including planning measurement missions, designing observational systems, and defining targeted observations and data selection strategies. These applications have been underutilized so far, and were mainly oriented towards defining data selection strategies (e.g. Rabier et al., 2002 and references therein). Nevertheless, progress in data assimilation methods should foster applications of information theory in many different areas.

*Correspondence to: Dusanka Zupanski, Cooperative Institute for Research in the Atmosphere/Colorado State University, Fort Collins, Colorado, USA. zupanski@cira.colostate.edu
Ensemble-based data assimilation methods, often referred to as Ensemble Kalman Filter (EnKF) methods, are novel data assimilation techniques that have progressed rapidly since the pioneering work of Evensen (1994) appeared. As a result of this progress, many different variants of the EnKF have evolved (e.g. Houtekamer and Mitchell, 1998; Pham et al., 1998; Lermusiaux and Robinson, 1999; Hamill and Snyder, 2000; Keppenne, 2000; Mitchell and Houtekamer, 2000; Anderson, 2001; Bishop et al., 2001; Pham, 2001; van Leeuwen, 2001; Reichle et al., 2002a,b; Whitaker and Hamill, 2002; Hoteit et al., 2002, 2003; Tippett et al., 2003; Zhang et al., 2004; Ott et al., 2005; Peters et al., 2005; Szunyogh et al., 2005; Zupanski, 2005; Zupanski and Zupanski, 2006, just to mention some). While there are respectable differences between the variants of the EnKF, they are all closely related in data assimilation problems involving Gaussian Probability Density Functions (PDFs) and linear dynamical forecast models. In such cases, the EnKFs share the common property of being rank-reduced approximations to the

theoretically optimal, full-rank KF solution. Under more general conditions, involving highly nonlinear dynamical models and non-Gaussian PDFs, the differences between EnKF variants could be more significant (e.g. Fletcher and Zupanski, 2006). Even though there has been much advancement in EnKF methods, this has not been matched by applications of information theory within these methods. In fact, information theory has primarily been applied within other data assimilation methods (e.g. variational, KF), while its application to ensemble data assimilation has been rather limited so far. Some of the pioneering studies in this area are as follows. Wang and Bishop (2003) examined the eigenvalues and eigenvectors of the Ensemble Transform Kalman Filter (ETKF: Bishop et al., 2001; Wang and Bishop, 2003) transformation matrix and demonstrated that these eigenvalues and eigenvectors define the amount and the direction of the maximum forecast error reduction due to information from the observations. Patil et al. (2001), Oczkowski et al. (2005), and Wei et al. (2006) used the eigenvalues of the ETKF transformation matrix to define measures of information, referred to as bred dimension, effective degrees of freedom, and E dimension, respectively. These studies recognized that ensemble-based methods have the potential to improve measures of information through the use of the flow-dependent forecast error covariance matrix, especially in applications to adaptive observations. A recent study by Uzunoglu et al. (2007) described a novel application of information measures in ensemble data assimilation: for ensemble size reduction or inflation. Building upon these previous studies, and recognizing the similarity between the ETKF and the MLEF approach, we link the MLEF transformation matrix with the so-called information or observability matrix, defined in ensemble subspace.
We also demonstrate how the information matrix can be used to define standard measures of information theory, such as Degrees of Freedom (DOF) for signal and Shannon entropy reduction (e.g. Rodgers, 2000). Thus, we propose a general framework to link together ensemble data assimilation and information theory in a similar manner as in variational and KF methods. This framework can be used for comparing information measures of different data assimilation approaches. Additionally, as demonstrated in Zupanski et al. (2007), the information measures in ensemble subspace can be employed to define a flow-dependent distance function for covariance localization. We evaluate this framework within an ensemble-based data assimilation method, using a single-column precipitation model and simulated observations. We also evaluate the results of the KF and the three-dimensional variational (3D-Var) approaches, defined as special applications of the proposed framework. The paper is organized as follows. In Section 2 the general framework is described. The experimental design is explained in Section 3, and experimental results are presented in Section 4. Finally, in Section 5, the conclusions are summarized and their relevance for future research is discussed.

2. General framework

In this study we employ an ensemble data assimilation approach referred to as the Maximum Likelihood Ensemble Filter (MLEF: Zupanski, 2005; Zupanski and Zupanski, 2006; Zupanski et al., 2006). Here we briefly describe the MLEF. The MLEF seeks a maximum likelihood state solution employing an iterative minimization of a cost function. The solution for a state vector x (also referred to as the control variable), of dimension N_state, is obtained by minimizing the cost function J defined as

J(x) = (1/2) [x − x_b]^T P_f^{−1} [x − x_b] + (1/2) [y − H(x)]^T R^{−1} [y − H(x)],  (1)

where y is an observation vector of dimension equal to the number of observations (N_obs), and H is, in general, a nonlinear observation operator.
Subscript b denotes a background (i.e. prior) estimate of x, and superscript T denotes a transpose. The N_obs × N_obs matrix R is a prescribed observation error covariance, and it includes instrumental and representativeness errors (e.g. Cohn, 1997). The matrix P_f of dimension N_state × N_state is the forecast error covariance. As in many other ensemble-based methods, we do not use the full matrix P_f explicitly, but we employ the rank-reduced square-root formulation P_f = P_f^{1/2} (P_f^{1/2})^T, where P_f^{1/2} is an N_state × N_ens square-root matrix (N_ens being the ensemble size). Uncertainties of the optimal estimate of the state x are defined as square roots of the analysis error covariance (P_a^{1/2}) and the forecast error covariance (P_f^{1/2}), both defined in ensemble subspace. The square root of the analysis error covariance is obtained as (e.g. Zupanski, 2005)

P_a^{1/2} = [p_a^1 p_a^2 ... p_a^{N_ens}] = P_f^{1/2} (I_ens + C)^{−1/2},  (2)

where I_ens is an identity matrix of dimension N_ens × N_ens, and p_a^i are column vectors representing analysis perturbations in ensemble subspace. The square root in (2) is calculated via eigenvalue decomposition of C. It is defined as a symmetric positive semi-definite square root, and therefore it is unique (e.g. Horn and Johnson, 1985, Theorem 7.2.6). The matrix C has dimensions N_ens × N_ens and is defined by

C = Z^T Z;  z^i = R^{−1/2} H(x + p_f^i) − R^{−1/2} H(x),  (3)

where the vectors z^i are the columns of the matrix Z of dimension N_obs × N_ens. Note that, when calculating z^i,

a nonlinear operator H is applied to perturbed and unperturbed states x. The vectors p_f^i are the columns of the square root of the background error covariance matrix and are obtained via ensemble forecasting employing a nonlinear forecast model M:

P_f^{1/2} = [p_f^1 p_f^2 ... p_f^{N_ens}];  p_f^i = M(x + p_a^i) − M(x).  (4)

Equations (1)–(3) are solved iteratively in each data assimilation cycle, while Equation (4) is used to propagate in time the columns of the forecast error covariance matrix P_f^{1/2}. An information measure referred to as the DOF for signal is often used in information theory (e.g. Rodgers, 2000). In data assimilation applications, the DOF for signal (here denoted d_s) is commonly defined in terms of the analysis and forecast error covariances, P_a and P_f (e.g. Wahba, 1985; Purser and Huang, 1993; Wahba et al., 1995; Rodgers, 2000; Rabier et al., 2002; Fisher, 2003; Johnson, 2003; Engelen and Stephens, 2004) as

d_s = tr[I_state − P_a P_f^{−1}],  (5)

where tr denotes the trace, and I_state is an identity matrix of dimension N_state × N_state. The quantity d_s counts the number of new pieces of information brought to the analysis by the observations, with respect to what was already known, as expressed by P_f. Being dependent on the ratio between the analysis and forecast error covariance (P_a P_f^{−1}), d_s measures the forecast error reduction due to new information from the observations. Wahba et al. (1995) define d_s in terms of the so-called influence matrix A as

d_s = tr[R^{−1/2} H P_a H^T (R^{−1/2})^T] = tr[A],  (6)

which is equivalent to (5), as pointed out by Fisher (2003). Employing the definition of P_a in ensemble subspace (2) and using tr[X X^T] = tr[X^T X], we can write (6) in ensemble subspace as

d_s = tr[(I_ens + C)^{−1} (P_f^{1/2})^T H^T (R^{−1/2})^T R^{−1/2} H P_f^{1/2}].  (7)

Assuming that the linear operator H is the first derivative of a weakly nonlinear operator H at the point x, we can write the following approximate equation for the columns r^i of the matrix R^{−1/2} H P_f^{1/2}:

r^i ≈ R^{−1/2} H(x + p_f^i) − R^{−1/2} H(x).  (8)

Finally, by combining (3), (7), and (8) we have

d_s = tr[(I_ens + C)^{−1} Z^T Z] = tr[(I_ens + C)^{−1} C].  (9)

Definition (9) is essentially the same as the corresponding definition of Rodgers (2000). The only difference is that the trace is obtained employing the matrix C of dimension N_ens × N_ens, while in the formulation of Rodgers (2000) the trace is obtained employing an information matrix of dimensions N_state × N_state (the full-rank information matrix). We will denote the matrix C as the information matrix in ensemble subspace. By introducing the information matrix C, we have defined a link between information theory and ensemble data assimilation. Having this link is of special importance for the following reasons. When calculating information content measures such as d_s, a flow-dependent P_f obtained directly from ensemble data assimilation is used. In addition, the eigen-decomposition of C is easily accomplished due to the relatively small size of this matrix (N_ens × N_ens) compared to the typical number of observations (N_obs) used in applications to complex forecast models with large state vectors (of dimension N_state). A possible disadvantage of this ensemble-based approach, as of any ensemble-based approach, is that a small ensemble size might not be sufficient to adequately describe the variability of the full-rank forecast error covariance matrix. In such cases, the information measures would still measure the amount of information brought by the observations with respect to what was already known; however, the quality of the analysis could be poor. One of the main focuses of this study is to evaluate the impact of ensemble size on the information measures. Once the information matrix C is available, various information measures can be calculated.
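For illustration, Equations (2) and (3) can be sketched numerically as follows. This is a minimal sketch, not the authors' implementation: the observation operator, dimensions, and error values are illustrative assumptions, and a linear H is used so that the perturbation differences are exact.

```python
import numpy as np

rng = np.random.default_rng(0)
n_state, n_obs, n_ens = 8, 5, 4

Hmat = rng.standard_normal((n_obs, n_state))   # illustrative linear observation operator
obs_op = lambda x: Hmat @ x                    # H(x); could in general be nonlinear
R_inv_sqrt = np.eye(n_obs) / 0.2               # R^{-1/2}, assumed obs error std of 0.2
x = rng.standard_normal(n_state)               # current state estimate
Pf_sqrt = 0.1 * rng.standard_normal((n_state, n_ens))   # columns p_f^i of P_f^{1/2}

# Eq. (3): z^i = R^{-1/2} H(x + p_f^i) - R^{-1/2} H(x)
Z = np.stack([R_inv_sqrt @ (obs_op(x + Pf_sqrt[:, i]) - obs_op(x))
              for i in range(n_ens)], axis=1)
C = Z.T @ Z                                    # information matrix in ensemble subspace

# Eq. (2): P_a^{1/2} = P_f^{1/2} (I_ens + C)^{-1/2}, via symmetric eigendecomposition
w, V = np.linalg.eigh(np.eye(n_ens) + C)
Pa_sqrt = Pf_sqrt @ (V @ np.diag(w ** -0.5) @ V.T)
```

Note that only N_ens × N_ens quantities are decomposed, which is the practical advantage discussed above.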
It is especially useful to define these measures in terms of the eigenvalues λ_i² of C. Thus, as in Rodgers (2000), we can express (9) in terms of λ_i and calculate d_s as

d_s = Σ_i λ_i² / (1 + λ_i²).  (10)

Equations (3) and (6) indicate that the eigenvalues λ_i depend on the ratio between the forecast error covariance and the observation error covariance, both defined at the observation locations. Thus, for forecast errors larger than the observation errors we have λ_i ≥ 1 (signal), and for forecast errors smaller than the observation errors we have λ_i < 1 (noise). Using the eigenvalues λ_i one can also calculate other information measures, such as the Shannon information content, defined as the reduction of entropy due to added information from the observations (Shannon and Weaver, 1949; Rodgers, 2000). Since this measure is quite similar to the DOF for signal, it will not be examined in this study. An important characteristic of the MLEF approach is that it can be made identical to KF or variational methods, under special conditions that are explained below. This provides an opportunity to compare information measures obtained using different data assimilation approaches.
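The equivalence of the trace form (9) and the eigenvalue form (10) is easy to verify numerically. In this sketch, Z is a random stand-in for the matrix of Equation (3); all names and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_ens = 20, 6
Z = rng.standard_normal((n_obs, n_ens))        # stand-in for Z of Eq. (3)
C = Z.T @ Z                                    # information matrix in ensemble subspace

# Eq. (9): d_s = tr[(I_ens + C)^{-1} C]
ds_trace = np.trace(np.linalg.solve(np.eye(n_ens) + C, C))

# Eq. (10): d_s = sum_i lam_i^2 / (1 + lam_i^2), with lam_i^2 the eigenvalues of C
lam_sq = np.linalg.eigvalsh(C)
ds_eig = float(np.sum(lam_sq / (1.0 + lam_sq)))
```

Because each term λ_i²/(1 + λ_i²) lies between 0 and 1, d_s is bounded by the ensemble size, which is the point made below about localization and the maximum number of DOF.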

2.1. Connection to KF

A linear version of the full-rank MLEF is identical to the classical linear KF when using Gaussian PDFs, linear models M, and linear observation operators H. The full-rank MLEF solution is obtained by setting N_ens = N_state. Under these conditions, the solution that minimizes (1) can be explicitly calculated using (e.g. Zupanski, 2005, Appendix A, Equation A7)

x = x_b + α P_f H^T (H P_f H^T + R)^{−1} [y − H(x_b)].  (11)

If both the KF and the MLEF are initialized using the same forecast error covariance, the MLEF solution after the first data assimilation cycle (Equation (11)) will be identical to the KF solution, because the minimization step size α is equal to 1 for quadratic cost functions (Gill et al., 1981). The MLEF solution will remain identical to the KF solution throughout all data assimilation cycles, since the linear version of the forecast error covariance update (Equation (4)) is the same as the KF update equation. Thus, we can conclude that the full-rank MLEF (N_ens = N_state) is identical to the full-rank KF under the above assumptions. Under the same assumptions, the reduced-rank MLEF (N_ens < N_state) could be interpreted as a variant of a reduced-rank KF, since the same equations are being solved in both approaches; however, a reduced-rank P_f is used. Since different variants of the reduced-rank KF would produce different solutions due to different ways of defining a reduced-rank P_f, the link between the reduced-rank MLEF and reduced-rank KFs is not uniquely defined.

2.2. Connection to 3D-Var

As explained before, the solution obtained by the MLEF is a maximum likelihood one and, in general, a nonlinear one. These characteristics are shared with variational methods, thus there is a connection to these methods as well. The full-rank nonlinear MLEF solution without the update of the forecast error covariance (i.e.
using a prescribed covariance instead of Equation (4)) is identical to the 3D-Var solution, since the same cost function (1) is minimized. To obtain identical results, one can employ the same minimization method with the same preconditioning in both the MLEF and the 3D-Var (e.g. Zupanski, 2005). In this study we employ Hessian preconditioning, which may not always be feasible in variational methods due to the large dimensions of the full-rank covariance matrices. In summary, the general framework proposed here should be directly applicable not only to EnKF methods, but also to KF and 3D-Var methods, as long as it remains practical to evaluate full-rank covariance matrices. There are, however, some restrictions to the proposed general framework. For example, when deriving information measures (e.g. DOF for signal and entropy reduction) we have assumed, as in Rodgers (2000), that all errors are Gaussian. Therefore, we have implicitly assumed weak nonlinearity in M and H, even though ensemble-based and variational methods do not necessarily require this assumption. Consequently, the information measures obtained in highly nonlinear data assimilation problems, and also for variables that are typically non-Gaussian (e.g. humidity and cloud microphysical variables), could be incorrect or only approximately correct. A theoretical framework for information measures employing non-Gaussian ensembles is proposed in Majda et al. (2002) and Abramov and Majda (2004). They employed a different approach, based on moment-constrained optimization, to estimate the so-called predictive utility, which is an information measure derived from the Shannon entropy. As shown in Abramov and Majda (2004), higher-order moments, up to the first four moments, would be required for non-Gaussian information measures in typical atmospheric applications. The framework proposed here could be further generalized following Majda et al. (2002) and Abramov and Majda (2004).
An extension of the MLEF to account for log-normally distributed observations has already been developed by Fletcher and Zupanski (2006) and could be used as a starting point for defining non-Gaussian information measures within the MLEF. As indicated in Fletcher and Zupanski (2006), the cost function should include an additional term in order to account for log-normally distributed observations.

3. Experimental design

3.1. Forecast model

A single-column version of the Goddard Earth Observing System (GEOS-5) Atmospheric General Circulation Model (AGCM) is used in this study. We refer to this model as the GEOS-5 SCM (Single Column Model). Previous experience employing column versions of the GEOS series within a 1-dimensional variational data assimilation technique indicated that the 1-dimensional framework could produce useful data assimilation results, especially in applications to rainfall assimilation (Hou et al., 2000, 2001, 2004). The GEOS-5 SCM consists of the model physics components of the GEOS-5 AGCM: moist processes (Relaxed Arakawa–Schubert convection and prognostic large-scale cloud condensation), turbulence, radiation, land surface, and chemistry. The dynamic advection is driven by a prescribed forcing time series. The column model is capable of updating all the prognostic state variables and evaluating a suite of additional observable quantities such as precipitation and cloud properties. The GEOS-5 SCM retains most of the nonlinear complexities and interactions between physical processes of the full AGCM. Meanwhile, it has the advantage of reduced dimensions when used in research experiments of ensemble data assimilation.

3.2. Control variables and observations

In the applications of this paper, we focus on using simulated observations directly of two state variables: vertical profiles of temperature (T) and specific humidity (q). These are also the control variables for data assimilation. In the experiments presented, 40 model levels are used; thus, the dimension of the control vector is 80. The column model only updates temperature and specific humidity during a data assimilation interval. The remaining state variables, along with the advection forcing, are prescribed by the Atmospheric Radiation Measurement (ARM) data time series. The tropical western Pacific site (30 E, 5 N) in the ARM observation programme is chosen for the application discussed in this paper. The assimilation experiments cover the period from 7 May 1998 to 24 May 1998 (17 days). A data assimilation interval of 6 hours is used in the experiments, and simulated observations of temperature and specific humidity are assimilated at the end of each data assimilation interval. Simulated observations are defined using the true state, defined by the GEOS-5 SCM, and by adding Gaussian white noise to the true state. Thus, the observation error covariance matrix R is assumed diagonal and constant in time. We use the same version of the model to perform data assimilation and to create observations; thus we assume that the model is perfect. In experiments with real observations the perfect-model assumption might not be justified. In order to relax this assumption one can use some of the recently proposed model error estimation approaches (e.g. Heemink et al., 2001; Mitchell et al., 2002; Reichle et al., 2002a; Zupanski and Zupanski, 2006). Observations are created assuming a constant instrument error for T at all model levels. Instrument errors for q vary between the minimum and maximum values listed in Table I; the errors are defined to decrease from the lowest to the highest model level.
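The construction of the simulated observations described above can be sketched as follows. This is an illustrative sketch only: the profile values and the 0.2 K error standard deviation are assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(6)
n_lev = 40
truth_T = 280.0 + rng.standard_normal(n_lev)           # stand-in "true" T profile (K)
sigma_T = 0.2 * np.ones(n_lev)                         # assumed per-level obs error std (K)
obs_T = truth_T + sigma_T * rng.standard_normal(n_lev) # y = truth + Gaussian white noise
R = np.diag(sigma_T ** 2)                              # diagonal, time-constant R
```

Because the noise is independent between levels, R is diagonal, matching the assumption stated above.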
The total observation errors are defined as R^{1/2} = α R_inst^{1/2}, where an empirical parameter α > 1 is employed to approximately account for representativeness errors. Here, the representativeness error approximately accounts for the mismatch between the observed and modelled scales (which is a common definition of representativeness error), and also for inadequate scales in the forecast error covariance due to the small ensemble size. To approximately account for both parts of the representativeness error we require that for the largest ensemble size the parameter α is greater than 1, and we let it increase with decreasing ensemble size. The values of the parameter α are tuned to the ensemble size to approximately satisfy the expected chi-square innovation statistic, calculated for optimized innovations and normalized by the analysis error (e.g. Dee, 1995; Ménard et al., 2000; Zupanski, 2005). Instrument errors and the values of the parameter α used in the data assimilation experiments of this study are listed in Table I. Initial conditions for T and q at the beginning of the first data assimilation cycle are taken from ARM observations of T and q at 0000 UTC 07 May 1998, interpolated from observation levels to the model levels. With this configuration, errors in the initial conditions are simulated by the difference between the ARM observations and the true states defined by the model simulation (started from 1800 UTC 06 May 1998 and integrated for 6 hours to 0000 UTC 07 May 1998). This resulted in Root Mean Square (RMS) errors of 0.46 K for T_b and a corresponding RMS error for q_b in the first data assimilation cycle (recall that subscript b denotes background values). In all subsequent cycles, the 6-hour forecast of T and q from the previous cycle is used to define the background for the current cycle.

Table I. List of data assimilation experiments discussed in this paper.
Experiment | N_ens (T and q estimated) | N_obs (T and q observed) | R_inst^{1/2} for T (K) | R_inst^{1/2} for q (kg kg^-1) (Min; Max) | Parameter α | Localization
10ens 80obs | 10 | 80 | … | … | … | NO
20ens 80obs | 20 | 80 | … | … | … | NO
40ens 80obs | 40 | 80 | … | … | … | NO
KF 80obs | 80 | 80 | … | … | … | NO
10ens 40obs | 10 | 40 | … | … | … | NO
20ens 40obs | 20 | 40 | … | … | … | NO
40ens 40obs | 40 | 40 | … | … | … | NO
KF 40obs | 80 | 40 | … | … | … | NO
10ens 40obs loc | 10 | 40 | … | … | … | YES
3dv 40obs | … | 40 | … | … | … | NO
no obs | … | 0 | … | … | … | …

Covariance localization was not applied in the 3D-Var experiments; however, the 3D-Var covariance is localized by definition (defined using the Gaspari and Cohn (1999) correlation function). N_obs indicates the number of observations per data assimilation cycle. The empirical parameter α, varying with ensemble size, is employed to approximately account for an unknown representativeness error. Prefixes KF and 3dv indicate Kalman Filter and 3D-Var experiments, respectively. Suffix loc indicates that localization is applied to the forecast error covariance. The experiment denoted no obs is an experiment without data assimilation.
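The chi-square innovation diagnostic used above to tune α can be sketched as follows. This sketch uses the standard innovation form of the statistic (mean of d^T S^{-1} d / N_obs, expected value 1 when the prescribed covariances match the true innovation statistics); the paper's exact normalization (optimized innovations normalized by the analysis error) differs in detail, and all dimensions and the covariance S below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n_obs, n_cycles = 40, 500
S = 1.5 * np.eye(n_obs)              # assumed innovation covariance H P_f H^T + R
chi2 = []
for _ in range(n_cycles):
    d = rng.multivariate_normal(np.zeros(n_obs), S)   # innovation y - H(x_b)
    chi2.append(d @ np.linalg.solve(S, d) / n_obs)    # ~1 per cycle if S is correct
chi2_mean = float(np.mean(chi2))
```

Values systematically above (below) 1 would indicate that the prescribed error variances are too small (too large), which is the basis for increasing α when the ensemble size is reduced.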

3.3. Ensemble perturbations

The square-root forecast error covariance P_f^{1/2} is initialized in the first data assimilation cycle using prescribed perturbations p_f^i (cold start); in the subsequent cycles the data assimilation scheme updates p_f^i according to Equations (2)–(4). The cold-start ensemble perturbations are defined using Gaussian white noise with a prescribed standard deviation of comparable magnitude to the observation errors. A compactly supported second-order correlation function of Gaspari and Cohn (1999), with a decorrelation length of 3 vertical layers, is applied to the random perturbations to define a correlated random noise (e.g. Zupanski et al., 2006). The decorrelation length of 3 vertical layers was determined empirically, based on the overall best data assimilation performance.

3.4. Minimization

A conjugate-gradient minimization algorithm (e.g. Luenberger, 1984), with the line-search technique as in Navon et al. (1992) and with Hessian preconditioning as in Zupanski (2005), is used in the experiments of this paper. In all data assimilation experiments, only a single iteration of the minimization is performed, which is sufficient for linear observation operators (Zupanski, 2005). Note that nonlinearity of the forecast model M, even though it influences the final data assimilation results, does not influence the minimization results within a filter formulation. This would, however, be different for a smoother application, since the nonlinear model would be included in the cost function.

3.5. Covariance localization

Covariance localization is often used in ensemble data assimilation applications to better constrain data assimilation problems with either insufficient observations or insufficient ensemble size (e.g. Houtekamer and Mitchell, 1998; Hamill et al., 2001; Whitaker and Hamill, 2002).
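The Schur-product localization with eigen-truncation described in this subsection can be sketched as follows. The Gaspari–Cohn function below is the standard fifth-order piecewise-rational compactly supported correlation; the dimensions, the localization scale, and the treatment of the 80-element state as a single column of grid points are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

def gaspari_cohn(r):
    # Gaspari & Cohn (1999) 5th-order compactly supported correlation;
    # r is scaled separation, and the function vanishes for r >= 2
    r = np.abs(np.asarray(r, dtype=float))
    f = np.zeros_like(r)
    m1 = r <= 1.0
    m2 = (r > 1.0) & (r < 2.0)
    x = r[m1]
    f[m1] = -0.25*x**5 + 0.5*x**4 + 0.625*x**3 - (5.0/3.0)*x**2 + 1.0
    x = r[m2]
    f[m2] = (x**5)/12.0 - 0.5*x**4 + 0.625*x**3 + (5.0/3.0)*x**2 - 5.0*x + 4.0 - 2.0/(3.0*x)
    return f

rng = np.random.default_rng(5)
n_state, n_ens, L = 80, 10, 3.0
Pf_sqrt = rng.standard_normal((n_state, n_ens))
Pf = Pf_sqrt @ Pf_sqrt.T                     # ensemble covariance, rank <= n_ens

idx = np.arange(n_state)
rho = gaspari_cohn((idx[:, None] - idx[None, :]) / L)   # localization function
Pf_loc = rho * Pf                            # Schur (element-wise) product raises the rank

w, V = np.linalg.eigh(Pf_loc)
order = np.argsort(w)[::-1][:n_ens]          # keep the N_ens leading eigenpairs
Pf_loc_sqrt = V[:, order] * np.sqrt(np.maximum(w[order], 0.0))
```

The Schur product of two positive semi-definite matrices is positive semi-definite, so the localized covariance remains a valid covariance while gaining rank beyond the ensemble size.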
Localization was also found beneficial in full-rank KF applications, due to a spurious loss of variance in the discrete KF covariance evolution equation (e.g. Ménard et al., 2000). Since covariance localizations are typically achieved by employing arbitrary covariance functions (e.g. Gaspari and Cohn, 1999), it is important to evaluate how such localizations impact the information measures. We use a localization technique based on the Schur (element-wise) product between the forecast error covariance matrix and a compactly supported covariance function (e.g. Houtekamer and Mitchell, 1998; Hamill et al., 2001; Whitaker and Hamill, 2002). Since the dimensions of the full forecast error covariance are small (80 × 80), we evaluate the full covariance P_f and multiply it, element-wise, with the localization function (the second-order correlation function of Gaspari and Cohn (1999) with a decorrelation length of 3 vertical layers). As a result, we obtain a localized P_f. We then perform the eigenvalue decomposition of the localized covariance and keep only the N_ens leading eigenvalues and eigenvectors in data assimilation. Note, however, that covariance localization could be achieved in a different (i.e. approximate) way in applications to large-size problems (e.g. Whitaker and Hamill, 2002; Ott et al., 2005).

4. Results

4.1. Verification summary

Verifications of the data assimilation experiments listed in Table I are performed in terms of analysis and background errors and the chi-square innovation statistic tests (e.g. Dee, 1995; Ménard et al., 2000; Zupanski, 2005). The verification summary is given in Table II. The RMS

Table II. Total RMS errors of the analysis and the background solutions.
Experiment | RMS T_a (K) | RMS T_b (K) | RMS q_a (kg kg^-1) | RMS q_b (kg kg^-1) | Chi-square (mean) | Chi-square (stddev)
10ens 80obs | … | … | … | … | … | …
20ens 80obs | … | … | … | … | … | …
40ens 80obs | … | … | … | … | … | …
KF 80obs | … | … | … | … | … | …
10ens 40obs | … | … | … | … | … | …
20ens 40obs | … | … | … | … | … | …
40ens 40obs | … | … | … | … | … | …
KF 40obs | … | … | … | … | … | …
10ens 40obs loc | … | … | … | … | … | …
3dv 40obs | … | … | … | … | … | …
no obs | … | … | … | … | … | …

Calculated with respect to the truth over 70 data assimilation cycles, for the experiments listed in Table I. The RMS analysis and background errors are shown for temperature (denoted RMS T_a and RMS T_b) and for specific humidity (denoted RMS q_a and RMS q_b). The RMS errors are smallest for the KF experiment with 80 observations and are largest for the experiment without data assimilation (no obs). The smallest RMS errors are highlighted in bold, and the largest RMS errors are highlighted in bold italic. Also shown are the mean values and standard deviations of the chi-square statistic, calculated over 70 data assimilation cycles.

errors of the analysis and the 6-hour forecast (background) are calculated with respect to the truth as mean values over 70 consecutive data assimilation cycles. The mean values and the standard deviations of the chi-square statistic are calculated over 70 data assimilation cycles from the chi-square values obtained in the individual cycles. Note that an ergodic hypothesis was made when calculating the mean chi-square values: the sample mean was replaced by the time mean, calculated over 70 data assimilation cycles. The results in Table II indicate superior performance of the KF approach, and also good performance of the MLEF approach, with the RMS errors decreasing as the ensemble size increases, and also as the number of observations increases, which is the expected behaviour. In comparison to the 3D-Var experiment, the MLEF errors are generally smaller for the larger ensemble sizes (20 and 40 members) and larger for the smallest ensemble size (10 members). The analysis errors of the MLEF experiments with 80 observations are within the estimated total observation errors (note that the total observation errors also include empirical representativeness errors). The analysis and background errors of all experiments are smaller than the errors of the experiment without data assimilation (no obs), thus indicating a positive impact of data assimilation. Table II also indicates that covariance localization, which is applied in the experiment with 10 ensemble members and 40 observations, reduces the analysis and background errors (compare experiments 10ens 40obs and 10ens 40obs loc). Mean values of the chi-square statistic indicate that the experiments are generally close to the expected value of 1, with standard deviations of up to 34%, with the exception of the 3D-Var experiment.
In the 3D-Var experiment there are larger fluctuations of the chi-square statistic from one data assimilation cycle to another (the standard deviation is 78%), which is a consequence of using a constant forecast error covariance in all data assimilation cycles. Note that chi-square values larger (smaller) than 1 indicate an underestimation (overestimation) of the forecast error variance. One should, however, expect departures from the expected chi-square statistic, since the Gaussian assumption is not strictly valid due to nonlinearity of the forecast model. The chi-square values calculated in individual data assimilation cycles indicated no increasing or decreasing trends in time, meaning that all data assimilation experiments had stable filter performance.

4.2. Impact of ensemble size

Let us now examine the impact of ensemble size on the DOF for signal. We calculate the DOF for signal d_s in the data assimilation experiments with 80 and 40 observations and plot it as a function of data assimilation cycle in Figure 1(a) and (b), respectively. Results from the reduced-rank MLEF experiments (with 10, 20, and 40 ensemble members) are shown along with the full-rank KF and 3D-Var experiments. Recall that, since the observation number and the observation errors did not change from one data assimilation cycle to another, the time variability of d_s reflects the time variability of the forecast error covariance matrix. Comparing the results obtained using the same ensemble size in Figure 1(a) and (b), we can notice generally higher values of d_s in the experiments with 80 observations in the first few data assimilation cycles; the differences between 80 and 40 observations diminish in the later cycles. This is an indication that the KF, and also the MLEF, have learning capabilities, since they recognize that previously assimilated observations had an impact on reducing the initially prescribed forecast errors; thus, having more observations in the later cycles is less beneficial than in the earlier cycles. This learning capability is not present in the 3D-Var approach, since the forecast error covariance is kept constant at all times. Consequently, the 3D-Var approach cannot recognize that the previously assimilated observations had an impact on reducing the forecast uncertainty. As seen in Figure 1(a) and (b), the experiments with larger ensemble size typically have larger values of d_s, and vice versa. The smaller (larger) value of d_s is a consequence of using a forecast error covariance matrix of smaller (larger) rank. The important observation is that the KF experiment and all reduced-rank experiments show similar time variability of the information measures. Assuming that the full-rank KF experiment produces the best analysis solution and the best estimate of the flow-dependent forecast error covariance, these results indicate that the forecast error covariance is also realistically described in the reduced-rank experiments. We will examine this issue further in Section 4.4.

Figure 1. Degrees of Freedom (DOF) for signal (d_s), obtained in the experiments with (a) 80 observations and (b) 40 observations per data assimilation cycle. Note that d_s is constant in time in the 3D-Var experiment, which is a consequence of a constant forecast error covariance.

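The rank argument above can be made concrete. The following is a minimal sketch, assuming an MLEF-style formulation in which the ensemble-subspace information matrix is C = Z^T Z, with column i of Z the i-th observation-space ensemble perturbation scaled by R^{-1/2}, and d_s computed from the eigenvalues of C as sum_i lam_i / (1 + lam_i); all names and dimensions are illustrative:

```python
import numpy as np

def dof_for_signal(Z):
    """DOF for signal from the ensemble-subspace information matrix C = Z^T Z.
    C is N_ens x N_ens, so d_s has at most N_ens non-zero contributions:
    d_s = sum_i lam_i / (1 + lam_i) < N_ens."""
    C = Z.T @ Z                       # small matrix: cheap when N_ens is small
    lam = np.linalg.eigvalsh(C)
    lam = np.clip(lam, 0.0, None)     # guard against tiny negative round-off
    return float(np.sum(lam / (1.0 + lam)))

rng = np.random.default_rng(1)
Z = rng.standard_normal((80, 10))     # toy: 80 observations, 10 ensemble members

ds_10 = dof_for_signal(Z)             # bounded above by N_ens = 10
ds_5 = dof_for_signal(Z[:, :5])       # smaller rank gives smaller d_s
```

The hard bound d_s < N_ens is exactly why the reduced-rank experiments produce smaller information measures than the full-rank KF, and why an insufficient ensemble can saturate at d_s ≈ N_ens in every cycle.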
4.3. Impact of covariance localization

In this subsection we evaluate the impact of covariance localization on the information measures, focusing on the experiment with the small ensemble size (10 ensemble members) and the smaller number of observations (40 observations). In Figure 2, DOF for signal obtained in the experiments with and without localization is plotted as a function of data assimilation cycles. The figure indicates that covariance localization generally increases the amount of information. This is not surprising, since covariance localization introduces extra DOF into the data assimilation system (e.g. Hamill et al., 2001), but the total number of DOF cannot exceed the ensemble size (N_ens), since the information matrix C can have at most N_ens non-zero eigenvalues. An important observation is that localization does not change the essential character of the information measures (the lines with and without covariance localization are approximately parallel). There is, however, a notable departure between the two lines around cycle 56. Note, however, that because we use a single-column model, it is likely that maxima and minima are shifted by a single point in time, even under similar experimental conditions.

4.4. Temporal evolution of the information measures

As observed in Figures 1 and 2, the information measures reach a maximum in the first data assimilation cycle. There are also two pronounced local maxima around cycles 40 and 50 (the exact locations of the maxima vary between different experiments). In the following text, we examine whether there is a correlation between the information measures in Figures 1 and 2 and the true model state evolution. The true model state evolution is shown in Figure 3(a), (b), (c) and (d), where true temperature, true specific humidity, observed temperature and observed specific humidity are plotted as functions of data assimilation cycles and model vertical levels.
Figure 2. Values of DOF for signal (d_s), obtained in the experiments with 10 ensemble members and 40 observations (with and without covariance localization), plotted as functions of data assimilation cycles.

One can observe rapid, front-like, time-tilted changes in both temperature and humidity around cycles 40 and 50. Figures 1 and 2 indicate the two local maxima in the information measures around the same data assimilation cycles. One can also observe correlations between additional smaller local maxima in Figures 1 and 2 and rapid changes in Figure 3, though the rapid changes are more pronounced in the humidity field than in the temperature field. It is, therefore, evident that the time evolution of the information measures is correlated with the time evolution of the true model state. Since the information measures employ a flow-dependent forecast error covariance, this confirms that the flow-dependency of the forecast error covariance is reasonably correct. Note, however, that a flow-dependent forecast error covariance does not always imply that the information measures are flow-dependent. For example, experiments with an insufficient ensemble size would commonly produce d_s = N_ens in all data assimilation cycles, thus indicating that d_s is not sensitive to changes in the true model state, even though the forecast error covariance is flow-dependent. Thus, having a correct flow-dependent forecast error covariance matrix is of fundamental importance for describing prior knowledge about the truth when calculating information measures.

One can also observe more variability in the observations than in the corresponding true fields, especially for the specific humidity field (Figure 3(b) and (d)). This is a manifestation of representativeness error, introduced by randomly perturbing the model state variables when creating simulated observations.
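The construction of simulated observations with representativeness error, as described above, can be sketched as follows. The perturbation magnitudes, the way the representativeness error is folded into R, and all names are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_obs(x_true, sigma_instr, sigma_repr):
    """Model-simulated observations: randomly perturb the true state
    (a proxy for representativeness error) and add instrument noise.
    Total observation error variance is sigma_instr^2 + sigma_repr^2."""
    x_perturbed = x_true + sigma_repr * rng.standard_normal(x_true.shape)
    return x_perturbed + sigma_instr * rng.standard_normal(x_true.shape)

x_true = 280.0 + np.zeros(80)             # toy temperature "profile", K
y = simulate_obs(x_true, sigma_instr=0.8, sigma_repr=0.6)

# One way to account for representativeness error in the assimilation:
# inflate the instrument error variance by an empirical factor alpha
# (hypothetical value; the paper uses an empirical parameter from Table I).
alpha = 1.0 + (0.6 / 0.8) ** 2
R_total = alpha * 0.8**2 * np.eye(80)     # diagonal equals the total error variance
```

In this toy setup the inflated R_total matches the true total observation error variance exactly; in practice alpha has to be tuned empirically.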
Recall that we have approximately accounted for the impact of the representativeness error through the empirical parameter α (Table I).

4.5. Trace of P_f

Let us now examine the magnitude of the forecast error covariance. As an example, we present the trace of the forecast error covariance matrix as a function of data assimilation cycles obtained in the KF and 3D-Var experiments with 80 observations (Figure 4). In Figure 4(a) the temperature component (of the total trace) is given, and in Figure 4(b) the specific humidity component is shown. As expected, Figure 4 indicates time-varying magnitudes of P_f for the KF experiment and constant magnitudes of P_f for the 3D-Var experiment. The figure also implies that the largest values of d_s, obtained in the first data assimilation cycle (Figures 1 and 2), are not a simple consequence of using a large initial P_f, since much larger magnitudes of P_f are obtained in later data assimilation cycles (e.g. in the KF experiment around cycles 40 and 50). Thus, we can conclude that the reason for the large values of d_s in the first data assimilation cycle is an inadequate P_f, not necessarily a large P_f. The results in Figure 4 also suggest that the large values of the information measures obtained in the 3D-Var experiment are a consequence of the shape of P_f, not necessarily of the large magnitude of P_f. One can, however, appropriately tune the P_f used in the 3D-Var experiment in order to reduce or increase the information measures.

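Because the ensemble methods carry P_f in square-root (ensemble perturbation) form, the trace plotted in Figure 4 can be evaluated per variable without ever forming the full covariance matrix. A minimal sketch under assumed toy dimensions (all names hypothetical):

```python
import numpy as np

def trace_Pf_components(P_sqrt, idx_T, idx_q):
    """Trace of P_f = P_sqrt P_sqrt^T without forming the full matrix:
    tr(P_f) equals the sum of squared entries of the square-root columns.
    idx_T / idx_q select the temperature and humidity rows of the state."""
    return (float(np.sum(P_sqrt[idx_T] ** 2)),   # temperature component, K^2
            float(np.sum(P_sqrt[idx_q] ** 2)))   # humidity component, (kg/kg)^2

rng = np.random.default_rng(3)
n_lev = 40                                        # toy number of vertical levels
P_sqrt = 0.1 * rng.standard_normal((2 * n_lev, 10))   # toy ensemble square root

tr_T, tr_q = trace_Pf_components(P_sqrt,
                                 np.arange(n_lev),          # temperature rows
                                 np.arange(n_lev, 2 * n_lev))  # humidity rows

# Cross-check against the explicit full covariance
Pf = P_sqrt @ P_sqrt.T
assert np.isclose(tr_T + tr_q, np.trace(Pf))
```

This identity is what makes monitoring tr(P_f) cheap even when the full n x n covariance is never stored, which is the usual situation in reduced-rank ensemble filters.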
Figure 3. (a) True temperature, (b) true specific humidity, (c) observed temperature, and (d) observed specific humidity, shown as functions of data assimilation cycles and model vertical levels. Observations defined at each grid point (80 observations) are shown in (c) and (d). Units for temperature are K, and for specific humidity g kg⁻¹. Note the rapid time-tilted changes in both temperature and humidity around cycles 40 and 50.

Figure 4. Trace of P_f, shown as a function of data assimilation cycles. Results from the KF and the 3D-Var experiments with 80 observations are plotted. The temperature component of the total trace is given in (a) in units of K², and the specific humidity component of the total trace is given in (b) in units of kg² kg⁻². The trace of P_f is constant in all cycles for the 3D-Var experiment; it is equal to .6 K² for temperature and kg² kg⁻² for specific humidity.

4.6. Temporal evolution of the analysis and forecast errors

Let us conclude this section by examining the temporal evolution of the analysis and forecast RMS errors, calculated with respect to the truth, for the different data assimilation experiments. We show the RMS errors obtained in the KF, the 3D-Var, and the experiment with 10 ensemble members with covariance localization. The errors of the three experiments are shown as functions of model vertical levels and data assimilation cycles for temperature (Figure 5) and for specific humidity (Figure 6). For reference, we also show the temporal evolution of the errors of temperature and specific humidity obtained in the experiment without data assimilation (Figure 7(a) and (b)). Examination of Figures 5, 6 and 7 indicates that the errors of the experiment without data assimilation (no obs, shown in Figure 7) are the largest, and that they are at a maximum around cycles 40 and 50.
Around the same cycles, maxima in d_s (Figures 1 and 2) and the abrupt changes in the true model state (Figure 3) were also observed. These largest errors are reduced by the greatest amount, but still not completely eliminated, in the KF experiment, as shown in Figures 5 and 6. This is an expected result, which indicates a highly efficient use of the observed information in the KF, owing to the use

of the full-rank flow-dependent forecast error covariance. The other two experiments, the 3D-Var (Figures 5(c) and (d), 6(c) and (d)) and the 10-ensemble-member experiment (Figures 5(e) and (f), 6(e) and (f)), also indicate considerable, but much smaller, error reductions with respect to the experiment without data assimilation. Comparison of the RMS errors of the 3D-Var experiment, which uses a full-rank but constant forecast error covariance, with those of the MLEF experiment with 10 ensemble members, which uses a flow-dependent but considerably rank-reduced forecast error covariance, indicates generally slightly better performance of the 3D-Var experiment. As shown in Table II, the analysis and background errors obtained using the larger ensemble sizes (e.g. 20 and 40 ensemble members) are generally smaller than the 3D-Var errors. Thus, there is a trade-off regarding the

Figure 5. Analysis and background errors of temperature obtained in three different data assimilation experiments with 40 observations: KF 40obs, 3dv 40obs and 10ens 40obs loc. The errors are calculated with respect to the truth and are shown as functions of data assimilation cycles and model vertical levels. The results from the KF experiment are shown in (a) for the analysis and in (b) for the background. The results of the 3D-Var experiment are shown in (c) for the analysis and in (d) for the background. The results of the experiment with 10 ensemble members, which also includes covariance localization, are given in (e) for the analysis and in (f) for the background. The numbers in the upper right corners are total RMS errors from Table II. The units are K for both the plots and the total RMS errors.

quality of the analysis, depending on how many ensemble members it is feasible to employ.

Figure 6. As in Figure 5, but for specific humidity in g kg⁻¹. The numbers in the upper right corners are total RMS errors from Table II, given in kg kg⁻¹.

5. Conclusions

In this study, we have applied information theory within an ensemble-based data assimilation approach and defined an information matrix in ensemble subspace. We have shown that the information matrix in ensemble subspace can be directly linked to the information matrix typically used in non-ensemble-based data assimilation methods, such as the KF and 3D-Var methods, which provides a framework for consistent comparisons of information measures between different data assimilation methods. We have evaluated this framework in application to the GEOS-5 SCM and simulated observations, employing ARM observations as forcing. We have compared three different data assimilation approaches, the KF, the MLEF and the 3D-Var, focusing on the impact of ensemble size, covariance localization, and the temporal evolution of the true model state on the information measures. Experimental results indicated that the essential character of the information measures was similar in all experiments using a flow-dependent forecast error covariance matrix (the KF and the MLEF experiments with varying ensemble sizes), by indicating similar trends of

increase or decrease with time. The temporal evolution of the information measures was correlated with the true model state evolution, which was an indication that the flow-dependent forecast error covariance was reasonable. The 3D-Var-based information measures were insensitive to the changes in the true model state, since the forecast error covariance was (inadequately) prescribed. These results indicated that it is fundamentally important to use a flow-dependent forecast error covariance in order to adequately describe the prior knowledge about the truth when calculating information measures. As expected, the impact of covariance localization was seen in improved data assimilation results and in increased values of the information measures. The temporal evolution of the information measures remained sensitive to the major changes in the true model state, in a similar way as in the experiments without localization. Comparisons of the three different data assimilation approaches indicated superior performance of the KF approach, owing to the use of the full-rank flow-dependent forecast error covariance matrix. Comparisons of the reduced-rank MLEF and the 3D-Var approach indicated superior MLEF results when the ensemble size was greater than 10, and comparable or slightly worse MLEF results for smaller ensemble sizes (without covariance localization). The results of this study indicated the effectiveness of the proposed framework in applications to different data assimilation approaches.

Figure 7. Analysis errors of the experiment without data assimilation (no obs), calculated with respect to the truth. The results are plotted in (a) for temperature in K, and in (b) for specific humidity in g kg⁻¹. The RMS errors in the upper right corners are in units of K for temperature, and kg kg⁻¹ for specific humidity.
Although the results were very encouraging, further evaluations of the proposed framework are still necessary, especially in applications to data assimilation problems with numerous observations and atmospheric models with many degrees of freedom.

Acknowledgements

The first author would like to thank Graeme Stephens, Christine Johnson, Louie Grasso, and Stephane Vannitsem for inspiring discussions regarding information content measures. This research was supported by NASA grants: , NAG5-05, and NNG04GI5G. We also acknowledge computational resources provided by the Explore computer system at NASA's Goddard Space Flight Center.

References

Abramov RV, Majda AJ. 2004. Quantifying uncertainty for non-Gaussian ensembles in complex systems. SIAM J. Sci. Comput. 26.
Abramov R, Majda A, Kleeman R. 2005. Information theory and predictability for low-frequency variability. J. Atmos. Sci. 62.
Anderson JL. 2001. An ensemble adjustment Kalman filter for data assimilation. Mon. Weather Rev. 129.
Bishop CH, Etherton BJ, Majumdar SJ. 2001. Adaptive sampling with the ensemble transform Kalman filter. Part I: Theoretical aspects. Mon. Weather Rev. 129.
Cohn SE. 1997. An introduction to estimation theory. J. Meteorol. Soc. Jpn 75.
Dee DP. 1995. On-line estimation of error covariance parameters for atmospheric data assimilation. Mon. Weather Rev. 123.
DelSole T. 2004. Predictability and information theory. Part I: Measures of predictability. J. Atmos. Sci. 61.
Engelen RJ, Stephens GL. 2004. Information content of infrared satellite sounding measurements with respect to CO2. J. Appl. Meteorol. 43.
Evensen G. 1994. Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics. J. Geophys. Res. 99(C5).
Fisher M. 2003. Estimation of entropy reduction and degrees of freedom for signal for large variational analysis systems. ECMWF Tech. Memo.
Fletcher SJ, Zupanski M. 2006. A data assimilation method for log-normally distributed observational errors. Q. J. R. Meteorol. Soc. 132.
Gaspari G, Cohn SE. 1999. Construction of correlation functions in two and three dimensions. Q. J. R. Meteorol. Soc. 125.
Gill PE, Murray W, Wright MH. 1981. Practical Optimization. Academic Press: London.
Hamill TM, Snyder C. 2000. A hybrid ensemble Kalman filter–3D variational analysis scheme. Mon. Weather Rev. 128.
Hamill TM, Whitaker JS, Snyder C. 2001. Distance-dependent filtering of background error covariance estimates in an ensemble Kalman filter. Mon. Weather Rev. 129.
Heemink AW, Verlaan M, Segers AJ. 2001. Variance reduced ensemble Kalman filtering. Mon. Weather Rev. 129.
Horn RA, Johnson CR. 1985. Matrix Analysis. Cambridge University Press.
Hoteit I, Pham D-T, Blum J. 2002. A simplified reduced-order Kalman filtering and application to altimetric data assimilation in tropical Pacific. J. Mar. Syst. 36: 101–127.


More information

Analysis Scheme in the Ensemble Kalman Filter

Analysis Scheme in the Ensemble Kalman Filter JUNE 1998 BURGERS ET AL. 1719 Analysis Scheme in the Ensemble Kalman Filter GERRIT BURGERS Royal Netherlands Meteorological Institute, De Bilt, the Netherlands PETER JAN VAN LEEUWEN Institute or Marine

More information

How 4DVAR can benefit from or contribute to EnKF (a 4DVAR perspective)

How 4DVAR can benefit from or contribute to EnKF (a 4DVAR perspective) How 4DVAR can benefit from or contribute to EnKF (a 4DVAR perspective) Dale Barker WWRP/THORPEX Workshop on 4D-Var and Ensemble Kalman Filter Intercomparisons Sociedad Cientifica Argentina, Buenos Aires,

More information

EARLY ONLINE RELEASE

EARLY ONLINE RELEASE AMERICAN METEOROLOGICAL SOCIETY Monthly Weather Review EARLY ONLINE RELEASE This is a preliminary PDF of the author-produced manuscript that has been peer-reviewed and accepted for publication. Since it

More information

J1.3 GENERATING INITIAL CONDITIONS FOR ENSEMBLE FORECASTS: MONTE-CARLO VS. DYNAMIC METHODS

J1.3 GENERATING INITIAL CONDITIONS FOR ENSEMBLE FORECASTS: MONTE-CARLO VS. DYNAMIC METHODS J1.3 GENERATING INITIAL CONDITIONS FOR ENSEMBLE FORECASTS: MONTE-CARLO VS. DYNAMIC METHODS Thomas M. Hamill 1, Jeffrey S. Whitaker 1, and Chris Snyder 2 1 NOAA-CIRES Climate Diagnostics Center, Boulder,

More information

A mechanism for catastrophic filter divergence in data assimilation for sparse observation networks

A mechanism for catastrophic filter divergence in data assimilation for sparse observation networks Manuscript prepared for Nonlin. Processes Geophys. with version 5. of the L A TEX class copernicus.cls. Date: 5 August 23 A mechanism for catastrophic filter divergence in data assimilation for sparse

More information

Initiation of ensemble data assimilation

Initiation of ensemble data assimilation Initiation of ensemble data assimilation By M. ZUPANSKI 1*, S. J. FLETCHER 1, I. M. NAVON 2, B. UZUNOGLU 3, R. P. HEIKES 4, D. A. RANDALL 4, T. D. RINGLER 4 and D. DAESCU 5, 1 Cooperative Institute for

More information

Uncertainty Analysis Using the WRF Maximum Likelihood Ensemble Filter System and Comparison with Dropwindsonde Observations in Typhoon Sinlaku (2008)

Uncertainty Analysis Using the WRF Maximum Likelihood Ensemble Filter System and Comparison with Dropwindsonde Observations in Typhoon Sinlaku (2008) Asia-Pacific J. Atmos. Sci., 46(3), 317-325, 2010 DOI:10.1007/s13143-010-1004-1 Uncertainty Analysis Using the WRF Maximum Likelihood Ensemble Filter System and Comparison with Dropwindsonde Observations

More information

An Intercomparison of Single-Column Model Simulations of Summertime Midlatitude Continental Convection

An Intercomparison of Single-Column Model Simulations of Summertime Midlatitude Continental Convection An Intercomparison of Single-Column Model Simulations of Summertime Midlatitude Continental Convection S. J. Ghan Pacific Northwest National Laboratory Richland, Washington D. A. Randall, K.-M. Xu, and

More information

Ensemble Kalman Filter potential

Ensemble Kalman Filter potential Ensemble Kalman Filter potential Former students (Shu-Chih( Yang, Takemasa Miyoshi, Hong Li, Junjie Liu, Chris Danforth, Ji-Sun Kang, Matt Hoffman), and Eugenia Kalnay University of Maryland Acknowledgements:

More information

6.5 Operational ensemble forecasting methods

6.5 Operational ensemble forecasting methods 6.5 Operational ensemble forecasting methods Ensemble forecasting methods differ mostly by the way the initial perturbations are generated, and can be classified into essentially two classes. In the first

More information

Accepted in Tellus A 2 October, *Correspondence

Accepted in Tellus A 2 October, *Correspondence 1 An Adaptive Covariance Inflation Error Correction Algorithm for Ensemble Filters Jeffrey L. Anderson * NCAR Data Assimilation Research Section P.O. Box 3000 Boulder, CO 80307-3000 USA Accepted in Tellus

More information

Evolution of Forecast Error Covariances in 4D-Var and ETKF methods

Evolution of Forecast Error Covariances in 4D-Var and ETKF methods Evolution of Forecast Error Covariances in 4D-Var and ETKF methods Chiara Piccolo Met Office Exeter, United Kingdom chiara.piccolo@metoffice.gov.uk Introduction Estimates of forecast error covariances

More information

4DEnVar. Four-Dimensional Ensemble-Variational Data Assimilation. Colloque National sur l'assimilation de données

4DEnVar. Four-Dimensional Ensemble-Variational Data Assimilation. Colloque National sur l'assimilation de données Four-Dimensional Ensemble-Variational Data Assimilation 4DEnVar Colloque National sur l'assimilation de données Andrew Lorenc, Toulouse France. 1-3 décembre 2014 Crown copyright Met Office 4DEnVar: Topics

More information

P3.11 A COMPARISON OF AN ENSEMBLE OF POSITIVE/NEGATIVE PAIRS AND A CENTERED SPHERICAL SIMPLEX ENSEMBLE

P3.11 A COMPARISON OF AN ENSEMBLE OF POSITIVE/NEGATIVE PAIRS AND A CENTERED SPHERICAL SIMPLEX ENSEMBLE P3.11 A COMPARISON OF AN ENSEMBLE OF POSITIVE/NEGATIVE PAIRS AND A CENTERED SPHERICAL SIMPLEX ENSEMBLE 1 INTRODUCTION Xuguang Wang* The Pennsylvania State University, University Park, PA Craig H. Bishop

More information

A Local Ensemble Kalman Filter for Atmospheric Data Assimilation

A Local Ensemble Kalman Filter for Atmospheric Data Assimilation Tellus 000, 000 000 (0000) Printed 1 April 2004 (Tellus LATEX style file v2.2) A Local Ensemble Kalman Filter for Atmospheric Data Assimilation By EDWARD OTT 1, BRIAN R. HUNT 2, ISTVAN SZUNYOGH 3, ALEKSEY

More information

Data Assimilation: Finding the Initial Conditions in Large Dynamical Systems. Eric Kostelich Data Mining Seminar, Feb. 6, 2006

Data Assimilation: Finding the Initial Conditions in Large Dynamical Systems. Eric Kostelich Data Mining Seminar, Feb. 6, 2006 Data Assimilation: Finding the Initial Conditions in Large Dynamical Systems Eric Kostelich Data Mining Seminar, Feb. 6, 2006 kostelich@asu.edu Co-Workers Istvan Szunyogh, Gyorgyi Gyarmati, Ed Ott, Brian

More information

Application of the Ensemble Kalman Filter to History Matching

Application of the Ensemble Kalman Filter to History Matching Application of the Ensemble Kalman Filter to History Matching Presented at Texas A&M, November 16,2010 Outline Philosophy EnKF for Data Assimilation Field History Match Using EnKF with Covariance Localization

More information

Convergence of Square Root Ensemble Kalman Filters in the Large Ensemble Limit

Convergence of Square Root Ensemble Kalman Filters in the Large Ensemble Limit Convergence of Square Root Ensemble Kalman Filters in the Large Ensemble Limit Evan Kwiatkowski, Jan Mandel University of Colorado Denver December 11, 2014 OUTLINE 2 Data Assimilation Bayesian Estimation

More information

The Structure of Background-error Covariance in a Four-dimensional Variational Data Assimilation System: Single-point Experiment

The Structure of Background-error Covariance in a Four-dimensional Variational Data Assimilation System: Single-point Experiment ADVANCES IN ATMOSPHERIC SCIENCES, VOL. 27, NO. 6, 2010, 1303 1310 The Structure of Background-error Covariance in a Four-dimensional Variational Data Assimilation System: Single-point Experiment LIU Juanjuan

More information

M.Sc. in Meteorology. Numerical Weather Prediction

M.Sc. in Meteorology. Numerical Weather Prediction M.Sc. in Meteorology UCD Numerical Weather Prediction Prof Peter Lynch Meteorology & Climate Cehtre School of Mathematical Sciences University College Dublin Second Semester, 2005 2006. Text for the Course

More information

GSI 3DVar-based Ensemble-Variational Hybrid Data Assimilation for NCEP Global Forecast System: Single Resolution Experiments

GSI 3DVar-based Ensemble-Variational Hybrid Data Assimilation for NCEP Global Forecast System: Single Resolution Experiments 1 2 GSI 3DVar-based Ensemble-Variational Hybrid Data Assimilation for NCEP Global Forecast System: Single Resolution Experiments 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29

More information

Weight interpolation for efficient data assimilation with the Local Ensemble Transform Kalman Filter

Weight interpolation for efficient data assimilation with the Local Ensemble Transform Kalman Filter QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY Published online 17 December 2008 in Wiley InterScience (www.interscience.wiley.com).353 Weight interpolation for efficient data assimilation with

More information

Advancing Data AssimilaJon Science for OperaJonal Hydrology: Methodology, ComputaJon, and Algorithms

Advancing Data AssimilaJon Science for OperaJonal Hydrology: Methodology, ComputaJon, and Algorithms 2014 CAHMDA & HEPEX- DAFOH Workshop September 8-12, 2014 The University of Texas at AusJn Advancing Data AssimilaJon Science for OperaJonal Hydrology: Methodology, ComputaJon, and Algorithms Milija Zupanski

More information

J5.8 ESTIMATES OF BOUNDARY LAYER PROFILES BY MEANS OF ENSEMBLE-FILTER ASSIMILATION OF NEAR SURFACE OBSERVATIONS IN A PARAMETERIZED PBL

J5.8 ESTIMATES OF BOUNDARY LAYER PROFILES BY MEANS OF ENSEMBLE-FILTER ASSIMILATION OF NEAR SURFACE OBSERVATIONS IN A PARAMETERIZED PBL J5.8 ESTIMATES OF BOUNDARY LAYER PROFILES BY MEANS OF ENSEMBLE-FILTER ASSIMILATION OF NEAR SURFACE OBSERVATIONS IN A PARAMETERIZED PBL Dorita Rostkier-Edelstein 1 and Joshua P. Hacker The National Center

More information

Aspects of the practical application of ensemble-based Kalman filters

Aspects of the practical application of ensemble-based Kalman filters Aspects of the practical application of ensemble-based Kalman filters Lars Nerger Alfred Wegener Institute for Polar and Marine Research Bremerhaven, Germany and Bremen Supercomputing Competence Center

More information

Weak Constraints 4D-Var

Weak Constraints 4D-Var Weak Constraints 4D-Var Yannick Trémolet ECMWF Training Course - Data Assimilation May 1, 2012 Yannick Trémolet Weak Constraints 4D-Var May 1, 2012 1 / 30 Outline 1 Introduction 2 The Maximum Likelihood

More information

Monthly Weather Review The Hybrid Local Ensemble Transform Kalman Filter

Monthly Weather Review The Hybrid Local Ensemble Transform Kalman Filter Monthly Weather Review The Hybrid Local Ensemble Transform Kalman Filter --Manuscript Draft-- Manuscript Number: Full Title: Article Type: Corresponding Author: Corresponding Author's Institution: First

More information

GSI 3DVar-Based Ensemble Variational Hybrid Data Assimilation for NCEP Global Forecast System: Single-Resolution Experiments

GSI 3DVar-Based Ensemble Variational Hybrid Data Assimilation for NCEP Global Forecast System: Single-Resolution Experiments 4098 M O N T H L Y W E A T H E R R E V I E W VOLUME 141 GSI 3DVar-Based Ensemble Variational Hybrid Data Assimilation for NCEP Global Forecast System: Single-Resolution Experiments XUGUANG WANG School

More information

A Dressed Ensemble Kalman Filter Using the Hybrid Coordinate Ocean Model in the Pacific

A Dressed Ensemble Kalman Filter Using the Hybrid Coordinate Ocean Model in the Pacific ADVANCES IN ATMOSPHERIC SCIENCES, VOL. 26, NO. 5, 2009, 1042 1052 A Dressed Ensemble Kalman Filter Using the Hybrid Coordinate Ocean Model in the Pacific WAN Liying 1 (!"#), ZHU Jiang 2 ($%), WANG Hui

More information

Efficient Data Assimilation for Spatiotemporal Chaos: a Local Ensemble Transform Kalman Filter

Efficient Data Assimilation for Spatiotemporal Chaos: a Local Ensemble Transform Kalman Filter Efficient Data Assimilation for Spatiotemporal Chaos: a Local Ensemble Transform Kalman Filter arxiv:physics/0511236 v1 28 Nov 2005 Brian R. Hunt Institute for Physical Science and Technology and Department

More information

Gaussian Process Approximations of Stochastic Differential Equations

Gaussian Process Approximations of Stochastic Differential Equations Gaussian Process Approximations of Stochastic Differential Equations Cédric Archambeau Dan Cawford Manfred Opper John Shawe-Taylor May, 2006 1 Introduction Some of the most complex models routinely run

More information

Estimating observation impact without adjoint model in an ensemble Kalman filter

Estimating observation impact without adjoint model in an ensemble Kalman filter QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY Q. J. R. Meteorol. Soc. 134: 1327 1335 (28) Published online in Wiley InterScience (www.interscience.wiley.com) DOI: 1.12/qj.28 Estimating observation

More information

Coupled Ocean-Atmosphere Assimilation

Coupled Ocean-Atmosphere Assimilation Coupled Ocean-Atmosphere Assimilation Shu-Chih Yang 1, Eugenia Kalnay 2, Joaquim Ballabrera 3, Malaquias Peña 4 1:Department of Atmospheric Sciences, National Central University 2: Department of Atmospheric

More information

The University of Reading

The University of Reading The University of Reading Radial Velocity Assimilation and Experiments with a Simple Shallow Water Model S.J. Rennie 2 and S.L. Dance 1,2 NUMERICAL ANALYSIS REPORT 1/2008 1 Department of Mathematics 2

More information

The Ensemble Kalman Filter: Theoretical Formulation and Practical Implementation

The Ensemble Kalman Filter: Theoretical Formulation and Practical Implementation Noname manuscript No. (will be inserted by the editor) The Ensemble Kalman Filter: Theoretical Formulation and Practical Implementation Geir Evensen Nansen Environmental and Remote Sensing Center, Bergen

More information

Impact of Assimilating Radar Radial Wind in the Canadian High Resolution

Impact of Assimilating Radar Radial Wind in the Canadian High Resolution Impact of Assimilating Radar Radial Wind in the Canadian High Resolution 12B.5 Ensemble Kalman Filter System Kao-Shen Chung 1,2, Weiguang Chang 1, Luc Fillion 1,2, Seung-Jong Baek 1,2 1 Department of Atmospheric

More information

R. E. Petrie and R. N. Bannister. Department of Meteorology, Earley Gate, University of Reading, Reading, RG6 6BB, United Kingdom

R. E. Petrie and R. N. Bannister. Department of Meteorology, Earley Gate, University of Reading, Reading, RG6 6BB, United Kingdom A method for merging flow-dependent forecast error statistics from an ensemble with static statistics for use in high resolution variational data assimilation R. E. Petrie and R. N. Bannister Department

More information

A Gaussian Resampling Particle Filter

A Gaussian Resampling Particle Filter A Gaussian Resampling Particle Filter By X. Xiong 1 and I. M. Navon 1 1 School of Computational Science and Department of Mathematics, Florida State University, Tallahassee, FL 3236, USA 14 July ABSTRACT

More information

Comparing Variational, Ensemble-based and Hybrid Data Assimilations at Regional Scales

Comparing Variational, Ensemble-based and Hybrid Data Assimilations at Regional Scales Comparing Variational, Ensemble-based and Hybrid Data Assimilations at Regional Scales Meng Zhang and Fuqing Zhang Penn State University Xiang-Yu Huang and Xin Zhang NCAR 4 th EnDA Workshop, Albany, NY

More information

Ensemble-based Chemical Data Assimilation II: Covariance Localization

Ensemble-based Chemical Data Assimilation II: Covariance Localization Q. J. R. Meteorol. Soc. (06), 128, pp. 1 999 doi: 10.1256/qj.yy.n Ensemble-based Chemical Data Assimilation II: Covariance Localization By Emil M. Constantinescu 1, Adrian Sandu 1, Tianfeng Chai 2, and

More information

(Extended) Kalman Filter

(Extended) Kalman Filter (Extended) Kalman Filter Brian Hunt 7 June 2013 Goals of Data Assimilation (DA) Estimate the state of a system based on both current and all past observations of the system, using a model for the system

More information

On the Kalman Filter error covariance collapse into the unstable subspace

On the Kalman Filter error covariance collapse into the unstable subspace Nonlin. Processes Geophys., 18, 243 250, 2011 doi:10.5194/npg-18-243-2011 Author(s) 2011. CC Attribution 3.0 License. Nonlinear Processes in Geophysics On the Kalman Filter error covariance collapse into

More information

The Impact of Background Error on Incomplete Observations for 4D-Var Data Assimilation with the FSU GSM

The Impact of Background Error on Incomplete Observations for 4D-Var Data Assimilation with the FSU GSM The Impact of Background Error on Incomplete Observations for 4D-Var Data Assimilation with the FSU GSM I. Michael Navon 1, Dacian N. Daescu 2, and Zhuo Liu 1 1 School of Computational Science and Information

More information

The Canadian approach to ensemble prediction

The Canadian approach to ensemble prediction The Canadian approach to ensemble prediction ECMWF 2017 Annual seminar: Ensemble prediction : past, present and future. Pieter Houtekamer Montreal, Canada Overview. The Canadian approach. What are the

More information

Interpretation of two error statistics estimation methods: 1 - the Derozier s method 2 the NMC method (lagged forecast)

Interpretation of two error statistics estimation methods: 1 - the Derozier s method 2 the NMC method (lagged forecast) Interpretation of two error statistics estimation methods: 1 - the Derozier s method 2 the NMC method (lagged forecast) Richard Ménard, Yan Yang and Yves Rochon Air Quality Research Division Environment

More information

A mollified ensemble Kalman filter

A mollified ensemble Kalman filter Quarterly Journal of the Royal Meteorological Society Q. J. R. Meteorol. Soc. 136: 1636 1643, July 1 Part B A mollified ensemble Kalman filter Kay Bergemann and Sebastian Reich* Universität Potsdam, Potsdam,

More information

The impact of assimilation of microwave radiance in HWRF on the forecast over the western Pacific Ocean

The impact of assimilation of microwave radiance in HWRF on the forecast over the western Pacific Ocean The impact of assimilation of microwave radiance in HWRF on the forecast over the western Pacific Ocean Chun-Chieh Chao, 1 Chien-Ben Chou 2 and Huei-Ping Huang 3 1Meteorological Informatics Business Division,

More information

Multivariate localization methods for ensemble Kalman filtering

Multivariate localization methods for ensemble Kalman filtering doi:10.5194/npg-22-723-2015 Author(s) 2015. CC Attribution 3.0 License. Multivariate localization methods for ensemble Kalman filtering S. Roh 1, M. Jun 1, I. Szunyogh 2, and M. G. Genton 3 1 Department

More information

A Unification of Ensemble Square Root Kalman Filters. and Wolfgang Hiller

A Unification of Ensemble Square Root Kalman Filters. and Wolfgang Hiller Generated using version 3.0 of the official AMS L A TEX template A Unification of Ensemble Square Root Kalman Filters Lars Nerger, Tijana Janjić, Jens Schröter, and Wolfgang Hiller Alfred Wegener Institute

More information

Addressing the nonlinear problem of low order clustering in deterministic filters by using mean-preserving non-symmetric solutions of the ETKF

Addressing the nonlinear problem of low order clustering in deterministic filters by using mean-preserving non-symmetric solutions of the ETKF Addressing the nonlinear problem of low order clustering in deterministic filters by using mean-preserving non-symmetric solutions of the ETKF Javier Amezcua, Dr. Kayo Ide, Dr. Eugenia Kalnay 1 Outline

More information

The Ensemble Kalman Filter: Theoretical Formulation and Practical Implementation

The Ensemble Kalman Filter: Theoretical Formulation and Practical Implementation The Ensemble Kalman Filter: Theoretical Formulation and Practical Implementation Geir Evensen Norsk Hydro, Oil and Energy Research Centre, Bergen PO Box 7190 - N 5020 Bergen, Norway Geir.Evensen@hydro.com

More information

Inter-comparison of 4D-Var and EnKF systems for operational deterministic NWP

Inter-comparison of 4D-Var and EnKF systems for operational deterministic NWP Inter-comparison of 4D-Var and EnKF systems for operational deterministic NWP Project eam: Mark Buehner Cecilien Charette Bin He Peter Houtekamer Herschel Mitchell WWRP/HORPEX Workshop on 4D-VAR and Ensemble

More information

Ensembles and Particle Filters for Ocean Data Assimilation

Ensembles and Particle Filters for Ocean Data Assimilation DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. Ensembles and Particle Filters for Ocean Data Assimilation Robert N. Miller College of Oceanic and Atmospheric Sciences

More information

Comparison between Local Ensemble Transform Kalman Filter and PSAS in the NASA finite volume GCM: perfect model experiments

Comparison between Local Ensemble Transform Kalman Filter and PSAS in the NASA finite volume GCM: perfect model experiments Comparison between Local Ensemble Transform Kalman Filter and PSAS in the NASA finite volume GCM: perfect model experiments Junjie Liu 1*, Elana Judith Fertig 1, and Hong Li 1 Eugenia Kalnay 1, Brian R.

More information