Posterior Covariance vs. Analysis Error Covariance in Data Assimilation

F.-X. Le Dimet (1), I. Gejadze (2), V. Shutyaev (3)
(1) Université de Grenoble, (2) University of Strathclyde, Glasgow, UK, (3) Institute of Numerical Mathematics, RAS, Moscow, Russia
ledimet@imag.fr
September 27, 2013

Overview

- Introduction
- Analysis Error Covariance via Hessian
- Posterior Covariance: A Bayesian Approach
- Effective Covariance Estimates
- Implementation: Some Remarks
- Asymptotic Properties
- Numerical Example
- Conclusion

Introduction 1

There are two basic approaches to Data Assimilation:

- Variational Methods
- Kalman Filter

Both lead to the minimization of a cost function:

J(u) = (1/2)(V_b^{-1}(u - u_b), u - u_b)_X + (1/2)(V_o^{-1}(Cϕ - y), Cϕ - y)_{Y_o},  (1)

where u_b ∈ X is a prior initial-value function (background state), y ∈ Y_o is a prescribed function (observational data), Y_o is an observation space, and C : Y → Y_o is a linear bounded operator. Both approaches yield the same optimal solution ū.
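
As a minimal illustration (not part of the talk), the discrete counterpart of the cost function (1) can be written in a few lines; the names `propagate`, `Vb_inv`, and `Vo_inv` are placeholder assumptions for a concrete model and precomputed inverse covariances.

```python
import numpy as np

def cost(u, u_b, y, Vb_inv, Vo_inv, C, propagate):
    """Discrete sketch of Eq. (1).

    u         : candidate initial condition (control variable)
    u_b       : background state
    y         : observations stacked over the assimilation window
    Vb_inv    : inverse background covariance V_b^{-1}
    Vo_inv    : inverse observation covariance V_o^{-1}
    C         : observation operator (matrix)
    propagate : hypothetical nonlinear model, u -> stacked states phi
    """
    phi = propagate(u)            # phi solves dphi/dt = F(phi) + f
    db = u - u_b                  # background misfit
    do = C @ phi - y              # observation misfit
    return 0.5 * db @ (Vb_inv @ db) + 0.5 * do @ (Vo_inv @ do)
```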

Introduction 2

ū is the solution of the Optimality System:

∂ϕ/∂t = F(ϕ) + f, t ∈ (0, T), ϕ|_{t=0} = u,  (2)

∂ϕ*/∂t + (F′(ϕ))* ϕ* = -C* V_o^{-1} (Cϕ - y), t ∈ (0, T), ϕ*|_{t=T} = 0,  (3)

V_b^{-1} (u - u_b) - ϕ*|_{t=0} = 0.  (4)

Introduction 3

An analysis has two inputs:

- the background u_b
- the observation y

Both carry errors, and the question is: what is the impact of these errors on the analysis? The background can be considered from two different viewpoints:

- Variational viewpoint: the background is a regularization term in Tikhonov's sense, making the problem well posed.
- Bayesian viewpoint: the background is a priori information on the analysis.

Introduction 4

In the linear case both approaches give the same error covariance for the analysis: the inverse of the Hessian of the cost function. In the nonlinear case we get two different objects:

- Variational approach: Analysis Error Covariance
- Bayesian approach: Posterior Covariance

Questions: How can these objects be computed or approximated? What are the differences between them?

Analysis Error Covariance 1: True Solution and Errors

We assume the existence of a true solution u^t and an associated true state ϕ^t verifying:

∂ϕ^t/∂t = F(ϕ^t) + f, t ∈ (0, T), ϕ^t|_{t=0} = u^t.  (5)

Then the errors are defined by

u_b = u^t + ξ_b, y = Cϕ^t + ξ_o,

with covariances V_b and V_o.

Analysis Error Covariance 2: Discrepancy Evolution

Let δϕ = ϕ - ϕ^t, δu = u - u^t. Then for regular F there exists ϕ̃ = ϕ^t + τ(ϕ - ϕ^t), τ ∈ [0, 1], such that

∂δϕ/∂t - F′(ϕ̃) δϕ = 0, t ∈ (0, T), δϕ|_{t=0} = δu,  (6)

∂ϕ*/∂t + (F′(ϕ))* ϕ* = -C* V_o^{-1} (Cδϕ - ξ_o), ϕ*|_{t=T} = 0,  (7)

V_b^{-1} (δu - ξ_b) - ϕ*|_{t=0} = 0.  (8)

Analysis Error Covariance 3: Exact Equation for the Analysis Error

Let us introduce the operator R(ϕ) : X → Y as follows:

R(ϕ)v = ψ, v ∈ X,  (9)

where ψ is the solution of the tangent linear problem

∂ψ/∂t - F′(ϕ) ψ = 0, ψ|_{t=0} = v.  (10)

Then the system for the errors can be represented as a single operator equation for δu:

H(ϕ, ϕ̃) δu = V_b^{-1} ξ_b + R*(ϕ) C* V_o^{-1} ξ_o,  (11)

where

H(ϕ, ϕ̃) = V_b^{-1} + R*(ϕ) C* V_o^{-1} C R(ϕ̃).  (12)

Analysis Error Covariance 4: The H Operator

The operator H(ϕ, ϕ̃) : X → X can be defined by

∂ψ/∂t - F′(ϕ̃) ψ = 0, ψ|_{t=0} = v,  (13)

∂ψ*/∂t + (F′(ϕ))* ψ* = -C* V_o^{-1} C ψ, ψ*|_{t=T} = 0,  (14)

H(ϕ, ϕ̃) v = V_b^{-1} v - ψ*|_{t=0}.  (15)

In general, the operator H(ϕ, ϕ̃) is neither symmetric nor positive definite. For ϕ = ϕ̃ = θ, it becomes the Hessian H(θ) of the cost function J₁ of the following auxiliary DA problem: find δu and δϕ such that J₁(δu) = inf_v J₁(v), where

J₁(δu) = (1/2)(V_b^{-1} (δu - ξ_b), δu - ξ_b)_X + (1/2)(V_o^{-1} (Cδϕ - ξ_o), Cδϕ - ξ_o)_{Y_o},  (16)

and δϕ satisfies the problem

∂δϕ/∂t - F′(θ) δϕ = 0, δϕ|_{t=0} = δu.  (17)
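
As a sketch of how (13)-(15) are applied in practice, a Hessian-vector product costs one tangent-linear sweep plus one adjoint sweep; the constant step matrix `M` and observations at every step are simplifying assumptions, not the talk's setting.

```python
import numpy as np

def hessian_vector_product(v, M, C, Vb_inv, Vo_inv, n_steps):
    """Hessian-vector product H v via Eqs. (13)-(15) for a time-discretized
    model x_{k+1} = M @ x_k (M: placeholder tangent-linear step matrix),
    with observations C x_k at every step k = 1..n_steps."""
    # (13): tangent-linear sweep with psi_0 = v
    psi = [v]
    for _ in range(n_steps):
        psi.append(M @ psi[-1])
    # (14): adjoint sweep backward from zero at the final time,
    # forced by C* Vo^{-1} C psi_k at each observation time
    lam = np.zeros_like(v)
    for k in range(n_steps, 0, -1):
        lam = M.T @ (lam + C.T @ (Vo_inv @ (C @ psi[k])))
    # (15): H v = Vb^{-1} v - psi*|_{t=0}; here lam plays the role of -psi*|_{t=0}
    return Vb_inv @ v + lam
```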

Analysis Error Covariance 5: Analysis Error Covariance via the Hessian

The optimal solution (analysis) error δu is assumed to be unbiased, i.e. E[δu] = 0, and

V_δu = E[(·, δu)_X δu] = E[(·, u - u^t)_X (u - u^t)].  (18)

The best value of ϕ and ϕ̃ independent of ξ_o, ξ_b is apparently ϕ^t. With the transitions R(ϕ̃) → R(ϕ^t), R*(ϕ) → R*(ϕ^t), the error equation reduces to

H(ϕ^t) δu = V_b^{-1} ξ_b + R*(ϕ^t) C* V_o^{-1} ξ_o,  (19)

where

H(·) = V_b^{-1} + R*(·) C* V_o^{-1} C R(·).  (20)

We express δu from equation (19),

δu = H^{-1}(ϕ^t)(V_b^{-1} ξ_b + R*(ϕ^t) C* V_o^{-1} ξ_o),

and obtain for the analysis error covariance

V_δu = H^{-1}(ϕ^t)(V_b^{-1} + R*(ϕ^t) C* V_o^{-1} C R(ϕ^t)) H^{-1}(ϕ^t) = H^{-1}(ϕ^t).  (21)
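
For a problem small enough to form dense matrices, H can be assembled column by column from Hessian-vector products such as the one sketched above and then inverted, giving the estimate (21). This brute-force sketch is only illustrative; realistic systems need low-rank or limited-memory approximations.

```python
import numpy as np

def analysis_error_covariance(hvp, n):
    """Assemble H from Hessian-vector products (hvp: v -> H v) and invert,
    giving V_du = H^{-1} as in Eq. (21). Feasible only for small n."""
    H = np.column_stack([hvp(e) for e in np.eye(n)])
    return np.linalg.inv(H)
```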

Analysis Error Covariance 6: Approximations

In practice the true field ϕ^t is not known, so we must use an approximation ϕ̄ associated with a certain optimal solution ū defined by the real data (ū_b, ȳ), i.e. we use

V_δu = H^{-1}(ϕ̄).  (22)

In (Rabier and Courtier, 1992), the error equation is derived in the form

(V_b^{-1} + R*(ϕ̄) C* V_o^{-1} C R(ϕ̄)) δu = V_b^{-1} ξ_b + R*(ϕ̄) C* V_o^{-1} ξ_o.  (23)

The error due to the transitions R(ϕ̃) → R(ϕ^t) and R*(ϕ) → R*(ϕ^t) we call the linearization error. The use of ϕ̄ instead of ϕ^t in the Hessian computation leads to another error, which we call the origin error.

Posterior Covariance: A Bayesian Approach 1

Given u_b ~ N(ū_b, V_b), y ~ N(ȳ, V_o), the following expression for the posterior distribution of u is derived from Bayes' theorem:

p(u|ȳ) = C exp(-(1/2)(V_b^{-1} (u - ū_b), u - ū_b)_X) exp(-(1/2)(V_o^{-1} (Cϕ - ȳ), Cϕ - ȳ)_{Y_o}).  (24)

The solution of the variational DA problem with the data y = ȳ and u_b = ū_b is equal to the mode of p(u|ȳ) (see e.g. Lorenc, 1986; Tarantola, 1987). Accordingly, the Bayesian posterior covariance is defined by

V_δu = E[(·, u - E[u])_X (u - E[u])],  (25)

with u ~ p(u|ȳ).

Posterior Covariance: A Bayesian Approach 2

In order to compute V_δu by the Monte Carlo method, one must generate a sample of pseudo-random realizations u_i from p(u|ȳ). We consider the u_i to be the solutions of the DA problem with the perturbed data u_b = ū_b + ξ_b and y = ȳ + ξ_o, where ξ_b ~ N(0, V_b), ξ_o ~ N(0, V_o). Further, we assume that E[u] = ū, where ū is the solution of the unperturbed problem, in which case V_δu can be approximated as follows:

V_δu = E[(·, u - ū)_X (u - ū)] = E[(·, δu)_X δu].  (26)
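
A minimal sketch of this Monte Carlo estimator, assuming a black-box solver `solve_da(u_b, y)` that performs one full variational minimization per sample (which is exactly what makes the estimator expensive):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_posterior_covariance(solve_da, ub_bar, y_bar, Vb, Vo, u_bar, n_samples=100):
    """Monte Carlo estimate of V_du, Eq. (26). `solve_da` is a placeholder
    for one complete variational DA solve returning the optimal u."""
    deltas = []
    zb = np.zeros(len(ub_bar))
    zo = np.zeros(len(y_bar))
    for _ in range(n_samples):
        xi_b = rng.multivariate_normal(zb, Vb)        # xi_b ~ N(0, V_b)
        xi_o = rng.multivariate_normal(zo, Vo)        # xi_o ~ N(0, V_o)
        u_i = solve_da(ub_bar + xi_b, y_bar + xi_o)   # perturbed analysis
        deltas.append(u_i - u_bar)                    # du_i = u_i - u_bar
    D = np.asarray(deltas)
    return D.T @ D / n_samples                        # sample E[(., du)_X du]
```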

Posterior Covariance: Optimality System for the Errors

Unperturbed optimality system with u_b = ū_b, y = ȳ:

∂ϕ̄/∂t = F(ϕ̄) + f, ϕ̄|_{t=0} = ū,  (27)

∂ϕ̄*/∂t + (F′(ϕ̄))* ϕ̄* = -C* V_o^{-1} (Cϕ̄ - ȳ), ϕ̄*|_{t=T} = 0,  (28)

V_b^{-1} (ū - ū_b) - ϕ̄*|_{t=0} = 0.  (29)

With perturbations u_b = ū_b + ξ_b, y = ȳ + ξ_o, where ξ_b ∈ X, ξ_o ∈ Y_o, and δu = u - ū, δϕ = ϕ - ϕ̄, δϕ* = ϕ* - ϕ̄*:

∂δϕ/∂t = F(ϕ) - F(ϕ̄), δϕ|_{t=0} = δu,  (30)

∂δϕ*/∂t + (F′(ϕ))* δϕ* = [(F′(ϕ̄) - F′(ϕ))*] ϕ̄* - C* V_o^{-1} (Cδϕ - ξ_o), δϕ*|_{t=T} = 0,  (31)

V_b^{-1} (δu - ξ_b) - δϕ*|_{t=0} = 0.  (32)

Posterior Covariance: Exact Equations for the Errors

Introducing ϕ̃₁ = ϕ̄ + τ₁ δϕ, ϕ̃₂ = ϕ̄ + τ₂ δϕ, τ₁, τ₂ ∈ [0, 1], we derive the exact system for the errors:

∂δϕ/∂t = F′(ϕ̃₁) δϕ, δϕ|_{t=0} = δu,  (33)

∂δϕ*/∂t + (F′(ϕ))* δϕ* = [(F″(ϕ̃₂))* ϕ̄*] δϕ - C* V_o^{-1} (Cδϕ - ξ_o), δϕ*|_{t=T} = 0,  (34)

V_b^{-1} (δu - ξ_b) - δϕ*|_{t=0} = 0,  (35)

equivalent to a single operator equation for δu:

H(ϕ, ϕ̃₁, ϕ̃₂) δu = V_b^{-1} ξ_b + R*(ϕ) C* V_o^{-1} ξ_o,  (36)

where

H(ϕ, ϕ̃₁, ϕ̃₂) = V_b^{-1} + R*(ϕ)(C* V_o^{-1} C - [(F″(ϕ̃₂))* ϕ̄*]) R(ϕ̃₁).  (37)

Posterior Covariance: The Operator H

H(ϕ, ϕ̃₁, ϕ̃₂) : X → X is defined by solving

∂ψ/∂t = F′(ϕ̃₁) ψ, ψ|_{t=0} = v,  (38)

∂ψ*/∂t + (F′(ϕ))* ψ* = [(F″(ϕ̃₂))* ϕ̄*] ψ - C* V_o^{-1} C ψ, ψ*|_{t=T} = 0,  (39)

H(ϕ, ϕ̃₁, ϕ̃₂) v = V_b^{-1} v - ψ*|_{t=0}.  (40)

If ϕ = ϕ̃₁ = ϕ̃₂, then H(ϕ) becomes the Hessian of the cost function of the original DA problem; it is symmetric, and positive definite if u is a minimum of J(u). Equation (39) is often referred to as the second-order adjoint (Le Dimet et al., 2002). As above, we assume that E[δu] ≈ 0, and we consider the following approximations:

R(ϕ̃₁) ≈ R(ϕ̄), R*(ϕ) ≈ R*(ϕ̄), [(F″(ϕ̃₂))* ϕ̄*] ≈ [(F″(ϕ̄))* ϕ̄*].  (41)

Posterior Covariance via the Hessian

The exact error equation (36) is approximated as follows:

H(ϕ̄) δu = V_b^{-1} ξ_b + R*(ϕ̄) C* V_o^{-1} ξ_o,  (42)

where

H(·) = V_b^{-1} + R*(·)(C* V_o^{-1} C - [(F″(·))* ϕ̄*]) R(·).  (43)

Now we express δu,

δu = H^{-1}(ϕ̄)(V_b^{-1} ξ_b + R*(ϕ̄) C* V_o^{-1} ξ_o),

and obtain an approximate expression for the posterior error covariance:

V₁ = H^{-1}(ϕ̄)(V_b^{-1} + R*(ϕ̄) C* V_o^{-1} C R(ϕ̄)) H^{-1}(ϕ̄) = H^{-1}(ϕ̄) 𝓗(ϕ̄) H^{-1}(ϕ̄),  (44)

where 𝓗(ϕ̄) is the Hessian of the cost function J₁ computed at θ = ϕ̄. Other approximations of the posterior covariance:

V₂ = H^{-1}(ϕ̄), V₃ = 𝓗^{-1}(ϕ̄).  (45)
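
Given dense matrices for the full Hessian H(ϕ̄) of (43) and the auxiliary Hessian 𝓗(ϕ̄) of J₁, the three approximations (44)-(45) read, as a sketch:

```python
import numpy as np

def posterior_covariance_estimates(H_full, H_aux):
    """V1, V2, V3 of Eqs. (44)-(45); H_full is the full Hessian H(phi_bar)
    of (43), H_aux the auxiliary Hessian of J1 at theta = phi_bar."""
    H_inv = np.linalg.inv(H_full)
    V1 = H_inv @ H_aux @ H_inv     # Eq. (44)
    V2 = H_inv                     # Eq. (45)
    V3 = np.linalg.inv(H_aux)      # Eq. (45)
    return V1, V2, V3
```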

Posterior Covariance: Effective Estimates

To suppress the linearization errors, the effective inverse Hessian may be used for estimating the analysis error covariance (see Gejadze et al., 2011):

V_δu = E[𝓗^{-1}(ϕ)].  (46)

The same idea applies to the posterior error covariance:

V_δu = E[H^{-1}(ϕ, ϕ̃₁, ϕ̃₂) 𝓗(ϕ) H^{-1}(ϕ, ϕ̃₁, ϕ̃₂)].

First, we substitute the possibly asymmetric and indefinite operator H(ϕ, ϕ̃₁, ϕ̃₂) by H(ϕ):

V_δu ≈ V₁ᵉ = E[H^{-1}(ϕ) 𝓗(ϕ) H^{-1}(ϕ)].  (47)

Next, by assuming 𝓗(ϕ) H^{-1}(ϕ) ≈ I, we get

V_δu ≈ V₂ᵉ = E[H^{-1}(ϕ)].  (48)

Finally, by assuming H^{-1}(ϕ) ≈ 𝓗^{-1}(ϕ), we obtain yet another approximation:

V_δu ≈ V₃ᵉ = E[𝓗^{-1}(ϕ)].  (49)
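
A sketch of the effective estimate (48), assuming placeholder helpers `sample_analysis()` (returns one optimal solution of a perturbed DA problem) and `hessian_at(phi)` (assembles the dense Hessian at that state):

```python
import numpy as np

def effective_inverse_hessian(sample_analysis, hessian_at, n_samples=25):
    """Effective estimate V2e = E[H^{-1}(phi)], Eq. (48): average the
    inverse Hessian over optimal solutions of perturbed DA problems."""
    acc = 0.0
    for _ in range(n_samples):
        phi_i = sample_analysis()                    # one perturbed analysis
        acc = acc + np.linalg.inv(hessian_at(phi_i))
    return acc / n_samples
```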

Numerical Example

ϕ(x, t) is governed by the 1D Burgers equation with a nonlinear viscous term:

∂ϕ/∂t + (1/2) ∂(ϕ²)/∂x = ∂/∂x (ν(ϕ) ∂ϕ/∂x),  (50)

ϕ = ϕ(x, t), t ∈ (0, T), x ∈ (0, 1), with the Neumann boundary conditions

∂ϕ/∂x|_{x=0} = ∂ϕ/∂x|_{x=1} = 0  (51)

and the viscosity coefficient

ν(ϕ) = ν₀ + ν₁ (∂ϕ/∂x)², ν₀, ν₁ = const > 0.  (52)

Two true initial conditions u^t = ϕ^t(x, 0) are considered (case A and case B).
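
For concreteness, one explicit finite-difference step of (50)-(52) might look as follows; the scheme (centered differences, explicit Euler, flux-form diffusion) and the default parameter values are assumptions, since the talk does not specify the discretization.

```python
import numpy as np

def burgers_step(phi, dx, dt, nu0=0.01, nu1=0.001):
    """One explicit step of the Burgers model (50)-(52) with nonlinear
    viscosity and Neumann boundaries (51), via ghost cells."""
    p = np.concatenate(([phi[0]], phi, [phi[-1]]))  # ghost cells enforce (51)
    # advective term (1/2) d(phi^2)/dx, centered
    adv = (p[2:]**2 - p[:-2]**2) / (4 * dx)
    # nonlinear viscosity (52) evaluated at cell faces
    grad = np.diff(p) / dx                          # dphi/dx at faces
    nu = nu0 + nu1 * grad**2
    # diffusion d/dx(nu dphi/dx) in conservative flux form
    diff = np.diff(nu * grad) / dx
    return phi + dt * (diff - adv)
```

An explicit step like this needs dt small enough for stability; the solver actually used in the experiments is not specified in the talk.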

Numerical Example: Case A

[Figure: the field ϕ for case A.]

Numerical Example: Case B

[Figure: the field ϕ for case B.]

Numerical Example: Squared Riemann Distance

μ(A, B) = (Σ_{i=1}^{M} ln² γ_i)^{1/2},

where the γ_i are the generalized eigenvalues of the matrix pair (A, B).

Case   μ²(V₃, V̂)   μ²(V₂, V̂)   μ²(V₁, V̂)   μ²(V₃ᵉ, V̂)   μ²(V₂ᵉ, V̂)   μ²(V₁ᵉ, V̂)
A1     3.817        3.058        4.738        2.250         1.418         1.151
A2     17.89        18.06        21.50        2.535         1.778         1.602
A5     5.832        5.070        5.778        3.710         2.886         2.564
A7     20.21        19.76        22.24        4.290         3.508         3.383
A8     1.133        0.585        1.419        1.108         0.466         0.246
A9     20.18        20.65        24.52        2.191         1.986         1.976
A10    10.01        8.521        8.411        3.200         2.437         2.428
B1     7.271        6.452        6.785        2.852         1.835         1.476
B2     16.42        14.89        14.70        15.61         14.11         13.77
B3     9.937        10.70        17.70        4.125         3.636         3.385
B4     6.223        5.353        11.50        2.600         1.773         1.580
B5     10.73        9.515        9.875        4.752         3.178         2.530
B6     6.184        4.153        8.621        4.874         2.479         1.858
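
The distance in the table can be computed from the generalized eigenvalues of the matrix pair, e.g. as in this sketch:

```python
import numpy as np
from scipy.linalg import eigh

def riemann_distance(A, B):
    """mu(A, B) = (sum_i ln^2 gamma_i)^{1/2}, with gamma_i the generalized
    eigenvalues of A v = gamma B v (affine-invariant metric on SPD matrices)."""
    gamma = eigh(A, B, eigvals_only=True)   # generalized eigenvalues
    return np.sqrt(np.sum(np.log(gamma) ** 2))
```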

The reference mean deviation σ̂(x) (related to V̂): cases A2, A8.

[Figure: σ̂(x) versus x for cases A2 and A8.]

The mean deviation vector σ is defined as follows: σ(i) = (V(i, i))^{1/2}.

The reference mean deviation σ̂(x) (related to V̂): cases B6, B9.

[Figure: σ̂(x) versus x for cases B6 and B9.]

Absolute errors in the correlation matrix: V₃, V₃ᵉ, V₁ᵉ (case A2).

[Figure: panels a), b), c).]

Absolute errors in the correlation matrix: V₃, V₃ᵉ, V₁ᵉ (case A8).

[Figure: panels a), b), c).]

Absolute errors in the correlation matrix: V₃, V₃ᵉ, V₁ᵉ (case B6).

[Figure: panels a), b), c).]

Conclusion

- The variational and Bayesian approaches to DA lead to the minimization of the same cost function.
- For error estimation we obtain two different concepts: the Analysis Error Covariance for the variational approach and the Posterior Covariance for the Bayesian approach.
- In the linear case the two covariances coincide.
- In the nonlinear case strong discrepancies can occur.
- Algorithms for the estimation and approximation of these covariances are proposed.

Reference: Gejadze, I., Shutyaev, V., Le Dimet, F.-X. Analysis error covariance versus posterior covariance in variational data assimilation problems. Q. J. R. Meteorol. Soc. (2013).