Posterior Covariance vs. Analysis Error Covariance in Data Assimilation

Transcription:

Posterior Covariance vs. Analysis Error Covariance in Data Assimilation. François-Xavier Le Dimet, Victor Shutyaev, Igor Gejadze. 6th WMO Symposium on Data Assimilation, Oct 2013, College Park, United States. HAL Id: hal-00932577, https://hal.inria.fr/hal-00932577, submitted on 17 Jan 2014.

Posterior Covariance vs. Analysis Error Covariance in Data Assimilation
F.-X. Le Dimet (1), I. Gejadze (2), V. Shutyaev (3)
(1) Université de Grenoble; (2) University of Strathclyde, Glasgow, UK; (3) Institute of Numerical Mathematics, RAS, Moscow, Russia
ledimet@imag.fr
September 27, 2013

Overview
Introduction
Analysis Error Covariance via Hessian
Posterior Covariance: A Bayesian Approach
Effective Covariance Estimates
Implementation: Some Remarks
Asymptotic Properties
Numerical Example
Conclusion

Introduction 1

There are two basic approaches to data assimilation: variational methods and the Kalman filter. Both lead to the minimization of a cost function

J(u) = (1/2)(V_b^{-1}(u − u_b), u − u_b)_X + (1/2)(V_o^{-1}(Cϕ − y), Cϕ − y)_{Y_o},   (1)

where u_b ∈ X is a prior initial-value function (background state), y ∈ Y_o is a prescribed function (observational data), Y_o is the observation space, and C : Y → Y_o is a bounded linear operator. Both approaches yield the same optimal solution ū.
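
For a discretized problem, (1) is a sum of two weighted quadratic misfits. A minimal NumPy sketch of evaluating it (illustrative only: `model` stands for the map u ↦ ϕ, and V_b, V_o, C are taken as finite-dimensional matrices, with the time-distributed observations collapsed into a single vector y):

```python
import numpy as np

def cost(u, u_b, y, V_b, V_o, C, model):
    """Discrete analogue of (1):
    J(u) = 1/2 (V_b^{-1}(u - u_b), u - u_b) + 1/2 (V_o^{-1}(C phi - y), C phi - y)."""
    d_b = u - u_b               # background departure
    d_o = C @ model(u) - y      # observation departure, phi = model(u)
    return 0.5 * d_b @ np.linalg.solve(V_b, d_b) + 0.5 * d_o @ np.linalg.solve(V_o, d_o)
```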

Introduction 2

ū is the solution of the optimality system:

∂ϕ/∂t = F(ϕ) + f,  t ∈ (0,T),   ϕ|_{t=0} = u,   (2)

∂ϕ*/∂t + (F'(ϕ))*ϕ* = C*V_o^{-1}(Cϕ − y),  t ∈ (0,T),   ϕ*|_{t=T} = 0,   (3)

V_b^{-1}(u − u_b) − ϕ*|_{t=0} = 0.   (4)

Introduction 3

An analysis has two inputs: the background u_b and the observations y. Both carry errors, and the question is: what is the impact of these errors on the analysis?

The background can be considered from two different viewpoints:
Variational viewpoint: the background is a regularization term in the Tikhonov sense, making the problem well posed.
Bayesian viewpoint: the background is a priori information on the analysis.

Introduction 4

In the linear case both approaches give the same error covariance for the analysis: the inverse of the Hessian of the cost function. In the nonlinear case we get two different objects:
Variational approach: the analysis error covariance.
Bayesian approach: the posterior covariance.

Questions: How can these objects be computed or approximated? What are the differences between them?
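
A small numerical illustration of the linear-case statement (an editorial sketch, not part of the slides; the dimensions and matrices are arbitrary). With a linear model, the combined tangent/observation operator is a single matrix G, the analysis error satisfies δu = H^{-1}(V_b^{-1}ξ_b + G^T V_o^{-1}ξ_o), and its sample covariance converges to H^{-1}:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 6                                              # toy state / observation sizes
G = rng.standard_normal((m, n))                          # combined operator C R
V_b, V_o = np.eye(n), 0.5 * np.eye(m)
H = np.linalg.inv(V_b) + G.T @ np.linalg.inv(V_o) @ G    # Hessian of the cost

deltas = []
for _ in range(20000):
    xi_b = rng.multivariate_normal(np.zeros(n), V_b)     # background error sample
    xi_o = rng.multivariate_normal(np.zeros(m), V_o)     # observation error sample
    rhs = np.linalg.inv(V_b) @ xi_b + G.T @ np.linalg.inv(V_o) @ xi_o
    deltas.append(np.linalg.solve(H, rhs))               # delta_u = H^{-1} rhs

print(np.abs(np.cov(np.array(deltas).T) - np.linalg.inv(H)).max())  # small (MC noise)
```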

Analysis Error Covariance 1: True Solution and Errors

We assume the existence of a true solution u^t and an associated true state ϕ^t satisfying

∂ϕ^t/∂t = F(ϕ^t) + f,  t ∈ (0,T),   ϕ^t|_{t=0} = u^t.   (5)

The errors are then defined by u_b = u^t + ξ_b, y = Cϕ^t + ξ_o, with covariances V_b and V_o.

Analysis Error Covariance 2: Discrepancy Evolution

Let δϕ = ϕ − ϕ^t, δu = u − u^t. Then, for regular F, there exists ϕ̃ = ϕ^t + τ(ϕ − ϕ^t), τ ∈ [0,1], such that

∂δϕ/∂t − F'(ϕ̃)δϕ = 0,  t ∈ (0,T),   δϕ|_{t=0} = δu,   (6)

∂ϕ*/∂t + (F'(ϕ))*ϕ* = C*V_o^{-1}(Cδϕ − ξ_o),   ϕ*|_{t=T} = 0,   (7)

V_b^{-1}(δu − ξ_b) − ϕ*|_{t=0} = 0.   (8)

Analysis Error Covariance 3: Exact Equation for the Analysis Error

Let us introduce the operator R(ϕ) : X → Y as follows:

R(ϕ)v = ψ,  v ∈ X,   (9)

where ψ is the solution of the tangent linear problem

∂ψ/∂t − F'(ϕ)ψ = 0,   ψ|_{t=0} = v.   (10)

Then the system for the errors can be represented as a single operator equation for δu:

H(ϕ, ϕ̃)δu = V_b^{-1}ξ_b + R*(ϕ)C*V_o^{-1}ξ_o,   (11)

where

H(ϕ, ϕ̃) = V_b^{-1} + R*(ϕ)C*V_o^{-1}C R(ϕ̃).   (12)

Analysis Error Covariance 4: The H Operator

The operator H(ϕ, ϕ̃) : X → X can be defined by

∂ψ/∂t − F'(ϕ̃)ψ = 0,   ψ|_{t=0} = v,   (13)

∂ψ*/∂t + (F'(ϕ))*ψ* = C*V_o^{-1}Cψ,   ψ*|_{t=T} = 0,   (14)

H(ϕ, ϕ̃)v = V_b^{-1}v − ψ*|_{t=0}.   (15)

In general the operator H(ϕ, ϕ̃) is neither symmetric nor positive definite. For ϕ = ϕ̃ = θ, it becomes the Hessian H(θ) of the cost function J_1 in the following auxiliary DA problem: find δu and δϕ such that J_1(δu) = inf_v J_1(v), where

J_1(δu) = (1/2)(V_b^{-1}(δu − ξ_b), δu − ξ_b)_X + (1/2)(V_o^{-1}(Cδϕ − ξ_o), Cδϕ − ξ_o)_{Y_o},   (16)

and δϕ satisfies the problem

∂δϕ/∂t − F'(θ)δϕ = 0,   δϕ|_{t=0} = δu.   (17)
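
Operationally, (13)–(15) say that one application of H costs one tangent-linear integration plus one adjoint integration; H is never formed explicitly. A matrix-free sketch of applying H^{-1} with conjugate gradients, valid in the symmetric case ϕ = ϕ̃ (the routine `hess_vec`, which would wrap those two integrations, is a hypothetical placeholder):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def apply_inverse_hessian(hess_vec, v):
    """Solve H x = v by CG, where hess_vec(w) returns H w via one
    tangent-linear run (13) and one adjoint run (14), assembled as in (15)."""
    n = v.shape[0]
    H_op = LinearOperator((n, n), matvec=hess_vec, dtype=float)
    x, info = cg(H_op, v)          # CG assumes H symmetric positive definite
    if info != 0:
        raise RuntimeError("CG did not converge")
    return x
```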

Analysis Error Covariance 5: Analysis Error Covariance via the Hessian

The optimal solution (analysis) error δu is assumed to be unbiased, i.e. E[δu] = 0, and

V_δu = E[(·, δu)_X δu] = E[(·, u − u^t)_X (u − u^t)].   (18)

The best values of ϕ and ϕ̃ independent of ξ_o, ξ_b are apparently ϕ^t, and using

R(ϕ̃) ≈ R(ϕ^t),   R*(ϕ) ≈ R*(ϕ^t),   (19)

the error equation reduces to

H(ϕ^t)δu = V_b^{-1}ξ_b + R*(ϕ^t)C*V_o^{-1}ξ_o,   H(·) = V_b^{-1} + R*(·)C*V_o^{-1}C R(·).   (20)

We express δu from equation (20),

δu = H^{-1}(ϕ^t)(V_b^{-1}ξ_b + R*(ϕ^t)C*V_o^{-1}ξ_o),

and obtain for the analysis error covariance

V_δu = H^{-1}(ϕ^t)(V_b^{-1} + R*(ϕ^t)C*V_o^{-1}C R(ϕ^t))H^{-1}(ϕ^t) = H^{-1}(ϕ^t).   (21)

Analysis Error Covariance 6: Approximations

In practice the true field ϕ^t is not known, so we have to use an approximation ϕ̄ associated with a certain optimal solution ū defined by the real data (ū_b, ȳ), i.e. we use

V_δu = H^{-1}(ϕ̄).   (22)

In (Rabier and Courtier, 1992), the error equation is derived in the form

(V_b^{-1} + R*(ϕ)C*V_o^{-1}C R(ϕ))δu = V_b^{-1}ξ_b + R*(ϕ)C*V_o^{-1}ξ_o.   (23)

The error due to the transitions R(ϕ̃) → R(ϕ^t) and R*(ϕ) → R*(ϕ^t) is called the linearization error. The use of ϕ̄ instead of ϕ^t in the Hessian computation leads to another error, which we call the origin error.

Posterior Covariance: Bayesian Approach 1

Given u_b ~ N(ū_b, V_b), y ~ N(ȳ, V_o), the following expression for the posterior distribution of u is derived from Bayes' theorem:

p(u|ȳ) = C exp(−(1/2)(V_b^{-1}(u − ū_b), u − ū_b)_X) exp(−(1/2)(V_o^{-1}(Cϕ − ȳ), Cϕ − ȳ)_{Y_o}).   (24)

The solution of the variational DA problem with the data y = ȳ and u_b = ū_b is equal to the mode of p(u|ȳ) (see e.g. Lorenc, 1986; Tarantola, 1987). Accordingly, the Bayesian posterior covariance is defined by

V_δu = E[(·, u − E[u])_X (u − E[u])],   (25)

with u ~ p(u|ȳ).

Posterior Covariance: Bayesian Approach 2

In order to compute V_δu by the Monte Carlo method, one must generate a sample of pseudo-random realizations u_i from p(u|ȳ). We consider the u_i to be the solutions of the DA problem with the perturbed data u_b = ū_b + ξ_b and y = ȳ + ξ_o, where ξ_b ~ N(0, V_b), ξ_o ~ N(0, V_o). Further, we assume that E[u] = ū, where ū is the solution of the unperturbed problem, in which case V_δu can be approximated as follows:

V_δu = E[(·, u − ū)_X (u − ū)] = E[(·, δu)_X δu].   (26)
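
A minimal sketch of this Monte Carlo estimate (illustrative only: `solve_da`, `sample_xi_b` and `sample_xi_o` are hypothetical callables standing for the variational solver and for samplers of N(0, V_b) and N(0, V_o)):

```python
import numpy as np

def posterior_cov_mc(solve_da, u_b_bar, y_bar, sample_xi_b, sample_xi_o, n_samples=100):
    """Sample estimate of (26): scatter of perturbed analyses around the unperturbed one."""
    u_bar = solve_da(u_b_bar, y_bar)                     # unperturbed analysis, taken as E[u]
    devs = []
    for _ in range(n_samples):
        u_i = solve_da(u_b_bar + sample_xi_b(), y_bar + sample_xi_o())
        devs.append(u_i - u_bar)                         # delta u_i
    D = np.array(devs)
    return D.T @ D / (n_samples - 1)                     # approximation of V_delta_u
```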

Posterior Covariance: Optimality System for the Errors

Unperturbed optimality system, with u_b = ū_b, y = ȳ:

∂ϕ̄/∂t = F(ϕ̄) + f,   ϕ̄|_{t=0} = ū,   (27)

∂ϕ̄*/∂t + (F'(ϕ̄))*ϕ̄* = C*V_o^{-1}(Cϕ̄ − ȳ),   ϕ̄*|_{t=T} = 0,   (28)

V_b^{-1}(ū − ū_b) − ϕ̄*|_{t=0} = 0.   (29)

With perturbations u_b = ū_b + ξ_b, y = ȳ + ξ_o, where ξ_b ∈ X, ξ_o ∈ Y_o, let δu = u − ū, δϕ = ϕ − ϕ̄, δϕ* = ϕ* − ϕ̄*. Then

∂δϕ/∂t = F(ϕ) − F(ϕ̄),   δϕ|_{t=0} = δu,   (30)

∂δϕ*/∂t + (F'(ϕ))*δϕ* = [(F'(ϕ̄))* − (F'(ϕ))*]ϕ̄* + C*V_o^{-1}(Cδϕ − ξ_o),   δϕ*|_{t=T} = 0,   (31)

V_b^{-1}(δu − ξ_b) − δϕ*|_{t=0} = 0.   (32)

Posterior Covariance: Exact Error Equations

Introducing ϕ̃_1 = ϕ̄ + τ_1 δϕ, ϕ̃_2 = ϕ̄ + τ_2 δϕ, τ_1, τ_2 ∈ [0,1], we derive the exact system for the errors:

∂δϕ/∂t = F'(ϕ̃_1)δϕ,   δϕ|_{t=0} = δu,   (33)

∂δϕ*/∂t + (F'(ϕ))*δϕ* = −[(F''(ϕ̃_2))*ϕ̄*]*δϕ + C*V_o^{-1}(Cδϕ − ξ_o),   δϕ*|_{t=T} = 0,   (34)

V_b^{-1}(δu − ξ_b) − δϕ*|_{t=0} = 0,   (35)

which is equivalent to a single operator equation for δu:

ℋ(ϕ, ϕ̃_1, ϕ̃_2)δu = V_b^{-1}ξ_b + R*(ϕ)C*V_o^{-1}ξ_o,   (36)

where

ℋ(ϕ, ϕ̃_1, ϕ̃_2) = V_b^{-1} + R*(ϕ)(C*V_o^{-1}C − [(F''(ϕ̃_2))*ϕ̄*]*)R(ϕ̃_1).   (37)

Posterior Covariance: The Operator ℋ

ℋ(ϕ, ϕ̃_1, ϕ̃_2) : X → X is defined by solving

∂ψ/∂t = F'(ϕ̃_1)ψ,   ψ|_{t=0} = v,   (38)

∂ψ*/∂t + (F'(ϕ))*ψ* = −[(F''(ϕ̃_2))*ϕ̄*]*ψ + C*V_o^{-1}Cψ,   ψ*|_{t=T} = 0,   (39)

ℋ(ϕ, ϕ̃_1, ϕ̃_2)v = V_b^{-1}v − ψ*|_{t=0}.   (40)

If ϕ = ϕ̃_1 = ϕ̃_2, then ℋ(ϕ) becomes the Hessian of the cost function in the original DA problem; it is symmetric, and positive definite if u is a minimum of J(u). Equation (39) is often referred to as the second-order adjoint (Le Dimet et al., 2002).

As above, we assume that E[δu] = 0, and we consider the following approximations:

R(ϕ̃_1) ≈ R(ϕ̄),   R*(ϕ) ≈ R*(ϕ̄),   [(F''(ϕ̃_2))*ϕ̄*]* ≈ [(F''(ϕ̄))*ϕ̄*]*.   (41)

Posterior Covariance via the Hessian

The exact error equation (36) is then approximated as

ℋ(ϕ̄)δu = V_b^{-1}ξ_b + R*(ϕ̄)C*V_o^{-1}ξ_o,   (42)

where

ℋ(·) = V_b^{-1} + R*(·)(C*V_o^{-1}C − [(F''(·))*ϕ̄*]*)R(·).   (43)

Now we express δu,

δu = ℋ^{-1}(ϕ̄)(V_b^{-1}ξ_b + R*(ϕ̄)C*V_o^{-1}ξ_o),

and obtain an approximate expression for the posterior error covariance:

V_1 = ℋ^{-1}(ϕ̄)(V_b^{-1} + R*(ϕ̄)C*V_o^{-1}C R(ϕ̄))ℋ^{-1}(ϕ̄) = ℋ^{-1}(ϕ̄) H(ϕ̄) ℋ^{-1}(ϕ̄),   (44)

where H(ϕ̄) is the Hessian of the cost function J_1 computed at θ = ϕ̄. Other approximations of the posterior covariance:

V_2 = ℋ^{-1}(ϕ̄),   V_3 = H^{-1}(ϕ̄).   (45)
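
For a small state dimension, where both Hessians can be formed as dense matrices, the three approximations are direct matrix expressions. A hedged sketch (the names `Hcal` for ℋ(ϕ̄) and `H_gn` for the J_1 Hessian H(ϕ̄) are placeholders):

```python
import numpy as np

def posterior_covariance_approximations(Hcal, H_gn):
    """V_1, V_2, V_3 of (44)-(45) from dense matrices Hcal = Hcal(phi_bar), H_gn = H(phi_bar)."""
    Hcal_inv = np.linalg.inv(Hcal)
    V1 = Hcal_inv @ H_gn @ Hcal_inv    # (44)
    V2 = Hcal_inv                      # (45), first variant
    V3 = np.linalg.inv(H_gn)           # (45), second variant
    return V1, V2, V3
```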

Posterior Covariance: Effective Estimates

To suppress the linearization errors, the effective inverse Hessian may be used for estimating the analysis error covariance (see Gejadze et al., 2011):

V_δu = E[H^{-1}(ϕ)].   (46)

The same is true for the posterior error covariance V_δu:

V_δu = E[ℋ^{-1}(ϕ, ϕ̃_1, ϕ̃_2) H(ϕ) ℋ^{-1}(ϕ, ϕ̃_1, ϕ̃_2)].

First, we substitute the possibly asymmetric and indefinite operator ℋ(ϕ, ϕ̃_1, ϕ̃_2) by ℋ(ϕ):

V_δu ≈ V^e_1 = E[ℋ^{-1}(ϕ) H(ϕ) ℋ^{-1}(ϕ)].   (47)

Next, by assuming H(ϕ)ℋ^{-1}(ϕ) ≈ I, we get

V_δu ≈ V^e_2 = E[ℋ^{-1}(ϕ)].   (48)

Finally, by assuming ℋ^{-1}(ϕ) ≈ H^{-1}(ϕ), we obtain yet another approximation:

V_δu ≈ V^e_3 = E[H^{-1}(ϕ)].   (49)
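
A sample-mean sketch of (47)–(49), again with hypothetical placeholders: `solve_da` returns the analysis for perturbed data, and `hessians_at(u)` returns dense ℋ(ϕ) and H(ϕ) evaluated on the trajectory started from u:

```python
import numpy as np

def effective_estimates(solve_da, hessians_at, u_b_bar, y_bar,
                        sample_xi_b, sample_xi_o, n_samples=25):
    """Monte Carlo versions of V^e_1, V^e_2, V^e_3 in (47)-(49)."""
    V1e = V2e = V3e = 0.0
    for _ in range(n_samples):
        u_i = solve_da(u_b_bar + sample_xi_b(), y_bar + sample_xi_o())
        Hcal, H_gn = hessians_at(u_i)
        Hcal_inv = np.linalg.inv(Hcal)
        V1e = V1e + Hcal_inv @ H_gn @ Hcal_inv   # term of V^e_1
        V2e = V2e + Hcal_inv                     # term of V^e_2
        V3e = V3e + np.linalg.inv(H_gn)          # term of V^e_3
    return V1e / n_samples, V2e / n_samples, V3e / n_samples
```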

Numerical Example

ϕ(x,t) is governed by the 1D Burgers equation with a nonlinear viscous term:

∂ϕ/∂t + (1/2) ∂(ϕ²)/∂x = ∂/∂x ( ν(ϕ) ∂ϕ/∂x ),   (50)

ϕ = ϕ(x,t),  t ∈ (0,T),  x ∈ (0,1), with the Neumann boundary conditions

∂ϕ/∂x |_{x=0} = ∂ϕ/∂x |_{x=1} = 0,   (51)

and the viscosity coefficient

ν(ϕ) = ν_0 + ν_1 (∂ϕ/∂x)²,   ν_0, ν_1 = const > 0.   (52)

Two initial conditions u^t = ϕ^t(x,0) are considered (case A and case B).
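
An illustrative explicit finite-difference integration of (50)–(52); the grid, time step, viscosities and initial condition below are arbitrary demonstration choices, not those used in the talk, and the time step must respect the usual advection/diffusion stability limits:

```python
import numpy as np

def burgers_step(phi, dx, dt, nu0=0.01, nu1=0.001):
    """One explicit step of d(phi)/dt + 1/2 d(phi^2)/dx = d/dx( nu(phi) d(phi)/dx )."""
    dphidx = np.gradient(phi, dx)
    nu = nu0 + nu1 * dphidx**2                         # nonlinear viscosity (52)
    adv = np.gradient(0.5 * phi**2, dx)                # (1/2) d(phi^2)/dx
    diff = np.gradient(nu * dphidx, dx)                # d/dx ( nu(phi) d(phi)/dx )
    phi_new = phi + dt * (diff - adv)
    phi_new[0], phi_new[-1] = phi_new[1], phi_new[-2]  # Neumann conditions (51)
    return phi_new

x = np.linspace(0.0, 1.0, 101)
phi = 0.5 * np.cos(2 * np.pi * x)                      # example initial condition u
for _ in range(2000):
    phi = burgers_step(phi, dx=x[1] - x[0], dt=1e-4)
```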

Numerical Example: Case A (figure: the field ϕ)

Numerical Example: Case B (figure: the field ϕ)

Numerical Example: Squared Riemann Distance

µ(A,B) = ( Σ_{i=1}^{M} ln² γ_i )^{1/2}

Case   µ²(V_3, V̂)   µ²(V_2, V̂)   µ²(V_1, V̂)   µ²(V^e_3, V̂)   µ²(V^e_2, V̂)   µ²(V^e_1, V̂)
A1     3.817         3.058         4.738         2.250           1.418           1.151
A2     17.89         18.06         21.50         2.535           1.778           1.602
A5     5.832         5.070         5.778         3.710           2.886           2.564
A7     20.21         19.76         22.24         4.290           3.508           3.383
A8     1.133         0.585         1.419         1.108           0.466           0.246
A9     20.18         20.65         24.52         2.191           1.986           1.976
A10    10.01         8.521         8.411         3.200           2.437           2.428
B1     7.271         6.452         6.785         2.852           1.835           1.476
B2     16.42         14.89         14.70         15.61           14.11           13.77
B3     9.937         10.70         17.70         4.125           3.636           3.385
B4     6.223         5.353         11.50         2.600           1.773           1.580
B5     10.73         9.515         9.875         4.752           3.178           2.530
B6     6.184         4.153         8.621         4.874           2.479           1.858
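
Assuming γ_i denotes the generalized eigenvalues of the pair of covariance matrices being compared (the usual reading of this Riemann metric on symmetric positive definite matrices), the distance can be computed as follows:

```python
import numpy as np
from scipy.linalg import eigh

def riemann_distance(A, B):
    """mu(A, B) = (sum_i ln^2 gamma_i)^{1/2}, with gamma_i the generalized
    eigenvalues of A v = gamma B v; A and B must be symmetric positive definite."""
    gammas = eigh(A, B, eigvals_only=True)
    return np.sqrt(np.sum(np.log(gammas) ** 2))
```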

The reference mean deviation σ̂(x) (related to V̂), cases A2 and A8 (figure: σ versus x for case A2 and case A8). The mean deviation vector σ is defined as σ(i) = V^{1/2}(i,i).

The reference mean deviation σ̂(x) (related to V̂), cases B6 and B9 (figure: σ versus x for case B6 and case B9).

Absolute errors in the correlation matrix: ε_3, ε^e_3, ε^e_1, case A2 (figure: panels a, b, c).

Absolute errors in the correlation matrix: ε_3, ε^e_3, ε^e_1, case A8 (figure: panels a, b, c).

Absolute errors in the correlation matrix: ε_3, ε^e_3, ε^e_1, case B6 (figure: panels a, b, c).

Conclusion

The variational and Bayesian approaches to DA lead to the minimization of the same cost function. For error estimation, however, we obtain two different concepts: the analysis error covariance for the variational approach and the posterior covariance for the Bayesian approach. In the linear case the two covariances coincide; in the nonlinear case strong discrepancies can occur. Algorithms for the estimation and approximation of these covariances are proposed.

Gejadze, I., Shutyaev, V., Le Dimet, F.-X. Analysis error covariance versus posterior covariance in variational data assimilation problems. Q. J. R. Meteorol. Soc. (2013).