An introduction to data assimilation. Eric Blayo, University of Grenoble and INRIA.
Data assimilation, the science of compromises

Context: characterizing a (complex) system and/or forecasting its evolution, given several heterogeneous and uncertain sources of information. Data assimilation is widely used for geophysical fluids (meteorology, oceanography, atmospheric chemistry...), but also in numerous other domains (e.g. nuclear energy, medicine, agriculture planning...). It is closely linked to inverse methods, control theory, estimation theory, filtering...

E. Blayo - An introduction to data assimilation - Formation Modélisation ECCOREV
A daily example: numerical weather forecast
Data assimilation, the science of compromises

Numerous possible aims:
- Forecast: estimation of the present state (initial condition)
- Model tuning: parameter estimation
- Inverse modeling: estimation of parameter fields
- Data analysis: re-analysis (model = interpolation operator)
- OSSE: optimization of observing systems
- ...
Objectives for this lecture
- introduce data assimilation from several points of view
- give an overview of the main families of methods
- point out the main difficulties and the current corresponding answers

Outline
1. Data assimilation for dummies: a simple model problem
2. Generalization: linear estimation theory, variational and sequential approaches
3. Main current challenges
Some references
1. Blayo E. and M. Nodet, 2012: Introduction à l'assimilation de données variationnelle. Lecture notes for UJF Master course on data assimilation.
2. Bocquet M., 2014: Introduction aux principes et méthodes de l'assimilation de données en géophysique. Lecture notes for ENSTA - ENPC data assimilation course.
3. Bocquet M., E. Cosme and E. Blayo (Eds.), 2014: Advanced Data Assimilation for Geosciences. Oxford University Press.
4. Bouttier F. and P. Courtier, 1999: Data assimilation, concepts and methods. Meteorological training course lecture series, ECMWF, European Centre for Medium-Range Weather Forecasts, Reading, UK. notes/data ASSIMILATION/ASSIM CONCEPTS/Assim concepts21.html
5. Cohn S., 1997: An introduction to estimation theory. Journal of the Meteorological Society of Japan, 75.
6. Daley R., 1993: Atmospheric data analysis. Cambridge University Press.
7. Evensen G., 2009: Data assimilation, the ensemble Kalman filter. Springer.
8. Kalnay E., 2003: Atmospheric modeling, data assimilation and predictability. Cambridge University Press.
9. Lahoz W., B. Khattatov and R. Menard (Eds.), 2010: Data assimilation. Springer.
10. Rodgers C., 2000: Inverse methods for atmospheric sounding. World Scientific, Series on Atmospheric, Oceanic and Planetary Physics.
11. Tarantola A., 2005: Inverse problem theory and methods for model parameter estimation. SIAM.
A simple but fundamental example
Model problem: least squares approach

Two different available measurements of a single quantity. Which estimation for its true value? → least squares approach

Example: two observations $y_1 = 19°C$ and $y_2 = 21°C$ of the (unknown) present temperature $x$. Let
$J(x) = \frac{1}{2}\left[(x - y_1)^2 + (x - y_2)^2\right]$
$\min_x J(x) \Longrightarrow \hat{x} = \frac{y_1 + y_2}{2} = 20°C$
Model problem: least squares approach

Observation operator. If the units differ, e.g. $y_1 = 66.2°F$ and $y_2 = 69.8°F$: let $H(x) = \frac{9}{5}x + 32$ and
$J(x) = \frac{1}{2}\left[(H(x) - y_1)^2 + (H(x) - y_2)^2\right]$
$\min_x J(x) \Longrightarrow \hat{x} = 20°C$

Drawback #1: if observation units are inhomogeneous, e.g. $y_1 = 66.2°F$ and $y_2 = 21°C$:
$J(x) = \frac{1}{2}\left[(H(x) - y_1)^2 + (x - y_2)^2\right] \Longrightarrow \hat{x} \simeq 19.47°C$ !!

Drawback #2: if observation accuracies are inhomogeneous. If $y_1$ is twice as accurate as $y_2$, one should obtain $\hat{x} = \frac{2y_1 + y_2}{3} \simeq 19.67°C$, so $J$ should be
$J(x) = \frac{1}{2}\left[\left(\frac{x - y_1}{1/2}\right)^2 + \left(\frac{x - y_2}{1}\right)^2\right]$
Model problem: statistical approach

Reformulation in a probabilistic framework:
- the goal is to estimate a scalar value $x$
- $y_i$ is a realization of a random variable $Y_i$
- one is looking for an estimator (i.e. a r.v.) $\hat{X}$ that is:
  - linear: $\hat{X} = \alpha_1 Y_1 + \alpha_2 Y_2$ (in order to be simple)
  - unbiased: $E(\hat{X}) = x$ (it seems reasonable)
  - of minimal variance: $\mathrm{Var}(\hat{X})$ minimum (optimal accuracy)
→ BLUE (Best Linear Unbiased Estimator)
Model problem: statistical approach

Let $Y_i = x + \varepsilon_i$, with the hypotheses:
- $E(\varepsilon_i) = 0$ $(i = 1, 2)$: unbiased measurement devices
- $\mathrm{Var}(\varepsilon_i) = \sigma_i^2$ $(i = 1, 2)$: known accuracies
- $\mathrm{Cov}(\varepsilon_1, \varepsilon_2) = 0$: independent measurement errors

Then, since $\hat{X} = \alpha_1 Y_1 + \alpha_2 Y_2 = (\alpha_1 + \alpha_2)x + \alpha_1\varepsilon_1 + \alpha_2\varepsilon_2$:
$E(\hat{X}) = (\alpha_1 + \alpha_2)x + \alpha_1 E(\varepsilon_1) + \alpha_2 E(\varepsilon_2) = (\alpha_1 + \alpha_2)x \Longrightarrow \alpha_1 + \alpha_2 = 1$
$\mathrm{Var}(\hat{X}) = E\left[(\hat{X} - x)^2\right] = E\left[(\alpha_1\varepsilon_1 + \alpha_2\varepsilon_2)^2\right] = \alpha_1^2\sigma_1^2 + (1 - \alpha_1)^2\sigma_2^2$
$\frac{\partial \mathrm{Var}(\hat{X})}{\partial \alpha_1} = 0 \Longrightarrow \alpha_1 = \frac{\sigma_2^2}{\sigma_1^2 + \sigma_2^2}$
Model problem: statistical approach

In summary, the BLUE is
$\hat{X} = \dfrac{\frac{1}{\sigma_1^2} Y_1 + \frac{1}{\sigma_2^2} Y_2}{\frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2}}$
and its accuracy is
$\left[\mathrm{Var}(\hat{X})\right]^{-1} = \frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2}$ → accuracies are added

Remarks:
- The hypothesis $\mathrm{Cov}(\varepsilon_1, \varepsilon_2) = 0$ is not compulsory at all: $\mathrm{Cov}(\varepsilon_1, \varepsilon_2) = c \Longrightarrow \alpha_i = \frac{\sigma_j^2 - c}{\sigma_1^2 + \sigma_2^2 - 2c}$
- Statistical hypotheses on the first two moments of $\varepsilon_1, \varepsilon_2$ lead to statistical results on the first two moments of $\hat{X}$.
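The BLUE formula and the "accuracies are added" property can be illustrated with a small Monte Carlo sketch (the true value and error variances below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
x_true, s1, s2 = 20.0, 0.5, 1.0     # true temperature and error std devs
n = 200_000

# Simulated measurements Y_i = x + eps_i
y1 = x_true + rng.normal(0.0, s1, n)
y2 = x_true + rng.normal(0.0, s2, n)

# BLUE: inverse-variance weighted average
w1, w2 = 1.0 / s1**2, 1.0 / s2**2
x_hat = (w1 * y1 + w2 * y2) / (w1 + w2)

print(x_hat.mean())                 # ~20: unbiased
print(x_hat.var())                  # ~1/(w1 + w2) = 0.2: accuracies are added
```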
Model problem: statistical approach

Variational equivalence: this is equivalent to the problem
Minimize $J(x) = \frac{1}{2}\left[\frac{(x - y_1)^2}{\sigma_1^2} + \frac{(x - y_2)^2}{\sigma_2^2}\right]$

Remarks:
- This answers the previous problems of sensitivity to inhomogeneous units and insensitivity to inhomogeneous accuracies.
- This gives a rationale for choosing the norm defining $J$:
$J''(\hat{x}) = \underbrace{\frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2}}_{\text{convexity}} = \underbrace{\left[\mathrm{Var}(\hat{x})\right]^{-1}}_{\text{accuracy}}$
Model problem. Alternative formulation: background + observation

If one considers that $y_1$ is a prior (or background) estimate $x_b$ for $x$, and $y_2 = y$ is an independent observation, then:
$J(x) = \underbrace{\frac{(x - x_b)^2}{2\sigma_b^2}}_{J_b} + \underbrace{\frac{(x - y)^2}{2\sigma_o^2}}_{J_o}$
and
$\hat{x} = \dfrac{\frac{x_b}{\sigma_b^2} + \frac{y}{\sigma_o^2}}{\frac{1}{\sigma_b^2} + \frac{1}{\sigma_o^2}} = x_b + \underbrace{\frac{\sigma_b^2}{\sigma_b^2 + \sigma_o^2}}_{\text{gain}}\ \underbrace{(y - x_b)}_{\text{innovation}}$
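The two equivalent expressions of the analysis, precision-weighted average and background plus gain times innovation, can be checked on arbitrary numbers:

```python
import numpy as np

# Scalar background + observation analysis, in its two equivalent forms.
xb, sb2 = 19.0, 1.0       # background value and its error variance
y, so2 = 21.0, 0.25       # observation and its error variance

# Form 1: precision-weighted average
xhat1 = (xb / sb2 + y / so2) / (1.0 / sb2 + 1.0 / so2)

# Form 2: background plus gain times innovation
gain = sb2 / (sb2 + so2)
xhat2 = xb + gain * (y - xb)

print(xhat1, xhat2)       # the two forms agree (20.6 here, up to rounding)
```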
Model problem: interpretation

If the background error and the observation error are uncorrelated, $E(e_o e_b) = 0$, then one can show that the estimation error and the innovation are uncorrelated: $E(e_a (Y - X_b)) = 0$. This is an orthogonal projection for the scalar product $\langle Z_1, Z_2 \rangle = E(Z_1 Z_2)$.
Model problem: Bayesian approach

One can also consider $x$ as a realization of a r.v. $X$, and be interested in the pdf $p(X|Y)$. Several optimality criteria:
- minimum variance: $\hat{X}_{MV}$ such that the spread around it is minimal: $\hat{X}_{MV} = E(X|Y)$
- maximum a posteriori: the most probable value of $X$ given $Y$: $\hat{X}_{MAP}$ such that $\frac{\partial p(X|Y)}{\partial X} = 0$
- maximum likelihood: $\hat{X}_{ML}$ that maximizes $p(Y|X)$

These are based on the Bayes rule:
$P(X = x \,|\, Y = y) = \dfrac{P(Y = y \,|\, X = x)\, P(X = x)}{P(Y = y)}$
→ this requires additional hypotheses on the prior pdf for $X$ and for $Y|X$. In the Gaussian case, these estimations coincide with the BLUE.
Model problem: Bayesian approach

Back to our example: observations $y_1$ and $y_2$ of an unknown value $x$. The simplest approach: maximum likelihood (no prior on $X$).
Hypotheses:
- $Y_i \sim \mathcal{N}(X, \sigma_i^2)$: unbiased, known accuracies + known pdf
- $\mathrm{Cov}(Y_1, Y_2) = 0$: independent measurement errors
Likelihood function: $L(x) = dP(Y_1 = y_1 \text{ and } Y_2 = y_2 \,|\, X = x)$
One is looking for $\hat{x}_{ML} = \arg\max L(x)$: the maximum likelihood estimate.
Model problem: Bayesian approach

$L(x) = \prod_{i=1}^{2} dP(Y_i = y_i \,|\, X = x) = \prod_{i=1}^{2} \frac{1}{\sqrt{2\pi}\,\sigma_i}\, e^{-\frac{(y_i - x)^2}{2\sigma_i^2}}$
$\arg\max L(x) = \arg\min\left(-\ln L(x)\right) = \arg\min \frac{1}{2}\left[\frac{(x - y_1)^2}{\sigma_1^2} + \frac{(x - y_2)^2}{\sigma_2^2}\right]$
Hence
$\hat{x}_{ML} = \dfrac{\frac{y_1}{\sigma_1^2} + \frac{y_2}{\sigma_2^2}}{\frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2}}$
→ the BLUE again (because of the Gaussian hypothesis)
Model problem: synthesis

Data assimilation methods are often split into two families: variational methods and statistical methods.
- Variational methods: minimization of a cost function (least squares approach)
- Statistical methods: algebraic computation of the BLUE (with hypotheses on the first two moments), or approximation of pdfs (with hypotheses on the pdfs) and computation of the MAP estimator
There are strong links between those approaches, depending on the case (linear? Gaussian?).

Theorem: if you have understood the previous material, you have (almost) understood everything about data assimilation.
Generalization: variational approach
Generalization: arbitrary number of unknowns and observations

To be estimated: $x = (x_1, \ldots, x_n)^T \in \mathbb{R}^n$
Observations: $y = (y_1, \ldots, y_p)^T \in \mathbb{R}^p$
Observation operator: $y \equiv H(x)$, with $H: \mathbb{R}^n \to \mathbb{R}^p$
Generalization: arbitrary number of unknowns and observations

A simple example of observation operator: if $x = (x_1, x_2, x_3)^T$ and
$y = \begin{pmatrix} \text{an observation of } (x_1 + x_2)/2 \\ \text{an observation of } x_3 \end{pmatrix}$
then $H(x) = Hx$ with
$H = \begin{pmatrix} 1/2 & 1/2 & 0 \\ 0 & 0 & 1 \end{pmatrix}$
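In matrix form, this example amounts to the following sketch (the state values are arbitrary):

```python
import numpy as np

# Linear observation operator: y1 observes (x1 + x2)/2, y2 observes x3
H = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.0, 1.0]])

x = np.array([19.0, 21.0, 5.0])   # an arbitrary state vector (n = 3)
y_model = H @ x                    # model counterpart of the observations (p = 2)
print(y_model)                     # 20.0 and 5.0
```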
Reminder: norms and scalar products

For $u = (u_1, \ldots, u_n)^T \in \mathbb{R}^n$:
- Euclidean norm: $\|u\|^2 = u^T u = \sum_{i=1}^n u_i^2$, with associated scalar product $(u, v) = u^T v = \sum_{i=1}^n u_i v_i$
- Generalized norm: let $M$ be an $n \times n$ symmetric positive definite matrix. The M-norm is $\|u\|_M^2 = u^T M u = \sum_{i=1}^n \sum_{j=1}^n m_{ij} u_i u_j$, with associated scalar product $(u, v)_M = u^T M v = \sum_{i=1}^n \sum_{j=1}^n m_{ij} u_i v_j$
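A quick numerical illustration of the M-norm (the matrix and vector values are arbitrary):

```python
import numpy as np

# M-norm ||u||_M^2 = u^T M u, with M symmetric positive definite
M = np.array([[2.0, 0.5],
              [0.5, 1.0]])
u = np.array([1.0, 2.0])
v = np.array([3.0, -1.0])

norm_sq = u @ M @ u    # generalized squared norm: 8.0 here
dot_M = u @ M @ v      # associated scalar product (u, v)_M: 6.5 here
print(norm_sq, dot_M)
```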
Generalization: arbitrary number of unknowns and observations

Cost function: $J(x) = \frac{1}{2}\|H(x) - y\|^2$, with the norm $\|\cdot\|$ to be chosen.

Remark: an (intuitive) necessary (but not sufficient) condition for the existence of a unique minimum is $p \geq n$.
Formalism: background value + new observations

$z = \begin{pmatrix} x_b \\ y \end{pmatrix}$ ← background, new observations

The cost function becomes:
$J(x) = \underbrace{\frac{1}{2}\|x - x_b\|_b^2}_{J_b} + \underbrace{\frac{1}{2}\|H(x) - y\|_o^2}_{J_o}$
The necessary condition for the existence of a unique minimum ($p \geq n$) is then automatically fulfilled.
If the problem is time dependent

Observations are distributed in time: $y = y(t)$. The observation cost function becomes:
$J_o(x) = \frac{1}{2}\sum_{i=0}^{N} \|H_i(x(t_i)) - y(t_i)\|_o^2$
There is a model describing the evolution of $x$: $\frac{dx}{dt} = M(x)$, with $x(t = 0) = x_0$. Then $J$ is often no longer minimized w.r.t. $x$, but w.r.t. $x_0$ only, or w.r.t. some other parameters:
$J_o(x_0) = \frac{1}{2}\sum_{i=0}^{N} \|H_i(x(t_i)) - y(t_i)\|_o^2 = \frac{1}{2}\sum_{i=0}^{N} \|H_i(M_{0 \to t_i}(x_0)) - y(t_i)\|_o^2$
If the problem is time dependent

$J(x_0) = \underbrace{\frac{1}{2}\|x_0 - x_0^b\|_b^2}_{\text{background term } J_b} + \underbrace{\frac{1}{2}\sum_{i=0}^{N} \|H_i(x(t_i)) - y(t_i)\|_o^2}_{\text{observation term } J_o}$
Uniqueness of the minimum?

$J(x_0) = J_b(x_0) + J_o(x_0) = \frac{1}{2}\|x_0 - x_b\|_b^2 + \frac{1}{2}\sum_{i=0}^{N} \|H_i(M_{0 \to t_i}(x_0)) - y(t_i)\|_o^2$
If $H$ and $M$ are linear then $J_o$ is quadratic. However $J_o$ generally does not have a unique minimum, since the number of observations is generally less than the size of $x_0$ (the problem is underdetermined: $p < n$).
Example: let $(x_1^t, x_2^t) = (1, 1)$ and let $y = 1.1$ be an observation of $\frac{1}{2}(x_1 + x_2)$:
$J_o(x_1, x_2) = \frac{1}{2}\left(\frac{x_1 + x_2}{2} - 1.1\right)^2$
→ every $(x_1, x_2)$ with $x_1 + x_2 = 2.2$ minimizes $J_o$.
Uniqueness of the minimum?

Adding $J_b$ makes the problem of minimizing $J = J_o + J_b$ well posed.
Example: with the same observation, let $(x_1^b, x_2^b) = (0.9, 1.05)$:
$J(x_1, x_2) = \underbrace{\frac{1}{2}\left(\frac{x_1 + x_2}{2} - 1.1\right)^2}_{J_o} + \underbrace{\frac{1}{2}\left[(x_1 - 0.9)^2 + (x_2 - 1.05)^2\right]}_{J_b}$
which has a unique minimum.
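A sketch of this example: gradient descent on $J = J_o + J_b$ converges to a unique minimum, whereas $J_o$ alone would accept any point on the line $x_1 + x_2 = 2.2$:

```python
import numpy as np

# J = Jo + Jb for the 2-unknown, 1-observation example above.
def J(x):
    x1, x2 = x
    jo = 0.5 * ((x1 + x2) / 2.0 - 1.1) ** 2
    jb = 0.5 * ((x1 - 0.9) ** 2 + (x2 - 1.05) ** 2)
    return jo + jb

# Gradient of J, computed by hand
def gradJ(x):
    x1, x2 = x
    s = (x1 + x2) / 2.0 - 1.1
    return np.array([0.5 * s + (x1 - 0.9), 0.5 * s + (x2 - 1.05)])

# Fixed-step gradient descent (the quadratic J makes this converge)
x = np.zeros(2)
for _ in range(2000):
    x -= 0.1 * gradJ(x)
print(np.round(x, 3))   # the unique minimizer, approx [0.942 1.092]
```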
Uniqueness of the minimum?

If $H$ and/or $M$ are nonlinear then $J_o$ is no longer quadratic. Example: the Lorenz system (1963)
$\frac{dx}{dt} = \alpha(y - x)$
$\frac{dy}{dt} = \beta x - y - xz$
$\frac{dz}{dt} = -\gamma z + xy$
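A minimal integration of this system, using the classical chaotic parameter values $\alpha = 10$, $\beta = 28$, $\gamma = 8/3$ (an assumption; the lecture does not specify them). The sensitivity to the initial condition shown here is what makes the associated cost function so irregular:

```python
import numpy as np

# Lorenz (1963) right-hand side; the parameter values are the classical
# chaotic ones (an assumption, not given in the slides).
def lorenz_rhs(state, alpha=10.0, beta=28.0, gamma=8.0 / 3.0):
    x, y, z = state
    return np.array([alpha * (y - x),
                     beta * x - y - x * z,
                     -gamma * z + x * y])

def integrate(state, dt=0.01, nsteps=2000):
    # 4th-order Runge-Kutta time stepping up to t = nsteps * dt
    for _ in range(nsteps):
        k1 = lorenz_rhs(state)
        k2 = lorenz_rhs(state + 0.5 * dt * k1)
        k3 = lorenz_rhs(state + 0.5 * dt * k2)
        k4 = lorenz_rhs(state + dt * k3)
        state = state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return state

# Two nearly identical initial conditions end up far apart (chaos)
a = integrate(np.array([1.0, 1.0, 1.0]))
b = integrate(np.array([1.0, 1.0, 1.000001]))
print(np.linalg.norm(a - b))
```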
Uniqueness of the minimum?

If $H$ and/or $M$ are nonlinear then $J_o$ is no longer quadratic. Example: for the Lorenz system, the cost function
$J_o(y_0) = \frac{1}{2}\sum_{i=0}^{N} \left(x(t_i) - x^{obs}(t_i)\right)^2$
is minimized with respect to the initial condition $y_0$ only.
Uniqueness of the minimum?

If $H$ and/or $M$ are nonlinear then $J_o$ is no longer quadratic. Adding $J_b$ makes it more quadratic ($J_b$ is a regularization term), but $J = J_o + J_b$ may nevertheless have several local minima.
A fundamental remark before going into minimization aspects

Once $J$ is defined (i.e. once all the ingredients are chosen: control variables, norms, observations...), the problem is entirely defined, and hence so is its solution. The physical (i.e. the most important) part of data assimilation lies in the definition of $J$. The rest of the job, i.e. minimizing $J$, is only technical work.
Minimizing J

$J(x) = J_b(x) + J_o(x) = \frac{1}{2}(x - x_b)^T B^{-1}(x - x_b) + \frac{1}{2}(Hx - y)^T R^{-1}(Hx - y)$

Optimal estimation in the linear case:
$\hat{x} = x_b + \underbrace{(B^{-1} + H^T R^{-1} H)^{-1} H^T R^{-1}}_{\text{gain matrix}}\ \underbrace{(y - Hx_b)}_{\text{innovation vector}}$

Given the sizes $n$ and $p$, it is generally impossible to handle $H$, $B$ and $R$ explicitly, so the direct computation of the gain matrix is impossible. Hence, even in the linear case (for which we have an explicit expression for $\hat{x}$), the computation of $\hat{x}$ is performed using an optimization algorithm.
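For small $n$ and $p$ the gain matrix can of course be formed explicitly. The following sketch applies the formula above in a toy case ($H$, $B$, $R$ and the data are arbitrary illustrative choices):

```python
import numpy as np

# Explicit analysis x_hat = x_b + K (y - H x_b) in a toy case (n = 3, p = 2).
# In realistic applications n and p are far too large for this computation.
x_b = np.array([19.0, 21.0, 5.0])          # background
H = np.array([[0.5, 0.5, 0.0],             # observe (x1 + x2)/2 and x3
              [0.0, 0.0, 1.0]])
y = np.array([20.5, 4.8])                  # observations
B = np.diag([1.0, 1.0, 0.5])               # background error covariance
R = np.diag([0.25, 0.25])                  # observation error covariance

Binv, Rinv = np.linalg.inv(B), np.linalg.inv(R)
K = np.linalg.inv(Binv + H.T @ Rinv @ H) @ H.T @ Rinv   # gain matrix
x_hat = x_b + K @ (y - H @ x_b)                          # analysis

print(x_hat)
```

The analysis is also the minimizer of J, so its gradient $B^{-1}(\hat{x} - x_b) + H^T R^{-1}(H\hat{x} - y)$ vanishes there.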
Minimizing J: descent methods

Descent methods for minimizing the cost function require the knowledge of (an estimate of) its gradient:
$x_{k+1} = x_k + \alpha_k d_k$, with for instance
- $d_k = -\nabla J(x_k)$: gradient method
- $d_k = -\left[\mathrm{Hess}(J)(x_k)\right]^{-1} \nabla J(x_k)$: Newton method
- $d_k = -B_k \nabla J(x_k)$: quasi-Newton methods (BFGS, ...)
- $d_k = -\nabla J(x_k) + \frac{\|\nabla J(x_k)\|^2}{\|\nabla J(x_{k-1})\|^2}\, d_{k-1}$: conjugate gradient
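A minimal sketch of a descent method: steepest descent on a quadratic cost, where the optimal step length along $-\nabla J$ is available in closed form (the matrix and vector are arbitrary):

```python
import numpy as np

# Steepest descent on the quadratic cost J(x) = 1/2 x^T A x - b^T x,
# with the exact line search alpha_k = (g.g)/(g.A g) available here.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])          # symmetric positive definite Hessian
b = np.array([1.0, 1.0])

x = np.zeros(2)
for k in range(100):
    g = A @ x - b                   # gradient of J at x_k
    if np.linalg.norm(g) < 1e-12:
        break
    alpha = (g @ g) / (g @ (A @ g)) # optimal step along -g
    x = x - alpha * g

print(x, np.linalg.solve(A, b))     # both approx [0.2 0.4]
```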
Getting the gradient is not obvious

It is often difficult (or even impossible) to obtain the gradient through the computation of growth rates.
Example: for the model
$\frac{dx(t)}{dt} = M(x(t)), \; t \in [0, T], \quad x(t = 0) = u, \quad u = (u_1, \ldots, u_N)^T$
$J(u) = \frac{1}{2}\int_0^T \|x(t) - x^{obs}(t)\|^2\, dt$ → evaluating $J$ requires one model run
$\nabla J(u) = \begin{pmatrix} \frac{\partial J}{\partial u_1}(u) \\ \vdots \\ \frac{\partial J}{\partial u_N}(u) \end{pmatrix} \simeq \begin{pmatrix} \left[J(u + \alpha e_1) - J(u)\right]/\alpha \\ \vdots \\ \left[J(u + \alpha e_N) - J(u)\right]/\alpha \end{pmatrix}$ → $N + 1$ model runs
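The "N + 1 model runs" estimate of the gradient can be sketched as follows (the toy model and synthetic observations are illustrative assumptions):

```python
import numpy as np

# Finite-difference gradient: each partial derivative costs one extra model
# run, i.e. N + 1 runs in total for N control parameters.
def model_run(u, nsteps=50, dt=0.01):
    # Toy linear "model": dx/dt = -x, integrated with explicit Euler steps
    x = u.copy()
    traj = [x.copy()]
    for _ in range(nsteps):
        x = x * (1.0 - dt)
        traj.append(x.copy())
    return np.array(traj)

u_true = np.array([1.0, 2.0, 3.0])
x_obs = model_run(u_true)              # synthetic observations

def J(u):
    return 0.5 * np.sum((model_run(u) - x_obs) ** 2)

def fd_gradient(J, u, alpha=1e-6):
    J0 = J(u)                          # 1 model run
    g = np.zeros_like(u)
    for i in range(u.size):            # + N model runs
        du = np.zeros_like(u)
        du[i] = alpha
        g[i] = (J(u + du) - J0) / alpha
    return g

u = np.array([1.5, 1.5, 1.5])
g = fd_gradient(J, u)
print(g)
```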
Getting the gradient is not obvious

In actual applications like meteorology / oceanography, $N = \dim(u)$ is very large, so this method cannot be used. Alternatively, the adjoint method provides a very efficient way to compute $\nabla J$.
Example: an adjoint for the Burgers equation

$\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} - \nu\frac{\partial^2 u}{\partial x^2} = f, \quad x \in ]0, L[, \; t \in [0, T]$
$u(0, t) = \psi_1(t), \quad u(L, t) = \psi_2(t), \quad t \in [0, T]$
$u(x, 0) = u_0(x), \quad x \in [0, L]$
with $u^{obs}(x, t)$ an observation of $u(x, t)$.
Cost function: $J(u_0) = \frac{1}{2}\int_0^T\!\!\int_0^L \left(u(x, t) - u^{obs}(x, t)\right)^2 dx\, dt$

Adjoint model:
$\frac{\partial p}{\partial t} + u\frac{\partial p}{\partial x} + \nu\frac{\partial^2 p}{\partial x^2} = u - u^{obs}, \quad x \in ]0, L[, \; t \in [0, T]$
$p(0, t) = 0, \quad p(L, t) = 0, \quad t \in [0, T]$
$p(x, T) = 0, \quad x \in [0, L]$ ← final condition!! → backward integration
Gradient of $J$: $\nabla J = p(\cdot, 0)$, a function of $x$.
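The adjoint mechanics (one forward run, one backward run, exact gradient) can be illustrated on a much simpler discrete linear model. This is a sketch of the general technique, not the Burgers adjoint itself:

```python
import numpy as np

# Adjoint gradient for a toy discrete linear model x_{k+1} = M x_k with
# J(x0) = 1/2 * sum_k ||x_k - y_k||^2. One forward run plus one backward
# run give the exact gradient, whatever the dimension of x0.
M = np.array([[0.9, 0.1],
              [0.0, 0.8]])
N = 10
rng = np.random.default_rng(1)
y = rng.normal(size=(N + 1, 2))          # synthetic observations y_0..y_N

def forward(x0):
    xs = [x0]
    for _ in range(N):
        xs.append(M @ xs[-1])
    return np.array(xs)

def J(x0):
    return 0.5 * np.sum((forward(x0) - y) ** 2)

def adjoint_gradient(x0):
    xs = forward(x0)                     # forward run (store trajectory)
    p = np.zeros(2)                      # adjoint variable, final condition p = 0
    for k in range(N, -1, -1):           # backward integration in time
        p = M.T @ p + (xs[k] - y[k])
    return p                             # adjoint at initial time = gradient of J

x0 = np.array([1.0, -1.0])
g_adj = adjoint_gradient(x0)

# Check against central finite differences
eps = 1e-6
g_fd = np.array([(J(x0 + eps * e) - J(x0 - eps * e)) / (2 * eps)
                 for e in np.eye(2)])
print(np.allclose(g_adj, g_fd, atol=1e-5))   # True
```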
Getting the gradient is not obvious

The adjoint method requires writing a tangent linear code and an adjoint code (beyond the scope of this lecture). This task:
- obeys systematic rules
- is not the most interesting task you can imagine
- can be helped by automatic differentiation software
On the contrary, do not forget that, if the size of the control variable is very small (< 10), $\nabla J$ can easily be estimated by the computation of growth rates.
Generalization: statistical approach
Generalization: statistical approach

To be estimated: $x = (x_1, \ldots, x_n)^T \in \mathbb{R}^n$. Observations: $y = (y_1, \ldots, y_p)^T \in \mathbb{R}^p$. Observation operator: $y \equiv H(x)$, with $H: \mathbb{R}^n \to \mathbb{R}^p$.

Statistical framework: $y$ is a realization of a random vector $Y$. One is looking for the BLUE, i.e. a r.v. $\hat{X}$ that is:
- linear: $\hat{X} = AY$, with $\mathrm{size}(A) = (n, p)$
- unbiased: $E(\hat{X}) = x$
- of minimal variance: $\mathrm{Var}(\hat{X}) = \sum_{i=1}^n \mathrm{Var}(\hat{X}_i) = \mathrm{Tr}(\mathrm{Cov}(\hat{X}))$ minimum
Generalization: statistical approach

Hypotheses:
- Linear observation operator: $H(x) = Hx$; let $Y = Hx + \varepsilon$, with $\varepsilon$ a random vector in $\mathbb{R}^p$
- $E(\varepsilon) = 0$: unbiased measurement devices
- $\mathrm{Cov}(\varepsilon) = E(\varepsilon\varepsilon^T) = R$: known accuracies and covariances

BLUE: $\hat{X} = (H^T R^{-1} H)^{-1} H^T R^{-1} Y$, with $\mathrm{Cov}(\hat{X}) = (H^T R^{-1} H)^{-1}$

Reminder, the variational approach in the linear case:
$J_o(x) = \frac{1}{2}\|Hx - y\|_o^2 = \frac{1}{2}(Hx - y)^T R^{-1}(Hx - y)$
$\min_{x \in \mathbb{R}^n} J_o(x) \Longleftrightarrow \hat{x} = (H^T R^{-1} H)^{-1} H^T R^{-1} y$
Formalism: background value + new observations

Background: $X_b = x + \varepsilon_b$; new observations: $Y = Hx + \varepsilon_o$.
Hypotheses:
- $E(\varepsilon_b) = 0$: unbiased background
- $E(\varepsilon_o) = 0$: unbiased measurement devices
- $\mathrm{Cov}(\varepsilon_b, \varepsilon_o) = 0$: independent background and observation errors
- $\mathrm{Cov}(\varepsilon_b) = B$ and $\mathrm{Cov}(\varepsilon_o) = R$: known accuracies and covariances

BLUE:
$\hat{X} = X_b + \underbrace{(B^{-1} + H^T R^{-1} H)^{-1} H^T R^{-1}}_{\text{gain matrix}}\ \underbrace{(Y - HX_b)}_{\text{innovation vector}}$
with $\left[\mathrm{Cov}(\hat{X})\right]^{-1} = B^{-1} + H^T R^{-1} H$ → accuracies are added
Link with the variational approach

Statistical approach (BLUE):
$\hat{X} = X_b + (B^{-1} + H^T R^{-1} H)^{-1} H^T R^{-1} (Y - HX_b)$, with $\mathrm{Cov}(\hat{X}) = (B^{-1} + H^T R^{-1} H)^{-1}$

Variational approach in the linear case:
$J(x) = \frac{1}{2}\|x - x_b\|_b^2 + \frac{1}{2}\|H(x) - y\|_o^2 = \frac{1}{2}(x - x_b)^T B^{-1}(x - x_b) + \frac{1}{2}(Hx - y)^T R^{-1}(Hx - y)$
$\min_{x \in \mathbb{R}^n} J(x) \Longleftrightarrow \hat{x} = x_b + (B^{-1} + H^T R^{-1} H)^{-1} H^T R^{-1}(y - Hx_b)$

Same remarks as previously:
- The statistical approach rationalizes the choice of the norms for $J_o$ and $J_b$ in the variational approach.
- $\left[\mathrm{Cov}(\hat{X})\right]^{-1} = \underbrace{B^{-1} + H^T R^{-1} H}_{\text{accuracy}} = \underbrace{\mathrm{Hess}(J)}_{\text{convexity}}$
If the problem is time dependent

Dynamical system: $x^t(t_{k+1}) = M(t_k, t_{k+1})\, x^t(t_k) + e(t_k)$, where
- $x^t(t_k)$ is the true state at time $t_k$
- $M(t_k, t_{k+1})$ is the model, assumed linear, between $t_k$ and $t_{k+1}$
- $e(t_k)$ is the model error at time $t_k$
At every observation time $t_k$, we have an observation $y_k$ and a model forecast $x^f(t_k)$. The BLUE can be applied.
If the problem is time dependent

Hypotheses:
- $e(t_k)$ is unbiased, with covariance matrix $Q_k$
- $e(t_k)$ and $e(t_l)$ are independent ($k \neq l$)
- the observation $y_k$ is unbiased, with error covariance matrix $R_k$
- $e(t_k)$ and the analysis error $x^a(t_k) - x^t(t_k)$ are independent
If the problem is time dependent: the Kalman filter (Kalman and Bucy, 1961)

Initialization:
$x^a(t_0) = x_0$: approximate initial state; $P^a(t_0) = P_0$: error covariance matrix

Step $k$ (prediction - correction, or forecast - analysis):
$x^f(t_{k+1}) = M(t_k, t_{k+1})\, x^a(t_k)$ (forecast)
$P^f(t_{k+1}) = M(t_k, t_{k+1}) P^a(t_k) M^T(t_k, t_{k+1}) + Q_k$
$x^a(t_{k+1}) = x^f(t_{k+1}) + K_{k+1}\left[y_{k+1} - H_{k+1} x^f(t_{k+1})\right]$ (BLUE)
$K_{k+1} = P^f(t_{k+1}) H^T_{k+1}\left[H_{k+1} P^f(t_{k+1}) H^T_{k+1} + R_{k+1}\right]^{-1}$
$P^a(t_{k+1}) = P^f(t_{k+1}) - K_{k+1} H_{k+1} P^f(t_{k+1})$
where the exponents $f$ and $a$ stand respectively for forecast and analysis.
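A minimal sketch of this forecast/analysis cycle, for a scalar random walk observed directly ($M = 1$, $H = 1$; the variances are arbitrary):

```python
import numpy as np

# Scalar Kalman filter: random walk truth, direct noisy observations.
rng = np.random.default_rng(2)
N, q, r = 100, 0.01, 0.5           # steps, model error var Q, obs error var R

# Synthetic truth and observations
x_true = np.cumsum(rng.normal(0.0, np.sqrt(q), N))
y = x_true + rng.normal(0.0, np.sqrt(r), N)

xa, Pa = 0.0, 1.0                  # initialization: x^a(t0), P^a(t0)
errors = []
for k in range(N):
    # Forecast step
    xf = xa                        # M = 1
    Pf = Pa + q                    # M P^a M^T + Q
    # Analysis step (BLUE)
    K = Pf / (Pf + r)              # gain: P^f H^T (H P^f H^T + R)^-1
    xa = xf + K * (y[k] - xf)      # correction by the innovation
    Pa = Pf - K * Pf
    errors.append(abs(xa - x_true[k]))

print(np.mean(errors), np.sqrt(Pa))  # mean error consistent with analysis std
```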
If the problem is time dependent

Equivalence with the variational approach: if $H_k$ and $M(t_k, t_{k+1})$ are linear, and if the model is perfect ($e_k = 0$), then the Kalman filter and the variational method minimizing
$J(x) = \frac{1}{2}(x - x_0)^T P_0^{-1}(x - x_0) + \frac{1}{2}\sum_{k=0}^{N}\left(H_k M(t_0, t_k)x - y_k\right)^T R_k^{-1}\left(H_k M(t_0, t_k)x - y_k\right)$
lead to the same solution at $t = t_N$.
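This equivalence can be checked numerically in the scalar linear, perfect-model case (the model coefficient, variances and observations below are arbitrary choices):

```python
import numpy as np

# 4D-Var / Kalman filter equivalence, scalar linear perfect-model case.
m, p0, r, N = 0.9, 2.0, 0.5, 5
x0b = 1.0
rng = np.random.default_rng(3)
y = rng.normal(0.0, 1.0, N + 1)        # observations y_0..y_N

# 4D-Var: J(x0) = (x0 - x0b)^2/(2 p0) + sum_k (m^k x0 - y_k)^2/(2 r);
# this quadratic problem has the closed-form minimizer below.
a = 1.0 / p0 + sum(m ** (2 * k) / r for k in range(N + 1))
b = x0b / p0 + sum(m ** k * y[k] / r for k in range(N + 1))
x0_hat = b / a
x_4dvar_tN = m ** N * x0_hat           # propagate the optimal x0 to t_N

# Kalman filter with Q = 0, assimilating y_0 at t_0 then y_1..y_N
xa, Pa = x0b, p0
for k in range(N + 1):
    if k > 0:                          # forecast step (perfect model)
        xa, Pa = m * xa, m * m * Pa
    K = Pa / (Pa + r)                  # analysis step (BLUE)
    xa = xa + K * (y[k] - xa)
    Pa = Pa - K * Pa

print(np.isclose(x_4dvar_tN, xa))      # True: same solution at t = t_N
```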
In summary
In summary

Variational approach:
- least squares minimization (non-dimensional terms)
- no particular hypothesis
- works either for stationary or time dependent problems
- if $M$ and $H$ are linear, the cost function is quadratic: a unique solution if $p \geq n$; adding a background term ensures this property
- if things are nonlinear, the approach is still valid, with possibly several minima

Statistical approach:
- hypotheses on the first two moments
- time independent + $H$ linear + $p \geq n$: BLUE (first two moments)
- time dependent + $M$ and $H$ linear: Kalman filter (based on the BLUE)
- hypotheses on the pdfs: Bayesian approach (pdf) + ML or MAP estimator
In summary

The statistical approach gives a rationale for the choice of the norms, and gives an estimation of the uncertainty.
- Time independent problems: if $H$ is linear, the variational and the statistical approaches lead to the same solution (provided $\|\cdot\|_b$ is based on $B^{-1}$ and $\|\cdot\|_o$ is based on $R^{-1}$).
- Time dependent problems: if $H$ and $M$ are linear, and if the model is perfect, both approaches (4D-Var and the Kalman filter) lead to the same solution at final time.
To go further...
Common main methodological difficulties

- Nonlinearities: $J$ non-quadratic / what about the Kalman filter?
- Huge dimensions of $x$: minimization of $J$ / management of huge matrices
- Poorly known error statistics: choice of the norms / $B$, $R$, $Q$
- Scientific computing issues (data management, code efficiency, parallelization...)
In short

Variational methods:
- a series of approximations of the cost function, corresponding to a series of methods: 4D-Var, incremental 4D-Var, 3D-FGAT, 3D-Var
- the more sophisticated ones (4D-Var, incremental 4D-Var) require the tangent linear and adjoint models (the development of which is a real investment)

Statistical methods:
- the extended Kalman filter handles (weakly) nonlinear problems (it requires the tangent linear model)
- reduced order Kalman filters address huge dimension problems
- a quite efficient family addressing both problems: ensemble Kalman filters (EnKF); these are so-called Gaussian filters
- particle filters: currently being developed; a fully Bayesian approach, still limited to low dimension problems
78 Some present research directions
- new methods: less expensive, more robust w.r.t. nonlinearities and/or non-Gaussianity (particle filters, En4DVar, BFN...)
- better management of errors (prior statistics, identification, a posteriori validation...)
- complex observations (images, Lagrangian data...)
- new application domains (often leading to new methodological questions)
- definition of observing systems, sensitivity analysis...
79 Two announcements
- CNA 2014: 5ème Colloque National d'Assimilation de données, Toulouse, December 1-3, 2014
- Doctoral course "Introduction to data assimilation", Grenoble, January 5-9, 2015