M.Sc. in Meteorology. Numerical Weather Prediction
M.Sc. in Meteorology UCD

Numerical Weather Prediction

Prof Peter Lynch
Meteorology & Climate Centre
School of Mathematical Sciences
University College Dublin

Second Semester.
Text for the Course

The lectures will be based closely on the text Atmospheric Modeling, Data Assimilation and Predictability by Eugenia Kalnay, published by Cambridge University Press (2002).
Data Assimilation (Kalnay, Ch. 5)

NWP is an initial/boundary value problem: given an estimate of the present state of the atmosphere (initial conditions) and appropriate surface and lateral boundary conditions, the model simulates or forecasts the evolution of the atmosphere. The more accurate the estimate of the initial conditions, the better the quality of the forecasts. Operational NWP centers produce initial conditions through a statistical combination of observations and short-range forecasts. This approach is called data assimilation.
Insufficiency of Data Coverage

Modern primitive-equation models have a number of degrees of freedom of the order of 10^7. For a time window of ±3 hours, there are typically 10 to 100 thousand observations of the atmosphere, two orders of magnitude fewer than the number of degrees of freedom of the model. Moreover, they are distributed nonuniformly in space and time. It is necessary to use additional information, called the background field, first guess, or prior information. A short-range forecast is used as the first guess in operational data assimilation systems. Present-day operational systems typically use a 6-h cycle performed four times a day.
Typical 6-hour analysis cycle.
Suppose the background field is a model 6-h forecast: x_b. To obtain the background or first-guess observations, the model forecast is interpolated to the observation locations. If the observed quantities are not the same as the model variables, the model variables are converted to the observed variables y_o. The first guess of the observations is denoted H(x_b), where H is called the observation operator. The difference between the observations and the background, y_o − H(x_b), is called the observational increment or innovation.
The analysis x_a is obtained by adding the innovations to the background field with weights W that are determined from the estimated statistical error covariances of the forecast and the observations:

x_a = x_b + W [y_o − H(x_b)]

Different analysis schemes (SCM, OI, 3D-Var, and KF) are based on this equation, but differ in the approach taken to combine the background and the observations to produce the analysis. Earlier methods such as the SCM used weights determined empirically: the weights were a function of the distance between the observation and the grid point, and the analysis was iterated several times.
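As a numerical sketch of this update equation, the following toy example applies x_a = x_b + W [y_o − H(x_b)] on a three-point grid. The interpolation operator H, the gain matrix W, and all numbers are invented for illustration; in a real system W would be derived from the error statistics.

```python
import numpy as np

# Background state: temperatures (K) at 3 grid points, e.g. a 6-h forecast
x_b = np.array([280.0, 282.0, 284.0])

# Observation operator H: two observations, each a linear interpolation
# between neighbouring grid points (an assumed, illustrative H)
H = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5]])

# Observed values y_o and an assumed weight (gain) matrix W
y_o = np.array([281.5, 282.5])
W = np.array([[0.3, 0.0],
              [0.2, 0.2],
              [0.0, 0.3]])

# Innovation (observational increment): y_o - H(x_b)
d = y_o - H @ x_b

# Analysis: x_a = x_b + W [y_o - H(x_b)]
x_a = x_b + W @ d
```

Each grid point is nudged toward the nearby observations by an amount set by the corresponding entries of W.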
In Optimal Interpolation (OI), the matrix of weights W is determined from the minimization of the analysis errors at each grid point. In the 3D-Var approach one defines a cost function proportional to the square of the distance between the analysis and both the background and the observations. This cost function is minimized to obtain the analysis. Lorenc (1986) showed that OI and the 3D-Var approach are equivalent if the cost function is defined as

J(x) = (1/2) { [y_o − H(x)]^T R^{−1} [y_o − H(x)] + (x − x_b)^T B^{−1} (x − x_b) }

The cost function J measures the distance of a field x to the observations (first term) and the distance to the background x_b (second term).
The distances are scaled by the observation error covariance R and by the background error covariance B, respectively. The minimum of the cost function is obtained for x = x_a, which is defined as the analysis. The analysis obtained by OI and 3D-Var is the same if the weight matrix is given by

W = B H^T (H B H^T + R)^{−1}

The difference between OI and the 3D-Var approach is in the method of solution: in OI, the weights W are obtained for each grid point or grid volume, using suitable simplifications; in 3D-Var, the minimization of J is performed directly, allowing for additional flexibility and the simultaneous global use of the data.
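The equivalence of the OI weight matrix and the 3D-Var minimizer can be checked numerically for a linear H. The sketch below builds random, purely illustrative covariances B and R and compares the analysis from W = B H^T (H B H^T + R)^{−1} with the minimizer of J, which for linear H satisfies (B^{−1} + H^T R^{−1} H)(x − x_b) = H^T R^{−1} [y_o − H x_b]:

```python
import numpy as np

rng = np.random.default_rng(0)

n, p = 4, 2                       # state size, number of observations
H = rng.standard_normal((p, n))   # linear observation operator (illustrative)
x_b = rng.standard_normal(n)      # background state
y_o = rng.standard_normal(p)      # observations

# Illustrative symmetric positive-definite covariances B and R
A = rng.standard_normal((n, n)); B = A @ A.T + n * np.eye(n)
C = rng.standard_normal((p, p)); R = C @ C.T + p * np.eye(p)

# OI form: x_a = x_b + W [y_o - H x_b] with W = B H^T (H B H^T + R)^{-1}
W = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
x_a_oi = x_b + W @ (y_o - H @ x_b)

# 3D-Var form: solve the normal equations of the cost function J
Binv, Rinv = np.linalg.inv(B), np.linalg.inv(R)
x_a_var = x_b + np.linalg.solve(Binv + H.T @ Rinv @ H,
                                H.T @ Rinv @ (y_o - H @ x_b))
```

The two analyses agree to round-off, which is the Sherman-Morrison-Woodbury identity in action.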
Schematic of the size of matrix operations in OI and 3D-Var.
Recently, the variational approach has been extended to four dimensions, by including within the cost function the distance to observations over a time interval (the assimilation window). This is called four-dimensional variational assimilation (4D-Var). In the analysis cycle, the importance of the model cannot be overemphasized: it transports information from data-rich to data-poor regions, and it provides a complete estimate of the four-dimensional state of the atmosphere. The introduction of 4D-Var at ECMWF has resulted in marked improvements in the quality of medium-range forecasts.

End of Introduction
Background Field

For operational models, it is not enough to perform spatial interpolation of observations onto regular grids: there are not enough data available to define the initial state. The number of degrees of freedom in a modern NWP model is of the order of 10^7, while the total number of conventional observations is of the order of 10^4 to 10^5. There are many new types of data, such as satellite and radar observations, but they do not measure the variables used in the models, and their distribution in space and time is very nonuniform.
Background Field

In addition to observations, it is necessary to use a first-guess estimate of the state of the atmosphere at the grid points. The first guess (also known as the background field or prior information) is our best estimate of the state of the atmosphere prior to the use of the observations. A short-range forecast is normally used as the first guess in operational systems, in what is called an analysis cycle. If a forecast is unavailable (e.g., if the cycle is broken), we may have to use climatological fields, but they are normally a poor estimate of the initial state.
Global 6-h analysis cycle (00, 06, 12, and 18 UTC).
Regional analysis cycle, performed (perhaps) every hour.
Intermittent data assimilation is used in most global operational systems, typically with a 6-h cycle performed four times a day. The model forecast plays a very important role: over data-rich regions, the analysis is dominated by the information contained in the observations; in data-poor regions, the forecast benefits from the information upstream. For example, 6-h forecasts over the North Atlantic Ocean are relatively good because of the information coming from North America. The model is able to transport information from data-rich to data-poor areas.
Least Squares Method (Kalnay, 5.3)

We start with a toy model example, the two-temperatures problem. We use two methods to solve it, a sequential and a variational approach, and find that they are equivalent: they yield identical results. The problem is important because the methodology and results carry over to multivariate OI, Kalman filtering, and 3D-Var and 4D-Var assimilation. If you fully understand the toy model, you should find the more realistic applications straightforward.
Statistical Estimation

Introduction. Each of you: guess the temperature in this room right now. How can we get a best estimate of the temperature? The best estimate of the state of the atmosphere is obtained by combining prior information about the atmosphere (background or first guess) with observations. In order to combine them optimally, we also need statistical information about the errors in these pieces of information. As an introduction to statistical estimation, we consider a simple problem that we call the two-temperatures problem: given two independent observations T_1 and T_2, determine the best estimate of the true temperature T_t.
Simple (Toy) Example

Let the two observations of temperature be

T_1 = T_t + ε_1
T_2 = T_t + ε_2

[For example, we might have two iffy thermometers.] The observations have errors ε_i, which we don't know. Let E(·) represent the expected value, i.e., the average of many similar measurements. We assume that the measurements T_1 and T_2 are unbiased:

E(T_1 − T_t) = 0,   E(T_2 − T_t) = 0

or equivalently,

E(ε_1) = E(ε_2) = 0
We also assume that we know the variances of the observational errors:

E(ε_1^2) = σ_1^2,   E(ε_2^2) = σ_2^2

We next assume that the errors of the two measurements are uncorrelated:

E(ε_1 ε_2) = 0

This implies, for example, that there is no systematic tendency for one thermometer to read high (ε_1 > 0) when the other is high (ε_2 > 0). The above equations represent the statistical information that we need about the actual observations.
We estimate T_t as a linear combination of the observations:

T_a = a_1 T_1 + a_2 T_2

The analysis T_a should be unbiased: E(T_a) = E(T_t). This implies a_1 + a_2 = 1. T_a will be the best estimate of T_t if the coefficients are chosen to minimize the mean squared error of T_a:

σ_a^2 = E[(T_a − T_t)^2] = E{ [a_1 (T_1 − T_t) + a_2 (T_2 − T_t)]^2 }

subject to the constraint a_1 + a_2 = 1. This may be written

σ_a^2 = E[(a_1 ε_1 + a_2 ε_2)^2]
Expanding this expression for σ_a^2, and using E(ε_1 ε_2) = 0, we get

σ_a^2 = a_1^2 σ_1^2 + a_2^2 σ_2^2

To minimize σ_a^2 w.r.t. a_1, we require ∂σ_a^2/∂a_1 = 0. Naïve solution: ∂σ_a^2/∂a_1 = 2 a_1 σ_1^2 = 0, so a_1 = 0. Similarly, ∂σ_a^2/∂a_2 = 0 implies a_2 = 0. We have forgotten the constraint a_1 + a_2 = 1, so a_1 and a_2 are not independent. Substituting a_2 = 1 − a_1, we get

σ_a^2 = a_1^2 σ_1^2 + (1 − a_1)^2 σ_2^2

Equating the derivative w.r.t. a_1 to zero, ∂σ_a^2/∂a_1 = 0, gives

a_1 = σ_2^2 / (σ_1^2 + σ_2^2),   a_2 = σ_1^2 / (σ_1^2 + σ_2^2)

Thus, we have expressions for the weights a_1 and a_2 in terms of the variances (which are assumed to be known).
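These optimal weights can be checked by brute force: with a_2 = 1 − a_1 enforcing the constraint, scan a_1 over [0, 1] and locate the minimum of σ_a^2. The variances below are illustrative choices, not from the text:

```python
import numpy as np

sigma1_sq, sigma2_sq = 4.0, 1.0   # illustrative error variances

# Analytic optimal weights from the constrained minimization
a1 = sigma2_sq / (sigma1_sq + sigma2_sq)
a2 = sigma1_sq / (sigma1_sq + sigma2_sq)

# Brute-force check: evaluate sigma_a^2(a1) on a fine grid, a2 = 1 - a1
grid = np.linspace(0.0, 1.0, 100001)
var_a = grid**2 * sigma1_sq + (1.0 - grid)**2 * sigma2_sq
a1_num = grid[np.argmin(var_a)]
```

The scanned minimum lands on the analytic weight, confirming that the less accurate observation (larger variance) gets the smaller weight.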
We define the precision to be the inverse of the variance. It is a measure of the accuracy of the observations. Note: the term precision, while a good one, does not have universal currency, so it should be defined when used. Substituting the optimal coefficients into σ_a^2 = a_1^2 σ_1^2 + a_2^2 σ_2^2, we obtain

σ_a^2 = σ_1^2 σ_2^2 / (σ_1^2 + σ_2^2)

This can be written in the alternative form

1/σ_a^2 = 1/σ_1^2 + 1/σ_2^2

Thus, if the coefficients are optimal, the precision of the analysis is the sum of the precisions of the measurements.
Variational Approach

We can also obtain the same best estimate of T_t by minimizing a cost function. The cost function is defined as the sum of the squares of the distances of T to the two observations, weighted by their observational error precisions:

J(T) = (1/2) [ (T − T_1)^2/σ_1^2 + (T − T_2)^2/σ_2^2 ]

The minimum of the cost function J is obtained by requiring ∂J/∂T = 0. Exercise: prove that ∂J/∂T = 0 gives the same value for T_a as the least squares method.
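As a quick check of the exercise, the sketch below evaluates J on a fine grid and confirms that its minimum lies at the precision-weighted mean of the two observations (the observation values and variances are assumed, for illustration):

```python
import numpy as np

T1, s1_sq = 2.0, 4.0   # observation 1 and its error variance (assumed)
T2, s2_sq = 0.0, 1.0   # observation 2 and its error variance (assumed)

def J(T):
    """Cost: half the precision-weighted squared distances to the observations."""
    return 0.5 * ((T - T1)**2 / s1_sq + (T - T2)**2 / s2_sq)

# Setting dJ/dT = 0 gives the precision-weighted mean
T_a = (T1 / s1_sq + T2 / s2_sq) / (1.0 / s1_sq + 1.0 / s2_sq)

# Numerical check: locate the minimum of J on a fine grid
grid = np.linspace(-5.0, 5.0, 1000001)
T_min = grid[np.argmin(J(grid))]
```

The grid minimum matches the analytic T_a, which is also the least-squares answer a_1 T_1 + a_2 T_2.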
The control variable for the minimization of J (i.e., the variable with respect to which we are minimizing the cost function) is the temperature. For the least squares method, the control variables were the weights. The equivalence between the minimization of the analysis error variance and the variational cost-function approach is important. This equivalence also holds for multidimensional problems, in which case we use the covariance matrix rather than the scalar variance. It indicates that OI and 3D-Var are solving the same problem by different means.
110 Example: Suppose

    T_1 = 2, σ_1 = 2;    T_2 = 0, σ_2 = 1.

Show that T_a = 0.4 and σ_a² = 0.8.

    σ_1² + σ_2² = 5

    a_1 = σ_2²/(σ_1² + σ_2²) = 1/5        a_2 = σ_1²/(σ_1² + σ_2²) = 4/5

CHECK: a_1 + a_2 = 1.

    T_a = a_1 T_1 + a_2 T_2 = (1/5) × 2 + (4/5) × 0 = 0.4

    σ_a² = σ_1² σ_2²/(σ_1² + σ_2²) = (4 × 1)/5 = 0.8

This solution is illustrated in the next figure.
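The arithmetic of this example can be verified with a short Python snippet, using only the formulas already derived:

```python
# Verify the worked example: T1 = 2 (sigma1 = 2), T2 = 0 (sigma2 = 1).
T1, s1 = 2.0, 2.0
T2, s2 = 0.0, 1.0

a1 = s2**2 / (s1**2 + s2**2)             # weight on T1: 1/5
a2 = s1**2 / (s1**2 + s2**2)             # weight on T2: 4/5
Ta = a1 * T1 + a2 * T2                   # analysis
var_a = s1**2 * s2**2 / (s1**2 + s2**2)  # analysis error variance

assert abs(a1 + a2 - 1.0) < 1e-12        # the weights sum to one
print(a1, a2, Ta, var_a)                 # 0.2 0.8 0.4 0.8
```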
111 [Figure] The probability distributions for a simple case. The analysis has a pdf with a maximum closer to T_2, and a smaller standard deviation than either observation.
112 Conclusion of the foregoing.
117 Simple Sequential Assimilation

We consider the toy example as a prototype of a full multivariate OI.

Recall that we wrote the analysis as a linear combination

    T_a = a_1 T_1 + a_2 T_2

The requirement that the analysis be unbiased led to a_1 + a_2 = 1, so

    T_a = T_1 + a_2 (T_2 − T_1)

Assume that one of the two temperatures, say T_1 = T_b, is not an observation but a background value, such as a forecast. Assume that the other value is an observation, T_2 = T_o.

We can write the analysis as

    T_a = T_b + W (T_o − T_b)

where W = a_2 can be expressed in terms of the variances.
120 The least squares method gave us the optimal weight:

    W = σ_b²/(σ_b² + σ_o²)

When the analysis is written as

    T_a = T_b + W (T_o − T_b)

the quantity (T_o − T_b) is called the observational innovation, i.e., the new information brought by the observation. It is also known as the observational increment (with respect to the background).
124 The analysis error variance is, as before, given by

    1/σ_a² = 1/σ_b² + 1/σ_o²    or    σ_a² = σ_b² σ_o²/(σ_b² + σ_o²)

The analysis variance can be written as

    σ_a² = (1 − W) σ_b²

Exercise: Verify all the foregoing formulæ.

We have shown that the simple two-temperatures problem serves as a paradigm for the problem of objective analysis of the atmospheric state.
126 Collection of Main Equations

We gather the principal equations here:

    T_a = T_b + W (T_o − T_b)

    W = σ_b²/(σ_b² + σ_o²)

    σ_a² = σ_b² σ_o²/(σ_b² + σ_o²) = W σ_o²

    σ_a² = (1 − W) σ_b²
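These four equations translate directly into code. A minimal sketch (the variable names are mine, not from the notes):

```python
def scalar_analysis(Tb, var_b, To, var_o):
    """One scalar analysis step using the four main equations above."""
    W = var_b / (var_b + var_o)      # optimal weight
    Ta = Tb + W * (To - Tb)          # background plus weighted innovation
    var_a = (1.0 - W) * var_b        # analysis error variance
    return Ta, var_a, W

Ta, var_a, W = scalar_analysis(Tb=2.0, var_b=4.0, To=0.0, var_o=1.0)
# The equivalent forms of the analysis variance agree:
#   (1 - W) var_b  =  W var_o  =  var_b var_o / (var_b + var_o)
print(Ta, var_a, W)   # Ta ≈ 0.4, var_a ≈ 0.8, W = 0.8
```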
129 These four equations have been derived for the simplest scalar case, but they are important for the problem of data assimilation because they have exactly the same form as more general equations:

The least squares sequential estimation method is used for real multidimensional problems (optimal interpolation, 3D-Var and even Kalman filtering).

Therefore we will interpret these four equations in detail.
131 The first equation

    T_a = T_b + W (T_o − T_b)

This says: The analysis is obtained by adding to the background value, or first guess, the innovation (the difference between the observation and first guess), weighted by the optimal weight.
134 The second equation

    W = σ_b²/(σ_b² + σ_o²)

This says: The optimal weight is the background error variance multiplied by the inverse of the total error variance (the sum of the background and the observation error variances).

Note that the larger the background error variance, the larger the correction to the first guess.

Look at the limits: σ_o² = 0 gives W = 1 (a perfect observation replaces the background), while σ_b² = 0 gives W = 0 (a perfect background ignores the observation).
137 The third equation

The variance of the analysis is

    σ_a² = σ_b² σ_o²/(σ_b² + σ_o²)

This can also be written

    1/σ_a² = 1/σ_b² + 1/σ_o²

This says: The precision of the analysis (inverse of the analysis error variance) is the sum of the precisions of the background and the observation.
139 The fourth equation

    σ_a² = (1 − W) σ_b²

This says: The error variance of the analysis is the error variance of the background, reduced by a factor equal to one minus the optimal weight.

It can also be written

    σ_a² = W σ_o²
144 All the above statements are important because they also hold true for sequential data assimilation systems (OI and Kalman filtering) for multidimensional problems.

In these problems, in which T_b and T_a are three-dimensional fields of size order 10^7 and T_o is a set of observations (typically of size 10^5), we have to replace expressions as follows:

    error variance ⇒ error covariance matrix
    optimal weight ⇒ optimal gain matrix

Note that there is one essential tuning parameter in OI: the ratio of the observational error variance to the background error variance, (σ_o/σ_b)².
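A sketch of the multidimensional analogue, under the simplifying assumption (mine, not the notes') that every state variable is observed directly, so that the observation operator is the identity:

```python
import numpy as np

def analysis_step(xb, B, yo, R):
    """Matrix form of the scalar equations (observation operator H = I)."""
    K = B @ np.linalg.inv(B + R)        # gain matrix    <- optimal weight W
    xa = xb + K @ (yo - xb)             # analysis       <- Ta = Tb + W(To - Tb)
    Pa = (np.eye(len(xb)) - K) @ B      # analysis cov   <- (1 - W) sigma_b^2
    return xa, Pa

# In one dimension this reduces exactly to the scalar formulas:
xa, Pa = analysis_step(np.array([2.0]), np.array([[4.0]]),
                       np.array([0.0]), np.array([[1.0]]))
print(xa, Pa)   # xa ≈ [0.4], Pa ≈ [[0.8]]
```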
147 Application to Analysis

If the background is a forecast, we can use the four equations to create a simple sequential analysis cycle. Each observation is used once, at the time it appears, and then discarded.

Assume that we have completed the analysis at time t_i (e.g., at 06 UTC), and we want to proceed to the next cycle (time t_{i+1}, or 12 UTC).

The analysis cycle has two phases: a forecast phase, to update the background T_b and its error variance σ_b², and an analysis phase, to update the analysis T_a and its error variance σ_a².
148 [Figure] Typical 6-hour analysis cycle.
152 Forecast Phase

In the forecast phase of the analysis cycle, the background is first obtained through a forecast:

    T_b(t_{i+1}) = M[T_a(t_i)]

where M represents the forecast model.

We also need the error variance of the background. In OI, this is obtained by making a suitably simple assumption, such as that the model integration increases the initial error variance by a fixed factor a somewhat greater than 1:

    σ_b²(t_{i+1}) = a σ_a²(t_i)

This allows the new weight W(t_{i+1}) to be estimated using

    W = σ_b²/(σ_b² + σ_o²)
156 Analysis Phase

In the analysis phase of the cycle we get the new observation T_o(t_{i+1}), and we derive the new analysis T_a(t_{i+1}) using

    T_a = T_b + W (T_o − T_b)

The estimate of σ_b² comes from

    σ_b²(t_{i+1}) = a σ_a²(t_i)

The new analysis error variance σ_a²(t_{i+1}) comes from

    σ_a² = (1 − W) σ_b²

It is smaller than the background error variance.

After the analysis, the cycle for time t_{i+1} is completed, and we can proceed to the next cycle.
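The full forecast/analysis cycle can be sketched as a toy Python simulation. The stand-in model M, the truth trajectory, and the parameter values (a = 1.5, σ_o² = 1) are illustrative assumptions, not values from the notes:

```python
import random

def M(T):
    # Stand-in for the forecast model (purely illustrative).
    return 0.9 * T + 1.0

random.seed(1)
T_true = 10.0                 # "true" atmospheric state
Ta, var_a = 8.0, 4.0          # initial analysis and its error variance
var_o, a = 1.0, 1.5           # observation error variance; inflation factor

for cycle in range(6):
    # Forecast phase: advance the state and inflate the error variance.
    T_true = M(T_true)
    Tb = M(Ta)
    var_b = a * var_a
    # Analysis phase: blend the background with a noisy observation.
    To = T_true + random.gauss(0.0, var_o**0.5)
    W = var_b / (var_b + var_o)
    Ta = Tb + W * (To - Tb)
    var_a = (1.0 - W) * var_b
    print(f"cycle {cycle}: Ta = {Ta:5.2f}, truth = {T_true:5.2f}, var_a = {var_a:.3f}")
```

Note that var_a evolves deterministically, settling toward the fixed point of v ↦ av/(av + 1), i.e. (a − 1)/a ≈ 0.33 here, independent of the observed values themselves.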
157 Reading Assignment

Study the Remarks in Kalnay.
More informationComparison of of Assimilation Schemes for HYCOM
Comparison of of Assimilation Schemes for HYCOM Ashwanth Srinivasan, C. Thacker, Z. Garraffo, E. P. Chassignet, O. M. Smedstad, J. Cummings, F. Counillon, L. Bertino, T. M. Chin, P. Brasseur and C. Lozano
More informationLocalization in the ensemble Kalman Filter
Department of Meteorology Localization in the ensemble Kalman Filter Ruth Elizabeth Petrie A dissertation submitted in partial fulfilment of the requirement for the degree of MSc. Atmosphere, Ocean and
More informationA Comparative Study of 4D-VAR and a 4D Ensemble Kalman Filter: Perfect Model Simulations with Lorenz-96
Tellus 000, 000 000 (0000) Printed 20 October 2006 (Tellus LATEX style file v2.2) A Comparative Study of 4D-VAR and a 4D Ensemble Kalman Filter: Perfect Model Simulations with Lorenz-96 Elana J. Fertig
More informationMultivariate Correlations: Applying a Dynamic Constraint and Variable Localization in an Ensemble Context
Multivariate Correlations: Applying a Dynamic Constraint and Variable Localization in an Ensemble Context Catherine Thomas 1,2,3, Kayo Ide 1 Additional thanks to Daryl Kleist, Eugenia Kalnay, Takemasa
More information15B.7 A SEQUENTIAL VARIATIONAL ANALYSIS APPROACH FOR MESOSCALE DATA ASSIMILATION
21st Conference on Weather Analysis and Forecasting/17th Conference on Numerical Weather Prediction 15B.7 A SEQUENTIAL VARIATIONAL ANALYSIS APPROACH FOR MESOSCALE DATA ASSIMILATION Yuanfu Xie 1, S. E.
More informationUniversity of Athens School of Physics Atmospheric Modeling and Weather Forecasting Group
University of Athens School of Physics Atmospheric Modeling and Weather Forecasting Group http://forecast.uoa.gr Data Assimilation in WAM System operations and validation G. Kallos, G. Galanis and G. Emmanouil
More informationNumerical Weather prediction at the European Centre for Medium-Range Weather Forecasts
Numerical Weather prediction at the European Centre for Medium-Range Weather Forecasts Time series curves 500hPa geopotential Correlation coefficent of forecast anomaly N Hemisphere Lat 20.0 to 90.0 Lon
More informationThe Structure of Background-error Covariance in a Four-dimensional Variational Data Assimilation System: Single-point Experiment
ADVANCES IN ATMOSPHERIC SCIENCES, VOL. 27, NO. 6, 2010, 1303 1310 The Structure of Background-error Covariance in a Four-dimensional Variational Data Assimilation System: Single-point Experiment LIU Juanjuan
More informationRadar data assimilation using a modular programming approach with the Ensemble Kalman Filter: preliminary results
Radar data assimilation using a modular programming approach with the Ensemble Kalman Filter: preliminary results I. Maiello 1, L. Delle Monache 2, G. Romine 2, E. Picciotti 3, F.S. Marzano 4, R. Ferretti
More informationParameter Estimation in EnKF: Surface Fluxes of Carbon, Heat, Moisture and Momentum
Parameter Estimation in EnKF: Surface Fluxes of Carbon, Heat, Moisture and Momentum *Ji-Sun Kang, *Eugenia Kalnay, *Takemasa Miyoshi, + Junjie Liu, # Inez Fung, *Kayo Ide *University of Maryland, College
More informationCan hybrid-4denvar match hybrid-4dvar?
Comparing ensemble-variational assimilation methods for NWP: Can hybrid-4denvar match hybrid-4dvar? WWOSC, Montreal, August 2014. Andrew Lorenc, Neill Bowler, Adam Clayton, David Fairbairn and Stephen
More informationON DIAGNOSING OBSERVATION ERROR STATISTICS WITH LOCAL ENSEMBLE DATA ASSIMILATION
ON DIAGNOSING OBSERVATION ERROR STATISTICS WITH LOCAL ENSEMBLE DATA ASSIMILATION J. A. Waller, S. L. Dance, N. K. Nichols University of Reading 1 INTRODUCTION INTRODUCTION Motivation Only % of observations
More informationDirect assimilation of all-sky microwave radiances at ECMWF
Direct assimilation of all-sky microwave radiances at ECMWF Peter Bauer, Alan Geer, Philippe Lopez, Deborah Salmond European Centre for Medium-Range Weather Forecasts Reading, Berkshire, UK Slide 1 17
More informationAssimilation Techniques (4): 4dVar April 2001
Assimilation echniques (4): 4dVar April By Mike Fisher European Centre for Medium-Range Weather Forecasts. able of contents. Introduction. Comparison between the ECMWF 3dVar and 4dVar systems 3. he current
More informationModel and observation bias correction in altimeter ocean data assimilation in FOAM
Model and observation bias correction in altimeter ocean data assimilation in FOAM Daniel Lea 1, Keith Haines 2, Matt Martin 1 1 Met Office, Exeter, UK 2 ESSC, Reading University, UK Abstract We implement
More informationHybrid variational-ensemble data assimilation. Daryl T. Kleist. Kayo Ide, Dave Parrish, John Derber, Jeff Whitaker
Hybrid variational-ensemble data assimilation Daryl T. Kleist Kayo Ide, Dave Parrish, John Derber, Jeff Whitaker Weather and Chaos Group Meeting 07 March 20 Variational Data Assimilation J Var J 2 2 T
More informationThe role of data assimilation in atmospheric composition monitoring and forecasting
The role of data assimilation in atmospheric composition monitoring and forecasting Why data assimilation? Henk Eskes Royal Netherlands Meteorological Institute, De Bilt, The Netherlands Atmospheric chemistry
More informationComparison of 3D-Var and LETKF in an Atmospheric GCM: SPEEDY
Comparison of 3D-Var and LEKF in an Atmospheric GCM: SPEEDY Catherine Sabol Kayo Ide Eugenia Kalnay, akemasa Miyoshi Weather Chaos, UMD 9 April 2012 Outline SPEEDY Formulation Single Observation Eperiments
More informationModel error in coupled atmosphereocean data assimilation. Alison Fowler and Amos Lawless (and Keith Haines)
Model error in coupled atmosphereocean data assimilation Alison Fowler and Amos Lawless (and Keith Haines) Informal DARC seminar 11 th February 2015 Introduction Coupled atmosphere-ocean DA schemes have
More informationXuguang Wang and Ting Lei. School of Meteorology, University of Oklahoma and Center for Analysis and Prediction of Storms, Norman, OK.
1 2 3 GSI-based four dimensional ensemble-variational (4DEnsVar) data assimilation: formulation and single resolution experiments with real data for NCEP Global Forecast System 4 5 6 7 8 9 10 11 12 13
More informationQuantifying observation error correlations in remotely sensed data
Quantifying observation error correlations in remotely sensed data Conference or Workshop Item Published Version Presentation slides Stewart, L., Cameron, J., Dance, S. L., English, S., Eyre, J. and Nichols,
More informationEnKF Localization Techniques and Balance
EnKF Localization Techniques and Balance Steven Greybush Eugenia Kalnay, Kayo Ide, Takemasa Miyoshi, and Brian Hunt Weather Chaos Meeting September 21, 2009 Data Assimilation Equation Scalar form: x a
More informationComparing Local Ensemble Transform Kalman Filter with 4D-Var in a Quasi-geostrophic model
Comparing Local Ensemble Transform Kalman Filter with 4D-Var in a Quasi-geostrophic model Shu-Chih Yang 1,2, Eugenia Kalnay 1, Matteo Corazza 3, Alberto Carrassi 4 and Takemasa Miyoshi 5 1 University of
More informationVariational data assimilation of lightning with WRFDA system using nonlinear observation operators
Variational data assimilation of lightning with WRFDA system using nonlinear observation operators Virginia Tech, Blacksburg, Virginia Florida State University, Tallahassee, Florida rstefane@vt.edu, inavon@fsu.edu
More informationDevelopment of the Local Ensemble Transform Kalman Filter
Development of the Local Ensemble Transform Kalman Filter Istvan Szunyogh Institute for Physical Science and Technology & Department of Atmospheric and Oceanic Science AOSC Special Seminar September 27,
More informationSatellite data assimilation for Numerical Weather Prediction II
Satellite data assimilation for Numerical Weather Prediction II Niels Bormann European Centre for Medium-range Weather Forecasts (ECMWF) (with contributions from Tony McNally, Jean-Noël Thépaut, Slide
More informationTheory and Practice of Data Assimilation in Ocean Modeling
Theory and Practice of Data Assimilation in Ocean Modeling Robert N. Miller College of Oceanic and Atmospheric Sciences Oregon State University Oceanography Admin. Bldg. 104 Corvallis, OR 97331-5503 Phone:
More informationWP2 task 2.2 SST assimilation
WP2 task 2.2 SST assimilation Matt Martin, Dan Lea, James While, Rob King ERA-CLIM2 General Assembly, Dec 2015. Contents Impact of SST assimilation in coupled DA SST bias correction developments Assimilation
More informationSnow Analysis for Numerical Weather prediction at ECMWF
Snow Analysis for Numerical Weather prediction at ECMWF Patricia de Rosnay, Gianpaolo Balsamo, Lars Isaksen European Centre for Medium-Range Weather Forecasts IGARSS 2011, Vancouver, 25-29 July 2011 Slide
More informationOperational Use of Scatterometer Winds at JMA
Operational Use of Scatterometer Winds at JMA Masaya Takahashi Numerical Prediction Division, Japan Meteorological Agency (JMA) 10 th International Winds Workshop, Tokyo, 26 February 2010 JMA Outline JMA
More informationThe hybrid ETKF- Variational data assimilation scheme in HIRLAM
The hybrid ETKF- Variational data assimilation scheme in HIRLAM (current status, problems and further developments) The Hungarian Meteorological Service, Budapest, 24.01.2011 Nils Gustafsson, Jelena Bojarova
More informationDiagnosis of observation, background and analysis-error statistics in observation space
Q. J. R. Meteorol. Soc. (2005), 131, pp. 3385 3396 doi: 10.1256/qj.05.108 Diagnosis of observation, background and analysis-error statistics in observation space By G. DESROZIERS, L. BERRE, B. CHAPNIK
More information