Adaptive Localization: Proposals for a high-resolution multivariate system
Ross Bannister, HRAA, December 2008, January 2009, Version 3.


1. The implicit Schur product
2. The Bishop method for adaptive localization (ECO-RAP)
3. Element-by-element evaluation of (10)
4. Calculation of the localized covariances
5. Notes for evaluating (22) for a structure function
6. Limiting cases
7. Suggested algorithm with adaptive localization
8. Suggested algorithm with static localization only
9. Adaptive localization with a major simplification
References

1. The implicit Schur product

The Schur (element-by-element) product is used in ensemble-based data assimilation to remove long-range correlations:

  P_L^f = P_R^f \circ \Omega,   (1)

where P_L^f and P_R^f are the localized and raw forecast error covariance matrices respectively, and \Omega is the localization matrix. These matrices are of size 5n \times 5n (where n is the total number of grid points for each of the five parameters, \psi, \chi, p, \theta and q [what about w?]), and so we don't have the ability to store them explicitly. In the ensemble Kalman filter, P_R^f is represented by its square root (i.e. the 5n \times K matrix of ensemble perturbations, each divided by \sqrt{K-1}). Assuming that \Omega is also held in its square-root (5n \times L) form, then

  P_R^f = P_R^{f1/2} P_R^{fT/2} = \frac{1}{K-1} \sum_{k=1}^{K} x_k x_k^T,   (2)

  \Omega = \Omega^{1/2} \Omega^{T/2} = \frac{1}{L-1} \sum_{l=1}^{L} \omega_l \omega_l^T.   (3)

In the last line, the square root of \Omega is also considered to be comprised of new effective ensemble members, \omega_l, each divided by \sqrt{L-1}. For \Omega to be a correlation matrix, each component of the \omega_l must have a variance of unity. Substituting (2) and (3) into (1), and then writing the (i,j)-th element of P_L^f, gives

  P_L^f = (P_R^{f1/2} P_R^{fT/2}) \circ (\Omega^{1/2} \Omega^{T/2}),   (4)

  (P_L^f)_{ij} = (P_R^{f1/2} P_R^{fT/2})_{ij} (\Omega^{1/2} \Omega^{T/2})_{ij},   (5)
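The square-root forms above are easy to check numerically. A minimal numpy sketch with toy sizes (all variable names are illustrative, not from the original note); it also checks the Schur product theorem, which is why localization preserves the covariance property:

```python
import numpy as np

rng = np.random.default_rng(0)
n, K, L = 30, 8, 5                  # toy sizes; the note's matrices are 5n x 5n

Xh = rng.standard_normal((n, K))    # stands in for P_R^{f1/2} (columns x_k/sqrt(K-1))
Wh = rng.standard_normal((n, L))    # stands in for Omega^{1/2}

P_R = Xh @ Xh.T                     # raw, low-rank covariance, cf. Eq. (2)
Om  = Wh @ Wh.T                     # localization matrix, cf. Eq. (3)
P_L = P_R * Om                      # element-wise (Schur) product, Eqs. (1), (4), (5)

# Schur product theorem: the element-wise product of two PSD matrices is PSD
assert np.linalg.eigvalsh(P_L).min() > -1e-8

# Equivalently, P_L is a sum of K*L outer products of "effective members" x_k o w_l
PL2 = sum(np.outer(Xh[:, k] * Wh[:, l], Xh[:, k] * Wh[:, l])
          for k in range(K) for l in range(L))
assert np.allclose(P_L, PL2)
```

The second check anticipates the effective-member expansion derived next: localization raises the effective rank from K towards KL.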

Expanding the square roots in (5) in terms of their columns gives

  (P_L^f)_{ij} = \left[ \sum_{p=1}^{K} (P_R^{f1/2})_{ip} (P_R^{f1/2})_{jp} \right] \left[ \sum_{q=1}^{L} (\Omega^{1/2})_{iq} (\Omega^{1/2})_{jq} \right],   (6)

              = \sum_{p=1}^{K} \sum_{q=1}^{L} (P_R^{f1/2})_{ip} (\Omega^{1/2})_{iq} (P_R^{f1/2})_{jp} (\Omega^{1/2})_{jq}.   (7)

Equation (7) shows that the localized forecast error covariance matrix is effectively made up of approximately KL ensemble members instead of just K. The effective ensemble members that give rise to the localized covariances can be written as

  x_{pq} = x_p \circ \omega_q,   (8)

where x_{pq} is the effective ensemble member comprising the vector Schur product of raw ensemble member x_p (\sqrt{K-1} times the p-th column of P_R^{f1/2}) with \omega_q (\sqrt{L-1} times the q-th column of \Omega^{1/2}).

2. The Bishop method for adaptive localization (ECO-RAP)

Bishop and Hodyss [1] proposed the following form for \Omega^{1/2}:

  \Omega^{1/2} = \overline{C_K^{\circ Q} E \Lambda^{1/2}}.   (9)

Here C_K is a 5n \times 5n correlation matrix calculated from the K ensemble members (see below), and E \Lambda^{1/2} is a 5n \times L matrix: E performs an inverse Fourier transform per parameter, and \Lambda^{1/2} performs scale-dependent filtering. The \circ Q superscript in (9) indicates an element-by-element raising to the power Q (a Schur power), where Q is even. The overbar denotes a normalization so that \Omega becomes a correlation matrix; this involves setting the sum of squares of each row of \Omega^{1/2} to unity.

The localization gains its adaptive property through the C_K matrix. If it were not for C_K, (9) would be the square root of a static and homogeneous correlation matrix. The issues of this problem are the following.

1. Determination of (i) the set of spectral modes in the horizontal, (ii) the set of vertical modes for each model quantity, (iii) an appropriate spectrum, \Lambda, and (iv) a choice of L, the number of modes after truncation.
2. Efficient determination and action of C_K^{\circ Q}.

For reference, (9) has the following multivariate form (followed by a specification of the dimensions of each matrix):

  \Omega^{1/2} = \overline{C_K^{\circ Q} E \Lambda^{1/2}} = \overline{C_K^{\circ Q} \, \mathrm{diag}\!\left( E_\psi \Lambda_\psi^{1/2}, \; E_\chi \Lambda_\chi^{1/2}, \; E_p \Lambda_p^{1/2}, \; E_\theta \Lambda_\theta^{1/2}, \; E_q \Lambda_q^{1/2} \right)},   (10)

  [5n \times L] = [5n \times 5n] [5n \times L].

The [5n \times L] part of the right-hand side of (10) is the static localization. It imposes no multivariate localization modulation; it only limits the univariate lengthscales of each variable. Localization associated with the multivariate part of the problem is handled by the adaptive matrix, C_K^{\circ Q}.

3. Element-by-element evaluation of (10)

Equation (10) has a high operation count. In the HRTM there are n = 360 \times 288 \times 70 \approx 7.26 \times 10^6 grid points. With the five variables this means there are \approx 3.6 \times 10^7 variables. Clearly special attention must be paid to the efficiency of the problem, and any approximations that can be made should be made.

Let (C_K^{\circ Q} E \Lambda^{1/2})_{ip,k} be column k and field position i for parameter p of C_K^{\circ Q} E \Lambda^{1/2}:

  (C_K^{\circ Q} E \Lambda^{1/2})_{ip,k} = \sum_{j,p'} (C_K^{\circ Q})_{ip,jp'} \, E_{p',j,k} \, (\Lambda_{p'}^{1/2})_{k,k},   (11)

where i and j run from 1 to n, and p and p' run over each parameter (\psi, \chi, p, \theta, q). The matrix C_K (a correlation matrix found from the ensemble members) has the following form:

  C_K = \Sigma^{-1} P_R^f \Sigma^{-1},   (12)

      = \frac{1}{K-1} \, \Sigma^{-1} \left[ \sum_{k=1}^{K} x_k x_k^T \right] \Sigma^{-1},   (13)

where \Sigma is the diagonal standard deviation matrix. Element (i,j) between parameters p and p' is

  (C_K)_{ip,jp'} = \frac{1}{K-1} \sum_{k=1}^{K} (\hat{x}_k)_{ip} (\hat{x}_k)_{jp'},   (14)

where \hat{x}_k = \Sigma^{-1} x_k are the perturbations normalized by their standard deviations, and

  (C_K^{\circ Q})_{ip,jp'} = \left[ (C_K)_{ip,jp'} \right]^Q.   (15)

The normalization in (9) and (10) (i.e. the overbar) means that the localization matrix has to be calculated row-wise. Normalization gives the matrix \Omega^{1/2}.

4. Calculation of the localized covariances

The localized covariance element ip, i'p' is, from (7),

  (P_L^f)_{ip,i'p'} = \frac{1}{(K-1)(L-1)} \sum_{k=1}^{K} \sum_{l=1}^{L} (x_k \circ \omega_l)_{ip} (x_k \circ \omega_l)_{i'p'},

                    = \frac{1}{(K-1)(L-1)} \sum_{k=1}^{K} \sum_{l=1}^{L} (x_k)_{ip} (\omega_l)_{ip} (x_k)_{i'p'} (\omega_l)_{i'p'}.   (16)

The (x_k)_{ip} and (x_k)_{i'p'} are readily available; the (\omega_l)_{ip} and (\omega_l)_{i'p'} are not. The relationship between the columns of \Omega^{1/2} and the \omega_l is

  (\omega_l)_{ip} = \sqrt{L-1} \, (\Omega^{1/2})_{ip,l},   (17)

where \Omega^{1/2} is to be written in terms of its components via (9). The overbar on (9) can be dealt with by a factor \mu_{ip} which normalizes:

  (\Omega^{1/2})_{ip,l} = \mu_{ip} \, (C_K^{\circ Q} E \Lambda^{1/2})_{ip,l},   (18)

where

  \mu_{ip} = \left[ \sum_{l} (C_K^{\circ Q} E \Lambda^{1/2})_{ip,l}^2 \right]^{-1/2}.   (19)

Combining (17), (18) and (11) gives

  (\omega_l)_{ip} = \sqrt{L-1} \, \mu_{ip} \sum_{j,p'} (C_K^{\circ Q})_{ip,jp'} \, E_{p',j,l} \, (\Lambda_{p'}^{1/2})_{l,l}.   (20)

Substituting (20) into (16) gives an expression for the localized covariances in terms of quantities that are known:

  (P_L^f)_{ip,i'p'} = \frac{1}{K-1} \sum_{k=1}^{K} (x_k)_{ip} (x_k)_{i'p'} \sum_{l} \left[ \mu_{ip} \sum_{j,p'} (C_K^{\circ Q})_{ip,jp'} E_{p',j,l} (\Lambda_{p'}^{1/2})_{l,l} \right] \left[ \mu_{i'p'} \sum_{j,p'} (C_K^{\circ Q})_{i'p',jp'} E_{p',j,l} (\Lambda_{p'}^{1/2})_{l,l} \right].   (21)

This summation has to be arranged so that it can be evaluated in the most efficient way, allowing for evaluation of the coefficients:

  (P_L^f)_{ip,i'p'} = \frac{\mu_{ip} \mu_{i'p'}}{K-1} \left[ \sum_{k=1}^{K} (x_k)_{ip} (x_k)_{i'p'} \right] \sum_{l} \left[ \sum_{j,p'} (C_K^{\circ Q})_{ip,jp'} E_{p',j,l} (\Lambda_{p'}^{1/2})_{l,l} \right] \left[ \sum_{j,p'} (C_K^{\circ Q})_{i'p',jp'} E_{p',j,l} (\Lambda_{p'}^{1/2})_{l,l} \right].   (22)

From (19) and (11),

  \mu_{ip} = \left\{ \sum_{l} \left[ \sum_{j,p'} (C_K^{\circ Q})_{ip,jp'} E_{p',j,l} (\Lambda_{p'}^{1/2})_{l,l} \right]^2 \right\}^{-1/2}.   (23)

In Sec. 9 we consider a major simplification of these equations, intended to make way for their efficient evaluation for large systems. For now, though, we consider the exact form of the equations.

5. Notes for evaluating (22) for a structure function

For a structure function, i' and p' will both be fixed. The j summations that appear in the above may be evaluated on a reduced-resolution grid (e.g. every 10 points). The summations \sum_{j,p'} (C_K^{\circ Q})_{ip,jp'} E_{p',j,l} (\Lambda_{p'}^{1/2})_{l,l} appear in both (22) and (23); store these for all l for each i, p to allow (23) to be evaluated.

6. Limiting cases

Choosing Q \to \infty leads to C_K^{\circ Q} \to I (i.e. only matrix elements that are identically unity survive the Schur power). This is equivalent to the case with no adaptive localization. Choosing Q = 0 is non-physical: it will set each non-zero matrix element in C_K^{\circ Q} to unity.

Note a fundamental difference between the conventional and the Schur matrix products. For the conventional matrix product

  AB = C,   (24)

setting B to the identity matrix leaves A = C. For the Schur matrix product

  A \circ B = C,   (25)

setting all elements of B to 1 leaves A = C.

Exploring the case when there is no adaptive localization, Q \to \infty, (10) becomes

  \Omega^{1/2} = \overline{E \Lambda^{1/2}} = \overline{\mathrm{diag}\!\left( E_\psi \Lambda_\psi^{1/2}, \; E_\chi \Lambda_\chi^{1/2}, \; E_p \Lambda_p^{1/2}, \; E_\theta \Lambda_\theta^{1/2}, \; E_q \Lambda_q^{1/2} \right)},   (26)

which is block diagonal in parameter. Considering only one parameter, (26) gives the following square root:

  (\Omega^{1/2})_{iq} = \mu_i \, E_{iq} \, \Lambda_{qq}^{1/2},   (27)

where

  \mu_i = \left[ \sum_{l} (E_{il} \Lambda_{ll}^{1/2})^2 \right]^{-1/2}.   (28)

This means that \Omega from (27) with (3) gives

  \Omega_{ij} = \sum_{q} (\Omega^{1/2})_{iq} (\Omega^{T/2})_{qj},   (29)

             = \mu_i \mu_j \sum_{q} E_{iq} E_{jq}^{*} \Lambda_{qq},   (30)

where * means complex conjugate (we add this here because the illustration below makes use of a complex Fourier transform). In 1-D, (30) becomes

  \Omega_{ij} = \mu_i \mu_j \sum_{q} \exp[\mathrm{i} k_q (r_i - r_j)] \, \Lambda_{qq},   (31)

where k_q is the q-th wavenumber and r_i is the position of the i-th grid point. If L covers the complete spectrum and \Lambda_{qq} is constant (broad localization in spectral space), then orthogonality gives

  \Omega_{ij} = \mu_i^2 \delta_{ij},   (32)

meaning that this Schur product will be diagonal and will completely localize in real space. For a narrower localization in spectral space, i.e. \Lambda_{qq} \to 0 with increasing q (qualitatively similar to a smaller L), the localization in real space will be broader.
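The limiting behaviour in (27)-(32) can be illustrated in 1-D with numpy. A minimal sketch (grid size, spectrum shape and all variable names are illustrative):

```python
import numpy as np

# 1-D illustration of Eqs. (27)-(32): a static localization matrix built from
# Fourier modes E scaled by a spectrum Lambda, then row-normalized (mu factors).
n = 64
r = np.arange(n)                      # grid-point positions
q = np.arange(n)                      # mode index
k = np.minimum(q, n - q)              # symmetric |wavenumber|, for a real spectrum
E = np.exp(2j * np.pi * np.outer(r, q) / n) / np.sqrt(n)   # unitary DFT modes

def localization(lam):
    S = E * np.sqrt(lam)                              # columns of E Lambda^{1/2}
    mu = 1.0 / np.sqrt((np.abs(S) ** 2).sum(axis=1))  # Eq. (28)
    Om_half = mu[:, None] * S                         # Eq. (27)
    return (Om_half @ Om_half.conj().T).real          # Eqs. (29)-(30)

# Flat spectrum over the full set of modes: orthogonality gives Omega = I, Eq. (32)
assert np.allclose(localization(np.ones(n)), np.eye(n), atol=1e-10)

# Decaying spectrum: the localization in real space becomes broad
Om = localization(np.exp(-(k / 4.0) ** 2))
assert np.allclose(np.diag(Om), 1.0) and Om[0, 1] > 0.5
```

The two asserts mirror the two limits in the text: a broad (flat) spectrum localizes completely in real space, while a narrow spectrum gives broad real-space localization.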

7. Suggested algorithm with adaptive localization

Costs for each loop are specified at the end of each loop: first for the case when no efficiencies are used, then for the case when efficiencies are used. Numerical terms in brackets are for the specific model domain (n = 360 \times 288 \times 70 and 5 parameters). Assume for now that L = 50; where a third value appears, it is the with-efficiencies cost for L = 1.

1. Calculation of right-hand bracketed term in (22)
2. Loop round j, p'
3.   Evaluate \chi(j, p') = (C_K^{\circ Q})_{i'p',jp'}   [K (24) | K (24) | (24)]
4. End loop (j, p')   [5Kn (871 \times 10^6) | 5Kn/100 (9 \times 10^6) | (9 \times 10^6)]
5. \alpha(:) = 0
6. \beta = 0
7. Loop round l
8.   Loop round j, p'
9.     \alpha(l) += \chi(j, p') \, E_{p',j,l} \, (\Lambda_{p'}^{1/2})_{l,l}
10.  End loop (j, p')   [5n (36 \times 10^6) | 5n/100 (36 \times 10^4) | (36 \times 10^4)]
11.  \beta += \alpha^2(l)
12. End loop (l)   [5Ln (1.8 \times 10^9) | 5Ln/100 (18 \times 10^6) | (36 \times 10^4)]
13. \mu_{i'p'} = 1/\sqrt{\beta}
14. Loop around destination points in the structure function
15. Loop round i, p
16.   Calculation of left-hand bracketed term in (22)
17.   \gamma = 0
18.   Loop round k
19.     \gamma += (x_k)_{ip} (x_k)_{i'p'}
20.   End loop (k)   [K (24) | K (24) | (24)]
21.   Calculation of middle bracketed term in (22)
22.   Loop round j, p'
23.     Evaluate \chi(j, p') = (C_K^{\circ Q})_{ip,jp'}   [K (24) | K (24) | (24)]
24.   End loop (j, p')   [5Kn (871 \times 10^6) | 5Kn/100 (9 \times 10^6) | (9 \times 10^6)]
25.   \varepsilon(:) = 0
26.   \beta = 0
27.   Loop round l
28.     Loop round j, p'
29.       \varepsilon(l) += \chi(j, p') \, E_{p',j,l} \, (\Lambda_{p'}^{1/2})_{l,l}
30.     End loop (j, p')   [5n (36 \times 10^6) | 5n/100 (36 \times 10^4) | (36 \times 10^4)]
31.     \beta += \varepsilon^2(l)
32.   End loop (l)   [5Ln (1.8 \times 10^9) | 5Ln/100 (18 \times 10^6) | (363 \times 10^3)]
33.   \mu_{ip} = 1/\sqrt{\beta}

34.   Structure function for i, p can be evaluated - see (33)
35. End loop (i, p)   [5n[K + 5n(1 + K + L)] (10^17) | 5n[K + 5n(1 + K + L)/100] (10^15) | (3.3 \times 10^14)]

  (P_L^f)_{ip,i'p'} = \frac{\gamma \, \mu_{ip} \mu_{i'p'}}{K-1} \sum_{l=1}^{L} \varepsilon(l) \, \alpha(l).   (33)

8. Suggested algorithm with static localization only

Without the adaptive localization the problem becomes considerably simpler. In this case (22) and (23) become

  (P_L^f)_{ip,i'p'} = \frac{1}{K-1} \left[ \sum_{k=1}^{K} (x_k)_{ip} (x_k)_{i'p'} \right] \sum_{l} \left[ \mu_{ip} E_{p,i,l} (\Lambda_p^{1/2})_{l,l} \right] \left[ \mu_{i'p'} E_{p',i',l} (\Lambda_{p'}^{1/2})_{l,l} \right],   (34)

  \mu_{ip} = \left[ \sum_{l} \left( E_{p,i,l} (\Lambda_p^{1/2})_{l,l} \right)^2 \right]^{-1/2}.   (35)

1. Calculation of right-hand bracketed term in (34)
2. \beta = 0
3. Loop round l
4.   \alpha(l) = E_{p',i',l} (\Lambda_{p'}^{1/2})_{l,l}
5.   \beta += \alpha^2(l)
6. End loop (l)   [L (50) | (1)]
7. \mu_{i'p'} = 1/\sqrt{\beta}
8. Loop around destination points in the structure function
9. Loop round i, p
10.   Calculation of left-hand bracketed term in (34)
11.   \gamma = 0
12.   Loop round k
13.     \gamma += (x_k)_{ip} (x_k)_{i'p'}
14.   End loop (k)   [K (24) | (24)]
15.   Calculation of middle bracketed term in (34)
16.   \beta = 0
17.   Loop round l
18.     \varepsilon(l) = E_{p,i,l} (\Lambda_p^{1/2})_{l,l}
19.     \beta += \varepsilon^2(l)
20.   End loop (l)   [L (50) | (1)]
21.   \mu_{ip} = 1/\sqrt{\beta}
22.   Structure function for i, p can be evaluated - see (33)
23. End loop (i, p)   [5n[K + L] (2.7 \times 10^9) | (0.9 \times 10^9)]

9. Adaptive localization with a major simplification

In Sec. 7 we considered an algorithm for the brute-force evaluation of (22) and (23) for the evaluation of localized covariances, and in Sec. 8 we considered the limiting case where the localization is static. Neither of these approaches is useful for large systems (the algorithm in Sec. 7 is prohibitive and the algorithm in Sec. 8 is inadequate for many purposes). Here we consider a simplification to the covariance formulae that may be usable and useful.

First, recap the equations that are to be evaluated. The localized covariance matrix elements from (16):

  (P_L^f)_{ip,i'p'} = \frac{1}{(K-1)(L-1)} \sum_{k=1}^{K} \sum_{l=1}^{L} (x_k)_{ip} (\omega_l)_{ip} (x_k)_{i'p'} (\omega_l)_{i'p'};   (16)

the localization members from (17):

  (\omega_l)_{ip} = \sqrt{L-1} \, (\Omega^{1/2})_{ip,l};   (17)

and elements of the localization matrix from (18):

  (\Omega^{1/2})_{ip,l} = \mu_{ip} \, (C_K^{\circ Q} E \Lambda^{1/2})_{ip,l},   (18)

where, from (19),

  \mu_{ip} = \left[ \sum_{l} (C_K^{\circ Q} E \Lambda^{1/2})_{ip,l}^2 \right]^{-1/2}.   (19)

These are straight copies of equations previously given in this document. In [2] it is suggested that considerable efficiency savings can be made in the evaluation of (18) (and hence in (16)) in the case of adaptive localization if the matrix C_K^{\circ Q} is approximated by one that has separable structure functions. This is now explored.

The analysis is centred on the evaluation of (C_K^{\circ Q} E \Lambda^{1/2})_{ip,l}, which is one of the most expensive parts of the calculation:

  (C_K^{\circ Q} E \Lambda^{1/2})_{ip,j} = \sum_{i',p'} (C_K)^Q_{ip,i'p'} \, E_{p',i',j} \, \Lambda_{jj}^{1/2}.   (36)

Now consider the case when rows of C_K are approximated by separable functions. Since index i represents all three dimensions in space, this step requires a change of notation. Let a given i represent a unique combination of x, y, z, and let i' represent x', y', z'. Then (C_K)_{ip,i'p'} may be written as

  (C_K)_{ip,i'p'} = C_K(x, y, z, p; x', y', z', p').   (37)

Assuming separable functions means that (C_K)_{ip,i'p'} is approximated by

  (C_K)_{ip,i'p'} = C_K(x, y, z, p; x', y', z', p') \approx C_K(x, y, z, p; x', p') \, C_K(x, y, z, p; y', p') \, C_K(x, y, z, p; z', p'),   (38)

i.e. the row associated with x, y, z, p is a function of x', y', z', p' and is written as the product of three functions: one a function of x', p', another a function of y', p', and another a function of z', p'. This is separable in x', y', z'-space. Note that, unfortunately, C_K written in this way is not guaranteed to be symmetric (as is required for a correlation matrix), but it is assumed that this is not vital for localization, as is presumably the case in [2].
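The computational payoff of the separable form (38) is that contracting a row of C_K against a separable mode collapses a 3-D sum into three cheap 1-D sums. A toy numpy check (all arrays are illustrative stand-ins):

```python
import numpy as np

rng = np.random.default_rng(1)
nx, ny, nz = 12, 10, 8

# Illustrative separable row of C_K, cf. Eq. (38), and a separable mode
cx, cy, cz = (rng.standard_normal(m) for m in (nx, ny, nz))
fx, fy, fz = (rng.standard_normal(m) for m in (nx, ny, nz))

# Brute force: build the full 3-D row and contract -> O(nx*ny*nz) per mode
row3d  = np.einsum('x,y,z->xyz', cx, cy, cz)
mode3d = np.einsum('x,y,z->xyz', fx, fy, fz)
full = (row3d * mode3d).sum()

# Separable evaluation -> O(nx + ny + nz) per mode
fast = (cx @ fx) * (cy @ fy) * (cz @ fz)

assert np.allclose(full, fast)
```

At the note's resolution (360 \times 288 \times 70), this replaces a sum over \approx 7.26 \times 10^6 points with sums over 360 + 288 + 70 points per parameter.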

This is useful if columns of E are also separable, which they are under the planned formulation. In the same notation as used above, and noting that j (not j') is a wavevector index representing k_j^x, k_j^y, k_j^z, E_{p',i',j} may be written

  E_{p',i',j} = f_{p'}^x(k_j^x, x') \, f_{p'}^y(k_j^y, y') \, f_{p'}^z(k_j^z, z'),   (39)

where f_{p'}^x(k_j^x, x'), f_{p'}^y(k_j^y, y') and f_{p'}^z(k_j^z, z') are orthogonal functions (trigonometric in the horizontal and EOFs in the vertical). Separability is useful because it makes evaluation of (36) cost effective, as follows:

  (C_K^{\circ Q} E \Lambda^{1/2})_{ip,j} = \sum_{i',p'} (C_K)^Q_{ip,i'p'} \, E_{p',i',j} \, \Lambda_{jj}^{1/2}

    = \sum_{x'} \sum_{y'} \sum_{z'} \sum_{p'} C_K^{\circ Q}(x,y,z,p; x',p') \, C_K^{\circ Q}(x,y,z,p; y',p') \, C_K^{\circ Q}(x,y,z,p; z',p') \, f_{p'}^x(k_j^x, x') \, f_{p'}^y(k_j^y, y') \, f_{p'}^z(k_j^z, z') \, \Lambda^{1/2}(k_j^x, k_j^y, k_j^z)

    = \Lambda^{1/2}(k_j^x, k_j^y, k_j^z) \sum_{p'} \left[ \sum_{x'} C_K^{\circ Q}(x,y,z,p; x',p') f_{p'}^x(k_j^x, x') \right] \left[ \sum_{y'} C_K^{\circ Q}(x,y,z,p; y',p') f_{p'}^y(k_j^y, y') \right] \left[ \sum_{z'} C_K^{\circ Q}(x,y,z,p; z',p') f_{p'}^z(k_j^z, z') \right],   (40)

where notational changes have been made for compatibility with the recent discussion. Remember that i' is shorthand for x', y', z' and j is shorthand for k_j^x, k_j^y, k_j^z. The 3-D sum in (36) has been replaced by three sums, one over each dimension (plus parameters), in (40). At the resolution of 360 \times 288 \times 70 and with five parameters, this reduces the operation count from 36 288 000 to just 3 590. This is about 10 000 times more efficient.

Putting together (16), (17), (18), (19) and (40) gives

  (P_L^f)_{ip,i'p'} = \frac{1}{K-1} \sum_{k=1}^{K} \sum_{l} (x_k)_{ip} (\Omega^{1/2})_{ip,l} \, (x_k)_{i'p'} (\Omega^{1/2})_{i'p',l}

    = \frac{\mu_{ip} \mu_{i'p'}}{K-1} \sum_{k=1}^{K} \sum_{l} (x_k)_{ip} (C_K^{\circ Q} E \Lambda^{1/2})_{ip,l} \, (x_k)_{i'p'} (C_K^{\circ Q} E \Lambda^{1/2})_{i'p',l}

    = \frac{\mu_{ip} \mu_{i'p'}}{K-1} \left[ \sum_{k=1}^{K} (x_k)_{ip} (x_k)_{i'p'} \right] \sum_{l} \left\{ \Lambda^{1/2}(k_l^x, k_l^y, k_l^z) \sum_{p''} \left[ \sum_{x''} C_K^{\circ Q}(x,y,z,p; x'',p'') f_{p''}^x(k_l^x, x'') \right] \left[ \sum_{y''} C_K^{\circ Q}(x,y,z,p; y'',p'') f_{p''}^y(k_l^y, y'') \right] \left[ \sum_{z''} C_K^{\circ Q}(x,y,z,p; z'',p'') f_{p''}^z(k_l^z, z'') \right] \right\} \times \left\{ \text{the same factor with } (x,y,z,p) \text{ replaced by } (x',y',z',p') \right\},   (41)

where

  \mu_{ip} = \left( \sum_{l} \left\{ \Lambda^{1/2}(k_l^x, k_l^y, k_l^z) \sum_{p''} \left[ \sum_{x''} C_K^{\circ Q}(x,y,z,p; x'',p'') f_{p''}^x(k_l^x, x'') \right] \left[ \sum_{y''} C_K^{\circ Q}(x,y,z,p; y'',p'') f_{p''}^y(k_l^y, y'') \right] \left[ \sum_{z''} C_K^{\circ Q}(x,y,z,p; z'',p'') f_{p''}^z(k_l^z, z'') \right] \right\}^2 \right)^{-1/2}.   (42)

The suggested algorithm is now given for this case. Costs for each loop are specified at the end of each loop; numerical terms in brackets are for the specific model domain (n = 360 \times 288 \times 70 and 5 parameters). Assume for now that L = 50.

1. Calculation of the term in (41) for (fixed) i', p' (i' denotes a particular x', y', z')
2. Loop round p''
3.   Loop round x''
4.     Evaluate \chi_{p''}^x(x'') = C_K^{\circ Q}(x',y',z',p'; x'',p'')   [K (24)]
5.   End loop (x'')   [360K (8 640)]
6.   Loop round y''
7.     Evaluate \chi_{p''}^y(y'') = C_K^{\circ Q}(x',y',z',p'; y'',p'')   [K (24)]
8.   End loop (y'')   [288K (6 912)]
9.   Loop round z''
10.    Evaluate \chi_{p''}^z(z'') = C_K^{\circ Q}(x',y',z',p'; z'',p'')   [K (24)]
11.  End loop (z'')   [70K (1 680)]
12. End loop (p'')   [3 590K (86 160)]
13. \alpha(:) = 0
14. \beta = 0
15. Loop round l
16.   Loop round p''
17.     \alpha_x = 0
18.     Loop round x''
19.       \alpha_x += \chi_{p''}^x(x'') f_{p''}^x(k_l^x, x'')
20.     End loop (x'')   [360 (360)]
21.     \alpha_y = 0
22.     Loop round y''
23.       \alpha_y += \chi_{p''}^y(y'') f_{p''}^y(k_l^y, y'')
24.     End loop (y'')   [288 (288)]
25.     \alpha_z = 0
26.     Loop round z''
27.       \alpha_z += \chi_{p''}^z(z'') f_{p''}^z(k_l^z, z'')
28.     End loop (z'')   [70 (70)]
29.     \alpha(l) += \alpha_x \alpha_y \alpha_z
30.   End loop (p'')   [3 590 (3 590)]
31.   \alpha(l) *= \Lambda^{1/2}(k_l^x, k_l^y, k_l^z)
32.   \beta += \alpha^2(l)
33. End loop (l)   [3 590L (179 500)]
34. \mu_{i'p'} = 1/\sqrt{\beta}
35. Loop around destination points in the structure function
36. Loop round i, p (i denotes a particular x, y, z)
37.   Calculation of static term in (41)
38.   \gamma = 0
39.   Loop round k
40.     \gamma += (x_k)_{ip} (x_k)_{i'p'}
41.   End loop (k)   [K (24)]
42.   Calculation of the term in (41) for (variable) i, p
43.   Loop round p''
44.     Loop round x''
45.       Evaluate \chi_{p''}^x(x'') = C_K^{\circ Q}(x,y,z,p; x'',p'')   [K (24)]
46.     End loop (x'')   [360K (8 640)]
47.     Loop round y''
48.       Evaluate \chi_{p''}^y(y'') = C_K^{\circ Q}(x,y,z,p; y'',p'')   [K (24)]
49.     End loop (y'')   [288K (6 912)]
50.     Loop round z''
51.       Evaluate \chi_{p''}^z(z'') = C_K^{\circ Q}(x,y,z,p; z'',p'')   [K (24)]
52.     End loop (z'')   [70K (1 680)]
53.   End loop (p'')   [3 590K (86 160)]
54.   \varepsilon(:) = 0
55.   \beta = 0

56.   Loop round l
57.     Loop round p''
58.       \varepsilon_x = 0
59.       Loop round x''
60.         \varepsilon_x += \chi_{p''}^x(x'') f_{p''}^x(k_l^x, x'')
61.       End loop (x'')   [360 (360)]
62.       \varepsilon_y = 0
63.       Loop round y''
64.         \varepsilon_y += \chi_{p''}^y(y'') f_{p''}^y(k_l^y, y'')
65.       End loop (y'')   [288 (288)]
66.       \varepsilon_z = 0
67.       Loop round z''
68.         \varepsilon_z += \chi_{p''}^z(z'') f_{p''}^z(k_l^z, z'')
69.       End loop (z'')   [70 (70)]
70.       \varepsilon(l) += \varepsilon_x \varepsilon_y \varepsilon_z
71.     End loop (p'')   [3 590 (3 590)]
72.     \varepsilon(l) *= \Lambda^{1/2}(k_l^x, k_l^y, k_l^z)
73.     \beta += \varepsilon^2(l)
74.   End loop (l)   [3 590L (179 500)]
75.   \mu_{ip} = 1/\sqrt{\beta}
76.   Structure function for i, p can be evaluated - see (33)
77. End loop (i, p)   [5n[K + 3 590K + 3 590L] (10^13)]

This cost can be reduced by looping only round those i that are in the same plane as i'. Instead of multiplying by n, the multiple in the last line is 360 \times 288 + 288 \times 70 + 360 \times 70 = 103 680 + 20 160 + 25 200 = 149 040. The reduced cost is then 745 200 \times [265 684] \approx 2 \times 10^11.

References

[1] Bishop C.H., Hodyss D., Ensemble covariances adaptively localized with ECO-RAP, 1: tests on simple error models, submitted to Tellus, 2008.
[2] Bishop C.H., Hodyss D., Ensemble covariances adaptively localized with ECO-RAP, 2: a strategy for the atmosphere, submitted to Tellus, 2008.