A Computational Framework for Quantifying and Optimizing the Performance of Observational Networks in 4D-Var Data Assimilation


1 A Computational Framework for Quantifying and Optimizing the Performance of Observational Networks in 4D-Var Data Assimilation. Alexandru Cioaca, Computational Science Laboratory (CSL), Department of Computer Science, Virginia Tech. August 9, 2013

2 Outline 1. Introduction 2. Fusing simulations and observations through data assimilation 3. Quantifying observation contribution: sensitivity analysis; efficient computation; experimental applications 4. Optimal observational networks: formulating the optimization problem; optimizing observation values, weights and locations 5. Ending remarks

3 Setting. Scientific context: computer-generated numerical simulations of real time-evolving processes. Focus area: Computational Fluid Dynamics, covering the atmosphere and ocean (meteorology, hydrology, climate modeling, air-quality studies, etc.), industrial processes (manufacturing, energy production, hazard proliferation, etc.), blood flow and other biological systems (medicine), and many others. General methodology: 1. Model the governing physical laws as systems of ODEs/PDEs/SDEs; 2. Solve the equations by discretizing space and time ("numerical models"); 3. Compute efficiently for speed and accuracy. A numerical simulation (forecast, prediction) is an initial value problem.

4 Challenging problems 1. Anchor simulations in reality using observations of the real system states: surface observations, weather balloons, aircraft, ships, radar, satellites, etc. 2. Mitigate the amplification of discretization errors 3. Handle the strong nonlinearity of fluid flow (turbulence, large eddies, etc.) 4. Perform model calibration and observation quality control 5. Develop adaptive strategies for simulating and observing 6. Reduce the computational cost of large-scale applications

5 Data assimilation. Assimilating data means fusing measurements and numerical simulations: an objective analysis of model states, required for restarting forecasts, obtained with numerical techniques of considerable computational cost. Various flavors of data assimilation: 1. Optimal interpolation 2. Statistical estimators 3. Ensemble-based methods (particle filter, Kalman filter) 4. Variational methods (3D, 4D). 4D-Var (four-dimensional variational) requires: 1. A priori estimates of the model states (background) 2. Measurements of the real system (observations) 3. Error statistics 4. Numerical models for forecast and sensitivity analysis

6 4D-Var data assimilation. PDE-constrained optimization problem:

J(x_0) = \frac{1}{2} (x_0 - x_b)^T B^{-1} (x_0 - x_b) + \frac{1}{2} \sum_{k=1}^{N} (H_k(x_k) - y_k)^T R_k^{-1} (H_k(x_k) - y_k)

x^a = \arg\min_{x_0} J(x_0) subject to x_k = M_{t_0 \to t_k}(x_0).

Notation: J: 4D-Var cost function; x_0: initial solution (at t_0); x_b: background initial solution; B: background error covariance; y_k: observations; R_k: observation error covariance; H_k: observation selection operator; M: nonlinear model (forecast); x^a: improved initial solution.

7 4D-Var interpretation: fit model predictions to data; PDE-constrained nonlinear optimization; minimize the uncertainty of the model states; inverse problem for maximum-likelihood estimation of parameters; reconstruct initial conditions, boundary conditions and other parameters.

8 4D-Var solution. x^a is called the minimizer, optimal solution or optimum of J(x_0); it represents an improved estimate of the initial model states. 4D-Var first-order optimality condition:

\nabla_{x_0} J(x^a) = B^{-1} (x^a - x_b) + \sum_{k=1}^{N} M_{0,k}^T H_k^T R_k^{-1} (H_k(x_k) - y_k) = 0

M_{0,k} and H_k are the linearized operators corresponding to M_{t_0 \to t_k} and H_k. x^a is the 4D-Var analysis; obtaining x^a by minimizing the 4D-Var cost function is "assimilating the data". It is difficult to compute x^a directly for real problems.

9 Computing the 4D-Var solution. The 4D-Var optimization is solved iteratively using gradient-based solvers: quasi-Newton, nonlinear conjugate gradients, truncated Newton. Each solver iteration requires: 1. Evaluating the 4D-Var cost function at the current iterate (forecast model runs and simple algebraic operations); 2. Generating the descent direction to the next iterate (computing the gradient/Hessian of the objective cost function); 3. Advancing along the descent direction through line search or trust regions (more forecast model runs and gradient/Hessian evaluations). 4D-Var is characterized by a large computational cost. For large-scale applications like weather forecasting, the solver is stopped after a few iterations, before reaching the global optimum. A minimal sketch of this solver loop follows.
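
The loop above can be made concrete with a small Python sketch, which minimizes a toy 4D-Var cost with L-BFGS. This is an illustration only, not the thesis code: the linear model M, the operators, the covariances and the noise levels are synthetic stand-ins.

import numpy as np
from scipy.optimize import minimize

n, N = 50, 5                      # state size, number of observation times
rng = np.random.default_rng(0)
M = np.eye(n) + 0.01 * rng.standard_normal((n, n))   # one-step linear model
H = np.eye(n)                                        # observe every state
B_inv = np.eye(n)                 # inverse background error covariance
R_inv = np.eye(n)                 # inverse observation error covariance

x_true = rng.standard_normal(n)
x_b = x_true + 0.1 * rng.standard_normal(n)          # background estimate

# Synthetic observations along the true trajectory
xs, ys = x_true.copy(), []
for k in range(N):
    xs = M @ xs
    ys.append(H @ xs + 0.01 * rng.standard_normal(n))

def cost_and_grad(x0):
    """4D-Var cost J(x0) and its gradient via the transposed (adjoint) model."""
    J = 0.5 * (x0 - x_b) @ B_inv @ (x0 - x_b)
    g = B_inv @ (x0 - x_b)
    x, innovations = x0.copy(), []
    for k in range(N):                     # forward sweep
        x = M @ x
        d = H @ x - ys[k]
        J += 0.5 * d @ R_inv @ d
        innovations.append(d)
    lam = np.zeros(n)                      # backward (adjoint) sweep
    for k in reversed(range(N)):
        lam = M.T @ (lam + H.T @ (R_inv @ innovations[k]))
    return J, g + lam

res = minimize(cost_and_grad, x_b, jac=True, method="L-BFGS-B",
               options={"maxiter": 50})
x_a = res.x                       # the 4D-Var analysis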

10 General research interests on 4D-Var. Experimental setting: 1. Collecting observations y 2. Background estimation x_b 3. Learning error statistics B, R_k. Efficient computation: 1. Faster models 2. Better solvers 3. Preconditioning / accelerating convergence

11 Our research interests on 4D-Var 1. Quantify the contribution of each observation in reducing uncertainty 2. Devise efficient computation techniques (speed + accuracy) for practical use 3. Optimize the process of collecting and processing observations

12 Quantifying the contribution of observations. Theoretical frameworks: 1. Observation impact: how much did each observation contribute to reducing the error? 2. Sensitivity analysis: to which observations is the forecast error most sensitive? 3. Information theory: how much information content does each observation carry? 4. Statistical design: how much trust can we put in each observation? 5. Observability, predictability, controllability: which system states are more difficult to determine solely from simulations? Experimental frameworks: observing system experiments (real observations); observing system simulation experiments (synthetic observations).

13 Sensitivity analysis describes locally valid linear dependencies between the inputs and outputs of a dynamical system:

x'_f = \left( \frac{\partial x_f}{\partial x_0} \right) x'_0

Large sensitivities: small input variations translate into large output variations. Also applied to scalar cost functions defined on the output model states:

\nabla_{x_0} E(x_f) = \left( \frac{\partial x_f}{\partial x_0} \right)^T \nabla_{x_f} E

WANTED: sensitivity of E to y_k, R_k, x_b, B, H, etc. PROPOSED APPROACH: chain-rule derivation of the sensitivity equations.

14 Computing sensitivity values. Classic approach: finite differences. Uses just the forward (forecast, nonlinear) numerical model; requires two or more model runs from perturbed initial solutions; still used operationally, but obsolete. Our approach: adjoint models. Involves building auxiliary numerical models counterpart to the forward model; solves the linearized differential equations of the original system by time integration; the tangent linear model and its (first-order) adjoint deliver first-order derivatives; the second-order adjoint model delivers curvature information (computationally expensive). The two approaches are contrasted in the sketch below.
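
A small sketch contrasting the finite-difference sensitivity with tangent-linear/adjoint computations on a toy model step; the quadratic map f and its hand-coded tangent linear and adjoint are illustrative stand-ins, not the thesis models.

import numpy as np

rng = np.random.default_rng(1)
A = 0.1 * rng.standard_normal((4, 4))

def f(x):            # toy nonlinear forward step
    return x + A @ (x * x)

def f_tlm(x, dx):    # tangent linear model: Jacobian-vector product
    return dx + A @ (2.0 * x * dx)

def f_adj(x, lam):   # first-order adjoint: transposed Jacobian-vector product
    return lam + 2.0 * x * (A.T @ lam)

x0, dx = rng.standard_normal(4), rng.standard_normal(4)
eps = 1e-6
fd  = (f(x0 + eps * dx) - f(x0)) / eps        # two forward runs
tlm = f_tlm(x0, dx)                           # one tangent linear run
print(np.linalg.norm(fd - tlm))               # small: FD accuracy only

# Adjoint consistency check: <TLM(x) dx, lam> == <dx, ADJ(x) lam>
lam = rng.standard_normal(4)
print(f_tlm(x0, dx) @ lam - dx @ f_adj(x0, lam))   # ~ machine precision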

15 Deriving the 4D-Var sensitivity equations. The 4D-Var optimality condition depends on the model states x^a AND on parameters u:

\nabla_x J(x^a, u) = 0

Implicit dependence between x^a and u: x^a = x^a(u). Apply the implicit function theorem to the optimality condition to obtain the sensitivity of x^a to u:

\nabla_u x^a(u) = -\nabla^2_{u,x} J(x^a, u) \left[ \nabla^2_{x,x} J(x^a, u) \right]^{-1}

16 Deriving the 4D-Var sensitivity equations. Replacing u with y_k we have:

\nabla_{y_k} x^a = -\nabla^2_{y_k,x} J(x^a) \left[ \nabla^2_{x,x} J(x^a) \right]^{-1}

Differentiate the first-order optimality condition

\nabla_x J(x^a) = B^{-1} (x^a - x_b) + \sum_{k=1}^{N} M_{0,k}^T H_k^T R_k^{-1} (H_k(x^a_k) - y_k) = 0

with respect to y_k to obtain

\nabla^2_{y_k,x} J(x^a) = -R_k^{-1} H_k M_{0,k}

The sensitivity of the 4D-Var analysis x^a to the observations y_k is therefore:

\nabla_{y_k} x^a = R_k^{-1} H_k M_{0,k} \left( \nabla^2_{x,x} J(x^a) \right)^{-1}

17 Deriving the 4D-Var sensitivity equations. Consider the forecast score E as the squared error with respect to a verification forecast:

E(x^a) = (x^a_F - x^v_F)^T C (x^a_F - x^v_F)

x^v_F can be seen as a reference forecast; C is a matrix used to prescribe scaling, weighting or selection. Using chain-rule differentiation we obtain the sensitivity of the forecast score:

\nabla_{y_k} E(x^a) = \nabla_{y_k} x^a \cdot \nabla_x E(x^a) = \nabla_{y_k} x^a \cdot M_{0,F}^T C (x^a_F - x^v_F)

18 Deriving the 4D-Var sensitivity equations. Sensitivity to the observations y_k:

\nabla_{y_k} E(x^a) = R_k^{-1} H_k M_{0,k} \left( \nabla^2_{x,x} J(x^a) \right)^{-1} \nabla_x E(x^a)

Sensitivity to the observation error covariance R_k:

\nabla_{R_k} E = -R_k^{-1} \left[ H_k(x^a_k) - y_k \right] \left( \nabla_{y_k} E(x^a) \right)^T

19 Computing the sensitivity analysis. Forecast sensitivity to observations:

\nabla_{y_k} E(x^a) = R_k^{-1} H_k M_{0,k} \left( \nabla^2_{x,x} J(x^a) \right)^{-1} \nabla_x E(x^a)

The computation can be split into three steps:

1. \nabla_x E(x^a) = M_{0,F}^T C (x^a_F - x^v_F)
2. Solve \nabla^2_{x,x} J(x^a, y) \, \mu = \nabla_x E(x^a) for the "supersensitivity" \mu
3. \nabla_{y_k} E(x^a) = R_k^{-1} H_k M_{0,k} \, \mu

We must build the required computational tools: 1. adjoint models for evaluating products with M, M^T and \nabla^2_{x,x} J(x^a, y); 2. linear iterative solvers for obtaining the supersensitivity \mu; 3. preconditioning and multigrid for accelerating convergence; 4. spectral decomposition for low-rank approximations of the sensitivity values. Step 2 is sketched below.
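
Step 2 in matrix-free form, sketched with SciPy's conjugate gradient. Here a synthetic symmetric positive definite matrix stands in for the second-order-adjoint Hessian-vector product; only the matvec is exposed to the solver.

import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 200
rng = np.random.default_rng(2)
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
hess = Q @ np.diag(np.linspace(1.0, 50.0, n)) @ Q.T   # SPD Hessian stand-in

def hess_vec(v):
    # In practice: one second-order adjoint model run per product
    return hess @ v

grad_E = rng.standard_normal(n)            # forecast-score gradient (step 1)
Aop = LinearOperator((n, n), matvec=hess_vec, dtype=np.float64)

# Step 2: solve (nabla^2 J) mu = nabla_x E for the supersensitivity mu
mu, info = cg(Aop, grad_E)
assert info == 0

# Step 3 (schematic): dE/dy_k = R_k^{-1} H_k M_{0,k} mu,
# in practice one tangent linear model run applied to mu.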

20 Research objectives 1. Test problem: 2D shallow water equations 2. Build adjoint models for computing model state derivatives 3. Validate the benefits of second-order adjoint models 4. Interface the numerical tools according to the sensitivity analysis methodology 5. Efficient computation through preconditioning, multigrid and low-rank approximations 6. Use sensitivities to optimize the parameters of the observational network

21 Test model: 2D shallow water equations. The shallow water equations (Saint-Venant) describe fluid movement:

\partial_t h + \partial_x (uh) + \partial_y (vh) = 0
\partial_t (uh) + \partial_x \left( u^2 h + \frac{1}{2} g h^2 \right) + \partial_y (uvh) = 0
\partial_t (vh) + \partial_x (uvh) + \partial_y \left( v^2 h + \frac{1}{2} g h^2 \right) = 0

Simulated physical variables: h(t,x,y) is the fluid layer thickness (height); u(t,x,y) and v(t,x,y) are the components of the velocity field. A simplified version of the primitive equations of the atmosphere: conservation of mass and momentum; no terms for pressure, friction, Coriolis or viscosity. Used to model gravity waves in the atmosphere and ocean. A minimal right-hand-side sketch follows.
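
For illustration, a minimal right-hand side for the conservative form above on a periodic grid, using centered differences via np.roll; the thesis models use finite-volume and upwind discretizations instead, so this is a sketch of the equations, not of SWE_EXP or SWE_IMP.

import numpy as np

g = 9.81

def swe_rhs(h, uh, vh, dx, dy):
    """Time derivatives of (h, uh, vh) for the 2D shallow water equations."""
    def ddx(f):  # centered x-derivative, periodic boundaries
        return (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / (2 * dx)
    def ddy(f):  # centered y-derivative, periodic boundaries
        return (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / (2 * dy)
    u, v = uh / h, vh / h
    dh  = -(ddx(uh) + ddy(vh))                              # mass
    duh = -(ddx(u * uh + 0.5 * g * h**2) + ddy(v * uh))     # x-momentum
    dvh = -(ddx(u * vh) + ddy(v * vh + 0.5 * g * h**2))     # y-momentum
    return dh, duh, dvh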

22 SWE numerical models. 1. SWE_EXP: explicit time discretization, 4th-order Runge-Kutta; finite volume space discretization; adjoint models built by automatic differentiation; CPU time ratios FWD : TLM : FOA : SOA of 1 : 4.5 : 4.5 : 14 (slower models). 2. SWE_IMP: implicit time discretization, Crank-Nicolson; 3rd-order upwind finite difference space discretization; adjoint models built by manual differentiation; CPU time ratios FWD : TLM : FOA : SOA of 1 : 0.1 : 0.1 : 0.15 (faster models). Periodic boundary conditions; Cartesian grid of size 40 x 40 (4800 variables). Number of time steps 1; Time step size

23 Implicit timestepping discretization schemes.

Forward model (FWD):
x^{n+1} - \frac{\Delta t}{2} f(x^{n+1}) = x^n + \frac{\Delta t}{2} f(x^n)

Tangent linear model (TLM):
\left( I - \frac{\Delta t}{2} f'(x^{n+1}) \right) \delta x^{n+1} = \left( I + \frac{\Delta t}{2} f'(x^n) \right) \delta x^n

First-order adjoint model (FOA):
\left( I - \frac{\Delta t}{2} f'(x^{n+1}) \right)^T \lambda^n = \left( I + \frac{\Delta t}{2} f'(x^n) \right)^T \lambda^{n+1}

Second-order adjoint model (SOA):
\left( I - \frac{\Delta t}{2} f'(x^{n+1}) \right)^T \sigma^n = \left( I + \frac{\Delta t}{2} f'(x^n) \right)^T \sigma^{n+1} + \frac{\Delta t}{2} \left( f''(x^n) \cdot \delta x^n \right)^T \lambda^{n+1} + \frac{\Delta t}{2} \left( f''(x^{n+1}) \cdot \delta x^{n+1} \right)^T \lambda^n

A runnable sketch of the FWD/TLM/FOA solves is given below.
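
A runnable sketch of one Crank-Nicolson step and its tangent linear and first-order adjoint counterparts for a linear model f(x) = F x, where the scheme reduces exactly to the systems above; the matrix F is a synthetic stand-in.

import numpy as np

rng = np.random.default_rng(3)
n, dt = 30, 0.01
F = -np.eye(n) + 0.1 * rng.standard_normal((n, n))   # Jacobian f'(x) = F
Am = np.eye(n) - 0.5 * dt * F                        # I - (dt/2) f'
Ap = np.eye(n) + 0.5 * dt * F                        # I + (dt/2) f'

def fwd_step(x):      # (I - dt/2 f') x^{n+1} = (I + dt/2 f') x^n
    return np.linalg.solve(Am, Ap @ x)

def tlm_step(dx):     # the same linear system applied to the perturbation
    return np.linalg.solve(Am, Ap @ dx)

def foa_step(lam):    # transposed system, run backward in time
    return np.linalg.solve(Am.T, Ap.T @ lam)

# Adjoint identity: <TLM dx, lam> == <dx, FOA lam>
dx, lam = rng.standard_normal(n), rng.standard_normal(n)
print(tlm_step(dx) @ lam - dx @ foa_step(lam))   # ~ machine precision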

24 Data assimilation testing scenario: the circular dam problem. Reference solution: build h as a Gaussian pulse, u and v as constant fields; the solution can be propagated through the model to make the variables consistent. Figure: model solution for the height variable at (a) the initial time and (b) the final time. Note: solving the 4D-Var problem using the information provided by adjoint models is equivalent to using them for sensitivity analysis.

25 Crafting the data assimilation scenario. Background solution: B is a correlation matrix scaled with the reference solution; x_b is the reference solution plus noise of standard deviation 8%. Observations: available at each grid point at the final time (t_1); R_k is a diagonal error covariance matrix (observations uncorrelated); y_k is the reference solution run plus white noise. Models: interface the models with the data; forcing terms enter the adjoints. Comparison of different nonlinear solvers: first-order adjoints: L-BFGS, TNFD, HYBRID, NLCG, CGDESC; second-order adjoints: DANCG, TNSOA. A sketch of this twin-experiment setup follows.
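
A sketch of crafting such a twin-experiment (OSSE) scenario: a reference run supplies the "truth", the background is the perturbed reference, and the observations are the final-time reference states plus white noise. The function model_run and the observation noise level sigma_o are hypothetical stand-ins; the 8% background perturbation follows the slide.

import numpy as np

rng = np.random.default_rng(4)

def make_scenario(x_ref, model_run, sigma_b=0.08, sigma_o=0.01):
    """Return background x_b and synthetic observations y at final time t_1."""
    x_b = x_ref * (1.0 + sigma_b * rng.standard_normal(x_ref.shape))
    x_final = model_run(x_ref)              # propagate the reference to t_1
    y = x_final + sigma_o * rng.standard_normal(x_final.shape)
    return x_b, y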

26 Adjoint sensitivity analysis for minimizing 4D-Var. Figure: solver convergence (relative reduction in the cost function versus time scaled by forward model runs) for SWE_EXP (left) and SWE_IMP (right); solvers shown: BFGS, HYBRID, TNFD, TNSOA, NLCG, CGDESC, DANCG. SWE_EXP: solvers using SOAs can converge as fast as those using FOAs. SWE_IMP: solvers using SOAs can converge faster than those using FOAs.

27 4D-Var data assimilation results. Figure: the observations assimilated with 4D-Var (final time) and the resulting initial condition (4D-Var analysis, initial time). The adjoint models needed to solve the 4D-Var sensitivity equations are now available; we can proceed to computing the 4D-Var forecast sensitivity to observations.

28 Computing 4D-Var forecast sensitivity to observations

29 Computing 4D-Var forecast sensitivity to observations. The computational cost is dominated by the solution of the linear system

\nabla^2_{x,x} J(x^a) \, \mu = \nabla_x E(x^a)

We need Hessian-vector products, which can be evaluated via:

1. Second-order adjoint models;
2. Finite differences of adjoint (gradient) runs: \nabla^2_{x,x} J(x^a) \, u \approx \left( \nabla_x J(x^a + \epsilon u) - \nabla_x J(x^a) \right) / \epsilon;
3. The Gauss-Newton approximation obtained by differentiating 4D-Var twice: \nabla^2_{x,x} J(x^a) \approx B^{-1} + \sum_{k=1}^{N} M_{0,k}^T H_k^T R_k^{-1} H_k M_{0,k};
4. Convergence information generated during the minimization of 4D-Var.

The Hessian is also the inverse of the error covariance matrix of the 4D-Var analysis: \nabla^2_{x,x} J(x^a) = A^{-1}. Option 2 is sketched below.
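
Option 2 (finite differences of gradient runs) in a few lines; grad_J stands in for a full 4D-Var gradient computation, and the quadratic test cost makes the check simple.

import numpy as np

def hess_vec_fd(grad_J, x_a, u, eps=1e-6):
    """Approximate (nabla^2 J)(x_a) @ u from two gradient (adjoint) runs."""
    return (grad_J(x_a + eps * u) - grad_J(x_a)) / eps

# Check on a quadratic cost J(x) = 0.5 x^T A x, for which grad J(x) = A x
rng = np.random.default_rng(5)
A = rng.standard_normal((20, 20)); A = A @ A.T
x_a, u = rng.standard_normal(20), rng.standard_normal(20)
print(np.linalg.norm(hess_vec_fd(lambda x: A @ x, x_a, u) - A @ u))  # ~1e-9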

30 Solving the linear system. Working in a matrix-free environment restricts the set of numerical tools. Linear solvers: Krylov solvers; preconditioners built from knowledge of the problem; reuse of computation from minimizing 4D-Var; few Hessian-vector evaluations. Multigrid: numerical models with coarser space discretizations; Krylov solvers as smoothing operators; transfer operators from the error correlations. As shown during the preliminary exam, preconditioning and multigrid improve the convergence of the iterative linear solvers.

31 Figure: forecast sensitivity to h observations and approximation errors: (a) perfect observations via second-order adjoint models; (b) noisy observations via second-order adjoint models (error); (c) perfect observations via finite differences (error); (d) perfect observations via Gauss-Newton (error).

32 Data pruning. Figure: (a) location of HIGH (red) and LOW impact observations; (b) h RMS error decrease versus the number of L-BFGS iterations; (c) u RMS error decrease versus the number of L-BFGS iterations; (d) v RMS error decrease versus the number of L-BFGS iterations; each panel compares the FULL, HIGH and LOW observation sets (log scale).

33 Sensor malfunctioning. Figure: (a) 4D-Var increment; (b) supersensitivity field; (c) sensitivity to observations. The observation sensitivity field when the assimilated data is corrupted at two locations with coordinates (10,10) and (20,20). The location of the faulty sensors is unknown to the data assimilation system, but is retrieved via the observation impact methodology.

34 Computing low-rank approximations for the 4D-Var forecast sensitivity to observations

35 Singular value decomposition for the observation sensitivity matrix. Consider the sensitivity matrix mapping from model space to observation space:

T = \nabla_y x^a = R_k^{-1} H_k M_{0,k} A

Singular value decomposition (SVD) is a popular technique in image reconstruction, information retrieval, data analysis, etc., used for principal component analysis, reduced order modeling and error estimation. For a given matrix T (not necessarily square), the SVD is the factorization

T = U S V^T

where U holds the left singular vectors as matrix columns, S holds the singular values as its diagonal entries, and V holds the right singular vectors as matrix columns.

36 Low-rank approximations for sensitivity to observations. Each triplet (U_i, S_i, V_i) captures one singular mode of the action of T; the leading (dominant) singular vectors are associated with the largest singular values. Low-rank approximations of T (rank p) come from the dominant modes in the truncated SVD:

T_{(p)} = U_{(p)} S_{(p)} V_{(p)}^T

It is computationally challenging to perform the SVD of large matrices, and even more challenging for a chained sequence of matrix-free operators such as T. We propose two approaches for matrix-free SVD: 1. Iterative: computing the leading singular pairs one at a time; 2. Parallel: computing the leading singular pairs all at once. A matrix-free truncated SVD is sketched below.
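
A matrix-free truncated SVD can be sketched with scipy.sparse.linalg.svds, which only needs the forward action (matvec, a chain of model runs) and the adjoint action (rmatvec, a chain of adjoint runs) of T; the dense matrix behind the operator here is a synthetic, nearly low-rank stand-in.

import numpy as np
from scipy.sparse.linalg import LinearOperator, svds

rng = np.random.default_rng(6)
m, n, p = 300, 200, 16
T_dense = rng.standard_normal((m, 5)) @ rng.standard_normal((5, n)) \
          + 1e-3 * rng.standard_normal((m, n))        # nearly rank-5 operator

T = LinearOperator((m, n),
                   matvec=lambda v: T_dense @ v,       # forward model chain
                   rmatvec=lambda w: T_dense.T @ w,    # adjoint model chain
                   dtype=np.float64)

U, S, Vt = svds(T, k=p)                                # leading p singular modes
T_p = U @ np.diag(S) @ Vt                              # rank-p approximation
print(np.linalg.norm(T_dense - T_p) / np.linalg.norm(T_dense))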

37 Deriving the serial (iterative) algorithm. Compute the leading eigenvectors of the product of the observation impact matrix with its transpose, T^T T = V S^2 V^T, where

T^T T = A \left( \sum_{k=1}^{N} \bar{M}_{0,k}^T \bar{M}_{0,k} \right) A = \sum_{k=1}^{N} T_k^T T_k, \quad \bar{M}_{0,k} = R_k^{-1} H_k M_{0,k}, \quad T_k = \bar{M}_{0,k} A.

Start from the truncated eigenvalue decomposition of the Hessian A^{-1}:

A = (V D V^T)^{-1} = V D^{-1} V^T \approx V_{(p)} D_{(p)}^{-1} V_{(p)}^T = A_{(p)}.

The eigenvalue decomposition of A (or A^{-1}) can be obtained via Jacobi-Davidson (JDQZ), Lanczos, Arnoldi, or other Krylov-based approaches.

38 Deriving the iterative algorithm. Plug the low-rank approximation A_{(p)} into the expression of T_k^T T_k:

\left( V_{(p)} D_{(p)}^{-1} V_{(p)}^T \right) \bar{M}_{0,k}^T \bar{M}_{0,k} \left( V_{(p)} D_{(p)}^{-1} V_{(p)}^T \right) = V_{(p)} D_{(p)}^{-1} W_k^T W_k D_{(p)}^{-1} V_{(p)}^T, \quad W_k = \bar{M}_{0,k} V_{(p)}.

Efficient computation: perform an economy SVD on the small product matrix,

D_{(p)}^{-1} W^T W D_{(p)}^{-1} = V_{red} D_{red} V_{red}^T.

The low-rank approximation for T is then obtained as

T^T T \approx \left( V_{(p)} V_{red} \right) D_{red} \left( V_{(p)} V_{red} \right)^T,

where D_{red} is the matrix of dominant singular values and V_{(p)} V_{red} the matrix of singular vectors.

39 Iterative algorithm: 1. Solve iteratively the eigenvalue problem for the 4D-Var Hessian; 2. Map the newly generated eigenvectors through the tangent linear model; 3. Compute the truncated SVD of the resulting (small-size) matrix; 4. Project the left singular vectors onto the eigenvector basis of the 4D-Var Hessian; 5. Build the low-rank approximation of T.

40 Deriving the parallel algorithm. Random sampling techniques exhibit trivial parallelism. 1. Draw p random vectors and form a matrix \Omega. 2. Compute the product Y = A^{-1} \Omega using Hessian-vector multiplications, i.e., running the second-order adjoint model for each column. 3. Construct the QR decomposition Y = QR. Q is an orthonormal basis for the range of Y, and also the orthonormal factor in

A^{-1} \approx Q B, \quad B = Q^T A^{-1}, \quad B^T = A^{-1} Q.

Efficient computation: perform an economy SVD of B to obtain

A^{-1} \approx Q U_B \Sigma_B V_B^T = U_A \Sigma_B V_B^T.

Obtain the approximate T from TLM runs with the columns of the pseudoinverse of the Hessian:

T_p \approx \bar{M}_{0,k} (A^{-1})_p^+ = \bar{M}_{0,k} V_B \Sigma_B^+ U_A^T.

41 Parallel algorithm: 1. Build the matrix B through parallel second-order adjoint runs; 2. Compute a full SVD of B; 3. Project the left singular vectors of B through Q and form the SVD of A^{-1}; 4. Compute the Hessian pseudoinverse (A^{-1})^+; 5. Build the impact matrix T through tangent linear runs. A randomized sketch of steps 1-3 follows.
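
A sketch of the randomized (Halko-type) scheme behind steps 1-3. Here hess_vec stands in for one second-order adjoint run, each column of Y could be computed in parallel, and the rapidly decaying synthetic spectrum is chosen so a rank-20 approximation is accurate; since the Hessian is symmetric, B = Q^T A^{-1} is obtained as (A^{-1} Q)^T with p more products.

import numpy as np

rng = np.random.default_rng(7)
n, p = 200, 20
Q0 = np.linalg.qr(rng.standard_normal((n, n)))[0]
D = 2.0 ** (-np.arange(n, dtype=float))            # rapidly decaying spectrum
Hess = Q0 @ np.diag(D) @ Q0.T                      # SPD Hessian stand-in

def hess_vec(v):                 # one SOA model run per call, in practice
    return Hess @ v

Omega = rng.standard_normal((n, p))                          # random probes
Y = np.column_stack([hess_vec(Omega[:, j]) for j in range(p)])
Q, _ = np.linalg.qr(Y)                                       # range basis of Y
B = np.column_stack([hess_vec(Q[:, j]) for j in range(p)]).T  # B = Q^T Hess
Ub, Sb, Vbt = np.linalg.svd(B, full_matrices=False)           # economy SVD
U_A = Q @ Ub                     # approximate SVD: Hess ~ U_A diag(Sb) Vbt
print(np.linalg.norm(Hess - U_A @ np.diag(Sb) @ Vbt) / np.linalg.norm(Hess))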

42 Low-rank observation impact. Figure: (a) the singular value spectrum (singular value decay) for the observation impact matrix T and (b) the corresponding truncation error norm (RMS versus rank).

43 Low-rank observation impact. Figure: (a) the low-rank estimate of the observation sensitivity for h data and (b) the associated truncation error field, for 16 modes.

44 Low-rank observation impact. Figure: (a) full-rank impact for a single center observation; (b) low-rank impact for a single center observation; (c) full-rank impact for a single corner observation; (d) low-rank impact for a single corner observation.

45 Minimizing the forecast error with respect to parameters of the observational network

46 Optimizing the observational network. Consider again the forecast score E defined on the data assimilation output:

E(x^a) = (x^a_F - x^v_F)^T C (x^a_F - x^v_F).

We can now compute the sensitivity of E to parameters u:

\nabla_u E(x^a) = \left( \nabla_u x^a \right) \nabla_x E(x^a).

The sensitivities describe directions of descent for E in the u parameter space. Proposed approach: nonlinear optimization of the parameter values: 1. Solve the 4D-Var optimization problem to obtain the improved initial condition x^a; 2. Evaluate the cost function E; 3. Compute the sensitivity to observations \nabla_{y_k} E(x^a); 4. Change the parameter values along the descent direction, y_k^{new} = y_k^{prev} - \alpha \nabla_{y_k} E(x^a); 5. Repeat. This loop is sketched below.
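
The proposed outer loop in schematic form; run_4dvar, obs_sensitivity and the step length alpha are hypothetical stand-ins for the machinery built earlier in the talk, passed in as callables so the sketch stays self-contained.

import numpy as np

def optimize_observations(y0, run_4dvar, obs_sensitivity, alpha=0.1, iters=20):
    """Gradient-descent outer loop over the observation values y (schematic)."""
    y = np.asarray(y0, dtype=float).copy()
    for _ in range(iters):
        x_a = run_4dvar(y)                  # inner 4D-Var solve (assimilation)
        grad_y = obs_sensitivity(x_a, y)    # dE/dy via the adjoint machinery
        y = y - alpha * grad_y              # descent step toward verification
    return y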

47 Nonlinear optimization problem: optimization constrained by optimization (data assimilation); tuning the parameters of an optimization problem; meta-optimization. Objective cost function:

u^{opt} = \arg\min_u E(x^a_v) subject to x^a = \arg\min_x J(x, u), \quad x^a_v = M_{t_0 \to t_v}(x^a).

First-order optimality condition:

\nabla_u E(x^a_v(u)) = -\nabla^2_{u,x} J \left( \nabla^2_{x,x} J \right)^{-1} M_{0,v}^T C \left( x^a_v - x^{verif}_v \right) = 0.

48 Application #1: optimal observation values. Objective cost function for u = y_k:

(y_k)^{opt} = \arg\min_{y_k} E(x^a_v) subject to \nabla_x J(x^a, y_k) = 0, \quad x^a_v = M_{t_0 \to t_v}(x^a).

We already have the gradient of E with respect to y_k:

\nabla_{y_k} E = R_k^{-1} H_k M_{0,k} \left( \nabla^2_{x,x} J(x^a) \right)^{-1} M_{0,v}^T C \left( x^a_v - x^{verif}_v \right).

This reconstructs the observation values which, once assimilated with 4D-Var, lead to the verification forecast. Useful for detecting observational errors or for crafting simulated experiments.

49 Application #1: optimal observation values. The reference model trajectory has the height field aligned along the South-North direction. Figure: the reference height field h at (a) the initial time and (b) the observation time.

50 Application #1: optimal observation values. Assimilating the wrong observations leads to an initial solution aligned along the East-West direction. Figure: (a) the faulty (unoptimized) observations of the height field h and (b) the corresponding 4D-Var analysis.

51 Application #1: optimal observation values. The forecast score quantifies the error between the 4D-Var forecast and the verification forecast. By adjusting the values of the argument (the observations to be assimilated), the 4D-Var forecast gets closer and closer to the verification. Eventually, the optimized observation values correspond to what we would have expected to observe based on the reference trajectory. Figure: (a) the minimization of the verification cost function E (L-BFGS convergence over outer iterations) and (b) the optimized observations at assimilation time t_1.

52 Application #2: optimal observation weights. Objective cost function for u = R_k:

(R_k)^{opt} = \arg\min_{R_k} E(x^a_v) subject to \nabla_x J(x^a, R_k) = 0, \quad x^a_v = M_{t_0 \to t_v}(x^a).

The gradient of E with respect to R_k can be computed from the gradient with respect to y_k:

\nabla_{R_k} E = -R_k^{-1} \left[ H_k(x^a_k) - y_k \right] \left( \nabla_{y_k} E \right)^T.

In 4D-Var, each observation is associated with a weight representing a measure of trust; the weighting coefficients are prescribed through the observation error covariance R_k. Previous research in this field only succeeded at tuning the global weighting of B versus R_k. Our approach aims at dynamically tuning the observation weights so as to reduce the 4D-Var forecast error.

53 Application #2: optimal observation weights. Observations on a subdomain contain a significant level of noise; the 4D-Var analysis does not clearly reflect the presence of the noise. Figure: (a) the perfect h observations, (b) the prescribed observation noise (physical variable units), and (c) the resulting 4D-Var analysis using the initial specification of the error covariances.

54 Application #2: optimal observation weights. The forecast score quantifies the error between the 4D-Var forecast and the verification forecast. By adjusting the values of the argument (the observation weights), the 4D-Var forecast gets closer and closer to the verification. Eventually, the optimized observation weights reflect the noise present in the observations. Figure: (a) the minimization of the verification cost function (L-BFGS convergence), (b) the optimized h observation error covariances, and (c) the resulting 4D-Var analysis using the improved values.

55 Application #3: optimal sensor locations. For this application we use sparse observations, so we must specify the observation selection operator H through an interpolation scheme. Inverse distance weighting (IDW) interpolation is widely used in geographic information systems:

H_k(l_x, l_y; z) = \frac{\sum_i d_i^{-1} z_i}{\sum_i d_i^{-1}} if d_i \neq 0 for all i, and H_k(l_x, l_y; z) = z_i if d_i = 0,

where d_i = \left[ (l_x - l_{x_i})^2 + (l_y - l_{y_i})^2 \right]^{1/2}.

Gradient of H with respect to the 2D location (l_x, l_y):

\nabla_{l_x} H_k(l_x, l_y; z) = -\frac{\sum_i \sum_j d_i^{-3} d_j^{-1} (l_x - l_{x_i}) (z_i - z_j)}{\left( \sum_i d_i^{-1} \right)^2},

and analogously for l_y. A sketch with a finite-difference check follows.
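
The IDW operator and its analytic location gradient, with a finite-difference check; the scattered points and values are synthetic, and the derivative is obtained by differentiating the weighted average with weights 1/d_i.

import numpy as np

def idw(lx, ly, xi, yi, z, eps=1e-12):
    """Inverse-distance-weighted interpolation at sensor location (lx, ly)."""
    d = np.hypot(lx - xi, ly - yi)
    if d.min() < eps:                      # sensor sits on a grid point
        return z[d.argmin()]
    w = 1.0 / d
    return (w @ z) / w.sum()

def idw_grad_lx(lx, ly, xi, yi, z):
    """dH/dlx from the quotient rule applied to (sum w_i z_i) / (sum w_i)."""
    d = np.hypot(lx - xi, ly - yi)
    w = 1.0 / d
    dwdlx = -(lx - xi) / d**3              # derivative of w_i = 1/d_i
    W, Z = w.sum(), w @ z
    return (dwdlx @ z) / W - Z * dwdlx.sum() / W**2

# Finite-difference check of the analytic gradient
rng = np.random.default_rng(8)
xi, yi, z = rng.random(25), rng.random(25), rng.random(25)
lx, ly, h = 0.5, 0.5, 1e-7
fd = (idw(lx + h, ly, xi, yi, z) - idw(lx - h, ly, xi, yi, z)) / (2 * h)
print(fd - idw_grad_lx(lx, ly, xi, yi, z))   # small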

56 Application #3: optimal sensor locations. Objective cost function for u = (l_x, l_y):

(l_x, l_y)^{opt} = \arg\min_{l_x, l_y} E(x^a_v(l_x, l_y)) subject to \nabla_x J(x^a, l_x, l_y) = 0, \quad x^a_v = M_{t_0 \to t_v}(x^a).

Gradient of E with respect to the locations (l_x, l_y):

\nabla_{l_x} E = -\nabla^2_{l_x,x} J \left( \nabla^2_{x,x} J(x^a) \right)^{-1} M_{0,v}^T C \left( x^a_v - x^{verif}_v \right).

Differentiating the 4D-Var first-order optimality condition with respect to the locations gives

\nabla^2_{l_x,x} J = M_{0,k}^T H_k^T R_k^{-1} \nabla_{l_x} H_k(l_x, l_y; x^a_k) + M_{0,k}^T \left( \nabla_{l_x} H_k(l_x, l_y; x^a_k) \right)^T R_k^{-1} \left( H_k(l_x, l_y; x^a_k) - y_k \right).

The second term, containing the innovation vector, is negligible and can be ignored.

57 Application #3: optimal sensor locations. Choose an initial configuration for the locations of 3 observations. By relocating the observations, the 4D-Var forecast gets closer and closer to the verification; the newly obtained locations provide superior information for the data assimilation process. Figure: the optimization of sensor locations for the third testing scenario: (a) initial locations, (b) numerical solver (L-BFGS) convergence, and (c) optimal locations.

58 Application #3: optimal sensor locations. Choose an initial configuration for the locations of 3 observations. By relocating the observations, the 4D-Var forecast gets closer and closer to the verification; the newly obtained locations provide superior information for the data assimilation process. Figure: the optimization of sensor locations for the second testing scenario: (a) initial locations, (b) numerical solver (L-BFGS) convergence, and (c) optimal locations.

59 Application #3: optimal sensor locations. Choose an initial configuration for the locations of 3 observations. By relocating the observations, the 4D-Var forecast gets closer and closer to the verification; the newly obtained locations provide superior information for the data assimilation process. Figure: the optimization of sensor locations for the first testing scenario: (a) initial locations, (b) numerical solver (L-BFGS) convergence, and (c) optimal locations.

60 Research achievements: built a computational framework for observation impact via sensitivity analysis; developed adjoint models competitive for practical use; achieved efficient computation via preconditioning, multigrid and low-rank approximations; optimized parameters of the observational networks based on the sensitivity results.

61 Future work: apply the computational methodology to large-scale problems (WRF); develop new algorithms for efficient computation; compare the sensitivity approach to other observation impact measures; optimize sensor locations using different interpolation schemes; combinatorial and mixed-integer programming for observational networks.

62 Bibliography
Alexe, M., Cioaca, A., & Sandu, A. (2010, April). Obtaining and using second order derivative information in the solution of large scale inverse problems. In Proceedings of the 2010 Spring Simulation Multiconference (p. 85). ACM.
Cioaca, A., Zavala, V., & Constantinescu, E. (2011, November). Adjoint sensitivity analysis for numerical weather prediction: Applications to power grid optimization. In Proceedings of the First International Workshop on High Performance Computing, Networking and Analytics for the Power Grid. IEEE/ACM 24th International Conference for High Performance Computing, Networking, Storage and Analysis.
Cioaca, A., Alexe, M., & Sandu, A. (2012). Second-order adjoints for solving PDE-constrained optimization problems. Optimization Methods and Software, 27(4-5).
Cioaca, A., Sandu, A., De Sturler, E., & Constantinescu, E. (2012). Efficient computation of observation impact in 4D-Var data assimilation. Uncertainty Quantification in Scientific Computing. IFIP Advances in Information and Communication Technology, 377.
Rao, V., Cioaca, A., & Sandu, A. (2012, November). A highly scalable approach for time parallelization of long range forecasts. In Proceedings of the Workshop on Latest Advances in Scalable Algorithms for Large-Scale Systems (ScalA). IEEE/ACM 25th International Conference for High Performance Computing, Networking, Storage and Analysis.

63 Bibliography
Sandu, A., Cioaca, A., & Rao, V. (2013). Dynamic sensor network configuration in InfoSymbiotic systems using model singular vectors.
Cioaca, A., Sandu, A., & De Sturler, E. (2013). Efficient methods for computing observation impact in 4D-Var data assimilation. Accepted at Journal of Computational Geosciences.
Cioaca, A., & Sandu, A. (2013). Low-rank approximations for observation impact in 4D-Var data assimilation.
Cioaca, A., & Sandu, A. (2013). An optimization approach for observational networks in 4D-Var data assimilation.

64 Many thanks to the following: Dr. Adrian Sandu (academic advisor), Dr. Cal Ribbens, Dr. Clifford Shaffer, Dr. Traian Iliescu, Dr. Eric de Sturler, the Computational Science Laboratory, and the Department of Computer Science.


More information

Elaine T. Hale, Wotao Yin, Yin Zhang

Elaine T. Hale, Wotao Yin, Yin Zhang , Wotao Yin, Yin Zhang Department of Computational and Applied Mathematics Rice University McMaster University, ICCOPT II-MOPTA 2007 August 13, 2007 1 with Noise 2 3 4 1 with Noise 2 3 4 1 with Noise 2

More information

Math 411 Preliminaries

Math 411 Preliminaries Math 411 Preliminaries Provide a list of preliminary vocabulary and concepts Preliminary Basic Netwon s method, Taylor series expansion (for single and multiple variables), Eigenvalue, Eigenvector, Vector

More information

POD/DEIM 4DVAR Data Assimilation of the Shallow Water Equation Model

POD/DEIM 4DVAR Data Assimilation of the Shallow Water Equation Model nonlinear 4DVAR 4DVAR Data Assimilation of the Shallow Water Equation Model R. Ştefănescu and Ionel M. Department of Scientific Computing Florida State University Tallahassee, Florida May 23, 2013 (Florida

More information

Brian J. Etherton University of North Carolina

Brian J. Etherton University of North Carolina Brian J. Etherton University of North Carolina The next 90 minutes of your life Data Assimilation Introit Different methodologies Barnes Analysis in IDV NWP Error Sources 1. Intrinsic Predictability Limitations

More information

CS281 Section 4: Factor Analysis and PCA

CS281 Section 4: Factor Analysis and PCA CS81 Section 4: Factor Analysis and PCA Scott Linderman At this point we have seen a variety of machine learning models, with a particular emphasis on models for supervised learning. In particular, we

More information

Here represents the impulse (or delta) function. is an diagonal matrix of intensities, and is an diagonal matrix of intensities.

Here represents the impulse (or delta) function. is an diagonal matrix of intensities, and is an diagonal matrix of intensities. 19 KALMAN FILTER 19.1 Introduction In the previous section, we derived the linear quadratic regulator as an optimal solution for the fullstate feedback control problem. The inherent assumption was that

More information

LINEAR AND NONLINEAR PROGRAMMING

LINEAR AND NONLINEAR PROGRAMMING LINEAR AND NONLINEAR PROGRAMMING Stephen G. Nash and Ariela Sofer George Mason University The McGraw-Hill Companies, Inc. New York St. Louis San Francisco Auckland Bogota Caracas Lisbon London Madrid Mexico

More information

Linear algebra for MATH2601: Theory

Linear algebra for MATH2601: Theory Linear algebra for MATH2601: Theory László Erdős August 12, 2000 Contents 1 Introduction 4 1.1 List of crucial problems............................... 5 1.2 Importance of linear algebra............................

More information

Stochastic Analogues to Deterministic Optimizers

Stochastic Analogues to Deterministic Optimizers Stochastic Analogues to Deterministic Optimizers ISMP 2018 Bordeaux, France Vivak Patel Presented by: Mihai Anitescu July 6, 2018 1 Apology I apologize for not being here to give this talk myself. I injured

More information

Sensitivity analysis in variational data assimilation and applications

Sensitivity analysis in variational data assimilation and applications Sensitivity analysis in variational data assimilation and applications Dacian N. Daescu Portland State University, P.O. Box 751, Portland, Oregon 977-751, U.S.A. daescu@pdx.edu ABSTRACT Mathematical aspects

More information

A Spectral Approach to Linear Bayesian Updating

A Spectral Approach to Linear Bayesian Updating A Spectral Approach to Linear Bayesian Updating Oliver Pajonk 1,2, Bojana V. Rosic 1, Alexander Litvinenko 1, and Hermann G. Matthies 1 1 Institute of Scientific Computing, TU Braunschweig, Germany 2 SPT

More information

Introduction to Machine Learning

Introduction to Machine Learning 10-701 Introduction to Machine Learning PCA Slides based on 18-661 Fall 2018 PCA Raw data can be Complex, High-dimensional To understand a phenomenon we measure various related quantities If we knew what

More information

1 Cricket chirps: an example

1 Cricket chirps: an example Notes for 2016-09-26 1 Cricket chirps: an example Did you know that you can estimate the temperature by listening to the rate of chirps? The data set in Table 1 1. represents measurements of the number

More information

Last Time. Social Network Graphs Betweenness. Graph Laplacian. Girvan-Newman Algorithm. Spectral Bisection

Last Time. Social Network Graphs Betweenness. Graph Laplacian. Girvan-Newman Algorithm. Spectral Bisection Eigenvalue Problems Last Time Social Network Graphs Betweenness Girvan-Newman Algorithm Graph Laplacian Spectral Bisection λ 2, w 2 Today Small deviation into eigenvalue problems Formulation Standard eigenvalue

More information