Data assimilation with and without a model

Tim Sauer, George Mason University
Parameter Estimation and UQ, U. Pittsburgh, Mar. 5, 2017
Partially supported by NSF

Most of this work is due to:
Tyrus Berry, Postdoc, GMU
Franz Hamilton, Postdoc, NC State

OUTLINE OF TALK

1. EnKF and applications: parameters, network edge detection
2. Problem: Don't know noise covariances. Adaptive filtering
3. Problem: Model error. Multimodel DA
4. Problem: Model unknown. Kalman-Takens filter

DATA ASSIMILATION

$x_k = f(x_{k-1}) + \eta_k, \quad \eta_k \sim N(0, Q)$
$y_k = h(x_k) + \nu_k, \quad \nu_k \sim N(0, R)$

Main Problem. Given the model above plus observations $y_k$:
Filtering: Estimate the current state $p(x_k \mid y_1, \ldots, y_k)$
Forecasting: Estimate a future state $p(x_{k+l} \mid y_1, \ldots, y_k)$
Smoothing: Estimate a past state $p(x_{k-l} \mid y_1, \ldots, y_k)$
Parameter estimation

We'll apply the ensemble Kalman filter (EnKF) to attempt to achieve these goals.
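To make the setup concrete, here is a minimal data generator for this state-space model, a sketch in Python; the function name `simulate` and its interface are illustrative choices, not from the talk.

```python
import numpy as np

def simulate(f, h, Q, R, x0, steps, seed=0):
    """Iterate x_k = f(x_{k-1}) + eta_k and record y_k = h(x_k) + nu_k."""
    rng = np.random.default_rng(seed)
    xs, ys = [], []
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = f(x) + rng.multivariate_normal(np.zeros(len(x)), Q)
        y = np.atleast_1d(h(x))
        ys.append(y + rng.multivariate_normal(np.zeros(len(y)), R))
        xs.append(x)
    return np.array(xs), np.array(ys)
```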

DATA ASSIMILATION

$x_k = f(x_{k-1}) + \eta_k, \quad \eta_k \sim N(0, Q)$
$y_k = h(x_k) + \nu_k, \quad \nu_k \sim N(0, R)$

Possible obstructions:
The observations $y_k$ mix system noise $\eta_k$ with obs noise $\nu_k$
Model error: $Q$ and $R$ may be unknown
Known model with unknown parameters
Wrong model, even with best-fit parameters
Have a model for some, but not all, of the variables

EXAMPLE 1. LORENZ 96

$\frac{dx_i}{dt} = -x_{i-2} x_{i-1} + x_{i-1} x_{i+1} - x_i + F$

[Figure: space-time plot of the 40 sites over 0 to 50 time units; color scale -5 to 10]
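For reference, a minimal sketch of this model in Python; the RK4 integrator and the step size are standard choices assumed here, not specified on the slide.

```python
import numpy as np

def lorenz96(x, F=8.0):
    """Right-hand side dx_i/dt = -x_{i-2} x_{i-1} + x_{i-1} x_{i+1} - x_i + F."""
    return np.roll(x, 1) * (np.roll(x, -1) - np.roll(x, 2)) - x + F

def rk4_step(x, dt, F=8.0):
    """One fourth-order Runge-Kutta step."""
    k1 = lorenz96(x, F)
    k2 = lorenz96(x + 0.5 * dt * k1, F)
    k3 = lorenz96(x + 0.5 * dt * k2, F)
    k4 = lorenz96(x + dt * k3, F)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

x = 8.0 + 0.01 * np.random.default_rng(0).standard_normal(40)  # perturbed equilibrium
for _ in range(1000):
    x = rk4_step(x, 0.05)
```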

EXAMPLE 2. MEA RECORDINGS

TWO STEPS TO FIND $p(x_k \mid y_1, \ldots, y_k)$

Assume we have $p(x_{k-1} \mid y_1, \ldots, y_{k-1})$
Forecast Step: Find $p(x_k \mid y_1, \ldots, y_{k-1})$
Assimilation Step: Perform a Bayesian update,

$\underbrace{p(x_k \mid y_1, \ldots, y_k)}_{\text{Posterior}} \propto \underbrace{p(x_k \mid y_1, \ldots, y_{k-1})}_{\text{Prior}} \, \underbrace{p(y_k \mid x_k, y_1, \ldots, y_{k-1})}_{\text{Likelihood}}$

BEST POSSIBLE SCENARIO

$x_k = f(x_{k-1}) + \eta_k, \quad \eta_k \sim N(0, Q)$
$y_k = h(x_k) + \nu_k, \quad \nu_k \sim N(0, R)$

$f$ and $h$ are linear, all parameters known:

$x_k = F_{k-1} x_{k-1} + \eta_k, \quad \eta_k \sim N(0, Q)$
$y_k = H_k x_k + \nu_k, \quad \nu_k \sim N(0, R)$

KALMAN FILTER

Assume linear dynamics/obs and additive Gaussian noise:

$x_k = F_{k-1} x_{k-1} + \eta_k, \quad \eta_k \sim N(0, Q)$
$y_k = H_k x_k + \nu_k, \quad \nu_k \sim N(0, R)$

Observability condition for linear systems:

$O_k^l = \begin{bmatrix} H_k \\ H_{k+1} F_k \\ \vdots \\ H_{k+l+1} F_{k+l} \cdots F_k \end{bmatrix}$

If $O_k^l$ is full rank for some $l$, the KF is guaranteed to work.

KALMAN FILTER

Assume linear dynamics/obs and additive Gaussian noise:

$x_k = F_{k-1} x_{k-1} + \eta_k, \quad \eta_k \sim N(0, Q)$
$y_k = H_k x_k + \nu_k, \quad \nu_k \sim N(0, R)$

Assume the current estimate is Gaussian: $p(x_{k-1} \mid y_1, \ldots, y_{k-1}) = N(x^a_{k-1}, P^a_{k-1})$

Forecast: linear combinations of Gaussians.
Prior: $p(x_k \mid y_1, \ldots, y_{k-1}) = N(x^f_k, P^f_k)$ with
$x^f_k = F_{k-1} x^a_{k-1}, \quad P^f_k = F_{k-1} P^a_{k-1} F_{k-1}^T + Q$
Likelihood: $p(y_k \mid x_k, y_1, \ldots, y_{k-1}) = N(y^f_k, P^y_k)$ with
$y^f_k = H_k x^f_k, \quad P^y_k = H_k P^f_k H_k^T + R$

KALMAN FILTER

Assimilation: product of Gaussians (complete the square),

$p(x_k \mid y_1, \ldots, y_k) \propto N(x^f_k, P^f_k) \, N(y^f_k, P^y_k) = N(x^a_k, P^a_k)$

Define the Kalman gain $K_k = P^f_k H_k^T (P^y_k)^{-1}$. Then

$x^a_k = x^f_k + K_k (y_k - y^f_k), \quad P^a_k = (I - K_k H_k) P^f_k$

KALMAN FILTER SUMMARY

Forecast: $x^f_k = F_{k-1} x^a_{k-1}$
Covariance update: $P^f_k = F_{k-1} P^a_{k-1} F_{k-1}^T + Q_{k-1}, \quad P^y_k = H_k P^f_k H_k^T + R_{k-1}$
Kalman gain: $K_k = P^f_k H_k^T (P^y_k)^{-1}$
Innovation: $\epsilon_k = y_k - y^f_k = y_k - H_k x^f_k$
Assimilation step: $x^a_k = x^f_k + K_k \epsilon_k, \quad P^a_k = (I - K_k H_k) P^f_k$
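The summary above translates line by line into code. A minimal sketch of one predict/update cycle, assuming time-invariant $F$, $H$, $Q$, $R$ for brevity:

```python
import numpy as np

def kalman_step(xa, Pa, y, F, H, Q, R):
    # forecast
    xf = F @ xa
    Pf = F @ Pa @ F.T + Q
    # innovation and its covariance
    eps = y - H @ xf
    Py = H @ Pf @ H.T + R
    # Kalman gain and assimilation
    K = Pf @ H.T @ np.linalg.inv(Py)
    xa_new = xf + K @ eps
    Pa_new = (np.eye(len(xa)) - K @ H) @ Pf
    return xa_new, Pa_new
```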

WHAT ABOUT NONLINEAR SYSTEMS?

Consider a system of the form:

$x_{k+1} = f(x_k) + \eta_{k+1}, \quad \eta_{k+1} \sim N(0, Q)$
$y_{k+1} = h(x_{k+1}) + \nu_{k+1}, \quad \nu_{k+1} \sim N(0, R)$

More complicated observability condition (Lie derivatives)
Extended Kalman Filter (EKF): linearize $F_k = Df(\hat{x}^a_k)$ and $H_k = Dh(\hat{x}^f_k)$
Problem: The state estimate $\hat{x}^a_k$ may not be well localized
Solution: Ensemble Kalman Filter (EnKF)

ENSEMBLE KALMAN FILTER (ENKF)

[Figure: covariance ellipses $P^+_{xx}$ and $P^-_{xx}$; sigma points $x^i_t$ mapped by $F$ to $F(x^i_t)$]

Generate an ensemble with the current statistics (use a matrix square root), taking the $x^i_t$ to be sigma points on the semimajor axes:

$x^f_t = \frac{1}{2n} \sum_i F(x^i_t)$
$P^f_{xx} = \frac{1}{2n-1} \sum_i (F(x^i_t) - x^f_t)(F(x^i_t) - x^f_t)^T + Q$

ENSEMBLE KALMAN FILTER (ENKF)

Calculate $y^i_t = h(F(x^i_t))$ and set $y^f_t = \frac{1}{2n} \sum_i y^i_t$. Then

$P_{yy} = \frac{1}{2n-1} \sum_i (y^i_t - y^f_t)(y^i_t - y^f_t)^T + R$
$P_{xy} = \frac{1}{2n-1} \sum_i (F(x^i_t) - x^f_t)(y^i_t - y^f_t)^T$
$K = P_{xy} P_{yy}^{-1}$
$x^a_{t+1} = x^f_t + K(y_t - y^f_t), \quad P^a_{xx} = P^f_{xx} - K P_{yy} K^T$

PARAMETER ESTIMATION

When the model has parameters $p$:

$x_{k+1} = f(x_k, p) + \eta_{k+1}$

Can augment the state: $\tilde{x}_k = [x_k, p_k]$. Introduce trivial dynamics for $p$:

$x_{k+1} = f(x_k, p_k) + \eta_{k+1}$
$p_{k+1} = p_k + \eta^p_{k+1}$

Need to tune the covariance of $\eta^p_{k+1}$.
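A hedged sketch of the augmentation: the parameter block gets persistence dynamics plus a small random walk whose variance `q_p` is exactly the quantity that must be tuned. The function name and default are illustrative, not from the talk.

```python
import numpy as np

def f_augmented(xp, f, n_state, q_p=1e-4, rng=np.random.default_rng(0)):
    """Propagate the augmented state [x; p]: model dynamics for x,
    persistence plus a small random walk for p."""
    x, p = xp[:n_state], xp[n_state:]
    x_next = f(x, p)
    p_next = p + np.sqrt(q_p) * rng.standard_normal(len(p))
    return np.concatenate([x_next, p_next])
```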

EXAMPLE OF PARAMETER ESTIMATION

Consider the Hodgkin-Huxley neuron model, expanded to a network of $n$ equations:

$\dot{V}_i = -g_{Na} m_i^3 h_i (V_i - E_{Na}) - g_K n_i^4 (V_i - E_K) - g_L (V_i - E_L) + I + \sum_{j \neq i} \Gamma_{HH}(V_j) V_j$
$\dot{m}_i = a_m(V_i)(1 - m_i) - b_m(V_i) m_i$
$\dot{h}_i = a_h(V_i)(1 - h_i) - b_h(V_i) h_i$
$\dot{n}_i = a_n(V_i)(1 - n_i) - b_n(V_i) n_i$
$\Gamma_{HH}(V_j) = \beta_{ij} / (1 + e^{-10(V_j + 40)})$

Only observe the voltages $V_i$; recover the hidden variables and the connection parameters $\beta_{ij}$.
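As one concrete piece, the sigmoidal coupling above might be coded as follows; this is a sketch, and the negative sign in the exponent is the standard convention for this sigmoid, reconstructed from the slide rather than stated on it.

```python
import numpy as np

def gamma_hh(V_j, beta_ij):
    """Coupling strength Gamma_HH(V_j) = beta_ij / (1 + exp(-10 (V_j + 40)))."""
    return beta_ij / (1.0 + np.exp(-10.0 * (V_j + 40.0)))
```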

EXAMPLE OF PARAMETER ESTIMATION

Can even turn connections on and off (grey dashes); a statistical test based on the variance estimate detects the switches (black dashes).

LINK DETECTION FROM NETWORKS OF MODEL NEURONS

Network of Hindmarsh-Rose neurons, modeled by Hindmarsh-Rose
Network of Hodgkin-Huxley neurons, modeled by Hindmarsh-Rose
Sensitivity = true positive rate; specificity = true negative rate

LINK DETECTION FROM MEA RECORDINGS

ENKF: WRONG Q AND R

Simple example with full observation and diagonal noise covariances.

[Figure: RMSE vs. variance used in $R_k$ (top) and in $Q_k$ (bottom). Red indicates RMSE of unfiltered observations; black is RMSE of the optimal filter (true covariances known).]

ENKF: WRONG Q AND R

Standard Kalman update:

$P^f_k = F_{k-1} P^a_{k-1} F_{k-1}^T + Q_{k-1}$
$P^y_k = H_k P^f_k H_k^T + R_{k-1}$
$K_k = P^f_k H_k^T (P^y_k)^{-1}$
$P^a_k = (I - K_k H_k) P^f_k$
$\epsilon_k = y_k - y^f_k = y_k - H_k x^f_k$
$x^a_k = x^f_k + K_k \epsilon_k$

[Figure: RMSE vs. variance used in $R_k$ and in $Q_k$, as on the previous slide]

ADAPTIVE FILTER: ESTIMATING Q AND R

Innovations contain information about $Q$ and $R$:

$\epsilon_k = y_k - y^f_k = h(x_k) + \nu_k - h(x^f_k) = h(f(x_{k-1}) + \eta_k) - h(f(x^a_{k-1})) + \nu_k \approx H_k F_{k-1}(x_{k-1} - x^a_{k-1}) + H_k \eta_k + \nu_k$

IDEA: Use innovations to produce samples of $Q$ and $R$:

$E[\epsilon_k \epsilon_k^T] \approx H P^f H^T + R$
$E[\epsilon_{k+1} \epsilon_k^T] \approx H F P^e H^T - H F K \, E[\epsilon_k \epsilon_k^T]$
$P^e = F P^a F^T + Q$

In the linear case this is rigorous and was first solved by Mehra in 1970.

ADAPTIVE FILTER: ESTIMATING Q AND R

To find $Q$ and $R$ we estimate $H_k$ and $F_{k-1}$ from the ensemble and invert the equations:

$E[\epsilon_k \epsilon_k^T] \approx H P^f H^T + R$
$E[\epsilon_{k+1} \epsilon_k^T] \approx H F P^e H^T - H F K \, E[\epsilon_k \epsilon_k^T]$

This gives the following empirical estimates of $Q_k$ and $R_k$:

$P^e_k = (H_{k+1} F_k)^{-1} (\epsilon_{k+1} \epsilon_k^T + H_{k+1} F_k K_k \epsilon_k \epsilon_k^T) H_k^{-T}$
$Q^e_k = P^e_k - F_{k-1} P^a_{k-1} F_{k-1}^T$
$R^e_k = \epsilon_k \epsilon_k^T - H_k P^f_k H_k^T$

Here $P^e_k$ is an empirical estimate of the background covariance.

ADAPTIVE ENKF

We combine the estimates of $Q$ and $R$ with a moving average.

Original Kalman equations:
$P^f_k = F_{k-1} P^a_{k-1} F_{k-1}^T + Q_{k-1}$
$P^y_k = H_k P^f_k H_k^T + R_{k-1}$
$K_k = P^f_k H_k^T (P^y_k)^{-1}$
$P^a_k = (I - K_k H_k) P^f_k$
$\epsilon_k = y_k - y^f_k$
$x^a_k = x^f_k + K_k \epsilon_k$

Our additional update:
$P^e_{k-1} = F_{k-1}^{-1} H_k^{-1} \epsilon_k \epsilon_{k-1}^T H_{k-1}^{-T} + K_{k-1} \epsilon_{k-1} \epsilon_{k-1}^T H_{k-1}^{-T}$
$Q^e_{k-1} = P^e_{k-1} - F_{k-2} P^a_{k-2} F_{k-2}^T$
$R^e_{k-1} = \epsilon_{k-1} \epsilon_{k-1}^T - H_{k-1} P^f_{k-1} H_{k-1}^T$
$Q_k = Q_{k-1} + (Q^e_{k-1} - Q_{k-1})/\tau$
$R_k = R_{k-1} + (R^e_{k-1} - R_{k-1})/\tau$
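A minimal sketch of the additional update; pseudo-inverses stand in for the inverses of $F$ and $H$ (an assumption of this sketch, for the case where they are not square), and the caller supplies the stored quantities from steps $k-2$ and $k-1$.

```python
import numpy as np

def adaptive_qr(Q, R, eps_prev, eps, F_prev, H_prev, H, K_prev,
                FPaFT, Pf_prev, tau=100.0):
    """One adaptive step at time k: eps_prev = eps_{k-1}, eps = eps_k,
    K_prev = K_{k-1}, FPaFT = F_{k-2} P^a_{k-2} F_{k-2}^T,
    Pf_prev = P^f_{k-1}."""
    HmT = np.linalg.pinv(H_prev).T                     # H_{k-1}^{-T}
    # empirical background covariance P^e_{k-1}
    Pe = (np.linalg.pinv(F_prev) @ np.linalg.pinv(H)
          @ np.outer(eps, eps_prev) @ HmT
          + K_prev @ np.outer(eps_prev, eps_prev) @ HmT)
    Qe = Pe - FPaFT                                    # empirical sample of Q
    Re = np.outer(eps_prev, eps_prev) - H_prev @ Pf_prev @ H_prev.T
    # moving-average blending with window tau
    return Q + (Qe - Q) / tau, R + (Re - R) / tau
```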

ADAPTIVE FILTER: APPLICATION TO LORENZ 96

We apply the adaptive EnKF to the 40-dimensional Lorenz 96 model integrated over a time step $\Delta t = 0.05$:

$\frac{dx_i}{dt} = -x_{i-2} x_{i-1} + x_{i-1} x_{i+1} - x_i + F$

We augment the model with Gaussian white noise:

$x_k = f(x_{k-1}) + \eta_k, \quad \eta_k \sim N(0, Q)$
$y_k = h(x_k) + \nu_k, \quad \nu_k \sim N(0, R)$

The adaptive EnKF uses $F = 8$. We will consider model error where the true $F_i \sim N(8, 16)$.

RECOVERING Q AND R, PERFECT MODEL

[Figure: true covariance, initial guess, final estimate, and difference for $Q$ and $R$; RMSE of the $Q$ and $R$ estimates vs. filter step. Bottom right: RMSE for the initial guess covariances (red), the true $Q$ and $R$ (black), and the adaptive filter (blue).]

COMPENSATING FOR MODEL ERROR

The adaptive filter compensates for errors in the forcing $F_i$.

[Figure: true covariance, initial guess, final estimate, and difference for $Q$ and $R$; diagonal entries $(Q^f)_{ii}$ vs. site number against the model error in $F_i$. Bottom right: RMSE for the initial guess covariances (red), the perfect model (black), and the adaptive filter (blue).]

ESTIMATING UNMODELED DYNAMICS

Multiple models approach: run $m$ subfilters in parallel,

$\dot{w} = F(w) + \eta_t, \qquad \begin{bmatrix} y \\ S \end{bmatrix} = H(w) + \nu_t$

where

$w = \begin{bmatrix} x^1 \\ \vdots \\ x^m \\ c^1 \\ \vdots \\ c^m \\ d \end{bmatrix}, \quad F = \begin{bmatrix} f(x, p^1) \\ \vdots \\ f(x, p^m) \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \quad H(w) = \begin{bmatrix} h(x^1) \\ \vdots \\ h(x^m) \\ \sum_{i,j} c^i_j x^i_j + d \end{bmatrix}$

Learn $c^i_j$ and $d$ from training data $S$; then we can predict unmodeled $S$.
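To illustrate the augmented observation function, a sketch under the assumption that each model copy $x^i$ is a flat array and $h$ observes a single copy; the function name and layout are illustrative, not from the talk.

```python
import numpy as np

def multimodel_H(xs, cs, d, h):
    """H(w): stack h(x^i) for each model copy, then append the learned
    linear combination sum_ij c^i_j x^i_j + d that predicts S."""
    obs = [np.atleast_1d(h(x)) for x in xs]
    s_hat = sum(float(c @ x) for c, x in zip(cs, xs)) + d
    return np.concatenate(obs + [[s_hat]])
```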

SELECTING THE ASSIMILATION MODEL

Assume a generic spiking neuron model (Hindmarsh-Rose):

$\dot{V} = y - aV^3 + bV^2 - z + I$
$\dot{y} = c - dV^2 - y$
$\dot{z} = \tau(s(V + 1) - z)$

$V$ is the neuron potential, $y$ the fast-scale dynamics, $z$ the slow-scale dynamics.

Goal: Use multiple parameterized Hindmarsh-Rose models to reconstruct unmodeled neuronal quantities while assimilating the recorded neuron potential.

J. Hindmarsh and R. Rose, Proc. Roy. Soc. 221, 87 (1984).
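A minimal sketch of the Hindmarsh-Rose right-hand side; the default parameter values are common choices from the literature, not values given on the slide.

```python
import numpy as np

def hindmarsh_rose(state, I, a=1.0, b=3.0, c=1.0, d=5.0, s=4.0, tau=0.005):
    """Right-hand side of the three-variable Hindmarsh-Rose model."""
    V, y, z = state
    dV = y - a * V**3 + b * V**2 - z + I   # membrane potential
    dy = c - d * V**2 - y                  # fast-scale dynamics
    dz = tau * (s * (V + 1.0) - z)         # slow-scale dynamics
    return np.array([dV, dy, dz])
```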

RECONSTRUCTING UNMODELED GATING VARIABLES

Observing Hodgkin-Huxley voltage, reconstruct unmodeled gating variables (assimilation model is Hindmarsh-Rose).

A. Hodgkin and A. Huxley, J. Physiology 117, 500 (1952).

[Figure: actual, observed, and predicted traces]

RECONSTRUCTING UNMODELED IONIC DYNAMICS

Observing seizure voltage, reconstruct unmodeled potassium and sodium dynamics (assimilation model is Hindmarsh-Rose).

J. Cressman, G. Ullah, J. Ziburkus, S. Schiff, and E. Barreto, J. Comp. Neuroscience 26, 159 (2009).

[Figure: actual, observed, and predicted traces]

RECONSTRUCTING EXTRACELLULAR POTASSIUM FROM AN In Vitro NETWORK

We want to track extracellular potassium dynamics in a network, but measurements are difficult and spatially limited. Extracellular potassium is an unmodeled variable.

RECONSTRUCTING EXTRACELLULAR POTASSIUM FROM AN In Vitro NETWORK

The local extracellular potassium in an MEA network can be reconstructed and predicted using our approach (assimilation model is Hindmarsh-Rose).

[Figure: actual, observed, and predicted traces]

KALMAN-TAKENS FILTER: THROWING OUT THE MODEL...

Starting with historical observations $\{y_0, \ldots, y_n\}$:
Form Takens delay-embedding state vectors $x_i = (y_i, y_{i-1}, \ldots, y_{i-d})$
Build an EnKF: apply an analog forecast to each ensemble member
Use the observation function $H x_i = y_i$
Crucial to estimate $Q$ and $R$
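A hedged sketch of the two model-free ingredients: delay-coordinate reconstruction and a nearest-neighbor (analog) forecast that stands in for the model $f$. The number of neighbors `k` and the embedding depth `d` are tuning choices, not values from the talk.

```python
import numpy as np

def delay_embed(y, d):
    """Rows are x_i = (y_i, y_{i-1}, ..., y_{i-d}) for i = d, ..., len(y)-1."""
    return np.array([y[i - d:i + 1][::-1] for i in range(d, len(y))])

def analog_forecast(x, library, k=10):
    """Average the one-step successors of the k nearest library states."""
    dists = np.linalg.norm(library[:-1] - x, axis=1)
    idx = np.argsort(dists)[:k]
    return library[idx + 1].mean(axis=0)
```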

NO MODEL: KALMAN-TAKENS FILTER

Reasons why this might not work:
Nonparametric methods that depend on local reconstructions tend to fail in the presence of observation noise: Takens' theorem assumes no obs noise, so neighbors in reconstruction space are wrong
Kalman filters depend on knowledge of the model to advance the dynamics: the filter equations need knowledge of the dynamics to advance the background

ENSEMBLE KALMAN FILTERING (ENKF)

At the $k$th step:
Form an ensemble of state vectors centered at the best guess $x^+_{k-1}$, with spread given by the estimated covariance $P^+_{k-1}$
Apply the model $f$ to this ensemble and observe with the function $h$
Form the prior state estimate $x^-_k$ and the predicted observation $y^-_k$
Construct the resulting covariance matrices $P^{xy}_k$ and $P^y_k$
Update the state and covariance estimates with the observation $y_k$:

$K_k = P^{xy}_k (P^y_k)^{-1}$
$P^+_k = P^-_k - P^{xy}_k (P^y_k)^{-1} P^{yx}_k$
$x^+_k = x^-_k + K_k (y_k - y^-_k)$

KALMAN-TAKENS FILTER

At the $k$th step:
Form an ensemble of reconstructed states centered at the delay-coordinate vector $x^+_{k-1}$, with spread given by the estimated covariance $P^+_{k-1}$
Apply nonparametric prediction and observe with the function $h$
Estimate the prior state $x^-_k$ and observation $y^-_k$
Construct the resulting covariance matrices $P^{xy}_k$ and $P^y_k$
Update the state and covariance estimates with the observation $y_k$:

$K_k = P^{xy}_k (P^y_k)^{-1}$
$P^+_k = P^-_k - P^{xy}_k (P^y_k)^{-1} P^{yx}_k$
$x^+_k = x^-_k + K_k (y_k - y^-_k)$
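Putting the pieces together: a Kalman-Takens step is the EnKF step sketched earlier with the analog forecast in place of $f$ and an observation function that selects the current delay coordinate. A sketch under those assumptions, reusing `enkf_step` and `analog_forecast` from the sketches above:

```python
import numpy as np

def kalman_takens_step(xa, Pa, y_new, library, Q, R, k=10):
    """One filter step in delay-coordinate space; no mechanistic model."""
    f = lambda x: analog_forecast(x, library, k)   # nonparametric forecast
    h = lambda x: x[:1]                            # observe current coordinate
    return enkf_step(xa, Pa, np.atleast_1d(y_new), f, h, Q, R)
```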

KALMAN-TAKENS FILTER: LORENZ 63

[Figure: $x$ component vs. time, filtered with the full model (left) and with the Kalman-Takens filter (right)]

KALMAN-TAKENS FILTER

Comparison of Kalman-Takens (red) with the full model (blue).

[Figure: RMSE vs. percent noise]

KALMAN-TAKENS FILTER: LORENZ 96

[Figure: $x_1$ vs. time, filtered with the full model (left) and with the Kalman-Takens filter (right)]

KALMAN-TAKENS FILTER: MODEL ERROR

[Figure: results for Lorenz 63 and Lorenz 96]

KALMAN-TAKENS FILTER: LORENZ 96

[Figure: space-time plots of the 40 sites over 0 to 50 time units; color scales -5 to 10 and 5 to 10]

KALMAN-TAKENS FILTER: FORECAST ERROR

El Niño index. Training data: Jan. 1950 - Dec. 1999; forecasts after 2000.

[Figure: RMSE vs. forecast steps (months)]

SUMMARY

EnKF is a useful data assimilation technique for neurodynamics and other types of data
Parameter estimation
Adaptive estimation of Q and R is helpful when Q and R are unknown
Difficulties: model error, unmodeled variables, no model
Multimodel data assimilation
Kalman-Takens filter

REFERENCES

T. Berry, T. Sauer, Adaptive ensemble Kalman filtering of nonlinear systems. Tellus A 65, 20331 (2013)
F. Hamilton, T. Berry, N. Peixoto, T. Sauer, Real-time tracking of neuronal network structure using data assimilation. Phys. Rev. E 88, 052715 (2013)
F. Hamilton, J. Cressman, N. Peixoto, T. Sauer, Reconstructing neural dynamics using data assimilation with multiple models. Europhys. Lett. 107, 68005 (2014)
F. Hamilton, T. Berry, T. Sauer, Ensemble Kalman filtering without a model. Phys. Rev. X 6, 011021 (2016)
F. Hamilton, T. Berry, T. Sauer, Kalman-Takens filtering in the presence of dynamical noise. To appear, Eur. Phys. J.