UNDERSTANDING DATA ASSIMILATION APPLICATIONS TO HIGH-LATITUDE IONOSPHERIC ELECTRODYNAMICS


Slide 1: UNDERSTANDING DATA ASSIMILATION: APPLICATIONS TO HIGH-LATITUDE IONOSPHERIC ELECTRODYNAMICS
Tomoko Matsuo, University of Colorado, Boulder / Space Weather Prediction Center, NOAA
CEDAR-GEM joint workshop DA tutorial, Monday, June 27, 2011

Slide 2: What is data assimilation? Combining information:
- Prior knowledge of the state of the system, x: empirical or physical models (e.g., physical laws); complete in space and time.
- Observations, y: directly measured or retrieved quantities; incomplete in space and time.

Slide 3: Bayes' theorem. Bayesian statistics provides a coherent probabilistic framework for most DA approaches [e.g., Lorenc, 1986].
- Prior: $p(x) \sim \mathcal{N}(x_b, P_b)$, i.e. $x = x_b + \varepsilon_b$.
- Observation likelihood: $p(y \mid x) \sim \mathcal{N}(Hx, R)$, i.e. $y = Hx + \varepsilon_o$: the probability distribution of $y$ when $x$ takes a given value.
- Posterior: $p(x \mid y) \propto p(y \mid x)\, p(x)$.

Slide 4: Bayes' theorem (continued).
- Prior: $p(x) \sim \mathcal{N}(x_b, P_b)$, i.e. $x = x_b + \varepsilon_b$.
- Observation likelihood: $p(y \mid x) \sim \mathcal{N}(Hx, R)$, i.e. $y = Hx + \varepsilon_o$.
- Posterior: $p(x \mid y) \propto p(y \mid x)\, p(x)$, with $p(x \mid y) \sim \mathcal{N}(x_a, P_a)$, where
  $x_a = x_b + K(y - Hx_b)$
  $P_a = (I - KH) P_b$
  $K = P_b H^T (H P_b H^T + R)^{-1}$
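
These formulas follow from the two Gaussians by a standard completing-the-square argument (not shown on the slide, but worth recording). Up to a constant,

$$-2 \log p(x \mid y) = (y - Hx)^T R^{-1} (y - Hx) + (x - x_b)^T P_b^{-1} (x - x_b),$$

which is quadratic in $x$, so the posterior is Gaussian with

$$P_a = \left(P_b^{-1} + H^T R^{-1} H\right)^{-1} = (I - KH) P_b, \qquad x_a = x_b + K(y - Hx_b),$$

where the two forms of $P_a$ (and the gain $K = P_b H^T (H P_b H^T + R)^{-1}$) are related by the Sherman-Morrison-Woodbury identity.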

Slide 5: What is covariance? The two-variable case.
Bayes' theorem: $p(x \mid y) \propto p(y \mid x)\, p(x)$.
- Prior: $p(x) \sim \mathcal{N}(x_b, P_b)$ with $x_b = (2.3,\ 2.5)^T$ and $P_b = \begin{pmatrix} 0.225 & 0.05 \\ 0.05 & 0.15 \end{pmatrix}$.
- Observation likelihood: $p(y \mid x) \sim \mathcal{N}(Hx, R)$ with $H = (1\ \ 0)$, so $x_1$ is observed and $x_2$ is unobserved.
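
A minimal numerical sketch of this two-variable example: the prior mean, prior covariance, and H are taken from the slide, while the observed value y and the observation error variance R are assumptions, since the slide does not list them. Running it shows that observing $x_1$ also shifts the unobserved $x_2$, purely through the off-diagonal covariance.

```python
import numpy as np

# Prior from the slide: x1 is observed, x2 is unobserved.
x_b = np.array([2.3, 2.5])
P_b = np.array([[0.225, 0.05],
                [0.05,  0.15]])

# Observation operator from the slide picks out x1 only.
H = np.array([[1.0, 0.0]])
R = np.array([[0.1]])   # assumed: observation error variance (not on the slide)
y = np.array([2.0])     # assumed: observed value of x1 (not on the slide)

# Kalman update from slide 4.
K = P_b @ H.T @ np.linalg.inv(H @ P_b @ H.T + R)
x_a = x_b + K @ (y - H @ x_b)
P_a = (np.eye(2) - K @ H) @ P_b

print(x_a)  # both components move; x2 changes via the covariance P_b[0,1] = 0.05
print(P_a)  # the analysis variance of x2 also shrinks
```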

Slide 6: What is covariance? In the spatial sense.
[Figure: covariance as a function of lag distance for two models, $P_1\ (\mu = 0.5,\ \gamma = 0.05)$ and $P_2\ (\mu = 1.5,\ \gamma = 0.1)$.]

Slide 7: What is covariance? In the spatial sense.
[Figure: random fields sampled from $x_1 \sim \mathcal{N}(0, P_1)$ and $x_2 \sim \mathcal{N}(0, P_2)$.]
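
The transcript does not preserve the covariance family behind $P_1$ and $P_2$ (the $\mu$, $\gamma$ parameterization on slide 6 is ambiguous here), so the following sketch assumes a squared-exponential covariance with two different correlation lengths. It reproduces the qualitative point of the figure: the longer the correlation length, the smoother the sampled field.

```python
import numpy as np

def sq_exp_cov(s, variance, length):
    """Squared-exponential covariance matrix on a 1-D grid of points s."""
    lag = np.abs(s[:, None] - s[None, :])        # pairwise lag distances
    return variance * np.exp(-0.5 * (lag / length) ** 2)

s = np.linspace(0.0, 1.0, 200)
rng = np.random.default_rng(0)

# Two covariance models with different correlation lengths (values illustrative).
P1 = sq_exp_cov(s, variance=1.0, length=0.05)
P2 = sq_exp_cov(s, variance=1.0, length=0.10)

# Draw samples x ~ N(0, P); a small jitter keeps the Cholesky factor stable.
jitter = 1e-8 * np.eye(len(s))
x1 = np.linalg.cholesky(P1 + jitter) @ rng.standard_normal(len(s))
x2 = np.linalg.cholesky(P2 + jitter) @ rng.standard_normal(len(s))
# x2 varies more slowly than x1: the covariance controls spatial smoothness.
```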

Slide 8: What is covariance? In the spatial sense.
[Figure: samples $x_1 \sim \mathcal{N}(0, P_1)$ and $x_2 \sim \mathcal{N}(0, P_2)$ with contours of the corresponding 2-D correlation functions $HP$.]

Slide 9: Assimilative Mapping of Ionospheric Electrodynamics [Richmond and Kamide, 1988].
An inverse procedure to infer maps of $E$, $\Phi$, $I$, $J$, $\Delta B$ from observations of:
- $E$: IS or HF radar, satellites
- $I$, $J$: IS radar
- $\Delta B$: satellite or ground-based magnetometers

Slide 10: Assimilative Mapping of Ionospheric Electrodynamics [Richmond and Kamide, 1988].
[Figure: the prior map alongside the observations.]

Slide 11: [Figure; Lu et al., 1998.]

Slide 12: [Figure-only slide; no recoverable text.]

Slide 13: AMIE: relationships among electromagnetic variables.
An inverse procedure to infer maps of $E$, $\Phi$, $I$, $J$, $\Delta B$ from observations of:
- $E$: IS or HF radar, satellites
- $I$, $J$: IS radar
- $\Delta B$: satellite or ground-based magnetometers
The variables are linearly related (for a given conductance $\Sigma$): $F(E) = \Phi, I, J, \Delta B$, with
  $E = -\nabla \Phi$
  $I = \Sigma E$
  $\Delta B$ from $I$, $J$ via the Biot-Savart law.

Slide 14: AMIE: basis functions as forward operator (functional analysis).
The potential is expanded in basis functions: $\Phi = \Psi x$, where $\Psi$ are spherical harmonics and $x$ are the coefficients. Every quantity is then linear in $x$, giving the forward operator
  $y = Hx = F(\Psi)\, x$
using the linear relationships of slide 13 (for a given $\Sigma$): $E = -\nabla\Phi$, $I = \Sigma E$, and the Biot-Savart law for $\Delta B$.
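
A sketch of how such a forward operator can be assembled, using SciPy's complex spherical harmonics combined into real basis functions. The observation sites, truncation degree, and normalization are assumptions for illustration; the slide specifies none of AMIE's actual choices, and the real AMIE operator composes $F$ with gradients, Ohm's law, and Biot-Savart rather than just point evaluation of $\Phi$.

```python
import numpy as np
from scipy.special import sph_harm  # newer SciPy also offers sph_harm_y

def potential_design_matrix(lon, colat, max_degree):
    """Columns = real spherical-harmonic basis functions Psi_j evaluated at
    the observation sites, so that Phi(sites) = Psi @ x for coefficients x."""
    cols = []
    for n in range(max_degree + 1):
        for m in range(n + 1):
            # scipy convention: sph_harm(order m, degree n, azimuth, colatitude)
            Y = sph_harm(m, n, lon, colat)
            cols.append(Y.real)          # "cosine" harmonic
            if m > 0:
                cols.append(Y.imag)      # independent "sine" harmonic
    return np.column_stack(cols)

# Hypothetical observation sites in radians (longitude, colatitude).
lon = np.array([0.1, 1.2, 2.5, 4.0])
colat = np.array([0.30, 0.40, 0.35, 0.25])

Psi = potential_design_matrix(lon, colat, max_degree=3)
print(Psi.shape)   # (4 sites, 16 coefficients): y = H x with H built from Psi
```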

Slide 15: Ideas to improve AMIE: adaptive covariance.
  $x_a = x_b + K(y - Hx_b)$, with $K = P_b(\alpha) H^T (H P_b(\alpha) H^T + R)^{-1}$
Maximum likelihood method [Dee, 1995; Matsuo et al., 2005]: let $d = y - Hx_b$, with $d \sim \mathcal{N}(0, S)$, where $S(\alpha) = R + H P_b(\alpha) H^T$. Find the $\alpha$ that maximizes the pdf
  $p(d \mid \alpha) = \dfrac{1}{\sqrt{(2\pi)^J \det S(\alpha)}} \exp\!\left(-\dfrac{d^T S^{-1}(\alpha)\, d}{2}\right)$
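
A sketch of this maximum-likelihood tuning step, maximizing $\log p(d \mid \alpha)$ over a scalar $\alpha$. The parameterization $P_b(\alpha) = \alpha I$, the dimensions, and the synthetic innovations are illustrative stand-ins, not the AMIE setup.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_likelihood(alpha, d, H, R, P_b_of_alpha):
    """-log p(d | alpha) for the innovation d = y - H x_b."""
    S = R + H @ P_b_of_alpha(alpha) @ H.T        # S(alpha), as on the slide
    _, logdet = np.linalg.slogdet(S)
    J = d.size
    return 0.5 * (J * np.log(2.0 * np.pi) + logdet + d @ np.linalg.solve(S, d))

# Illustrative setup: P_b(alpha) = alpha * I, with synthetic innovations
# drawn using a "true" alpha of 0.8.
rng = np.random.default_rng(1)
J = 50
H = np.eye(J)
R = 0.2 * np.eye(J)
P_b_of_alpha = lambda a: a * np.eye(J)
d = rng.multivariate_normal(np.zeros(J), R + 0.8 * np.eye(J))

res = minimize_scalar(neg_log_likelihood, bounds=(1e-3, 10.0), method="bounded",
                      args=(d, H, R, P_b_of_alpha))
print(res.x)   # ML estimate of alpha; should land near the true value 0.8
```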

Slide 16: Ideas to improve AMIE: adaptive covariance (continued). For given observations,
  $x_a = x_b + K(y - Hx_b)$, with $K = P_b(\alpha) H^T (H P_b(\alpha) H^T + R)^{-1}$.

Slide 17: Ideas to improve AMIE: multi-resolution basis functions.
Spherical harmonics vs. multi-resolution basis functions, with covariance
  $\Sigma = W H^2 W^T$  [Nychka et al., 20..]
[Figure: a 1-D wavelet basis; father wavelet (scaling) functions and mother wavelet functions.]
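
To make the $\Sigma = W H^2 W^T$ construction concrete, here is a 1-D sketch with an orthonormal Haar basis standing in for the slide's wavelets; the per-scale weights in $H$ are made up for illustration. Unlike global spherical harmonics, most of these basis functions are compactly supported, which is the resolution advantage the slide is after.

```python
import numpy as np

def haar_basis(n):
    """Orthonormal Haar wavelet transform matrix for signal length n (power of 2).
    Row 0 is the father (scaling) function; the rest are mother wavelets."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_basis(n // 2)
    top = np.kron(h, [1.0, 1.0])                 # coarser-scale functions
    bot = np.kron(np.eye(n // 2), [1.0, -1.0])   # finest-scale details
    W = np.vstack([top, bot])
    return W / np.linalg.norm(W, axis=1, keepdims=True)

n = 8
W = haar_basis(n).T                  # columns = basis functions on the grid
# Diagonal of H: one (illustrative) standard deviation per basis function,
# decaying with scale so fine-scale structure carries less prior variance.
h = np.array([1.0, 0.5, 0.25, 0.25, 0.1, 0.1, 0.1, 0.1])
Sigma = W @ np.diag(h**2) @ W.T      # Sigma = W H^2 W^T, as on the slide
print(np.allclose(Sigma, Sigma.T), np.all(np.linalg.eigvalsh(Sigma) > 0))
```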

Slide 18: Ideas to improve AMIE: what to minimize, $\Delta B$ or $E$?
The inverse procedure infers maps of $E$, $\Phi$, $I$, $J$, $\Delta B$ from observations of:
- $E$: IS or HF radar, satellites
- $I$, $J$: IS radar
- $\Delta B$: satellite or ground-based magnetometers
How can we take advantage of new space-based observations? Science objective: understand the global electrodynamics.

Slide 19: Summary.
- Bayesian statistics as an overarching framework for many DA methods. Assumptions: Gaussian distributions, linear $H$.
- Role of covariance in spatial interpolation.
- Applications to high-latitude electrodynamics (AMIE): functional analysis of $E$, $\Phi$, $I$, $J$, $\Delta B$; Kalman update of spherical harmonics coefficients.
- Current issues:
  - Resolution: global basis functions -> compactly supported functions.
  - New space-based magnetometer observations -> improved conductance models; what to minimize, $E$ or $\Delta B$?
  - Adaptive covariance.