On a Data Assimilation Method coupling Kalman Filtering, MCRE Concept and PGD Model Reduction for Real-Time Updating of Structural Mechanics Model


Transcription:

On a Data Assimilation Method coupling Kalman Filtering, MCRE Concept and PGD Model Reduction for Real-Time Updating of Structural Mechanics Model. 2016 SIAM Conference on Uncertainty Quantification. Basile Marchand¹, Ludovic Chamoin¹, Christian Rey². ¹ LMT/ENS Cachan/CNRS/Paris-Saclay University, France. ² SAFRAN, Research and Technology Center, France. April 5-8, 2016.

DDDAS Paradigm. The DDDAS¹ paradigm: a continuous exchange between the physical system and its numerical model. [Diagram: numerical model (state u, parameters ξ) coupled to the real system S through identification, observation s, and control ξc.] 1. Darema, Dynamic Data Driven Applications Systems: A New Paradigm for Application Simulations and Measurements, 2003. SIAM UQ 2016 - Marchand et al, April 5-8, 2016.

In this work. Objectives: an identification process for time-dependent systems/parameters, with fast resolution, robust even with highly corrupted data. Tools: a Kalman filter for the evolution aspect; the modified Constitutive Relation Error for robustness; an offline/online process based on the Proper Generalized Decomposition.

Outline

Data assimilation. Dynamical system:
$$u^{(k+1)} = M^{(k)} u^{(k)} + e_u^{(k)}, \qquad s^{(k)} = H^{(k)} u^{(k)} + e_s^{(k)}$$
Bayes theorem:
$$\pi\left(u^{(k)} \mid s^{(0:k)}\right) = \frac{\pi\left(s^{(k)} \mid u^{(k)}\right)\,\pi\left(u^{(k)} \mid s^{(0:k-1)}\right)}{\pi\left(s^{(k)} \mid s^{(0:k-1)}\right)}$$
under the following hypotheses: the state $u^{(k)}$ is a Markov process, and the observations $s^{(k)}$ are statistically independent of the state history.

Linear Kalman Filter. Principle: the Kalman filter² is a Bayesian filter combined with the Maximum a Posteriori method in the case of Gaussian probability density functions. Two main steps: (a) a prediction step, where an a priori estimate $u^{(k+1/2)}$ of the system state is computed; (b) an assimilation step, where an a posteriori estimate $u_a^{(k)}$ is computed using the observation data. [Figure: timeline of predicted states $u^{(k+1/2)}$, assimilated states $u_a^{(k)}$, and observations $s^{(k)}$ over time steps $t^{(k-1)}, \dots, t^{(k+4)}$.] 2. Kalman, A new approach to linear filtering and prediction problems, 1960.
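The two steps can be sketched on a generic linear-Gaussian model (a minimal illustration, not the structural model of the talk; all matrices and numbers are hypothetical):

```python
import numpy as np

def kalman_step(u, P, s, M, H, Q, R):
    """One prediction/assimilation cycle of the linear Kalman filter."""
    # (a) Prediction: a priori estimate of state and covariance
    u_pred = M @ u
    P_pred = M @ P @ M.T + Q
    # (b) Assimilation: a posteriori estimate using the observation s
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    u_post = u_pred + K @ (s - H @ u_pred)
    P_post = (np.eye(len(u)) - K @ H) @ P_pred
    return u_post, P_post

# Toy 1D random-walk state, observed directly with noise
rng = np.random.default_rng(0)
M = np.array([[1.0]]); H = np.array([[1.0]])
Q = np.array([[1e-3]]); R = np.array([[1e-2]])
u, P = np.zeros(1), np.eye(1)
truth = 1.0
for _ in range(50):
    s = np.array([truth + 0.1 * rng.standard_normal()])
    u, P = kalman_step(u, P, s, M, H, Q, R)
print(float(u[0]))  # close to 1.0
```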

Inverse problem formulation. The Kalman filter is a very well-known method to solve inverse problems³. Principle: introduce the model parameter vector $\xi \in \mathbb{R}^{n_p}$; with no a priori knowledge, make the stationarity hypothesis $\partial \xi / \partial t \approx 0$, i.e.
$$\xi^{(k+1)} = \xi^{(k)} + e_\xi^{(k)}$$
Two formulations:
Joint Kalman filter, on the augmented state $\bar u^{(k)} = \left[ u^{(k)};\, \xi^{(k)} \right]$:
$$\bar u^{(k+1)} = \bar M^{(k)} \bar u^{(k)} + \bar e_M^{(k)}, \qquad s^{(k)} = \bar H^{(k)} \bar u^{(k)} + e_s^{(k)}$$
Dual Kalman filter:
$$\xi^{(k+1)} = \xi^{(k)} + e_\xi^{(k)}, \qquad s^{(k)} = H^{(k)} u^{(k)}(\xi^{(k)}) + e_s^{(k)}$$
where $u^{(k)}(\xi^{(k)})$ is computed with another Kalman filter. 3. Kaipio and Somersalo, Statistical and Computational Inverse Problems, 2006.

Resolution schemes: UKF vs EKF. The problem: propagate a Gaussian $\mathcal N(\bar x, C_x)$ through a nonlinear operator $\mathcal A$ into a Gaussian $\mathcal N(\bar y, C_y)$. Two main approaches in the Kalman filtering context: first-order linearization, the Extended Kalman filter⁴; a deterministic Monte-Carlo-like method, the Unscented Transform, used in the Unscented Kalman filter⁵. 4. Sorenson and Stubberud, Non-linear Filtering by Approximation of the a posteriori Density, 1968. 5. Julier and Uhlmann, A new extension of the Kalman filter to nonlinear systems, 1997.

Linearization vs Unscented Transform. First-order linearization:
$$A = \nabla_x \mathcal A(\bar x), \qquad \bar y = \mathcal A(\bar x), \qquad C_y = A\, C_x\, A^T$$
Unscented Transform: σ-point propagation,
$$\{x_i\}_{i=1,\dots,2n+1}, \qquad \{y_i\} = \mathcal A\left(\{x_i\}\right)$$
then $\bar y$ and $C_y$ are recovered from the weighted moments of the propagated points. [Figure: prior and posterior densities for both approaches.] For the same computational cost.
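The two propagation strategies can be compared on a small sketch (a hypothetical polar-to-Cartesian operator and standard sigma-point weights with scaling parameter κ; the operator and numbers are illustrative, not the talk's):

```python
import numpy as np

A = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])  # polar -> Cartesian

xbar = np.array([10.0, 0.5])
Cx = np.diag([0.02, 0.09])

# --- First-order linearization (EKF-style) ---
J = np.array([[np.cos(xbar[1]), -xbar[0] * np.sin(xbar[1])],
              [np.sin(xbar[1]),  xbar[0] * np.cos(xbar[1])]])  # Jacobian of A at xbar
y_lin, Cy_lin = A(xbar), J @ Cx @ J.T

# --- Unscented Transform: 2n+1 sigma points ---
n, kappa = 2, 1.0
L = np.linalg.cholesky((n + kappa) * Cx)
pts = [xbar] + [xbar + L[:, i] for i in range(n)] + [xbar - L[:, i] for i in range(n)]
w = np.array([kappa / (n + kappa)] + [0.5 / (n + kappa)] * (2 * n))
ys = np.array([A(p) for p in pts])
y_ut = w @ ys                                   # weighted mean
Cy_ut = sum(wi * np.outer(yi - y_ut, yi - y_ut) for wi, yi in zip(w, ys))

print(y_lin, y_ut)  # UT mean also captures the curvature of A
```

Both cost a handful of evaluations of A; the linearized mean ignores curvature, while the sigma-point mean shrinks toward the origin as the angular spread grows.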

Why another approach? Kalman Filter based methods are well adapted to evolution problems and the DDDAS paradigm. But the methods become very costly as the number of degrees of freedom/parameters increases, and the identification quality strongly depends on the measurement noise.

Outline

Principle of the method. Keep the dual formulation
$$\xi^{(k+1)} = \xi^{(k)} + e_\xi^{(k)}, \qquad s^{(k)} = H^{(k)} u^{(k)}(\xi^{(k)}) + e_s^{(k)}$$
classically computed using a Kalman filter, but use another observation operator, defined from the modified Constitutive Relation Error functional:
$$\xi^{(k+1)} = \xi^{(k)} + e_\xi^{(k)}, \qquad s^{(k)} = H_m^{(k)}\left(\xi^{(k)}; s^{(k-1:k)}\right) + e_s^{(k)}$$

MCRE framework. The idea⁶: weight the classical Constitutive Relation Error⁷ by a measurement error term. Principle: a primal-dual formulation based on the Legendre-Fenchel inequality applied to the Helmholtz free energy. mCRE functional for unsteady thermal problems:
$$E_m(u, q; \xi) = \frac{1}{2} \int_{I_t} \int_\Omega (q + K \nabla u) \cdot K^{-1} (q + K \nabla u)\, dx\, dt + \frac{\delta}{2} \int_{I_t} \|\Pi u - s\|^2\, dt$$
$$\mathcal U = \left\{ u \in H^1(\Omega) \otimes L^2(I_t) \;:\; u = u_d \text{ on } \partial\Omega_u,\; u = u_0 \text{ at } t = t_0 \right\}$$
$$\mathcal S(u) = \left\{ q \in \left[L^2(\Omega) \otimes L^2(I_t)\right]^d \;:\; q \cdot n = q_d \text{ on } \partial\Omega_q,\; \partial_t u + \nabla \cdot q = f \right\}$$
6. Ladevèze et al, Updating of finite element models using vibration tests, 1994. 7. Ladevèze and Leguillon, Error estimate procedure in the finite element method and application, 1983.

mCRE inverse problem. The solution is defined by:
$$\xi^\ast = \arg\min_{\xi \in \mathcal P_{ad}} \; \min_{(u,q) \in \mathcal U_{ad} \times \mathcal S_{ad}} E_m(u, q; \xi)$$
The outer parameter minimization is handled with gradient-based methods and a fixed point; the inner minimization over admissible fields is a constrained minimization. Interest: (i) robustness of the method with highly corrupted data; (ii) strong mechanical content; (iii) model reduction integration.

The Modified Kalman Filter.
$$\xi^{(k+1)} = \xi^{(k)} + e_\xi^{(k)}, \qquad s^{(k)} = H_m^{(k)}\left(\xi^{(k)}, s^{(k-1:k)}\right) + e_s^{(k)}$$
Two steps for $H_m^{(k)}\left(\xi^{(k)}, s^{(k-1:k)}\right)$:
Step 1: admissible fields computation, $u^{(k)} = G_{mcre}\left(\xi^{(k)}, s^{(k-1:k)}\right)$.
Step 2: projection, typically using a boolean matrix $H := \Pi$, so that $H_m\left(\xi^{(k)}, s^{(k)}\right) = H\, G_{mcre}\left(\xi^{(k)}, s^{(k-1:k)}\right)$.

Optimization point of view. Dual Kalman filter based identification can be seen as the minimization of
$$J(\xi) = \sum_{k=0}^{n_t} \left( s^{(k)} - H^{(k)} u^{(k)}(\xi^{(k)}) \right)^T C_s^{(k)\,-1} \left( s^{(k)} - H^{(k)} u^{(k)}(\xi^{(k)}) \right)$$
Classical: $\min_{\mathcal U} \left\| s^{(k)} - H^{(k)} u^{(k)} \right\|^2_{C_s^{(k)\,-1}}$, with the observation data strongly imposed.
mCRE based: $\min_{\mathcal U \times \mathcal S} \left\| q + K \nabla u \right\|^2_{K^{-1},\, I_t^{(k)}} + \frac{\delta}{2} \left\| \Pi u - s \right\|^2_{I_t^{(k)}}$, with the observation data weakly imposed.

Technical points: state estimation. Admissible fields:
$$(u_{ad}, q_{ad}) = \arg\min_{(u,q) \in \mathcal U_{ad} \times \mathcal S_{ad}} E_m\left(u, q; \xi^{(k)}\right)$$
Two time scales are used: the Kalman time scale $t^{(0)}, \dots, t^{(k-1)}, t^{(k)}, \dots, t^{(n_t-1)}$ and, within each interval $I_t^{(k)}$, the finer mCRE time scale $\tau_k^{(0)}, \dots, \tau_k^{(i)}, \dots, \tau_k^{(n_s-1)}$. Introducing a Lagrange multiplier field λ and writing the stationarity conditions, one obtains after FE discretization:
$$\begin{bmatrix} C & 0 \\ 0 & -C \end{bmatrix} \begin{bmatrix} \dot u \\ \dot \lambda \end{bmatrix} + \begin{bmatrix} K & -K \\ \delta \Pi^T \Pi & K \end{bmatrix} \begin{bmatrix} u \\ \lambda \end{bmatrix} = \begin{bmatrix} F_{ext} \\ \delta \Pi^T s \end{bmatrix}$$
with $u\left(\tau_k^{(0)}\right) = u^{(k-1)}$ and $\lambda\left(\tau_k^{(n_s-1)}\right) = 0$: a coupled forward-backward problem in time.
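A coupled forward-backward problem of this kind can be solved monolithically by assembling every time step into one linear system. A toy scalar analogue (hedged: scalar coefficients c, k and data weight d stand in for the FE matrices C, K, δ; the observations s are synthetic):

```python
import numpy as np

# Scalar analogue of the coupled forward-backward stationarity system:
#   c u' + k u - k lam = f,        u(0)   = u0   (forward in time)
#  -c lam' + d u + k lam = d s,    lam(T) = 0    (backward in time)
c, k, d = 1.0, 2.0, 5.0
u0, f = 0.0, 1.0
N, T = 100, 2.0
dt = T / N
s = 0.5 * np.ones(N + 1)                 # synthetic observations (= f/k here)

A = np.zeros((2 * (N + 1), 2 * (N + 1)))
b = np.zeros(2 * (N + 1))
iu = lambda i: i                         # index of u_i in the block vector
il = lambda i: N + 1 + i                 # index of lam_i

A[0, iu(0)] = 1.0; b[0] = u0             # initial condition on u
for i in range(1, N + 1):                # forward equation, implicit Euler
    A[i, iu(i)] = c / dt + k; A[i, iu(i - 1)] = -c / dt; A[i, il(i)] = -k
    b[i] = f
for i in range(N):                       # backward equation, implicit in reverse
    r = N + 1 + i
    A[r, il(i)] = c / dt + k; A[r, il(i + 1)] = -c / dt; A[r, iu(i)] = d
    b[r] = d * s[i]
A[-1, il(N)] = 1.0; b[-1] = 0.0          # final condition on lam

z = np.linalg.solve(A, b)                # one monolithic space-time solve
u, lam = z[:N + 1], z[N + 1:]
print(u[-1])  # near the steady value f/k = 0.5
```

The block matrix mirrors the continuous system: the upper rows march u forward from its initial condition, the lower rows march λ backward from its final condition, and the off-diagonal terms couple the two.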

PGD based model reduction. Find $u \in \mathcal X = \mathcal X_1 \otimes \dots \otimes \mathcal X_D$ such that $B(u, v) = L(v) \;\; \forall v \in \mathcal X$. Principle: a low-rank tensor approximation
$$u \approx u_m = \sum_{i=1}^m w_i^1 \otimes w_i^2 \otimes \dots \otimes w_i^D, \qquad u_m \in \mathcal X_m \subset \mathcal X$$
Construction: many strategies exist⁸; here, a progressive Galerkin approach. With $u_{M-1}$ known, solve by a fixed point
$$B_1(w^1, w^\ast) = L(w^\ast) - B_1(u_{M-1}, w^\ast), \;\; \dots, \;\; B_D(w^D, w^\ast) = L(w^\ast) - B_D(u_{M-1}, w^\ast)$$
then orthogonalize and update (greedy step): $u_M = u_{M-1} + w^1 \otimes \dots \otimes w^D$. 8. Nouy, A priori model reduction through Proper Generalized Decomposition for solving time-dependent partial differential equations, 2010.
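The progressive construction can be sketched on plain L² approximation of a sampled two-coordinate field, a strong simplification of the Galerkin setting: with B the L² inner product, the fixed point on each new pair reduces to alternating least squares, and each converged pair is added greedily (all data here are synthetic):

```python
import numpy as np

# Greedy rank-one construction of a separated representation
# u(x, t) ~ sum_i w1_i(x) w2_i(t): each new pair is found by a fixed point
# alternating between the two directions, then added to the current sum.
x = np.linspace(0.0, 1.0, 200)
t = np.linspace(0.0, 1.0, 150)
U = np.outer(np.sin(np.pi * x), np.exp(-t)) + 0.3 * np.outer(x**2, np.cos(3 * t))

def pgd_greedy(U, m, iters=20):
    approx = np.zeros_like(U)
    modes = []
    for _ in range(m):
        R = U - approx                    # residual to enrich
        w1 = np.ones(U.shape[0])
        for _ in range(iters):            # fixed point on the pair (w1, w2)
            w2 = R.T @ w1 / (w1 @ w1)
            w1 = R @ w2 / (w2 @ w2)
        modes.append((w1, w2))
        approx = approx + np.outer(w1, w2)  # progressive (greedy) update
    return approx, modes

approx, modes = pgd_greedy(U, m=2)
rel_err = np.linalg.norm(U - approx) / np.linalg.norm(U)
print(rel_err)  # small: the sampled field is essentially rank two
```

For this pure approximation problem the fixed point is a power iteration, so each greedy mode approaches a dominant singular pair of the residual; in the actual PGD the same alternating structure is applied to the weak form B.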

PGD-mCRE. A two-field problem (u and λ), so two PGD decompositions are computed simultaneously. Many parameters are considered as extra-coordinates: space and time; the parameters to identify ξ; the observation data; the initial condition, projected onto a reduced basis
$$u_0^{(k)} = \sum_{i=0}^{n_{init}} \alpha_i\, \psi_i(x)$$
The two decompositions read
$$u_{PGD} = \sum_{i=1}^m \phi_i^u\, \psi_i^u \prod_{j=1}^{n_p} \chi_{j,i}^u \prod_{k=1}^{n_{obs}} \theta_{k,i}^u \prod_{m=1}^{n_{obs}} \eta_{m,i}^u \prod_{q=1}^{n_{init}} \varphi_{q,i}^u$$
$$\lambda_{PGD} = \sum_{i=1}^m \phi_i^\lambda\, \psi_i^\lambda \prod_{j=1}^{n_p} \chi_{j,i}^\lambda \prod_{k=1}^{n_{obs}} \theta_{k,i}^\lambda \prod_{m=1}^{n_{obs}} \eta_{m,i}^\lambda \prod_{q=1}^{n_{init}} \varphi_{q,i}^\lambda$$
with $n_p + 2 n_{obs} + n_{init} \approx 20$ extra-coordinates.

Synthesis⁸. Offline: reduced basis computation for the initial condition projection; PGD admissible fields computation. Online, at each time step: project the current initial condition onto the reduced basis; evaluate the PGD parametric solution for the set of σ-points; project the state into the observation space; update the Kalman parameters. 8. Marchand et al, Real-time updating of structural mechanics models using Kalman filtering, modified Constitutive Relation Error and Proper Generalized Decomposition, 2016.
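The online loop can be sketched on a scalar toy (heavily hedged: a closed-form parametric solution u(t; ξ) stands in for the offline PGD solution, and a one-dimensional unscented update stands in for the full filter; every name and value is illustrative):

```python
import numpy as np

# "Offline" parametric solution: u' = -xi u + 1, u(0) = 0  =>  closed form.
u_param = lambda t, xi: (1.0 - np.exp(-xi * t)) / xi

rng = np.random.default_rng(3)
xi_true, dt = 2.0, 0.05
xi_m, xi_v = 1.0, 0.25          # prior mean / variance of the parameter
q_xi, r = 1e-6, 2.5e-3          # random-walk and measurement-noise variances
kappa = 2.0
w = np.array([kappa, 0.5, 0.5]) / (1.0 + kappa)   # sigma-point weights (n = 1)

for k in range(1, 201):                            # online, at each time step
    t = k * dt
    s = u_param(t, xi_true) + 0.05 * rng.standard_normal()  # observation
    xi_v += q_xi                                   # prediction (random walk)
    h = np.sqrt((1.0 + kappa) * xi_v)
    sp = np.array([xi_m, xi_m + h, xi_m - h])      # sigma points of xi
    ys = u_param(t, sp)          # evaluate parametric solution at sigma points
    y_m = w @ ys                 # project into observation space
    Pyy = w @ (ys - y_m) ** 2 + r
    Pxy = w @ ((sp - xi_m) * (ys - y_m))
    K = Pxy / Pyy                                  # Kalman gain
    xi_m += K * (s - y_m)                          # Kalman parameter update
    xi_v -= K * Pxy

print(xi_m)  # close to xi_true = 2.0
```

The point of the offline/online split shows up in the loop body: each online step only evaluates the precomputed parametric solution, never re-solves the forward problem.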

Outline

Example 1: problem setting. Identify the Neumann boundary condition $q_d(t)$; material properties ρc, κ; Dirichlet condition $u = u_d$; sensor locations as shown. Time steps for observation: 1000; time steps for identification: 100; noise level: 20%. [Figures: geometry with sensor locations; PGD modes.]

Example 1: Neumann B.C. identification. [Figure: identified flux vs time step (exact, mean, variance) for the Joint Unscented Kalman Filter and the Modified Kalman Filter over 100 time steps.] The Modified Kalman Filter shows better accuracy. Impact of the tuning parameters, measured by
$$\varepsilon_{MKF} = \frac{\left\| \xi_{true} - E[\xi_{MKF}] \right\|_{L^2(I_t)}}{\left\| \xi_{true} \right\|_{L^2(I_t)}}$$
[Figure: $\varepsilon_{MKF}$ as a function of the covariance tuning parameters $c_s$ and $c_\xi$.]

Example 2: problem setting. Identify the conductivities $\kappa_1, \kappa_2, \kappa_3, \kappa_4$; Dirichlet condition $u = u_d$; sensor locations as shown. Time steps for observation: 1000; time steps for identification: 100; noise level: 10%. [Figures: geometry with sensor locations; space modes.]

Example 2: conductivity identification. [Figure: $\kappa_i / \kappa_i^{ref}$, $i = 1, \dots, 4$, vs time step (exact, mean, variance) for the Joint Unscented Kalman Filter and the Modified Kalman Filter over 100 time steps.] The Modified Kalman Filter shows better accuracy and robustness.

Example 3: problem setting. Thermal source $f(x; x_c) = \mathrm{sinc}^2\left(\pi \| x - x_c(t) \|\right)$, with $u = 0$ on the boundary; sensor locations as shown. To include $x_c$ as a PGD extra-coordinate, the source is separated using an SVD:
$$f(x; x_c) \approx \sum_{i=1}^N F_i(x)\, G_i(x_c)$$
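Such a separated representation can be obtained from a truncated SVD of the sampled source. A sketch, assuming 1D space and source-center grids (note that NumPy's sinc already contains the factor π):

```python
import numpy as np

# Separate the nonseparable source f(x; xc) into sum_i F_i(x) G_i(xc)
# via a truncated SVD of its sampled values, so xc becomes an
# extra-coordinate with separated dependence.
x = np.linspace(0.0, 1.0, 300)           # space grid
xc = np.linspace(0.2, 0.8, 200)          # source-center grid (extra-coordinate)
F = np.sinc(x[:, None] - xc[None, :]) ** 2   # np.sinc(z) = sin(pi z)/(pi z)

U, sv, Vt = np.linalg.svd(F, full_matrices=False)
N = int(np.searchsorted(-sv, -1e-8 * sv[0]))   # modes above a relative tolerance
F_sep = (U[:, :N] * sv[:N]) @ Vt[:N, :]        # sum_i F_i(x) G_i(xc)

rel_err = np.linalg.norm(F - F_sep) / np.linalg.norm(F)
print(N, rel_err)
```

By the Eckart-Young theorem the truncation error is controlled by the discarded singular values, so the tolerance directly bounds the separation error.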

Example 3: source localization. Time steps for observation: 1000; time steps for identification: 100; noise level: 10%. [Figure: $x_c$ and $y_c$ identification vs time step (exact, mean, variance) with the Modified Kalman Filter.] Not compared to the UKF, since this problem would require solving 5000 problems at each time step with the UKF approach.

Example 3: source localization, limits of the PGD here. [Figures: space modes; source center modes.] The solution is relatively singular, which means the initial condition should be projected on many modes ($n_{init} \gg 1$), but then $n_p + 2 n_{obs} + n_{init} \gg 20$.

Outline

Conclusion and future works. Unscented Kalman Filter: easy implementation, but issues of cost and robustness. Modified Kalman Filter: the modified CRE brings robustness, at the price of two nested minimizations, and the Proper Generalized Decomposition brings the cost down. Future works: extension to field identification, where the number of parameters increases significantly, with split state and parameter meshes and an adaptive strategy⁹.