A reduced basis method and ROM-based optimization for batch chromatography


1  A reduced basis method and ROM-based optimization for batch chromatography

MedRed 2013, Magdeburg, Dec. 2013

Yongjin Zhang, joint work with Peter Benner, Lihong Feng and Suzhou Li
Max Planck Institute for Dynamics of Complex Technical Systems, Magdeburg, Germany

2  Outline
1. Motivation
2. Review on RBM: POD-Greedy algorithm; Empirical Interpolation (EI) & Collateral Reduced Basis (CRB)
3. Adaptive Snapshot Selection
4. Error Estimation
5. Numerical Experiments
6. Conclusions and Outlook

3  Motivation: Model Description
Batch chromatography.
[Figure: principle of batch chromatography for binary separation — the feed mixture a + b is pumped with flow rate Q through the column; the outlet concentration profiles of a and b over one cycle (injection period t_in, cycle time t_cyc) are collected as products.]

4  Motivation: Model Description (cont.)
Governing equations (z = a, b):

  ∂c_z/∂t + (1−ɛ)/ɛ ∂q_z/∂t = −∂c_z/∂x + (1/Pe) ∂²c_z/∂x²,   0 < x < 1,
  ∂q_z/∂t = L/(Q/(ɛ A_c)) κ_z (q_z^Eq − q_z),                  0 ≤ x ≤ 1,          (1)

with suitable initial and boundary conditions.
- q_z^Eq = f_z(c_a(µ), c_b(µ)) is nonlinear and non-affine, z = a, b.
- µ := (Q, t_in) is the vector of operating parameters.
- Pe is the Péclet number.

5  Motivation: Model Description (cont.)
The optimization of batch chromatography:

  min_{µ ∈ P}  −Pr(c_z(µ), q_z(µ); µ)
  s.t.  Rec(c_z(µ), q_z(µ); µ) ≥ Rec^min,
        c_z(µ), q_z(µ) are the solutions to the system (1).

Production rate:  Pr = p(µ) Q / t_cyc
Recovery yield:   Rec = p(µ) / (t_in (c_a^f + c_b^f))
p(µ) := ∫_{t_1}^{t_2} c_{b,o}(t; µ) dt + ∫_{t_3}^{t_4} c_{a,o}(t; µ) dt
c_{z,o}(t; µ) := c_z(t, x = 1; µ), t ∈ [0, T]: concentrations at the outlet of the column
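Both performance indicators can be evaluated directly from the simulated outlet concentrations. Below is a minimal Python sketch of these definitions, assuming a common time grid and trapezoidal quadrature; all function and argument names (performance, t_cuts, c_feed, ...) are illustrative rather than taken from the slides, and the placement of Q in the production rate follows the reading of the formula given above.

```python
import numpy as np

def performance(t, c_a_out, c_b_out, Q, t_in, t_cuts, c_feed, t_cyc):
    """Production rate Pr and recovery yield Rec from outlet concentrations.

    t, c_a_out, c_b_out : 1D arrays on a common time grid
    t_cuts              : (t1, t2, t3, t4), the collection (cut) times
    c_feed              : (c_a_f, c_b_f), the feed concentrations
    """
    t1, t2, t3, t4 = t_cuts
    c_a_f, c_b_f = c_feed

    def collect(c, lo, hi):
        # integral of an outlet concentration over a collection window [lo, hi]
        mask = (t >= lo) & (t <= hi)
        return np.trapz(c[mask], t[mask])

    # p(mu) = int_{t1}^{t2} c_{b,o} dt + int_{t3}^{t4} c_{a,o} dt
    p = collect(c_b_out, t1, t2) + collect(c_a_out, t3, t4)
    Pr = p * Q / t_cyc                     # production rate
    Rec = p / (t_in * (c_a_f + c_b_f))     # recovery yield
    return Pr, Rec
```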

6  Motivation: General Idea
RBM & ROM-based optimization:
[Diagram: FOM → ROM via, e.g., POD-Greedy and (D)EIM; FOM-based optimization ≈ ROM-based optimization.]
Ingredients: certified error estimation, offline-online decomposition, efficiency of the ROM(-based optimization).

7  Review on RBM
Consider a spatially discretized, parametrized, time-dependent system

  Σ: F(u(t; µ)) = 0,   µ ∈ P,  t ∈ [0, T],

where u(t; µ) ∈ R^𝒩 and F is a linear or nonlinear operator. The output of interest is y := y(u(t; µ), µ).
Assume that a reduced basis (RB) V := [V_1, ..., V_N] is available and u ≈ û := V a with a := a(t; µ) ∈ R^N. A reduced-order model (ROM) is then obtained by Galerkin projection,

  Σ̂: V^T F(û) = V^T F(V a) = 0,   N ≪ 𝒩.

The corresponding output is approximated by ŷ(µ) := y(û(µ)).
Remark: the Empirical Interpolation Method (EIM) or its variants (e.g. DEIM, empirical operator interpolation) can be employed if F is nonlinear/non-affine.
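For a linear scheme the projection reduces to a few small matrix products that can be precomputed offline. The following sketch shows the idea for a generic implicit linear evolution (it is not the batch-chromatography model itself); the random data and the dimensions 1500 and 47 merely mirror the sizes used in the later experiments.

```python
import numpy as np

def galerkin_rom(A, B, V):
    """Offline: project a full-order scheme A u^{n+1} = B u^n onto span(V),
    where V (size N_full x N_rb) is an orthonormal reduced basis."""
    return V.T @ A @ V, V.T @ B @ V

def rom_step(A_r, B_r, a_n):
    """Online: one reduced time step, solve A_r a^{n+1} = B_r a^n."""
    return np.linalg.solve(A_r, B_r @ a_n)

# usage with random data and dimensions matching the later experiments
rng = np.random.default_rng(0)
A = np.eye(1500) + 0.01 * rng.standard_normal((1500, 1500))
B = np.eye(1500)
V, _ = np.linalg.qr(rng.standard_normal((1500, 47)))
A_r, B_r = galerkin_rom(A, B, V)
a = rom_step(A_r, B_r, V.T @ rng.standard_normal(1500))
```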

8  Review on RBM: POD-Greedy Algorithm

Algorithm 1  RB generation using POD-Greedy
Input: P_train, tol_RB (< 1)
Output: RB V = [V_1, ..., V_N]
1: Initialization: W_N = [ ], N = 0, µ_max = µ_0, η_N(µ_max) = 1
2: while the error η_N(µ_max) > tol_RB do
3:   Compute the trajectory S_max := {u^n(µ_max)}_{n=0}^{K}
4:   Enrich the RB: W_{N+1} := W_N ∪ V_{N+1}, where V_{N+1} is the first POD mode of the matrix Ū = [ū^0, ..., ū^K] with ū^n := u^n(µ_max) − Π_{W_N}[u^n(µ_max)], n = 0, ..., K, and Π_{W_N}[u] is the projection of u onto the current space W_N := span{V_1, ..., V_N}
5:   N = N + 1
6:   Find µ_max := arg max_{µ ∈ P_train} η_N(µ)
7: end while

Influence factors of the cost: N, |P_train|, η(µ_max), K, ...
Ref: B. Haasdonk and M. Ohlberger, ESAIM: M2AN, 42, 2008.
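A compact sketch of this greedy loop in Python/NumPy is given below, assuming two user-supplied callables: solve_fom (returns the snapshot trajectory for a parameter) and error_indicator (returns η_N(µ) for the current basis). Both names, the starting parameter, and the safeguard max_n are assumptions of the sketch.

```python
import numpy as np

def pod_greedy(solve_fom, error_indicator, P_train, tol_rb, max_n=100):
    """RB generation by POD-Greedy (cf. Algorithm 1).

    solve_fom(mu)          -> snapshot matrix U of shape (N_full, K+1)
    error_indicator(V, mu) -> scalar error estimate eta_N(mu) for basis V
    """
    V = np.zeros((0, 0))                 # empty basis
    mu_max = P_train[0]                  # stands in for mu_0
    while True:
        U = solve_fom(mu_max)            # trajectory at the worst parameter
        if V.size:
            U = U - V @ (V.T @ U)        # projection error w.r.t. current space
        # first POD mode of the projection errors = dominant left singular vector
        v_new = np.linalg.svd(U, full_matrices=False)[0][:, :1]
        V = v_new if not V.size else np.hstack([V, v_new])
        V, _ = np.linalg.qr(V)           # re-orthonormalize the enriched basis
        errs = [error_indicator(V, mu) for mu in P_train]
        mu_max = P_train[int(np.argmax(errs))]
        if max(errs) <= tol_rb or V.shape[1] >= max_n:
            return V
```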

9  Review on RBM: Empirical Interpolation (EI) & Collateral Reduced Basis (CRB)

Idea: g(x, µ) ≈ ĝ_m := ∑_{i=1}^{m} σ_i(µ) W_i, µ ∈ P.  [Barrault et al. 2004]

Algorithm 2  Generation of the CRB and the EI points T_M
Input: L_train^crb := {g(x, µ) | µ ∈ P_train^crb}, tol_CRB (< 1)
Output: CRB W = [W_1, ..., W_M] and T_M = {x_1, ..., x_M}
1: Initialization: m = 1, W_ei^0 := {0}, ξ_0 = 1
2: while ξ_{m−1} > tol_CRB do
3:   For all g ∈ L_train^crb, define the best approximation ĝ_{m−1} = ∑_{i=1}^{m−1} σ_i W_i in the current space W_ei^{m−1} := span{W_1, ..., W_{m−1}}
4:   Define g_m := arg max_{g ∈ L_train^crb} ‖g − ĝ_{m−1}‖, ξ_m := ‖g_m − ĝ_{m−1}‖ and r_m := g_m − ĝ_{m−1}
5:   if ξ_m ≤ tol_CRB then
6:     Stop and set M = m − 1
7:   else
8:     Determine x_m := arg sup_{x ∈ Ω} |r_m(x)| and W_m := r_m / r_m(x_m)
9:   end if
10:  m := m + 1
11: end while
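A discrete version of this greedy procedure, operating on a matrix of training snapshots, might look as follows; the least-squares best approximation, the column-wise error norm, and all names are assumptions of this sketch rather than the exact implementation behind the slides.

```python
import numpy as np

def eim_greedy(G, tol_crb, max_m=100):
    """Greedy generation of a collateral reduced basis and EI points.

    G : array of shape (n_x, n_train); each column is one training snapshot
        g(., mu) evaluated on the spatial grid.
    Returns the CRB W (n_x, M) and the EI point indices.
    """
    n_x = G.shape[0]
    W = np.zeros((n_x, 0))
    pts = []
    for _ in range(max_m):
        if W.shape[1] > 0:
            # best approximation of every snapshot in span(W), via least squares
            coeffs, *_ = np.linalg.lstsq(W, G, rcond=None)
            R = G - W @ coeffs
        else:
            R = G.copy()
        errs = np.linalg.norm(R, axis=0)
        j = int(np.argmax(errs))          # worst-approximated snapshot
        if errs[j] <= tol_crb:
            break
        r = R[:, j]
        x_m = int(np.argmax(np.abs(r)))   # new EI point: largest residual entry
        W = np.hstack([W, (r / r[x_m])[:, None]])
        pts.append(x_m)
    return W, np.array(pts)
```

A new function g is then interpolated by solving W[pts, :] σ = g[pts] and setting ĝ = W σ.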

10  Review on RBM: Runtime Comparison
FOM-Opt. vs. ROM-Opt.: … h vs. 0.58 h (speedup ≈ 82.83).

11  Adaptive Snapshot Selection
The idea of ASS is to discard redundant linear information in the trajectory early, i.e. before performing the SVD for the generation of the basis.

Given non-zero vectors v_1, v_2:  cos θ = ⟨v_1, v_2⟩ / (‖v_1‖ ‖v_2‖);  v_1 and v_2 are linearly dependent ⟺ |cos θ| = 1 (θ = 0 or π).

For a trajectory {u^n(µ)}_{n=0}^{K}, define

  Ind(u^n(µ), u^m(µ)) = 1 − |⟨u^n(µ), u^m(µ)⟩| / (‖u^n(µ)‖ ‖u^m(µ)‖).

ASS is implemented as follows.

12  ASS (cont.)

Algorithm 3  Adaptive snapshot selection (ASS)
Input: initial vector u^0(µ), tol_ASS
Output: selected snapshot matrix S_A = [u^{n_1}(µ), u^{n_2}(µ), ..., u^{n_l}(µ)]
1: Initialization: j = 1, n_j = 0, S_A = [u^{n_j}(µ)]
2: for n = 1, ..., K do
3:   Compute the vector u^n(µ).
4:   if Ind(u^n(µ), u^{n_j}(µ)) > tol_ASS then
5:     j = j + 1
6:     n_j = n
7:     S_A = [S_A, u^{n_j}(µ)]
8:   end if
9: end for
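In code, the selection is a single pass over the trajectory that compares each new state only with the most recently kept one. The sketch below assumes the trajectory is available as a sequence of NumPy vectors; all names are illustrative.

```python
import numpy as np

def ind(u, v):
    """Linear-dependency indicator Ind(u, v) = 1 - |<u, v>| / (||u|| ||v||)."""
    return 1.0 - abs(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def adaptive_snapshot_selection(trajectory, tol_ass):
    """Keep a snapshot only if it differs enough from the last selected one.

    trajectory : sequence of state vectors u^0, ..., u^K
    Returns the selected snapshot matrix S_A (columns = selected snapshots).
    """
    selected = [np.asarray(trajectory[0])]
    for u in trajectory[1:]:
        u = np.asarray(u)
        if ind(u, selected[-1]) > tol_ass:
            selected.append(u)
    return np.column_stack(selected)
```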

13  ASS (cont.)
Remark 1: ASS can easily be combined with other algorithms, e.g. Algorithm 1, for the generation of the RB and the CRB.
Remark 2: To detect linear dependency, one could also check the angle between the tested vector u^n(µ) and the subspace spanned by the already selected snapshots S_A. More redundant information can then be discarded, but at a higher cost. Since the data will be compressed further anyway, e.g. by the POD-Greedy algorithm, we simply choose the economical variant shown in Algorithm 3.

14  ASS-POD-Greedy

Algorithm 4  RB generation using ASS-POD-Greedy
Input: P_train, tol_RB
Output: RB V = [V_1, ..., V_N]
1: Initialization: W_N = {0}, N = 0, µ_max = µ_0, η_N(µ_max) = 1
2: while the error η_N(µ_max) > tol_RB do
3:   Compute the trajectory S := {u^n(µ_max)}_{n=0}^{K}, adaptively select snapshots using Alg. 3, and get S_A := {u^{n_1}(µ_max), ..., u^{n_l}(µ_max)}
4:   Enrich the RB: W_{N+1} := W_N ∪ V_{N+1}, where V_{N+1} is the first POD mode of the matrix Ū_A = [ū^{n_1}, ..., ū^{n_l}] with ū^{n_s} := u^{n_s}(µ_max) − Π_{W_N}[u^{n_s}(µ_max)], s = 1, ..., l (n_l ≤ K), and Π_{W_N}[u] is the projection of u onto the current space W_N := span{V_1, ..., V_N}
5:   N = N + 1
6:   Find µ_max := arg max_{µ ∈ P_train} η_N(µ)
7: end while

Remark: the error η_N(µ_max) can be an error estimator or the true error.

15  Error Estimation
Consider a general evolution scheme,

  A u^{n+1}(µ) = B u^n(µ) + g^n,

where A, B ∈ R^{𝒩×𝒩} are constant matrices and g^n := g(u^n(µ), µ) ∈ R^𝒩 is nonlinear or non-affine. Let
- û^n(µ) = V a^n(µ): RB approximation of u^n(µ),
- ĝ^n(µ) := I_M[g(û^n(µ))] = W β^n(µ): interpolation of g^n,
- V ∈ R^{𝒩×N}, W ∈ R^{𝒩×M}: parameter-independent bases,
- a^n ∈ R^N, β^n ∈ R^M: parameter-dependent coefficients.
Define the residual

  r^{n+1} := B û^n + I_M[g(û^n)] − A û^{n+1}.

We have the following propositions.

16  Error Estimation (cont.)
Proposition 1: Assume that the operator g is Lipschitz continuous, i.e. there exists L_g ∈ R^+ such that ‖g(x) − g(y)‖ ≤ L_g ‖x − y‖, and that the interpolation of g is exact for a suitably enlarged CRB, i.e.

  I_{M+M′}[g(û^n)] := ∑_{m=1}^{M+M′} W_m β_m^n = g(û^n).

Assume the initial projection error e^0 = 0. Then the field-variable error e^n := u^n − û^n satisfies

  ‖e^n(µ)‖ ≤ ∑_{k=0}^{n−1} ‖A^{−1}‖^{n−k} C^{n−1−k} ( ε_EI^k(µ) + ‖r^{k+1}(µ)‖ ),

where C = ‖B‖ + L_g and ε_EI^n(µ) is the error due to the interpolation. A sharper error bound is given by

  ‖e^n(µ)‖ ≤ ∑_{k=0}^{n−1} ( ‖A^{−1}B‖ + L_g ‖A^{−1}‖ )^{n−1−k} ( ‖A^{−1}‖ ε_EI^k(µ) + ‖A^{−1}‖ ‖r^{k+1}(µ)‖ ) =: η_{N,M}^n(µ).
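Because the sharper bound has the form of a discrete convolution, it can be accumulated recursively over the time steps: η^n = (‖A^{−1}B‖ + L_g‖A^{−1}‖) η^{n−1} + ‖A^{−1}‖(ε_EI^{n−1} + ‖r^n‖) with η^0 = 0. A minimal sketch of this accumulation is given below; the argument names are illustrative, and the norms, interpolation errors and residual norms are assumed to be precomputed.

```python
import numpy as np

def field_error_bound(normA_inv, normA_invB, L_g, eps_ei, res_norms):
    """Accumulate the sharper bound eta^n_{N,M} of Proposition 1.

    normA_inv, normA_invB : ||A^{-1}|| and ||A^{-1} B|| (parameter independent)
    L_g                   : Lipschitz constant of g
    eps_ei                : [eps_EI^0, ..., eps_EI^{K-1}], interpolation errors
    res_norms             : [||r^1||, ..., ||r^K||], residual norms
    Returns [eta^0, ..., eta^K].
    """
    C1 = normA_invB + L_g * normA_inv
    eta = [0.0]                                   # e^0 = 0 by assumption
    for eps_k, r_k1 in zip(eps_ei, res_norms):
        eta.append(C1 * eta[-1] + normA_inv * (eps_k + r_k1))
    return np.array(eta)
```

The output bound on the next slide reuses η^n from this recursion, with P-weighted constants.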

17  Error Estimation (cont.)
Proposition 2: Under the assumptions of Proposition 1, assume the output of interest y(u^n(µ)) can be expressed as y(u^n(µ)) = P u^n, where P ∈ R^{N_o×𝒩} is a constant matrix. Then the output error e_O^n(µ) := P u^n − P û^n satisfies

  ‖e_O^{n+1}(µ)‖ ≤ η̄_{N,M}^{n+1} := ( ‖P A^{−1} B‖ + L_g ‖P A^{−1}‖ ) η_{N,M}^n + ‖P A^{−1}‖ ε_EI^n(µ) + ‖P A^{−1}‖ ‖r^{n+1}(µ)‖.

Remark: It is easy to show that the output error bound η̄_{N,M}^{n+1} is sharper than the trivial bound

  ‖e_O^{n+1}(µ)‖ = ‖P(u^{n+1} − û^{n+1})‖ ≤ ‖P‖ ‖e^{n+1}(µ)‖ ≤ ‖P‖ η_{N,M}^{n+1}.

18  RBM & ROM-based Optimization
FOM (Σ) → ROM (Σ̂) via ASS-POD-Greedy and ASS-EI:

  Σ:  A c_z^{n+1} = B c_z^n + d_z^n − ((1−ɛ)/ɛ) Δt h_z^n,
      q_z^{n+1} = q_z^n + Δt h_z^n,                          c_z^n, q_z^n ∈ R^𝒩,

  Σ̂:  Â_{c_z} a_{c_z}^{n+1} = B̂_{c_z} a_{c_z}^n + d_0^n d̂_{c_z} − ((1−ɛ)/ɛ) Δt ĥ_{c_z} β_z^n,
      a_{q_z}^{n+1} = a_{q_z}^n + Δt ĥ_{q_z} β_z^n,          a_{c_z}^n, a_{q_z}^n ∈ R^N, β_z^n ∈ R^M,  N, M ≪ 𝒩.

19  RBM & ROM-based Optimization (cont.)
The FOM-based optimization

  min_{µ ∈ P} −Pr(c_z(µ), q_z(µ); µ)   s.t.  Rec(c_z(µ), q_z(µ); µ) ≥ Rec^min,   c_z(µ), q_z(µ): solutions to Σ,

is approximated by the ROM-based optimization

  min_{µ ∈ P} −P̂r(ĉ_z(µ), q̂_z(µ); µ)   s.t.  R̂ec(ĉ_z(µ), q̂_z(µ); µ) ≥ Rec^min,   ĉ_z(µ), q̂_z(µ): solutions to Σ̂.
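One online time step of such a reduced scheme costs only operations in the reduced dimensions N and M. The sketch below illustrates this; the ops dictionary, its keys, and the pointwise nonlinearity g are illustrative assumptions rather than the actual implementation. The EI coefficients β are recovered by evaluating the nonlinear term at the M interpolation points only.

```python
import numpy as np

def rom_time_step(ops, a_c, a_q, d0_n, dt, eps):
    """One online step of a reduced scheme of the above form (sketch).

    ops holds precomputed reduced quantities (all names illustrative):
      'A_hat', 'B_hat'     reduced system matrices (N x N)
      'd_hat'              reduced feed/boundary vector (N)
      'hc_hat', 'hq_hat'   reduced interpolation operators (N x M)
      'Vc_pts', 'Vq_pts'   rows of the RBs at the M EI points (M x N)
      'W_pts_inv'          inverse of the CRB evaluated at the EI points (M x M)
      'g'                  pointwise nonlinearity, g(c_pts, q_pts) -> length-M array
    """
    # EI coefficients: evaluate the nonlinear term at the M interpolation points only
    c_pts = ops['Vc_pts'] @ a_c
    q_pts = ops['Vq_pts'] @ a_q
    beta = ops['W_pts_inv'] @ ops['g'](c_pts, q_pts)

    rhs = ops['B_hat'] @ a_c + d0_n * ops['d_hat'] \
          - (1.0 - eps) / eps * dt * (ops['hc_hat'] @ beta)
    a_c_new = np.linalg.solve(ops['A_hat'], rhs)
    a_q_new = a_q + dt * (ops['hq_hat'] @ beta)
    return a_c_new, a_q_new
```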

20  Num. Ex.: Performance of ASS
Generation of the CRBs (W_a, W_b) with different tol_ASS at the same tolerance tol_CRB = …:

  tol_ASS  | M (W_a, W_b) | Runtime [h] (reduction)
  no ASS   | …            | … (−)
  ASS, …   | …            | … (−90.3 %)
  ASS, …   | …            | … (−93.5 %)
  ASS, …   | …            | … (−94.4 %)

Runtime comparison of the POD-Greedy algorithm with early-stop (Alg. 5), with and without ASS:

  Simulations     | Num. of RB (N) | Runtime [h] (reduction)
  POD-Greedy      | …              | …
  ASS-POD-Greedy  | …              | … (−50.1 %)

(Footnote 1: run on a workstation with 4 Intel Xeon E… CPUs (8 cores per CPU), 2.67 GHz, 1 TB RAM; the other runs were done on a PC with a quad-core CPU, 2.83 GHz, 4 GB RAM.)

21  Num. Ex.: Performance of Error Estimation
[Figure: decay of the field-variable error bound, the output error bound and the true output error over the size of the RB, N, during the RB extension.]

Output error bound:         η̄_N := max_{z∈{a,b}} max_{µ∈P_train} η̄_{N,M,c_z}^K(µ)
Field-variable error bound: η_N := max_{z∈{a,b}} max_{µ∈P_train} η_{N,M,c_z}^K(µ)
True output error:          e_N^max := max_{µ∈P_train} ē_N(µ), where ē_N(µ) := max_{z∈{a,b}} ē_{N,c_z}(µ) and ē_{N,c_z}(µ) := (1/K) ∑_{n=1}^{K} |c_{z,o}^n(µ) − ĉ_{z,o}^n(µ)|

22  Num. Ex.: ASS-POD-Greedy with Early-stop

Algorithm 5  RB generation using ASS-POD-Greedy with early-stop
Input: P_train, tol_RB, tol_decay
Output: RB V = [V_1, ..., V_N]
1: Implement Step 1 in Alg. 4
2: while the error η_N(µ_max) > tol_RB do
3:   Implement Steps 3-6 in Alg. 4
4:   Compute the decay of the error bound dη = (η_{N−1}(µ_max^old) − η_N(µ_max)) / η_{N−1}(µ_max^old)
5:   if dη < tol_decay then
6:     Compute the true output error at the selected parameter µ_max, ē_N(µ_max)
7:     if ē_N(µ_max) < tol_RB then
8:       Stop
9:     end if
10:  end if
11: end while
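The early-stop test itself is cheap: only when the bound stagnates is the (expensive) true output error computed at the currently selected parameter. A small sketch, with illustrative names and the true error supplied as a callable so that it is evaluated only on demand:

```python
def early_stop(eta_prev, eta_curr, tol_decay, tol_rb, true_error_at_mu_max):
    """Early-stop test of Algorithm 5 (sketch).

    eta_prev, eta_curr   : eta_{N-1}(mu_max^old) and eta_N(mu_max)
    true_error_at_mu_max : callable returning the true output error at the
                           currently selected parameter; evaluated only when
                           the bound stagnates, since it needs a FOM solve
    """
    decay = (eta_prev - eta_curr) / eta_prev
    if decay < tol_decay and true_error_at_mu_max() < tol_rb:
        return True          # terminate the greedy loop
    return False
```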

23  Num. Ex.: Error Decay
[Figure: decay of the output error bound and of the corresponding maximal true output error over the size of the RB, N, during the RB extension using Alg. 5.]

24  Num. Ex.: Parameter Location
[Figure: location of the parameters selected during the RB extension using Alg. 5, in the plane (feed flow rate Q, injection period t_in).]

25  Num. Ex.: ROM Performance
[Figure: dimensionless concentrations c_a and c_b at the outlet of the column over dimensionless time, computed with the FOM (𝒩 = 1500) and the ROM (N = 47) at the parameter µ = (Q, t_in) = (0.1018, …).]

26  Num. Ex.: ROM Validation
Runtime comparison of the detailed and reduced simulations over a validation set P_val with 600 random sample points. Tolerances for the generation of the ROM: tol_CRB = …, tol_RB = …, tol_ASS = …

  Simulations                 | Max. error | Average runtime [s] / speedup
  FOM (𝒩 = 1500)             | −          | … (−)
  ROM, no ASS for POD-Greedy  | …          | … / 53
  ROM, ASS-POD-Greedy         | …          | … / 48

27  Num. Ex.: ROM-based Optimization
The optimization for batch chromatography: the ROM-based optimization (with Pr, Rec and Σ replaced by their reduced counterparts P̂r, R̂ec and Σ̂, cf. slide 19) is solved instead of the FOM-based one.

Setting: P = [0.0667, …] × [0.5, 2.0], Pu_a = Pu_b = 95.0 %, Rec^min = 80.0 %, 𝒩 = 1500, N = 47.

Comparison of the optimization based on the ROM and on the FOM:

  Simulations | Obj. (Pr) | Opt. solution (µ) | It. | Runtime [h] / speedup
  FOM-Opt     | …         | (…, …)            | …   | … / −
  ROM-Opt     | …         | (…, …)            | …   | … / 53

Optimizer: NLOPT GN_DIRECT_L.
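The slide names NLopt's GN_DIRECT_L as the optimizer. A possible way to drive the ROM through NLopt's Python interface is sketched below; the ROM evaluation routine simulate_rom, the penalty treatment of the recovery constraint (GN_DIRECT_L itself does not handle nonlinear constraints), the evaluation budget and the initial point are all assumptions of this sketch.

```python
import nlopt

def optimize_rom(simulate_rom, q_bounds, tin_bounds=(0.5, 2.0),
                 rec_min=0.80, penalty=1e3, max_eval=500):
    """ROM-based optimization driven by NLopt's GN_DIRECT_L.

    simulate_rom(mu) -> (Pr, Rec) is a user-supplied ROM evaluation for
    mu = (Q, t_in); q_bounds and tin_bounds are the box bounds of P.
    """
    def objective(x, grad):          # grad is unused: DIRECT_L is derivative-free
        pr, rec = simulate_rom(x)
        # recovery constraint Rec >= Rec_min handled by a simple penalty
        return -pr + penalty * max(0.0, rec_min - rec)

    opt = nlopt.opt(nlopt.GN_DIRECT_L, 2)
    opt.set_lower_bounds([q_bounds[0], tin_bounds[0]])
    opt.set_upper_bounds([q_bounds[1], tin_bounds[1]])
    opt.set_min_objective(objective)
    opt.set_maxeval(max_eval)
    x0 = [0.5 * sum(q_bounds), 0.5 * sum(tin_bounds)]   # initial point required by the API
    mu_opt = opt.optimize(x0)
    return mu_opt, -opt.last_optimum_value()   # equals Pr when the constraint is satisfied
```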

28  Conclusions and Outlook
FOM → ROM via ASS-POD-Greedy and ASS-EI; FOM-Opt. ≈ ROM-Opt.
- Adaptive snapshot selection (ASS)
- Output-oriented error estimation
- Early-stop for the (ASS-)POD-Greedy algorithm
Outlook:
- Error estimation with the dual system
- RBM for Simulated Moving Bed (SMB) chromatography
- RBM for SMB with uncertainty quantification (UQ)

Thank you for your attention!
