Rank reduction of parameterized time-dependent PDEs


1 Rank reduction of parameterized time-dependent PDEs
A. Spantini (1), L. Mathelin (2), Y. Marzouk (1)
(1) AeroAstro Dpt., MIT, USA
(2) LIMSI-CNRS, France
UNCECOMP 2015
(MIT & LIMSI-CNRS) Rank reduction of parameterized PDEs, UNCECOMP 2015, 22 slides

2 A common situation
Design of a complex system: optimize some quantity(ies) of interest, but...
- multiple operating conditions (Reynolds number, loading, ...),
- parameterized geometry, initial/boundary conditions, source terms, etc.,
- uncertainty in some variables, which naturally leads to the introduction of an image probability space (Ξ, B_Ξ, µ_Ξ), ...
⇒ one routinely faces parametric PDEs with a potentially high-dimensional solution space ⇒ need for a good approximation method.

3 Problem statement
Let u(x, ξ) : X × Ξ → ℝ, u ∈ S = S_{X×Ξ} ≃ S_X ⊗ S_Ξ, be the solution at any given time of
F(u; ξ) = f(x, ξ),   F possibly nonlinear.
Goal: solve and represent the solution u of F(u) = f efficiently.

4 Problem statement (cont'd)
Approximating the solution in a finite-dimensional space, upon introduction of suitable bases for the product space S, yields
u ≈ u_h = Σ_{i=1}^{n} Σ_{j=1}^{p} u_h^{ij} φ_i(x) ψ_j(ξ) ∈ S_h,   u_h^{ij} ∈ ℝ,
and S_h is isomorphic to ℝ^n ⊗ ℝ^p ≃ ℝ^{n×p}:
(U)_{ij} = u_h^{ij},   U ∈ ℝ^{n×p}.

5 Problem statement (cont'd)
Complexity of the solution field: O(n p).

6 Complexity of representation
Alternatively, u can be written as a sum of rank-one tensors. In the finite-dimensional case, they can be evaluated from the SVD of U:
u_h ≈ u_KL = Σ_{r=1}^{min[n,p]} √λ_r  u_r^Ξ(ξ) u_r^X(x)   [Karhunen-Loève-like],
with u_r^Ξ(ξ) ∈ ℝ^p, u_r^X(x) ∈ ℝ^n.
R-term KL approximation complexity: O(R (n + p + 1)).
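The truncated-SVD (discrete KL) approximation and its storage count can be sketched numerically. A minimal illustration, not from the talk, using a synthetic solution matrix with decaying singular values; names such as `U_R` are ad hoc:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, R = 200, 100, 5

# Synthetic solution matrix with rapidly decaying singular values.
U = sum(rng.standard_normal((n, 1)) @ rng.standard_normal((1, p)) * 2.0**-r
        for r in range(20))

# R-term truncated SVD = discrete Karhunen-Loeve approximation.
W, s, Vt = np.linalg.svd(U, full_matrices=False)
U_R = W[:, :R] @ np.diag(s[:R]) @ Vt[:R, :]

# Storage: R spatial modes (n), R parametric modes (p), R weights.
full_storage = n * p
kl_storage = R * (n + p + 1)
rel_err = np.linalg.norm(U - U_R) / np.linalg.norm(U)
print(full_storage, kl_storage, rel_err)
```

With fast singular-value decay, a few terms already give a small relative error at a fraction of the O(n p) storage.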

7 Low rankness is good...
A low-rank description of the solution is desirable in terms of storage, O(R (n + p)), and computational effort. Several techniques exploit the low-rank structure of the solution in the solution process:
- Generalized Spectral Decomposition (GSD) [Chinesta, Nouy, ...],
- Dynamically Orthogonal (DO) [Sapsis & Lermusiaux, 2009],
- Dynamically Bi-Orthogonal Decomposition [Cheng, Hou & Zhang, 2013],
- Reduced Basis Methods [Patera, Maday, ...],
- Dynamical low-rank approximation [Koch & Lubich, 2007], ...

8 Stochastic advection equation
∂u/∂t + V(ξ) ∂u/∂x = 0,   V(ξ) = ξ,   ξ ∼ U(−1, 1).
[figure: solution snapshots]


14 Stochastic advection equation
∂u/∂t + V(ξ) ∂u/∂x = 0,   V(ξ) = ξ,   ξ ∼ U(−1, 1).
u(x, ξ; t_n) ≈ Σ_{r=1}^{R(n)} w_r(x) λ_r(ξ)
⇒ progressive increase of the rank ⇒ long-time integration issue.
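The rank growth can be reproduced from the exact solution u(x, ξ; t) = u₀(x − ξt). A small sketch assuming a Gaussian initial condition (the deck's actual initial condition is not recoverable from the transcription):

```python
import numpy as np

x = np.linspace(-5, 5, 400)            # spatial grid
xi = np.linspace(-1, 1, 100)           # parameter samples, xi ~ U(-1, 1)
u0 = lambda z: np.exp(-z**2)           # assumed Gaussian initial condition

def eps_rank(U, eps=1e-6):
    """Number of singular values above eps times the largest one."""
    s = np.linalg.svd(U, compute_uv=False)
    return int(np.sum(s > eps * s[0]))

# Exact solution of u_t + xi * u_x = 0 is u(x, xi; t) = u0(x - xi * t).
ranks = []
for t in [0.0, 1.0, 2.0, 4.0, 8.0]:
    U = u0(x[:, None] - xi[None, :] * t)   # n x p snapshot matrix at time t
    ranks.append(eps_rank(U))
print(ranks)
```

At t = 0 every parameter slice is the same pulse (rank 1); as the slices shift apart at different speeds, the ε-rank of the {x, ξ} snapshot grows.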


16 Low rankness is good... Yes, but...
All of the above techniques approximate the solution in a low-dimensional hyperplane (linear manifold). There might exist a low-dimensional nonlinear manifold on which the solution is well approximated.

17 A naive observation
Let U ∈ ℝ^{n×n} [matrix figure] with rank[U] = n (full rank). Complexity: O(n²).

18 A naive observation (cont'd)
Let U ∈ ℝ^{n×n} [matrix figure] with rank[U] = n (full rank). Complexity: O(n²).
But its structure is fairly simple: U can be described by a recovery strategy and associated data Û and τ:
Û(U) = [figure],   τ = [figure].
Complexity: O(n).
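The matrices on this slide are lost in the transcription, but the observation can be illustrated with a hypothetical analogue: a matrix whose columns are circular shifts of a single vector is full rank, so SVD-based compression fails, yet the vector plus the shift rule (O(n) numbers) reproduces it exactly:

```python
import numpy as np

n = 64
rng = np.random.default_rng(1)
u_hat = rng.standard_normal(n)     # the data "U-hat": one vector, O(n) storage
tau = np.arange(n)                 # the data "tau": shift applied to column j

# Recovery strategy: column j is u_hat circularly shifted by tau[j] entries.
U = np.stack([np.roll(u_hat, int(t)) for t in tau], axis=1)

rank = np.linalg.matrix_rank(U)    # full rank: truncated SVD cannot compress U
storage = u_hat.size + tau.size    # yet O(n) numbers reproduce U exactly
print(rank, storage)
```

This is the kind of structure a nonlinear "recovery strategy" can exploit where a linear low-rank expansion cannot.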

19 Core idea: setting up the stage
Let U be the solution field u ∈ S projected onto a finite-dimensional space:
Π_S : S → ℝ^{n×p},   u ↦ U = Π_S[u].
Let Φ : S → S be a bijective map between vector spaces and consider û = Φ(u):
U = Π_S[u],   Û = Π_S[Φ(u)],   Û = (Π_S ∘ Φ ∘ Π_S⁻¹)(U).
Writing Û ≈ Σ_{r=1}^{R_ε} w_r λ_r^T, the transformation Φ is determined such that the ε-rank R_ε is low, R_ε ≪ min[n, p].

20 Core idea (cont'd): proposed approach
The solution of the transformed problem (F ∘ Φ⁻¹)(û) = f is meant to exhibit good numerical properties. The problem is formulated as a constrained minimization: find a map Φ and a preconditioned solution field û such that
{û, Φ} ∈ arg min J(û),   J(û) = ε-rank[û(x, ξ; t)],
s.t. (F ∘ Φ⁻¹)(û) = f,   Φ ∈ Φ_adm.

21 Core idea (cont'd)
But... rank minimization is non-convex, non-continuous, NP-hard ⇒ need for a proxy.

22 Heuristics for rank minimization
Nuclear norm. See for instance Recht, Fazel & Parrilo (2010):
J(û) = ‖Û‖_* = ‖(σ_1, σ_2, ...)‖_1.
Recall û(x, ξ; t) ≈ Σ_r σ_r û_r^X(x) û_r^Ξ(ξ), with Û ∈ ℝ^{n×p} s.t. (Û)_{ij} = û_{ij}, and
∂σ_r / ∂(Û)_{ij} = (W^T)_{ri} (V)_{jr},   ∀ i, j, r.
Note: if Û is rank-deficient, the nuclear norm is non-differentiable ⇒ sub-differential of the nuclear norm ∂‖Û‖_* / ∂û, smoothing with Huber penalties.

23 Heuristics for rank minimization (cont'd)
Complementary energy minimization:
J(û) = E_c^n(û) / E(û),   E_c^n(û) = Σ_{r>n} σ_r²(û),   E(û) = Σ_r σ_r²(û).
Venturi (2011) uses n = 1:   J(û) = 1 − σ_1² / ‖û‖_S².

24 Heuristics for rank minimization (cont'd)
Boyd (2003):   J(û) = log(Π_r σ_r).
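The surrogates can be compared on small synthetic matrices. An illustrative sketch only; the function name and the test matrices are not from the talk, and the two matrices are rescaled to equal Frobenius norm so the comparison is fair:

```python
import numpy as np

def surrogates(U, n=1):
    s = np.linalg.svd(U, compute_uv=False)
    nuclear = s.sum()                          # ||U||_* = l1 norm of singular values
    energy = (s[n:]**2).sum() / (s**2).sum()   # complementary energy ratio E_c^n / E
    logprod = np.log(np.prod(s[s > 1e-12]))    # log-product heuristic (nonzero part)
    return nuclear, energy, logprod

rng = np.random.default_rng(2)
low = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 40))   # rank 2
high = rng.standard_normal((50, 40))                                # full rank
low *= np.linalg.norm(high) / np.linalg.norm(low)                   # equal energy

nuc_low, en_low, lp_low = surrogates(low)
nuc_high, en_high, lp_high = surrogates(high)
print(nuc_low, nuc_high, en_low, en_high)
```

At fixed energy, a concentrated spectrum (low rank) gives a smaller nuclear norm and a smaller complementary energy ratio, which is why these quantities act as rank proxies.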

25 Heuristics for rank minimization (cont'd)
These transformation definitions rely on decomposition techniques for the solution field (e.g., SVD) ⇒ computationally expensive.

26 Heuristics for rank minimization (cont'd)
Alternative definition. A more affordable heuristic: a reference subspace target. Let Ũ_ref = span(u(ξ_1), ..., u(ξ_m)) be a subspace spanned by solutions of the original problem for different values of the parameters ξ:
J(û) = ½ ‖û − Π_{Ũ_ref}[û]‖²_{S_X ⊗ S_Ξ}.
The first variation of J(û) is given by
D_û J(û) = Π_{Ũ_ref^⊥}[û],
where Π_{Ũ_ref^⊥}[û] is the projection onto the orthocomplement of the reference subspace Ũ_ref.
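Once an orthonormal basis of Ũ_ref is available, both the objective and its first variation are cheap to evaluate. A sketch with hypothetical snapshot data:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 100, 4

# Hypothetical reference snapshots u(xi_1), ..., u(xi_m) spanning U_ref.
snapshots = rng.standard_normal((n, m))
Q, _ = np.linalg.qr(snapshots)         # orthonormal basis of U_ref

def J_and_grad(u_hat):
    proj = Q @ (Q.T @ u_hat)           # projection onto U_ref
    resid = u_hat - proj               # projection onto the orthocomplement
    return 0.5 * np.dot(resid, resid), resid   # J and its first variation

u_hat = rng.standard_normal(n)
J, g = J_and_grad(u_hat)

# A field already in U_ref has (numerically) zero objective and zero gradient.
J0, g0 = J_and_grad(snapshots[:, 0])
print(J, J0)
```

No SVD of the solution field is needed; the cost is one dense matrix-vector product per evaluation, which is the point of this heuristic.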

27 A solution method for time-dependent parameterized PDEs
At any time, the solution is represented in a separated format:
u(x, ξ, t) = Σ_{r=1}^{R(t)} u_r^X(x, t) u_r^Ξ(ξ, t)
⇒ the rank depends on time.

28 A solution method for time-dependent parameterized PDEs (cont'd)
Let us consider an artificial time:
û(x, ξ, t) := Φ(u, x, ξ, t) = u(x, ξ, τ(x, ξ, t)).

29 A solution method for time-dependent parameterized PDEs (cont'd)
Space-independent transformation τ(ξ, t), receding-horizon optimal control approach:
J(û) = ½ ∫_t^{t+T} ‖û − Π_{Ũ_ref}[û]‖²_{S_X ⊗ S_Ξ}.
A regularization term (α / 2T) ‖1 − τ̇‖²_{S_ξ} promotes invertibility of the map (τ̇ > 0, µ_Ξ-a.s.) and reduces drift from physical time (ṫ ≈ 1) as much as possible.
Myopic formulation (vanishing horizon T → 0⁺) ⇒ the (degenerate) optimal transformation τ is the solution of a parameterized ODE. Finally, for a problem of the form F(u) = (∂/∂t + N)(u) = f:
τ̇ = α β (1 − τ̇) + β ⟨û − Π_{Ũ_ref}[û], f − F⟩_{S_x}.

30 A solution method for time-dependent parameterized PDEs (cont'd)
Coupled problem: the reformulated problem and the parameterized ODE governing the transformation are solved together. With F(u) = (∂/∂t + N)(u) = f:
∂û/∂t + τ̇ N(û) = τ̇ f,
τ̇ = α β (1 − τ̇) + β ⟨û − Π_{Ũ_ref}[û], f − F⟩_{S_x}.
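For the stochastic advection example, the effect of such a parameter-dependent time transform can be checked directly: rescaling time per parameter value brings all solution slices in phase and collapses the rank. A sketch assuming positive advection speeds ξ ∈ (0.5, 1.5) so that the hypothetical map τ(ξ, t) = t/ξ is well defined (the talk's U(−1, 1) range would need a different map):

```python
import numpy as np

x = np.linspace(-5, 15, 400)
xi = np.linspace(0.5, 1.5, 80)       # positive speeds: tau = t / xi is invertible
u0 = lambda z: np.exp(-z**2)

def eps_rank(U, eps=1e-6):
    s = np.linalg.svd(U, compute_uv=False)
    return int(np.sum(s > eps * s[0]))

t = 6.0
# Physical time: u(x, xi; t) = u0(x - xi * t), pulses drift apart.
U = u0(x[:, None] - xi[None, :] * t)
# Artificial time tau(xi, t) = t / xi: every slice sees the pulse at x = t.
U_hat = u0(x[:, None] - xi[None, :] * (t / xi)[None, :])

rank_phys, rank_hat = eps_rank(U), eps_rank(U_hat)
print(rank_phys, rank_hat)
```

The transformed field û(x, ξ, t) = u(x, ξ, τ(ξ, t)) is rank 1, while the untransformed snapshot has a much larger ε-rank.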

31 Numerical experiments

32 Flow past a cylinder, τ(ξ, t)
Flow around a circular cylinder in a channel. Two-dimensional flow, laminar regime, parameterized Reynolds number: V(ξ) = ξ, ξ ∼ U(−1, 1).
ξ-dependent Reynolds number ⇒ ξ-dependent vortex shedding frequency, vortex wavelength, boundary layer thickness, etc. ⇒ growing rank in the {x, ξ}-space over time.


34 x-velocity fields at t = 15: non-preconditioned
[figure: x-velocity fields for several ξ]
The fields are out of phase.

35 x-velocity fields at t = 15: preconditioned, Ũ_ref = span(u(ξ))
[figure: x-velocity fields for several ξ]
The fields are in phase.

36 Flow past a cylinder: time transform
Substantial reduction of the numerical rank (20 → 3).

37 A progressive decomposition
Let u(x, ξ, t) ∈ S:   u ≈ Σ_{r=1}^{R} u_r^X u_r^Ξ,   with factors in S_1 ⊗ S_2.
In practice, R is usually unknown a priori and u is derived progressively.
Remark: the following identifications are all legitimate for S ≃ S(X × T × Ξ):
S_1(X × T) ⊗ S_2(Ξ),   S_1(X) ⊗ S_2(T × Ξ),   S_1(X × Ξ) ⊗ S_2(T).

38 A progressive decomposition: proposed approach
Let û_1 s.t.
u(x, ξ, t) ≈_rk1 û_1(x, ξ, τ_1⁻¹(x, ξ, t))
be the rank-1 approximation of the solution of F_1(û_1) = f.

39 A progressive decomposition (cont'd)
Introducing û_2 s.t.
u(x, ξ, t) ≈_rk2 û_1(x, ξ, τ_1⁻¹(x, ξ, t)) + û_2(x, ξ, τ_2⁻¹(x, ξ, t))
and deflating the problem yields the approximation of the solution of F_2(û_2; û_1) = f.

40 A progressive decomposition (cont'd)
The artificial time τ_r(x, ξ, t) is derived s.t.
τ_r ∈ arg min J_r(û_r, τ_r; {û_i, τ_i}_{i=1}^{r−1}),   J_r = ‖F_r − f‖_{S_τ},
s.t. F_r − f ∈ span(û_r),   û_r(x, ξ, τ_r⁻¹) = u_r^X(x, τ_r⁻¹) u_r^Ξ(ξ).
Finally,
u(x, ξ, t) ≈ Σ_r û_r(x, ξ, τ_r⁻¹),   τ_r⁻¹ = τ_r⁻¹(x, t, ξ).
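Setting the time transforms aside, the progressive (deflation) structure alone can be sketched as a greedy sequence of rank-1 corrections. Here the leading SVD mode of the residual stands in for the rank-1 solve of the deflated problem, which is a simplifying assumption, not the talk's algorithm:

```python
import numpy as np

def greedy_rank1_terms(U, R):
    """Greedy separated approximation: at step r, fit a rank-1 term to the
    deflated residual (leading SVD mode used as the rank-1 solver here)."""
    resid = U.copy()
    terms = []
    for _ in range(R):
        W, s, Vt = np.linalg.svd(resid, full_matrices=False)
        uX, uXi = s[0] * W[:, 0], Vt[0, :]   # spatial and parametric factors
        terms.append((uX, uXi))
        resid = resid - np.outer(uX, uXi)    # deflate before the next term
    return terms, resid

rng = np.random.default_rng(4)
U = rng.standard_normal((60, 3)) @ rng.standard_normal((3, 50))  # rank-3 field
terms, resid = greedy_rank1_terms(U, R=3)
rel_err = np.linalg.norm(resid) / np.linalg.norm(U)
print(len(terms), rel_err)
```

Each term is computed against the residual of the previous ones, mirroring the deflated problems F_2(û_2; û_1) = f, and so on.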

41 A progressive decomposition
[figure]

42 Concluding remarks
- Strategy for parameterized PDEs which may achieve significant savings both in terms of CPU and memory.
- The solution to the original problem is obtained via a recovery strategy.
- Reformulate the problem so that its solution exhibits a particular structure and efficient numerical tools can be exploited.
- By-product: alleviates the long-time integration issue in parameterized PDEs.
- The proposed approach can be linked to the action of a Lie group on the solution manifold.

43 Concluding remarks: Yes, but...
- Mathematical and physical issues associated with a space-dependent time: impact on properties of the problem (say, ellipticity), stability of the numerical scheme, etc.
- Theoretical analysis of how much complexity can be reduced and split between û and Φ.
- Best compromise and most suitable functional forms for Φ.


More information

From Compressed Sensing to Matrix Completion and Beyond. Benjamin Recht Department of Computer Sciences University of Wisconsin-Madison

From Compressed Sensing to Matrix Completion and Beyond. Benjamin Recht Department of Computer Sciences University of Wisconsin-Madison From Compressed Sensing to Matrix Completion and Beyond Benjamin Recht Department of Computer Sciences University of Wisconsin-Madison Netflix Prize One million big ones! Given 100 million ratings on a

More information

CME 345: MODEL REDUCTION - Projection-Based Model Order Reduction

CME 345: MODEL REDUCTION - Projection-Based Model Order Reduction CME 345: MODEL REDUCTION - Projection-Based Model Order Reduction Projection-Based Model Order Reduction Charbel Farhat and David Amsallem Stanford University cfarhat@stanford.edu 1 / 38 Outline 1 Solution

More information

Empirical Gramians and Balanced Truncation for Model Reduction of Nonlinear Systems

Empirical Gramians and Balanced Truncation for Model Reduction of Nonlinear Systems Empirical Gramians and Balanced Truncation for Model Reduction of Nonlinear Systems Antoni Ras Departament de Matemàtica Aplicada 4 Universitat Politècnica de Catalunya Lecture goals To review the basic

More information

Dynamical low rank approximation in hierarchical tensor format

Dynamical low rank approximation in hierarchical tensor format Dynamical low rank approximation in hierarchical tensor format R. Schneider (TUB Matheon) John von Neumann Lecture TU Munich, 2012 Motivation Equations describing complex systems with multi-variate solution

More information

Proper Generalized Decomposition for Linear and Non-Linear Stochastic Models

Proper Generalized Decomposition for Linear and Non-Linear Stochastic Models Proper Generalized Decomposition for Linear and Non-Linear Stochastic Models Olivier Le Maître 1 Lorenzo Tamellini 2 and Anthony Nouy 3 1 LIMSI-CNRS, Orsay, France 2 MOX, Politecnico Milano, Italy 3 GeM,

More information

Two well known examples. Applications of structured low-rank approximation. Approximate realisation = Model reduction. System realisation

Two well known examples. Applications of structured low-rank approximation. Approximate realisation = Model reduction. System realisation Two well known examples Applications of structured low-rank approximation Ivan Markovsky System realisation Discrete deconvolution School of Electronics and Computer Science University of Southampton The

More information

Sampling and Low-Rank Tensor Approximations

Sampling and Low-Rank Tensor Approximations Sampling and Low-Rank Tensor Approximations Hermann G. Matthies Alexander Litvinenko, Tarek A. El-Moshely +, Brunswick, Germany + MIT, Cambridge, MA, USA wire@tu-bs.de http://www.wire.tu-bs.de $Id: 2_Sydney-MCQMC.tex,v.3

More information

Solving the Stochastic Steady-State Diffusion Problem Using Multigrid

Solving the Stochastic Steady-State Diffusion Problem Using Multigrid Solving the Stochastic Steady-State Diffusion Problem Using Multigrid Tengfei Su Applied Mathematics and Scientific Computing Advisor: Howard Elman Department of Computer Science Sept. 29, 2015 Tengfei

More information

Matrix stabilization using differential equations.

Matrix stabilization using differential equations. Matrix stabilization using differential equations. Nicola Guglielmi Universitá dell Aquila and Gran Sasso Science Institute, Italia NUMOC-2017 Roma, 19 23 June, 2017 Inspired by a joint work with Christian

More information

arxiv: v2 [math.na] 9 Jul 2014

arxiv: v2 [math.na] 9 Jul 2014 A least-squares method for sparse low rank approximation of multivariate functions arxiv:1305.0030v2 [math.na] 9 Jul 2014 M. Chevreuil R. Lebrun A. Nouy P. Rai Abstract In this paper, we propose a low-rank

More information

Collocation based high dimensional model representation for stochastic partial differential equations

Collocation based high dimensional model representation for stochastic partial differential equations Collocation based high dimensional model representation for stochastic partial differential equations S Adhikari 1 1 Swansea University, UK ECCM 2010: IV European Conference on Computational Mechanics,

More information

Synthetic Geometry. 1.4 Quotient Geometries

Synthetic Geometry. 1.4 Quotient Geometries Synthetic Geometry 1.4 Quotient Geometries Quotient Geometries Def: Let Q be a point of P. The rank 2 geometry P/Q whose "points" are the lines of P through Q and whose "lines" are the hyperplanes of of

More information

Inner Product, Length, and Orthogonality

Inner Product, Length, and Orthogonality Inner Product, Length, and Orthogonality Linear Algebra MATH 2076 Linear Algebra,, Chapter 6, Section 1 1 / 13 Algebraic Definition for Dot Product u 1 v 1 u 2 Let u =., v = v 2. be vectors in Rn. The

More information

Optimisation Combinatoire et Convexe.

Optimisation Combinatoire et Convexe. Optimisation Combinatoire et Convexe. Low complexity models, l 1 penalties. A. d Aspremont. M1 ENS. 1/36 Today Sparsity, low complexity models. l 1 -recovery results: three approaches. Extensions: matrix

More information

Applications of Randomized Methods for Decomposing and Simulating from Large Covariance Matrices

Applications of Randomized Methods for Decomposing and Simulating from Large Covariance Matrices Applications of Randomized Methods for Decomposing and Simulating from Large Covariance Matrices Vahid Dehdari and Clayton V. Deutsch Geostatistical modeling involves many variables and many locations.

More information

Learning gradients: prescriptive models

Learning gradients: prescriptive models Department of Statistical Science Institute for Genome Sciences & Policy Department of Computer Science Duke University May 11, 2007 Relevant papers Learning Coordinate Covariances via Gradients. Sayan

More information

EECS 275 Matrix Computation

EECS 275 Matrix Computation EECS 275 Matrix Computation Ming-Hsuan Yang Electrical Engineering and Computer Science University of California at Merced Merced, CA 95344 http://faculty.ucmerced.edu/mhyang Lecture 22 1 / 21 Overview

More information

Gaussian Filtering Strategies for Nonlinear Systems

Gaussian Filtering Strategies for Nonlinear Systems Gaussian Filtering Strategies for Nonlinear Systems Canonical Nonlinear Filtering Problem ~u m+1 = ~ f (~u m )+~ m+1 ~v m+1 = ~g(~u m+1 )+~ o m+1 I ~ f and ~g are nonlinear & deterministic I Noise/Errors

More information

Real Time Pattern Detection

Real Time Pattern Detection Real Time Pattern Detection Yacov Hel-Or The Interdisciplinary Center joint work with Hagit Hel-Or Haifa University 1 Pattern Detection A given pattern is sought in an image. The pattern may appear at

More information

COMMON COMPLEMENTS OF TWO SUBSPACES OF A HILBERT SPACE

COMMON COMPLEMENTS OF TWO SUBSPACES OF A HILBERT SPACE COMMON COMPLEMENTS OF TWO SUBSPACES OF A HILBERT SPACE MICHAEL LAUZON AND SERGEI TREIL Abstract. In this paper we find a necessary and sufficient condition for two closed subspaces, X and Y, of a Hilbert

More information

The HJB-POD approach for infinite dimensional control problems

The HJB-POD approach for infinite dimensional control problems The HJB-POD approach for infinite dimensional control problems M. Falcone works in collaboration with A. Alla, D. Kalise and S. Volkwein Università di Roma La Sapienza OCERTO Workshop Cortona, June 22,

More information

BRUNO L. M. FERREIRA AND HENRIQUE GUZZO JR.

BRUNO L. M. FERREIRA AND HENRIQUE GUZZO JR. REVISTA DE LA UNIÓN MATEMÁTICA ARGENTINA Vol. 60, No. 1, 2019, Pages 9 20 Published online: February 11, 2019 https://doi.org/10.33044/revuma.v60n1a02 LIE n-multiplicative MAPPINGS ON TRIANGULAR n-matrix

More information

Kähler manifolds and variations of Hodge structures

Kähler manifolds and variations of Hodge structures Kähler manifolds and variations of Hodge structures October 21, 2013 1 Some amazing facts about Kähler manifolds The best source for this is Claire Voisin s wonderful book Hodge Theory and Complex Algebraic

More information

Kernel-based Approximation. Methods using MATLAB. Gregory Fasshauer. Interdisciplinary Mathematical Sciences. Michael McCourt.

Kernel-based Approximation. Methods using MATLAB. Gregory Fasshauer. Interdisciplinary Mathematical Sciences. Michael McCourt. SINGAPORE SHANGHAI Vol TAIPEI - Interdisciplinary Mathematical Sciences 19 Kernel-based Approximation Methods using MATLAB Gregory Fasshauer Illinois Institute of Technology, USA Michael McCourt University

More information

Paradigms of Probabilistic Modelling

Paradigms of Probabilistic Modelling Paradigms of Probabilistic Modelling Hermann G. Matthies Brunswick, Germany wire@tu-bs.de http://www.wire.tu-bs.de abstract RV-measure.tex,v 4.5 2017/07/06 01:56:46 hgm Exp Overview 2 1. Motivation challenges

More information

Construction of a New Domain Decomposition Method for the Stokes Equations

Construction of a New Domain Decomposition Method for the Stokes Equations Construction of a New Domain Decomposition Method for the Stokes Equations Frédéric Nataf 1 and Gerd Rapin 2 1 CMAP, CNRS; UMR7641, Ecole Polytechnique, 91128 Palaiseau Cedex, France 2 Math. Dep., NAM,

More information

Reduced-Order Greedy Controllability of Finite Dimensional Linear Systems. Giulia Fabrini Laura Iapichino Stefan Volkwein

Reduced-Order Greedy Controllability of Finite Dimensional Linear Systems. Giulia Fabrini Laura Iapichino Stefan Volkwein Universität Konstanz Reduced-Order Greedy Controllability of Finite Dimensional Linear Systems Giulia Fabrini Laura Iapichino Stefan Volkwein Konstanzer Schriften in Mathematik Nr. 364, Oktober 2017 ISSN

More information

Problems in Linear Algebra and Representation Theory

Problems in Linear Algebra and Representation Theory Problems in Linear Algebra and Representation Theory (Most of these were provided by Victor Ginzburg) The problems appearing below have varying level of difficulty. They are not listed in any specific

More information

Kernels to detect abrupt changes in time series

Kernels to detect abrupt changes in time series 1 UMR 8524 CNRS - Université Lille 1 2 Modal INRIA team-project 3 SSB group Paris joint work with S. Arlot, Z. Harchaoui, G. Rigaill, and G. Marot Computational and statistical trade-offs in learning IHES

More information

Sobol-Hoeffding Decomposition with Application to Global Sensitivity Analysis

Sobol-Hoeffding Decomposition with Application to Global Sensitivity Analysis Sobol-Hoeffding decomposition Application to Global SA Computation of the SI Sobol-Hoeffding Decomposition with Application to Global Sensitivity Analysis Olivier Le Maître with Colleague & Friend Omar

More information

LECTURE NOTE #11 PROF. ALAN YUILLE

LECTURE NOTE #11 PROF. ALAN YUILLE LECTURE NOTE #11 PROF. ALAN YUILLE 1. NonLinear Dimension Reduction Spectral Methods. The basic idea is to assume that the data lies on a manifold/surface in D-dimensional space, see figure (1) Perform

More information

Conjugate Gradient Method

Conjugate Gradient Method Conjugate Gradient Method direct and indirect methods positive definite linear systems Krylov sequence spectral analysis of Krylov sequence preconditioning Prof. S. Boyd, EE364b, Stanford University Three

More information

LECTURE 2. (TEXED): IN CLASS: PROBABLY LECTURE 3. MANIFOLDS 1. FALL TANGENT VECTORS.

LECTURE 2. (TEXED): IN CLASS: PROBABLY LECTURE 3. MANIFOLDS 1. FALL TANGENT VECTORS. LECTURE 2. (TEXED): IN CLASS: PROBABLY LECTURE 3. MANIFOLDS 1. FALL 2006. TANGENT VECTORS. Overview: Tangent vectors, spaces and bundles. First: to an embedded manifold of Euclidean space. Then to one

More information

Empirical Interpolation Methods

Empirical Interpolation Methods Empirical Interpolation Methods Yvon Maday Laboratoire Jacques-Louis Lions - UPMC, Paris, France IUF and Division of Applied Maths Brown University, Providence USA Doctoral Workshop on Model Reduction

More information

Optimization Algorithms for Compressed Sensing

Optimization Algorithms for Compressed Sensing Optimization Algorithms for Compressed Sensing Stephen Wright University of Wisconsin-Madison SIAM Gator Student Conference, Gainesville, March 2009 Stephen Wright (UW-Madison) Optimization and Compressed

More information

Nonlinear seismic imaging via reduced order model backprojection

Nonlinear seismic imaging via reduced order model backprojection Nonlinear seismic imaging via reduced order model backprojection Alexander V. Mamonov, Vladimir Druskin 2 and Mikhail Zaslavsky 2 University of Houston, 2 Schlumberger-Doll Research Center Mamonov, Druskin,

More information

Lecture 1: Center for Uncertainty Quantification. Alexander Litvinenko. Computation of Karhunen-Loeve Expansion:

Lecture 1: Center for Uncertainty Quantification. Alexander Litvinenko. Computation of Karhunen-Loeve Expansion: tifica Lecture 1: Computation of Karhunen-Loeve Expansion: Alexander Litvinenko http://sri-uq.kaust.edu.sa/ Stochastic PDEs We consider div(κ(x, ω) u) = f (x, ω) in G, u = 0 on G, with stochastic coefficients

More information

Overparametrization for Landscape Design in Non-convex Optimization

Overparametrization for Landscape Design in Non-convex Optimization Overparametrization for Landscape Design in Non-convex Optimization Jason D. Lee University of Southern California September 19, 2018 The State of Non-Convex Optimization Practical observation: Empirically,

More information

Parallel Domain Decomposition Strategies for Stochastic Elliptic Equations Part A: Local KL Representations

Parallel Domain Decomposition Strategies for Stochastic Elliptic Equations Part A: Local KL Representations Parallel Domain Decomposition Strategies for Stochastic Elliptic Equations Part A: Local KL Representations Andres A. Contreras Paul Mycek Olivier P. Le Maître Francesco Rizzi Bert Debusschere Omar M.

More information

Karhunen-Loève decomposition of Gaussian measures on Banach spaces

Karhunen-Loève decomposition of Gaussian measures on Banach spaces Karhunen-Loève decomposition of Gaussian measures on Banach spaces Jean-Charles Croix GT APSSE - April 2017, the 13th joint work with Xavier Bay. 1 / 29 Sommaire 1 Preliminaries on Gaussian processes 2

More information

Recovering any low-rank matrix, provably

Recovering any low-rank matrix, provably Recovering any low-rank matrix, provably Rachel Ward University of Texas at Austin October, 2014 Joint work with Yudong Chen (U.C. Berkeley), Srinadh Bhojanapalli and Sujay Sanghavi (U.T. Austin) Matrix

More information

LEAST-SQUARES FINITE ELEMENT MODELS

LEAST-SQUARES FINITE ELEMENT MODELS LEAST-SQUARES FINITE ELEMENT MODELS General idea of the least-squares formulation applied to an abstract boundary-value problem Works of our group Application to Poisson s equation Application to flows

More information

Dinesh Kumar, Mehrdad Raisee and Chris Lacor

Dinesh Kumar, Mehrdad Raisee and Chris Lacor Dinesh Kumar, Mehrdad Raisee and Chris Lacor Fluid Mechanics and Thermodynamics Research Group Vrije Universiteit Brussel, BELGIUM dkumar@vub.ac.be; m_raisee@yahoo.com; chris.lacor@vub.ac.be October, 2014

More information

Focus was on solving matrix inversion problems Now we look at other properties of matrices Useful when A represents a transformations.

Focus was on solving matrix inversion problems Now we look at other properties of matrices Useful when A represents a transformations. Previously Focus was on solving matrix inversion problems Now we look at other properties of matrices Useful when A represents a transformations y = Ax Or A simply represents data Notion of eigenvectors,

More information

Stochastic Modeling of Flows Behind a Square Cylinder with Uncertain Reynolds Numbers. Jacob Kasozi Wamala

Stochastic Modeling of Flows Behind a Square Cylinder with Uncertain Reynolds Numbers. Jacob Kasozi Wamala Multidisciplinary Simulation, Estimation, and Assimilation Systems Reports in Ocean Science and Engineering MSEAS-12 Stochastic Modeling of Flows Behind a Square Cylinder with Uncertain Reynolds Numbers

More information

Review of Some Concepts from Linear Algebra: Part 2

Review of Some Concepts from Linear Algebra: Part 2 Review of Some Concepts from Linear Algebra: Part 2 Department of Mathematics Boise State University January 16, 2019 Math 566 Linear Algebra Review: Part 2 January 16, 2019 1 / 22 Vector spaces A set

More information

Optimization on the Grassmann manifold: a case study

Optimization on the Grassmann manifold: a case study Optimization on the Grassmann manifold: a case study Konstantin Usevich and Ivan Markovsky Department ELEC, Vrije Universiteit Brussel 28 March 2013 32nd Benelux Meeting on Systems and Control, Houffalize,

More information

ANALYSIS OF NONLINEAR PARTIAL LEAST SQUARES ALGORITHMS

ANALYSIS OF NONLINEAR PARTIAL LEAST SQUARES ALGORITHMS ANALYSIS OF NONLINEAR PARIAL LEAS SQUARES ALGORIHMS S. Kumar U. Kruger,1 E. B. Martin, and A. J. Morris Centre of Process Analytics and Process echnology, University of Newcastle, NE1 7RU, U.K. Intelligent

More information

Acyclic Semidefinite Approximations of Quadratically Constrained Quadratic Programs

Acyclic Semidefinite Approximations of Quadratically Constrained Quadratic Programs Acyclic Semidefinite Approximations of Quadratically Constrained Quadratic Programs Raphael Louca & Eilyan Bitar School of Electrical and Computer Engineering American Control Conference (ACC) Chicago,

More information