Reduced Modeling in Data Assimilation

1 Reduced Modeling in Data Assimilation. Peter Binev, Department of Mathematics and Interdisciplinary Mathematics Institute, University of South Carolina. Challenges in High Dimensional Analysis and Computation, San Servolo, May 5, 2016.

2 Collaborators: Albert Cohen (University Paris 6), Wolfgang Dahmen (RWTH Aachen), Ronald DeVore (Texas A&M University), Guergana Petrova (Texas A&M University), Przemyslaw Wojtaszczyk (University of Warsaw). [arXiv: ]

3 A Few Reasons for Reduced Modeling
- computation of the full model could be very expensive in terms of time and computational resources
- if the model is just an approximation, there is no reason to compute it with a precision higher than the one provided by the model
- in parametric modeling one often wants to have a universal basis that provides a good approximation for the entire set of parameters
Using a reduced model can be viewed as approximating a subset M of an infinite-dimensional space U by an n-dimensional space V_n.
- notice that while M is usually a compact set, the set {u : dist(u, V_n) ≤ ε_n} ⊃ M is not even bounded

4 Example: Parameter-Dependent PDEs
- L_ξ u = f with a differential operator L_ξ depending on a parameter ξ ∈ Ξ from a (compact) (infinite-dimensional) set Ξ
- corresponding energy product ⟨·,·⟩_ξ and variational formulation ⟨u, v⟩_ξ = ⟨f, v⟩ for all v ∈ V
- Galerkin discretization requires a very large subspace of V
- investigate the set of solutions M := {u_ξ : ξ ∈ Ξ}
- reduced basis method: find a subspace V_n such that dist(M, V_n) ≤ ε_n
- greedy approach using residual error estimates to build V_n (a sketch follows below): with V_0 := {0}, find ξ_j := argmax_{ξ∈Ξ} Error(ũ(ξ, V_{j−1}) − u_ξ), define v_j := ũ(ξ_j, V) and V_j := span{v_k}_{k=1}^j
- weak greedy property: dist(v_j, V_{j−1}) ≥ γ dist(M, V_{j−1}), 0 < γ ≤ 1
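The greedy selection loop can be sketched in a few lines. This is a minimal illustration, assuming a finite training set standing in for Ξ and two user-supplied callables, `solve_hf` (high-fidelity Galerkin solve) and `error_estimate` (a residual-type error indicator); both names are placeholders, not taken from the talk.

```python
import numpy as np

def weak_greedy(params, solve_hf, error_estimate, n_max, tol):
    """Greedy construction of a reduced basis for V_n.

    params         : finite training set of parameter values (stand-in for Xi)
    solve_hf       : xi -> high-fidelity solution vector u_xi
    error_estimate : (xi, basis) -> surrogate for the reduced-solution error at xi
    """
    basis = []                          # orthonormal basis of the current V_j
    for _ in range(n_max):
        # pick the parameter where the current reduced space is worst
        xi_j = max(params, key=lambda xi: error_estimate(xi, basis))
        if error_estimate(xi_j, basis) <= tol:
            break
        v = solve_hf(xi_j)              # snapshot v_j on the fine space
        # Gram-Schmidt step: keep only the component orthogonal to V_{j-1}
        for b in basis:
            v = v - np.dot(b, v) * b
        nrm = np.linalg.norm(v)
        if nrm > 0:
            basis.append(v / nrm)
    return basis
```

The weak greedy property only requires the selected snapshot to capture a fixed fraction γ of the worst-case distance, which is why a cheap error indicator can replace the exact error in the argmax.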

5 Data Assimilation
How to best combine available reduced models with knowledge from external observations/measurements?
- the data can be used not only to validate a model but also to adjust the solution
- want to get closer to elements described by the model: better estimation of the physical state described by the model
- estimate the model parameters that correspond to the state or nearby states of the observed phenomenon (inverse modeling)
- finite (often relatively small) number of measurements
Measurements can be described by a finite dimensional space W; each point w ∈ W corresponds to a specific set of values. If W⊥ is the orthogonal complement of W, then all points with the same measurements are described by w + W⊥.

6 Hilbert Space Setup
- Hilbert space U with norm ‖·‖ = ‖·‖_U := ⟨·,·⟩^{1/2}
- object of interest (the manifold): a (compact) subset M ⊂ U
- goal: find (an approximation ũ ∈ U of) u ∈ M that fits given data {d_j}_{j=1}^m of measurements l_j(u) = d_j
- linear functionals l_j ∈ U' with representation l_j(u) = ⟨u, ω_j⟩
- measurement space W := span{ω_1, ω_2, ..., ω_m}
- footprint of u on W: w := P_W u, determined by the data (d_j)_{j=1}^m (a sketch follows below)
- orthogonal complement W⊥ of W: u ∈ (w + W⊥)
- reduced modeling: use V_n ⊂ U instead of M, with dist(M, V_n) ≤ ε_n, where dist(M, V_n) := max_{u∈M} ‖u − P_{V_n} u‖_U
- approximation: find ũ that is close to both V_n and w + W⊥
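As a small illustration of the footprint w = P_W u, the sketch below assembles it from the measured data, assuming the Riesz representers ω_j are available as rows of a matrix in a discretized version of U; the names `omega` and `d` are illustrative.

```python
import numpy as np

def footprint(omega, d):
    """Return w = P_W u as a coordinate vector.

    omega : (m, N) array whose rows are the representers omega_j
    d     : (m,)  measured data, d[j] = <u, omega_j>

    Solves the Gram system G c = d with G_ij = <omega_i, omega_j>, so that
    w = sum_j c_j omega_j satisfies <w, omega_i> = d_i for all i.
    """
    G = omega @ omega.T              # Gram matrix of the representers
    c = np.linalg.solve(G, d)        # coefficients of P_W u in the omega basis
    return omega.T @ c               # w = P_W u
```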

7–8 A picture instead of words (figure-only slides)

9 Least Square Recovery Formulation [MPPY]
The problem for a single space V_n was stated and considered in [Maday, Patera, Penn, and Yano, A parametrized-background data-weak approach to variational data assimilation: formulation, analysis, and application to acoustics, Int. J. Numer. Meth. Engng. 102 (2015)].
v*(w) := argmin_{v∈V_n} ‖w − P_W v‖
For an orthonormal basis {ω_j}_{j=1}^m of W, define
u*(w) := v*(w) + Σ_{j=1}^m (⟨w, ω_j⟩ − ⟨v*, ω_j⟩) ω_j.
Then, for β(V, W) := inf_{v∈V} sup_{w∈W} ⟨v, w⟩ / (‖v‖ ‖w‖), one has
‖u − u*(w)‖ ≤ (1 + 1/β(V_n, W)) inf_{q∈V_n} inf_{z∈W∩V_n⊥} ‖u − q − z‖.

10 The One-Space Algorithm

11–12 Least Square Recovery Formulation
v*(w) := argmin_{v∈V_n} ‖w − P_W v‖
For an orthonormal basis {ω_j}_{j=1}^m of W, define
u* = v*(w) + Σ_{j=1}^m (⟨w, ω_j⟩ − ⟨v*, ω_j⟩) ω_j = w + P_{W⊥} v*.
We consider a mapping F : W → W⊥ (not necessarily linear), called a lifting, that for a given w finds an element F(w) of W⊥ and defines u*(w) = w + F(w) that is optimally close to u. We prove that the algorithm with F(w) := P_{W⊥} v*, for v* as defined above, is optimal even among algorithms with nonlinear liftings.
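The one-space recovery can be written compactly in coordinates. The sketch below assumes orthonormal column bases `W` (measurement space) and `V` (reduced space V_n) in a discretization of U; these names are placeholders, not from the talk.

```python
import numpy as np

def one_space_recovery(W, V, w):
    """One-space recovery u* = w + P_{W_perp} v*, with
    v* = argmin_{v in V_n} ||w - P_W v||.

    W : (N, m) matrix, orthonormal columns spanning the measurement space
    V : (N, n) matrix, orthonormal columns spanning V_n
    w : (N,)  footprint P_W u (assumed to lie in range(W))
    """
    G = W.T @ V                                  # cross-Gramian <omega_i, phi_j>
    # minimizing ||w - P_W (V c)|| is the least squares problem G c = W^T w
    c, *_ = np.linalg.lstsq(G, W.T @ w, rcond=None)
    v_star = V @ c                               # v*(w)
    u_star = w + (v_star - W @ (W.T @ v_star))   # w + P_{W_perp} v*
    return u_star, v_star
```

The stability of the least squares step is governed by the cross-Gramian G = WᵀV: with orthonormal bases its smallest singular value equals β(V_n, W), so a small β signals an ill-conditioned recovery.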

13 The Idea of Lifting
- consider the space V_n and its orthogonal projections onto W and W⊥
- each v ∈ V_n is split into two parts: v = w + η, where w = w(v) ∈ W and η = η(v) ∈ W⊥
- if dim P_W V_n = n, then the mapping V_n → P_W V_n is invertible
- given w ∈ W, one can find its projection onto P_W V_n via the orthogonal projection P_{P_W V_n}, then find v ∈ V_n from w(v) and determine η(v)
- this is the way to find v* ∈ V_n for any given w ∈ W and consequently determine u*

14 The One-Space Algorithm

15 Optimal Recovery [Micchelli, Rivlin, Lecture Notes in Math. (1985)]
- find an algorithm A : W → U that for each data point w = P_W u ∈ W recovers A(w) = A(P_W u) ∈ U
- A is instance optimal if ‖u − A(P_W u)‖_U ≤ C_A(w) dist(u, V_n)
- model knowledge: u ∈ K^n := {ũ ∈ U : dist(ũ, V_n) ≤ ε_n}
- fix w ∈ W and consider U_w := w + W⊥ and K_w^n := K^n ∩ U_w
- multi-space problem: K := ∩_{k=0}^n K^k and K_w := K ∩ U_w
Performance criteria based on best approximations (S = K^n, K):
E_A(S) := sup_{u∈S} ‖u − A(P_W u)‖,   E(S) := inf_A E_A(S)

16–19 Optimality of the One-Space Algorithm
β(V, W) := inf_{v∈V} sup_{w∈W} ⟨v, w⟩ / (‖v‖ ‖w‖),   μ(V, W) := sup_{η∈W⊥} ‖η‖ / ‖η − P_V η‖ = 1/β(V, W)

Theorem. Assume that β(V_n, W) > 0 and K_w^n ≠ ∅, and define
u* = u*(w) := argmin_{u∈U_w} ‖u − P_{V_n} u‖,   v* = v*(w) := P_{V_n} u*.
Then u* ∈ U_w, (u* − v*) ⊥ W, V_n, and ‖u* − v*‖ = min_{u∈U_w, v∈V_n} ‖u − v‖.
Moreover,
(i) rad(K_w^n) = μ(V_n, W) (ε_n² − ‖u*(w) − v*(w)‖²)^{1/2}   (Chebyshev radius);
(ii) the algorithm A : w ↦ u*(w) is optimal for recovering K_w^n, and
E_A(K_w^n) = E(K_w^n) = rad(K_w^n) = μ(V_n, W) (ε_n² − ‖u*(w) − v*(w)‖²)^{1/2};
(iii) A is also optimal for recovering K_W := ∪_{w∈W} K_w^n, and
E_A(K_W) = E(K_W) = μ(V_n, W) ε_n.

20 Multi-Space Problem
Can we do better if we consider all the subspaces V_0 ⊂ V_1 ⊂ ... ⊂ V_n?

21 Multi-Space Problem
- the spaces V_j for j = 1, 2, ..., n are usually defined sequentially using a greedy procedure
- for each j = 0, 1, ..., n we get an estimate dist(M, V_j) ≤ ε_j
- thus M is a subset of K^j := {x ∈ U : dist(x, V_j) ≤ ε_j}
- since u ∈ (w + W⊥), we can consider K_w^j := K^j ∩ (w + W⊥)
- all we know about the solution is that u ∈ K_w^j for every j, so it is better to use the intersection of all these sets instead of just one of them
- K_w^0 is a hypersphere and, in case V_n ∩ W⊥ = {0}, the sets K_w^j are (infinite-dimensional) hyperellipsoids
- for all practical purposes one can consider the intersections K_w^j ∩ (V_n + W)

22–23 Multi-Space Algorithm: Favorable Bases
- {φ_1, ..., φ_j} ONB for V_j, j ≤ n;  {ω_1, ..., ω_m} ONB for W
- the SVD of the cross-Gramian (⟨ω_i, φ_j⟩)_{i,j=1}^{m,n} yields rotated bases {φ*_1, ..., φ*_n} ONB for V_n and {ω*_1, ..., ω*_m} ONB for W with ⟨ω*_i, φ*_j⟩ = s_i δ_{i,j}
- if s_j < 1 for j = p+1, ..., n, then ψ_j := (φ*_j − s_j ω*_j) / (1 − s_j²)^{1/2}, j = p+1, ..., n, is an ONB for P_{W⊥} V_n
- (ψ̃_j)_{j≥1} is an ONB for the remaining part of W⊥, i.e., for (V_n + W)⊥
In these bases (a sketch of their computation follows below),
u = Σ_{j=1}^m w_j ω*_j + Σ_{j=p+1}^n x_j ψ_j + Σ_{j≥1} y_j ψ̃_j,
and the constraint u ∈ K_w^n reads
Σ_{j=p+1}^n s_j² (x_j − a_j)² + Σ_{j≥1} y_j² ≤ ε_n² − Σ_{j=n+1}^m (1 − s_j²) w_j²
(with the a_j determined by the data w). The solution u*(w) is sought in K_w := ∩_j K_w^j, an intersection of ellipsoids.
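A brief sketch of the favorable-basis construction in coordinates, again assuming orthonormal column bases `W` (measurement space) and `V` (reduced space); the names are illustrative. The SVD of the cross-Gramian rotates both bases and exposes the singular values s_j.

```python
import numpy as np

def favorable_bases(W, V, tol=1e-12):
    """Rotate the bases of V_n and W so that <omega*_i, phi*_j> = s_i delta_ij.

    W : (N, m) orthonormal columns spanning the measurement space
    V : (N, n) orthonormal columns spanning V_n  (assumes m >= n)
    """
    G = W.T @ V                       # cross-Gramian <omega_i, phi_j>
    U_, s, Vt = np.linalg.svd(G, full_matrices=True)
    W_star = W @ U_                   # rotated ONB omega*_i of W
    V_star = V @ Vt.T                 # rotated ONB phi*_j of V_n
    # psi_j: normalized components of phi*_j orthogonal to W (where s_j < 1)
    Psi = []
    for j in range(V.shape[1]):
        if s[j] < 1.0 - tol:
            psi = (V_star[:, j] - s[j] * W_star[:, j]) / np.sqrt(1.0 - s[j]**2)
            Psi.append(psi)
    return W_star, V_star, s, np.array(Psi).T
```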

24 Multi-Space Algorithm: Universal Bound
Favorable bases as above: {φ_1, ..., φ_j} ONB for V_j (j ≤ n), {ω_1, ..., ω_m} ONB for W, and the SVD of (⟨ω_i, φ_j⟩)_{i,j=1}^{m,n} giving {φ*_1, ..., φ*_n} ONB for V_n and {ω*_1, ..., ω*_m} ONB for W with ⟨ω*_i, φ*_j⟩ = s_i δ_{i,j} and μ(V_n, W) = 1/s_n.

Theorem. Define θ_i := Σ_{j=1}^n ⟨φ_j, φ*_i⟩ ε_{j−1}, and choose k_n and 0 < δ ≤ 1 such that δ θ²_{k_n} s²_{k_n} ≥ Σ_{j=k_n+1}^n θ_j² s_j². Let
E_n² := ε_n² + δ θ²_{k_n} + Σ_{j=k_n+1}^n θ_j².
Then rad(K^0) ≤ E_n, rad(K_w) ≤ 2 E_n for all w ∈ W, rad(K) ≤ 2 E_n, and
E_n ≤ 2 min_{0≤k≤n} μ(V_k, W) ε_k.

25 Numerical Algorithm to Find a Point Near the Intersection
Restrict the search to the finite dimensional space V_n + W.
Cyclic sequential projection (a sketch follows below): u_0 := w,  u_{k+1} := P_{K^n} P_{K^{n−1}} ⋯ P_{K^1} P_{K^0} P_{U_w} u_k
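An illustrative sketch of this cyclic projection scheme (a POCS-style iteration) in coordinates, assuming orthonormal column bases `W` for the measurement space and `Vs[j]` for each V_j with tolerances `eps[j]`; all names are placeholders and the loop is a plain implementation of the iteration displayed above, not the exact code used for the talk.

```python
import numpy as np

def project_onto_Kj(u, Vj, eps_j):
    """Orthogonal projection onto K^j = {x : dist(x, V_j) <= eps_j}:
    keep the V_j component, shrink the orthogonal remainder to length eps_j."""
    pv = Vj @ (Vj.T @ u)              # P_{V_j} u
    r = u - pv                        # component orthogonal to V_j
    nr = np.linalg.norm(r)
    return u if nr <= eps_j else pv + (eps_j / nr) * r

def cyclic_projections(w, W, Vs, eps, n_iter=100):
    """u_{k+1} = P_{K^n} ... P_{K^0} P_{U_w} u_k, started at u_0 = w."""
    u = w.copy()
    for _ in range(n_iter):
        u = w + (u - W @ (W.T @ u))   # P_{U_w}: keep W_perp part, reset data part to w
        for Vj, ej in zip(Vs, eps):   # then project onto K^0, K^1, ..., K^n in turn
            u = project_onto_Kj(u, Vj, ej)
    return u
```

Each factor is an exact projection onto a convex set (an affine space or a cylinder around V_j), so the composite iteration moves the point toward their intersection.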

26 The End Thank you!
