Tensor Sparsity and Near-Minimal Rank Approximation for High-Dimensional PDEs
1 Tensor Sparsity and Near-Minimal Rank Approximation for High-Dimensional PDEs
Wolfgang Dahmen, RWTH Aachen
Collaborators: Markus Bachmayr, Ron DeVore, Lars Grasedyck, Endre Süli
Paris, Oct. 11, 2013
W. Dahmen (RWTH Aachen), Tensor Sparsity, Oct. 11, 2013
2 Contents
1 Motivation, Background
2 Regularity Theorems
   Problem Setting
   Sparsity Models
   Main Results
3 Optimal Rank Approximation
   What is Known?
   Basic Strategy
   Main Result
   First Experiments
3 Motivation: High-Dimensional Problems
Data mining
Parameter-dependent PDEs: d = spatial dim. + parameter dim.
Stochastic PDEs: d =
Electronic Schrödinger equation: d = 3N
Fokker-Planck equations for polymeric fluids: d = 3K, K = length of polymer chains
Curse of Dimensionality: accuracy $\varepsilon$ $\Rightarrow$ computational cost $\sim \varepsilon^{-d/s}$; intractability results (Novak/Woźniakowski)
Remedies (?): excessive regularity; hidden sparsity with respect to a problem-dependent dictionary ... separation of variables ...
4 Motivation: Main Paradigms
Parameter-dependent PDEs (e.g. Reduced Basis Method) [Maday, Patera, ..., BCDDGW]:
$a(u, v; p) = \langle f, v\rangle,\quad v \in X,\quad p \in \mathcal P \subset \mathbb R^m,\qquad u(x, p) \approx \sum_{i=1}^{n} c_i(p)\, u(x, p_i)$
High-dimensional phase space, e.g. Fokker-Planck equations: operator splittings $\Rightarrow$ high-dimensional diffusion equation on a product domain [Barrett/Süli]
5 Products help ...
$f \in C^s([0,1]^d)$:
full expansion $f(x) = f(x_1, \dots, x_d) \approx \sum_{\nu \le n} c_\nu\, \psi_\nu(x)$
vs. separable expansion $f \approx \sum_{k=1}^{r} f_{k,1}(x_1)\cdots f_{k,d}(x_d)$, with $f_{k,l}(x_l) \approx \sum_{j \le n} c_{l,j}\, \psi_j(x_l)$
d.o.f.: $n^d =: N$  vs.  $r\,d\,n =: N$
accuracy: $O(n^{-s}) = O(N^{-s/d})$  vs.  $r\,d\,n^{-s} \simeq r\,d^{1+s} N^{-s}$
work/accuracy: $N \approx \varepsilon^{-d/s}$  vs.  $N \approx r^{1/s}\, d^{1+1/s}\, \varepsilon^{-1/s}$
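The d.o.f. comparison on this slide can be made concrete with a few lines of arithmetic. A minimal sketch; the values of d, n, r, s below are illustrative choices, not taken from the talk:

```python
# Full tensor-product grid vs. rank-r separable approximation:
# degrees of freedom needed for the same 1D resolution n.
d, n, r, s = 10, 100, 5, 2   # dimension, 1D grid size, rank, smoothness (assumed)

dof_full = n ** d            # n^d coefficients for a full tensor-product basis
dof_sep = r * d * n          # r*d*n coefficients for a rank-r separable expansion

acc = n ** (-s)              # both resolve the 1D factors to accuracy O(n^{-s})
print(f"full grid: {dof_full:.1e} dofs, separable: {dof_sep} dofs, 1D accuracy ~ {acc:.0e}")
```

With these numbers the full grid needs $10^{20}$ coefficients while the separable expansion needs only 5000, which is exactly the point of the slide's work/accuracy comparison.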
6 Regularity Theorems: Problem Setting
Setting: e.g. $D = -\sum_{j=1}^{d} \partial^2_{x_j}$, or more generally
$D = \sum_{j=1}^{d} I_1 \otimes \cdots \otimes I_{j-1} \otimes D_j \otimes I_{j+1} \otimes \cdots \otimes I_d$, $\quad D_j : H_j(\Omega_j) \to (H_j(\Omega_j))'$ $H_j$-elliptic, $\quad \times_{j=1}^{d} \Omega_j =: \Omega \subset \mathbb R^{dp}$
$H := \bigcap_{j=1}^{d} \big\{ L_2(\Omega_1) \otimes \cdots \otimes L_2(\Omega_{j-1}) \otimes H_j(\Omega_j) \otimes L_2(\Omega_{j+1}) \otimes \cdots \otimes L_2(\Omega_d) \big\}$
$\|v\|_H^2 := \langle Dv, v\rangle,\qquad a(u,v) := \langle Du, v\rangle,\quad v, w \in H$
$D : H \to H',\qquad H \subset L_2(\Omega) \subset H'$
$\|v\|_{H^s}^2 = \|v\|_s^2 := \langle D^s v, v\rangle,\qquad H = H^1$
Solution structure (?) of $\quad a(u,v) = \langle f, v\rangle,\; v \in H$
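The Kronecker-sum structure of $D$ used throughout the talk can be sketched for tiny dense 1D factors. A minimal illustration (the 1D operators are hypothetical finite-difference Laplacians, not the talk's $D_j$), checking that the eigenvalues of $D$ are the sums $\lambda_\nu = \lambda_{1,\nu_1} + \cdots + \lambda_{d,\nu_d}$ used later for the norms $\|v\|_s$:

```python
import numpy as np

def kron_sum(mats):
    """Assemble D = sum_j I x ... x D_j x ... x I explicitly (small sizes only)."""
    d = len(mats)
    sizes = [m.shape[0] for m in mats]
    D = np.zeros((int(np.prod(sizes)), int(np.prod(sizes))))
    for j, Dj in enumerate(mats):
        factors = [np.eye(sizes[i]) if i != j else Dj for i in range(d)]
        term = factors[0]
        for f in factors[1:]:
            term = np.kron(term, f)
        D += term
    return D

def lap1d(n):
    # a symmetric positive-definite 1D model operator (illustrative)
    return 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

mats = [lap1d(3), lap1d(4), lap1d(2)]
D = kron_sum(mats)

# eigenvalues of the Kronecker sum = all sums of 1D eigenvalues
lams = [np.linalg.eigvalsh(m) for m in mats]
sums = sorted(a + b + c for a in lams[0] for b in lams[1] for c in lams[2])
assert np.allclose(sorted(np.linalg.eigvalsh(D)), sums)
```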
7 Main Objectives
Regularity: $a(u,v) = \langle f, v\rangle,\; v \in H$ ... suppose that $f$ is tensor sparse ... does it follow that $u = D^{-1} f$ is also tensor sparse,
$u \approx \sum_{k=1}^{n(\varepsilon)} u_{k,1} \otimes \cdots \otimes u_{k,d}$, and with which $n(\varepsilon)$?
Computability: compute tensor-sparse approximations to $D^{-1} f$ with near-minimal cost:
   realize (near-)minimal ranks
   find (near-)optimally sparse representations of the tensor factors
Issues: stability of tensor formats [Lathauwer, Hackbusch, Grasedyck, Oseledets, ...]; continuous versus discrete ... a scaling trap
8 Tensor Sparsity: Sparsity Models
Model 1:
$\Sigma_n := \Big\{ g = \sum_{k=1}^{r} \bigotimes_{j=1}^{d} g_{k,j} \;:\; g_{k,j} = \sum_{\mu \in \Gamma_{k,j}} c_{k,j,\mu}\, e_{j,\mu},\;\; \sum_{k=1}^{r} \sum_{j=1}^{d} \#(\Gamma_{k,j}) \le n \Big\}$
$\sigma_n(f)_{H^t} := \inf\big\{ \|f - g\|_{H^t} : g \in \Sigma_n \big\}$
Given a growth sequence $\gamma(n) \nearrow \infty$:
$A^{(1)}_\gamma\big((\Sigma_n), H^t\big) := \big\{ f \in H^t : |f|_{\gamma,t} := \sup_{n \in \mathbb N} \gamma(n)\, \sigma_n(f)_{H^t} < \infty \big\}$
How to read this:
$v \in A^{(1)}_\gamma((\Sigma_n), H^t) \;\Rightarrow\; \sigma_n(v)_{H^t} \le \gamma(n)^{-1} |v|_{\gamma,t} \;\Rightarrow\; \forall\, \varepsilon \;\exists\, v_\varepsilon \in \Sigma_{\gamma^{-1}(|v|_{\gamma,t}/\varepsilon)}$ such that $\|v - v_\varepsilon\|_t \le \varepsilon$
... it takes $\gamma^{-1}(|v|_{\gamma,t}/\varepsilon)$ d.o.f./rank to achieve accuracy $\varepsilon$ in $H^t$
9 Tensor Sparsity: Sparsity Models
Model 2:
$\Sigma_n^{s,b} := \Big\{ g = \sum_{k=1}^{n} \bigotimes_{j=1}^{d} g_{k,j} \;:\; g_k \in H^s,\;\; \sum_k \|g_k\|_s \le b\, \|g\|_s \Big\},\qquad \Sigma_n := \bigcup_{b>0} \Sigma_n^{s,b}$
For $t < s$: $\quad \sigma_n^{s,b}(f)_{H^t} := \inf_{g \in \Sigma_n^{s,b}} \|f - g\|_t,\qquad |f|_{\gamma,b,t,s} := \sup_{n \in \mathbb N} \gamma(n)\, \sigma_n^{s,b}(f)_{H^t}$
$A^{(2)}_\gamma\big((\Sigma_n), H^t\big) := \big\{ f \in H^t : \exists\, b < \infty \text{ s.t. } |f|_{\gamma,b,t,s} < \infty \big\}$
10 Sparsity Results: Main Results
Theorem 1: For $i \in \{1,2\}$:
$f \in A^{(i)}_\gamma\big((\Sigma_n), H^{-1+\zeta}\big) \;\Rightarrow\; u = D^{-1} f \in A^{(i)}_{\hat\gamma}\big((\Sigma_n), H^1\big)$,
where $\hat\gamma(n) := (\gamma \circ G^{-1})(n)$, $\quad G(n) := \kappa(\zeta)\, n\, \big( \log(\gamma(n)\, C(f)) \big)^2$.
Complexity:
$i = 1$: $\;|u|_{\hat\gamma,1} \le 2\, |f|_{\gamma,-1+\zeta}$; accuracy $\varepsilon$ requires $\#(\mathbf c(u_\varepsilon)) \le \hat\gamma^{-1}\big( 2\, |f|_{\gamma,-1+\zeta}/\varepsilon \big)$
$i = 2$: $\;\#(\mathbf c(u_\varepsilon)) \lesssim C_1(f,\zeta)\, \gamma^{-1}(\varepsilon^{-1})\, (\log \varepsilon)^2\, d^{1+1/\zeta}\, \varepsilon^{-1/\zeta}$,
$\#\mathrm{ops}(u_\varepsilon) = O\big( d^{1/\zeta} \log(d\, F(\varepsilon))\, F(\varepsilon)^{1/\zeta} \big)$, $\quad F(\varepsilon) := \kappa\, \gamma^{-1}(C/\varepsilon)\, \big( \log(\varepsilon)/\varepsilon \big)^2$
Examples:
$i = 1$: $\gamma(n) = n^\alpha \;\Rightarrow\; \hat\gamma(n) \simeq \big( n/(c \log^2 n) \big)^\alpha$
$i = 2$: $\gamma(n) = e^{\alpha n} \;\Rightarrow\; \hat\gamma(n) \simeq e^{C(\zeta,f)\, (\alpha n)^{1/3}}$
11 A Tool: Exponential Sums ... [Braess/Hackbusch]
$\sup_{x \in [1,\infty)} \Big| \frac1x - s_r(x) \Big| \le C\, e^{-\pi\sqrt r},\qquad s_r(x) = \sum_{k=1}^{r} \omega_{r,k}\, e^{-\alpha_{r,k}\, x}$
For $\tau = \tau_1 \otimes \cdots \otimes \tau_d$:
$D^{-1}\tau \approx s_r(D)\tau := \sum_{k=1}^{r} \omega_{r,k}\, e^{-\alpha_{r,k} D}\, \tau = \sum_{k=1}^{r} \omega_{r,k} \bigotimes_{j=1}^{d} \underbrace{e^{-\alpha_{r,k} D_j}\, \tau_j}_{g^k_j}$
PROPOSITION 1: For $-1 \le t \le s \le 1$ and $\tau \in H^t$ one has
$\big\| D^{-1} - s_r(D) \big\|_{H^t \to H^s} \le C\, e^{-\frac{(2-s+t)\pi}{2}\sqrt r}$
Eigensystem for the $D_j$: $\{e_{j,k}\}_{k \in \mathbb N}$, $\;D_j e_{j,k} = \lambda_{j,k}\, e_{j,k}$;
$e_\nu := e_{1,\nu_1} \otimes \cdots \otimes e_{d,\nu_d}$, $\quad D e_\nu = \lambda_\nu e_\nu$, $\quad \lambda_\nu = \lambda_{1,\nu_1} + \cdots + \lambda_{d,\nu_d}$,
$\|v\|_s^2 = \sum_{\nu \in \mathbb N^d} \lambda_\nu^s\, |\langle v, e_\nu\rangle|^2$
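The Braess/Hackbusch idea can be sketched numerically: $1/x = \int e^{-x e^s}\, e^s\, ds$, discretized by the trapezoidal rule, yields $1/x \approx \sum_k \omega_k e^{-\alpha_k x}$ uniformly on $[1,\infty)$. A minimal sketch; the step size and truncation range below are illustrative choices, not the optimized exponents of the Proposition:

```python
import numpy as np

# Trapezoidal discretization of 1/x = int exp(-x*e^s) e^s ds over s in [-20, 20].
h = 0.5
s = np.arange(-40, 41) * h     # quadrature nodes in the s-variable (81 terms)
alpha = np.exp(s)              # exponents alpha_k
omega = h * np.exp(s)          # weights  omega_k

def s_r(x):
    """Exponential-sum approximation of 1/x, vectorized over x."""
    return np.sum(omega * np.exp(-np.outer(np.atleast_1d(x), alpha)), axis=1)

x = np.linspace(1.0, 1000.0, 2000)
err = np.max(np.abs(1.0 / x - s_r(x)))
print(f"{len(s)} terms, max error on [1, 1000]: {err:.2e}")
assert err < 1e-6
```

Applied to $D$ with the Kronecker-sum structure, each term $e^{-\alpha_k D}$ factorizes over the coordinates, which is what makes $s_r(D)\tau$ cheap to evaluate.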
12 Complexity: Main Results
$\mathrm{cost}(D^{-1} f, \varepsilon)$ := computational cost of solving $Du = f$ with accuracy $\varepsilon$
$f = \tau$: evaluate the operator exponentials
$s_r(D)\tau = \sum_{k=1}^{r} \omega_{r,k} \bigotimes_{j=1}^{d} e^{-\alpha_{r,k} D_j}\, \tau_j,\qquad r = r(\varepsilon) \simeq |\log \varepsilon|^2$
via the Dunford integral
$e^{-t D_j}\, \tau_j = \frac{1}{2\pi i} \oint_\Gamma e^{-t\gamma}\, (\gamma I - D_j)^{-1}\, \tau_j\, d\gamma$
(contour $\Gamma$ enclosing the spectrum of $D_j$) ... truncation, sinc-quadrature:
$\simeq d\, |\log \varepsilon|$ resolvent solves at cost $(\varepsilon/d)^{-1/\zeta}$ each, for $\simeq |\log \varepsilon|^2$ terms
$\Rightarrow\quad \mathrm{cost}(D^{-1}\tau, \varepsilon) \lesssim d^{1+1/\zeta}\, \varepsilon^{-1/\zeta}\, |\log \varepsilon|^3 \qquad$ (instead of $\varepsilon^{-d/\zeta}$)
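The key structural fact behind this cost estimate is that for a Kronecker sum the operator exponential factorizes, so each of the $r$ terms costs $d$ one-dimensional exponentials/solves rather than one $d$-dimensional one. A minimal check of the identity $e^{-t(D_1 \oplus D_2)} = e^{-tD_1} \otimes e^{-tD_2}$ on small hypothetical 1D operators:

```python
import numpy as np
from scipy.linalg import expm

def lap1d(n):
    # illustrative symmetric 1D model operator
    return 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

D1, D2 = lap1d(3), lap1d(4)
# Kronecker sum: D = D1 x I + I x D2  (the two terms commute)
D = np.kron(D1, np.eye(4)) + np.kron(np.eye(3), D2)

t = 0.7
left = expm(-t * D)                              # one 12x12 exponential
right = np.kron(expm(-t * D1), expm(-t * D2))    # two small exponentials
assert np.allclose(left, right)
```

The identity holds because the two Kronecker terms commute; it is exactly what lets $s_r(D)\tau$ be evaluated factor by factor.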
13 Inventory: What is Known?
Some facts [de Silva, Lathauwer, Hackbusch, Grasedyck, Oseledets, Schneider, ...]:
- canonical format $\sum_{k \le N} u_{k,1} \otimes \cdots \otimes u_{k,d}$: in general unstable
- optimal subspace methods: a unique best approximation exists and is realized by orthogonal projections — Tucker/(H-Tucker) formats; HOSVD yields near-minimal-rank approximation; efficient numerical tools [Espig, Kolda, ...]
- operator equations: immediate reduction to a fixed discrete system detaches accuracy considerations from the continuous solution; approximation error and residuals are measured in the same (Euclidean) norm — a scaling trap; accuracy and rank growth cannot be controlled simultaneously
- PGD ... convergence, ranks? ... [Falcó, Chinesta, Nouy, ...]
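The instability of the canonical format can be seen in the classical border-rank example of de Silva and Lim: a rank-3 tensor that is approximated arbitrarily well by rank-2 tensors whose factors blow up, so no best rank-2 approximation exists. A minimal numerical sketch:

```python
import numpy as np

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])

def outer3(a, b, c):
    """Rank-one 3-tensor a x b x c."""
    return np.einsum('i,j,k->ijk', a, b, c)

# W has canonical rank 3 ...
W = outer3(x, x, y) + outer3(x, y, x) + outer3(y, x, x)

def rank2_approx(n):
    # ... yet n*(x + y/n)^{x3} - n*x^{x3} is rank 2 and converges to W
    # as n -> infinity, while the factor norms grow like n.
    return n * outer3(x + y / n, x + y / n, x + y / n) - n * outer3(x, x, x)

errs = [np.linalg.norm(W - rank2_approx(n)) for n in (1, 10, 100, 1000)]
assert all(e2 < e1 for e1, e2 in zip(errs, errs[1:]))   # error decreases to 0
```

This is precisely the instability that motivates working in the (H-)Tucker formats, where best approximations exist.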
14 Basic Strategy: Reduction to a Problem in $\ell_2$
Universal background basis: $\{\psi_\nu = \psi_{\nu_1} \otimes \cdots \otimes \psi_{\nu_d} : \nu \in J^d\}$, an O.N.B. for $L_2(\Omega)$;
$\Psi = \Big\{ \Big( \sum_{i=1}^{d} 2^{2|\nu_i|} \Big)^{-1/2} \psi_\nu =: s_\nu\, \psi_\nu \Big\}_{\nu \in J^d}$ a Riesz basis for $H \subset L_2(\Omega)$
$Du = f \;\Longleftrightarrow\; \mathbf A \mathbf u = \mathbf f,\qquad \mathbf A = \big( s_\nu\, a(\psi_\nu, \psi_\mu)\, s_\mu \big)_{\nu,\mu \in J^d},\qquad \mathbf f = \big( \langle f, s_\nu \psi_\nu \rangle \big)_{\nu \in J^d}$
Theorem: $\kappa(\mathbf A) := \|\mathbf A\|\, \|\mathbf A^{-1}\| < \infty$, and $u \in H \Longleftrightarrow \mathbf u = (u_\nu)_{\nu \in J^d} \in \ell_2(J^d)$
15 Basic Strategy — Scheme: Perturbed Ideal Iteration
Algorithm: $\;\mathbf u^{k+1} = \mathrm C_{\varepsilon_3(k)}\Big( \mathrm P_{\varepsilon_2(k)}\big( \mathbf u^k + \omega(\mathbf f - \mathbf A \mathbf u^k) \big) \Big),\qquad \|\mathbf u - \mathbf u^{k+1}\| \le \rho\, \|\mathbf u - \mathbf u^k\|,\quad \rho < 1$
Mode frames: $\mathbf U^{(j)}_k \in \ell_2(J)$, $k \in \mathbb N$, $j = 1, \dots, d$, $\quad \langle \mathbf U^{(i)}_k, \mathbf U^{(i)}_l \rangle = \delta_{kl}$, $k, l \in \mathbb N$
Tucker format: $\;\mathbf u = \sum_{k_1=1}^{\infty} \cdots \sum_{k_d=1}^{\infty} c_{\mathbf k}\, \mathbf U^{(1)}_{k_1} \otimes \cdots \otimes \mathbf U^{(d)}_{k_d} =: \sum_{\mathbf k \in \mathbb N^d} c_{\mathbf k}\, \mathbf U_{\mathbf k}$
Hierarchical Tucker (H-T) format: hierarchical factorization of the core tensor $(c_{\mathbf k})_{\mathbf k \in \mathbb N^d}$
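For a finite tensor, the mode frames and core of the Tucker format can be computed by the higher-order SVD (HOSVD): one SVD per mode unfolding, then an orthogonal projection onto the resulting subspaces. A minimal dense sketch (sizes and the rank-(2,2,2) test tensor are illustrative):

```python
import numpy as np

def unfold(T, mode):
    """Mode-`mode` matricization of a dense tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated HOSVD: mode frames U^{(j)} and core tensor c."""
    U = [np.linalg.svd(unfold(T, j), full_matrices=False)[0][:, :r]
         for j, r in enumerate(ranks)]
    core = T
    for j, Uj in enumerate(U):
        # project mode j onto the span of the first r_j left singular vectors
        core = np.moveaxis(np.tensordot(Uj.T, np.moveaxis(core, j, 0), axes=1), 0, j)
    return core, U

# build a tensor of exact multilinear rank (2, 2, 2)
rng = np.random.default_rng(0)
G = rng.standard_normal((2, 2, 2))
Us = [np.linalg.qr(rng.standard_normal((n, 2)))[0] for n in (5, 6, 7)]
T = np.einsum('abc,ia,jb,kc->ijk', G, *Us)

core, U = hosvd(T, (2, 2, 2))
T_rec = np.einsum('abc,ia,jb,kc->ijk', core, *U)
assert np.linalg.norm(T - T_rec) < 1e-10 * np.linalg.norm(T)
```

For exactly low-rank input the truncated HOSVD is exact; in general it is quasi-optimal, which is the "near-minimal rank" property invoked on the previous slide.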
16 Basic Strategy: Some New Ingredients ...
- Wavelet techniques for the 1D tensor factors (coarsening, best $N$-term approximation)
- Thresholding Lemma: restoring a near-minimal-rank approximation to the unknown solution from given approximations
- Contractions: $\;\pi^{(i)}(\mathbf u) = \big( \pi^{(i)}_{\nu_i}(\mathbf u) \big)_{\nu_i \in J} := \Big( \big( \sum_{\check\nu_i} |u_\nu|^2 \big)^{1/2} \Big)_{\nu_i \in J}$ (the sum runs over all index positions except the $i$-th);
$\pi^{(i)}_\nu(\mathbf u) = \big( \sum_k |U^{(i)}_{\nu,k}|^2\, \sigma_k^{(i)\,2} \big)^{1/2},\qquad \pi^{(i)}_\nu\big(\mathrm P_{U(\mathbf u),r}\, \mathbf u\big) \le \pi^{(i)}_\nu(\mathbf u),\quad \nu \in J$
- Exponential sum approximation to the (non-separable) scaling matrices $\mathbf S = (s_\nu\, \delta_{\nu,\mu})_{\nu,\mu \in J^d}$ in $\mathbf A = \mathbf S \mathbf T \mathbf S$
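For a finite tensor, the contraction $\pi^{(i)}(\mathbf u)$ is just the vector of $\ell_2$-norms of $\mathbf u$ over all indices except the $i$-th, i.e. the row norms of the mode-$i$ unfolding. A minimal sketch on a small dense tensor (shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.standard_normal((4, 5, 6))

def contraction(u, i):
    """pi^{(i)}(u): l2-norm over all modes except mode i."""
    axes = tuple(j for j in range(u.ndim) if j != i)
    return np.sqrt(np.sum(u ** 2, axis=axes))

pi0 = contraction(u, 0)

# identical to the row norms of the mode-0 unfolding
unf0 = u.reshape(u.shape[0], -1)
assert np.allclose(pi0, np.linalg.norm(unf0, axis=1))

# and the contraction preserves the total norm: ||pi^{(i)}(u)|| = ||u||
assert np.isclose(np.linalg.norm(pi0), np.linalg.norm(u))
```

These quantities are what the adaptive scheme thresholds when coarsening the 1D factors, since discarding small entries of $\pi^{(i)}(\mathbf u)$ controls the resulting error in $\ell_2(J^d)$.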
17 Main Result: Optimal Convergence
Benchmarks/Assumptions (cf. Model 2):
- $\mathbf u$ is tensor sparse: $\;\mathbf u \in A_{\gamma_u}\big((\Sigma^{HT}_n), \ell_2(J^d)\big) =: A^{HT}_{\gamma_u}$
- $\mathbf A$ is tensor sparse — can be well approximated by low-rank matrices
- $\pi^{(i)}(\mathbf u) \in \mathcal A^s$, $i \le d$, $\;$ where $\;\mathbf v \in \mathcal A^s \Leftrightarrow \sup_n n^s \big( \inf_{\#\mathrm{supp}\, \mathbf z \le n} \|\mathbf v - \mathbf z\| \big) =: |\mathbf v|_{\mathcal A^s} < \infty$
- the low-rank approximations to $\mathbf A$ are $s'$-compressible with $s' > s$
Theorem 2: For $\varepsilon > 0$ the Algorithm produces a $\mathbf u_\varepsilon$ with $\|\mathbf u - \mathbf u_\varepsilon\| \le \varepsilon$ s.t.:
$\mathrm{rank}(\mathbf u_\varepsilon) \lesssim \gamma_u^{-1}\big( C\, |\mathbf u|_{A^{HT}_{\gamma_u}} / \varepsilon \big),\qquad |\mathbf u_\varepsilon|_{A^{HT}_{\gamma_u}} \le C\, |\mathbf u|_{A^{HT}_{\gamma_u}}$
$\sum_{i=1}^{d} \#(\mathrm{supp}_i(\mathbf u_\varepsilon)) \lesssim \Big( \sum_{i=1}^{d} |\pi^{(i)}(\mathbf u)|_{\mathcal A^s} / \varepsilon \Big)^{1/s},\qquad \mathrm{supp}_i(\mathbf u) := \bigcup_{k \in \mathbb N} \mathrm{supp}\, \mathbf U^{(i)}_k$
Stability in $A^{HT}_{\gamma_u}$, $\mathcal A^s$: $\;\sum_{i=1}^{d} |\pi^{(i)}(\mathbf u_\varepsilon)|_{\mathcal A^s} \lesssim \sum_{i=1}^{d} |\pi^{(i)}(\mathbf u)|_{\mathcal A^s}$
$\#(\mathrm{ops}) \lesssim |\log \varepsilon|^{C(\mathbf A, \mathbf f, \log d)} \Big( \sum_{i=1}^{d} \max\big\{ |\pi^{(i)}(\mathbf u)|_{\mathcal A^s},\, |\pi^{(i)}(\mathbf f)|_{\mathcal A^s} \big\} / \varepsilon \Big)^{1/s}$
18 Numerical Experiments
$(Tv)(t) := \int_0^t v(s)\, ds,\qquad (I - \omega\, T \otimes \cdots \otimes T)\, \mathbf u = \mathbf f,\qquad \mathbf f = \bigotimes^{d} 2\pi\, \chi_{[0,1/\pi]} \cos(2\pi^2\, \cdot)$
[Convergence plots for $d = 32$, $d = 64$, $d = 128$]
19 Numerical Experiments (cont.)
[Further plots for $d = 32$, $d = 64$, $d = 128$]
More informationINCOMPRESSIBLE FLUIDS IN THIN DOMAINS WITH NAVIER FRICTION BOUNDARY CONDITIONS (II) Luan Thach Hoang. IMA Preprint Series #2406.
INCOMPRESSIBLE FLUIDS IN THIN DOMAINS WITH NAVIER FRICTION BOUNDARY CONDITIONS II By Luan Thach Hoang IMA Preprint Series #2406 August 2012 INSTITUTE FOR MATHEMATICS AND ITS APPLICATIONS UNIVERSITY OF
More informationDynamical systems with Gaussian and Levy noise: analytical and stochastic approaches
Dynamical systems with Gaussian and Levy noise: analytical and stochastic approaches Noise is often considered as some disturbing component of the system. In particular physical situations, noise becomes
More informationFast Sparse Spectral Methods for Higher Dimensional PDEs
Fast Sparse Spectral Methods for Higher Dimensional PDEs Jie Shen Purdue University Collaborators: Li-Lian Wang, Haijun Yu and Alexander Alekseenko Research supported by AFOSR and NSF ICERM workshop, June
More informationDFG-Schwerpunktprogramm 1324
DFG-Schwerpunktprogramm 1324 Extraktion quantifizierbarer Information aus komplexen Systemen Multilevel preconditioning for sparse optimization of functionals with nonconvex fidelity terms S. Dahlke, M.
More informationSparsity Models. Tong Zhang. Rutgers University. T. Zhang (Rutgers) Sparsity Models 1 / 28
Sparsity Models Tong Zhang Rutgers University T. Zhang (Rutgers) Sparsity Models 1 / 28 Topics Standard sparse regression model algorithms: convex relaxation and greedy algorithm sparse recovery analysis:
More informationA space-time Trefftz method for the second order wave equation
A space-time Trefftz method for the second order wave equation Lehel Banjai The Maxwell Institute for Mathematical Sciences Heriot-Watt University, Edinburgh & Department of Mathematics, University of
More informationQuarkonial frames of wavelet type - Stability, approximation and compression properties
Quarkonial frames of wavelet type - Stability, approximation and compression properties Stephan Dahlke 1 Peter Oswald 2 Thorsten Raasch 3 ESI Workshop Wavelet methods in scientific computing Vienna, November
More informationEffective dynamics of many-body quantum systems
Effective dynamics of many-body quantum systems László Erdős University of Munich Grenoble, May 30, 2006 A l occassion de soixantiéme anniversaire de Yves Colin de Verdiére Joint with B. Schlein and H.-T.
More informationOUTLINE ffl CFD: elliptic pde's! Ax = b ffl Basic iterative methods ffl Krylov subspace methods ffl Preconditioning techniques: Iterative methods ILU
Preconditioning Techniques for Solving Large Sparse Linear Systems Arnold Reusken Institut für Geometrie und Praktische Mathematik RWTH-Aachen OUTLINE ffl CFD: elliptic pde's! Ax = b ffl Basic iterative
More informationThe Proper Generalized Decomposition: A Functional Analysis Approach
The Proper Generalized Decomposition: A Functional Analysis Approach Méthodes de réduction de modèle dans le calcul scientifique Main Goal Given a functional equation Au = f It is possible to construct
More informationRank reduction of parameterized time-dependent PDEs
Rank reduction of parameterized time-dependent PDEs A. Spantini 1, L. Mathelin 2, Y. Marzouk 1 1 AeroAstro Dpt., MIT, USA 2 LIMSI-CNRS, France UNCECOMP 2015 (MIT & LIMSI-CNRS) Rank reduction of parameterized
More information