Tensor-Product Representation of Operators and Functions (7 introductory lectures) Boris N. Khoromskij


Everything should be made as simple as possible, but not simpler. - A. Einstein

Tensor-Product Representation of Operators and Functions (7 introductory lectures)
Boris N. Khoromskij, University of Leipzig / MPI MIS, winter semester 2006/2007

Outline of the Lecture Course (B. Khoromskij, Leipzig 2006, L1)

1. (A) Ubiquitous data-sparse matrix/tensor arithmetics. (B) Separable approximation of multi-variate functions in $\mathbb{R}^d$.
2. Sinc interpolation and quadratures; the celebrated sampling theorem; the Fourier kingdom.
3. Tucker/canonical decomposition of high-order tensors: formatted multi-linear algebra, approximation theory, numerical methods (there is still much to be understood!).
4. Kronecker-product representation of multi-dimensional integral operators $Au = \int_{\mathbb{R}^d} g(\cdot, y)\, u(y)\, dy$.
5. Structured representation of matrix-valued functions $A^{-1}$, $A^{-\alpha}$.
6. Applicability to the Hartree-Fock/Kohn-Sham and Ornstein-Zernike equations.

Lect. 1. (A) Ubiquitous data-sparse matrix/tensor arithmetics

Basic physical models are described by nonlocal data transforms. Examples:

1. Multi-dimensional integral operators in $\mathbb{R}^d$ (convolution, Fourier and Laplace transforms).
2. Elliptic/parabolic solution operators (Green's functions).
3. Density matrix calculation for many-particle systems (Hartree-Fock and Kohn-Sham equations in $\mathbb{R}^3$).
4. Convolution and functional transforms from the Ornstein-Zernike equation in $\mathbb{R}^3$ (theory of disordered matter).
5. Collision integrals from the deterministic Boltzmann equation in $\mathbb{R}^3$ (dilute gas).
6. Multi-dimensional data in chemometrics, psychometrics, higher-order statistics, financial mathematics, ...

Nonlocal operators in a wide range of applications

Functions and integral operators (e.g., convolution) in $\mathbb{R}^d$: $A \in \mathbb{R}^{n^d \times n^d}$, and matrix-valued functions $A^{-1}$, $A^{-\alpha}$ ($\alpha > 0$), $\exp(-tA)$, $\mathrm{sign}(A)$.

Objectives in many-particle models via the Hartree-Fock equation:
$$\Big[-\tfrac{1}{2}\Delta + V_c(x) + \int_{\mathbb{R}^3} \frac{\rho(y,y)}{|x-y|}\, dy\Big]\varphi(x) - \int_{\mathbb{R}^3} \frac{\rho(x,y)}{|x-y|}\,\varphi(y)\, dy = \lambda\,\varphi(x),$$
where $\rho(x,y) = \sum_{i=1}^{N_e/2} \varphi_i(x)\varphi_i(y)$ is the electron density matrix, $e^{-\mu|x|}$ the density function for the hydrogen atom, and $\tfrac{1}{|x|}$ the Newton potential.

Hartree-Fock-Slater equation:
$$\Big[-\tfrac{1}{2}\Delta + V(x) + \int_{\mathbb{R}^3} \frac{\rho(y)}{|x-y|}\, dy - \alpha V_\rho(x)\Big]\psi = \lambda\,\psi, \qquad V_\rho(x) = \Big(\tfrac{3}{\pi}\,\rho(x)\Big)^{1/3}.$$

Ornstein-Zernike integral-algebraic equation in $\mathbb{R}^3$ (molecular density):
$$h(r) = c(r) + \rho \int_{\mathbb{R}^3} c(|r - r'|)\, h(r')\, dr', \qquad h(r) = \exp[-\beta u(r) + h(r) - c(r)] - 1.$$

Boltzmann integro-differential equation in $\mathbb{R}^3$ (dilute gas).

Breaking down the complexity

Approximate multi-variate functions and multi-dimensional operators avoiding the curse of dimensionality. Goal: solve the basic equations with $O(dn \log^q n)$ cost instead of $O(n^d)$.

Structured tensor decompositions in $\mathbb{R}^d$:
- Orthogonal rank-$(r_1, \ldots, r_d)$ Tucker model, $\mathcal{T}_r$
- Canonical (CP) approximation, $\mathcal{C}_r$
- Two-level rank-$(r_1, \ldots, r_d; q)$ and mixed models

Approximation tools: sinc interpolation/quadratures; exponential fitting; greedy algorithms; direct minimisation of the cost functional; truncated Newton-Schulz iteration.

Numerics: $\frac{1}{|x-y|}$, $e^{-|x-y|^\gamma}$, $\frac{e^{-|x-y|}}{|x-y|}$, $\frac{\cos|x-y|}{|x-y|}$, $\sum_k c_k\, e^{-t_k |x-y|}$.
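As a foretaste of the sinc-quadrature and exponential-fitting tools listed above, the trapezoidal rule applied to the integral representation $1/\rho = \int_{-\infty}^{\infty} e^{t - \rho e^t}\, dt$ produces an exponential sum $\sum_k w_k e^{-a_k \rho}$, which becomes separable in the coordinates once $\rho$ is, e.g., a squared distance. A minimal numpy sketch; the step size and truncation interval are illustrative, untuned choices:

```python
import numpy as np

# Trapezoidal (sinc) quadrature for 1/rho = int_{-inf}^{inf} exp(t - rho*e^t) dt.
# Each node t_k contributes one term w_k * exp(-a_k * rho), so the truncated
# sum is an r-term exponential-sum approximation of the kernel 1/rho.
h = 0.1                            # step size (illustrative choice)
t = np.arange(-40.0, 5.0, h)       # truncated node set t_k
w = h * np.exp(t)                  # weights  w_k = h * e^{t_k}
a = np.exp(t)                      # exponents a_k = e^{t_k}

def inv_rho(rho):
    """Exponential-sum approximation of 1/rho, accurate for rho in [1, 100]."""
    return float(np.sum(w * np.exp(-a * rho)))

print(max(abs(inv_rho(r) - 1.0 / r) for r in [1.0, 5.0, 50.0]))
```

The error decays exponentially in the number of quadrature points, which is the mechanism behind the convergence rates $\exp(-r^q)$ discussed later in this lecture.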

Huge problems: special methods vs. super-computers

The algebraic operations on high-dimensional, densely populated tensors require huge computational resources; note that even linear complexity $O(N)$ with $N = n^d$ is prohibitive. Standard asymptotically optimal methods suffer from the curse of dimensionality (R. Bellman).

Complexity of matrix operations in full arithmetics:
$$N_{\mathrm{Stor}},\ N_{A \cdot v} = O(N^2); \qquad N_{A^{-1}},\ N_{A \cdot B},\ N_{LU} = O(N^3); \qquad N_{\mathrm{EVD}},\ N_{\mathrm{SVD}} = O(N^3).$$

A paradigm of up-to-date numerical simulation: the faster the computer, the better the asymptotic complexity of the fast algorithms must be (speed increases proportionally to memory).

Large problems in low dimensions

In low dimensions ($d \le 3$) the goal is $O(N)$-methods. Basic principles: making use of hierarchical structures, low-rank patterns and recursive algorithms.

Based on recursions via hierarchical structures:
- Classical Fourier methods, FFT in $O(N \log N)$ op.
- FFT-based circulant convolution; Toeplitz and Hankel matrices.
- Multiresolution representation via wavelets, FWT in $O(N)$ op.
- Multigrid methods: $O(N)$ elliptic problem solvers.
- Domain decomposition: $O(N/p)$ parallel algorithms.
- Fast multipole, panel clustering, $\mathcal{H}$-matrices in $O(c^d N \log^\beta N)$ op.; well suited for integral (nonlocal) operators in FEM/BEM.

Old and new ideas, or what we are going to discuss

From the multi-dimensional perspective, $O(N)$ complexity is not enough, since $N = n^d$ scales exponentially in $d$. In the Schrödinger equation $d = 3N_e$ (cf. the number of molecules in 1 cm³ of water). The challenge is to develop $O(dn)$-algorithms!

Main ideas: tensor-product data formats + structured representation of the low-dimensional components.

Based on tensor-product data organisation:
- Kronecker tensor-product (KT) representation in $\mathbb{R}^N$, $N = n^d$ (multiway decomposition): $O(d n^q \log^\beta n)$ op., with fixed $q = q(d)$.
- Effective multi-linear algebra.
- Combination of KT formats with $\mathcal{H}$-matrix, wavelet or FFT-based structures: $O(d n \log^\beta n)$ op.
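The basic gain behind KT formats can already be seen for $d = 2$: the Kronecker identity $(A \otimes B)\,\mathrm{vec}(X) = \mathrm{vec}(A X B^T)$ (with row-major vectorisation, as numpy uses) lets one apply $A \otimes B$ without ever forming the $n^2 \times n^2$ matrix. A small numpy check:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))
X = rng.standard_normal((n, n))

# Naive: form the n^2 x n^2 Kronecker matrix, O(n^4) storage and work.
y_full = np.kron(A, B) @ X.ravel()

# Structured: (A kron B) vec(X) = vec(A X B^T) in row-major convention;
# O(n^2) storage and O(n^3) work, the matrix A kron B is never formed.
y_kron = (A @ X @ B.T).ravel()

print(np.max(np.abs(y_full - y_kron)))   # agreement up to round-off
```

The same reshaping trick, applied dimension by dimension, is what makes the $d > 2$ Kronecker formats computationally effective.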

Alternative directions: different compression strategies

- High-order methods: hp-FEM/BEM, spectral methods, bcFEM (Khoromskij, Melenk), Richardson extrapolation.
- Adaptive mesh refinement: a priori/a posteriori strategies.
- Dimension reduction: boundary/interface equations, Schur complement/domain decomposition methods.
- Combination of tensor-product bases with anisotropic adaptivity: hyperbolic cross approximation by FEM/wavelets (sparse grids).
- Model reduction: multi-scale methods, homogenisation, neural networks.
- Monte-Carlo methods (e.g., random walk dynamics, stochastic PDEs).

(B) Separable approximation of functions

Rank-1 approximation of a multi-variate function $f = f(x_1, \ldots, x_d)$ in the set of separable functions
$$\mathcal{M}_1 = \{u : u(x) = \varphi^{(1)}(x_1) \cdots \varphi^{(d)}(x_d),\ \varphi^{(l)} \in H\}. \quad (1)$$
Here $f$ is from a certain class $\mathcal{H}$ (say, $\mathcal{H} = L^2(\mathbb{R}^d)$), and $H$ is a real, separable Hilbert space of functions defined on $\mathbb{R}$ (say, $H = L^2(\mathbb{R})$).

Advantages: a tremendous reduction of computational cost, removing $d$ from the exponent: $n^d \to dn$.

$d$th-order tensors can be interpreted as functions of $d$ discrete arguments (multi-dimensional arrays), $f : \mathbb{R}^{n_1 \times \cdots \times n_d} \to \mathbb{R}$.
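The discrete counterpart of the reduction $n^d \to dn$: a separable function sampled on a tensor grid is determined by its $d$ one-dimensional factors. A minimal numpy sketch; the factor functions are illustrative choices:

```python
import numpy as np

# A rank-1 (separable) function u(x1,x2,x3) = phi1(x1)*phi2(x2)*phi3(x3)
# sampled on an n x n x n grid is stored via d = 3 vectors of length n:
# d*n numbers instead of n^d grid values.
n = 50
x = np.linspace(0.0, 1.0, n)
phi = [np.exp(-x), np.cos(x), 1.0 + x ** 2]   # illustrative factors

# Full sample tensor, materialised here only to verify the representation.
U = np.einsum('i,j,k->ijk', *phi)
print(sum(p.size for p in phi), n ** 3)       # 150 vs 125000 stored numbers
```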

Tucker model

Def. 1 (Tucker model). Rank-$r$ Tucker approximation via a linear combination of separable products,
$$\mathcal{M}_r = \Big\{u : u(x) = \sum_{k} b_k\, \varphi^{(1)}_{k_1}(x_1) \cdots \varphi^{(d)}_{k_d}(x_d),\ b_k \in \mathbb{R},\ \varphi^{(l)}_{k_l} \in H\Big\},$$
with $k = (k_1, \ldots, k_d)$, $1 \le k_l \le r_l$, and $r = (r_1, \ldots, r_d)$, $r_l \in \mathbb{N}$.

The set of coefficients $B = \{b_k\} \in \mathbb{R}^{r_1 \times \cdots \times r_d}$ is called the core tensor. Storage cost: $r^d + rdn$. Maximal canonical rank: $r^{d-1}$.

Assume the $\varphi^{(l)}_{k_l}$ to be orthonormal, i.e., $(\varphi^{(l)}_{k_l}, \varphi^{(l)}_{m_l}) = \delta_{k_l, m_l}$ for $k_l, m_l = 1, \ldots, r_l$, $l = 1, \ldots, d$. Let $\mathcal{V}_l \subset H^{r_l}$ ($l = 1, \ldots, d$) be the set of $r_l$-tuples $\Phi^{(l)} = (\varphi^{(l)}_1, \ldots, \varphi^{(l)}_{r_l})$ with orthonormal components.
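The storage count $r^d + rdn$ versus $n^d$ can be illustrated directly; a numpy sketch with illustrative ranks and grid size (for $d = 3$ with unequal ranks the core costs $r_1 r_2 r_3$ and the factors $n(r_1 + r_2 + r_3)$):

```python
import numpy as np

# Tucker format for d = 3: a core tensor B of size r1 x r2 x r3 plus one
# n x r_l factor matrix per dimension. Ranks and sizes are illustrative.
n, r = 40, (3, 4, 5)
rng = np.random.default_rng(1)
B = rng.standard_normal(r)                        # core tensor {b_k}
U = [rng.standard_normal((n, rl)) for rl in r]    # factor tuples Phi^(l)

# u = sum_k b_k phi^(1)_{k1} phi^(2)_{k2} phi^(3)_{k3}, assembled for checking.
T = np.einsum('abc,ia,jb,kc->ijk', B, *U)

stored = B.size + sum(Ul.size for Ul in U)
print(stored, n ** 3)                             # 540 vs 64000 entries
```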

Remarks on the Schrödinger equation

In the context of the Schrödinger equation, a separable function $u(x) = \varphi^{(1)}(x_1) \cdots \varphi^{(d)}(x_d)$ is called a Hartree product, while the $\varphi^{(l)}_{k_l}(x_l)$ are known as single-particle functions. The time-dependent solution of the Schrödinger equation in molecular dynamics is approximated (for fixed time) by a linear combination of Hartree products from the set $\mathcal{M}_r$.

Due to the Pauli principle, approximations of $M$-electron systems are built from anti-symmetrised products of single-particle functions (Slater determinants),
$$\mathcal{M}_S := \{u : u(x) = \det(\varphi^{(i)}(x_j))_{i,j=1}^{M},\ \varphi^{(i)} \in L^2\}.$$
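The anti-symmetry delivered by the determinant construction is easy to verify numerically: exchanging two particle coordinates flips the sign of $u$. A small sketch with illustrative single-particle functions:

```python
import numpy as np

# Slater determinant u(x1,...,xM) = det(phi_i(x_j)) for M = 3 particles;
# the single-particle functions phi_i below are illustrative choices.
phis = [np.sin, np.cos, np.exp]

def slater(xs):
    return np.linalg.det(np.array([[phi(xj) for xj in xs] for phi in phis]))

xs = [0.3, 1.1, 2.5]
xs_swapped = [1.1, 0.3, 2.5]                 # exchange particles 1 and 2
print(slater(xs), slater(xs_swapped))        # equal magnitude, opposite sign
```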

Canonical decomposition

Def. 2 (Canonical model). Approximation in the set
$$\mathcal{C}_r = \Big\{u : u(x) = \sum_{k=1}^{r} b_k\, \varphi^{(1)}_{k}(x_1) \cdots \varphi^{(d)}_{k}(x_d),\ \varphi^{(l)}_k \in H\Big\} \subset \mathcal{M}_r,$$
with $b_k \in \mathbb{R}$ and with normalised components $\|\varphi^{(l)}_k\| = 1$, is the special case of the Tucker approximation in $\mathcal{M}_r$, $r = (r, \ldots, r)$, under the constraint that all off-diagonal elements of $B = \{b_k\}$ are zero.

Since $\mathcal{M}_r$ is not a linear space, we obtain a difficult nonlinear approximation problem: estimate, for $f \in \mathcal{H}$,
$$\sigma(f, S) := \inf_{s \in S} \|f - s\|, \quad (2)$$
where either $S = \mathcal{M}_r$ or $S = \mathcal{C}_r$, or some subspace $S \subset \mathcal{M}_r$.
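The "diagonal core" characterisation of the canonical model can be checked directly: a CP tensor coincides with the Tucker tensor whose core has the weights $b_k$ on its diagonal and zeros elsewhere. A numpy sketch with illustrative sizes:

```python
import numpy as np

# Canonical (CP) format: u = sum_{k=1}^r b_k phi^(1)_k ... phi^(d)_k.
# It equals a Tucker representation with r = (r,...,r) whose core is diagonal.
n, r = 40, 6
rng = np.random.default_rng(2)
b = rng.standard_normal(r)
U = [rng.standard_normal((n, r)) for _ in range(3)]

T = np.einsum('r,ir,jr,kr->ijk', b, *U)           # CP assembly

# The same tensor through an explicitly diagonal r x r x r Tucker core.
B = np.zeros((r, r, r))
B[np.arange(r), np.arange(r), np.arange(r)] = b
T2 = np.einsum('abc,ia,jb,kc->ijk', B, *U)
print(np.max(np.abs(T - T2)))                     # zero up to round-off
```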

Computing the Tucker decomposition

Physical interpretation of the Tucker model is not easy, since $b_k$ and $\varphi^{(l)}_{k_l}$ are not unique: the rotation transform
$$\varphi^{(l)}_{k_l} \mapsto \tilde\varphi^{(l)}_{k_l} := \sum_{m_l=1}^{r_l} S^{(l)}_{k_l, m_l}\, \varphi^{(l)}_{m_l}, \qquad b_k \mapsto \tilde b_k := \sum_{m_1=1}^{r_1} \cdots \sum_{m_d=1}^{r_d} (S^{(1)})^T_{k_1, m_1} \cdots (S^{(d)})^T_{k_d, m_d}\, b_m$$
defines the same $u$ for any choice of orthogonal $r_l \times r_l$ matrices $S^{(l)} = \{S^{(l)}_{k_l, m_l}\}$, $l = 1, \ldots, d$.

This is not a problem from the computational point of view: the minimisation problem (2) is equivalent to a dual maximisation problem over the $\mathcal{V}_l$ ($l = 1, \ldots, d$), not involving $b_k$.

Computing the Tucker decomposition

Lem. 1. Assume that a minimiser of problem (2) exists. Then, for given $\Phi^{(l)} = (\varphi^{(l)}_1, \ldots, \varphi^{(l)}_{r_l}) \in \mathcal{V}_l$ ($l = 1, \ldots, d$), the core tensor $b_k$ minimising (2) is represented by
$$b_k = \big(f,\ \varphi^{(1)}_{k_1}(\cdot) \cdots \varphi^{(d)}_{k_d}(\cdot)\big), \qquad k = (k_1, \ldots, k_d). \quad (3)$$
For given $f \in \mathcal{H}$, the minimisation problem (2) with $S = \mathcal{M}_r$ is equivalent to the maximisation problem
$$\sigma(f; \{\mathcal{V}_l\}_{l=1}^{d}) := \sup_{\Phi^{(l)} \in \mathcal{V}_l,\ l = 1, \ldots, d}\ \sum_{k} \big(f,\ \varphi^{(1)}_{k_1} \cdots \varphi^{(d)}_{k_d}\big)^2.$$

Proof. Let $f^{(r)} = \sum_k b_k\, \varphi^{(1)}_{k_1}(x_1) \cdots \varphi^{(d)}_{k_d}(x_d)$ be the solution of problem (2); then
$$\|f^{(r)}\| = \|B\|_F := \Big(\sum_k b_k^2\Big)^{1/2},$$
since orthonormal components do not affect the $L^2$-norm.

Computing the Tucker decomposition

With fixed components $\Phi^{(l)}$ ($l = 1, \ldots, d$), relation (2) is actually a linear least-squares problem w.r.t. $b_k$:
$$(f, f) - 2\Big(f,\ \sum_k b_k\, \varphi^{(1)}_{k_1}(x_1) \cdots \varphi^{(d)}_{k_d}(x_d)\Big) + (B, B) \to \min.$$
Solving the corresponding Lagrange equation
$$-\Big(f,\ \sum_k \delta b_k\, \varphi^{(1)}_{k_1}(x_1) \cdots \varphi^{(d)}_{k_d}(x_d)\Big) + (B, \delta B) = 0 \quad \text{for all } \delta B \in \mathbb{R}^{r_1 \times \cdots \times r_d}$$
implies (3), and then
$$\|f - f^{(r)}\|^2 = \|f\|^2 - \|B\|_F^2.$$
Substitution of (3) then proves the assertion.
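Formula (3) and the error identity above hold for any orthonormal factor tuples, since the Tucker approximation with core (3) is an orthogonal projection; this is easy to check numerically. A numpy sketch in which the orthonormal tuples come from thin QR factorisations (an arbitrary choice made for illustration):

```python
import numpy as np

# Numerical check of (3) and of ||f - f^(r)||^2 = ||f||^2 - ||B||_F^2
# on a random third-order tensor with random orthonormal factors.
n, r = 20, (4, 5, 6)
rng = np.random.default_rng(3)
F = rng.standard_normal((n, n, n))
U = [np.linalg.qr(rng.standard_normal((n, rl)))[0] for rl in r]

# Core by (3): b_k = (f, phi^(1)_{k1} phi^(2)_{k2} phi^(3)_{k3}).
B = np.einsum('ijk,ia,jb,kc->abc', F, *U)
Fr = np.einsum('abc,ia,jb,kc->ijk', B, *U)        # Tucker approximation f^(r)

lhs = np.sum((F - Fr) ** 2)                       # ||f - f^(r)||^2
rhs = np.sum(F ** 2) - np.sum(B ** 2)             # ||f||^2 - ||B||_F^2
print(lhs - rhs)                                  # zero up to round-off
```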

Computing the canonical decomposition

For $S = \mathcal{C}_r$, the canonical decomposition can be considered in the framework of best $r$-term approximation with regard to a redundant dictionary.

Def. 3. A system $\mathcal{D}$ of functions from $H$ is called a dictionary if each $g \in \mathcal{D}$ has norm one and the linear span of $\mathcal{D}$ is dense in $H$.

Denote by $\Sigma_r(\mathcal{D})$ the collection of $s \in H$ which can be written in the form
$$s = \sum_{g \in \Lambda} c_g\, g, \qquad \Lambda \subset \mathcal{D},\ \#\Lambda \le r,$$
with $c_g \in \mathbb{R}$. For $f \in H$, the best $r$-term approximation error is defined by
$$\sigma_r(f, \mathcal{D}) := \inf_{s \in \Sigma_r(\mathcal{D})} \|f - s\|.$$

Pure Greedy Algorithm

The Pure Greedy Algorithm (PGA) inductively computes an estimate of the best $r$-term approximation. Let $g = g(f) \in \mathcal{D}$ be an element maximising $|(f, g)|$. Define
$$G(f) \equiv G(f, \mathcal{D}) := (f, g)\, g, \qquad R(f) \equiv R(f, \mathcal{D}) := f - G(f).$$
The PGA reads: given $f \in H$, set $R_0(f) := f$ and $G_0(f) := 0$. Then, for $1 \le m \le r$, inductively define
$$G_m(f) := G_{m-1}(f) + G(R_{m-1}(f)), \qquad R_m(f) := f - G_m(f) = R(R_{m-1}(f)).$$
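The recursion above can be sketched in a few lines of numpy. For illustration the dictionary is taken to be the canonical basis of $\mathbb{R}^n$ (an orthonormal case, where, as discussed below for special cases of the PGA, the greedy output is provably the best $r$-term approximation):

```python
import numpy as np

# PGA over an orthonormal dictionary: here D is the canonical basis of R^n,
# chosen purely for illustration; any orthonormal basis behaves the same way.
rng = np.random.default_rng(4)
n, r = 100, 10
f = rng.standard_normal(n)
D = np.eye(n)                                # dictionary elements as columns

R, G = f.copy(), np.zeros(n)
for m in range(r):
    g = D[:, np.argmax(np.abs(D.T @ R))]     # element maximising |(R_{m-1}, g)|
    G = G + (R @ g) * g                      # G_m = G_{m-1} + (R_{m-1}, g) g
    R = f - G                                # R_m = f - G_m

# With an orthonormal basis the residual keeps exactly the n - r smallest
# coefficients, i.e. G_r is the best r-term approximation.
best = np.sqrt(np.sum(np.sort(f ** 2)[:-r]))
print(np.linalg.norm(R) - best)              # zero up to round-off
```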

Pure Greedy Algorithm

The PGA applied to functions characterised by the approximation property (low-order approximation)
$$\sigma_r(f, \mathcal{D}) \le r^{-q}, \qquad r = 1, 2, \ldots,$$
with some $q \in (0, 1/2]$, leads to the error bound (Temlyakov)
$$\|f - G_r(f, \mathcal{D})\| \le C(q, \mathcal{D})\, r^{-q}, \qquad r = 1, 2, \ldots,$$
which is too pessimistic in our applications.

Our goal: an efficient $r$-term approximation to analytic functions with point singularities, allowing exponential convergence
$$\sigma_r(f, \mathcal{D}) \le C \exp(-r^q), \qquad r = 1, 2, \ldots,$$
with $q = 1$ or $q = 1/2$. We will discuss quadrature- and interpolation-based methods as well as direct approximation by exponential sums.

Special cases of PGA

The output of the PGA, $G_r(f, \mathcal{D})$, is proven to realise the best $r$-term approximation in the particular case when $\mathcal{D}$ is an orthogonal basis of $H$. Results of this kind can be generalised to the case of $\lambda$-quasiorthogonal dictionaries (Temlyakov).

For the approximation problem on $\mathcal{C}_r$ we set
$$\mathcal{D} := \{g \in \mathcal{H} \cap \mathcal{M}_1 : \|g\| = 1\},$$
and hence $\Sigma_r(\mathcal{D}) = \mathcal{C}_r$. The assumption that the components $\{\varphi^{(l)}_k\}$ ($l = 1, \ldots, d$) belong to an orthogonal basis of $H$ implies the orthogonality requirement for $\mathcal{D}$.

The case d = 2

The approximation of functions $f(x, y)$ by bilinear forms $\sum_{k=1}^{r} u_k(x)\, v_k(y)$ in $L^2([0,1])$ was considered by E. Schmidt. The result is an analogue of the SVD for rectangular matrices.

Let $\{s_k(J_f)\}$ be the nonincreasing sequence of singular values, $s_1 \ge s_2 \ge \ldots \ge 0$, of the integral operator
$$J_f(g) := \int_0^1 f(x, y)\, g(y)\, dy, \qquad s_k(J_f) := \lambda_k(A)^{1/2}, \quad A = J_f^* J_f,$$
with $J_f^*$ adjoint to $J_f$, and with orthonormal sequences $\{\varphi_k(x)\}$, $\{\psi_k(y)\}$ such that
$$A \psi_k(y) = \lambda_k \psi_k(y), \qquad J_f J_f^*\, \varphi_k(x) = \lambda_k \varphi_k(x).$$

The case d = 2

The Schmidt expansion is given by
$$f(x, y) = \sum_{k=1}^{\infty} s_k(J_f)\, \varphi_k(x)\, \psi_k(y).$$
The best bilinear approximation property was proven:
$$\Big\| f(x, y) - \sum_{k=1}^{r} s_k\, \varphi_k(x)\, \psi_k(y) \Big\|_{L^2} = \inf_{u_k, v_k \in L^2,\ k = 1, \ldots, r} \Big\| f(x, y) - \sum_{k=1}^{r} u_k(x)\, v_k(y) \Big\|_{L^2}.$$
Schmidt's expansion ensures that the best bilinear approximation can be realised by the PGA.

The kernel function of $A$ is given by
$$f_A(x, y) := \int_0^1 f(x, z)\, f(z, y)\, dz;$$
hence, for Nyström's approximation, the problem is reduced to an SVD.
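In the discrete setting Schmidt's theorem is exactly the Eckart-Young property of the truncated SVD: the best rank-$r$ bilinear approximation of a sampled kernel is given by its $r$ leading singular triples, and the error equals the tail of the singular values. A numpy sketch on an illustrative smooth kernel:

```python
import numpy as np

# Discrete analogue of Schmidt's theorem: best rank-r bilinear approximation
# of a sampled kernel f(x,y) = truncated SVD of its sample matrix.
rng = np.random.default_rng(5)
x = np.linspace(0.0, 1.0, 200)
F = 1.0 / (1.0 + np.add.outer(x, x))        # samples of f(x,y) = 1/(1+x+y)

U, s, Vt = np.linalg.svd(F)
r = 5
Fr = U[:, :r] * s[:r] @ Vt[:r]              # sum_{k<=r} s_k u_k v_k^T

# Frobenius error equals the tail of the singular values.
err = np.linalg.norm(F - Fr)
tail = np.sqrt(np.sum(s[r:] ** 2))
print(err, tail)
```

For smooth kernels such as this one the singular values decay very fast, which is the $d = 2$ prototype of the exponential convergence rates targeted in this course.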

Orthogonal decomposition with d >= 3

Let $d \ge 3$. The PGA produces the best nonlinear approximation in a situation with orthogonal components. We call the decomposition in $\mathcal{C}_r$,
$$f = \sum_{k=1}^{r} a_k v_k, \qquad v_k \in \mathcal{M}_1,\ \|v_k\| = 1,$$
orthogonal if $(v_m, v_k) = 0$ for all $m \ne k$.

Greedy orthogonal decomposition (GOD): set $R_0(f) = f$, $G_0(f) = 0$ and define the $p$th residual tensor as
$$R_p(f) := f - \sum_{k=1}^{p} b_k u_k, \qquad u_k \in \mathcal{M}_1, \quad p = 1, 2, \ldots$$
On each recurrent step, we find the best 1-term approximation to $R_p(f)$ under orthogonality constraints:

Orthogonal decomposition with d >= 3

$$\min_{u \perp U_{p-1},\ u \in \mathcal{M}_1,\ \|u\| = 1} G_p(b, u) := \|R_{p-1}(f) - b u\|^2,$$
where $U_p = \{u_1, \ldots, u_p\}$ with $U_0 = \emptyset$. Since
$$G_p(b, u) = \|R_{p-1}\|^2 - 2 b\, (R_{p-1}, u) + b^2 \|u\|^2,$$
we solve the Lagrange equation for $b$,
$$\frac{\partial G_p}{\partial b} = -2 (R_{p-1}, u) + 2 b \|u\|^2 = 0 \ \Rightarrow\ b = (R_{p-1}, u) = (f, u),$$
to obtain $u_p$ as a solution of
$$\max_{u \in \mathcal{M}_1,\ \|u\| = 1,\ u \perp U_{p-1}} (f, u)$$
(a challenging problem!). Finally, let $b_p = (f, u_p)$.
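The "challenging problem" of maximising $(f, u)$ over normalised $u \in \mathcal{M}_1$ is, in discrete form, the best rank-1 approximation of a tensor. A standard heuristic for it, not part of the lecture's algorithms but useful to see concretely, is alternating least squares (higher-order power iteration), sketched here for $d = 3$ on a tensor that is exactly rank 1, where the iteration recovers the tensor:

```python
import numpy as np

# Best rank-1 subproblem via alternating least squares (ALS): fix all but one
# direction, solve the linear problem for the free factor, normalise, rotate.
rng = np.random.default_rng(6)
n = 30
a1, a2, a3 = (rng.standard_normal(n) for _ in range(3))
T = np.einsum('i,j,k->ijk', a1, a2, a3)      # an exactly rank-1 tensor

u, v, w = (np.ones(n) / np.sqrt(n) for _ in range(3))
for _ in range(20):                          # alternate over the 3 directions
    u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
    v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
    w = np.einsum('ijk,i,j->k', T, u, v); w /= np.linalg.norm(w)

b = np.einsum('ijk,i,j,k->', T, u, v, w)     # b = (f, u) at the maximiser
err = np.max(np.abs(T - b * np.einsum('i,j,k->ijk', u, v, w)))
print(err)                                   # ~ machine precision
```

For general tensors ALS only guarantees convergence to a local maximiser, which is precisely why the subproblem is flagged as challenging above.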

Greedy completely orthogonal decomposition

The decomposition in $\mathcal{C}_r$,
$$f = \sum_{k=1}^{r} a_k v_k, \qquad v_k = \varphi^{(1)}_k(x_1) \cdots \varphi^{(d)}_k(x_d) \in \mathcal{M}_1,$$
is called completely orthogonal if
$$(\varphi^{(l)}_k, \varphi^{(l)}_m) = \delta_{k,m}, \qquad l = 1, \ldots, d, \quad \Phi^{(l)} \in \mathcal{V}_l.$$
The greedy completely orthogonal decomposition (GCOD) is defined as the GOD with the additional orthogonality constraint $\Phi^{(l)} \in \mathcal{V}_l$.

Lem. 2. Let $f \in \mathcal{H}$ allow a rank-$r$ completely orthogonal decomposition. Then this decomposition is unique, and the GCOD algorithm correctly computes it.

Proof. Uniqueness is due to the orthogonality assumption.

Greedy completely orthogonal decomposition

The GCOD reduces to solving (for $u_p$)
$$\max_{u \in \mathcal{M}_1,\ \|u\| = 1} (f, u) \quad \text{with } \Phi^{(l)}_p \in \mathcal{V}_l$$
(a simple problem), and letting $b_p = (f, u_p)$.

For example, with $p = 1$ and for $f = \sum_{k=1}^{r} a_k v_k$, we have
$$u_1 = \prod_{l=1}^{d} \Big( \sum_{k=1}^{r} c_{k,l}\, \varphi^{(l)}_k(x_l) \Big), \qquad \sum_{k=1}^{r} c_{k,l}^2 = 1, \quad l = 1, \ldots, d.$$
Assuming $a_1 \ge a_2 \ge \ldots \ge a_r > 0$, we obtain $c_{1,l} = 1$, $c_{2,l} = \ldots = c_{r,l} = 0$, i.e., $u_1 = v_1$ (hint: use symmetry in $l$). This ensures $b_1 = (f, u_1) = a_1$. Hence we prove inductively that
$$G_p(f) = \sum_{k=1}^{p} a_k v_k.$$

Literature to Lecture 1

1. B.N. Khoromskij: An Introduction to Structured Tensor-Product Representation of Discrete Nonlocal Operators. Lecture Notes 27, MPI MIS, Leipzig.
2. W. Hackbusch and B.N. Khoromskij: Tensor-Product Approximation to Operators and Functions in High Dimensions. Preprint 139, MPI MIS, Leipzig.
3. V.N. Temlyakov: Greedy Algorithms and M-Term Approximation with Regard to Redundant Dictionaries. J. of Approx. Theory 98 (1999).


More information

CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 0

CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 0 CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 0 GENE H GOLUB 1 What is Numerical Analysis? In the 1973 edition of the Webster s New Collegiate Dictionary, numerical analysis is defined to be the

More information

arxiv: v2 [math.na] 8 Apr 2017

arxiv: v2 [math.na] 8 Apr 2017 A LOW-RANK MULTIGRID METHOD FOR THE STOCHASTIC STEADY-STATE DIFFUSION PROBLEM HOWARD C. ELMAN AND TENGFEI SU arxiv:1612.05496v2 [math.na] 8 Apr 2017 Abstract. We study a multigrid method for solving large

More information

Applied Linear Algebra in Geoscience Using MATLAB

Applied Linear Algebra in Geoscience Using MATLAB Applied Linear Algebra in Geoscience Using MATLAB Contents Getting Started Creating Arrays Mathematical Operations with Arrays Using Script Files and Managing Data Two-Dimensional Plots Programming in

More information

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product Chapter 4 Hilbert Spaces 4.1 Inner Product Spaces Inner Product Space. A complex vector space E is called an inner product space (or a pre-hilbert space, or a unitary space) if there is a mapping (, )

More information

Risi Kondor, The University of Chicago. Nedelina Teneva UChicago. Vikas K Garg TTI-C, MIT

Risi Kondor, The University of Chicago. Nedelina Teneva UChicago. Vikas K Garg TTI-C, MIT Risi Kondor, The University of Chicago Nedelina Teneva UChicago Vikas K Garg TTI-C, MIT Risi Kondor, The University of Chicago Nedelina Teneva UChicago Vikas K Garg TTI-C, MIT {(x 1, y 1 ), (x 2, y 2 ),,

More information

Polynomial Approximation: The Fourier System

Polynomial Approximation: The Fourier System Polynomial Approximation: The Fourier System Charles B. I. Chilaka CASA Seminar 17th October, 2007 Outline 1 Introduction and problem formulation 2 The continuous Fourier expansion 3 The discrete Fourier

More information

Adaptive Application of Green s Functions:

Adaptive Application of Green s Functions: Adaptive Application of Green s Functions: Fast algorithms for integral transforms, with a bit of QM Fernando Pérez With: Gregory Beylkin (Colorado) and Martin Mohlenkamp (Ohio University) Department of

More information

Schwarz Preconditioner for the Stochastic Finite Element Method

Schwarz Preconditioner for the Stochastic Finite Element Method Schwarz Preconditioner for the Stochastic Finite Element Method Waad Subber 1 and Sébastien Loisel 2 Preprint submitted to DD22 conference 1 Introduction The intrusive polynomial chaos approach for uncertainty

More information

Lecture notes on Quantum Computing. Chapter 1 Mathematical Background

Lecture notes on Quantum Computing. Chapter 1 Mathematical Background Lecture notes on Quantum Computing Chapter 1 Mathematical Background Vector states of a quantum system with n physical states are represented by unique vectors in C n, the set of n 1 column vectors 1 For

More information

Lecture 1: Review of linear algebra

Lecture 1: Review of linear algebra Lecture 1: Review of linear algebra Linear functions and linearization Inverse matrix, least-squares and least-norm solutions Subspaces, basis, and dimension Change of basis and similarity transformations

More information

Numerical Analysis Comprehensive Exam Questions

Numerical Analysis Comprehensive Exam Questions Numerical Analysis Comprehensive Exam Questions 1. Let f(x) = (x α) m g(x) where m is an integer and g(x) C (R), g(α). Write down the Newton s method for finding the root α of f(x), and study the order

More information

BALANCING-RELATED MODEL REDUCTION FOR DATA-SPARSE SYSTEMS

BALANCING-RELATED MODEL REDUCTION FOR DATA-SPARSE SYSTEMS BALANCING-RELATED Peter Benner Professur Mathematik in Industrie und Technik Fakultät für Mathematik Technische Universität Chemnitz Computational Methods with Applications Harrachov, 19 25 August 2007

More information

Solving Symmetric Indefinite Systems with Symmetric Positive Definite Preconditioners

Solving Symmetric Indefinite Systems with Symmetric Positive Definite Preconditioners Solving Symmetric Indefinite Systems with Symmetric Positive Definite Preconditioners Eugene Vecharynski 1 Andrew Knyazev 2 1 Department of Computer Science and Engineering University of Minnesota 2 Department

More information

Approximate Dynamic Programming

Approximate Dynamic Programming Approximate Dynamic Programming A. LAZARIC (SequeL Team @INRIA-Lille) Ecole Centrale - Option DAD SequeL INRIA Lille EC-RL Course Value Iteration: the Idea 1. Let V 0 be any vector in R N A. LAZARIC Reinforcement

More information

MODEL REDUCTION BY A CROSS-GRAMIAN APPROACH FOR DATA-SPARSE SYSTEMS

MODEL REDUCTION BY A CROSS-GRAMIAN APPROACH FOR DATA-SPARSE SYSTEMS MODEL REDUCTION BY A CROSS-GRAMIAN APPROACH FOR DATA-SPARSE SYSTEMS Ulrike Baur joint work with Peter Benner Mathematics in Industry and Technology Faculty of Mathematics Chemnitz University of Technology

More information

Contents. Acknowledgments

Contents. Acknowledgments Table of Preface Acknowledgments Notation page xii xx xxi 1 Signals and systems 1 1.1 Continuous and discrete signals 1 1.2 Unit step and nascent delta functions 4 1.3 Relationship between complex exponentials

More information

Compact symetric bilinear forms

Compact symetric bilinear forms Compact symetric bilinear forms Mihai Mathematics Department UC Santa Barbara IWOTA 2006 IWOTA 2006 Compact forms [1] joint work with: J. Danciger (Stanford) S. Garcia (Pomona

More information

Chapter 6. Finite Element Method. Literature: (tiny selection from an enormous number of publications)

Chapter 6. Finite Element Method. Literature: (tiny selection from an enormous number of publications) Chapter 6 Finite Element Method Literature: (tiny selection from an enormous number of publications) K.J. Bathe, Finite Element procedures, 2nd edition, Pearson 2014 (1043 pages, comprehensive). Available

More information

Multigrid absolute value preconditioning

Multigrid absolute value preconditioning Multigrid absolute value preconditioning Eugene Vecharynski 1 Andrew Knyazev 2 (speaker) 1 Department of Computer Science and Engineering University of Minnesota 2 Department of Mathematical and Statistical

More information

Linear Algebra and Dirac Notation, Pt. 3

Linear Algebra and Dirac Notation, Pt. 3 Linear Algebra and Dirac Notation, Pt. 3 PHYS 500 - Southern Illinois University February 1, 2017 PHYS 500 - Southern Illinois University Linear Algebra and Dirac Notation, Pt. 3 February 1, 2017 1 / 16

More information

Krylov Subspaces. Lab 1. The Arnoldi Iteration

Krylov Subspaces. Lab 1. The Arnoldi Iteration Lab 1 Krylov Subspaces Lab Objective: Discuss simple Krylov Subspace Methods for finding eigenvalues and show some interesting applications. One of the biggest difficulties in computational linear algebra

More information

Index. Copyright (c)2007 The Society for Industrial and Applied Mathematics From: Matrix Methods in Data Mining and Pattern Recgonition By: Lars Elden

Index. Copyright (c)2007 The Society for Industrial and Applied Mathematics From: Matrix Methods in Data Mining and Pattern Recgonition By: Lars Elden Index 1-norm, 15 matrix, 17 vector, 15 2-norm, 15, 59 matrix, 17 vector, 15 3-mode array, 91 absolute error, 15 adjacency matrix, 158 Aitken extrapolation, 157 algebra, multi-linear, 91 all-orthogonality,

More information

A Robust Preconditioner for the Hessian System in Elliptic Optimal Control Problems

A Robust Preconditioner for the Hessian System in Elliptic Optimal Control Problems A Robust Preconditioner for the Hessian System in Elliptic Optimal Control Problems Etereldes Gonçalves 1, Tarek P. Mathew 1, Markus Sarkis 1,2, and Christian E. Schaerer 1 1 Instituto de Matemática Pura

More information

An Adaptive Hierarchical Matrix on Point Iterative Poisson Solver

An Adaptive Hierarchical Matrix on Point Iterative Poisson Solver Malaysian Journal of Mathematical Sciences 10(3): 369 382 (2016) MALAYSIAN JOURNAL OF MATHEMATICAL SCIENCES Journal homepage: http://einspem.upm.edu.my/journal An Adaptive Hierarchical Matrix on Point

More information

Tensor Product Approximation

Tensor Product Approximation Tensor Product Approximation R. Schneider (TUB Matheon) Mariapfarr, 2014 Acknowledgment DFG Priority program SPP 1324 Extraction of essential information from complex data Co-workers: T. Rohwedder (HUB),

More information

6. Iterative Methods for Linear Systems. The stepwise approach to the solution...

6. Iterative Methods for Linear Systems. The stepwise approach to the solution... 6 Iterative Methods for Linear Systems The stepwise approach to the solution Miriam Mehl: 6 Iterative Methods for Linear Systems The stepwise approach to the solution, January 18, 2013 1 61 Large Sparse

More information

PDEs in Image Processing, Tutorials

PDEs in Image Processing, Tutorials PDEs in Image Processing, Tutorials Markus Grasmair Vienna, Winter Term 2010 2011 Direct Methods Let X be a topological space and R: X R {+ } some functional. following definitions: The mapping R is lower

More information

Chapter 8 Integral Operators

Chapter 8 Integral Operators Chapter 8 Integral Operators In our development of metrics, norms, inner products, and operator theory in Chapters 1 7 we only tangentially considered topics that involved the use of Lebesgue measure,

More information

Low-rank Promoting Transformations and Tensor Interpolation - Applications to Seismic Data Denoising

Low-rank Promoting Transformations and Tensor Interpolation - Applications to Seismic Data Denoising Low-rank Promoting Transformations and Tensor Interpolation - Applications to Seismic Data Denoising Curt Da Silva and Felix J. Herrmann 2 Dept. of Mathematics 2 Dept. of Earth and Ocean Sciences, University

More information

Solving the steady state diffusion equation with uncertainty Final Presentation

Solving the steady state diffusion equation with uncertainty Final Presentation Solving the steady state diffusion equation with uncertainty Final Presentation Virginia Forstall vhfors@gmail.com Advisor: Howard Elman elman@cs.umd.edu Department of Computer Science May 6, 2012 Problem

More information

Course Notes: Week 1

Course Notes: Week 1 Course Notes: Week 1 Math 270C: Applied Numerical Linear Algebra 1 Lecture 1: Introduction (3/28/11) We will focus on iterative methods for solving linear systems of equations (and some discussion of eigenvalues

More information

EECS 275 Matrix Computation

EECS 275 Matrix Computation EECS 275 Matrix Computation Ming-Hsuan Yang Electrical Engineering and Computer Science University of California at Merced Merced, CA 95344 http://faculty.ucmerced.edu/mhyang Lecture 22 1 / 21 Overview

More information

Lecture Notes for Inf-Mat 3350/4350, Tom Lyche

Lecture Notes for Inf-Mat 3350/4350, Tom Lyche Lecture Notes for Inf-Mat 3350/4350, 2007 Tom Lyche August 5, 2007 2 Contents Preface vii I A Review of Linear Algebra 1 1 Introduction 3 1.1 Notation............................... 3 2 Vectors 5 2.1 Vector

More information

A note on accurate and efficient higher order Galerkin time stepping schemes for the nonstationary Stokes equations

A note on accurate and efficient higher order Galerkin time stepping schemes for the nonstationary Stokes equations A note on accurate and efficient higher order Galerkin time stepping schemes for the nonstationary Stokes equations S. Hussain, F. Schieweck, S. Turek Abstract In this note, we extend our recent work for

More information

Optimal Prediction for Radiative Transfer: A New Perspective on Moment Closure

Optimal Prediction for Radiative Transfer: A New Perspective on Moment Closure Optimal Prediction for Radiative Transfer: A New Perspective on Moment Closure Benjamin Seibold MIT Applied Mathematics Mar 02 nd, 2009 Collaborator Martin Frank (TU Kaiserslautern) Partial Support NSF

More information

5 Compact linear operators

5 Compact linear operators 5 Compact linear operators One of the most important results of Linear Algebra is that for every selfadjoint linear map A on a finite-dimensional space, there exists a basis consisting of eigenvectors.

More information

Aspects of Multigrid

Aspects of Multigrid Aspects of Multigrid Kees Oosterlee 1,2 1 Delft University of Technology, Delft. 2 CWI, Center for Mathematics and Computer Science, Amsterdam, SIAM Chapter Workshop Day, May 30th 2018 C.W.Oosterlee (CWI)

More information

APPLIED NUMERICAL LINEAR ALGEBRA

APPLIED NUMERICAL LINEAR ALGEBRA APPLIED NUMERICAL LINEAR ALGEBRA James W. Demmel University of California Berkeley, California Society for Industrial and Applied Mathematics Philadelphia Contents Preface 1 Introduction 1 1.1 Basic Notation

More information

256 Summary. D n f(x j ) = f j+n f j n 2n x. j n=1. α m n = 2( 1) n (m!) 2 (m n)!(m + n)!. PPW = 2π k x 2 N + 1. i=0?d i,j. N/2} N + 1-dim.

256 Summary. D n f(x j ) = f j+n f j n 2n x. j n=1. α m n = 2( 1) n (m!) 2 (m n)!(m + n)!. PPW = 2π k x 2 N + 1. i=0?d i,j. N/2} N + 1-dim. 56 Summary High order FD Finite-order finite differences: Points per Wavelength: Number of passes: D n f(x j ) = f j+n f j n n x df xj = m α m dx n D n f j j n= α m n = ( ) n (m!) (m n)!(m + n)!. PPW =

More information

Coupled Matrix/Tensor Decompositions:

Coupled Matrix/Tensor Decompositions: Coupled Matrix/Tensor Decompositions: An Introduction Laurent Sorber Mikael Sørensen Marc Van Barel Lieven De Lathauwer KU Leuven Belgium Lieven.DeLathauwer@kuleuven-kulak.be 1 Canonical Polyadic Decomposition

More information

1 Math 241A-B Homework Problem List for F2015 and W2016

1 Math 241A-B Homework Problem List for F2015 and W2016 1 Math 241A-B Homework Problem List for F2015 W2016 1.1 Homework 1. Due Wednesday, October 7, 2015 Notation 1.1 Let U be any set, g be a positive function on U, Y be a normed space. For any f : U Y let

More information

Lecture 2: Linear Algebra Review

Lecture 2: Linear Algebra Review EE 227A: Convex Optimization and Applications January 19 Lecture 2: Linear Algebra Review Lecturer: Mert Pilanci Reading assignment: Appendix C of BV. Sections 2-6 of the web textbook 1 2.1 Vectors 2.1.1

More information

Third Edition. William H. Press. Raymer Chair in Computer Sciences and Integrative Biology The University of Texas at Austin. Saul A.

Third Edition. William H. Press. Raymer Chair in Computer Sciences and Integrative Biology The University of Texas at Austin. Saul A. NUMERICAL RECIPES The Art of Scientific Computing Third Edition William H. Press Raymer Chair in Computer Sciences and Integrative Biology The University of Texas at Austin Saul A. Teukolsky Hans A. Bethe

More information

Approximate Dynamic Programming

Approximate Dynamic Programming Approximate Dynamic Programming A. LAZARIC (SequeL Team @INRIA-Lille) ENS Cachan - Master 2 MVA SequeL INRIA Lille MVA-RL Course Approximate Dynamic Programming (a.k.a. Batch Reinforcement Learning) A.

More information

Direct Minimization in Density Functional Theory

Direct Minimization in Density Functional Theory Direct Minimization in Density Functional Theory FOCM Hongkong 2008 Partners joint paper with: J. Blauert, T. Rohwedder (TU Berlin), A. Neelov (U Basel) joint EU NEST project BigDFT together with Dr. Thierry

More information

Wavelets For Computer Graphics

Wavelets For Computer Graphics {f g} := f(x) g(x) dx A collection of linearly independent functions Ψ j spanning W j are called wavelets. i J(x) := 6 x +2 x + x + x Ψ j (x) := Ψ j (2 j x i) i =,..., 2 j Res. Avge. Detail Coef 4 [9 7

More information

Low Rank Tensor Methods in Galerkin-based Isogeometric Analysis

Low Rank Tensor Methods in Galerkin-based Isogeometric Analysis Low Rank Tensor Methods in Galerkin-based Isogeometric Analysis Angelos Mantzaflaris a Bert Jüttler a Boris N. Khoromskij b Ulrich Langer a a Radon Institute for Computational and Applied Mathematics (RICAM),

More information