Matrix assembly by low rank tensor approximation


1 Matrix assembly by low rank tensor approximation Felix Scholz

2 References
[1] Angelos Mantzaflaris, Bert Jüttler, Boris Khoromskij, and Ulrich Langer. Matrix generation in isogeometric analysis by low rank tensor approximation. In Curves and Surfaces, volume 9213 of LNCS. Springer, 2015.
[2] Angelos Mantzaflaris, Bert Jüttler, Boris Khoromskij, and Ulrich Langer. Low rank tensor methods in Galerkin-based isogeometric analysis. Technical report, Radon Institute for Computational and Applied Mathematics, 2016.
[3] W. Hackbusch. Tensor Spaces and Numerical Tensor Calculus. Springer, Berlin, 2012.
[4] M. Griebel and H. Harbrecht. Approximation of two-variate functions: singular value decomposition versus sparse grids. IMA Journal of Numerical Analysis, 2014.
[5] Bert Jüttler and Dominik Mokriš. Low rank spline interpolation of boundary value curves. G+S Report No. 47, 2016.

3 Overview
- Motivation
- Singular Value Decomposition (SVD) of functions
- Discrete Singular Value Decomposition
- Numerical examples
- Generalisation to arbitrary dimensions

4 Motivation
Given a parametrisation of the physical domain $\Omega$ by a regular tensor product B-spline (or NURBS) function $F \colon \hat\Omega \to \Omega$, we consider the weak formulation of an elliptic equation as a model problem: find $u \in H^1_0(\Omega)$ such that

$$a(u, v) = (f, v)_{0,\Omega} \quad \forall v \in H^1_0(\Omega),$$

where

$$a(u, v) = \int_\Omega \nabla_x u(x) \cdot \big( A(x)\, \nabla_x v(x) \big) + c\, u(x)\, v(x)\, \mathrm{d}x.$$

5 Motivation
We consider the 2D case with $\hat\Omega = [0, 1]^2$, $A \equiv I$ and $c \equiv 1$. As the discrete space of functions on the parametric domain we choose a tensor-product spline space with the B-spline basis

$$S_p(\Xi) = S_{p_1}(\Xi_1) \otimes S_{p_2}(\Xi_2), \qquad \hat B_{i,p}(\xi) = \hat B_{i_1,p_1}(\xi_1)\, \hat B_{i_2,p_2}(\xi_2),$$

and set the discrete space of functions on the physical domain to be (up to the boundary conditions)

$$V_h := \mathrm{span}\{\hat B_i \circ F^{-1}\} = \mathrm{span}\{B_i\}.$$

6 Motivation
Computing the $L^2$-product of the basis elements we get the entries of the mass matrix:

$$M_{ij} = \int_\Omega B_i(x) B_j(x)\, \mathrm{d}x = \int_{\hat\Omega} \det \nabla_\xi F(\xi)\, \hat B_i(\xi) \hat B_j(\xi)\, \mathrm{d}\xi = \int \omega(\xi)\, \hat B_{i_1}(\xi_1) \hat B_{j_1}(\xi_1)\, \hat B_{i_2}(\xi_2) \hat B_{j_2}(\xi_2)\, \mathrm{d}\xi_1\, \mathrm{d}\xi_2,$$

where $\omega(\xi) = \det \nabla_\xi F(\xi)$. Analogously, for the stiffness matrix we get

$$S_{ij} = \int_\Omega \nabla_x B_i \cdot \nabla_x B_j\, \mathrm{d}x = \sum_{p,q=1}^{2} \int_{\hat\Omega} K_{pq}(\xi)\, \frac{\partial \hat B_i}{\partial \xi_p}\, \frac{\partial \hat B_j}{\partial \xi_q}\, \mathrm{d}\xi,$$

where $K(\xi) = (\det \nabla_\xi F)\, (\nabla_\xi F)^{-1} (\nabla_\xi F)^{-T}$.

7 Motivation
The complexity of computing the bivariate integrals in $S_{ij}$ and $M_{ij}$ by Gauss quadrature is in $O(n^2 p^6)$ (if $n = n_1 = n_2$ and $p = p_1 = p_2$). We want to decompose the functions $\omega(\xi)$ and $K_{pq}(\xi)$ into products of univariate functions, so that we only need to compute univariate integrals, for which the complexity is $O(n p^3)$.
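
As a rough illustration (not part of the slides), the following Python sketch shows why separability pays off: with hypothetical univariate factors u and v standing in for the separated factors of $\omega$, the bivariate Gauss quadrature collapses into two univariate ones.

```python
import numpy as np

# If the weight factors as w(x1, x2) = u(x1) * v(x2), the 2D integral is
# a product of two 1D Gauss rules: O(n p^3) work instead of O(n^2 p^6).
nodes, weights = np.polynomial.legendre.leggauss(8)
x = 0.5 * (nodes + 1.0)      # map Gauss nodes from [-1, 1] to [0, 1]
w = 0.5 * weights

u = np.sin(np.pi * x)        # hypothetical factor u at the quadrature nodes
v = np.exp(x)                # hypothetical factor v at the quadrature nodes

I_separated = (w @ u) * (w @ v)        # two univariate quadratures
I_bivariate = w @ np.outer(u, v) @ w   # full tensor-product quadrature
print(abs(I_separated - I_bivariate))  # agree to machine precision
```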

8 Singular value decomposition of a function
Any bivariate continuous function $f \in C([0,1] \times [0,1])$ has a singular value decomposition

$$f(\xi_1, \xi_2) = \sum_{r=1}^{\infty} \sigma_r u_r(\xi_1) v_r(\xi_2),$$

where $\sigma_1 \geq \sigma_2 \geq \dots \geq 0$ and $\{u_r\}_{r \geq 1}$ and $\{v_r\}_{r \geq 1}$ are $L^2((0,1))$-orthonormal systems of continuous functions. The sum converges in $L^2((0,1) \times (0,1))$. The rank-$R$ approximation

$$f_R(\xi_1, \xi_2) = \sum_{r=1}^{R} \sigma_r u_r(\xi_1) v_r(\xi_2)$$

is the best approximation of $f$ by a rank-$R$ function in the $L^2$-norm.
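
A quick numerical way to see this decomposition, assuming we simply sample $f$ on a tensor grid: the singular values of the sample matrix mimic the $\sigma_r$ (up to quadrature weights) and decay rapidly for smooth $f$.

```python
import numpy as np

# Sample a smooth bivariate function on a tensor grid and inspect the
# decay of the singular values of the sample matrix.  For analytic f the
# decay is very fast, so a low-rank truncation is accurate.
n = 200
xi1 = np.linspace(0.0, 1.0, n)
xi2 = np.linspace(0.0, 1.0, n)
F = np.exp(xi1[:, None] * xi2[None, :])     # f(x, y) = exp(x * y)

sigma = np.linalg.svd(F, compute_uv=False)  # singular values, descending
print(sigma[:6] / sigma[0])                 # rapid decay of sigma_r
```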

9 Singular value decomposition of a function
Lemma. If $f \in H^s((0,1) \times (0,1))$, then
(i) the singular values decay like $\sigma_r \lesssim r^{-s}$;
(ii) the approximation error fulfils

$$\|f - f_R\|_{L^2} = \sqrt{\sum_{r=R+1}^{\infty} \sigma_r^2} \lesssim R^{\frac{1}{2} - s}.$$

Proof: see Griebel and Harbrecht [4].

10 Discrete singular value decomposition
A low rank approximation of the functions $\omega$ and $K_{pq}$ is computed as follows:
1. Project the function $\omega$ or $K_{pq}$ into a suitable spline space.
2. Decompose the coefficient tensor using the matrix SVD.
3. Choose the rank $R$ such that the overall approximation error is below a given tolerance $\epsilon$, for example the discretisation error.

11 Projection into a spline space
The function $\omega = \det \nabla F$ is a tensor-product spline function of higher polynomial degrees $q_d = 2p_d - 1$ and lower smoothness than $F$. We can thus represent it exactly with respect to a B-spline basis $\{\tilde B_i\}_{i \leq (m_1, m_2)}$:

$$\omega(\xi) = \Pi\omega(\xi) = \sum_{i_1=1}^{m_1} \sum_{i_2=1}^{m_2} \omega_{i_1 i_2}\, \tilde B_{i_1}(\xi_1)\, \tilde B_{i_2}(\xi_2).$$

For the functions $K_{pq}$ we choose a sufficiently refined spline space $\mathrm{span}\{\tilde B_i\}$ such that

$$\|K_{pq} - \Pi K_{pq}\|_{L^2(\hat\Omega)} = \Big\| K_{pq} - \sum_{i_1=1}^{m_1} \sum_{i_2=1}^{m_2} (K_{pq})_{i_1 i_2}\, \tilde B_{i_1} \tilde B_{i_2} \Big\|_{L^2(\hat\Omega)} \leq \epsilon_\Pi.$$

12 Decomposition of the coefficient tensor
We compute the SVD of the $m_1 \times m_2$ matrix $W = (\omega_{i_1 i_2})$, i.e.

$$W = U \Sigma V^T = \sum_{r=1}^{\min(m_1, m_2)} \sigma_r u_r v_r^T,$$

where $U$ is an orthogonal $m_1 \times m_1$ matrix with columns $u_r$, $V$ is an orthogonal $m_2 \times m_2$ matrix with columns $v_r$, and $\Sigma = \mathrm{diag}(\sigma_1, \dots, \sigma_{\min(m_1, m_2)})$. We assume $\sigma_1 \geq \dots \geq \sigma_{\min(m_1, m_2)}$.

13 Decomposition of the coefficient tensor
The rank-$R$ approximation

$$W_R = \sum_{r=1}^{R} \sigma_r u_r v_r^T$$

is the best approximation of $W$ by a matrix of rank $R$ in the Frobenius norm and fulfils

$$\|W - W_R\|_F = \sqrt{\sum_{r=R+1}^{\min(m_1, m_2)} \sigma_r^2}.$$
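
A small sanity check of this identity, using NumPy's SVD on a random matrix (purely illustrative data):

```python
import numpy as np

# Truncating the SVD after R terms leaves exactly the square root of the
# tail sum of sigma_r^2 as the Frobenius-norm error.
rng = np.random.default_rng(0)
W = rng.standard_normal((40, 30))
U, s, Vt = np.linalg.svd(W, full_matrices=False)

R = 10
W_R = (U[:, :R] * s[:R]) @ Vt[:R, :]   # best rank-R approximation
print(np.linalg.norm(W - W_R, "fro"))  # equals ...
print(np.sqrt(np.sum(s[R:] ** 2)))     # ... the tail of the singular values
```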

14 Decomposition of ω
We multiply the decomposed coefficient tensor with the basis of the projection space to get the decomposition

$$\Pi\omega(\xi) = \sum_{r=1}^{\min(m_1, m_2)} \sigma_r \Big( \sum_{i_1=1}^{m_1} (u_r)_{i_1} \tilde B_{i_1}(\xi_1) \Big) \Big( \sum_{i_2=1}^{m_2} (v_r)_{i_2} \tilde B_{i_2}(\xi_2) \Big) = \sum_{r=1}^{\min(m_1, m_2)} U_r(\xi_1) V_r(\xi_2),$$

where the factor $\sigma_r$ is absorbed into $U_r$.
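
A hypothetical sketch of this step in Python; the helper name and the basis handling are our assumptions, with B1 and B2 (lists of univariate callables) standing in for the B-spline bases of the projection space.

```python
import numpy as np

def separated_form(W, B1, B2, R):
    """Univariate factors with sum_r U_r(x) V_r(y) ~ Pi omega.

    B1, B2 are lists of univariate basis callables (hypothetical stand-ins
    for the B-splines tilde-B_i); sigma_r is absorbed into U_r.
    """
    U, s, Vt = np.linalg.svd(W)
    U_funcs = [lambda x, r=r: s[r] * sum(U[i, r] * B1[i](x)
                                         for i in range(len(B1)))
               for r in range(R)]
    V_funcs = [lambda y, r=r: sum(Vt[r, j] * B2[j](y)
                                  for j in range(len(B2)))
               for r in range(R)]
    return U_funcs, V_funcs
```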

15 Error estimate for the low rank approximation
Lemma. The rank-$R$ approximation

$$\Lambda_R \omega(\xi) = \sum_{r=1}^{R} U_r(\xi_1) V_r(\xi_2)$$

fulfils

$$\|\Pi\omega - \Lambda_R \omega\|_{L^\infty(\hat\Omega)} \leq \|W - W_R\|_F = \sqrt{\sum_{r=R+1}^{\min(m_1, m_2)} \sigma_r^2}.$$

Thus for a given accuracy $\epsilon_\Lambda$ we can choose the smallest rank $R$ such that the approximation error is below $\epsilon_\Lambda$.
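
This rank selection is simple to implement; below is a minimal helper (the function name is ours) that picks the smallest $R$ whose singular-value tail is at most a given tolerance.

```python
import numpy as np

def truncation_rank(s, eps):
    """Smallest R with sqrt(sum_{r>R} s_r^2) <= eps, s sorted descending."""
    # tail2[R] = sum of s_r^2 over r > R; appended 0.0 covers full rank
    tail2 = np.concatenate((np.cumsum((s ** 2)[::-1])[::-1], [0.0]))
    return int(np.argmax(np.sqrt(tail2) <= eps))

s = np.array([1.0, 0.5, 1e-3, 1e-6, 1e-9])
print(truncation_rank(s, 1e-8))   # -> 4
```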

16 Assembly of the matrices
The entries of the approximated mass matrix are

$$M_{ij} \approx \tilde M_{ij} = \sum_{r=1}^{R} \Big( \int_0^1 U_r(\xi_1)\, \hat B_{i_1}(\xi_1) \hat B_{j_1}(\xi_1)\, \mathrm{d}\xi_1 \Big) \Big( \int_0^1 V_r(\xi_2)\, \hat B_{i_2}(\xi_2) \hat B_{j_2}(\xi_2)\, \mathrm{d}\xi_2 \Big),$$

and thus $\tilde M$ can be written in the Kronecker format

$$\tilde M = \sum_{r=1}^{R} X_r \otimes Y_r,$$

where each $X_r$ is an $n_1 \times n_1$ and each $Y_r$ an $n_2 \times n_2$ matrix containing the univariate integrals. For the stiffness matrix we can proceed in the same way.
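
To illustrate the Kronecker format, here is a minimal sketch with random stand-ins for the univariate integral matrices $X_r$, $Y_r$.

```python
import numpy as np

# Kronecker-format assembly.  The factor matrices below are random data;
# in the method they would hold the univariate integrals of this slide.
rng = np.random.default_rng(1)
R, n1, n2 = 3, 5, 4
X = rng.standard_normal((R, n1, n1))   # would hold the xi_1 integrals
Y = rng.standard_normal((R, n2, n2))   # would hold the xi_2 integrals

M = sum(np.kron(X[r], Y[r]) for r in range(R))
print(M.shape)                         # (n1 * n2, n1 * n2)
```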

17 Computational complexity
We assume $n_1 = n_2 = n$, $m_1 = m_2 = m$, $p_1 = p_2 = p$ and $q_1 = q_2 = q$.
- The complexity is bounded from below by the number of non-zeros in the matrix, which is $O(n^2 p^2)$.
- The complexity of computing the matrix SVD up to rank $R$ is $O(R m^2)$.
- For assembling the matrices $X_r$ and $Y_r$ using univariate element-wise Gauss quadrature the complexity is $O(R n p^3)$.
- The complexity of computing the Kronecker sum $\sum_{r=1}^{R} X_r \otimes Y_r$ is $O(R n^2 p^2)$.
Since generally $n \gg p$, the overall complexity is dominated by the last step and is thus $O(R n^2 p^2)$.

18 Numerical examples
For the method to be efficient we need to be able to choose the rank low.

Table: rank values for accuracy $\epsilon_\Lambda = \epsilon_\Pi = 10^{-8}$, for four combinations of $n$ and $p$:
rank(ω): 1, 1, 7, 8
rank(K): (1, 2), (1, 2), (4, 4), (7, 7)

19 Numerical examples
Figure: Comparison of computation times for the stiffness matrix using the decomposition method and an element-wise Gauss method (assembly time versus problem size, with reference slopes). (a) Quarter annulus, $p = 2$. (b) 2nd Coons surface (star), $p = 7$.

20 Numerical examples
Figure: Comparison of the $p$-dependence of the computation times for the stiffness matrix using the decomposition method and an element-wise Gauss method (assembly time versus $p$, with reference slopes). Computed on the quarter annulus with a fixed number of DOF.

21 Generalisation to arbitrary dimensions
The tensor decomposition method can be generalised to arbitrary dimensions, since any $d$-dimensional tensor $T \in \mathbb{R}^{n_1 \times \dots \times n_d}$ possesses a canonical representation

$$T = \sum_{r=1}^{R} v_r^1 \otimes v_r^2 \otimes \dots \otimes v_r^d,$$

where $v_r^k \in \mathbb{R}^{n_k}$. However, the truncation operator

$$\Lambda_R T = \operatorname*{argmin}_{\mathrm{rank}(U) \leq R} \|T - U\|_2$$

leads to a non-linear optimisation problem.
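
For concreteness, a small sketch of the canonical (CP) format with $d = 4$ (the factor matrices are random stand-ins); expanding the full tensor via einsum is only for checking, since the point of the format is never to form the $n^d$ array.

```python
import numpy as np

# Canonical (CP) rank-R representation: T = sum_r v_r^1 o ... o v_r^d.
rng = np.random.default_rng(2)
d, R, n = 4, 3, 6
factors = [rng.standard_normal((R, n)) for _ in range(d)]  # rows are v_r^k

# Expand into the full n^d tensor (for verification only).
T = np.einsum("ra,rb,rc,rd->abcd", *factors)
print(T.shape)   # (6, 6, 6, 6)
```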
