Krylov subspace methods for linear systems with tensor product structure


1 Krylov subspace methods for linear systems with tensor product structure. Christine Tobler, Seminar for Applied Mathematics, ETH Zürich, 19 August 2009.

2 Outline 1 Introduction 2 Basic Algorithm 3 Convergence bounds 4 Solving the compressed equation 5 Numerical experiments

4 Linear system with tensor product structure
We consider the linear system $Ax = b$ with
$$A = \sum_{s=1}^{d} I_{n_1} \otimes \cdots \otimes I_{n_{s-1}} \otimes A_s \otimes I_{n_{s+1}} \otimes \cdots \otimes I_{n_d}, \qquad b = b_1 \otimes \cdots \otimes b_d,$$
where $A_s \in \mathbb{R}^{n_s \times n_s}$ is positive definite and $b_s \in \mathbb{R}^{n_s}$.
Example for 3 dimensions:
$$(A_1 \otimes I \otimes I + I \otimes A_2 \otimes I + I \otimes I \otimes A_3)\, x = b_1 \otimes b_2 \otimes b_3.$$
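To make the Kronecker structure concrete, here is a minimal NumPy sketch (not from the talk) that assembles $A$ and $b$ explicitly for $d = 3$ and a tiny factor size; the dense solve is only feasible at this toy scale, and all names and sizes are illustrative.

import numpy as np

rng = np.random.default_rng(0)
n = 4

def spd(n):
    # a random symmetric positive definite factor A_s
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

A1, A2, A3 = spd(n), spd(n), spd(n)
b1, b2, b3 = (rng.standard_normal(n) for _ in range(3))

I = np.eye(n)
A = (np.kron(np.kron(A1, I), I)
     + np.kron(np.kron(I, A2), I)
     + np.kron(np.kron(I, I), A3))
b = np.kron(np.kron(b1, b2), b3)

x = np.linalg.solve(A, b)            # dense solve: feasible only for tiny n and d
print(np.linalg.norm(A @ x - b))     # ~ machine precision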

6 Tensor decompositions
CP decomposition:
$$\operatorname{vec}(X) = \sum_{r=1}^{k} a_r \otimes b_r \otimes c_r, \qquad a_r \in \mathbb{R}^m,\ b_r \in \mathbb{R}^n,\ c_r \in \mathbb{R}^p.$$
Tucker decomposition:
$$\operatorname{vec}(X) = \sum_{i=1}^{\tilde m} \sum_{j=1}^{\tilde n} \sum_{l=1}^{\tilde p} G_{ijl}\, a_i \otimes b_j \otimes c_l = (A \otimes B \otimes C)\operatorname{vec}(G),$$
with core tensor $G \in \mathbb{R}^{\tilde m \times \tilde n \times \tilde p}$ and $a_i \in \mathbb{R}^m$, $b_j \in \mathbb{R}^n$, $c_l \in \mathbb{R}^p$.
[Figure: schematic of the CP decomposition (sum of $k$ rank-one terms) and the Tucker decomposition (core tensor $G$ multiplied by factor matrices $A$, $B$, $C$).]
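The Tucker identity can be checked numerically. The sketch below assumes $\operatorname{vec}$ is the row-major (C-order) flattening, which is the convention under which the factor order $A \otimes B \otimes C$ above is correct; all dimensions are illustrative.

import numpy as np

rng = np.random.default_rng(1)
m, n, p = 5, 4, 3            # full dimensions
mt, nt, pt = 2, 3, 2         # core dimensions (m-tilde, n-tilde, p-tilde)
A = rng.standard_normal((m, mt))
B = rng.standard_normal((n, nt))
C = rng.standard_normal((p, pt))
G = rng.standard_normal((mt, nt, pt))

# triple-sum definition: sum_{i,j,l} G_ijl a_i (x) b_j (x) c_l
x_sum = sum(G[i, j, l] * np.kron(A[:, i], np.kron(B[:, j], C[:, l]))
            for i in range(mt) for j in range(nt) for l in range(pt))

# Kronecker-product form, with vec(G) as a row-major flatten
x_kron = np.kron(A, np.kron(B, C)) @ G.flatten()
print(np.linalg.norm(x_sum - x_kron))   # ~ 1e-15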

7 About this system
The eigenvalues of the matrix $A$ are given by all possible sums $\lambda^{(1)}_{i_1} + \lambda^{(2)}_{i_2} + \cdots + \lambda^{(d)}_{i_d}$, where $\lambda^{(s)}_{i_s}$ denotes an eigenvalue of $A_s$. For $A_s$ positive definite, the system has a unique solution.
Note that $x$ and $b$ are vector representations of tensors in $\mathbb{R}^{n_1 \times \cdots \times n_d}$, and $b$ represents a rank-one tensor.
A tensor arising from the discretization of a sufficiently smooth function $f$ can be approximated by a short sum of rank-one tensors. By superposition, we can insert such a tensor as right-hand side into our system.
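A quick numerical confirmation of the eigenvalue statement, for $d = 3$ and random positive definite factors (an illustrative sketch, not part of the talk):

import itertools
import numpy as np

rng = np.random.default_rng(2)
n = 3

def spd(n):
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

As = [spd(n) for _ in range(3)]
I = np.eye(n)
A = (np.kron(np.kron(As[0], I), I) + np.kron(np.kron(I, As[1]), I)
     + np.kron(np.kron(I, I), As[2]))

# all possible sums of one eigenvalue per factor
sums = sorted(sum(t) for t in itertools.product(*(np.linalg.eigvalsh(M) for M in As)))
print(np.allclose(sorted(np.linalg.eigvalsh(A)), sums))   # True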


10 Tensorized Krylov subspaces
A Krylov subspace is defined as $\mathcal{K}_k(A, b) = \operatorname{span}\{ b, Ab, \ldots, A^{k-1} b \}$.
We define a tensorized Krylov subspace as
$$\mathcal{K}^{\otimes}_K(A, b) := \operatorname{span}\big( \mathcal{K}_{k_1}(A_1, b_1) \otimes \cdots \otimes \mathcal{K}_{k_d}(A_d, b_d) \big).$$
Note that $\mathcal{K}_{k_0}(A, b) \subseteq \mathcal{K}^{\otimes}_K(A, b)$ for $K = (k_0, \ldots, k_0)$: tensorized Krylov subspaces are richer than standard Krylov subspaces.
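As a sketch, one orthonormal basis of the tensorized subspace is the Kronecker product of orthonormal bases of the factor subspaces. The Krylov bases here are built naively by QR of the monomial Krylov vectors, purely for illustration; a practical implementation would use Arnoldi or Lanczos.

import numpy as np

def krylov_basis(A, b, k):
    # columns span K_k(A, b); QR for numerical orthonormality
    V = np.column_stack([np.linalg.matrix_power(A, j) @ b for j in range(k)])
    Q, _ = np.linalg.qr(V)
    return Q

rng = np.random.default_rng(3)
n, k = 6, 3
M1, M2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A1, A2 = M1 @ M1.T + n * np.eye(n), M2 @ M2.T + n * np.eye(n)
b1, b2 = rng.standard_normal(n), rng.standard_normal(n)

U = np.kron(krylov_basis(A1, b1, k), krylov_basis(A2, b2, k))
print(U.shape)   # (36, 9): n^d rows, k^d columns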


13 Outline 1 Introduction 2 Basic Algorithm 3 Convergence bounds 4 Solving the compressed equation 5 Numerical experiments

14 Reminder: CG method ($Ax = b$)
The best approximation of $x$ in $\mathcal{K}_k(A, b)$ is defined by
$$\| x_k - x \|_A = \min_{\tilde x \in \mathcal{K}_k(A, b)} \| \tilde x - x \|_A.$$
Find $U_k$ with orthonormal columns that span the Krylov subspace $\mathcal{K}_k(A, b)$. Set $x_k = U_k y$; then $y$ is the solution of the compressed system
$$H_k y = \tilde b, \qquad H_k = U_k^T A U_k, \quad \tilde b = U_k^T b.$$
Convergence bound ($\kappa = \kappa_2(A)$):
$$\| x_k - x \|_A \le C(A, b, \kappa) \left( \frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1} \right)^k.$$
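The projection view can be reproduced directly. The following sketch forms an orthonormal Krylov basis explicitly and solves the compressed system; it is a toy stand-in for the short recurrences of the actual CG algorithm.

import numpy as np

rng = np.random.default_rng(4)
n, k = 50, 10
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # s.p.d.
b = rng.standard_normal(n)

V = np.column_stack([np.linalg.matrix_power(A, j) @ b for j in range(k)])
U, _ = np.linalg.qr(V)               # orthonormal basis of K_k(A, b)
H = U.T @ A @ U                      # compressed matrix (tridiagonal in exact arithmetic)
y = np.linalg.solve(H, U.T @ b)
xk = U @ y                           # the iterate CG would produce at step k

x = np.linalg.solve(A, b)
err = np.sqrt((xk - x) @ A @ (xk - x))   # A-norm of the error
print(err)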


17 Tensorized Krylov: $A \operatorname{vec}(X) = b$
Applying the Arnoldi method to $A_s$, $b_s$ results in $U_s$, $H_s$ with $U_s^T A_s U_s = H_s$, where $U_s$ is column-orthogonal and $H_s$ is an upper Hessenberg matrix.
The columns of $U_s$ span $\mathcal{K}_{k_s}(A_s, b_s)$, and similarly the columns of $U := U_1 \otimes \cdots \otimes U_d$ span $\mathcal{K}^{\otimes}_K(A, b)$.
Solve the compressed system $H y = \tilde b$ with $x_K = U y$, $\tilde b = U^T b$ and $H = U^T A U$.
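Putting the pieces together for $d = 2$: the sketch below runs a plain Arnoldi iteration on each factor, assembles the small compressed Kronecker system, and solves it densely. This is an illustrative reconstruction of the basic algorithm, not the authors' implementation.

import numpy as np

def arnoldi(A, b, k):
    # returns U (n x k, orthonormal columns) and H = U^T A U (k x k, upper Hessenberg)
    n = len(b)
    U, H = np.zeros((n, k)), np.zeros((k, k))
    U[:, 0] = b / np.linalg.norm(b)
    for j in range(k - 1):
        w = A @ U[:, j]
        for i in range(j + 1):
            H[i, j] = U[:, i] @ w
            w -= H[i, j] * U[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        U[:, j + 1] = w / H[j + 1, j]
    w = A @ U[:, k - 1]                       # fill the last column of H
    for i in range(k):
        H[i, k - 1] = U[:, i] @ w
    return U, H

rng = np.random.default_rng(5)
n, k = 40, 8
M1, M2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A1, A2 = M1 @ M1.T + n * np.eye(n), M2 @ M2.T + n * np.eye(n)
b1, b2 = rng.standard_normal(n), rng.standard_normal(n)

U1, H1 = arnoldi(A1, b1, k)
U2, H2 = arnoldi(A2, b2, k)
Ik, In = np.eye(k), np.eye(n)
H = np.kron(H1, Ik) + np.kron(Ik, H2)               # compressed matrix
bt = np.kron(U1.T @ b1, U2.T @ b2)                  # = ||b||_2 (e_1 (x) e_1)
xK = np.kron(U1, U2) @ np.linalg.solve(H, bt)       # approximate solution

A = np.kron(A1, In) + np.kron(In, A2)
b = np.kron(b1, b2)
print(np.linalg.norm(A @ xK - b) / np.linalg.norm(b))   # relative residual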


20 Compressed system
The structure of $H y = \tilde b$ corresponds to that of $Ax = b$:
$$H = \sum_{s=1}^{d} I_{k_1} \otimes \cdots \otimes I_{k_{s-1}} \otimes H_s \otimes I_{k_{s+1}} \otimes \cdots \otimes I_{k_d},$$
$$\tilde b = \tilde b_1 \otimes \cdots \otimes \tilde b_d = \| b \|_2 \,( e_1 \otimes \cdots \otimes e_1 ).$$
For a dense core tensor $y$, $x_K$ is a tensor in Tucker decomposition: $x_K = (U_1 \otimes \cdots \otimes U_d)\, y$.

21 Outline 1 Introduction 2 Basic Algorithm 3 Convergence bounds 4 Solving the compressed equation 5 Numerical experiments

22 Convergence, s.p.d. case (1)
Similarly to CG, we have
$$\| x_K - x \|_A = \min_{\tilde x \in \mathcal{K}^{\otimes}_K(A, b)} \| \tilde x - x \|_A.$$
Every vector in a Krylov subspace $\mathcal{K}_{k_s}(A_s, b_s)$ can be seen as $p(A_s) b_s$, with $p$ a polynomial of degree at most $k_s - 1$. A similar argument reduces the bound calculation to the min-max problem
$$E_\Omega(K) := \min_{p \in \Pi_K} \left\| \frac{1}{\lambda_1 + \cdots + \lambda_d} - p(\lambda_1, \ldots, \lambda_d) \right\|_\Omega,$$
where $\Pi_K$ is a space of multivariate polynomials and $\Omega$ contains the eigenvalue tuples $(\lambda_1, \ldots, \lambda_d)$, with $\lambda_i \in [\lambda_{\min}(A_i), \lambda_{\max}(A_i)]$.

23 Convergence, s.p.d. case (2)
Inserting an upper bound for $E_\Omega(K)$, we find
$$\| x_K - x \|_A \le \sum_{s=1}^{d} C(A, b, \kappa_s) \left( \frac{\sqrt{\kappa_s} - 1}{\sqrt{\kappa_s} + 1} \right)^{k_s}, \qquad \kappa_s = 1 + \frac{\lambda_{\max}(A_s) - \lambda_{\min}(A_s)}{\lambda_{\min}(A)}.$$
For the case of constant $A_s$, $k_s$:
$$\| x_K - x \|_A \le C(A, b, d) \left( \frac{\sqrt{\tilde\kappa} - 1}{\sqrt{\tilde\kappa} + 1} \right)^{k}, \qquad \tilde\kappa = \frac{d-1}{d} + \frac{\kappa_2(A)}{d}.$$
Note that the convergence rate improves with increasing dimension.
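A small numeric illustration of the last remark (values illustrative): evaluating the contraction factor for $\kappa_2(A) = 10^4$ and growing $d$.

import numpy as np

kappa2 = 1e4
for d in (1, 2, 5, 10, 50, 100):
    kt = (d - 1) / d + kappa2 / d                 # effective condition number
    rho = (np.sqrt(kt) - 1) / (np.sqrt(kt) + 1)   # CG-type contraction factor
    print(f"d = {d:3d}: kappa_tilde = {kt:10.1f}, rate = {rho:.4f}")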

24 Convergence, non-symmetric positive definite case
General convergence bound:
$$\| x_K - x \|_2 \le \sum_{s=1}^{d} \int_0^\infty e^{-\hat\alpha_s t}\, \big\| U_s e^{-t H_s} e_1 - e^{-t A_s} b_s \big\|_2 \, dt,$$
with $\hat\alpha_s := \sum_{j \ne s} \alpha_j$ and $\alpha_j = \lambda_{\min}\big( (A_j + A_j^T)/2 \big)$.
The bound on $\| U_s e^{-t H_s} e_1 - e^{-t A_s} b_s \|_2$ will depend on additional knowledge about $A_s$. For example, when the field of values of each matrix $A_s$ is contained in a known ellipse, an explicit convergence bound can be found.

25 Outline 1 Introduction 2 Basic Algorithm 3 Convergence bounds 4 Solving the compressed equation 5 Numerical experiments

26 Grasedyck's method, system $H y = \tilde b$
For $H$ positive definite:
$$H^{-1} = \int_0^\infty \exp(-tH)\, dt.$$
The exponential of $H$ has a Kronecker product structure, too: writing $H = \sum_{s=1}^{d} \hat H_s$ with the commuting terms $\hat H_s = I \otimes \cdots \otimes H_s \otimes \cdots \otimes I$,
$$\exp(-tH) = \exp\Big( -t \sum_{s=1}^{d} \hat H_s \Big) = \prod_{s=1}^{d} \exp(-t \hat H_s) = \exp(-t H_1) \otimes \cdots \otimes \exp(-t H_d).$$
Approximation $y_m$:
$$y_m = \sum_{j=1}^{m} \omega_j \bigotimes_{s=1}^{d} \exp(-\alpha_j H_s)\, \tilde b_s,$$
with certain coefficients $\alpha_j, \omega_j$.
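The sketch below mimics this representation for $d = 2$. Since the optimized Remez/Hackbusch coefficients are tabulated elsewhere, the code substitutes a simple sinc-type quadrature for $\int_0^\infty \exp(-tH)\,dt$ after the change of variables $t = e^s$; this quadrature is an assumption standing in for the coefficients actually used in the talk.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(6)
k = 8

def spd(k):
    M = rng.standard_normal((k, k))
    return M @ M.T + k * np.eye(k)

H1, H2 = spd(k), spd(k)
bt1, bt2 = rng.standard_normal(k), rng.standard_normal(k)

# trapezoid rule on int_{-inf}^{inf} e^s exp(-e^s H) ds, i.e. alpha_j = e^{jh}, omega_j = h e^{jh}
h, m = 0.4, 40
ym = np.zeros(k * k)
for j in range(-m, m + 1):
    alpha, omega = np.exp(j * h), h * np.exp(j * h)
    ym += omega * np.kron(expm(-alpha * H1) @ bt1, expm(-alpha * H2) @ bt2)

Ik = np.eye(k)
H = np.kron(H1, Ik) + np.kron(Ik, H2)
y = np.linalg.solve(H, np.kron(bt1, bt2))
print(np.linalg.norm(ym - y) / np.linalg.norm(y))   # small relative error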

27 Coefficients of the exponential sum (1)
The coefficients $\alpha_j, \omega_j$ should minimize
$$\sup_{z \in \Lambda(H)} \Big| \frac{1}{z} - \sum_{j=1}^{m} \omega_j e^{-\alpha_j z} \Big|.$$
Case 1: $H$ symmetric and condition number known.
The eigenvalues of $H$ are real: $\Lambda(H) \subset [\lambda_{\min}, \lambda_{\max}]$. There are coefficients $\alpha_j, \omega_j$ such that
$$\| y - y_m \|_2 \le C(H, b) \exp\Big( \frac{-m\pi^2}{\log(8\kappa_2(H))} \Big).$$
These coefficients may be found using a variant of the Remez algorithm. Tabulated values have been made available by Hackbusch.

28 Coefficients of the exponential sum (2)
Case 2: nonsymmetric $H$ and/or unknown condition number.
There is an explicit formula for $\alpha_j, \omega_j$, where the eigenvalues of $H$ only need to have positive real part:
$$\| y - y_{2m+1} \|_2 \le C(H, b) \exp(\mu/\pi) \exp(-\sqrt{m}),$$
where $\mu = \max |\operatorname{Im}(\Lambda(H))|$. The convergence is significantly slower than for Case 1.

29 Theoretical convergence bounds for the coefficients
[Figure: $\|y - y_k\|$ versus $k$ for Case 1 with condition number 1e5, Case 1 with condition number 1e15, and Case 2.]

30 Outline 1 Introduction 2 Basic Algorithm 3 Convergence bounds 4 Solving the compressed equation 5 Numerical experiments

31 Symmetric example: Poisson equation
Finite difference discretization of the Poisson equation in $d$ dimensions:
$$-\Delta u = f \ \text{ in } \Omega = [0,1]^d, \qquad u = 0 \ \text{ on } \Gamma := \partial\Omega,$$
where the right-hand side $f$ is a separable function.
$$A_s = \frac{1}{h^2} \operatorname{tridiag}(-1, 2, -1), \qquad b_s: \text{ random numbers}.$$
The approximation error is measured by the relative residual $\| A x_K - b \|_2 / \| b \|_2$.
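A sketch of this setup (illustrative code, not the original experiments): the 1D factor matrix, plus a matrix-vector product with $A$ that exploits the Kronecker-sum structure and never forms the $n^d \times n^d$ matrix, applying each $A_s$ along its mode.

import numpy as np
import scipy.sparse as sp

def poisson_factor(n):
    # A_s = h^{-2} tridiag(-1, 2, -1) on a uniform grid with h = 1/(n+1)
    h = 1.0 / (n + 1)
    return sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsr() / h**2

def kron_sum_matvec(As, x):
    # y = (sum_s I (x) ... (x) A_s (x) ... (x) I) x, applied mode by mode
    d, n = len(As), As[0].shape[0]
    X = x.reshape((n,) * d)
    Y = np.zeros_like(X)
    for s, M in enumerate(As):
        Y += np.moveaxis(np.tensordot(M.toarray(), X, axes=(1, s)), 0, s)
    return Y.reshape(-1)

# check against the explicitly assembled matrix for d = 2 and small n
n, d = 5, 2
As = [poisson_factor(n) for _ in range(d)]
I = sp.identity(n)
A = sp.kron(As[0], I) + sp.kron(I, As[1])
x = np.random.default_rng(7).standard_normal(n ** d)
print(np.linalg.norm(A @ x - kron_sum_matvec(As, x)))   # ~ 1e-12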

32 Convergence for system size $200^d$
[Figure: relative residual versus $k$; left panel, numerical convergence; right panel, theoretical convergence rate; curves for $d = 5, 10, 50, 100$.]

33 Convergence for system size $1000^d$
[Figure: relative residual versus $k$; left panel, numerical convergence; right panel, theoretical convergence rate; curves for $d = 5, 10, 50, 100$.]

34 Extended Krylov subspaces
Using extended Krylov subspaces
$$\mathcal{K}^{\mathrm{ext}}_{k_s}(A_s, b_s) := \operatorname{span}\big( \mathcal{K}_{k_s}(A_s, b_s) \cup \mathcal{K}_{k_s+1}(A_s^{-1}, b_s) \big),$$
the algorithm works analogously. Convergence bound ($A_s$, $k_s$ constant):
$$\| x_K - x \|_2 \le C(A, b, d) \left( \frac{\sqrt{\tilde\kappa} - 1}{\sqrt{\tilde\kappa} + 1} \right)^{k}.$$
The convergence rate depends on $\tilde\kappa$, which now grows only with the square root of the condition number: $\tilde\kappa = \sqrt{\kappa_2(A)}$ for $d = 2$, and $\tilde\kappa \approx \frac{d-1}{d} + \frac{\sqrt{\kappa_2(A)}}{d}$ for large $d$.
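A naive construction of one extended Krylov factor basis (illustrative only; a practical implementation would factor $A_s$ once and orthogonalize incrementally):

import numpy as np

def extended_krylov_basis(A, b, k):
    # columns spanning span{b, A b, A^{-1} b, ..., A^k b, A^{-k} b}
    cols = [b]
    v, w = b.copy(), b.copy()
    for _ in range(k):
        v = A @ v                    # positive powers
        w = np.linalg.solve(A, w)    # negative powers (re-solves each step, for brevity)
        cols += [v, w]
    Q, _ = np.linalg.qr(np.column_stack(cols))
    return Q

rng = np.random.default_rng(8)
n = 30
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
b = rng.standard_normal(n)
U = extended_krylov_basis(A, b, 3)
print(U.shape)   # (30, 7)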

35 Extended Krylov, system size $200^d$
[Figure: relative residual versus $k$; left panel, numerical convergence; right panel, theoretical convergence rate; curves for $d = 5, 10, 50, 100$.]

36 Non-symmetric example
Finite difference discretization in $d$ dimensions of the convection-diffusion equation:
$$-\Delta u - (c, \ldots, c) \cdot \nabla u = f \ \text{ in } \Omega = [0,1]^d, \qquad u = 0 \ \text{ on } \Gamma := \partial\Omega,$$
where $f$ is again a separable function. Each factor $A_s$ combines the diffusion stencil $\frac{1}{h^2}\operatorname{tridiag}(-1, 2, -1)$ with a convection term scaled by $\frac{c}{4h}$.
For $c = 10$, all eigenvalues of $A$ are real. For $c = 100$, this is not the case, and the convergence is significantly worse.
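The reality of the spectrum hinges on the balance between the diffusion and convection off-diagonal entries. The sketch below uses an assumed central-difference convection stencil $\operatorname{tridiag}(-1, 0, 1)$ scaled by $c/(4h)$ (the slide's exact stencil did not survive extraction) on a coarse grid where the transition between $c = 10$ and $c = 100$ is visible; since the eigenvalues of $A$ are sums of factor eigenvalues, complex factor spectra imply a complex spectrum of $A$.

import numpy as np
import scipy.sparse as sp

def cd_factor(n, c):
    # assumed stencil: diffusion h^{-2} tridiag(-1,2,-1) + convection (c/4h) tridiag(-1,0,1)
    h = 1.0 / (n + 1)
    diff = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2
    conv = sp.diags([-1.0, 0.0, 1.0], [-1, 0, 1], shape=(n, n)) * (c / (4 * h))
    return (diff + conv).toarray()

for c in (10.0, 100.0):
    lam = np.linalg.eigvals(cd_factor(20, c))
    print(c, np.max(np.abs(lam.imag)))   # ~0 for c = 10, nonzero for c = 100 on this grid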

37 Non-symmetric case, system size $200^d$
[Figure: relative residual versus $k$; left panel, convergence for $c = 10$ with $d = 5, 10, 50, 100$; right panel, convergence for $c = 100$ with $d = 5$.]

39 Conclusions
An efficient algorithm to calculate a low-rank approximation of the solution tensor.
The computational complexity is linear in the number of dimensions.
Only matrix-vector operations with the system matrices $A_s$ are required.
A theoretical convergence bound was found.
Thank you for your attention!

40 Literature
D. Kressner and C. Tobler. Krylov subspace methods for linear systems with tensor product structure. Research report, SAM, ETH Zurich.
L. Grasedyck. Existence and computation of low Kronecker-rank approximations for large linear systems of tensor product structure. Computing, 72(3-4), 2004.
D. Braess and W. Hackbusch. Approximation of $1/x$ by exponential sums in $[1, \infty)$. IMA J. Numer. Anal., 25(4), 2005.
