Stability of Krylov Subspace Spectral Methods


Stability of Krylov Subspace Spectral Methods. James V. Lambers, Department of Energy Resources Engineering, Stanford University. Includes joint work with Patrick Guidotti and Knut Sølna (UC Irvine) and Margot Gerritsen (Stanford University). Harrachov '07, August 23, 2007.

Model Variable-Coefficient Problem. We consider u_t + L(x,D)u = 0 with periodic boundary conditions, where L(x,D) is a second-order, self-adjoint differential operator with leading coefficient p(x) and zero-order coefficient q(x). We assume that L(x,D) is positive semi-definite and that the coefficients are smooth.

Quick-and-Dirty Solution. Let {φ_ν} be a set of orthonormal 2π-periodic functions. Then an approximate solution is ũ(x,t) = Σ_ν ⟨φ_ν, e^{-L(x,D)t} f⟩ φ_ν(x). This works if the φ_ν are nearly eigenfunctions, but if not, how can we compute ⟨φ_ν, e^{-L(x,D)t} f⟩ as accurately and efficiently as possible?
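To make the "quick-and-dirty" formula concrete, here is a minimal NumPy sketch for the special case where the φ_ν really are eigenfunctions: with L(x,D) = -d²/dx² (constant coefficients), the complex exponentials diagonalize L, so each Fourier component of e^{-Lt}f is simply damped by e^{-ω²t}. All names and the discretization are illustrative choices, not taken from the talk.

```python
import numpy as np

# Sketch: when the basis functions are exact eigenfunctions, the approximate
# solution formula is exact.  Here L(x,D) = -d^2/dx^2 on [0, 2*pi) (constant
# coefficients), so phi_omega = e^{i*omega*x} are eigenfunctions with
# eigenvalues omega^2, and each component of e^{-Lt}f decays like e^{-omega^2 t}.
N = 64
x = 2 * np.pi * np.arange(N) / N
f = np.exp(np.sin(x))                      # smooth 2*pi-periodic initial data
t = 0.1

omega = np.fft.fftfreq(N, d=1.0 / N)       # integer frequencies 0,1,...,-1
u_hat = np.exp(-omega**2 * t) * np.fft.fft(f)
u = np.real(np.fft.ifft(u_hat))            # approximate (here: exact) e^{-Lt}f
```

For variable coefficients the exponentials are no longer eigenfunctions, and the inner products ⟨φ_ν, e^{-Lt}f⟩ must be approximated by other means, which is what the rest of the talk develops.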

Elements of Functions of Matrices. If A is an N×N symmetric matrix, then u^T e^{-At} v is given by a Riemann-Stieltjes integral ∫ e^{-λt} dα(λ), provided the measure α(λ), which is based on the spectral decomposition of A, is positive and increasing. This is the case if v = u, or if v is a small perturbation of u.
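The Riemann-Stieltjes representation can be checked numerically: for v = u the measure jumps by (q_j^T u)² at each eigenvalue λ_j of A, so the integral is a finite sum with nonnegative increments. This is a small self-contained sketch; the matrix and vector are arbitrary test data.

```python
import numpy as np

# Sketch: for symmetric A, u^T e^{-At} u equals the Riemann-Stieltjes integral
# of e^{-lambda*t} against a measure whose jump at eigenvalue lambda_j is
# (q_j^T u)^2 -- positive and increasing when v = u.
rng = np.random.default_rng(0)
N = 50
B = rng.standard_normal((N, N))
A = (B + B.T) / 2                          # symmetric test matrix
u = rng.standard_normal(N)
t = 0.3

lam, Q = np.linalg.eigh(A)                 # spectral decomposition A = Q diag(lam) Q^T
jumps = (Q.T @ u) ** 2                     # measure increments: all nonnegative
integral = np.sum(np.exp(-lam * t) * jumps)

direct = u @ (Q @ np.diag(np.exp(-lam * t)) @ Q.T) @ u   # u^T e^{-At} u directly
```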

The u ≠ v Case. For general u and v, the bilinear form u^T e^{-At} v can be obtained by writing it as a difference quotient of quadratic forms, comparing the form generated by u + δv with the one generated by u, where δ is a small constant. Both forms lead to Riemann-Stieltjes integrals with positive, increasing measures. How can we approximate these integrals?
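The difference-quotient idea can be sketched as follows (my normalization: dividing by 2δ recovers u^T f(A) v up to an O(δ) term; the talk's exact scaling may differ, so treat this as an illustration of the principle rather than the method's formula):

```python
import numpy as np

# Sketch: recover the bilinear form u^T f(A) v from two quadratic forms,
# w^T f(A) w with w = u and w = u + delta*v, each of which has a positive,
# increasing spectral measure.
rng = np.random.default_rng(1)
N = 40
B = rng.standard_normal((N, N))
A = (B + B.T) / 2
u = rng.standard_normal(N)
v = rng.standard_normal(N)
t, delta = 0.2, 1e-5

lam, Q = np.linalg.eigh(A)
def quad_form(w):                          # w^T e^{-At} w via the spectral measure
    return np.sum(np.exp(-lam * t) * (Q.T @ w) ** 2)

# (u+dv)^T f(A)(u+dv) = u^T f u + 2*delta*(u^T f v) + delta^2*(v^T f v),
# so this centered quotient equals u^T f(A) v plus (delta/2)*(v^T f(A) v).
approx = (quad_form(u + delta * v) - quad_form(u)) / (2 * delta)

exact = u @ (Q * np.exp(-lam * t)) @ (Q.T @ v)   # u^T e^{-At} v directly
```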

Gaussian Quadrature. These two integrals can be approximated using Gaussian quadrature rules (G. Golub and G. Meurant '94). The nodes and weights are obtained by applying the Lanczos algorithm to A with starting vectors u and v to produce T, the tridiagonal matrix of recursion coefficients. The nodes are the eigenvalues of T, and the weights are the products of the first components of the left and right eigenvectors. We want rules for each Fourier component.
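For the symmetric case (v = u), the Golub-Meurant construction can be sketched directly: K steps of symmetric Lanczos yield the K×K tridiagonal T; its eigenvalues are the nodes, and the squared first components of its eigenvectors (scaled by ||u||²) are the weights. The `lanczos` routine below is a bare-bones illustration without reorthogonalization.

```python
import numpy as np

# Sketch of Golub-Meurant Gaussian quadrature for u^T e^{-At} u: run K steps of
# symmetric Lanczos on A with starting vector u; eigenvalues of the tridiagonal
# T are the nodes, squared first eigenvector components (times ||u||^2) are the
# weights.
def lanczos(A, u, K):
    n = len(u)
    alpha, beta = np.zeros(K), np.zeros(K - 1)
    q_prev = np.zeros(n)
    q = u / np.linalg.norm(u)
    b = 0.0
    for j in range(K):
        w = A @ q - b * q_prev
        alpha[j] = q @ w
        w -= alpha[j] * q
        if j < K - 1:
            b = np.linalg.norm(w)
            beta[j] = b
            q_prev, q = q, w / b
    return alpha, beta

rng = np.random.default_rng(2)
N, K, t = 60, 4, 0.5
B = rng.standard_normal((N, N))
A = (B + B.T) / (2 * np.sqrt(N))           # symmetric, modest spectral radius
u = rng.standard_normal(N)

alpha, beta = lanczos(A, u, K)
T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
nodes, S = np.linalg.eigh(T)
weights = (u @ u) * S[0, :] ** 2           # squared first components
quad = np.sum(weights * np.exp(-nodes * t))

lam, Q = np.linalg.eigh(A)                 # reference value of u^T e^{-At} u
exact = np.sum(np.exp(-lam * t) * (Q.T @ u) ** 2)
```

Even K = 4 nodes typically capture the smooth integrand e^{-λt} to many digits, which is why small K suffices in the methods below.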

What if A is a differential operator? If u = v = e_ω = e^{iωx}, then the recursion coefficients α_j, β_j are functions of ω. Let A = L(x,D) from before, with q(x) = 0. The recursion coefficients for a 2-node Gaussian rule can then be written in closed form in terms of averages of the coefficients (writing f̄ = Avg f for the mean of a function f). Similar formulas apply in higher dimensions.

Updating Coefficients for u ≠ v. A fast updating procedure produces the modified recursion coefficients generated by r_0 = u + δv from those generated by r_0 = u. This is particularly useful if the r_j, α_j, β_j are parameterized families and v is a fixed vector.

Krylov Subspace Spectral Methods. To compute the Fourier components of u(x,t_{n+1}): apply the symmetric Lanczos algorithm to L with starting vector e^{iωx}; use fast updating to obtain the modified recursion coefficients for the starting vector e^{iωx} + δu^n; approximate both resulting integrals by Gaussian quadrature; finally, combine the two quadrature approximations in the difference quotient to obtain each Fourier component of u^{n+1}.
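The steps above can be sketched for the simplest case K = 1, where each quadrature rule has a single node (the Rayleigh quotient of the starting vector). This is my own minimal reconstruction, not the talk's implementation: the operator is discretized by second-order finite differences (an illustrative choice), the fast-updating step is replaced by recomputing the perturbed quadrature directly, and the imaginary part of each component is obtained by perturbing i·e_ω as well.

```python
import numpy as np

# Minimal sketch (assumed setup) of a 1-node (K = 1) KSS time step for
# u_t = -L u with L u = -(p(x) u')' + q(x) u, periodic on [0, 2*pi).
N = 64
h = 2 * np.pi / N
x = h * np.arange(N)
p_half = 1.0 + 0.5 * np.sin(x + h / 2)     # leading coefficient at half-grid points
q = 1.0 + 0.5 * np.cos(x)                  # zero-order coefficient

# Symmetric positive-definite finite-difference discretization A of L.
A = np.zeros((N, N))
for i in range(N):
    A[i, i] = (p_half[i] + p_half[i - 1]) / h**2 + q[i]
    A[i, (i + 1) % N] = -p_half[i] / h**2
    A[i, (i - 1) % N] = -p_half[i - 1] / h**2

def quad1(w, dt):
    # 1-node Gaussian rule for w^H e^{-A dt} w: the node is the Rayleigh quotient.
    nrm2 = np.real(np.vdot(w, w))
    node = np.real(np.vdot(w, A @ w)) / nrm2
    return nrm2 * np.exp(-node * dt)

def kss_step(u, dt, delta=1e-6):
    omega = np.fft.fftfreq(N, d=1.0 / N)   # integer frequencies
    u_hat = np.zeros(N, dtype=complex)
    for k in range(N):
        e = np.exp(1j * omega[k] * x)
        base = quad1(e, dt)                # identical for starting vector 1j*e
        re = (quad1(e + delta * u, dt) - base) / (2 * delta)
        im = (quad1(1j * e + delta * u, dt) - base) / (2 * delta)
        u_hat[k] = re + 1j * im            # approximates e_omega^H e^{-A dt} u^n
    return np.real(np.fft.ifft(u_hat))

u0 = np.exp(np.sin(x))                     # smooth periodic initial data
dt, steps = 0.01, 10
u = u0.copy()
for _ in range(steps):
    u = kss_step(u, dt)

lam, Q = np.linalg.eigh(A)                 # reference: exact matrix exponential
u_exact = Q @ (np.exp(-lam * dt * steps) * (Q.T @ u0))
```

Note that the time step Δt here is far beyond any explicit-method CFL limit for this operator, yet the step remains stable, because each component is propagated through an exponential of its own node.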

Why Do It This Way? Other, more general Krylov subspace methods (e.g. M. Hochbruck and C. Lubich '96) use a single Krylov subspace for each time step, with u^n as the starting vector, to approximate exp(-LΔt)u^n. KSS methods instead obtain Fourier components from derivatives of frequency-dependent Krylov subspace bases in the direction of u^n. Thus each component receives individual attention, enhancing accuracy and stability.

Properties. They're high-order accurate: each Fourier component of u(x,t_{n+1}) is computed with local accuracy O(Δt^{2K}), where K is the number of nodes in the Gaussian rule. They're explicit but very stable: if p(x) is constant and q(x) is bandlimited, then for K = 1 the method is unconditionally stable, and for K = 2 the solution operator is bounded independently of Δt and N.

Demonstrating Stability. They do not experience the same difficulties with stiffness as other Krylov subspace methods, or the same weak instability as the unsmoothed Fourier method. Examples: contrasting Krylov subspace methods applied to a parabolic problem on different grids; Fourier and KSS methods applied to u_t = (sin(x)u)_x with smooth initial data until T = 5.

Properties, cont'd. They're efficient and scalable: the performance of a MATLAB implementation is comparable or superior to that of the built-in ODE solvers, and accuracy and efficiency scale to finer grids and higher spatial dimensions.

In the Limit: Derivatives of Moments! As δ → 0, each Fourier component approximates the Gâteaux derivative of the corresponding moment in the direction of u^n, and the K-node approximate solution has a corresponding closed form.

The Splitting Perspective. The derivatives of the nodes and weights with respect to δ are Fourier components of applications of pseudodifferential operators to u(x,t_n). The K-node approximate solution therefore has the form of an operator splitting in which each C_k is a constant-coefficient pseudodifferential operator of the same order as L(x,D), and positive semi-definite.

Simplest Splittings. The case K = 1 reduces to solving the averaged-coefficient problem exactly, after applying forward Euler with the residual operator. In the case K = 2, with p(x) constant, the operators V_k are second-order, and yet the approximate solution operator S_N(Δt) satisfies a bound in terms of a bounded operator B!

General Splittings. The operators C_k and V_k are defined in terms of derivatives of nodes and weights, which represent pseudodifferential operators. For each ω, let T_ω be the K×K Jacobi matrix output by Lanczos with initial vector e_ω. Derivatives of nodes: writing T_ω = QΛQ^T, with the nodes on the diagonal of Λ, the node derivatives are the diagonal of Q^T T'_ω Q, where T'_ω is available from the fast updating algorithm. Derivatives of weights: from the solution of systems of the form Aw = b, where A = (T_ω(1:K-1, 1:K-1) - λ_j I) + R_j for j = 1, ..., K, with each R_j a rank-one matrix.
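The node-derivative formula is just first-order perturbation theory for the eigenvalues of the Jacobi matrix, and can be verified against finite differences. The matrices below are random illustrative data, not the method's actual T_ω.

```python
import numpy as np

# Sketch: if T = Q Lambda Q^T and T' = dT/d(delta), the node derivatives are
# lambda_j' = q_j^T T' q_j, i.e. the diagonal of Q^T T' Q.
K = 5
rng = np.random.default_rng(4)
alpha = rng.standard_normal(K)
beta = np.abs(rng.standard_normal(K - 1)) + 0.5   # positive off-diagonals
T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

dT = rng.standard_normal((K, K))
dT = (dT + dT.T) / 2                       # symmetric perturbation direction

lam, Q = np.linalg.eigh(T)
dlam = np.diag(Q.T @ dT @ Q)               # analytic node derivatives

eps = 1e-7                                 # finite-difference check
lam_eps = np.linalg.eigvalsh(T + eps * dT)
fd = (lam_eps - lam) / eps
```

Jacobi matrices with positive off-diagonals have simple (distinct) eigenvalues, which is what makes this first-order formula well defined for every node.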

The Wave Equation. The integrands are obtained from the spectral decomposition of the propagator (P. Guidotti, JL and K. Sølna '06). For each Fourier component, O(Δt^{4K}) local accuracy! For K = 1, p(x) constant and q(x) bandlimited, the global error is 3rd-order in time, and the method is unconditionally stable!

Results for the Wave Equation. Comparison of the 2-node KSS method with the semi-implicit 2nd-order method of H.-O. Kreiss, N. Petersson and J. Yström, and the 4th-order explicit scheme of B. Gustafsson and E. Mossberg.

Discontinuous Coefficients and Data (figures).

The Modified Perona-Malik Equation. The Perona-Malik equation is a nonlinear diffusion equation used for image de-noising. It exhibits both forward and backward diffusion, and so is ill-posed, but it is surprisingly well-behaved numerically. A modification proposed by P. Guidotti weakens the nonlinearity slightly to obtain well-posedness while still de-noising. KSS methods can limit the effects of backward diffusion by truncating the recursion coefficients for selected frequencies.
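To illustrate the edge-preserving behavior being discussed, here is a 1-D explicit step of the classical Perona-Malik equation (not Guidotti's modification, and not a KSS discretization): the diffusivity g shrinks where the gradient is large, so noise in flat regions is smoothed while the edge survives. All parameters are illustrative.

```python
import numpy as np

# Illustrative 1-D Perona-Malik diffusion: u_t = (g(u_x^2) u_x)_x with
# g(s) = 1 / (1 + s/k^2), which diffuses smooth regions strongly and
# steep edges weakly.  Explicit time stepping, grid spacing h = 1.
def perona_malik_step(u, dt, k=0.2):
    ux = np.diff(u, append=u[-1:])         # one-sided differences, zero-flux end
    g = 1.0 / (1.0 + (ux / k) ** 2)        # edge-stopping diffusivity
    flux = g * ux
    return u + dt * np.diff(flux, prepend=flux[:1])

rng = np.random.default_rng(5)
n = 200
clean = np.where(np.arange(n) < n // 2, 0.0, 1.0)   # step edge
noisy = clean + 0.03 * rng.standard_normal(n)

u = noisy.copy()
for _ in range(50):
    u = perona_malik_step(u, dt=0.2)
```

Because g ≤ 1, the explicit step obeys the usual diffusion restriction dt ≤ h²/2; the ill-posedness the slide mentions shows up when local gradients exceed k, where the flux becomes a decreasing function of the gradient.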

De-Noising by Modified Perona-Malik (figures).

Handling Discontinuities. In progress: other bases (e.g. wavelets, multiwavelets) for problems with rough or discontinuous coefficients. Ideally, the trial functions should conform to the geometry of the symbol as much as possible. Encouraging results: Freud reprojection (A. Gelb and J. Tanner '06; code by A. Nelson) to deal with the Gibbs phenomenon improves accuracy for the heat equation and stability for the wave equation.

Preconditioning Through Homogenizing Transformations. KSS methods are most accurate when the coefficients are smooth, so that the trial functions are also approximate eigenfunctions. The problem can be preconditioned by applying unitary similarity transformations that homogenize the coefficients. In 1-D, the leading coefficient p(x) is easily homogenized by a change of the independent variable; to make the transformation unitary, a diagonal transformation is added.

Continuing the Process. To homogenize q(x), we use a transformation of the form L_1(x,D) = U(x,D)* L(x,D) U(x,D), where D^+ denotes the pseudo-inverse of D. This introduces variable coefficients of order 2, but the process can be repeated. It generalizes to higher dimensions and can be applied efficiently using fast FIO algorithms (Candès, Demanet and Ying '06).

Smoothing. These transformations yield an operator that is nearly constant-coefficient. The gain in accuracy is comparable to that achieved by deferred correction with an analytically exact residual, which KSS methods naturally provide!

In Progress: Systems of Equations. The generalization to systems of equations is straightforward. For a system of the form u_t = A(x,D)u, where each entry of A(x,D) is a differential operator, we can use trial functions v_j e^{iωx}, where v_j is an eigenvector of Avg_x A(x,ω). For systems arising from acoustics, in which the solution operator can be expressed in terms of products of entries of A(x,D), O(Δt^{4K}) accuracy is possible. More on this topic at ICNAAM '07.

Conclusions. Krylov subspace spectral methods are showing more promise as their development progresses: applicability to problems with rough behavior, stability like that of implicit methods, and competitive performance and scalability. Through the splitting perspective, they provide an effective means of stably extending other solution methods to variable-coefficient problems. Still much to do, especially implementation with other bases of trial functions.

References. JL, Krylov Subspace Spectral Methods for Variable-Coefficient Initial-Boundary Value Problems, Electron. Trans. Numer. Anal. 20 (2005), pp. 212-234. P. Guidotti, JL and K. Sølna, Analysis of 1-D Wave Propagation in Inhomogeneous Media, Numer. Funct. Anal. Optim. 27 (2006), pp. 25-55. JL, Practical Implementation of Krylov Subspace Spectral Methods, J. Sci. Comput. 32 (2007), pp. 449-476. JL, Derivation of High-Order Spectral Methods for Time-Dependent PDE Using Modified Moments, submitted.

Contact Info. James Lambers, lambers@stanford.edu, http://www.stanford.edu/~lambers/