Golub-Kahan iterative bidiagonalization and determining the noise level in the data


1 Golub-Kahan iterative bidiagonalization and determining the noise level in the data. Iveta Hnětynková, Martin Plešinger, Zdeněk Strakoš (Charles University, Prague; Academy of Sciences of the Czech Republic, Prague; Technical University, Liberec), with thanks to P. C. Hansen, M. Kilmer and many others. SIAM ALA, Monterey, October

2 A six-ton, large-scale, real-world ill-posed problem:

3 Solving large-scale discrete ill-posed problems is frequently based on model reduction via orthogonal projections onto Krylov subspaces; see, e.g., hybrid methods. This can be viewed as approximating a Riemann-Stieltjes distribution function via matching moments.

4 Consider the Riemann-Stieltjes distribution function ω(λ) with the n points of increase associated with the HPD matrix B and the normalized initial vector s (or with the transfer function given by the Laplace transform of a linear dynamical system determined by B, s). Then

s^* (\lambda I - B)^{-1} s \;=\; \sum_{j=1}^{n} \frac{\omega_j}{\lambda - \lambda_j} \;\equiv\; F_n(\lambda),

where λ_j, j = 1, ..., n, denote the eigenvalues of B and ω_j the squared size of the component of s in the corresponding invariant subspace.

5 The continued fraction on the right-hand side is given by

F_n(\lambda) \;\equiv\; \frac{R_n(\lambda)}{P_n(\lambda)} \;=\; \cfrac{1}{\lambda - \gamma_1 - \cfrac{\delta_2^2}{\lambda - \gamma_2 - \cfrac{\delta_3^2}{\lambda - \gamma_3 - \cdots - \cfrac{\delta_n^2}{\lambda - \gamma_n}}}}

and the entries γ_1, ..., γ_n and δ_2, ..., δ_n form the Jacobi matrix

6 

T_n = \begin{pmatrix} \gamma_1 & \delta_2 & & \\ \delta_2 & \gamma_2 & \ddots & \\ & \ddots & \ddots & \delta_n \\ & & \delta_n & \gamma_n \end{pmatrix}, \qquad \delta_l > 0, \quad l = 2, \dots, n.

Consider the kth Gauss-Christoffel quadrature approximation ω^{(k)}(λ) of the Riemann-Stieltjes distribution function ω(λ). Its algebraic degree is 2k − 1, i.e., it matches the first 2k moments

\xi_{l-1} = \int \lambda^{l-1}\, d\omega(\lambda) = \sum_{j=1}^{k} \omega_j^{(k)} \bigl(\lambda_j^{(k)}\bigr)^{l-1}, \qquad l = 1, \dots, 2k.

7 The nodes and weights of ω^{(k)}(λ) are given by the eigenvalues and the corresponding squared first elements of the normalized eigenvectors of T_k. Expanding the continued fraction F_n(λ) in decreasing powers of λ and approximating it by its kth convergent F_k(λ) gives

F_n(\lambda) = \sum_{l=1}^{2k} \frac{\xi_{l-1}}{\lambda^{l}} + O\!\left(\frac{1}{\lambda^{2k+1}}\right) = F_k(\lambda) + O\!\left(\frac{1}{\lambda^{2k+1}}\right).

Here F_k(λ) approximates F_n(λ) with the error proportional to λ^{−(2k+1)}, which represents the minimal partial realization matching the first 2k moments, cf. [Stieltjes], [Chebyshev].

8 Discrete ill-posed problem, the smallest node and weight in approximation of ω(λ). [Figure: nodes σ_j² with weights (b/β_1, u_j)² of ω(λ), together with the nodes (θ_1^{(k)})² and weights (p_1^{(k)}, e_1)²; the leftmost node carries the weight δ² given by the noise.]

9 Outline
1. Problem formulation
2. Golub-Kahan iterative bidiagonalization, Lanczos tridiagonalization, and approximation of the Riemann-Stieltjes distribution function
3. Propagation of the noise in the Golub-Kahan bidiagonalization
4. Determination of the noise level
5. Numerical illustration
6. Summary and future work

10 Consider an ill-posed square nonsingular linear algebraic system

A x \approx b, \qquad A \in \mathbb{R}^{n \times n}, \quad b \in \mathbb{R}^{n},

with the right-hand side corrupted by white noise,

b = b^{\mathrm{exact}} + b^{\mathrm{noise}} \neq 0, \qquad \|b^{\mathrm{exact}}\| \gg \|b^{\mathrm{noise}}\|,

and the goal to approximate x^{\mathrm{exact}} \equiv A^{-1} b^{\mathrm{exact}}. The noise level

\delta_{\mathrm{noise}} \equiv \frac{\|b^{\mathrm{noise}}\|}{\|b^{\mathrm{exact}}\|} \ll 1.

11 Properties (assumptions):
- the matrices A, A^T, A A^T have a smoothing property;
- the left singular vectors u_j of A represent increasing frequencies as j increases;
- the system A x = b^exact satisfies the discrete Picard condition.

Discrete Picard condition (DPC): on average, the components (b^exact, u_j) of the true right-hand side b^exact in the left singular subspaces of A decay faster than the singular values σ_j of A, j = 1, ..., n.

White noise: the components (b^noise, u_j), j = 1, ..., n, do not exhibit any trend.

12 Problem Shaw [Hansen, Regularization Tools]: noise level, singular values, and DPC. [Figure: singular values σ_j and components (b, u_j) for several noise levels in double precision arithmetic (left); σ_j and (b^exact, u_j) in high precision arithmetic (right), plotted against the singular value number.]

13 Regularization is used to suppress the effect of errors in the data and to extract the essential information about the solution. In hybrid methods, see [O'Leary, Simmons 81], [Hansen 98], [Fierro, Golub, Hansen, O'Leary 97], [Kilmer, O'Leary 01], [Kilmer, Hansen, Español 06], the outer bidiagonalization is combined with an inner regularization of the projected small problem (i.e., of the reduced model), e.g., truncated SVD (TSVD) or Tikhonov regularization. The bidiagonalization is stopped when the regularized solution of the reduced model satisfies a selected stopping criterion.
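The inner regularization step of such a hybrid method can be sketched in a few lines: apply Tikhonov regularization to the projected bidiagonal least squares problem min_y ‖L_{k+} y − β_1 e_1‖² + λ²‖y‖², with the regularized iterate then given by x_k = W_k y. The function name below is ours and the bidiagonal matrix in the usage is a random stand-in for the GK output, not any of the cited codes.

```python
import numpy as np

def tikhonov_projected(Lkp, beta1, lam):
    """Tikhonov regularization of the projected (reduced) model:
    solve min_y ||Lkp y - beta1 e1||^2 + lam^2 ||y||^2,
    where Lkp is the (k+1) x k lower bidiagonal matrix from GK."""
    kp1, k = Lkp.shape
    rhs = np.zeros(kp1)
    rhs[0] = beta1
    # Stacked least squares formulation: [Lkp; lam I] y ~ [rhs; 0]
    M = np.vstack([Lkp, lam * np.eye(k)])
    d = np.concatenate([rhs, np.zeros(k)])
    y, *_ = np.linalg.lstsq(M, d, rcond=None)
    return y
```

The stacked formulation avoids forming the normal equations explicitly; the solution satisfies (L_{k+}^T L_{k+} + λ² I) y = β_1 L_{k+}^T e_1.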

14 Stopping criteria are typically based, among others (see [Björck 88], [Björck, Grimme, Van Dooren 94]), on:
- estimation of the L-curve [Calvetti, Golub, Reichel 99], [Calvetti, Morigi, Reichel, Sgallari 00], [Calvetti, Reichel 04];
- estimation of the distance between the exact and the regularized solution [O'Leary 01];
- the discrepancy principle [Morozov 66], [Morozov 84];
- cross-validation methods [Chung, Nagy, O'Leary 04], [Golub, von Matt 97], [Nguyen, Milanfar, Golub 01].

For an extensive study and comparison see [Hansen 98], [Kilmer, O'Leary 01].

15 Stopping criteria based on information from residual vectors: a vector x̂ is a good approximation to x^exact = A^{−1} b^exact if the corresponding residual approximates the (white) noise in the data,

\hat r \equiv b - A \hat x \approx b^{\mathrm{noise}}.

The behavior of r̂ can be expressed in the frequency domain using the discrete Fourier transform, see [Rust 98], [Rust 00], [Rust, O'Leary 08], or the discrete cosine transform, see [Hansen, Kilmer, Kjeldsen 06], and then analyzed using statistical tools (cumulative periodograms).

16 This talk: under the given (quite natural) assumptions, the Golub-Kahan iterative bidiagonalization reveals the noise level δ_noise.

17 Outline
1. Problem formulation
2. Golub-Kahan iterative bidiagonalization, Lanczos tridiagonalization, and approximation of the Riemann-Stieltjes distribution function
3. Propagation of the noise in the Golub-Kahan bidiagonalization
4. Determination of the noise level
5. Numerical illustration
6. Summary and future work

18 Golub-Kahan iterative bidiagonalization (GK) of A: given w_0 = 0, s_1 = b/β_1, where β_1 = ‖b‖, for j = 1, 2, ...

\alpha_j w_j = A^T s_j - \beta_j w_{j-1}, \qquad \|w_j\| = 1,
\beta_{j+1} s_{j+1} = A w_j - \alpha_j s_j, \qquad \|s_{j+1}\| = 1.

Let S_k = [s_1, ..., s_k], W_k = [w_1, ..., w_k] be the associated matrices with orthonormal columns.
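The recurrences above can be sketched directly in NumPy. This is an illustrative implementation, not the authors' code; full reorthogonalization is added for numerical stability, which the plain recurrences do not include.

```python
import numpy as np

def golub_kahan(A, b, k):
    """Golub-Kahan iterative bidiagonalization of A, started from b.

    Returns S (n x (k+1)) and W (n x k) with orthonormal columns, and the
    normalization coefficients alphas = (alpha_1, ..., alpha_k) and
    betas = (beta_1, ..., beta_{k+1}) with betas[0] = beta_1 = ||b||.
    """
    n = b.shape[0]
    S = np.zeros((n, k + 1))
    W = np.zeros((n, k))
    alphas = np.zeros(k)
    betas = np.zeros(k + 1)
    betas[0] = np.linalg.norm(b)
    S[:, 0] = b / betas[0]
    for j in range(k):
        # alpha_j w_j = A^T s_j - beta_j w_{j-1}
        w = A.T @ S[:, j]
        if j > 0:
            w -= betas[j] * W[:, j - 1]
        w -= W[:, :j] @ (W[:, :j].T @ w)          # reorthogonalize against W
        alphas[j] = np.linalg.norm(w)
        W[:, j] = w / alphas[j]
        # beta_{j+1} s_{j+1} = A w_j - alpha_j s_j
        s = A @ W[:, j] - alphas[j] * S[:, j]
        s -= S[:, :j + 1] @ (S[:, :j + 1].T @ s)  # reorthogonalize against S
        betas[j + 1] = np.linalg.norm(s)
        S[:, j + 1] = s / betas[j + 1]
    return S, W, alphas, betas
```

The returned coefficients are exactly the entries of the bidiagonal matrices L_k and L_{k+} introduced on the next slide.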

19 Denote

L_k = \begin{pmatrix} \alpha_1 & & & \\ \beta_2 & \alpha_2 & & \\ & \ddots & \ddots & \\ & & \beta_k & \alpha_k \end{pmatrix}, \qquad L_{k+} = \begin{pmatrix} L_k \\ e_k^T \beta_{k+1} \end{pmatrix},

the lower bidiagonal matrices containing the normalization coefficients. Then GK can be written in matrix form as

A^T S_k = W_k L_k^T, \qquad A W_k = [\,S_k,\, s_{k+1}\,] \, L_{k+} = S_{k+1} L_{k+}.

20 GK is closely related to the Lanczos tridiagonalization of the symmetric matrix A A^T with the starting vector s_1 = b/β_1,

A A^T S_k = S_k T_k + \alpha_k \beta_{k+1} s_{k+1} e_k^T,

T_k = L_k L_k^T = \begin{pmatrix} \alpha_1^2 & \alpha_1 \beta_2 & & \\ \alpha_1 \beta_2 & \alpha_2^2 + \beta_2^2 & \ddots & \\ & \ddots & \ddots & \alpha_{k-1} \beta_k \\ & & \alpha_{k-1} \beta_k & \alpha_k^2 + \beta_k^2 \end{pmatrix},

i.e., the matrix L_k from GK represents a Cholesky factor of the symmetric tridiagonal matrix T_k from the Lanczos process.
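The connection can be verified numerically: a few GK steps on a random matrix reproduce the Lanczos relation for A A^T with T_k = L_k L_k^T. This is an illustrative check of the identities above (for so few steps on a well-conditioned random matrix, no reorthogonalization is needed).

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 15, 5
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

# Plain GK recurrences (no reorthogonalization)
S = np.zeros((n, k + 1)); W = np.zeros((n, k))
al = np.zeros(k); be = np.zeros(k + 1)
be[0] = np.linalg.norm(b); S[:, 0] = b / be[0]
for j in range(k):
    w = A.T @ S[:, j]
    if j > 0:
        w -= be[j] * W[:, j - 1]
    al[j] = np.linalg.norm(w); W[:, j] = w / al[j]
    s = A @ W[:, j] - al[j] * S[:, j]
    be[j + 1] = np.linalg.norm(s); S[:, j + 1] = s / be[j + 1]

L = np.diag(al) + np.diag(be[1:k], -1)  # Cholesky factor L_k
T = L @ L.T                              # Lanczos matrix T_k
```

The tridiagonal entries of T agree with the displayed formula, e.g. T[0,0] = α_1² and T[1,0] = α_1 β_2.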

21 Approximation of the distribution function: the Lanczos tridiagonalization of the given (SPD) matrix B ∈ R^{n×n} generates at each step k a non-decreasing piecewise constant distribution function ω^{(k)}, with the nodes being the (distinct) eigenvalues of the Lanczos matrix T_k and the weights ω_j^{(k)} being the squared first entries of the corresponding normalized eigenvectors [Hestenes, Stiefel 52]. The distribution functions ω^{(k)}(λ), k = 1, 2, ..., represent Gauss-Christoffel quadrature (i.e., minimal partial realization) approximations of the distribution function ω(λ), [Hestenes, Stiefel 52], [Fischer 96], [Meurant, Strakoš 06].

22 Consider the SVD

L_k = P_k \Theta_k Q_k^T, \qquad P_k = [p_1^{(k)}, \dots, p_k^{(k)}], \quad Q_k = [q_1^{(k)}, \dots, q_k^{(k)}], \quad \Theta_k = \mathrm{diag}\bigl(\theta_1^{(k)}, \dots, \theta_k^{(k)}\bigr),

with the singular values ordered increasingly, 0 < θ_1^{(k)} < ... < θ_k^{(k)}. Then

T_k = L_k L_k^T = P_k \Theta_k^2 P_k^T

is the spectral decomposition of T_k; the (θ_l^{(k)})² are its eigenvalues (the Ritz values of A A^T) and the p_l^{(k)} its eigenvectors (which determine the Ritz vectors of A A^T), l = 1, ..., k.
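A quick numerical check of this identity, with a random lower bidiagonal matrix standing in for the GK output L_k:

```python
import numpy as np

rng = np.random.default_rng(2)
k = 6
alpha = rng.uniform(0.5, 2.0, k)       # diagonal of L_k
beta = rng.uniform(0.5, 2.0, k - 1)    # subdiagonal of L_k
L = np.diag(alpha) + np.diag(beta, -1)

P, theta, QT = np.linalg.svd(L)        # L_k = P_k Theta_k Q_k^T (theta descending)
T = L @ L.T                             # Lanczos matrix T_k
evals, evecs = np.linalg.eigh(T)        # eigenvalues of T_k, ascending
```

The eigenvalues of T_k are the squared singular values of L_k, and P_k diagonalizes T_k, exactly as stated above.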

23 Summarizing: the GK bidiagonalization generates at each step k the distribution function ω^{(k)}(λ) with nodes (θ_l^{(k)})² and weights ω_l^{(k)} = (p_l^{(k)}, e_1)² that approximates the distribution function ω(λ) with nodes σ_j² and weights ω_j = (b/β_1, u_j)², where (σ_j², u_j) are the eigenpairs of A A^T, j = n, ..., 1. Note that unlike the Ritz values (θ_l^{(k)})², the squared singular values σ_j² are enumerated in descending order.

24 Outline
1. Problem formulation
2. Golub-Kahan iterative bidiagonalization, Lanczos tridiagonalization, and approximation of the Riemann-Stieltjes distribution function
3. Propagation of the noise in the Golub-Kahan bidiagonalization
4. Determination of the noise level
5. Numerical illustration
6. Summary and future work

25 GK starts with the normalized noisy right-hand side s_1 = b/‖b‖. Consequently, the vectors s_j contain information about the noise. Can this information be used to determine the level of the noise in the observation vector b? Consider the problem Shaw from [Hansen, Regularization Tools] (computed via [A,b_exact,x] = shaw(400)) with a noisy right-hand side (the noise was artificially added using the MATLAB function randn). As an example we set the noise level δ_noise ≡ ‖b^noise‖ / ‖b^exact‖.
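The artificial noise used in such experiments can be generated by drawing white noise and rescaling it to the prescribed noise level, mirroring the MATLAB randn construction. A minimal NumPy sketch (the function name is ours):

```python
import numpy as np

def add_white_noise(b_exact, delta_noise, rng):
    """Return b = b_exact + b_noise with ||b_noise|| / ||b_exact|| = delta_noise."""
    e = rng.standard_normal(b_exact.shape)
    b_noise = e * (delta_noise * np.linalg.norm(b_exact) / np.linalg.norm(e))
    return b_exact + b_noise, b_noise
```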

26 Components of several bidiagonalization vectors s_j computed via GK with double reorthogonalization. [Figure: panels showing s_1, s_6, s_11, s_16, s_18, s_19, s_20, s_21, and others.]

27 The first 80 spectral coefficients of the vectors s_j in the basis of the left singular vectors u_j of A. [Figure: panels showing U^T s_1, U^T s_6, U^T s_11, U^T s_16, U^T s_18, U^T s_19, U^T s_20, U^T s_21, and others.]

28 Signal space - noise space diagrams. [Figure: panels s_1 → s_2, s_5 → s_6, s_6 → s_7, s_10 → s_11, s_13 → s_14, s_15 → s_16, s_16 → s_17, s_17 → s_18, s_18 → s_19, s_19 → s_20, s_20 → s_21, s_21 → s_22.] The panels show s_k (triangle) and s_{k+1} (circle) in the signal space span{u_1, ..., u_{k+1}} (horizontal axis) and the noise space span{u_{k+2}, ..., u_n} (vertical axis).

29 The noise is amplified with the ratio α_k/β_{k+1}. GK for the spectral components (with A = U Σ V^T):

\alpha_1 (V^T w_1) = \Sigma (U^T s_1), \qquad \beta_2 (U^T s_2) = \Sigma (V^T w_1) - \alpha_1 (U^T s_1),

and for k = 2, 3, ...

\alpha_k (V^T w_k) = \Sigma (U^T s_k) - \beta_k (V^T w_{k-1}),
\beta_{k+1} (U^T s_{k+1}) = \Sigma (V^T w_k) - \alpha_k (U^T s_k).

30 Since the dominance in Σ(U^T s_k) and in (V^T w_{k-1}) is shifted by one component, in

\alpha_k (V^T w_k) = \Sigma (U^T s_k) - \beta_k (V^T w_{k-1})

one cannot expect a significant cancellation, and therefore α_k ≈ β_k. In contrast, Σ(V^T w_k) and (U^T s_k) do exhibit dominance in the direction of the same components. If this dominance is strong enough, then the required orthogonality of s_{k+1} and s_k in

\beta_{k+1} (U^T s_{k+1}) = \Sigma (V^T w_k) - \alpha_k (U^T s_k)

cannot be achieved without a significant cancellation, and one can expect β_{k+1} ≪ α_k.

31 Absolute values of the first 25 components of Σ(V^T w_k), α_k (U^T s_k), and β_{k+1} (U^T s_{k+1}) for k = 7 with the ratio β_8/α_7 (left) and for k = 12 with the ratio β_13/α_12 (right); problem Shaw with the noise level δ_noise. [Figure: two panels comparing |Σ(V^T w_k)|, |α_k (U^T s_k)|, and |β_{k+1} (U^T s_{k+1})|.]

32 Outline
1. Problem formulation
2. Golub-Kahan iterative bidiagonalization, Lanczos tridiagonalization, and approximation of the Riemann-Stieltjes distribution function
3. Propagation of the noise in the Golub-Kahan bidiagonalization
4. Determination of the noise level
5. Numerical illustration
6. Summary and future work

33 Depending on the noise level, the smaller nodes of ω(λ) are completely dominated by noise, i.e., there exists an index J_noise such that for j ≥ J_noise

(b/\beta_1, u_j)^2 \approx (b^{\mathrm{noise}}/\beta_1, u_j)^2,

and the weight of the set of the associated nodes is given by

\delta^2 \equiv \sum_{j=J_{\mathrm{noise}}}^{n} (b^{\mathrm{noise}}/\beta_1, u_j)^2.

Recall that the large nodes σ_1², σ_2², ... are well separated (relative to the small ones) and their weights on average decrease faster than σ_1², σ_2², ..., see (DPC). Therefore the large nodes essentially control the behavior of the early stages of the Lanczos process.

34 At any iteration step, the weight corresponding to the smallest node (θ_1^{(k)})² must be larger than the sum of the weights of all σ_j² smaller than this (θ_1^{(k)})², see [Fischer, Freund 94]. As k increases, (θ_1^{(k)})² eventually approaches (or becomes smaller than) the node σ²_{J_noise}, and its weight becomes

(p_1^{(k)}, e_1)^2 \approx \delta^2 \approx \delta_{\mathrm{noise}}^2.

The weight (p_1^{(k)}, e_1)² corresponding to the smallest Ritz value (θ_1^{(k)})² is strictly decreasing. At some iteration step it sharply starts to (almost) stagnate at a level close to the squared noise level δ_noise².
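The observation above suggests a simple estimator: run GK, and at each step take the first component of the left singular vector of L_k belonging to its smallest singular value; its absolute value approximates δ_noise once the stagnation sets in. The sketch below demonstrates this on a synthetic, severely ill-posed problem (our stand-in for Shaw; the function name and the test problem are ours, not from the talk).

```python
import numpy as np

def noise_level_estimates(A, b, kmax):
    """For k = 1..kmax run GK (with reorthogonalization) and return
    |(p_1^{(k)}, e_1)|, the square root of the weight of the smallest
    Ritz value -- the quantity that stagnates near delta_noise."""
    n = b.shape[0]
    S = np.zeros((n, kmax + 1)); W = np.zeros((n, kmax))
    al = np.zeros(kmax); be = np.zeros(kmax + 1)
    be[0] = np.linalg.norm(b); S[:, 0] = b / be[0]
    est = []
    for j in range(kmax):
        w = A.T @ S[:, j]
        if j > 0:
            w -= be[j] * W[:, j - 1]
        w -= W[:, :j] @ (W[:, :j].T @ w)          # reorthogonalize
        al[j] = np.linalg.norm(w); W[:, j] = w / al[j]
        s = A @ W[:, j] - al[j] * S[:, j]
        s -= S[:, :j + 1] @ (S[:, :j + 1].T @ s)  # reorthogonalize
        be[j + 1] = np.linalg.norm(s); S[:, j + 1] = s / be[j + 1]
        k = j + 1
        L = np.diag(al[:k]) + np.diag(be[1:k], -1)  # L_k
        P, theta, _ = np.linalg.svd(L)              # theta descending
        est.append(abs(P[0, -1]))                   # |(p_1^{(k)}, e_1)|
    return np.array(est)

# Synthetic severely ill-posed problem (a stand-in for Shaw):
rng = np.random.default_rng(1)
n = 40
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = np.logspace(0, -12, n)
A = U @ np.diag(sigma) @ V.T
b_exact = U @ (sigma ** 1.3)                  # satisfies the DPC
noise = rng.standard_normal(n)
delta_noise = 1e-6
b = b_exact + noise * (delta_noise * np.linalg.norm(b_exact) / np.linalg.norm(noise))
est = noise_level_estimates(A, b, 30)          # decays, then stagnates near delta_noise
```

The sequence est starts at 1 (for k = 1 the weight is trivially one), decreases, and then stagnates near δ_noise, which is how the estimate is read off in practice.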

35 Square roots of the weights (p_1^{(k)}, e_1)², k = 1, 2, ..., problem Shaw with the noise level δ_noise. [Figure: sqrt of the weight versus the iteration number k; the curve stagnates near δ_noise from k_noise = 18 on.]

36 Square roots of the weights (p_1^{(k)}, e_1)², k = 1, 2, ..., problem Shaw with the noise level δ_noise = 10^{−4}. [Figure: sqrt of the weight versus the iteration number k; the curve stagnates near δ_noise from k_noise = 8 on.]

37 The smallest node and weight in approximation of ω(λ). [Figure: nodes σ_j² with weights (b/β_1, u_j)² of ω(λ), together with the nodes (θ_1^{(k)})² and weights (p_1^{(k)}, e_1)²; the leftmost node carries the weight δ² given by the noise.]

38 The smallest node and weight in approximation of ω(λ), continued. [Figure: as on the previous slide, at a later stage of the iteration.]

39 Outline
1. Problem formulation
2. Golub-Kahan iterative bidiagonalization, Lanczos tridiagonalization, and approximation of the Riemann-Stieltjes distribution function
3. Propagation of the noise in the Golub-Kahan bidiagonalization
4. Determination of the noise level
5. Numerical illustration
6. Summary and future work

40 Image deblurring problem with noise level δ_noise; the exact solution x^exact (left) and the noisy right-hand side b^exact + b^noise (right).

41 Square roots of the weights (p_1^{(k)}, e_1)², k = 1, 2, ... (top) and the error history of the LSQR solutions ‖x_k^LSQR − x^exact‖ (bottom). [Figure: the weight curve stagnates near δ_noise = ‖b^noise‖/‖b^exact‖; the error history attains its minimum at k = 41.]
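The error history can be reproduced in spirit on a small synthetic ill-posed problem (our stand-in, not the deblurring data): the LSQR iterate is x_k = W_k y_k with y_k solving the projected least squares problem min ‖L_{k+} y − β_1 e_1‖, and the error exhibits the typical semiconvergence: it decreases while the signal is being captured, then grows once the noise is amplified.

```python
import numpy as np

# Synthetic ill-posed problem
rng = np.random.default_rng(4)
n = 40
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = np.logspace(0, -10, n)
A = U @ np.diag(sigma) @ V.T
x_exact = V @ (sigma ** 0.5)                  # smooth-ish exact solution
b_exact = A @ x_exact                          # satisfies the DPC
e = rng.standard_normal(n)
b = b_exact + e * (1e-2 * np.linalg.norm(b_exact) / np.linalg.norm(e))

# GK with reorthogonalization; LSQR iterate from the projected problem
kmax = 25
S = np.zeros((n, kmax + 1)); W = np.zeros((n, kmax))
al = np.zeros(kmax); be = np.zeros(kmax + 1)
be[0] = np.linalg.norm(b); S[:, 0] = b / be[0]
errors = []
for j in range(kmax):
    w = A.T @ S[:, j]
    if j > 0:
        w -= be[j] * W[:, j - 1]
    w -= W[:, :j] @ (W[:, :j].T @ w)
    al[j] = np.linalg.norm(w); W[:, j] = w / al[j]
    s = A @ W[:, j] - al[j] * S[:, j]
    s -= S[:, :j + 1] @ (S[:, :j + 1].T @ s)
    be[j + 1] = np.linalg.norm(s); S[:, j + 1] = s / be[j + 1]
    k = j + 1
    Lkp = np.zeros((k + 1, k))
    Lkp[:k, :k] = np.diag(al[:k]) + np.diag(be[1:k], -1)
    Lkp[k, k - 1] = be[k]
    rhs = np.zeros(k + 1); rhs[0] = be[0]
    y, *_ = np.linalg.lstsq(Lkp, rhs, rcond=None)  # LSQR iterate x_k = W_k y
    errors.append(np.linalg.norm(W[:, :k] @ y - x_exact))
errors = np.array(errors)
```

The interior minimum of `errors` marks the best stopping index, just as the minimal-error iteration does in the figure.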

42 The best LSQR reconstruction x_41^LSQR (left) and the corresponding componentwise error x^exact − x_41^LSQR (right). GK without any reorthogonalization!

43 Outline
1. Problem formulation
2. Golub-Kahan iterative bidiagonalization, Lanczos tridiagonalization, and approximation of the Riemann-Stieltjes distribution function
3. Propagation of the noise in the Golub-Kahan bidiagonalization
4. Determination of the noise level
5. Numerical illustration
6. Summary and future work

44 Message: using GK, information about the noise can be obtained in a straightforward way.

Future work: large-scale problems; behavior in finite precision arithmetic (GK without reorthogonalization); regularization; denoising; colored noise.

45 References
- Golub, Kahan: Calculating the singular values and pseudo-inverse of a matrix, SIAM J. Numer. Anal. Ser. B 2.
- Hansen: Rank-deficient and discrete ill-posed problems, SIAM Monographs on Mathematical Modeling and Computation.
- Hansen, Kilmer, Kjeldsen: Exploiting residual information in the parameter choice for discrete ill-posed problems, BIT.
- Hnětynková, Strakoš: Lanczos tridiagonalization and core problems, LAA.
- Meurant, Strakoš: The Lanczos and CG algorithms in finite precision arithmetic, Acta Numerica.
- Paige, Strakoš: Core problems in linear algebraic systems, SIMAX.
- Rust: Truncating the SVD for ill-posed problems, Technical Report.
- Rust, O'Leary: Residual periodograms for choosing regularization parameters for ill-posed problems, Inverse Problems.
- Hnětynková, Plešinger, Strakoš: The regularizing effect of the Golub-Kahan iterative bidiagonalization and revealing the noise level, BIT.

46 Main message: whenever you see a blurred elephant which is a bit too noisy, the best thing is to apply the GK iterative bidiagonalization. Full version of the talk can be found at

47 Thank you for your kind attention!


More information

Comparison of A-Posteriori Parameter Choice Rules for Linear Discrete Ill-Posed Problems

Comparison of A-Posteriori Parameter Choice Rules for Linear Discrete Ill-Posed Problems Comparison of A-Posteriori Parameter Choice Rules for Linear Discrete Ill-Posed Problems Alessandro Buccini a, Yonggi Park a, Lothar Reichel a a Department of Mathematical Sciences, Kent State University,

More information

Efficient Estimation of the A-norm of the Error in the Preconditioned Conjugate Gradient Method

Efficient Estimation of the A-norm of the Error in the Preconditioned Conjugate Gradient Method Efficient Estimation of the A-norm of the Error in the Preconditioned Conjugate Gradient Method Zdeněk Strakoš and Petr Tichý Institute of Computer Science AS CR, Technical University of Berlin. International

More information

Generalized MINRES or Generalized LSQR?

Generalized MINRES or Generalized LSQR? Generalized MINRES or Generalized LSQR? Michael Saunders Systems Optimization Laboratory (SOL) Institute for Computational Mathematics and Engineering (ICME) Stanford University New Frontiers in Numerical

More information

APPLIED NUMERICAL LINEAR ALGEBRA

APPLIED NUMERICAL LINEAR ALGEBRA APPLIED NUMERICAL LINEAR ALGEBRA James W. Demmel University of California Berkeley, California Society for Industrial and Applied Mathematics Philadelphia Contents Preface 1 Introduction 1 1.1 Basic Notation

More information

THE SINGULAR VALUE DECOMPOSITION MARKUS GRASMAIR

THE SINGULAR VALUE DECOMPOSITION MARKUS GRASMAIR THE SINGULAR VALUE DECOMPOSITION MARKUS GRASMAIR 1. Definition Existence Theorem 1. Assume that A R m n. Then there exist orthogonal matrices U R m m V R n n, values σ 1 σ 2... σ p 0 with p = min{m, n},

More information

On the influence of eigenvalues on Bi-CG residual norms

On the influence of eigenvalues on Bi-CG residual norms On the influence of eigenvalues on Bi-CG residual norms Jurjen Duintjer Tebbens Institute of Computer Science Academy of Sciences of the Czech Republic duintjertebbens@cs.cas.cz Gérard Meurant 30, rue

More information

Dedicated to Gérard Meurant on the occasion of his 60th birthday.

Dedicated to Gérard Meurant on the occasion of his 60th birthday. SIMPLE SQUARE SMOOTHING REGULARIZATION OPERATORS LOTHAR REICHEL AND QIANG YE Dedicated to Gérard Meurant on the occasion of his 60th birthday. Abstract. Tikhonov regularization of linear discrete ill-posed

More information

On the properties of Krylov subspaces in finite precision CG computations

On the properties of Krylov subspaces in finite precision CG computations On the properties of Krylov subspaces in finite precision CG computations Tomáš Gergelits, Zdeněk Strakoš Department of Numerical Mathematics, Faculty of Mathematics and Physics, Charles University in

More information

Global Golub Kahan bidiagonalization applied to large discrete ill-posed problems

Global Golub Kahan bidiagonalization applied to large discrete ill-posed problems Global Golub Kahan bidiagonalization applied to large discrete ill-posed problems A. H. Bentbib Laboratory LAMAI, FSTG, University of Cadi Ayyad, Marrakesh, Morocco. M. El Guide Laboratory LAMAI, FSTG,

More information

FGMRES for linear discrete ill-posed problems

FGMRES for linear discrete ill-posed problems FGMRES for linear discrete ill-posed problems Keiichi Morikuni Department of Informatics, School of Multidisciplinary Sciences, The Graduate University for Advanced Studies, Sokendai, 2-1-2 Hitotsubashi,

More information

Automatic stopping rule for iterative methods in discrete ill-posed problems

Automatic stopping rule for iterative methods in discrete ill-posed problems Comp. Appl. Math. DOI 10.1007/s40314-014-0174-3 Automatic stopping rule for iterative methods in discrete ill-posed problems Leonardo S. Borges Fermín S. Viloche Bazán Maria C. C. Cunha Received: 20 December

More information

Newton s method for obtaining the Regularization Parameter for Regularized Least Squares

Newton s method for obtaining the Regularization Parameter for Regularized Least Squares Newton s method for obtaining the Regularization Parameter for Regularized Least Squares Rosemary Renaut, Jodi Mead Arizona State and Boise State International Linear Algebra Society, Cancun Renaut and

More information

ETNA Kent State University

ETNA Kent State University Electronic Transactions on Numerical Analysis. Volume 33, pp. 63-83, 2009. Copyright 2009,. ISSN 1068-9613. ETNA SIMPLE SQUARE SMOOTHING REGULARIZATION OPERATORS LOTHAR REICHEL AND QIANG YE Dedicated to

More information

Extension of GKB-FP algorithm to large-scale general-form Tikhonov regularization

Extension of GKB-FP algorithm to large-scale general-form Tikhonov regularization Extension of GKB-FP algorithm to large-scale general-form Tikhonov regularization Fermín S. Viloche Bazán, Maria C. C. Cunha and Leonardo S. Borges Department of Mathematics, Federal University of Santa

More information

2017 Society for Industrial and Applied Mathematics

2017 Society for Industrial and Applied Mathematics SIAM J. SCI. COMPUT. Vol. 39, No. 5, pp. S24 S46 2017 Society for Industrial and Applied Mathematics GENERALIZED HYBRID ITERATIVE METHODS FOR LARGE-SCALE BAYESIAN INVERSE PROBLEMS JULIANNE CHUNG AND ARVIND

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences)

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) Lecture 19: Computing the SVD; Sparse Linear Systems Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical

More information

Solving Ill-Posed Cauchy Problems in Three Space Dimensions using Krylov Methods

Solving Ill-Posed Cauchy Problems in Three Space Dimensions using Krylov Methods Solving Ill-Posed Cauchy Problems in Three Space Dimensions using Krylov Methods Lars Eldén Department of Mathematics Linköping University, Sweden Joint work with Valeria Simoncini February 21 Lars Eldén

More information

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH V. FABER, J. LIESEN, AND P. TICHÝ Abstract. Numerous algorithms in numerical linear algebra are based on the reduction of a given matrix

More information

Statistically-Based Regularization Parameter Estimation for Large Scale Problems

Statistically-Based Regularization Parameter Estimation for Large Scale Problems Statistically-Based Regularization Parameter Estimation for Large Scale Problems Rosemary Renaut Joint work with Jodi Mead and Iveta Hnetynkova March 1, 2010 National Science Foundation: Division of Computational

More information

Linear Inverse Problems

Linear Inverse Problems Linear Inverse Problems Ajinkya Kadu Utrecht University, The Netherlands February 26, 2018 Outline Introduction Least-squares Reconstruction Methods Examples Summary Introduction 2 What are inverse problems?

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

On the interplay between discretization and preconditioning of Krylov subspace methods

On the interplay between discretization and preconditioning of Krylov subspace methods On the interplay between discretization and preconditioning of Krylov subspace methods Josef Málek and Zdeněk Strakoš Nečas Center for Mathematical Modeling Charles University in Prague and Czech Academy

More information

Embedded techniques for choosing the parameter in Tikhonov regularization

Embedded techniques for choosing the parameter in Tikhonov regularization NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS Numer. Linear Algebra Appl. (24) Published online in Wiley Online Library (wileyonlinelibrary.com)..934 Embedded techniques for choosing the parameter in Tikhonov

More information

ON EFFICIENT NUMERICAL APPROXIMATION OF THE BILINEAR FORM c A 1 b

ON EFFICIENT NUMERICAL APPROXIMATION OF THE BILINEAR FORM c A 1 b ON EFFICIENT NUMERICAL APPROXIMATION OF THE ILINEAR FORM c A 1 b ZDENĚK STRAKOŠ AND PETR TICHÝ Abstract. Let A C N N be a nonsingular complex matrix and b and c complex vectors of length N. The goal of

More information

Numerical Methods for Separable Nonlinear Inverse Problems with Constraint and Low Rank

Numerical Methods for Separable Nonlinear Inverse Problems with Constraint and Low Rank Numerical Methods for Separable Nonlinear Inverse Problems with Constraint and Low Rank Taewon Cho Thesis submitted to the Faculty of the Virginia Polytechnic Institute and State University in partial

More information

On prescribing Ritz values and GMRES residual norms generated by Arnoldi processes

On prescribing Ritz values and GMRES residual norms generated by Arnoldi processes On prescribing Ritz values and GMRES residual norms generated by Arnoldi processes Jurjen Duintjer Tebbens Institute of Computer Science Academy of Sciences of the Czech Republic joint work with Gérard

More information

Discrete Ill Posed and Rank Deficient Problems. Alistair Boyle, Feb 2009, SYS5906: Directed Studies Inverse Problems 1

Discrete Ill Posed and Rank Deficient Problems. Alistair Boyle, Feb 2009, SYS5906: Directed Studies Inverse Problems 1 Discrete Ill Posed and Rank Deficient Problems Alistair Boyle, Feb 2009, SYS5906: Directed Studies Inverse Problems 1 Definitions Overview Inversion, SVD, Picard Condition, Rank Deficient, Ill-Posed Classical

More information

Matrix Algorithms. Volume II: Eigensystems. G. W. Stewart H1HJ1L. University of Maryland College Park, Maryland

Matrix Algorithms. Volume II: Eigensystems. G. W. Stewart H1HJ1L. University of Maryland College Park, Maryland Matrix Algorithms Volume II: Eigensystems G. W. Stewart University of Maryland College Park, Maryland H1HJ1L Society for Industrial and Applied Mathematics Philadelphia CONTENTS Algorithms Preface xv xvii

More information

Solving large sparse Ax = b.

Solving large sparse Ax = b. Bob-05 p.1/69 Solving large sparse Ax = b. Stopping criteria, & backward stability of MGS-GMRES. Chris Paige (McGill University); Miroslav Rozložník & Zdeněk Strakoš (Academy of Sciences of the Czech Republic)..pdf

More information

Gram-Schmidt Orthogonalization: 100 Years and More

Gram-Schmidt Orthogonalization: 100 Years and More Gram-Schmidt Orthogonalization: 100 Years and More September 12, 2008 Outline of Talk Early History (1795 1907) Middle History 1. The work of Åke Björck Least squares, Stability, Loss of orthogonality

More information

VECTOR EXTRAPOLATION APPLIED TO TRUNCATED SINGULAR VALUE DECOMPOSITION AND TRUNCATED ITERATION

VECTOR EXTRAPOLATION APPLIED TO TRUNCATED SINGULAR VALUE DECOMPOSITION AND TRUNCATED ITERATION VECTOR EXTRAPOLATION APPLIED TO TRUNCATED SINGULAR VALUE DECOMPOSITION AND TRUNCATED ITERATION A. BOUHAMIDI, K. JBILOU, L. REICHEL, H. SADOK, AND Z. WANG Abstract. This paper is concerned with the computation

More information

Downloaded 06/11/15 to Redistribution subject to SIAM license or copyright; see

Downloaded 06/11/15 to Redistribution subject to SIAM license or copyright; see SIAM J. SCI. COMPUT. Vol. 37, No. 2, pp. B332 B359 c 2015 Society for Industrial and Applied Mathematics A FRAMEWORK FOR REGULARIZATION VIA OPERATOR APPROXIMATION JULIANNE M. CHUNG, MISHA E. KILMER, AND

More information

Large-scale eigenvalue problems

Large-scale eigenvalue problems ELE 538B: Mathematics of High-Dimensional Data Large-scale eigenvalue problems Yuxin Chen Princeton University, Fall 208 Outline Power method Lanczos algorithm Eigenvalue problems 4-2 Eigendecomposition

More information

Non-stationary extremal eigenvalue approximations in iterative solutions of linear systems and estimators for relative error

Non-stationary extremal eigenvalue approximations in iterative solutions of linear systems and estimators for relative error on-stationary extremal eigenvalue approximations in iterative solutions of linear systems and estimators for relative error Divya Anand Subba and Murugesan Venatapathi* Supercomputer Education and Research

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 19: More on Arnoldi Iteration; Lanczos Iteration Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical Analysis I 1 / 17 Outline 1

More information

On numerical stability in large scale linear algebraic computations

On numerical stability in large scale linear algebraic computations ZAMM Z. Angew. Math. Mech. 85, No. 5, 307 325 (2005) / DOI 10.1002/zamm.200410185 On numerical stability in large scale linear algebraic computations Plenary lecture presented at the 75th Annual GAMM Conference,

More information

REORTHOGONALIZATION FOR THE GOLUB KAHAN LANCZOS BIDIAGONAL REDUCTION: PART I SINGULAR VALUES

REORTHOGONALIZATION FOR THE GOLUB KAHAN LANCZOS BIDIAGONAL REDUCTION: PART I SINGULAR VALUES REORTHOGONALIZATION FOR THE GOLUB KAHAN LANCZOS BIDIAGONAL REDUCTION: PART I SINGULAR VALUES JESSE L. BARLOW Department of Computer Science and Engineering, The Pennsylvania State University, University

More information

Lecture 9: Krylov Subspace Methods. 2 Derivation of the Conjugate Gradient Algorithm

Lecture 9: Krylov Subspace Methods. 2 Derivation of the Conjugate Gradient Algorithm CS 622 Data-Sparse Matrix Computations September 19, 217 Lecture 9: Krylov Subspace Methods Lecturer: Anil Damle Scribes: David Eriksson, Marc Aurele Gilles, Ariah Klages-Mundt, Sophia Novitzky 1 Introduction

More information

Computation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated.

Computation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated. Math 504, Homework 5 Computation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated 1 Find the eigenvalues and the associated eigenspaces

More information

Index. Copyright (c)2007 The Society for Industrial and Applied Mathematics From: Matrix Methods in Data Mining and Pattern Recgonition By: Lars Elden

Index. Copyright (c)2007 The Society for Industrial and Applied Mathematics From: Matrix Methods in Data Mining and Pattern Recgonition By: Lars Elden Index 1-norm, 15 matrix, 17 vector, 15 2-norm, 15, 59 matrix, 17 vector, 15 3-mode array, 91 absolute error, 15 adjacency matrix, 158 Aitken extrapolation, 157 algebra, multi-linear, 91 all-orthogonality,

More information

Krylov subspace projection methods

Krylov subspace projection methods I.1.(a) Krylov subspace projection methods Orthogonal projection technique : framework Let A be an n n complex matrix and K be an m-dimensional subspace of C n. An orthogonal projection technique seeks

More information

1. Introduction. We consider linear discrete ill-posed problems of the form

1. Introduction. We consider linear discrete ill-posed problems of the form AN EXTRAPOLATED TSVD METHOD FOR LINEAR DISCRETE ILL-POSED PROBLEMS WITH KRONECKER STRUCTURE A. BOUHAMIDI, K. JBILOU, L. REICHEL, AND H. SADOK Abstract. This paper describes a new numerical method for the

More information

A study on regularization parameter choice in Near-field Acoustical Holography

A study on regularization parameter choice in Near-field Acoustical Holography Acoustics 8 Paris A study on regularization parameter choice in Near-field Acoustical Holography J. Gomes a and P.C. Hansen b a Brüel & Kjær Sound and Vibration Measurement A/S, Skodsborgvej 37, DK-285

More information

The Chi-squared Distribution of the Regularized Least Squares Functional for Regularization Parameter Estimation

The Chi-squared Distribution of the Regularized Least Squares Functional for Regularization Parameter Estimation The Chi-squared Distribution of the Regularized Least Squares Functional for Regularization Parameter Estimation Rosemary Renaut DEPARTMENT OF MATHEMATICS AND STATISTICS Prague 2008 MATHEMATICS AND STATISTICS

More information

Stability of Krylov Subspace Spectral Methods

Stability of Krylov Subspace Spectral Methods Stability of Krylov Subspace Spectral Methods James V. Lambers Department of Energy Resources Engineering Stanford University includes joint work with Patrick Guidotti and Knut Sølna, UC Irvine Margot

More information

The structure of iterative methods for symmetric linear discrete ill-posed problems

The structure of iterative methods for symmetric linear discrete ill-posed problems BIT manuscript No. (will be inserted by the editor) The structure of iterative methods for symmetric linear discrete ill-posed problems L. Dyes F. Marcellán L. Reichel Received: date / Accepted: date Abstract

More information

arxiv: v1 [hep-lat] 2 May 2012

arxiv: v1 [hep-lat] 2 May 2012 A CG Method for Multiple Right Hand Sides and Multiple Shifts in Lattice QCD Calculations arxiv:1205.0359v1 [hep-lat] 2 May 2012 Fachbereich C, Mathematik und Naturwissenschaften, Bergische Universität

More information

Implicitly Defined High-Order Operator Splittings for Parabolic and Hyperbolic Variable-Coefficient PDE Using Modified Moments

Implicitly Defined High-Order Operator Splittings for Parabolic and Hyperbolic Variable-Coefficient PDE Using Modified Moments Implicitly Defined High-Order Operator Splittings for Parabolic and Hyperbolic Variable-Coefficient PDE Using Modified Moments James V. Lambers September 24, 2008 Abstract This paper presents a reformulation

More information

MS&E 318 (CME 338) Large-Scale Numerical Optimization

MS&E 318 (CME 338) Large-Scale Numerical Optimization Stanford University, Management Science & Engineering (and ICME MS&E 38 (CME 338 Large-Scale Numerical Optimization Course description Instructor: Michael Saunders Spring 28 Notes : Review The course teaches

More information