Why the QR Factorization can be more Accurate than the SVD


Slide 1: Why the QR Factorization can be more Accurate than the SVD
Leslie V. Foster
Department of Mathematics, San Jose State University, San Jose, CA
May 10, 2004

Slide 2: Problem
Solve $Ax = b$ for $A$ square (1), or $\min \|b - Ax\|$ for $A$ $m \times n$, $m \ge n$ (2), where $A$ is very ill-conditioned, assuming that $b = b_0 + \delta b$.
Goal: recover $x_0$, where $A x_0 = b_0$.
Applications: inverse problems, image reconstruction, computer-assisted tomography (CAT), the backward heat equation, inverse scattering, ...
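To see how badly ill-conditioning interacts with noise, here is a minimal NumPy illustration (our example, not from the talk): a 12-by-12 Hilbert matrix and noise of size about $10^{-10}$ already ruin the naively computed solution.

```python
import numpy as np
from scipy.linalg import hilbert

# Hilbert matrices are a standard example of severe ill-conditioning.
n = 12
A = hilbert(n)                        # cond(A) is roughly 1e16
x0 = np.ones(n)                       # underlying noiseless solution
b0 = A @ x0
rng = np.random.default_rng(1)
db = 1e-10 * rng.standard_normal(n)   # tiny noise delta_b
x = np.linalg.solve(A, b0 + db)       # naive solve amplifies the noise
print(np.linalg.norm(x - x0) / np.linalg.norm(x0))  # large relative error
```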

Slide 3: Regularization using low-rank approximations
Replace $A$ with a lower-rank approximation $\hat A$. Solve for the minimum-norm solution to
$$\min \|b - \hat A x\| \qquad (3)$$
Low-rank approximations can be constructed by:
- the SVD (Golub-65 and others), LAPACK's xgelsd
- UTV decompositions (Stewart-93, Mathias-93, Fierro-97, ...)
- rank-revealing QR factorizations (Foster-86, Chan-87, Hansen-90, Bischof-91, Chandrasekaran-94, ...)
- the pivoted QR algorithm (Golub-65, Lawson & Hanson-74), LAPACK's xgelsy
A sketch of the truncated-SVD option follows this list.
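As a minimal sketch of the first option, a truncated-SVD solver in NumPy (the helper name is ours; this is standard TSVD, not LAPACK's xgelsd):

```python
import numpy as np

def tsvd_solve(A, b, r):
    """Minimum-norm solution of min ||b - A_hat x||, where A_hat is the
    rank-r truncated SVD of A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # keep only the r largest singular triplets and invert those
    return Vt[:r].T @ ((U[:, :r].T @ b) / s[:r])
```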

Slide 4: Why is this Interesting?
There are a variety of commonly used matrix decompositions. In the full-rank case the advantages and disadvantages of the decompositions are relatively well understood. When the matrix is not of full rank, there are unanswered questions. For this case we provide results comparing the accuracy and efficiency of two important decompositions.

Slide 5: Truncated SVD and truncated QRP (assuming m = n)
Truncated SVD:
$$A = U_S D V_S^T = \begin{pmatrix} \hat U_S & U_{S0} \end{pmatrix} \begin{pmatrix} \hat D & 0 \\ 0 & D_0 \end{pmatrix} \begin{pmatrix} \hat V_S & V_{S0} \end{pmatrix}^T, \qquad \hat A = \hat U_S \hat D \hat V_S^T.$$
Then $\hat A^+ = \hat V_S \hat D^{-1} \hat U_S^T$. Let $x_S = \hat A^+ b$.
Truncated QRP:
$$A = Q_1 R P^T = \begin{pmatrix} \hat U & U_0 \end{pmatrix} \begin{pmatrix} \hat R & F \\ 0 & G \end{pmatrix} P^T, \qquad \hat A = \hat U \begin{pmatrix} \hat R & F \end{pmatrix} P^T.$$
Factor $(\hat R \;\; F) = (L \;\; 0)\, Q_2^T$, so
$$\hat A = \hat U (L \;\; 0)\, Q_2^T P^T = \hat U (L \;\; 0)\, V^T, \quad \text{where } V = P Q_2 = \begin{pmatrix} \hat V & V_0 \end{pmatrix}.$$
Then $\hat A^+ = \hat V L^{-1} \hat U^T$. Let $x_T = \hat A^+ b$.
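A dense sketch of $x_T$ as defined above, assuming SciPy's pivoted QR; the LQ factorization of $(\hat R \;\; F)$ is obtained via a QR factorization of its transpose. This mirrors the slide's algebra, not Foster's actual routine.

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

def tqrp_solve(A, b, r):
    """x_T from the rank-r truncated pivoted QR factorization."""
    Q1, R, piv = qr(A, pivoting=True)         # A[:, piv] = Q1 @ R
    Q2, R2 = qr(R[:r, :].T, mode='economic')  # (Rhat F) = R2.T @ Q2.T = L Q2^T
    y = solve_triangular(R2.T, Q1[:, :r].T @ b, lower=True)  # L y = Uhat^T b
    x = np.zeros(A.shape[1])
    x[piv] = Q2 @ y                           # undo the column pivoting
    return x
```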

Slide 6: Modification xgelsz of xgelsy
$G$ is not needed in the calculated solution $x_T$. xgelsy computes a complete orthogonal factorization of $A$; since $G$ is not needed, it does not need to be factored.
Properties of the modification:
- calculates the same numerical rank and essentially the same solution as xgelsy
- does not interfere with the BLAS-3 calls in xgelsy
- requires O(mnr) flops for low-rank problems, which is much quicker (source of the observation: Golub and Van Loan, page 240)
A timing sketch follows the list.
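xgelsz itself is not shipped with LAPACK or SciPy, but the speed difference the next slide reports can be glimpsed by switching SciPy's lstsq between its QRP driver (gelsy) and its SVD driver (gelsd); the low-rank test matrix below is our assumption.

```python
import time
import numpy as np
from scipy.linalg import lstsq

rng = np.random.default_rng(0)
m = n = 1600
r = 100                                           # numerical rank
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
b = rng.standard_normal(m)

for driver in ('gelsy', 'gelsd'):
    t0 = time.perf_counter()
    x, _, rank, _ = lstsq(A, b, cond=1e-12, lapack_driver=driver)
    print(f'{driver}: rank {rank}, {time.perf_counter() - t0:.2f} s')
```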

Slide 7: [Figure: time in seconds versus numerical rank for solving a 1600-by-1600 linear system; timings for DGELSZ (x), DGELSY (o), and DGELSD (+).]
Conclusion: QRP is faster than the SVD, especially for low-rank problems.

Slide 8: Accuracy of the Regularized Solution
Suppose that $x = \hat A^+ b$ is the regularized solution to (1) and that $x_0$ is the underlying noiseless solution. Then
$$x - x_0 = \hat A^+ b - x_0 = (\hat A^+ A - I)x_0 + \hat A^+ \delta b$$
and
$$\|x - x_0\|^2 = \|(\hat A^+ A - I)x_0\|^2 + \|\hat A^+ \delta b\|^2.$$
The first term on the right-hand side is called the regularization error and the second term the perturbation error.
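A small numerical check of this split for the truncated SVD (our example). Because the regularization error lies in the span of the discarded right singular vectors and the perturbation error in the span of the retained ones, the squared norms add.

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 50, 10
A = rng.standard_normal((n, n))
x0 = rng.standard_normal(n)
db = 1e-6 * rng.standard_normal(n)
U, s, Vt = np.linalg.svd(A)
Ap = Vt[:r].T @ np.diag(1 / s[:r]) @ U[:, :r].T   # A_hat^+
x = Ap @ (A @ x0 + db)                            # regularized solution
reg = (Ap @ A - np.eye(n)) @ x0                   # regularization error
pert = Ap @ db                                    # perturbation error
print(np.linalg.norm(x - x0)**2)                  # these two values agree
print(np.linalg.norm(reg)**2 + np.linalg.norm(pert)**2)
```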

Slide 9:
$$\|x - x_0\|^2 = \|(\hat A^+ A - I)x_0\|^2 + \|\hat A^+ \delta b\|^2.$$
- The regularization error decreases as the rank $r$ increases.
- The perturbation error increases as the rank $r$ increases.
- The rank achieving the minimum error must be chosen with a technique such as generalized cross validation or the L-curve (an oracle sketch follows below).
- Our goal is to compare the minimum errors when using two different factorizations.
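The trade-off can be seen by scanning all truncation ranks. The sketch below picks the best rank using the true solution, an "oracle" stand-in (our simplification) for generalized cross validation or the L-curve:

```python
import numpy as np

def oracle_rank(A, b, x0):
    """Return the truncation rank minimizing ||x_r - x0|| over all r."""
    U, s, Vt = np.linalg.svd(A)
    coeffs = (U.T @ b) / s
    x = np.zeros(A.shape[1])
    errs = []
    for r in range(1, len(s) + 1):
        x = x + coeffs[r - 1] * Vt[r - 1]   # add the next SVD term
        errs.append(np.linalg.norm(x - x0))
    return int(np.argmin(errs)) + 1, errs
```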

Slide 10: [Figure: $\|x - x_0\| / \|x_0\|$ for the SVD (+) and QRP (x) versus the effective rank of the low-rank approximation.]
Note that the minimum is smaller for QRP in this example. WHY?

Slide 11:
Theorem 1. Let $x_T$ be the solution to (3) calculated using the truncated QRP and $x_S$ be the solution calculated using the truncated SVD. Then
$$\|x_T - x_0\|^2 = \|x_S - x_0\|^2 + \widetilde{\delta b}^T M \,\widetilde{\delta b} + \tilde x_0^T N \tilde x_0, \qquad (4)$$
where
$$\tilde U = U^T U_S = \begin{pmatrix} \hat U^T \hat U_S & \hat U^T U_{S0} \\ U_0^T \hat U_S & U_0^T U_{S0} \end{pmatrix} = \begin{pmatrix} \tilde U_{11} & \tilde U_{12} \\ \tilde U_{21} & \tilde U_{22} \end{pmatrix},$$
$$\tilde V = V^T V_S = \begin{pmatrix} \hat V^T \hat V_S & \hat V^T V_{S0} \\ V_0^T \hat V_S & V_0^T V_{S0} \end{pmatrix} = \begin{pmatrix} \tilde V_{11} & \tilde V_{12} \\ \tilde V_{21} & \tilde V_{22} \end{pmatrix},$$

Slide 12:
$$M = \begin{pmatrix} -\hat D^{-1} \tilde V_{21}^T \tilde V_{21} \hat D^{-1} & \tilde U_{11}^T L^{-T} L^{-1} \tilde U_{12} \\ \tilde U_{12}^T L^{-T} L^{-1} \tilde U_{11} & \tilde U_{12}^T L^{-T} L^{-1} \tilde U_{12} \end{pmatrix},$$
$$N = \begin{pmatrix} \tilde V_{21}^T \tilde V_{21} & \tilde V_{21}^T \tilde V_{22} \\ \tilde V_{22}^T \tilde V_{21} & -\tilde V_{12}^T \tilde V_{12} \end{pmatrix},$$
$$\widetilde{\delta b} = U_S^T \delta b \quad \text{and} \quad \tilde x_0 = V_S^T x_0.$$
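The block signs above are reconstructed from the error decomposition (the transcription lost them). A numerical spot-check of identity (4) under that reconstruction, for $m = n$:

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(3)
n, r = 30, 8
A = rng.standard_normal((n, n))
x0 = rng.standard_normal(n)
db = 1e-4 * rng.standard_normal(n)
b = A @ x0 + db

Us, s, Vst = np.linalg.svd(A)                  # A = Us diag(s) Vs^T
Vs = Vst.T
xS = Vs[:, :r] @ ((Us[:, :r].T @ b) / s[:r])   # truncated SVD solution

Q1, R, piv = qr(A, pivoting=True)              # A[:, piv] = Q1 @ R
Q2, R2 = qr(R[:r, :].T)                        # full QR of (Rhat F)^T
L = R2[:r, :].T                                # (Rhat F) = (L 0) Q2^T
V = np.eye(n)[:, piv] @ Q2                     # V = P Q2 = (Vhat V0)
xT = V[:, :r] @ np.linalg.solve(L, Q1[:, :r].T @ b)

Ut, Vtil = Q1.T @ Us, V.T @ Vs                 # U~ and V~ from Theorem 1
G = np.linalg.inv(L @ L.T)                     # equals L^{-T} L^{-1}
Di = np.diag(1 / s[:r])
U11, U12 = Ut[:r, :r], Ut[:r, r:]
V12, V21, V22 = Vtil[:r, r:], Vtil[r:, :r], Vtil[r:, r:]
M = np.block([[-Di @ V21.T @ V21 @ Di, U11.T @ G @ U12],
              [U12.T @ G @ U11,        U12.T @ G @ U12]])
N = np.block([[V21.T @ V21,  V21.T @ V22],
              [V22.T @ V21, -V12.T @ V12]])
dbt, x0t = Us.T @ db, Vs.T @ x0
print(np.linalg.norm(xT - x0)**2)              # the two printed values
print(np.linalg.norm(xS - x0)**2 + dbt @ M @ dbt + x0t @ N @ x0t)  # agree
```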

Slide 13: Meaning of $\|x_T - x_0\|^2 = \|x_S - x_0\|^2 + \widetilde{\delta b}^T M \widetilde{\delta b} + \tilde x_0^T N \tilde x_0$
$$M = \begin{pmatrix} -M_{11} & M_{12} \\ M_{12}^T & M_{22} \end{pmatrix}, \qquad N = \begin{pmatrix} N_{11} & N_{12} \\ N_{12}^T & -N_{22} \end{pmatrix},$$
where $M_{11}$, $M_{22}$, $N_{11}$, and $N_{22}$ are positive definite. $M$ and $N$ are definitely indefinite.
There will be cases where $\|x_T - x_0\|$ is smaller than $\|x_S - x_0\|$ and also cases where $\|x_S - x_0\|$ is smaller than $\|x_T - x_0\|$. HOW OFTEN?

Slide 14: Perturbation Error ($N = 0$, $M \ne 0$), Large Gap in Singular Values
Theorem 2. Assume that $N = 0$ in
$$\|x_T - x_0\|^2 = \|x_S - x_0\|^2 + \widetilde{\delta b}^T M \widetilde{\delta b} + \tilde x_0^T N \tilde x_0,$$
that the QR factorization is rank revealing, and that the components of the noise $\delta b$ come from Gaussian white noise (uncorrelated zero-mean Gaussian random variables with common variance). Then, as the gap in the singular values, $s_{r+1}/s_r$, approaches 0, the probability that $\|x_T - x_0\|$ is less than $\|x_S - x_0\|$ approaches one half.

Slide 15: Ideas in the proof of Theorem 2
Using a result of Bunch, Fierro and Hansen (95) we can show
$$M = \begin{pmatrix} -M_{11} & M_{12} \\ M_{12}^T & M_{22} \end{pmatrix} \approx \bar M = \begin{pmatrix} 0 & \bar M_{12} \\ \bar M_{12}^T & 0 \end{pmatrix}.$$
$\bar M$ has eigenvalues that come in $+$ and $-$ pairs of equal magnitude; therefore, for white noise $\delta b$, the distribution of $\widetilde{\delta b}^T \bar M \widetilde{\delta b}$ is nearly symmetric about 0.

Slide 16: Perturbation Error, No Gap in Singular Values (Extreme Case where A is Orthogonal)
Theorem 3. Assume that $A$ is orthogonal, $N$ is 0 in (4), and that the components of the noise $\delta b$ come from Gaussian white noise. Then the probability that $\|x_T - x_0\|$ is less than $\|x_S - x_0\|$ is one half.
Note: In this case the QR and SVD factorizations are not unique. The result is true for each choice of $Q$, $R$, $U_S$, $D$, and $V_S$.

Slide 17: Perturbation Error Summary
For perturbation errors, in the case where there is a large gap in the singular values or where there is no gap at all, then, with our assumptions, the probability that $\|x_T - x_0\|$ is less than $\|x_S - x_0\|$ is nearly one half.
We can examine the cases in between experimentally. The (very simple) case of a rank-one approximation to a 2-by-2 system is informative; see the Monte Carlo sketch after Slide 20. We will look at more realistic cases later.

Slide 18: Perturbation Error
[Histogram of $(\|x_T - x_0\| - \|x_S - x_0\|) / E(\|x_S - x_0\|)$; $A$ is 2-by-2 with singular values 1 and .01 (large gap).]
QR better than SVD in 50.02% of the cases.

Slide 19: Perturbation Error
[Histogram of $(\|x_T - x_0\| - \|x_S - x_0\|) / E(\|x_S - x_0\|)$; $A$ is 2-by-2 with singular values 1 and 1 (no gap).]
QR better than SVD in 50.14% of the cases.

Slide 20: Perturbation Error
[Histogram of $(\|x_T - x_0\| - \|x_S - x_0\|) / E(\|x_S - x_0\|)$; $A$ is 2-by-2 with singular values 1 and .6 (small gap).]
QR better than SVD in 44.4% of the cases.
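A Monte Carlo sketch of these 2-by-2 experiments (trial count and noise scale are our choices; the transcription does not preserve the talk's exact settings):

```python
import numpy as np
from scipy.linalg import qr, solve_triangular
from scipy.stats import ortho_group

def tqrp_solve(A, b, r):
    # truncated pivoted-QR solution x_T (same sketch as after Slide 5)
    Q1, R, piv = qr(A, pivoting=True)
    Q2, R2 = qr(R[:r, :].T, mode='economic')
    y = solve_triangular(R2.T, Q1[:, :r].T @ b, lower=True)
    x = np.zeros(A.shape[1])
    x[piv] = Q2 @ y
    return x

rng = np.random.default_rng(4)
s = np.array([1.0, 0.01])           # large gap; try [1, 1] or [1, .6] too
trials, wins = 20000, 0
for _ in range(trials):
    U = ortho_group.rvs(2, random_state=rng)
    V = ortho_group.rvs(2, random_state=rng)
    A = U @ np.diag(s) @ V.T        # SVD factors known by construction
    x0 = rng.standard_normal(2)
    b = A @ x0 + 1e-3 * rng.standard_normal(2)
    xS = V[:, :1] @ ((U[:, :1].T @ b) / s[:1])
    xT = tqrp_solve(A, b, 1)
    wins += np.linalg.norm(xT - x0) < np.linalg.norm(xS - x0)
print(f'QR better in {100 * wins / trials:.2f}% of trials')
```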

Slide 21: Regularization error term
Consider
$$\|x_T - x_0\|^2 = \|x_S - x_0\|^2 + \widetilde{\delta b}^T M \widetilde{\delta b} + \tilde x_0^T N \tilde x_0$$
with $M = 0$ and $N \ne 0$. The matrix $N$ in the term $\tilde x_0^T N \tilde x_0$ is indefinite, as discussed earlier. Therefore $\tilde x_0^T N \tilde x_0$ can be positive or negative, depending on $x_0$. To examine how often each sign occurs we will use a model for $x_0$ used by others.

Slide 22: A model for $x_0$
The class used by Bertero et al., 1980, and Neumaier, 1998, is
$$x_0 = V_S D^p w \qquad (5)$$
where $w$ is governed by white noise.
Attractive properties: $x_0$ is a weighted combination of the singular vectors, $x_0$ is usually a smoothly varying solution, and for $p > 0$ the discrete Picard condition holds. The condition is that the components of $U_S^T b_0$ should decrease faster than the singular values of $A$. Here $p$ is called the relative decay rate of the Picard coefficients.
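A sketch of drawing $x_0$ from model (5) and checking the discrete Picard condition; the singular-value profile below is our assumption.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 64, 1.0
s = np.logspace(0, -8, n)             # assumed singular-value decay
w = rng.standard_normal(n)
x0_coeffs = s**p * w                  # V_S^T x0 for model (5)
picard = np.abs(s * x0_coeffs)        # |U_S^T b0| = s^(p+1) |w|
print(picard[:6] / s[:6])             # decays like s^p, so Picard holds for p > 0
```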

Slide 23:
Using this model in $\|x_T - x_0\|^2 = \|x_S - x_0\|^2 + \widetilde{\delta b}^T M \widetilde{\delta b} + \tilde x_0^T N \tilde x_0$, where $\tilde x_0 = V_S^T x_0$, it follows that $\tilde x_0^T N \tilde x_0 = w^T N_p w$ with
$$N_p = \begin{pmatrix} \hat D^p \tilde V_{21}^T \tilde V_{21} \hat D^p & \hat D^p \tilde V_{21}^T \tilde V_{22} D_0^p \\ D_0^p \tilde V_{22}^T \tilde V_{21} \hat D^p & -D_0^p \tilde V_{12}^T \tilde V_{12} D_0^p \end{pmatrix}.$$

Slide 24: Regularization Error ($M = 0$, $N \ne 0$), Large Gap in Singular Values
Theorem 4. Assume that $M = 0$ in
$$\|x_T - x_0\|^2 = \|x_S - x_0\|^2 + \widetilde{\delta b}^T M \widetilde{\delta b} + \tilde x_0^T N \tilde x_0,$$
that the QR factorization is rank revealing, and that $x_0$ satisfies $x_0 = V_S D^p w$ with $0 \le p \le 1$, where $w$ follows white Gaussian noise. Then, as the gap in the singular values, $s_{r+1}/s_r$, approaches 0, the probability that $\|x_T - x_0\|$ is less than $\|x_S - x_0\|$ approaches one half.

Slide 25: Regularization Error, No Gap in Singular Values (Extreme Case where A is Orthogonal)
Theorem 5. Assume that $A$ is orthogonal, $M$ is 0 in (4), and that $x_0$ satisfies (5), where $w$ follows white Gaussian noise. Then the probability that $\|x_T - x_0\|$ is less than $\|x_S - x_0\|$ is one half.
Note: Theorem 5 has no restrictions on $p$, while Theorem 4 has the condition $0 \le p \le 1$. Values of $p$ in the range $0 \le p \le 1$ are common in practice. Also note that numerical experiments suggest that Theorem 4 remains true for $0 \le p \le 2$.

Slide 26: Regularization Error Summary
For regularization errors, if there is a large gap in the singular values or if there is no gap in the singular values, then, with our assumptions (including $p \le 2$), the probability that $\|x_T - x_0\|$ is less than $\|x_S - x_0\|$ is nearly one half.
Numerical experiments with rank-one approximations to 2-by-2 systems, in the case that $p \le 2$, are very similar to those presented earlier for perturbation errors.

Slide 27: Summary - Both Perturbation and Regularization Errors
In the case that $M \ne 0$ and $N \ne 0$, if there is a large gap in the singular values or if there is no gap in the singular values, then, with our assumptions (including $p \le 2$, which is common in practice), the probability that $\|x_T - x_0\|$ is less than $\|x_S - x_0\|$ is nearly one half.

Slide 28: Numerical Experiments
We will illustrate the above results by using regularization to solve $Ax = b$ for:
- 64-by-64 random matrices $A$ with a variety of choices of singular values
- 64-by-64 matrices $A$ from Hansen's Regularization Tools; examples in this collection have characteristic features of ill-posed problems

Slide 29: Samples of 64-by-64 random matrices with perturbation and regularization errors
- matrices $A$ generated using Per Christian Hansen's regutm
- singular values chosen according to specified distributions
- noise $\delta b$ is Gaussian white noise
- $x_0$ chosen according to (5), where $w$ follows white Gaussian noise
- noise-to-signal ratios set to seven values: .3, .1, .01, .001, .0001, $10^{-6}$, and ...
- rank set to a specified value in some cases and chosen dynamically in other cases
- many trials (70000 per histogram below) for each case
A rough Python analogue of this setup follows the list.
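regutm is a MATLAB routine from Hansen's Regularization Tools; the code below is a rough Python stand-in (our assumption, not the talk's code) that builds a random test matrix with prescribed singular values and draws $x_0$ and $\delta b$ as on this slide.

```python
import numpy as np
from scipy.stats import ortho_group

def random_test_matrix(s, rng):
    # Haar-random orthogonal factors around prescribed singular values s
    n = len(s)
    U = ortho_group.rvs(n, random_state=rng)
    V = ortho_group.rvs(n, random_state=rng)
    return U @ np.diag(s) @ V.T, U, V

rng = np.random.default_rng(6)
s = np.r_[np.ones(16), np.full(48, 1e-2)]    # gap = 100 at rank 16
A, U, V = random_test_matrix(s, rng)
p = 1.0
x0 = V @ (s**p * rng.standard_normal(64))    # solution model (5)
b0 = A @ x0
db = rng.standard_normal(64)
db *= 1e-3 * np.linalg.norm(b0) / np.linalg.norm(db)  # set noise-to-signal
b = b0 + db
```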

Slide 30: Random 64-by-64 matrices, large gap in singular values
Rank of approximation = 16, gap = 100, p = 1.
[Histogram (70000 trials) of $(\|x_T - x_0\| - \|x_S - x_0\|) / \max(\|x_S - x_0\|, \|x_T - x_0\|)$, with a plot of the distribution of the singular values.]
QR better than SVD in 49.9% of the cases.

Slide 31: Random 64-by-64 matrices, moderate gap in singular values
Rank of approximation = 16, gap = 4, p = 1.
[Histogram (70000 trials) of $(\|x_T - x_0\| - \|x_S - x_0\|) / \max(\|x_S - x_0\|, \|x_T - x_0\|)$, with a plot of the distribution of the singular values.]
QR better than SVD in 41.3% of the cases.

Slide 32: Random 64-by-64 matrices, rank in a cluster of singular values
Rank of approximation inside a cluster of 10 singular values, p = 1.
[Histogram (70000 trials) of $(\|x_T - x_0\| - \|x_S - x_0\|) / \max(\|x_S - x_0\|, \|x_T - x_0\|)$, with a plot of the distribution of the singular values.]
QR better than SVD in 45% of the cases.

Slide 33: Random 64-by-64 matrices, singular values decreasing rapidly
Rank chosen dynamically, mean gap = 10, p = 1.
[Histogram (70000 trials) of $(\|x_T - x_0\| - \|x_S - x_0\|) / \max(\|x_S - x_0\|, \|x_T - x_0\|)$, with a plot of the distribution of the singular values.]
QR better than SVD in 49.1% of the cases.

Slide 34: Random 64-by-64 matrices, moderate singular value decrease
Rank chosen dynamically, mean gap = 2, p = 1.
[Histogram (70000 trials) of $(\|x_T - x_0\| - \|x_S - x_0\|) / \max(\|x_S - x_0\|, \|x_T - x_0\|)$, with a plot of the distribution of the singular values.]
QR better than SVD in 43% of the cases.

Slide 35: Caution
With probabilistic models of the accuracy of solutions to (1), one can skew the results in favor of either SVD or QRP solutions by a careful choice of the model.

Slide 36: Random 64-by-64 matrices, SVD better than QRP
Rank chosen dynamically, mean gap = 2, p = 3, $x_0 = V_S D^p w$.
[Histogram (70000 trials) of $(\|x_T - x_0\| - \|x_S - x_0\|) / \max(\|x_S - x_0\|, \|x_T - x_0\|)$, with a plot of the distribution of the singular values.]
QR better than SVD in 14.7% of the cases.

Slide 37: Random 64-by-64 matrices, QRP better than SVD
Rank chosen dynamically, mean gap = 2, p = 3, $x_0 = V D^p w$ (here $V$ is from the QRP factorization rather than $V_S$ from the SVD).
[Histogram (70000 trials) of $(\|x_T - x_0\| - \|x_S - x_0\|) / \max(\|x_S - x_0\|, \|x_T - x_0\|)$, with a plot of the distribution of the singular values.]
QR better than SVD in 85.4% of the cases.

Slide 38: Summary of numerical experiments with random matrices
- The numerical experiments are consistent with the theory.
- If there is a large gap in the singular values and the decay rate $p$ of the Picard coefficients is not large, then using the QRP is, on average, nearly as accurate as the SVD.
- For matrices with small gaps in the singular values, truncated SVD solutions may be, on average, better than truncated QRP solutions, but the difference may be modest.
- By selecting the model for the true solution $x_0$, one can skew the results to favor the SVD or to favor the QRP.

Slide 39: Numerical Experiments for Examples from Hansen's Regularization Tools
- These examples have characteristic features of ill-posed problems.
- $A$, $x_0$, $b_0$ are not random.
- Examples come from integral equations, numerical differentiation, inverse heat equations, and inverse Laplace transforms.
- Most examples do not have a gap in the singular values.
- Noise $\delta b$ chosen from white noise with seven noise-to-signal ratios: .3, .1, .01, .001, .0001, $10^{-6}$, and ...

Slide 40: Examples from Hansen's Regularization Tools
$A$, $x_0$, and $b_0$ not random; 16 examples, 7 noise levels, 100 noise vectors in each case; rank chosen dynamically.
[Histogram (10700 trials) of $(\|x_T - x_0\| - \|x_S - x_0\|) / \max(\|x_S - x_0\|, \|x_T - x_0\|)$.]
QR better than SVD in 50.5% of the cases.

Slide 41: Examples from Hansen's Regularization Tools
For each of 112 cases we calculated
$$\mathrm{mean}(\|x_T - x_0\| - \|x_S - x_0\|) \,/\, \mathrm{mean}(\|x_S - x_0\|).$$
The number (out of 112) of instances of the ratio was tabulated over the ranges: less than -50%; -50% to -10%; -10% to -1%; -1% to 1%; 1% to 10%; 10% to 50%; 50% or more.

Slide 42: Summary of numerical experiments with matrices from Regularization Tools
Overall, the truncated QRP appears to do better on these nonrandom examples than on the random examples. In some cases the truncated SVD solution is closer to the true solution, and in others the truncated QRP solution is. Overall, the SVD and QRP have very similar accuracy on these examples.

Slide 43: Conclusions
Consider regularized solutions to $Ax = b = b_0 + \delta b$ for an ill-conditioned matrix $A$.
1. For systems governed by statistical assumptions used by others, with reasonable parameter values, if the matrices have a gap in their singular values then the truncated QRP is better than the truncated SVD approximately half the time.
2. For these systems, if there is not a large gap, truncated SVD solutions appear overall somewhat better than truncated QRP solutions, but the difference may be modest. The analysis in this case is not complete.
3. For problems from the Regularization Tools collection, truncated QRP solutions appear to be better close to half the time.

Slide 44: References
L. Foster, Solving Rank-Deficient and Ill-posed Problems using UTV and QR Factorizations, SIAM J. Matrix Anal. Appl. 25.
L. Foster and R. Kommu, An Efficient Algorithm for Solving Rank-Deficient Least Squares Problems, submitted to the ACM Transactions on Mathematical Software.
