Gram-Schmidt Orthogonalization: 100 Years and More


1 Gram-Schmidt Orthogonalization: 100 Years and More September 12, 2008

2 Outline of Talk
Early History
Middle History
  1. The work of Åke Björck: least squares, stability, loss of orthogonality
  2. The work of Heinz Rutishauser: selective reorthogonalization and superorthogonalization
Modern Research
  1. Recent results on the Gram-Schmidt algorithms
  2. Column pivoting and rank revealing factorizations
  3. Applications to Krylov subspace methods

3 Earlier History
Least Squares: Gauss and Legendre
Laplace 1812, Analytic Theory of Probabilities (1814, 1820)
Cauchy 1837, 1847 (interpolation) and Bienaymé 1853
J. P. Gram, 1879 (Danish), 1883 (German)
Erhard Schmidt 1907

4 Gauss and least squares
Priority dispute: A. M. Legendre, first publication 1805; Gauss claimed discovery in 1795.
G. Piazzi, January 1801, discovered the asteroid Ceres and tracked it for 6 weeks.

5 The Gauss derivation of the normal equations
Let $\hat{x}$ be the solution of $A^T A x = A^T b$. For any $x$ we have
$$r = b - Ax = (b - A\hat{x}) + A(\hat{x} - x) \equiv \hat{r} + Ae,$$
and since $A^T \hat{r} = A^T b - A^T A \hat{x} = 0$,
$$r^T r = (\hat{r} + Ae)^T (\hat{r} + Ae) = \hat{r}^T \hat{r} + (Ae)^T (Ae).$$
Hence $r^T r$ is minimized when $x = \hat{x}$.

6 Pierre-Simon Laplace (1749-1827)
Mathematical Astronomy
Celestial Mechanics
Laplace's Equation
Laplace Transforms
Probability Theory and Least Squares

7 Théorie Analytique des Probabilités - Laplace 1812
Problem: compute the masses of Saturn and Jupiter from systems of normal equations (Bouvard) and compute the distribution of error in the solutions.
Method: Laplace successively projects the system of equations orthogonally to a column of the observation matrix to eliminate all variables but the one of interest.
Basically, Laplace uses MGS to prove that, given an overdetermined system with a normal perturbation on the right-hand side, its solution has a centered normal distribution with variance independent of the parameters of the noise.

8 Laplace Linear Algebra
Laplace uses MGS to derive the Cholesky form of the normal equations, $R^T R x = A^T b$.
Laplace does not seem to realize that the vectors generated are mutually orthogonal. He does observe that the generated vectors are each orthogonal to the residual vector.

9 Cauchy and Bienaymé
[Figures: portraits of A. Cauchy and I. J. Bienaymé]

10 Cauchy and Bienaymé
Cauchy (1837, 1847): an interpolation method leading to systems of the form $Z^T A x = Z^T b$, where $Z = (z_1, \dots, z_n)$ and $z_{ij} = \pm 1$.
Bienaymé (1853): a new derivation of Cauchy's algorithm based on Gaussian elimination. Bienaymé noted that Cauchy's choice of $Z$ was not optimal in the least squares sense. One obtains the least squares solution if $Z = A$ (normal equations) or, more generally, if $R(Z) = R(A)$. The matrix $Z$ can be determined a column at a time as the elimination steps are carried out.

11 A 3 x 3 example
$$\begin{pmatrix} z_1^T a_1 & z_1^T a_2 & z_1^T a_3 \\ z_2^T a_1 & z_2^T a_2 & z_2^T a_3 \\ z_3^T a_1 & z_3^T a_2 & z_3^T a_3 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} z_1^T b \\ z_2^T b \\ z_3^T b \end{pmatrix}$$
Transform the $(i,j)$th element ($2 \le i, j \le 3$):
$$z_i^T a_j - \frac{z_i^T a_1}{z_1^T a_1}\, z_1^T a_j = z_i^T \left( a_j - \frac{z_1^T a_j}{z_1^T a_1}\, a_1 \right) \equiv z_i^T a_j^{(2)}.$$

12 Bienaymé Reduction
The reduced system has the form
$$\begin{pmatrix} z_2^T a_2^{(2)} & z_2^T a_3^{(2)} \\ z_3^T a_2^{(2)} & z_3^T a_3^{(2)} \end{pmatrix} \begin{pmatrix} x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} z_2^T b^{(2)} \\ z_3^T b^{(2)} \end{pmatrix}$$
where we have defined
$$a_j^{(2)} = a_j - \frac{z_1^T a_j}{z_1^T a_1}\, a_1, \qquad b^{(2)} = b - \frac{z_1^T b}{z_1^T a_1}\, a_1.$$
Finally $z_3$ is chosen and used to form the single equation
$$z_3^T a_3^{(3)} x_3 = z_3^T b^{(3)}.$$
Taking the first equation from each step gives a triangular system defining the solution.

13 An interesting choice of Z
Åke Björck has noted that since we want $R(Z) = R(A)$, we may choose $Z = Q$, where
$$q_1 = a_1, \quad q_2 = a_2^{(2)}, \quad q_3 = a_3^{(3)}, \dots$$
Then we have
$$q_2 = a_2 - \frac{q_1^T a_2}{q_1^T q_1}\, q_1, \qquad q_3 = a_3^{(2)} - \frac{q_2^T a_3^{(2)}}{q_2^T q_2}\, q_2, \ \dots,$$
which is exactly the modified Gram-Schmidt procedure!
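
A minimal MATLAB sketch of this observation (the demo matrix is an arbitrary choice): carrying out the Bienaymé updates with Z = Q makes the generated columns mutually orthogonal, i.e., it is unnormalized MGS.

A = [1 1 0; 1 0 1; 0 1 1; 1 1 1];   % arbitrary full-rank demo matrix
[~, n] = size(A); Q = A;
for k = 1:n
    for j = k+1:n
        % Bienaymé update: a_j <- a_j - (q_k'*a_j)/(q_k'*q_k) * q_k
        Q(:,j) = Q(:,j) - (Q(:,k)'*Q(:,j))/(Q(:,k)'*Q(:,k)) * Q(:,k);
    end
end
disp(Q'*Q)    % off-diagonal entries are at rounding-error level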

14 Jørgen Pedersen Gram (1850-1916)
Dual career: Mathematics (1873) and Insurance (1875, 1884)
Research in modern algebra, number theory, models for forest management, integral equations, probability, numerical analysis
Active in the Danish Mathematical Society; edited the Tidsskrift journal
Best known for his orthogonalization process

15 J. P. Gram: 1879 Thesis and 1883 paper
Series expansions of real functions using least squares
MGS orthogonalization applied to orthogonal polynomials
Determinantal representation for the resulting orthogonal functions
Application to integral equations

16 Erhard Schmidt (1876-1959)
Ph.D. on integral equations, Göttingen 1905; student of David Hilbert
1917 University of Berlin, set up the Institute for Applied Mathematics
Director of the Mathematics Research Institute of the German Academy of Sciences
Known for his work in Hilbert spaces
Played an important role in the development of modern functional analysis

17 Schmidt 1907 paper
p. 442: CGS for a sequence of functions $\varphi_1, \dots, \varphi_n$ with respect to the inner product $\langle f, g \rangle = \int_a^b f(x) g(x)\, dx$
p. 473: CGS for an infinite sequence of functions
In a footnote (p. 442) Schmidt claims that in essence the formulas are due to J. P. Gram. Actually Gram does MGS and Schmidt does CGS.
A paper of Y. K. Wong refers to the "Gram-Schmidt Orthogonalization Process" (the first such linkage?)

18 Original algorithm of Schmidt for CGS
$$\psi_1(x) = \frac{\varphi_1(x)}{\sqrt{\int_a^b \varphi_1(y)^2\, dy}}$$
$$\psi_2(x) = \frac{\varphi_2(x) - \psi_1(x) \int_a^b \varphi_2(z) \psi_1(z)\, dz}{\sqrt{\int_a^b \left( \varphi_2(y) - \psi_1(y) \int_a^b \varphi_2(z) \psi_1(z)\, dz \right)^2 dy}}$$
$$\vdots$$
$$\psi_n(x) = \frac{\varphi_n(x) - \sum_{\rho=1}^{n-1} \psi_\rho(x) \int_a^b \varphi_n(z) \psi_\rho(z)\, dz}{\sqrt{\int_a^b \left( \varphi_n(y) - \sum_{\rho=1}^{n-1} \psi_\rho(y) \int_a^b \varphi_n(z) \psi_\rho(z)\, dz \right)^2 dy}}$$

19 Åke Björck
Specialist in numerical least squares
1967: Backward stability of MGS for least squares
1992: Björck and Paige, loss and recapture of orthogonality in MGS
1994: Numerics of Gram-Schmidt Orthogonalization
1996: SIAM book, Numerical Methods for Least Squares

20 Björck 1967: Solving Least Squares Problems by Gram-Schmidt Orthogonalization
Suppose A has the computed MGS factorization $\bar{Q}\bar{R}$.
Loss of orthogonality in MGS:
$$\|I - \bar{Q}^T \bar{Q}\|_2 \le \frac{c_1(m,n)\, \kappa u}{1 - c_2(m,n)\, \kappa u}$$
Backward stability:
$$A + E = \hat{Q}\bar{R}, \qquad \text{where } \|E\|_2 \le c(m,n)\, u\, \|A\|_2 \text{ and } \hat{Q}^T \hat{Q} = I.$$
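
A small MATLAB experiment (sizes and condition number are arbitrary demo choices) illustrating the 1967 bound: the loss of orthogonality of MGS grows in proportion to κu.

m = 50; n = 10; kappa = 1e8;
[U,~] = qr(randn(m,n), 0); [V,~] = qr(randn(n));
A = U * diag(logspace(0, -8, n)) * V';   % test matrix with cond(A) = 1e8
Q = A; R = zeros(n);
for k = 1:n                              % columnwise MGS (see slide 26)
    for i = 1:k-1
        R(i,k) = Q(:,i)'*Q(:,k);
        Q(:,k) = Q(:,k) - R(i,k)*Q(:,i);
    end
    R(k,k) = norm(Q(:,k)); Q(:,k) = Q(:,k)/R(k,k);
end
fprintf('||I - Q''Q|| = %.1e,  kappa*u = %.1e\n', norm(eye(n) - Q'*Q), kappa*eps/2)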

21 Björck and Paige 1992
$$\tilde{A} = \begin{pmatrix} O \\ A \end{pmatrix} = \tilde{Q} \tilde{R} = \begin{pmatrix} \tilde{Q}_{11} & \tilde{Q}_{12} \\ \tilde{Q}_{21} & \tilde{Q}_{22} \end{pmatrix} \begin{pmatrix} \tilde{R}_1 \\ O \end{pmatrix}$$
where $\tilde{Q} = H_1 H_2 \cdots H_n$ (a product of Householder matrices).
$\tilde{Q}_{11} = O$ and $A = \tilde{Q}_{21} \tilde{R}_1$ (C. Sheffield).
$A = \tilde{Q}_{21} \tilde{R}_1$ (Householder) and $A = QR$ (MGS) are numerically equivalent. In fact, if $Q = (q_1, \dots, q_n)$, then
$$H_k = I - v_k v_k^T, \qquad v_k = \begin{pmatrix} e_k \\ -q_k \end{pmatrix}, \qquad k = 1, \dots, n.$$

22 Loss of Orthogonality in CGS
With CGS you can have catastrophic cancellation.
Gander: better to compute the Cholesky factorization of $A^T A$ and then set $Q = A R^{-1}$.
Smoktunowicz, Barlow, and Langou, 2006: if $A^T A$ is numerically nonsingular and the Pythagorean version of CGS is used, then the loss of orthogonality is proportional to $\kappa^2(A)$.
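
For reference, a minimal MATLAB sketch of the Cholesky alternative just mentioned (the demo matrix is arbitrary); like CGS, its loss of orthogonality is proportional to κ²(A).

A = randn(20, 5);            % arbitrary demo matrix
R = chol(A'*A);              % Cholesky factor of the Gram matrix
Q = A/R;                     % Q = A*R^(-1) via a triangular solve
disp(norm(eye(5) - Q'*Q))    % loss of orthogonality ~ kappa(A)^2 * eps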

23 Pythagorean version of CGS
Computation of the diagonal entry $r_{kk}$ at step k.
CGS: $r_{kk} = \|w_k\|_2$, where $w_k = a_k - \sum_{i=1}^{k-1} r_{ik} q_i$.
CGSP: if
$$s_k = \|a_k\| \quad \text{and} \quad p_k = \left( \sum_{i=1}^{k-1} r_{ik}^2 \right)^{1/2},$$
then $r_{kk}^2 + p_k^2 = s_k^2$, and hence
$$r_{kk} = (s_k - p_k)^{1/2} (s_k + p_k)^{1/2}.$$
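
A MATLAB sketch of CGSP as described above (the function name is mine):

function [Q, R] = cgsp(A)
    % Pythagorean variant of classical Gram-Schmidt (CGSP)
    [m, n] = size(A); Q = zeros(m, n); R = zeros(n);
    for k = 1:n
        R(1:k-1, k) = Q(:, 1:k-1)' * A(:, k);    % CGS projections
        w = A(:, k) - Q(:, 1:k-1) * R(1:k-1, k);
        s = norm(A(:, k)); p = norm(R(1:k-1, k));
        R(k, k) = sqrt(s - p) * sqrt(s + p);     % r_kk = ((s-p)(s+p))^(1/2)
        Q(:, k) = w / R(k, k);
    end
end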

24 Heinz Rutishauser (1918-1970)
Pioneer in computer science and computational mathematics
Long history with ETH as both student and distinguished faculty member
Selective reorthogonalization for MGS
Superorthogonalization

25 Classic Gram-Schmidt CGS Algorithm

Q = A;
for k = 1:n
    for i = 1:k-1
        R(i,k) = Q(:,i)'*Q(:,k);
    end                              % (omit this line for CMGS)
    for i = 1:k-1                    % (omit this line for CMGS)
        Q(:,k) = Q(:,k) - R(i,k)*Q(:,i);
    end
    R(k,k) = norm(Q(:,k));
    Q(:,k) = Q(:,k)/R(k,k);
end

26 Columnwise MGS CMGS Algorithm

Q = A;
for k = 1:n
    for i = 1:k-1
        R(i,k) = Q(:,i)'*Q(:,k);
        Q(:,k) = Q(:,k) - R(i,k)*Q(:,i);
    end
    R(k,k) = norm(Q(:,k));
    Q(:,k) = Q(:,k)/R(k,k);
end

27 Reorthogonalization
Iterative orthogonalization: as each $q_k$ is generated, reorthogonalize it with respect to $q_1, \dots, q_{k-1}$.
Twice is enough (W. Kahan; Giraud, Langou, and Rozložník, 2002).
Selective reorthogonalization.

28 To reorthogonalize or not to reorthogonalize
For most applications it is not necessary to reorthogonalize if MGS is used: both MGS least squares and MGS GMRES are backward stable.
In many applications it is important that the residual vector $r$ be orthogonal to the column vectors of $Q$. In these cases, if MGS is used, $r$ should be reorthogonalized with respect to $q_n, q_{n-1}, \dots, q_1$ (note the reverse order).

29 Selective reorthogonalization
Reorthogonalize when loss of orthogonality is detected:
$$b = a_k - \sum_{i=1}^{k-1} r_{ik} q_i.$$
Cancellation is indicated if $\|b\| \ll \|a_k\|$.
(1967) Rutishauser test: reorthogonalize if $\|b\| \le \frac{1}{10} \|a_k\|$.

30 Selective reorthogonalization
The Rutishauser reorthogonalization condition can be rewritten as
$$\frac{\|a_k\|}{\|b\|} \ge 10, \qquad \text{or more generally} \qquad \frac{\|a_k\|}{\|b\|} \ge K.$$
There are many papers using different values of K. Generally $1 \le K \le \kappa_2(A)$; a popular choice is $K = 2$. Hoffman, 1989, investigates a range of K values. Giraud and Langou, 2003, use the alternate condition
$$\frac{\sum_{i=1}^{k-1} |r_{ik}|}{r_{kk}} > L.$$
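
A MATLAB sketch of one orthogonalization step with the K-criterion (the function name and loop organization are mine; K = 10 reproduces the Rutishauser test):

function [q, r] = orth_step(Q, a, K)
    % Orthogonalize a against the (orthonormal) columns of Q;
    % reorthogonalize whenever the norm drops by more than the factor K.
    r = Q'*a; b = a - Q*r;
    prev = norm(a);
    while norm(b) < prev/K             % cancellation detected
        prev = norm(b);
        s = Q'*b; r = r + s; b = b - Q*s;
    end
    q = b/norm(b);
end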

31 Superorthogonalization
$$|x^T y - fl(x^T y)| \le \gamma_n\, |x|^T |y|, \qquad \gamma_n = \frac{n\varepsilon}{1 - n\varepsilon},$$
where $|x|$ is the vector with elements $|x_i|$. If $x$ is orthogonal to $y$, then
$$|fl(x^T y)| \le \gamma_n\, |x|^T |y|. \tag{1}$$
Rutishauser superorthogonalization: given $x$ and $y$, keep orthogonalizing until (1) is satisfied.
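
A MATLAB sketch of the superorthogonalization loop (the function name is mine; x is assumed to have unit 2-norm so that the correction is an orthogonal projection):

function y = superorth(x, y)
    % Keep orthogonalizing y against x until condition (1) holds
    n = length(x);
    gamma_n = n*eps/(1 - n*eps);
    while abs(x'*y) > gamma_n * (abs(x)'*abs(y))
        y = y - (x'*y)*x;              % x assumed to have unit 2-norm
    end
end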

32 Superorthogonalization Example
The vectors x and y of the example (their entries are lost in this transcription; see the table on the next slide) are numerically orthogonal; nevertheless the computed scalar product $x^T y$ is still nonzero. Perform two reorthogonalizations: $y \to y^{(2)} \to y^{(3)}$.

33 Superorthogonalization Calculations
[Table of the entries of $x$, $y$, $y^{(2)}$, $y^{(3)}$, with magnitudes ranging from 1e+00 down to 1e-40, lost in this transcription.]
Decrease in scalar products: $x^T y =$ 1e-20, $x^T y^{(2)}$ on the order of 1e-37, $x^T y^{(3)} = 0$.

34 Column pivoting
Golub, 1965; Golub and Businger, 1965. After k steps,
$$Q_k^T A \Pi_k = H_k \cdots H_1 A P_1 \cdots P_k = R^{(k)} = \begin{pmatrix} R_{11}^{(k)} & R_{12}^{(k)} \\ O & R_{22}^{(k)} \end{pmatrix}.$$
Choose as pivot the column (of smallest index) of $R_{22}^{(k)}$ that has largest 2-norm. This strategy minimizes $\|R_{22}^{(k)}\|_{(1,2)}$ at each step.
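
A hedged MATLAB sketch of this pivoting strategy (done here with MGS updates for brevity; Businger and Golub use Householder transformations and downdate the column norms rather than recomputing them):

function [Q, R, p] = qrcp_mgs(A)
    [m, n] = size(A); Q = A; R = zeros(n); p = 1:n;
    for k = 1:n
        [~, j] = max(sum(Q(:, k:n).^2, 1));    % largest trailing column norm
        j = j + k - 1;                         % (ties: smallest index wins)
        Q(:, [k j]) = Q(:, [j k]);             % swap columns k and j
        R(:, [k j]) = R(:, [j k]); p([k j]) = p([j k]);
        R(k, k) = norm(Q(:, k)); Q(:, k) = Q(:, k)/R(k, k);
        R(k, k+1:n) = Q(:, k)' * Q(:, k+1:n);
        Q(:, k+1:n) = Q(:, k+1:n) - Q(:, k) * R(k, k+1:n);
    end
end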

35 The (1,2) Matrix Norm
The (1,2) norm of an m x n matrix A is defined by
$$\|A\|_{(1,2)} = \max\left( \|a_1\|_2, \dots, \|a_n\|_2 \right).$$
Note: since $\|Ax\|_2 \le \|A\|_{(1,2)} \|x\|_1$, it follows that
$$\|A\|_{(1,2)} = \max_{x \ne 0} \frac{\|Ax\|_2}{\|x\|_1}.$$
Furthermore, if B is any n x k matrix, then
$$\|AB\|_{(1,2)} \le \|A\|_2 \max\left( \|b_1\|_2, \dots, \|b_k\|_2 \right) = \|A\|_2 \|B\|_{(1,2)}.$$

36 Failure of column pivoting to detect rank deficiencies
Column pivoting may fail to detect rank deficiencies. Example of W. Kahan:
$$K_n = \mathrm{diag}(1, s, s^2, \dots, s^{n-1}) \begin{pmatrix} 1 & -c & \cdots & -c \\ & 1 & \ddots & \vdots \\ & & \ddots & -c \\ & & & 1 \end{pmatrix}, \qquad c^2 + s^2 = 1, \quad c, s > 0.$$
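
A short MATLAB demonstration (n and s are assumed demo values): $K_n$ is already upper triangular, and in exact arithmetic column pivoting performs no interchanges, so $K_n$ itself plays the role of R; yet its last diagonal entry $s^{n-1}$ is orders of magnitude larger than $\sigma_{\min}$.

n = 90; s = 0.99; c = sqrt(1 - s^2);
K = diag(s.^(0:n-1)) * (eye(n) - c*triu(ones(n), 1));
fprintf('R(n,n) = %.2e,  sigma_min = %.2e\n', s^(n-1), min(svd(K)))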

37 Pivoting strategies based on 2-norm optimizations
Singular value interlacing:
$$\sigma_k(R_{11}^{(k)}) \le \sigma_k(A) \qquad \text{and} \qquad \sigma_1(R_{22}^{(k)}) \ge \sigma_{k+1}(A).$$
To reveal rank, choose a pivoting strategy to minimize $\|R_{22}^{(k)}\|_2$ (rather than the (1,2) norm).
Alternative approach: choose a pivoting strategy to maximize
$$\frac{1}{\|(R_{11}^{(k)})^{-1}\|_2} = \sigma_k(R_{11}^{(k)}).$$

38 Rank Revealing QR (RRQR)
The L. V. Foster, 1986, and Tony Chan, 1987, algorithms use the LINPACK condition estimator. Chan was the first to refer to the factorizations as rank revealing.
Existence of RRQRs shown by Hong and Pan, 1992.
Chandrasekaran and Ipsen, 1994: hybrid methods combining both optimization approaches.

39 More hybrid RRQR algorithms
Pan and Tang, 1999: hybrid algorithms based on pivoted blocks and cyclic pivoting.
Bischof and Quintana-Orti, 1998: block versions of the hybrid algorithms with ICE, Incremental Condition Estimation (1990).

40 Nullspace Computations
Suppose A has numerical rank k and an RRQR factorization
$$AP = (Q_1, Q_2) \begin{pmatrix} R_{11} & R_{12} \\ O & R_{22} \end{pmatrix},$$
where $R_{11}$ is $k \times k$ and $R_{22} \approx O$. Approximate nullspace basis:
$$Z = P \begin{pmatrix} -R_{11}^{-1} R_{12} \\ I_{n-k} \end{pmatrix}.$$
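
A MATLAB sketch of this nullspace computation (A and the rank tolerance tol are assumed inputs; MATLAB's three-output qr performs column pivoting):

[Q, R, P] = qr(A);                            % A*P = Q*R, column pivoted
k = nnz(abs(diag(R)) > tol*abs(R(1,1)));      % estimated numerical rank
Z = P * [-R(1:k, 1:k) \ R(1:k, k+1:end); eye(size(A,2) - k)];
% By construction A*Z = Q*[O; R22], so norm(A*Z) = norm(R22)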

41 Strong RRQR factorizations
There is a problem in computing the nullspace basis if $R_{11}$ is ill-conditioned.
Strong RRQR: Gu and Eisenstat, 1996. A factorization is a strong RRQR factorization if the entries of $R_{11}^{-1} R_{12}$ are not too large.
In many applications the nullspaces need to be updated as A is updated.

42 UTV decompositions
G. W. Stewart: URV (1992) and ULV (1993). Here $AV = UT$, where $V = (V_1, V_2)$ is orthogonal and $T$ is triangular. The columns of $V_2$ form an approximate nullspace basis.
Fierro and Hansen (1997): low rank UTV factorizations.
MATLAB routines: UTV Tools (1995), UTV Expansion Pack (2005)

43 Krylov Subspace Methods for Ax = b
Find an orthonormal basis for the Krylov subspace
$$K_k(A, r_0) = \mathrm{span}\{r_0, Ar_0, A^2 r_0, \dots, A^{k-1} r_0\}.$$
If MGS is used to compute the QR decomposition
$$K = [r_0, Ar_0, \dots, A^{k-1} r_0] = Q_k R,$$
then in the process we compute
$$q_j = c_0 r_0 + c_1 A r_0 + \cdots + c_{j-1} A^{j-1} r_0, \qquad \text{with } c_{j-1} = \frac{1}{r_{jj}} \ne 0.$$
Arnoldi process: use $A q_j$ instead of $A^j r_0$.
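
A MATLAB sketch of the Arnoldi process with MGS orthogonalization (the function name is mine; the breakdown check when H(j+1,j) = 0 is omitted for brevity):

function [Q, H] = arnoldi_mgs(A, r0, k)
    % Build an orthonormal basis Q of K_{k+1}(A, r0) and the
    % (k+1) x k Hessenberg matrix H with A*Q(:,1:k) = Q*H
    n = length(r0); Q = zeros(n, k+1); H = zeros(k+1, k);
    Q(:, 1) = r0/norm(r0);
    for j = 1:k
        w = A * Q(:, j);                 % next direction: A*q_j, not A^j*r0
        for i = 1:j                      % MGS: subtract projections in turn
            H(i, j) = Q(:, i)' * w;
            w = w - H(i, j) * Q(:, i);
        end
        H(j+1, j) = norm(w);
        Q(:, j+1) = w / H(j+1, j);
    end
end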

44 MGS GMRES
MGS GMRES is the most common way to apply Arnoldi to large sparse unsymmetric linear systems. It succeeds despite the loss of orthogonality.

45 MGS GMRES
Greenbaum, Rozložník, and Strakoš (1997): the Arnoldi basis vectors begin to lose their linear independence only after the GMRES residual norm has been reduced to almost its final level of accuracy.
C. Paige, M. Rozložník, and Z. Strakoš (2006): MGS-GMRES without reorthogonalization is backward stable. What matters is not the loss of orthogonality but the loss of linear independence.

46 Conclusions Orthogonality plays a fundamental role in applied mathematics. The Gram-Schmidt algorithms are at the core of much of what we do in computational mathematics. Stability of GS is now well understood. The GS Process is central to solving least squares problems and to Krylov subspace methods. The QR factorization paved the way for modern rank revealing factorizations. What will the future bring?
