A COMPARISON OF ACCELERATION TECHNIQUES APPLIED TO THE SOR METHOD

RUDNEI DIAS DA CUNHA
Computing Laboratory, University of Kent at Canterbury, U.K.
Centro de Processamento de Dados, Universidade Federal do Rio Grande do Sul, Brasil

and

TIM HOPKINS
Computing Laboratory, University of Kent at Canterbury, U.K.

Abstract. In this paper we investigate the performance of four different acceleration techniques on a variety of linear systems. Two of these techniques have been proposed by Dancis [1], who uses a polynomial acceleration together with a sub-optimal ω. The two other techniques discussed are vector accelerations: the ε algorithm proposed by Wynn [9] and a generalisation of Aitken's algorithm proposed by Graves-Morris [3]. The experimental results show that these accelerations can reduce the amount of work required to obtain a solution and that their rates of convergence are generally less sensitive to the value of ω than the straightforward SOR method. However, a poor choice of ω can result in particularly inefficient solutions, and more work is required before cheap estimates of an effective ω can be obtained. Necessary conditions for a reduction in the computational work required for convergence are given for each of the accelerations, based on the number of floating-point operations. It is shown experimentally that the reduction in the number of iterations is related to the separation between the two largest eigenvalues of the SOR iteration matrix for a given ω; this separation influences the convergence of all the acceleration techniques above. Another important characteristic exhibited by these accelerations is that even when the number of iterations is not reduced significantly compared to the SOR method, they are competitive in terms of the number of floating-point operations used and thus reduce the overall computational workload.

Key words: SOR, Acceleration.
1. Introduction

Consider the solution of the non-singular, symmetric, positive-definite system of n linear equations

    Ax = b    (1)

by the SOR iteration

    (I + ωD⁻¹A_L) x^(k+1) = [(1 − ω)I − ωD⁻¹A_U] x^(k) + ωD⁻¹b,    k = 0, 1, …    (2)

where D = diag(a_ii), A_L and A_U are the strictly lower and upper triangular parts of A, and ω is the relaxation parameter. Convergence of the method is guaranteed for 0 < ω < 2.

The SOR iterative method is commonly used for the solution of large, sparse linear systems that arise from the approximation of partial differential equations. Its
rate of convergence is dependent on the value chosen for the iteration parameter ω. Following Young [5], the minimum number of iterations required for convergence is obtained when this parameter has an optimal value, ω_b, which minimises the spectral radius of the SOR iteration matrix,

    L_ω = (I + ωD⁻¹A_L)⁻¹ [(1 − ω)I − ωD⁻¹A_U].

The value of ω_b may be obtained from the largest eigenvalue, μ, of the Jacobi iteration matrix B = −D⁻¹(A_L + A_U) using

    ω_b = 2 / (1 + √(1 − μ²)).    (3)

However, computing ω_b is relatively expensive in most cases. Adaptive procedures exist that can be used to update some initial approximation to ω_b, as in the ITPACK 2C package [4], but some of the initial iterations have large error vectors (when compared to using some ω > ω_b). In this case some of the initial estimates of the solution are "wasted" during the iterative process, and this may be undesirable. An alternative is to use an acceleration technique that may not be so sensitive to the choice of ω.

The first two acceleration techniques, detailed in §2, were proposed by Dancis [1] and follow the usual approach of trying to select some ω for the iteration. We show that for one of his techniques the selection of ω is not as sensitive as expected. In §3 we look at an extension of the ε algorithm of Wynn applicable to vector and matrix iterations [8], [9], and in §4 a generalisation of Aitken's algorithm proposed by Graves-Morris [3] is described. These are vector accelerations and thus do not give a prescription for selecting ω. Our investigation is aimed at establishing how sensitive these accelerations are with respect to the value chosen for ω. Section 6 describes the test problems used in the investigation and the results obtained from the experiments. In Section 7 we summarise the results and draw some conclusions.
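As a concrete illustration of the iteration (2) and of formula (3) — a sketch in Python/NumPy for dense matrices, not the authors' MATLAB and Fortran 77 codes — one SOR sweep updates the components in place, and ω_b follows from the Jacobi spectral radius:

```python
# Illustrative sketch (not the authors' code): the SOR iteration (2) and
# the optimal relaxation parameter omega_b of (3), obtained from the
# spectral radius mu of the Jacobi matrix B = -D^{-1}(A_L + A_U) = I - D^{-1}A.
import numpy as np

def sor(A, b, omega, x0, tol=1e-10, max_iter=1000):
    """Run SOR on Ax = b; return (x, number of iterations performed)."""
    x = np.array(x0, dtype=float)
    n = len(b)
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # relaxed Gauss-Seidel sweep: new values for j < i, old for j > i
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, k + 1
    return x, max_iter

def optimal_omega(A):
    """omega_b = 2 / (1 + sqrt(1 - mu^2)), mu = rho(Jacobi matrix)."""
    B = np.eye(len(A)) - A / np.diag(A)[:, None]   # B = I - D^{-1} A
    mu = max(abs(np.linalg.eigvals(B)))
    return 2.0 / (1.0 + np.sqrt(1.0 - mu ** 2))
```

For the model tridiagonal matrix tridiag(−1, 2, −1) of order n, the Jacobi spectral radius is cos(π/(n+1)), so `optimal_omega` reproduces the known value ω_b = 2/(1 + sin(π/(n+1))).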
2. Dancis's Acceleration

Dancis proposes the use of a polynomial accelerator with SOR, ω being chosen such that the coordinate of the error vector corresponding to the largest eigenvalue of L_ω is annihilated. Dancis recommends that ω = 1 + λ₂, where λ₂ is the second largest eigenvalue of L_ω. By computing the second largest eigenvalue, μ₂, of the Jacobi iteration matrix we can obtain this value of ω using Equation (3), with μ replaced by μ₂.

Two different accelerations are proposed. The first, which we refer to as P1, is as follows. Perform r − 1 SOR iterations and then apply

    x^(r) = x̃ = [1/(1 − λ₂^(r−1))] x^(r−1) − [λ₂^(r−1)/(1 − λ₂^(r−1))] x^(0),    (4)

continuing with the SOR iterations thereafter. The second acceleration (referred to as P2) is obtained by

    x̂^(i+1) = [1/(1 − a^(i+1))] x^(i+1) − [a^(i+1)/(1 − a^(i+1))] x^(0),    i = 1, …, r,    (5)
where a = ω − 1. After r steps of the above iteration have been performed, we set x^(r+1) = x̂^(r+1) and the SOR iteration is used from then on. Dancis proposes that the value of r be chosen according to a spectral radius analysis (see [1, page 827]); its value is obtained by minimising

    (ω − 1)^(m−r) [(ω − 1)^r + rλ₂^r] / [(1 − λ₂^r)(ω_b − 1)^m],    m ≥ r = 1, 2, …,    (6)

over r, where m is some arbitrarily chosen number of iterations.

3. Wynn's ε Algorithm

In 1962 Wynn proposed an extension of the ε algorithm [8] to vector and matrix iterations [9]. Consider a sequence S = {s_k}, k = 0, 1, …, which is slowly convergent. If we define the sequences ε_{−1}^(k) = {0} and ε_0^(k) = S, then new sequences ε_{i+1}^(k) are generated by

    ε_{i+1}^(k) = (ε_i^(k+1) − ε_i^(k))^{−1} + ε_{i−1}^(k+1),    i = 0, 1, …,    k = 0, 1, …    (7)

In certain circumstances the sequences ε_{2i}^(k), i = 1, 2, …, converge faster to the limit of S. We investigate the behaviour of the ε_2^(k) sequence obtained from vectors generated by SOR. We can express the new vector iterates, generated by two successive applications of (7) to three SOR vectors x^(k−1), x^(k) and x^(k+1), as

    ε_2^(k−1) = [(x^(k+1) − x^(k))^{−1} − (x^(k) − x^(k−1))^{−1}]^{−1} + x^(k),    (8)

where u^{−1} = ū/|u|² is the Moore-Penrose generalised inverse of a vector u and ū denotes its complex conjugate.

In [3] it is shown that the value of ω should be taken as 1 + λ_{i+1} when using a sequence {ε_{2i}^(k)}. For the acceleration of SOR shown above, with ε_2^(k) as the sequence to be used, ω = 1 + λ₂, which is the value proposed by Dancis.

4. Graves-Morris's Acceleration

Graves-Morris suggests [3, page 5] that the sequence of vectors x^(k) generated by (2) may be accelerated using

    t^(k) = x^(k−1) − [Δx^(k−1) · (Δ²x^(k−2))^{−1}] Δx^(k−2),    k = 2, 3, …,    (9)

where Δx^(k) = x^(k+1) − x^(k); this is a generalisation of Aitken's Δ² process [6]. We will refer to (9) as the G-M iteration.
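The ε-step (8) above can be sketched in a few lines (an illustration of ours, not the authors' code), using the Samelson/Moore-Penrose inverse u⁻¹ = ū/|u|² of a vector:

```python
# Vector epsilon-step (8): combine three consecutive iterates
# x^{(k-1)}, x^{(k)}, x^{(k+1)} via the Samelson (Moore-Penrose)
# inverse of a vector, u^{-1} = conj(u) / |u|^2.
import numpy as np

def sinv(u):
    """Samelson inverse of a nonzero vector."""
    return np.conj(u) / np.vdot(u, u).real

def epsilon_step(x_prev, x_cur, x_next):
    """One application of (8)."""
    d = sinv(x_next - x_cur) - sinv(x_cur - x_prev)
    return sinv(d) + x_cur
```

For iterates whose error consists of a single geometric mode, x^(k) = x* + c λ^k, a short calculation shows the step is exact: it returns x* from any three consecutive iterates.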
Experimental results on a few model problems given in [3] show that, for the G-M iteration, using the value of ω which maximises the separation between the largest eigenvalue of the iteration matrix and the other eigenvalues, a reduction in the number of iterations needed for convergence occurs. We investigate whether this reduction is also observed in larger, practical problems and, if so, how critical the eigenvalue separation is to the rate of convergence. The value of ω is chosen in a similar way as for the ε algorithm, following [3].
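Since (9) generalises Aitken's Δ² process, the scalar case conveys the underlying idea. This sketch (ours, not taken from [3]) shows the scalar transform, which is exact whenever the error consists of a single geometric term:

```python
# Scalar Aitken Delta^2 process, the method that (9) generalises to
# vectors: for s_k = s + C*lam^k (one geometric error mode) the
# transformed value equals the limit s exactly.
def aitken(s0, s1, s2):
    d1 = s1 - s0                       # Delta s_0
    d2 = s2 - s1                       # Delta s_1
    return s2 - d2 * d2 / (d2 - d1)    # s2 - (Delta s)^2 / Delta^2 s
```

For example, the sequence s_k = 1 + 0.5^k gives aitken(2, 1.5, 1.25) = 1, the exact limit.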
5. Conditions for the Effectiveness of the Acceleration Techniques

Our aim is to discover whether we can reduce the amount of computation needed by SOR to solve a system of the form (1) by using one of the acceleration techniques. For instance, using the G-M acceleration one might expect (intuitively) that if at least one iteration is saved, the computing workload is reduced by roughly n² multiplications compared to the SOR iteration.

We present below necessary conditions for the techniques discussed to require a smaller number of floating-point multiplications (flops) than SOR. These conditions are obtained in terms of the number of iterations (k) and the number of flops per iteration. For SOR and each of the accelerations we consider that the total number of flops is

    flops_SOR = k_SOR (n² + 2n) O
    flops_P1 = [k_P1 (n² + 2n) + 2n] O
    flops_P2 = [k_P2 (n² + 2n) + 2rn] O
    flops_ε = [k_ε (n² + 7n) − 6n] O
    flops_G-M = k_G-M (n² + 3n) O

where O represents a floating-point multiplication. It is easy to show that, for k_SOR > k_accel, where accel denotes any of the acceleration techniques, we have flops_SOR > flops_accel if and only if the following conditions are satisfied:

    P1: n > (2 + 2k_P1 − 2k_SOR) / (k_SOR − k_P1)
    P2: n > (2r + 2k_P2 − 2k_SOR) / (k_SOR − k_P2)
    ε: n > (7k_ε − 2k_SOR − 6) / (k_SOR − k_ε)
    G-M: n > (3k_G-M − 2k_SOR) / (k_SOR − k_G-M)
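As a numerical sanity check of the G-M condition above (an illustrative calculation, not from the paper), the closed-form inequality must agree with a direct comparison of the two flop totals:

```python
# Break-even check for the G-M acceleration: flops_SOR = k_s*(n^2 + 2n)
# multiplications versus flops_GM = k_g*(n^2 + 3n).  The closed-form
# condition n > (3*k_g - 2*k_s)/(k_s - k_g) is algebraically equivalent
# to flops_SOR > flops_GM whenever k_s > k_g.
def flops_sor(n, k_s):
    return k_s * (n * n + 2 * n)

def flops_gm(n, k_g):
    return k_g * (n * n + 3 * n)

def gm_cheaper(n, k_s, k_g):
    # condition derived in the text; requires k_s > k_g
    return n > (3 * k_g - 2 * k_s) / (k_s - k_g)
```

For example, n = 100, k_SOR = 50 and k_G-M = 40 gives a threshold of (120 − 100)/10 = 2 iterations' worth of size, so the G-M acceleration is cheaper: 510000 flops for SOR against 412000 for G-M.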
6. Description of the Test Problems and Experiments

In this section we present the results obtained from solving a set of problems using MATLAB implementations of the above methods. Three of the problems are taken from the Harwell-Boeing library [2], and for these we used Fortran 77 and BLAS routines to implement the methods and LAPACK subroutines to compute the eigenvalues. The problems solved present different characteristics with respect to the distribution of eigenvalues and are of interest to the analysis of the acceleration techniques discussed.

In each experiment we describe the system of linear equations used, the value of ω_b computed using (3), and the convergence criteria. The results are tabulated for each method in terms of the number of iterations needed to achieve convergence, the flops count of SOR, and the ratios between the flops count of each acceleration and that of SOR. For the analysis of the G-M iteration we provide a graph showing the two largest eigenvalues of the iteration matrix, λ₁ and λ₂, such that |λ₁| > |λ₂|. For these test problems, the value of r for Dancis's accelerations was determined from (6).

6.1. Varga's Problem

This is the system described in [7, Appendix B], derived from the five-point finite-difference discretisation of

    −∇·(D(x, y)∇u) + σ(x, y)u = S(x, y)

in the region R = (0, 2.1) × (0, 2.1), subject to the boundary condition ∂u/∂n = 0, where n is the outward normal. The functions D, σ and S are as shown in Figure 1. The value of ω_b is 1.977.

Fig. 1. Region for Varga's problem, giving D(x, y) and σ(x, y) in each subregion; S(x, y) = 0 everywhere.
6.1.1. Experiment

In this problem we are solving Ax = 0, which has the zero vector as its unique solution. The initial vector x^(0) was set to (1, 1, …, 1)^T and we iterated until the norm of the solution vector was less than 10⁻⁴ (this stopping criterion was used in order to reproduce the behaviour of SOR presented in [7, Appendix B, page 304]). A maximum of 1000 iterations was allowed.

An impressive reduction in the number of iterations is achieved by the G-M, ε and P1 accelerations. Note that while the minimum number of iterations for SOR and P2 is obtained when ω = ω_b, for G-M, ε and P1 this minimum occurs at some ω < ω_b.

TABLE I. Varga's problem: number of iterations of SOR, G-M, ε, P1 and P2 at five values of ω.

TABLE II. Varga's problem: flops counts — Mflops for SOR, each acceleration as a percentage of SOR, and each method as a percentage of the minimum flops count.
Fig. 2. Varga's problem: eigenvalue separation and number of iterations of the methods.
6.2. Problem TRIDIAG

This problem is a tridiagonal system Ax = b. The condition number is κ(A) = 3.9964.

6.2.1. Experiment

The starting vector used is x^(0) = (0, 0, …, 0)^T and the iterations proceed until the norm of the residual of the solution vector is less than 10⁻¹⁰ or the number of iterations exceeds the maximum allowed. This example shows a situation where ω_b is close to 1. The experiment shows that in this case little, if any, gain is achieved by the accelerations. Nonetheless, even when only a single iteration is saved, a reduction in the computational effort is verified.

TABLE III. Problem TRIDIAG: number of iterations of SOR, G-M, ε, P1 and P2 at five values of ω.

TABLE IV. Problem TRIDIAG: flops counts — Mflops for SOR, each acceleration as a percentage of SOR, and each method as a percentage of the minimum flops count.
Fig. 3. Problem TRIDIAG: eigenvalue separation and number of iterations of the methods.
6.3. Problem NOS4

This problem is taken from the Harwell-Boeing sparse matrix collection [2]. It is derived from a finite-element approximation to a structural engineering problem. The system has order n = 100. The RHS vector was chosen as (1, 0, 0, …, 0)^T.

6.3.1. Experiment

In this problem we iterated until the norm of the residual of the solution vector was less than 10⁻⁴ or the number of iterations exceeded 1000. The initial estimate of x was (0, 0, …, 0)^T. This example shows behaviour similar to that of Varga's problem. However, in this case the ε acceleration is worse than SOR, and P1 and P2 fail to produce any acceleration.

TABLE V. Problem NOS4: number of iterations of SOR, G-M, ε, P1 and P2 at five values of ω.

TABLE VI. Problem NOS4: flops counts — Mflops for SOR, each acceleration as a percentage of SOR, and each method as a percentage of the minimum flops count.
Fig. 4. Problem NOS4: eigenvalue separation and number of iterations of the methods.
6.4. Problem 685 BUS

This problem is taken from the Harwell-Boeing sparse matrix collection [2]. The coefficient matrix is derived from the modelling of a power system network. The system has order n = 685. The RHS vector was chosen as (1, 0, 0, …, 0)^T.

6.4.1. Experiment

In this experiment we used the same stopping criteria as in problem NOS4. It shows behaviour similar to that exhibited in the Varga and NOS4 problems, except for the ε acceleration, which was always worse than SOR except at ω = ω_b.

TABLE VII. Problem 685 BUS: number of iterations of SOR, G-M, ε, P1 and P2 at five values of ω.

TABLE VIII. Problem 685 BUS: flops counts — Mflops for SOR, each acceleration as a percentage of SOR, and each method as a percentage of the minimum flops count.
Fig. 5. Problem 685 BUS: eigenvalue separation and number of iterations of the methods.
7. Summary

We summarise the results presented in §6. The main points are as follows.

1. The P2 iteration performed poorly on the test problems used in this paper. The other three methods generally outperformed the basic SOR method, with the G-M acceleration showing the most consistent improvements.

2. The G-M, ε and P1 iterations almost invariably reduced the number of iterations required to obtain a specified accuracy when ω < ω_b. In some cases this reduction was also observed for ω = ω_b.

3. As with the basic SOR method, the G-M, ε and P1 iterations perform poorly if the chosen ω is greater than ω_b.

4. The reduction in the number of iterations using the G-M method is proportional to the separation of λ₁ and λ₂. Since the ε and P1 iterations produce behaviour similar to G-M, we believe this separation also has an influence on the convergence properties of these methods. Note that as the separation between λ₁ and λ₂ decreases, all three iterations exhibit behaviour similar to SOR.

The experimental results presented show that Graves-Morris's acceleration technique is the most attractive of the techniques discussed here, from the point of view both of the overall amount of computational work required and of the range of the parameter ω for which the rate of convergence is improved. Though the number of experiments performed was small, we believe the results indicate that these accelerations of the SOR method are effective and may be applicable to other systems.

Acknowledgements

We would like to thank Peter Graves-Morris for his valuable comments which helped to improve this paper.

References

1. J. Dancis. The optimal ω is not best for the SOR iteration method. Linear Algebra and its Applications, 154-156:819-845, 1991.
2. I.S. Duff, R.G. Grimes, and J.G. Lewis. Users' guide for the Harwell-Boeing sparse matrix collection. Report TR/PA/92/86, CERFACS, October 1992.
3. P.R. Graves-Morris. A review of Padé methods for the acceleration of convergence of a sequence of vectors. Report No. NA 9-3, Department of Mathematics, University of Bradford.
4. R.G. Grimes, D.R. Kincaid, and D.M. Young. ITPACK 2.0 user's guide. Report No. CNA-150, Center for Numerical Analysis, University of Texas at Austin, August 1979.
5. L.A. Hageman and D.M. Young. Applied Iterative Methods. Academic Press, New York, 1981.
6. D.R. Kincaid and W. Cheney. Numerical Analysis. Brooks/Cole, Pacific Grove, 1991.
7. R.S. Varga. Matrix Iterative Analysis. Prentice-Hall, Englewood Cliffs, 1962.
8. P. Wynn. On a device for computing the e_m(S_n) transformation. Mathematical Tables and Aids to Computation, 10:91-96, 1956.
9. P. Wynn. Acceleration techniques for iterated vector and matrix problems. Mathematics of Computation, 16:301-322, 1962.
More information9.1 Preconditioned Krylov Subspace Methods
Chapter 9 PRECONDITIONING 9.1 Preconditioned Krylov Subspace Methods 9.2 Preconditioned Conjugate Gradient 9.3 Preconditioned Generalized Minimal Residual 9.4 Relaxation Method Preconditioners 9.5 Incomplete
More informationproblem Au = u by constructing an orthonormal basis V k = [v 1 ; : : : ; v k ], at each k th iteration step, and then nding an approximation for the e
A Parallel Solver for Extreme Eigenpairs 1 Leonardo Borges and Suely Oliveira 2 Computer Science Department, Texas A&M University, College Station, TX 77843-3112, USA. Abstract. In this paper a parallel
More informationBLZPACK: Description and User's Guide. Osni A. Marques. CERFACS Report TR/PA/95/30
BLZPAK: Description and User's Guide Osni A. Marques ERFAS Report TR/PA/95/30 BLZPAK: Description and User's Guide Osni A. Marques y October 1995 Abstract This report describes BLZPAK, an implementation
More informationNumerical Methods - Numerical Linear Algebra
Numerical Methods - Numerical Linear Algebra Y. K. Goh Universiti Tunku Abdul Rahman 2013 Y. K. Goh (UTAR) Numerical Methods - Numerical Linear Algebra I 2013 1 / 62 Outline 1 Motivation 2 Solving Linear
More information15 Singular Value Decomposition
15 Singular Value Decomposition For any high-dimensional data analysis, one s first thought should often be: can I use an SVD? The singular value decomposition is an invaluable analysis tool for dealing
More informationPreconditioned Conjugate Gradient-Like Methods for. Nonsymmetric Linear Systems 1. Ulrike Meier Yang 2. July 19, 1994
Preconditioned Conjugate Gradient-Like Methods for Nonsymmetric Linear Systems Ulrike Meier Yang 2 July 9, 994 This research was supported by the U.S. Department of Energy under Grant No. DE-FG2-85ER25.
More information14 Singular Value Decomposition
14 Singular Value Decomposition For any high-dimensional data analysis, one s first thought should often be: can I use an SVD? The singular value decomposition is an invaluable analysis tool for dealing
More informationD. Gimenez, M. T. Camara, P. Montilla. Aptdo Murcia. Spain. ABSTRACT
Accelerating the Convergence of Blocked Jacobi Methods 1 D. Gimenez, M. T. Camara, P. Montilla Departamento de Informatica y Sistemas. Univ de Murcia. Aptdo 401. 0001 Murcia. Spain. e-mail: fdomingo,cpmcm,cppmmg@dif.um.es
More informationNORTHERN ILLINOIS UNIVERSITY
ABSTRACT Name: Santosh Kumar Mohanty Department: Mathematical Sciences Title: Ecient Algorithms for Eigenspace Decompositions of Toeplitz Matrices Major: Mathematical Sciences Degree: Doctor of Philosophy
More informationMinimum and maximum values *
OpenStax-CNX module: m17417 1 Minimum and maximum values * Sunil Kumar Singh This work is produced by OpenStax-CNX and licensed under the Creative Commons Attribution License 2.0 In general context, a
More information6 Linear Systems of Equations
6 Linear Systems of Equations Read sections 2.1 2.3, 2.4.1 2.4.5, 2.4.7, 2.7 Review questions 2.1 2.37, 2.43 2.67 6.1 Introduction When numerically solving two-point boundary value problems, the differential
More informationChapter 2. Solving Systems of Equations. 2.1 Gaussian elimination
Chapter 2 Solving Systems of Equations A large number of real life applications which are resolved through mathematical modeling will end up taking the form of the following very simple looking matrix
More informationBaltzer Journals April 24, Bounds for the Trace of the Inverse and the Determinant. of Symmetric Positive Denite Matrices
Baltzer Journals April 4, 996 Bounds for the Trace of the Inverse and the Determinant of Symmetric Positive Denite Matrices haojun Bai ; and Gene H. Golub ; y Department of Mathematics, University of Kentucky,
More informationCME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 6
CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 6 GENE H GOLUB Issues with Floating-point Arithmetic We conclude our discussion of floating-point arithmetic by highlighting two issues that frequently
More informationPreface to Second Edition... vii. Preface to First Edition...
Contents Preface to Second Edition..................................... vii Preface to First Edition....................................... ix Part I Linear Algebra 1 Basic Vector/Matrix Structure and
More informationNUMERICAL COMPUTATION IN SCIENCE AND ENGINEERING
NUMERICAL COMPUTATION IN SCIENCE AND ENGINEERING C. Pozrikidis University of California, San Diego New York Oxford OXFORD UNIVERSITY PRESS 1998 CONTENTS Preface ix Pseudocode Language Commands xi 1 Numerical
More informationADDITIVE SCHWARZ FOR SCHUR COMPLEMENT 305 the parallel implementation of both preconditioners on distributed memory platforms, and compare their perfo
35 Additive Schwarz for the Schur Complement Method Luiz M. Carvalho and Luc Giraud 1 Introduction Domain decomposition methods for solving elliptic boundary problems have been receiving increasing attention
More informationLinear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space
Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) Contents 1 Vector Spaces 1 1.1 The Formal Denition of a Vector Space.................................. 1 1.2 Subspaces...................................................
More informationx x2 2 + x3 3 x4 3. Use the divided-difference method to find a polynomial of least degree that fits the values shown: (b)
Numerical Methods - PROBLEMS. The Taylor series, about the origin, for log( + x) is x x2 2 + x3 3 x4 4 + Find an upper bound on the magnitude of the truncation error on the interval x.5 when log( + x)
More informationS. Lucidi F. Rochetich M. Roma. Curvilinear Stabilization Techniques for. Truncated Newton Methods in. the complete results 1
Universita degli Studi di Roma \La Sapienza" Dipartimento di Informatica e Sistemistica S. Lucidi F. Rochetich M. Roma Curvilinear Stabilization Techniques for Truncated Newton Methods in Large Scale Unconstrained
More informationGeneralization of Hensel lemma: nding of roots of p-adic Lipschitz functions
Generalization of Hensel lemma: nding of roots of p-adic Lipschitz functions (joint talk with Andrei Khrennikov) Dr. Ekaterina Yurova Axelsson Linnaeus University, Sweden September 8, 2015 Outline Denitions
More informationALGEBRAIC COMBINATORICS. c1993 C. D. Godsil
ALGEBRAIC COMBINATORICS c1993 C. D. Godsil To Gillian Preface There are people who feel that a combinatorial result should be given a \purely combinatorial" proof, but I am not one of them. For me the
More informationAPPLIED NUMERICAL LINEAR ALGEBRA
APPLIED NUMERICAL LINEAR ALGEBRA James W. Demmel University of California Berkeley, California Society for Industrial and Applied Mathematics Philadelphia Contents Preface 1 Introduction 1 1.1 Basic Notation
More informationAMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning
AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 18 Outline
More informationPROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS
PROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS Abstract. We present elementary proofs of the Cauchy-Binet Theorem on determinants and of the fact that the eigenvalues of a matrix
More informationLINEAR SYSTEMS (11) Intensive Computation
LINEAR SYSTEMS () Intensive Computation 27-8 prof. Annalisa Massini Viviana Arrigoni EXACT METHODS:. GAUSSIAN ELIMINATION. 2. CHOLESKY DECOMPOSITION. ITERATIVE METHODS:. JACOBI. 2. GAUSS-SEIDEL 2 CHOLESKY
More informationNP-hardness of the stable matrix in unit interval family problem in discrete time
Systems & Control Letters 42 21 261 265 www.elsevier.com/locate/sysconle NP-hardness of the stable matrix in unit interval family problem in discrete time Alejandra Mercado, K.J. Ray Liu Electrical and
More informationNotes on the matrix exponential
Notes on the matrix exponential Erik Wahlén erik.wahlen@math.lu.se February 14, 212 1 Introduction The purpose of these notes is to describe how one can compute the matrix exponential e A when A is not
More informationValidating an A Priori Enclosure Using. High-Order Taylor Series. George F. Corliss and Robert Rihm. Abstract
Validating an A Priori Enclosure Using High-Order Taylor Series George F. Corliss and Robert Rihm Abstract We use Taylor series plus an enclosure of the remainder term to validate the existence of a unique
More informationSTABILITY OF INVARIANT SUBSPACES OF COMMUTING MATRICES We obtain some further results for pairs of commuting matrices. We show that a pair of commutin
On the stability of invariant subspaces of commuting matrices Tomaz Kosir and Bor Plestenjak September 18, 001 Abstract We study the stability of (joint) invariant subspaces of a nite set of commuting
More informationLecture 18 Classical Iterative Methods
Lecture 18 Classical Iterative Methods MIT 18.335J / 6.337J Introduction to Numerical Methods Per-Olof Persson November 14, 2006 1 Iterative Methods for Linear Systems Direct methods for solving Ax = b,
More informationNumerical Linear Algebra Primer. Ryan Tibshirani Convex Optimization /36-725
Numerical Linear Algebra Primer Ryan Tibshirani Convex Optimization 10-725/36-725 Last time: proximal gradient descent Consider the problem min g(x) + h(x) with g, h convex, g differentiable, and h simple
More informationReduction of two-loop Feynman integrals. Rob Verheyen
Reduction of two-loop Feynman integrals Rob Verheyen July 3, 2012 Contents 1 The Fundamentals at One Loop 2 1.1 Introduction.............................. 2 1.2 Reducing the One-loop Case.....................
More information1e N
Spectral schemes on triangular elements by Wilhelm Heinrichs and Birgit I. Loch Abstract The Poisson problem with homogeneous Dirichlet boundary conditions is considered on a triangle. The mapping between
More informationt x 0.25
Journal of ELECTRICAL ENGINEERING, VOL. 52, NO. /s, 2, 48{52 COMPARISON OF BROYDEN AND NEWTON METHODS FOR SOLVING NONLINEAR PARABOLIC EQUATIONS Ivan Cimrak If the time discretization of a nonlinear parabolic
More information2 Tikhonov Regularization and ERM
Introduction Here we discusses how a class of regularization methods originally designed to solve ill-posed inverse problems give rise to regularized learning algorithms. These algorithms are kernel methods
More informationLECTURE VII: THE JORDAN CANONICAL FORM MAT FALL 2006 PRINCETON UNIVERSITY. [See also Appendix B in the book]
LECTURE VII: THE JORDAN CANONICAL FORM MAT 204 - FALL 2006 PRINCETON UNIVERSITY ALFONSO SORRENTINO [See also Appendix B in the book] 1 Introduction In Lecture IV we have introduced the concept of eigenvalue
More informationNumerical Results on the Transcendence of Constants. David H. Bailey 275{281
Numerical Results on the Transcendence of Constants Involving e, and Euler's Constant David H. Bailey February 27, 1987 Ref: Mathematics of Computation, vol. 50, no. 181 (Jan. 1988), pg. 275{281 Abstract
More informationX. Hu, R. Shonkwiler, and M.C. Spruill. School of Mathematics. Georgia Institute of Technology. Atlanta, GA 30332
Approximate Speedup by Independent Identical Processing. Hu, R. Shonkwiler, and M.C. Spruill School of Mathematics Georgia Institute of Technology Atlanta, GA 30332 Running head: Parallel iip Methods Mail
More informationIterative Methods and Multigrid
Iterative Methods and Multigrid Part 1: Introduction to Multigrid 2000 Eric de Sturler 1 12/02/09 MG01.prz Basic Iterative Methods (1) Nonlinear equation: f(x) = 0 Rewrite as x = F(x), and iterate x i+1
More informationInstitute for Advanced Computer Studies. Department of Computer Science. On the Convergence of. Multipoint Iterations. G. W. Stewart y.
University of Maryland Institute for Advanced Computer Studies Department of Computer Science College Park TR{93{10 TR{3030 On the Convergence of Multipoint Iterations G. W. Stewart y February, 1993 Reviseed,
More informationExam in TMA4215 December 7th 2012
Norwegian University of Science and Technology Department of Mathematical Sciences Page of 9 Contact during the exam: Elena Celledoni, tlf. 7359354, cell phone 48238584 Exam in TMA425 December 7th 22 Allowed
More information6. Iterative Methods for Linear Systems. The stepwise approach to the solution...
6 Iterative Methods for Linear Systems The stepwise approach to the solution Miriam Mehl: 6 Iterative Methods for Linear Systems The stepwise approach to the solution, January 18, 2013 1 61 Large Sparse
More informationIntroduction to Applied Linear Algebra with MATLAB
Sigam Series in Applied Mathematics Volume 7 Rizwan Butt Introduction to Applied Linear Algebra with MATLAB Heldermann Verlag Contents Number Systems and Errors 1 1.1 Introduction 1 1.2 Number Representation
More informationIterative solution methods and their rate of convergence
Uppsala University Graduate School in Mathematics and Computing Institute for Information Technology Numerical Linear Algebra FMB and MN Fall 2007 Mandatory Assignment 3a: Iterative solution methods and
More informationComputers and Mathematics with Applications. Convergence analysis of the preconditioned Gauss Seidel method for H-matrices
Computers Mathematics with Applications 56 (2008) 2048 2053 Contents lists available at ScienceDirect Computers Mathematics with Applications journal homepage: wwwelseviercom/locate/camwa Convergence analysis
More informationAn Optimal Finite Element Mesh for Elastostatic Structural. Analysis Problems. P.K. Jimack. University of Leeds. Leeds LS2 9JT, UK.
An Optimal Finite Element Mesh for Elastostatic Structural Analysis Problems P.K. Jimack School of Computer Studies University of Leeds Leeds LS2 9JT, UK Abstract This paper investigates the adaptive solution
More information