Numerical methods part 2
Alain Hébert <alain.hebert@polymtl.ca>
Institut de génie nucléaire, École Polytechnique de Montréal
ENE6103: Week 6

Content (week 6)

1. Solution of an eigenvalue problem
   - The inverse power method
   - The preconditioned power method
   - The multigroup partitioning
   - Convergence acceleration
2. Solution of a fixed source eigenvalue problem
   - The inverse power method
   - The preconditioned power method

Solution of an eigenvalue problem 1

A consistent discretization of the transport or neutron diffusion equation leads to an eigenvalue matrix system of the form

(47) $\left( A - \frac{1}{\lambda_l}\, B \right) v_l = 0 \; ; \quad l = 1, \ldots, L$

- $A$ and $B$ are non-symmetric matrices resulting from the discretization of the transport or neutron diffusion equation; $\lambda_l$ is the $l$-th eigenvalue and $v_l$ is the corresponding eigenvector. The right-hand side $0$ is the zero vector (whose components are all zero).
- The non-symmetry of matrices $A$ and $B$ is due to the discretization process, which is generally performed for $G > 1$ energy groups.
- The order $L$ of these matrices is equal to the product of the number of energy groups and the number of flux unknowns per group.

Solution of an eigenvalue problem 2

Equation (47) has $L$ eigensolutions or harmonics, each of them corresponding to a root $\lambda_l$ of the characteristic equation, written as

(48) $\det\left( A - \frac{1}{\lambda_l}\, B \right) = 0 \; ; \quad l = 1, \ldots, L.$

- If the reactor geometry has symmetries, some eigenvalues may be degenerate (i.e., $\lambda_k = \lambda_l$ with $k \ne l$).
- We are looking for the fundamental solution, corresponding to the first harmonic of Eq. (47). $K_{\mathrm{eff}} = \lambda_1$ is the effective multiplication factor and $\Phi = v_1$ is the discretized particle flux.
- Only the fundamental solution corresponds to a particle flux that is positive over the whole domain. A fundamental solution is never degenerate.

The fundamental problem is therefore written

(49) $\left( A - \frac{1}{K_{\mathrm{eff}}}\, B \right) \Phi = 0.$

The inverse power method 1

The basic algorithm for finding the fundamental solution of Eq. (49) is the inverse power method, an iterative strategy written as

(50) $\Phi^{(0)}$ given; $\quad \Phi^{(k+1)} = \frac{1}{K_{\mathrm{eff}}^{(k)}}\, A^{-1} B\, \Phi^{(k)}$ if $k \ge 0$

where $\Phi^{(0)}$ is a non-zero initial estimate of the particle flux solution and $K_{\mathrm{eff}}^{(k)}$ is an estimate of the effective multiplication factor at iteration $k$.

Different definitions of $K_{\mathrm{eff}}^{(k)}$ can be used with success. We take the inner product of each term in Eq. (49) with vector $B\Phi$. We obtain

(51) $\langle A\Phi, B\Phi \rangle - \frac{1}{K_{\mathrm{eff}}} \langle B\Phi, B\Phi \rangle = 0$

where we used the notation $x^{\top} y \equiv \langle x, y \rangle$. Equation (51) leads to the following definition of $K_{\mathrm{eff}}^{(k)}$:

(52) $K_{\mathrm{eff}}^{(k)} = \frac{\langle B\Phi^{(k)}, B\Phi^{(k)} \rangle}{\langle A\Phi^{(k)}, B\Phi^{(k)} \rangle}.$
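For illustration, Eq. (52) translates into two lines of Matlab. This is a minimal sketch; the variable names a, b and phi are ours, mirroring the script inputs described later in this section.

    % Hedged sketch of the Keff estimate of Eq. (52).
    bphi = b*phi ;                          % B*Phi^(k)
    keff = (bphi'*bphi)/((a*phi)'*bphi) ;   % <B Phi, B Phi> / <A Phi, B Phi>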

The inverse power method 2

The product $A^{-1}B$ is the iterative matrix of the inverse power method. Its spectrum determines the convergence characteristics of iteration (50). The harmonics that are elements of this spectrum are the solutions of

(53) $\lambda_l\, v_l - A^{-1} B\, v_l = 0 \; ; \quad l = 1, \ldots, L.$

The required fundamental solution corresponds to $l = 1$, so that $K_{\mathrm{eff}} = \lambda_1$ and $\Phi = v_1$. The spectrum is ordered as

(54) $K_{\mathrm{eff}} = \lambda_1 > \lambda_2 \ge \lambda_3 \ge \ldots \ge \lambda_L.$

The inverse power method generates an asymptotic series of estimates denoted $\Phi^{(k)}$. The eigenvectors $\{ v_l \; ; \; l = 1, \ldots, L \}$ of matrix $A^{-1}B$ are linearly independent, so that the estimate $\Phi^{(k)}$ can be expressed as a linear combination of the form

(55) $\Phi^{(k)} = \sum_{l=1}^{L} c_l^{(k)}\, v_l.$

The inverse power method 3

Now consider a quasi-converged estimate of the fundamental solution, so that $K_{\mathrm{eff}}^{(k)} \simeq \lambda_1$. The next estimate $(k+1)$ can be obtained by substituting Eq. (55) into the iteration (50):

(56) $\Phi^{(k+1)} = \frac{1}{\lambda_1}\, A^{-1} B \sum_{l=1}^{L} c_l^{(k)}\, v_l.$

Substituting Eq. (53) into Eq. (56), we find

(57) $\Phi^{(k+1)} = \frac{1}{\lambda_1} \sum_{l=1}^{L} c_l^{(k)}\, \lambda_l\, v_l.$

After $m$ iterations, we obtain

(58) $\Phi^{(k+m)} = \sum_{l=1}^{L} c_l^{(k)} \left( \frac{\lambda_l}{\lambda_1} \right)^{m} v_l$

with

(59) $\lim_{m \to \infty} \left( \frac{\lambda_l}{\lambda_1} \right)^{m} = 0 \quad \text{if } l \ge 2$

since $\lambda_1 > \lambda_l \;\; \forall\, l > 1$, as suggested by Eq. (54).

The inverse power method 4

We may conclude that the inverse power method converges to the fundamental solution $v_1$. The convergence characteristics of the asymptotic series are the convergence order $p$ and the asymptotic convergence constant $C$, defined from the relation

(60) $C = \lim_{k \to \infty} \frac{\left\| \Phi^{(k+2)} - \Phi^{(k+1)} \right\|}{\left\| \Phi^{(k+1)} - \Phi^{(k)} \right\|^{p}}.$

Comparing Eqs. (55) and (57), we write

(61) $\Phi^{(k+1)} - \Phi^{(k)} = \sum_{l=1}^{L} c_l^{(k)} \left( \frac{\lambda_l}{\lambda_1} - 1 \right) v_l = c_2^{(k)} \left[ \left( \frac{\lambda_2}{\lambda_1} - 1 \right) v_2 + \sum_{l=3}^{L} \frac{c_l^{(k)}}{c_2^{(k)}} \left( \frac{\lambda_l}{\lambda_1} - 1 \right) v_l \right].$

Performing one more iteration, we find

(62) $\Phi^{(k+2)} - \Phi^{(k+1)} = \sum_{l=1}^{L} c_l^{(k)}\, \frac{\lambda_l}{\lambda_1} \left( \frac{\lambda_l}{\lambda_1} - 1 \right) v_l = c_2^{(k)} \left[ \frac{\lambda_2}{\lambda_1} \left( \frac{\lambda_2}{\lambda_1} - 1 \right) v_2 + \sum_{l=3}^{L} \frac{c_l^{(k)}}{c_2^{(k)}}\, \frac{\lambda_l}{\lambda_1} \left( \frac{\lambda_l}{\lambda_1} - 1 \right) v_l \right].$

The inverse power method 5

The norm of a vector $x$ is symbolized as $\| x \|$. Any norm of a vector satisfies the three following conditions:

1. $\| x \| > 0$ if and only if $x \ne 0$
2. $\| k\, x \| = |k|\, \| x \|$ for any complex number $k$
3. $\| x + y \| \le \| x \| + \| y \|$

We take the norm of Eqs. (61) and (62) in the limit where $k$ tends to infinity. We write

(63) $\lim_{k \to \infty} \left\| \Phi^{(k+1)} - \Phi^{(k)} \right\| = \left\| c_2^{(k)} \left( \frac{\lambda_2}{\lambda_1} - 1 \right) v_2 \right\| = \left| c_2^{(k)} \right| \left| \frac{\lambda_2}{\lambda_1} - 1 \right| \left\| v_2 \right\|$

and

(64) $\lim_{k \to \infty} \left\| \Phi^{(k+2)} - \Phi^{(k+1)} \right\| = \left\| c_2^{(k)}\, \frac{\lambda_2}{\lambda_1} \left( \frac{\lambda_2}{\lambda_1} - 1 \right) v_2 \right\| = \left| c_2^{(k)} \right| \frac{\lambda_2}{\lambda_1} \left| \frac{\lambda_2}{\lambda_1} - 1 \right| \left\| v_2 \right\|$

where we used the following property:

(65) $\lim_{k \to \infty} \frac{c_l^{(k)}}{c_2^{(k)}} = 0 \quad \text{if } l \ge 3.$

The inverse power method 6

In conclusion, the inverse power method converges linearly (i.e., $p = 1$) toward the fundamental solution $v_1$, with an asymptotic convergence constant $C$ equal to

(66) $C = \lim_{k \to \infty} \frac{\left\| \Phi^{(k+2)} - \Phi^{(k+1)} \right\|}{\left\| \Phi^{(k+1)} - \Phi^{(k)} \right\|} = \frac{\lambda_2}{\lambda_1}.$

- For a linearly converging asymptotic series, the asymptotic convergence constant is the convergence rate of this series and is equal to the dominance ratio $\lambda_2 / \lambda_1$ of the iterative matrix $A^{-1}B$.
- Any iterative process with $p = 1$ converges if its asymptotic convergence constant (or convergence rate) is smaller than one.

The inverse power method 7

The practical implementation of the inverse power method takes advantage of any particular characteristics of matrices $A$ and $B$ in order to reduce computational costs. If these matrices are full, one may compute the iterative matrix $A^{-1}B$ once at the beginning of the algorithm. The Matlab script aleig uses this idea. If matrix $A$ has a profiled shape, a factorization of $A$ may be a better alternative.

The Matlab script [iter,eval,evect]=aleig(a,b,eps) finds the fundamental eigenvalue and corresponding eigenvector of Eq. (49) using the inverse power method. The script uses three input parameters:

- a and b are the input matrices
- eps is the convergence parameter of the inverse power method.

The script returns a list containing:

- the number of iterations
- the fundamental eigenvalue $1/K_{\mathrm{eff}}$
- the fundamental eigenvector $\Phi$.

The inverse power method 8

Matrix A is first inverted using function inv(a) before being multiplied by matrix b. This approach is similar to an implementation of this algorithm where matrix A is inverted in place. We could have used a\b in the Matlab script as an alternative approach.

The estimate of $K_{\mathrm{eff}}$ at iteration $k$ is obtained using a relation similar to Eq. (52), written

(67) $\lambda^{(k)} = \frac{1}{K_{\mathrm{eff}}^{(k)}} = \frac{\left\langle \Phi^{(k)},\, A^{-1} B\, \Phi^{(k)} \right\rangle}{\left\langle A^{-1} B\, \Phi^{(k)},\, A^{-1} B\, \Phi^{(k)} \right\rangle}.$

The iterative process is assumed to be converged when

(68) $\left| \lambda^{(k)} - \lambda^{(k-1)} \right| \le \epsilon\, \lambda^{(k)}$

and

(69) $\max_{1 \le i \le L} \left| \phi_i^{(k)} - \phi_i^{(k-1)} \right| \le \epsilon \max_{1 \le i \le L} \left| \phi_i^{(k)} \right|$

where $\epsilon$ is the stopping criterion of the iterative process.

The first condition (68) alone is not sufficient to stop the iterations, as the estimate of $K_{\mathrm{eff}}$ may converge to within the computer epsilon before the eigenvector has converged.
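The slides do not transcribe the body of aleig. The following Matlab sketch is a plausible reconstruction from Eqs. (50) and (67)-(69), not the original script; the fixed initial flux and the unbounded while loop are our simplifications.

    function [iter,eval,evect]=aleig(a,b,eps)
    % Hedged reconstruction of the aleig script described above.
    % Inverse power method for (A - (1/Keff)*B)*Phi = 0, full matrices.
    c=inv(a)*b ;                 % iterative matrix A^{-1}*B, computed once
    L=size(a,1) ;
    phi=ones(L,1) ;              % non-zero initial flux estimate
    lambda=1.0 ; iter=0 ;
    while true
      iter=iter+1 ;
      y=c*phi ;                  % A^{-1}*B*Phi^(k)
      lambda0=lambda ;
      lambda=(phi'*y)/(y'*y) ;   % Eq. (67): estimate of 1/Keff
      phi0=phi ;
      phi=lambda*y ;             % Eq. (50)
      if abs(lambda-lambda0)<=eps*abs(lambda) && ...   % Eq. (68)
         max(abs(phi-phi0))<=eps*max(abs(phi))         % Eq. (69)
        break
      end
    end
    eval=lambda ; evect=phi ;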

The inverse power method 9

In spite of its simplicity, the script aleig is generally inefficient in real-life situations. Its most important weaknesses are:

- Matrices $A$ and $B$ are stored as full matrices in spite of the fact that they may contain many zero components. This storage causes an excessive usage of memory and many unnecessary multiplications by zero components.
- Inversion of matrix $A$ generates a large number of non-zero components not initially present in this matrix. This is the fill-in phenomenon.
- Convergence of the inverse power method may become too slow to succeed in cases where the dominance ratio is close to one. This phenomenon generally occurs with the modeling of a large power reactor, such as the pressurized water reactors (PWRs) used for electricity production. In this case, it is possible to meet the convergence criteria of Eqs. (68) and (69) at the level of the computer epsilon without having effectively converged. This is the false convergence phenomenon.

The inverse power method 10

Corrective techniques exist to alleviate the effects of each of these weaknesses.

- The first corrective technique consists in replacing the inversion of matrix $A$ by its factorization. Two factorization techniques are available: the Cholesky factorization, used in cases where matrix $A$ is symmetric, and the Crout factorization, used when matrix $A$ is non-symmetric. Such factorizations are only effective insofar as matrix $A$ is first partitioned into group-by-group blocks, each block corresponding to specific primary and secondary energy groups.
- The discretization of large 2D and 3D domains generates matrices with many zero components inside their external profile. Consequently, fill-in will occur during factorization, making these approaches infeasible. It is possible to introduce the preconditioned power method and to completely avoid the fill-in phenomenon.
- Convergence acceleration techniques are available in cases where the dominance ratio is close to one.

The preconditioned power method 1

The preconditioned power method is a variant of the power method that permits us to avoid inversion or factorization of matrix $A$. We start from the basic recursion of the inverse power method:

(70) $\Phi^{(k+1)} = \frac{1}{K_{\mathrm{eff}}^{(k)}}\, A^{-1} B\, \Phi^{(k)}.$

The calculation of $x^{(k+1)} = A^{-1} B\, \Phi^{(k)}$ is equivalent to the solution of a linear system written as $A\, x^{(k+1)} = b^{(k)}$, where $b^{(k)} = B\, \Phi^{(k)}$. Such a solution can be performed by a direct approach (either Gaussian elimination or a factorization approach), at the cost of component fill-in that may become excessive in situations related to the discretization over 2D or 3D domains. This suggests an alternative approach based on an iterative method. Two iteration levels are required:

- The outer or power iterative level is controlled by index $k$ and involves the computation of successive estimates of the effective multiplication factor.
- The inner level is related to the solution of a linear system and is controlled by index $j$.

The preconditioned power method 2

The calculation of $\Phi^{(k+1)}$ involves the iterative solution of a linear system $A\, x^{(k+1)} = b^{(k)}$ using an initial estimate written $x^{(k+1,0)}$ and defined as

(71) $x^{(k+1,0)} = K_{\mathrm{eff}}^{(k)}\, \Phi^{(k)}.$

After performing $J$ inner iterations, Eq. (70) is replaced by

(72) $\Phi^{(k+1)} = \frac{1}{K_{\mathrm{eff}}^{(k)}}\, x^{(k+1,J)}$

where the estimate $x^{(k+1,J)}$ is obtained in terms of $x^{(k+1,0)}$ as

(73) $x^{(k+1,J)} = (I - R^{J})\, A^{-1} b^{(k)} + R^{J} x^{(k+1,0)} = (I - R^{J})\, A^{-1} B\, \Phi^{(k)} + R^{J} K_{\mathrm{eff}}^{(k)}\, \Phi^{(k)}$

where the residual matrix is defined as $R = I - MA$, with $M$ representing a preconditioning matrix, close to $A^{-1}$.

The preconditioned power method 3

Substituting Eq. (73) into Eq. (72), we obtain

(74) $\Phi^{(k+1)} = \frac{1}{K_{\mathrm{eff}}^{(k)}} \left[ (I - R^{J})\, A^{-1} B\, \Phi^{(k)} + R^{J} K_{\mathrm{eff}}^{(k)}\, \Phi^{(k)} \right] = \Phi^{(k)} - M^{(J)} \left[ A\, \Phi^{(k)} - \frac{1}{K_{\mathrm{eff}}^{(k)}}\, B\, \Phi^{(k)} \right]$

where $M^{(J)}$ is the preconditioning matrix representing $J$ inner iterations. Its generating definition is

(75) $M^{(J)} = (I - R^{J})\, A^{-1}$

and its first three values are

(76) $M^{(1)} = M$, $\quad M^{(2)} = (2I - MA)\, M$, $\quad M^{(3)} = \left[ 3I - (3I - MA)\, MA \right] M.$

The residual matrices associated with these three preconditioning matrices are $R$, $R^{2}$ and $R^{3}$, respectively. Each inner iteration reduces the error by one order of magnitude.
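A useful way to read Eq. (75): applying $M^{(J)}$ to a right-hand side amounts to $J$ sweeps of the preconditioned Richardson iteration $x \leftarrow x + M(b - Ax)$ started from $x = 0$. A minimal Matlab sketch, under the assumption that $M$ is the Jacobi preconditioner (one choice among those listed later); the helper name applyMJ is ours:

    function x=applyMJ(a,b,J)
    % Sketch: action of M^(J) of Eq. (75) realized as J inner iterations
    % x <- x + M*(b - A*x) from x = 0, with M = inv(diag(A)) (Jacobi).
    d=diag(a) ;
    x=zeros(size(b)) ;
    for j=1:J
      x=x+(b-a*x)./d ;   % one inner iteration; error is multiplied by R = I - M*A
    end

One can check against Eq. (76) that two sweeps indeed produce $M^{(2)} b = (2I - MA)\, M b$.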

The preconditioned power method 4

In the limit where $R = O$, the preconditioning matrix becomes identical to $A^{-1}$ and the preconditioned power method reduces to the inverse power method. In the general case where the preconditioning matrix is different from $A^{-1}$, the preconditioned power method converges linearly (i.e., $p = 1$) at a rate slower than the convergence rate of the inverse power method. We can show that the asymptotic convergence constant in this case satisfies

(77) $C = \lim_{k \to \infty} \frac{\left\| \Phi^{(k+2)} - \Phi^{(k+1)} \right\|}{\left\| \Phi^{(k+1)} - \Phi^{(k)} \right\|} \le \frac{\lambda_2}{\lambda_1} + \left( 1 - \frac{\lambda_2}{\lambda_1} \right) \left\| R^{J} \right\|$

for an iterative process with $J$ inner iterations per power iteration. The preconditioned power method converges linearly, so that $C < 1$ is a necessary condition for convergence. According to Eq. (77), this condition is met if the norm of the residual matrix is smaller than one. In other situations, divergence may occur.

The preconditioned power method 5

The practical definition of the recursion for the preconditioned power method is summarized as

(78) $\Phi^{(0)}$ given; $\quad \Phi^{(k+1)} = \Phi^{(k)} - M^{(J)} \left[ A\, \Phi^{(k)} - \frac{1}{K_{\mathrm{eff}}^{(k)}}\, B\, \Phi^{(k)} \right]$ if $k \ge 0$.

Two choices are left to the user of this method (a complete iteration is sketched after this list):

- One must select the type of preconditioning matrix. Available choices are the Jacobi, Gauss-Seidel, SSOR or ADI preconditioning matrices.
- One must select the number of inner iterations $J$ to be performed in each power iteration. Values $J = 1$ and $J = 2$ are the most usual ones. We suggest selecting the smallest value of $J$ that guarantees convergence of the power method in fewer than 75 iterations. In the code Trivac, the value of $J$ is automatically increased by one when convergence difficulties of the power method are observed.
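Putting Eqs. (52) and (78) together, a complete preconditioned power iteration could look as follows in Matlab. This is a hedged sketch with a Jacobi preconditioner; the function name alpre, the iteration cap and the reuse of the stopping tests of Eqs. (68)-(69) are our own choices, not course material.

    function [iter,keff,phi]=alpre(a,b,J,eps)
    % Hedged sketch of the preconditioned power method of Eq. (78).
    L=size(a,1) ; d=diag(a) ;
    phi=ones(L,1) ; keff=1.0 ;
    for iter=1:500
      bphi=b*phi ;
      keff0=keff ;
      keff=(bphi'*bphi)/((a*phi)'*bphi) ;   % Eq. (52)
      r=a*phi-bphi/keff ;                   % residual A*Phi - (1/Keff)*B*Phi
      x=zeros(L,1) ;
      for j=1:J                             % J inner iterations: x -> M^(J)*r
        x=x+(r-a*x)./d ;
      end
      phi0=phi ; phi=phi-x ;                % Eq. (78)
      if abs(keff-keff0)<=eps*abs(keff) && max(abs(phi-phi0))<=eps*max(abs(phi))
        break
      end
    end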

The multigroup partitioning 1

The matrix $A$ resulting from a consistent discretization of the transport or neutron diffusion equation is not symmetric, except for one-speed problems. $A$ generally exhibits a block structure, similar to the five-group example depicted below, where the diagonal blocks are symmetric. The complete matrix system is an eigenvalue problem of the form

(79) $\left( A - \frac{1}{K_{\mathrm{eff}}}\, B \right) \Phi = 0$

and can be written in a block structure, each block representing specific values of the primary and secondary energy group indices:

$\begin{pmatrix} A_{11} & O & O & O & O \\ A_{21} & A_{22} & O & O & O \\ O & A_{32} & A_{33} & O & O \\ O & A_{42} & A_{43} & A_{44} & A_{45} \\ O & O & O & A_{54} & A_{55} \end{pmatrix} \begin{pmatrix} \Phi_1 \\ \Phi_2 \\ \Phi_3 \\ \Phi_4 \\ \Phi_5 \end{pmatrix} - \frac{1}{K_{\mathrm{eff}}} \begin{pmatrix} B_{11} & B_{12} & B_{13} & B_{14} & B_{15} \\ B_{21} & B_{22} & B_{23} & B_{24} & B_{25} \\ O & O & O & O & O \\ O & O & O & O & O \\ O & O & O & O & O \end{pmatrix} \begin{pmatrix} \Phi_1 \\ \Phi_2 \\ \Phi_3 \\ \Phi_4 \\ \Phi_5 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}$

The multigroup partitioning 2

Assuming that the diagonal blocks are symmetric, they can be factorized using the Cholesky method ($L D L^{\top}$). Equation (79) can be rewritten in its multigroup form as

(80) $A_{g,g}\, \Phi_g = -\sum_{\substack{h=1 \\ h \ne g}}^{G} A_{g,h}\, \Phi_h + \frac{1}{K_{\mathrm{eff}}} \sum_{h=1}^{G} B_{g,h}\, \Phi_h$

where $G$ is the number of energy groups. In the particular case where the up-scattering blocks vanish, i.e., if $A_{g,h} = O$ for all group indices $h > g$, the eigenvalue system (80) can be evaluated in a recursive way, using

(81) $\Phi_1 = \frac{1}{K_{\mathrm{eff}}}\, A_{1,1}^{-1} \sum_{h=1}^{G} B_{1,h}\, \Phi_h \quad \text{and} \quad \Phi_g = A_{g,g}^{-1} \left( -\sum_{h=1}^{g-1} A_{g,h}\, \Phi_h + \frac{1}{K_{\mathrm{eff}}} \sum_{h=1}^{G} B_{g,h}\, \Phi_h \right) ; \quad g = 2, \ldots, G.$
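Under this no-up-scattering assumption, one power iteration with the recursion (81) amounts to a forward sweep over the groups. A minimal Matlab sketch of the sweep; the cell arrays A{g,h} and B{g,h} holding the blocks are an assumed data layout, and freezing the fission source at the previous outer iterate is one common convention, not necessarily the course implementation.

    % Sketch of the downward sweep of Eq. (81) for one power iteration.
    phi0=phi ;                       % freeze the fission source
    for g=1:G
      s=zeros(size(phi{g})) ;
      for h=1:G                      % fission source of group g
        if ~isempty(B{g,h}), s=s+B{g,h}*phi0{h}/keff ; end
      end
      for h=1:g-1                    % down-scattering from already-updated groups
        if ~isempty(A{g,h}), s=s-A{g,h}*phi{h} ; end
      end
      phi{g}=A{g,g}\s ;              % solve the symmetric diagonal-block system
    end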

The multigroup partitioning 3

This approach is equivalent to using the following definition for the inverse of $A$:

(82) $A^{-1} = \begin{pmatrix} A_{11}^{-1} & O & O & \cdots & O \\ -A_{22}^{-1} A_{21} A_{11}^{-1} & A_{22}^{-1} & O & \cdots & O \\ A_{33}^{-1} \left( -A_{31} + A_{32} A_{22}^{-1} A_{21} \right) A_{11}^{-1} & -A_{33}^{-1} A_{32} A_{22}^{-1} & A_{33}^{-1} & \cdots & O \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \cdots & \cdots & \cdots & \cdots & A_{G,G}^{-1} \end{pmatrix}$

If a preconditioned power method is used, we introduce diagonal preconditioning blocks written $\{ M_{g,g} \; ; \; g = 1, \ldots, G \}$ so as to approximate the inverse blocks $\{ A_{g,g}^{-1} \; ; \; g = 1, \ldots, G \}$. A global preconditioning matrix consistent with the definition of $A^{-1}$ of Eq. (82) is therefore written

(83) $M^{(J)} = \begin{pmatrix} M_{11}^{(J)} & O & O & \cdots & O \\ -M_{22}^{(J)} A_{21} M_{11}^{(J)} & M_{22}^{(J)} & O & \cdots & O \\ M_{33}^{(J)} \left( -A_{31} + A_{32} M_{22}^{(J)} A_{21} \right) M_{11}^{(J)} & -M_{33}^{(J)} A_{32} M_{22}^{(J)} & M_{33}^{(J)} & \cdots & O \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \cdots & \cdots & \cdots & \cdots & M_{G,G}^{(J)} \end{pmatrix}$

where $J$ is the number of inner iterations per outer iteration.
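In practice one never forms the off-diagonal products of Eq. (83) explicitly: applying $M^{(J)}$ to a partitioned residual is a block forward substitution. A hedged Matlab sketch, reusing the hypothetical applyMJ helper from the earlier sketch for the diagonal-block action:

    % Sketch: apply the block lower-triangular preconditioner of Eq. (83)
    % to a partitioned residual r{1..G} by forward substitution.
    for g=1:G
      t=r{g} ;
      for h=1:g-1
        if ~isempty(A{g,h}), t=t-A{g,h}*x{h} ; end
      end
      x{g}=applyMJ(A{g,g},t,J) ;   % x_g = M^(J)_gg * (r_g - sum_h A_gh * x_h)
    end

Expanding the first two rows reproduces the entries of Eq. (83), e.g. $x_2 = -M_{22}^{(J)} A_{21} M_{11}^{(J)} r_1 + M_{22}^{(J)} r_2$.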

The multigroup partitioning 4

In cases where the up-scattering block contributions are small, it is possible to use Eq. (83) as preconditioning matrix, even though this matrix neglects the up-scattering phenomena. In this case, the preconditioned power method is written

(84) $\Phi_g^{(0)}$ given, $g = 1, \ldots, G$; $\quad \Phi_g^{(k+1)} = \Phi_g^{(k)} - M_{g,g}^{(J)} \left[ A_{g,g}\, \Phi_g^{(k)} + \sum_{h=1}^{g-1} A_{g,h}\, \Phi_h^{(k+1)} + \sum_{h=g+1}^{G} A_{g,h}\, \Phi_h^{(k)} - \frac{1}{K_{\mathrm{eff}}^{(k)}} \sum_{h=1}^{G} B_{g,h}\, \Phi_h^{(k)} \right]$ if $k \ge 0$ and $g = 1, \ldots, G$.

A similar approach can be set up to obtain the adjoint solution. In this case, the group iterations start with the matrix sub-system of group $G$ and proceed downward toward group 1.

Convergence acceleration 1

The convergence of the inverse or preconditioned power method becomes very slow in cases where the asymptotic convergence constant $C$ is close to one. Acceleration techniques are available to reduce the asymptotic convergence constant, provided that this constant is initially smaller than one. Three well-known approaches exist.

In the Wielandt method, Eq. (49) is replaced by

(85) $\left[ (A - \lambda_e B) - \left( \frac{1}{K_{\mathrm{eff}}} - \lambda_e \right) B \right] \Phi = 0$

where $\lambda_e$ is an approximation of the requested eigenvalue. The iterative matrix is set equal to $(A - \lambda_e B)^{-1} B$ and the eigenvalue of the iterative process is $\frac{1}{K_{\mathrm{eff}}} - \lambda_e$.

- This method is efficient at reducing the dominance ratio of the iterative matrix.
- Numerical difficulties appear when $\lambda_e$ is too close to $1/K_{\mathrm{eff}}$, as matrix $(A - \lambda_e B)$ becomes quasi-singular in this case.
- A linear system using this matrix on the left-hand side cannot be solved with the multigroup partitioning method. Its use is therefore limited to few-energy-group cases.
- It is the favorite iterative method for solving the matrix system originating from the analytic nodal method.
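As a hedged illustration of the Wielandt shift, the inverse power recursion sketched earlier can be modified as follows; the function name and the fixed number of iterations are our own assumptions.

    function [keff,phi]=wielandt(a,b,lam_e,phi,niter)
    % Sketch of Wielandt-shifted inverse power iterations, Eq. (85).
    % lam_e approximates 1/Keff but must stay away from it, so that
    % (A - lam_e*B) remains well conditioned.
    c=(a-lam_e*b)\b ;            % shifted iterative matrix (A - lam_e*B)^(-1)*B
    for iter=1:niter
      y=c*phi ;
      mu=(phi'*y)/(y'*y) ;       % eigenvalue 1/Keff - lam_e of the shifted process
      phi=mu*y ;
    end
    keff=1/(mu+lam_e) ;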

Convergence acceleration 2

The Chebyshev acceleration method is based on a rewriting of the inverse power method of the form

(86) $\Phi^{(0)}$ given; $\quad \Phi^{(1)} = \Phi^{(0)} + \alpha^{(0)} g^{(0)}$; $\quad \Phi^{(k+1)} = \Phi^{(k)} + \alpha^{(k)} \left\{ g^{(k)} + \beta^{(k)} \left[ \Phi^{(k)} - \Phi^{(k-1)} \right] \right\}$ if $k \ge 1$

where

(87) $g^{(k)} = -\Phi^{(k)} + \frac{1}{K_{\mathrm{eff}}^{(k)}}\, A^{-1} B\, \Phi^{(k)}.$

- Constants $\alpha^{(k)}$ and $\beta^{(k)}$ are the acceleration parameters, computed in such a way as to reduce the asymptotic convergence constant.
- Successive power iteration cycles, of about six iterations each, are performed using an optimal sequence of acceleration parameters based on the knowledge of the dominance ratio of the iterative matrix.
- A free iteration can be performed by setting $\alpha^{(k)} = 1$ and $\beta^{(k)} = 0$.
- This method is very efficient provided the dominance ratio is known accurately. An over-estimation of the dominance ratio destabilizes the iterative process.

Convergence acceleration 3

The variational acceleration method consists in computing the acceleration parameters in such a way as to minimize a norm of the residual of the numerical solution at the next iteration. This approach will now be presented in more detail.

The variational acceleration method is somewhat similar to the Chebyshev acceleration method. It is based on a rewriting of the preconditioned power method of the form

(88) $\Phi^{(0)}$ given; $\quad \Phi^{(1)} = \Phi^{(0)} + \alpha^{(0)} g^{(0)}$; $\quad \Phi^{(k+1)} = \Phi^{(k)} + \alpha^{(k)} \left\{ g^{(k)} + \beta^{(k)} \left[ \Phi^{(k)} - \Phi^{(k-1)} \right] \right\}$ if $k \ge 1$

where

(89) $g^{(k)} = -M^{(J)} \left[ A\, \Phi^{(k)} - \frac{1}{K_{\mathrm{eff}}^{(k)}}\, B\, \Phi^{(k)} \right].$

Constants $\alpha^{(k)}$ and $\beta^{(k)}$ are the acceleration parameters, computed in such a way as to minimize a norm of the residual of the numerical solution at the next iteration.

The variational acceleration method 1

Knowledge of the dominance ratio, or of any spectral property of the iterative matrix, is not required. We will limit ourselves to a variant known as the symmetric variational acceleration technique (SVAT), permitting the acceleration of convergence in cases where matrices $A$ and $B$ are non-symmetric, without requiring an evaluation of the adjoint solution.

The residual $R^{(k)}$ at iteration $k$ is defined as

(90) $R^{(k)} = A\, \Phi^{(k)} - \frac{1}{K_{\mathrm{eff}}^{(k)}}\, B\, \Phi^{(k)}$

where the estimate of the effective multiplication factor $K_{\mathrm{eff}}^{(k)}$ is given by Eq. (52). We write

(91) $R^{(k)} = A\, \Phi^{(k)} - \frac{\left\langle A\Phi^{(k)}, B\Phi^{(k)} \right\rangle}{\left\langle B\Phi^{(k)}, B\Phi^{(k)} \right\rangle}\, B\, \Phi^{(k)}.$

We introduce the $L_2$ norm of this residual, defined from

(92) $\| x \|_2^2 = x^{\top} x.$

The variational acceleration method 2

We select values of the acceleration parameters $\alpha^{(k)}$ and $\beta^{(k)}$ in such a way as to minimize the $L_2$ norm of the residual at iteration $k + 1$. The two parameters are the solution of non-linear relations obtained from

(93) $\frac{\partial}{\partial \alpha^{(k)}} \left\| R^{(k+1)} \right\|_2 = 0$

and

(94) $\frac{\partial}{\partial \beta^{(k)}} \left\| R^{(k+1)} \right\|_2 = 0.$

Equations (93) and (94) form a non-linear system of two equations with $\alpha^{(k)}$ and $\beta^{(k)}$ as unknowns. The solution is obtained by solving Eqs. (93) and (94) using a Newton-Raphson iterative method. It is not possible to prove that a solution exists and that it is unique. However, the practical use of these relations for solving a large variety of problems has always led to the determination of consistent acceleration parameters with $\alpha^{(k)} > 1$.
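The slides solve Eqs. (93) and (94) by Newton-Raphson. As a hedged stand-in, the same minimization can be sketched with Matlab's fminsearch, which avoids writing out the derivatives; this is our simplification, not the Trivac implementation, and the helper normR is hypothetical.

    % Sketch: pick alpha, beta minimizing ||R^(k+1)||_2, cf. Eqs. (93)-(94).
    % g is the preconditioned residual of Eq. (89); dphi = Phi^(k) - Phi^(k-1).
    resnorm=@(p) normR(a,b,phi+p(1)*(g+p(2)*dphi)) ;
    p=fminsearch(resnorm,[1;0]) ;    % start from the free iteration alpha=1, beta=0
    alpha=p(1) ; beta=p(2) ;
    phi=phi+alpha*(g+beta*dphi) ;    % accelerated update, Eq. (88)

where normR evaluates the residual norm of Eq. (91):

    function r=normR(a,b,phi)
    % L2 norm of the residual of Eq. (91) for a given flux estimate.
    bphi=b*phi ; aphi=a*phi ;
    r=norm(aphi-((aphi'*bphi)/(bphi'*bphi))*bphi) ;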

The variational acceleration method 3

- Two or three iterations are generally sufficient to converge, starting from the initial estimates $\alpha^{(k)} = 1$ and $\beta^{(k)} = 0$.
- It was observed that the variational acceleration strategy is stable, even when all the power iterations are accelerated. A variational acceleration strategy using a single acceleration parameter $\alpha^{(k)}$ is unstable in this case. Using two acceleration parameters appears to be a minimum.
- If all the power iterations are accelerated, the SVAT method becomes very similar to the conjugate gradient method applied to the eigenvalue problem. Using a single acceleration parameter is similar to a steepest descent approach.
- There is no need to accelerate every power iteration. In its default behavior, the computer code Trivac uses cycles of six iterations: three free, followed by three accelerated. This strategy represents a practical choice for reducing the computer resources while maintaining a good convergence stability.

Fixed source eigenvalue problems 1

An eigenvalue matrix system of the form of Eq. (49) is written

(95) $(A - \mu_1 B)\, \Phi = 0$

where $A$ and $B$ are non-symmetric matrices, $\mu_1 = 1/K_{\mathrm{eff}}$ is the fundamental eigenvalue, $\Phi$ is the discretized particle flux and $0$ is the zero vector (whose components are all zero). The adjoint problem is defined after transposition of the matrices as

(96) $\left( A^{\top} - \mu_1 B^{\top} \right) \Phi^{*} = 0$

where $\Phi^{*}$ is the adjoint flux, and the corresponding direct fixed source eigenvalue equation is

(97) $(A - \mu_1 B)\, \Gamma = S$

where the fixed source satisfies $S^{\top} \Phi^{*} = 0$. If $\Gamma$ is a solution of Eq. (97), then any vector of the form $\tilde{\Gamma} = \Gamma + \alpha \Phi$ is also a solution, for any value of the constant $\alpha$. A particular solution can be selected using the following normalization condition:

(98) $\Gamma^{\top} B^{\top} \Phi^{*} = 0.$

The inverse power method 1

The basic algorithm for finding the fundamental solution of Eq. (97) is the inverse power method, an iterative strategy written as

(99) $\Gamma^{(0)}$ given; $\quad \Gamma^{(k+1)} = A^{-1} \left[ S + \mu_1\, B\, \Gamma^{(k)} \right]$ if $k \ge 0$

where $\Gamma^{(0)}$ is an initial estimate of the fixed source eigenvalue solution. However, the above iterative strategy may not converge without imposing a normalization condition similar to Eq. (98). A decontamination procedure can be introduced in Eq. (99) as

(100) $\Gamma^{(0)}$ given; $\quad \Gamma^{(k+1)} = A^{-1} S + \mu_1\, A^{-1} \left( I - \frac{B\Phi\, {\Phi^{*}}^{\top}}{{\Phi^{*}}^{\top} B\, \Phi} \right) B\, \Gamma^{(k)}$ if $k \ge 0$.

- The effectiveness of this decontamination procedure and the convergence of the inverse power method can be proven using a spectral approach.
- The decontamination is only required for the first iteration. However, it is generally applied to all iterations in order to get rid of numerical instabilities produced by round-off errors.

The inverse power method 2

The Matlab script [iter,eval,delta]=alfse(a,b,evect,adect,sour,eps) finds the solution of the fixed source eigenvalue problem defined in Eq. (97) using the inverse power method. The script uses six input parameters:

- a and b are the input matrices
- arrays evect and adect are the direct and adjoint solutions of the corresponding eigenvalue problem
- array sour is the fixed source; it must be orthogonal to adect
- eps is the convergence parameter of the inverse power method.

The script returns a list containing:

- the number of iterations
- the fundamental eigenvalue $\mu_1 = 1/K_{\mathrm{eff}}$ of the corresponding eigenvalue problem
- the solution $\Gamma$ of the fixed source eigenvalue problem.
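As with aleig, the body of alfse is not transcribed in these slides. The following Matlab sketch is our hedged reconstruction from Eqs. (99), (100) and the parameter list above, not the original script.

    function [iter,eval,delta]=alfse(a,b,evect,adect,sour,eps)
    % Hedged reconstruction of the alfse script described above.
    % Inverse power iterations with the decontamination of Eq. (100).
    bphi=b*evect ;
    eval=(adect'*a*evect)/(adect'*bphi) ;   % mu1 from the direct/adjoint pair
    delta=zeros(size(sour)) ; iter=0 ;
    while true
      iter=iter+1 ;
      bg=b*delta ;
      bg=bg-bphi*((adect'*bg)/(adect'*bphi)) ;  % decontamination of Eq. (100)
      delta0=delta ;
      delta=a\(sour+eval*bg) ;                  % Eq. (99) with decontaminated source
      if max(abs(delta-delta0))<=eps*max(abs(delta)), break, end
    end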

The preconditioned power method 1

The iterative system used to solve the fixed source eigenvalue Eq. (97) can be written in a form similar to Eq. (78), suitable for the application of variational acceleration:

(101) $\Gamma^{(0)}$ given; $\quad \Gamma^{(k+1)} = \Gamma^{(k)} - M^{(J)} \left[ A\, \Gamma^{(k)} - \mu_1 \left( I - \frac{B\Phi\, {\Phi^{*}}^{\top}}{{\Phi^{*}}^{\top} B\, \Phi} \right) B\, \Gamma^{(k)} - S \right]$ if $k \ge 0$

where $M^{(J)}$ is a preconditioning matrix corresponding to the application of $J$ inner iterations. The variational acceleration method is based on a preconditioned power method of the form

(102) $\Gamma^{(0)}$ given; $\quad \Gamma^{(1)} = \Gamma^{(0)} + \alpha^{(0)} g^{(0)}$; $\quad \Gamma^{(k+1)} = \Gamma^{(k)} + \alpha^{(k)} \left\{ g^{(k)} + \beta^{(k)} \left[ \Gamma^{(k)} - \Gamma^{(k-1)} \right] \right\}$ if $k \ge 1$

where

(103) $g^{(k)} = -M^{(J)} \left[ A\, \Gamma^{(k)} - \mu_1 \left( I - \frac{B\Phi\, {\Phi^{*}}^{\top}}{{\Phi^{*}}^{\top} B\, \Phi} \right) B\, \Gamma^{(k)} - S \right].$

Constants $\alpha^{(k)}$ and $\beta^{(k)}$ are the acceleration parameters, computed in such a way as to minimize a norm of the residual of the numerical solution at the next iteration.