Numerical methods part 2


Numerical methods part 2
Alain Hébert <alain.hebert@polymtl.ca>
Institut de génie nucléaire, École Polytechnique de Montréal
ENE6103: Week 6

Content (week 6)

1. Solution of an eigenvalue problem
   - The inverse power method
   - The preconditioned power method
   - The multigroup partitioning
   - Convergence acceleration
2. Solution of a fixed source eigenvalue problem
   - The inverse power method
   - The preconditioned power method

Solution of an eigenvalue problem 1

A consistent discretization of the transport or neutron diffusion equation leads to an eigenvalue matrix system of the form

(47)  \left( A - \frac{1}{\lambda_l} B \right) v_l = 0 ;  l = 1, ..., L

where A and B are non-symmetric matrices resulting from the discretization of the transport or neutron diffusion equation, λ_l is the l-th eigenvalue and v_l is the corresponding eigenvector. The right-hand side 0 is the zero vector (whose components are all zero).

The non-symmetry of matrices A and B is due to the discretization process, which is generally performed for G > 1 energy groups. The order L of these matrices is equal to the number of energy groups times the number of flux unknowns per group.

Solution of an eigenvalue problem 2

Equation (47) has L eigensolutions or harmonics, each of them corresponding to a root λ_l of the characteristic equation, written as

(48)  \det \left( A - \frac{1}{\lambda_l} B \right) = 0 ;  l = 1, ..., L.

If the reactor geometry has symmetries, some eigenvalues may be degenerate (i.e., λ_k = λ_l with k ≠ l). We are looking for the fundamental solution, corresponding to the first harmonic of Eq. (47). K_eff = λ_1 is the effective multiplication factor and Φ = v_1 is the discretized particle flux.

Only the fundamental solution corresponds to a particle flux that is positive over the whole domain. A fundamental solution is never degenerate. The fundamental problem is therefore written

(49)  \left( A - \frac{1}{K_{eff}} B \right) \Phi = 0.

The inverse power method 1

The basic algorithm for finding the fundamental solution of Eq. (49) is the inverse power method, an iterative strategy written as

(50)  \Phi^{(0)} given
      \Phi^{(k+1)} = \frac{1}{K_{eff}^{(k)}} A^{-1} B \Phi^{(k)}  if k ≥ 0

where Φ^(0) is a non-zero initial estimate of the particle flux solution and K_eff^(k) is an estimate of the effective multiplication factor at iteration k. Different definitions of K_eff^(k) can be used with success.

We take the internal product of each term in Eq. (49) with vector BΦ. We obtain

(51)  \langle A\Phi, B\Phi \rangle - \frac{1}{K_{eff}} \langle B\Phi, B\Phi \rangle = 0

where we used the notation x^⊤ y ≡ ⟨x, y⟩. Equation (51) leads to the following definition of K_eff^(k):

(52)  K_{eff}^{(k)} = \frac{\langle B\Phi^{(k)}, B\Phi^{(k)} \rangle}{\langle A\Phi^{(k)}, B\Phi^{(k)} \rangle}.
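To make the recursion concrete, here is a minimal Matlab sketch of iteration (50) driven by the estimate (52); the 3×3 matrices a and b are made-up placeholders, not a real discretization:

    % Minimal sketch of Eqs. (50) and (52); a and b are illustrative test matrices.
    a = [4 -1 0; -1 4 -1; 0 -1 4];     % hypothetical discretized "A"
    b = diag([1 0.5 0.2]);             % hypothetical fission matrix "B"
    phi = ones(3,1);                   % non-zero initial flux estimate
    for k = 1:50
      keff = dot(b*phi, b*phi) / dot(a*phi, b*phi);   % Eq. (52)
      phi  = (a \ (b*phi)) / keff;                    % Eq. (50)
    end
    fprintf('K_eff = %.6f\n', keff);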

The inverse power method 2

The product A^{-1} B is the iterative matrix of the inverse power method. Its spectrum determines the convergence characteristics of iteration (50). The harmonics that are elements of this spectrum are the solutions to

(53)  \lambda_l v_l - A^{-1} B v_l = 0 ;  l = 1, ..., L.

The required fundamental solution corresponds to l = 1, so that K_eff = λ_1 and Φ = v_1. The spectrum is ordered as

(54)  K_{eff} = \lambda_1 > \lambda_2 \ge \lambda_3 \ge \dots \ge \lambda_L.

The inverse power method generates an asymptotic series of iterates denoted Φ^(k). The eigenvectors {v_l ; l = 1, ..., L} of matrix A^{-1} B are linearly independent, so that the estimate Φ^(k) can be expressed as a linear combination of the form

(55)  \Phi^{(k)} = \sum_{l=1}^{L} c_l^{(k)} v_l.

The inverse power method 3

Now consider a quasi-converged estimate of the fundamental solution, so that K_eff^(k) ≃ λ_1. The next estimate (k + 1) can be obtained by substituting Eq. (55) into iteration (50):

(56)  \Phi^{(k+1)} = \frac{1}{\lambda_1} A^{-1} B \sum_{l=1}^{L} c_l^{(k)} v_l.

Substituting Eq. (53) into Eq. (56), we find

(57)  \Phi^{(k+1)} = \frac{1}{\lambda_1} \sum_{l=1}^{L} c_l^{(k)} \lambda_l v_l.

After m iterations, we obtain

(58)  \Phi^{(k+m)} = \sum_{l=1}^{L} c_l^{(k)} \left( \frac{\lambda_l}{\lambda_1} \right)^m v_l

with

(59)  \lim_{m \to \infty} \left( \frac{\lambda_l}{\lambda_1} \right)^m = 0  if l ≥ 2

since λ_1 > λ_l for all l > 1, as suggested by Eq. (54).

The inverse power method 4

We may conclude that the inverse power method converges to the fundamental solution v_1. The convergence characteristics of the asymptotic series are the convergence order p and the asymptotic convergence constant C, defined from the relation

(60)  C = \lim_{k \to \infty} \frac{\| \Phi^{(k+2)} - \Phi^{(k+1)} \|}{\| \Phi^{(k+1)} - \Phi^{(k)} \|^p}.

Comparing Eqs. (55) and (57), we write

(61)  \Phi^{(k+1)} - \Phi^{(k)} = \sum_{l=1}^{L} c_l^{(k)} \left( \frac{\lambda_l}{\lambda_1} - 1 \right) v_l
                              = c_2^{(k)} \left[ \left( \frac{\lambda_2}{\lambda_1} - 1 \right) v_2 + \sum_{l=3}^{L} \frac{c_l^{(k)}}{c_2^{(k)}} \left( \frac{\lambda_l}{\lambda_1} - 1 \right) v_l \right].

Performing one more iteration, we find

(62)  \Phi^{(k+2)} - \Phi^{(k+1)} = \sum_{l=1}^{L} c_l^{(k)} \frac{\lambda_l}{\lambda_1} \left( \frac{\lambda_l}{\lambda_1} - 1 \right) v_l
                               = c_2^{(k)} \frac{\lambda_2}{\lambda_1} \left[ \left( \frac{\lambda_2}{\lambda_1} - 1 \right) v_2 + \sum_{l=3}^{L} \frac{c_l^{(k)}}{c_2^{(k)}} \frac{\lambda_l}{\lambda_2} \left( \frac{\lambda_l}{\lambda_1} - 1 \right) v_l \right].

The inverse power method 5

The norm of a vector x is symbolized as ‖x‖. Any norm of a vector x satisfies the three following conditions:
1. ‖x‖ > 0 if and only if x ≠ 0
2. ‖k x‖ = |k| ‖x‖ for any complex number k
3. ‖x + y‖ ≤ ‖x‖ + ‖y‖.

We take the norm of Eqs. (61) and (62) at the limit where k tends to infinity. We write

(63)  \lim_{k \to \infty} \| \Phi^{(k+1)} - \Phi^{(k)} \| = \left\| c_2^{(k)} \left( \frac{\lambda_2}{\lambda_1} - 1 \right) v_2 \right\| = | c_2^{(k)} | \left| \frac{\lambda_2}{\lambda_1} - 1 \right| \| v_2 \|

and

(64)  \lim_{k \to \infty} \| \Phi^{(k+2)} - \Phi^{(k+1)} \| = \left\| c_2^{(k)} \frac{\lambda_2}{\lambda_1} \left( \frac{\lambda_2}{\lambda_1} - 1 \right) v_2 \right\| = | c_2^{(k)} | \frac{\lambda_2}{\lambda_1} \left| \frac{\lambda_2}{\lambda_1} - 1 \right| \| v_2 \|

where we used the following property:

(65)  \lim_{k \to \infty} \frac{c_l^{(k)}}{c_2^{(k)}} = 0  if l ≥ 3.

The inverse power method 6

In conclusion: the inverse power method converges linearly (i.e., p = 1) toward the fundamental solution v_1, with an asymptotic convergence constant C equal to

(66)  C = \lim_{k \to \infty} \frac{\| \Phi^{(k+2)} - \Phi^{(k+1)} \|}{\| \Phi^{(k+1)} - \Phi^{(k)} \|} = \frac{\lambda_2}{\lambda_1}.

For a linearly converging asymptotic series, the asymptotic convergence constant is the convergence rate of the series and is equal to the dominance ratio λ_2/λ_1 of the iterative matrix A^{-1} B. Any iterative process with p = 1 converges if its asymptotic convergence constant (or convergence rate) is smaller than one.
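The dominance ratio can be observed numerically: the ratio of successive correction norms tends to C = λ_2/λ_1. A small sketch, reusing the placeholder matrices introduced earlier:

    % Sketch: observe C of Eq. (66) as the ratio of successive correction norms.
    a = [4 -1 0; -1 4 -1; 0 -1 4];  b = diag([1 0.5 0.2]);   % placeholders
    phi = ones(3,1);  d_old = 0;
    for k = 1:15
      keff = dot(b*phi, b*phi) / dot(a*phi, b*phi);
      phi_new = (a \ (b*phi)) / keff;
      d_new = norm(phi_new - phi);
      if d_old > 0
        fprintf('k = %2d   C ~ %.5f\n', k, d_new/d_old);  % tends to lambda_2/lambda_1
      end
      d_old = d_new;  phi = phi_new;
    end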

The inverse power method 7

The practical implementation of the inverse power method takes advantage of any particular characteristics of matrices A and B in order to reduce computational costs. If these matrices are full, one may compute the iterative matrix A^{-1} B once at the beginning of the algorithm. The Matlab script aleig uses this idea. If matrix A has a profiled shape, a factorization of A may be a better alternative.

The Matlab script [iter,eval,evect]=aleig(a,b,eps) finds the fundamental eigenvalue and corresponding eigenvector of Eq. (49) using the inverse power method. The script uses three input parameters:
- a and b are the input matrices
- eps is the convergence parameter of the inverse power method.

The script returns a list containing:
- the number of iterations
- the fundamental eigenvalue 1/K_eff
- the fundamental eigenvector Φ.

The inverse power method 8

Matrix A is first inverted using function inv(a) before being multiplied by matrix b. This approach is similar to an implementation of this algorithm where matrix A is inverted in place. We could have used the backslash operator (a \ b) in the Matlab script as an alternative approach.

The estimate of K_eff at iteration k is obtained using a relation similar to Eq. (52), written

(67)  \lambda^{(k)} = \frac{1}{K_{eff}^{(k)}} = \frac{\langle \Phi^{(k)}, A^{-1} B \Phi^{(k)} \rangle}{\langle A^{-1} B \Phi^{(k)}, A^{-1} B \Phi^{(k)} \rangle}.

The iterative process is assumed to be converged when

(68)  | \lambda^{(k)} - \lambda^{(k-1)} | \le \epsilon \, | \lambda^{(k)} |

and

(69)  \max_{1 \le i \le L} | \phi_i^{(k)} - \phi_i^{(k-1)} | \le \epsilon \max_{1 \le i \le L} | \phi_i^{(k)} |

where ε is the stopping criterion of the iterative process. The first condition (68) alone is not sufficient to stop the iterations, as K_eff may converge to machine epsilon accuracy before the eigenvector has converged.
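By way of illustration, here is a minimal sketch of what an aleig-like function could look like, combining the estimate (67) with the stopping tests (68) and (69); the actual course script may differ in its details:

    function [iter, eval, evect] = aleig_sketch(a, b, eps)
    % Hedged re-creation of an aleig-like script: inverse power method
    % with the eigenvalue estimate of Eq. (67) and the tests (68)-(69).
      c = inv(a) * b;                  % iterative matrix A^{-1} B (full case)
      phi = ones(size(a,1), 1);        % non-zero initial estimate
      lambda = 0;
      for iter = 1:1000
        w = c * phi;
        lambda_old = lambda;
        lambda = dot(phi, w) / dot(w, w);     % Eq. (67): lambda = 1/K_eff
        phi_new = lambda * w;                 % Eq. (50)
        % stopping tests of Eqs. (68) and (69): both must hold
        if abs(lambda - lambda_old) <= eps*abs(lambda) && ...
           max(abs(phi_new - phi)) <= eps*max(abs(phi_new))
          phi = phi_new;
          break
        end
        phi = phi_new;
      end
      eval = lambda;
      evect = phi;
    end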

The inverse power method 9

In spite of its simplicity, the script aleig is generally inefficient in real-life situations. Its most important weaknesses are:
- Matrices A and B are stored as full matrices, in spite of the fact that they may contain many zero components. This storage causes an excessive use of memory and many unnecessary multiplications by zero components.
- Inversion of matrix A generates a large number of non-zero components not initially present in this matrix. This is the fill-in phenomenon.
- Convergence of the inverse power method may become too slow to succeed in cases where the dominance ratio is close to one. This phenomenon generally occurs with the modeling of a large power reactor, such as the pressurized water reactors (PWRs) used for electricity production. In this case, it is possible to meet the convergence criteria of Eqs. (68) and (69) at the level of machine epsilon without having effectively converged. This is the false convergence phenomenon.

The inverse power method 10

Corrective techniques exist to alleviate each of these weaknesses.
- The first corrective technique consists in replacing the inversion of matrix A by its factorization. Two factorization techniques are available: the Cholesky factorization is used in cases where matrix A is symmetric, and the Crout factorization is used when matrix A is non-symmetric. Such factorizations are only effective insofar as matrix A is first partitioned into group-by-group blocks, each block corresponding to specific primary and secondary energy groups. The discretization of large 2D and 3D domains generates matrices with many zero components inside their external profile; consequently, fill-in will occur during factorization, making these approaches infeasible.
- It is possible to introduce the preconditioned power method and to completely avoid the fill-in phenomenon.
- Convergence acceleration techniques are available in cases where the dominance ratio is close to one.

The preconditioned power method 1

The preconditioned power method is a variant of the power method permitting us to avoid inversion or factorization of matrix A. We start from the basic recursion of the inverse power method:

(70)  \Phi^{(k+1)} = \frac{1}{K_{eff}^{(k)}} A^{-1} B \Phi^{(k)}.

The calculation of x^(k+1) = A^{-1} B Φ^(k) is equivalent to the solution of a linear system written as A x^(k+1) = b^(k), where b^(k) = B Φ^(k). Such a solution can be performed by a direct approach (either Gaussian elimination or a factorization approach), at the cost of component fill-in that may become excessive for discretizations over 2D or 3D domains. This suggests an alternative approach based on an iterative method.

Two iteration levels are required. The outer or power iterative level is controlled by index k and involves the computation of successive estimates of the effective multiplication factor. The inner level is related to the solution of a linear system and is controlled by index j.

The preconditioned power method 2

The calculation of Φ^(k+1) involves the iterative solution of a linear system A x^(k+1) = b^(k) using an initial estimate written x^(k+1,0) and defined as

(71)  x^{(k+1,0)} = K_{eff}^{(k)} \, \Phi^{(k)}.

After performing J inner iterations, Eq. (70) is replaced by

(72)  \Phi^{(k+1)} = \frac{1}{K_{eff}^{(k)}} \, x^{(k+1,J)}

where the estimate x^(k+1,J) is obtained in terms of x^(k+1,0) as

(73)  x^{(k+1,J)} = (I - R^J) A^{-1} b^{(k)} + R^J x^{(k+1,0)}
                 = (I - R^J) A^{-1} B \Phi^{(k)} + R^J K_{eff}^{(k)} \Phi^{(k)}

where the residual matrix is defined as R = I - M A, with M representing a preconditioning matrix close to A^{-1}.

The preconditioned power method 3

Substituting Eq. (73) into Eq. (72), we obtain

(74)  \Phi^{(k+1)} = \frac{1}{K_{eff}^{(k)}} (I - R^J) A^{-1} B \Phi^{(k)} + R^J \Phi^{(k)}
                  = \Phi^{(k)} - M^{(J)} \left[ A \Phi^{(k)} - \frac{1}{K_{eff}^{(k)}} B \Phi^{(k)} \right]

where M^(J) is the preconditioning matrix representing J inner iterations. Its generating definition is

(75)  M^{(J)} = (I - R^J) A^{-1}

and its first three values are

(76)  M^{(1)} = M
      M^{(2)} = (2I - MA) M
      M^{(3)} = [3I - (3I - MA) MA] M.

The residual matrices associated with these three preconditioning matrices are R, R^2 and R^3, respectively: each additional inner iteration multiplies the residual matrix by a further factor of R.
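In practice, neither R nor M^(J) is formed explicitly: applying M^(J) to a vector r amounts to performing J sweeps of the stationary inner iteration x_j = x_{j-1} + M (r - A x_{j-1}), starting from x_0 = 0 (one sweep gives M r, two give (2I - MA)M r, and so on). A sketch using a Jacobi preconditioner M = D^{-1}, which is an illustrative choice:

    function x = apply_MJ(a, r, J)
      % Returns x = M^(J) r of Eq. (75) without forming R or M^(J):
      % J sweeps of x <- x + M (r - A x) from x = 0, with Jacobi M = D^{-1}.
      d = diag(a);                 % diagonal of A
      x = zeros(size(r));
      for j = 1:J
        x = x + (r - a*x) ./ d;
      end
    end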

The preconditioned power method 4

At the limit where R = O, the preconditioning matrix becomes identical to A^{-1} and the preconditioned power method reduces to the inverse power method. In the general case where the preconditioning matrix is different from A^{-1}, the preconditioned power method converges linearly (i.e., p = 1) at a rate slower than the convergence rate of the inverse power method. We can show that the asymptotic convergence constant in this case is

(77)  C = \lim_{k \to \infty} \frac{\| \Phi^{(k+2)} - \Phi^{(k+1)} \|}{\| \Phi^{(k+1)} - \Phi^{(k)} \|} \simeq \frac{\lambda_2}{\lambda_1} + \left( 1 - \frac{\lambda_2}{\lambda_1} \right) \| R^J \|

for an iterative process with J inner iterations per power iteration.

The preconditioned power method converges linearly, so that C < 1 is a necessary condition for convergence. According to Eq. (77), this condition is met if the norm of the residual matrix is smaller than one. In other situations, divergence may occur.

The preconditioned power method 5

The practical definition of the recursion for the preconditioned power method is summarized as

(78)  \Phi^{(0)} given
      \Phi^{(k+1)} = \Phi^{(k)} - M^{(J)} \left[ A \Phi^{(k)} - \frac{1}{K_{eff}^{(k)}} B \Phi^{(k)} \right]  if k ≥ 0.

Two choices are left to the user of this method:
- One must select the type of preconditioning matrix. Available choices are the Jacobi, Gauss-Seidel, SSOR or ADI preconditioning matrices.
- One must select the number of inner iterations J to be performed in each power iteration. Values J = 1 and J = 2 are the most usual ones. We suggest selecting the smallest value of J that guarantees convergence of the power method in fewer than 75 iterations. In the code Trivac, the value of J is automatically increased by one when convergence difficulties of the power method are observed.
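A minimal self-contained sketch of recursion (78), using a Jacobi preconditioner and J = 2 inner iterations; the matrices, tolerance and iteration limit are illustrative, not those of Trivac:

    % Sketch of Eq. (78) with Jacobi preconditioning and J = 2 (illustrative).
    a = [4 -1 0; -1 4 -1; 0 -1 4];  b = diag([1 0.5 0.2]);
    d = diag(a);  J = 2;  phi = ones(3,1);
    for k = 1:200
      keff = dot(b*phi, b*phi) / dot(a*phi, b*phi);  % Eq. (52)
      r = a*phi - (b*phi)/keff;                      % residual of Eq. (49)
      if norm(r) <= 1e-8 * norm(b*phi), break, end
      x = zeros(size(r));
      for j = 1:J                                    % x <- M^(J) r, Eq. (75)
        x = x + (r - a*x) ./ d;
      end
      phi = phi - x;                                 % Eq. (78)
    end
    fprintf('K_eff = %.6f after %d power iterations\n', keff, k);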

The multigroup partitioning 1

The matrix A resulting from a consistent discretization of the transport or neutron diffusion equation is not symmetric, except for one-speed problems. A generally exhibits a block structure, similar to the example depicted below, where the diagonal blocks are symmetric. The complete matrix system is an eigenvalue problem of the form

(79)  \left( A - \frac{1}{K_{eff}} B \right) \Phi = 0

and can be written in a block structure, each block representing specific values of the primary and secondary energy group indices:

\begin{pmatrix} A_{11} & & & & \\ A_{21} & A_{22} & & & \\ & A_{32} & A_{33} & & \\ & A_{42} & A_{43} & A_{44} & A_{45} \\ & & & A_{54} & A_{55} \end{pmatrix}
\begin{pmatrix} \Phi_1 \\ \Phi_2 \\ \Phi_3 \\ \Phi_4 \\ \Phi_5 \end{pmatrix}
- \frac{1}{K_{eff}}
\begin{pmatrix} B_{11} & B_{12} & B_{13} & B_{14} & B_{15} \\ B_{21} & B_{22} & B_{23} & B_{24} & B_{25} \\ & & & & \\ & & & & \\ & & & & \end{pmatrix}
\begin{pmatrix} \Phi_1 \\ \Phi_2 \\ \Phi_3 \\ \Phi_4 \\ \Phi_5 \end{pmatrix}
= \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}

The multigroup partitioning 2

Assuming that the diagonal blocks are symmetric, they can be factorized using the Cholesky method (L D L^⊤). Equation (79) can be rewritten in its multigroup form as

(80)  A_{g,g} \Phi_g = - \sum_{h=1, h \ne g}^{G} A_{g,h} \Phi_h + \frac{1}{K_{eff}} \sum_{h=1}^{G} B_{g,h} \Phi_h

where G is the number of energy groups. In the particular case where the up-scattering blocks vanish, i.e., if A_{g,h} = O for all group indices h > g, the eigenvalue system (80) can be evaluated in a recursive way, using

(81)  \Phi_1 = \frac{1}{K_{eff}} A_{1,1}^{-1} \sum_{h=1}^{G} B_{1,h} \Phi_h
      \Phi_g = A_{g,g}^{-1} \left( - \sum_{h=1}^{g-1} A_{g,h} \Phi_h + \frac{1}{K_{eff}} \sum_{h=1}^{G} B_{g,h} \Phi_h \right) ;  g = 2, ..., G.
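A sketch of the downward sweep of Eq. (81) for one outer iteration, assuming no up-scattering; the two-group blocks, the value of K_eff and the incoming flux estimates are illustrative placeholders:

    % Sketch of Eq. (81), two illustrative energy groups, no up-scattering.
    G = 2;
    A = { [3 -1; -1 3],      zeros(2);          % A{g,h}; A{1,2} = O
          [-0.5 0; 0 -0.5],  [2 -1; -1 2] };    % A{2,1}: down-scattering block
    B = { [1 0; 0 1],        [0.4 0; 0 0.4];    % fission blocks B{g,h}
          zeros(2),          zeros(2)       };
    phi = {ones(2,1); ones(2,1)};  keff = 1.0;  % current flux and K_eff estimates
    for g = 1:G
      s = zeros(2,1);
      for h = 1:G,   s = s + B{g,h} * phi{h} / keff;  end   % fission source
      for h = 1:g-1, s = s - A{g,h} * phi{h};         end   % scattering source
      phi{g} = A{g,g} \ s;                                  % one group solve
    end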

The multigroup partitioning 3

This approach is the equivalent of using the following definition for the inverse of A:

(82)  A^{-1} = \begin{pmatrix}
        A_{11}^{-1} & O & O & \cdots & O \\
        - A_{22}^{-1} A_{21} A_{11}^{-1} & A_{22}^{-1} & O & \cdots & O \\
        A_{33}^{-1} \left( - A_{31} + A_{32} A_{22}^{-1} A_{21} \right) A_{11}^{-1} & - A_{33}^{-1} A_{32} A_{22}^{-1} & A_{33}^{-1} & \cdots & O \\
        \vdots & & & \ddots & \\
        & & & \cdots & A_{G,G}^{-1}
      \end{pmatrix}

If a preconditioned power method is used, we introduce diagonal preconditioning blocks written {M_{g,g} ; g = 1, ..., G} so as to approximate the inverse blocks {A_{g,g}^{-1} ; g = 1, ..., G}. A global preconditioning matrix consistent with the definition of A^{-1} of Eq. (82) is therefore written

(83)  M^{(J)} = \begin{pmatrix}
        M_{11}^{(J)} & O & O & \cdots & O \\
        - M_{22}^{(J)} A_{21} M_{11}^{(J)} & M_{22}^{(J)} & O & \cdots & O \\
        M_{33}^{(J)} \left( - A_{31} + A_{32} M_{22}^{(J)} A_{21} \right) M_{11}^{(J)} & - M_{33}^{(J)} A_{32} M_{22}^{(J)} & M_{33}^{(J)} & \cdots & O \\
        \vdots & & & \ddots & \\
        & & & \cdots & M_{G,G}^{(J)}
      \end{pmatrix}

where J is the number of inner iterations per outer iteration.

The multigroup partitioning 4

In cases where the up-scattering block contributions are small, it is possible to use Eq. (83) as the preconditioning matrix, even though this matrix neglects the up-scattering phenomena. In this case, the preconditioned power method is written

(84)  \Phi_g^{(0)} given ;  g = 1, ..., G
      \Phi_g^{(k+1)} = \Phi_g^{(k)} - M_{g,g}^{(J)} \left[ A_{g,g} \Phi_g^{(k)} + \sum_{h=1}^{g-1} A_{g,h} \Phi_h^{(k+1)} + \sum_{h=g+1}^{G} A_{g,h} \Phi_h^{(k)} - \frac{1}{K_{eff}^{(k)}} \sum_{h=1}^{G} B_{g,h} \Phi_h^{(k)} \right]
      if k ≥ 0 and g = 1, ..., G.

A similar approach can be set up to obtain the adjoint solution. In this case, the group iterations start with the matrix sub-system of group G and proceed downward toward group 1.

Convergence acceleration 1

The convergence of the inverse or preconditioned power method becomes very slow in cases where the asymptotic convergence constant C is close to one. Acceleration techniques are available to reduce the asymptotic convergence constant, provided that this constant is initially smaller than one. Three well-known approaches are available.

In the Wielandt method, Eq. (49) is replaced by

(85)  \left[ (A - \lambda_e B) - \left( \frac{1}{K_{eff}} - \lambda_e \right) B \right] \Phi = 0

where λ_e is an approximation of the requested eigenvalue. The iterative matrix is set equal to (A - λ_e B)^{-1} B and the eigenvalue of the iterative process is 1/K_eff - λ_e. This method is efficient at reducing the dominance ratio of the iterative matrix. Numerical difficulties appear when λ_e is too close to 1/K_eff, as matrix (A - λ_e B) becomes quasi-singular in this case. A linear system using this matrix on the left-hand side cannot be solved with the multigroup partitioning method; its use is therefore limited to cases with few energy groups. It is the favorite iterative method for solving the matrix system originating from the analytic nodal method.
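A hedged sketch of a Wielandt-shifted power iteration; the shift λ_e and the matrices are illustrative placeholders, and λ_e must stay safely below 1/K_eff:

    % Sketch of Eq. (85): power iteration on (A - lambda_e B)^{-1} B.
    a = [4 -1 0; -1 4 -1; 0 -1 4];  b = diag([1 0.5 0.2]);
    lambda_e = 0.1;                  % approximation of the requested eigenvalue
    as = a - lambda_e * b;           % shifted matrix (A - lambda_e B)
    phi = ones(3,1);
    for k = 1:20
      w  = as \ (b * phi);           % apply the iterative matrix
      mu = dot(phi, w) / dot(w, w);  % eigenvalue of the process: 1/K_eff - lambda_e
      phi = mu * w;
    end
    fprintf('K_eff = %.6f\n', 1/(mu + lambda_e));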

Convergence acceleration 2

The Chebyshev acceleration method is based on a rewriting of the inverse power method of the form

(86)  \Phi^{(0)} given
      \Phi^{(1)} = \Phi^{(0)} + \alpha^{(0)} g^{(0)}
      \Phi^{(k+1)} = \Phi^{(k)} + \alpha^{(k)} \left\{ g^{(k)} + \beta^{(k)} \left[ \Phi^{(k)} - \Phi^{(k-1)} \right] \right\}  if k ≥ 1

where

(87)  g^{(k)} = - \Phi^{(k)} + \frac{1}{K_{eff}^{(k)}} A^{-1} B \Phi^{(k)}.

Constants α^(k) and β^(k) are the acceleration parameters, computed in such a way as to reduce the asymptotic convergence constant. Successive power iteration cycles, of about six iterations each, are performed using an optimal sequence of acceleration parameters based on knowledge of the dominance ratio of the iterative matrix. A free iteration can be performed by setting α^(k) = 1 and β^(k) = 0. This method is very efficient provided the dominance ratio is known accurately. An over-estimation of the dominance ratio destabilizes the iterative process.

Convergence acceleration 3

The variational acceleration method consists in computing the acceleration parameters in such a way as to minimize a norm of the residual of the numerical solution at the next iteration. This approach will now be presented in more detail.

The variational acceleration method is somewhat similar to the Chebyshev acceleration method. It is based on a rewriting of the preconditioned power method of the form

(88)  \Phi^{(0)} given
      \Phi^{(1)} = \Phi^{(0)} + \alpha^{(0)} g^{(0)}
      \Phi^{(k+1)} = \Phi^{(k)} + \alpha^{(k)} \left\{ g^{(k)} + \beta^{(k)} \left[ \Phi^{(k)} - \Phi^{(k-1)} \right] \right\}  if k ≥ 1

where

(89)  g^{(k)} = - M^{(J)} \left[ A \Phi^{(k)} - \frac{1}{K_{eff}^{(k)}} B \Phi^{(k)} \right].

Constants α^(k) and β^(k) are the acceleration parameters, computed in such a way as to minimize a norm of the residual of the numerical solution at the next iteration.

The variational acceleration method 1

Knowledge of the dominance ratio or of any spectral property of the iterative matrix is not required. We will limit ourselves to a variant known as the symmetric variational acceleration technique (SVAT), permitting the acceleration of convergence in cases where matrices A and B are non-symmetric, without requiring an evaluation of the adjoint solution.

The residual R^(k) at iteration k is defined as

(90)  R^{(k)} = A \Phi^{(k)} - \frac{1}{K_{eff}^{(k)}} B \Phi^{(k)}

where the estimate of the effective multiplication factor K_eff^(k) is given by Eq. (52). We write

(91)  R^{(k)} = A \Phi^{(k)} - \frac{\langle A\Phi^{(k)}, B\Phi^{(k)} \rangle}{\langle B\Phi^{(k)}, B\Phi^{(k)} \rangle} B \Phi^{(k)}.

We introduce the L2 norm of this residual, defined as

(92)  \| x \|_2 = \sqrt{ x^\top x }.

The variational acceleration method 2

We select values of the acceleration parameters α^(k) and β^(k) in such a way as to minimize the L2 norm of the residual at iteration k + 1. The two parameters are the solution of non-linear relations obtained from

(93)  \frac{\partial}{\partial \alpha^{(k)}} \| R^{(k+1)} \|_2 = 0

and

(94)  \frac{\partial}{\partial \beta^{(k)}} \| R^{(k+1)} \|_2 = 0.

Equations (93) and (94) form a non-linear system of two equations with α^(k) and β^(k) as unknowns. The solution is obtained by solving Eqs. (93) and (94) with a Newton-Raphson iterative method. It is not possible to prove that a solution exists or that it is unique. However, the practical use of these relations for solving a large variety of problems has always led to the determination of consistent acceleration parameters with α^(k) > 1.
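A small sketch of the parameter search: a derivative-free minimizer (fminsearch) stands in for the Newton-Raphson solve of the text, the preconditioner is taken as the identity for brevity, and all matrices and iterates are illustrative placeholders:

    % Sketch of Eqs. (93)-(94): choose (alpha, beta) minimizing ||R^(k+1)||_2.
    a = [4 -1 0; -1 4 -1; 0 -1 4];  b = diag([1 0.5 0.2]);   % placeholders
    phi = [1; 0.8; 0.5];  phi_old = [1; 1; 1];               % current/previous iterates
    res = @(x) norm(a*x - (b*x) * (dot(a*x, b*x) / dot(b*x, b*x)));   % Eq. (91) norm
    g = -(a*phi - (b*phi) * (dot(a*phi, b*phi) / dot(b*phi, b*phi))); % Eq. (89), M = I
    obj = @(p) res(phi + p(1) * (g + p(2) * (phi - phi_old)));        % Eq. (88)
    p = fminsearch(obj, [1; 0]);          % start from the free iteration
    fprintf('alpha = %.4f  beta = %.4f\n', p(1), p(2));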

The variational acceleration method 3

Two or three Newton-Raphson iterations are generally sufficient to converge, starting from the initial estimate α^(k) = 1 and β^(k) = 0. It was observed that the variational acceleration strategy is stable, even when all the power iterations are accelerated. A variational acceleration strategy using a single acceleration parameter α^(k) is unstable in this case; using two acceleration parameters appears to be a minimum.

If all the power iterations are accelerated, the SVAT method becomes very similar to the conjugate gradient method applied to the eigenvalue problem. Using a single acceleration parameter is similar to a steepest descent approach.

There is no need to accelerate every power iteration. In its default behavior, the computer code Trivac uses cycles of six iterations, three free followed by three accelerated. This strategy represents a practical choice for reducing the computer resources while maintaining good convergence stability.

Fixed source eigenvalue problems 1

An eigenvalue matrix system of the form of Eq. (49) is written

(95)  (A - \mu_1 B) \Phi = 0

where A and B are non-symmetric matrices, μ_1 = 1/K_eff is the fundamental eigenvalue, Φ is the discretized particle flux and 0 is the zero vector (whose components are all zero). The adjoint problem is defined after transposition of the matrices as

(96)  (A^\top - \mu_1 B^\top) \Phi^* = 0

and the corresponding direct fixed source eigenvalue equation is

(97)  (A - \mu_1 B) \Gamma = S

where the fixed source satisfies S^\top \Phi^* = 0. If Γ is a solution of Eq. (97), then any vector of the form Γ' = Γ + αΦ is also a solution, for any value of the constant α. A particular solution can be selected using the following normalization condition:

(98)  \Gamma^\top B^\top \Phi^* = 0.

The inverse power method 1

The basic algorithm for finding the fundamental solution of Eq. (97) is the inverse power method, an iterative strategy written as

(99)  \Gamma^{(0)} given
      \Gamma^{(k+1)} = A^{-1} \left[ S + \mu_1 B \Gamma^{(k)} \right]  if k ≥ 0

where Γ^(0) is an initial estimate of the fixed source eigenvalue solution. However, the above iterative strategy may not converge without imposing a normalization condition similar to Eq. (98). A decontamination procedure can be introduced in Eq. (99) as

(100)  \Gamma^{(0)} given
       \Gamma^{(k+1)} = A^{-1} S + \mu_1 A^{-1} \left( I - \frac{\Phi \, \Phi^{*\top}}{\Phi^{*\top} B \Phi} \right) B \Gamma^{(k)}  if k ≥ 0.

The effectiveness of this decontamination procedure and the convergence of the inverse power method can be proven using a spectral approach. The decontamination is only required for the first iteration. However, it is generally applied to all iterations in order to get rid of numerical instabilities produced by roundoff errors.

The inverse power method 2

The Matlab script [iter,eval,delta]=alfse(a,b,evect,adect,sour,eps) finds the solution of the fixed source eigenvalue problem defined in Eq. (97) using the inverse power method. The script uses six input parameters:
- a and b are the input matrices
- arrays evect and adect are the direct and adjoint solutions of the corresponding eigenvalue problem
- array sour is the fixed source; it must be orthogonal to adect
- eps is the convergence parameter of the inverse power method.

The script returns a list containing:
- the number of iterations
- the fundamental eigenvalue μ_1 = 1/K_eff of the corresponding eigenvalue problem
- the solution Γ of the fixed source eigenvalue problem.
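A hypothetical usage sketch chaining the two scripts; only the signatures come from the text, while the adjoint call through transposed matrices, the raw source s0 and the orthogonalization step are assumptions:

    % Hypothetical usage of aleig/alfse; a, b and s0 are placeholders.
    [it1, ev1, evect] = aleig(a,  b,  1e-7);     % direct fundamental solution
    [it2, ev2, adect] = aleig(a', b', 1e-7);     % adjoint solution (assumed usage)
    s0   = ones(size(a,1), 1);                   % some raw fixed source
    sour = s0 - adect * (dot(adect, s0) / dot(adect, adect));  % make <sour, adect> = 0
    [iter, eval, delta] = alfse(a, b, evect, adect, sour, 1e-7);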

The preconditioned power method 1

The iterative system used to solve the fixed source eigenvalue problem of Eq. (97) can be written in a form similar to Eq. (78), suitable for the application of variational acceleration:

(101)  \Gamma^{(0)} given
       \Gamma^{(k+1)} = \Gamma^{(k)} - M^{(J)} \left[ A \Gamma^{(k)} - \mu_1 \left( I - \frac{\Phi \, \Phi^{*\top}}{\Phi^{*\top} B \Phi} \right) B \Gamma^{(k)} - S \right]  if k ≥ 0

where M^(J) is a preconditioning matrix corresponding to the application of J inner iterations. The variational acceleration method is based on a preconditioned power method of the form

(102)  \Gamma^{(0)} given
       \Gamma^{(1)} = \Gamma^{(0)} + \alpha^{(0)} g^{(0)}
       \Gamma^{(k+1)} = \Gamma^{(k)} + \alpha^{(k)} \left\{ g^{(k)} + \beta^{(k)} \left[ \Gamma^{(k)} - \Gamma^{(k-1)} \right] \right\}  if k ≥ 1

where

(103)  g^{(k)} = - M^{(J)} \left[ A \Gamma^{(k)} - \mu_1 \left( I - \frac{\Phi \, \Phi^{*\top}}{\Phi^{*\top} B \Phi} \right) B \Gamma^{(k)} - S \right].

Constants α^(k) and β^(k) are the acceleration parameters, computed in such a way as to minimize a norm of the residual of the numerical solution at the next iteration.
