DELFT UNIVERSITY OF TECHNOLOGY
DELFT UNIVERSITY OF TECHNOLOGY REPORT - Convergence Analysis of Multilevel Sequentially Semiseparable Preconditioners. Yue Qiu, Martin B. van Gijzen, Jan-Willem van Wingerden, Michel Verhaegen, and Cornelis Vuik. ISSN 1389-6520. Reports of the Delft Institute of Applied Mathematics, Delft
Copyright by Delft Institute of Applied Mathematics, Delft, The Netherlands. No part of the Journal may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission from the Delft Institute of Applied Mathematics, Delft University of Technology, The Netherlands.
CONVERGENCE ANALYSIS OF MULTILEVEL SEQUENTIALLY SEMISEPARABLE PRECONDITIONERS

YUE QIU, MARTIN B. VAN GIJZEN, JAN-WILLEM VAN WINGERDEN, MICHEL VERHAEGEN, AND CORNELIS VUIK

Abstract. Multilevel sequentially semiseparable (MSSS) matrices form a class of structured matrices that have low-rank off-diagonal structure, which allows matrix-matrix operations to be performed in linear computational complexity. MSSS preconditioners are computed by replacing the Schur complements in the block LU factorization of the global linear system by MSSS matrix approximations with low off-diagonal rank. In this manuscript, we analyze the convergence properties of such preconditioners. We show that the spectrum of the preconditioned system is contained in a circle centered at (1, 0) and give an analytic bound for the radius of this circle. This radius can be made arbitrarily small by properly setting a parameter in the MSSS preconditioner. Our results apply to a wide class of linear systems. The system matrix can be either symmetric or unsymmetric, definite or indefinite. We demonstrate our analysis by numerical experiments.

Key words. multilevel sequentially semiseparable preconditioners, convergence analysis, saddle-point systems, Helmholtz equation

AMS subject classifications. B99, Fxx, 9C, Y

1. Introduction. The most time-consuming part of many numerical simulations in science and engineering is the solution of one or more linear systems of the following type

(1.1) Kx = b,

where K = [K_ij] is an n × n matrix and b is a given right-hand-side vector of compatible size. For discretizations of partial differential equations (PDEs), the system matrix K is usually large and sparse. Many efforts have been dedicated to finding efficient numerical solution methods for such systems. Krylov subspace methods, such as the conjugate gradient (CG), minimal residual (MINRES), generalized minimal residual (GMRES), and induced dimension reduction (IDR(s)) methods [8, , , 9], have attracted considerable attention over the last decades.
When Krylov subspace methods are applied to solve large linear systems, preconditioning is necessary to improve the robustness and convergence of such iterative solvers. This manuscript studies the multilevel sequentially semiseparable (MSSS) preconditioners. The efficiency of MSSS preconditioners has been shown in [] for difficult linear systems arising from PDE-constrained optimization problems. Later, the technique was extended to general computational fluid dynamics (CFD) problems in []. The computational results given in [, ] demonstrate that the induced dimension reduction (IDR(s)) method with an MSSS preconditioner can compute the solution in just a few iterations for saddle-point systems of the following type

(1.2)  [ A   Bᵀ ] [ x ]   [ b ]
       [ B  −C ] [ y ] = [ d ],

where the 2 × 2 block coefficient matrix is denoted by 𝒜.

This research is partly supported by the NWO Veni Grant # 9 Reconfigurable Floating Wind Farms.
Delft Center for Systems and Control, Delft University of Technology, 2628 CD, Delft, the Netherlands. (Y.Qiu@tudelft.nl, Y.Qiu@gmx.com).
Delft Institute of Applied Mathematics, Delft University of Technology, 2628 CD, Delft, the Netherlands. (M.B.vanGijzen@tudelft.nl).
Here, A is an n × n symmetric positive definite matrix, B is an m × n matrix of full rank, C is an m × m symmetric positive semidefinite matrix, and usually m ≤ n. Such systems arise in PDE-constrained optimization problems [, ], computational fluid dynamics [, ], optimal flow control [, 7], et al.; cf. [] for a general survey of saddle-point systems and their numerical solution. It is shown in [, ] that all the sub-matrices of the saddle-point system (1.2) have a multilevel sequentially semiseparable (MSSS) structure. This sub-structure can be exploited to obtain a global MSSS structure of the form (1.1) by performing a simple permutation. This permutation treats linear systems from the discretization of scalar PDEs, such as the Helmholtz equation, and of coupled PDEs, e.g., the Stokes equation, in a unified way, and gives a general linear system of the form (1.1) with a global MSSS structure. In this way, we can obtain a global factorization of the system by using MSSS matrix computations in linear computational complexity. The global factorization can be computed to a prescribed accuracy, which gives a flexible and numerically efficient way to solve a wide class of linear systems. The central concept behind this factorization is that the off-diagonal blocks of the Schur complements are observed to have low numerical rank [9, ]. These numerically low-rank off-diagonal blocks can be approximated efficiently and accurately by structured matrices, such as multilevel sequentially semiseparable (MSSS) matrices, hierarchically semiseparable (HSS) matrices, hierarchical matrices, et al. Multilevel sequentially semiseparable (MSSS) matrices generalize the sequentially semiseparable (SSS) matrices [8] to the multidimensional case. They can be directly inferred from interconnected systems [].
Early work on preconditioning by SSS matrix computations for symmetric positive definite systems can be found in [9], while MSSS matrix computations have been extensively studied in [8, , ] for preconditioning unsymmetric and saddle-point systems. The advantage of MSSS matrix computations is their simplicity and low computational cost, which is O(rN). Here, N is the number of blocks and r is the rank of the off-diagonal blocks, which is usually much smaller than N [8, ]. Related structured matrices include the hierarchical matrices (H-matrices) [], H²-matrices [, ], and hierarchically semiseparable (HSS) matrices [7, 8], with computational complexity O(N logᵅ N) for some moderate α, where N is the size of the matrix. H-matrices originate from approximating the kernel of integral equations and have been extended to elliptic PDEs [, ]. H²-matrices and HSS matrices are specific subsets of H-matrices, and HSS matrices have been successfully applied in multifrontal solvers [8]. Some recent efforts have been devoted to preconditioning symmetric positive definite systems by exploiting the HSS matrix structure [7] and unsymmetric systems by H-matrix computations [, ]. To keep a clear structure in this manuscript, we only focus on the MSSS preconditioning techniques.

In this manuscript, we present a full convergence analysis of the MSSS preconditioners for a wide class of linear systems. The system matrix can be either symmetric or unsymmetric, definite or indefinite; saddle-point systems, discretized Helmholtz equations, and discretized convection-diffusion equations are automatically covered. Our analysis gives an analytic bound for the spectrum of the preconditioned system. We show that the spectrum of the preconditioned system is contained in a circle centered at (1, 0) and give an analytic bound for the radius of this circle. This radius can be made arbitrarily small by properly setting a parameter in the MSSS preconditioner. Some related work includes [] and [].
Both analyses apply only to symmetric positive definite systems. The analysis for MSSS preconditioners in [] is restricted to 1-level MSSS matrix computations, while our analysis can be applied to 2-level MSSS matrix computations. Our work in this manuscript is closely related to []. Our contributions include: (1) we extend the work in [, ] from the symmetric positive definite case to general linear systems; (2) our analysis can also be applied to saddle-point systems, which are block systems, while the analysis in [, ] only applies to symmetric positive definite linear systems that arise from the discretization of scalar PDEs; (3) we give an analytic bound for the error introduced by the model order reduction that is necessary for the MSSS preconditioning technique, which has not been studied before; (4) the analysis for MSSS preconditioning in [] only concerns 1-level MSSS matrix computations, while our analysis also covers the 2-level MSSS case; (5) for the first time, we apply this MSSS preconditioning technique to the Helmholtz equation.

The structure of our manuscript is as follows. We give a brief introduction to the MSSS preconditioning technique in Section 2 and analyze its convergence in Section 3. Section 4 studies the numerical stability of this preconditioning technique and gives a sufficient condition to avoid breakdown. The underlying condition can be satisfied by properly setting a parameter; we show how to choose this parameter in Section 5. Numerical experiments are given in Section 6, while conclusions are drawn in the final part. A companion technical report [] is available online and contains more numerical results to illustrate our analysis of the convergence of MSSS preconditioners.

2. Multilevel Sequentially Semiseparable Preconditioners. To start with, we first introduce sequentially semiseparable (SSS) matrices; the generators definition for such matrices is given by Definition 2.1.

Definition 2.1 ([8]).
Let A be an N × N matrix with SSS matrix structure and let n positive integers m_1, m_2, ..., m_n satisfy N = m_1 + m_2 + ⋯ + m_n, such that A can be written in the following block-partitioned form

(2.1) A_ij = U_i W_{i+1} ⋯ W_{j−1} V_j, if i < j;
      A_ij = D_i, if i = j;
      A_ij = P_i R_{i−1} ⋯ R_{j+1} Q_j, if i > j.

The matrices U_i, W_i, V_i, D_i, P_i, R_i, Q_i are matrices whose sizes are compatible for the matrix-matrix products when their sizes are not mentioned. They are called the generators of the SSS matrix A. Basic operations such as addition, multiplication and inversion are closed under the SSS matrix structure and can be performed in linear computational complexity. Multilevel sequentially semiseparable matrices generalize the SSS matrices to the multidimensional case. Similar to Definition 2.1 for SSS matrices, the generators representation for MSSS matrices, specifically the k-level SSS matrices, is defined in Definition 2.2.

Definition 2.2. The matrix A is said to be a k-level SSS matrix if all its generators are (k−1)-level SSS matrices. The 1-level SSS matrix is the SSS matrix that satisfies Definition 2.1.

The MSSS matrix structure can be inferred directly from the discretization of PDEs, which is studied in [8, , ]. With these definitions, we start to introduce the MSSS preconditioners for discretized partial differential equations (PDEs). Consider the following PDE

Lu = f, with u = u_D on Γ_D,
on a square domain Ω ⊂ ℝᵈ with d = 1, 2, or 3, where L is a linear differential operator and Γ_D = ∂Ω. Discretizing the PDE using the finite difference method or the finite element method and using lexicographical ordering of the grid points gives the linear system Kx = b, where the stiffness matrix K is block tridiagonal of the following type

(2.2) K = [ K_{1,1}  K_{1,2}
            K_{2,1}  K_{2,2}  K_{2,3}
                     ⋱        ⋱         ⋱
                              K_{N,N−1}  K_{N,N} ].

Here K_{i,j} is again a tridiagonal matrix for d = 2 and a block tridiagonal matrix for d = 3. For scalar PDEs discretized on a uniform mesh, it is quite natural to infer that the stiffness matrix K has an MSSS structure, i.e., a 2-level MSSS structure for d = 2 and a 3-level MSSS structure for d = 3. Discretized coupled PDEs, such as the Stokes equation and the linearized Navier-Stokes equation, yield a saddle-point system of the form (1.2). It is shown in [] that all the sub-matrices in (1.2) have an MSSS structure and can be permuted into a global MSSS structure that has the same form as (2.2), with K_{i,j} a 1-level SSS matrix for d = 2 and a 2-level SSS matrix for d = 3.

A strongly regular N × N block matrix K admits the block factorization K = LSU. Here we say that a matrix is strongly regular if all its leading principal sub-matrices are nonsingular. S is a block diagonal matrix with its i-th diagonal block given by

(2.3) S_i = K_{1,1}, if i = 1;
      S_i = K_{i,i} − K_{i,i−1} S_{i−1}⁻¹ K_{i−1,i}, if 2 ≤ i ≤ N,

where S_i is the Schur complement at the i-th step. The matrices L and U are block bidiagonal matrices of lower-triangular and upper-triangular form, respectively. They are obtained by computing

L_{i,j} = I, if i = j;  L_{i,j} = K_{i,j} S_j⁻¹, if i = j + 1,

and

U_{i,j} = I, if j = i;  U_{i,j} = S_i⁻¹ K_{i,j}, if j = i + 1.

To compute such a factorization, one needs to compute the Schur complements via (2.3). This is computationally expensive both in time and memory, since the Schur complement S_i is a full matrix.
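The Schur complement recursion and the factorization K = LSU above can be illustrated with a small dense stand-in; the manuscript performs the same steps in SSS arithmetic. The function name and block sizes below are hypothetical.

```python
import numpy as np

def block_lu_schur(K_diag, K_low, K_up):
    """Schur complement recursion for a block tridiagonal matrix:
    S_1 = K_{1,1},  S_i = K_{i,i} - K_{i,i-1} S_{i-1}^{-1} K_{i-1,i}."""
    S = [K_diag[0]]
    for i in range(1, len(K_diag)):
        S.append(K_diag[i] - K_low[i - 1] @ np.linalg.solve(S[i - 1], K_up[i - 1]))
    return S
```

With L and U assembled from L_{i+1,i} = K_{i+1,i} S_i⁻¹ and U_{i,i+1} = S_i⁻¹ K_{i,i+1}, the product LSU reproduces K exactly. In the dense setting each S_i fills in, which is precisely the cost that the MSSS approximation of the Schur complements avoids.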
Some earlier papers [, ] propose, for symmetric positive definite systems, to approximate S_i by using the off-diagonal decay property of the inverse of a symmetric positive definite tridiagonal matrix. Alternatively, an incomplete factorization can be made to reduce the fill-in within the bandwidth for such a factorization []. However, these methods do not yield satisfactory performance. In [, 9], it is stated that the Schur complement S_i from the factorization of a discretized symmetric positive definite differential operator has low off-diagonal rank and can be approximated by an H-matrix or an SSS matrix. In this manuscript,
we use SSS matrices to approximate the Schur complements in the above factorization for a general class of linear systems. Note that H-matrix computations have mainly been applied to approximate Schur complements that are either symmetric positive definite [, ] or unsymmetric with eigenvalues that have positive real part []. Some recent efforts have been made to solve the symmetric indefinite Helmholtz equation []. Denoting the Schur complements approximated by MSSS matrix computations by S̃_i (i = 1, 2, ..., N), we obtain the MSSS preconditioner K̃ given by

(2.4) K̃ = L̃ S̃ Ũ.

Here

L̃_{i,j} = I, if i = j;  L̃_{i,j} = K_{i,j} S̃_j⁻¹, if i = j + 1,
Ũ_{i,j} = I, if j = i;  Ũ_{i,j} = S̃_i⁻¹ K_{i,j}, if j = i + 1,

and S̃ is a block-diagonal matrix with S̃_i (i = 1, 2, ..., N) as its diagonal blocks. The Schur complement always corresponds to a problem that is one dimension lower than the linear system, i.e., for a 2D system the Schur complement corresponds to a 1D problem, and to a 2D problem for a 3D system. When applying MSSS preconditioners to 3D problems, one needs 2-level MSSS matrix computations to approximate the Schur complements. How to reduce the off-diagonal rank of a 2-level MSSS matrix efficiently is still an open problem [9, , ]. This makes extending the MSSS preconditioning technique from 2D to 3D nontrivial, and some extra effort needs to be devoted to it, cf. [, ]. To keep this manuscript consistent, we only focus on the convergence analysis of MSSS preconditioners for 2D systems.

3. Convergence Analysis. In this section, we analyze the convergence of the MSSS preconditioner. Some recent work devoted to the analysis of structured preconditioners can be found in [, ]. In [], the nested dissection method is used to order the unknowns of the discretized diffusion-reaction equation in 2D, and the symmetric positive definite Schur complements are approximated by SSS matrix and HSS matrix computations, respectively. Analytic bounds for the spectrum of the preconditioned system are given.
In [], H-matrix computations are applied to precondition the 2D symmetric positive definite Helmholtz equation. Both studies focus on the symmetric positive definite case. In [], it is stated that the key point for the H-matrix preconditioner for symmetric positive definite systems is not how well the approximate Schur complement, denoted by S̃_i, approximates the exact Schur complement S_i, but how small the distance between S̃_i and K_{i,i} − K_{i,i−1} S̃_{i−1}⁻¹ K_{i−1,i} is. This statement is captured by the so-called condition ε in []. In this manuscript, we also make use of this condition for the convergence analysis; it is given by the following definition.

Definition 3.1 (Condition ε []). There exists a constant ε such that

(3.1) ‖S̃_1 − K_{1,1}‖_2 ≤ ε,  ‖S̃_i − (K_{i,i} − K_{i,i−1} S̃_{i−1}⁻¹ K_{i−1,i})‖_2 ≤ ε,

hold for 2 ≤ i ≤ N.

If condition ε in Definition 3.1 holds, we have the following lemma that gives the distance between the preconditioner and the original system.
Lemma 3.2. Let K be a nonsingular matrix of the form (2.2), suppose the S̃_i, i = 1, 2, ..., N in (2.4) are nonsingular, and let condition ε hold. Then the MSSS preconditioner K̃ given by (2.4) satisfies ‖K − K̃‖_2 ≤ ε.

Proof. For the proof of this lemma, cf. [].

Next, we introduce how to compute an MSSS preconditioner that satisfies Lemma 3.2. Here, we assume exact arithmetic.

Lemma 3.3. Let the lower and upper semiseparable order be defined as the maximal rank of the lower and upper off-diagonal blocks, respectively. For a nonsingular SSS matrix A with lower semiseparable order r_l and upper semiseparable order r_u, the exact inverse of A can be computed using SSS matrix computations in linear computational complexity, provided that r_l and r_u are much smaller than the size of A. The inverse of A is again an SSS matrix with r_l and r_u as its lower and upper semiseparable order, respectively.

Proof. This can be shown by carefully checking the arithmetic for inverting SSS matrices described in [8, ].

Lemma 3.4. Let SSS matrices A and B have compatible sizes and properly partitioned blocks for matrix-matrix addition and multiplication. Then A + B and AB can be computed exactly using SSS matrix computations in linear computational complexity if no model order reduction is performed.

Proof. The proof of this lemma is given by checking the algorithms for SSS matrices introduced in [8, ].

For the 2D matrix K of the form (2.2), all its sub-blocks are SSS matrices. Therefore, we have the following corollary that shows how the condition ε in Definition 3.1 can be satisfied.

Corollary 3.5. Suppose the S̃_i, i = 1, 2, ..., N in (2.4) are nonsingular. Then condition ε can be satisfied by applying the following procedure.
1. Invert S̃_{i−1} using the SSS inversion algorithm.
2. Compute K_{i,i} − K_{i,i−1} S̃_{i−1}⁻¹ K_{i−1,i} using SSS matrix computations without performing model order reduction.
3. Perform model order reduction for K_{i,i} − K_{i,i−1} S̃_{i−1}⁻¹ K_{i−1,i} by choosing a proper semiseparable order r or a proper bound τ for the discarded singular values, which will be introduced in Section 5, such that the condition ‖S̃_i − (K_{i,i} − K_{i,i−1} S̃_{i−1}⁻¹ K_{i−1,i})‖_2 ≤ ε is satisfied.

Proof. According to Lemma 3.3 and Lemma 3.4, both step 1 and step 2 can be performed exactly. By applying the Hankel blocks approximation introduced in [8, ], K_{i,i} − K_{i,i−1} S̃_{i−1}⁻¹ K_{i−1,i} can be approximated by an SSS matrix S̃_i to a prescribed accuracy ε measured in the matrix 2-norm by properly setting a parameter in the Hankel blocks approximation. The details will be introduced in Section 5.

Remark. To satisfy condition ε, we need to apply Corollary 3.5 to compute the MSSS preconditioner. This is computationally cheap and feasible, because the semiseparable orders of K_{i,i}, S̃_{i−1}⁻¹, K_{i,i−1} and K_{i−1,i} are small. Performing step 2 in Corollary 3.5 only increases the semiseparable order slightly. Here, the semiseparable order is defined as the maximum off-diagonal rank. For details of the increase of the
semiseparable order, cf. []. After performing the model order reduction in step 3, the semiseparable order is reduced and bounded. This gives the approximate Schur complements S̃_i with small semiseparable order.

Lemma 3.2 gives the distance between the preconditioner and the original system matrix, while Corollary 3.5 illustrates how to satisfy condition ε. Normally, however, we do not consider this distance, but the distance between the preconditioned matrix and the identity matrix. Next, we give an analytic bound on this distance.

Theorem 3.6. Let a nonsingular matrix K be of the form (2.2) and let the condition ε in Definition 3.1 hold by Corollary 3.5. If the S̃_i (i = 1, 2, ..., N) are nonsingular and ε < σ_min, then the MSSS preconditioner K̃ is given by (2.4) and

‖K̃⁻¹K − I‖_2 ≤ ε / (σ_min − ε).

Here σ_min is the smallest singular value of K.

Proof. Since the S̃_i (i = 1, 2, ..., N) are nonsingular, K̃ is nonsingular. Then,

‖K⁻¹K̃ − I‖_2 = ‖K⁻¹(K̃ − K)‖_2 ≤ ‖K⁻¹‖_2 ‖K̃ − K‖_2 ≤ ε ‖K⁻¹‖_2 = ε / σ_min.

Since ε < σ_min, we have ε/σ_min < 1, and the Neumann series converges:

(I + (K⁻¹K̃ − I))⁻¹ = Σ_{k≥0} (−1)ᵏ (K⁻¹K̃ − I)ᵏ = K̃⁻¹K.

We then obtain

‖K̃⁻¹K‖_2 ≤ Σ_{k≥0} (ε/σ_min)ᵏ = 1 / (1 − ε/σ_min) = σ_min / (σ_min − ε),

and hence

‖K̃⁻¹‖_2 = ‖K̃⁻¹K K⁻¹‖_2 ≤ ‖K̃⁻¹K‖_2 ‖K⁻¹‖_2 ≤ σ_min / (σ_min − ε) · 1/σ_min = 1 / (σ_min − ε).

This in turn yields

‖K̃⁻¹K − I‖_2 = ‖K̃⁻¹(K − K̃)‖_2 ≤ ‖K̃⁻¹‖_2 ‖K − K̃‖_2 ≤ ε / (σ_min − ε).

According to Theorem 3.6, we have the following proposition that gives the condition number of the preconditioned matrix.

Proposition 3.7. Let a nonsingular matrix K be of the form (2.2) and let the condition ε in Definition 3.1 hold by Corollary 3.5. If the S̃_i (i = 1, 2, ..., N) are
nonsingular and ε < σ_min/2, then the MSSS preconditioner K̃ is of the form (2.4) and

κ(K̃⁻¹K) ≤ σ_min / (σ_min − 2ε).

Here σ_min is the smallest singular value of K.

Proof. According to Theorem 3.6, we have ‖K̃⁻¹K − I‖_2 ≤ ε/(σ_min − ε). Combined with ε < σ_min/2, this gives ε/(σ_min − ε) < 1. Then the Neumann series converges:

(I + (K̃⁻¹K − I))⁻¹ = Σ_{k≥0} (−1)ᵏ (K̃⁻¹K − I)ᵏ = K⁻¹K̃.

This yields

‖K⁻¹K̃‖_2 ≤ Σ_{k≥0} (ε/(σ_min − ε))ᵏ = 1 / (1 − ε/(σ_min − ε)) = (σ_min − ε) / (σ_min − 2ε).

According to Theorem 3.6, we also have ‖K̃⁻¹K‖_2 ≤ 1 + ε/(σ_min − ε) = σ_min/(σ_min − ε); we then obtain

κ(K̃⁻¹K) = ‖K̃⁻¹K‖_2 ‖K⁻¹K̃‖_2 ≤ σ_min/(σ_min − ε) · (σ_min − ε)/(σ_min − 2ε) = σ_min / (σ_min − 2ε).

According to Theorem 3.6, we can also give an analytic bound on the spectrum of the preconditioned matrix.

Proposition 3.8. Let a nonsingular matrix K be of the form (2.2) and let the condition ε in Definition 3.1 hold by following Corollary 3.5. If the S̃_i (i = 1, 2, ..., N) are nonsingular, then the MSSS preconditioner K̃ is of the form (2.4). Denote the eigenvalues of the preconditioned matrix by λ(K̃⁻¹K). If ε < σ_min, we have

|λ(K̃⁻¹K) − 1| ≤ ε / (σ_min − ε).

Here σ_min is the smallest singular value of K.

Proof. According to Theorem 3.6, we have ‖K̃⁻¹K − I‖_2 ≤ ε/(σ_min − ε).
Therefore, we obtain

|λ(K̃⁻¹K) − 1| ≤ ε / (σ_min − ε),

owing to |λ(K̃⁻¹K − I)| ≤ ‖K̃⁻¹K − I‖_2. Since λ(K̃⁻¹K − I) = λ(K̃⁻¹K) − 1, we get |λ(K̃⁻¹K) − 1| ≤ ε/(σ_min − ε).

Remark. Proposition 3.8 states that the spectrum of the preconditioned system is contained in a circle centered at (1, 0) with maximum radius ε/(σ_min − ε). Therefore, the smaller ε is, the closer the eigenvalues of the preconditioned system are to (1, 0). This in turn gives better convergence for a wide class of Krylov solvers applied to the system preconditioned by the MSSS preconditioner. According to Theorem 3.6, Proposition 3.7, and Proposition 3.8, we conclude that the smaller ε is, the better conditioned the preconditioned matrix is. For the extreme case ε = 0, when the Schur complements are not approximated, the factorization is exact. This is in turn confirmed by Theorem 3.6, Proposition 3.7, and Proposition 3.8. In Section 5, we will show that ε can be made arbitrarily small by setting a parameter in the MSSS preconditioner.

4. Breakdown-Free Condition. In the previous section, we analyzed the conditioning and spectrum of the preconditioned matrix. In this section, we discuss how to compute the MSSS preconditioner without breakdown, i.e., how to set the bound ε so as to compute nonsingular S̃_i (i = 1, 2, ..., N). To start with, we give the following lemmas that are necessary for the analysis.

Lemma 4.1 ([]). Let A be an m × n matrix with, say, m ≥ n. Sort its singular values in non-increasing order, σ_1 ≥ σ_2 ≥ ⋯ ≥ σ_n. Let Ã = A + E be a perturbation of A, and sort its singular values in non-increasing order, σ̃_1 ≥ σ̃_2 ≥ ⋯ ≥ σ̃_n. Then we have

|σ̃_i − σ_i| ≤ ‖E‖_2, i = 1, 2, ..., n.

By applying this lemma, we obtain the following lemma, which gives a sufficient condition for a nonsingular perturbation, i.e., a condition under which the perturbed analog Ã of a full-rank matrix A is still of full rank.

Lemma 4.2. Let A be an m × n full-rank matrix with, say, m ≥ n, and let Ã = A + E be a perturbation of A. If ‖E‖_2 < σ_n, where σ_n is the smallest singular value of A, then the perturbed matrix Ã is still of full rank.
Proof. Denote the smallest singular value of Ã by σ̃_n. According to Lemma 4.1, we have

σ_n − σ̃_n ≤ ‖E‖_2.

Since ‖E‖_2 < σ_n, this yields

0 < σ_n − ‖E‖_2 ≤ σ̃_n,

which states that Ã is still of full rank.

With these lemmas, we can give a sufficient condition on ε that guarantees nonsingular approximate Schur complements S̃_i (i = 1, 2, ..., N) satisfying condition ε in Definition 3.1.

Theorem 4.3. If ε satisfies the inequality ε < σ̂, where

σ̂ = min_{p=1,...,N} { σ_min(K_p) },

K_p is the leading principal sub-matrix of K of size pn × pn for p = 1, 2, ..., N, and σ_min(K_p) denotes the smallest singular value of K_p, then the condition ε can be satisfied and all the approximate Schur complements S̃_i (i = 1, 2, ..., N) are nonsingular.

Proof. At each step j (j ≥ 2) of the approximate factorization, we use S̃_j to approximate K_{j,j} − K_{j,j−1} S̃_{j−1}⁻¹ K_{j−1,j} by performing a model order reduction according to Corollary 3.5 to satisfy the condition ε. This introduces a small perturbation that is given by

E_j = S̃_j − (K_{j,j} − K_{j,j−1} S̃_{j−1}⁻¹ K_{j−1,j}), with ‖E_j‖_2 ≤ ε (j ≥ 2).

Since S̃_1 = K_{1,1} is an SSS matrix with small off-diagonal rank, no model order reduction is performed, and we have E_1 = 0. Denote the (j−1)-th leading principal sub-matrix of K by K_{j−1}. Then we have

(4.1) K_j = [ K_{j−1}    K̄_{j−1,j}
              K̄_{j,j−1}  K_{j,j} ],
      K̃_j = [ K̃_{j−1}    K̄_{j−1,j}
              K̄_{j,j−1}  K_{j,j} + E_j ],

where K_j is the j-th leading principal sub-matrix of K and K̃_j is its approximation, K̄_{j−1,j} is the block column (0, ..., 0, K_{j−1,j}), and K̄_{j,j−1} is the block row (0, ..., 0, K_{j,j−1}). Moreover, we have

K̃_j − K_j = diag(0, E_2, ..., E_j),
where ‖E_i‖_2 ≤ ε (2 ≤ i ≤ j). Then we can obtain

‖(K̃_j − K_j)x‖_2² = ‖E_2 x_2‖_2² + ‖E_3 x_3‖_2² + ⋯ + ‖E_j x_j‖_2² ≤ ε² ‖x‖_2².

This in turn yields

‖K̃_j − K_j‖_2 ≤ ε, j = 1, 2, ..., N.

According to Lemma 4.2, if ε < min_{j=1,...,N} { σ_min(K_j) }, then K̃_j (j = 1, 2, ..., N) is nonsingular. Since K̃_{j−1} and K̃_j are nonsingular, according to (4.1), the Schur complement of K̃_{j−1} in K̃_j is also nonsingular and is given by

K_{j,j} + E_j − K̄_{j,j−1} K̃_{j−1}⁻¹ K̄_{j−1,j},

which is exactly S̃_j for j = 2, ..., N, while S̃_1 = K_{1,1} is also nonsingular.

Theorem 4.3 gives a sufficient condition on ε to compute nonsingular approximate Schur complements for a general class of linear systems. For the symmetric positive definite case, this sufficient condition can be simplified by the following lemma.

Lemma 4.4. Let K be an m × m symmetric positive definite matrix of the form (2.2) and denote its smallest eigenvalue by λ_min(K). If ε < λ_min(K), then all the approximate Schur complements S̃_i (i = 1, 2, ..., N) are nonsingular.

Before giving the proof of Lemma 4.4, we first introduce the following lemma that is necessary for the proof.

Lemma 4.5 (Corollary 8.. in []). Let A be an m × m Hermitian matrix, and let A_k be a k × k principal sub-matrix of A with k < m. Then

λ_min(A) ≤ λ_min(A_k) ≤ λ_max(A_k) ≤ λ_max(A).

With Lemma 4.5, we give the proof of Lemma 4.4 as follows.

Proof. According to Theorem 4.3, if ε < min_{p=1,...,N} { σ_min(K_p) }, the approximate Schur complements S̃_i (i = 1, 2, ..., N) are nonsingular. For the symmetric positive definite matrix K, its eigenvalues and singular values are identical. Then the condition for ε is given by ε < min_{p=1,...,N} { λ_min(K_p) }. According to Lemma 4.5, we have

λ_min(K_p) ≥ λ_min(K),
which in turn gives the condition for ε, that is, ε < λ_min(K).

Theorem 4.3 gives a sufficient condition on ε to obtain nonsingular approximate Schur complements. Next, we use a simple example to illustrate this condition.

Example 4.6. Consider the 2D stationary Schrödinger equation

−ΔΨ(x, y) + κΨ(x, y) = f(x, y),

with homogeneous Dirichlet boundary conditions on a unit square domain Ω = [0, 1] × [0, 1]. Using a 5-point-stencil finite difference discretization on a uniform grid with grid size h gives a linear system that has a 2-level SSS structure; the constant κ is chosen proportional to π². Factorizing the linear system by MSSS matrix computations that satisfy condition ε in Definition 3.1 gives an MSSS preconditioner K̃. For different settings of ε, the smallest singular value σ̂ of the leading principal sub-matrices K_p, the approximation error ε at each step of computing the approximate Schur complements, the bound for ε, denoted by ε_max, and the preconditioned spectrum are plotted in Figures 4.1-4.3. We start decreasing ε from a relatively large value, which corresponds to ε > σ̂. For the case ε > σ̂, Theorem 4.3 does not hold, which means that we may fail to compute nonsingular approximate Schur complements S̃_i. However, we still succeed in computing nonsingular Schur complements S̃_i; in fact, the probability of perturbing a matrix from nonsingularity to singularity is quite small. Although we obtain nonsingular approximate Schur complements for ε > σ̂, our analysis is not suited to analyzing the preconditioned spectrum in this case. This is illustrated by the spectrum of the preconditioned system in Figure 4.1(b): the preconditioned spectrum for the largest ε is not well clustered, and a portion of the eigenvalues is far away from (1, 0). For the cases where ε is only slightly bigger than σ̂, the preconditioned spectrum is already contained in a quite small circle, cf. Figure 4.2(b). When ε is of the same order as σ̂, the circle is even smaller, cf. Figures 4.2(b) and 4.3(b).
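The reconstructed spectral bound ‖K̃⁻¹K − I‖₂ ≤ ε/(σ_min − ε) from the convergence analysis above can be checked numerically on a small dense stand-in. The sketch below, with hypothetical sizes, perturbs a matrix K by E with ‖E‖₂ = ε < σ_min, mimicking the distance ‖K − K̃‖₂ ≤ ε between the system and the preconditioner, and evaluates both sides of the bound.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
# Hypothetical well-conditioned test matrix: sigma_min(K) is safely positive
K = rng.standard_normal((n, n)) / np.sqrt(n) + 5.0 * np.eye(n)
sigma_min = np.linalg.svd(K, compute_uv=False)[-1]

# Perturbation E plays the role of K - K_tilde, with ||E||_2 = eps < sigma_min
eps = 0.1 * sigma_min
E = rng.standard_normal((n, n))
E *= eps / np.linalg.norm(E, 2)
K_tilde = K + E

# Left side and right side of the reconstructed bound
lhs = np.linalg.norm(np.linalg.solve(K_tilde, K) - np.eye(n), 2)
bound = eps / (sigma_min - eps)
```

The inequality holds for any such perturbation, since ‖K̃⁻¹(K − K̃)‖₂ ≤ ε/σ_min(K̃) and σ_min(K̃) ≥ σ_min − ε by the singular value perturbation bound used in Section 4.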
Continuing to decrease ε, the radius of the circle can be made arbitrarily small. For small enough ε, the MSSS factorization can even be used as a direct solver.

Fig. 4.1. Condition ε and preconditioned spectrum for the largest setting of ε: (a) smallest singular values and ε; (b) preconditioned spectrum.
Fig. 4.2. Condition ε and preconditioned spectrum for an intermediate setting of ε: (a) smallest singular values and ε; (b) preconditioned spectrum.

Fig. 4.3. Condition ε and preconditioned spectrum for the smallest setting of ε: (a) smallest singular values and ε; (b) preconditioned spectrum.

Remark. We observe that for the case ε < σ̂ with ε of the same order as σ̂, if ε decreases by some factor, the radius of the circle that contains the preconditioned spectrum is also reduced by a factor of about the same size. This confirms the statement on the radius of the circle in Proposition 3.8. The results for different ε given in Figures 4.1-4.3 verify our analysis of the convergence property of the MSSS preconditioner and of the spectrum of the preconditioned system in Section 3. Both the theoretical analysis and the numerical experiments indicate that a small ε is preferred. Normally, decreasing ε increases the computational complexity to a small extent. It is therefore favorable to choose a moderate ε to compute an MSSS factorization and use it as a preconditioner. This gives linear computational complexity and satisfactory convergence for a wide class of Krylov solvers. Details will be discussed in Section 6. In the next section, we will discuss how to perform the model order reduction so that ε meets a prescribed accuracy.

5. Approximation Error of Model Order Reduction. We assume by Corollary 3.5 that the condition ε is satisfied via a model order reduction operation, and the error of the model order reduction should be bounded by ε. To start, we use the model order reduction algorithm called the Hankel blocks approximation, which is studied in [8]. In the following part, we will show how to perform this model order reduction
to make the approximation error meet a prescribed accuracy ε. In this section, we use algorithm-style notation, i.e., by a = b we assign the variable a the value of b. To begin this analysis, we recall some concepts that are necessary.

Definition 5.1 ([8]). Hankel blocks are the off-diagonal blocks that extend from the diagonal to the northeast corner (for the upper case) or to the southwest corner (for the lower case).

Taking a block SSS matrix A as an example, the Hankel blocks for the strictly lower-triangular part are shown in Figure 5.1.

Fig. 5.1. Hankel blocks of a block SSS matrix.

The Hankel block H_k of the SSS matrix A has the low-rank factorization

H_k = O_k C_k,

where the low-rank factors O_k and C_k satisfy a backward and a forward recursion, respectively. They are given by

O_k = P_N, if k = N;  O_k = [ P_k ; O_{k+1} R_k ] (stacked), if k < N,

and

C_k = Q_1, if k = 2;  C_k = [ R_{k−1} C_{k−1}  Q_{k−1} ], if 2 < k ≤ N.

The rank r_k of the Hankel block H_k satisfies

rank(H_k) = rank(O_k) = rank(C_k) = r_k.

The low-rank factors O_k and C_k are the observability factor and the controllability factor of a linear time-varying (LTV) system that corresponds to the SSS matrix A. Moreover, the Hankel block H_k corresponds to the discrete Hankel map of an LTV system. SSS matrices and their relations with LTV systems are studied in []. The basic idea of the model order reduction of SSS matrices is to reduce the rank of the Hankel blocks H_k from r_k to r̃_k with r̃_k < r_k, where r̃_k is the rank of the approximate Hankel map H̃_k and rank(H̃_k) = rank(Õ_k) = rank(C̃_k) = r̃_k.
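The low-rank structure of the Hankel blocks can be demonstrated with a small dense stand-in built directly from lower generators P_i, R_i, Q_i of an SSS matrix. The sizes below are hypothetical; every lower Hankel block of the resulting matrix then has rank at most the semiseparable order r, because the generator products split at the diagonal cut exactly as in the factorization H_k = O_k C_k.

```python
import numpy as np

rng = np.random.default_rng(3)
m, r, N = 2, 1, 4  # block size, semiseparable order, number of blocks (hypothetical)
P = [rng.standard_normal((m, r)) for _ in range(N)]
R = [rng.standard_normal((r, r)) for _ in range(N)]
Q = [rng.standard_normal((r, m)) for _ in range(N)]

# Dense stand-in: strictly lower block (i, j) is P_i R_{i-1} ... R_{j+1} Q_j
A = np.eye(m * N)  # arbitrary diagonal part
for i in range(N):
    for j in range(i):
        M = P[i]
        for k in range(i - 1, j, -1):
            M = M @ R[k]
        A[i*m:(i+1)*m, j*m:(j+1)*m] = M @ Q[j]

# Lower Hankel blocks extend from the diagonal to the southwest corner
hankel_ranks = [np.linalg.matrix_rank(A[k*m:, :k*m]) for k in range(1, N)]
```

Each product P_i R_{i−1} ⋯ R_{j+1} Q_j factors as (P_i R_{i−1} ⋯ R_k)(R_{k−1} ⋯ R_{j+1} Q_j) at the cut index k, which is precisely the observability/controllability factorization above.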
To start the model order reduction, we first transform the factors C_i into a form with orthonormal rows, called the right-proper form in [8]. This is obtained by performing a singular value decomposition (SVD) on C_i. For i = 1, let

    C_1 = U_1 Σ_1 V_1^T,

and set C_1 = Q_1 = V_1^T. To keep the Hankel map (block) H_1 unchanged, we let O ← O U_1 Σ_1, which is realized on the generators by P ← P U_1 Σ_1 and R ← R U_1 Σ_1. From step i to i + 1, we have

    C_{i+1} = [ R_{i+1} C_i , Q_{i+1} ] = [ R_{i+1} , Q_{i+1} ] diag(C_i, I).

Since C_i has orthonormal rows, diag(C_i, I) also has orthonormal rows. To complete this procedure for C_{i+1}, perform the SVD

    [ R_{i+1} , Q_{i+1} ] = U_{i+1} Σ_{i+1} V_{i+1}^T,

and let [ R_{i+1} , Q_{i+1} ] = V_{i+1}^T. To keep the Hankel map at step i + 1 unchanged, we let O_{i+1} = O_{i+1} U_{i+1} Σ_{i+1}. After finishing the above procedure, all the factors C_i have orthonormal rows.

The next step of the model order reduction is to transform the low-rank factors O_i into a form with orthonormal columns, called the left-proper form, and then to reduce the rank of the Hankel maps (blocks). Since the recursion for the factor O_i runs backward, we start from i = N. First we approximate O_N by Õ_N; this gives the factor Ō_{N-1} for the next step. Here Ō_{N-1} denotes the approximation of O_{N-1} that accounts for the propagation of the approximation error of O_N. Then we compute a low-rank approximation of Ō_{N-1}, which gives Õ_{N-1}. We continue this procedure until step i = 1. At step i, we use Õ_i to approximate Ō_i, which introduces an approximation error bounded by τ. Figure depicts this backward recursion of approximations. For the details of the Hankel blocks approximation algorithm, cf. [8, ].

[Fig. : Low-rank approximation of the factors O_i]

Here Ō_i represents the approximation of O_i that accounts for the propagation of the error introduced in the previous steps, and Ō_i is further approximated by Õ_i by performing a low-rank approximation. We then have the following lemma, which bounds the error between the original Hankel map and the approximate Hankel map.

Lemma ..
Let A be a block N × N SSS matrix and let its Hankel blocks H_i be approximated by H̃_i using the Hankel blocks approximation described by the above procedure; then

    (.)    || H_i − H̃_i || ≤ (N − i + 1) τ,    1 ≤ i ≤ N,
where τ is the upper bound for the discarded singular values in the singular value decompositions of the Hankel factors. For the approximated Hankel factors Õ_i illustrated in Figure, we have

    (.)    || O_i − Õ_i || ≤ (N − i) τ,    1 ≤ i ≤ N,

and

    (.)    || Õ_{i+1} Q̃_i − O_{i+1} Q_i || ≤ (N − i) τ,    1 ≤ i ≤ N − 1.

Here we use the notation ~ to denote a factor or matrix after approximation.

Proof. For the proof of this lemma, cf. Appendix A.

Remark .. To perform the Hankel blocks approximation to reduce the off-diagonal rank of an SSS matrix, the reduction of the strictly lower-triangular part is clearly covered by the analysis above. To reduce the off-diagonal rank of the strictly upper-triangular part, we can first transpose it to strictly lower-triangular form, then perform the Hankel blocks approximation on the strictly lower-triangular part, and finally transpose back to strictly upper-triangular form. This gives a strictly upper-triangular part with reduced off-diagonal rank.

In Section , we gave an analytic bound for the radius of the circle that contains the preconditioned spectrum. This analytic bound is closely related to the approximation error ε of the model order reduction for SSS matrices, cf. Corollary . and Proposition .8. This model order reduction error ε can be made arbitrarily small by setting a parameter in the MSSS preconditioner. Now we have all the ingredients to compute a controllable ε. Next, we give the main theorem of this section, which shows how to compute this controllable ε.

Theorem .. Let the N × N block SSS matrix A be approximated by Ã with lower off-diagonal rank using the Hankel blocks approximation; then

    || A − Ã || ≤ 2 √N (N − 1) τ,

where τ is the upper bound of the discarded singular values in the singular value decompositions performed in approximating the Hankel blocks.

Proof. To prove this theorem, we use Figure to illustrate the column structure of the off-diagonal blocks of an SSS matrix.
Since the strictly upper-triangular part and the strictly lower-triangular part have a similar structure, we take the strictly lower-triangular part as an example. It is not difficult to verify that the i-th off-diagonal column of the strictly lower-triangular part of an SSS matrix, denoted by C_i, can be represented by

    C_i = O_{i+1} Q_i,    (i = 1, 2, …, N − 1).

After performing the Hankel blocks approximation, C_i is approximated by

    C̃_i = Õ_{i+1} Q̃_i,    (i = 1, 2, …, N − 1).
[Fig. : Lower-triangular Hankel columns before and after approximation: (a) Hankel columns; (b) approximate Hankel columns]

Denote C̄_i = C_i − C̃_i; then, by Lemma .,

    || C̄_i || = || O_{i+1} Q_i − Õ_{i+1} Q̃_i || ≤ (N − i) τ.

We can write the SSS matrix A in the form

    A = L + D + U,

where L is the strictly lower-triangular part, D is the block-diagonal part, and U is the strictly upper-triangular part of A, respectively. Performing the Hankel blocks approximation on the strictly lower-triangular part and on the strictly upper-triangular part, we obtain

    Ã = L̃ + D + Ũ,

where L̃ and Ũ are approximated independently. Moreover, we have

    L − L̃ = [ C̄_1  C̄_2  …  C̄_{N−1}  0 ].

For any vector x partitioned conformally into blocks x_i, this yields

    || (L − L̃) x || = || Σ_{i=1}^{N−1} C̄_i x_i ||
                     ≤ Σ_{i=1}^{N−1} || C̄_i || || x_i ||
                     ≤ (N − 1) τ Σ_{i=1}^{N−1} || x_i ||
                     ≤ (N − 1) τ √N ( Σ_{i=1}^{N−1} || x_i ||² )^{1/2}
                     = √N (N − 1) τ || x ||,

and hence

    || L − L̃ || = max_x || (L − L̃) x || / || x || ≤ √N (N − 1) τ.

Here τ is the upper bound for the discarded singular values in the singular value decompositions performed in the Hankel blocks approximation. Similarly, we have || U − Ũ || ≤ √N (N − 1) τ, which gives

    || A − Ã || = || (L − L̃) + (U − Ũ) || ≤ || L − L̃ || + || U − Ũ || ≤ 2 √N (N − 1) τ.
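The right-proper sweep described earlier can also be sketched in NumPy. This is our own minimal reconstruction under the stated recursions (0-based indices, C_1 = Q_1^T in this layout): the SVD of [R_i, Q_i^T] replaces those generators by the rows of V_i^T, and the factor U_i Σ_i is absorbed into P_{i+1} and R_{i+1}, which leaves the full strictly lower-triangular part unchanged while every C_i acquires orthonormal rows:

```python
import numpy as np

rng = np.random.default_rng(1)
N, n, r = 5, 3, 2
P = [rng.standard_normal((n, r)) for _ in range(N)]
R = [rng.standard_normal((r, r)) for _ in range(N)]
Q = [rng.standard_normal((n, r)) for _ in range(N)]

def lower_part(P, R, Q):
    """Assemble the strictly block lower-triangular part from the generators."""
    A = np.zeros((N * n, N * n))
    for i in range(N):
        for j in range(i):
            M = P[i]
            for k in range(i - 1, j, -1):
                M = M @ R[k]
            A[i*n:(i+1)*n, j*n:(j+1)*n] = M @ Q[j].T
    return A

def controllability(P, R, Q):
    C = [Q[0].T]
    for k in range(1, N):
        C.append(np.hstack([R[k] @ C[k-1], Q[k].T]))
    return C

A_before = lower_part(P, R, Q)

# Right-proper sweep: orthonormalize the rows of each C_i, compensating
# the next generators so that the matrix (and all Hankel maps) stay fixed.
for i in range(N - 1):
    M = Q[0].T if i == 0 else np.hstack([R[i], Q[i].T])
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    if i == 0:
        Q[0] = Vt.T
    else:
        R[i], Q[i] = Vt[:, :r], Vt[:, r:].T
    P[i+1] = P[i+1] @ U * s            # absorb U_i Sigma_i into O_{i+1}
    R[i+1] = R[i+1] @ U * s

assert np.allclose(A_before, lower_part(P, R, Q))
for Ci in controllability(P, R, Q)[:-1]:
    assert np.allclose(Ci @ Ci.T, np.eye(Ci.shape[0]))
print("matrix preserved; all C_i have orthonormal rows")
```

The left-proper transformation and the rank reduction of the Õ_i proceed analogously with the backward recursion.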
Remark .. Theorem . gives an analytical bound for the error introduced by the model order reduction. This error depends only on the maximum discarded singular value τ of the singular value decompositions performed in the Hankel blocks approximation. This means the approximation error can be made arbitrarily small by setting τ small enough, which in turn gives a relatively larger off-diagonal rank compared with a moderate τ. A trade-off has to be made between computational complexity and accuracy.

Remark .. The model order reduction can also be performed by setting a fixed reduced off-diagonal rank. This is convenient in practice and makes the computational complexity and memory consumption easily predictable. The disadvantage, however, is that the approximation error is unpredictable. In contrast, by setting a fixed τ for the model order reduction, we can easily control the error bound and get an adaptive reduced off-diagonal rank; the disadvantage is that the computational complexity is difficult to estimate. In practice, we observe that for many applications a properly chosen τ also gives a small enough off-diagonal rank, which in turn gives predictable computational complexity. This will be highlighted in the numerical experiments.

Remark .. In many applications, the error introduced by the model order reduction using the Hankel blocks approximation is of O(τ), which is quite small compared with the bound given by Theorem .. Only in some extreme cases is the bound given by Theorem . sharp; however, it is quite difficult to prove for which cases this error bound is sharp. Normally, a small τ still results in a small reduced off-diagonal rank, which yields linear computational complexity. This will be illustrated by numerical experiments in the next section. In practice, it would be desirable to estimate the semiseparable order of the Schur complement that corresponds to a given τ.
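Both strategies in these remarks can be mimicked with plain SVD truncation (our own sketch, not the paper's code): truncating each off-diagonal block column of a strictly block lower-triangular L at tolerance τ returns an adaptive rank per column (the number of singular values ≥ τ), and the resulting global error stays safely inside the √N(N−1)τ bound for the lower-triangular part:

```python
import numpy as np

rng = np.random.default_rng(2)
N, n, tau = 8, 7, 1e-3
size = N * n

# Numerically low-rank lower part: rank-2 signal plus tiny noise
M = (rng.standard_normal((size, 2)) @ rng.standard_normal((2, size))
     + 1e-5 * rng.standard_normal((size, size)))
L = M.copy()
for I in range(N):                     # keep only the strictly block lower part
    L[I*n:(I+1)*n, I*n:] = 0.0

Lt = np.zeros_like(L)
ranks = []
for i in range(N - 1):                 # SVD-truncate each off-diagonal column
    B = L[(i+1)*n:, i*n:(i+1)*n]
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    k = int(np.sum(s >= tau))          # adaptive rank of this column
    ranks.append(k)
    Lt[(i+1)*n:, i*n:(i+1)*n] = (U[:, :k] * s[:k]) @ Vt[:k, :]

err = np.linalg.norm(L - Lt, 2)
bound = np.sqrt(N) * (N - 1) * tau     # the bound for the lower part alone
assert 0 < err <= bound
print("adaptive column ranks:", ranks, " ||L - L~||_2 =", f"{err:.1e}")
```

With a fixed rank instead of a fixed τ, one would replace `k` by a constant: the cost becomes predictable, but the per-column error is whatever singular value happens to be discarded.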
Normally, this is quite challenging, since the off-diagonal rank depends not only on the differential operator of the PDE, but also on the coefficients of the PDE. Only some preliminary results can be found in the literature; these results are summarized by Lemma ..

Lemma . ([9]). Let the symmetric positive definite block tridiagonal system K arise from the discretization of a PDE with the Laplacian operator, constant coefficients, and Dirichlet boundary conditions everywhere on the boundary. Then the Schur complements S_i converge monotonically, and their limit S is also symmetric positive definite. The τ-rank of the Hankel blocks of S is bounded by

    r ( 1 + 8 ln( ||D|| / τ ) ),

where r is the maximal Hankel block rank of the off-diagonal blocks K_{i+1,i} and K_{i,i+1}, and D is the diagonal block of K. Here, the τ-rank of a matrix is defined as the number of singular values that are greater than or equal to τ.

Lemma . gives an upper bound for the limit of the Schur complement of infinite-dimensional symmetric positive definite systems. For finite-dimensional symmetric positive definite systems with constant coefficients, similar results hold; for a detailed discussion, cf. [9]. Note that this bound is not sharp, because the term ln( ||D|| / τ ) can be much bigger than the size of K.

Recall that Lemma . states that for a symmetric positive definite system K, τ can often be chosen such that τ < λ_min(K). In Example ., it is shown that usually we can choose τ = O(λ_min(K)). If ||D|| = O(||K||), then we get that the bound on the rank of the
Hankel blocks is of O( r ln κ(K) ). Even though this bound is not sharp, it states that for an ill-conditioned system, a bigger semiseparable order is needed to get a considerably good approximation, which will be shown in Section . For symmetric positive definite systems from the discretization of PDEs with variable coefficients, and for indefinite systems, an analytic bound on the rank of the Hankel blocks of the Schur complement is quite difficult to obtain. Relevant work on analyzing the off-diagonal rank of the Schur complement in the symmetric positive definite case by means of hierarchical matrix computations is done in [].

Remark .. The τ-rank bound for the off-diagonal blocks of the Schur complement of symmetric positive definite systems studied in Lemma . is not sharp. In many applications, the τ-rank can be made quite small and is even bounded by a small number for a wide class of linear systems. This will be illustrated by numerical experiments in the next section.

. Numerical Experiments. In this section, we use numerical experiments to validate our analysis from the previous sections. We consider three types of experiments, covering unsymmetric systems, symmetric indefinite systems from the discretization of scalar PDEs, and saddle-point systems. For all the numerical experiments in this section, the induced dimension reduction method (IDR(s)) [] is used as the Krylov solver. The IDR(s) solver is terminated when the 2-norm of the residual is reduced by a prescribed factor. The numerical experiments are implemented in MATLAB on a desktop with an Intel Core i CPU at . GHz and Gb of memory running the Debian GNU/Linux 8. system.

.. Unsymmetric System. In this subsection, we use the convection-diffusion equation as a numerical example to demonstrate our analysis for the unsymmetric case. The convection-diffusion problem is described by Example ., which is given as Example .. in [].
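For readers without IFISS at hand, a rough stand-in of our own (a first-order upwind finite-difference scheme rather than IFISS's finite elements, using the recirculating wind ω = (2y(1−x²), −2x(1−y²)) of this example) shows how the problem yields an unsymmetric, weakly diagonally dominant matrix:

```python
import numpy as np

def convection_diffusion_matrix(m, nu):
    """Upwind FD discretization of -nu*Lap(u) + w.grad(u) on (-1,1)^2,
    homogeneous Dirichlet boundary, m x m interior grid (dense, for clarity)."""
    h = 2.0 / (m + 1)
    t = np.linspace(-1 + h, 1 - h, m)          # interior coordinates
    A = np.zeros((m * m, m * m))
    idx = lambda i, j: i * m + j               # i indexes y, j indexes x
    for i, y in enumerate(t):
        for j, x in enumerate(t):
            wx = 2 * y * (1 - x**2)            # recirculating wind
            wy = -2 * x * (1 - y**2)
            k = idx(i, j)
            A[k, k] = 4 * nu / h**2 + (abs(wx) + abs(wy)) / h
            for di, dj, w in ((0, -1, max(wx, 0)), (0, 1, max(-wx, 0)),
                              (-1, 0, max(wy, 0)), (1, 0, max(-wy, 0))):
                if 0 <= i + di < m and 0 <= j + dj < m:
                    A[k, idx(i + di, j + dj)] = -nu / h**2 - w / h
    return A

A = convection_diffusion_matrix(12, 0.005)
print("unsymmetric:", not np.allclose(A, A.T))
```

The upwinding takes the neighbor on the inflow side for each wind component, which is what makes the matrix unsymmetric even though the diffusion part alone is symmetric.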
The details of the discretization of the convection-diffusion equation can also be found in []. We generate the linear system in this example using the Incompressible Flow and Iterative Solver Software (IFISS) [8]. (IFISS is a computational laboratory for experimenting with state-of-the-art preconditioned iterative solvers for the discrete linear equations that arise in incompressible flow modeling; it can be run under MATLAB or Octave.) To investigate the performance of the MSSS preconditioning technique, we consider both a moderate ν and a small ν.

Example . ([]). Zero source term, recirculating wind, characteristic boundary layers:

    (.)    −ν ∇²u + ω · ∇u = f    in Ω,
                          u = u_D  on ∂Ω,

where Ω = { (x, y) | −1 ≤ x ≤ 1, −1 ≤ y ≤ 1 }, ω = ( 2y(1 − x²), −2x(1 − y²) ), and f = 0. A Dirichlet boundary condition is imposed everywhere on the boundary, and there are discontinuities at the two corners of the wall x = 1, y = ±1.

We use the Q1 finite element method to discretize the convection-diffusion equation. First, we consider a moderate value of the viscosity parameter ν. According to Proposition .8, the preconditioned spectrum is contained in a circle centered at (1, 0), and the radius of this circle is directly related to the approximation error ε introduced by the model order reduction for SSS matrix computations. In Section , we showed that ε can be made arbitrarily small by setting the bound on the
discarded singular values τ properly. We give detailed information about the spectrum of the preconditioned matrix for the problem of dimension 89 × 89, which corresponds to mesh size h = . For different values of τ, the preconditioned spectrum and the adaptive semiseparable order are plotted in Figures 7 and 8. Figures 7(a) and 8(a) illustrate that the error introduced by the model order reduction at step k of computing the MSSS preconditioner, which is denoted by ε_k and measured by the matrix 2-norm, is of the same order as τ. Here ε_k is computed by

    (.)    ε_k = || S̃_k − ( K_{k,k} − K_{k,k−1} S̃_{k−1}^{−1} K_{k−1,k} ) ||.

They also illustrate that by setting the approximation error of the model order reduction to the same order as the smallest singular value σ, we can compute a nonsingular preconditioner and get satisfactory convergence.

[Fig. 7: Preconditioned spectrum and adaptive semiseparable order for a moderate τ: (a) minimal singular value and ε; (b) preconditioned spectrum; (c) adaptive semiseparable order]

[Fig. 8: Preconditioned spectrum and adaptive semiseparable order for a smaller τ: (a) minimal singular value and ε; (b) preconditioned spectrum; (c) adaptive semiseparable order]

By decreasing τ, we get a smaller approximation error, which corresponds to a smaller ε. According to our analysis, the circle that contains the preconditioned spectrum is then even smaller. This is depicted by Figures 7(b) and 8(b). Moreover, it is shown that by decreasing τ by a given factor, the radius of the circle that contains the preconditioned spectrum also decreases by about the same factor. This validates our bound on the radius in Proposition .8. In fact, only a small number of IDR(s) iterations are needed to compute the solution for the system with the preconditioned spectrum in Figure 7(b), and similarly few for the system corresponding to the preconditioned spectrum in Figure 8(b).
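The mechanism behind this behavior can be reproduced in a few lines (our own toy model, not the MSSS code): build the exact block LU factorization of a 2×2-block system, perturb the Schur complement by an error of 2-norm ε to mimic the truncation, and observe that the preconditioned eigenvalues stay within a distance of 1 that shrinks proportionally with ε:

```python
import numpy as np

rng = np.random.default_rng(3)
n1, n2 = 60, 60
K = rng.standard_normal((n1 + n2, n1 + n2)) + (n1 + n2) * np.eye(n1 + n2)
K11, K12 = K[:n1, :n1], K[:n1, n1:]
K21, K22 = K[n1:, :n1], K[n1:, n1:]
S = K22 - K21 @ np.linalg.solve(K11, K12)      # exact Schur complement

devs = []
for eps in (1e-2, 1e-3):
    E = rng.standard_normal((n2, n2))
    E *= eps / np.linalg.norm(E, 2)            # ||E||_2 = eps, mimics truncation
    # Block LU preconditioner with the Schur complement replaced by S + E
    P = (np.block([[K11, np.zeros((n1, n2))],
                   [K21, S + E]])
         @ np.block([[np.eye(n1), np.linalg.solve(K11, K12)],
                     [np.zeros((n2, n1)), np.eye(n2)]]))
    lam = np.linalg.eigvals(np.linalg.solve(P, K))
    devs.append(np.abs(lam - 1).max())
    print(f"eps = {eps:.0e}:  max |lambda - 1| = {devs[-1]:.2e}")
```

Since P differs from K only by E in the (2,2) block, the preconditioned matrix is a small perturbation of the identity, which is exactly the clustering around (1, 0) seen in Figures 7(b) and 8(b).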
Figures 7(c) and 8(c) give the maximum adaptive rank of the off-diagonal blocks of the Schur complement at each step of computing the MSSS preconditioner. Since we have an unsymmetric matrix, the τ-ranks of the upper-triangular part and of the
lower-triangular part are different from each other. Here the τ-rank denotes the number of singular values of a matrix that are greater than or equal to τ. Figures 7(c) and 8(c) illustrate that the upper semiseparable order r_u is bigger than the lower semiseparable order r_l, which indicates that the upper-triangular part is more difficult to approximate. Both r_l and r_u are small, which gives a small average semiseparable order and, in turn, linear computational complexity. We plot the spectrum of the system without preconditioning in Figure 9 for comparison with the preconditioned spectra in Figures 7(b) and 8(b).

[Fig. 9: Spectrum of the system without preconditioning]

The driving force of preconditioning is to push the eigenvalues away from 0 and make them cluster. We have already seen that moderate or small settings of the model order reduction error give satisfactory results. Next, we use a big model order reduction error bound by setting a large τ and test the performance of the MSSS preconditioner. The approximation error at each step of computing the MSSS preconditioner is plotted in Figure (a), the preconditioned spectrum is given in Figure (b), and the adaptive semiseparable order is given in Figure (c).

[Fig. : Preconditioned spectrum and adaptive semiseparable order for a large τ: (a) minimal singular value and ε; (b) preconditioned spectrum; (c) adaptive semiseparable order]

Note that even though this setting of the error bound is much bigger than the smallest singular value of the leading principal sub-matrices, which is what guarantees computing a nonsingular preconditioner, we still obtain an MSSS preconditioner. This is because the probability of perturbing a nonsingular matrix into a singular one is quite small.
Since the preconditioner is less accurate, due to the relatively big error bound, the radius of the circle that contains the spectrum of the preconditioned matrix in Figure (b) is not as small as the radius in Figures 7(b) and 8(b). However, the spectrum stays away from 0, and only a few eigenvalues lie outside this cluster. IDR(s) computes the solution in just 8 iterations. Moreover, the semiseparable order for these computations is small, as shown in Figure (c), which makes the computational complexity even smaller.
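To connect the clustered spectrum to these low iteration counts: even plain preconditioned Richardson iteration contracts the residual roughly by the cluster radius max |λ(P⁻¹K) − 1| per step, and Krylov solvers such as IDR(s) do at least as well. A toy sketch of ours (not the paper's solver):

```python
import numpy as np

rng = np.random.default_rng(6)
m = 200
K = rng.standard_normal((m, m)) + m * np.eye(m)
P = K + 0.05 * rng.standard_normal((m, m))   # preconditioner: K + small error
b = rng.standard_normal(m)

x = np.zeros(m)
res = [np.linalg.norm(b)]
for _ in range(10):                          # preconditioned Richardson iteration
    x = x + np.linalg.solve(P, b - K @ x)
    res.append(np.linalg.norm(b - K @ x))

rho = np.abs(np.linalg.eigvals(np.linalg.solve(P, K)) - 1).max()
print(f"cluster radius {rho:.1e}; residual after 10 steps {res[-1]/res[0]:.1e}")
```

A cluster radius of, say, a few percent therefore explains convergence in a handful of iterations regardless of the problem size.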
The performance of the MSSS preconditioner for different mesh sizes h and different values of τ is reported in Table . For the different settings of τ, the adaptive semiseparable order is given in Figure .

[Fig. : Adaptive semiseparable orders r_l and r_u for the convection-diffusion equation with moderate ν; panels (a)–(h) for different mesh sizes h and values of τ]

[Table : Computational results of the MSSS preconditioner for the convection-diffusion equation with moderate ν; columns: mesh size h, problem size N, τ, and number of iterations]
More informationX. He and M. Neytcheva. Preconditioning the incompressible Navier-Stokes equations with variable viscosity. J. Comput. Math., in press, 2012.
List of papers This thesis is based on the following papers, which are referred to in the text by their Roman numerals. I II III IV V VI M. Neytcheva, M. Do-Quang and X. He. Element-by-element Schur complement
More informationScientific Computing with Case Studies SIAM Press, Lecture Notes for Unit VII Sparse Matrix
Scientific Computing with Case Studies SIAM Press, 2009 http://www.cs.umd.edu/users/oleary/sccswebpage Lecture Notes for Unit VII Sparse Matrix Computations Part 1: Direct Methods Dianne P. O Leary c 2008
More informationPreconditioners for the incompressible Navier Stokes equations
Preconditioners for the incompressible Navier Stokes equations C. Vuik M. ur Rehman A. Segal Delft Institute of Applied Mathematics, TU Delft, The Netherlands SIAM Conference on Computational Science and
More informationA Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations
A Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations Jin Yun Yuan Plamen Y. Yalamov Abstract A method is presented to make a given matrix strictly diagonally dominant
More informationApplied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic
Applied Mathematics 205 Unit V: Eigenvalue Problems Lecturer: Dr. David Knezevic Unit V: Eigenvalue Problems Chapter V.4: Krylov Subspace Methods 2 / 51 Krylov Subspace Methods In this chapter we give
More informationStabilization and Acceleration of Algebraic Multigrid Method
Stabilization and Acceleration of Algebraic Multigrid Method Recursive Projection Algorithm A. Jemcov J.P. Maruszewski Fluent Inc. October 24, 2006 Outline 1 Need for Algorithm Stabilization and Acceleration
More informationBackground. Background. C. T. Kelley NC State University tim C. T. Kelley Background NCSU, Spring / 58
Background C. T. Kelley NC State University tim kelley@ncsu.edu C. T. Kelley Background NCSU, Spring 2012 1 / 58 Notation vectors, matrices, norms l 1 : max col sum... spectral radius scaled integral norms
More informationDELFT UNIVERSITY OF TECHNOLOGY
DELFT UNIVERSITY OF TECHNOLOGY REPORT 18-05 Efficient and robust Schur complement approximations in the augmented Lagrangian preconditioner for high Reynolds number laminar flows X. He and C. Vuik ISSN
More information1 Multiply Eq. E i by λ 0: (λe i ) (E i ) 2 Multiply Eq. E j by λ and add to Eq. E i : (E i + λe j ) (E i )
Direct Methods for Linear Systems Chapter Direct Methods for Solving Linear Systems Per-Olof Persson persson@berkeleyedu Department of Mathematics University of California, Berkeley Math 18A Numerical
More informationAlgebraic Multigrid as Solvers and as Preconditioner
Ò Algebraic Multigrid as Solvers and as Preconditioner Domenico Lahaye domenico.lahaye@cs.kuleuven.ac.be http://www.cs.kuleuven.ac.be/ domenico/ Department of Computer Science Katholieke Universiteit Leuven
More informationNotes on PCG for Sparse Linear Systems
Notes on PCG for Sparse Linear Systems Luca Bergamaschi Department of Civil Environmental and Architectural Engineering University of Padova e-mail luca.bergamaschi@unipd.it webpage www.dmsa.unipd.it/
More informationRobust Preconditioned Conjugate Gradient for the GPU and Parallel Implementations
Robust Preconditioned Conjugate Gradient for the GPU and Parallel Implementations Rohit Gupta, Martin van Gijzen, Kees Vuik GPU Technology Conference 2012, San Jose CA. GPU Technology Conference 2012,
More informationA robust multilevel approximate inverse preconditioner for symmetric positive definite matrices
DICEA DEPARTMENT OF CIVIL, ENVIRONMENTAL AND ARCHITECTURAL ENGINEERING PhD SCHOOL CIVIL AND ENVIRONMENTAL ENGINEERING SCIENCES XXX CYCLE A robust multilevel approximate inverse preconditioner for symmetric
More informationDELFT UNIVERSITY OF TECHNOLOGY
DELFT UNIVERSITY OF TECHNOLOGY REPORT 13-07 On preconditioning incompressible non-newtonian flow problems Xin He, Maya Neytcheva and Cornelis Vuik ISSN 1389-6520 Reports of the Delft Institute of Applied
More informationEffective matrix-free preconditioning for the augmented immersed interface method
Effective matrix-free preconditioning for the augmented immersed interface method Jianlin Xia a, Zhilin Li b, Xin Ye a a Department of Mathematics, Purdue University, West Lafayette, IN 47907, USA. E-mail:
More informationSolving Large Nonlinear Sparse Systems
Solving Large Nonlinear Sparse Systems Fred W. Wubs and Jonas Thies Computational Mechanics & Numerical Mathematics University of Groningen, the Netherlands f.w.wubs@rug.nl Centre for Interdisciplinary
More informationDomain decomposition for the Jacobi-Davidson method: practical strategies
Chapter 4 Domain decomposition for the Jacobi-Davidson method: practical strategies Abstract The Jacobi-Davidson method is an iterative method for the computation of solutions of large eigenvalue problems.
More informationEfficient Augmented Lagrangian-type Preconditioning for the Oseen Problem using Grad-Div Stabilization
Efficient Augmented Lagrangian-type Preconditioning for the Oseen Problem using Grad-Div Stabilization Timo Heister, Texas A&M University 2013-02-28 SIAM CSE 2 Setting Stationary, incompressible flow problems
More informationTHE solution of the absolute value equation (AVE) of
The nonlinear HSS-like iterative method for absolute value equations Mu-Zheng Zhu Member, IAENG, and Ya-E Qi arxiv:1403.7013v4 [math.na] 2 Jan 2018 Abstract Salkuyeh proposed the Picard-HSS iteration method
More informationMath Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012.
Math 5620 - Introduction to Numerical Analysis - Class Notes Fernando Guevara Vasquez Version 1990. Date: January 17, 2012. 3 Contents 1. Disclaimer 4 Chapter 1. Iterative methods for solving linear systems
More informationReview Questions REVIEW QUESTIONS 71
REVIEW QUESTIONS 71 MATLAB, is [42]. For a comprehensive treatment of error analysis and perturbation theory for linear systems and many other problems in linear algebra, see [126, 241]. An overview of
More informationScientific Computing WS 2018/2019. Lecture 9. Jürgen Fuhrmann Lecture 9 Slide 1
Scientific Computing WS 2018/2019 Lecture 9 Jürgen Fuhrmann juergen.fuhrmann@wias-berlin.de Lecture 9 Slide 1 Lecture 9 Slide 2 Simple iteration with preconditioning Idea: Aû = b iterative scheme û = û
More informationKey words. conjugate gradients, normwise backward error, incremental norm estimation.
Proceedings of ALGORITMY 2016 pp. 323 332 ON ERROR ESTIMATION IN THE CONJUGATE GRADIENT METHOD: NORMWISE BACKWARD ERROR PETR TICHÝ Abstract. Using an idea of Duff and Vömel [BIT, 42 (2002), pp. 300 322
More informationChapter 9 Implicit integration, incompressible flows
Chapter 9 Implicit integration, incompressible flows The methods we discussed so far work well for problems of hydrodynamics in which the flow speeds of interest are not orders of magnitude smaller than
More informationLinear Algebra Massoud Malek
CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product
More informationDepartment of Computer Science, University of Illinois at Urbana-Champaign
Department of Computer Science, University of Illinois at Urbana-Champaign Probing for Schur Complements and Preconditioning Generalized Saddle-Point Problems Eric de Sturler, sturler@cs.uiuc.edu, http://www-faculty.cs.uiuc.edu/~sturler
More informationDomain Decomposition Preconditioners for Spectral Nédélec Elements in Two and Three Dimensions
Domain Decomposition Preconditioners for Spectral Nédélec Elements in Two and Three Dimensions Bernhard Hientzsch Courant Institute of Mathematical Sciences, New York University, 51 Mercer Street, New
More informationSolving Symmetric Indefinite Systems with Symmetric Positive Definite Preconditioners
Solving Symmetric Indefinite Systems with Symmetric Positive Definite Preconditioners Eugene Vecharynski 1 Andrew Knyazev 2 1 Department of Computer Science and Engineering University of Minnesota 2 Department
More informationMultigrid and Iterative Strategies for Optimal Control Problems
Multigrid and Iterative Strategies for Optimal Control Problems John Pearson 1, Stefan Takacs 1 1 Mathematical Institute, 24 29 St. Giles, Oxford, OX1 3LB e-mail: john.pearson@worc.ox.ac.uk, takacs@maths.ox.ac.uk
More information6. Iterative Methods for Linear Systems. The stepwise approach to the solution...
6 Iterative Methods for Linear Systems The stepwise approach to the solution Miriam Mehl: 6 Iterative Methods for Linear Systems The stepwise approach to the solution, January 18, 2013 1 61 Large Sparse
More informationEFFECTIVE AND ROBUST PRECONDITIONING OF GENERAL SPD MATRICES VIA STRUCTURED INCOMPLETE FACTORIZATION
EFFECVE AND ROBUS PRECONDONNG OF GENERAL SPD MARCES VA SRUCURED NCOMPLEE FACORZAON JANLN XA AND ZXNG XN Abstract For general symmetric positive definite SPD matrices, we present a framework for designing
More informationOn Spectral Factorization and Riccati Equations for Time-Varying Systems in Discrete Time
On Spectral Factorization and Riccati Equations for Time-Varying Systems in Discrete Time Alle-Jan van der Veen and Michel Verhaegen Delft University of Technology Department of Electrical Engineering
More informationA greedy strategy for coarse-grid selection
A greedy strategy for coarse-grid selection S. MacLachlan Yousef Saad August 3, 2006 Abstract Efficient solution of the very large linear systems that arise in numerical modelling of real-world applications
More informationMultigrid absolute value preconditioning
Multigrid absolute value preconditioning Eugene Vecharynski 1 Andrew Knyazev 2 (speaker) 1 Department of Computer Science and Engineering University of Minnesota 2 Department of Mathematical and Statistical
More informationSolving PDEs with Multigrid Methods p.1
Solving PDEs with Multigrid Methods Scott MacLachlan maclachl@colorado.edu Department of Applied Mathematics, University of Colorado at Boulder Solving PDEs with Multigrid Methods p.1 Support and Collaboration
More informationSparse Linear Systems. Iterative Methods for Sparse Linear Systems. Motivation for Studying Sparse Linear Systems. Partial Differential Equations
Sparse Linear Systems Iterative Methods for Sparse Linear Systems Matrix Computations and Applications, Lecture C11 Fredrik Bengzon, Robert Söderlund We consider the problem of solving the linear system
More informationAn Efficient Two-Level Preconditioner for Multi-Frequency Wave Propagation Problems
An Efficient Two-Level Preconditioner for Multi-Frequency Wave Propagation Problems M. Baumann, and M.B. van Gijzen Email: M.M.Baumann@tudelft.nl Delft Institute of Applied Mathematics Delft University
More informationMULTI-LAYER HIERARCHICAL STRUCTURES AND FACTORIZATIONS
MULTI-LAYER HIERARCHICAL STRUCTURES AND FACTORIZATIONS JIANLIN XIA Abstract. We propose multi-layer hierarchically semiseparable MHS structures for the fast factorizations of dense matrices arising from
More informationNumerical Linear Algebra
Numerical Linear Algebra The two principal problems in linear algebra are: Linear system Given an n n matrix A and an n-vector b, determine x IR n such that A x = b Eigenvalue problem Given an n n matrix
More informationResearch Reports on Mathematical and Computing Sciences
ISSN 1342-284 Research Reports on Mathematical and Computing Sciences Exploiting Sparsity in Linear and Nonlinear Matrix Inequalities via Positive Semidefinite Matrix Completion Sunyoung Kim, Masakazu
More informationContents. Preface... xi. Introduction...
Contents Preface... xi Introduction... xv Chapter 1. Computer Architectures... 1 1.1. Different types of parallelism... 1 1.1.1. Overlap, concurrency and parallelism... 1 1.1.2. Temporal and spatial parallelism
More informationLecture 9: Numerical Linear Algebra Primer (February 11st)
10-725/36-725: Convex Optimization Spring 2015 Lecture 9: Numerical Linear Algebra Primer (February 11st) Lecturer: Ryan Tibshirani Scribes: Avinash Siravuru, Guofan Wu, Maosheng Liu Note: LaTeX template
More informationDELFT UNIVERSITY OF TECHNOLOGY
DELFT UNIVERSITY OF TECHNOLOGY REPORT 10-19 Fast Iterative Methods for Discontinuous Galerin Discretizations for Elliptic PDEs P. van Slingerland, and C. Vui ISSN 1389-650 Reports of the Department of
More informationApplications of Randomized Methods for Decomposing and Simulating from Large Covariance Matrices
Applications of Randomized Methods for Decomposing and Simulating from Large Covariance Matrices Vahid Dehdari and Clayton V. Deutsch Geostatistical modeling involves many variables and many locations.
More informationON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH
ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH V. FABER, J. LIESEN, AND P. TICHÝ Abstract. Numerous algorithms in numerical linear algebra are based on the reduction of a given matrix
More informationKasetsart University Workshop. Multigrid methods: An introduction
Kasetsart University Workshop Multigrid methods: An introduction Dr. Anand Pardhanani Mathematics Department Earlham College Richmond, Indiana USA pardhan@earlham.edu A copy of these slides is available
More informationOn the choice of abstract projection vectors for second level preconditioners
On the choice of abstract projection vectors for second level preconditioners C. Vuik 1, J.M. Tang 1, and R. Nabben 2 1 Delft University of Technology 2 Technische Universität Berlin Institut für Mathematik
More informationSTAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 9
STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 9 1. qr and complete orthogonal factorization poor man s svd can solve many problems on the svd list using either of these factorizations but they
More information