A NECESSARY AND SUFFICIENT SYMBOLIC CONDITION FOR THE EXISTENCE OF INCOMPLETE CHOLESKY FACTORIZATION

XIAOGE WANG AND RANDALL BRAMLEY, DEPARTMENT OF COMPUTER SCIENCE, INDIANA UNIVERSITY - BLOOMINGTON
KYLE A. GALLIVAN, DEPARTMENT OF ELECTRICAL AND COMPUTER ENGINEERING, UNIVERSITY OF ILLINOIS AT URBANA-CHAMPAIGN

August, 1993

Abstract. This paper presents a sufficient condition on sparsity patterns for the existence of the incomplete Cholesky factorization. Given the sparsity pattern P(A) of a matrix A, and a target sparsity pattern P satisfying the condition, incomplete Cholesky factorization successfully completes for all symmetric positive definite matrices with the same pattern P(A). It is also shown that this condition is necessary in the sense that, for a given P(A) and target pattern P, if P does not satisfy the condition then there is at least one symmetric positive definite matrix whose Cholesky factor has the same sparsity pattern as the Cholesky factor of A, for which incomplete Cholesky factorization fails because of a nonpositive pivot.

1. Introduction. Incomplete Cholesky factorization (IC) is a widely known and effective method of accelerating the convergence of conjugate gradient (CG) iterative methods for solving symmetric positive definite linear systems. A major weakness of IC is that it may break down due to nonpositive pivots. Methods of overcoming this problem can be divided into two classes: numerical and structural strategies. A numerical strategy uses numerical values generated during the factorization process to modify the factorization, as in the work by [Jennings and Malik, 1977, Manteuffel, 1979, Munksgaard, 1980, Wittum and Liebau, 1989]. A structural strategy, as in the work by [Coleman, 1988], selects the sparsity pattern to ensure the completion of the IC process. In this paper, we do not give any specific algorithm for modifying the sparsity pattern to assure the existence of IC.
Instead, we prove a sufficient and necessary condition on the sparsity pattern of the incomplete Cholesky factor for the existence of IC on the symmetric positive definite matrices with a given sparsity. The condition is more general than that in [Coleman, 1988], and includes Coleman's sparsity pattern condition as a special case. Structural strategies are especially important for applications where a sequence of linear systems must be solved, each coefficient matrix with the same non-zero pattern. This occurs when solving linear programming problems using interior point methods, or when solving discretized nonlinear partial differential equations with a fixed mesh. Although the values can change from one step to another, the sparsity pattern is fixed. Using a target IC sparsity pattern that satisfies our sufficient condition, the preconditioner can be set up on each step using a static data structure and with assurance that the preconditioner will exist. Some specific algorithms that modify a sparsity pattern to satisfy the sufficient condition have been proposed and tested in [Wang et al., 1993b, Wang, 1993].

* This work was supported by the National Science Foundation under grant nos. CDA-… and CCR-….

2. Definitions and Notation. Before the main results are presented, some definitions and notation that are used are given. Unless otherwise specified, we assume that matrices are

real and symmetric positive definite. The examples will only display the upper triangular part of matrices, with the understanding that the lower triangular parts are specified by symmetry. Capital letters denote matrices, and the elements of a matrix will be denoted by the corresponding lower case letter with two subscripts indicating the row and column indices. For example, a_ij denotes the element of matrix A in the ith row and jth column. Let P_n = {(i,j) | 1 ≤ i ≤ j ≤ n} be the set of all possible non-zero positions in an n × n upper triangular matrix. The sparsity pattern P(A) of a symmetric matrix A is defined as P(A) = {(i,j) | a_ij = a_ji ≠ 0, 1 ≤ i ≤ j ≤ n}. Note that P(A) only considers the positions of the non-zero elements of A in the upper triangular part, and P(A) ⊆ P_n. Let U be the upper triangular Cholesky factor of A formed assuming no cancellation, so that A = U^T U. Since it is clear from context when we are discussing a triangular or symmetric matrix, we exploit a convenient abuse of notation and let P(U) denote the sparsity pattern of U, with P(U) ⊆ P_n. The incomplete Cholesky factorization of a matrix A using sparsity pattern P refers to a Cholesky factorization where all entries occurring outside of the specified sparsity pattern are immediately discarded, and no other numerical modification is made to the matrix. Given a matrix A and a sparsity pattern P ⊆ P_n of the target factor U, it is assumed that P ⊆ P(U) without loss of generality, because a position (i,j) ∉ P(U) will not affect the final factor. In order for the IC factorization to complete, all diagonal positions (i,i) must be in P, a requirement we now impose on all the sparsity patterns P dealt with in this paper.
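For completeness, the no-cancellation pattern P(U) can be computed from P(A) alone by simulating the elimination symbolically: eliminating row k fills every position joined by a pair of nonzeros in row k. The sketch below is our own illustration; the function name and the dense set representation are ours, and production codes would use elimination trees instead.

```python
def symbolic_factor(PA, n):
    """Given the upper-triangular pattern P(A) as a set of 1-based
    (i, j) with i <= j, return P(U) for the no-cancellation Cholesky
    factor: eliminating row k fills (i, j) whenever (k, i) and (k, j)
    are both nonzero with k < i <= j."""
    PU = set(PA) | {(i, i) for i in range(1, n + 1)}   # diagonal always present
    for k in range(1, n + 1):
        cols = sorted(j for (i, j) in PU if i == k and j > k)
        for a in range(len(cols)):
            for b in range(a, len(cols)):
                PU.add((cols[a], cols[b]))            # fill position
    return PU

# An arrowhead pattern fills in completely; a tridiagonal one does not.
diag4 = {(i, i) for i in range(1, 5)}
arrow = diag4 | {(1, 2), (1, 3), (1, 4)}
tri = diag4 | {(1, 2), (2, 3), (3, 4)}
```

Fill created at step k is inserted before rows beyond k are processed, so later steps see it, matching the no-cancellation assumption.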
The IC algorithm can be described as follows:

    Algorithm [R] = IC(A, P)
    begin
      for k = 1, 2, ..., n
        if a_kk > 0 then
          (1)  r_kk = √a_kk
          for j = k+1, k+2, ..., n
          (2)  r_kj = a_kj / r_kk   if (k,j) ∈ P
               r_kj = 0             if (k,j) ∉ P
          endfor
          for i = k+1, k+2, ..., n
            for j = i, i+1, ..., n
          (3)   a_ij = a_ij − r_ki r_kj   if (i,j) ∈ P, (k,j) ∈ P, and (k,i) ∈ P
            endfor
          endfor
        else
          (4)  Stop (incomplete factorization fails)
        endif
      endfor
    end

Although the algorithm is most commonly stated (and implemented) by initially copying A to R and then referring to R alone, the form shown above is valid for target sparsity patterns P ⊄ P(A), and simplifies the proof of Theorem 1. In practice, the updates to A would not be performed, since A generally needs to be retained for the iterative method that is preconditioned by R.

* This assumption is made throughout the paper when referring to the Cholesky factorization of a matrix.

Next, we define a new property on sparsity patterns. This property uses an auxiliary

sparsity pattern Q, and when Q is the sparsity pattern P(U) of the complete Cholesky factorization, the property gives our sufficient condition for the existence of the IC factorization.

Definition 2.1. Let Q ⊆ P_n and P ⊆ P_n be given sparsity patterns. P is said to have property + on Q if the following condition is satisfied: for any position (j,k) ∈ P ∩ Q, if (i,j) and (i,k), i < j, are both in Q, then they both are in P or neither is in P. If the condition is not satisfied, then P is said to violate property + on Q.

Definition 2.2. Let Q ⊆ P_n and P ⊆ P_n be given sparsity patterns, and let +(Q) denote the set of all patterns in P_n that have property + on Q. P ∈ +(Q) and P ∉ +(Q) will denote P having property + on Q and violating it, respectively.

From the matrix sparsity point of view, P ∈ +(Q) means that for any (j,k) ∈ P ∩ Q, columns j and k of any matrix with sparsity pattern P have the same structure in rows i < j < k where (i,j) and (i,k) are in Q. Because property + requires either both elements to be present or both elements to be absent, it can also be viewed as a "not exclusive or" condition, modulo the sparsity pattern Q. The structure of rows i < j where at least one of (i,j) and (i,k) is not in Q is unrestricted.

Example 1. Let A be a full 3 × 3 matrix. All sparsity patterns P ⊆ P_3 are in +(P_3) except the following two patterns: P_1 = {(1,1), (1,2), (2,2), (2,3), (3,3)} and P_2 = {(1,1), (1,3), (2,2), (2,3), (3,3)}. The example follows by considering the following cases. If (2,3) ∈ P, then among the possible combinations of (1,2) and (1,3) in or not in P, only the listed patterns violate property + on P_3. If (2,3) ∉ P, then any combination of (1,2) and (1,3) in or not in P will satisfy property + on P_3.

Example 2. If A is the matrix arising from a five-point difference operator on a regular two-dimensional mesh using the red-black ordering, then P(A) ∈ +(P(U)). A simple way of generating a pattern for IC is to start with U and consider subsets of P(U). The next example shows that a simple selection strategy is consistent with property +.
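Definition 2.1 can be checked mechanically. The sketch below is our own illustration (the helper name `has_property_plus` is ours, not from the paper); it reproduces Example 1 by enumerating every diagonal-containing pattern on the full 3 × 3 pattern and collecting the violators.

```python
from itertools import combinations

def has_property_plus(P, Q):
    """Definition 2.1: for every (j, k) in P ∩ Q with j < k, and every
    i < j with (i, j) and (i, k) both in Q, the positions (i, j) and
    (i, k) must be both in P or both absent from P."""
    for (j, k) in P & Q:
        if j == k:
            continue
        for i in range(1, j):
            if (i, j) in Q and (i, k) in Q:
                if ((i, j) in P) != ((i, k) in P):
                    return False
    return True

# Example 1: on the full 3x3 pattern P_3, exactly two diagonal-containing
# subpatterns violate property +.
P3 = {(i, j) for i in range(1, 4) for j in range(i, 4)}
diag = {(i, i) for i in range(1, 4)}
off = [(1, 2), (1, 3), (2, 3)]
violators = [diag | set(c)
             for r in range(4) for c in combinations(off, r)
             if not has_property_plus(diag | set(c), P3)]
```

The two violators found are the patterns containing (2,3) together with exactly one of (1,2) and (1,3), as the example states.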
Example 3. Let A be a matrix with Cholesky factor U. If V is a submatrix of U that consists of all the diagonal elements and any subset of the set of rows of U, then P(V) ∈ +(P(U)). To see this fact, note first that (j,k) ∈ P(V) implies (j,k) ∈ P(U). Now consider any pair of positions (i,j) and (i,k) in P(U). If row i of U is a row included in V, then both (i,j) and (i,k) are in P(V) by construction of V. Otherwise, by construction, the complete row i of U is not in V, and both (i,j) and (i,k) are not in P(V). No other possibilities exist, so by the definition we have P(V) ∈ +(P(U)).

For a given matrix, there is a combinatorially large number of submatrices V satisfying the construction conditions in Example 3, but for each, P(V) ∈ +(P(U)). In addition, the density of target sparsity patterns can vary from as sparse as a diagonal matrix to the pattern of the complete Cholesky factor U.

Example 4. Suppose the graph of A is chordal and the factorization of A is performed using the perfect elimination ordering that guarantees no fill-in. Then P(A) = P(U) and P(A) ∈ +(P(U)); i.e., this is a special case of Example 3.

Example 4 indicates the relationship of Coleman's preconditioner to property +. Coleman identifies submatrices of A which are chordal and permutes A so that these chordal blocks form a block diagonal submatrix of A. The block diagonal matrix is used as a preconditioner by performing Cholesky factorization on each diagonal block using the appropriate perfect elimination ordering. Example 4 shows that such a preconditioner satisfies property +.
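Algorithm IC above translates directly into code. The following is our own Python sketch (the names and the copy-then-update strategy are ours); with the full pattern it reduces to complete Cholesky, which the example uses as a sanity check.

```python
import math

def ic(A, P):
    """Incomplete Cholesky: return upper-triangular R with R^T R ≈ A,
    keeping only positions in P (a set of 1-based (i, j), i <= j, which
    must contain every diagonal position).  Raises ValueError on a
    nonpositive pivot, mirroring step (4) of Algorithm IC."""
    n = len(A)
    a = [row[:] for row in A]            # work on a copy; A is retained
    R = [[0.0] * n for _ in range(n)]
    for k in range(n):
        if a[k][k] <= 0:
            raise ValueError("incomplete factorization fails at step %d" % (k + 1))
        R[k][k] = math.sqrt(a[k][k])
        for j in range(k + 1, n):
            if (k + 1, j + 1) in P:      # entries outside P are discarded
                R[k][j] = a[k][j] / R[k][k]
        for i in range(k + 1, n):
            for j in range(i, n):
                if ((i + 1, j + 1) in P and (k + 1, i + 1) in P
                        and (k + 1, j + 1) in P):
                    a[i][j] -= R[k][i] * R[k][j]
    return R

# With the full pattern, IC reduces to complete Cholesky:
A = [[4.0, 2.0, 2.0], [2.0, 5.0, 3.0], [2.0, 3.0, 6.0]]
P_full = {(i, j) for i in range(1, 4) for j in range(i, 4)}
R = ic(A, P_full)
```

Only the upper triangle of the work array is ever read or written, consistent with the paper's convention of displaying upper triangular parts.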

Example 5. Let A be a 5 × 5 symmetric positive definite matrix with symbolic Cholesky factor U, so that P(U) = P(A) ∪ {(i1,j1), (i2,j2)} for two fill positions (i1,j1) and (i2,j2) (the displayed matrices are omitted here). Then P(A) ∈ +(P(U)).

Theorem 1 below shows that P(A) ∈ +(P(U)) implies that IC(0), incomplete Cholesky factorization with zero levels of fill-in, exists regardless of the numerical values in the positive definite matrix A of Example 5. When IC(0) fails to accelerate convergence sufficiently, a standard technique for generating a pattern with better convergence acceleration is to augment P(A) and move towards P(U). Suppose P = P(A) ∪ {(i1,j1)} in Example 5. Then P ∈ +(P(U)) and IC will also exist for this pattern. However, if we let Q = P(A) ∪ {(i2,j2)}, then Q ∉ +(P(U)), because there is a position (j,k) ∈ Q for which both (i,j) and (i,k) are in P(U) but only one of the two is in Q. In this case the IC factorization may or may not exist, but in Section 4 we will show that there exists some matrix whose full Cholesky factor has the same sparsity pattern as that of A, and for which IC with the augmented pattern Q will fail to exist. These examples indicate how property + can be satisfied and violated in some common circumstances. The next section shows that property + is sufficient for completion of the IC factorization.

3. Sufficiency of Property +. The first theorem shows that property + guarantees the existence of the IC factorization.

Theorem 1. Let the matrix A ∈ R^{n×n} be symmetric positive definite and have Cholesky factor U. Suppose that the position set P ∈ +(P(U)). Then the incomplete Cholesky factorization of A using position set P completes successfully.

Proof: Since any (i,j) ∉ P(U) will not affect the computation of the incomplete Cholesky factorization, we assume that P ⊆ P(U) in the proof without loss of generality. Let C_i = {j | (j,i) ∈ P} be the set of row indices of elements of P in column i, for each 1 ≤ i ≤ n. Construct a set S_i in the following way: initialize S_i = C_i, and then repeat the augmentation S_i ← S_i ∪ (∪_{j∈S_i} C_j) until S_i does not grow (which occurs in n − 1 or fewer steps). This construction assures that for any j ∈ S_i, C_j ⊆ S_i.
The principal submatrix of A defined by S_i consists of the components of A with row and column indices both in S_i. We prove the theorem by showing that if P ∈ +(P(U)), then the computation of the incomplete Cholesky factorization of A on each principal submatrix defined by S_i, 1 ≤ i ≤ n, is equivalent to the computation of the complete Cholesky factorization on that submatrix. Therefore the diagonal elements of the matrix at each step of the incomplete Cholesky factorization will always be positive. This implies that the IC factorization will not break down due to nonpositive pivots.

We first show that for any i and any {j,k} ⊆ S_i, if (j,k) ∈ P(U) then (j,k) ∈ P. Assume that (j,k) ∉ P. Then there is (j,l) ∈ P with l ∈ S_i, for otherwise j would not be in S_i. Now consider the position (min{l,k}, max{l,k}). Since both (j,l) and (j,k) are in P(U), it follows that (min{l,k}, max{l,k}), the position that will be affected (updated or filled) by the elements (j,k) and (j,l), must be in P(U). If (min{l,k}, max{l,k}) ∈ P, this contradicts P ∈ +(P(U)), because (j,l) is in P but (j,k) is not in P. If (min{l,k}, max{l,k}) ∉ P, then we have a position (min{l,k}, max{l,k}) which has the same status as (j,k): {l,k} ⊆ S_i, (min{l,k}, max{l,k}) ∈ P(U), but (min{l,k}, max{l,k}) ∉ P. Note that min{l,k} > j and max{l,k} ≥ k. By replacing (j,k) with (min{l,k}, max{l,k}) and applying this argument recursively, eventually we reach a row number m ∈ S_i that has only one off-diagonal index

in S_i. This last element must be in P, or m would not be in S_i, proving the assertion by contradiction, and (j,k) ∈ P follows.

On the other hand, any position (j,k) with {j,k} ⊄ S_i will not affect the values of the elements of the submatrix defined by S_i in the incomplete factorization. To see this, let us examine how an element in such a principal submatrix changes its value during the IC factorization. Let {s,t} ⊆ S_i. Now a_st will be updated, by a_st = a_st − a_ks a_kt / a_kk, only when there exists a k such that both (k,s) and (k,t) are in P. If k ∉ S_i, then it can be shown that either (k,s) ∉ P(U) or (k,s) ∉ P: consider (k,s) ∈ P(U), since otherwise it does not affect the computation; if (k,s) ∈ P, then k would be in S_i by the construction of S_i, contradicting the assumption that k ∉ S_i. Hence (k,s) ∉ P. Therefore, (k,s) is either not in P or not in P(U). The same argument can also be applied to (k,t). Therefore, the values of the elements in the principal submatrix defined by S_i are not affected by elements outside this submatrix.

Therefore, for any 1 ≤ i ≤ n, the computation of the diagonal element a_ii of the incomplete Cholesky factorization of A is equivalent to computing the last diagonal element of a complete Cholesky factorization of the principal submatrix defined by S_i. Because any principal submatrix of a symmetric positive definite matrix is symmetric positive definite, the Cholesky factorization of the principal submatrix of A defined by S_i can be completed, and hence all the diagonal elements will remain positive in the incomplete Cholesky factorization of A with pattern P.

This establishes a sufficient condition on the target factor for the existence of an incomplete Cholesky factorization. In order to illustrate Theorem 1 and its proof, consider a particular matrix A that has the sparsity patterns given in Example 5, for which P(A) ∈ +(P(U)).
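The closure construction of the sets S_i in the proof is a simple fixed-point iteration. A sketch (ours, with the function names and the tridiagonal test pattern chosen for illustration):

```python
def column_sets(P, n):
    """C_i = {j : (j, i) in P}: row indices of pattern column i."""
    return {i: {j for (j, k) in P if k == i} for i in range(1, n + 1)}

def closure_sets(P, n):
    """Build S_i as in the proof of Theorem 1: start from C_i and
    repeatedly add C_j for every j already in S_i until S_i stops
    growing (at most n - 1 augmentation steps)."""
    C = column_sets(P, n)
    S = {}
    for i in range(1, n + 1):
        s = set(C[i])
        while True:
            grown = s.union(*(C[j] for j in s))
            if grown == s:
                break
            s = grown
        S[i] = s
    return S

# Tridiagonal pattern on n = 3.
P = {(1, 1), (1, 2), (2, 2), (2, 3), (3, 3)}
S = closure_sets(P, 3)
```

By construction, every S_i is closed in the sense the proof requires: j ∈ S_i implies C_j ⊆ S_i.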
The steps of the incomplete Cholesky factorization algorithm with sparsity pattern P = P(A) shown earlier, with A^(i) defined as the updated matrix after step i of the IC factorization, are A^(1), A^(2), A^(3), A^(4), A^(5) (numerical displays omitted). The IC factor R can then be obtained by setting r_ii = √a_ii, and r_ij = a_ij / r_ii if (i,j) ∈ P (display omitted). From the definition of S_i given in the proof of the theorem, S_1 = {1}, and the remaining sets S_2, ..., S_5 follow in the same way from the columns of P.

Let A_i be the principal submatrix of A defined by S_i. If we consider the computation of IC of A above and extract the principal submatrix defined by S_i after each step, we see exactly the computation of the complete Cholesky factorization on the submatrix (worked displays omitted). The proof of Theorem 1 relied on noting that these computations are exactly the same as those of IC of A on the corresponding principal submatrices.

On the other hand, from Example 5, the pattern Q = P(A) ∪ {(i2,j2)}, formed by adding the other fill position, violates property + on P(U). Using this target sparsity pattern causes IC to break down because of a zero pivot (display omitted).

4. Necessity of Property +. P ∈ +(P(U)) is not a necessary condition for IC completion on a particular symmetric positive definite matrix; examples are readily constructed to show this. However, property + is a necessary condition in the sense that incomplete Cholesky factorization with a given target sparsity P will exist for all symmetric positive definite matrices with the same pattern P(U) only if P ∈ +(P(U)).

Lemma 4.1. A symmetric positive definite matrix A ∈ R^{n×n} can be created with any given sparsity pattern P and any given symmetric positive definite principal submatrix S such that the sparsity pattern of S is consistent with P. For example, given the sparsity pattern P (display omitted)

and given the positive definite principal submatrix A(2:4, 2:4) (values omitted), Lemma 4.1 says that the unspecified entries in

(1)   A (display omitted)

can be defined so that A is positive definite.

Proof: The proof is by construction. Assume that S ∈ R^{k×k}. Let P' be the symmetrically permuted pattern of P such that the principal submatrix S of A becomes the leading submatrix. Generate A' as the symmetric positive definite matrix with sparsity pattern P' in the following way. Let the leading principal k × k submatrix of A' be S. The expansion starts from row and column k + 1. At each step i, k + 1 ≤ i ≤ n, compute the Cholesky factorization of the leading submatrix A'_{i−1} of size i − 1 of A', and denote the lower triangular factor by L_{i−1}. Let v be a vector of length i − 1 with sparsity pattern consistent with that of the off-diagonal part of column i of A'. Set the ith diagonal element to be a_ii = v^T A'^{−1}_{i−1} v + α = v^T L^{−T}_{i−1} L^{−1}_{i−1} v + α, where α > 0 is arbitrary, and define the off-diagonal part of column i to equal v (of course, the off-diagonal part of row i must also be set to v^T to retain symmetry). Continue this process until an n × n symmetric positive definite matrix is generated. Then apply the symmetric inverse permutation, completing the creation of the matrix A.

To illustrate the process in Lemma 4.1, we construct the unspecified entries in Equation (1). First, permute the first row and column of sparsity pattern P to be the last, so that the principal submatrix defined by the indices {2,3,4} of the matrix A becomes the leading principal submatrix (worked numerical construction omitted).

Lemma 4.2. Let A ∈ R^{3×3} be symmetric positive definite. Let U be the upper triangular Cholesky factor of A. If P ⊆ P(U) does not have property + on P(U), then there is a

symmetric positive definite matrix B ∈ R^{3×3} such that the Cholesky factor of B has the same structure as U, and the incomplete Cholesky factorization of B using P fails because of a nonpositive pivot.

Proof: The only possible pattern P(U) with subsets P ⊆ P(U) not in +(P(U)) is the full pattern P(U) = P_3. Furthermore, if (2,3) ∉ P then P ∈ +(P(U)). So assume that (2,3) ∈ P. Then P ∉ +(P(U)) implies

P = {(1,1), (1,2), (2,2), (2,3), (3,3)}  or  P = {(1,1), (1,3), (2,2), (2,3), (3,3)},

as in Example 1. Note that in both cases the incompleteness affects the pattern in the first row. The trailing principal submatrix of order 2 is dense in both. Therefore, the strategy of the proof is to make the trailing principal submatrix of order 2 positive definite after one step of complete Cholesky is performed, and not positive definite after one step of incomplete Cholesky.

Case 1: P = {(1,1), (1,2), (2,2), (2,3), (3,3)} and P(U) = P_3. Define B as follows. Let the leading 2 × 2 submatrix of B be any symmetric positive definite matrix, and let b_23 be any non-zero number. Two elements in the third column, b_13 and b_33, are yet to be determined. If we apply two steps of IC with the pattern P, the value of the third pivot can be determined and set to 0. This yields the value

b_33 = b_11 b_23^2 / (b_11 b_22 − b_12^2) > 0.

Satisfying this condition means that the trailing principal submatrix of order 2 is not positive definite after one step of IC. Note that b_13 does not appear in the value for b_33, due to P. Therefore, we must set b_13 to guarantee that if complete Cholesky is performed it will succeed. This can be done by taking b_13 to be a solution of the inequality b_13 (2 b_12 b_23 − b_22 b_13) > 0. It follows that

0 < b_13 < 2 b_12 b_23 / b_22   if b_12 b_23 > 0,
2 b_12 b_23 / b_22 < b_13 < 0   if b_12 b_23 < 0.

Applying complete Cholesky and incomplete Cholesky factorization to B shows that B is positive definite and IC breaks down with the third pivot nonpositive.

Case 2: P = {(1,1), (1,3), (2,2), (2,3), (3,3)} and P(U) = P_3. Let the trailing 2 × 2 submatrix of B be any symmetric positive definite matrix. We can set the values in the first row of the matrix to achieve the same two conditions as in Case 1.
Let b_13 be any non-zero number. Determining the final pivot of IC and setting it to 0 yields

b_11 = b_13^2 b_22 / (b_22 b_33 − b_23^2) > 0.

The completion of the Cholesky factorization can be guaranteed by setting the remaining element b_12. Let b_12 be a solution of the inequalities

b_12^2 < b_11 b_22,   b_12 (2 b_13 b_23 − b_12 b_33) > 0.

Solving the inequalities gives

0 < b_12 < min{2 b_13 b_23 / b_33, √(b_11 b_22)}    if b_13 b_23 > 0,
max{2 b_13 b_23 / b_33, −√(b_11 b_22)} < b_12 < 0   if b_13 b_23 < 0.

Applying Cholesky and incomplete Cholesky factorization to B again shows that B is positive definite but IC breaks down when using pattern P.
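Case 1 of Lemma 4.2 can be instantiated numerically. The sketch below is our own illustration (the helper `ic_pivots` and all values are ours): with b_11 = 2, b_12 = 1, b_22 = 2 and b_23 = 1, the zero-pivot condition gives b_33 = 2/3 and the admissible interval for b_13 is (0, 1); taking b_13 = 1/2 yields a matrix B whose leading principal minors are all positive, while IC with P = {(1,1), (1,2), (2,2), (2,3), (3,3)} reaches a zero third pivot.

```python
def ic_pivots(B, P):
    """Run IC on B (dense, symmetric) with upper-triangular pattern P
    (set of 1-based (i, j), i <= j) and return the pivot sequence,
    stopping at the first nonpositive pivot."""
    n = len(B)
    a = [row[:] for row in B]
    pivots = []
    for k in range(n):
        pivots.append(a[k][k])
        if a[k][k] <= 1e-12:
            break                          # breakdown: nonpositive pivot
        for i in range(k + 1, n):
            for j in range(i, n):
                if ((i + 1, j + 1) in P and (k + 1, i + 1) in P
                        and (k + 1, j + 1) in P):
                    a[i][j] -= a[k][i] * a[k][j] / a[k][k]
                    a[j][i] = a[i][j]
    return pivots

# Case 1 instance: leading 2x2 block [[2, 1], [1, 2]], b23 = 1.
b33 = 2 * 1**2 / (2 * 2 - 1**2)            # = 2/3, makes the third IC pivot zero
b13 = 0.5                                  # inside (0, 2*b12*b23/b22) = (0, 1)
B = [[2.0, 1.0, b13], [1.0, 2.0, 1.0], [b13, 1.0, b33]]

# Leading principal minors of B (all positive, so B is SPD).
m1 = B[0][0]
m2 = B[0][0] * B[1][1] - B[0][1] ** 2
m3 = (B[0][0] * (B[1][1] * B[2][2] - B[1][2] ** 2)
      - B[0][1] * (B[0][1] * B[2][2] - B[1][2] * B[0][2])
      + B[0][2] * (B[0][1] * B[1][2] - B[1][1] * B[0][2]))

P = {(1, 1), (1, 2), (2, 2), (2, 3), (3, 3)}
pivots = ic_pivots(B, P)
```

Complete Cholesky on B succeeds precisely because b_13 lies inside the interval derived in the proof, while the IC pivot sequence is 2, 3/2, 0.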

Examples of the two cases of Lemma 4.2 are easily generated (numerical displays omitted). For Case 1, fix a symmetric positive definite leading 2 × 2 submatrix and a non-zero b_23, set b_33 = b_11 b_23^2/(b_11 b_22 − b_12^2), and choose b_13 within the bounds above. The resulting B is symmetric positive definite, but IC with the pattern of Case 1 breaks down. Similarly, for Case 2, fix a symmetric positive definite trailing 2 × 2 submatrix, choose a non-zero b_13, set b_11 from the zero-pivot condition, and choose b_12 within the bounds above. The resulting B is symmetric positive definite, but IC with the pattern of Case 2 breaks down.

Theorem 2. Let A ∈ R^{n×n} be symmetric positive definite, let U be the upper triangular Cholesky factor of A, and let P(U) be the sparsity pattern of U. If P ⊆ P_n does not have property + on P(U), then there is a symmetric positive definite matrix B ∈ R^{n×n} such that the complete Cholesky factor of B has the same non-zero structure as U, and incomplete Cholesky factorization of B using P will fail due to a nonpositive pivot.

Proof: The theorem is proven by induction on the size of the problem, n. Note that all sparsity patterns for 2 × 2 matrices necessarily have property + on P(U). Lemma 4.2 shows that the theorem is true for problems of size less than or equal to 3, establishing the induction basis. We assume that the theorem is true for all problems of size less than or equal to n − 1, and show below that it is true for matrices of order n. Let P_l be the "leading" subset of P defined by P_l = {(i,j) | (i,j) ∈ P, 1 ≤ i ≤ j < n}, and let P_t be the "trailing" subset of P defined by P_t = {(i,j) | (i,j) ∈ P, 1 < i ≤ j ≤ n}. Similarly, denote P(U)_l = {(i,j) | (i,j) ∈ P(U), 1 ≤ i ≤ j < n} and P(U)_t = {(i,j) | (i,j) ∈ P(U), 1 < i ≤ j ≤ n}. P is in at least one of the following three cases:

Case 1: P_l does not have property + on P(U)_l.
Case 2: P_t does not have property + on P(U)_t.
Case 3: P_l has property + on P(U)_l and P_t has property + on P(U)_t, but P does not have property + on P(U).

In Case 1, the induction hypothesis implies that there is a symmetric positive definite matrix S of size n − 1 whose complete Cholesky factor has the same sparsity as P(U)_l, and for which incomplete Cholesky factorization breaks down when applied to S using P_l. Let the matrix B be constructed so that its leading principal submatrix of order n − 1 is equal to S, and then use the result of Lemma 4.1 to construct its last row and column. Then incomplete Cholesky factorization on the constructed symmetric positive definite matrix B breaks down during the first n − 1 steps of the computation.

In Case 2, create a matrix B in the following way. Since P_t does not have property + on P(U)_t, by the induction hypothesis there is a symmetric positive definite matrix S of order n − 1 such that its complete Cholesky factor has the same structure as P(U)_t and the incomplete Cholesky factorization of S using P_t breaks down. From Lemma 4.1 we can create a symmetric positive definite matrix C ∈ R^{n×n} such that it has the same non-zero structure

as A on the first row and column and its trailing principal submatrix of order n − 1 is equal to S. Define the vector v of length n by

v_i = 0               if i = 1 or (1,i) ∉ P,
v_i = c_1i / √c_11    if (1,i) ∈ P, i > 1,

and let B = C + v v^T. B is symmetric and positive definite because C is symmetric positive definite and v v^T is nonnegative definite. Furthermore, the non-zero structure of the complete Cholesky factor of B is the same as P(U). After applying one step of incomplete Cholesky factorization to B, the reduced matrix of order n − 1 is equal to S. By the definition of S, the incomplete Cholesky factorization will break down.

In Case 3, P_l and P_t have property + on P(U)_l and P(U)_t, respectively, but P violates property + on P(U). Therefore, there must be a position (i,n) ∈ P such that both (1,i) and (1,n) are in P(U) but only one of them is in P. After applying one step of incomplete Cholesky factorization to B, consider incomplete Cholesky factorization of the reduced matrix of order n − 1 with sparsity pattern P_t. Since P_t has property + on P(U)_t, we cannot use Case 2 above to show that IC can fail for some set of values assigned to the non-zero positions. However, we now show that the non-zero values can be set so that the reduced matrix of order n − 1 has a principal submatrix which is not positive definite. Define the symmetric positive definite matrix B as follows: let B's principal submatrix given by the indices {1, i, n} be equal to the matrix shown in Lemma 4.2 corresponding to the one of the two possible sparsity patterns that applies. Use Lemma 4.1 to expand the matrix to have the same sparsity pattern as A. As shown in the proof of Theorem 1, the computation of the IC factorization of the reduced matrix on the elements corresponding to the index set S^t_{n−1} of P_t is equivalent to the computation of the complete Cholesky factorization on the principal submatrix defined by the index set S^t_{n−1}. Because B was constructed using the techniques in the proof of Lemma 4.2, the principal submatrix defined by {i, n} ⊆ S^t_{n−1} is not positive definite after one step of IC.
Therefore, the complete Cholesky factorization of the principal submatrix defined by S^t_{n−1} breaks down, and the incomplete Cholesky factorization on the reduced matrix must also fail. Together, this shows that for a problem of order n, if P does not have property + on P(U), then there is a symmetric positive definite matrix B such that its Cholesky factor has the same sparsity pattern as the Cholesky factor of A, but incomplete Cholesky factorization of B will fail. By induction the theorem is true.

To illustrate the proof of Case 3 above, let A have sparsity pattern P(A) with Cholesky pattern P(U) (displays omitted), and let P be P(A) with one first-row position removed. Then P_l ∈ +(P(U)_l) and P_t ∈ +(P(U)_t), but P ∉ +(P(U)). Now for the index i indicated in the proof, (i,n) ∈ P, both (1,i) and (1,n) are in P(U), but only one of them is in P. Construct the matrix B as follows: let B's principal submatrix given by the indices {1, i, n} be equal to the

matrix shown in the example corresponding to the appropriate case of Lemma 4.2. Using Lemma 4.1 we can expand the matrix to have the same sparsity pattern as A (display omitted). Applying one step of incomplete Cholesky factorization using P to B gives the reduced matrix B_1 (display omitted). Since (1,i) ∉ P, no updates are performed on column i in computing B_1. Now consider incomplete Cholesky factorization of the reduced matrix of order n − 1 with sparsity pattern P_t, and recall that P_t has property + on P(U)_t. As shown in the proof of Theorem 1, performing incomplete Cholesky factorization on the reduced matrix with pattern P_t has the same effect on the elements corresponding to the index set S^t_{n−1} as the computation of the complete Cholesky factorization on the corresponding principal submatrix. However, the principal submatrix defined by {i, n} ⊆ S^t_{n−1} is not positive definite. Therefore, the complete Cholesky factorization of the principal submatrix defined by S^t_{n−1} breaks down, and the incomplete Cholesky factorization on the reduced matrix also fails on the next step (displays omitted).

5. Summary and Future Work. In this paper we define property + for a target sparsity pattern for incomplete Cholesky factorization. Property + is a sufficient condition on the sparsity pattern that ensures the existence of the incomplete Cholesky factorization of a symmetric positive definite matrix. If property + is not satisfied, then a symmetric positive definite matrix with the same non-zero structure of its Cholesky factor can be found such that, when incomplete Cholesky factorization is applied with the specified pattern, the factorization breaks down due to a nonpositive pivot. These results show that property + is fundamental for incomplete Cholesky factorization. Every other sufficient condition on the sparsity pattern that we know of is a special case of property +, including the reordered chordal graph strategy in [Coleman, 1988] and the trivial cases of P containing only diagonal entries and P = P(U).
Theorem 2 implies that property + is necessary and sufficient if only the sparsity pattern of the matrix is considered. The proofs of these results show that property + relates portions of the incomplete factorization of a matrix to complete factorizations of particular principal submatrices. This work was originally motivated by research on the incomplete Gram-Schmidt factorization

(IGS) of a least squares problem, for which A is the coefficient matrix of the normal equations. The existence of that IGS factorization can be guaranteed [Wang et al., 1993a], and property + is the condition under which IC applied to A is essentially equivalent to IGS and therefore also guaranteed to exist. Algorithms for modifying a given sparsity pattern to assure that property + holds have been proposed in [Wang, 1993]. However, those algorithms proceed simply by either always dropping positions that cause violations, or by always adding positions into P to prevent violations. A more effective strategy is likely to be based on a combination of those two approaches, allowing the amount of storage used by the preconditioner to be specified in advance. This work is of great importance in applications where a sequence of problems is solved, each with the same sparsity pattern. This occurs in the solution of nonlinear partial differential equations, and in the iterative solution of the linear systems occurring in interior point methods for linear programming. This suggests one direction of future work: what conditions on a discretization will a priori assure that IC(s), incomplete Cholesky factorization with s levels of fill, will exist? Extensive research has also been carried out by other researchers to assure the existence of incomplete Cholesky factorization based on modifying the numerical values encountered, most often by augmenting pivot elements. Combining the two approaches, structural and numerical, is another promising research direction.

REFERENCES

[Coleman, 1988] Coleman, T. F. (1988). A chordal preconditioner for large-scale optimization. Mathematical Programming.

[Jennings and Malik, 1977] Jennings, A. and Malik, G. M. (1977). Partial elimination. Journal of the Institute of Mathematics and Its Applications.

[Manteuffel, 1979] Manteuffel, T. A. (1979). Shifted incomplete Cholesky factorization. In Duff, I. S. and Stewart, G. W., editors, Sparse Matrix Proceedings 1978, Philadelphia, PA. SIAM Publications.
[Munksgaard, 1980] Munksgaard, N. (1980). Solving sparse symmetric sets of linear equations by preconditioned conjugate gradients. ACM Transactions on Mathematical Software, 6:206–219.

[Wang, 1993] Wang, X. (1993). Incomplete Factorization Preconditioning for Linear Least Squares Problems. PhD thesis, University of Illinois at Urbana-Champaign. Also available as a technical report, Computer Science Department, University of Illinois at Urbana-Champaign.

[Wang et al., 1993a] Wang, X., Gallivan, K. A., and Bramley, R. (1993a). CIMGS: an incomplete orthogonalization preconditioner. Technical report, Indiana University, Bloomington, IN. Submitted to SIAM J. Sci. Comput.

[Wang et al., 1993b] Wang, X., Gallivan, K. A., and Bramley, R. (1993b). Incomplete Cholesky factorization with sparsity pattern modification. Technical report, Indiana University, Bloomington, IN.

[Wittum and Liebau, 1989] Wittum, G. and Liebau, F. (1989). On truncated incomplete decompositions. BIT, 29.


More information

SYMBOLIC AND EXACT STRUCTURE PREDICTION FOR SPARSE GAUSSIAN ELIMINATION WITH PARTIAL PIVOTING

SYMBOLIC AND EXACT STRUCTURE PREDICTION FOR SPARSE GAUSSIAN ELIMINATION WITH PARTIAL PIVOTING SYMBOLIC AND EXACT STRUCTURE PREDICTION FOR SPARSE GAUSSIAN ELIMINATION WITH PARTIAL PIVOTING LAURA GRIGORI, JOHN R. GILBERT, AND MICHEL COSNARD Abstract. In this paper we consider two structure prediction

More information

Reduction of two-loop Feynman integrals. Rob Verheyen

Reduction of two-loop Feynman integrals. Rob Verheyen Reduction of two-loop Feynman integrals Rob Verheyen July 3, 2012 Contents 1 The Fundamentals at One Loop 2 1.1 Introduction.............................. 2 1.2 Reducing the One-loop Case.....................

More information

290 J.M. Carnicer, J.M. Pe~na basis (u 1 ; : : : ; u n ) consisting of minimally supported elements, yet also has a basis (v 1 ; : : : ; v n ) which f

290 J.M. Carnicer, J.M. Pe~na basis (u 1 ; : : : ; u n ) consisting of minimally supported elements, yet also has a basis (v 1 ; : : : ; v n ) which f Numer. Math. 67: 289{301 (1994) Numerische Mathematik c Springer-Verlag 1994 Electronic Edition Least supported bases and local linear independence J.M. Carnicer, J.M. Pe~na? Departamento de Matematica

More information

Economics 472. Lecture 10. where we will refer to y t as a m-vector of endogenous variables, x t as a q-vector of exogenous variables,

Economics 472. Lecture 10. where we will refer to y t as a m-vector of endogenous variables, x t as a q-vector of exogenous variables, University of Illinois Fall 998 Department of Economics Roger Koenker Economics 472 Lecture Introduction to Dynamic Simultaneous Equation Models In this lecture we will introduce some simple dynamic simultaneous

More information

Numerical Methods in Matrix Computations

Numerical Methods in Matrix Computations Ake Bjorck Numerical Methods in Matrix Computations Springer Contents 1 Direct Methods for Linear Systems 1 1.1 Elements of Matrix Theory 1 1.1.1 Matrix Algebra 2 1.1.2 Vector Spaces 6 1.1.3 Submatrices

More information

ACM106a - Homework 2 Solutions

ACM106a - Homework 2 Solutions ACM06a - Homework 2 Solutions prepared by Svitlana Vyetrenko October 7, 2006. Chapter 2, problem 2.2 (solution adapted from Golub, Van Loan, pp.52-54): For the proof we will use the fact that if A C m

More information

Numerical Linear Algebra

Numerical Linear Algebra Chapter 3 Numerical Linear Algebra We review some techniques used to solve Ax = b where A is an n n matrix, and x and b are n 1 vectors (column vectors). We then review eigenvalues and eigenvectors and

More information

A TOUR OF LINEAR ALGEBRA FOR JDEP 384H

A TOUR OF LINEAR ALGEBRA FOR JDEP 384H A TOUR OF LINEAR ALGEBRA FOR JDEP 384H Contents Solving Systems 1 Matrix Arithmetic 3 The Basic Rules of Matrix Arithmetic 4 Norms and Dot Products 5 Norms 5 Dot Products 6 Linear Programming 7 Eigenvectors

More information

The s-monotone Index Selection Rule for Criss-Cross Algorithms of Linear Complementarity Problems

The s-monotone Index Selection Rule for Criss-Cross Algorithms of Linear Complementarity Problems The s-monotone Index Selection Rule for Criss-Cross Algorithms of Linear Complementarity Problems Zsolt Csizmadia and Tibor Illés and Adrienn Nagy February 24, 213 Abstract In this paper we introduce the

More information

ACI-matrices all of whose completions have the same rank

ACI-matrices all of whose completions have the same rank ACI-matrices all of whose completions have the same rank Zejun Huang, Xingzhi Zhan Department of Mathematics East China Normal University Shanghai 200241, China Abstract We characterize the ACI-matrices

More information

Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Numerical Analysis Lecture Notes Peter J Olver 8 Numerical Computation of Eigenvalues In this part, we discuss some practical methods for computing eigenvalues and eigenvectors of matrices Needless to

More information

10. Rank-nullity Definition Let A M m,n (F ). The row space of A is the span of the rows. The column space of A is the span of the columns.

10. Rank-nullity Definition Let A M m,n (F ). The row space of A is the span of the rows. The column space of A is the span of the columns. 10. Rank-nullity Definition 10.1. Let A M m,n (F ). The row space of A is the span of the rows. The column space of A is the span of the columns. The nullity ν(a) of A is the dimension of the kernel. The

More information