A COMPARISON OF REDUCED AND UNREDUCED SYMMETRIC KKT SYSTEMS ARISING FROM INTERIOR POINT METHODS

BENEDETTA MORINI, VALERIA SIMONCINI, AND MATTIA TANI

Abstract. We address the iterative solution of symmetric KKT systems arising in the solution of convex quadratic programming problems. Two strictly related and well established formulations for such systems are studied with particular emphasis on the effect of preconditioning strategies on their relation. Constraint and augmented preconditioners are considered, and the choice of the augmentation matrix is discussed. A theoretical and experimental analysis is conducted to assess which of the two formulations should be preferred for solving large-scale problems.

Key words. convex quadratic programming, interior point methods, KKT systems, preconditioners.

1. Introduction. Interior Point (IP) methods for solving large-scale Quadratic Programming (QP) problems have been an intensive area of research in the past decades. Their efficiency depends heavily on the per-iteration cost, which is mainly due to the solution of a structured algebraic linear system. Therefore, much effort has been devoted to developing properly tailored preconditioned iterative solvers, whose computational cost and memory requirements may be lower than those of direct solvers. The analysis and development of the resulting inexact IP methods have covered several different aspects, such as the level of accuracy in the solution of linear systems, the design of suitable iterative methods and preconditioners, and the convergence analysis of the inexact IP solver, including worst-case iteration complexity; see, e.g., [4, 7, 8, 4, 6, 9, 22, 34]. We consider the sequence of symmetric KKT systems arising in the solution of convex QP problems in standard form and analyze two well established formulations for such systems.
The former is the system originally formed during the application of an IP method and its symmetrized version; the latter is a widely used formulation of smaller dimension obtained by an inexpensive variable reduction. In the following, we refer to these systems as unreduced and reduced, respectively. Relevant alternative formulations, possibly definite, such as condensed systems and doubly augmented systems (see, e.g., [9]), are not considered in this work. It is well known that the reduced systems become increasingly ill-conditioned as the IP iterations progress, while the unreduced systems may be well-conditioned throughout the IP iterations and nonsingular even in the limit. This distinguishing feature, observed in [8] and recently supported by spectral analysis in [25, 3], motivates our interest in this possibly less exercised formulation. On the one hand, an in-depth study of the ill-conditioning and computational error of direct system solvers in the reduced formulation has shown that, in spite of possible increasing ill-conditioning at the late stage of the IP method, the steps remain sufficiently accurate to be useful [2, 39]. This analysis has supported the

Version of February 6, 2016. Work partially supported by INdAM-GNCS under the 2015 Project "Metodi di regolarizzazione per problemi di ottimizzazione e applicazioni".
Dipartimento di Matematica, Piazza di Porta S. Donato 5, 40127 Bologna, Italia. valeria.simoncini@unibo.it
Dipartimento di Ingegneria Industriale, Università degli Studi di Firenze, viale G.B. Morgagni 40, 50134 Firenze, Italia. benedetta.morini@unifi.it
Dipartimento di Matematica, Università di Pavia, via A. Ferrata 1, 27100 Pavia, Italia. mattia.tani@unipv.it
practical use of the reduced systems, due to their lower dimension. On the other hand, a theoretical and experimental comparison of the iterative solution of reduced and unreduced formulations in the preconditioned regime has not been performed, and it is the goal of this work. First, we aim to assess whether the use of unreduced systems may still offer some advantage over the reduced ones in terms of eigenvalues and conditioning, as occurs in the unpreconditioned case [25, 3]; subsequently, we conduct a computational comparison between the unreduced and reduced formulations. For the unreduced systems we use both the unsymmetric and symmetric formulations, while the reduced systems are genuinely symmetric. Regularization of the QP problem is also considered. We relate the systems by either matrix transformations or congruence transformations. Then, we consider some frequently employed constraint preconditioners and augmented preconditioners, and show that in the unreduced case their condition number may vary slowly with the IP iterations and remain considerably smaller than in the reduced formulation. We also show that the two formulations remain strictly related in terms of spectra of the preconditioned matrices, block structure of the linear equations and, possibly, convergence behaviour of the iterative solver's residuals. Numerical experiments with standard QP problems and the IP solver pdco [33] confirm that the conditioning of the unreduced systems, as well as of some corresponding preconditioners, may be considerably smaller than in the reduced formulation. However, these better spectral properties may not play a role as long as the reduced systems are numerically nonsingular, which is a realistic situation if the QP problem is well-scaled and the solution accuracy is not very high.
Specifically, in such a case our experience shows that the IP implementations with the unreduced and reduced formulations are both successful, and the preconditioned linear solvers and the IP solvers behave similarly. Thus the smaller dimension of the reduced systems makes that formulation preferable in terms of computation time. In principle, the unreduced formulation may maintain better spectral properties than the reduced one, and its use may enhance the robustness of the interior point solver when the reduced systems become severely ill-conditioned, or even numerically singular, although we were not able to see this in practice. The paper is organized as follows. In §2 we review the linear system formulations under study and well-known classes of structured preconditioners suitable for such matrices, and establish connections between the systems. In §3 we show that the unreduced and reduced formulations are closely related when some types of preconditioners are used, and characterize the spectra of the preconditioned matrices. In §4 we experimentally compare the two formulations and their iterative solution in an IP solver, and in §5 we draw our conclusions.

Notation. In the following, ‖·‖ denotes the vector 2-norm or its induced matrix norm. For any vector x, the ith component is denoted as either x_i or (x)_i. Given x ∈ R^n, X = diag(x) is the diagonal matrix with diagonal entries x_1, ..., x_n. Given column vectors x and y, we write (x, y) for the column vector given by their concatenation, instead of using [x^T, y^T]^T. Given a matrix A, the set of its eigenvalues is indicated by Λ(A). The notation diag(A) denotes the diagonal matrix derived from the diagonal of A. Finally, for any positive integer p, the identity matrix of dimension p is indicated by I_p.
2. Preliminaries. We consider the primal-dual pair of QP problems

    min_x  c^T x + (1/2) x^T H x     subject to  Jx = b,  x ≥ 0,                      (2.1)
    max_{x,y,z}  b^T y - (1/2) x^T H x   subject to  J^T y + z - Hx = c,  z ≥ 0,      (2.2)

where J ∈ R^{m×n} has dimensions m ≤ n, H ∈ R^{n×n} is symmetric and positive semidefinite, x, z, c ∈ R^n, y, b ∈ R^m, and the inequalities are meant componentwise. A primal-dual IP method [4] is defined by replacing the inequality constraint x ≥ 0 in (2.1) by a logarithmic barrier function and then using Lagrangian duality theory to derive first-order optimality conditions. These conditions take the form of a system of nonlinear equations and are solved by Newton's method. Omitting for simplicity the iteration index k in the IP method and letting (x, y, z) be the current iterate with x, z > 0, we may write the Newton direction (∆x, ∆y, ∆z) as the solution to the linear system

    [ H    J^T   -I_n ] [ ∆x ]   [ -c - Hx - J^T y + z ]
    [ J    0      0   ] [ ∆y ] = [  b - Jx             ]                              (2.3)
    [ -Z   0     -X   ] [ ∆z ]   [  XZ 1_n - µ 1_n     ]

where X = diag(x), Z = diag(z) are diagonal positive definite matrices, 1_n ∈ R^n is the vector of ones, and the positive scalar µ is the barrier parameter, which controls the distance to optimality and is gradually reduced during the IP iterations. If correctors are used, systems of the form (2.3) need to be solved with different right-hand sides [4]. The above system is unsymmetric, 3-by-3 block structured, and of dimension 2n + m. Letting K̂_3 be the matrix in (2.3) and R be the block diagonal positive definite matrix

    R = blkdiag( I_n, I_m, Z^{-1/2} ),                                                (2.4)

we can transform the system (2.3) into a symmetric one with matrix

    K_3 = R K̂_3 R^{-1} = [ H         J^T   -Z^{1/2} ]
                          [ J         0      0       ]                                (2.5)
                          [ -Z^{1/2}  0     -X       ]

and proper right-hand side [8]. We refer to [8, 25, 3] for a detailed analysis of K̂_3 and K_3 and their relations. As an alternative to the above 3-by-3 formulations, it is common to eliminate ∆z in (2.3) and use a system of smaller dimension n + m. The resulting system has matrix

    K_2 = [ H + X^{-1}Z   J^T ]
          [ J             0   ]                                                       (2.6)

and right-hand side formed accordingly. Throughout, we use the hat symbol for matrices corresponding to the unsymmetric 3-by-3 formulation.
In order to provide a comprehensive analysis of the 3-by-3 systems, it is useful to consider the case where regularizations are applied to the optimization problems (2.1), (2.2). Several regularization techniques have been proposed in order to improve the numerical properties of the KKT systems; for details we refer to [2, 33, 37]. Here we focus on primal-dual regularizations such that system (2.3) becomes

    K̂_{3,reg} ∆̂_3 = f̂_3                                                             (2.7)

with

    K̂_{3,reg} = [ H + ρI_n   J^T    -I_n ]
                 [ J         -δI_m   0   ],    ∆̂_3 = ( ∆x, ∆y, ∆z ),    f̂_3 = ( f_x, f_y, f_z ),   (2.8)
                 [ -Z         0     -X   ]

where δ, ρ ≥ 0 and the right-hand side vectors f_x, f_z ∈ R^n and f_y ∈ R^m are appropriately computed; see, e.g., [2, 25, 33]. Using again the block diagonal matrix R in (2.4) gives the corresponding symmetric formulation

    K_{3,reg} ∆_3 = f_3,                                                              (2.9)

where

    K_{3,reg} = [ H + ρI_n   J^T    -Z^{1/2} ]
                [ J         -δI_m    0       ],    ∆_3 = ( ∆x, ∆y, Z^{-1/2}∆z ),    f_3 = ( f_x, f_y, Z^{-1/2}f_z ).   (2.10)
                [ -Z^{1/2}   0      -X       ]

Clearly K̂_{3,reg} and K_{3,reg} can be cast into KKT form by proper block reorderings. Upon reduction of ∆z, system (2.7) becomes

    K_{2,reg} ∆_2 = f_2,                                                              (2.11)

with

    K_{2,reg} = [ H + ρI_n + X^{-1}Z   J^T   ],    ∆_2 = ( ∆x, ∆y ),    f_2 = ( f_x - X^{-1}f_z, f_y ).   (2.12)
                [ J                   -δI_m  ]

For δ, ρ ≥ 0, throughout the paper we refer to K_{2,reg} as the reduced matrix and to K̂_{3,reg} and K_{3,reg} as the unsymmetric and symmetric unreduced matrices, and we recover K_2, K̂_3 and K_3 by setting δ = ρ = 0. A similar terminology is used for the corresponding systems. For later convenience, we observe that, as long as X is nonsingular, the reduced and unreduced matrices are mathematically related by means of suitable transformations. Indeed, setting

    L_1 = [ I_n     0    0   ],    L_2 = [ I_n      0    0   ],                       (2.13)
          [ 0       I_m  0   ]           [ 0        I_m  0   ]
          [ X^{-1}  0    I_n ]           [ X^{-1}Z  0    I_n ]

gives

    K̂_{3,reg} = L_1^T blkdiag( K_{2,reg}, -X ) L_2,                                  (2.14)
while setting

    L = [ I_n            0    0   ]                                                   (2.15)
        [ 0              I_m  0   ]
        [ X^{-1}Z^{1/2}  0    I_n ]

gives the congruence transformation (see, e.g., [25])

    K_{3,reg} = L^T blkdiag( K_{2,reg}, -X ) L.                                       (2.16)

We use (2.14) and (2.16) only for theoretical purposes, but it is interesting to observe that the possibly ill-conditioned matrices L_1, L_2 and L do not in fact necessarily transfer their conditioning to K_{3,reg}. The following theorem, from [25, Corollary 3.4, Corollary 3.6, Theorem 3.9], characterizes the nonsingularity of the considered matrices.

Theorem 2.1. Suppose that H is symmetric and positive semidefinite, and X, Z are diagonal with positive entries. Then K_{2,reg} in (2.12), K̂_{3,reg} in (2.8), and K_{3,reg} in (2.10) are nonsingular if and only if either δ > 0, or δ = 0 and J has full row rank.

The use of the 2-by-2 formulation is supported by its reduced dimension and by the variety of direct solvers, iterative solvers and preconditioners available for its numerical solution [5]. However, the presence of X^{-1}Z may cause ill-conditioning of K_{2,reg} as a solution is approached, and it represents the key difference between K_{2,reg} and the matrices K̂_{3,reg}, K_{3,reg}. The unreduced matrices K̂_{3,reg}, K_{3,reg} and K_3 are nonsingular throughout the IP iterations and, remarkably, remain nonsingular at a solution (x̂, ŷ, ẑ) of (2.1)-(2.2) if proper conditions, including strict complementarity of x̂ and ẑ and the Linear Independence Constraint Qualification at x̂, are satisfied. These properties may favor the use of unreduced systems and have indeed motivated the study of their spectral estimates [25, 3]. Regarding the use of K_{3,reg}, it is important to note that ill-conditioning occurs in the matrix R defined in (2.4), but the square root in the (3,3) block of R has a damping effect on the ill-conditioning at the final stage of the IP process. On the other hand, when either of the systems in (2.7) and (2.9) is solved iteratively and preconditioning is required, assessing the advantages of the unreduced formulation over the reduced one is still an open issue.
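The transformations (2.14) and (2.16) can be verified directly. The sketch below (NumPy, random data, sign conventions as reconstructed in (2.8)-(2.16); all names are illustrative) builds L_1, L_2, L and checks both factorizations.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, rho, delta = 5, 2, 1e-2, 1e-2
B = rng.standard_normal((n, n)); H = B @ B.T
J = rng.standard_normal((m, n))
x = rng.uniform(0.5, 2.0, n); z = rng.uniform(0.5, 2.0, n)
X = np.diag(x); In, Im = np.eye(n), np.eye(m)
O = lambda p, q: np.zeros((p, q))
Zh = np.diag(np.sqrt(z))                      # Z^{1/2}

K2 = np.block([[H + rho * In + np.diag(z / x), J.T], [J, -delta * Im]])
K3hat = np.block([[H + rho * In, J.T, -In],
                  [J, -delta * Im, O(m, n)],
                  [-np.diag(z), O(n, m), -X]])
K3 = np.block([[H + rho * In, J.T, -Zh],
               [J, -delta * Im, O(m, n)],
               [-Zh, O(n, m), -X]])

D = np.block([[K2, O(n + m, n)], [O(n, n + m), -X]])   # blkdiag(K_{2,reg}, -X)

def unit_lower(T):
    """Identity of order 2n+m with block T placed in the (3,1) position."""
    L = np.eye(2 * n + m)
    L[n + m:, :n] = T
    return L

L1 = unit_lower(np.diag(1.0 / x))             # X^{-1}
L2 = unit_lower(np.diag(z / x))               # X^{-1} Z
L  = unit_lower(np.diag(np.sqrt(z) / x))      # X^{-1} Z^{1/2}

assert np.allclose(K3hat, L1.T @ D @ L2)      # matrix transformation (2.14)
assert np.allclose(K3,    L.T  @ D @ L)       # congruence (2.16)
```

Note that L, L_1, L_2 are unit lower triangular, so the factorizations hold for any δ, ρ ≥ 0, while the conditioning of the factors depends on X^{-1}Z.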
We address this problem by studying, both theoretically and computationally, the systems (2.7), (2.9) and (2.11) when corresponding preconditioners are applied, and we establish their distinguishing features. We conclude this section by recalling a few preconditioners suitable for our systems that will be analyzed in the following: constraint preconditioners and preconditioners based on augmentation of the (1,1) block. In order to handle the above systems simultaneously, we consider the general formulation for a KKT matrix

    M = [ E   F^T ]
        [ G  -C   ]

with E ∈ R^{n_1×n_1} and C ∈ R^{n_2×n_2} both positive semidefinite, F ∈ R^{n_2×n_1}, G ∈ R^{n_2×n_1}. We suppose that M is nonsingular and allow any relation between n_1 and n_2.
A block diagonal preconditioner based on augmentation is

    P_D = blkdiag( E + F^T W G,  W^{-1} ),                                            (2.17)

where W ∈ R^{n_2×n_2} is a symmetric positive definite weight matrix [9, 24, 26, 3]. The advantages of using P_D over preconditioners based on the Schur complement of E are visible for E singular or ill-conditioned. Interestingly, this may be the case for both the reduced and unreduced matrices, as E may be ill-conditioned in K_{2,reg} at the late stage of the IP method, and singular in K_3 at any IP iteration. Clearly, P_D is positive definite if F = G and ker(E) ∩ ker(F) = {0}. Alternatively, block triangular preconditioners based on augmentation of E can be defined; see, e.g., [2, 6, 9, 34, 38], where, however, most results are for a zero (2,2) block C. In this nonsymmetric framework, one option is to set

    P_T = [ E + F^T W G   -κ F^T  ],                                                  (2.18)
          [ 0             -W^{-1} ]

where κ is a scalar and W ∈ R^{n_2×n_2} is a symmetric positive definite weight matrix. If κ = 0, P_T is block diagonal and indefinite. Another possibility is to consider

    P̃_T = [ E + F^T W G   -κ F^T        ],                                           (2.19)
           [ 0             -(W^{-1} + C) ]

where κ is a scalar and W ∈ R^{n_2×n_2} is symmetric positive definite. This preconditioner was introduced for symmetric matrices in [38] with κ = 2. Finally, indefinite constraint preconditioners for M are commonly used in optimization, see, e.g., [7, 8, 3, 4, 5, 6, 9, 28, 29], and take the form

    P_C = [ Ẽ   F^T ],                                                               (2.20)
          [ G   -C  ]

where Ẽ is a symmetric approximation to E.

3. Standard preconditioners for the unreduced systems. In this section we consider some of the preconditioners of type (2.17)-(2.20). We establish two different results on the relationship between the preconditioned 2-by-2 and 3-by-3 formulations. The first result is given in §3.1 and holds for specific constraint preconditioners and augmented triangular preconditioners; invariance of the spectra of the preconditioned matrices and correspondence of block equations in the preconditioned systems are shown.
The second result, given in §3.2, is valid for specific augmented diagonal and triangular preconditioners and indicates that, with the same elimination of ∆z used to obtain (2.11), the preconditioned 3-by-3 systems reduce to the preconditioned 2-by-2 systems. In light of the spectral invariance of the preconditioned matrices K_{2,reg}, K̂_{3,reg}, K_{3,reg} with specific constraint and triangular preconditioners, the computational performance of iterative solvers is expected to be the same; the situation is not equally clear when preconditioning is applied, since the application of a preconditioner may preserve the distinctive differences between the formulations and make the unreduced system preferable; this topic is further explored in §4.
3.1. Equivalence properties of the 2-by-2 and 3-by-3 formulations under certain preconditioners. The considered constraint preconditioners have the form

    P_{2,C} = [ diag(H + ρI_n + X^{-1}Z)   J^T   ],                                   (3.1)
              [ J                         -δI_m  ]

    P̂_{3,C} = [ diag(H + ρI_n)   J^T    -I_n ]
               [ J               -δI_m   0   ]  =  L_1^T blkdiag( P_{2,C}, -X ) L_2,  (3.2)
               [ -Z               0     -X   ]

    P_{3,C} = [ diag(H + ρI_n)   J^T    -Z^{1/2} ]
              [ J               -δI_m    0       ]  =  L^T blkdiag( P_{2,C}, -X ) L,  (3.3)
              [ -Z^{1/2}         0      -X       ]

where the factorizations in (3.2) and (3.3) are analogous to those for K̂_{3,reg} and K_{3,reg} in (2.14), (2.16) and follow from the equality diag(H + ρI_n + X^{-1}Z) = diag(H + ρI_n) + X^{-1}Z. Constraint preconditioners where the (1,1) block is approximated by its main diagonal are frequently used in optimization, see, e.g., [7, 8, 6, 28, 29]. Augmentation in (2.18) is performed on the (1,1) block of K_{2,reg} and K_{3,reg} as follows. Let R_d ∈ R^{m×m} be positive definite and such that R_d^{-1} = δI_m, with δ > 0, and let W = R_d for system (2.11) and W = blkdiag(R_d, X^{-1}) for systems (2.7) and (2.9). The augmented block triangular preconditioners P_{2,T}, P̂_{3,T}, P_{3,T} for K_{2,reg}, K̂_{3,reg}, K_{3,reg} are of the form

    P_{2,T} = [ H + ρI_n + X^{-1}Z + J^T R_d J   -κJ^T     ],                         (3.4)
              [ 0                                -R_d^{-1} ]

    P̂_{3,T} = [ H + ρI_n + X^{-1}Z + J^T R_d J   -κJ^T      κI_n ]
               [ 0                                -R_d^{-1}   0   ]  =  [ P_{2,T}   (κI_n; 0) ],   (3.5)
               [ 0                                 0         -X   ]     [ 0         -X        ]

    P_{3,T} = [ H + ρI_n + X^{-1}Z + J^T R_d J   -κJ^T      κZ^{1/2} ]
              [ 0                                -R_d^{-1}   0       ]  =  [ P_{2,T}   (κZ^{1/2}; 0) ].   (3.6)
              [ 0                                 0         -X       ]     [ 0         -X            ]

The form of W is motivated by the result in [38, Remark 2] showing that GMRES [36] can converge in at most two iterations if P_{2,T} (or P_{3,T}) is applied exactly and round-off errors are ignored. Now we establish connections between the spectra of the systems (2.11), (2.7) and (2.9) preconditioned by the constraint preconditioners in (3.1)-(3.3) and the triangular preconditioners (3.4)-(3.6). Such results follow from the transformations (2.14), (2.16). We prove that, apart from the multiplicity of the unit eigenvalue, the eigenvalues of the preconditioned K_{3,reg} and K_{2,reg} coincide, with either the constraint or the triangular (with κ = -1) preconditioners.
Second, we show that the first two block equations of P_{3,C}^{-1} K_{3,reg} ∆_3 = P_{3,C}^{-1} f_3 (respectively, P_{3,T}^{-1} K_{3,reg} ∆_3 = P_{3,T}^{-1} f_3) are equivalent to P_{2,C}^{-1} K_{2,reg} ∆_2 = P_{2,C}^{-1} f_2 (respectively, P_{2,T}^{-1} K_{2,reg} ∆_2 = P_{2,T}^{-1} f_2), and that the third block equation is equivalent to the third equation in (2.9). Similar results hold for the unsymmetric 3-by-3 system.

Theorem 3.1. Suppose that H is symmetric positive semidefinite, X and Z are diagonal with positive entries, and K_{2,reg}, K̂_{3,reg} and K_{3,reg} in (2.12), (2.8) and (2.10)
are nonsingular. Let P_{2,C}, P̂_{3,C} and P_{3,C} be the preconditioners in (3.1), (3.2) and (3.3), respectively, and consider systems (2.11), (2.7) and (2.9). Then
i) θ ∈ Λ(P_{3,C}^{-1} K_{3,reg}) if and only if either θ = 1 or θ ∈ Λ(P_{2,C}^{-1} K_{2,reg});
ii) solving P_{3,C}^{-1} K_{3,reg} ∆_3 = P_{3,C}^{-1} f_3 is equivalent to solving P_{2,C}^{-1} K_{2,reg} ∆_2 = P_{2,C}^{-1} f_2 and recovering ∆z from the third equation in (2.9);
iii) θ ∈ Λ(P̂_{3,C}^{-1} K̂_{3,reg}) if and only if either θ = 1 or θ ∈ Λ(P_{2,C}^{-1} K_{2,reg});
iv) the same feature as in (ii) holds with K̂_{3,reg} and P̂_{3,C}.

Proof. To show i), consider the generalized eigenvalue problem K_{3,reg} u = θ P_{3,C} u with u ∈ R^{2n+m}. Using (2.16) and (3.3), we have K_{3,reg} u = θ P_{3,C} u if and only if L^T blkdiag(K_{2,reg}, -X) L u = θ L^T blkdiag(P_{2,C}, -X) L u, and the result readily follows from the nonsingularity of L. Concerning ii), we use again (2.16) and (3.3) and note that P_{3,C}^{-1} K_{3,reg} ∆_3 = P_{3,C}^{-1} f_3 if and only if

    L^{-1} blkdiag( P_{2,C}^{-1} K_{2,reg}, I_n ) ( L ∆_3 ) = L^{-1} blkdiag( P_{2,C}^{-1}, -X^{-1} ) L^{-T} f_3.

The result follows from explicitly writing

    L ∆_3 = ( ∆x, ∆y, X^{-1}Z^{1/2}∆x + Z^{-1/2}∆z ) = ( ∆_2, X^{-1}Z^{1/2}∆x + Z^{-1/2}∆z ),   (3.7)
    L^{-T} f_3 = ( f_x - X^{-1}f_z, f_y, Z^{-1/2}f_z ) = ( f_2, Z^{-1/2}f_z ),                  (3.8)

with ∆_2 and f_2 given in (2.12). From their definitions, we obtain

    P̂_{3,C}^{-1} K̂_{3,reg} = L_2^{-1} blkdiag( P_{2,C}^{-1}, -X^{-1} ) L_1^{-T} L_1^T blkdiag( K_{2,reg}, -X ) L_2
                             = L_2^{-1} blkdiag( P_{2,C}^{-1} K_{2,reg}, I_n ) L_2,

showing that the two preconditioned matrices are similar, leading to (iii). The proof of (iv) parallels the arguments of (ii) and is based on (3.2).

The spectral properties of P_{2,C}^{-1} K_{2,reg} have been studied in a variety of papers, e.g., [7, 3, 4, 5, 28, 29], and in light of Theorem 3.1 these results apply to P_{3,C}^{-1} K_{3,reg}. In terms of performance, Theorem 3.1 shows that in exact arithmetic little can be gained by the unreduced formulation when the popular constraint preconditioner is employed; the situation might differ in finite precision arithmetic.
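Theorem 3.1 can be illustrated numerically. The following sketch (NumPy, small random data, reconstructed sign conventions; illustrative only) builds K_{2,reg}, K_{3,reg} and the constraint preconditioners (3.1), (3.3), and checks that the spectrum of P_{3,C}^{-1} K_{3,reg} equals that of P_{2,C}^{-1} K_{2,reg} together with the unit eigenvalue of multiplicity n.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, rho, delta = 6, 3, 1e-2, 1e-2
B = rng.standard_normal((n, n)); H = B @ B.T
J = rng.standard_normal((m, n))
x = rng.uniform(0.5, 2.0, n); z = rng.uniform(0.5, 2.0, n)
In, Im = np.eye(n), np.eye(m)
O = lambda p, q: np.zeros((p, q))
Zh = np.diag(np.sqrt(z)); X = np.diag(x)
Hd = np.diag(np.diag(H))                 # diagonal approximation of H

K2 = np.block([[H + rho*In + np.diag(z/x), J.T], [J, -delta*Im]])
P2 = np.block([[Hd + rho*In + np.diag(z/x), J.T], [J, -delta*Im]])
K3 = np.block([[H + rho*In, J.T, -Zh],
               [J, -delta*Im, O(m, n)],
               [-Zh, O(n, m), -X]])
P3 = np.block([[Hd + rho*In, J.T, -Zh],
               [J, -delta*Im, O(m, n)],
               [-Zh, O(n, m), -X]])

e2 = np.linalg.eigvals(np.linalg.solve(P2, K2))
e3 = list(np.linalg.eigvals(np.linalg.solve(P3, K3)))

# Match Lambda(P3^{-1}K3) against Lambda(P2^{-1}K2) plus n unit eigenvalues
expected = list(e2) + [1.0] * n
for ev in expected:
    i = int(np.argmin(np.abs(np.array(e3) - ev)))
    assert abs(e3[i] - ev) < 1e-6
    e3.pop(i)
assert len(e3) == 0                      # every eigenvalue accounted for
```

The matching loop treats the spectra as multisets, which is needed because the eigenvalues of the indefinite preconditioned matrices may be complex and unordered.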
The comparison of the two approaches in the unsymmetric case, given in item (iv) above, requires further comments, as the performance of preconditioned iterative solvers such as Krylov subspace methods does not depend only on the eigenvalues of the two matrices. To this end, let us consider the right-preconditioned matrices K̂_{3,reg} P̂_{3,C}^{-1} and K_{2,reg} P_{2,C}^{-1}, which is what we will use in our numerical experiments.
In the 3-by-3 case (2.7), for instance, the system to be solved is K̂_{3,reg} P̂_{3,C}^{-1} w = f̂_3, with ∆̂_3 = P̂_{3,C}^{-1} w; analogously for the 2-by-2 system. We have that

    K̂_{3,reg} P̂_{3,C}^{-1} = L_1^T blkdiag( K_{2,reg} P_{2,C}^{-1}, I_n ) L_1^{-T}.

The following remark establishes an interesting relation between the residuals obtained by using a Krylov subspace solver and the constraint preconditioner on the 3-by-3 and 2-by-2 systems.

Remark 1. Consider an approximate solution ∆̂^{(k)} in the Krylov subspace of dimension k generated with K̂_{3,reg} P̂_{3,C}^{-1} and f̂_3 = (f_x, f_y, f_z), with zero starting approximate solution. For some polynomial ϕ_k of degree at most k, such that ϕ_k(0) = 1, the residual can be written as

    f̂_3 - K̂_{3,reg} P̂_{3,C}^{-1} ∆̂^{(k)} = ( ϕ_k( K_{2,reg} P_{2,C}^{-1} ) f_2 + ϕ_k(1) ( X^{-1}f_z, 0 ),  ϕ_k(1) f_z ),   (3.9)

where f_2 is given in (2.12). Indeed,

    f̂_3 - K̂_{3,reg} P̂_{3,C}^{-1} ∆̂^{(k)} = ϕ_k( K̂_{3,reg} P̂_{3,C}^{-1} ) f̂_3
        = L_1^T blkdiag( ϕ_k( K_{2,reg} P_{2,C}^{-1} ), ϕ_k(1) I_n ) L_1^{-T} f̂_3
        = L_1^T blkdiag( ϕ_k( K_{2,reg} P_{2,C}^{-1} ), ϕ_k(1) I_n ) ( f_x - X^{-1}f_z, f_y, f_z )
        = L_1^T ( ϕ_k( K_{2,reg} P_{2,C}^{-1} ) f_2,  ϕ_k(1) f_z ),

and the last vector is precisely (3.9). The quantity ϕ_k( K_{2,reg} P_{2,C}^{-1} ) f_2 is the same as that obtained by using a Krylov subspace method on the 2-by-2 system, except that the polynomial ϕ_k is in general different. Indeed, for a minimal residual method such as GMRES [36], ϕ_k is chosen so as to minimize the residual Euclidean norm. For the 3-by-3 case, ϕ_k must minimize the norm of both block vectors in (3.9), whereas in the 2-by-2 case only the vector ϕ_k( K_{2,reg} P_{2,C}^{-1} ) f_2 needs to be minimized in norm. Following the discussion in [35], we expect a similar behavior of the two residual norms if the minimizing polynomial for the 2-by-2 problem is small at one, which may be the case whenever one is enclosed in the spectral region of K_{2,reg} P_{2,C}^{-1}.
Clearly, the two problems become equivalent as soon as f_z = 0, and their residual norms are very close if f_z is small; we refer the reader to [35] for these and other considerations on the spectral properties of constraint-preconditioned Krylov subspace methods. Finally, we observe that the spectral analysis of [3, 3] ensures that solving with P_{3,C} may remain a well-conditioned problem throughout the IP iterations and at the
limit point, unlike with P_{2,C}. Additional comments are postponed to the end of this section. The second proof uses similar arguments.

Theorem 3.2. Suppose that H is symmetric positive semidefinite, X and Z are diagonal with positive entries, and K_{2,reg}, K̂_{3,reg} and K_{3,reg} in (2.12), (2.8) and (2.10) are nonsingular. Let P_{2,T}, P̂_{3,T} and P_{3,T} be the preconditioners (3.4), (3.5) and (3.6), respectively, with κ = -1, and consider systems (2.9), (2.7), and (2.11). Then
i) θ ∈ Λ(P_{3,T}^{-1} K_{3,reg}) if and only if either θ = 1 or θ ∈ Λ(P_{2,T}^{-1} K_{2,reg});
ii) solving P_{3,T}^{-1} K_{3,reg} ∆_3 = P_{3,T}^{-1} f_3 is equivalent to solving P_{2,T}^{-1} K_{2,reg} ∆_2 = P_{2,T}^{-1} f_2 and recovering ∆z from the third equation in (2.9);
iii) θ ∈ Λ(P̂_{3,T}^{-1} K̂_{3,reg}) if and only if either θ = 1 or θ ∈ Λ(P_{2,T}^{-1} K_{2,reg});
iv) the same feature as in (ii) holds with K̂_{3,reg} and P̂_{3,T}.

Proof. To characterize the spectrum of P_{3,T}^{-1} K_{3,reg}, we first observe that, for L in (2.15),

    L^{-T} P_{3,T} L^{-1} = [ P_{2,T}        0  ],                                    (3.10)
                            [ (Z^{1/2}, 0)  -X  ]

and that the eigenvalue problem K_{3,reg} u = θ P_{3,T} u, u ∈ R^{2n+m}, can be written as L^T blkdiag(K_{2,reg}, -X) L u = θ L^T ( L^{-T} P_{3,T} L^{-1} ) L u. Thus, by using (3.10),

    blkdiag( K_{2,reg}, -X ) û = θ [ P_{2,T}        0  ] û.                           (3.11)
                                   [ (Z^{1/2}, 0)  -X  ]

With û = L u = (û_1, û_2), where û_1 ∈ R^{n+m}, û_2 ∈ R^n, the first block row gives the eigenproblem K_{2,reg} û_1 = θ P_{2,T} û_1, while the second block row gives -X û_2 = θ ( [Z^{1/2}, 0] û_1 - X û_2 ). Therefore, all eigenvalues of (K_{2,reg}, P_{2,T}) are also eigenvalues of the unreduced problem, with û_2 given by the second block row. The remaining eigenvalues are obtained for û_1 = 0, which gives θ = 1. Note that if θ = 1 is also an eigenvalue of (K_{2,reg}, P_{2,T}), then the matrix pencil in (3.11) is not diagonalizable.

We now prove item ii). By multiplying from the left by L and using (2.16), we can write the preconditioned system P_{3,T}^{-1} K_{3,reg} ∆_3 = P_{3,T}^{-1} f_3 as

    ( L P_{3,T}^{-1} L^T ) blkdiag( K_{2,reg}, -X ) ( L ∆_3 ) = ( L P_{3,T}^{-1} L^T ) L^{-T} f_3.

Since L P_{3,T}^{-1} L^T = ( L^{-T} P_{3,T} L^{-1} )^{-1} and L^{-T} f_3 = ( f_2, Z^{-1/2}f_z ), from (3.10) it follows that the system can be rewritten as

    [ P_{2,T}        0  ]^{-1} blkdiag( K_{2,reg}, -X ) ( L ∆_3 ) = [ P_{2,T}        0  ]^{-1} ( f_2, Z^{-1/2}f_z ).
    [ (Z^{1/2}, 0)  -X  ]                                           [ (Z^{1/2}, 0)  -X  ]
By (3.7), the first block equation coincides with the reduced preconditioned system, while the third block equation is equivalent to the third equation in (2.9). The proof of (iii) and (iv) parallels the above arguments and is based on (3.2) and on the equality

    L_1^{-T} P̂_{3,T} L_2^{-1} = [ P_{2,T}   0  ].
                                 [ (Z, 0)  -X  ]

Remark 2. Suppose that the 3-by-3 and 2-by-2 systems are solved by a Krylov subspace solver coupled with the triangular augmented preconditioner analyzed above. By (2.14) and

    P̂_{3,T} = L_1^T [ P_{2,T}   0  ] L_2,
                     [ (Z, 0)  -X  ]

we get

    K̂_{3,reg} P̂_{3,T}^{-1} = L_1^T blkdiag( K_{2,reg}, -X ) [ P_{2,T}   0  ]^{-1} L_1^{-T}
                                                              [ (Z, 0)  -X  ]
                             = L_1^T [ K_{2,reg} P_{2,T}^{-1}   0   ] L_1^{-T} =: L_1^T M L_1^{-T},
                                     [ -B                       I_n ]

with B = [Z, 0] P_{2,T}^{-1}. Therefore, for any polynomial ϕ_k of degree not greater than k,

    ϕ_k( K̂_{3,reg} P̂_{3,T}^{-1} ) = L_1^T ϕ_k( M ) L_1^{-T}                          (3.12)
        = L_1^T [ ϕ_k( K_{2,reg} P_{2,T}^{-1} )        0          ] L_1^{-T}.         (3.13)
                [ -B ψ_k( K_{2,reg} P_{2,T}^{-1} )   ϕ_k(1) I_n ]

We thus find that

    ϕ_k( K̂_{3,reg} P̂_{3,T}^{-1} ) f̂_3 = L_1^T ( ϕ_k( K_{2,reg} P_{2,T}^{-1} ) f_2,  -B ψ_k( K_{2,reg} P_{2,T}^{-1} ) f_2 + ϕ_k(1) f_z ).

Here ψ_k is as defined in [35, formula (2.7)], and the derivation is the same as in [27, Theorem 1.2] and [35, Lemma 2.2], except that the matrix is lower block triangular instead of upper block triangular. Unlike the case discussed in Remark 1, now the two residual minimizations do not become equivalent for f_z = 0, and the residual vectors are not expected to be close for small values of f_z.

The above theorem is valid as long as κ = -1 and the weight W is the one employed so far. It does not hold if the (3,3) block is different from -X or if augmentation is performed using only the Schur complement J^T R_d J.
The spectrum of the preconditioned matrices is fully characterized by the eigenvalues of P_{2,T}^{-1} K_{2,reg}, and we refer to the results in [9, 24, 34]. As for the conditioning of the matrices (3.4), (3.5) and (3.6), we observe that the (1,1) block of the preconditioners may be badly affected by a small regularizing parameter δ and possibly by X^{-1}Z.

Summarizing our results, for the preconditioners considered, the spectral properties of the preconditioned 2-by-2 and 3-by-3 matrices are the same, and the computational performance of an iterative solver, at least in terms of number of iterations, is expected to be similar for the reduced and unreduced systems (the similarity matrices L and L_2 do not affect the first two blocks). On the other hand, due to the potential well-conditioning of the preconditioner in (3.3) in the limit of the IP process, P_{3,C} seems to genuinely exploit the unreduced formulation and may justify the larger memory allocations and computational costs per iteration required by the unreduced formulation. An experimental analysis on constraint and augmented preconditioners is performed in §4.

3.2. Further relations between the 2-by-2 and 3-by-3 preconditioned systems. Further relationships between the systems (2.7), (2.9) and (2.11) preconditioned by augmented diagonal and triangular preconditioners are shown in this section. The triangular preconditioners are as in (3.4), (3.5) and (3.6), and here we are interested in the case of a general scalar κ. The diagonal preconditioners of type (2.17) are defined with the same augmentation, thus

    P_{2,D} = blkdiag( H + ρI_n + X^{-1}Z + J^T R_d J,  R_d^{-1} ),                   (3.14)

    P_{3,D} = blkdiag( H + ρI_n + X^{-1}Z + J^T R_d J,  R_d^{-1},  X ) = blkdiag( P_{2,D}, X ).   (3.15)

Observe that P_{2,D} and P_{3,D} are positive definite. We remark that, in the regularized case, W is equal to the (2,2) block of the matrices (up to its sign), and this is an optimal choice for the diagonal preconditioner in terms of spectral distribution [3, §4.3].
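The block structure and positive definiteness of the diagonal augmented preconditioners can be checked directly. The sketch below (NumPy, random data, reconstructed conventions with R_d^{-1} = δI_m; illustrative only) assembles (3.14)-(3.15) and verifies that both matrices are symmetric positive definite, which is what makes them admissible as MINRES preconditioners.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, rho, delta = 5, 2, 1e-2, 1e-2
B = rng.standard_normal((n, n)); H = B @ B.T
J = rng.standard_normal((m, n))
x = rng.uniform(0.5, 2.0, n); z = rng.uniform(0.5, 2.0, n)
In, Im = np.eye(n), np.eye(m)

Rd = (1.0 / delta) * Im                 # weight with R_d^{-1} = delta*I_m
A = H + rho*In + np.diag(z/x) + J.T @ Rd @ J   # augmented (1,1) block

P2D = np.block([[A, np.zeros((n, m))],
                [np.zeros((m, n)), delta * Im]])          # (3.14)
P3D = np.block([[P2D, np.zeros((n + m, n))],
                [np.zeros((n, n + m)), np.diag(x)]])      # (3.15)

# Both are symmetric positive definite, hence usable with MINRES
assert np.all(np.linalg.eigvalsh(P2D) > 0)
assert np.all(np.linalg.eigvalsh(P3D) > 0)
assert np.allclose(P3D[:n + m, :n + m], P2D)  # P_{3,D} = blkdiag(P_{2,D}, X)
```

Note that the unreduced preconditioner reuses the augmented (1,1) block of the reduced one and only appends the diagonal block X, so its extra application cost per iteration is negligible.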
We now show that, upon elimination of ∆z, the 3-by-3 preconditioned system reduces to the 2-by-2 preconditioned system; note that here this property holds for any real κ in the block triangular preconditioner.

Theorem 3.3. Suppose that H is symmetric and positive semidefinite, X and Z are diagonal with positive entries, and K_{2,reg} and K_{3,reg} given in (2.12) and (2.10) are nonsingular. Let P_{2,T} and P_{3,T} be the preconditioners in (3.4) and (3.6), respectively, with κ ∈ R. Then, after the same elimination of ∆z as in K_{3,reg} ∆_3 = f_3, the system P_{3,T}^{-1} K_{3,reg} ∆_3 = P_{3,T}^{-1} f_3 reduces to P_{2,T}^{-1} K_{2,reg} ∆_2 = P_{2,T}^{-1} f_2. The same feature holds with P_{2,D}, P_{3,D} in (3.14), (3.15).

Proof. We first observe that

    P_{3,T}^{-1} = [ P_{2,T}^{-1}   κ P_{2,T}^{-1} (Z^{1/2}; 0) X^{-1} ].
                   [ 0              -X^{-1}                            ]

Then, by (2.16), the preconditioned system P_{3,T}^{-1} K_{3,reg} ∆_3 = P_{3,T}^{-1} f_3 can be written as

    P_{3,T}^{-1} L^T blkdiag( K_{2,reg}, -X ) ( L ∆_3 ) = P_{3,T}^{-1} L^T L^{-T} f_3,
and takes the form

    [ P_{2,T}^{-1} K_{2,reg}   -(1+κ) P_{2,T}^{-1} (Z^{1/2}; 0) ] ( L ∆_3 ) = ( P_{2,T}^{-1} ( f_x + κ X^{-1}f_z, f_y ),  -X^{-1} Z^{-1/2} f_z ).
    [ 0                         I_n                             ]

Finally, by (3.7), it readily follows that by back substitution of ∆z the first block equation coincides with the reduced preconditioned system. The claim for the diagonal preconditioners P_{2,D}, P_{3,D} in (3.14), (3.15) can be proved by repeating the above arguments.

The previous result is still valid if we consider the systems (2.11) and (2.7) and either the preconditioners P_{2,T} and P̂_{3,T} in (3.4), (3.5) or P_{2,D}, P_{3,D} in (3.14), (3.15); the proof is omitted.

4. Qualitatively different preconditioning strategies. Our previous results show that the reduced and unreduced formulations are closely related, also when a large class of preconditioning strategies is used. To be able to investigate their true potential in a comparative manner, we need to select the free parameters of these acceleration strategies and to apply the preconditioners in an ad-hoc way. In this section, we experiment with an IP method for problem (2.1). Our aim is to compare the performance of the preconditioners of the form (2.17)-(2.19) and to assess whether the unreduced formulation can be advantageous in terms of conditioning and execution time relative to the reduced one. We put emphasis on the solution of the whole sequence of linear systems generated with each formulation. The numerical experiments were conducted using Matlab R2015a on an Intel Core i7-5820K processor, 3.3GHz, with 64 GB of RAM. Elapsed times were measured by the tic and toc Matlab commands. The datasets used are eight convex QP problems from the CUTEr collection [23] (cvxqp1-cvxqp3, stcqp1 a, stcqp1 b, stcqp2, gouldqp3 a, gouldqp3 b), and four problems from the Maros and Mészáros collection [3] (qship12s, qship12l, laser and qsctap3). Possible linear inequality constraints have been reformulated as equality constraints by using slack variables. The features of these problems are summarized in Table 4.1.
In all the considered datasets, H is not a diagonal matrix and the number of its nonzero elements is generally quite large, so as to justify the use of an iterative solver. The last four datasets have medium dimensions, and the matrices H and J are very sparse. In spite of their relatively small dimensions, these problems may be representative of the spectral features of larger matrices stemming from similar application problems, thus offering some insights into conditioning issues. The QP problems were solved by the Matlab code pdco (Primal-Dual interior method for optimization with Convex Objectives), developed by Michael Saunders and available at [33]. For stability purposes, problem (2.1) is regularized as

    min_{x,r}  c^T x + (1/2) x^T H x + (1/2) ‖D_1 x‖^2 + (1/2) ‖r‖^2
    subject to  Jx + D_2 r = b,  x ≥ 0,

where D_1 and D_2 are positive definite diagonal matrices specified by the user. In pdco, nonnegativity constraints are replaced by the log barrier function and a sequence of convex barrier subproblems is generated; for each barrier subproblem, one step of Newton's method is applied to the KKT conditions with perturbed complementarity conditions. Therefore, with D_1 = ρI_n and D_2 = δI_m for positive ρ and δ, the reduced and unreduced coefficient matrices of the systems to be solved take the form K_{2,reg}, K̂_{3,reg} and K_{3,reg}, respectively. In all our runs, we set δ = ρ = 10^{-6}.
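At each IP iteration the coefficient matrices must be reassembled from the current iterate. The following SciPy sketch (illustrative only; `assemble_systems` is a hypothetical helper, and the sign conventions are those reconstructed in §2) shows the per-iteration assembly of K_{2,reg} and K_{3,reg} in sparse format.

```python
import numpy as np
import scipy.sparse as sp

def assemble_systems(H, J, x, z, rho=1e-6, delta=1e-6):
    """Assemble the reduced and symmetric unreduced regularized KKT
    matrices K_{2,reg} and K_{3,reg} in sparse format for the
    current IP iterate (x, z) with x, z > 0."""
    m, n = J.shape
    XinvZ = sp.diags(z / x)
    Zh = sp.diags(np.sqrt(z))                  # Z^{1/2}
    K2 = sp.bmat([[H + rho * sp.eye(n) + XinvZ, J.T],
                  [J, -delta * sp.eye(m)]], format="csr")
    K3 = sp.bmat([[H + rho * sp.eye(n), J.T, -Zh],
                  [J, -delta * sp.eye(m), None],
                  [-Zh, None, -sp.diags(x)]], format="csr")
    return K2, K3

# Tiny example with a sparse Hessian and constraint matrix
n, m = 8, 3
H = sp.diags(np.linspace(1.0, 2.0, n)).tocsr()
J = sp.random(m, n, density=0.5, random_state=0, format="csr") + sp.eye(m, n)
x = np.full(n, 0.5); z = np.full(n, 2.0)
K2, K3 = assemble_systems(H, J, x, z)
assert K2.shape == (n + m, n + m) and K3.shape == (2 * n + m, 2 * n + m)
assert abs(K3 - K3.T).max() < 1e-12            # symmetric unreduced matrix
```

Since only the diagonal blocks involving X and Z change between IP iterations, the sparsity pattern of both matrices is fixed, which allows symbolic factorizations and preconditioner structures to be reused across the IP sequence.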
Table 4.1 Test problems: values of n, m, nnz(H) and nnz(J) for cvxqp1, cvxqp2, cvxqp3, stcqp1 a, stcqp1 b, stcqp2, gouldqp3 a, gouldqp3 b, qship12 s, qship12 l, laser and qsctap3.

pdco allows one to work with two alternative linear system formulations: a reduced form with matrix K_{2,reg} and a condensed formulation where the coefficient matrix is a Schur complement. Symmetric indefinite systems are solved by a direct solver, whereas for condensed systems both direct and iterative solvers are available. We modified pdco so that the unreduced regularized formulation with matrices K_{3,reg}, \bar{K}_{3,reg} could also be explicitly formed. For the 3 × 3 formulation, the unsymmetric form (2.8) is used whenever indefinite preconditioners are employed. Indeed, the symmetrization process (2.5) requires the application of the matrix R, which becomes increasingly ill-conditioned along the IP process and may also have a detrimental effect on the scaling of the right-hand side of the linear systems. Summarizing, the systems in the sequence were solved by using: i) Matlab's sparse direct solver (function \); ii) MINRES [7, 32] coupled with a diagonal augmented preconditioner P_D of the form (2.7) for the symmetric matrices \bar{K}_{3,reg}, K_{2,reg}; iii) GMRES [36] coupled with the indefinite constraint preconditioner (2.2) and the augmented preconditioner (2.9) for the matrices K_{3,reg}, K_{2,reg}.

4.1. Implementation issues in preconditioning. The distinguishing feature of the preconditioner P_{3,C} given in (3.2) with respect to P_{2,C} in (3.1) is that the former is invertible and possibly well conditioned throughout the IP iterations, as long as K_{3,reg} is invertible at the solution. The application of P_{3,C} requires solves that can be performed either via a sparse complete or incomplete LU factorization, or by the following standard a-priori block decomposition.
Letting D = diag(h + ρi n ) we have with S = I n D P 3,C = JD I m I n D J T D I ZD S m, I n I n J D [ ] J Z T δim JD I n + = J T + δi m JD X ZD J T ZD, + X 4
and, since ZD^{-1} + X is diagonal, it holds that
\[
S = \begin{bmatrix} I_m & J(Z+DX)^{-1} \\ & I_n \end{bmatrix}
\begin{bmatrix} -S_2 & \\ & ZD^{-1}+X \end{bmatrix}
\begin{bmatrix} I_m & \\ -(Z+DX)^{-1}ZJ^T & I_n \end{bmatrix},
\]
with S_2 = JX(Z + DX)^{-1}J^T + δI_m. This matrix S_2 is the same Schur complement that one obtains by block factorizing the 2 × 2 formulation, that is
\[
P_{2,C} =
\begin{bmatrix} I_n & \\ J(D + X^{-1}Z)^{-1} & I_m \end{bmatrix}
\begin{bmatrix} D + X^{-1}Z & \\ & -S_2 \end{bmatrix}
\begin{bmatrix} I_n & (D + X^{-1}Z)^{-1}J^T \\ & I_m \end{bmatrix}. \tag{4.1}
\]
The (diagonal) entries of DX(Z + DX)^{-1} belong to the interval (0, 1), with the smallest value being of the order of min_i x_i if strict complementarity holds at the solution. Thus, the eigenvalues of S_2 belong to the interval [δ, δ + ‖JD^{-1}J^T‖] throughout the IP iterations, and the lower bound is possibly approached as min_i x_i tends to zero. As a consequence, the condition number of S_2 may be large, e.g., in the case where H is positive semidefinite and the regularization δ is small. This ill-conditioning correctly reflects that of P_{2,C}, whereas in the 3 × 3 case S_2 may be much more ill-conditioned than the original preconditioner.

As for the augmented preconditioners (2.7)-(2.9), their effectiveness relies on the choice of the weight matrix W; from a computational point of view, it is convenient to choose W diagonal, so that it is inexpensive to invert. In the presence of regularization in the (2,2) block, an optimal choice of W in terms of the eigenvalue distribution of the preconditioned matrices is W = C [3, 4.3], [38, Remark 2.1].
However, we have not made this choice for two reasons: i) it implies a tight relation between the unreduced and reduced preconditioned formulations, as described in Theorems 3.2 and 3.3; ii) in our numerical experiments it turned out to be unsatisfactory in terms of conditioning, particularly for the unreduced systems. Indeed, the condition number of the 3 × 3 preconditioners increases as some entries of C become very small, and this occurrence is especially undesirable since the systems (2.7) and (2.9) are expected to be better conditioned than the reduced ones.

We have used a diagonal augmented preconditioner P_D of the form (2.7) and a triangular augmented preconditioner T_W of the form (2.9). For the definition of T_W, following [38], we set κ = 2 in (2.9), and consequently
\[
T_W = \begin{bmatrix} E + F^TW^{-1}G & 2F^T \\ & -(W+C) \end{bmatrix}. \tag{4.2}
\]
As for the unreduced case (2.8), we let
\[
E = H+\rho I_n,\qquad F = \begin{bmatrix} J \\ -I_n \end{bmatrix},\qquad G = \begin{bmatrix} J \\ Z \end{bmatrix},\qquad C = \begin{bmatrix} \delta I_m & \\ & -X \end{bmatrix},
\]
while in the reduced case (2.2) we have E = H + ρI_n + X^{-1}Z, F = G = J, and C = δI_m.

Approximations such as W = γI_{n+m}, with γ equal to either the arithmetic mean, the geometric mean or the median of diag(C), actually led to slow convergence of preconditioned MINRES and GMRES. In the spirit of the proposal in [34], for the unreduced systems we obtained an effective choice of W by setting
\[
W = C + \Gamma = C + \begin{bmatrix} \gamma_1 I_m & \\ & \gamma_2 I_n \end{bmatrix} = \begin{bmatrix} \delta I_m + \gamma_1 I_m & \\ & -X + \gamma_2 I_n \end{bmatrix}, \tag{4.3}
\]
where γ_1 and γ_2 are given by
\[
\gamma_1 = \min\left\{1, \|H + \rho I_n\|_F^{-1}\right\}, \qquad \gamma_2 = \gamma_1\,\mathrm{mean}(z), \tag{4.4}
\]
and ‖·‖_F is the Frobenius norm, while for the reduced system K_{2,reg} we instead set
\[
W = C + \gamma I_m = (\delta + \gamma)I_m, \qquad \gamma = \|H + \rho I_n + X^{-1}Z\|_F^{-1}. \tag{4.5}
\]
For all the problems we worked with, the above choice of γ always gave a value less than one. For different data, the minimum between this value and one could be considered, as in (4.4). The choice (4.3) exploits the partitioning of the original matrix by giving different weights to the two diagonal blocks. If the problem is well scaled, the values of γ_1 and γ_2 are expected to be of moderate size and to eventually remain bounded away from zero; this feature may mitigate ill-conditioning in the (1,1) block of the preconditioner, and the following numerical experiments support this claim. On the other hand, we remark that γ is eventually expected to tend to zero.

4.2. Selected numerical results. We report selected numerical results on the inexact implementation of the IP method in pdco and the comparison of different linear system formulations and linear algebra solvers. The issue of how much inaccuracy is allowed in the solution of the linear systems while preserving the convergence properties of IP methods has been investigated in several papers [3, 4, 7, 8, 6, 22, 29]. The norm of the residual in the linear systems can be directly linked to the barrier parameter µ = x^Tz/n in (2.3), whereas the convergence of the IP method is driven by the barrier term tending to zero.
This setting implies increasing accuracy in the solution of the linear systems as the IP iterate approaches the solution. The resulting IP method can be interpreted as an inexact Newton method [2] and its convergence can be analyzed within that framework [4]. On the basis of the literature mentioned above, we solve the unreduced system (2.7) iteratively and monitor the unpreconditioned residual until
\[
\|K_{3,\mathrm{reg}}\Delta_3 - f_3\| \le \eta\mu, \tag{4.6}
\]
with η a forcing term in (0, 1). Analogously, for system (2.9) the control on inexactness is $\|\bar K_{3,\mathrm{reg}}\bar\Delta_3 - \bar f_3\| \le \eta\mu$. For the 2 × 2 formulation, we solve (2.) iteratively until the residual satisfies
\[
\|K_{2,\mathrm{reg}}\Delta_2 - f_2\| \le \eta\mu. \tag{4.7}
\]
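The inexactness control (4.7) can be emulated with any residual-monitoring Krylov solver. Below is a minimal, unpreconditioned, full-memory GMRES sketch in NumPy that stops as soon as the residual drops below ημ; it is illustrative only (the experiments above use Matlab's MINRES and GMRES), and the test matrix is random:

```python
import numpy as np

def gmres_to_tol(A, b, tol, max_it=300):
    """Minimal full-memory GMRES with x0 = 0; stop once ||b - A x_k|| <= tol."""
    beta = np.linalg.norm(b)
    Q = [b / beta]
    h = np.zeros((max_it + 1, max_it))
    for k in range(max_it):
        w = A @ Q[k]                       # Arnoldi step
        for j in range(k + 1):
            h[j, k] = Q[j] @ w
            w = w - h[j, k] * Q[j]
        h[k + 1, k] = np.linalg.norm(w)
        e1 = np.zeros(k + 2); e1[0] = beta
        y, *_ = np.linalg.lstsq(h[:k + 2, :k + 1], e1, rcond=None)
        if np.linalg.norm(e1 - h[:k + 2, :k + 1] @ y) <= tol:
            break                          # GMRES residual equals true residual
        Q.append(w / h[k + 1, k])
    return np.column_stack(Q[:k + 1]) @ y

# Hypothetical reduced regularized KKT system and barrier parameter.
rng = np.random.default_rng(4)
n, m = 8, 4
rho = delta = 1e-6
H = rng.standard_normal((n, n)); H = H @ H.T
J = rng.standard_normal((m, n))
x = rng.random(n) + 0.1
z = rng.random(n) + 0.1
K2 = np.block([[H + rho*np.eye(n) + np.diag(z / x), J.T],
               [J, -delta*np.eye(m)]])
f2 = rng.standard_normal(n + m)

mu = x @ z / n            # barrier parameter at the current iterate
eta = 1e-2                # forcing term, as in the runs above
d2 = gmres_to_tol(K2, f2, eta * mu)
assert np.linalg.norm(f2 - K2 @ d2) <= eta * mu
```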
Subsequently, the step Δz is recovered from the third block equation in (2.7). The following results were obtained applying MINRES and GMRES with stopping criterion (4.7) and η equal to 10^{-2}. Throughout this section, the total execution times for the iterative solution of the systems include the time needed to form and factorize the preconditioner and the time spent in the iterative solver.

pdco was stopped with accuracy on feasibility (FeaTol) equal to 10^{-6} and complementarity tolerance (OptTol) equal to 10^{-6}. As suggested in pdco, scaling of the variables may improve the performance of the code; it is implemented through the parameters xsize and zsize, which estimate the largest components of x and z at the solution. Moreover, scaling can be beneficial for the conditioning of the arising systems. Therefore, we used: xsize = 1 and zsize = 10^{-4} for problems cvxqp1 to cvxqp3; xsize = 1 and zsize = 10^{-2} for problems of the stcqp family; xsize = 1 and zsize = 10^{-5} for problems of the qship12 family; xsize = 10^{-2} and zsize = 10^{-6} for laser; and xsize = 1 and zsize = 1 for qsctap3. In fact, these scaling parameters represent the largest power of 10 strictly smaller than max(x) and max(z) at the solution. In the gouldqp3 problems all the lower bounds on the dual variables z happen to be active, and we set xsize = 1 and zsize = 10^{-3}.

Table 4.2 Unreduced system with constraint preconditioning: condition numbers of P_C, of the L and U factors in P_C, and CPU time in seconds, for the two different pivotings ΠP_C = LU and ΠP_CQ = LU; κ_e(·) denotes the estimate of the 1-norm condition number (Matlab function condest), expressed in the form min-max, for the problems of Table 4.1.

We start by analyzing the conditioning of the matrices K_{3,reg}, K_{2,reg} and of the corresponding preconditioners.
In the unreduced formulation it is worthwhile to analyze the constraint preconditioner P_C when the two exact factorizations ΠP_C = LU and ΠP_CQ = LU are used. The application of a column reordering (matrix Q) is commonly more time and memory efficient than row pivoting alone via the permutation matrix Π, although the condition numbers of the factors L and U may be considerably higher. In Table 4.2 we report the minimum and maximum estimated condition numbers of K_{3,reg} and P_C attained during the whole sequence of system solves, the conditioning of the factors L and U, and the total execution time required for the solution of the linear systems. All condition numbers were estimated by the Matlab function condest, and only the exponent of the order of magnitude is reported. The condition numbers of K_{3,reg} and P_C are comparable, while in the factorization with row reordering the conditioning of U is at most a couple of orders of magnitude larger than that of P_C. Unfortunately, the corresponding total CPU time is prohibitive. On the contrary, the use of row and column reordering enhances the sparsity of the factors and allows for
a fast system solution, though the conditioning of U becomes much larger in most problems; we should say, however, that the IP method did not display any related anomalous behavior. Keeping this warning in mind, in the following we adopt the factorization ΠP_CQ = LU in all the runs with P_C. To avoid possible additional ill-conditioning, we decided not to report results with the factored form of P_C via the Schur complement, although the timings were comparable and, once again, the ill-conditioning did not seem to affect the IP iteration.

In Table 4.3 we display the minimum and maximum condition numbers attained during the IP iterations by K_{3,reg}, by the L and U factors of P_C, and by the Cholesky factor L of the (1,1) block of P_D. Since the condition number of K_{3,reg} did not vary significantly in runs with different linear algebra solvers, we report only the values obtained with Matlab's direct solver. Similarly, we do not show the condition numbers of the Cholesky factors of the (1,1) block of T_W, as they are very similar to those of P_D.

Table 4.3 Unreduced system: condition numbers of K_{3,reg}, of the L and U factors in P_C (row and column reordering), and of the Cholesky factor of the (1,1) block in P_D; κ_e(·) denotes the estimate of the 1-norm condition number (Matlab function condest), expressed in the form min-max, for the problems of Table 4.1.

Table 4.4 Reduced system: condition numbers of K_{2,reg}, of the Cholesky factor of the Schur complement in P_C, and of the Cholesky factor of the (1,1) block in P_D; κ_e(·) as in Table 4.3.
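The effect of the two pivoting strategies can be illustrated with any sparse LU package. The sketch below uses SciPy's SuperLU interface on a hypothetical saddle-point-like matrix: permc_spec="NATURAL" mimics row pivoting only (ΠP_C = LU), while "COLAMD" adds a sparsity-oriented column reordering (ΠP_CQ = LU); all data are made up for illustration:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Hypothetical sparse 2x2-block test matrix: D diagonal, J sparse random.
rng = np.random.default_rng(5)
n, m, delta = 200, 80, 1e-6
D = sp.diags(rng.random(n) + 1.0)
J = sp.random(m, n, density=0.05, random_state=5, format="csr")
P = sp.bmat([[D, J.T], [J, -delta*sp.eye(m)]], format="csc")

# Row pivoting only vs. row pivoting plus COLAMD column reordering.
lu_nat = splu(P, permc_spec="NATURAL")
lu_col = splu(P, permc_spec="COLAMD")
fill_nat = lu_nat.L.nnz + lu_nat.U.nnz   # fill-in without column reordering
fill_col = lu_col.L.nnz + lu_col.U.nnz   # fill-in with column reordering

# Both factorizations solve the same system.
b = rng.standard_normal(n + m)
assert np.allclose(P @ lu_nat.solve(b), b)
assert np.allclose(P @ lu_col.solve(b), b)
```

Comparing fill_nat and fill_col shows how much the column reordering changes the sparsity of the factors on a given problem.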
The data for K_{3,reg} and P_C are the same as in Table 4.2 and are repeated for clarity. Table 4.4 shows analogous statistics for the reduced systems: the preconditioner P_C is applied via the Schur complement factorization (4.1), and we report the condition number of the Cholesky factor L of S_2.
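The κ_e(·) values in these tables come from Matlab's condest. An analogous 1-norm estimate κ_1(A) ≈ ‖A‖_1 ‖A^{-1}‖_1 can be sketched with SciPy's onenormest, applying the inverse through a sparse factorization rather than forming it; the helper name and the diagonal test matrix below are made up:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu, onenormest, LinearOperator

def condest_1(A):
    """1-norm condition estimate ||A||_1 * ||A^{-1}||_1, in the spirit of condest."""
    lu = splu(A.tocsc())
    # The action of A^{-1} (and of A^{-T}) is provided by sparse LU solves;
    # the inverse itself is never formed.
    ainv = LinearOperator(A.shape, matvec=lu.solve,
                          rmatvec=lambda v: lu.solve(v, trans="T"),
                          dtype=A.dtype)
    return onenormest(A) * onenormest(ainv)

# Sanity check on a diagonal matrix with known 1-norm condition number 1e6.
A = sp.diags(np.logspace(0.0, -6.0, 25), format="csc")
est = condest_1(A)
assert 1e5 <= est <= 1.01e6
```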
Table 4.5 Unreduced system: number of linear systems solved (outer iterations) and number of inner iterations (min/max/avg) required by the iterative solvers P_C-GMRES, P_D-MINRES and T_W-GMRES, for the problems of Table 4.1.

Table 4.6 Unreduced systems: number of linear systems solved (outer iterations) and total execution times (secs) for solving the sequence of unreduced systems with Matlab's direct solver, P_C-GMRES, P_D-MINRES and T_W-GMRES.

Comparing Tables 4.3 and 4.4, we observe that the maximum value attained by the conditioning of K_{3,reg} is several orders of magnitude smaller than that of K_{2,reg}. In the unreduced formulation, the condition number of the Cholesky factor L of the (1,1) block of P_D is at most a couple of orders of magnitude smaller than the condition number of the corresponding Cholesky factor for K_{2,reg}.

Next, we present the performance of pdco with our different implementations of the linear algebra phase. As for the augmented preconditioners, in the first eight problems P_D and T_W were approximated by replacing E + F^TW^{-1}G with its incomplete Cholesky factors computed with drop threshold 10^{-3}; larger thresholds led to a breakdown of the factorization. In the remaining problems the exact Cholesky factors were computed, as the incomplete factorization failed with thresholds down to 10^{-4}.
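SciPy offers no incomplete Cholesky, but the same accuracy/fill trade-off can be illustrated with its incomplete LU (spilu) and the drop_tol parameter, which plays the role of the drop threshold used above. The SPD test matrix below is a hypothetical stand-in for E + F^TW^{-1}G:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spilu, splu

# Hypothetical sparse SPD matrix: a 2-D Laplacian on a 20x20 grid.
N = 20
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(N, N))
A = sp.kronsum(T, T).tocsc()

exact = splu(A)
loose = spilu(A, drop_tol=1e-1)   # cheaper, rougher incomplete factors
tight = spilu(A, drop_tol=1e-4)   # closer to the exact factorization

b = np.ones(A.shape[0])
x = exact.solve(b)
# A smaller drop tolerance gives a more accurate approximate solve ...
err_loose = np.linalg.norm(loose.solve(b) - x)
err_tight = np.linalg.norm(tight.solve(b) - x)
assert err_tight <= err_loose
# ... at the price of denser factors.
assert tight.L.nnz + tight.U.nnz >= loose.L.nnz + loose.U.nnz
```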
Interestingly, for all problems the number of IP iterations was insensitive to the linear algebra solver used in the inner phase, for both formulations. This seems to imply that, with the chosen stopping criterion for the inner approximate solution, the selection of the inner solver should be based on performance, also when comparing the reduced and unreduced formulations. Moreover, the iterative solution of the inner systems does not degrade the overall performance of the IP method compared with the sparse direct solver. Table 4.5 shows the number of linear systems solved (outer iterations) and the
minimum, maximum and average (min/max/avg) number of inner iterations required by the iterative solvers P_C-GMRES, P_D-MINRES and T_W-GMRES in the 3 × 3 case. Table 4.6 displays the total execution time for solving the sequence of systems with Matlab's direct solver, P_C-GMRES, P_D-MINRES and T_W-GMRES. Analogous statistics for the solution of the sequences of reduced systems are given in Tables 4.7 and 4.8. The sparse direct solver is compiled code, whereas the iterative solvers are Matlab implementations and thus slower; nonetheless, the iterative methods are able to largely overcome this disadvantage. In the 3 × 3 formulation, runs with P_C-GMRES show significant CPU time reductions over the direct solver in about 50% of the cases; see Table 4.6. Both augmented preconditioners are competitive with P_C in most of the runs, and yield the best performance in problems stcqp1 a and stcqp1 b. In the last six problems, for both formulations, the computational time with Matlab's direct solver and P_C-GMRES is very low, while there is a clear computational disadvantage in applying the complete Cholesky factorization in the application of P_D and T_W (problems qship12 s, qship12 l, laser and qsctap3).

Table 4.7 Reduced systems: number of linear systems solved (outer iterations) and number of inner iterations (min/max/avg) required by the iterative solvers P_C-GMRES, P_D-MINRES and T_W-GMRES, for the problems of Table 4.1.
Concerning the reduced formulation, the computational times in Table 4.8 are low, but the iterative solvers still produce significant savings in the first six problems. The closeness of the residual norms of P_C-GMRES for the 3 × 3 and the 2 × 2 systems was experimentally verified along the whole IP process; see Remark 1. As expected, the reduced formulation is thus more convenient than the unreduced one. Tables 4.5 and 4.7 show that, in general, preconditioned GMRES and MINRES behave very similarly in the two formulations in terms of number of iterations, while the reduced formulation deals with vectors of smaller dimension. As far as our theoretical analysis and numerical experience are concerned, we can conclude that the better spectral properties of the unreduced formulation, and possibly of the corresponding preconditioners, do not provide sufficient benefits in terms of robustness and efficiency. The steps computed through iterative solvers applied to the 2 × 2 systems are accurate enough to guarantee the success of the IP method, and the smaller dimension provides computational and time savings in the IP procedure.

5. Conclusions. The high sparsity structure of KKT systems arising in the numerical solution of quadratic programming problems by means of interior point methods encourages the use of reduction strategies before solving the systems. For
More informationJim Lambers MAT 610 Summer Session Lecture 2 Notes
Jim Lambers MAT 610 Summer Session 2009-10 Lecture 2 Notes These notes correspond to Sections 2.2-2.4 in the text. Vector Norms Given vectors x and y of length one, which are simply scalars x and y, the
More informationThe antitriangular factorisation of saddle point matrices
The antitriangular factorisation of saddle point matrices J. Pestana and A. J. Wathen August 29, 2013 Abstract Mastronardi and Van Dooren [this journal, 34 (2013) pp. 173 196] recently introduced the block
More informationRecent advances in approximation using Krylov subspaces. V. Simoncini. Dipartimento di Matematica, Università di Bologna.
Recent advances in approximation using Krylov subspaces V. Simoncini Dipartimento di Matematica, Università di Bologna and CIRSA, Ravenna, Italy valeria@dm.unibo.it 1 The framework It is given an operator
More informationProjection methods to solve SDP
Projection methods to solve SDP Franz Rendl http://www.math.uni-klu.ac.at Alpen-Adria-Universität Klagenfurt Austria F. Rendl, Oberwolfach Seminar, May 2010 p.1/32 Overview Augmented Primal-Dual Method
More informationSparsity-Preserving Difference of Positive Semidefinite Matrix Representation of Indefinite Matrices
Sparsity-Preserving Difference of Positive Semidefinite Matrix Representation of Indefinite Matrices Jaehyun Park June 1 2016 Abstract We consider the problem of writing an arbitrary symmetric matrix as
More information5 Handling Constraints
5 Handling Constraints Engineering design optimization problems are very rarely unconstrained. Moreover, the constraints that appear in these problems are typically nonlinear. This motivates our interest
More informationAMS Mathematics Subject Classification : 65F10,65F50. Key words and phrases: ILUS factorization, preconditioning, Schur complement, 1.
J. Appl. Math. & Computing Vol. 15(2004), No. 1, pp. 299-312 BILUS: A BLOCK VERSION OF ILUS FACTORIZATION DAVOD KHOJASTEH SALKUYEH AND FAEZEH TOUTOUNIAN Abstract. ILUS factorization has many desirable
More informationA Distributed Newton Method for Network Utility Maximization, II: Convergence
A Distributed Newton Method for Network Utility Maximization, II: Convergence Ermin Wei, Asuman Ozdaglar, and Ali Jadbabaie October 31, 2012 Abstract The existing distributed algorithms for Network Utility
More informationInexact Inverse Iteration for Symmetric Matrices
Inexact Inverse Iteration for Symmetric Matrices Jörg Berns-Müller Ivan G. Graham Alastair Spence Abstract In this paper we analyse inexact inverse iteration for the real symmetric eigenvalue problem Av
More informationm i=1 c ix i i=1 F ix i F 0, X O.
What is SDP? for a beginner of SDP Copyright C 2005 SDPA Project 1 Introduction This note is a short course for SemiDefinite Programming s SDP beginners. SDP has various applications in, for example, control
More informationNotes on PCG for Sparse Linear Systems
Notes on PCG for Sparse Linear Systems Luca Bergamaschi Department of Civil Environmental and Architectural Engineering University of Padova e-mail luca.bergamaschi@unipd.it webpage www.dmsa.unipd.it/
More informationA Robust Preconditioned Iterative Method for the Navier-Stokes Equations with High Reynolds Numbers
Applied and Computational Mathematics 2017; 6(4): 202-207 http://www.sciencepublishinggroup.com/j/acm doi: 10.11648/j.acm.20170604.18 ISSN: 2328-5605 (Print); ISSN: 2328-5613 (Online) A Robust Preconditioned
More informationBOUNDS ON EIGENVALUES OF MATRICES ARISING FROM INTERIOR-POINT METHODS
Cahier du GERAD G-2012-42 BOUNDS ON EIGENVALUES OF MATRICES ARISING FROM INTERIOR-POINT METHODS CHEN GREIF, ERIN MOULDING, AND DOMINIQUE ORBAN Abstract. Interior-point methods feature prominently among
More informationPreliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012
Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.
More informationNumerical Methods. Elena loli Piccolomini. Civil Engeneering. piccolom. Metodi Numerici M p. 1/??
Metodi Numerici M p. 1/?? Numerical Methods Elena loli Piccolomini Civil Engeneering http://www.dm.unibo.it/ piccolom elena.loli@unibo.it Metodi Numerici M p. 2/?? Least Squares Data Fitting Measurement
More informationEE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 17
EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 17 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory May 29, 2012 Andre Tkacenko
More informationDomain decomposition on different levels of the Jacobi-Davidson method
hapter 5 Domain decomposition on different levels of the Jacobi-Davidson method Abstract Most computational work of Jacobi-Davidson [46], an iterative method suitable for computing solutions of large dimensional
More informationMS&E 318 (CME 338) Large-Scale Numerical Optimization
Stanford University, Management Science & Engineering (and ICME MS&E 38 (CME 338 Large-Scale Numerical Optimization Course description Instructor: Michael Saunders Spring 28 Notes : Review The course teaches
More informationInexact inverse iteration for symmetric matrices
Linear Algebra and its Applications 46 (2006) 389 43 www.elsevier.com/locate/laa Inexact inverse iteration for symmetric matrices Jörg Berns-Müller a, Ivan G. Graham b, Alastair Spence b, a Fachbereich
More informationMS&E 318 (CME 338) Large-Scale Numerical Optimization
Stanford University, Management Science & Engineering (and ICME) MS&E 318 (CME 338) Large-Scale Numerical Optimization 1 Origins Instructor: Michael Saunders Spring 2015 Notes 9: Augmented Lagrangian Methods
More informationAn Active Set Strategy for Solving Optimization Problems with up to 200,000,000 Nonlinear Constraints
An Active Set Strategy for Solving Optimization Problems with up to 200,000,000 Nonlinear Constraints Klaus Schittkowski Department of Computer Science, University of Bayreuth 95440 Bayreuth, Germany e-mail:
More informationReview problems for MA 54, Fall 2004.
Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on
More informationLast Time. Social Network Graphs Betweenness. Graph Laplacian. Girvan-Newman Algorithm. Spectral Bisection
Eigenvalue Problems Last Time Social Network Graphs Betweenness Girvan-Newman Algorithm Graph Laplacian Spectral Bisection λ 2, w 2 Today Small deviation into eigenvalue problems Formulation Standard eigenvalue
More informationA PRECONDITIONER FOR LINEAR SYSTEMS ARISING FROM INTERIOR POINT OPTIMIZATION METHODS
A PRECONDITIONER FOR LINEAR SYSTEMS ARISING FROM INTERIOR POINT OPTIMIZATION METHODS TIM REES AND CHEN GREIF Abstract. We explore a preconditioning technique applied to the problem of solving linear systems
More informationSecond-order cone programming
Outline Second-order cone programming, PhD Lehigh University Department of Industrial and Systems Engineering February 10, 2009 Outline 1 Basic properties Spectral decomposition The cone of squares The
More information9.1 Preconditioned Krylov Subspace Methods
Chapter 9 PRECONDITIONING 9.1 Preconditioned Krylov Subspace Methods 9.2 Preconditioned Conjugate Gradient 9.3 Preconditioned Generalized Minimal Residual 9.4 Relaxation Method Preconditioners 9.5 Incomplete
More informationCourse Notes: Week 1
Course Notes: Week 1 Math 270C: Applied Numerical Linear Algebra 1 Lecture 1: Introduction (3/28/11) We will focus on iterative methods for solving linear systems of equations (and some discussion of eigenvalues
More informationAlgorithms for constrained local optimization
Algorithms for constrained local optimization Fabio Schoen 2008 http://gol.dsi.unifi.it/users/schoen Algorithms for constrained local optimization p. Feasible direction methods Algorithms for constrained
More informationA PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE
Yugoslav Journal of Operations Research 24 (2014) Number 1, 35-51 DOI: 10.2298/YJOR120904016K A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE BEHROUZ
More informationElementary linear algebra
Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The
More informationDIPARTIMENTO DI MATEMATICA
DIPARTIMENTO DI MATEMATICA Complesso universitario, via Vivaldi - 81100 Caserta ON MUTUAL IMPACT OF NUMERICAL LINEAR ALGEBRA AND LARGE-SCALE OPTIMIZATION WITH FOCUS ON INTERIOR POINT METHODS Marco D Apuzzo
More informationMath 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.
Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,
More informationIncomplete Factorization Preconditioners for Least Squares and Linear and Quadratic Programming
Incomplete Factorization Preconditioners for Least Squares and Linear and Quadratic Programming by Jelena Sirovljevic B.Sc., The University of British Columbia, 2005 A THESIS SUBMITTED IN PARTIAL FULFILMENT
More informationLECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel
LECTURE NOTES on ELEMENTARY NUMERICAL METHODS Eusebius Doedel TABLE OF CONTENTS Vector and Matrix Norms 1 Banach Lemma 20 The Numerical Solution of Linear Systems 25 Gauss Elimination 25 Operation Count
More informationPenalty and Barrier Methods General classical constrained minimization problem minimize f(x) subject to g(x) 0 h(x) =0 Penalty methods are motivated by the desire to use unconstrained optimization techniques
More informationOn the Local Convergence of an Iterative Approach for Inverse Singular Value Problems
On the Local Convergence of an Iterative Approach for Inverse Singular Value Problems Zheng-jian Bai Benedetta Morini Shu-fang Xu Abstract The purpose of this paper is to provide the convergence theory
More informationSOLVING SPARSE LINEAR SYSTEMS OF EQUATIONS. Chao Yang Computational Research Division Lawrence Berkeley National Laboratory Berkeley, CA, USA
1 SOLVING SPARSE LINEAR SYSTEMS OF EQUATIONS Chao Yang Computational Research Division Lawrence Berkeley National Laboratory Berkeley, CA, USA 2 OUTLINE Sparse matrix storage format Basic factorization
More informationExperiments with iterative computation of search directions within interior methods for constrained optimization
Experiments with iterative computation of search directions within interior methods for constrained optimization Santiago Akle and Michael Saunders Institute for Computational Mathematics and Engineering
More informationA strongly polynomial algorithm for linear systems having a binary solution
A strongly polynomial algorithm for linear systems having a binary solution Sergei Chubanov Institute of Information Systems at the University of Siegen, Germany e-mail: sergei.chubanov@uni-siegen.de 7th
More information6-1 The Positivstellensatz P. Parrilo and S. Lall, ECC
6-1 The Positivstellensatz P. Parrilo and S. Lall, ECC 2003 2003.09.02.10 6. The Positivstellensatz Basic semialgebraic sets Semialgebraic sets Tarski-Seidenberg and quantifier elimination Feasibility
More informationPreconditioning via Diagonal Scaling
Preconditioning via Diagonal Scaling Reza Takapoui Hamid Javadi June 4, 2014 1 Introduction Interior point methods solve small to medium sized problems to high accuracy in a reasonable amount of time.
More information1 Number Systems and Errors 1
Contents 1 Number Systems and Errors 1 1.1 Introduction................................ 1 1.2 Number Representation and Base of Numbers............. 1 1.2.1 Normalized Floating-point Representation...........
More informationLecture 15 Newton Method and Self-Concordance. October 23, 2008
Newton Method and Self-Concordance October 23, 2008 Outline Lecture 15 Self-concordance Notion Self-concordant Functions Operations Preserving Self-concordance Properties of Self-concordant Functions Implications
More informationDual methods and ADMM. Barnabas Poczos & Ryan Tibshirani Convex Optimization /36-725
Dual methods and ADMM Barnabas Poczos & Ryan Tibshirani Convex Optimization 10-725/36-725 1 Given f : R n R, the function is called its conjugate Recall conjugate functions f (y) = max x R n yt x f(x)
More informationIP-PCG An interior point algorithm for nonlinear constrained optimization
IP-PCG An interior point algorithm for nonlinear constrained optimization Silvia Bonettini (bntslv@unife.it), Valeria Ruggiero (rgv@unife.it) Dipartimento di Matematica, Università di Ferrara December
More informationAnalysis of Block LDL T Factorizations for Symmetric Indefinite Matrices
Analysis of Block LDL T Factorizations for Symmetric Indefinite Matrices Haw-ren Fang August 24, 2007 Abstract We consider the block LDL T factorizations for symmetric indefinite matrices in the form LBL
More information