Newton's Method for Computing the Nearest Correlation Matrix with a Simple Upper Bound


J Optim Theory Appl (2010) 147:546–568
DOI 10.1007/s10957-…

Newton's Method for Computing the Nearest Correlation Matrix with a Simple Upper Bound

Qingna Li · Donghui Li · Houduo Qi

Published online: July 2010
© Springer Science+Business Media, LLC 2010

Abstract The standard nearest correlation matrix can be efficiently computed by exploiting a recent development of Newton's method (Qi and Sun in SIAM J. Matrix Anal. Appl. 28:360–385, 2006). Two key mathematical properties, which ensure the efficiency of the method, are the strong semismoothness of the projection operator onto the positive semidefinite cone and constraint nondegeneracy at every feasible point. In the case where a simple upper bound is enforced in the nearest correlation matrix in order to improve its condition number, it is shown, among other things, that constraint nondegeneracy does not always hold, meaning that Newton's method may lose its quadratic convergence. Despite this, the numerical results show that Newton's method is still extremely efficient even for large scale problems. Through regularization, the developed method is applied to semidefinite programming problems with simple bounds.

Keywords Semismooth Newton method · Constraint nondegeneracy · Quadratic convergence · Correlation matrix

Communicated by X.-Q. Yang.

D. Li's research is supported by a major project of the Ministry of Education of China and by the NSF of China. H. Qi's research was partially supported by EPSRC Grant EP/D50535/.

Q. Li: College of Mathematics and Econometrics, Hunan University, Changsha 410082, P.R. China. liqingna@yahoo.com.cn
D. Li (corresponding author): School of Mathematical Sciences, South China Normal University, Guangzhou 510631, P.R. China. dhli@scnu.edu.cn
H. Qi: School of Mathematics, The University of Southampton, Highfield, Southampton SO17 1BJ, UK. hdqi@soton.ac.uk

1 Introduction

Since Higham [1] justified it as a challenging problem, the nearest correlation matrix problem has attracted considerable interest, and a number of good numerical methods have been made available. These include, for example, the quasi-Newton methods of Malick [2] and Boyd and Xiao [3], the primal-dual interior-point methods of Toh, Tütüncü, and Todd [4] and of Toh [5], Newton's method of Qi and Sun [6] and its preconditioned version by Borsdorf and Higham [7], and the smoothing Newton method of Gao and Sun [8]. In particular, Qi and Sun's Newton method has recently been implemented in Fortran by NAG (Numerical Algorithms Group) and appears to be the most efficient and robust algorithm. We would also like to point out that there has been a considerable amount of research on this problem in finance journals. We refer the reader to [9] for more references and choose to omit any comment on them here.

The purpose of this paper is to extend Newton's method to the nearest correlation matrix problem with a simple upper bound, an important problem that has a wide range of applications. Algorithmically, such an extension is straightforward. However, as we will see, some mathematical properties supporting the original algorithm no longer hold, posing the question whether the method is still the first choice. Our numerical experiments will convincingly settle this question.

Let us first describe the nearest correlation matrix problem. Let S^n be the space of all n × n symmetric matrices endowed with the standard inner product, and let S^n_+ be the positive semidefinite cone in S^n. We often use X ⪰ 0 to denote X ∈ S^n_+. Let G ∈ S^n be given. The nearest correlation matrix problem is to find the X ∈ S^n that is the optimal solution of the following problem:

  min (1/2) ‖X − G‖²
  s.t. X_ii = 1, i = 1, …, n,
       X ⪰ 0,                                                        (1)

where ‖·‖ denotes the Frobenius norm in S^n induced by the standard inner product ⟨X, Y⟩ = trace(X^T Y). Note that a correlation matrix X is defined as X ⪰ 0 with all its diagonal elements equal to one. Therefore, (1) is to find, in the set of all correlation matrices, the nearest one to a given matrix G, measured by the Frobenius norm.

For an easy description of the constraints, we define A : S^n → R^n to be the diagonal operator, i.e., A(X) = diag(X) = [X_11, …, X_nn]^T. Let e denote the vector of all ones. Then X_ii = 1 for i = 1, …, n if and only if A(X) = e. Rather than solving (1) directly, quasi-Newton methods as well as Newton's method are applied to solve the dual problem (in minimization form):

  min_{y ∈ R^n} θ(y) := (1/2) ‖Π_{S^n_+}(A^* y + G)‖² − e^T y,        (2)

where y ∈ R^n is the Lagrangian multiplier corresponding to the equality constraints in (1); Π_{S^n_+}(X) denotes the orthogonal projection of X ∈ S^n onto S^n_+; and A^* : R^n → S^n is the adjoint operator of A. In our case, A^* y = Diag(y), the diagonal matrix formed by the vector y. For the derivation of the dual problem, we refer to [10, 11] or [2, 3, 12] for more details.
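To make the dual formulation concrete, the following minimal MATLAB sketch evaluates θ(y) and ∇θ(y) for (2) via a spectral decomposition; the function name dual_theta and its interface are our own illustration, not code from the paper or from any of the packages cited here.

    % Minimal sketch: value and gradient of the dual function (2),
    % with A(X) = diag(X) and hence A*y = Diag(y).
    function [f, g] = dual_theta(y, G)
        Z = G + diag(y);                % A*y + G
        [P, D] = eig((Z + Z')/2);       % symmetrize for numerical safety
        lam = max(diag(D), 0);          % eigenvalues of the projection
        PZ = P*diag(lam)*P';            % Pi_{S^n_+}(A*y + G)
        f = 0.5*norm(PZ, 'fro')^2 - sum(y);   % theta(y)
        g = diag(PZ) - ones(length(y), 1);    % grad = A(PZ) - e
    end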

Once a solution ȳ of the dual problem is found, one can easily recover the optimal solution X̄ of (1) by X̄ = Π_{S^n_+}(A^* ȳ + G). Despite (2) being unconstrained, solution methods are hindered by the fact that θ(·) is only once differentiable. Quasi-Newton methods are natural choices [2, 3]. A great deal of effort was made in [6] to develop Newton's method. In fact, the gradient function

  ∇θ(y) = A(Π_{S^n_+}(A^* y + G)) − e

is strongly semismooth, because the projection operator Π_{S^n_+}(·) is so [13]. Moreover, one can characterize the generalized Hessian ∂²θ(y) in the sense of Clarke, so that Newton's method takes the following form:

  y^{k+1} = y^k − V_k^{−1} ∇θ(y^k),  V_k ∈ ∂²θ(y^k),  k = 0, 1, 2, …,    (3)

which is proved to be quadratically convergent. Such a method is known as a semismooth Newton method [14]; see also [15]. There are two contributors to the quadratic convergence of the semismooth Newton method [6]:

(i) ∇θ(·) is strongly semismooth (see above); and
(ii) every element V ∈ ∂²θ(ȳ) is positive definite [6, Proposition 3.6].

The reason for (ii) is that

(iii) constraint nondegeneracy for (1) holds at X̄.

A consequence of (ii) is that

(iv) the dual problem (2) has a unique solution ȳ [6, Corollary 3.8].

Now let us consider the nearest correlation matrix problem with a simple upper bound:

  min_{X ∈ S^n} (1/2) ‖X − G‖²
  s.t. X_ii = 1, i = 1, …, n,
       κI ⪰ X ⪰ 0,                                                   (4)

where κ > 1 is a given number¹ and κI ⪰ X ⪰ 0 means κI − X ⪰ 0 and X ⪰ 0. Denote B := {X ∈ S^n : κI ⪰ X ⪰ 0}. This type of constraint arises in many applications, such as the nearest correlation matrix with a prescribed condition number [16], minimizing the sum of the largest q eigenvalues of an affine function of symmetric matrices [17, (4.7)], and finding the search direction matrix in rank constrained problems [18, p. 44], to name just a few. We will also extend our algorithm to semidefinite programming problems with a simple upper bound via quadratic regularization, as another example in our numerical part.

In [6], a technical lemma [6, Lemma 3.3] was developed in order to prove (ii). It was subsequently proved in [24, Proposition 4.1] that this lemma actually implies (iii).

¹ If κ = 1, the problem has the obvious solution X̄ = I. And if κ < 1, the problem is not feasible.

It is straightforward to develop Newton's method by following the steps in [6]. In fact, the dual problem of (4) is (in minimization form):

  min_{y ∈ R^n} θ(y) := ⟨Π_B(A^* y + G), A^* y + G⟩ − (1/2) ‖Π_B(A^* y + G)‖² − e^T y,   (5)

where y ∈ R^n is the Lagrangian multiplier corresponding to the equality constraints in (4), and Π_B(X) is the orthogonal projection of X ∈ S^n onto B. We also have that θ(·) is once continuously differentiable, with

  ∇θ(y) = A(Π_B(A^* y + G)) − e.

Because Π_B(X) = Π_{S^n_+}(X) − Π_{S^n_+}(X − κI), ∇θ(·) is a difference of two strongly semismooth functions, and therefore

(i′) ∇θ(·) is strongly semismooth.

Moreover, we can characterize ∂²θ(y) similarly to [6] so as to develop Newton's method. However, corresponding to (ii) and (iii), we have the following questions:

(ii′) whether every element V ∈ ∂²θ(ȳ) is positive definite; and
(iii′) whether constraint nondegeneracy for (4) still holds at X̄.

These questions are essential to ensure the quadratic convergence of Newton's method, and the answers to them are not obvious. Actually, we can prove that constraint nondegeneracy may or may not hold, depending on the spectral properties of X̄. We further prove that (ii′) is true if and only if (iii′) is true, and we also show by an example that

(iv′) the dual problem (5) may have multiple solutions.

In spite of these negative answers to (ii′) and (iii′), our numerical results show that Newton's method for (4) is almost as efficient as the original one.

The paper is organized as follows. In Sect. 2, we investigate the property of constraint nondegeneracy and its characterizations. We will show that constraint nondegeneracy does not always hold; some examples are included to demonstrate its failure. In Sect. 3, we analyze the relationship between (ii′) and (iii′) and develop a globalized Newton method. We report extensive numerical results in Sect. 4 to show the efficiency of the proposed method. Through quadratic regularization, we also apply it to semidefinite programming problems. Some final conclusions are drawn in Sect. 5.

Some words about notation: capital letters stand for matrices and small letters for vectors, while Greek letters are used for scalars. We use ∘ to denote the Hadamard product of matrices, i.e., for B, C ∈ S^n, B ∘ C = [B_ij C_ij]_{i,j=1}^n. For subsets α, β of {1, …, n}, B_αβ denotes the submatrix of B indexed by α and β. Let e_i ∈ R^n be the unit vector with i-th component one, and E_i := e_i e_i^T. E is the matrix of all ones, i.e., E = e e^T.
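Since the whole development rests on the projection Π_B, we record a small MATLAB sketch of it, exploiting the fact that Π_B clips the eigenvalues to [0, κ] (equivalently, the difference formula above); proj_B is our illustrative name, not the authors' code.

    % Sketch: projection onto B = {X : 0 <= X <= kappa*I} by clipping
    % the spectrum; equals Pi_{S^n_+}(X) - Pi_{S^n_+}(X - kappa*I).
    function PX = proj_B(X, kappa)
        [P, D] = eig((X + X')/2);
        lam = min(max(diag(D), 0), kappa);   % clip eigenvalues to [0, kappa]
        PX = P*diag(lam)*P';
    end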

2 Constraint Nondegeneracy

As mentioned in the Introduction, the property of constraint nondegeneracy is behind the good performance of Newton's method for (1). We also mentioned that this property does not always hold for (4). In this section, we study the reasons. We first state the definition of constraint nondegeneracy with respect to the constraints in (4). More details about this nondegeneracy for general constraints can be found in, e.g., [19, (4.7)] and [20–23].

Definition 2.1 Let X̄ be a feasible point of (4). Constraint nondegeneracy holds at X̄ iff

  [A; I] S^n + [{0}; lin T_B(X̄)] = [R^n; S^n],                        (6)

where the two components are stacked, or equivalently iff

  A(lin T_B(X̄)) = R^n,                                               (7)

where I is the identity mapping from S^n to S^n, T_B(X̄) is the tangent cone of B at X̄, and lin T_B(X̄) is the largest linear space contained in T_B(X̄).

The equivalence between (6) and (7) is easily proved. To study constraint nondegeneracy, we need to know lin T_B(X̄). It is easy to see that B = S^n_+ ∩ K, with K := {X ∈ S^n : κI − X ⪰ 0}. We also know that int S^n_+ ∩ int K ≠ ∅, where int denotes the interior. It follows from Bonnans and Shapiro [19] that

  T_B(X̄) = T_{S^n_+}(X̄) ∩ T_K(X̄).                                    (8)

Now suppose an X is given and has the following spectral decomposition:

  X = P Diag[λ_1, …, λ_n] P^T,                                       (9)

where P^T P = I and λ_1 ≥ ⋯ ≥ λ_n are the eigenvalues of X. We further define

  α := {i : λ_i ≥ κ},  β := {i : 0 < λ_i < κ},  γ := {i : λ_i ≤ 0}.

Recall that X̄ is a feasible point of (4). Suppose X := X̄ has the spectral decomposition (9). It is well known (see, e.g., [19, Sects. 3.4, 4.6] and [21]) that

  T_{S^n_+}(X̄) = {B : (P^T B P)_{γγ} ⪰ 0}  and  T_{S^n_+}(κI − X̄) = {B : (P^T B P)_{αα} ⪰ 0}.

By the definition of tangent cones (see, e.g., [19]), B ∈ T_{S^n_+}(κI − X̄) if and only if

  o(t) = dist(κI − X̄ + tB, S^n_+) = dist(X̄ + t(−B), κI − S^n_+) = dist(X̄ + t(−B), K),  t ≥ 0.

Once again by the definition of tangent cones, B ∈ T_{S^n_+}(κI − X̄) if and only if −B ∈ T_K(X̄). Therefore,

  T_K(X̄) = {B ∈ S^n : (P^T B P)_{αα} ⪯ 0}.

Using (8), we have the following characterization.

Lemma 2.1 Let X̄ be a feasible point of (4), and suppose it has the spectral decomposition (9) with X := X̄. Then it holds that

  T_B(X̄) = {B ∈ S^n : (P^T B P)_{αα} ⪯ 0 and (P^T B P)_{γγ} ⪰ 0}.

Moreover,

  lin T_B(X̄) = {B ∈ S^n : (P^T B P)_{αα} = 0 and (P^T B P)_{γγ} = 0}.

Lemma 2.1 yields the following characterization, which generalizes the corresponding result for constraint nondegeneracy in semidefinite programming reported in [22].

Proposition 2.1 Let X̄ be a feasible point of (4) with the spectral decomposition (9) for X := X̄. Write P = [P_α P_β P_γ]. Then constraint nondegeneracy holds at X̄ if and only if the matrices

         [ 0               P_α^T E_i P_β   P_α^T E_i P_γ ]
  B_i := [ P_β^T E_i P_α   P_β^T E_i P_β   P_β^T E_i P_γ ],   i = 1, …, n,
         [ P_γ^T E_i P_α   P_γ^T E_i P_β   0             ]

are linearly independent.

Proof By a simple calculation of linear algebra, (7) is equivalent to

  N(A) + lin T_B(X̄) = S^n,                                          (10)

where N(A) is the null space of A. Again by simple linear algebra, (10) is equivalent to

  N(A)^⊥ ∩ (lin T_B(X̄))^⊥ = {0}.                                    (11)

Suppose (10) holds. We prove the necessity by contradiction. Assume that the B_i, i = 1, …, n, are linearly dependent. Then there exists 0 ≠ y ∈ R^n such that Σ_{i=1}^n y_i B_i = 0, which implies, noting that A^* y = Σ_{i=1}^n y_i E_i,

              [ 0                   P_α^T (A^*y) P_β   P_α^T (A^*y) P_γ ]
  Σ y_i B_i = [ P_β^T (A^*y) P_α   P_β^T (A^*y) P_β   P_β^T (A^*y) P_γ ] = 0.   (12)
              [ P_γ^T (A^*y) P_α   P_γ^T (A^*y) P_β   0                 ]

Consequently, we have

                 [ P_α^T (A^*y) P_α   0   0                 ]
  P^T (A^*y) P = [ 0                  0   0                 ].                  (13)
                 [ 0                  0   P_γ^T (A^*y) P_γ  ]

Apparently, A^* y ∈ N(A)^⊥. Furthermore, Lemma 2.1 and (13) together imply that, for any B ∈ lin T_B(X̄),

  ⟨A^* y, B⟩ = ⟨P^T (A^* y) P, P^T B P⟩ = 0.

Hence, 0 ≠ A^* y ∈ N(A)^⊥ ∩ (lin T_B(X̄))^⊥, which contradicts (11). This proves the linear independence of the B_i.

Now suppose the B_i are linearly independent; we prove the sufficiency by contradiction, assuming N(A)^⊥ ∩ (lin T_B(X̄))^⊥ ≠ {0}. Let 0 ≠ Y ∈ S^n belong to this intersection. Because Y ∈ N(A)^⊥, there exists 0 ≠ y ∈ R^n such that Y = A^* y. The fact that Y ∈ (lin T_B(X̄))^⊥ implies

  ⟨Y, B⟩ = ⟨P^T (A^* y) P, P^T B P⟩ = 0,  ∀ B ∈ lin T_B(X̄).

The structure of lin T_B(X̄) in Lemma 2.1 yields (13), which in turn implies Σ_{i=1}^n y_i B_i = 0 for y ≠ 0, contradicting the linear independence of the B_i. This establishes (11) and hence constraint nondegeneracy. □

The characterization in Proposition 2.1 will be used to study whether constraint nondegeneracy holds. First, let us enumerate the elements in α, β, and γ by

  α = {1, …, s},  β = {s+1, …, r},  γ = {r+1, …, n}.                 (14)

We have the following result.

Proposition 2.2 Let X̄ be a feasible point of (4) with the spectral decomposition (9) for X := X̄. Then the following statements hold.

(I) If α = ∅ or γ = ∅, constraint nondegeneracy holds at X̄.
(II) If β = ∅, constraint nondegeneracy fails at X̄.
(III) In the remaining cases (i.e., α ≠ ∅, β ≠ ∅, γ ≠ ∅), constraint nondegeneracy may or may not hold at X̄.

Proof (I) If α = ∅ or γ = ∅, then either the upper bound κI − X ⪰ 0 or the lower bound X ⪰ 0 is inactive. That is, near X̄ the problem (4) becomes of the type of (1). Constraint nondegeneracy for the latter case has been proved in [24, Proposition 4.1]. Therefore, we omit the proof here.

(II) First of all, we point out that, when β = ∅, both α and γ must be nonempty, due to the fact that κ > 1. Therefore the claim in (II) does not contradict (I). We only need to prove that the B_i in Proposition 2.1 are linearly dependent. By (12), linear independence means the following implication (noting β = ∅):

  P_α^T (A^* y) P_γ = 0  ⟹  y = 0.                                  (15)

Using (14), the left-hand side of (15) reduces to the system

  (p_i ∘ p_j)^T y = 0,  i = 1, …, s,  j = r+1, …, n,

where p_i is the i-th column of P; indeed, the (i, j)-entry of P_α^T (A^* y) P_γ is p_i^T Diag(y) p_j = (p_i ∘ p_j)^T y. Noting that (p_i ∘ p_j)^T e = p_i^T p_j = 0 for any i ≠ j because P^T P = I, we see that y = e is a nonzero solution of the left-hand side of (15), which means that the B_i are linearly dependent.

We demonstrate case (III) by means of two examples below (Examples 2.1 and 2.2). □

Remark 2.1 It is well known that constraint nondegeneracy coincides with the linear independence constraint qualification (LICQ) in the context of classical nonlinear programming. Consider a linear system with simple bounds:

  ⟨a_i, x⟩ = b_i,  i = 1, …, m,  and  l ≤ x ≤ u,                     (16)

where a_i ∈ R^n, i = 1, …, m, b ∈ R^m, and l, u ∈ R^n are lower and upper bounds for x. Similarly, define α = {i : x_i = u_i}, β = {i : l_i < x_i < u_i}, and γ = {i : x_i = l_i}. If β = ∅, then α ∪ γ = {1, …, n}: the n active bound gradients already span R^n, and hence LICQ is never satisfied at x when m ≥ 1. Proposition 2.2(II) generalizes this simple fact to the context of semidefinite programming.

Remark 2.2 As mentioned in the proof of (II), in the case β = ∅ both α and γ are nonempty. In other words, (I) and (II) do not contradict each other.

Remark 2.3 For a given feasible X̄, if (II) happens, it means that κs = trace(X̄) = n, as all the other eigenvalues do not contribute to the trace. So (II) can only occur for very special choices of κ.
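Proposition 2.1 can also be tested numerically: vectorize the matrices B_i and check whether they have full rank. The following MATLAB sketch does this for a feasible X and bound kappa; the helper name and the tolerance tol are our own illustrative choices, not the authors' code.

    % Sketch: numerical test of the linear-independence condition in
    % Proposition 2.1 at a feasible X (tolerance tol chosen ad hoc).
    function nondegenerate = check_nondegeneracy(X, kappa)
        tol = 1e-8;
        [P, D] = eig((X + X')/2);
        lam = diag(D);
        alpha = find(lam >= kappa - tol);
        gamma = find(lam <= tol);
        n = size(X, 1);  V = zeros(n^2, n);
        for i = 1:n
            Ei = zeros(n);  Ei(i,i) = 1;
            Bi = P'*Ei*P;               % congruence transform of E_i
            Bi(alpha, alpha) = 0;       % zero (alpha,alpha) block, as in B_i
            Bi(gamma, gamma) = 0;       % zero (gamma,gamma) block
            V(:, i) = Bi(:);
        end
        nondegenerate = (rank(V) == n); % holds iff the B_i are independent
    end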

Example 2.1 In this example, n = 4, κ = 3, and β ≠ ∅. Constraint nondegeneracy holds at a feasible point X̄ with the spectral decomposition X̄ = P Diag[3, 1, 0, 0] P^T, for which α = {1}, β = {2}, and γ = {3, 4}. The resulting matrices B_1, B_2, B_3, B_4 are linearly independent, and constraint nondegeneracy therefore holds at X̄.

Example 2.2 In this example, again n = 4, κ = 3, and β ≠ ∅. Constraint nondegeneracy does not hold at a feasible point X̄ with the spectral decomposition X̄ = P Diag[3, 1, 0, 0] P^T for a different orthogonal matrix P; here too α = {1}, β = {2}, γ = {3, 4}.

The matrices B_1, …, B_4 are now linearly dependent, and hence constraint nondegeneracy fails to hold at X̄.

3 Generalized Hessian and Newton's Method

It follows from (3) that the availability of an element of the generalized Hessian ∂²θ(y^k) is essential for a practical implementation of Newton's method. Its (local) quadratic convergence is ensured when every element of the generalized Hessian at the solution ȳ is positive definite. We will see in this section that not only can we calculate such elements, but we can also show that their positive definiteness is equivalent to the constraint nondegeneracy studied in the last section. Finally, we give a globalized version of Newton's method, which is used in our numerical experiments.

For a given y ∈ R^n, let G + A^* y have the spectral decomposition (9) with X := G + A^* y. Because G + A^* y may not be feasible to (4), it may have eigenvalues larger than κ or negative eigenvalues. We need the following index sets:

  α_1 := {i : λ_i > κ},  α_2 := {i : λ_i = κ},  β := {i : 0 < λ_i < κ},
  γ_1 := {i : λ_i = 0},  γ_2 := {i : λ_i < 0}.                        (17)

We define M to be the collection of all symmetric matrices M of the following 5 × 5 block form (with blocks indexed by α_1, α_2, β, γ_1, γ_2):

  [ 0               0               M_{α_1β}      M_{α_1γ_1}    M_{α_1γ_2} ]
  [ 0               M_{α_2α_2}      E_{α_2β}      E_{α_2γ_1}    M_{α_2γ_2} ]
  [ M_{α_1β}^T      E_{βα_2}        E_{ββ}        E_{βγ_1}      M_{βγ_2}   ]
  [ M_{α_1γ_1}^T    E_{γ_1α_2}      E_{γ_1β}      M_{γ_1γ_1}    0          ]
  [ M_{α_1γ_2}^T    M_{α_2γ_2}^T    M_{βγ_2}^T    0             0          ]

where M_{α_2α_2} ∈ S^{|α_2|} with M_ij ∈ [0, 1] for i, j ∈ α_2; M_{γ_1γ_1} ∈ S^{|γ_1|} with M_ij ∈ [0, 1] for i, j ∈ γ_1; and

  M_ij := (κ − λ_j)/(λ_i − λ_j)   for i ∈ α_1, j ∈ β,
          κ/λ_i                   for i ∈ α_1, j ∈ γ_1,
          κ/(λ_i − λ_j)           for i ∈ α_1, j ∈ γ_2,
          κ/(κ − λ_j)             for i ∈ α_2, j ∈ γ_2,
          λ_i/(λ_i − λ_j)         for i ∈ β,   j ∈ γ_2.               (18)

We note that the set M reduces to the one used in [6, p. 367] if there is no upper bound κI ⪰ X. Following the proof of [6, Lemma 3.5], with slight modifications to cope with the upper bound, we have the following characterization of the generalized Hessian (in [6] it is called the generalized Jacobian of the gradient map ∇θ).

Lemma 3.1 Given y ∈ R^n, let G + A^* y have the spectral decomposition (9) with X := G + A^* y. For any h ∈ R^n, we have

  ∂²θ(y) h ⊆ {A(W H) : W ∈ W},                                       (19)

where H = A^* h and W := {W : W H̃ = P (M ∘ (P^T H̃ P)) P^T for all H̃ ∈ S^n, with M ∈ M}.

Although (19) is an inclusion relationship for ∂²θ(y), characterized through the set M, we can actually compute two particular elements, defined by the following two matrices:

  M̲ := the block matrix above with M_{α_2α_2} = 0 and M_{γ_1γ_1} = 0,            (20)

and

  M̄ := the block matrix above with M_{α_2α_2} = E_{α_2α_2} and M_{γ_1γ_1} = E_{γ_1γ_1},   (21)

where the remaining elements M_ij in M̲ and M̄ are calculated by (18). For the reasons why M̲ and M̄ define two elements of ∂²θ(y), we refer to [6, Sect. 5(a)].

The following result states that the nonsingularity of all matrices in ∂²θ(ȳ) at the dual optimal point ȳ is equivalent to constraint nondegeneracy at the corresponding primal optimal point X̄.

Proposition 3.1 Let ȳ be an optimal solution of (5) and, correspondingly, let X̄ := Π_B(A^* ȳ + G) be the optimal solution of (4). The following are equivalent:

(i) every V ∈ ∂²θ(ȳ) is positive definite;
(ii) constraint nondegeneracy with respect to the constraints in (4) holds at X̄.

Proof Suppose A^* ȳ + G has the spectral decomposition (9) with X := A^* ȳ + G, and let the index sets be defined as in (17). Let α := α_1 ∪ α_2 and γ := γ_1 ∪ γ_2. Choose an arbitrary V ∈ ∂²θ(ȳ). By Lemma 3.1, there exists an M ∈ M such that, for any h ∈ R^n, V h = A(P (M ∘ (P^T H P)) P^T), where H = A^* h. Furthermore,

  ⟨h, V h⟩ = ⟨h, A(P (M ∘ (P^T H P)) P^T)⟩ = ⟨A^* h, P (M ∘ (P^T H P)) P^T⟩ = ⟨P^T H P, M ∘ (P^T H P)⟩.

Let H̃ = P^T H P. Then

  ⟨h, V h⟩ = ⟨H̃, M ∘ H̃⟩ = Σ_{i,j=1}^n M_ij H̃_ij²
           ≥ ω ( Σ_{i∈α} Σ_{j∈β∪γ} H̃_ij² + Σ_{i∈β} Σ_{j=1}^n H̃_ij² ),          (22)

where ω := min{M_ij : (i, j) ∈ (α × (β ∪ γ)) ∪ ((α ∪ β) × γ)} > 0; by (18), each such entry is positive and at most one, while M_ij = 1 for i, j ∈ β.

Now suppose (ii) holds, that is, constraint nondegeneracy holds at X̄. By Proposition 2.1, the B_i are linearly independent, which implies the following implication by referring to (12) and (13) (noting that H = A^* h and H̃_ij := p_i^T H p_j):

  H̃_ij = 0 for i ∈ α, j ∈ β ∪ γ, and H̃_ij = 0 for i ∈ β, j = 1, …, n  ⟹  h = 0.   (23)

By the lower bound in (22), ⟨h, V h⟩ = 0 implies the left-hand side of (23), which in turn, by constraint nondegeneracy, implies h = 0. In other words, ⟨h, V h⟩ > 0 for any h ≠ 0. This proves that every V ∈ ∂²θ(ȳ) is positive definite.

Now suppose (i) holds; we prove that constraint nondegeneracy holds at X̄. Choose the V defined by M̲ in (20). By the structure of M̲, we have the following equivalence (referring to (22)): for h ≠ 0,

  ⟨h, V h⟩ = 0  ⟺  the left-hand side of (23) holds.

That is, under the positive definiteness of the V defined by (20), (23) holds, which is exactly the constraint nondegeneracy characterized in Proposition 2.1. This completes the proof. □

We have two comments regarding Proposition 3.1.

(i) For problem (1), it was known that constraint nondegeneracy at the primal optimal solution implies positive definiteness of every element of ∂²θ(ȳ) (see [6, Proposition 3.6] and [24, Proposition 4.1]). The converse relationship seems new even for this case.

(ii) There is an interesting result closely related to Proposition 3.1. Consider the KKT system of (4):

  X − A^* y − U + V = G,
  A(X) = b,
  X ⪰ 0,  U ⪰ 0,  ⟨X, U⟩ = 0,
  κI − X ⪰ 0,  V ⪰ 0,  ⟨κI − X, V⟩ = 0,

where y ∈ R^n and U, V ∈ S^n are Lagrange multipliers satisfying the KKT system above. It is well known that the KKT system is equivalent to the following nonsmooth equation:

  F(X, y, U, V) := [ X − A^* y − U + V − G;
                     A(X) − b;
                     X − Π_{S^n_+}(X − U);
                     (κI − X) − Π_{S^n_+}(κI − X − V) ] = 0.

[24, Theorem 4.1], when applied to problem (4), implies that constraint nondegeneracy at X̄ is equivalent to the nonsingularity of every matrix in ∂F(X̄, ȳ, Ū, V̄), the generalized Jacobian of F at (X̄, ȳ, Ū, V̄). We note that constraint nondegeneracy at the primal optimal solution X̄ implies the uniqueness of the Lagrange multiplier (ȳ, Ū, V̄) [19, Proposition 4.75]. We also note that the matrices in ∂F(X̄, ȳ, Ū, V̄) are nonsymmetric, while every matrix in ∂²θ(ȳ) is symmetric. Proposition 3.1 establishes the equivalence of the nonsingularity of ∂²θ(ȳ) and that of ∂F(X̄, ȳ, Ū, V̄), where X̄ and ȳ are related by X̄ = Π_B(A^* ȳ + G) and ȳ is an optimal solution of (5). It may be possible to establish this equivalence directly.

We are ready to state our globalized Newton method.

Algorithm 3.1 (Globalized Newton Method)

Step 0. Initial point y^0 ∈ R^n. Tolerance Tol > 0. Parameters 0 < ρ < 1, 0 < σ < 1/2, 0 < η < 1. Iteration counter k := 0.

Step 1. Compute ∇θ(y^k). If ‖∇θ(y^k)‖ ≤ Tol, stop; otherwise, go to Step 2.

Step 2. Select V_k ∈ ∂²θ(y^k), and use an iterative solver on

  (V_k + ε_k I) d^k = −∇θ(y^k)                                       (24)

to obtain d^k such that

  ‖∇θ(y^k) + V_k d^k‖ ≤ η_k ‖∇θ(y^k)‖,                               (25)

where η_k := min{η, ‖∇θ(y^k)‖} and ε_k := ‖∇θ(y^k)‖. If (25) is not satisfied, or if the condition

  ∇θ(y^k)^T d^k ≤ −η_k ‖d^k‖²                                        (26)

fails, let d^k := −B_k^{−1} ∇θ(y^k), where B_k is any symmetric positive definite matrix in S^n.

Step 3. Choose the smallest nonnegative integer l_k such that

  θ(y^k + ρ^{l_k} d^k) ≤ θ(y^k) + σ ρ^{l_k} ∇θ(y^k)^T d^k.

Set t_k := ρ^{l_k}.

Step 4. Set y^{k+1} := y^k + t_k d^k, k := k + 1, and go to Step 1.
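Step 2 only needs products V_k h, which by Lemma 3.1 can be formed from the spectral data of G + A^* y^k without ever assembling V_k. Here is a MATLAB sketch of this product, using the element of ∂²θ(y) defined by M̲ in (20); the name hess_vec and the tie-breaking tolerance are our own choices.

    % Sketch: h -> V*h from Lemma 3.1. The weight matrix is built via
    % divided differences of the clipping function, which reproduces
    % (18) and the zero/one blocks of M_lower in (20).
    function Vh = hess_vec(h, P, lam, kappa)
        n = length(h);  tol = 1e-10;
        phi = min(max(lam, 0), kappa);        % clipped eigenvalues
        M = zeros(n);
        for i = 1:n
            for j = 1:n
                if abs(lam(i) - lam(j)) > tol
                    M(i,j) = (phi(i) - phi(j))/(lam(i) - lam(j));
                elseif lam(i) > tol && lam(i) < kappa - tol
                    M(i,j) = 1;               % beta block of ones
                end                           % ties at 0 or kappa: keep 0
            end
        end
        H = P'*diag(h)*P;                     % P^T (A*h) P
        W = P*(M .* H)*P';                    % P (M o H) P^T
        Vh = diag(W);                         % A(.) = diag
    end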

Remark 3.1 The framework of Algorithm 3.1 is almost the same as that of [6, Algorithm 5.1]. There are other ways to globalize Newton's method (see, e.g., [15]). We choose to follow [6], as our primary purpose is to extend the Newton method in [6] to problem (4).

Remark 3.2 It is easy to see from (22) that V ∈ ∂²θ(y) is in general only positive semidefinite. In (24), we add the regularization term ε_k I to V_k to make the coefficient matrix of the linear equation always positive definite. Due to the fact that ∇θ(·) is strongly semismooth [13], this regularization term does not impair the quadratic convergence of Newton's method.

Remark 3.3 In Step 2, we propose to use an iterative solver for the linear equation (24) when the size of the equation is large. In the numerical part, we use CG for our test problems. When n is not large, it is also possible to apply a direct solver to (24).

Remark 3.4 If the Newton direction d^k fails to satisfy (25) or (26), we simply use the quasi-Newton direction d^k := −B_k^{−1} ∇θ(y^k). This strategy ensures the global convergence of Algorithm 3.1. In our numerical experiments, this quasi-Newton direction was never used.

Remark 3.5 Because the generalized Slater condition holds for (4), the dual function θ(y) is coercive [10]. Therefore, the sequence {y^k} generated by the algorithm remains bounded and hence has an accumulation point. Starting from this basic fact, the global and local convergence of the algorithm can be analyzed just as in [6, Theorem 5.3]. We omit such a lengthy analysis, as the proofs are almost identical, and simply state the convergence result below.

Theorem 3.1 Suppose that {B_k} and {B_k^{−1}} in Algorithm 3.1 are bounded, and let {y^k} be a sequence generated by Algorithm 3.1. Then the following convergence statements hold.

(i) {y^k} is bounded.
(ii) Any accumulation point of {y^k} is an optimal solution of (5).
(iii) Let ȳ be an accumulation point of {y^k} and X̄ := Π_B(A^* ȳ + G). Suppose that constraint nondegeneracy with respect to the constraints in (4) holds at X̄. Then {y^k} converges quadratically to ȳ.

We end this section with an example showing that the dual problem (5) may have multiple solutions, as mentioned in (iv′) in the Introduction.

Example 3.1 Let n = 3 and κ = 3. For a suitable G ∈ S^3, the dual problem (5) has two distinct optimal solutions, ȳ = [0, 0, 0]^T and ŷ, both satisfying ∇θ(y) = 0, while the optimal solution of the primal problem (4), X̄ := Π_B(A^* ȳ + G) = Π_B(A^* ŷ + G), is constraint degenerate.
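For orientation, here is a condensed MATLAB sketch of the main loop of Algorithm 3.1 applied to the dual (5). The helpers theta_fg (the analogue of the earlier dual_theta sketch, for (5)), hess_vec, and proj_B are the hypothetical routines sketched above; the quasi-Newton safeguard of Step 2 is omitted.

    % Condensed sketch of Algorithm 3.1 (safeguard branch omitted);
    % G, kappa, n are assumed given.
    rho = 0.5;  sigma = 1e-4;  Tol = 1e-6;  y = zeros(n, 1);
    [f, g] = theta_fg(y, G, kappa);
    while norm(g) > Tol
        Z = G + diag(y);  [P, D] = eig((Z + Z')/2);  lam = diag(D);
        ek = norm(g);  etak = min(0.5, ek);
        Afun = @(h) hess_vec(h, P, lam, kappa) + ek*h;   % (V_k + eps_k I)h
        d = pcg(Afun, -g, etak);          % inexact CG solve of (24)
        l = 0;                            % Armijo line search, Step 3
        while theta_fg(y + rho^l*d, G, kappa) > f + sigma*rho^l*(g'*d)
            l = l + 1;
        end
        y = y + rho^l*d;                  % Step 4
        [f, g] = theta_fg(y, G, kappa);
    end
    X = proj_B(G + diag(y), kappa);       % recover the primal solution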

4 Numerical Results

In this section, we conduct extensive numerical experiments on a variety of problems gathered from the existing literature; we also conduct a numerical comparison with other methods. We ran all tests in MATLAB (2006b) on a computer with a Pentium(R) D 3.00 GHz CPU. The test problems are listed below; among them, E1–E3 and E5 come from [6], and E4 comes from [5, Sect. 6].

4.1 Test Problems

We gather the test problems in this section, with more explanations to follow where necessary.

E1. G = C + αR, where C is generated by MATLAB's gallery('randcorr', n) and R is a random n × n symmetric matrix with R_ij ∈ [−1, 1], i, j = 1, …, n. The MATLAB code for generating R is: R = rand(n); R = (R + R')/2; R = 2*R - 1; with α = 0.01, 0.1, 1, 10.

E2. G is a random n × n symmetric matrix with G_ij ∈ [−1, 1] and G_ii = 1, i, j = 1, …, n. The MATLAB code is: G = rand(n); G = (G + G')/2; G = 1 - 2*G; for i = 1:n; G(i,i) = 1; end.

E3. G is a random n × n symmetric matrix with G_ij ∈ [0, 2] and G_ii = 1, i, j = 1, …, n. The MATLAB code is: G = 2*rand(n); G = (G + G')/2; for i = 1:n; G(i,i) = 1; end.

E4. G = C + 0.05R, where C and R are generated by: d = 10.^(4*[-1:1/(n-1):0]); C = gallery('randcorr', n*d/sum(d)); R = 2*rand(n) - 1; R = triu(R) + triu(R,1)'.

E5. G = C + αR, where C is a block-structured correlation matrix with C_ii = 1, i = 1, …, n: for a given 1 < l < n and a scalar r, set C(1:l,1:l) = r.*ones(l,l); for i = 1:n; C(i,i) = 1; end. R is a diagonal matrix with diagonal elements in [−1, 1], and α = 0.01, 0.1, 1. We choose l by: if n >= 1000; l = 500; else l = floor(n/3); end.

E6. G = C + αR, where C is the 1-day correlation matrix (as of October 2008) from the lagged datasets of RiskMetrics (stddownload_edu.html), and R is a randomly generated symmetric matrix with entries in [−1, 1]: R = 2.0*rand(387,387) - ones(387,387); R = triu(R) + triu(R,1)'; for i = 1:n; R(i,i) = 1; end; with α = 0.01, 0.1, 1, 10.

E7. G is the Hilbert matrix generated by G = hilb(n).

E8. G is a correlation matrix obtained by factoring out the diagonal elements of the Hilbert matrix in E7, i.e., G_ij = C_ij / √(C_ii C_jj), i, j = 1, …, n, with C generated as in E7.

E9. G is generated as in E1 (α = 0.1), with additional equality constraints of the following type added to (4): X_ij = 0, (i, j) ∈ B_e. The index set B_e is generated similarly to [8, Example 5.5]: it consists of the indices of min{n̂_r, n − i} randomly generated elements in the i-th row of G, i = 1, …, n, with n̂_r = 1, 5, 10, 20, 50, 100.

E10. G is generated as in E6 (α = 0.1), and the constraints are the same as in E9.

E11. This problem has been tested in [4] and is an SDP with simple bounds:

  min_{X ∈ S^n} ⟨X, C⟩
  s.t. A(X) = 0,  ⟨I, X⟩ = q,  I ⪰ X ⪰ 0.

Fig. 1 Understanding the role of the upper bound X ⪯ κI (κ = 5): comparison between BCorNewton and CorNewton on E1

In [4], the data was generated as A(X) = [⟨A_1, X⟩, …, ⟨A_{n−1}, X⟩]^T, with A_i = e_i e_{i+1}^T + e_{i+1} e_i^T, i = 1, …, n − 1. C is generated by R = rand(n); R = 0.5*(R + R'); C = R - norm(R,2)*I; and q = 5.

In our implementation, we set the parameters as follows: Tol = 1.0 × 10⁻⁶, ρ = 0.5, σ = 1.0 × 10⁻⁴, η = 0.5, B_k = I, and y^0 = e − diag(G). As the iterative solver for the linear equation (24), we use the diagonally preconditioned CG method of Hestenes and Stiefel [26]; see [6] for more on this method. We let y_f denote the final iterate of Algorithm 3.1 and X_f the corresponding solution to problem (4). Let f_p and f_d be the primal and dual objective function values at X_f and y_f, respectively. In the numerical tables, we report the following information:

It: number of iterations upon termination;
gap: f_p − f_d;
λ_min: the smallest eigenvalue of X_f;
λ_max: the largest eigenvalue of X_f;
t: cpu time (hh:mm:ss) used upon termination;
Res: ‖∇θ(y_f)‖.

Before conducting extensive numerical experiments and comparisons on the problems above, we would like to understand the role that the upper bound X ⪯ κI plays in problem (4). We do so through a numerical comparison of our code (denoted BCorNewton) with the Newton method of [6], which does not deal with the upper bound constraint; the code of [6], CorNewton.m (denoted CorNewton), is publicly available. The comparison is on E1 with α = 10, κ = 5, and n = 400, 500, …. Because BCorNewton deals with more constraints than CorNewton, one may expect it to be slower than CorNewton. Our numerical results (see Fig. 1) confirm this expectation: the time line (solid) of BCorNewton is always above that (dashed) of CorNewton. We also observe that the solid line is not very far above the dashed line, indicating
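The diagonal preconditioner can also be formed cheaply: from Lemma 3.1, the diagonal of V_k is (V_k)_ii = Σ_{k,l} (P_ik)² M_kl (P_il)². A short MATLAB sketch of this (our own illustration, reusing P, M, ek, Afun, g, etak from the earlier sketches):

    % Sketch: diagonal of V_k + eps_k*I for Jacobi preconditioning.
    P2 = P.^2;
    dV = sum((P2*M).*P2, 2) + ek;
    % usage: d = pcg(Afun, -g, etak, 200, @(x) x./dV);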

that BCorNewton is also very efficient, given that it deals with one more constraint than CorNewton. Our numerical results below show that the upper bound is always active at the solution for E1.

4.2 Numerical Comparison

The nearest correlation matrix problem (4) is a special case of the more general quadratic semidefinite programming (QSDP) problem. From this perspective, it can be solved by several existing methods, including pennon [27], qsdp [4, 5], and qnal [25, 28]. The smoothing Newton method developed in [8] can also be adapted to solve (4), with a proper smoothing function chosen to approximate the projection Π_B(X). It is worth pointing out that all of those methods are capable of dealing with inequality constraints, while in contrast our method focuses only on equality constraints. The benefit of this restriction is that our method is highly efficient, as confirmed by the comparison reported below. Nevertheless, problem (4) itself covers important instances. Moreover, our method can be used to solve the SDP problem stated in E11.

From [25, 28], it appears that qnal has performed exceedingly well over a wide range of test problems, including SDPs and QSDPs, and it is becoming popular in the SDP community. It is this method that we compare with. In qnal, we use its default settings; in particular, we use tol = 10⁻⁶. The residual for qnal was calculated in the way reported in [28], and the gap for qnal is the difference between the primal and the dual objective function values when (4) is regarded as a QSDP. Note that a different dual formulation was used in [28].

Three tables of numerical results are included. In all computations, we set κ = 5 (other values could also be used; the results are similar). For E7, we also test the case where κ = 0.5 λ_max(G), upon a suggestion of a referee; the result is reported with E7. From this example, we see that as the condition number of the correlation matrix increases with increasing κ, both algorithms take more time to terminate. This trend fits the general expectation well.

In Table 1, we report results on E1–E8. For problems E1, E5, and E6, α represents the level of perturbation to a true correlation matrix. The bigger α is, the more the correlation matrix is perturbed and, consequently, the more time is needed to solve the problem. The numerical results confirm this expectation. Overall, BCorNewton performed very well compared to qnal which, we emphasize once again, is a generic code for QSDPs.

Since BCorNewton is capable of dealing with additional equality constraints, we examine how its performance changes as more and more constraints are added to (4). Table 2 serves this purpose, where n_e denotes the number of equality constraints. We tested E9 and E10, where constraints of the type X_ij = 0 are added; the reason for fixing zero elements is that the resulting problem always has a feasible solution. A clear trend for both BCorNewton and qnal is that solution times grow as more constraints are added. Still, BCorNewton is very efficient. We suspect that the method would become less efficient if a very large number (say O(n²)) of constraints were added.

As mentioned earlier, the simple upper bound constraint appears in an SDP formulation of minimizing the sum of the largest q eigenvalues of an affine function

Table 1 Comparison between (a) BCorNewton and (b) qnal on E1–E8 (columns: E(α), n, Alg., f_p, gap, λ_min, λ_max, It, Res, t)

of symmetric matrices (see [4, 17] for more details). Such SDPs take the form of E11. We use Algorithm 3.1 to solve this problem through the regularization scheme proposed in [29]. In each iteration, we solve the following subproblem (σ > 0):

  min_{X ∈ S^n} (1/(2σ)) ‖X − (Y − σC)‖²
  s.t. A(X) = 0,  ⟨I, X⟩ = q,  I ⪰ X ⪰ 0.
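In other words, E11 is handled by a proximal-point (quadratic regularization) outer loop in which each subproblem is of the nearest-matrix type above and can be passed to Algorithm 3.1 through its dual. A MATLAB sketch of the outer loop, where solve_sub, the fixed σ, and the loop limits are our own illustrative choices, echoing the ε = 10⁻⁷ criterion stated below:

    % Sketch: regularization (proximal-point) outer loop for E11, cf. [29].
    Y = zeros(n);  sigma = 10;  maxit = 500;
    for iter = 1:maxit
        Ynew = solve_sub(Y - sigma*C);    % the subproblem displayed above
        if norm(Ynew - Y, 'fro') <= 1e-7*(1 + norm(Y, 'fro')), break; end
        Y = Ynew;
    end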

We set the stopping criterion ε = 10⁻⁷ in the regularization method [29]. We compare with sdpnal (qnal is an extension of sdpnal) developed in [25]. The results are shown in Table 3. Once again, the results confirm the efficiency of our method on E11.

5 Conclusions

In this paper, we proposed a globalized Newton method to solve the nearest correlation matrix problem with a simple upper bound (4). The method is an extension of Qi and Sun's Newton method [6]. We found that constraint nondegeneracy, which is key to the quadratic convergence of the original Newton method, may fail at some feasible points of (4). Yet the extensive numerical results confirm that the proposed method is still very efficient, even for large scale problems.

The main reason why we can develop a Newton method is that the dual problem of (4) is unconstrained, because all constraints are of equality type. When inequality constraints are present, one has to develop other methods. There are two ways to pursue this further development. One is to follow the smoothing Newton method studied in [8]; in this development, one has to use a proper smoothing function for the projection Π_B(X), and such smoothing functions can be found in [30].

Table 2 Comparison between (a) BCorNewton and (b) qnal on E9 and E10 (columns: E(n), n̂_r, n_e, Alg., f_p, gap, λ_min, λ_max, It, Res, t)

Table 3 Comparison between (a) BCorNewton and (b) sdpnal on E11 (columns: E(n), Alg., f_p, gap, λ_min, λ_max, It, Res, t)

The other development would be to follow the augmented Lagrangian approach studied in [29]. In both developments, the constraint nondegeneracy studied in this paper would also be crucial to the convergence rate of the respective smoothing Newton method and augmented Lagrangian method.

Acknowledgements We would like to thank the associate editor and all three referees for their detailed comments and suggestions, which have led to a significant improvement of the paper. We also thank the authors of [25] for sending us their excellent codes qnal and sdpnal for the numerical comparison.

References

1. Higham, N.J.: Computing the nearest correlation matrix—a problem from finance. IMA J. Numer. Anal. 22, 329–343 (2002)
2. Malick, J.: A dual approach to semidefinite least-squares problems. SIAM J. Matrix Anal. Appl. 26, 272–284 (2004)
3. Boyd, S., Xiao, L.: Least-squares covariance matrix adjustment. SIAM J. Matrix Anal. Appl. 27, 532–546 (2005)
4. Toh, K.C., Tütüncü, R.H., Todd, M.J.: Inexact primal-dual path-following algorithms for a special class of convex quadratic SDP and related problems. Pac. J. Optim. 3, 135–164 (2007)
5. Toh, K.C.: An inexact primal-dual path-following algorithm for convex quadratic SDP. Math. Program. 112, 221–254 (2008)
6. Qi, H.-D., Sun, D.F.: A quadratically convergent Newton method for computing the nearest correlation matrix. SIAM J. Matrix Anal. Appl. 28, 360–385 (2006)
7. Borsdorf, R., Higham, N.J.: A preconditioned Newton algorithm for the nearest correlation matrix. IMA J. Numer. Anal. 30, 94–107 (2010)
8. Gao, Y., Sun, D.F.: Calibrating least squares covariance matrix problems with equality and inequality constraints. SIAM J. Matrix Anal. Appl. 31, 1432–1457 (2010)
9. Qi, H.-D., Sun, D.F.: Correlation stress testing for value-at-risk: an unconstrained convex optimization approach. Comput. Optim. Appl. 45, 427–462 (2010)
10. Rockafellar, R.T.: Conjugate Duality and Optimization. SIAM, Philadelphia (1974)
11. Deutsch, F.: Best Approximation in Inner Product Spaces. CMS Books in Mathematics, vol. 7. Springer, New York (2001)
12. Borwein, J., Lewis, A.S.: Partially finite convex programming I: Quasi relative interiors and duality theory. Math. Program. 57, 15–48 (1992)
13. Sun, D.F., Sun, J.: Semismooth matrix valued functions. Math. Oper. Res. 27, 150–169 (2002)
14. Qi, L., Sun, J.: A nonsmooth version of Newton's method. Math. Program. 58, 353–367 (1993)
15. Kummer, B.: Newton's method for nondifferentiable functions. In: Guddat, J., Bank, B., Hollatz, H., Kall, P., Klatte, D., Kummer, B., Lommatzsch, K., Tammer, L., Vlach, M., Zimmerman, K. (eds.) Advances in Mathematical Optimization, pp. 114–125. Akademie-Verlag, Berlin (1988)

16. Werner, R., Schöettle, K.: Calibration of correlation matrices—SDP or not SDP. Technical report, Munich University of Technology (2007)
17. Alizadeh, F.: Interior point methods in semidefinite programming with applications to combinatorial optimization. SIAM J. Optim. 5, 13–51 (1995)
18. Dattorro, J.: Convex Optimization and Euclidean Distance Geometry. Meboo Publishing USA, California (2005)
19. Bonnans, J.F., Shapiro, A.: Perturbation Analysis of Optimization Problems. Springer, New York (2000)
20. Bonnans, J.F., Shapiro, A.: Nondegeneracy and quantitative stability of parameterized optimization problems with multiple solutions. SIAM J. Optim. 8, 940–946 (1998)
21. Shapiro, A., Fan, M.K.H.: On eigenvalue optimization. SIAM J. Optim. 5, 552–569 (1995)
22. Alizadeh, F., Haeberly, J.-P.A., Overton, M.L.: Complementarity and nondegeneracy in semidefinite programming. Math. Program. 77, 111–128 (1997)
23. Chan, Z.X., Sun, D.F.: Constraint nondegeneracy, strong regularity and nonsingularity in semidefinite programming. SIAM J. Optim. 19, 370–396 (2008)
24. Sun, D.F.: The strong second-order sufficient condition and constraint nondegeneracy in nonlinear semidefinite programming and their implications. Math. Oper. Res. 31, 761–776 (2006)
25. Zhao, X.Y., Sun, D.F., Toh, K.C.: A Newton-CG augmented Lagrangian method for semidefinite programming. SIAM J. Optim. 20, 1737–1765 (2010)
26. Hestenes, M.R., Stiefel, E.: Methods of conjugate gradients for solving linear systems. J. Res. Natl. Bur. Stand. 49, 409–436 (1952)
27. Kočvara, M., Stingl, M.: PENNON: a generalized augmented Lagrangian method for semidefinite programming. Optim. Methods Softw. 18, 317–333 (2003)
28. Zhao, X.Y.: A semismooth Newton-CG augmented Lagrangian method for large scale linear and convex quadratic SDPs. Ph.D. thesis, National University of Singapore (2009)
29. Malick, J., Povh, J., Rendl, F., Wiegele, A.: Regularization methods for semidefinite programming. SIAM J. Optim. 20, 336–356 (2009)
30. Chen, X., Qi, L., Sun, D.F.: Global and superlinear convergence of the smoothing Newton method and its application to general box constrained variational inequalities. Math. Comput. 67, 519–540 (1998)


More information

Uniqueness of the Solutions of Some Completion Problems

Uniqueness of the Solutions of Some Completion Problems Uniqueness of the Solutions of Some Completion Problems Chi-Kwong Li and Tom Milligan Abstract We determine the conditions for uniqueness of the solutions of several completion problems including the positive

More information

Identifying Redundant Linear Constraints in Systems of Linear Matrix. Inequality Constraints. Shafiu Jibrin

Identifying Redundant Linear Constraints in Systems of Linear Matrix. Inequality Constraints. Shafiu Jibrin Identifying Redundant Linear Constraints in Systems of Linear Matrix Inequality Constraints Shafiu Jibrin (shafiu.jibrin@nau.edu) Department of Mathematics and Statistics Northern Arizona University, Flagstaff

More information

Semismooth Newton methods for the cone spectrum of linear transformations relative to Lorentz cones

Semismooth Newton methods for the cone spectrum of linear transformations relative to Lorentz cones to appear in Linear and Nonlinear Analysis, 2014 Semismooth Newton methods for the cone spectrum of linear transformations relative to Lorentz cones Jein-Shan Chen 1 Department of Mathematics National

More information

Numerical Optimization

Numerical Optimization Constrained Optimization Computer Science and Automation Indian Institute of Science Bangalore 560 012, India. NPTEL Course on Constrained Optimization Constrained Optimization Problem: min h j (x) 0,

More information

Lecture 7: Convex Optimizations

Lecture 7: Convex Optimizations Lecture 7: Convex Optimizations Radu Balan, David Levermore March 29, 2018 Convex Sets. Convex Functions A set S R n is called a convex set if for any points x, y S the line segment [x, y] := {tx + (1

More information

E5295/5B5749 Convex optimization with engineering applications. Lecture 5. Convex programming and semidefinite programming

E5295/5B5749 Convex optimization with engineering applications. Lecture 5. Convex programming and semidefinite programming E5295/5B5749 Convex optimization with engineering applications Lecture 5 Convex programming and semidefinite programming A. Forsgren, KTH 1 Lecture 5 Convex optimization 2006/2007 Convex quadratic program

More information

Algorithms for nonlinear programming problems II

Algorithms for nonlinear programming problems II Algorithms for nonlinear programming problems II Martin Branda Charles University Faculty of Mathematics and Physics Department of Probability and Mathematical Statistics Computational Aspects of Optimization

More information

RESEARCH ARTICLE. A strategy of finding an initial active set for inequality constrained quadratic programming problems

RESEARCH ARTICLE. A strategy of finding an initial active set for inequality constrained quadratic programming problems Optimization Methods and Software Vol. 00, No. 00, July 200, 8 RESEARCH ARTICLE A strategy of finding an initial active set for inequality constrained quadratic programming problems Jungho Lee Computer

More information

Tensor Complementarity Problem and Semi-positive Tensors

Tensor Complementarity Problem and Semi-positive Tensors DOI 10.1007/s10957-015-0800-2 Tensor Complementarity Problem and Semi-positive Tensors Yisheng Song 1 Liqun Qi 2 Received: 14 February 2015 / Accepted: 17 August 2015 Springer Science+Business Media New

More information

THE PERTURBATION BOUND FOR THE SPECTRAL RADIUS OF A NON-NEGATIVE TENSOR

THE PERTURBATION BOUND FOR THE SPECTRAL RADIUS OF A NON-NEGATIVE TENSOR THE PERTURBATION BOUND FOR THE SPECTRAL RADIUS OF A NON-NEGATIVE TENSOR WEN LI AND MICHAEL K. NG Abstract. In this paper, we study the perturbation bound for the spectral radius of an m th - order n-dimensional

More information

Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5

Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5 Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5 Instructor: Farid Alizadeh Scribe: Anton Riabov 10/08/2001 1 Overview We continue studying the maximum eigenvalue SDP, and generalize

More information

A priori bounds on the condition numbers in interior-point methods

A priori bounds on the condition numbers in interior-point methods A priori bounds on the condition numbers in interior-point methods Florian Jarre, Mathematisches Institut, Heinrich-Heine Universität Düsseldorf, Germany. Abstract Interior-point methods are known to be

More information

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH V. FABER, J. LIESEN, AND P. TICHÝ Abstract. Numerous algorithms in numerical linear algebra are based on the reduction of a given matrix

More information

c 2005 Society for Industrial and Applied Mathematics

c 2005 Society for Industrial and Applied Mathematics SIAM J. OPTIM. Vol. 15, No. 4, pp. 1147 1154 c 2005 Society for Industrial and Applied Mathematics A NOTE ON THE LOCAL CONVERGENCE OF A PREDICTOR-CORRECTOR INTERIOR-POINT ALGORITHM FOR THE SEMIDEFINITE

More information

Bulletin of the. Iranian Mathematical Society

Bulletin of the. Iranian Mathematical Society ISSN: 1017-060X (Print) ISSN: 1735-8515 (Online) Bulletin of the Iranian Mathematical Society Vol. 41 (2015), No. 5, pp. 1259 1269. Title: A uniform approximation method to solve absolute value equation

More information

An inexact accelerated proximal gradient method for large scale linearly constrained convex SDP

An inexact accelerated proximal gradient method for large scale linearly constrained convex SDP An inexact accelerated proximal gradient method for large scale linearly constrained convex SDP Kaifeng Jiang, Defeng Sun, and Kim-Chuan Toh March 3, 2012 Abstract The accelerated proximal gradient (APG)

More information

1 Computing with constraints

1 Computing with constraints Notes for 2017-04-26 1 Computing with constraints Recall that our basic problem is minimize φ(x) s.t. x Ω where the feasible set Ω is defined by equality and inequality conditions Ω = {x R n : c i (x)

More information

Introduction to Alternating Direction Method of Multipliers

Introduction to Alternating Direction Method of Multipliers Introduction to Alternating Direction Method of Multipliers Yale Chang Machine Learning Group Meeting September 29, 2016 Yale Chang (Machine Learning Group Meeting) Introduction to Alternating Direction

More information

A Local Convergence Analysis of Bilevel Decomposition Algorithms

A Local Convergence Analysis of Bilevel Decomposition Algorithms A Local Convergence Analysis of Bilevel Decomposition Algorithms Victor DeMiguel Decision Sciences London Business School avmiguel@london.edu Walter Murray Management Science and Engineering Stanford University

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

I.3. LMI DUALITY. Didier HENRION EECI Graduate School on Control Supélec - Spring 2010

I.3. LMI DUALITY. Didier HENRION EECI Graduate School on Control Supélec - Spring 2010 I.3. LMI DUALITY Didier HENRION henrion@laas.fr EECI Graduate School on Control Supélec - Spring 2010 Primal and dual For primal problem p = inf x g 0 (x) s.t. g i (x) 0 define Lagrangian L(x, z) = g 0

More information

THE solution of the absolute value equation (AVE) of

THE solution of the absolute value equation (AVE) of The nonlinear HSS-like iterative method for absolute value equations Mu-Zheng Zhu Member, IAENG, and Ya-E Qi arxiv:1403.7013v4 [math.na] 2 Jan 2018 Abstract Salkuyeh proposed the Picard-HSS iteration method

More information

A double projection method for solving variational inequalities without monotonicity

A double projection method for solving variational inequalities without monotonicity A double projection method for solving variational inequalities without monotonicity Minglu Ye Yiran He Accepted by Computational Optimization and Applications, DOI: 10.1007/s10589-014-9659-7,Apr 05, 2014

More information

Advanced Continuous Optimization

Advanced Continuous Optimization University Paris-Saclay Master Program in Optimization Advanced Continuous Optimization J. Ch. Gilbert (INRIA Paris-Rocquencourt) September 26, 2017 Lectures: September 18, 2017 November 6, 2017 Examination:

More information

An inexact subgradient algorithm for Equilibrium Problems

An inexact subgradient algorithm for Equilibrium Problems Volume 30, N. 1, pp. 91 107, 2011 Copyright 2011 SBMAC ISSN 0101-8205 www.scielo.br/cam An inexact subgradient algorithm for Equilibrium Problems PAULO SANTOS 1 and SUSANA SCHEIMBERG 2 1 DM, UFPI, Teresina,

More information

Approximation algorithms for nonnegative polynomial optimization problems over unit spheres

Approximation algorithms for nonnegative polynomial optimization problems over unit spheres Front. Math. China 2017, 12(6): 1409 1426 https://doi.org/10.1007/s11464-017-0644-1 Approximation algorithms for nonnegative polynomial optimization problems over unit spheres Xinzhen ZHANG 1, Guanglu

More information

Convex Quadratic Approximation

Convex Quadratic Approximation Convex Quadratic Approximation J. Ben Rosen 1 and Roummel F. Marcia 2 Abstract. For some applications it is desired to approximate a set of m data points in IR n with a convex quadratic function. Furthermore,

More information

Exact Augmented Lagrangian Functions for Nonlinear Semidefinite Programming

Exact Augmented Lagrangian Functions for Nonlinear Semidefinite Programming Exact Augmented Lagrangian Functions for Nonlinear Semidefinite Programming Ellen H. Fukuda Bruno F. Lourenço June 0, 018 Abstract In this paper, we study augmented Lagrangian functions for nonlinear semidefinite

More information

Global Quadratic Minimization over Bivalent Constraints: Necessary and Sufficient Global Optimality Condition

Global Quadratic Minimization over Bivalent Constraints: Necessary and Sufficient Global Optimality Condition Global Quadratic Minimization over Bivalent Constraints: Necessary and Sufficient Global Optimality Condition Guoyin Li Communicated by X.Q. Yang Abstract In this paper, we establish global optimality

More information

The skew-symmetric orthogonal solutions of the matrix equation AX = B

The skew-symmetric orthogonal solutions of the matrix equation AX = B Linear Algebra and its Applications 402 (2005) 303 318 www.elsevier.com/locate/laa The skew-symmetric orthogonal solutions of the matrix equation AX = B Chunjun Meng, Xiyan Hu, Lei Zhang College of Mathematics

More information

Contraction Methods for Convex Optimization and Monotone Variational Inequalities No.16

Contraction Methods for Convex Optimization and Monotone Variational Inequalities No.16 XVI - 1 Contraction Methods for Convex Optimization and Monotone Variational Inequalities No.16 A slightly changed ADMM for convex optimization with three separable operators Bingsheng He Department of

More information

Finite Convergence for Feasible Solution Sequence of Variational Inequality Problems

Finite Convergence for Feasible Solution Sequence of Variational Inequality Problems Mathematical and Computational Applications Article Finite Convergence for Feasible Solution Sequence of Variational Inequality Problems Wenling Zhao *, Ruyu Wang and Hongxiang Zhang School of Science,

More information

A Proximal Method for Identifying Active Manifolds

A Proximal Method for Identifying Active Manifolds A Proximal Method for Identifying Active Manifolds W.L. Hare April 18, 2006 Abstract The minimization of an objective function over a constraint set can often be simplified if the active manifold of the

More information

Chapter 1. Preliminaries

Chapter 1. Preliminaries Introduction This dissertation is a reading of chapter 4 in part I of the book : Integer and Combinatorial Optimization by George L. Nemhauser & Laurence A. Wolsey. The chapter elaborates links between

More information

Subdifferential representation of convex functions: refinements and applications

Subdifferential representation of convex functions: refinements and applications Subdifferential representation of convex functions: refinements and applications Joël Benoist & Aris Daniilidis Abstract Every lower semicontinuous convex function can be represented through its subdifferential

More information

ELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications

ELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications ELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications Professor M. Chiang Electrical Engineering Department, Princeton University March

More information

AM 205: lecture 19. Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods

AM 205: lecture 19. Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods AM 205: lecture 19 Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods Quasi-Newton Methods General form of quasi-newton methods: x k+1 = x k α

More information

The Relation Between Pseudonormality and Quasiregularity in Constrained Optimization 1

The Relation Between Pseudonormality and Quasiregularity in Constrained Optimization 1 October 2003 The Relation Between Pseudonormality and Quasiregularity in Constrained Optimization 1 by Asuman E. Ozdaglar and Dimitri P. Bertsekas 2 Abstract We consider optimization problems with equality,

More information

Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization

Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Roger Behling a, Clovis Gonzaga b and Gabriel Haeser c March 21, 2013 a Department

More information

Absolute Value Programming

Absolute Value Programming O. L. Mangasarian Absolute Value Programming Abstract. We investigate equations, inequalities and mathematical programs involving absolute values of variables such as the equation Ax + B x = b, where A

More information

A preconditioned Newton algorithm for the nearest correlation matrix

A preconditioned Newton algorithm for the nearest correlation matrix IMA Journal of Numerical Analysis (2010) 30, 94 107 doi:10.1093/imanum/drn085 Advance Access publication on June 8, 2009 A preconditioned Newton algorithm for the nearest correlation matrix RÜDIGER BORSDORF

More information

A convergence result for an Outer Approximation Scheme

A convergence result for an Outer Approximation Scheme A convergence result for an Outer Approximation Scheme R. S. Burachik Engenharia de Sistemas e Computação, COPPE-UFRJ, CP 68511, Rio de Janeiro, RJ, CEP 21941-972, Brazil regi@cos.ufrj.br J. O. Lopes Departamento

More information

The Lanczos and conjugate gradient algorithms

The Lanczos and conjugate gradient algorithms The Lanczos and conjugate gradient algorithms Gérard MEURANT October, 2008 1 The Lanczos algorithm 2 The Lanczos algorithm in finite precision 3 The nonsymmetric Lanczos algorithm 4 The Golub Kahan bidiagonalization

More information

On the Local Convergence of an Iterative Approach for Inverse Singular Value Problems

On the Local Convergence of an Iterative Approach for Inverse Singular Value Problems On the Local Convergence of an Iterative Approach for Inverse Singular Value Problems Zheng-jian Bai Benedetta Morini Shu-fang Xu Abstract The purpose of this paper is to provide the convergence theory

More information

Nonlinear Programming

Nonlinear Programming Nonlinear Programming Kees Roos e-mail: C.Roos@ewi.tudelft.nl URL: http://www.isa.ewi.tudelft.nl/ roos LNMB Course De Uithof, Utrecht February 6 - May 8, A.D. 2006 Optimization Group 1 Outline for week

More information

Largest dual ellipsoids inscribed in dual cones

Largest dual ellipsoids inscribed in dual cones Largest dual ellipsoids inscribed in dual cones M. J. Todd June 23, 2005 Abstract Suppose x and s lie in the interiors of a cone K and its dual K respectively. We seek dual ellipsoidal norms such that

More information

20 J.-S. CHEN, C.-H. KO AND X.-R. WU. : R 2 R is given by. Recently, the generalized Fischer-Burmeister function ϕ p : R2 R, which includes

20 J.-S. CHEN, C.-H. KO AND X.-R. WU. : R 2 R is given by. Recently, the generalized Fischer-Burmeister function ϕ p : R2 R, which includes 016 0 J.-S. CHEN, C.-H. KO AND X.-R. WU whereas the natural residual function ϕ : R R is given by ϕ (a, b) = a (a b) + = min{a, b}. Recently, the generalized Fischer-Burmeister function ϕ p : R R, which

More information

SOR- and Jacobi-type Iterative Methods for Solving l 1 -l 2 Problems by Way of Fenchel Duality 1

SOR- and Jacobi-type Iterative Methods for Solving l 1 -l 2 Problems by Way of Fenchel Duality 1 SOR- and Jacobi-type Iterative Methods for Solving l 1 -l 2 Problems by Way of Fenchel Duality 1 Masao Fukushima 2 July 17 2010; revised February 4 2011 Abstract We present an SOR-type algorithm and a

More information

On the Moreau-Yosida regularization of the vector k-norm related functions

On the Moreau-Yosida regularization of the vector k-norm related functions On the Moreau-Yosida regularization of the vector k-norm related functions Bin Wu, Chao Ding, Defeng Sun and Kim-Chuan Toh This version: March 08, 2011 Abstract In this paper, we conduct a thorough study

More information

Least Sparsity of p-norm based Optimization Problems with p > 1

Least Sparsity of p-norm based Optimization Problems with p > 1 Least Sparsity of p-norm based Optimization Problems with p > Jinglai Shen and Seyedahmad Mousavi Original version: July, 07; Revision: February, 08 Abstract Motivated by l p -optimization arising from

More information

Lecture 7: Semidefinite programming

Lecture 7: Semidefinite programming CS 766/QIC 820 Theory of Quantum Information (Fall 2011) Lecture 7: Semidefinite programming This lecture is on semidefinite programming, which is a powerful technique from both an analytic and computational

More information

Key words. conjugate gradients, normwise backward error, incremental norm estimation.

Key words. conjugate gradients, normwise backward error, incremental norm estimation. Proceedings of ALGORITMY 2016 pp. 323 332 ON ERROR ESTIMATION IN THE CONJUGATE GRADIENT METHOD: NORMWISE BACKWARD ERROR PETR TICHÝ Abstract. Using an idea of Duff and Vömel [BIT, 42 (2002), pp. 300 322

More information

Lectures 9 and 10: Constrained optimization problems and their optimality conditions

Lectures 9 and 10: Constrained optimization problems and their optimality conditions Lectures 9 and 10: Constrained optimization problems and their optimality conditions Coralia Cartis, Mathematical Institute, University of Oxford C6.2/B2: Continuous Optimization Lectures 9 and 10: Constrained

More information

Assignment 1: From the Definition of Convexity to Helley Theorem

Assignment 1: From the Definition of Convexity to Helley Theorem Assignment 1: From the Definition of Convexity to Helley Theorem Exercise 1 Mark in the following list the sets which are convex: 1. {x R 2 : x 1 + i 2 x 2 1, i = 1,..., 10} 2. {x R 2 : x 2 1 + 2ix 1x

More information

FETI domain decomposition method to solution of contact problems with large displacements

FETI domain decomposition method to solution of contact problems with large displacements FETI domain decomposition method to solution of contact problems with large displacements Vít Vondrák 1, Zdeněk Dostál 1, Jiří Dobiáš 2, and Svatopluk Pták 2 1 Dept. of Appl. Math., Technical University

More information