Research Article: Solving the Matrix Nearness Problem in the Maximum Norm by Applying a Projection and Contraction Method


Advances in Operations Research, Volume 2012, Article ID 357954, 15 pages, doi:10.1155/2012/357954

M. H. Xu¹ and H. Shao²

¹ School of Mathematics and Physics, Changzhou University, Jiangsu Province, Changzhou 213164, China
² Department of Mathematics, School of Sciences, China University of Mining and Technology, Xuzhou 221116, China

Correspondence should be addressed to M. H. Xu, xuminghua@cczu.edu.cn

Received 11 April 2012; Accepted 17 June 2012

Academic Editor: Abdellah Bnouhachem

Copyright © 2012 M. H. Xu and H. Shao. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Let $S$ be a closed convex set of matrices and $C$ a given matrix. The matrix nearness problem considered in this paper is to find a matrix $X$ in the set $S$ at which $\max_{i,j} |x_{ij} - c_{ij}|$ attains its minimum value. To solve this problem, it is first reformulated as a min-max problem, and the relationship between that min-max problem and a monotone linear variational inequality (LVI) is then established. Since the matrix in the resulting LVI has a special structure, a projection and contraction method is suggested to solve the LVI. Moreover, some implementation details of the method are presented. Finally, preliminary numerical results are reported, which show that this simple algorithm is promising for the matrix nearness problem.

1. Introduction

Let $C = (c_{ij}) \in R^{n \times n}$ be a given symmetric matrix and

$$S^n_\Lambda = \{ H \in R^{n \times n} \mid H^T = H,\ \lambda_{\min} I \preceq H \preceq \lambda_{\max} I \},\qquad(1.1)$$

where $\lambda_{\min}, \lambda_{\max}$ are given scalars with $\lambda_{\min} < \lambda_{\max}$, $I$ is the identity matrix, and $A \preceq B$ denotes that $B - A$ is a positive semidefinite matrix. It is clear that $S^n_\Lambda$ is a nonempty closed convex set. The problem considered in this paper is

$$\min \{ \|X - C\|_\infty \mid X = (x_{ij}) \in S^n_\Lambda \},\qquad(1.2)$$

where

$$\|X - C\|_\infty = \max\{ |x_{ij} - c_{ij}| \mid i = 1, \dots, n,\ j = 1, \dots, n \}.\qquad(1.3)$$

Throughout the paper we assume that the solution set of problem (1.2) is nonempty.

Note that when $\lambda_{\min} = 0$ and $\lambda_{\max} = +\infty$, the set $S^n_\Lambda$ reduces to the semidefinite cone

$$S^n_+ = \{ H \in R^{n \times n} \mid H^T = H,\ H \succeq 0 \}.\qquad(1.4)$$

Using the terminology of interior point methods, $S^n_+$ is called the semidefinite cone, and the related problem then belongs to the class of semidefinite programming [1].

Problem (1.2) can be viewed as a type of matrix nearness problem, that is, the problem of finding a matrix that satisfies some property and is nearest to a given one. A survey of matrix nearness problems can be found in [2]. Matrix nearness problems have many applications, especially in finance, statistics, and compressive sensing. For example, one application in statistics is to adjust a symmetric matrix so that it is consistent with prior knowledge or assumptions and is a valid covariance matrix [3-8]. The paper [9] discusses a new class of matrix nearness problems that measure approximation error using a directed distance measure called a Bregman divergence, proposes a framework for studying these problems, discusses some specific matrix nearness problems, and provides algorithms for solving them numerically. Note that the norm used in this paper differs from those used in the published papers [3-7]; it makes the objective function of problem (1.2) nonsmooth, so the problem cannot be solved very easily. In the next section, we will see that problem (1.2) can be converted into a linear variational inequality and thus can be solved effectively with the projection and contraction (PC) method, which is extremely simple in both theoretical analysis and numerical implementation [10-15].

The paper is organized as follows. The relationship between the matrix nearness problem considered in this paper and a monotone linear variational inequality (LVI) is established in Section 2. In Section 3, some preliminaries on variational inequalities are summarized, and the projection and contraction method for the LVI associated with the considered problem is presented. In Section 4, the implementation details of applying the projection and contraction method to the matrix optimization problem are studied. Preliminary numerical results and some concluding remarks are reported in Sections 5 and 6, respectively.

2. Reformulating the Problem as a Monotone LVI

For any $d \in R^m$, we have

$$\|d\|_\infty = \max_{\xi \in B_1} \xi^T d,\qquad(2.1)$$

where $B_1 = \{ \xi \in R^m \mid \|\xi\|_1 \le 1 \}$ and $\xi^T d$ is the Euclidean inner product of $\xi$ and $d$.
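As a quick sanity check of identity (2.1) (our illustration, not part of the paper): the maximum of $\xi^T d$ over the $\ell_1$-ball is attained at a signed coordinate vector selecting a largest-magnitude entry of $d$, which recovers $\|d\|_\infty$.

```python
import numpy as np

# Identity (2.1): ||d||_inf is the support function of the l1-ball B_1.
rng = np.random.default_rng(0)
d = rng.standard_normal(7)

i_star = np.argmax(np.abs(d))        # index of a largest-magnitude entry
xi = np.zeros_like(d)
xi[i_star] = np.sign(d[i_star])      # maximizer: xi = sign(d_{i*}) e_{i*}

assert np.isclose(np.abs(d).max(), xi @ d)   # ||d||_inf == max xi^T d
```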

To simplify the following descriptions, let $\mathrm{vec}(A)$ denote the linear transformation that converts a matrix $A \in R^{l \times n}$ into the column vector in $R^{ln}$ obtained by stacking the columns of $A$ on top of one another, that is,

$$\mathrm{vec}(A) = (a_{11}, \dots, a_{l1}, a_{12}, \dots, a_{l2}, \dots, a_{1n}, \dots, a_{ln})^T,\qquad(2.2)$$

and let $\mathrm{mat}(\mathrm{vec}(A))$ be the original matrix $A$, that is,

$$\mathrm{mat}(\mathrm{vec}(A)) = A.\qquad(2.3)$$

Based on (2.1) and the fact that the matrices $X$ and $C$ are symmetric, problem (1.2) can be rewritten as the following min-max problem:

$$\min_{x = \mathrm{vec}(X),\ X \in S^n_\Lambda}\ \max_{Z \in B}\ \mathrm{vec}(Z)^T\big(\mathrm{vec}(X) - \mathrm{vec}(C)\big),\qquad(2.4)$$

where

$$B = \Big\{ Z \in R^{n \times n}\ \Big|\ Z = Z^T,\ \sum_{i=1}^n \sum_{j=1}^n |z_{ij}| \le 1 \Big\}.\qquad(2.5)$$

Remark 2.1. Since $X$ and $C$ are both symmetric matrices, we can restrict the matrices in the set $B$ to be symmetric.

Let

$$\Omega_1 = \{ x = \mathrm{vec}(X) \mid X \in S^n_\Lambda \},\qquad(2.6)$$

$$\Omega_2 = \{ z = \mathrm{vec}(Z) \mid Z \in B \}.\qquad(2.7)$$

Since $S^n_\Lambda$ and $B$ are both convex sets, it is easy to prove that $\Omega_1$ and $\Omega_2$ are also convex sets. Let $(X^*, Z^*) \in S^n_\Lambda \times B$ be any solution of (2.4) and set $x^* = \mathrm{vec}(X^*)$, $z^* = \mathrm{vec}(Z^*)$. Then

$$z^T(x^* - c) \le z^{*T}(x^* - c) \le z^{*T}(x - c),\quad \forall (x, z) \in \Omega,\qquad(2.8)$$

where $c = \mathrm{vec}(C)$ and $\Omega = \Omega_1 \times \Omega_2$. Thus, $(x^*, z^*)$ is a solution of the following variational inequality: find $(x^*, z^*) \in \Omega$ such that

$$(x - x^*)^T z^* \ge 0,\qquad (z - z^*)^T(-x^* + c) \ge 0,\quad \forall (x, z) \in \Omega.\qquad(2.9)$$

For the convenience of the coming analysis, we rewrite the linear variational inequality (2.9) in the following compact form: find $u^* \in \Omega$ such that

$$(u - u^*)^T(Mu^* + q) \ge 0,\quad \forall u \in \Omega,\qquad(2.10)$$

where

$$u = \begin{pmatrix} x \\ z \end{pmatrix},\quad u^* = \begin{pmatrix} x^* \\ z^* \end{pmatrix},\quad M = \begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix},\quad q = \begin{pmatrix} 0 \\ c \end{pmatrix}.\qquad(2.11)$$

In the following, we denote the linear variational inequality (2.10)-(2.11) by LVI$(\Omega, M, q)$.

Remark 2.2. Since $M$ is skew-symmetric, the linear variational inequality LVI$(\Omega, M, q)$ is monotone.
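To make the compact form (2.10)-(2.11) concrete, the following Python sketch (an illustration under our own naming, not code from the paper) builds vec, mat, $M$, and $q$ for a small symmetric $C$ and checks that $M$ is skew-symmetric, which is what makes the LVI monotone.

```python
import numpy as np

def vec(A):
    # Stack the columns of A into one long vector, as in (2.2).
    return A.flatten(order="F")

def mat(v, n):
    # Inverse of vec for an n-by-n matrix, as in (2.3).
    return v.reshape((n, n), order="F")

n = 4
rng = np.random.default_rng(1)
C = rng.standard_normal((n, n))
C = (C + C.T) / 2                     # the paper assumes C is symmetric

m = n * n
Z0 = np.zeros((m, m))
I = np.eye(m)
M = np.block([[Z0, I], [-I, Z0]])     # M = [0 I; -I 0], as in (2.11)
q = np.concatenate([np.zeros(m), vec(C)])   # q = (0; c), c = vec(C)

assert np.allclose(M.T, -M)           # skew-symmetric => monotone LVI
assert np.allclose(mat(vec(C), n), C) # mat undoes vec
```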

3. Projection and Contraction Method for Monotone LVIs

In this section, we summarize some important concepts and preliminary results that are useful in the coming analysis.

3.1. The Projection Mapping

Let $\Omega$ be a nonempty closed convex subset of $R^m$. For a given $v \in R^m$, the projection of $v$ onto $\Omega$, denoted by $P_\Omega(v)$, is the unique solution of the problem

$$\min \{ \|u - v\| \mid u \in \Omega \},\qquad(3.1)$$

where $\|\cdot\|$ is the Euclidean norm. The projection under the Euclidean norm plays an important role in the proposed method. A basic property of the projection mapping onto a closed convex set is

$$\big(v - P_\Omega(v)\big)^T\big(u - P_\Omega(v)\big) \le 0,\quad \forall v \in R^m,\ \forall u \in \Omega.\qquad(3.2)$$

In many practical applications, the closed convex set $\Omega$ has a simple structure and the projection onto $\Omega$ is easy to carry out. For example, let $e$ be the vector whose every element is 1, and let

$$B_\infty = \{ \xi \in R^m \mid \|\xi\|_\infty \le 1 \}.\qquad(3.3)$$

Then the projection of a vector $d \in R^m$ onto $B_\infty$ is given by

$$P_{B_\infty}(d) = \max\{ -e, \min\{ d, e \} \},\qquad(3.4)$$

where the min and max are taken componentwise.
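For instance, in NumPy the projection (3.4) is a componentwise clamp (a one-off illustration of ours):

```python
import numpy as np

def project_box(d):
    # P_{B_inf}(d) = max{-e, min{d, e}}: clamp every entry to [-1, 1], cf. (3.4).
    return np.clip(d, -1.0, 1.0)

print(project_box(np.array([1.7, -0.3, -2.5])))   # [ 1.  -0.3 -1. ]
```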

3.2. Preliminaries on Linear Variational Inequalities

We denote the solution set of LVI$(\Omega, M, q)$ (2.10) by $\Omega^*$ and assume that $\Omega^* \ne \emptyset$. Since the early work of Eaves [16], it has been well known that the variational inequality LVI$(\Omega, M, q)$ is equivalent to the following projection equation:

$$u = P_\Omega\big[u - (Mu + q)\big].\qquad(3.5)$$

In other words, solving LVI$(\Omega, M, q)$ is equivalent to finding a zero point of the continuous residual function

$$e(u) := u - P_\Omega\big[u - (Mu + q)\big].\qquad(3.6)$$

Hence,

$$e(u^*) = 0 \iff u^* \in \Omega^*.\qquad(3.7)$$

In the literature on variational inequalities, $\|e(u)\|$ is called the error bound of the LVI; it quantitatively measures how much $u$ fails to be in $\Omega^*$.

3.3. The Projection and Contraction Method

Let $u^* \in \Omega^*$ be a solution. For any $u \in R^m$, because $P_\Omega[u - (Mu + q)] \in \Omega$, it follows from (2.10) that

$$\Big(P_\Omega\big[u - (Mu + q)\big] - u^*\Big)^T\big(Mu^* + q\big) \ge 0,\quad \forall u \in R^m.\qquad(3.8)$$

By setting $v = u - (Mu + q)$ and $u = u^*$ in (3.2), we have

$$\Big(P_\Omega\big[u - (Mu + q)\big] - u^*\Big)^T\Big(u - (Mu + q) - P_\Omega\big[u - (Mu + q)\big]\Big) \ge 0,\quad \forall u \in R^m.\qquad(3.9)$$

Adding the above two inequalities and using the notation $e(u)$, we obtain

$$\big(u - u^* - e(u)\big)^T\big(e(u) - M(u - u^*)\big) \ge 0,\quad \forall u \in R^m.\qquad(3.10)$$

For a positive semidefinite (not necessarily symmetric) matrix $M$, the following theorem follows directly from (3.10).

Theorem 3.1 (Theorem 1 in [11]). For any $u^* \in \Omega^*$, one has

$$\big(u - u^*\big)^T d(u) \ge \|e(u)\|^2,\quad \forall u \in R^m,\qquad(3.11)$$

where

$$d(u) = \big(M^T + I\big)e(u).\qquad(3.12)$$
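The residual (3.6) and the direction (3.12) translate directly into code once a projector onto $\Omega$ is available. A minimal Python sketch, assuming `project` stands in for some concrete $P_\Omega$ (for example, `project_box` above):

```python
import numpy as np

def residual(u, M, q, project):
    # e(u) = u - P_Omega[u - (Mu + q)], the error bound of the LVI, eq. (3.6).
    return u - project(u - (M @ u + q))

def direction(u, M, q, project):
    # d(u) = (M^T + I) e(u), the descent direction of Theorem 3.1, eq. (3.12).
    e = residual(u, M, q, project)
    return e, M.T @ e + e
```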

For $u \in \Omega \setminus \Omega^*$, it follows from (3.11)-(3.12) that $d(u)$ is a descent direction of the unknown function $\frac{1}{2}\|u - u^*\|^2$. Some practical PC methods for LVIs based on the direction $d(u)$ are given in [11].

Algorithm 3.2 (projection and contraction method for the LVI problem (2.10)). Given $u^0 \in \Omega$, for $k = 0, 1, \dots$, if $u^k \notin \Omega^*$, then set

$$u^{k+1} = u^k - \gamma\,\alpha(u^k)\,d(u^k),\qquad(3.13)$$

where $\gamma \in (0, 2)$, $d(u)$ is defined in (3.12), and

$$\alpha(u) = \frac{\|e(u)\|^2}{\|d(u)\|^2}.\qquad(3.14)$$

Remark 3.3. In fact, $\gamma$ is a relaxation factor, and it is recommended to take it from $[1, 2)$. In practical computation, we usually take $\gamma \in [1.2, 1.9]$.

The method was first mentioned in [11]. Among the PC methods for asymmetric LVI$(\Omega, M, q)$ [10-13], this method makes just one projection per iteration. For completeness, we include the contraction theorem for LVI (2.10) and its proof.

Theorem 3.4 (Theorem 2 in [11]). The method (3.13)-(3.14) produces a sequence $\{u^k\}$ which satisfies

$$\big\|u^{k+1} - u^*\big\|^2 \le \big\|u^k - u^*\big\|^2 - \gamma(2 - \gamma)\,\alpha(u^k)\big\|e(u^k)\big\|^2.\qquad(3.15)$$

Proof. It follows from (3.11) and (3.14) that

$$\begin{aligned}
\big\|u^{k+1} - u^*\big\|^2 &= \big\|u^k - u^* - \gamma\alpha(u^k)d(u^k)\big\|^2\\
&= \big\|u^k - u^*\big\|^2 - 2\gamma\alpha(u^k)\big(u^k - u^*\big)^T d(u^k) + \gamma^2\alpha(u^k)^2\big\|d(u^k)\big\|^2\\
&\le \big\|u^k - u^*\big\|^2 - 2\gamma\alpha(u^k)\big\|e(u^k)\big\|^2 + \gamma^2\alpha(u^k)\big\|e(u^k)\big\|^2\\
&= \big\|u^k - u^*\big\|^2 - \gamma(2 - \gamma)\,\alpha(u^k)\big\|e(u^k)\big\|^2.
\end{aligned}\qquad(3.16)$$

Thus the theorem is proved.

The method used in this paper is called a projection and contraction method because it makes projections at each iteration and the generated sequence is Fejér monotone with respect to the solution set. For the skew-symmetric matrix $M$ in LVI (2.10), it is easy to prove that $\alpha(u) \equiv 1/2$: since $e(u)^T M e(u) = 0$ and, for this particular $M$, $\|Me(u)\| = \|e(u)\|$, we have $\|d(u)\|^2 = 2\|e(u)\|^2$. Thus, the contraction inequality (3.15) can be simplified to

$$\big\|u^{k+1} - u^*\big\|^2 \le \big\|u^k - u^*\big\|^2 - \frac{\gamma(2 - \gamma)}{2}\big\|e(u^k)\big\|^2.\qquad(3.17)$$
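Algorithm 3.2 is then only a few lines of code. A Python sketch of the iteration (our reconstruction; `project` is again a stand-in for $P_\Omega$, and the stopping test uses (3.7)):

```python
import numpy as np

def pc_method(u, M, q, project, gamma=1.5, eps=1e-7, max_iter=100_000):
    # Projection and contraction method (Algorithm 3.2) for LVI(Omega, M, q).
    for k in range(max_iter):
        e = u - project(u - (M @ u + q))   # residual e(u^k), eq. (3.6)
        if np.linalg.norm(e) < eps:        # e(u*) = 0 iff u* solves the LVI
            break
        d = M.T @ e + e                    # direction d(u^k), eq. (3.12)
        alpha = (e @ e) / (d @ d)          # step size (3.14); = 1/2 for the M of (2.11)
        u = u - gamma * alpha * d          # update (3.13), gamma in (0, 2)
    return u, k
```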

Following (3.17), the convergence of the sequence $\{u^k\}$ can be found in [11] or proved similarly to that in [17]. Since the above inequality holds for every $u^* \in \Omega^*$, we have

$$\mathrm{dist}^2\big(u^{k+1}, \Omega^*\big) \le \mathrm{dist}^2\big(u^k, \Omega^*\big) - \frac{\gamma(2 - \gamma)}{2}\big\|e(u^k)\big\|^2,\qquad(3.18)$$

where

$$\mathrm{dist}(u, \Omega^*) = \min\{ \|u - u^*\| \mid u^* \in \Omega^* \}.\qquad(3.19)$$

The above inequality states that we profit greatly from the $k$-th iteration if $\|e(u^k)\|$ is not too small; conversely, if we profit very little from the $k$-th iteration, then $\|e(u^k)\|$ is already very small and $u^k$ is a sufficiently good approximation of some $u^* \in \Omega^*$.

4. Implementation Details of Algorithm 3.2 for Solving Problem (1.2)

We use Algorithm 3.2 to solve the linear variational inequality (2.10) arising from problem (1.2). For a given $u = (x; z) \in \Omega$, noting that $Mu + q = (z;\ -x + c)$, the process of computing a new iterate is as follows:

$$e_x(u) = x - P_{\Omega_1}[x - z],\qquad e_z(u) = z - P_{\Omega_2}[z + x - c],$$

$$d_x(u) = e_x(u) - e_z(u),\qquad d_z(u) = e_x(u) + e_z(u),$$

$$x^{\mathrm{new}} = x - \frac{\gamma}{2} d_x(u),\qquad z^{\mathrm{new}} = z - \frac{\gamma}{2} d_z(u),\qquad \gamma \in (0, 2).\qquad(4.1)$$

The key operations here are computing $P_{\Omega_1}[x - z]$ and $P_{\Omega_2}[z + x - c]$, where $\mathrm{mat}(x)$, $\mathrm{mat}(z)$, and $\mathrm{mat}(z + x - c)$ are symmetric matrices. In the following, we first focus on the method for computing $P_{\Omega_1}(v)$, where $\mathrm{mat}(v)$ is a symmetric matrix. Since $\mathrm{mat}(v)$ is symmetric, we have

$$P_{\Omega_1}(v) = \arg\min\{ \|x - v\| \mid x \in \Omega_1 \} = \mathrm{vec}\Big( \arg\min\big\{ \|X - V\|_F \mid X \in S^n_\Lambda \big\} \Big),\qquad(4.2)$$

where $V = \mathrm{mat}(v)$. It is known that the optimal solution of the problem

$$\min\big\{ \|X - V\|_F \mid X \in S^n_\Lambda \big\}\qquad(4.3)$$

is given by

$$Q\tilde\Lambda Q^T,\qquad(4.4)$$

where

$$VQ = Q\Lambda,\quad \Lambda = \mathrm{diag}(\lambda_{11}, \dots, \lambda_{nn}),\quad \tilde\Lambda = \mathrm{diag}(\tilde\lambda_{11}, \dots, \tilde\lambda_{nn}),\quad \tilde\lambda_{ii} = \min\big\{ \max\{ \lambda_{ii}, \lambda_{\min} \}, \lambda_{\max} \big\},\qquad(4.5)$$

and $\|\cdot\|_F$ is the Frobenius norm. Thus, we have

$$P_{\Omega_1}(v) = \mathrm{vec}\big( Q\tilde\Lambda Q^T \big).\qquad(4.6)$$
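In NumPy, the projection (4.4)-(4.6) is an eigendecomposition followed by clipping the spectrum to $[\lambda_{\min}, \lambda_{\max}]$ (a sketch under our own naming):

```python
import numpy as np

def project_spectral_box(V, lam_min, lam_max):
    # Projection onto S^n_Lambda in the Frobenius norm, eqs. (4.4)-(4.6):
    # diagonalize the symmetric V = Q Lambda Q^T and clamp each eigenvalue.
    lam, Q = np.linalg.eigh(V)
    lam_t = np.clip(lam, lam_min, lam_max)   # lambda~_ii from (4.5)
    return (Q * lam_t) @ Q.T                 # Q diag(lambda~) Q^T, eq. (4.4)
```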

Now we move on to the technique for computing $P_{\Omega_2}(v)$, where $\mathrm{mat}(v)$ is a symmetric matrix.

Lemma 4.1. If $\mathrm{mat}(v)$ is a symmetric matrix and $x^*$ is the solution of the problem

$$\min\Big\{ \|x - v\|\ \Big|\ \sum_{i=1}^m |x_i| \le 1 \Big\},\qquad(4.7)$$

then

$$\mathrm{mat}(x^*) = \mathrm{mat}(x^*)^T.\qquad(4.8)$$

Proof. Since

$$\arg\min\Big\{ \|x - v\|\ \Big|\ \sum_{i=1}^m |x_i| \le 1 \Big\} = \mathrm{vec}\Big( \arg\min\Big\{ \|X - V\|_F\ \Big|\ \sum_{i=1}^n \sum_{j=1}^n |x_{ij}| \le 1 \Big\} \Big)\qquad(4.9)$$

and $V = V^T$, where $V = \mathrm{mat}(v)$, it follows that if $x^*$ is a solution of problem (4.7), then $\mathrm{vec}\big(\mathrm{mat}(x^*)^T\big)$ is also a solution of problem (4.7). As is known, the solution of problem (4.7) is unique. Thus $x^* = \mathrm{vec}\big(\mathrm{mat}(x^*)^T\big)$, and the proof is complete.

Lemma 4.2. If $x^*$ is the solution of the problem

$$\min\Big\{ \|x - \mathrm{sign}(v) \circ v\|\ \Big|\ \sum_{i=1}^m x_i \le 1,\ x \ge 0 \Big\},\qquad(4.10)$$

where $\mathrm{sign}(v) = \big(\mathrm{sign}(v_1), \mathrm{sign}(v_2), \dots, \mathrm{sign}(v_m)\big)^T$, $\mathrm{sign}(v_i)$ is the sign of the real number $v_i$, and

$$\mathrm{sign}(v) \circ v = \big(\mathrm{sign}(v_1)v_1,\ \mathrm{sign}(v_2)v_2,\ \dots,\ \mathrm{sign}(v_m)v_m\big)^T = |v|,\qquad(4.11)$$

then $\mathrm{sign}(v) \circ x^*$ is the solution of the problem

$$\min\Big\{ \|x - v\|\ \Big|\ \sum_{i=1}^m |x_i| \le 1 \Big\}.\qquad(4.12)$$

Proof. The result follows from

$$\|\mathrm{sign}(v) \circ x - v\| = \|x - \mathrm{sign}(v) \circ v\|,\qquad \sum_{i=1}^m |\mathrm{sign}(v_i)x_i| \le \sum_{i=1}^m x_i \le 1.\qquad(4.13)$$

Lemma 4.3. Let $v \ge 0$, and let $T$ be a permutation transformation sorting the components of $v$ in descending order, that is, the components of $\tilde v = Tv$ are in descending order. Further, suppose that $\tilde x^*$ is the solution of the problem

$$\min\Big\{ \|\tilde x - \tilde v\|\ \Big|\ \sum_{i=1}^m \tilde x_i \le 1,\ \tilde x \ge 0 \Big\};\qquad(4.14)$$

then $x^* = T^{-1}\tilde x^*$ is the solution of the problem

$$\min\Big\{ \|x - v\|\ \Big|\ \sum_{i=1}^m x_i \le 1,\ x \ge 0 \Big\}.\qquad(4.15)$$

Proof. Since $T$ is a permutation transformation, we have

$$T^{-1} = T^T,\qquad(4.16)$$

and the optimal values of the objective functions of problems (4.14) and (4.15) are equal. Note that

$$\|x^* - v\| = \big\|T^{-1}\tilde x^* - v\big\| = \|\tilde x^* - Tv\| = \|\tilde x^* - \tilde v\|,\qquad \sum_{i=1}^m x^*_i = \sum_{i=1}^m \tilde x^*_i \le 1.\qquad(4.17)$$

Thus, $x^*$ is the optimal solution of problem (4.15), and the proof is complete.

Remark 4.4. Suppose that $\mathrm{mat}(v) \in R^{n \times n}$ is a symmetric matrix for a given $v \in R^m$. Let $\tilde v = T(\mathrm{sign}(v) \circ v)$, where $T$ is a permutation transformation sorting the components of $\mathrm{sign}(v) \circ v$ in descending order. Lemmas 4.1-4.3 show that if $\tilde x^*$ is the solution of the problem

$$\min\Big\{ \frac{1}{2}\|\tilde x - \tilde v\|^2\ \Big|\ \sum_{i=1}^m \tilde x_i \le 1,\ \tilde x \ge 0 \Big\},\qquad(4.18)$$

then

$$P_{\Omega_2}(v) = \mathrm{sign}(v) \circ \big(T^{-1}\tilde x^*\big).\qquad(4.19)$$

Hence, solving problem (4.18) is the key step in obtaining the projection $P_{\Omega_2}(v)$.

Let $e = (1, 1, \dots, 1)^T \in R^m$; then problem (4.18) can be rewritten as

$$\min\Big\{ \frac{1}{2}\|\tilde x - \tilde v\|^2\ \Big|\ e^T\tilde x \le 1,\ \tilde x \ge 0 \Big\}.\qquad(4.20)$$

The Lagrangian function of the constrained optimization problem (4.20) is defined as

$$L(\tilde x, y, w) = \frac{1}{2}\|\tilde x - \tilde v\|^2 - y\big(1 - e^T\tilde x\big) - w^T\tilde x,\qquad(4.21)$$

where the scalar $y$ and the vector $w \in R^m$ are the Lagrange multipliers corresponding to the inequalities $e^T\tilde x \le 1$ and $\tilde x \ge 0$, respectively. By the KKT conditions, we have

$$\tilde x - \tilde v + ye - w = 0,\quad y \ge 0,\quad 1 - e^T\tilde x \ge 0,\quad y\big(1 - e^T\tilde x\big) = 0,\quad w \ge 0,\quad \tilde x \ge 0,\quad w^T\tilde x = 0,\qquad(4.22)$$

that is,

$$y \ge 0,\quad 1 - e^T\tilde x \ge 0,\quad y\big(1 - e^T\tilde x\big) = 0,\quad \tilde x \ge 0,\quad \tilde x + ye - \tilde v \ge 0,\quad \tilde x^T\big(\tilde x + ye - \tilde v\big) = 0.\qquad(4.23)$$

It is easy to check that if $e^T\tilde v \le 1$, then $(\tilde x, \tilde y) = (\tilde v, 0)$ is the solution of problem (4.23). Now we assume that $e^T\tilde v > 1$. In this case, let

$$\Delta v = \big(\tilde v_1 - \tilde v_2,\ \tilde v_2 - \tilde v_3,\ \dots,\ \tilde v_{m-1} - \tilde v_m,\ \tilde v_m\big)^T.\qquad(4.24)$$

Note that $\Delta v \ge 0$ and

$$\sum_{i=1}^m i\,\Delta v_i = \sum_{i=1}^m \tilde v_i > 1.\qquad(4.25)$$

Thus, there exists a least integer $K$ such that

$$\sum_{i=1}^K i\,\Delta v_i \ge 1.\qquad(4.26)$$

Since

$$\sum_{i=1}^K i\,\Delta v_i = \begin{cases} \displaystyle\sum_{i=1}^K \tilde v_i - K\tilde v_{K+1}, & 1 \le K < m,\\[4pt] \displaystyle\sum_{i=1}^m \tilde v_i, & K = m, \end{cases}\qquad(4.27)$$

we have that

$$\sum_{i=1}^K \tilde v_i \ge 1.\qquad(4.28)$$

Let

$$\tilde y = \frac{1}{K}\Big( \sum_{i=1}^K \tilde v_i - 1 \Big),\qquad(4.29)$$

$$\tilde x = \big(\tilde v_1 - \tilde y,\ \tilde v_2 - \tilde y,\ \dots,\ \tilde v_K - \tilde y,\ 0, \dots, 0\big)^T \in R^m.\qquad(4.30)$$

Theorem 4.5. Let $\tilde x$ and $\tilde y$ be given by (4.30) and (4.29), respectively. Then $(\tilde x, \tilde y)$ is the solution of problem (4.23), and thus

$$P_{\Omega_2}(v) = \mathrm{sign}(v) \circ \big(T^{-1}\tilde x\big).\qquad(4.31)$$

Proof. It follows from (4.26), (4.27), and (4.29) that

$$\tilde y = \frac{1}{K}\Big(\sum_{i=1}^K \tilde v_i - 1\Big) \ge \frac{1}{K}\Big(\sum_{i=1}^K \tilde v_i - \sum_{i=1}^K i\,\Delta v_i\Big) = \begin{cases} \tilde v_{K+1}, & 1 \le K < m,\\ 0, & K = m, \end{cases}$$

and, by the minimality of $K$,

$$\tilde y = \frac{1}{K}\Big(\sum_{i=1}^K \tilde v_i - 1\Big) < \frac{1}{K}\Big(\sum_{i=1}^K \tilde v_i - \sum_{i=1}^{K-1} i\,\Delta v_i\Big) = \frac{1}{K}\big(K\tilde v_K\big) = \tilde v_K.\qquad(4.32)$$

From (4.32) it is easy to check that $(\tilde x, \tilde y)$ is a solution of problem (4.23). Note that problem (4.18) is convex; thus $\tilde x$ is the solution of problem (4.18). Further, according to Remark 4.4, we have

$$P_{\Omega_2}(v) = \mathrm{sign}(v) \circ \big(T^{-1}\tilde x\big).\qquad(4.33)$$

The proof is complete.
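Lemmas 4.1-4.3 and Theorem 4.5 together yield the familiar sort-based projection onto the $\ell_1$-ball. A Python sketch of the whole procedure (our illustration; thresholding the unsorted data applies $T^{-1}$ implicitly):

```python
import numpy as np

def project_l1_ball(v):
    # P_{Omega_2}(v): Euclidean projection onto {x : sum_i |x_i| <= 1}.
    a = np.abs(v)                        # Lemma 4.2: reduce to nonnegative data
    if a.sum() <= 1.0:                   # e^T v~ <= 1: v is already feasible
        return v.copy()
    u = np.sort(a)[::-1]                 # Lemma 4.3: sort in descending order
    csum = np.cumsum(u)
    k = np.arange(1, u.size + 1)
    nxt = np.append(u[1:], 0.0)          # v~_{K+1}, with v~_{m+1} := 0
    K = np.nonzero(csum - k * nxt >= 1.0)[0][0] + 1   # least K in (4.26)-(4.27)
    y = (csum[K - 1] - 1.0) / K          # multiplier y~, eq. (4.29)
    x = np.maximum(a - y, 0.0)           # x~ of (4.30); the sort is undone implicitly
    return np.sign(v) * x                # Theorem 4.5, eq. (4.31)
```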

Remark 4.6. Note that if $\bar X^*$ is the solution of the problem

$$\min\Big\{ \Big\|\frac{1}{\beta}C - \bar X\Big\|_\infty\ \Big|\ \bar X \in \bar S^n_\Lambda \Big\},\qquad(4.34)$$

where $\beta > 0$ is a given scalar and

$$\bar S^n_\Lambda = \Big\{ H \in R^{n \times n}\ \Big|\ H^T = H,\ \frac{\lambda_{\min}}{\beta} I \preceq H \preceq \frac{\lambda_{\max}}{\beta} I \Big\},\qquad(4.35)$$

then $X^* = \beta\bar X^*$ is the solution of problem (1.2). Thus, we can find the solution of problem (1.2) by solving problem (4.34). Let

$$\bar\Omega_1 = \{ x = \mathrm{vec}(\bar X) \mid \bar X \in \bar S^n_\Lambda \}.\qquad(4.36)$$

Now we are in a position to describe the implementation of Algorithm 3.2 for problem (1.2) in detail.

Algorithm 4.7 (projection and contraction method for problem (1.2)).

Step 1 (initialization). Let $C = (c_{ij}) \in R^{n \times n}$ be a given symmetric matrix, let $\lambda_{\min}$, $\lambda_{\max}$, and $\beta > 0$ be given scalars, and set $c = \mathrm{vec}(C)/\beta$. Choose arbitrarily an initial point $x^0 = \mathrm{vec}(X^0)$ and $z^0 = \mathrm{vec}(Z^0)$ with $x^0 \in \bar\Omega_1$ and $z^0 \in \Omega_2$, where $\bar\Omega_1$ and $\Omega_2$ are defined by (4.36) and (2.7), respectively, and set $u^0 = (x^0; z^0)$. Let $\gamma \in (0, 2)$, $k = 0$, and let $\varepsilon > 0$ be a prespecified tolerance.

Step 2 (computation). Compute $P_{\bar\Omega_1}[x^k - z^k]$ and $P_{\Omega_2}[z^k + x^k - c]$ by using (4.6) and (4.31), respectively. Let

$$e_x(u^k) = x^k - P_{\bar\Omega_1}\big[x^k - z^k\big],\qquad e_z(u^k) = z^k - P_{\Omega_2}\big[z^k + x^k - c\big],$$

$$d_x(u^k) = e_x(u^k) - e_z(u^k),\qquad d_z(u^k) = e_x(u^k) + e_z(u^k),$$

where $u^k = (x^k; z^k)$.

Step 3 (verification). If $\|e(u^k)\| < \varepsilon$, where $e(u^k) = \big(e_x(u^k); e_z(u^k)\big)$, then stop and output the approximate solution $X^* = \beta\,\mathrm{mat}(x^k)$.

Step 4 (iteration). Set $x^{k+1} = x^k - \gamma d_x(u^k)/2$, $z^{k+1} = z^k - \gamma d_z(u^k)/2$, and $k := k + 1$; go to Step 2.
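Assembling the pieces, here is a Python sketch of Algorithm 4.7 (our reconstruction for illustration, reusing `project_spectral_box` and `project_l1_ball` from the sketches above and the starting point (5.1) below; the paper's own implementation is in Matlab):

```python
import numpy as np

def solve_matrix_nearness(C, lam_min, lam_max, beta, gamma=1.5, eps=1e-7,
                          max_iter=200_000):
    # Algorithm 4.7: PC method for min ||X - C||_inf over lam_min I <= X <= lam_max I.
    n = C.shape[0]
    c = C.flatten(order="F") / beta              # data of the scaled problem (4.34)
    lo, hi = lam_min / beta, lam_max / beta      # bounds of S~^n_Lambda, eq. (4.35)
    ev = np.linalg.eigvalsh(C)
    l1, l2 = max(ev[0] / beta, lo), min(ev[-1] / beta, hi)
    x = ((l1 + l2) / 2 * np.eye(n)).flatten(order="F")   # x^0 as in (5.1)
    z = np.zeros(n * n)                                  # z^0 = vec(zeros(n))
    for k in range(max_iter):
        X = (x - z).reshape((n, n), order="F")
        ex = x - project_spectral_box(X, lo, hi).flatten(order="F")  # e_x(u^k)
        ez = z - project_l1_ball(z + x - c)                          # e_z(u^k)
        if np.sqrt(ex @ ex + ez @ ez) < eps:     # Step 3: ||e(u^k)|| < eps
            break
        x -= gamma * (ex - ez) / 2               # Step 4 with d_x = e_x - e_z
        z -= gamma * (ex + ez) / 2               #             d_z = e_x + e_z
    return beta * x.reshape((n, n), order="F")   # X* = beta * mat(x^k)
```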

5. Numerical Experiments

In this section, some examples are provided to illustrate the performance of Algorithm 4.7 for solving problem (1.2). In the following illustrative examples, the computer program implementing Algorithm 4.7 is coded in Matlab, and the program runs on an IBM notebook R51.

Example 5.1. Consider problem (1.2) with $C = (\bar C + \bar C^T)/2 + \mathrm{eye}(n)$, $\lambda_{\min} = -50$, and $\lambda_{\max} = 50$, where $\bar C = \mathrm{rand}(n) - 0.5$, rand and eye are both Matlab functions, and $n$ is the size of problem (1.2). Let $\gamma = 1.5$ and $\varepsilon = 1 \times 10^{-7}$; then we take

$$\beta = n\|\mathrm{vec}(C)\|,\quad x^0 = \mathrm{vec}\Big(\frac{1}{2}\big(\bar\lambda_1 + \bar\lambda_2\big)\,\mathrm{eye}(n)\Big),\quad z^0 = \mathrm{vec}\big(\mathrm{zeros}(n)\big),\qquad(5.1)$$

where zeros is also a Matlab function, $\bar\lambda_1 = \max\{\lambda_1/\beta, \lambda_{\min}/\beta\}$, $\bar\lambda_2 = \min\{\lambda_n/\beta, \lambda_{\max}/\beta\}$, and $\lambda_1, \lambda_n$ are the smallest and the largest eigenvalues of the matrix $C$, respectively.

Table 1 reports the numerical results of Example 5.1 solved by Algorithm 4.7, where $k$ is the number of iterations, CPU time is measured in seconds, and $X^*$ is the approximate solution of problem (1.2) obtained by Algorithm 4.7.

Table 1: Numerical results of Example 5.1.

n      λ1        λn        k      CPU time (s)    ‖e(u)‖          ‖vec(X* − C)‖/‖vec(C)‖
10     0.947     2.687     55     0.06            9.699×10⁻⁸      1.9768×10⁻⁷
20     0.5849    2.8134    58     0.17            7.38×10⁻⁸       2.700×10⁻⁷
30     0.9467    2.9938    65     0.35            7.5651×10⁻⁸     2.8008×10⁻⁷
40     1.405     3.404     75     0.49            9.635×10⁻⁸      5.0350×10⁻⁷
50     1.7593    3.795     89     0.67            8.559×10⁻⁸      3.9993×10⁻⁷
60     2.0733    4.1648    106    1.9             8.9517×10⁻⁸     6.1197×10⁻⁷
70     2.3450    4.335     11     2.00            9.595×10⁻⁸      6.6696×10⁻⁷
80     2.5817    4.5194    140    3.56            8.397×10⁻⁸      3.7658×10⁻⁷
90     2.8345    4.751     160    5.3             8.37×10⁻⁸       4.8914×10⁻⁷
100    2.9193    5.1780    181    7.43            9.3473×10⁻⁸     6.160×10⁻⁷
150    4.1850    5.7006    30     35.90           7.4435×10⁻⁸     6.155×10⁻⁷
200    4.671     6.7545    446    119.07          7.8171×10⁻⁸     8.4093×10⁻⁷

Example 5.2. Consider problem (1.2) with $C = \bar C + \bar C^T$, $\lambda_{\min} = 0$, and $\lambda_{\max} = 20$, where $\bar C = \mathrm{rand}(n) - 1$. In this test example, let $\gamma$ and $\varepsilon$ be the same as in Example 5.1, and let $\beta$, $x^0$, and $z^0$ also be given according to (5.1). Table 2 reports the numerical results of Example 5.2 and shows the numerical performance of Algorithm 4.7 for solving problem (1.2).

Table 2: Numerical results of Example 5.2.

n      λ1        λn          k       CPU time (s)    ‖e(u)‖          ‖vec(X* − C)‖/‖vec(C)‖
10     0.0090    9.0559      74      0.08            9.9036×10⁻⁸     4.676×10⁻⁹
20     0.0048    19.4004     107     0.30            9.4778×10⁻⁸     4.66×10⁻⁷
30     0.0184    33.6477     7617    17.98           9.9976×10⁻⁸     7.0348×10⁻²
40     0.0004    46.479      16      0.76            9.9938×10⁻⁸     1.16×10⁻¹
50     0.0045    55.707      19      1.46            9.4131×10⁻⁸     1.8517×10⁻¹
60     0.000     73.8863     159     15.18           9.9708×10⁻⁸     2.454×10⁻¹
70     0.0084    86.8784     45      4.8             8.991×10⁻⁸      3.3916×10⁻¹
80     0.001     100.338     614     14.13           9.0897×10⁻⁸     3.8534×10⁻¹
90     0.000     108.674     384     10.38           7.7490×10⁻⁸     4.7801×10⁻¹
100    0.0011    130.793     36      9.03            9.0180×10⁻⁸     5.59×10⁻¹
150    0.0001    195.6736    14      98.8            7.0598×10⁻⁸     6.645×10⁻¹
200    0.0013    261.9417    158     165.9           7.1460×10⁻⁸     7.4489×10⁻¹

6. Conclusions

In this paper, a relationship between the matrix nearness problem and a linear variational inequality has been established, so the matrix nearness problem considered here can be solved by applying an algorithm for the related linear variational inequality. Based on this observation, a projection and contraction method is presented for solving the matrix nearness problem, and its implementation details are introduced. Numerical experiments show that the method suggested in this paper has a good performance and that it can be improved by setting the parameters in Algorithm 4.7 properly. Thus, further study of the effect of the parameters in Algorithm 4.7 may be an interesting line of work.

Acknowledgments

This research is financially supported by a research grant from the Research Grant Council of China (Project no. 10971095).

References

[1] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
[2] N. J. Higham, "Matrix nearness problems and applications," in Applications of Matrix Theory, M. Gover and S. Barnett, Eds., pp. 1-27, Oxford University Press, Oxford, UK, 1989.
[3] S. Boyd and L. Xiao, "Least-squares covariance matrix adjustment," SIAM Journal on Matrix Analysis and Applications, vol. 27, no. 2, pp. 532-546, 2005.
[4] N. J. Higham, "Computing a nearest symmetric positive semidefinite matrix," Linear Algebra and Its Applications, vol. 103, pp. 103-118, 1988.
[5] N. J. Higham, "Computing the nearest correlation matrix - a problem from finance," IMA Journal of Numerical Analysis, vol. 22, no. 3, pp. 329-343, 2002.
[6] G. L. Xue and Y. Y. Ye, "An efficient algorithm for minimizing a sum of Euclidean norms with applications," SIAM Journal on Optimization, vol. 7, no. 4, pp. 1017-1036, 1997.
[7] G. L. Xue and Y. Y. Ye, "An efficient algorithm for minimizing a sum of p-norms," SIAM Journal on Optimization, vol. 10, no. 2, pp. 551-579, 2000.
[8] J. Yang and Y. Zhang, "Alternating direction algorithms for l1-problems in compressive sensing," SIAM Journal on Scientific Computing, vol. 33, no. 1, pp. 250-278, 2011.
[9] I. S. Dhillon and J. A. Tropp, "Matrix nearness problems with Bregman divergences," SIAM Journal on Matrix Analysis and Applications, vol. 29, no. 4, pp. 1120-1146, 2007.
[10] B. S. He, "A projection and contraction method for a class of linear complementarity problems and its application in convex quadratic programming," Applied Mathematics and Optimization, vol. 25, no. 3, pp. 247-262, 1992.
[11] B. S. He, "A new method for a class of linear variational inequalities," Mathematical Programming, vol. 66, no. 2, pp. 137-144, 1994.
[12] B. S. He, "Solving a class of linear projection equations," Numerische Mathematik, vol. 68, no. 1, pp. 71-80, 1994.
[13] B. S. He, "A modified projection and contraction method for a class of linear complementarity problems," Journal of Computational Mathematics, vol. 14, no. 1, pp. 54-63, 1996.
[14] B. S. He, "A class of projection and contraction methods for monotone variational inequalities," Applied Mathematics and Optimization, vol. 35, no. 1, pp. 69-76, 1997.
[15] M. H. Xu and T. Wu, "A class of linearized proximal alternating direction methods," Journal of Optimization Theory and Applications, vol. 151, no. 2, pp. 321-337, 2011.
[16] B. C. Eaves, "On the basic theorem of complementarity," Mathematical Programming, vol. 1, no. 1, pp. 68-75, 1971.
[17] M. H. Xu, J. L. Jiang, B. Li, and B. Xu, "An improved prediction-correction method for monotone variational inequalities with separable operators," Computers & Mathematics with Applications, vol. 59, no. 6, pp. 2074-2086, 2010.
