Journal of Computational Mathematics, Vol. 21, No. 4, 2003, 495-504.

A MODIFIED VARIABLE-PENALTY ALTERNATING DIRECTIONS METHOD FOR MONOTONE VARIATIONAL INEQUALITIES *)

Bing-sheng He  Sheng-li Wang
(Department of Mathematics, Nanjing University, Nanjing 210093, China)
Hai Yang
(Department of Civil Engineering, The Hong Kong University of Science & Technology, Clear Water Bay, Kowloon, Hong Kong)

Abstract

The alternating directions method is one of the approaches for solving linearly constrained separable monotone variational inequalities. Experience with applications has shown that the number of iterations depends significantly on the penalty for the system of linearly constrained equations, and therefore a method with variable penalties is advantageous in practice. In this paper, we extend the Kontogiorgis and Meyer method [12] by removing the monotonicity assumption on the variable penalty matrices. Moreover, we introduce a self-adaptive rule that makes the method more efficient and insensitive to the initial penalties. Numerical results for a class of Fermat-Weber problems show that the modified method and its self-adaptive technique are proper and necessary in practice.

Key words: Monotone variational inequalities, Alternating directions method, Fermat-Weber problem.

1. Introduction

The mathematical form of variational inequalities consists of finding a vector u* ∈ Ω such that

  VI(Ω, F):  (u − u*)^T F(u*) ≥ 0,  ∀ u ∈ Ω,  (1)

where Ω is a nonempty closed convex subset of R^l and F is a continuous mapping from R^l into itself. In practice, many VI problems have the following separable structure (e.g., [14]):

  u = (x; y),  F(u) = (f(x); g(y)),  (2)
  Ω = {(x, y) | x ∈ X, y ∈ Y, Ax + By = b},  (3)

where X ⊂ R^n and Y ⊂ R^m are given closed convex sets, f : X → R^n and g : Y → R^m are given monotone operators, A ∈ R^{r×n} and B ∈ R^{r×m} are given matrices, and b ∈ R^r is a given vector.
By attaching a Lagrange multiplier vector λ ∈ R^r to the linear constraint Ax + By = b, the problem under consideration can be explained as a mixed variational inequality (a VI with the equality restriction Ax + By = b and the unrestricted variable λ): Find w* ∈ W such that

  (w − w*)^T Q(w*) ≥ 0,  ∀ w ∈ W,  (4)

*) Received March 5, 2002. The first author was supported by the NSFC grant 07054; the third author was supported in part by the Hong Kong Research Grants Council through a RGC-CERG Grant (HKUST603/99E).
where

  w = (x; y; λ),  Q(w) = (f(x) − A^T λ; g(y) − B^T λ; Ax + By − b),  W = X × Y × R^r.  (5)

Problem (4)-(5) is denoted by MVI(W, Q) and is the problem this paper is concerned with. It is well known (e.g., see [14]) that solving MVI(W, Q) is equivalent to finding a zero point of

  e(w) := w − P_W[w − Q(w)],  (6)

where P_W(·) denotes the projection on W. ‖e(w)‖ can be viewed as an 'error bound' that measures how much w fails to be a solution of MVI(W, Q).

As a tool for solving MVI(W, Q) problems, the alternating directions method was originally proposed by Gabay [5] and Gabay and Mercier [4]. At each iteration of this method, the new iterate w^{k+1} = (x^{k+1}, y^{k+1}, λ^{k+1}) ∈ X × Y × R^r is generated from a given triple w^k = (x^k, y^k, λ^k) ∈ X × Y × R^r by the following procedure: First, x^{k+1} is obtained (with y^k and λ^k held fixed) by solving

  (x′ − x^{k+1})^T { f(x^{k+1}) − A^T [λ^k − β(Ax^{k+1} + By^k − b)] } ≥ 0,  ∀ x′ ∈ X,  (7)

and then y^{k+1} is produced (with x^{k+1} and λ^k held fixed) by solving

  (y′ − y^{k+1})^T { g(y^{k+1}) − B^T [λ^k − β(Ax^{k+1} + By^{k+1} − b)] } ≥ 0,  ∀ y′ ∈ Y.  (8)

Finally, the multipliers are updated by

  λ^{k+1} = λ^k − γβ(Ax^{k+1} + By^{k+1} − b),  (9)

where γ ∈ (0, (1+√5)/2) and β > 0 are given constants. This method is referred to as a method of multipliers in the literature [5], and the convergence proof can be found in [4, 6] (for B = I) and [3, 15] (for general B). Further studies and applications of such methods can be found in Glowinski [6], Glowinski and Le Tallec [7], Eckstein and Fukushima [1] and He and Yang [9].

Experience with applications [1, 3, 12] has shown that if the fixed penalty β is chosen too small or too large, the solution time can increase significantly. To improve such methods, Kontogiorgis and Meyer [12] recently presented a more general alternating directions method in which they took a sequence of symmetric positive definite (spd) penalty matrices {H_k} instead of the constant penalty β.
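To make the iteration (7)-(9) concrete, the following sketch applies it to an invented toy instance with affine monotone operators f(x) = Px + q and g(y) = Ry + s (P, R symmetric positive definite) and unconstrained blocks X = R^n, Y = R^m, so that each VI subproblem reduces to a linear system. The data and function names are fabricated for this illustration only.

```python
import numpy as np

def adm(P, q, R, s, A, B, b, beta=1.0, gamma=1.0, iters=5000):
    """Classical alternating directions iteration (7)-(9) for the toy
    case X = R^n, Y = R^m with affine monotone operators f(x) = Px + q
    and g(y) = Ry + s (P, R spd), so each subproblem is a linear solve."""
    x = np.zeros(A.shape[1])
    y = np.zeros(B.shape[1])
    lam = np.zeros(b.shape[0])
    for _ in range(iters):
        # (7): solve f(x) - A^T [lam - beta (A x + B y^k - b)] = 0 for x
        x = np.linalg.solve(P + beta * A.T @ A,
                            A.T @ lam - q + beta * A.T @ (b - B @ y))
        # (8): solve g(y) - B^T [lam - beta (A x^{k+1} + B y - b)] = 0 for y
        y = np.linalg.solve(R + beta * B.T @ B,
                            B.T @ lam - s + beta * B.T @ (b - A @ x))
        # (9): multiplier update with relaxation factor gamma
        lam = lam - gamma * beta * (A @ x + B @ y - b)
    return x, y, lam

rng = np.random.default_rng(0)
n = m = r = 3
M1 = rng.standard_normal((n, n)); P = M1 @ M1.T + np.eye(n)
M2 = rng.standard_normal((m, m)); R = M2 @ M2.T + np.eye(m)
q, s = rng.standard_normal(n), rng.standard_normal(m)
A, B = rng.standard_normal((r, n)), rng.standard_normal((r, m))
b = rng.standard_normal(r)
x, y, lam = adm(P, q, R, s, A, B, b)
print(np.linalg.norm(A @ x + B @ y - b))       # primal residual
print(np.linalg.norm(P @ x + q - A.T @ lam))   # f(x) = A^T lam at the limit
```

At a fixed point, the primal residual vanishes and the first two components of Q(w) in (5) are zero, which is exactly the condition that w solves MVI(W, Q).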
The convergence of their method was proved under the assumption that the eigenvalues of {H_k} are uniformly bounded from below away from zero and that, with finitely many exceptions, the eigenvalues of H_k − H_{k+1} are nonnegative. In this paper, we continue Kontogiorgis and Meyer's research [12] and present a modified variable-penalty alternating directions method that allows the eigenvalues of {H_k} either to increase or to decrease in each iteration. This can be beneficial in applications. In addition, similarly as in [10], we propose a self-adaptive adjusting rule that makes the method more advantageous in practice.

The following notation is used in this paper. We denote by I_{n×n} the identity matrix in R^{n×n}. For any real matrix M and vector v, we denote the transposes by M^T and v^T, respectively. The notation M ⪰ 0 means that M is a positive semi-definite matrix, and M ≻ 0 means that M is a positive definite matrix. Superscripts such as k in v^k refer to specific vectors and are usually iteration indices. The Euclidean norm of a vector z is denoted by ‖z‖, i.e., ‖z‖ = √(z^T z).

2. The General Structure of the Modified Method

Throughout this paper, we call the method by Kontogiorgis and Meyer [12] the ADM method and our modified method the MADM method. To describe the MADM method, we need a non-negative sequence {η_k} that satisfies ∑_{k=0}^∞ η_k < ∞.
Modified Variable-penalty Alternating Directions Method (MADM for short)

Step 0. Given ε > 0, γ ∈ (0, (1+√5)/2), a non-negative sequence {η_k} satisfying ∑_{k=0}^∞ η_k < ∞, a symmetric positive definite (spd) matrix H_0, y^0 ∈ Y and λ^0 ∈ R^r. Set k = 0.

Step 1. Convergence verification: if ‖e(w^k)‖ < ε, stop.

Step 2. Find x^{k+1} ∈ X (with fixed y^k and λ^k) such that

  (x′ − x^{k+1})^T f_k(x^{k+1}) ≥ 0,  ∀ x′ ∈ X,  (10)

where

  f_k(x) = f(x) − A^T [λ^k − H_k(Ax + By^k − b)].  (11)

Step 3. Find y^{k+1} ∈ Y (with fixed x^{k+1} and λ^k) such that

  (y′ − y^{k+1})^T g_k(y^{k+1}) ≥ 0,  ∀ y′ ∈ Y,  (12)

where

  g_k(y) = g(y) − B^T [λ^k − H_k(Ax^{k+1} + By − b)].  (13)

Step 4. Update

  λ^{k+1} = λ^k − γ H_k(Ax^{k+1} + By^{k+1} − b).  (14)

Step 5. Adjust the penalty matrix H_{k+1} (≻ 0) such that

  (1/(1+η_k)) H_k ⪯ H_{k+1} ⪯ (1+η_k) H_k.  (15)

Set k := k + 1, and go to Step 1.

Remark 1. In the ADM method [12], the restriction on the matrix sequence {H_k} can be equivalently described as

  (1/(1+η_k)) H_k ⪯ H_{k+1} ⪯ H_k,  ∀ k ≥ 0 (with η_k ≥ 0 and ∑_{k=0}^∞ η_k < ∞).  (16)

Comparing (15) with (16), the MADM method does not need the monotonicity assumption and allows the sequence {H_k} to be more flexible. Furthermore, instead of γ = 1 as in [12], we relax γ to γ ∈ (0, (1+√5)/2).

Remark 2. There are various ways to construct an spd matrix H_{k+1} satisfying (15), and we will show some of them in Section 5. Since η_i is non-negative and ∑_{i=0}^∞ η_i < ∞, the product ∏_{i=0}^∞ (1+η_i) is convergent and greater than zero. It follows from (15) that the matrix sequence {H_k} is bounded both from above and from below (away from zero). Also from (15) we have

  (1/(1+η_k)) H_k^{-1} ⪯ H_{k+1}^{-1} ⪯ (1+η_k) H_k^{-1}.

Remark 3. For bounded {H_k}, it can be shown that there is a constant c_0 > 0 such that

  ‖e(w^{k+1})‖ ≤ c_0 ( ‖Ax^{k+1} + By^{k+1} − b‖ + ‖B(y^k − y^{k+1})‖ ).  (17)

Since solving MVI(W, Q) is equivalent to finding a zero point of e(w), we need only to prove that

  lim_{k→∞} ( ‖Ax^{k+1} + By^{k+1} − b‖ + ‖B(y^k − y^{k+1})‖ ) = 0.

In fact, it is easy to check from (10)-(14) that if Ax^{k+1} + By^{k+1} − b = 0 and B(y^k − y^{k+1}) = 0, then w^{k+1} = (x^{k+1}, y^{k+1}, λ^{k+1}) is a solution of MVI(W, Q).
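Step 5 leaves the choice of H_{k+1} free inside the bounds (15). As a minimal sketch (assuming diagonal penalty matrices, which the method does not require), the safeguard becomes an elementwise clip, and the boundedness asserted in Remark 2 can be observed numerically for a summable sequence {η_k}; the helper name and the particular sequence are invented for this illustration.

```python
import numpy as np

def safeguard_penalty(H_prop, H_k, eta_k):
    """Step 5 for *diagonal* penalty matrices (a simplifying assumption;
    the method allows general spd H_k): clip a proposed update into
    (1/(1+eta_k)) H_k <= H_{k+1} <= (1+eta_k) H_k, which for diagonal
    matrices is an elementwise clip on the diagonal entries."""
    return np.clip(H_prop, H_k / (1.0 + eta_k), H_k * (1.0 + eta_k))

# Remark 2 in action: with a summable {eta_k}, the safeguarded sequence
# stays between prod 1/(1+eta_i) and prod (1+eta_i) times H_0, hence it
# is bounded above and away from zero even though it may move both ways.
eta = 1.0 / np.arange(1, 200) ** 2          # summable: sum < pi^2 / 6
H = np.ones(4)                              # diagonal entries of H_0 = I
rng = np.random.default_rng(1)
for e in eta:
    # propose an arbitrary (here random) rescaling, then safeguard it
    H = safeguard_penalty(H * rng.uniform(0.5, 2.0, size=4), H, e)
lower, upper = np.prod(1.0 / (1.0 + eta)), np.prod(1.0 + eta)
print(H.min(), lower, H.max(), upper)
```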
For convenience, we make some basic assumptions to guarantee that the problem under consideration is solvable and that the MADM method is well defined.

Assumption A. The solution set of MVI(W, Q), denoted by W*, is nonempty.

Assumption B. Problems (10) and (12) are solvable.

3. Some Preparations for the Convergence Analysis

In this section, we make some preparations for the convergence analysis. We will investigate the difference between

  ‖λ^k − λ*‖²_{H_k^{-1}} + γ‖B(y^k − y*)‖²_{H_k}  and  ‖λ^{k+1} − λ*‖²_{H_k^{-1}} + γ‖B(y^{k+1} − y*)‖²_{H_k}.

Now, let us first observe the difference of ‖λ^k − λ*‖²_{H_k^{-1}} and ‖λ^{k+1} − λ*‖²_{H_k^{-1}}. Using (14) and the identity ‖a + b‖² = ‖a‖² + 2a^T b + ‖b‖², we get

  ‖λ^k − λ*‖²_{H_k^{-1}} = ‖λ^{k+1} − λ*‖²_{H_k^{-1}} − γ²‖Ax^{k+1} + By^{k+1} − b‖²_{H_k} + 2γ(λ^k − λ*)^T (Ax^{k+1} + By^{k+1} − b).  (18)

The following lemma provides a desirable property of the last term of (18).

Lemma 1. For any w* = (x*, y*, λ*) ∈ W*, we have

  (λ^k − λ*)^T (Ax^{k+1} + By^{k+1} − b) ≥ ‖Ax^{k+1} + By^{k+1} − b‖²_{H_k} + (Ax^{k+1} − Ax*)^T H_k (By^k − By^{k+1}).  (19)

Proof. Since w* ∈ W*, x^{k+1} ∈ X and y^{k+1} ∈ Y, we have

  (x^{k+1} − x*)^T (f(x*) − A^T λ*) ≥ 0  (20)

and

  (y^{k+1} − y*)^T (g(y*) − B^T λ*) ≥ 0.  (21)

On the other hand, from (10) and (12) it follows that

  (x* − x^{k+1})^T { f(x^{k+1}) − A^T [λ^k − H_k(Ax^{k+1} + By^k − b)] } ≥ 0  (22)

and

  (y* − y^{k+1})^T { g(y^{k+1}) − B^T [λ^k − H_k(Ax^{k+1} + By^{k+1} − b)] } ≥ 0.  (23)

Adding (20) and (22), and using the monotonicity of the operator f, we get

  (x^{k+1} − x*)^T A^T [(λ^k − λ*) − H_k(Ax^{k+1} + By^k − b)] ≥ 0.  (24)

Similarly, adding (21) and (23) and using the monotonicity of the operator g, it follows that

  (y^{k+1} − y*)^T B^T [(λ^k − λ*) − H_k(Ax^{k+1} + By^{k+1} − b)] ≥ 0.  (25)

Combining (24) and (25) and using Ax* + By* = b, we get the assertion of this lemma.

From Lemma 1 and (18), we have

  ‖λ^k − λ*‖²_{H_k^{-1}} ≥ ‖λ^{k+1} − λ*‖²_{H_k^{-1}} + γ(2 − γ)‖Ax^{k+1} + By^{k+1} − b‖²_{H_k} + 2γ(Ax^{k+1} − Ax*)^T H_k (By^k − By^{k+1}).  (26)
In addition, we have the identity

  γ‖B(y^k − y*)‖²_{H_k} = γ‖B(y^{k+1} − y*)‖²_{H_k} + γ‖B(y^k − y^{k+1})‖²_{H_k} + 2γ(By^{k+1} − By*)^T H_k (By^k − By^{k+1}).  (27)

Thus, combining (26) and (27) and using Ax* + By* = b, we get

  ‖λ^k − λ*‖²_{H_k^{-1}} + γ‖B(y^k − y*)‖²_{H_k}
  ≥ ‖λ^{k+1} − λ*‖²_{H_k^{-1}} + γ‖B(y^{k+1} − y*)‖²_{H_k} + γ(2 − γ)‖Ax^{k+1} + By^{k+1} − b‖²_{H_k}
   + γ‖B(y^k − y^{k+1})‖²_{H_k} + 2γ(Ax^{k+1} + By^{k+1} − b)^T H_k (By^k − By^{k+1}).  (28)

In the following lemma, we observe the last term in (28).

Lemma 2. For k ≥ 1 we have

  (Ax^{k+1} + By^{k+1} − b)^T H_k (By^k − By^{k+1}) ≥ (1 − γ)(Ax^k + By^k − b)^T H_{k−1} (By^k − By^{k+1}).  (29)

Proof. By setting y′ = y^k in (12) we get

  (y^k − y^{k+1})^T { g(y^{k+1}) − B^T [λ^k − H_k(Ax^{k+1} + By^{k+1} − b)] } ≥ 0.  (30)

Similarly, taking k := k − 1 and y′ = y^{k+1} in (12), we have

  (y^{k+1} − y^k)^T { g(y^k) − B^T [λ^{k−1} − H_{k−1}(Ax^k + By^k − b)] } ≥ 0.  (31)

By adding (30) and (31) and using the monotonicity of the operator g, we obtain

  (y^{k+1} − y^k)^T B^T { [λ^k − H_k(Ax^{k+1} + By^{k+1} − b)] − [λ^{k−1} − H_{k−1}(Ax^k + By^k − b)] } ≥ 0.  (32)

Substituting λ^k = λ^{k−1} − γH_{k−1}(Ax^k + By^k − b) in (32), the assertion of this lemma follows immediately.
4. The Main Theorem and the Convergence Proof

Remember that we restrict γ ∈ (0, (1+√5)/2) and hence 1 + γ − γ² > 0. Let

  τ = 2 − (1/3)(1 + γ − γ²);  (33)

then we have

  2 − τ = (1/3)(1 + γ − γ²) > 0  and  τ − γ = (1/3)(γ² − 4γ + 5) = (1/3)[(γ − 2)² + 1] > 1/3.

Now we are in the stage to prove the main theorem of this paper.

Theorem 1. Let w* = (x*, y*, λ*) ∈ W* be a solution point of MVI(W, Q) and let {w^k} = {(x^k, y^k, λ^k)} be the sequence generated by the MADM method. Then there is a k_0 ≥ 0 such that

  ‖λ^{k+1} − λ*‖²_{H_{k+1}^{-1}} + γ‖B(y^{k+1} − y*)‖²_{H_{k+1}} + γ(τ − γ)‖Ax^{k+1} + By^{k+1} − b‖²_{H_k}
  ≤ (1 + η_k) { ‖λ^k − λ*‖²_{H_k^{-1}} + γ‖B(y^k − y*)‖²_{H_k} + γ(τ − γ)‖Ax^k + By^k − b‖²_{H_{k−1}} }
   − γ(2 − τ) ( ‖Ax^{k+1} + By^{k+1} − b‖²_{H_k} + ‖B(y^k − y^{k+1})‖²_{H_k} ),  ∀ k ≥ k_0.  (34)

Proof. It follows from (28) and (29) that

  ‖λ^k − λ*‖²_{H_k^{-1}} + γ‖B(y^k − y*)‖²_{H_k}
  ≥ ‖λ^{k+1} − λ*‖²_{H_k^{-1}} + γ‖B(y^{k+1} − y*)‖²_{H_k} + γ(2 − γ)‖Ax^{k+1} + By^{k+1} − b‖²_{H_k}
   + γ‖B(y^k − y^{k+1})‖²_{H_k} + 2γ(1 − γ)(Ax^k + By^k − b)^T H_{k−1} (By^k − By^{k+1}).  (35)

Using the Cauchy-Schwarz inequality and (15), we have

  2γ(1 − γ)(Ax^k + By^k − b)^T H_{k−1} (By^k − By^{k+1})
  ≥ −γ(τ − γ)‖Ax^k + By^k − b‖²_{H_{k−1}} − (γ(1 − γ)²/(τ − γ))‖B(y^k − y^{k+1})‖²_{H_{k−1}}
  ≥ −γ(τ − γ)‖Ax^k + By^k − b‖²_{H_{k−1}} − (γ(1 − γ)²/(τ − γ))(1 + η_{k−1})‖B(y^k − y^{k+1})‖²_{H_k}.  (36)

Substituting (36) in (35), we derive

  ‖λ^{k+1} − λ*‖²_{H_k^{-1}} + γ‖B(y^{k+1} − y*)‖²_{H_k} + γ(τ − γ)‖Ax^{k+1} + By^{k+1} − b‖²_{H_k}
  ≤ ‖λ^k − λ*‖²_{H_k^{-1}} + γ‖B(y^k − y*)‖²_{H_k} + γ(τ − γ)‖Ax^k + By^k − b‖²_{H_{k−1}}
   − γ(2 − τ)‖Ax^{k+1} + By^{k+1} − b‖²_{H_k} − γδ_k ‖B(y^k − y^{k+1})‖²_{H_k},  (37)

where

  δ_k := 1 − ((1 − γ)²/(τ − γ))(1 + η_{k−1}).

Note that

  δ_k = ((τ − γ) − (1 − γ)² − (1 − γ)²η_{k−1})/(τ − γ) = ((2/3)(1 + γ − γ²) − (1 − γ)²η_{k−1})/(τ − γ);  (38)

the second equality of (38) is obtained by using (33). From η_k ≥ 0 and ∑_{k=0}^∞ η_k < ∞, we have lim_{k→∞} η_k = 0. Then there exists a k_0 ≥ 1 such that

  0 ≤ ((1 − γ)²/(τ − γ)) η_{k−1} < (1/15)(1 + γ − γ²),  ∀ k ≥ k_0.

Since sup{γ² − 4γ + 5 | γ ∈ (0, (1+√5)/2)} = 5, i.e. τ − γ ≤ 5/3, it follows from (38) that for k ≥ k_0,

  δ_k > (2/5 − 1/15)(1 + γ − γ²) = (1/3)(1 + γ − γ²) = 2 − τ.  (39)

In addition, by (15) we have

  ‖λ^{k+1} − λ*‖²_{H_{k+1}^{-1}} + γ‖B(y^{k+1} − y*)‖²_{H_{k+1}} ≤ (1 + η_k) ( ‖λ^{k+1} − λ*‖²_{H_k^{-1}} + γ‖B(y^{k+1} − y*)‖²_{H_k} ).  (40)

Substituting (39) and (40) in (37), we obtain the conclusion of the theorem immediately.

Using Theorem 1, we can prove the convergence of our method as follows.

Theorem 2. Let {(x^k, y^k, λ^k)} be the sequence generated by the modified variable-penalty alternating directions method for MVI(W, Q). We have

  lim_{k→∞} ( ‖Ax^{k+1} + By^{k+1} − b‖²_{H_k} + ‖B(y^k − y^{k+1})‖²_{H_k} ) = 0.  (41)
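A quick numerical check of the constants as reconstructed here: for every γ in (0, (1+√5)/2), the quantity τ in (33) does satisfy both 2 − τ > 0 and τ − γ > 1/3, which Theorem 1 relies on.

```python
import numpy as np

# Sanity check of the constants around (33) as reconstructed here:
# for gamma in (0, (1+sqrt(5))/2), tau = 2 - (1 + gamma - gamma^2)/3
# satisfies 2 - tau > 0 and tau - gamma > 1/3.
gammas = np.linspace(1e-6, (1.0 + np.sqrt(5.0)) / 2.0 - 1e-6, 10001)
tau = 2.0 - (1.0 + gammas - gammas**2) / 3.0
print(bool((2.0 - tau > 0).all()), bool((tau - gammas > 1.0 / 3.0).all()))
```

The second inequality is strict on the whole interval because τ − γ = ((γ − 2)² + 1)/3 and γ stays below (1+√5)/2 < 2.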
Proof. Since {η_k} is nonnegative and ∑_{i=1}^∞ η_i < ∞, it follows that ∏_{i=1}^∞ (1 + η_i) is bounded. Denote

  C_s := ∑_{i=1}^∞ η_i,  C_p := ∏_{i=1}^∞ (1 + η_i).  (42)

From Theorem 1, we have for all k ≥ k_0 that

  ‖λ^{k+1} − λ*‖²_{H_{k+1}^{-1}} + γ‖B(y^{k+1} − y*)‖²_{H_{k+1}} + γ(τ − γ)‖Ax^{k+1} + By^{k+1} − b‖²_{H_k}
  ≤ C_p ( ‖λ^{k_0} − λ*‖²_{H_{k_0}^{-1}} + γ‖B(y^{k_0} − y*)‖²_{H_{k_0}} + γ(τ − γ)‖Ax^{k_0} + By^{k_0} − b‖²_{H_{k_0−1}} ).

Therefore, there exists a constant C > 0 such that

  ‖λ^k − λ*‖²_{H_k^{-1}} + γ‖B(y^k − y*)‖²_{H_k} + γ(τ − γ)‖Ax^k + By^k − b‖²_{H_{k−1}} ≤ C,  ∀ k ≥ 0.  (43)

From Theorem 1 and (43) we get

  ∑_{i=k_0}^∞ γ(2 − τ) ( ‖Ax^{i+1} + By^{i+1} − b‖²_{H_i} + ‖B(y^i − y^{i+1})‖²_{H_i} ) ≤ (1 + C_s) C,  (44)

and hence

  lim_{k→∞} γ(2 − τ) ( ‖Ax^{k+1} + By^{k+1} − b‖²_{H_k} + ‖B(y^k − y^{k+1})‖²_{H_k} ) = 0.

Since γ ∈ (0, (1+√5)/2) and 2 − τ = (1/3)(1 + γ − γ²) > 0, it follows that

  lim_{k→∞} ( ‖Ax^{k+1} + By^{k+1} − b‖²_{H_k} + ‖B(y^k − y^{k+1})‖²_{H_k} ) = 0,

and thus Theorem 2 is proved.

Remember that the sequence {H_k} is bounded; it then follows from Theorem 2 that

  lim_{k→∞} ( ‖Ax^{k+1} + By^{k+1} − b‖ + ‖B(y^k − y^{k+1})‖ ) = 0

and the MADM method is convergent.

5. Numerical Experiments

Let w^{k+1} = (x^{k+1}, y^{k+1}, λ^{k+1}) ∈ X × Y × R^r be generated from a given triple w^k = (x^k, y^k, λ^k) ∈ X × Y × R^r by (10)-(14) with γ = 1. In this case, it follows that e_y(w^{k+1}) = 0 and thus

  ‖e(w^{k+1})‖² = ‖e_x(w^{k+1})‖² + ‖e_λ(w^{k+1})‖²,

where

  e_x(w) = x − P_X{ x − [f(x) − A^T λ] }  and  e_λ(w) = Ax + By − b.

For the sake of balance, in [10] the authors adjusted the penalty parameter β such that ‖e_x(w)‖ ≈ ‖e_λ(w)‖. For VI problems (1)-(3) with the block form

  x = (x_1, x_2, …, x_l)^T,  y = (y_1, y_2, …, y_l)^T,  (45)
  f(x) = (f_1(x_1), f_2(x_2), …, f_l(x_l))^T,  g(y) = (g_1(y_1), g_2(y_2), …, g_l(y_l))^T,  (46)
  Ω = {u = (x, y) | x_i ∈ X_i, y_i ∈ Y_i, A_i x_i + B_i y_i = b_i, ∀ i = 1, …, l},  (47)

we suggest a strategy similar to that in [10] for adjusting the penalty matrices:

A simple strategy for adjusting the penalty matrix H_k in the MADM method.
8 50 B.S. HE, S.L. WANG AND H. YANG H (i) + = 8 >< >: ( + fi )H (i) ; if x i P X i [x i (f i(x i ) AT i i )] <μa ix i + B iyi b i, H (i) +fi ; if μx i P X i [x i (f i(x i ) A T i i )] > A i x i + B i yi b i, H (i) ; otherwise. with a symmetric matrix H 0 = diagfh () 0 ;H() 0 ; ;H(s) 0 g and H(i) 0 χ 0; i=; ;l: In the numerical tests, we consider the Fermat-Weber problem [] min yr n lx i= a i y b (48) in which the vectors b and the weights a i > 0 are given. For n = the problem has a single-facility location interpretation: b are shipment centers, represented as points in the plane; the sought minimizer is the location of the facility to be built, such that the sum of the transportation costs between the centers and the facility is minimized, where each cost is proportional to the Euclidean distance. Introducing auxiliary vectors of x [] ; ;x [l], Problem (48) can be rewritten as min f lx i= Then it can be formulated into the form (45) (47) with a i x jx = y b ; i =; ;lg: (49) x i = x ; y i = y; f i (x i )=a i x i x i ; g i(y i )=0; A i = I n n ; B i = I n n ; b i = b ; X i = Y i = R n ; i =; ;l: Thus, under the non-degeneracy assumption, we can apply the alternating directions method with self-adaptive bloc diagonal penalty matrixtosolve Problem (49), in which wetae fl =, H 0 = diagffi[] 0 I n;fi([] 0 I n; ;fi(l) 0 I ng with fi 0 > 0;i=; ;l and =minf; (maxf; 00g) g = f; ;:::;; 4 ; 9 ; 6 ;:::g: In particular implementation, we have and fi + = 8 >< >: ( + )fi if a i x x < 0:x y + b, fi =( + x ) if 0:a i x > x y + b, otherwise fi x + = a i fi y + = lx i= fi i lx i= ; = + fi y fi b ; i =; ;l; fi x+ + fi b + = fi i (x + y + + b ); i =; ;l: In [], the author used ADM method for Problem (49) by taing H 0 =diagn a b [] I n; a b [] I n; ; ; a l b [l] I n o; L = 0:075 nl lx i= a i ;
In [12], the author used the ADM method for Problem (49) by taking

  H_0 = diag( (a_1/‖b_[1]‖) I_n, (a_2/‖b_[2]‖) I_n, …, (a_l/‖b_[l]‖) I_n ),  L = (0.075/(nl)) ∑_{i=1}^l a_i,

which are denoted by H_0[12] and L[12], respectively, in the sequel, and

  H_k = diag(β_k^{(1)} I_n, β_k^{(2)} I_n, …, β_k^{(l)} I_n),
  β_k^{(i)} = 1.05 β_{k−T}^{(i)}, if β_{k−T}^{(i)} < L;  max{0.98 β_{k−T}^{(i)}, L}, otherwise,  for i = 1, …, l,

where the penalties were updated every T (= 10) iterations.

To assess the impact of the penalty value on performance, we generated classes of data with the number of points l in {25, 50, 75} and the dimension n in {2, 4, 8, 16}. For each class the problem was generated randomly. The weights a_i were uniformly distributed in [1, 10], while the components of b_[i] were uniformly distributed in [0, 100]. The stopping test was ‖e(w^k)‖ ≤ 10^{-6}.

Table 1 reports the computational results obtained by applying the proposed MADM method and the ADM method of [12] to Problem (49).

Table 1. Number of iterations for Fermat-Weber problems (for each n and l: ADM and MADM iteration counts with initial penalties H_0 = 10^{-2}I, 10^{-1}I, I, 10I, 10^{2}I and H_0 = H_0[12]).

The solution time of the proposed MADM method is much less than that of the ADM method. We note that, for the same problem, the iteration numbers of the ADM method depend significantly on the initial penalty. Although the ADM method performs as efficiently as (or somewhat more efficiently than) the MADM method when the initial penalty matrix H_0 is selected carefully as in [12], it is inconvenient to choose a proper initial penalty in the ADM method for individual problems. Another difficulty encountered by the ADM method is how to choose a proper lower bound L; as we have seen, L = (0.075/(nl)) ∑_{i=1}^l a_i in [12] is not an obvious one. The iteration numbers of the proposed MADM method are insensitive to the initial penalty. To demonstrate this more clearly, we test the Fermat-Weber problem with n = 16 and l = 75 (the largest problem in Tables 1 and 2). The initial penalty matrix H_0 is a positive diagonal matrix whose elements are β_0^{(i)} I_16, i = 1, …, 75.
We let β_0^{(i)} be uniformly distributed in (10^{-p}, 10^{p}) and list the iteration numbers in Table 2.

Table 2. Number of iterations of the proposed method for Fermat-Weber problems, with β_0^{(i)} ∈ (10^{-p}, 10^{p}) for different values of p.

6. Conclusions

In this paper, we proposed a modified variable-penalty alternating directions method. The presented MADM method extends the ADM method by allowing the penalty matrix to vary more flexibly. The preliminary numerical tests show that the proposed method with the self-adaptive technique is preferable in practice.

References

[1] J. Eckstein and M. Fukushima, Some reformulations and applications of the alternating direction method of multipliers, in Large Scale Optimization: State of the Art, W.W. Hager et al., eds., Kluwer Academic Publishers, (1994), 115-134.
[2] M. Fortin and R. Glowinski, eds., Augmented Lagrangian Methods: Applications to the Numerical Solution of Boundary-Value Problems, North-Holland, Amsterdam, (1983).
[3] M. Fukushima, Application of the alternating direction method of multipliers to separable convex programming problems, Computational Optimization and Applications, 1 (1992), 93-111.
[4] D. Gabay and B. Mercier, A dual algorithm for the solution of nonlinear variational problems via finite-element approximations, Computers and Mathematics with Applications, 2 (1976), 17-40.
[5] D. Gabay, Applications of the method of multipliers to variational inequalities, in Augmented Lagrangian Methods: Applications to the Numerical Solution of Boundary-Value Problems, M. Fortin and R. Glowinski, eds., North-Holland, Amsterdam, The Netherlands, (1983), 299-331.
[6] R. Glowinski, Numerical Methods for Nonlinear Variational Problems, Springer-Verlag, New York, Berlin, Heidelberg, Tokyo, (1984).
[7] R. Glowinski and P. Le Tallec, Augmented Lagrangian and Operator-Splitting Methods in Nonlinear Mechanics, SIAM Studies in Applied Mathematics, Philadelphia, PA, (1989).
[8] P.T. Harker and J.S. Pang, Finite-dimensional variational inequality and nonlinear complementarity problems: a survey of theory, algorithms and applications, Mathematical Programming, 48 (1990), 161-220.
[9] B.S. He and H. Yang, Some convergence properties of a method of multipliers for linearly constrained monotone variational inequalities, Operations Research Letters, 23 (1998), 151-161.
[10] B.S. He, H. Yang and S.L. Wang, Alternating directions method with self-adaptive penalty parameters for monotone variational inequalities, Journal of Optimization Theory and Applications, 106 (2000), 337-356.
[11] D. Kinderlehrer and G. Stampacchia, An Introduction to Variational Inequalities and Their Applications, Academic Press, New York, (1980).
[12] S. Kontogiorgis and R.R. Meyer, A variable-penalty alternating directions method for convex optimization, Mathematical Programming, 83 (1998), 29-53.
[13] P.L. Lions and B. Mercier, Splitting algorithms for the sum of two nonlinear operators, SIAM Journal on Numerical Analysis, 16 (1979), 964-979.
[14] A. Nagurney, Network Economics, A Variational Inequality Approach, Kluwer Academic Publishers, Dordrecht, Boston, London, (1993).
[15] P. Tseng, Applications of a splitting algorithm to decomposition in convex programming and variational inequalities, SIAM Journal on Control and Optimization, 29 (1991), 119-138.
More information4y Springer NONLINEAR INTEGER PROGRAMMING
NONLINEAR INTEGER PROGRAMMING DUAN LI Department of Systems Engineering and Engineering Management The Chinese University of Hong Kong Shatin, N. T. Hong Kong XIAOLING SUN Department of Mathematics Shanghai
More informationStrong convergence theorems for total quasi-ϕasymptotically
RESEARCH Open Access Strong convergence theorems for total quasi-ϕasymptotically nonexpansive multi-valued mappings in Banach spaces Jinfang Tang 1 and Shih-sen Chang 2* * Correspondence: changss@yahoo.
More informationAn inexact parallel splitting augmented Lagrangian method for large system of linear equations
An inexact parallel splitting augmented Lagrangian method for large system of linear equations Zheng Peng [1, ] DongHua Wu [] [1]. Mathematics Department of Nanjing University, Nanjing PR. China, 10093
More informationA convergence result for an Outer Approximation Scheme
A convergence result for an Outer Approximation Scheme R. S. Burachik Engenharia de Sistemas e Computação, COPPE-UFRJ, CP 68511, Rio de Janeiro, RJ, CEP 21941-972, Brazil regi@cos.ufrj.br J. O. Lopes Departamento
More informationPolynomial complementarity problems
Polynomial complementarity problems M. Seetharama Gowda Department of Mathematics and Statistics University of Maryland, Baltimore County Baltimore, Maryland 21250, USA gowda@umbc.edu December 2, 2016
More informationMath 273a: Optimization Overview of First-Order Optimization Algorithms
Math 273a: Optimization Overview of First-Order Optimization Algorithms Wotao Yin Department of Mathematics, UCLA online discussions on piazza.com 1 / 9 Typical flow of numerical optimization Optimization
More informationEigenvalue analysis of constrained minimization problem for homogeneous polynomial
J Glob Optim 2016) 64:563 575 DOI 10.1007/s10898-015-0343-y Eigenvalue analysis of constrained minimization problem for homogeneous polynomial Yisheng Song 1,2 Liqun Qi 2 Received: 31 May 2014 / Accepted:
More informationConstrained Optimization
1 / 22 Constrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University March 30, 2015 2 / 22 1. Equality constraints only 1.1 Reduced gradient 1.2 Lagrange
More informationKey words. Complementarity set, Lyapunov rank, Bishop-Phelps cone, Irreducible cone
ON THE IRREDUCIBILITY LYAPUNOV RANK AND AUTOMORPHISMS OF SPECIAL BISHOP-PHELPS CONES M. SEETHARAMA GOWDA AND D. TROTT Abstract. Motivated by optimization considerations we consider cones in R n to be called
More informationThe Relation Between Pseudonormality and Quasiregularity in Constrained Optimization 1
October 2003 The Relation Between Pseudonormality and Quasiregularity in Constrained Optimization 1 by Asuman E. Ozdaglar and Dimitri P. Bertsekas 2 Abstract We consider optimization problems with equality,
More informationNumerical Optimization
Constrained Optimization Computer Science and Automation Indian Institute of Science Bangalore 560 012, India. NPTEL Course on Constrained Optimization Constrained Optimization Problem: min h j (x) 0,
More informationA double projection method for solving variational inequalities without monotonicity
A double projection method for solving variational inequalities without monotonicity Minglu Ye Yiran He Accepted by Computational Optimization and Applications, DOI: 10.1007/s10589-014-9659-7,Apr 05, 2014
More informationSOLVING A MINIMIZATION PROBLEM FOR A CLASS OF CONSTRAINED MAXIMUM EIGENVALUE FUNCTION
International Journal of Pure and Applied Mathematics Volume 91 No. 3 2014, 291-303 ISSN: 1311-8080 (printed version); ISSN: 1314-3395 (on-line version) url: http://www.ijpam.eu doi: http://dx.doi.org/10.12732/ijpam.v91i3.2
More informationSharpening the Karush-John optimality conditions
Sharpening the Karush-John optimality conditions Arnold Neumaier and Hermann Schichl Institut für Mathematik, Universität Wien Strudlhofgasse 4, A-1090 Wien, Austria email: Arnold.Neumaier@univie.ac.at,
More informationDETERMINISTIC REPRESENTATION FOR POSITION DEPENDENT RANDOM MAPS
DETERMINISTIC REPRESENTATION FOR POSITION DEPENDENT RANDOM MAPS WAEL BAHSOUN, CHRISTOPHER BOSE, AND ANTHONY QUAS Abstract. We give a deterministic representation for position dependent random maps and
More informationA note on the unique solution of linear complementarity problem
COMPUTATIONAL SCIENCE SHORT COMMUNICATION A note on the unique solution of linear complementarity problem Cui-Xia Li 1 and Shi-Liang Wu 1 * Received: 13 June 2016 Accepted: 14 November 2016 First Published:
More informationMS&E 318 (CME 338) Large-Scale Numerical Optimization
Stanford University, Management Science & Engineering (and ICME) MS&E 318 (CME 338) Large-Scale Numerical Optimization 1 Origins Instructor: Michael Saunders Spring 2015 Notes 9: Augmented Lagrangian Methods
More information1. Introduction. In this paper we consider a class of matrix factorization problems, which can be modeled as
1 3 4 5 6 7 8 9 10 11 1 13 14 15 16 17 A NON-MONOTONE ALTERNATING UPDATING METHOD FOR A CLASS OF MATRIX FACTORIZATION PROBLEMS LEI YANG, TING KEI PONG, AND XIAOJUN CHEN Abstract. In this paper we consider
More informationResearch Article Modified Halfspace-Relaxation Projection Methods for Solving the Split Feasibility Problem
Advances in Operations Research Volume 01, Article ID 483479, 17 pages doi:10.1155/01/483479 Research Article Modified Halfspace-Relaxation Projection Methods for Solving the Split Feasibility Problem
More informationAN AUGMENTED LAGRANGIAN AFFINE SCALING METHOD FOR NONLINEAR PROGRAMMING
AN AUGMENTED LAGRANGIAN AFFINE SCALING METHOD FOR NONLINEAR PROGRAMMING XIAO WANG AND HONGCHAO ZHANG Abstract. In this paper, we propose an Augmented Lagrangian Affine Scaling (ALAS) algorithm for general
More informationc 1998 Society for Industrial and Applied Mathematics
SIAM J MARIX ANAL APPL Vol 20, No 1, pp 182 195 c 1998 Society for Industrial and Applied Mathematics POSIIVIY OF BLOCK RIDIAGONAL MARICES MARIN BOHNER AND ONDŘEJ DOŠLÝ Abstract his paper relates disconjugacy
More informationOn intermediate value theorem in ordered Banach spaces for noncompact and discontinuous mappings
Int. J. Nonlinear Anal. Appl. 7 (2016) No. 1, 295-300 ISSN: 2008-6822 (electronic) http://dx.doi.org/10.22075/ijnaa.2015.341 On intermediate value theorem in ordered Banach spaces for noncompact and discontinuous
More informationA Continuation Method for the Solution of Monotone Variational Inequality Problems
A Continuation Method for the Solution of Monotone Variational Inequality Problems Christian Kanzow Institute of Applied Mathematics University of Hamburg Bundesstrasse 55 D 20146 Hamburg Germany e-mail:
More informationA globally and R-linearly convergent hybrid HS and PRP method and its inexact version with applications
A globally and R-linearly convergent hybrid HS and PRP method and its inexact version with applications Weijun Zhou 28 October 20 Abstract A hybrid HS and PRP type conjugate gradient method for smooth
More informationSome Properties of the Augmented Lagrangian in Cone Constrained Optimization
MATHEMATICS OF OPERATIONS RESEARCH Vol. 29, No. 3, August 2004, pp. 479 491 issn 0364-765X eissn 1526-5471 04 2903 0479 informs doi 10.1287/moor.1040.0103 2004 INFORMS Some Properties of the Augmented
More informationSolving generalized semi-infinite programs by reduction to simpler problems.
Solving generalized semi-infinite programs by reduction to simpler problems. G. Still, University of Twente January 20, 2004 Abstract. The paper intends to give a unifying treatment of different approaches
More informationResearch Note. A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization
Iranian Journal of Operations Research Vol. 4, No. 1, 2013, pp. 88-107 Research Note A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization B. Kheirfam We
More informationRobust Duality in Parametric Convex Optimization
Robust Duality in Parametric Convex Optimization R.I. Boţ V. Jeyakumar G.Y. Li Revised Version: June 20, 2012 Abstract Modelling of convex optimization in the face of data uncertainty often gives rise
More informationWritten Examination
Division of Scientific Computing Department of Information Technology Uppsala University Optimization Written Examination 202-2-20 Time: 4:00-9:00 Allowed Tools: Pocket Calculator, one A4 paper with notes
More informationViscosity approximation method for m-accretive mapping and variational inequality in Banach space
An. Şt. Univ. Ovidius Constanţa Vol. 17(1), 2009, 91 104 Viscosity approximation method for m-accretive mapping and variational inequality in Banach space Zhenhua He 1, Deifei Zhang 1, Feng Gu 2 Abstract
More informationResearch Article Strong Convergence of a Projected Gradient Method
Applied Mathematics Volume 2012, Article ID 410137, 10 pages doi:10.1155/2012/410137 Research Article Strong Convergence of a Projected Gradient Method Shunhou Fan and Yonghong Yao Department of Mathematics,
More informationTensor Complementarity Problem and Semi-positive Tensors
DOI 10.1007/s10957-015-0800-2 Tensor Complementarity Problem and Semi-positive Tensors Yisheng Song 1 Liqun Qi 2 Received: 14 February 2015 / Accepted: 17 August 2015 Springer Science+Business Media New
More informationPrimal/Dual Decomposition Methods
Primal/Dual Decomposition Methods Daniel P. Palomar Hong Kong University of Science and Technology (HKUST) ELEC5470 - Convex Optimization Fall 2018-19, HKUST, Hong Kong Outline of Lecture Subgradients
More informationQUADRATIC MAJORIZATION 1. INTRODUCTION
QUADRATIC MAJORIZATION JAN DE LEEUW 1. INTRODUCTION Majorization methods are used extensively to solve complicated multivariate optimizaton problems. We refer to de Leeuw [1994]; Heiser [1995]; Lange et
More informationA semidefinite relaxation scheme for quadratically constrained quadratic problems with an additional linear constraint
Iranian Journal of Operations Research Vol. 2, No. 2, 20, pp. 29-34 A semidefinite relaxation scheme for quadratically constrained quadratic problems with an additional linear constraint M. Salahi Semidefinite
More informationSplitting methods for decomposing separable convex programs
Splitting methods for decomposing separable convex programs Philippe Mahey LIMOS - ISIMA - Université Blaise Pascal PGMO, ENSTA 2013 October 4, 2013 1 / 30 Plan 1 Max Monotone Operators Proximal techniques
More informationEnhanced Fritz John Optimality Conditions and Sensitivity Analysis
Enhanced Fritz John Optimality Conditions and Sensitivity Analysis Dimitri P. Bertsekas Laboratory for Information and Decision Systems Massachusetts Institute of Technology March 2016 1 / 27 Constrained
More informationSPARSE AND LOW-RANK MATRIX DECOMPOSITION VIA ALTERNATING DIRECTION METHODS
SPARSE AND LOW-RANK MATRIX DECOMPOSITION VIA ALTERNATING DIRECTION METHODS XIAOMING YUAN AND JUNFENG YANG Abstract. The problem of recovering the sparse and low-rank components of a matrix captures a broad
More informationSupport Vector Machines
Support Vector Machines Sridhar Mahadevan mahadeva@cs.umass.edu University of Massachusetts Sridhar Mahadevan: CMPSCI 689 p. 1/32 Margin Classifiers margin b = 0 Sridhar Mahadevan: CMPSCI 689 p.
More informationDistributed Optimization via Alternating Direction Method of Multipliers
Distributed Optimization via Alternating Direction Method of Multipliers Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato Stanford University ITMANET, Stanford, January 2011 Outline precursors dual decomposition
More informationResearch Article The Solution Set Characterization and Error Bound for the Extended Mixed Linear Complementarity Problem
Journal of Applied Mathematics Volume 2012, Article ID 219478, 15 pages doi:10.1155/2012/219478 Research Article The Solution Set Characterization and Error Bound for the Extended Mixed Linear Complementarity
More informationSome Inexact Hybrid Proximal Augmented Lagrangian Algorithms
Some Inexact Hybrid Proximal Augmented Lagrangian Algorithms Carlos Humes Jr. a, Benar F. Svaiter b, Paulo J. S. Silva a, a Dept. of Computer Science, University of São Paulo, Brazil Email: {humes,rsilva}@ime.usp.br
More informationNonmonotonic back-tracking trust region interior point algorithm for linear constrained optimization
Journal of Computational and Applied Mathematics 155 (2003) 285 305 www.elsevier.com/locate/cam Nonmonotonic bac-tracing trust region interior point algorithm for linear constrained optimization Detong
More informationShiqian Ma, MAT-258A: Numerical Optimization 1. Chapter 9. Alternating Direction Method of Multipliers
Shiqian Ma, MAT-258A: Numerical Optimization 1 Chapter 9 Alternating Direction Method of Multipliers Shiqian Ma, MAT-258A: Numerical Optimization 2 Separable convex optimization a special case is min f(x)
More informationThe Squared Slacks Transformation in Nonlinear Programming
Technical Report No. n + P. Armand D. Orban The Squared Slacks Transformation in Nonlinear Programming August 29, 2007 Abstract. We recall the use of squared slacks used to transform inequality constraints
More informationThe Multi-Path Utility Maximization Problem
The Multi-Path Utility Maximization Problem Xiaojun Lin and Ness B. Shroff School of Electrical and Computer Engineering Purdue University, West Lafayette, IN 47906 {linx,shroff}@ecn.purdue.edu Abstract
More informationA generalized forward-backward method for solving split equality quasi inclusion problems in Banach spaces
Available online at www.isr-publications.com/jnsa J. Nonlinear Sci. Appl., 10 (2017), 4890 4900 Research Article Journal Homepage: www.tjnsa.com - www.isr-publications.com/jnsa A generalized forward-backward
More informationMATHEMATICAL ENGINEERING TECHNICAL REPORTS
MATHEMATICAL ENGINEERING TECHNICAL REPORTS Combinatorial Relaxation Algorithm for the Entire Sequence of the Maximum Degree of Minors in Mixed Polynomial Matrices Shun SATO (Communicated by Taayasu MATSUO)
More information1. Introduction The nonlinear complementarity problem (NCP) is to nd a point x 2 IR n such that hx; F (x)i = ; x 2 IR n + ; F (x) 2 IRn + ; where F is
New NCP-Functions and Their Properties 3 by Christian Kanzow y, Nobuo Yamashita z and Masao Fukushima z y University of Hamburg, Institute of Applied Mathematics, Bundesstrasse 55, D-2146 Hamburg, Germany,
More informationDownloaded 12/13/16 to Redistribution subject to SIAM license or copyright; see
SIAM J. OPTIM. Vol. 11, No. 4, pp. 962 973 c 2001 Society for Industrial and Applied Mathematics MONOTONICITY OF FIXED POINT AND NORMAL MAPPINGS ASSOCIATED WITH VARIATIONAL INEQUALITY AND ITS APPLICATION
More informationProximal Methods for Optimization with Spasity-inducing Norms
Proximal Methods for Optimization with Spasity-inducing Norms Group Learning Presentation Xiaowei Zhou Department of Electronic and Computer Engineering The Hong Kong University of Science and Technology
More informationNew Inexact Line Search Method for Unconstrained Optimization 1,2
journal of optimization theory and applications: Vol. 127, No. 2, pp. 425 446, November 2005 ( 2005) DOI: 10.1007/s10957-005-6553-6 New Inexact Line Search Method for Unconstrained Optimization 1,2 Z.
More informationOn the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method
Optimization Methods and Software Vol. 00, No. 00, Month 200x, 1 11 On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method ROMAN A. POLYAK Department of SEOR and Mathematical
More informationMath 273a: Optimization Subgradients of convex functions
Math 273a: Optimization Subgradients of convex functions Made by: Damek Davis Edited by Wotao Yin Department of Mathematics, UCLA Fall 2015 online discussions on piazza.com 1 / 42 Subgradients Assumptions
More informationCONVERGENCE ANALYSIS OF AN INTERIOR-POINT METHOD FOR NONCONVEX NONLINEAR PROGRAMMING
CONVERGENCE ANALYSIS OF AN INTERIOR-POINT METHOD FOR NONCONVEX NONLINEAR PROGRAMMING HANDE Y. BENSON, ARUN SEN, AND DAVID F. SHANNO Abstract. In this paper, we present global and local convergence results
More informationFIRST- AND SECOND-ORDER OPTIMALITY CONDITIONS FOR MATHEMATICAL PROGRAMS WITH VANISHING CONSTRAINTS 1. Tim Hoheisel and Christian Kanzow
FIRST- AND SECOND-ORDER OPTIMALITY CONDITIONS FOR MATHEMATICAL PROGRAMS WITH VANISHING CONSTRAINTS 1 Tim Hoheisel and Christian Kanzow Dedicated to Jiří Outrata on the occasion of his 60th birthday Preprint
More information