An Extended Alternating Direction Method for Three-Block Separable Convex Programming


Jianchao Bai†, Jicheng Li†, Hongchao Zhang‡

† School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an 710049, China
‡ Department of Mathematics, Louisiana State University, Baton Rouge, LA, USA

Abstract. In order to solve convex minimization problems with at least two blocks of variables, the augmented Lagrangian method is often used and refined into the alternating direction method of multipliers (ADMM), in either a Gauss-Seidel or a Jacobian fashion. Although various versions of the ADMM have been developed for the two-block separable convex problem with a linear equality constraint, there are few feasible ways of tackling such problems with three-block objectives. In this paper, we design a novel ADMM that combines these two manners: the first two subproblems are solved in parallel with positive-definite proximal terms, and the step sizes for updating the Lagrangian multiplier are enlarged. A variational inequality is applied to characterize the solution set of the concerned problem, and the Cauchy-Schwarz inequality is used to analyze some properties of the sequence $\{\|w^k-\tilde w^k\|\}$ in a weighted norm, where $w^k$ and $\tilde w^k$ are the iterative variable and the correcting variable, respectively. In analyzing the convergence of the proposed method, the lower bound of $\|w^k-\tilde w^k\|_G^2$ is discussed separately in several different cases. Based on the obtained lower bound, the global convergence of the proposed method is proved and a worst-case $O(1/t)$ convergence rate in the ergodic sense is established.

Keywords: Convex programming; Alternating direction method of multipliers; Variational inequality; Cauchy-Schwarz inequality; Complexity

AMS subject classifications: 65K10; 90C25

1 Introduction

Throughout this paper, the symbols $\mathbb{R}$, $\mathbb{R}^n$ and $\mathbb{R}^{m\times n}$ denote the set of real numbers, the set of $n$-dimensional real column vectors and the set of $m\times n$ real matrices, respectively. The notation $\|x\|=\sqrt{\langle x,x\rangle}$ denotes the Euclidean norm of $x\in\mathbb{R}^n$, induced by the inner product $\langle x,y\rangle=x^{\mathsf T}y$ for any vectors $x,y$ of the same dimension. The symbol $I_{m\times n}$ stands for the $m\times n$ matrix whose diagonal entries are one and whose remaining entries are zero (we write $I_n$ when $m=n$). We consider the following separable convex programming problem with three-block objectives:
$$\min\ f_1(x_1)+f_2(x_2)+f_3(x_3)\quad \text{s.t.}\ A_1x_1+A_2x_2+A_3x_3=b,\quad x_i\in\mathcal{X}_i,\ i=1,2,3,\tag{1.1}$$

This work was supported by the National Natural Science Foundation of China (No. 678) and the Natural Science Foundation of Fujian Province (No. 06J008). Corresponding author e-mail: bjc1987@163.com.

where $f_i:\mathbb{R}^{m_i}\to\mathbb{R}$ $(i=1,2,3)$ are closed proper convex functions (not necessarily smooth); $A_i\in\mathbb{R}^{n\times m_i}$ and $b\in\mathbb{R}^n$ are given matrices and vectors, respectively; and $\mathcal{X}_i\subseteq\mathbb{R}^{m_i}$ $(i=1,2,3)$ are closed convex sets. Throughout this article, the solution set of (1.1) is assumed to be nonempty.

The problem (1.1) arises in many applications, such as the sparse inverse covariance estimation problem [11] in finance, the model updating problem [2] in the design of vibration structural dynamic systems and bridges, the low-rank and sparse representations [13] for image processing, and so on. Intuitively, it seems that the problem (1.1) could be solved by the augmented Lagrangian method (ALM) [5], which minimizes the augmented Lagrangian function
$$\mathcal{L}_\beta(x_1,x_2,x_3,\lambda)=\mathcal{L}(x_1,x_2,x_3,\lambda)+\frac{\beta}{2}\|A_1x_1+A_2x_2+A_3x_3-b\|^2,\tag{1.2a}$$
where $\beta>0$ is a penalty parameter for the equality constraint and
$$\mathcal{L}(x_1,x_2,x_3,\lambda)=f_1(x_1)+f_2(x_2)+f_3(x_3)-\langle\lambda,\ A_1x_1+A_2x_2+A_3x_3-b\rangle\tag{1.2b}$$
is the Lagrangian function of the problem (1.1) with the Lagrangian multiplier $\lambda\in\mathbb{R}^n$. That is, one may follow the ALM iterations
$$\begin{cases}(x_1^{k+1},x_2^{k+1},x_3^{k+1})=\arg\min\{\mathcal{L}_\beta(x_1,x_2,x_3,\lambda^k)\mid x_i\in\mathcal{X}_i,\ i=1,2,3\},\\ \lambda^{k+1}=\lambda^k-\beta(A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^{k+1}-b).\end{cases}\tag{1.3}$$
In practice, the ALM still makes it difficult to tackle the subproblem of (1.3) with three variables simultaneously, especially when the objective function is nondifferentiable or more complex. A natural idea to overcome this difficulty is to apply the famous alternating direction method of multipliers (ADMM), originally proposed in [4], which can be regarded as a splitting version of the ALM that minimizes over one variable at a time while fixing the other two at their latest values, and then updates the Lagrange multiplier. In other words, one may follow the ADMM scheme with Gauss-Seidel decomposition
$$\begin{cases}x_1^{k+1}=\arg\min\{\mathcal{L}_\beta(x_1,x_2^k,x_3^k,\lambda^k)\mid x_1\in\mathcal{X}_1\},\\ x_2^{k+1}=\arg\min\{\mathcal{L}_\beta(x_1^{k+1},x_2,x_3^k,\lambda^k)\mid x_2\in\mathcal{X}_2\},\\ x_3^{k+1}=\arg\min\{\mathcal{L}_\beta(x_1^{k+1},x_2^{k+1},x_3,\lambda^k)\mid x_3\in\mathcal{X}_3\},\\ \lambda^{k+1}=\lambda^k-\beta(A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^{k+1}-b).\end{cases}\tag{1.4a}$$
The scheme (1.4a) is a serial algorithm that uses the latest information on each variable in each iteration. Although this technique was proved to be convergent for the two-block separable convex minimization problem (see [6]), as shown by Chen et al. [1], the direct extension (1.4a) of the ADMM to the problem (1.1) is not necessarily convergent unless certain relationships among the matrices $A_i$ $(i=1,2,3)$ are satisfied. Another natural idea is to apply the parallel procedure in the Jacobian fashion to the subproblems of (1.1), that is,
$$\begin{cases}x_1^{k+1}=\arg\min\{\mathcal{L}_\beta(x_1,x_2^k,x_3^k,\lambda^k)\mid x_1\in\mathcal{X}_1\},\\ x_2^{k+1}=\arg\min\{\mathcal{L}_\beta(x_1^k,x_2,x_3^k,\lambda^k)\mid x_2\in\mathcal{X}_2\},\\ x_3^{k+1}=\arg\min\{\mathcal{L}_\beta(x_1^k,x_2^k,x_3,\lambda^k)\mid x_3\in\mathcal{X}_3\},\\ \lambda^{k+1}=\lambda^k-\beta(A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^{k+1}-b).\end{cases}\tag{1.4b}$$
However, as shown in [1, 8], neither the serial algorithm (1.4a) nor the parallel algorithm (1.4b) is necessarily convergent. The divergence of the schemes (1.4a)-(1.4b) leads us to ponder a question: why not allow some of the subproblems of (1.4) to be solved in parallel and the others serially? Inspired by this question, He et al. [9] developed a novel ADMM-like splitting method that allows some of its subproblems to be solved in parallel, while these subproblems are regularized by positive-definite proximal terms to ensure convergence. In [9], some sparse and low-rank models and image inpainting problems were tested to verify the efficiency of the method. Recently, a very

meaningful and systematic work of He et al. [10] on solving the two-block separable convex minimization problem strongly attracted our attention. The iterative framework of the method therein is
$$\begin{cases}x_1^{k+1}=\arg\min\{\mathcal{L}_\beta(x_1,x_2^k,\lambda^k)\mid x_1\in\mathcal{X}_1\},\\ \lambda^{k+\frac12}=\lambda^k-r\beta(A_1x_1^{k+1}+A_2x_2^k-b),\\ x_2^{k+1}=\arg\min\{\mathcal{L}_\beta(x_1^{k+1},x_2,\lambda^{k+\frac12})\mid x_2\in\mathcal{X}_2\},\\ \lambda^{k+1}=\lambda^{k+\frac12}-s\beta(A_1x_1^{k+1}+A_2x_2^{k+1}-b),\end{cases}\tag{1.5a}$$
where the step sizes $r$ and $s$ are restricted to the domain
$$\mathcal{H}=\Big\{(r,s)\ \Big|\ s\in\Big(0,\tfrac{1+\sqrt5}{2}\Big),\ r\in(-1,1),\ r+s>0,\ |r|<1+s-s^2\Big\}.\tag{1.5b}$$
The main improvement of [10] is that the scheme (1.5) largely extends the range of the step sizes $(r,s)$; it can be regarded as an extension of the work [7], which corresponds to $r=s\in(0,1)$. More importantly, numerical examples on the basis pursuit problem and the total-variation image deblurring model were tested to verify the efficiency of (1.5), in terms of CPU time and number of iterations, compared with the original ADMM in [4]. Now another question appears: can the idea of the scheme (1.5) be extended to solve the problem (1.1)?

Motivated by the aforementioned two questions, we next design an improved ADMM scheme with partial proximal regularization for solving the problem (1.1), whose framework is described as follows:
$$\begin{cases}x_1^{k+1}=\arg\min\{\mathcal{L}_\beta(x_1,x_2^k,x_3^k,\lambda^k)+\frac{\sigma\beta}{2}\|A_1(x_1-x_1^k)\|^2\mid x_1\in\mathcal{X}_1\},\\ x_2^{k+1}=\arg\min\{\mathcal{L}_\beta(x_1^k,x_2,x_3^k,\lambda^k)+\frac{\sigma\beta}{2}\|A_2(x_2-x_2^k)\|^2\mid x_2\in\mathcal{X}_2\},\\ \lambda^{k+\frac12}=\lambda^k-\tau\beta(A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^k-b),\\ x_3^{k+1}=\arg\min\{\mathcal{L}_\beta(x_1^{k+1},x_2^{k+1},x_3,\lambda^{k+\frac12})\mid x_3\in\mathcal{X}_3\},\\ \lambda^{k+1}=\lambda^{k+\frac12}-s\beta(A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^{k+1}-b),\end{cases}\tag{1.6}$$
where $\sigma$, $\tau$, $s$ are three independent constants that are respectively restricted to
$$\sigma\in(1,+\infty),\qquad(\tau,s)\in\mathcal{K}=\{(\tau,s)\mid\tau+s>0,\ \tau<1,\ \tau<1+s-s^2\}.\tag{1.7}$$
The parameter $\sigma$ is usually called a proximal coefficient, which controls the proximity of the new iterate to the previous one. Clearly, by skipping some constants in the objective functions of the three subproblems, the ADMM (1.6) can be further simplified as
$$\begin{cases}x_1^{k+1}=\arg\min_{x_1\in\mathcal{X}_1}\big\{f_1(x_1)-\langle\lambda^k,A_1x_1\rangle+\frac{\beta}{2}\big[\|A_1x_1+A_2x_2^k+A_3x_3^k-b\|^2+\sigma\|A_1(x_1-x_1^k)\|^2\big]\big\},\\ x_2^{k+1}=\arg\min_{x_2\in\mathcal{X}_2}\big\{f_2(x_2)-\langle\lambda^k,A_2x_2\rangle+\frac{\beta}{2}\big[\|A_1x_1^k+A_2x_2+A_3x_3^k-b\|^2+\sigma\|A_2(x_2-x_2^k)\|^2\big]\big\},\\ \lambda^{k+\frac12}=\lambda^k-\tau\beta(A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^k-b),\\ x_3^{k+1}=\arg\min_{x_3\in\mathcal{X}_3}\big\{f_3(x_3)-\langle\lambda^{k+\frac12},A_3x_3\rangle+\frac{\beta}{2}\|A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3-b\|^2\big\},\\ \lambda^{k+1}=\lambda^{k+\frac12}-s\beta(A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^{k+1}-b).\end{cases}\tag{1.8}$$
There are three main contributions of this paper. The first is that we develop a new ADMM (1.8) for solving the three-block separable convex programming problem, in which the domain $\mathcal{H}$ in (1.5b) of the step sizes is enlarged to the domain $\mathcal{K}$ defined in (1.7) (see the shaded area in Fig. 1). The second contribution is that the global convergence property and the worst-case $O(1/t)$ convergence rate of (1.8) are proved in detail, where the variational inequality and the Cauchy-Schwarz inequality play a significant role in characterizing the solution set of (1.1) and in estimating the lower bound of the sequence $\{\|w^k-\tilde w^k\|\}$ in a

weighted norm, respectively. Besides, the developed ADMM (1.8) is extended to solve the multi-block separable convex minimization problem.

[Fig. 1: Step size domain $\mathcal{K}$ of the ADMM (1.8).]

This paper is organized as follows. In Section 2, some preliminaries are prepared to reformulate the problem (1.1), including the variational inequality characterizing the solution set of (1.1) and a prediction-correction procedure interpreting the ADMM (1.8); we also prove that the ADMM (1.8) is equivalent to another ADMM variant. In Section 3, we first investigate some properties of $\|w^k-\tilde w^k\|_H^2$, then provide the lower bound of $\|w^k-\tilde w^k\|_G^2$ in different cases, and finally prove that the proposed method is globally convergent with a worst-case $O(1/t)$ convergence rate in the ergodic sense. Section 4 extends the ADMM (1.8) to the multi-block separable convex minimization problem. Finally, the paper is concluded and discussed in Section 5.

2 Preliminaries

In this section, we first use the variational inequality to characterize the solution set of the problem (1.1). We then show that the ADMM (1.8) can be regarded as a prediction-correction procedure split into two steps: a prediction step and a correction step. At the end of this section, we put forward another ADMM scheme and prove that it is equivalent to (1.8).

2.1 Variational reformulation of (1.1)

With the aid of the variational inequality, a unified framework is introduced in this subsection to characterize the solution set of (1.1). We begin with a lemma that is used throughout Section 2.

Lemma 2.1 ([10]) Let $f:\mathbb{R}^m\to\mathbb{R}$ and $h:\mathbb{R}^m\to\mathbb{R}$ be two convex functions defined on a closed convex set $\Omega\subseteq\mathbb{R}^m$, and let $h$ be differentiable. Assume that the solution set of the

minimization problem $\min\{f(x)+h(x)\mid x\in\Omega\}$ is nonempty. Then we have
$$x^*\in\arg\min_{x\in\Omega}\{f(x)+h(x)\}\iff x^*\in\Omega,\ \ f(x)-f(x^*)+\langle x-x^*,\ \nabla h(x^*)\rangle\ge0,\ \forall x\in\Omega.\tag{2.1}$$

It is well known that a quadruple $(x_1^*,x_2^*,x_3^*,\lambda^*)$ is called a saddle point of the Lagrangian function (1.2b) if it satisfies the saddle-point inequalities
$$\mathcal{L}(x_1^*,x_2^*,x_3^*,\lambda)\le\mathcal{L}(x_1^*,x_2^*,x_3^*,\lambda^*)\le\mathcal{L}(x_1,x_2,x_3,\lambda^*),$$
which leads us to solve the system
$$\begin{cases}x_1^*=\arg\min\{\mathcal{L}(x_1,x_2^*,x_3^*,\lambda^*)\mid x_1\in\mathcal{X}_1\},\\ x_2^*=\arg\min\{\mathcal{L}(x_1^*,x_2,x_3^*,\lambda^*)\mid x_2\in\mathcal{X}_2\},\\ x_3^*=\arg\min\{\mathcal{L}(x_1^*,x_2^*,x_3,\lambda^*)\mid x_3\in\mathcal{X}_3\},\\ \lambda^*=\arg\max\{\mathcal{L}(x_1^*,x_2^*,x_3^*,\lambda)\mid\lambda\in\mathbb{R}^n\}.\end{cases}\tag{2.2}$$
By making use of (2.1), the system (2.2) can be equivalently expressed as
$$\begin{cases}x_1^*\in\mathcal{X}_1,\ f_1(x_1)-f_1(x_1^*)+\langle x_1-x_1^*,\ -A_1^{\mathsf T}\lambda^*\rangle\ge0,\ \forall x_1\in\mathcal{X}_1,\\ x_2^*\in\mathcal{X}_2,\ f_2(x_2)-f_2(x_2^*)+\langle x_2-x_2^*,\ -A_2^{\mathsf T}\lambda^*\rangle\ge0,\ \forall x_2\in\mathcal{X}_2,\\ x_3^*\in\mathcal{X}_3,\ f_3(x_3)-f_3(x_3^*)+\langle x_3-x_3^*,\ -A_3^{\mathsf T}\lambda^*\rangle\ge0,\ \forall x_3\in\mathcal{X}_3,\\ \lambda^*\in\mathbb{R}^n,\ \langle\lambda-\lambda^*,\ A_1x_1^*+A_2x_2^*+A_3x_3^*-b\rangle\ge0,\ \forall\lambda\in\mathbb{R}^n,\end{cases}$$
which is further rewritten in the compact variational inequality (VI) form
$$\mathrm{VI}(f,J,\mathcal{M}):\quad f(x)-f(x^*)+\langle w-w^*,\ J(w^*)\rangle\ge0,\quad\forall w\in\mathcal{M},\tag{2.3a}$$
with
$$x=\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix},\quad f(x)=f_1(x_1)+f_2(x_2)+f_3(x_3),\quad\mathcal{M}=\mathcal{X}_1\times\mathcal{X}_2\times\mathcal{X}_3\times\mathbb{R}^n,\tag{2.3b}$$
$$w=\begin{pmatrix}x_1\\x_2\\x_3\\\lambda\end{pmatrix},\qquad J(w)=\begin{pmatrix}-A_1^{\mathsf T}\lambda\\-A_2^{\mathsf T}\lambda\\-A_3^{\mathsf T}\lambda\\A_1x_1+A_2x_2+A_3x_3-b\end{pmatrix}.\tag{2.3c}$$
Notice that the mapping $J(w)$ in (2.3c) is monotone, because it is affine with a skew-symmetric matrix. Therefore the basic identity
$$\langle w-\hat w,\ J(w)-J(\hat w)\rangle=0,\qquad\forall w,\hat w\in\mathcal{M},\tag{2.4}$$
holds. By the nonemptiness assumption on the solution set of (1.1), the solution set of $\mathrm{VI}(f,J,\mathcal{M})$, denoted by $\mathcal{M}^*$ for convenience, is also nonempty and convex; see, e.g., [3, Theorem 2.3.5]. The next theorem presents a concise way of characterizing the set $\mathcal{M}^*$; its proof is the same as that of the corresponding theorem in [6] and is omitted here.

Theorem 2.1 The solution set of $\mathrm{VI}(f,J,\mathcal{M})$ is convex and can be expressed as
$$\mathcal{M}^*=\bigcap_{w\in\mathcal{M}}\big\{\hat w\in\mathcal{M}\ \big|\ f(x)-f(\hat x)+\langle w-\hat w,\ J(w)\rangle\ge0\big\}.\tag{2.5}$$

Remark 2.1 The above theorem tells us that if
$$\sup_{w\in\mathcal{K}(\hat w)}\big\{f(\hat x)-f(x)+\langle\hat w-w,\ J(w)\rangle\big\}\le\epsilon,$$
then $\hat w\in\mathcal{M}$ is called an $\epsilon$-approximate solution point of $\mathrm{VI}(f,J,\mathcal{M})$, where $\mathcal{K}(\hat w)=\{w\in\mathcal{M}\mid\|w-\hat w\|\le1\}$ and $\epsilon>0$ is a given accuracy; in particular, $\epsilon=O(1/t)$.
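As a quick numerical illustration of the identity (2.4) (an editorial addition, not part of the original analysis), the following sketch assembles the linear part of the affine mapping $J$ for randomly generated data and checks that it is skew-symmetric, so that $\langle w-\hat w,\ J(w)-J(\hat w)\rangle$ vanishes. The dimensions and matrices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, (3, 4, 5)                       # illustrative sizes: b in R^n, x_i in R^{m_i}
A = [rng.standard_normal((n, mi)) for mi in m]

# Linear part S of J(w): J(w) = S w + const, with w = (x1, x2, x3, lambda)
S = np.zeros((sum(m) + n, sum(m) + n))
col = 0
for Ai in A:                               # blocks -A_i^T lambda and A_i x_i
    S[col:col + Ai.shape[1], -n:] = -Ai.T
    S[-n:, col:col + Ai.shape[1]] = Ai
    col += Ai.shape[1]

assert np.allclose(S + S.T, 0)             # skew-symmetry of the linear part

w, w_hat = rng.standard_normal(S.shape[0]), rng.standard_normal(S.shape[0])
# Identity (2.4): <w - w_hat, J(w) - J(w_hat)> = <w - w_hat, S (w - w_hat)> = 0
print(np.dot(w - w_hat, S @ (w - w_hat)))  # ~ 0 up to rounding
```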

2.2 A prediction-correction interpretation of (1.8)

We now give a prediction-correction procedure that interprets the ADMM scheme (1.8). It serves only as a convenience for the theoretical analysis of the proposed method; it is not necessary to implement this prediction-correction interpretation when carrying out the scheme (1.8) in practice. For the triple $(x_1^{k+1},x_2^{k+1},x_3^{k+1})$ generated by the ADMM (1.8), we define
$$w^k=\begin{pmatrix}x_1^k\\x_2^k\\x_3^k\\\lambda^k\end{pmatrix},\qquad\tilde w^k=\begin{pmatrix}\tilde x_1^k\\\tilde x_2^k\\\tilde x_3^k\\\tilde\lambda^k\end{pmatrix},\tag{2.6a}$$
where
$$\tilde x_1^k=x_1^{k+1},\qquad\tilde x_2^k=x_2^{k+1},\qquad\tilde x_3^k=x_3^{k+1},\tag{2.6b}$$
$$\tilde\lambda^k=\lambda^k-\beta\big(A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^k-b\big).\tag{2.6c}$$
The auxiliary notation $\tilde w^k$ will be used frequently in the sequel and makes the convergence analysis of the ADMM (1.8) easier. The next lemma shows that the optimality conditions of the $x_i$-subproblems of (1.8), combined with the update of the Lagrangian multiplier $\lambda$, can be written as a mixed variational inequality.

Lemma 2.2 For the iterates $w^k$ and $\tilde w^k$ defined in (2.6a), it holds that
$$\tilde w^k\in\mathcal{M},\quad f(x)-f(\tilde x^k)+\langle w-\tilde w^k,\ J(\tilde w^k)+Q(\tilde w^k-w^k)\rangle\ge0,\quad\forall w\in\mathcal{M},\tag{2.7a}$$
where
$$Q=\begin{pmatrix}\sigma\beta A_1^{\mathsf T}A_1&-\beta A_1^{\mathsf T}A_2&0&0\\-\beta A_2^{\mathsf T}A_1&\sigma\beta A_2^{\mathsf T}A_2&0&0\\0&0&\beta A_3^{\mathsf T}A_3&-\tau A_3^{\mathsf T}\\0&0&-A_3&\frac1\beta I_n\end{pmatrix}.\tag{2.7b}$$

Proof By using (2.1) and the $\tilde\lambda^k$ defined in (2.6c), the optimality condition of the $x_1$-subproblem of the ADMM (1.8) can be expressed as $x_1^{k+1}\in\mathcal{X}_1$ and, for all $x_1\in\mathcal{X}_1$,
$$f_1(x_1)-f_1(x_1^{k+1})+\big\langle x_1-x_1^{k+1},\ -A_1^{\mathsf T}\lambda^k+\beta A_1^{\mathsf T}\big(A_1x_1^{k+1}+A_2x_2^k+A_3x_3^k-b\big)+\sigma\beta A_1^{\mathsf T}A_1(x_1^{k+1}-x_1^k)\big\rangle\ge0,$$
that is,
$$f_1(x_1)-f_1(\tilde x_1^k)+\big\langle x_1-\tilde x_1^k,\ -A_1^{\mathsf T}\tilde\lambda^k-\beta A_1^{\mathsf T}A_2(\tilde x_2^k-x_2^k)+\sigma\beta A_1^{\mathsf T}A_1(\tilde x_1^k-x_1^k)\big\rangle\ge0,\quad\forall x_1\in\mathcal{X}_1.\tag{2.8}$$
Similarly, for the $x_2$-subproblem we have $x_2^{k+1}\in\mathcal{X}_2$ and
$$f_2(x_2)-f_2(\tilde x_2^k)+\big\langle x_2-\tilde x_2^k,\ -A_2^{\mathsf T}\tilde\lambda^k-\beta A_2^{\mathsf T}A_1(\tilde x_1^k-x_1^k)+\sigma\beta A_2^{\mathsf T}A_2(\tilde x_2^k-x_2^k)\big\rangle\ge0,\quad\forall x_2\in\mathcal{X}_2.\tag{2.9}$$
Using the notation $\tilde\lambda^k$ in (2.6c), the iterate $\lambda^{k+\frac12}$ in (1.8) can be rewritten as
$$\lambda^{k+\frac12}=\lambda^k-\tau\big(\lambda^k-\tilde\lambda^k\big).\tag{2.10}$$

Then, applying Lemma 2.1 to the $x_3$-subproblem together with (2.10), we obtain $x_3^{k+1}\in\mathcal{X}_3$ and
$$f_3(x_3)-f_3(\tilde x_3^k)+\big\langle x_3-\tilde x_3^k,\ -A_3^{\mathsf T}\tilde\lambda^k+\beta A_3^{\mathsf T}A_3(\tilde x_3^k-x_3^k)-\tau A_3^{\mathsf T}(\tilde\lambda^k-\lambda^k)\big\rangle\ge0,\quad\forall x_3\in\mathcal{X}_3.\tag{2.11}$$
Besides, it follows from (2.6c) that
$$\big(A_1\tilde x_1^k+A_2\tilde x_2^k+A_3\tilde x_3^k-b\big)-A_3(\tilde x_3^k-x_3^k)+\frac1\beta\big(\tilde\lambda^k-\lambda^k\big)=0,$$
which can be rewritten as
$$\tilde\lambda^k\in\mathbb{R}^n,\quad\Big\langle\lambda-\tilde\lambda^k,\ \big(A_1\tilde x_1^k+A_2\tilde x_2^k+A_3\tilde x_3^k-b\big)-A_3(\tilde x_3^k-x_3^k)+\frac1\beta\big(\tilde\lambda^k-\lambda^k\big)\Big\rangle\ge0,\quad\forall\lambda\in\mathbb{R}^n.\tag{2.12}$$
Combining (2.8)-(2.9) and (2.11)-(2.12), the proof of the inequality (2.7) is completed.

Lemma 2.3 For the sequence $\{w^{k+1}\}$ generated by the ADMM (1.8) and the iterate $\tilde w^k$ defined in (2.6a), the relationship
$$w^{k+1}=w^k-M\big(w^k-\tilde w^k\big)\tag{2.13a}$$
holds with
$$M=\begin{pmatrix}I_{m_1}&&&\\&I_{m_2}&&\\&&I_{m_3}&0\\&&-s\beta A_3&(\tau+s)I_n\end{pmatrix}.\tag{2.13b}$$

Proof By the update of $\lambda^{k+1}$ in (1.8) and Eq. (2.10), it follows that
$$\begin{aligned}\lambda^{k+1}&=\lambda^{k+\frac12}-s\beta\big(A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^{k+1}-b\big)\\&=\lambda^{k+\frac12}-s\beta\big(A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^k-b\big)-s\beta A_3\big(x_3^{k+1}-x_3^k\big)\\&=\lambda^k-\tau\big(\lambda^k-\tilde\lambda^k\big)-s\big(\lambda^k-\tilde\lambda^k\big)-s\beta A_3\big(\tilde x_3^k-x_3^k\big)\\&=\lambda^k-\big[-s\beta A_3\big(x_3^k-\tilde x_3^k\big)+(\tau+s)\big(\lambda^k-\tilde\lambda^k\big)\big].\end{aligned}$$
This equality, together with $x_i^{k+1}=\tilde x_i^k=x_i^k-(x_i^k-\tilde x_i^k)$ $(i=1,2,3)$, implies that the relationship (2.13a) holds.

Lemmas 2.2-2.3 show that the ADMM (1.8) can be interpreted as the prediction-correction framework mentioned before, where the iterates $w^{k+1}$ and $\tilde w^k$ are formally called the predictive variable and the correcting variable, respectively.
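To make the block structures (2.7b) and (2.13b) concrete, the following sketch (an illustration added here, not part of the paper) builds $Q$ and $M$ for random full-column-rank $A_i$ and parameters in the admissible range, and checks numerically that $M$ is nonsingular and that $H=QM^{-1}$ is symmetric positive definite, as Lemma 3.1 below asserts. All data values are assumptions for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, (3, 4, 5)
beta, sigma, tau, s = 1.0, 1.5, 0.5, 0.8   # assumed values with sigma > 1, (tau, s) in K
A1, A2, A3 = (rng.standard_normal((n, mi)) for mi in m)
In = np.eye(n)

Q = np.block([
    [sigma*beta*A1.T@A1,     -beta*A1.T@A2,          np.zeros((m[0], m[2])), np.zeros((m[0], n))],
    [-beta*A2.T@A1,          sigma*beta*A2.T@A2,     np.zeros((m[1], m[2])), np.zeros((m[1], n))],
    [np.zeros((m[2], m[0])), np.zeros((m[2], m[1])), beta*A3.T@A3,           -tau*A3.T],
    [np.zeros((n, m[0])),    np.zeros((n, m[1])),    -A3,                    In/beta]])

M = np.eye(sum(m) + n)                      # identity blocks for x1, x2, x3
M[-n:, m[0]+m[1]:m[0]+m[1]+m[2]] = -s*beta*A3
M[-n:, -n:] = (tau + s)*In

H = Q @ np.linalg.inv(M)                    # H = Q M^{-1}, cf. (3.3) below
print(np.allclose(H, H.T))                  # symmetry
print(np.linalg.eigvalsh((H + H.T)/2).min() > 0)  # positive definiteness
```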

2.3 Relationship between (1.8) and another scheme

This subsection presents an improvement of the algorithms (1.4a)-(1.4b) in which each subproblem is regularized by a proximal term and the Lagrangian multiplier is updated only once per iteration, namely the scheme
$$\begin{cases}x_1^{k+1}=\arg\min\{\mathcal{L}_\beta(x_1,x_2^k,x_3^k,\lambda^k)+\frac{\sigma\beta}{2}\|A_1(x_1-x_1^k)\|^2\mid x_1\in\mathcal{X}_1\},\\ x_2^{k+1}=\arg\min\{\mathcal{L}_\beta(x_1^k,x_2,x_3^k,\lambda^k)+\frac{\sigma\beta}{2}\|A_2(x_2-x_2^k)\|^2\mid x_2\in\mathcal{X}_2\},\\ x_3^{k+1}=\arg\min\big\{f_3(x_3)-\langle\lambda^k,A_3x_3\rangle+\frac{\mu\beta}{2}\|A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3-b\|^2-\frac{\tau\beta}{2}\|A_3(x_3-x_3^k)\|^2\mid x_3\in\mathcal{X}_3\big\},\\ \lambda^{k+1}=\lambda^k-(\tau+s)\beta\big(A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^{k+1}-b\big)+\tau\beta A_3\big(x_3^{k+1}-x_3^k\big),\end{cases}\tag{2.14}$$
where $\mu=\tau+1$, $\sigma>1$, and $\tau,s$ are independent parameters restricted to the domain $\mathcal{K}$ in (1.7). Note that the above ADMM scheme has three characteristics: (i) the first two subproblems of (2.14) are each regularized by adding a proximal term, the same as in (1.8); (ii) the penalty parameter of the $x_3$-subproblem is $\mu=\tau+1$ instead of the constant one, and the $x_3$-subproblem is regularized by subtracting a proximal term; (iii) the step size of the $\lambda$-update is not the constant one but ranges over an enlarged domain. Although the scheme (2.14) looks different from (1.8) on the surface, we prove that the two are actually equivalent.

Lemma 2.4 The ADMM (1.8) is equivalent to the scheme (2.14) with $\mu=\tau+1$.

Proof Clearly, we only need to prove that the optimality condition of the $x_3$-subproblem in (1.8) is the same as that of (2.14). By applying (2.1), the optimality condition of the $x_3$-subproblem in (2.14) is $x_3^{k+1}\in\mathcal{X}_3$ and
$$f_3(x_3)-f_3(x_3^{k+1})+\big\langle x_3-x_3^{k+1},\ -A_3^{\mathsf T}\lambda^k+\mu\beta A_3^{\mathsf T}\big(A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^{k+1}-b\big)-\tau\beta A_3^{\mathsf T}A_3\big(x_3^{k+1}-x_3^k\big)\big\rangle\ge0,\ \forall x_3\in\mathcal{X}_3.\tag{2.15}$$
Besides, using the update of $\lambda^{k+\frac12}$ in (1.8), the optimality condition of the $x_3$-subproblem in (1.8) can be further rewritten as
$$\begin{aligned}&f_3(x_3)-f_3(x_3^{k+1})+\big\langle x_3-x_3^{k+1},\ -A_3^{\mathsf T}\lambda^{k+\frac12}+\beta A_3^{\mathsf T}\big(A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^{k+1}-b\big)\big\rangle\\ &\ =f_3(x_3)-f_3(x_3^{k+1})+\big\langle x_3-x_3^{k+1},\ -A_3^{\mathsf T}\lambda^k+(\tau+1)\beta A_3^{\mathsf T}\big(A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^{k+1}-b\big)-\tau\beta A_3^{\mathsf T}A_3\big(x_3^{k+1}-x_3^k\big)\big\rangle\ge0.\end{aligned}\tag{2.16}$$
Obviously, (2.15) is exactly (2.16) when $\mu=\tau+1$.

Remark 2.2 Taking $(\tau,s)=(0,1)\in\mathcal{K}$, the scheme (2.14) becomes the concise scheme
$$\begin{cases}x_1^{k+1}=\arg\min\{\mathcal{L}_\beta(x_1,x_2^k,x_3^k,\lambda^k)+\frac{\sigma\beta}{2}\|A_1(x_1-x_1^k)\|^2\mid x_1\in\mathcal{X}_1\},\\ x_2^{k+1}=\arg\min\{\mathcal{L}_\beta(x_1^k,x_2,x_3^k,\lambda^k)+\frac{\sigma\beta}{2}\|A_2(x_2-x_2^k)\|^2\mid x_2\in\mathcal{X}_2\},\\ x_3^{k+1}=\arg\min\{\mathcal{L}_\beta(x_1^{k+1},x_2^{k+1},x_3,\lambda^k)\mid x_3\in\mathcal{X}_3\},\\ \lambda^{k+1}=\lambda^k-\beta\big(A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^{k+1}-b\big).\end{cases}$$
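Before turning to the convergence analysis, the following minimal sketch illustrates how the scheme (1.8) can be implemented. It is an illustration under simplifying assumptions (unconstrained quadratic $f_i(x_i)=\frac12\|x_i-c_i\|^2$ with $\mathcal{X}_i=\mathbb{R}^{m_i}$, so each subproblem reduces to a linear system), not the authors' code; all data are hypothetical, and the stopping test follows the criterion discussed in Remark 3.5 below.

```python
import numpy as np

# A minimal sketch of scheme (1.8) on: min sum_i 0.5||x_i - c_i||^2
# s.t. A1 x1 + A2 x2 + A3 x3 = b, with X_i = R^{m_i} (hypothetical test data).
rng = np.random.default_rng(2)
n, m = 10, (4, 5, 6)
A = [rng.standard_normal((n, mi)) for mi in m]
c = [rng.standard_normal(mi) for mi in m]
b = rng.standard_normal(n)
beta, sigma, tau, s, tol = 1.0, 1.2, 0.4, 0.9, 1e-8   # (tau, s) in K_1, sigma > 1

x = [np.zeros(mi) for mi in m]
lam = np.zeros(n)
for k in range(5000):
    # x1-, x2-subproblems in parallel, each with term (sigma*beta/2)||A_i(x_i - x_i^k)||^2
    x_new = []
    for i in (0, 1):
        q = sum(A[j] @ x[j] for j in range(3) if j != i) - b
        lhs = np.eye(m[i]) + beta * (1 + sigma) * A[i].T @ A[i]
        rhs = c[i] + A[i].T @ lam - beta * A[i].T @ q + sigma * beta * A[i].T @ (A[i] @ x[i])
        x_new.append(np.linalg.solve(lhs, rhs))
    lam_half = lam - tau * beta * (A[0] @ x_new[0] + A[1] @ x_new[1] + A[2] @ x[2] - b)
    # x3-subproblem with the intermediate multiplier
    q3 = A[0] @ x_new[0] + A[1] @ x_new[1] - b
    lhs = np.eye(m[2]) + beta * A[2].T @ A[2]
    rhs = c[2] + A[2].T @ lam_half - beta * A[2].T @ q3
    x_new.append(np.linalg.solve(lhs, rhs))
    r = A[0] @ x_new[0] + A[1] @ x_new[1] + A[2] @ x_new[2] - b
    lam = lam_half - s * beta * r
    err = max(max(np.linalg.norm(x_new[i] - x[i]) for i in range(3)), np.linalg.norm(r))
    x = x_new
    if err <= tol:                         # stopping rule, cf. Remark 3.5
        break
print(k, np.linalg.norm(A[0] @ x[0] + A[1] @ x[1] + A[2] @ x[2] - b))
```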

3 Convergence analysis of the ADMM (1.8)

The key to proving the convergence of the ADMM (1.8) is to verify that the cross term in (2.7) converges to zero, that is, $\lim_{k\to\infty}\langle w-\tilde w^k,\ Q(\tilde w^k-w^k)\rangle=0$ for all $w\in\mathcal{M}$; in other words, the sequence $\{\|w^k-\tilde w^k\|\}$ should be contractive in some weighted norm. In this section, we first investigate some properties of the sequence $\{\|w^k-\tilde w^k\|_H^2\}$ and estimate the lower bound of $\|w^k-\tilde w^k\|_G^2$. Then, by using these properties, several theorems are provided to establish the global convergence and the convergence rate of the ADMM (1.8).

3.1 Properties of $\{\|w^k-\tilde w^k\|_H^2\}$

We investigate whether the sequence $\{\|w^k-\tilde w^k\|\}$ is contractive or not. Using the property (2.4) of $J(w)$, the variational inequality (2.7a) becomes
$$\tilde w^k\in\mathcal{M},\quad f(x)-f(\tilde x^k)+\langle w-\tilde w^k,\ J(w)\rangle\ge\langle w-\tilde w^k,\ Q(w^k-\tilde w^k)\rangle,\quad\forall w\in\mathcal{M}.\tag{3.1}$$
Note that the matrix $M$ in (2.13b) is nonsingular. By making use of the relationship (2.13a), the right-hand term of (3.1) is further transformed into
$$\langle w-\tilde w^k,\ Q(w^k-\tilde w^k)\rangle=\langle w-\tilde w^k,\ H(w^k-w^{k+1})\rangle,\tag{3.2}$$
where
$$H=QM^{-1}.\tag{3.3}$$
It is noteworthy that the relationship (3.2) plays a fundamental role in analyzing the convergence of the proposed method in the sequel. The next lemma shows that the matrix $H$ is symmetric positive definite on the discussed domain.

Lemma 3.1 The matrix $H$ defined in (3.3) is symmetric positive definite for any $\sigma\in(1,+\infty)$ and $(\tau,s)\in\mathcal{K}$ when the matrices $A_i$ $(i=1,2,3)$ have full column rank.

Proof For the matrix $M$ defined in (2.13b), it is easy to obtain its inverse
$$M^{-1}=\begin{pmatrix}I_{m_1}&&&\\&I_{m_2}&&\\&&I_{m_3}&0\\&&\frac{s\beta}{\tau+s}A_3&\frac1{\tau+s}I_n\end{pmatrix}.$$
Thus, by the relationship (3.3), we have
$$H=QM^{-1}=\begin{pmatrix}\sigma\beta A_1^{\mathsf T}A_1&-\beta A_1^{\mathsf T}A_2&0&0\\-\beta A_2^{\mathsf T}A_1&\sigma\beta A_2^{\mathsf T}A_2&0&0\\0&0&\big(1-\frac{s\tau}{\tau+s}\big)\beta A_3^{\mathsf T}A_3&-\frac{\tau}{\tau+s}A_3^{\mathsf T}\\0&0&-\frac{\tau}{\tau+s}A_3&\frac1{\beta(\tau+s)}I_n\end{pmatrix}.$$
Clearly, the matrix $H$ is symmetric and can be decomposed as
$$H=D^{\mathsf T}H_0D,\tag{3.4a}$$
where
$$D=\begin{pmatrix}A_1&&&\\&A_2&&\\&&A_3&\\&&&I_n\end{pmatrix},\qquad H_0=\begin{pmatrix}\sigma\beta I_n&-\beta I_n&0&0\\-\beta I_n&\sigma\beta I_n&0&0\\0&0&\big(1-\frac{s\tau}{\tau+s}\big)\beta I_n&-\frac{\tau}{\tau+s}I_n\\0&0&-\frac{\tau}{\tau+s}I_n&\frac1{\beta(\tau+s)}I_n\end{pmatrix}.\tag{3.4b}$$

Then, the matrix $H$ is positive definite if and only if $H_0$ is positive definite; in other words, all the leading principal minors of $H_0$ should be positive for any given $\beta>0$, equivalently,
$$\sigma>0,\quad\sigma^2-1>0,\quad(\sigma^2-1)\,\frac{\tau+s-\tau s}{\tau+s}>0,\quad(\sigma^2-1)\,\frac{\tau+s-\tau s-\tau^2}{(\tau+s)^2}>0.$$
Since $\tau+s-\tau s-\tau^2=(\tau+s)(1-\tau)$, these conditions hold if and only if
$$\sigma>1,\qquad\tau+s>0,\qquad\tau<1.\tag{3.5}$$
Obviously, the range of $(\sigma,\tau,s)$ obtained in (3.5) contains the domain given in (1.7).

Theorem 3.1 Let the matrices $Q$, $M$, $H$ be defined in (2.7b), (2.13b) and (3.3), respectively. For the sequence $\{w^{k+1}\}$ generated by the ADMM (1.8) and the iterate $\tilde w^k$ defined in (2.6a), we have
$$f(x)-f(\tilde x^k)+\langle w-\tilde w^k,\ J(w)\rangle\ge\frac12\big(\|w-w^{k+1}\|_H^2-\|w-w^k\|_H^2\big)+\frac12\|w^k-\tilde w^k\|_G^2,\quad\forall w\in\mathcal{M},\tag{3.6a}$$
where
$$G=Q+Q^{\mathsf T}-M^{\mathsf T}HM.\tag{3.6b}$$

Proof Applying the identity
$$2\langle a-b,\ H(c-d)\rangle=\big(\|a-d\|_H^2-\|a-c\|_H^2\big)+\big(\|c-b\|_H^2-\|d-b\|_H^2\big)$$
with the substitutions $a=w$, $b=\tilde w^k$, $c=w^k$, $d=w^{k+1}$, we get
$$2\langle w-\tilde w^k,\ H(w^k-w^{k+1})\rangle=\|w-w^{k+1}\|_H^2-\|w-w^k\|_H^2+\|w^k-\tilde w^k\|_H^2-\|w^{k+1}-\tilde w^k\|_H^2.\tag{3.7}$$
Meanwhile, for the last two terms on the right-hand side of (3.7), it follows that
$$\begin{aligned}\|w^k-\tilde w^k\|_H^2-\|w^{k+1}-\tilde w^k\|_H^2&=\|w^k-\tilde w^k\|_H^2-\|(w^k-\tilde w^k)-(w^k-w^{k+1})\|_H^2\\&=\|w^k-\tilde w^k\|_H^2-\|(w^k-\tilde w^k)-M(w^k-\tilde w^k)\|_H^2\\&=\big\langle w^k-\tilde w^k,\ \big[HM+(HM)^{\mathsf T}-M^{\mathsf T}HM\big](w^k-\tilde w^k)\big\rangle\\&=\big\langle w^k-\tilde w^k,\ \big(Q+Q^{\mathsf T}-M^{\mathsf T}HM\big)(w^k-\tilde w^k)\big\rangle,\end{aligned}\tag{3.8}$$
where the fourth equality of (3.8) uses Eq. (3.3), i.e., $HM=Q$. Combining (3.1)-(3.2) and (3.7)-(3.8), we complete the proof of the assertion (3.6).

Theorem 3.2 For the iterate $\tilde w^k$ defined in (2.6a), the sequence $\{w^{k+1}\}$ generated by the ADMM (1.8) satisfies
$$\|w^{k+1}-w^*\|_H^2\le\|w^k-w^*\|_H^2-\|w^k-\tilde w^k\|_G^2,\quad\forall w^*\in\mathcal{M}^*.\tag{3.9}$$

Proof By taking $w=w^*$ in (3.6a), we have
$$\frac12\big(\|w^*-w^k\|_H^2-\|w^*-w^{k+1}\|_H^2\big)-\frac12\|w^k-\tilde w^k\|_G^2\ge f(\tilde x^k)-f(x^*)+\langle\tilde w^k-w^*,\ J(w^*)\rangle.$$
This inequality together with (2.3a) immediately implies the inequality (3.9).

Note from (3.9) that the sequence $\{\|w^k-\tilde w^k\|\}$ is not always contractive, since the matrix $G$ defined in (3.6b) is not always positive definite when the parameters $(\tau,s)$ range over $\mathcal{K}$. Hence, we need to discuss the convergence of the proposed method in different cases. For convenience of the analysis, let
$$\mathcal{K}=\bigcup_{i=1}^5\mathcal{K}_i,\qquad\mathcal{K}_i\cap\mathcal{K}_j=\emptyset,\quad i\ne j,\ i,j=1,2,3,4,5,$$

where the $\mathcal{K}_i$ are partitioned in a similar way as in [10], that is,
$$\mathcal{K}_1=\{(\tau,s)\mid -s<\tau<1,\ s<1\},\tag{3.10a}$$
$$\mathcal{K}_2=\{(\tau,s)\mid -1<\tau<1,\ s=1\},\tag{3.10b}$$
$$\mathcal{K}_3=\Big\{(\tau,s)\ \Big|\ \tau=0,\ 1<s<\tfrac{1+\sqrt5}{2}\Big\},\tag{3.10c}$$
$$\mathcal{K}_4=\{(\tau,s)\mid 0<\tau<1+s-s^2,\ s>1\},\tag{3.10d}$$
$$\mathcal{K}_5=\{(\tau,s)\mid s-s^2<\tau<0,\ s>1\}.\tag{3.10e}$$

3.2 Lower bound of $\|w^k-\tilde w^k\|_G^2$ in $\mathcal{K}_1\cup\mathcal{K}_2$

In this subsection, we discuss some properties of $\|w^k-\tilde w^k\|_G^2$ on the domain $\mathcal{K}_1\cup\mathcal{K}_2$, where $w^k$ is the $k$-th iterate of the ADMM (1.8) and $\tilde w^k$ is defined in (2.6a). The lower bound of $\|w^k-\tilde w^k\|_G^2$ is given in Theorem 3.4 below.

Lemma 3.2 For the sequence $\{w^{k+1}\}$ generated by the ADMM (1.8) and the iterate $\tilde w^k$ defined in (2.6a), we have
$$\begin{aligned}\|w^k-\tilde w^k\|_G^2&=\sigma\beta\|A_1(x_1^k-x_1^{k+1})\|^2+\sigma\beta\|A_2(x_2^k-x_2^{k+1})\|^2+(1-\tau)\beta\|A_3(x_3^k-x_3^{k+1})\|^2\\&\quad+(2-\tau-s)\beta\|A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^{k+1}-b\|^2-2\beta\big\langle A_1(x_1^k-x_1^{k+1}),\ A_2(x_2^k-x_2^{k+1})\big\rangle\\&\quad+2(1-\tau)\beta\big\langle A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^{k+1}-b,\ A_3(x_3^k-x_3^{k+1})\big\rangle.\end{aligned}\tag{3.11}$$

Proof By a simple calculation, we obtain
$$M^{\mathsf T}HM=M^{\mathsf T}Q=\begin{pmatrix}\sigma\beta A_1^{\mathsf T}A_1&-\beta A_1^{\mathsf T}A_2&0&0\\-\beta A_2^{\mathsf T}A_1&\sigma\beta A_2^{\mathsf T}A_2&0&0\\0&0&(1+s)\beta A_3^{\mathsf T}A_3&-(\tau+s)A_3^{\mathsf T}\\0&0&-(\tau+s)A_3&\frac{\tau+s}{\beta}I_n\end{pmatrix},$$
and then
$$G=Q+Q^{\mathsf T}-M^{\mathsf T}HM=\begin{pmatrix}\sigma\beta A_1^{\mathsf T}A_1&-\beta A_1^{\mathsf T}A_2&0&0\\-\beta A_2^{\mathsf T}A_1&\sigma\beta A_2^{\mathsf T}A_2&0&0\\0&0&(1-s)\beta A_3^{\mathsf T}A_3&(s-1)A_3^{\mathsf T}\\0&0&(s-1)A_3&\frac{2-\tau-s}{\beta}I_n\end{pmatrix}.\tag{3.12}$$
Hence it holds that
$$\begin{aligned}\|w^k-\tilde w^k\|_G^2&=\sigma\beta\|A_1(x_1^k-\tilde x_1^k)\|^2+\sigma\beta\|A_2(x_2^k-\tilde x_2^k)\|^2-2\beta\big\langle A_1(x_1^k-\tilde x_1^k),\ A_2(x_2^k-\tilde x_2^k)\big\rangle\\&\quad+(1-s)\beta\|A_3(x_3^k-\tilde x_3^k)\|^2+2(s-1)\big\langle\lambda^k-\tilde\lambda^k,\ A_3(x_3^k-\tilde x_3^k)\big\rangle+\frac{2-\tau-s}{\beta}\|\lambda^k-\tilde\lambda^k\|^2.\end{aligned}\tag{3.13}$$
By the notation $\tilde\lambda^k$ in (2.6c), we have
$$\lambda^k-\tilde\lambda^k=\beta\big[\big(A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^{k+1}-b\big)+A_3\big(x_3^k-x_3^{k+1}\big)\big].\tag{3.14a}$$

Thus, the cross term satisfies
$$\big\langle\lambda^k-\tilde\lambda^k,\ A_3(x_3^k-x_3^{k+1})\big\rangle=\beta\big\langle A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^{k+1}-b,\ A_3(x_3^k-x_3^{k+1})\big\rangle+\beta\|A_3(x_3^k-x_3^{k+1})\|^2,\tag{3.14b}$$
and
$$\|\lambda^k-\tilde\lambda^k\|^2=\beta^2\|A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^{k+1}-b\|^2+\beta^2\|A_3(x_3^k-x_3^{k+1})\|^2+2\beta^2\big\langle A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^{k+1}-b,\ A_3(x_3^k-x_3^{k+1})\big\rangle.\tag{3.14c}$$
Using the notation $\tilde x_i^k=x_i^{k+1}$ $(i=1,2,3)$ of (2.6b) and substituting (3.14b)-(3.14c) into (3.13), we obtain the equality (3.11).

Lemma 3.3 The sequence $\{w^{k+1}\}$ generated by the ADMM (1.8) satisfies
$$\big\langle A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^{k+1}-b,\ A_3(x_3^k-x_3^{k+1})\big\rangle\ge\frac{1-s}{1+\tau}\big\langle A_1x_1^k+A_2x_2^k+A_3x_3^k-b,\ A_3(x_3^k-x_3^{k+1})\big\rangle-\frac{\tau}{1+\tau}\|A_3(x_3^k-x_3^{k+1})\|^2.\tag{3.15}$$

Proof By Lemma 2.1, the optimality condition of the $x_3$-subproblem of (1.8) is $x_3^{k+1}\in\mathcal{X}_3$ and
$$f_3(x_3)-f_3(x_3^{k+1})+\big\langle x_3-x_3^{k+1},\ -A_3^{\mathsf T}\lambda^{k+\frac12}+\beta A_3^{\mathsf T}\big(A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^{k+1}-b\big)\big\rangle\ge0,\quad\forall x_3\in\mathcal{X}_3,\tag{3.16a}$$
and, at the previous iteration,
$$f_3(x_3)-f_3(x_3^k)+\big\langle x_3-x_3^k,\ -A_3^{\mathsf T}\lambda^{k-\frac12}+\beta A_3^{\mathsf T}\big(A_1x_1^k+A_2x_2^k+A_3x_3^k-b\big)\big\rangle\ge0,\quad\forall x_3\in\mathcal{X}_3.\tag{3.16b}$$
Setting $x_3=x_3^k$ in (3.16a) and $x_3=x_3^{k+1}$ in (3.16b), and adding them together, we have
$$\big\langle x_3^k-x_3^{k+1},\ A_3^{\mathsf T}\big(\lambda^{k-\frac12}-\lambda^{k+\frac12}\big)+\beta A_3^{\mathsf T}\big(A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^{k+1}-b\big)-\beta A_3^{\mathsf T}\big(A_1x_1^k+A_2x_2^k+A_3x_3^k-b\big)\big\rangle\ge0.\tag{3.17}$$
From the update of $\lambda^{k+\frac12}$ in (1.8), i.e., $\lambda^{k+\frac12}=\lambda^k-\tau\beta(A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^k-b)$, and the previous iteration $\lambda^k=\lambda^{k-\frac12}-s\beta(A_1x_1^k+A_2x_2^k+A_3x_3^k-b)$, we deduce that
$$\lambda^{k-\frac12}-\lambda^{k+\frac12}=\tau\beta\big(A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^{k+1}-b\big)+\tau\beta A_3\big(x_3^k-x_3^{k+1}\big)+s\beta\big(A_1x_1^k+A_2x_2^k+A_3x_3^k-b\big).\tag{3.18}$$
Substituting (3.18) into (3.17) and simplifying, the proof is completed.

The aforementioned Lemmas 3.2-3.3 immediately yield the following Theorem 3.3, which plays an important role in analyzing the convergence of the ADMM (1.8) in the sequel.

Theorem 3.3 For the sequence $\{w^{k+1}\}$ generated by the ADMM (1.8) and the iterate $\tilde w^k$ defined in (2.6a), we have
$$\begin{aligned}\|w^k-\tilde w^k\|_G^2&\ge\sigma\beta\|A_1(x_1^k-x_1^{k+1})\|^2+\sigma\beta\|A_2(x_2^k-x_2^{k+1})\|^2+\frac{(1-\tau)^2}{1+\tau}\beta\|A_3(x_3^k-x_3^{k+1})\|^2\\&\quad+(2-\tau-s)\beta\|A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^{k+1}-b\|^2\\&\quad+\frac{2(1-\tau)(1-s)}{1+\tau}\beta\big\langle A_1x_1^k+A_2x_2^k+A_3x_3^k-b,\ A_3(x_3^k-x_3^{k+1})\big\rangle\\&\quad-2\beta\big\langle A_1(x_1^k-x_1^{k+1}),\ A_2(x_2^k-x_2^{k+1})\big\rangle.\end{aligned}\tag{3.19}$$
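As a sanity check on the algebra above (an illustrative addition, not from the paper), the following snippet verifies the identity (3.11) numerically: it draws arbitrary iterates, forms $\tilde\lambda^k$ by (2.6c), and compares the quadratic form of $G$ in (3.12), expanded as in (3.13), with the right-hand side of (3.11). The data and parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 7, (3, 4, 5)
beta, sigma, tau, s = 1.3, 1.7, 0.4, 0.8   # assumed parameters
A1, A2, A3 = (rng.standard_normal((n, mi)) for mi in m)
b = rng.standard_normal(n)

xk  = [rng.standard_normal(mi) for mi in m]   # current iterate (arbitrary)
xk1 = [rng.standard_normal(mi) for mi in m]   # next iterate (arbitrary)
lam = rng.standard_normal(n)
lam_t = lam - beta*(A1@xk1[0] + A2@xk1[1] + A3@xk[2] - b)   # (2.6c)

d1, d2, d3 = A1@(xk[0]-xk1[0]), A2@(xk[1]-xk1[1]), A3@(xk[2]-xk1[2])
r = A1@xk1[0] + A2@xk1[1] + A3@xk1[2] - b
dl = lam - lam_t

# quadratic form with G from (3.12), expanded as in (3.13)
lhs = (sigma*beta*(d1@d1 + d2@d2) - 2*beta*(d1@d2)
       + (1-s)*beta*(d3@d3) + 2*(s-1)*(dl@d3) + (2-tau-s)/beta*(dl@dl))
# right-hand side of (3.11)
rhs = (sigma*beta*(d1@d1 + d2@d2) - 2*beta*(d1@d2)
       + (1-tau)*beta*(d3@d3) + (2-tau-s)*beta*(r@r) + 2*(1-tau)*beta*(r@d3))
print(np.isclose(lhs, rhs))   # True
```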

Theorem 3.4 Let the sequence $\{w^{k+1}\}$ be generated by the ADMM (1.8) and the iterate $\tilde w^k$ be defined in (2.6a). For any $\sigma>1$ and $(\tau,s)\in\mathcal{K}_1\cup\mathcal{K}_2$, there exist constants $\xi_i>0$ $(i=1,2,3,4)$ such that
$$\|w^k-\tilde w^k\|_G^2\ge\xi_1\|A_1(x_1^k-x_1^{k+1})\|^2+\xi_2\|A_2(x_2^k-x_2^{k+1})\|^2+\xi_3\|A_3(x_3^k-x_3^{k+1})\|^2+\xi_4\|A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^{k+1}-b\|^2.\tag{3.20}$$

Proof The assertion (3.20) can be proved separately in two different cases.

Case I. For any $\sigma>1$ and $(\tau,s)\in\mathcal{K}_1=\{(\tau,s)\mid -s<\tau<1,\ s<1\}$, the matrix $G$ defined in (3.12) can be decomposed as $G=D^{\mathsf T}G_0D$, where $D$ is defined in (3.4b) and
$$G_0=\begin{pmatrix}\sigma\beta I_n&-\beta I_n&0&0\\-\beta I_n&\sigma\beta I_n&0&0\\0&0&(1-s)\beta I_n&(s-1)I_n\\0&0&(s-1)I_n&\frac{2-\tau-s}{\beta}I_n\end{pmatrix}.\tag{3.21}$$
Regardless of the given positive parameter $\beta$, it is easy to compute that all the leading principal minors of $G_0$ are positive for $\sigma>1$ and $(\tau,s)\in\mathcal{K}_1$, that is,
$$\sigma>0,\quad\sigma^2-1>0,\quad(\sigma^2-1)(1-s)>0,\quad(\sigma^2-1)(1-s)(1-\tau)>0.$$
Hence, both $G$ and $G_0$ are positive definite. Besides, by (3.14a) and (2.6b), we have
$$\begin{pmatrix}A_1(x_1^k-\tilde x_1^k)\\A_2(x_2^k-\tilde x_2^k)\\A_3(x_3^k-\tilde x_3^k)\\\lambda^k-\tilde\lambda^k\end{pmatrix}=L\begin{pmatrix}A_1(x_1^k-x_1^{k+1})\\A_2(x_2^k-x_2^{k+1})\\A_3(x_3^k-x_3^{k+1})\\A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^{k+1}-b\end{pmatrix},\qquad L=\begin{pmatrix}I_n&0&0&0\\0&I_n&0&0\\0&0&I_n&0\\0&0&\beta I_n&\beta I_n\end{pmatrix},\tag{3.22}$$
so that
$$\|w^k-\tilde w^k\|_G^2=\left\|\begin{pmatrix}A_1(x_1^k-x_1^{k+1})\\A_2(x_2^k-x_2^{k+1})\\A_3(x_3^k-x_3^{k+1})\\A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^{k+1}-b\end{pmatrix}\right\|_{\tilde G_0}^2,$$
where $\tilde G_0=L^{\mathsf T}G_0L$ is positive definite and the lower triangular matrix $L$ is nonsingular. Consequently, the assertion (3.20) follows.

Case II. For any $\sigma>1$ and $(\tau,s)\in\mathcal{K}_2=\{(\tau,s)\mid -1<\tau<1,\ s=1\}$, by the Cauchy-Schwarz inequality (see, e.g., [14, p. vii]), there must exist a positive number $\eta\in(1/\sigma,\sigma)$ (especially, $\eta=1$) such that
$$-2\beta\big\langle A_1(x_1^k-x_1^{k+1}),\ A_2(x_2^k-x_2^{k+1})\big\rangle\ge-\eta\beta\|A_1(x_1^k-x_1^{k+1})\|^2-\frac\beta\eta\|A_2(x_2^k-x_2^{k+1})\|^2.\tag{3.23}$$

Setting $s=1$ in (3.19) together with the above inequality, we get
$$\|w^k-\tilde w^k\|_G^2\ge(\sigma-\eta)\beta\|A_1(x_1^k-x_1^{k+1})\|^2+\Big(\sigma-\frac1\eta\Big)\beta\|A_2(x_2^k-x_2^{k+1})\|^2+\frac{(1-\tau)^2}{1+\tau}\beta\|A_3(x_3^k-x_3^{k+1})\|^2+(1-\tau)\beta\|A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^{k+1}-b\|^2.$$
That is, the assertion (3.20) holds with
$$\xi_1=(\sigma-\eta)\beta>0,\quad\xi_2=\Big(\sigma-\frac1\eta\Big)\beta>0,\quad\xi_3=\frac{(1-\tau)^2}{1+\tau}\beta>0,\quad\xi_4=(1-\tau)\beta>0.$$

Remark 3.1 When taking $\tau=s$ and restricting $s\in(0,1)\subset\mathcal{K}$, the ADMM (1.8) can be regarded as an extension of the symmetric ADMM scheme in [7], which considers the two-block separable convex programming problem. It is also an extension of the recent work of He et al. [10], although the problem and the range of the parameters are different in this paper.

3.3 Lower bound of $\|w^k-\tilde w^k\|_G^2$ in $\mathcal{K}_3\cup\mathcal{K}_4\cup\mathcal{K}_5$

We first give a lower bound of $\|w^k-\tilde w^k\|_G^2$, described in the following theorem.

Theorem 3.5 Let the sequence $\{w^{k+1}\}$ be generated by the ADMM (1.8) and the iterate $\tilde w^k$ be defined in (2.6a). For any $\sigma>1$ and $(\tau,s)\in\mathcal{K}_3\cup\mathcal{K}_4\cup\mathcal{K}_5$, there exist constants $\xi_i>0$ $(i=1,2,3,4,5)$ such that
$$\begin{aligned}\|w^k-\tilde w^k\|_G^2&\ge\xi_1\|A_1(x_1^k-x_1^{k+1})\|^2+\xi_2\|A_2(x_2^k-x_2^{k+1})\|^2+\xi_3\|A_3(x_3^k-x_3^{k+1})\|^2\\&\quad+\xi_4\big(\|A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^{k+1}-b\|^2-\|A_1x_1^k+A_2x_2^k+A_3x_3^k-b\|^2\big)\\&\quad+\xi_5\|A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^{k+1}-b\|^2.\end{aligned}\tag{3.24}$$

Proof We analyze (3.24) separately in three different cases. Because the inequalities (3.23) and (3.19) hold for any $\sigma>1$ and $(\tau,s)\in\mathcal{K}$, we have
$$\begin{aligned}\|w^k-\tilde w^k\|_G^2&\ge(\sigma-\eta)\beta\|A_1(x_1^k-x_1^{k+1})\|^2+\Big(\sigma-\frac1\eta\Big)\beta\|A_2(x_2^k-x_2^{k+1})\|^2+\frac{(1-\tau)^2}{1+\tau}\beta\|A_3(x_3^k-x_3^{k+1})\|^2\\&\quad+(2-\tau-s)\beta\|A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^{k+1}-b\|^2\\&\quad+\frac{2(1-\tau)(1-s)}{1+\tau}\beta\big\langle A_1x_1^k+A_2x_2^k+A_3x_3^k-b,\ A_3(x_3^k-x_3^{k+1})\big\rangle.\end{aligned}\tag{3.25}$$

Case I. For any $(\tau,s)\in\mathcal{K}_3=\{(\tau,s)\mid\tau=0,\ 1<s<\frac{1+\sqrt5}{2}\}$, we know $1-s<0$. If there exists $N_1>0$, then by the Cauchy-Schwarz inequality,
$$2(1-s)\beta\big\langle A_1x_1^k+A_2x_2^k+A_3x_3^k-b,\ A_3(x_3^k-x_3^{k+1})\big\rangle\ge-N_1\beta\|A_1x_1^k+A_2x_2^k+A_3x_3^k-b\|^2-\frac{(1-s)^2}{N_1}\beta\|A_3(x_3^k-x_3^{k+1})\|^2.\tag{3.26}$$

15 5 where η /σ, σ for any σ >. Now, we prove that the positive number N eists. By.7, we need ξ = σ ηβ > 0, ξ = σ /ηβ > 0, ξ = s N β > 0, = s < N < s, β > 0. ξ 4 = N β > 0, ξ 5 = N sβ > 0, By the intermediate-value theorem in mathematical analysis, N eists if and only if s < s 0 < + s s s 5, + 5, + 5 Thus only if we tae N = { s + s } /, the assertion.4 is proved. Case II For any τ, s K 4, the assertion.4 holds. Since if setting K 4 = {τ, s 0 < τ < + s s, s > }, N = τ + s + s,.8 then, obviously, N τ + s > 0. Since s < 0, by the Cauchy-Schwarz inequality, we get τ s +τ βa + A + A b T A + N τ sβ A + A + A b τ s N τ s+τ β A +.9. Substituting.9 into.5, it holds that w w G σ ηβ A + + σ η β A + s β A + + τ +τ +N τ sβ N τ s+τ. + N β A + + A + + A + b A + + A + + A + b A + A + A b,.0 where η /σ, σ for any σ >. Now we prove that all the coefficients of the right-hand term of.0 are positive, i.e., ξ = σ ηβ > 0, ξ = σ /ηβ > 0, ξ = τ +τ ξ 4 = N τ sβ > 0, ξ 5 = N β > 0, s N τ s+τ β > 0, β > 0. Clearly, by the nown conditions, ξ, ξ, ξ 4 have been positive. For the domain K 4, we now τ < + s s = > τ + s s = > τ + s s + = > τ + s + s = N, which implies ξ 5 > 0. Besides, by the definition of N in.8, we obtain τ s τ τ ξ = β = + τ N τ s + τ + τ > 0, τ, s K 4. Case III For any τ, s K 5, the assertion.4 holds. The proof is similar to that of Case II see also Lemma 5. [0] for more details, which is omitted here.

Remark 3.2 The proof of Theorem 3.5 explains why we choose $\tau<1+s-s^2$. The difference between Theorem 3.4 and Theorem 3.5 is that the lower bound of $\|w^k-\tilde w^k\|_G^2$ varies with the choice of $(\tau,s)$, but this does not affect the convergence of the proposed method.

3.4 Convergence analysis

In this subsection, we use Theorems 3.1-3.2 and Theorems 3.4-3.5 to establish the global convergence and the worst-case $O(1/t)$ convergence rate of the ADMM (1.8). The following corollary is deduced easily from Theorem 3.2 together with Theorems 3.4-3.5.

Corollary 3.1 Let the sequence $\{w^{k+1}\}$ be generated by the ADMM (1.8). Then, for any $w^*\in\mathcal{M}^*$ and $\sigma>1$, the following two inequalities hold:
(i) For any $(\tau,s)\in\mathcal{K}_1\cup\mathcal{K}_2$, there exist constants $\xi_i>0$ $(i=1,2,3,4)$ such that
$$\|w^{k+1}-w^*\|_H^2\le\|w^k-w^*\|_H^2-\Big[\sum_{i=1}^3\xi_i\|A_i(x_i^k-x_i^{k+1})\|^2+\xi_4\|A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^{k+1}-b\|^2\Big];\tag{3.31a}$$
(ii) For any $(\tau,s)\in\mathcal{K}_3\cup\mathcal{K}_4\cup\mathcal{K}_5$, there exist constants $\xi_i>0$ $(i=1,2,3,4,5)$ such that
$$\begin{aligned}&\|w^{k+1}-w^*\|_H^2+\xi_4\|A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^{k+1}-b\|^2\\&\quad\le\|w^k-w^*\|_H^2+\xi_4\|A_1x_1^k+A_2x_2^k+A_3x_3^k-b\|^2-\Big[\sum_{i=1}^3\xi_i\|A_i(x_i^k-x_i^{k+1})\|^2+\xi_5\|A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^{k+1}-b\|^2\Big].\end{aligned}\tag{3.31b}$$

Remark 3.3 The above corollary illustrates that for any $\sigma>1$ the sequence $\{\|w^k-w^*\|_H^2\}$ is contractive for the parameters $(\tau,s)\in\mathcal{K}_1\cup\mathcal{K}_2$, i.e.,
$$\|w^{k+1}-w^*\|_H^2\le\|w^k-w^*\|_H^2,\quad\forall w^*\in\mathcal{M}^*,$$
and the sequence $\{\|w^k-w^*\|_H^2+\xi_4\|A_1x_1^k+A_2x_2^k+A_3x_3^k-b\|^2\}$ is also contractive for the parameters $(\tau,s)\in\mathcal{K}_3\cup\mathcal{K}_4\cup\mathcal{K}_5$. These two styles of contraction guarantee the convergence of the ADMM (1.8). The following result is obtained directly from Theorem 3.1 and Theorems 3.4-3.5.

Corollary 3.2 Let the sequence $\{w^{k+1}\}$ be generated by the ADMM (1.8) and the iterate $\tilde w^k$ be defined in (2.6a). Then the following two inequalities hold:
(i) For any $(\tau,s)\in\mathcal{K}_1\cup\mathcal{K}_2$, we have
$$f(x)-f(\tilde x^k)+\langle w-\tilde w^k,\ J(w)\rangle+\frac12\big(\|w-w^k\|_H^2-\|w-w^{k+1}\|_H^2\big)\ge0,\quad\forall w\in\mathcal{M};\tag{3.32a}$$
(ii) For any $(\tau,s)\in\mathcal{K}_3\cup\mathcal{K}_4\cup\mathcal{K}_5$, there exists a constant $\xi>0$ such that
$$f(x)-f(\tilde x^k)+\langle w-\tilde w^k,\ J(w)\rangle+\frac12\big(\|w-w^k\|_H^2+\xi\|A_1x_1^k+A_2x_2^k+A_3x_3^k-b\|^2\big)\ge\frac12\big(\|w-w^{k+1}\|_H^2+\xi\|A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^{k+1}-b\|^2\big),\quad\forall w\in\mathcal{M}.\tag{3.32b}$$

Theorem 3.6 Let the sequence $\{w^{k+1}\}$ be generated by the ADMM (1.8) and the iterate $\tilde w^k$ be defined in (2.6a). Then, for any $(\tau,s)\in\mathcal{K}_1\cup\mathcal{K}_2$:
(i) it holds that
$$\lim_{k\to\infty}\Big(\|A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^{k+1}-b\|^2+\sum_{i=1}^3\|A_i(x_i^k-x_i^{k+1})\|^2\Big)=0;\tag{3.33}$$
(ii) the sequences $\{w^k\}$ and $\{\tilde w^k\}$ are bounded;
(iii) any accumulation point of $\{\tilde w^k\}$ is a solution point of $\mathrm{VI}(f,J,\mathcal{M})$;
(iv) there exists $w^\infty\in\mathcal{M}^*$ such that $\lim_{k\to\infty}w^k=w^\infty$.

Proof Summing the inequality (3.31a) over $k=0,1,2,\dots$, we have
$$\sum_{k=0}^\infty\Big[\xi_4\|A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^{k+1}-b\|^2+\sum_{i=1}^3\xi_i\|A_i(x_i^k-x_i^{k+1})\|^2\Big]\le\|w^0-w^*\|_H^2,$$
which shows that the assertion (3.33) holds. In a similar way, it follows from Theorem 3.2 that
$$\sum_{k=0}^\infty\|w^k-\tilde w^k\|_G^2\le\|w^0-w^*\|_H^2\ \Longrightarrow\ \lim_{k\to\infty}\|w^k-\tilde w^k\|=0,\quad(\tau,s)\in\mathcal{K}_1\cup\mathcal{K}_2,\tag{3.34}$$
where the implication uses the positive definiteness of the matrices $G$ and $H$ on this domain. Hence, by (3.34), the second assertion holds. Taking the limit on both sides of (2.7a) together with (3.34), we have
$$\lim_{k\to\infty}\big\{f(x)-f(\tilde x^k)+\langle w-\tilde w^k,\ J(\tilde w^k)\rangle\big\}\ge0,\quad\forall w\in\mathcal{M},$$
which verifies that $\lim_{k\to\infty}\tilde w^k$ is a solution point of $\mathrm{VI}(f,J,\mathcal{M})$, i.e., the assertion (iii) holds. Let $w^\infty$ be an accumulation point of $\{\tilde w^k\}$. Then the assertion (iii) shows that $w^\infty\in\mathcal{M}^*$, which together with (3.9) implies
$$\|w^{k+1}-w^\infty\|_H^2\le\|w^k-w^\infty\|_H^2-\|w^k-\tilde w^k\|_G^2.$$
Combining the above inequality and (3.34), the proof of the assertion (iv) is completed.

Theorem 3.7 Let the sequence $\{w^{k+1}\}$ be generated by the ADMM (1.8) and the iterate $\tilde w^k$ be defined in (2.6a). Then the assertion (3.33) holds for any $(\tau,s)\in\mathcal{K}_3\cup\mathcal{K}_4\cup\mathcal{K}_5$. Moreover, if the matrices $A_i$ $(i=1,2,3)$ are assumed to have full column rank, then the sequence $\{\tilde w^k\}$ converges to a solution point $w^\infty\in\mathcal{M}^*$.

Proof By (3.31b), this theorem can be proved in a similar way as in [10].

Remark 3.4 Both Theorem 3.6 and Theorem 3.7 show that the ADMM (1.8) is globally convergent for step sizes $(\tau,s)$ in the domain $\mathcal{K}$; the former uses the positive definiteness of the matrix $G$, while the latter uses the full column rank assumption on the matrices $A_i$ $(i=1,2,3)$.

Theorem 3.8 Let the sequence $\{w^{k+1}\}$ be generated by the ADMM (1.8) and the iterate $\tilde w^k$ be defined in (2.6a). For any integer $t>0$, let
$$\dot w_t=\frac1{t+1}\sum_{k=0}^t\tilde w^k.\tag{3.35}$$
(i) For any $(\tau,s)\in\mathcal{K}_1\cup\mathcal{K}_2$, we have
$$f(\dot x_t)-f(x)+\langle\dot w_t-w,\ J(w)\rangle\le\frac{\|w^0-w\|_H^2}{2(t+1)},\quad\forall w\in\mathcal{M};\tag{3.36a}$$

(ii) For any $(\tau,s)\in\mathcal{K}_3\cup\mathcal{K}_4\cup\mathcal{K}_5$, we have
$$f(\dot x_t)-f(x)+\langle\dot w_t-w,\ J(w)\rangle\le\frac{\|w^0-w\|_H^2+\xi\|A_1x_1^0+A_2x_2^0+A_3x_3^0-b\|^2}{2(t+1)},\quad\forall w\in\mathcal{M}.\tag{3.36b}$$

Proof Notice that the inequalities (3.32a)-(3.32b) can be rewritten respectively as
$$f(x)-f(\tilde x^k)+\langle w-\tilde w^k,\ J(w)\rangle+\frac12\|w-w^k\|_H^2\ge\frac12\|w-w^{k+1}\|_H^2,$$
$$f(x)-f(\tilde x^k)+\langle w-\tilde w^k,\ J(w)\rangle+\frac12\big(\|w-w^k\|_H^2+\xi\|A_1x_1^k+A_2x_2^k+A_3x_3^k-b\|^2\big)\ge\frac12\big(\|w-w^{k+1}\|_H^2+\xi\|A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^{k+1}-b\|^2\big),$$
for all $w\in\mathcal{M}$. Summing them over $k=0,1,\dots,t$, we get
$$(t+1)f(x)-\sum_{k=0}^tf(\tilde x^k)+\Big\langle(t+1)w-\sum_{k=0}^t\tilde w^k,\ J(w)\Big\rangle+\frac12\|w-w^0\|_H^2\ge0,$$
$$(t+1)f(x)-\sum_{k=0}^tf(\tilde x^k)+\Big\langle(t+1)w-\sum_{k=0}^t\tilde w^k,\ J(w)\Big\rangle+\frac12\big(\|w-w^0\|_H^2+\xi\|A_1x_1^0+A_2x_2^0+A_3x_3^0-b\|^2\big)\ge0,\quad\forall w\in\mathcal{M}.\tag{3.37}$$
Because $f$ is convex and $\dot x_t=\frac1{t+1}\sum_{k=0}^t\tilde x^k$, it follows that
$$f(\dot x_t)\le\frac1{t+1}\sum_{k=0}^tf(\tilde x^k).$$
Substituting this respectively into the two inequalities of (3.37) and using the definition of $\dot w_t$ in (3.35), the proof is immediately completed.

Remark 3.5 Theorems 3.6-3.7 imply that if the matrices $A_i$ $(i=1,2,3)$ are assumed to have full column rank, then we can choose the easily implementable stopping criterion
$$\max\big\{\|x_1^k-x_1^{k+1}\|,\ \|x_2^k-x_2^{k+1}\|,\ \|x_3^k-x_3^{k+1}\|,\ \|A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^{k+1}-b\|\big\}\le\mathrm{tol}$$
when carrying out numerical experiments. Otherwise, we can use the stopping criterion
$$\max\big\{\|A_1(x_1^k-x_1^{k+1})\|,\ \|A_2(x_2^k-x_2^{k+1})\|,\ \|A_3(x_3^k-x_3^{k+1})\|,\ \|A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^{k+1}-b\|\big\}\le\mathrm{tol},$$
where $\mathrm{tol}$ is a given tolerance and $w^{k+1}$ is the $(k+1)$-th iterate of the ADMM (1.8).

Remark 3.6 Theorem 3.8 tells us that, for any given compact set $\mathcal{D}\subset\mathcal{M}$, if we take
$$\sup_{w\in\mathcal{D}}\|w^0-w\|_H^2=\eta_1\quad\text{and}\quad\sup_{w\in\mathcal{D}}\big\{\|w^0-w\|_H^2+\xi\|A_1x_1^0+A_2x_2^0+A_3x_3^0-b\|^2\big\}=\eta_2,$$

then the vector $\dot w_t$ defined in (3.35) must satisfy
$$f(\dot x_t)-f(x)+\langle\dot w_t-w,\ J(w)\rangle\le\frac{\eta}{2(t+1)},\quad\forall w\in\mathcal{D},$$
with $\eta=\eta_1$ or $\eta=\eta_2$ according to the case, which shows that the proposed method converges at a worst-case $O(1/t)$ rate in the ergodic sense.

4 Extensions to the multi-block case

Note that in Sections 2-3 the upper-left $2\times2$ blocks of the matrices $Q$, $M$ and $G$ are positive definite for $\sigma>1$, and the treatment of the lower-right $2\times2$ blocks is almost the same as that in [10]. These observations motivate us to further study the multi-block separable convex minimization problem
$$\min\Big\{\sum_{i=1}^pf_i(x_i)\ \Big|\ \sum_{i=1}^pA_ix_i=b,\ x_i\in\mathcal{X}_i,\ i=1,2,\dots,p\Big\},\tag{4.1}$$
where $f_i:\mathbb{R}^{m_i}\to\mathbb{R}$ are closed proper convex functions (not necessarily smooth); $A_i\in\mathbb{R}^{n\times m_i}$ and $b\in\mathbb{R}^n$ are given matrices and vectors, respectively; and $\mathcal{X}_i\subseteq\mathbb{R}^{m_i}$ $(i=1,2,\dots,p)$ are closed convex sets. In a similar way as for (1.8), we can design the corresponding ADMM scheme for (4.1), in which the first $p-1$ subproblems are solved in a parallel manner with positive-definite proximal terms:
$$\begin{cases}x_i^{k+1}=\arg\min\Big\{\mathcal{L}_\beta\big(x_1^k,\dots,x_{i-1}^k,x_i,x_{i+1}^k,\dots,x_p^k,\lambda^k\big)+\frac{\sigma\beta}2\|A_i(x_i-x_i^k)\|^2\ \Big|\ x_i\in\mathcal{X}_i\Big\},\quad i=1,\dots,p-1,\\ \lambda^{k+\frac12}=\lambda^k-\tau\beta\big(A_1x_1^{k+1}+\cdots+A_{p-1}x_{p-1}^{k+1}+A_px_p^k-b\big),\\ x_p^{k+1}=\arg\min\big\{\mathcal{L}_\beta\big(x_1^{k+1},\dots,x_{p-1}^{k+1},x_p,\lambda^{k+\frac12}\big)\ \big|\ x_p\in\mathcal{X}_p\big\},\\ \lambda^{k+1}=\lambda^{k+\frac12}-s\beta\big(A_1x_1^{k+1}+\cdots+A_px_p^{k+1}-b\big),\end{cases}\tag{4.2}$$
where
$$\mathcal{L}_\beta(x_1,\dots,x_p,\lambda)=\sum_{i=1}^pf_i(x_i)-\Big\langle\lambda,\ \sum_{i=1}^pA_ix_i-b\Big\rangle+\frac\beta2\Big\|\sum_{i=1}^pA_ix_i-b\Big\|^2$$
is the augmented Lagrangian function of the problem (4.1), and $\sigma$, $\tau$, $s$ are three independent constants that are respectively restricted to
$$\sigma\in(p-2,+\infty),\qquad(\tau,s)\in\mathcal{K}=\{(\tau,s)\mid\tau+s>0,\ \tau<1,\ \tau<1+s-s^2\}.\tag{4.3}$$
A saddle point $(x_1^*,\dots,x_p^*,\lambda^*)$ belonging to the solution set $\mathcal{M}^*$ of the problem (4.1) satisfies the variational inequality
$$\mathrm{VI}(f,J,\mathcal{M}):\quad f(x)-f(x^*)+\langle w-w^*,\ J(w^*)\rangle\ge0,\quad\forall w\in\mathcal{M},\tag{4.4a}$$
where $f(x)=\sum_{i=1}^pf_i(x_i)$, $\mathcal{M}=\mathcal{X}_1\times\cdots\times\mathcal{X}_p\times\mathbb{R}^n$ and
$$x=\begin{pmatrix}x_1\\\vdots\\x_p\end{pmatrix},\qquad w=\begin{pmatrix}x_1\\\vdots\\x_p\\\lambda\end{pmatrix},\qquad J(w)=\begin{pmatrix}-A_1^{\mathsf T}\lambda\\\vdots\\-A_p^{\mathsf T}\lambda\\A_1x_1+\cdots+A_px_p-b\end{pmatrix}.\tag{4.4b}$$

Define
$$\tilde w^k=\big(\tilde x_1^k,\dots,\tilde x_p^k,\tilde\lambda^k\big),\quad\text{where}\ \big(\tilde x_1^k,\dots,\tilde x_p^k\big)=\big(x_1^{k+1},\dots,x_p^{k+1}\big)\ \text{and}\ \tilde\lambda^k=\lambda^k-\beta\big(A_1x_1^{k+1}+\cdots+A_{p-1}x_{p-1}^{k+1}+A_px_p^k-b\big).\tag{4.5}$$
Using Lemma 2.1 and (4.5), the optimality condition of the $x_i$-subproblem $(i=1,\dots,p-1)$ of the ADMM (4.2) can be expressed as $x_i^{k+1}\in\mathcal{X}_i$ and
$$f_i(x_i)-f_i(\tilde x_i^k)+\Big\langle x_i-\tilde x_i^k,\ -A_i^{\mathsf T}\tilde\lambda^k-\beta A_i^{\mathsf T}\sum_{j=1,j\ne i}^{p-1}A_j\big(\tilde x_j^k-x_j^k\big)+\sigma\beta A_i^{\mathsf T}A_i\big(\tilde x_i^k-x_i^k\big)\Big\rangle\ge0,\quad\forall x_i\in\mathcal{X}_i.\tag{4.6a}$$
Similarly, the optimality condition of the $x_p$-subproblem of the ADMM (4.2) is $x_p^{k+1}\in\mathcal{X}_p$ and
$$f_p(x_p)-f_p(\tilde x_p^k)+\big\langle x_p-\tilde x_p^k,\ -A_p^{\mathsf T}\tilde\lambda^k+\beta A_p^{\mathsf T}A_p\big(\tilde x_p^k-x_p^k\big)-\tau A_p^{\mathsf T}\big(\tilde\lambda^k-\lambda^k\big)\big\rangle\ge0,\quad\forall x_p\in\mathcal{X}_p.\tag{4.6b}$$
Combining (4.6a) and (4.6b), the sequence $\{\tilde w^k\}$ of the ADMM (4.2) satisfies
$$\tilde w^k\in\mathcal{M},\quad f(x)-f(\tilde x^k)+\langle w-\tilde w^k,\ J(\tilde w^k)+Q(\tilde w^k-w^k)\rangle\ge0,\quad\forall w\in\mathcal{M},\tag{4.7a}$$
where
$$Q=\begin{pmatrix}\sigma\beta A_1^{\mathsf T}A_1&-\beta A_1^{\mathsf T}A_2&\cdots&-\beta A_1^{\mathsf T}A_{p-1}&0&0\\-\beta A_2^{\mathsf T}A_1&\sigma\beta A_2^{\mathsf T}A_2&\cdots&-\beta A_2^{\mathsf T}A_{p-1}&0&0\\\vdots&\vdots&\ddots&\vdots&\vdots&\vdots\\-\beta A_{p-1}^{\mathsf T}A_1&-\beta A_{p-1}^{\mathsf T}A_2&\cdots&\sigma\beta A_{p-1}^{\mathsf T}A_{p-1}&0&0\\0&0&\cdots&0&\beta A_p^{\mathsf T}A_p&-\tau A_p^{\mathsf T}\\0&0&\cdots&0&-A_p&\frac1\beta I_n\end{pmatrix}.\tag{4.7b}$$
The following lemma is obtained in a similar way as the proof of Lemma 2.3.

Lemma 4.1 For the sequence $\{w^{k+1}\}$ generated by the ADMM (4.2) and the iterate $\tilde w^k$ defined in (4.5), the relationship
$$w^{k+1}=w^k-M\big(w^k-\tilde w^k\big)\tag{4.8a}$$
holds with
$$M=\begin{pmatrix}I_{m_1}&&&&\\&\ddots&&&\\&&I_{m_{p-1}}&&\\&&&I_{m_p}&0\\&&&-s\beta A_p&(\tau+s)I_n\end{pmatrix}.\tag{4.8b}$$
A minimal numerical sketch of the scheme (4.2) is given below.
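The following sketch (an illustration added here, not the authors' code) runs the multi-block scheme (4.2) on the same hypothetical quadratic setting as before, $f_i(x_i)=\frac12\|x_i-c_i\|^2$ with $\mathcal{X}_i=\mathbb{R}^{m_i}$, with the first $p-1$ blocks updated in parallel and $\sigma$ chosen above $p-2$ in line with (4.3). All data values are assumptions.

```python
import numpy as np

# A minimal sketch of the multi-block scheme (4.2) for
# min sum_i 0.5||x_i - c_i||^2  s.t.  sum_i A_i x_i = b  (hypothetical data).
rng = np.random.default_rng(3)
p, n = 5, 12
m = [3, 4, 5, 4, 3]
A = [rng.standard_normal((n, mi)) for mi in m]
c = [rng.standard_normal(mi) for mi in m]
b = rng.standard_normal(n)
beta, tau, s = 1.0, 0.3, 0.9
sigma = p - 2 + 0.5                      # sigma > p - 2, cf. (4.3)

x = [np.zeros(mi) for mi in m]
lam = np.zeros(n)
for k in range(3000):
    x_new = []
    for i in range(p - 1):               # first p-1 subproblems, in parallel
        q = sum(A[j] @ x[j] for j in range(p) if j != i) - b
        lhs = np.eye(m[i]) + beta * (1 + sigma) * A[i].T @ A[i]
        rhs = c[i] + A[i].T @ lam - beta * A[i].T @ q + sigma * beta * A[i].T @ (A[i] @ x[i])
        x_new.append(np.linalg.solve(lhs, rhs))
    r_half = sum(A[i] @ x_new[i] for i in range(p - 1)) + A[p-1] @ x[p-1] - b
    lam_half = lam - tau * beta * r_half
    qp = sum(A[i] @ x_new[i] for i in range(p - 1)) - b
    lhs = np.eye(m[p-1]) + beta * A[p-1].T @ A[p-1]
    rhs = c[p-1] + A[p-1].T @ lam_half - beta * A[p-1].T @ qp
    x_new.append(np.linalg.solve(lhs, rhs))
    r = qp + A[p-1] @ x_new[-1]
    lam = lam_half - s * beta * r
    x = x_new
print(np.linalg.norm(r))                 # feasibility residual after the run
```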

Clearly, the variational characterization (4.7) and the prediction-correction interpretation (4.8) are the extended versions of (2.7) and (2.13) for the three-block separable convex minimization problem (1.1). Hence, the convergence analysis of the ADMM (4.2) is similar to that of the ADMM (1.8) in Section 3, and is omitted for succinctness.

5 Conclusion and discussion

Since Chen et al. [1] showed that the direct extension of the ADMM with Gauss-Seidel decomposition for solving the three-block separable convex minimization problem is not necessarily convergent, there has been constantly increasing interest in developing and improving the theory of the ADMM for such problems. In this paper, the idea of combining the parallel and serial algorithms is applied to improve the traditional ADMM, and a novel ADMM scheme (1.8) is developed to solve convex programming problems with three-block objectives, where the first two subproblems are regularized by positive-definite proximal terms. Moreover, the domain of the step sizes is enlarged, while the global convergence of the method with a worst-case $O(1/t)$ convergence rate still holds. The proposed method is also extended to solve the multi-block separable convex minimization problem.

It is worthwhile to note that the ADMM (1.8) is designed by modifying the first two subproblems and adding a step updating the Lagrangian multiplier. The following more general iteration scheme is also worth deeper discussion:
$$\begin{cases}x_1^{k+1}=\arg\min\big\{\mathcal{L}_\beta\big(x_1,x_2^k,\dots,x_p^k,\lambda^k\big)\ \big|\ x_1\in\mathcal{X}_1\big\},\\ \lambda^{k+\frac12}=\lambda^k-\tau\beta\big(A_1x_1^{k+1}+A_2x_2^k+\cdots+A_px_p^k-b\big),\\ x_i^{k+1}=\arg\min\big\{\mathcal{L}_\beta\big(x_1^{k+1},x_2^k,\dots,x_{i-1}^k,x_i,x_{i+1}^k,\dots,x_p^k,\lambda^{k+\frac12}\big)+\frac{\sigma\beta}2\|A_i(x_i-x_i^k)\|^2\ \big|\ x_i\in\mathcal{X}_i\big\},\quad i=2,\dots,p,\\ \lambda^{k+1}=\lambda^{k+\frac12}-s\beta\big(A_1x_1^{k+1}+A_2x_2^{k+1}+\cdots+A_px_p^{k+1}-b\big).\end{cases}$$
Perhaps the convergence of this scheme requires changing the ranges of $\sigma$ and $(\tau,s)$. Besides, another ADMM scheme that uses a convex combination of the last iterates of each variable will be investigated in future work.

References

[1] Chen, C.H., He, B.S., Ye, Y.Y., Yuan, X.M.: The direct extension of ADMM for multi-block convex minimization problems is not necessarily convergent. Math. Program., Ser. A 155, 57-79 (2016)
[2] Dong, B., Yu, Y., Tian, D.D.: Alternating projection method for sparse model updating problems. J. Eng. Math. (2015)
[3] Facchinei, F., Pang, J.S.: Finite-Dimensional Variational Inequalities and Complementarity Problems, Vol. I. Springer Series in Operations Research. Springer, New York (2003)
[4] Glowinski, R., Marrocco, A.: Approximation par éléments finis d'ordre un et résolution, par pénalisation-dualité, d'une classe de problèmes de Dirichlet non linéaires. RAIRO Anal. Numér. 9(R-2), 41-76 (1975)
[5] Hestenes, M.R.: Multiplier and gradient methods. J. Optim. Theory Appl. 4, 303-320 (1969)
[6] He, B.S., Yuan, X.M.: On the O(1/n) convergence rate of the Douglas-Rachford alternating direction method. SIAM J. Numer. Anal. 50, 700-709 (2012)

[7] He, B.S., Liu, H., Wang, Z.R., Yuan, X.M.: A strictly contractive Peaceman-Rachford splitting method for convex programming. SIAM J. Optim. 24, 1011-1040 (2014)
[8] He, B.S., Hou, L.S., Yuan, X.M.: On full Jacobian decomposition of the augmented Lagrangian method for separable convex programming. SIAM J. Optim. 25, 2274-2312 (2015)
[9] He, B.S., Tao, M., Yuan, X.M.: A splitting method for separable convex programming. IMA J. Numer. Anal. 35, 394-426 (2015)
[10] He, B.S., Ma, F., Yuan, X.M.: Convergence study on the symmetric version of ADMM with larger step sizes. SIAM J. Imaging Sci. 9, 1467-1501 (2016)
[11] Rothman, A.J., Bickel, P.J., Levina, E., Zhu, J.: Sparse permutation invariant covariance estimation. Electron. J. Stat. 2, 494-515 (2008)
[12] Yang, W.H., Han, D.R.: Linear convergence of the alternating direction method of multipliers for a class of convex optimization problems. SIAM J. Numer. Anal. 54, 625-640 (2016)
[13] Zhang, Y.Z., Jiang, Z.L., Davis, L.S.: Learning structured low-rank representations for image classification. In: 2013 IEEE Conference on Computer Vision and Pattern Recognition, pp. 676-683 (2013)
[14] Zhang, F.Z.: Matrix Theory: Basic Results and Techniques. Springer, New York (2011)
[15] Candès, E.J., Li, X.D., Ma, Y., Wright, J.: Robust principal component analysis? J. ACM 58(3), Article 11 (2011)
[16] Argyriou, A., Evgeniou, T., Pontil, M.: Multi-task feature learning. Adv. Neural Inform. Process. Syst. 19, 41-48 (2007)
[17] Papadimitriou, C., Raghavan, P., Tamaki, H., Vempala, S.: Latent semantic indexing: a probabilistic analysis. J. Comput. Syst. Sci. 61, 217-235 (2000)
[18] Tao, M., Yuan, X.M.: Recovering low-rank and sparse components of matrices from incomplete and noisy observations. SIAM J. Optim. 21, 57-81 (2011)
[19] Yang, J.F., Yin, W.T., Zhang, Y., Wang, Y.L.: A fast algorithm for edge-preserving variational multichannel image restoration. SIAM J. Imaging Sci. 2, 569-592 (2009)
[20] Chen, S., Donoho, D., Saunders, M.: Atomic decomposition by basis pursuit. SIAM J. Sci. Comput. 20, 33-61 (1998)
[21] Cai, J.F., Candès, E.J., Shen, Z.W.: A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 20, 1956-1982 (2010)
[22] Liu, Z.S., Li, J.C., Li, G., Bai, J.C., Liu, X.N.: A new model for sparse and low-rank matrix decomposition. J. Appl. Anal. Comput. (accepted)
[23] Larsen, R.M.: Lanczos bidiagonalization with partial reorthogonalization. Technical report, Department of Computer Science, Aarhus University, Aarhus, Denmark (1998)


More information

A note on the unique solution of linear complementarity problem

A note on the unique solution of linear complementarity problem COMPUTATIONAL SCIENCE SHORT COMMUNICATION A note on the unique solution of linear complementarity problem Cui-Xia Li 1 and Shi-Liang Wu 1 * Received: 13 June 2016 Accepted: 14 November 2016 First Published:

More information

A derivative-free nonmonotone line search and its application to the spectral residual method

A derivative-free nonmonotone line search and its application to the spectral residual method IMA Journal of Numerical Analysis (2009) 29, 814 825 doi:10.1093/imanum/drn019 Advance Access publication on November 14, 2008 A derivative-free nonmonotone line search and its application to the spectral

More information

Inexact Alternating Direction Method of Multipliers for Separable Convex Optimization

Inexact Alternating Direction Method of Multipliers for Separable Convex Optimization Inexact Alternating Direction Method of Multipliers for Separable Convex Optimization Hongchao Zhang hozhang@math.lsu.edu Department of Mathematics Center for Computation and Technology Louisiana State

More information

An inexact parallel splitting augmented Lagrangian method for large system of linear equations

An inexact parallel splitting augmented Lagrangian method for large system of linear equations An inexact parallel splitting augmented Lagrangian method for large system of linear equations Zheng Peng [1, ] DongHua Wu [] [1]. Mathematics Department of Nanjing University, Nanjing PR. China, 10093

More information

Proximal-like contraction methods for monotone variational inequalities in a unified framework

Proximal-like contraction methods for monotone variational inequalities in a unified framework Proximal-like contraction methods for monotone variational inequalities in a unified framework Bingsheng He 1 Li-Zhi Liao 2 Xiang Wang Department of Mathematics, Nanjing University, Nanjing, 210093, China

More information

A Primal-dual Three-operator Splitting Scheme

A Primal-dual Three-operator Splitting Scheme Noname manuscript No. (will be inserted by the editor) A Primal-dual Three-operator Splitting Scheme Ming Yan Received: date / Accepted: date Abstract In this paper, we propose a new primal-dual algorithm

More information

Research Article Solving the Matrix Nearness Problem in the Maximum Norm by Applying a Projection and Contraction Method

Research Article Solving the Matrix Nearness Problem in the Maximum Norm by Applying a Projection and Contraction Method Advances in Operations Research Volume 01, Article ID 357954, 15 pages doi:10.1155/01/357954 Research Article Solving the Matrix Nearness Problem in the Maximum Norm by Applying a Projection and Contraction

More information

Improving an ADMM-like Splitting Method via Positive-Indefinite Proximal Regularization for Three-Block Separable Convex Minimization

Improving an ADMM-like Splitting Method via Positive-Indefinite Proximal Regularization for Three-Block Separable Convex Minimization Improving an AMM-like Splitting Method via Positive-Indefinite Proximal Regularization for Three-Block Separable Convex Minimization Bingsheng He 1 and Xiaoming Yuan August 14, 016 Abstract. The augmented

More information

Singular Value Inequalities for Real and Imaginary Parts of Matrices

Singular Value Inequalities for Real and Imaginary Parts of Matrices Filomat 3:1 16, 63 69 DOI 1.98/FIL16163C Published by Faculty of Sciences Mathematics, University of Niš, Serbia Available at: http://www.pmf.ni.ac.rs/filomat Singular Value Inequalities for Real Imaginary

More information

a 11 x 1 + a 12 x a 1n x n = b 1 a 21 x 1 + a 22 x a 2n x n = b 2.

a 11 x 1 + a 12 x a 1n x n = b 1 a 21 x 1 + a 22 x a 2n x n = b 2. Chapter 1 LINEAR EQUATIONS 11 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,, a n, b are given real

More information

A PROJECTED HESSIAN GAUSS-NEWTON ALGORITHM FOR SOLVING SYSTEMS OF NONLINEAR EQUATIONS AND INEQUALITIES

A PROJECTED HESSIAN GAUSS-NEWTON ALGORITHM FOR SOLVING SYSTEMS OF NONLINEAR EQUATIONS AND INEQUALITIES IJMMS 25:6 2001) 397 409 PII. S0161171201002290 http://ijmms.hindawi.com Hindawi Publishing Corp. A PROJECTED HESSIAN GAUSS-NEWTON ALGORITHM FOR SOLVING SYSTEMS OF NONLINEAR EQUATIONS AND INEQUALITIES

More information

A matrix over a field F is a rectangular array of elements from F. The symbol

A matrix over a field F is a rectangular array of elements from F. The symbol Chapter MATRICES Matrix arithmetic A matrix over a field F is a rectangular array of elements from F The symbol M m n (F ) denotes the collection of all m n matrices over F Matrices will usually be denoted

More information

Affine scaling interior Levenberg-Marquardt method for KKT systems. C S:Levenberg-Marquardt{)KKTXÚ

Affine scaling interior Levenberg-Marquardt method for KKT systems. C S:Levenberg-Marquardt{)KKTXÚ 2013c6 $ Ê Æ Æ 117ò 12Ï June, 2013 Operations Research Transactions Vol.17 No.2 Affine scaling interior Levenberg-Marquardt method for KKT systems WANG Yunjuan 1, ZHU Detong 2 Abstract We develop and analyze

More information

A double projection method for solving variational inequalities without monotonicity

A double projection method for solving variational inequalities without monotonicity A double projection method for solving variational inequalities without monotonicity Minglu Ye Yiran He Accepted by Computational Optimization and Applications, DOI: 10.1007/s10589-014-9659-7,Apr 05, 2014

More information

Approximation algorithms for nonnegative polynomial optimization problems over unit spheres

Approximation algorithms for nonnegative polynomial optimization problems over unit spheres Front. Math. China 2017, 12(6): 1409 1426 https://doi.org/10.1007/s11464-017-0644-1 Approximation algorithms for nonnegative polynomial optimization problems over unit spheres Xinzhen ZHANG 1, Guanglu

More information

A NEW ITERATIVE METHOD FOR THE SPLIT COMMON FIXED POINT PROBLEM IN HILBERT SPACES. Fenghui Wang

A NEW ITERATIVE METHOD FOR THE SPLIT COMMON FIXED POINT PROBLEM IN HILBERT SPACES. Fenghui Wang A NEW ITERATIVE METHOD FOR THE SPLIT COMMON FIXED POINT PROBLEM IN HILBERT SPACES Fenghui Wang Department of Mathematics, Luoyang Normal University, Luoyang 470, P.R. China E-mail: wfenghui@63.com ABSTRACT.

More information

Iterative common solutions of fixed point and variational inequality problems

Iterative common solutions of fixed point and variational inequality problems Available online at www.tjnsa.com J. Nonlinear Sci. Appl. 9 (2016), 1882 1890 Research Article Iterative common solutions of fixed point and variational inequality problems Yunpeng Zhang a, Qing Yuan b,

More information

sparse and low-rank tensor recovery Cubic-Sketching

sparse and low-rank tensor recovery Cubic-Sketching Sparse and Low-Ran Tensor Recovery via Cubic-Setching Guang Cheng Department of Statistics Purdue University www.science.purdue.edu/bigdata CCAM@Purdue Math Oct. 27, 2017 Joint wor with Botao Hao and Anru

More information

An Iterative Method for the Least-Squares Minimum-Norm Symmetric Solution

An Iterative Method for the Least-Squares Minimum-Norm Symmetric Solution Copyright 2011 Tech Science Press CMES, vol.77, no.3, pp.173-182, 2011 An Iterative Method for the Least-Squares Minimum-Norm Symmetric Solution Minghui Wang 1, Musheng Wei 2 and Shanrui Hu 1 Abstract:

More information

MATHEMATICAL ENGINEERING TECHNICAL REPORTS

MATHEMATICAL ENGINEERING TECHNICAL REPORTS MATHEMATICAL ENGINEERING TECHNICAL REPORTS Combinatorial Relaxation Algorithm for the Entire Sequence of the Maximum Degree of Minors in Mixed Polynomial Matrices Shun SATO (Communicated by Taayasu MATSUO)

More information

A NOVEL FILLED FUNCTION METHOD FOR GLOBAL OPTIMIZATION. 1. Introduction Consider the following unconstrained programming problem:

A NOVEL FILLED FUNCTION METHOD FOR GLOBAL OPTIMIZATION. 1. Introduction Consider the following unconstrained programming problem: J. Korean Math. Soc. 47, No. 6, pp. 53 67 DOI.434/JKMS..47.6.53 A NOVEL FILLED FUNCTION METHOD FOR GLOBAL OPTIMIZATION Youjiang Lin, Yongjian Yang, and Liansheng Zhang Abstract. This paper considers the

More information

Research Note. A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization

Research Note. A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization Iranian Journal of Operations Research Vol. 4, No. 1, 2013, pp. 88-107 Research Note A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization B. Kheirfam We

More information

FAST FIRST-ORDER METHODS FOR STABLE PRINCIPAL COMPONENT PURSUIT

FAST FIRST-ORDER METHODS FOR STABLE PRINCIPAL COMPONENT PURSUIT FAST FIRST-ORDER METHODS FOR STABLE PRINCIPAL COMPONENT PURSUIT N. S. AYBAT, D. GOLDFARB, AND G. IYENGAR Abstract. The stable principal component pursuit SPCP problem is a non-smooth convex optimization

More information

Process Model Formulation and Solution, 3E4

Process Model Formulation and Solution, 3E4 Process Model Formulation and Solution, 3E4 Section B: Linear Algebraic Equations Instructor: Kevin Dunn dunnkg@mcmasterca Department of Chemical Engineering Course notes: Dr Benoît Chachuat 06 October

More information

Dictionary Learning for L1-Exact Sparse Coding

Dictionary Learning for L1-Exact Sparse Coding Dictionary Learning for L1-Exact Sparse Coding Mar D. Plumbley Department of Electronic Engineering, Queen Mary University of London, Mile End Road, London E1 4NS, United Kingdom. Email: mar.plumbley@elec.qmul.ac.u

More information

THE CYCLIC DOUGLAS RACHFORD METHOD FOR INCONSISTENT FEASIBILITY PROBLEMS

THE CYCLIC DOUGLAS RACHFORD METHOD FOR INCONSISTENT FEASIBILITY PROBLEMS THE CYCLIC DOUGLAS RACHFORD METHOD FOR INCONSISTENT FEASIBILITY PROBLEMS JONATHAN M. BORWEIN AND MATTHEW K. TAM Abstract. We analyse the behaviour of the newly introduced cyclic Douglas Rachford algorithm

More information

10-725/36-725: Convex Optimization Spring Lecture 21: April 6

10-725/36-725: Convex Optimization Spring Lecture 21: April 6 10-725/36-725: Conve Optimization Spring 2015 Lecturer: Ryan Tibshirani Lecture 21: April 6 Scribes: Chiqun Zhang, Hanqi Cheng, Waleed Ammar Note: LaTeX template courtesy of UC Berkeley EECS dept. Disclaimer:

More information

THE solution of the absolute value equation (AVE) of

THE solution of the absolute value equation (AVE) of The nonlinear HSS-like iterative method for absolute value equations Mu-Zheng Zhu Member, IAENG, and Ya-E Qi arxiv:1403.7013v4 [math.na] 2 Jan 2018 Abstract Salkuyeh proposed the Picard-HSS iteration method

More information

LINEARIZED AUGMENTED LAGRANGIAN AND ALTERNATING DIRECTION METHODS FOR NUCLEAR NORM MINIMIZATION

LINEARIZED AUGMENTED LAGRANGIAN AND ALTERNATING DIRECTION METHODS FOR NUCLEAR NORM MINIMIZATION LINEARIZED AUGMENTED LAGRANGIAN AND ALTERNATING DIRECTION METHODS FOR NUCLEAR NORM MINIMIZATION JUNFENG YANG AND XIAOMING YUAN Abstract. The nuclear norm is widely used to induce low-rank solutions for

More information

ON THE GLOBAL AND LINEAR CONVERGENCE OF THE GENERALIZED ALTERNATING DIRECTION METHOD OF MULTIPLIERS

ON THE GLOBAL AND LINEAR CONVERGENCE OF THE GENERALIZED ALTERNATING DIRECTION METHOD OF MULTIPLIERS ON THE GLOBAL AND LINEAR CONVERGENCE OF THE GENERALIZED ALTERNATING DIRECTION METHOD OF MULTIPLIERS WEI DENG AND WOTAO YIN Abstract. The formulation min x,y f(x) + g(y) subject to Ax + By = b arises in

More information

Lecture Note 7: Iterative methods for solving linear systems. Xiaoqun Zhang Shanghai Jiao Tong University

Lecture Note 7: Iterative methods for solving linear systems. Xiaoqun Zhang Shanghai Jiao Tong University Lecture Note 7: Iterative methods for solving linear systems Xiaoqun Zhang Shanghai Jiao Tong University Last updated: December 24, 2014 1.1 Review on linear algebra Norms of vectors and matrices vector

More information

REORTHOGONALIZATION FOR GOLUB KAHAN LANCZOS BIDIAGONAL REDUCTION: PART II SINGULAR VECTORS

REORTHOGONALIZATION FOR GOLUB KAHAN LANCZOS BIDIAGONAL REDUCTION: PART II SINGULAR VECTORS REORTHOGONALIZATION FOR GOLUB KAHAN LANCZOS BIDIAGONAL REDUCTION: PART II SINGULAR VECTORS JESSE L. BARLOW Department of Computer Science and Engineering, The Pennsylvania State University, University

More information

Convex Optimization Notes

Convex Optimization Notes Convex Optimization Notes Jonathan Siegel January 2017 1 Convex Analysis This section is devoted to the study of convex functions f : B R {+ } and convex sets U B, for B a Banach space. The case of B =

More information

Geometric Multigrid Methods

Geometric Multigrid Methods Geometric Multigrid Methods Susanne C. Brenner Department of Mathematics and Center for Computation & Technology Louisiana State University IMA Tutorial: Fast Solution Techniques November 28, 2010 Ideas

More information

A Smoothing SQP Framework for a Class of Composite L q Minimization over Polyhedron

A Smoothing SQP Framework for a Class of Composite L q Minimization over Polyhedron Noname manuscript No. (will be inserted by the editor) A Smoothing SQP Framework for a Class of Composite L q Minimization over Polyhedron Ya-Feng Liu Shiqian Ma Yu-Hong Dai Shuzhong Zhang Received: date

More information

Supplementary Material of A Novel Sparsity Measure for Tensor Recovery

Supplementary Material of A Novel Sparsity Measure for Tensor Recovery Supplementary Material of A Novel Sparsity Measure for Tensor Recovery Qian Zhao 1,2 Deyu Meng 1,2 Xu Kong 3 Qi Xie 1,2 Wenfei Cao 1,2 Yao Wang 1,2 Zongben Xu 1,2 1 School of Mathematics and Statistics,

More information

Properties of Solution Set of Tensor Complementarity Problem

Properties of Solution Set of Tensor Complementarity Problem Properties of Solution Set of Tensor Complementarity Problem arxiv:1508.00069v3 [math.oc] 14 Jan 2017 Yisheng Song Gaohang Yu Abstract The tensor complementarity problem is a specially structured nonlinear

More information

Least Sparsity of p-norm based Optimization Problems with p > 1

Least Sparsity of p-norm based Optimization Problems with p > 1 Least Sparsity of p-norm based Optimization Problems with p > Jinglai Shen and Seyedahmad Mousavi Original version: July, 07; Revision: February, 08 Abstract Motivated by l p -optimization arising from

More information

Devil s Staircase Rotation Number of Outer Billiard with Polygonal Invariant Curves

Devil s Staircase Rotation Number of Outer Billiard with Polygonal Invariant Curves Devil s Staircase Rotation Number of Outer Billiard with Polygonal Invariant Curves Zijian Yao February 10, 2014 Abstract In this paper, we discuss rotation number on the invariant curve of a one parameter

More information

Geometric Modeling Summer Semester 2010 Mathematical Tools (1)

Geometric Modeling Summer Semester 2010 Mathematical Tools (1) Geometric Modeling Summer Semester 2010 Mathematical Tools (1) Recap: Linear Algebra Today... Topics: Mathematical Background Linear algebra Analysis & differential geometry Numerical techniques Geometric

More information

ARock: an algorithmic framework for asynchronous parallel coordinate updates

ARock: an algorithmic framework for asynchronous parallel coordinate updates ARock: an algorithmic framework for asynchronous parallel coordinate updates Zhimin Peng, Yangyang Xu, Ming Yan, Wotao Yin ( UCLA Math, U.Waterloo DCO) UCLA CAM Report 15-37 ShanghaiTech SSDS 15 June 25,

More information

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012.

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012. Math 5620 - Introduction to Numerical Analysis - Class Notes Fernando Guevara Vasquez Version 1990. Date: January 17, 2012. 3 Contents 1. Disclaimer 4 Chapter 1. Iterative methods for solving linear systems

More information

The matrix approach for abstract argumentation frameworks

The matrix approach for abstract argumentation frameworks The matrix approach for abstract argumentation frameworks Claudette CAYROL, Yuming XU IRIT Report RR- -2015-01- -FR February 2015 Abstract The matrices and the operation of dual interchange are introduced

More information

Finite Convergence for Feasible Solution Sequence of Variational Inequality Problems

Finite Convergence for Feasible Solution Sequence of Variational Inequality Problems Mathematical and Computational Applications Article Finite Convergence for Feasible Solution Sequence of Variational Inequality Problems Wenling Zhao *, Ruyu Wang and Hongxiang Zhang School of Science,

More information

Accelerated primal-dual methods for linearly constrained convex problems

Accelerated primal-dual methods for linearly constrained convex problems Accelerated primal-dual methods for linearly constrained convex problems Yangyang Xu SIAM Conference on Optimization May 24, 2017 1 / 23 Accelerated proximal gradient For convex composite problem: minimize

More information