PROXIMAL THRESHOLDING ALGORITHM FOR MINIMIZATION OVER ORTHONORMAL BASES


Patrick L. Combettes
Laboratoire Jacques-Louis Lions, UMR CNRS 7598, Université Pierre et Marie Curie - Paris, Paris, France
plc@math.jussieu.fr

Jean-Christophe Pesquet
Institut Gaspard Monge and UMR CNRS 8049, Université de Marne la Vallée, Marne la Vallée Cedex, France
pesquet@univ-mlv.fr

September 9, 2006 -- SIAM J. Optim., to appear

Abstract. The notion of soft thresholding plays a central role in problems from various areas of applied mathematics, in which the ideal solution is known to possess a sparse decomposition in some orthonormal basis. Using convex-analytical tools, we extend this notion to that of proximal thresholding and investigate its properties, providing in particular several characterizations of such thresholders. We then propose a versatile convex variational formulation for optimization over orthonormal bases that covers a wide range of problems, and establish the strong convergence of a proximal thresholding algorithm to solve it. Numerical applications to signal recovery are demonstrated.

1 Problem formulation

Throughout this paper, $\mathcal{H}$ is a separable infinite-dimensional real Hilbert space with scalar product $\langle\cdot\mid\cdot\rangle$, norm $\|\cdot\|$, and distance $d$. Moreover, $\Gamma_0(\mathcal{H})$ denotes the class of proper lower semicontinuous convex functions from $\mathcal{H}$ to $]-\infty,+\infty]$, and $(e_k)_{k\in\mathbb{N}}$ is an orthonormal basis of $\mathcal{H}$.

The standard denoising problem in signal theory consists of recovering the original form of a signal $\overline{x}\in\mathcal{H}$ from an observation $z=\overline{x}+v$, where $v\in\mathcal{H}$ is the realization of a noise process. In many instances, $\overline{x}$ is known to admit a sparse representation with respect to $(e_k)_{k\in\mathbb{N}}$, and an estimate $x$ of $\overline{x}$ can be constructed by removing the coefficients of smallest magnitude in the

representation $(\langle z\mid e_k\rangle)_{k\in\mathbb{N}}$ of $z$ with respect to $(e_k)_{k\in\mathbb{N}}$. A popular method consists of performing a so-called soft thresholding of each coefficient $\langle z\mid e_k\rangle$ at some predetermined level $\omega_k\in\,]0,+\infty[$, namely (see Fig. 1)

(1.1)  $x = \sum_{k\in\mathbb{N}} \operatorname{soft}_{[-\omega_k,\omega_k]}(\langle z\mid e_k\rangle)\, e_k$, where $\operatorname{soft}_{[-\omega_k,\omega_k]}\colon \xi\mapsto \operatorname{sign}(\xi)\max\{|\xi|-\omega_k,\,0\}$.

This approach has received considerable attention in various areas of applied mathematics ranging from nonlinear approximation theory to statistics, and from harmonic analysis to image processing; see for instance [1, 7, 8, 19, 21, 27, 31] and the references therein. From an optimization point of view, the vector $x$ exhibited in (1.1) is simply the solution to the variational problem

(1.2)  minimize over $x\in\mathcal{H}$:  $\tfrac12\|x-z\|^2 + \sum_{k\in\mathbb{N}} \omega_k |\langle x\mid e_k\rangle|$.

Attempts have been made to extend this formulation to the more general inverse problems in which the observation assumes the form $z = T\overline{x}+v$, where $T$ is a nonzero bounded linear operator from $\mathcal{H}$ to some real Hilbert space $\mathcal{G}$, and where $v\in\mathcal{G}$ is the realization of a noise process. Thus, the variational problem

(1.3)  minimize over $x\in\mathcal{H}$:  $\tfrac12\|Tx-z\|^2 + \sum_{k\in\mathbb{N}} \omega_k |\langle x\mid e_k\rangle|$

was considered in [5, 17, 18, 22] (see also [34] and the references therein for related work), and the soft thresholding algorithm

(1.4)  $x_0\in\mathcal{H}$ and $(\forall n\in\mathbb{N})$  $x_{n+1} = \sum_{k\in\mathbb{N}} \operatorname{soft}_{[-\omega_k,\omega_k]}\big(\langle x_n + T^*(z-Tx_n)\mid e_k\rangle\big)\, e_k$

was proposed to solve it. The strong convergence of this algorithm was formally established in [16].

Proposition 1.1 [16, Theorem 3.1]  Suppose that $\inf_{k\in\mathbb{N}}\omega_k > 0$ and that $\|T\| < 1$. Then the sequence $(x_n)$ generated by (1.4) converges strongly to a solution to (1.3).

In [14], (1.3) was analyzed in a broader framework and the following extension of Proposition 1.1 was obtained by bringing into play tools from convex analysis and recent results from constructive fixed point theory.

Proposition 1.2 [14, Corollary 5.9]  Let $(\gamma_n)$ be a sequence in $]0,+\infty[$ and let $(\lambda_n)$ be a sequence in $]0,1]$. Suppose that the following hold: $\inf_{k\in\mathbb{N}}\omega_k > 0$, $\inf\gamma_n > 0$, $\sup\gamma_n < 2/\|T\|^2$, and $\inf\lambda_n > 0$. Then the sequence $(x_n)$ generated by the algorithm

(1.5)  $x_0\in\mathcal{H}$ and $(\forall n\in\mathbb{N})$  $x_{n+1} = x_n + \lambda_n\Big(\sum_{k\in\mathbb{N}} \operatorname{soft}_{[-\gamma_n\omega_k,\gamma_n\omega_k]}\big(\langle x_n + \gamma_n T^*(z-Tx_n)\mid e_k\rangle\big)\, e_k - x_n\Big)$

converges strongly to a solution to (1.3).
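To make the iteration (1.5) concrete, here is a minimal NumPy sketch for a finite-dimensional instance of (1.3) with the canonical basis, so that the analysis coefficients $\langle x\mid e_k\rangle$ are simply the entries of $x$. The operator T, the data z, the weight omega, and all numerical values are illustrative placeholders, not quantities from the paper.

```python
import numpy as np

def soft(x, t):
    """Componentwise soft thresholding of x at level t >= 0, as in (1.1)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ist(T, z, omega, gamma, lam=1.0, n_iter=200):
    """Relaxed iterative soft thresholding (1.5); requires 0 < gamma < 2/||T||^2."""
    x = np.zeros(T.shape[1])
    for _ in range(n_iter):
        y = x + gamma * T.T @ (z - T @ x)            # forward (gradient) step
        x = x + lam * (soft(y, gamma * omega) - x)   # backward (thresholding) step
    return x

# Synthetic usage example with a sparse ground truth.
rng = np.random.default_rng(0)
T = rng.standard_normal((64, 128)) / 8.0
x_true = np.zeros(128); x_true[[3, 17, 42]] = [1.0, -2.0, 1.5]
z = T @ x_true + 0.01 * rng.standard_normal(64)
gamma = 1.0 / np.linalg.norm(T, 2) ** 2              # safely below 2/||T||^2
x_hat = ist(T, z, omega=0.005, gamma=gamma)
```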

In denoising and approximation problems, various theoretical, physical, and heuristic considerations have led researchers to consider alternative thresholding strategies in (1.1); see, e.g., [2, 31, 32, 33, 37]. The same motivations naturally serve as a thrust to investigate the use of alternative thresholding rules in more general algorithms such as (1.5), and to identify the underlying variational problems. These questions are significant because the current theory of iterative thresholding, as described by Proposition 1.2, can tackle only problems described by the variational formulation (1.3), which offers limited flexibility in the penalization of the coefficients $(\langle x\mid e_k\rangle)_{k\in\mathbb{N}}$ and which is furthermore restricted to standard linear inverse problems. The aim of the present paper is to bring out general answers to these questions. Our analysis will revolve around the following variational formulation, where $\sigma_\Omega$ denotes the support function of a set $\Omega$.

Problem 1.3  Let $\Phi\in\Gamma_0(\mathcal{H})$, let $K\subset\mathbb{N}$, let $L=\mathbb{N}\smallsetminus K$, let $(\Omega_k)_{k\in K}$ be a sequence of closed intervals in $\mathbb{R}$, and let $(\psi_k)_{k\in\mathbb{N}}$ be a sequence in $\Gamma_0(\mathbb{R})$. The objective is to

(1.6)  minimize over $x\in\mathcal{H}$:  $\Phi(x) + \sum_{k\in\mathbb{N}} \psi_k(\langle x\mid e_k\rangle) + \sum_{k\in K} \sigma_{\Omega_k}(\langle x\mid e_k\rangle)$,

under the following standing assumptions:

(i) the function $\Phi$ is differentiable on $\mathcal{H}$, $\inf\Phi(\mathcal{H}) > -\infty$, and $\nabla\Phi$ is $1/\beta$-Lipschitz continuous for some $\beta\in\,]0,+\infty[$;
(ii) for every $k\in\mathbb{N}$, $\psi_k \geq \psi_k(0) = 0$;
(iii) the functions $(\psi_k)_{k\in\mathbb{N}}$ are differentiable at $0$;
(iv) the functions $(\psi_k)_{k\in L}$ are finite and twice differentiable on $\mathbb{R}\smallsetminus\{0\}$, and

(1.7)  $(\forall\rho\in\,]0,+\infty[)(\exists\theta\in\,]0,+\infty[)\quad \inf_{k\in L}\ \inf_{0<|\xi|\leq\rho} \psi_k''(\xi) \geq \theta$;

(v) the function $\Upsilon_L\colon \ell^2(L)\to\,]-\infty,+\infty]\colon (\xi_k)_{k\in L}\mapsto \sum_{k\in L}\psi_k(\xi_k)$ is coercive;
(vi) $0\in\operatorname{int}\bigcap_{k\in K}\Omega_k$.

Let us note that Problem 1.3 reduces to (1.3) when $\Phi\colon x\mapsto \tfrac12\|Tx-z\|^2$, $K=\mathbb{N}$, and, for every $k\in\mathbb{N}$, $\Omega_k = [-\omega_k,\omega_k]$ and $\psi_k \equiv 0$. It will be shown (Proposition 4.1) that Problem 1.3 admits at least one solution. While assumption (i) on $\Phi$ may seem offhand to be rather restrictive, it will be seen in Section 5.1 to cover important scenarios. In addition, it makes it possible to employ a forward-backward splitting strategy to solve (1.6), which consists essentially of alternating a forward (explicit) gradient step on $\Phi$ with a backward (implicit) proximal step on

(1.8)  $\Psi\colon \mathcal{H}\to\,]-\infty,+\infty]\colon x\mapsto \sum_{k\in\mathbb{N}}\psi_k(\langle x\mid e_k\rangle) + \sum_{k\in K}\sigma_{\Omega_k}(\langle x\mid e_k\rangle)$.

Our main convergence result (Theorem 4.5) will establish the strong convergence of an inexact forward-backward splitting algorithm (Algorithm 4.3) for solving Problem 1.3.

Another contribution of this paper will be to show (Remark 3.4) that, under our standing assumptions, the function displayed in (1.8) is quite general in the sense that the operators on $\mathcal{H}$ that perform nonexpansive (as required by our convergence analysis) and nondecreasing (as imposed by practical considerations) thresholdings on the closed intervals $(\Omega_k)_{k\in K}$ of the coefficients $(\langle x\mid e_k\rangle)_{k\in K}$ of a point $x\in\mathcal{H}$ are precisely those of the form $\operatorname{prox}_\Psi$, i.e., the proximity operator of $\Psi$. Furthermore, we show (Proposition 3.5 and Lemma 2.3) that such an operator, which provides the proximal step of our algorithm, can be conveniently decomposed as

(1.9)  $\operatorname{prox}_\Psi\colon \mathcal{H}\to\mathcal{H}\colon x\mapsto \sum_{k\in K} \operatorname{prox}_{\psi_k}\big(\operatorname{soft}_{\Omega_k}\langle x\mid e_k\rangle\big)\, e_k + \sum_{k\in L} \operatorname{prox}_{\psi_k}\langle x\mid e_k\rangle\, e_k$,

where we define the soft thresholder relative to a nonempty closed interval $\Omega\subset\mathbb{R}$ as

(1.10)  $\operatorname{soft}_\Omega\colon \mathbb{R}\to\mathbb{R}\colon \xi\mapsto \begin{cases} \xi-\underline{\omega}, & \text{if } \xi<\underline{\omega};\\ 0, & \text{if } \xi\in\Omega;\\ \xi-\overline{\omega}, & \text{if } \xi>\overline{\omega}, \end{cases}$  with $\underline{\omega}=\inf\Omega$ and $\overline{\omega}=\sup\Omega$.

The remainder of the paper is organized as follows. In Section 2, we provide a brief account of the theory of proximity operators, which play a central role in our analysis. In Section 3, we introduce and study the notion of a proximal thresholder. Our algorithm is presented in Section 4, and its strong convergence to a solution to Problem 1.3 is demonstrated. Signal recovery applications are discussed in Section 5, where numerical results are presented.

2 Proximity operators

Let us first introduce some basic notation (for a detailed account of convex analysis, see [39]). Let $C$ be a subset of $\mathcal{H}$. The indicator function of $C$ is

(2.1)  $\iota_C\colon \mathcal{H}\to\{0,+\infty\}\colon x\mapsto \begin{cases} 0, & \text{if } x\in C;\\ +\infty, & \text{if } x\notin C, \end{cases}$

its support function is $\sigma_C\colon \mathcal{H}\to[-\infty,+\infty]\colon u\mapsto \sup_{x\in C}\langle x\mid u\rangle$, and its distance function is $d_C\colon \mathcal{H}\to[0,+\infty]\colon x\mapsto \inf_{y\in C}\|x-y\|$. If $C$ is nonempty, closed, and convex then, for every $x\in\mathcal{H}$, there exists a unique point $P_Cx\in C$, called the projection of $x$ onto $C$, such that $\|x-P_Cx\| = d_C(x)$. A function $f\colon \mathcal{H}\to[-\infty,+\infty]$ is proper if $-\infty\notin f(\mathcal{H})\neq\{+\infty\}$. The domain of $f$ is $\operatorname{dom} f = \{x\in\mathcal{H} : f(x)<+\infty\}$, its set of global minimizers is denoted by $\operatorname{Argmin} f$, and its conjugate is the function $f^*\colon \mathcal{H}\to[-\infty,+\infty]\colon u\mapsto \sup_{x\in\mathcal{H}} \langle x\mid u\rangle - f(x)$; if $f$ is proper, its subdifferential is the set-valued operator

(2.2)  $\partial f\colon \mathcal{H}\to 2^{\mathcal{H}}\colon x\mapsto \{u\in\mathcal{H} : (\forall y\in\operatorname{dom} f)\ \langle y-x\mid u\rangle + f(x) \leq f(y)\}$.

If $f\colon \mathcal{H}\to\,]-\infty,+\infty]$ is convex and Gâteaux differentiable at $x\in\operatorname{dom} f$ with gradient $\nabla f(x)$, then $\partial f(x) = \{\nabla f(x)\}$.
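The thresholder (1.10) is elementary to implement and is the inner building block of the decomposition (1.9). Below is a minimal sketch; the endpoint arguments w_lo and w_hi are hypothetical names introduced for the illustration, and either may be infinite, which covers the one-sided intervals used in Section 5.

```python
import numpy as np

def soft_interval(xi, w_lo, w_hi):
    """soft_Omega of (1.10) for Omega = [w_lo, w_hi]:
    xi - w_lo if xi < w_lo, 0 on Omega, xi - w_hi if xi > w_hi."""
    if xi < w_lo:
        return xi - w_lo
    if xi > w_hi:
        return xi - w_hi
    return 0.0

# One-sided thresholding with Omega = ]-inf, 0.01]:
print(soft_interval(-0.5, -np.inf, 0.01))  # -> 0.0   (coefficient suppressed)
print(soft_interval(0.5, -np.inf, 0.01))   # -> 0.49
```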

Example 2.1  Let $\Omega\subset\mathbb{R}$ be a nonempty closed interval, let $\underline{\omega}=\inf\Omega$, let $\overline{\omega}=\sup\Omega$, and let $\xi\in\mathbb{R}$. Then the following hold.

(i) $\sigma_\Omega(\xi) = \begin{cases} \underline{\omega}\xi, & \text{if } \xi<0;\\ 0, & \text{if } \xi=0;\\ \overline{\omega}\xi, & \text{if } \xi>0. \end{cases}$

(ii) $\partial\sigma_\Omega(\xi) = \begin{cases} \{\underline{\omega}\}\cap\mathbb{R}, & \text{if } \xi<0;\\ \Omega, & \text{if } \xi=0;\\ \{\overline{\omega}\}\cap\mathbb{R}, & \text{if } \xi>0. \end{cases}$

The infimal convolution of two functions $f, g\colon \mathcal{H}\to\,]-\infty,+\infty]$ is denoted by $f\,\square\, g$. Finally, an operator $T\colon \mathcal{H}\to\mathcal{H}$ is nonexpansive if $(\forall (x,y)\in\mathcal{H}^2)$ $\|Tx-Ty\|\leq\|x-y\|$, and firmly nonexpansive if $(\forall (x,y)\in\mathcal{H}^2)$ $\|Tx-Ty\|^2 \leq \langle x-y\mid Tx-Ty\rangle$.

Proximity operators were introduced by Moreau [28]. We briefly recall some essential facts below and refer the reader to [14] and [29] for more details. Let $f\in\Gamma_0(\mathcal{H})$. The proximity operator of $f$ is the operator $\operatorname{prox}_f\colon \mathcal{H}\to\mathcal{H}$ which maps every $x\in\mathcal{H}$ to the unique minimizer of the function $y\mapsto f(y)+\|x-y\|^2/2$. It is characterized by

(2.3)  $(\forall x\in\mathcal{H})(\forall p\in\mathcal{H})\quad p = \operatorname{prox}_f x \iff x-p\in\partial f(p)$.

Lemma 2.2  Let $f\in\Gamma_0(\mathcal{H})$. Then the following hold.
(i) $(\forall x\in\mathcal{H})$ $x\in\operatorname{Argmin} f \iff 0\in\partial f(x) \iff \operatorname{prox}_f x = x$.
(ii) $\operatorname{prox}_{f^*} = \operatorname{Id} - \operatorname{prox}_f$.
(iii) $\operatorname{prox}_f$ is firmly nonexpansive.
(iv) If $f$ is even, then $\operatorname{prox}_f$ is odd.

Lemma 2.3 [14, Example 2.19]  Let $(b_k)_{k\in\mathbb{N}}$ be an orthonormal basis of $\mathcal{H}$ and let

(2.4)  $f\colon \mathcal{H}\to\,]-\infty,+\infty]\colon x\mapsto \sum_{k\in\mathbb{N}} \phi_k(\langle x\mid b_k\rangle)$,

where $(\phi_k)_{k\in\mathbb{N}}$ are functions in $\Gamma_0(\mathbb{R})$ that satisfy $(\forall k\in\mathbb{N})$ $\phi_k \geq \phi_k(0) = 0$. Then $f\in\Gamma_0(\mathcal{H})$ and $(\forall x\in\mathcal{H})$ $\operatorname{prox}_f x = \sum_{k\in\mathbb{N}} \operatorname{prox}_{\phi_k}(\langle x\mid b_k\rangle)\, b_k$.

The remainder of this section is dedicated to proximity operators on the real line, the importance of which is underscored by Lemma 2.3.
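Lemma 2.3 is what makes $\operatorname{prox}_\Psi$ computable in practice: in the basis $(b_k)$, the proximity operator acts coordinatewise. Here is a small sketch in a finite-dimensional setting, with an orthonormal matrix B standing in for the basis and a user-supplied scalar prox; both names are illustrative assumptions.

```python
import numpy as np

def prox_separable(x, B, prox_scalar):
    """prox of f(x) = sum_k phi_k(<x|b_k>) per Lemma 2.3: analyze in the basis,
    apply the scalar proximity operator coordinatewise, synthesize back.
    The columns of the orthonormal matrix B play the role of (b_k)."""
    coeffs = B.T @ x                 # analysis coefficients <x|b_k>
    return B @ prox_scalar(coeffs)   # synthesis of prox_{phi_k}<x|b_k>

# Usage: phi_k = |.| for every k gives soft thresholding at level 1 in the basis B.
B, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((8, 8)))
x = np.random.default_rng(2).standard_normal(8)
p = prox_separable(x, B, lambda c: np.sign(c) * np.maximum(np.abs(c) - 1.0, 0.0))
```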

Proposition 2.4  Let $\varrho$ be a function from $\mathbb{R}$ to $\mathbb{R}$. Then $\varrho$ is the proximity operator of a function in $\Gamma_0(\mathbb{R})$ if and only if it is nonexpansive and nondecreasing.

Proof. Let $\xi$ and $\eta$ be real numbers. First, suppose that $\varrho = \operatorname{prox}_\phi$, where $\phi\in\Gamma_0(\mathbb{R})$. Then it follows from Lemma 2.2(iii) that $\varrho$ is nonexpansive and that $0 \leq |\varrho(\xi)-\varrho(\eta)|^2 \leq (\xi-\eta)(\varrho(\xi)-\varrho(\eta))$, which shows that $\varrho$ is nondecreasing since $\xi-\eta$ and $\varrho(\xi)-\varrho(\eta)$ have the same sign. Conversely, suppose that $\varrho$ is nonexpansive and nondecreasing and, without loss of generality, that $\xi\geq\eta$. Then $0 \leq \varrho(\xi)-\varrho(\eta) \leq \xi-\eta$ and therefore $|\varrho(\xi)-\varrho(\eta)|^2 \leq (\xi-\eta)(\varrho(\xi)-\varrho(\eta))$. Thus, $\varrho$ is firmly nonexpansive. However, every firmly nonexpansive operator $T\colon \mathcal{H}\to\mathcal{H}$ is of the form $T = (\operatorname{Id}+A)^{-1}$, where $A\colon \mathcal{H}\to 2^{\mathcal{H}}$ is a maximal monotone operator [6]. Since the only maximal monotone operators on $\mathbb{R}$ are subdifferentials of functions in $\Gamma_0(\mathbb{R})$ [30, Section 24], we must have $\varrho = (\operatorname{Id}+\partial\phi)^{-1} = \operatorname{prox}_\phi$ for some $\phi\in\Gamma_0(\mathbb{R})$.

Corollary 2.5  Suppose that $\phi\in\Gamma_0(\mathbb{R})$ is minimized by $0$. Then

(2.5)  $(\forall\xi\in\mathbb{R})\quad \begin{cases} 0 \leq \operatorname{prox}_\phi\xi \leq \xi, & \text{if } \xi>0;\\ \operatorname{prox}_\phi\xi = 0, & \text{if } \xi=0;\\ \xi \leq \operatorname{prox}_\phi\xi \leq 0, & \text{if } \xi<0. \end{cases}$

This is true in particular when $\phi$ is even, in which case $\operatorname{prox}_\phi$ is an odd operator.

Proof. Since $0\in\operatorname{Argmin}\phi$, Lemma 2.2(i) yields $\operatorname{prox}_\phi 0 = 0$. In turn, since $\operatorname{prox}_\phi$ is nonexpansive by Lemma 2.2(iii), we have $(\forall\xi\in\mathbb{R})$ $|\operatorname{prox}_\phi\xi| = |\operatorname{prox}_\phi\xi - \operatorname{prox}_\phi 0| \leq |\xi-0| = |\xi|$. Altogether, since Proposition 2.4 asserts that $\operatorname{prox}_\phi$ is nondecreasing, we obtain (2.5). Finally, if $\phi$ is even, its convexity yields $(\forall\xi\in\operatorname{dom}\phi)$ $\phi(0) = \phi\big((\xi-\xi)/2\big) \leq \big(\phi(\xi)+\phi(-\xi)\big)/2 = \phi(\xi)$. Therefore $0\in\operatorname{Argmin}\phi$, while the oddness of $\operatorname{prox}_\phi$ follows from Lemma 2.2(iv).

Let us now provide some elementary examples (Example 2.6 is illustrated in Fig. 1 in the case when $\Omega = [-1,1]$).

Example 2.6  Let $\Omega\subset\mathbb{R}$ be a nonempty closed interval, let $\underline{\omega}=\inf\Omega$, let $\overline{\omega}=\sup\Omega$, and let $\xi\in\mathbb{R}$. Then the following hold.

(i) $\operatorname{prox}_{\iota_\Omega}\xi = P_\Omega\xi = \begin{cases} \underline{\omega}, & \text{if } \xi<\underline{\omega};\\ \xi, & \text{if } \xi\in\Omega;\\ \overline{\omega}, & \text{if } \xi>\overline{\omega}. \end{cases}$

(ii) $\operatorname{prox}_{\sigma_\Omega}\xi = \operatorname{soft}_\Omega\xi$, where $\operatorname{soft}_\Omega$ is the soft thresholder defined in (1.10).

Proof. (i) is clear and, since $\sigma_\Omega^* = \iota_\Omega$, (ii) follows from (i) and Lemma 2.2(ii).

Example 2.7  Let $p\in[1,+\infty[$, let $\omega\in\,]0,+\infty[$, let $\phi\colon \mathbb{R}\to\mathbb{R}\colon \eta\mapsto\omega|\eta|^p$, let $\xi\in\mathbb{R}$, and set $\pi = \operatorname{prox}_\phi\xi$. Then the following hold.

(i) $\pi = \operatorname{sign}(\xi)\max\{|\xi|-\omega,\,0\}$, if $p=1$;
(ii) $\pi = \xi + \dfrac{4\omega}{3\cdot 2^{1/3}}\Big((\rho-\xi)^{1/3} - (\rho+\xi)^{1/3}\Big)$, where $\rho = \sqrt{\xi^2 + 256\omega^3/729}$, if $p=4/3$;
(iii) $\pi = \xi + 9\omega^2\operatorname{sign}(\xi)\Big(1 - \sqrt{1 + 16|\xi|/(9\omega^2)}\Big)/8$, if $p=3/2$;
(iv) $\pi = \xi/(1+2\omega)$, if $p=2$;
(v) $\pi = \operatorname{sign}(\xi)\Big(\sqrt{1+12\omega|\xi|} - 1\Big)/(6\omega)$, if $p=3$;
(vi) $\pi = \Big(\dfrac{\rho+\xi}{8\omega}\Big)^{1/3} - \Big(\dfrac{\rho-\xi}{8\omega}\Big)^{1/3}$, where $\rho = \sqrt{\xi^2 + 1/(27\omega)}$, if $p=4$.

Proof. (i): Set $\Omega = [-\omega,\omega]$ in Example 2.6(ii). (ii)-(vi): Since $\phi$ is even, we can assume that $\xi\geq 0$ and then extend the result to $\xi\leq 0$ by antisymmetry via Corollary 2.5. As $\phi$ is differentiable, it follows from (2.3) and Corollary 2.5 that $\pi$ is the unique nonnegative solution to $\xi-\pi = \phi'(\pi) = p\omega\pi^{p-1}$, which can be solved explicitly in each case.
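The closed forms of Example 2.7 translate directly into code. The following sketch implements items (i)-(vi) as reconstructed above (the $p=4/3$ and $p=4$ cases follow from Cardano's formula for the resulting cubic) and verifies the optimality condition $\xi-\pi = p\omega\pi^{p-1}$ numerically; treat it as an illustration of the formulas rather than a reference implementation.

```python
import numpy as np

def prox_power(xi, w, p):
    """prox of phi = w*|.|^p at xi for the exponents of Example 2.7 (w > 0)."""
    s, a = np.sign(xi), abs(xi)
    if p == 1:
        return s * max(a - w, 0.0)
    if p == 4/3:
        rho = np.sqrt(xi**2 + 256 * w**3 / 729)      # rho >= |xi|, so roots are real
        return xi + 4*w/(3 * 2**(1/3)) * ((rho - xi)**(1/3) - (rho + xi)**(1/3))
    if p == 3/2:
        return xi + 9*w**2 * s * (1 - np.sqrt(1 + 16*a/(9*w**2))) / 8
    if p == 2:
        return xi / (1 + 2*w)
    if p == 3:
        return s * (np.sqrt(1 + 12*w*a) - 1) / (6*w)
    if p == 4:
        rho = np.sqrt(xi**2 + 1/(27*w))
        return ((rho + xi)/(8*w))**(1/3) - ((rho - xi)/(8*w))**(1/3)
    raise NotImplementedError(p)

# Check the characterization xi - pi = p*w*pi^(p-1) at xi = 2, w = 1.
xi, w = 2.0, 1.0
for p in (1, 4/3, 3/2, 2, 3, 4):
    pi = prox_power(xi, w, p)
    assert abs((xi - pi) - p * w * pi**(p - 1)) < 1e-9
```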

Figure 1: Graphs of $\operatorname{prox}_\phi = \operatorname{soft}_{[-1,1]}$ (solid line) and $\operatorname{prox}_{\phi^*} = P_{[-1,1]}$ (dashed line), where $\phi = |\cdot|$ and $\phi^* = \iota_{[-1,1]}$.

Proposition 2.8  Let $\psi$ be a function in $\Gamma_0(\mathbb{R})$, and let $\rho$ and $\theta$ be real numbers in $]0,+\infty[$ such that: (i) $\psi \geq \psi(0) = 0$; (ii) $\psi$ is differentiable at $0$; (iii) $\psi$ is twice differentiable on $[-\rho,\rho]\smallsetminus\{0\}$ and $\inf_{0<|\xi|\leq\rho}\psi''(\xi) \geq \theta$. Then

$(\forall\xi\in[-\rho,\rho])(\forall\eta\in[-\rho,\rho])\quad |\operatorname{prox}_\psi\xi - \operatorname{prox}_\psi\eta| \leq |\xi-\eta|/(1+\theta)$.

Proof. Set $R = [-\rho,\rho]\smallsetminus\{0\}$ and $\varphi\colon R\to\mathbb{R}\colon \zeta\mapsto\zeta+\psi'(\zeta)$. We first infer from (iii) that

(2.6)  $(\forall\zeta\in R)\quad \varphi'(\zeta) = 1 + \psi''(\zeta) \geq 1+\theta$.

Moreover, (2.3) yields $(\forall\zeta\in R)$ $\operatorname{prox}_\psi\zeta = \varphi^{-1}(\zeta)$. Note also that, in the light of (2.3), (ii), and (i), we have $(\forall\zeta\in R)$ $\operatorname{prox}_\psi\zeta = 0 \iff \zeta\in\partial\psi(0) = \{\psi'(0)\} = \{0\}$. Hence, $\operatorname{prox}_\psi$ vanishes only at $0$ and we derive from Lemma 2.2(iii) that

(2.7)  $(\forall\zeta\in R)\quad 0 < |\varphi^{-1}(\zeta)| = |\operatorname{prox}_\psi\zeta - \operatorname{prox}_\psi 0| \leq |\zeta-0| \leq \rho$.

In turn, we deduce from (2.6) that

(2.8)  $\sup_{\zeta\in R} (\operatorname{prox}_\psi)'(\zeta) = \dfrac{1}{\inf_{\zeta\in R} \varphi'\big(\varphi^{-1}(\zeta)\big)} \leq \dfrac{1}{\inf_{\zeta\in R}\varphi'(\zeta)} \leq \dfrac{1}{1+\theta}$.

Now fix $\xi$ and $\eta$ in $R$. First, let us assume that either $\xi<\eta<0$ or $0<\xi<\eta$. Then, since $\operatorname{prox}_\psi$ is nondecreasing by Proposition 2.4, it follows from the mean value theorem and (2.8) that there exists $\mu\in\,]\xi,\eta[$ such that

(2.9)  $0 \leq \operatorname{prox}_\psi\eta - \operatorname{prox}_\psi\xi = (\eta-\xi)(\operatorname{prox}_\psi)'(\mu) \leq (\eta-\xi)\sup_{\zeta\in R}(\operatorname{prox}_\psi)'(\zeta) \leq \dfrac{\eta-\xi}{1+\theta}$.

Next, let us assume that $\xi<0<\eta$. Then the mean value theorem asserts that there exist $\mu\in\,]\xi,0[$ and $\nu\in\,]0,\eta[$ such that

(2.10)  $\operatorname{prox}_\psi 0 - \operatorname{prox}_\psi\xi = -\xi\,(\operatorname{prox}_\psi)'(\mu)$ and $\operatorname{prox}_\psi\eta - \operatorname{prox}_\psi 0 = \eta\,(\operatorname{prox}_\psi)'(\nu)$.

Since $\operatorname{prox}_\psi$ is nondecreasing and $\operatorname{prox}_\psi 0 = 0$, we obtain

(2.11)  $0 \leq \operatorname{prox}_\psi\eta - \operatorname{prox}_\psi\xi = \eta\,(\operatorname{prox}_\psi)'(\nu) - \xi\,(\operatorname{prox}_\psi)'(\mu) \leq (\eta-\xi)\sup_{\zeta\in R}(\operatorname{prox}_\psi)'(\zeta) \leq \dfrac{\eta-\xi}{1+\theta}$.

Altogether, we have shown that, for every $\xi$ and $\eta$ in $R$, $|\operatorname{prox}_\psi\xi - \operatorname{prox}_\psi\eta| \leq |\xi-\eta|/(1+\theta)$. We conclude by observing that, due to the continuity of $\operatorname{prox}_\psi$ (Lemma 2.2(iii)), this inequality holds for every $\xi$ and $\eta$ in $[-\rho,\rho]$.

3 Proximal thresholding

The standard soft thresholder of (1.1), which was extended to closed intervals in (1.10), was seen in Example 2.6(ii) to be a proximity operator. As such, it possesses attractive properties (see Lemma 2.2(i)&(iii)) that prove extremely useful in the convergence analysis of iterative methods [11]. This remark motivates the following definition.

Definition 3.1  Let $T\colon \mathcal{H}\to\mathcal{H}$ and let $\Omega$ be a nonempty closed convex subset of $\mathcal{H}$. Then $T$ is a proximal thresholder on $\Omega$ if there exists a function $f\in\Gamma_0(\mathcal{H})$ such that

(3.1)  $T = \operatorname{prox}_f$ and $(\forall x\in\mathcal{H})$ $Tx = 0 \iff x\in\Omega$.

The next proposition provides characterizations of proximal thresholders.

Proposition 3.2  Let $f\in\Gamma_0(\mathcal{H})$ and let $\Omega$ be a nonempty closed convex subset of $\mathcal{H}$. Then the following are equivalent.

(i) $\operatorname{prox}_f$ is a proximal thresholder on $\Omega$.
(ii) $\partial f(0) = \Omega$.
(iii) $(\forall x\in\mathcal{H})$ $[\operatorname{prox}_{f^*} x = x \iff x\in\Omega]$.
(iv) $\operatorname{Argmin} f^* = \Omega$.

In particular, (i)-(iv) hold when

(v) $f = g + \sigma_\Omega$, where $g\in\Gamma_0(\mathcal{H})$ is Gâteaux differentiable at $0$ and $\nabla g(0) = 0$.

Proof. (i)$\iff$(ii): Fix $x\in\mathcal{H}$. Then it follows from (2.3) that

(3.2)  $[\operatorname{prox}_f x = 0 \iff x\in\Omega] \iff [x\in\partial f(0) \iff x\in\Omega] \iff \partial f(0) = \Omega$.

(i)$\iff$(iii): Fix $x\in\mathcal{H}$. Then it follows from Lemma 2.2(ii) that

(3.3)  $[\operatorname{prox}_f x = 0 \iff x\in\Omega] \iff [x - \operatorname{prox}_{f^*} x = 0 \iff x\in\Omega]$.

(iii)$\iff$(iv): Since $f\in\Gamma_0(\mathcal{H})$, $f^*\in\Gamma_0(\mathcal{H})$ and we can apply Lemma 2.2(i) to $f^*$.

(v)$\Rightarrow$(ii): Since (v) implies that $0\in\operatorname{core}\operatorname{dom} g$, we have $0\in\operatorname{core}(\operatorname{dom} g)\cap\operatorname{dom}\sigma_\Omega$ and it follows from [39, Theorem 2.8.3] that

(3.4)  $\partial f(0) = \partial(g+\sigma_\Omega)(0) = \partial g(0) + \partial\sigma_\Omega(0) = \partial g(0) + \Omega$,

where the last equality results from the observation that, for every $u\in\mathcal{H}$, Fenchel's identity yields $u\in\partial\sigma_\Omega(0) \iff \langle 0\mid u\rangle = \sigma_\Omega(0) + \sigma_\Omega^*(u) \iff 0 = \sigma_\Omega^*(u) = \iota_\Omega(u) \iff u\in\Omega$. However, since $\partial g(0) = \{\nabla g(0)\} = \{0\}$, we obtain $\partial f(0) = \Omega$, and (ii) is therefore satisfied.

The following theorem is a significant refinement, in the case when $\mathcal{H}=\mathbb{R}$, of a result of Proposition 3.2: it characterizes all the functions $\phi\in\Gamma_0(\mathbb{R})$ for which $\operatorname{prox}_\phi$ is a proximal thresholder.

Theorem 3.3  Let $\phi\in\Gamma_0(\mathbb{R})$ and let $\Omega\subset\mathbb{R}$ be a nonempty closed interval. Then the following are equivalent.

(i) $\operatorname{prox}_\phi$ is a proximal thresholder on $\Omega$.
(ii) $\phi = \psi + \sigma_\Omega$, where $\psi\in\Gamma_0(\mathbb{R})$ is differentiable at $0$ and $\psi'(0) = 0$.

Proof. In view of Proposition 3.2, it is enough to show that $\partial\phi(0) = \Omega \Rightarrow$ (ii). So let us assume that $\partial\phi(0) = \Omega$, and set $\underline{\omega}=\inf\Omega$ and $\overline{\omega}=\sup\Omega$. Since $\partial\phi(0)\neq\varnothing$, we deduce from (2.2) that $0\in\operatorname{dom}\phi$ and that

(3.5)  $(\forall\xi\in\mathbb{R})\quad \sigma_\Omega(\xi) = \sup_{\nu\in\Omega}(\xi-0)\nu \leq \phi(\xi) - \phi(0)$.

Consequently,

(3.6)  $\operatorname{dom}\phi \subset \operatorname{dom}\sigma_\Omega$.

Thus, in the case when $\Omega = \mathbb{R}$, Example 2.1(i) yields $\operatorname{dom}\phi = \operatorname{dom}\sigma_\Omega = \{0\}$ and we obtain $\phi = \phi(0) + \iota_{\{0\}} = \phi(0) + \sigma_\Omega$, hence (ii) with $\psi \equiv \phi(0)$. We henceforth assume that $\Omega\neq\mathbb{R}$ and set

(3.7)  $(\forall\xi\in\mathbb{R})\quad \varphi(\xi) = \begin{cases} \phi(\xi) - \phi(0) - \overline{\omega}\xi, & \text{if } \xi>0 \text{ and } \overline{\omega}<+\infty;\\ \phi(\xi) - \phi(0) - \underline{\omega}\xi, & \text{if } \xi<0 \text{ and } \underline{\omega}>-\infty;\\ 0, & \text{otherwise.} \end{cases}$

Then Example 2.1(i) and (3.5) yield

(3.8)  $\varphi \geq 0 = \varphi(0)$,

which also shows that $\varphi$ is proper. In addition, we derive from Example 2.1(i) and (3.7) the following three possible expressions for $\varphi$.

(a) If $\underline{\omega}>-\infty$ and $\overline{\omega}<+\infty$, then $\sigma_\Omega$ is a finite continuous function and

(3.9)  $(\forall\xi\in\mathbb{R})\quad \varphi(\xi) = \phi(\xi) - \phi(0) - \sigma_\Omega(\xi)$.

(b) If $\underline{\omega}=-\infty$ and $\overline{\omega}<+\infty$, then

(3.10)  $(\forall\xi\in\mathbb{R})\quad \varphi(\xi) = \begin{cases} \phi(\xi)-\phi(0)-\overline{\omega}\xi, & \text{if } \xi>0;\\ 0, & \text{otherwise.} \end{cases}$

(c) If $\underline{\omega}>-\infty$ and $\overline{\omega}=+\infty$, then

(3.11)  $(\forall\xi\in\mathbb{R})\quad \varphi(\xi) = \begin{cases} \phi(\xi)-\phi(0)-\underline{\omega}\xi, & \text{if } \xi<0;\\ 0, & \text{otherwise.} \end{cases}$

Let us show that $\varphi$ is lower semicontinuous. In case (a), this follows at once from the lower semicontinuity of $\phi$ and the continuity of $\sigma_\Omega$. In cases (b) and (c), $\varphi$ is clearly lower semicontinuous at every point $\xi\neq 0$ and, by (3.8), at $0$ as well. Next, let us establish the convexity of $\varphi$. To this end, we set

(3.12)  $(\forall\xi\in\mathbb{R})\quad \varphi_1(\xi) = \begin{cases} \phi(\xi)-\phi(0)-\overline{\omega}\xi, & \text{if } \xi>0 \text{ and } \overline{\omega}<+\infty;\\ 0, & \text{otherwise,} \end{cases}$

and

(3.13)  $(\forall\xi\in\mathbb{R})\quad \varphi_2(\xi) = \begin{cases} \phi(\xi)-\phi(0)-\underline{\omega}\xi, & \text{if } \xi<0 \text{ and } \underline{\omega}>-\infty;\\ 0, & \text{otherwise.} \end{cases}$

By inspecting (3.7), (3.12), and (3.13) we see that $\varphi$ coincides with $\varphi_1$ on $[0,+\infty[$ and with $\varphi_2$ on $]-\infty,0]$. Hence, (3.8) yields

(3.14)  $\varphi_1 \geq 0$ and $\varphi_2 \geq 0$,

and

(3.15)  $\varphi = \max\{\varphi_1, \varphi_2\}$.

Furthermore, since $\phi$ is convex, so are the functions $\xi\mapsto\phi(\xi)-\phi(0)-\overline{\omega}\xi$ and $\xi\mapsto\phi(\xi)-\phi(0)-\underline{\omega}\xi$, when $\overline{\omega}<+\infty$ and $\underline{\omega}>-\infty$, respectively. Therefore, it follows from (3.12), (3.13), and (3.14) that $\varphi_1$ and $\varphi_2$ are convex, and hence from (3.15) that $\varphi$ is convex. We have thus shown that $\varphi\in\Gamma_0(\mathbb{R})$. We now claim that, for every $\xi\in\mathbb{R}$,

(3.16)  $\phi(\xi) = \varphi(\xi) + \phi(0) + \sigma_\Omega(\xi)$.

We can establish this identity with the help of Example 2.1(i). In case (a), (3.16) follows at once from (3.9) since $\sigma_\Omega$ is finite. In case (b), (3.16) follows from (3.10) when $\xi\geq 0$, and from (3.5) when $\xi<0$ since, in this case, $\sigma_\Omega(\xi) = +\infty$. Likewise, in case (c), (3.16) follows from (3.11) when $\xi\leq 0$, and from (3.5) when $\xi>0$ since, in this case, $\sigma_\Omega(\xi) = +\infty$. Next, let us show that

(3.17)  $0 \in \operatorname{int}(\operatorname{dom}\phi - \operatorname{dom}\sigma_\Omega)$.

In case (a), we have $\Omega = [\underline{\omega},\overline{\omega}]$. Therefore $\operatorname{dom}\sigma_\Omega = \mathbb{R}$ and (3.17) trivially holds. In case (b), we have $\Omega = ]-\infty,\overline{\omega}]$ and, therefore, $\operatorname{dom}\sigma_\Omega = [0,+\infty[$. This implies, via (3.6), that $\operatorname{dom}\phi \subset [0,+\infty[$. Therefore, there exists $\nu\in\operatorname{dom}\phi\,\cap\,]0,+\infty[$ since otherwise we would have $\operatorname{dom}\phi = \{0\}$, which, in view of (2.2), would contradict the current working assumption that $\partial\phi(0) = \Omega \neq \mathbb{R}$. By convexity of $\phi$, it follows that $[0,\nu]\subset\operatorname{dom}\phi$ and, therefore, that $]-\infty,\nu]\subset\operatorname{dom}\phi - \operatorname{dom}\sigma_\Omega$. We thus obtain (3.17) in case (b); case (c) can be handled analogously. We can now appeal to [30, Theorem 23.8] to derive from (3.16), (3.17), and Example 2.1(ii) that

(3.18)  $\Omega = \partial\phi(0) = \partial\varphi(0) + \partial\sigma_\Omega(0) = \partial\varphi(0) + \Omega$

and, therefore, that $\partial\varphi(0) = \{0\}$ since $\Omega\neq\mathbb{R}$. In turn, upon invoking [30, Theorem 25.1], we conclude that $\varphi$ is differentiable at $0$ and that $\varphi'(0) = 0$. Altogether, we obtain (ii) by setting $\psi = \varphi + \phi(0)$.

Remark 3.4  A standard requirement for thresholders on $\mathbb{R}$ is that they be nondecreasing functions [2, 31, 32, 37]. On the other hand, nonexpansivity is a key property in establishing the convergence of iterative methods [11] and, in particular, in Proposition 1.1 [16] and Proposition 1.2 [14]. As seen in Proposition 2.4 and Definition 3.1, the nondecreasing and nonexpansive functions $\varrho\colon \mathbb{R}\to\mathbb{R}$ that vanish only on a closed interval $\Omega\subset\mathbb{R}$ coincide with the proximal thresholders on $\Omega$. Hence, appealing to Theorem 3.3 and Lemma 2.3, we conclude that the operators that perform a componentwise nondecreasing and nonexpansive thresholding on $(\Omega_k)_{k\in K}$ of those coefficients of the decomposition in $(e_k)_{k\in\mathbb{N}}$ indexed by $K$ are precisely the operators of the form $\operatorname{prox}_\Psi$, where $\Psi$ is as in (1.8).

Next, we provide a convenient decomposition rule for implementing proximal thresholders.

Proposition 3.5  Let $\phi = \psi + \sigma_\Omega$, where $\psi\in\Gamma_0(\mathbb{R})$ and $\Omega\subset\mathbb{R}$ is a nonempty closed interval. Suppose that $\psi$ is differentiable at $0$ with $\psi'(0) = 0$. Then $\operatorname{prox}_\phi = \operatorname{prox}_\psi \circ \operatorname{soft}_\Omega$.

Proof. Fix $\xi$ and $\pi$ in $\mathbb{R}$. We have $0\in\operatorname{dom}\sigma_\Omega$ and, since $\psi$ is differentiable at $0$, $0\in\operatorname{int}\operatorname{dom}\psi$. It therefore follows from (2.3) and [30, Theorem 23.8] that

(3.19)  $\pi = \operatorname{prox}_\phi\xi \iff \xi-\pi\in\partial\phi(\pi) = \partial\psi(\pi) + \partial\sigma_\Omega(\pi) \iff (\exists\nu\in\partial\psi(\pi))\ \xi-(\pi+\nu)\in\partial\sigma_\Omega(\pi)$.

Let us observe that, if $\nu\in\partial\psi(\pi)$, then, since $0\in\operatorname{Argmin}\psi$, (2.2) implies that $(0-\pi)\nu + \psi(\pi) \leq \psi(0) \leq \psi(\pi) < +\infty$ and, in turn, that $\pi\nu \geq 0$. This shows that, if $\nu\in\partial\psi(\pi)$ and $\pi\neq 0$, then either $\pi>0$ and $\nu\geq 0$, or $\pi<0$ and $\nu\leq 0$; in turn, Example 2.1(ii) yields $\partial\sigma_\Omega(\pi) = \partial\sigma_\Omega(\pi+\nu)$. Consequently, if $\pi\neq 0$, we derive from (3.19) and Example 2.6(ii) that

(3.20)  $\pi = \operatorname{prox}_\phi\xi \iff (\exists\nu\in\partial\psi(\pi))\ \xi-(\pi+\nu)\in\partial\sigma_\Omega(\pi+\nu) \iff (\exists\nu\in\partial\psi(\pi))\ \pi+\nu = \operatorname{prox}_{\sigma_\Omega}\xi = \operatorname{soft}_\Omega\xi \iff \operatorname{soft}_\Omega\xi - \pi \in \partial\psi(\pi) \iff \pi = \operatorname{prox}_\psi(\operatorname{soft}_\Omega\xi)$.

On the other hand, if $\pi = 0$, since $\partial\psi(0) = \{\psi'(0)\} = \{0\}$, we derive from (3.19), Example 2.1(ii), (1.10), and Lemma 2.2(i) that

(3.21)  $\pi = \operatorname{prox}_\phi\xi \iff \xi\in\partial\sigma_\Omega(0) = \Omega \iff \operatorname{soft}_\Omega\xi = 0 \implies \operatorname{prox}_\psi(\operatorname{soft}_\Omega\xi) = 0 = \pi$.

The proof is now complete.

In view of Proposition 3.5 and (1.10), the computation of the proximal thresholder $\operatorname{prox}_{\psi+\sigma_\Omega}$ reduces to that of $\operatorname{prox}_\psi$.
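Proposition 3.5 says that a proximal thresholder never has to be computed from scratch: soft-threshold first, then apply $\operatorname{prox}_\psi$. As an illustration, here is a sketch for the Huber function $\psi$ of Fig. 3; the closed-form prox of that Huber function, used below under this assumption, is $\xi/2$ for $|\xi|\leq 2$ and $\xi-\operatorname{sign}\xi$ otherwise.

```python
import numpy as np

def soft_interval(xi, w_lo, w_hi):
    """soft_Omega of (1.10) for Omega = [w_lo, w_hi]."""
    return xi - w_lo if xi < w_lo else (xi - w_hi if xi > w_hi else 0.0)

def prox_huber(xi):
    """prox of the Huber function psi(t) = t^2/2 if |t|<=1, |t|-1/2 otherwise;
    psi is smooth with psi'(0) = 0, so Proposition 3.5 applies."""
    return xi / 2.0 if abs(xi) <= 2.0 else xi - np.sign(xi)

def prox_composed(xi, w_lo, w_hi):
    """prox_{psi + sigma_Omega} = prox_psi o soft_Omega (Proposition 3.5)."""
    return prox_huber(soft_interval(xi, w_lo, w_hi))

print(prox_composed(0.5, -1, 1))   # -> 0.0: coefficient thresholded
print(prox_composed(3.0, -1, 1))   # -> prox_huber(2.0) = 1.0
```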

Figure 2: Graphs of the proximal thresholder $\operatorname{prox}_\phi$ (solid line) and its dual $\operatorname{prox}_{\phi^*}$ (dashed line), where $\phi = \tau|\cdot|^p + \sigma_{[-1,1]}$. Top: $\tau = 0.05$ and $p = 4$; bottom: $\tau = 0.9$ and $p = 4/3$.

By duality, we obtain a decomposition formula for those proximity operators that coincide with the identity on a closed interval $\Omega$.

Proposition 3.6  Let $\phi = \psi\,\square\,\iota_\Omega$, where $\psi\in\Gamma_0(\mathbb{R})$ and $\Omega\subset\mathbb{R}$ is a nonempty closed interval. Suppose that $\psi^*$ is differentiable at $0$ with $(\psi^*)'(0) = 0$. Then the following hold.

(i) $\operatorname{prox}_\phi = P_\Omega + \operatorname{prox}_\psi \circ \operatorname{soft}_\Omega$.
(ii) $(\forall\xi\in\mathbb{R})$ $\operatorname{prox}_\phi\xi = \xi \iff \xi\in\Omega$.

Proof. It follows from [30, Theorem 16.4] that

(3.22)  $\phi^* = \psi^* + \iota_\Omega^* = \psi^* + \sigma_\Omega$.

Note also that, since $\psi\in\Gamma_0(\mathbb{R})$, we have $\psi^*\in\Gamma_0(\mathbb{R})$ [30, Theorem 12.2].

(i): Fix $\xi\in\mathbb{R}$. Then, by Lemma 2.2(ii), (3.22), Proposition 3.5, and Example 2.6,

(3.23)  $\operatorname{prox}_\phi\xi = \xi - \operatorname{prox}_{\phi^*}\xi = \xi - \operatorname{prox}_{\psi^*+\sigma_\Omega}\xi = \xi - \operatorname{prox}_{\psi^*}\big(\operatorname{prox}_{\sigma_\Omega}\xi\big)$

(3.24)  $= \xi - \operatorname{prox}_{\sigma_\Omega}\xi + \operatorname{prox}_{\sigma_\Omega}\xi - \operatorname{prox}_{\psi^*}\big(\operatorname{prox}_{\sigma_\Omega}\xi\big) = \operatorname{prox}_{\sigma_\Omega^*}\xi + \operatorname{prox}_{(\psi^*)^*}\big(\operatorname{prox}_{\sigma_\Omega}\xi\big) = \operatorname{prox}_{\iota_\Omega}\xi + \operatorname{prox}_\psi\big(\operatorname{prox}_{\sigma_\Omega}\xi\big) = P_\Omega\xi + \operatorname{prox}_\psi\big(\operatorname{soft}_\Omega\xi\big)$.

(ii): It follows from (3.22) and Theorem 3.3 that $\operatorname{prox}_{\phi^*}$ is a proximal thresholder on $\Omega$. Hence, we derive from (3.23) and (3.1) that $(\forall\xi\in\mathbb{R})$ $\operatorname{prox}_\phi\xi = \xi \iff \operatorname{prox}_{\phi^*}\xi = 0 \iff \xi\in\Omega$.

Examples of proximal thresholders (Proposition 3.5) and of their duals (Proposition 3.6) are provided in Figs. 2 and 3 (see also Fig. 1) in the case when $\Omega = [-1,1]$.

Figure 3: Graphs of the proximal thresholder $\operatorname{prox}_\phi$ (solid line) and its dual $\operatorname{prox}_{\phi^*}$ (dashed line), where $\phi = \psi + \sigma_{[-1,1]}$. Top: $\psi = \iota_{[-1,1]}$; bottom: $\psi\colon \xi\mapsto \xi^2/2$ if $|\xi|\leq 1$, $|\xi|-1/2$ if $|\xi|>1$, the Huber function [25].
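Dually, Proposition 3.6(i) expresses proximity operators that are the identity on $\Omega$ as $P_\Omega + \operatorname{prox}_\psi\circ\operatorname{soft}_\Omega$. Here is a quick numeric check with the illustrative choice $\psi = |\cdot|^2/2$, so that $\psi^* = \psi$ is differentiable at $0$ with vanishing derivative and $\operatorname{prox}_\psi t = t/2$; these choices are assumptions made for the example.

```python
def soft_interval(xi, w_lo, w_hi):
    return xi - w_lo if xi < w_lo else (xi - w_hi if xi > w_hi else 0.0)

def prox_dual_thresholder(xi, w_lo, w_hi):
    """Proposition 3.6(i) with psi(t) = t^2/2: P_Omega(xi) + prox_psi(soft_Omega(xi))."""
    proj = min(max(xi, w_lo), w_hi)              # P_Omega
    return proj + 0.5 * soft_interval(xi, w_lo, w_hi)

assert prox_dual_thresholder(0.3, -1, 1) == 0.3  # identity inside Omega (Prop. 3.6(ii))
print(prox_dual_thresholder(3.0, -1, 1))         # -> 1 + (3 - 1)/2 = 2.0 outside Omega
```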

4 Iterative proximal thresholding

Let us start with some basic properties of Problem 1.3.

Proposition 4.1  Problem 1.3 possesses at least one solution.

Proof. Let $\Psi$ be as in (1.8). We infer from the assumptions of Problem 1.3 and Lemma 2.3 that $\Psi\in\Gamma_0(\mathcal{H})$ and, in turn, that $\Phi+\Psi\in\Gamma_0(\mathcal{H})$. Hence, it suffices to show that $\Phi+\Psi$ is coercive [39, Theorem 2.5.1(ii)], i.e., since $\inf\Phi(\mathcal{H}) > -\infty$ by assumption (i) in Problem 1.3, that $\Psi$ is coercive. For this purpose, let $x = (\xi_k)_{k\in\mathbb{N}}$ denote a generic element in $\ell^2(\mathbb{N})$, and let

(4.1)  $\Upsilon\colon \ell^2(\mathbb{N})\to\,]-\infty,+\infty]\colon x\mapsto \sum_{k\in\mathbb{N}}\psi_k(\xi_k) + \sum_{k\in K}\sigma_{\Omega_k}(\xi_k)$.

Then, by Parseval's identity, it is enough to show that $\Upsilon$ is coercive. To this end, set $x_K = (\xi_k)_{k\in K}$ and $x_L = (\xi_k)_{k\in L}$, and denote by $\|\cdot\|_K$ and $\|\cdot\|_L$ the standard norms on $\ell^2(K)$ and $\ell^2(L)$, respectively. It follows from assumption (vi) in Problem 1.3 that there exists $\omega\in\,]0,+\infty[$ such that

(4.2)  $[-\omega,\omega] \subset \bigcap_{k\in K}\Omega_k$.

Therefore, using (4.2), assumption (ii) in Problem 1.3, and Example 2.1(i), we obtain

(4.3)  $(\forall x\in\ell^2(\mathbb{N}))\quad \Upsilon(x) \geq \sum_{k\in K}\sigma_{\Omega_k}(\xi_k) + \sum_{k\in L}\psi_k(\xi_k) \geq \omega\sum_{k\in K}|\xi_k| + \Upsilon_L(x_L) \geq \omega\|x_K\|_K + \Upsilon_L(x_L)$.

Now suppose that $\|x\| = \sqrt{\|x_K\|_K^2 + \|x_L\|_L^2} \to +\infty$. Then (4.3) and assumption (v) in Problem 1.3 yield $\Upsilon(x)\to+\infty$, as desired.

Proposition 4.2  Let $\Psi$ be as in (1.8), let $x\in\mathcal{H}$, and let $\gamma\in\,]0,+\infty[$. Then $x$ is a solution to Problem 1.3 if and only if $x = \operatorname{prox}_{\gamma\Psi}\big(x - \gamma\nabla\Phi(x)\big)$.

Proof. Since Problem 1.3 is equivalent to minimizing $\Phi+\Psi$, this is a standard characterization; see for instance [14, Proposition 3.1(iii)].

Our algorithm for solving Problem 1.3 will be the following.

Algorithm 4.3  Fix $x_0\in\mathcal{H}$ and set

(4.4)  $(\forall n\in\mathbb{N})\quad x_{n+1} = x_n + \lambda_n\Big(\sum_{k\in K}\Big(\alpha_{n,k} + \operatorname{prox}_{\gamma_n\psi_k}\big(\operatorname{soft}_{\gamma_n\Omega_k}\langle x_n - \gamma_n(\nabla\Phi(x_n)+b_n)\mid e_k\rangle\big)\Big) e_k + \sum_{k\in L}\Big(\alpha_{n,k} + \operatorname{prox}_{\gamma_n\psi_k}\langle x_n - \gamma_n(\nabla\Phi(x_n)+b_n)\mid e_k\rangle\Big) e_k - x_n\Big)$,

where:

(i) $(\gamma_n)$ is a sequence in $]0,+\infty[$ such that $\inf\gamma_n > 0$ and $\sup\gamma_n < 2\beta$;
(ii) $(\lambda_n)$ is a sequence in $]0,1]$ such that $\inf\lambda_n > 0$;
(iii) for every $n\in\mathbb{N}$, $(\alpha_{n,k})_{k\in\mathbb{N}}$ is a sequence in $\ell^2(\mathbb{N})$ such that $\sum_{n\in\mathbb{N}}\sqrt{\sum_{k\in\mathbb{N}}\alpha_{n,k}^2} < +\infty$;
(iv) $(b_n)$ is a sequence in $\mathcal{H}$ such that $\sum_{n\in\mathbb{N}}\|b_n\| < +\infty$.

Remark 4.4  Let us highlight some features of Algorithm 4.3. The set $K$ contains the indices of those coefficients of the decomposition in $(e_k)_{k\in\mathbb{N}}$ that are thresholded. The terms $\alpha_{n,k}$ and $b_n$ stand for some numerical tolerance in the implementation of $\operatorname{prox}_{\gamma_n\psi_k}$ and the computation of $\nabla\Phi(x_n)$, respectively. The parameters $\lambda_n$ and $\gamma_n$ provide added flexibility to the algorithm and can be used to improve its convergence profile. The operator $\operatorname{soft}_{\gamma_n\Omega_k}$ is given explicitly in (1.10).
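For orientation, here is a finite-dimensional sketch of Algorithm 4.3 with exact evaluations ($\alpha_{n,k}\equiv 0$, $b_n\equiv 0$), the canonical basis, $\Phi\colon x\mapsto\tfrac12\|Tx-z\|^2$, $\psi_k = \tau|\cdot|^2$ for every $k$, and $\Omega_k = [-w,w]$. All of these choices, and the constants, are assumptions made for the illustration; they satisfy the standing assumptions of Problem 1.3.

```python
import numpy as np

def prox_thresholding(T, z, K_mask, w=0.05, tau=0.01, n_iter=300):
    """Sketch of Algorithm 4.3: forward gradient step, soft thresholding on the
    indices in K (boolean K_mask), then prox of gamma*tau*|.|^2 coordinatewise."""
    beta = 1.0 / np.linalg.norm(T, 2) ** 2   # grad Phi is (1/beta)-Lipschitz
    gamma, lam = beta, 1.0                   # inf gamma_n > 0 and sup gamma_n < 2*beta
    x = np.zeros(T.shape[1])
    for _ in range(n_iter):
        y = x - gamma * T.T @ (T @ x - z)    # explicit gradient step on Phi
        yk = np.where(K_mask,
                      np.sign(y) * np.maximum(np.abs(y) - gamma * w, 0.0), y)
        p = yk / (1.0 + 2.0 * gamma * tau)   # prox of gamma*tau*|.|^2 (Example 2.7(iv))
        x = x + lam * (p - x)                # relaxed proximal step
    return x
```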

Our main convergence result can now be stated.

Theorem 4.5  Every sequence generated by Algorithm 4.3 converges strongly to a solution to Problem 1.3.

Proof. Hereafter, the arrow $\rightharpoonup$ stands for weak convergence, $(x_n)$ is a sequence generated by Algorithm 4.3, and we define

(4.5)  $(\forall k\in\mathbb{N})\quad \phi_k = \begin{cases} \psi_k + \sigma_{\Omega_k}, & \text{if } k\in K;\\ \psi_k, & \text{if } k\in L. \end{cases}$

It follows from the assumptions on $(\psi_k)_{k\in\mathbb{N}}$ in Problem 1.3 that $(\forall k\in\mathbb{N})$ $\psi_k'(0) = 0$. Therefore, for every $n$ in $\mathbb{N}$, Theorem 3.3 implies that

(4.6)  for every $k$ in $K$, $\operatorname{prox}_{\gamma_n\phi_k}$ is a proximal thresholder on $\gamma_n\Omega_k$,

while Proposition 3.5 supplies

(4.7)  $(\forall k\in K)\quad \operatorname{prox}_{\gamma_n\phi_k} = \operatorname{prox}_{\gamma_n\psi_k + \gamma_n\sigma_{\Omega_k}} = \operatorname{prox}_{\gamma_n\psi_k + \sigma_{\gamma_n\Omega_k}} = \operatorname{prox}_{\gamma_n\psi_k}\circ\operatorname{soft}_{\gamma_n\Omega_k}$.

Thus, (4.4) can be rewritten as

(4.8)  $x_{n+1} = x_n + \lambda_n\Big(\sum_{k\in\mathbb{N}}\Big(\alpha_{n,k} + \operatorname{prox}_{\gamma_n\phi_k}\langle x_n - \gamma_n(\nabla\Phi(x_n)+b_n)\mid e_k\rangle\Big) e_k - x_n\Big)$.

Now, let $\Psi$ be as in (1.8), i.e., $\Psi = \sum_{k\in\mathbb{N}}\phi_k(\langle\cdot\mid e_k\rangle)$, and set $(\forall n\in\mathbb{N})$ $a_n = \sum_{k\in\mathbb{N}}\alpha_{n,k}e_k$. Then it follows from (4.5) and Lemma 2.3 that $\Psi\in\Gamma_0(\mathcal{H})$ and that (4.8) can be rewritten as

(4.9)  $x_{n+1} = x_n + \lambda_n\Big(\operatorname{prox}_{\gamma_n\Psi}\big(x_n - \gamma_n(\nabla\Phi(x_n)+b_n)\big) + a_n - x_n\Big)$.

Consequently, since Proposition 4.1 asserts that $\Phi+\Psi$ possesses a minimizer, we derive from assumptions (i)-(iv) in Algorithm 4.3 and [14, Theorem 3.4] that

(4.10)  $(x_n)$ converges weakly to a solution $\overline{x}$ to Problem 1.3

and that

(4.11)  $\sum_n \big\|x_n - \operatorname{prox}_{\gamma_n\Psi}\big(x_n - \gamma_n\nabla\Phi(x_n)\big)\big\|^2 < +\infty$ and $\sum_n \|\nabla\Phi(x_n) - \nabla\Phi(\overline{x})\|^2 < +\infty$.

Hence, it follows from Lemma 2.2(iii) and assumption (i) in Algorithm 4.3 that

(4.12)  $\sum_n \big\|x_n - \operatorname{prox}_{\gamma_n\Psi}\big(x_n - \gamma_n\nabla\Phi(\overline{x})\big)\big\|^2 \leq \sum_n \Big(\big\|x_n - \operatorname{prox}_{\gamma_n\Psi}\big(x_n - \gamma_n\nabla\Phi(x_n)\big)\big\| + \gamma_n\|\nabla\Phi(x_n) - \nabla\Phi(\overline{x})\|\Big)^2 \leq 2\sum_n \big\|x_n - \operatorname{prox}_{\gamma_n\Psi}\big(x_n - \gamma_n\nabla\Phi(x_n)\big)\big\|^2 + 8\beta^2\sum_n \|\nabla\Phi(x_n) - \nabla\Phi(\overline{x})\|^2 < +\infty$.

Now define

(4.13)  $(\forall n\in\mathbb{N})\quad v_n = x_n - \overline{x}$ and $h_n = \overline{x} - \gamma_n\nabla\Phi(\overline{x})$.

On the one hand, we derive from (4.10) that

(4.14)  $v_n \rightharpoonup 0$

and, on the other hand, from (4.12) and Proposition 4.2 that

(4.15)  $\sum_n \big\|v_n - \operatorname{prox}_{\gamma_n\Psi}(v_n + h_n) + \operatorname{prox}_{\gamma_n\Psi}h_n\big\|^2 = \sum_n \big\|x_n - \operatorname{prox}_{\gamma_n\Psi}\big(x_n - \gamma_n\nabla\Phi(\overline{x})\big)\big\|^2 < +\infty$.

By Parseval's identity, to establish that $v_n = x_n - \overline{x} \to 0$, we must show that

(4.16)  $\sum_{k\in K}\nu_{n,k}^2 \to 0$ and $\sum_{k\in L}\nu_{n,k}^2 \to 0$, where $(\forall n\in\mathbb{N})(\forall k\in\mathbb{N})$ $\nu_{n,k} = \langle v_n\mid e_k\rangle$.

To this end, it is useful to set, for every $n\in\mathbb{N}$ and $k\in\mathbb{N}$, $\eta_{n,k} = \langle h_n\mid e_k\rangle$ and observe that (4.15), Parseval's identity, and Lemma 2.3 imply that

(4.17)  $\sum_{k\in\mathbb{N}} \big|\nu_{n,k} - \operatorname{prox}_{\gamma_n\phi_k}(\nu_{n,k}+\eta_{n,k}) + \operatorname{prox}_{\gamma_n\phi_k}\eta_{n,k}\big|^2 \to 0$.

In addition, if we set $r = 2\beta\nabla\Phi(\overline{x})$ and, for every $k\in\mathbb{N}$, $\overline{\xi}_k = \langle\overline{x}\mid e_k\rangle$ and $\rho_k = \langle r\mid e_k\rangle$, then we derive from (4.13) and assumption (i) in Algorithm 4.3 that

(4.18)  $(\forall n\in\mathbb{N})(\forall k\in\mathbb{N})\quad \eta_{n,k}^2/2 \leq \overline{\xi}_k^2 + \gamma_n^2|\langle\nabla\Phi(\overline{x})\mid e_k\rangle|^2 \leq \overline{\xi}_k^2 + \rho_k^2$.

To establish (4.16), let us first show that $\sum_{k\in K}\nu_{n,k}^2 \to 0$. Assumption (vi) in Problem 1.3 asserts that there exists $\omega\in\,]0,+\infty[$ such that

(4.19)  $[-\omega,\omega] \subset \bigcap_{k\in K}\Omega_k$.

Now set $\delta = \gamma\omega$, where $\gamma = \inf\gamma_n$. Then it follows from assumption (i) in Algorithm 4.3 that $\delta > 0$ and from (4.19) that

(4.20)  $(\forall n\in\mathbb{N})(\forall k\in K)\quad [-\delta,\delta] \subset \gamma_n\Omega_k$.

On the other hand, (4.18) yields

(4.21)  $\sup_n \sum_{k\in\mathbb{N}} \eta_{n,k}^2/2 \leq \sum_{k\in\mathbb{N}}\big(\overline{\xi}_k^2 + \rho_k^2\big) = \|\overline{x}\|^2 + \|r\|^2 < +\infty$.

Hence, there exists a finite set $K_1\subset K$ such that

(4.22)  $(\forall n\in\mathbb{N})\quad \sum_{k\in K_2} \eta_{n,k}^2 \leq \delta^2/4$, where $K_2 = K\smallsetminus K_1$.

In view of (4.14), we have $\sum_{k\in K_1}\nu_{n,k}^2 \to 0$. Let us now show that $\sum_{k\in K_2}\nu_{n,k}^2 \to 0$. Note that (4.20) and (4.22) yield

(4.23)  $(\forall n\in\mathbb{N})(\forall k\in K_2)\quad \eta_{n,k}\in[-\delta/2,\delta/2]\subset\gamma_n\Omega_k$.

Therefore, (4.6) implies that

(4.24)  $(\forall n\in\mathbb{N})(\forall k\in K_2)\quad \operatorname{prox}_{\gamma_n\phi_k}\eta_{n,k} = 0$.

Let us define

(4.25)  $(\forall n\in\mathbb{N})\quad K_{2,n} = \{k\in K_2 : \nu_{n,k}+\eta_{n,k}\in\gamma_n\Omega_k\}$.

Then, invoking (4.6) once again, we obtain

(4.26)  $(\forall n\in\mathbb{N})(\forall k\in K_{2,n})\quad \operatorname{prox}_{\gamma_n\phi_k}(\nu_{n,k}+\eta_{n,k}) = 0$,

which, combined with (4.24), yields

(4.27)  $(\forall n\in\mathbb{N})\quad \sum_{k\in K_{2,n}}\nu_{n,k}^2 = \sum_{k\in K_{2,n}}\big|\nu_{n,k} - \operatorname{prox}_{\gamma_n\phi_k}(\nu_{n,k}+\eta_{n,k}) + \operatorname{prox}_{\gamma_n\phi_k}\eta_{n,k}\big|^2 \leq \sum_{k\in\mathbb{N}}\big|\nu_{n,k} - \operatorname{prox}_{\gamma_n\phi_k}(\nu_{n,k}+\eta_{n,k}) + \operatorname{prox}_{\gamma_n\phi_k}\eta_{n,k}\big|^2$.

Consequently, it results from (4.17) that $\sum_{k\in K_{2,n}}\nu_{n,k}^2 \to 0$. Next, let us set

(4.28)  $(\forall n\in\mathbb{N})\quad K_{3,n} = K_2\smallsetminus K_{2,n}$

and show that $\sum_{k\in K_{3,n}}\nu_{n,k}^2 \to 0$. It follows from (4.28), (4.25), and (4.20) that

(4.29)  $(\forall n\in\mathbb{N})(\forall k\in K_{3,n})\quad \nu_{n,k}+\eta_{n,k}\notin\gamma_n\Omega_k\supset[-\delta,\delta]$.

Hence, appealing to (4.23), we obtain

(4.30)  $(\forall n\in\mathbb{N})(\forall k\in K_{3,n})\quad |\nu_{n,k}+\eta_{n,k}| > \delta \geq |\eta_{n,k}| + \delta/2$.

Now take $n\in\mathbb{N}$ and $k\in K_{3,n}$. We derive from (4.24) and Lemma 2.2(ii) that

(4.31)  $\big|\nu_{n,k} - \operatorname{prox}_{\gamma_n\phi_k}(\nu_{n,k}+\eta_{n,k}) + \operatorname{prox}_{\gamma_n\phi_k}\eta_{n,k}\big| = \big|(\nu_{n,k}+\eta_{n,k}) - \operatorname{prox}_{\gamma_n\phi_k}(\nu_{n,k}+\eta_{n,k}) - \eta_{n,k}\big| = \big|\operatorname{prox}_{(\gamma_n\phi_k)^*}(\nu_{n,k}+\eta_{n,k}) - \eta_{n,k}\big|$.

However, it results from (4.20), (4.6), and Proposition 3.2 that $\operatorname{prox}_{(\gamma_n\phi_k)^*}(\pm\delta) = \pm\delta$. We consider two cases. First, if $\nu_{n,k}+\eta_{n,k} \geq 0$ then, since $\operatorname{prox}_{(\gamma_n\phi_k)^*}$ is nondecreasing by Proposition 2.4, (4.30) yields $\nu_{n,k}+\eta_{n,k} \geq \delta$ and

(4.32)  $\operatorname{prox}_{(\gamma_n\phi_k)^*}(\nu_{n,k}+\eta_{n,k}) \geq \operatorname{prox}_{(\gamma_n\phi_k)^*}(\delta) = \delta \geq \eta_{n,k} + \delta/2$.

Likewise, if $\nu_{n,k}+\eta_{n,k} \leq 0$, then (4.30) yields $\nu_{n,k}+\eta_{n,k} \leq -\delta$ and

(4.33)  $\operatorname{prox}_{(\gamma_n\phi_k)^*}(\nu_{n,k}+\eta_{n,k}) \leq \operatorname{prox}_{(\gamma_n\phi_k)^*}(-\delta) = -\delta \leq \eta_{n,k} - \delta/2$.

Altogether, we derive from (4.32) and (4.33) that

(4.34)  $(\forall n\in\mathbb{N})(\forall k\in K_{3,n})\quad \big|\operatorname{prox}_{(\gamma_n\phi_k)^*}(\nu_{n,k}+\eta_{n,k}) - \eta_{n,k}\big| \geq \delta/2$.

In turn, (4.31) yields

(4.35)  $(\forall n\in\mathbb{N})\quad \sum_{k\in K_{3,n}}\big|\nu_{n,k} - \operatorname{prox}_{\gamma_n\phi_k}(\nu_{n,k}+\eta_{n,k}) + \operatorname{prox}_{\gamma_n\phi_k}\eta_{n,k}\big|^2 \geq \operatorname{card}(K_{3,n})\,\delta^2/4$.

However, it follows from (4.17) that, for $n$ sufficiently large,

(4.36)  $\sum_{k\in\mathbb{N}}\big|\nu_{n,k} - \operatorname{prox}_{\gamma_n\phi_k}(\nu_{n,k}+\eta_{n,k}) + \operatorname{prox}_{\gamma_n\phi_k}\eta_{n,k}\big|^2 \leq \delta^2/5$.

Thus, for $n$ sufficiently large, $K_{3,n} = \varnothing$. We conclude from this first part of the proof that $\sum_{k\in K}\nu_{n,k}^2 \to 0$.

In order to obtain (4.16), we must now show that $\sum_{k\in L}\nu_{n,k}^2 \to 0$. We infer from (4.14) that $(v_n)$ is bounded, hence

(4.37)  $\sup_n\,\sup_{k\in L} \nu_{n,k}^2 \leq \sup_n \|v_n\|^2 \leq \rho^2/4$,

for some $\rho\in\,]0,+\infty[$.

Now define

(4.38)  $L_1 = \{k\in L : (\exists n\in\mathbb{N})\ |\eta_{n,k}| \geq \rho/2\}$.

Then we derive from (4.18) that

(4.39)  $(\forall k\in L_1)(\exists n\in\mathbb{N})\quad \overline{\xi}_k^2 + \rho_k^2 \geq \eta_{n,k}^2/2 \geq \rho^2/8$.

Consequently, we have

(4.40)  $+\infty > \|\overline{x}\|^2 + \|r\|^2 \geq \sum_{k\in L_1}\big(\overline{\xi}_k^2 + \rho_k^2\big) \geq \operatorname{card}(L_1)\,\rho^2/8$

and therefore $\operatorname{card}(L_1) < +\infty$. In turn, it results from (4.14) that $\sum_{k\in L_1}\nu_{n,k}^2 \to 0$. Hence, to obtain $\sum_{k\in L}\nu_{n,k}^2 \to 0$, it remains to show that $\sum_{k\in L_2}\nu_{n,k}^2 \to 0$, where $L_2 = L\smallsetminus L_1$. In view of (4.38) and (4.37), we have

(4.41)  $(\forall n\in\mathbb{N})(\forall k\in L_2)\quad |\eta_{n,k}| < \rho/2$ and $|\nu_{n,k}+\eta_{n,k}| \leq |\nu_{n,k}| + |\eta_{n,k}| < \rho$.

On the other hand, assumption (iv) in Problem 1.3 asserts that there exists $\theta\in\,]0,+\infty[$ such that

(4.42)  $\inf_n\,\inf_{k\in L_2}\,\inf_{0<|\xi|\leq\rho}(\gamma_n\psi_k)''(\xi) \geq \gamma\,\inf_{k\in L}\,\inf_{0<|\xi|\leq\rho}\psi_k''(\xi) \geq \gamma\theta$.

It therefore follows from assumptions (ii) and (iii) in Problem 1.3, Proposition 2.8, and (4.5) that

(4.43)  $(\forall n\in\mathbb{N})(\forall k\in L_2)\quad |\nu_{n,k}| \leq \big|\nu_{n,k} - \operatorname{prox}_{\gamma_n\psi_k}(\nu_{n,k}+\eta_{n,k}) + \operatorname{prox}_{\gamma_n\psi_k}\eta_{n,k}\big| + \big|\operatorname{prox}_{\gamma_n\psi_k}(\nu_{n,k}+\eta_{n,k}) - \operatorname{prox}_{\gamma_n\psi_k}\eta_{n,k}\big| \leq \big|\nu_{n,k} - \operatorname{prox}_{\gamma_n\psi_k}(\nu_{n,k}+\eta_{n,k}) + \operatorname{prox}_{\gamma_n\psi_k}\eta_{n,k}\big| + \dfrac{|\nu_{n,k}|}{1+\gamma\theta} = \big|\nu_{n,k} - \operatorname{prox}_{\gamma_n\phi_k}(\nu_{n,k}+\eta_{n,k}) + \operatorname{prox}_{\gamma_n\phi_k}\eta_{n,k}\big| + \dfrac{|\nu_{n,k}|}{1+\gamma\theta}$.

Consequently, upon setting $\mu = 1 + 1/(\gamma\theta)$, we obtain

(4.44)  $(\forall n\in\mathbb{N})(\forall k\in L_2)\quad |\nu_{n,k}| \leq \mu\,\big|\nu_{n,k} - \operatorname{prox}_{\gamma_n\phi_k}(\nu_{n,k}+\eta_{n,k}) + \operatorname{prox}_{\gamma_n\phi_k}\eta_{n,k}\big|$.

In turn,

(4.45)  $(\forall n\in\mathbb{N})\quad \sum_{k\in L_2}\nu_{n,k}^2 \leq \mu^2\sum_{k\in L_2}\big|\nu_{n,k} - \operatorname{prox}_{\gamma_n\phi_k}(\nu_{n,k}+\eta_{n,k}) + \operatorname{prox}_{\gamma_n\phi_k}\eta_{n,k}\big|^2$.

Hence, (4.17) forces $\sum_{k\in L_2}\nu_{n,k}^2 \to 0$, as desired.

Remark 4.6  An important aspect of Theorem 4.5 is that it provides a strong convergence result. Indeed, in general, only weak convergence can be claimed for forward-backward methods [14, 36] (see [3], [4], [14, Remark 5.2], and [23] for explicit constructions in which strong convergence fails). In addition, the standard sufficient conditions for strong convergence in this type of algorithm (see [11, Remark 6.6] and [14, Theorem 3.4(iv)]) are not satisfied in Problem 1.3. Further aspects of the relevance of strong convergence in proximal methods are discussed in [23, 24].

Remark 4.7  Let $T$ be a nonzero bounded linear operator from $\mathcal{H}$ to a real Hilbert space $\mathcal{G}$, let $z\in\mathcal{G}$, and let $\tau$ and $\omega$ be in $]0,+\infty[$. Specializing Theorem 4.5 to the case when $\Phi\colon x\mapsto\tfrac12\|Tx-z\|^2$ and either

(4.46)  $K = \varnothing$ and $(\forall k\in L)$ $\psi_k = \tau_k|\cdot|^p$, where $p\in\,]1,2]$ and $\tau_k\in[\tau,+\infty[$,

or

(4.47)  $L = \varnothing$ and $(\forall k\in K)$ $\psi_k = 0$ and $\Omega_k = [-\omega_k,\omega_k]$, where $\omega_k\in[\omega,+\infty[$,

yields [14, Corollary 5.9]. If we further impose $\lambda_n \equiv 1$, $\|T\| < 1$, $\gamma_n \equiv 1$, $\alpha_{n,k} \equiv 0$, and $b_n \equiv 0$, we obtain [16, Theorem 3.1].

5 Applications to sparse signal recovery

5.1 A special case of Problem 1.3

In (1.3), a single observation of the original signal $\overline{x}$ is available. In certain problems, $q$ such noisy linear observations are available, say $z_i = T_i\overline{x} + v_i$ ($1\leq i\leq q$), which leads to the weighted least-squares data fidelity term $x\mapsto\tfrac12\sum_{i=1}^q \mu_i\|T_ix - z_i\|^2$; see [10] and the references therein. Furthermore, signal recovery problems are typically accompanied by convex constraints that confine $\overline{x}$ to some closed convex subsets $(S_i)_{1\leq i\leq m}$ of $\mathcal{H}$. These constraints can be aggregated via the cost function $x\mapsto\tfrac12\sum_{i=1}^m \nu_i d_{S_i}^2(x)$; see [9, 26] and the references therein. On the other hand, a common approach to penalizing the coefficients of an orthonormal basis decomposition is to use power functions, e.g., [1, 7, 16]. Moreover, we aim at promoting sparsity of a solution $x\in\mathcal{H}$ with respect to $(e_k)_{k\in\mathbb{N}}$ in the sense that, for every $k$ in $K$, we wish to set the coefficient $\langle x\mid e_k\rangle$ to $0$ if it lies in the interval $\Omega_k$. Altogether, these considerations suggest the following formulation.

Problem 5.1  For every $i\in\{1,\dots,q\}$, let $\mu_i\in\,]0,+\infty[$, let $T_i$ be a nonzero bounded linear operator from $\mathcal{H}$ to a real Hilbert space $\mathcal{G}_i$, and let $z_i\in\mathcal{G}_i$. For every $i\in\{1,\dots,m\}$, let $\nu_i\in\,]0,+\infty[$ and let $S_i$ be a nonempty closed and convex subset of $\mathcal{H}$. Furthermore, let $(p_{k,l})_{0\leq l\leq L_k}$ be distinct real numbers in $]1,+\infty[$, let $(\tau_{k,l})_{0\leq l\leq L_k}$ be real numbers in $[0,+\infty[$, and let $l_k\in\{0,\dots,L_k\}$ satisfy $p_{k,l_k} = \min_{0\leq l\leq L_k} p_{k,l}$, where $(L_k)_{k\in\mathbb{N}}$ is a sequence in $\mathbb{N}$. Finally, let $K\subset\mathbb{N}$, let $L = \mathbb{N}\smallsetminus K$, and let $(\Omega_k)_{k\in K}$ be a sequence of closed intervals in $\mathbb{R}$. The objective is to

(5.1)  minimize over $x\in\mathcal{H}$:  $\dfrac12\sum_{i=1}^q \mu_i\|T_ix - z_i\|^2 + \dfrac12\sum_{i=1}^m \nu_i d_{S_i}^2(x) + \sum_{k\in\mathbb{N}}\sum_{l=0}^{L_k}\tau_{k,l}|\langle x\mid e_k\rangle|^{p_{k,l}} + \sum_{k\in K}\sigma_{\Omega_k}(\langle x\mid e_k\rangle)$,

under the following assumptions:

(i) $\inf_{k\in L}\tau_{k,l_k} > 0$;
(ii) $\inf_{k\in L} p_{k,l_k} > 1$;
(iii) $\sup_{k\in L} p_{k,l_k} \leq 2$;

(iv) $0\in\operatorname{int}\bigcap_{k\in K}\Omega_k$.

Proposition 5.2  Problem 5.1 is a special case of Problem 1.3.

Proof. First, we observe that (5.1) corresponds to (1.6) where

(5.2)  $\Phi\colon x\mapsto \dfrac12\sum_{i=1}^q \mu_i\|T_ix - z_i\|^2 + \dfrac12\sum_{i=1}^m \nu_i d_{S_i}^2(x)$ and $(\forall k\in\mathbb{N})$ $\psi_k\colon \xi\mapsto \sum_{l=0}^{L_k}\tau_{k,l}|\xi|^{p_{k,l}}$.

Hence, $\Phi$ is a finite continuous convex function with Fréchet gradient

(5.3)  $\nabla\Phi\colon x\mapsto \sum_{i=1}^q \mu_i T_i^*(T_ix - z_i) + \sum_{i=1}^m \nu_i(x - P_ix)$,

where $P_i$ is the projection operator onto $S_i$. Therefore, since the operators $(\operatorname{Id}-P_i)_{1\leq i\leq m}$ are nonexpansive, it follows that assumption (i) in Problem 1.3 is satisfied with $1/\beta = \sum_{i=1}^q \mu_i\|T_i\|^2 + \sum_{i=1}^m \nu_i$. Moreover, the functions $(\psi_k)_{k\in\mathbb{N}}$ are in $\Gamma_0(\mathbb{R})$ and satisfy assumptions (ii) and (iii) in Problem 1.3. Let us now turn to assumption (iv) in Problem 1.3. Fix $\rho\in\,]0,+\infty[$ and set $\tau = \inf_{k\in L}\tau_{k,l_k}$, $p = \inf_{k\in L}p_{k,l_k}$, and $\theta = \tau p(p-1)\min\{1, 1/\rho\}$. Then it follows from (i), (ii), and (iii) that $\theta > 0$ and that

(5.4)  $\inf_{k\in L}\,\inf_{0<|\xi|\leq\rho}\psi_k''(\xi) = \inf_{k\in L}\,\inf_{0<|\xi|\leq\rho}\sum_{l=0}^{L_k}\tau_{k,l}\,p_{k,l}(p_{k,l}-1)|\xi|^{p_{k,l}-2} \geq \inf_{k\in L}\ \tau_{k,l_k}p_{k,l_k}(p_{k,l_k}-1)\inf_{0<\xi\leq\rho}\xi^{p_{k,l_k}-2} \geq \tau p(p-1)\,\inf_{k\in L}\,\inf_{0<\xi\leq\rho}\xi^{p_{k,l_k}-2} \geq \theta$,

which shows that (1.7) is satisfied. It remains to check assumption (v) in Problem 1.3. To this end, let $\|\cdot\|_L$ denote the standard norm on $\ell^2(L)$, take $x = (\xi_k)_{k\in L}\in\ell^2(L)$ such that $\|x\|_L \geq 1$, and set $(\eta_k)_{k\in L} = x/\|x\|_L$. Then, for every $k\in L$, $|\eta_k|\leq 1$ and, since $p_{k,l_k}\in\,]1,2]$, we have $|\eta_k|^{p_{k,l_k}} \geq \eta_k^2$. Consequently,

(5.5)  $\Upsilon_L(x) = \sum_{k\in L}\sum_{l=0}^{L_k}\tau_{k,l}|\xi_k|^{p_{k,l}} \geq \sum_{k\in L}\tau_{k,l_k}|\xi_k|^{p_{k,l_k}} \geq \tau\sum_{k\in L}\|x\|_L^{p_{k,l_k}}|\eta_k|^{p_{k,l_k}} \geq \tau\sum_{k\in L}\|x\|_L^{p_{k,l_k}}\eta_k^2 \geq \tau\|x\|_L\sum_{k\in L}\eta_k^2 = \tau\|x\|_L$.

We conclude that $\Upsilon_L(x)\to+\infty$ as $\|x\|_L\to+\infty$.
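The gradient (5.3) is the only problem-specific ingredient Algorithm 4.3 needs on the smooth side. Below is a sketch assuming the projectors $P_i$ onto the sets $S_i$ are available as callables (closed-form or computed numerically); all names are illustrative.

```python
import numpy as np

def grad_Phi(x, Ts, zs, mus, projs, nus):
    """grad Phi(x) = sum_i mu_i T_i'(T_i x - z_i) + sum_i nu_i (x - P_i x), as in (5.3)."""
    g = np.zeros_like(x)
    for T, z, mu in zip(Ts, zs, mus):
        g += mu * T.T @ (T @ x - z)       # weighted least-squares terms
    for P, nu in zip(projs, nus):
        g += nu * (x - P(x))              # squared-distance (constraint) terms
    return g

# 1/beta = sum_i mu_i ||T_i||^2 + sum_i nu_i, so any step gamma < 2*beta is admissible.
```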

Figure 4: Original signal (first example).

5.2 First example

Our first example concerns the simulated X-ray fluorescence spectrum $\overline{x}$ displayed in Fig. 4, which is often used to test restoration methods. The underlying Hilbert space is $\mathcal{H} = \ell^2(\mathbb{N})$. The measured signal $z$ shown in Fig. 5 has undergone blurring by the limited resolution of the spectrometer and further corruption by addition of noise. This process is modeled by $z = T\overline{x} + v$, where $T\colon\mathcal{H}\to\mathcal{H}$ is the operator of convolution with a truncated Gaussian kernel. The noise samples are uncorrelated and drawn from a Gaussian population with mean zero and standard deviation 0.5. The original signal $\overline{x}$ has support $\{0,\dots,N-1\}$ ($N = 1024$), takes on nonnegative values, and possesses a sparse structure. These features can be promoted in Problem 5.1 by letting $(e_k)_{k\in\mathbb{N}}$ be the canonical orthonormal basis of $\mathcal{H}$, and setting $K = \mathbb{N}$, $\tau_{k,l} \equiv 0$, and

(5.6)  $(\forall k\in\mathbb{N})\quad \Omega_k = \begin{cases} ]-\infty,\omega], & \text{if } 0\leq k\leq N-1;\\ \mathbb{R}, & \text{otherwise,} \end{cases}$

Figure 5: Degraded signal (first example).

Figure 6: Signal restored by the proposed algorithm (first example).

where the one-sided thresholding level is set to $\omega = 0.01$. On the other hand, using the methodology described in [35], the above information about the noise can be used to construct the constraint sets $S_1 = \{x\in\mathcal{H} : \|Tx - z\| \leq \delta_1\}$ and $S_2 = \bigcap_{l=1}^{N-1}\{x\in\mathcal{H} : |\widehat{Tx}(l/N) - \widehat{z}(l/N)| \leq \delta_2\}$, where $\widehat{a}\colon \nu\mapsto\sum_{k=0}^{+\infty}\langle a\mid e_k\rangle \exp(-\imath 2\pi k\nu)$ designates the Fourier transform of $a\in\mathcal{H}$. The bounds $\delta_1$ and $\delta_2$ have been determined so as to guarantee that $\overline{x}$ lies in $S_1$ and in $S_2$ with a 99 percent confidence level (see [13] for details). Finally, we set $q = 0$ and $\nu_1 = \nu_2 = 1$ in (5.1) (the computation of the projectors $P_1$ and $P_2$ required in (5.3) is detailed in [35]). The solution produced by Algorithm 4.3 is shown in Fig. 6. It is of much better quality than the restorations obtained in [12] and [35] via alternative methods.

5.3 Second example

We provide a wavelet deconvolution example in $\mathcal{H} = L^2(\mathbb{R})$. The original signal $\overline{x}$ is the classical "bumps" signal [38] displayed in Fig. 7. The degraded version shown in Fig. 8 is $z = T\overline{x} + v$, where $T$ models convolution with a uniform kernel and $v$ is a realization of a zero-mean white

Figure 7: Original signal (second example).

Figure 8: Degraded signal (second example).

Gaussian noise. The basis $(e_k)_{k\in\mathbb{N}}$ is an orthonormal symlet wavelet basis with 8 vanishing moments [15]. Such wavelet bases are known to provide sparse representations for a wide class of signals [20], including this standard test signal. Note that there exists a strong connection between Problem 5.1 and maximum a posteriori techniques for estimating $\overline{x}$ in the presence of white Gaussian noise. In particular, setting $q = 1$, $m = 0$, $K = \varnothing$, and $L_k \equiv 0$, and using suitably subband-adapted values of $p_{k,0}$ and $\tau_{k,0}$, amounts to fitting an appropriate generalized Gaussian prior distribution to the wavelet coefficients in each subband [1]. Such a statistical modeling is commonly used in wavelet-based estimation, where values of $p_{k,0}$ close to 2 may provide a good model at coarse resolution levels, whereas values close to 1 should preferably be used at finer resolutions. The setting of the more general model we adopt here is the following: in Problem 5.1, $K$ and $L$ are the index sets of the detail and approximation coefficients [27], respectively, and

$(\forall k\in K)$  $\Omega_k = [-0.003, 0.003]$, $L_k = 1$, $(p_{k,0}, p_{k,1}) = (2, 4)$, $(\tau_{k,0}, \tau_{k,1}) = (0.005, 0.0001)$;
$(\forall k\in L)$  $L_k = 0$ and $p_{k,0} = 2$, with a fixed weight $\tau_{k,0}$.

Figure 9: Signal restored by the proposed algorithm (second example).

Figure 10: Signal restored by solving (1.3) (second example).

In addition, we set $q = 1$, $\mu_1 = 1$, $m = 1$, $\nu_1 = 1$, and $S_1 = \{x\in\mathcal{H} : x \geq 0\}$ (nonnegativity constraint). The solution $x$ produced by Algorithm 4.3 is shown in Fig. 9. For comparison, the signal $\widetilde{x}$ restored via (1.3) with algorithm (1.4) is displayed in Fig. 10. In Problem 5.1, this corresponds to $q = 1$, $m = 0$, $K = \mathbb{N}$, $\tau_{k,l} \equiv 0$, with $\omega_k \equiv 1.9$ for the detail coefficients and a constant $\omega_k$ for the approximation coefficients. This setup yields a worse estimation error, $\|\overline{x} - \widetilde{x}\| = 4.4$, than that achieved by the proposed algorithm. The above results have been obtained with a discrete implementation of the wavelet decomposition over 4 resolution levels using 2048 signal samples [27].

References

[1] A. Antoniadis, D. Leporini, and J.-C. Pesquet, Wavelet thresholding for some classes of non-Gaussian noise, Statist. Neerlandica, vol. 56, 2002.

[2] S. Bacchelli and S. Papi, Filtered wavelet thresholding methods, J. Comput. Appl. Math., vol. 164/165, pp. 39-52, 2004.

[3] H. H. Bauschke, J. V. Burke, F. R. Deutsch, H. S. Hundal, and J. D. Vanderwerff, A new proximal point iteration that converges weakly but not in norm, Proc. Amer. Math. Soc., vol. 133, 2005.

[4] H. H. Bauschke, E. Matoušková, and S. Reich, Projection and proximal point methods: Convergence results and counterexamples, Nonlinear Anal., vol. 56, 2004.

[5] J. Bect, L. Blanc-Féraud, G. Aubert, and A. Chambolle, A ℓ1-unified variational framework for image restoration, in Proc. Eighth Europ. Conf. Comput. Vision, Prague, 2004, T. Pajdla and J. Matas, eds., Lecture Notes in Comput. Sci. 3024, Springer-Verlag, New York, 2004, pp. 1-13.

[6] R. E. Bruck and S. Reich, Nonexpansive projections and resolvents of accretive operators in Banach spaces, Houston J. Math., vol. 3, pp. 459-470, 1977.

[7] A. Chambolle, R. A. DeVore, N.-Y. Lee, and B. J. Lucier, Nonlinear wavelet image processing: Variational problems, compression, and noise removal through wavelet shrinkage, IEEE Trans. Image Process., vol. 7, pp. 319-335, 1998.

[8] S. Chen, D. Donoho, and M. Saunders, Atomic decomposition by basis pursuit, SIAM Rev., vol. 43, pp. 129-159, 2001.

[9] P. L. Combettes, Inconsistent signal feasibility problems: Least-squares solutions in a product space, IEEE Trans. Signal Process., vol. 42, pp. 2955-2966, 1994.

[10] P. L. Combettes, A block-iterative surrogate constraint splitting method for quadratic signal recovery, IEEE Trans. Signal Process., vol. 51, pp. 1771-1782, 2003.

[11] P. L. Combettes, Solving monotone inclusions via compositions of nonexpansive averaged operators, Optimization, vol. 53, pp. 475-504, 2004.

[12] P. L. Combettes and H. J. Trussell, Method of successive projections for finding a common point of sets in metric spaces, J. Optim. Theory Appl., vol. 67, pp. 487-507, 1990.

[13] P. L. Combettes and H. J. Trussell, The use of noise properties in set theoretic estimation, IEEE Trans. Signal Process., vol. 39, pp. 1630-1641, 1991.

[14] P. L. Combettes and V. R. Wajs, Signal recovery by proximal forward-backward splitting, Multiscale Model. Simul., vol. 4, pp. 1168-1200, 2005.

[15] I. Daubechies, Ten Lectures on Wavelets, SIAM, Philadelphia, PA, 1992.

[16] I. Daubechies, M. Defrise, and C. De Mol, An iterative thresholding algorithm for linear inverse problems with a sparsity constraint, Comm. Pure Appl. Math., vol. 57, pp. 1413-1457, 2004.

[17] I. Daubechies and G. Teschke, Variational image restoration by means of wavelets: Simultaneous decomposition, deblurring, and denoising, Appl. Comput. Harmon. Anal., vol. 19, pp. 1-16, 2005.

[18] C. De Mol and M. Defrise, A note on wavelet-based inversion algorithms, Contemp. Math., vol. 313, 2002.

[19] D. L. Donoho and I. M. Johnstone, Ideal spatial adaptation by wavelet shrinkage, Biometrika, vol. 81, pp. 425-455, 1994.

[20] D. L. Donoho and I. M. Johnstone, Adapting to unknown smoothness via wavelet shrinkage, J. Amer. Statist. Assoc., vol. 90, pp. 1200-1224, 1995.

[21] D. L. Donoho, I. M. Johnstone, G. Kerkyacharian, and D. Picard, Wavelet shrinkage: Asymptopia?, J. R. Statist. Soc. B, vol. 57, pp. 301-369, 1995.

[22] M. A. T. Figueiredo and R. D. Nowak, An EM algorithm for wavelet-based image restoration, IEEE Trans. Image Process., vol. 12, pp. 906-916, 2003.

[23] O. Güler, On the convergence of the proximal point algorithm for convex minimization, SIAM J. Control Optim., vol. 29, pp. 403-419, 1991.

[24] O. Güler, Convergence rate estimates for the gradient differential inclusion, Optim. Methods Softw., vol. 20, 2005.

[25] P. J. Huber, Robust regression: Asymptotics, conjectures, and Monte Carlo, Ann. Statist., vol. 1, pp. 799-821, 1973.

[26] T. Kotzer, N. Cohen, and J. Shamir, A projection-based algorithm for consistent and inconsistent constraints, SIAM J. Optim., vol. 7, pp. 527-546, 1997.

[27] S. G. Mallat, A Wavelet Tour of Signal Processing, 2nd ed., Academic Press, New York, 1998.

[28] J.-J. Moreau, Fonctions convexes duales et points proximaux dans un espace hilbertien, C. R. Acad. Sci. Paris Sér. A Math., vol. 255, pp. 2897-2899, 1962.

[29] J.-J. Moreau, Proximité et dualité dans un espace hilbertien, Bull. Soc. Math. France, vol. 93, pp. 273-299, 1965.

[30] R. T. Rockafellar, Convex Analysis, Princeton University Press, Princeton, NJ, 1970.

[31] G. Steidl, J. Weickert, T. Brox, P. Mrázek, and M. Welk, On the equivalence of soft wavelet shrinkage, total variation diffusion, total variation regularization, and SIDEs, SIAM J. Numer. Anal., vol. 42, pp. 686-713, 2004.

[32] T. Tao and B. Vidakovic, Almost everywhere behavior of general wavelet shrinkage operators, Appl. Comput. Harmon. Anal., vol. 9, pp. 72-82, 2000.

[33] V. N. Temlyakov, Universal bases and greedy algorithms for anisotropic function classes, Constr. Approx., vol. 18, 2002.

[34] J. A. Tropp, Just relax: Convex programming methods for identifying sparse signals in noise, IEEE Trans. Inform. Theory, vol. 52, pp. 1030-1051, 2006.

[35] H. J. Trussell and M. R. Civanlar, The feasible solution in signal restoration, IEEE Trans. Acoust., Speech, Signal Process., vol. 32, pp. 201-212, 1984.

[36] P. Tseng, Applications of a splitting algorithm to decomposition in convex programming and variational inequalities, SIAM J. Control Optim., vol. 29, pp. 119-138, 1991.

[37] B. Vidakovic, Nonlinear wavelet shrinkage with Bayes rules and Bayes factors, J. Amer. Statist. Assoc., vol. 93, pp. 173-179, 1998.

[38] WaveLab Toolbox, Stanford University.

[39] C. Zălinescu, Convex Analysis in General Vector Spaces, World Scientific, River Edge, NJ, 2002.


More information

On the acceleration of the double smoothing technique for unconstrained convex optimization problems

On the acceleration of the double smoothing technique for unconstrained convex optimization problems On the acceleration of the double smoothing technique for unconstrained convex optimization problems Radu Ioan Boţ Christopher Hendrich October 10, 01 Abstract. In this article we investigate the possibilities

More information

HAIYUN ZHOU, RAVI P. AGARWAL, YEOL JE CHO, AND YONG SOO KIM

HAIYUN ZHOU, RAVI P. AGARWAL, YEOL JE CHO, AND YONG SOO KIM Georgian Mathematical Journal Volume 9 (2002), Number 3, 591 600 NONEXPANSIVE MAPPINGS AND ITERATIVE METHODS IN UNIFORMLY CONVEX BANACH SPACES HAIYUN ZHOU, RAVI P. AGARWAL, YEOL JE CHO, AND YONG SOO KIM

More information

ITERATIVE SCHEMES FOR APPROXIMATING SOLUTIONS OF ACCRETIVE OPERATORS IN BANACH SPACES SHOJI KAMIMURA AND WATARU TAKAHASHI. Received December 14, 1999

ITERATIVE SCHEMES FOR APPROXIMATING SOLUTIONS OF ACCRETIVE OPERATORS IN BANACH SPACES SHOJI KAMIMURA AND WATARU TAKAHASHI. Received December 14, 1999 Scientiae Mathematicae Vol. 3, No. 1(2000), 107 115 107 ITERATIVE SCHEMES FOR APPROXIMATING SOLUTIONS OF ACCRETIVE OPERATORS IN BANACH SPACES SHOJI KAMIMURA AND WATARU TAKAHASHI Received December 14, 1999

More information

On Total Convexity, Bregman Projections and Stability in Banach Spaces

On Total Convexity, Bregman Projections and Stability in Banach Spaces Journal of Convex Analysis Volume 11 (2004), No. 1, 1 16 On Total Convexity, Bregman Projections and Stability in Banach Spaces Elena Resmerita Department of Mathematics, University of Haifa, 31905 Haifa,

More information

Subdifferential representation of convex functions: refinements and applications

Subdifferential representation of convex functions: refinements and applications Subdifferential representation of convex functions: refinements and applications Joël Benoist & Aris Daniilidis Abstract Every lower semicontinuous convex function can be represented through its subdifferential

More information

Viscosity Iterative Approximating the Common Fixed Points of Non-expansive Semigroups in Banach Spaces

Viscosity Iterative Approximating the Common Fixed Points of Non-expansive Semigroups in Banach Spaces Viscosity Iterative Approximating the Common Fixed Points of Non-expansive Semigroups in Banach Spaces YUAN-HENG WANG Zhejiang Normal University Department of Mathematics Yingbing Road 688, 321004 Jinhua

More information

Minimizing Isotropic Total Variation without Subiterations

Minimizing Isotropic Total Variation without Subiterations MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com Minimizing Isotropic Total Variation without Subiterations Kamilov, U. S. TR206-09 August 206 Abstract Total variation (TV) is one of the most

More information

Iterative Convex Optimization Algorithms; Part Two: Without the Baillon Haddad Theorem

Iterative Convex Optimization Algorithms; Part Two: Without the Baillon Haddad Theorem Iterative Convex Optimization Algorithms; Part Two: Without the Baillon Haddad Theorem Charles L. Byrne February 24, 2015 Abstract Let C X be a nonempty subset of an arbitrary set X and f : X R. The problem

More information

arxiv: v1 [math.oc] 12 Mar 2013

arxiv: v1 [math.oc] 12 Mar 2013 On the convergence rate improvement of a primal-dual splitting algorithm for solving monotone inclusion problems arxiv:303.875v [math.oc] Mar 03 Radu Ioan Boţ Ernö Robert Csetnek André Heinrich February

More information

ON A HYBRID PROXIMAL POINT ALGORITHM IN BANACH SPACES

ON A HYBRID PROXIMAL POINT ALGORITHM IN BANACH SPACES U.P.B. Sci. Bull., Series A, Vol. 80, Iss. 3, 2018 ISSN 1223-7027 ON A HYBRID PROXIMAL POINT ALGORITHM IN BANACH SPACES Vahid Dadashi 1 In this paper, we introduce a hybrid projection algorithm for a countable

More information

Proximal methods. S. Villa. October 7, 2014

Proximal methods. S. Villa. October 7, 2014 Proximal methods S. Villa October 7, 2014 1 Review of the basics Often machine learning problems require the solution of minimization problems. For instance, the ERM algorithm requires to solve a problem

More information

Iterative common solutions of fixed point and variational inequality problems

Iterative common solutions of fixed point and variational inequality problems Available online at www.tjnsa.com J. Nonlinear Sci. Appl. 9 (2016), 1882 1890 Research Article Iterative common solutions of fixed point and variational inequality problems Yunpeng Zhang a, Qing Yuan b,

More information

Some Properties of the Augmented Lagrangian in Cone Constrained Optimization

Some Properties of the Augmented Lagrangian in Cone Constrained Optimization MATHEMATICS OF OPERATIONS RESEARCH Vol. 29, No. 3, August 2004, pp. 479 491 issn 0364-765X eissn 1526-5471 04 2903 0479 informs doi 10.1287/moor.1040.0103 2004 INFORMS Some Properties of the Augmented

More information

THROUGHOUT this paper, we let C be a nonempty

THROUGHOUT this paper, we let C be a nonempty Strong Convergence Theorems of Multivalued Nonexpansive Mappings and Maximal Monotone Operators in Banach Spaces Kriengsak Wattanawitoon, Uamporn Witthayarat and Poom Kumam Abstract In this paper, we prove

More information

A Unified Approach to Proximal Algorithms using Bregman Distance

A Unified Approach to Proximal Algorithms using Bregman Distance A Unified Approach to Proximal Algorithms using Bregman Distance Yi Zhou a,, Yingbin Liang a, Lixin Shen b a Department of Electrical Engineering and Computer Science, Syracuse University b Department

More information

PARALLEL SUBGRADIENT METHOD FOR NONSMOOTH CONVEX OPTIMIZATION WITH A SIMPLE CONSTRAINT

PARALLEL SUBGRADIENT METHOD FOR NONSMOOTH CONVEX OPTIMIZATION WITH A SIMPLE CONSTRAINT Linear and Nonlinear Analysis Volume 1, Number 1, 2015, 1 PARALLEL SUBGRADIENT METHOD FOR NONSMOOTH CONVEX OPTIMIZATION WITH A SIMPLE CONSTRAINT KAZUHIRO HISHINUMA AND HIDEAKI IIDUKA Abstract. In this

More information

On the Midpoint Method for Solving Generalized Equations

On the Midpoint Method for Solving Generalized Equations Punjab University Journal of Mathematics (ISSN 1016-56) Vol. 40 (008) pp. 63-70 On the Midpoint Method for Solving Generalized Equations Ioannis K. Argyros Cameron University Department of Mathematics

More information

WEAK CONVERGENCE THEOREMS FOR EQUILIBRIUM PROBLEMS WITH NONLINEAR OPERATORS IN HILBERT SPACES

WEAK CONVERGENCE THEOREMS FOR EQUILIBRIUM PROBLEMS WITH NONLINEAR OPERATORS IN HILBERT SPACES Fixed Point Theory, 12(2011), No. 2, 309-320 http://www.math.ubbcluj.ro/ nodeacj/sfptcj.html WEAK CONVERGENCE THEOREMS FOR EQUILIBRIUM PROBLEMS WITH NONLINEAR OPERATORS IN HILBERT SPACES S. DHOMPONGSA,

More information

PROXIMAL POINT ALGORITHMS INVOLVING FIXED POINT OF NONSPREADING-TYPE MULTIVALUED MAPPINGS IN HILBERT SPACES

PROXIMAL POINT ALGORITHMS INVOLVING FIXED POINT OF NONSPREADING-TYPE MULTIVALUED MAPPINGS IN HILBERT SPACES PROXIMAL POINT ALGORITHMS INVOLVING FIXED POINT OF NONSPREADING-TYPE MULTIVALUED MAPPINGS IN HILBERT SPACES Shih-sen Chang 1, Ding Ping Wu 2, Lin Wang 3,, Gang Wang 3 1 Center for General Educatin, China

More information

Monotone Operator Splitting Methods in Signal and Image Recovery

Monotone Operator Splitting Methods in Signal and Image Recovery Monotone Operator Splitting Methods in Signal and Image Recovery P.L. Combettes 1, J.-C. Pesquet 2, and N. Pustelnik 3 2 Univ. Pierre et Marie Curie, Paris 6 LJLL CNRS UMR 7598 2 Univ. Paris-Est LIGM CNRS

More information

Near Equality, Near Convexity, Sums of Maximally Monotone Operators, and Averages of Firmly Nonexpansive Mappings

Near Equality, Near Convexity, Sums of Maximally Monotone Operators, and Averages of Firmly Nonexpansive Mappings Mathematical Programming manuscript No. (will be inserted by the editor) Near Equality, Near Convexity, Sums of Maximally Monotone Operators, and Averages of Firmly Nonexpansive Mappings Heinz H. Bauschke

More information

Local strong convexity and local Lipschitz continuity of the gradient of convex functions

Local strong convexity and local Lipschitz continuity of the gradient of convex functions Local strong convexity and local Lipschitz continuity of the gradient of convex functions R. Goebel and R.T. Rockafellar May 23, 2007 Abstract. Given a pair of convex conjugate functions f and f, we investigate

More information

1. Standing assumptions, problem statement, and motivation. We assume throughout this paper that

1. Standing assumptions, problem statement, and motivation. We assume throughout this paper that SIAM J. Optim., to appear ITERATING BREGMAN RETRACTIONS HEINZ H. BAUSCHKE AND PATRICK L. COMBETTES Abstract. The notion of a Bregman retraction of a closed convex set in Euclidean space is introduced.

More information

consistent learning by composite proximal thresholding

consistent learning by composite proximal thresholding consistent learning by composite proximal thresholding Saverio Salzo Università degli Studi di Genova Optimization in Machine learning, vision and image processing Université Paul Sabatier, Toulouse 6-7

More information

Non-smooth Non-convex Bregman Minimization: Unification and new Algorithms

Non-smooth Non-convex Bregman Minimization: Unification and new Algorithms Non-smooth Non-convex Bregman Minimization: Unification and new Algorithms Peter Ochs, Jalal Fadili, and Thomas Brox Saarland University, Saarbrücken, Germany Normandie Univ, ENSICAEN, CNRS, GREYC, France

More information

On Semicontinuity of Convex-valued Multifunctions and Cesari s Property (Q)

On Semicontinuity of Convex-valued Multifunctions and Cesari s Property (Q) On Semicontinuity of Convex-valued Multifunctions and Cesari s Property (Q) Andreas Löhne May 2, 2005 (last update: November 22, 2005) Abstract We investigate two types of semicontinuity for set-valued

More information

I P IANO : I NERTIAL P ROXIMAL A LGORITHM FOR N ON -C ONVEX O PTIMIZATION

I P IANO : I NERTIAL P ROXIMAL A LGORITHM FOR N ON -C ONVEX O PTIMIZATION I P IANO : I NERTIAL P ROXIMAL A LGORITHM FOR N ON -C ONVEX O PTIMIZATION Peter Ochs University of Freiburg Germany 17.01.2017 joint work with: Thomas Brox and Thomas Pock c 2017 Peter Ochs ipiano c 1

More information

Stochastic Proximal Gradient Algorithm

Stochastic Proximal Gradient Algorithm Stochastic Institut Mines-Télécom / Telecom ParisTech / Laboratoire Traitement et Communication de l Information Joint work with: Y. Atchade, Ann Arbor, USA, G. Fort LTCI/Télécom Paristech and the kind

More information

ECE G: Special Topics in Signal Processing: Sparsity, Structure, and Inference

ECE G: Special Topics in Signal Processing: Sparsity, Structure, and Inference ECE 18-898G: Special Topics in Signal Processing: Sparsity, Structure, and Inference Sparse Recovery using L1 minimization - algorithms Yuejie Chi Department of Electrical and Computer Engineering Spring

More information

A New Modified Gradient-Projection Algorithm for Solution of Constrained Convex Minimization Problem in Hilbert Spaces

A New Modified Gradient-Projection Algorithm for Solution of Constrained Convex Minimization Problem in Hilbert Spaces A New Modified Gradient-Projection Algorithm for Solution of Constrained Convex Minimization Problem in Hilbert Spaces Cyril Dennis Enyi and Mukiawa Edwin Soh Abstract In this paper, we present a new iterative

More information

Which wavelet bases are the best for image denoising?

Which wavelet bases are the best for image denoising? Which wavelet bases are the best for image denoising? Florian Luisier a, Thierry Blu a, Brigitte Forster b and Michael Unser a a Biomedical Imaging Group (BIG), Ecole Polytechnique Fédérale de Lausanne

More information

Convergence analysis for a primal-dual monotone + skew splitting algorithm with applications to total variation minimization

Convergence analysis for a primal-dual monotone + skew splitting algorithm with applications to total variation minimization Convergence analysis for a primal-dual monotone + skew splitting algorithm with applications to total variation minimization Radu Ioan Boţ Christopher Hendrich November 7, 202 Abstract. In this paper we

More information

A New Look at First Order Methods Lifting the Lipschitz Gradient Continuity Restriction

A New Look at First Order Methods Lifting the Lipschitz Gradient Continuity Restriction A New Look at First Order Methods Lifting the Lipschitz Gradient Continuity Restriction Marc Teboulle School of Mathematical Sciences Tel Aviv University Joint work with H. Bauschke and J. Bolte Optimization

More information

Variational inequalities for fixed point problems of quasi-nonexpansive operators 1. Rafał Zalas 2

Variational inequalities for fixed point problems of quasi-nonexpansive operators 1. Rafał Zalas 2 University of Zielona Góra Faculty of Mathematics, Computer Science and Econometrics Summary of the Ph.D. thesis Variational inequalities for fixed point problems of quasi-nonexpansive operators 1 by Rafał

More information

The Fitzpatrick Function and Nonreflexive Spaces

The Fitzpatrick Function and Nonreflexive Spaces Journal of Convex Analysis Volume 13 (2006), No. 3+4, 861 881 The Fitzpatrick Function and Nonreflexive Spaces S. Simons Department of Mathematics, University of California, Santa Barbara, CA 93106-3080,

More information

Orthogonal Matching Pursuit for Sparse Signal Recovery With Noise

Orthogonal Matching Pursuit for Sparse Signal Recovery With Noise Orthogonal Matching Pursuit for Sparse Signal Recovery With Noise The MIT Faculty has made this article openly available. Please share how this access benefits you. Your story matters. Citation As Published

More information

Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping.

Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping. Minimization Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping. 1 Minimization A Topological Result. Let S be a topological

More information

Sparse Regularization via Convex Analysis

Sparse Regularization via Convex Analysis Sparse Regularization via Convex Analysis Ivan Selesnick Electrical and Computer Engineering Tandon School of Engineering New York University Brooklyn, New York, USA 29 / 66 Convex or non-convex: Which

More information

Strong Convergence Theorem by a Hybrid Extragradient-like Approximation Method for Variational Inequalities and Fixed Point Problems

Strong Convergence Theorem by a Hybrid Extragradient-like Approximation Method for Variational Inequalities and Fixed Point Problems Strong Convergence Theorem by a Hybrid Extragradient-like Approximation Method for Variational Inequalities and Fixed Point Problems Lu-Chuan Ceng 1, Nicolas Hadjisavvas 2 and Ngai-Ching Wong 3 Abstract.

More information

A Proximal Method for Identifying Active Manifolds

A Proximal Method for Identifying Active Manifolds A Proximal Method for Identifying Active Manifolds W.L. Hare April 18, 2006 Abstract The minimization of an objective function over a constraint set can often be simplified if the active manifold of the

More information

A Generalized Uncertainty Principle and Sparse Representation in Pairs of Bases

A Generalized Uncertainty Principle and Sparse Representation in Pairs of Bases 2558 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 48, NO 9, SEPTEMBER 2002 A Generalized Uncertainty Principle Sparse Representation in Pairs of Bases Michael Elad Alfred M Bruckstein Abstract An elementary

More information

Edge-preserving wavelet thresholding for image denoising

Edge-preserving wavelet thresholding for image denoising Journal of Computational and Applied Mathematics ( ) www.elsevier.com/locate/cam Edge-preserving wavelet thresholding for image denoising D. Lazzaro, L.B. Montefusco Departement of Mathematics, University

More information

Second order forward-backward dynamical systems for monotone inclusion problems

Second order forward-backward dynamical systems for monotone inclusion problems Second order forward-backward dynamical systems for monotone inclusion problems Radu Ioan Boţ Ernö Robert Csetnek March 6, 25 Abstract. We begin by considering second order dynamical systems of the from

More information

Fonctions Perspectives et Statistique en Grande Dimension

Fonctions Perspectives et Statistique en Grande Dimension Fonctions Perspectives et Statistique en Grande Dimension Patrick L. Combettes Department of Mathematics North Carolina State University Raleigh, NC 27695, USA Basé sur un travail conjoint avec C. L. Müller

More information

Examples of Convex Functions and Classifications of Normed Spaces

Examples of Convex Functions and Classifications of Normed Spaces Journal of Convex Analysis Volume 1 (1994), No.1, 61 73 Examples of Convex Functions and Classifications of Normed Spaces Jon Borwein 1 Department of Mathematics and Statistics, Simon Fraser University

More information

Master 2 MathBigData. 3 novembre CMAP - Ecole Polytechnique

Master 2 MathBigData. 3 novembre CMAP - Ecole Polytechnique Master 2 MathBigData S. Gaïffas 1 3 novembre 2014 1 CMAP - Ecole Polytechnique 1 Supervised learning recap Introduction Loss functions, linearity 2 Penalization Introduction Ridge Sparsity Lasso 3 Some

More information

ON THE RANGE OF THE SUM OF MONOTONE OPERATORS IN GENERAL BANACH SPACES

ON THE RANGE OF THE SUM OF MONOTONE OPERATORS IN GENERAL BANACH SPACES PROCEEDINGS OF THE AMERICAN MATHEMATICAL SOCIETY Volume 124, Number 11, November 1996 ON THE RANGE OF THE SUM OF MONOTONE OPERATORS IN GENERAL BANACH SPACES HASSAN RIAHI (Communicated by Palle E. T. Jorgensen)

More information

Stability and Robustness of Weak Orthogonal Matching Pursuits

Stability and Robustness of Weak Orthogonal Matching Pursuits Stability and Robustness of Weak Orthogonal Matching Pursuits Simon Foucart, Drexel University Abstract A recent result establishing, under restricted isometry conditions, the success of sparse recovery

More information

A WEAK-TO-STRONGCONVERGENCE PRINCIPLE FOR FEJÉR-MONOTONE METHODS IN HILBERT SPACES

A WEAK-TO-STRONGCONVERGENCE PRINCIPLE FOR FEJÉR-MONOTONE METHODS IN HILBERT SPACES MATHEMATICS OF OPERATIONS RESEARCH Vol. 26, No. 2, May 2001, pp. 248 264 Printed in U.S.A. A WEAK-TO-STRONGCONVERGENCE PRINCIPLE FOR FEJÉR-MONOTONE METHODS IN HILBERT SPACES HEINZ H. BAUSCHKE and PATRICK

More information

Some Inexact Hybrid Proximal Augmented Lagrangian Algorithms

Some Inexact Hybrid Proximal Augmented Lagrangian Algorithms Some Inexact Hybrid Proximal Augmented Lagrangian Algorithms Carlos Humes Jr. a, Benar F. Svaiter b, Paulo J. S. Silva a, a Dept. of Computer Science, University of São Paulo, Brazil Email: {humes,rsilva}@ime.usp.br

More information

Learning MMSE Optimal Thresholds for FISTA

Learning MMSE Optimal Thresholds for FISTA MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com Learning MMSE Optimal Thresholds for FISTA Kamilov, U.; Mansour, H. TR2016-111 August 2016 Abstract Fast iterative shrinkage/thresholding algorithm

More information

STRONG CONVERGENCE OF AN IMPLICIT ITERATION PROCESS FOR ASYMPTOTICALLY NONEXPANSIVE IN THE INTERMEDIATE SENSE MAPPINGS IN BANACH SPACES

STRONG CONVERGENCE OF AN IMPLICIT ITERATION PROCESS FOR ASYMPTOTICALLY NONEXPANSIVE IN THE INTERMEDIATE SENSE MAPPINGS IN BANACH SPACES Kragujevac Journal of Mathematics Volume 36 Number 2 (2012), Pages 237 249. STRONG CONVERGENCE OF AN IMPLICIT ITERATION PROCESS FOR ASYMPTOTICALLY NONEXPANSIVE IN THE INTERMEDIATE SENSE MAPPINGS IN BANACH

More information

The local equicontinuity of a maximal monotone operator

The local equicontinuity of a maximal monotone operator arxiv:1410.3328v2 [math.fa] 3 Nov 2014 The local equicontinuity of a maximal monotone operator M.D. Voisei Abstract The local equicontinuity of an operator T : X X with proper Fitzpatrick function ϕ T

More information

Merit functions and error bounds for generalized variational inequalities

Merit functions and error bounds for generalized variational inequalities J. Math. Anal. Appl. 287 2003) 405 414 www.elsevier.com/locate/jmaa Merit functions and error bounds for generalized variational inequalities M.V. Solodov 1 Instituto de Matemática Pura e Aplicada, Estrada

More information

BAND-LIMITED REFINABLE FUNCTIONS FOR WAVELETS AND FRAMELETS

BAND-LIMITED REFINABLE FUNCTIONS FOR WAVELETS AND FRAMELETS BAND-LIMITED REFINABLE FUNCTIONS FOR WAVELETS AND FRAMELETS WEIQIANG CHEN AND SAY SONG GOH DEPARTMENT OF MATHEMATICS NATIONAL UNIVERSITY OF SINGAPORE 10 KENT RIDGE CRESCENT, SINGAPORE 119260 REPUBLIC OF

More information

of Orthogonal Matching Pursuit

of Orthogonal Matching Pursuit A Sharp Restricted Isometry Constant Bound of Orthogonal Matching Pursuit Qun Mo arxiv:50.0708v [cs.it] 8 Jan 205 Abstract We shall show that if the restricted isometry constant (RIC) δ s+ (A) of the measurement

More information

The sum of two maximal monotone operator is of type FPV

The sum of two maximal monotone operator is of type FPV CJMS. 5(1)(2016), 17-21 Caspian Journal of Mathematical Sciences (CJMS) University of Mazandaran, Iran http://cjms.journals.umz.ac.ir ISSN: 1735-0611 The sum of two maximal monotone operator is of type

More information

Convex Hodge Decomposition of Image Flows

Convex Hodge Decomposition of Image Flows Convex Hodge Decomposition of Image Flows Jing Yuan 1, Gabriele Steidl 2, Christoph Schnörr 1 1 Image and Pattern Analysis Group, Heidelberg Collaboratory for Image Processing, University of Heidelberg,

More information