A theoretical analysis of L¹-regularized Poisson likelihood estimation

Article in Inverse Problems in Science and Engineering, January 2009. Author: Aaron Luttman.

Inverse Problems in Science and Engineering, Vol. 00, No. 00, Month 200x, 1-15

RESEARCH ARTICLE

A Theoretical Analysis of L¹-Regularized Poisson Likelihood Estimation

Aaron Luttman, Department of Mathematical Sciences, Clarkson University, Potsdam, NY

(Received 00 Month 200x; in final form 00 Month 200x)

A standard variational formulation for solving the linear operator equation Au = z is to compute the function u that minimizes ‖Au − z‖_{L²(Ω)}. This is often an ill-posed problem that requires regularization, which is to say that there may exist infinitely many minimizers or that the solution may not be stable with respect to measurement errors. In many cases, however, the measured function z is contaminated by noise of both Gaussian and Poisson type. If the Poisson noise is statistically significant but its values are not large, then solving the operator equation via the ubiquitous least-squares minimization problem may not produce an appropriate reconstruction, and the Poisson negative log-likelihood estimator may lead to a more accurate result. Poisson likelihood minimization is also generally an ill-posed problem, and a regularization term is required to impose stability or to pick out which of the infinitely many minimizers is appropriate. Many possible regularizations have been analyzed, including Tikhonov and total-variation regularization, but in this work we perform the theoretical analysis to show that choosing the Poisson minimizer of smallest L¹ norm leads to a well-posed problem.

Keywords: L¹ regularization, ill-posed problems, Poisson likelihood estimation

AMS Subject Classification: 49J99, 46N10

1. Introduction

Given a bounded, open domain Ω ⊂ R^d for some d ≥ 1, there are many applications in which one wishes to solve the linear operator equation Au = z for some u ∈ L²(Ω), where A : L²(Ω) → L²(Ω) is the integral operator given by

    Au(x) = ∫_Ω k(x; ζ) u(ζ) dζ,

for some kernel k ∈ L^∞(Ω × Ω).
In many cases, the values of the measured function z are artificially increased by non-negative noise effects, and in some applications this noise is appropriately modeled by a Poisson random variable with expectation γ, which is strictly positive. We therefore wish to solve the alternate operator equation

    Au + γ = z.    (1)

Email: aluttman@clarkson.edu
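As a concrete illustration (not part of the original analysis), the operator equation (1) can be discretized on a grid, with the kernel k becoming a matrix and the integral an h-weighted matrix-vector product. The Gaussian kernel, the grid, and the signal u_true below are assumptions chosen only for demonstration:

```python
import numpy as np

# Minimal sketch of discretizing Au(x) = ∫ k(x; ζ) u(ζ) dζ on Ω = [0, 1].
# The Gaussian kernel and the signal u_true are illustrative assumptions.
n = 200
grid = np.linspace(0.0, 1.0, n)
h = grid[1] - grid[0]  # quadrature weight for the rectangle rule
K = np.exp(-((grid[:, None] - grid[None, :]) ** 2) / (2 * 0.05**2))  # k(x; ζ) ≥ 0

def A(u):
    """Apply the discretized integral operator: (Au)_i ≈ h * Σ_j K[i, j] u_j."""
    return h * (K @ u)

gamma = 0.1                                          # strictly positive background
u_true = np.maximum(0.0, np.sin(2 * np.pi * grid))   # non-negative, as required of u
z = A(u_true) + gamma                                # noise-free data with Au + γ = z
```

Since k ≥ 0 and u ≥ 0, the discretized data satisfy z ≥ γ > 0 everywhere, mirroring the positivity assumptions used throughout the analysis.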

If the functions Au and z are elements of L²(Ω), then the first natural approach to solving (1) is to minimize ‖Au + γ − z‖²_{L²(Ω)} over some appropriate set of functions C. Since, given u, this measures how close Au + γ is to the measured image z, it is called a data fidelity. The least-squares fidelity, i.e. using the L²(Ω) distance, has the effect of interpreting the noise, with expectation γ, as a Gaussian random variable, which is appropriate when the values of the background noise are high. When the noise is small, but significant relative to the values of the function u, this does not give a good approximation. Thus we instead seek to minimize a data fidelity that respects the Poisson nature of the background noise. One such fidelity is the Poisson negative log-likelihood functional

    ∫_Ω [Au + γ − z log(Au + γ)] dx.    (2)

By basic variational principles, a minimizer of this functional is a solution to (1). Minimizers of (2) need not exist and, if they do exist, need not be unique [20, 21], depending on the operator A. Moreover, the computation of solutions may not be stable with respect to errors in the measurements. The nature of the regularization used to guarantee the well-posedness of the minimization problem depends on prior information about the solution. There is a well-developed regularization theory for the Poisson likelihood functional using the L²(Ω) norm,

    argmin_{u ∈ C} ∫_Ω [Au + γ − z log(Au + γ)] dx + α‖u‖²_{L²(Ω)},

where α > 0 is called the regularization parameter. Proving that a unique minimizer exists then reduces to proving that, under certain assumptions on A, there exists a unique minimizer of the data fidelity with smallest norm, which was done in [4] for an appropriate set of functions C ⊂ L²(Ω).
If the function u being recovered is known to have other derivative properties, then it is possible instead to regularize using a diffusion operator of the form

    argmin_{u ∈ C} ∫_Ω [Au + γ − z log(Au + γ)] dx + α‖Λ∇u‖²_{L²(Ω)},

where Λ is a data-dependent matrix, in which case the regularization term acts as a spatially-dependent penalty on the gradient. It was demonstrated in [5] that, for suitable Λ, this functional also has a unique minimizer in a class of functions C that is a subset of the Sobolev space H¹(Ω). If the function being reconstructed is piecewise constant, then it is also possible to regularize via the total variation,

    argmin_{u ∈ C} ∫_Ω [Au + γ − z log(Au + γ)] dx + α∫_Ω |∇u|,

and the corresponding existence and uniqueness theory was presented in [6]. All three of the above regularization schemes are collected into a single framework in [3]. Rather than assuming that the gradient of u is sparse, there has been a recent shift in digital signal recovery toward analyzing signals that are themselves sparse, at least in some appropriate representation (see for example [2, 7, 8, 10]). Sparsity is naturally enforced by regularization with the L¹ norm, and the primary focus of this work is to develop the theoretical foundation for the L¹(Ω)-regularized Poisson

negative log-likelihood data fidelity,

    argmin_{u ∈ C} ∫_Ω [Au + γ − z log(Au + γ)] dx + α‖u‖_{L¹(Ω)},    (3)

within the variational framework established for the related regularization techniques.

2. Problem Formulation

2.1. Notations and Definitions

The domain over which the function u is being reconstructed is denoted by Ω, and it is assumed to be a closed, bounded, convex subset of R^d with finite measure, for some integer d ≥ 1. We also assume that the measured function z is an element of L^∞(Ω) with the property that z ≥ 0 almost everywhere (a.e.); since we seek to solve (1) and γ > 0, in fact z > 0 a.e. The function u that we seek is a reconstruction of a function that is non-negative almost everywhere, so it should also have that property. Thus the set C over which we desire to compute a minimizer of (3) is

    C = {u ∈ L²(Ω) : u ≥ 0 a.e.},

which is a closed, convex subset of L²(Ω). Moreover, since Ω is a bounded set of finite measure, C ⊂ L¹(Ω). Given the integral operator

    Au(x) = ∫_Ω k(x; ζ) u(ζ) dζ,    (4)

with non-negative kernel k, u ≥ 0 a.e. implies that Au ≥ 0 a.e. Moreover, since the kernel k is given (measured in applications), we assume that k ∈ L^∞(Ω × Ω), so that A : L²(Ω) → L²(Ω) is a compact linear operator [20, 21]. We also assume that the functions of interest are not in the kernel of A, i.e. that Au = 0 implies u = 0 for u ∈ C. Though this is a harsh restriction on A, it will be shown to be necessary in theory and natural in practice. The last thing to be noted about A is that it is a bounded linear operator when viewed as mapping L¹(Ω) → L¹(Ω). To see this, let u ∈ C. Then Au ∈ L²(Ω) ⊂ L¹(Ω), and, keeping in mind that k ≥ 0, u ≥ 0, and Au ≥ 0 (all almost everywhere), we have

    ‖Au‖_{L¹(Ω)} = ∫_Ω ∫_Ω k(x; ζ) u(ζ) dζ dx ≤ ∫_Ω ∫_Ω ‖k‖_{L^∞(Ω×Ω)} u(ζ) dζ dx = |Ω| ‖k‖_{L^∞(Ω×Ω)} ‖u‖_{L¹(Ω)},    (5)

where ‖u‖_{L¹(Ω)} is finite since u ∈ L²(Ω) ⊂ L¹(Ω), recalling that Ω is a set of finite measure.
This shows that A is a bounded linear operator in the sense of L¹(Ω) when acting on elements of C, and it is in this sense that we will refer to the L¹(Ω) operator norm of A. For α ≥ 0, we define T_α : C → R by

    T_α(u) = ∫_Ω [(Au + γ) − z log(Au + γ)] dx + α‖u‖_{L¹(Ω)},    (6)

and the goal is to compute argmin_{u ∈ C} T_α(u). Note that T₀(u) is simply the unregularized Poisson likelihood.
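In a discretized setting the functional (6) can be evaluated directly. The following self-contained sketch (the kernel, data, and parameters are assumed for illustration) evaluates the Poisson fidelity and the L¹ penalty with h-weighted sums in place of integrals, and also exhibits the operator bound (5) numerically:

```python
import numpy as np

# Sketch of evaluating T_alpha(u) = ∫[(Au + γ) − z log(Au + γ)] + α‖u‖_{L¹}
# on a uniform grid over Ω = [0, 1]; all concrete values are assumptions.
n = 100
x = np.linspace(0.0, 1.0, n)
h = 1.0 / n                                          # so that |Ω| ≈ n * h = 1
gamma, alpha = 0.1, 0.05
K = np.exp(-10.0 * np.abs(x[:, None] - x[None, :]))  # assumed non-negative kernel

def fidelity(u, z):
    """Poisson negative log-likelihood, with integrals as h-weighted sums."""
    Au = h * (K @ u)
    return h * np.sum(Au + gamma - z * np.log(Au + gamma))

def T_alpha(u, z):
    return fidelity(u, z) + alpha * h * np.sum(np.abs(u))  # add the L¹ penalty

u_true = np.maximum(0.0, np.sin(np.pi * x))
z = h * (K @ u_true) + gamma                         # data generated by u_true

# Since t − z log t is minimized pointwise at t = z, the data-generating u_true
# attains the smallest possible fidelity value:
assert fidelity(u_true, z) <= fidelity(np.zeros(n), z)

# Numerical analogue of the bound (5): ‖Au‖_{L¹} ≤ |Ω| ‖k‖_∞ ‖u‖_{L¹}
norm1_Au = h * np.sum(np.abs(h * (K @ u_true)))
assert norm1_Au <= (n * h) * K.max() * (h * np.sum(np.abs(u_true))) + 1e-12
```

The regularized value T_alpha(u, z) adds the sparsity penalty on top of the fidelity; how the penalty restores well-posedness is the subject of the analysis that follows.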

3. Theoretical Analysis

3.1. Well-Posedness of the Problem

A problem is called well-posed in the sense of Hadamard if a solution exists, is unique, and is stable with respect to perturbations in the input. Stability means that an estimate on the magnitude of the input error leads to a deterministic estimate of the output error, i.e. that the solution is, in some sense, continuous with respect to input errors. The goal is to demonstrate that

    argmin_{u ∈ C} T_α(u) = argmin_{u ∈ C} ∫_Ω [Au + γ − z log(Au + γ)] dx + α‖u‖_{L¹(Ω)}    (7)

is a well-posed problem in this sense. Thus we first show that a unique solution exists and that the solution is stable with respect to errors in z and k.

3.1.1. Existence and Uniqueness of Solutions

If X is a Banach space, a functional T : X → R is called convex on a set C ⊂ X if

    T(λx + (1 − λ)y) ≤ λT(x) + (1 − λ)T(y)

whenever 0 < λ < 1 and x, y ∈ C. It is called strictly convex if the inequality is strict for x ≠ y. A functional T is called L¹(Ω)-coercive if T(x_n) → ∞ whenever ‖x_n‖_{L¹(Ω)} → ∞. Both properties are important for demonstrating the existence of minimizers, due to the theorem that follows [21, Theorem 2.30]. The proof is included here for completeness.

Theorem 3.1: Let Ω ⊂ R^d be a bounded set of finite measure and C ⊂ L^p(Ω) (1 < p < ∞) be a closed, convex set. If T : C → R is a convex functional that is bounded below and L¹(Ω)-coercive, then there exists u* ∈ C such that T(u*) = inf_{u∈C} T(u). Moreover, if T is strictly convex, then u* is unique.

Proof: Let {u_n} ⊂ C be a sequence such that T(u_n) → inf_{u∈C} T(u). Since T is L¹(Ω)-coercive, {u_n} is L¹(Ω)-bounded, which implies that it is also L^p(Ω)-bounded. Since C is a closed, convex set contained in a reflexive Banach space, the boundedness of {u_n} in the L^p(Ω) norm implies that it has a subsequence {u_{n_k}} such that u_{n_k} ⇀ u* for some u* ∈ C (where ⇀ indicates weak convergence in L^p(Ω)). Since T is convex, it is weakly lower semicontinuous [24], which implies that

    T(u*) ≤ lim inf T(u_{n_k}) = inf_{u∈C} T(u).
Since u* ∈ C, we have that T(u*) = inf_{u∈C} T(u), i.e. u* is a minimizer of T on C. Now if T is strictly convex and u# ≠ u* is also a minimizer, then

    T((u# + u*)/2) < (1/2)T(u#) + (1/2)T(u*) = T(u*),

a contradiction. Therefore the minimizer u* is unique. ∎

Given the general results above, we now address the properties of T_α as defined by (6), beginning by showing that T₀ is convex. This follows the developments in [6] and [21, Theorem 2.42].

Lemma 3.2: The functional T₀ is convex on C, and, if ker(A) ∩ C = {0}, then the convexity is strict.

Proof: A straightforward calculation yields that the Hessian of T₀ is

    ∇²T₀ = A* diag(z/(Au + γ)²) A,    (8)

where diag(v) : L²(Ω) → L²(Ω) is the linear operator defined by diag(v)w = vw. Since Au ≥ 0 a.e. and γ > 0, this operator is well-defined. To see that it is positive semi-definite on C, note that, if u' ∈ L²(Ω), then

    ⟨A* diag(z/(Au + γ)²) A u', u'⟩ = ⟨diag(z/(Au + γ)²) Au', Au'⟩.

Thus ∇²T₀ is positive semi-definite if ⟨diag(z/(Au + γ)²) y, y⟩ ≥ 0 for all y ∈ Image(A), which holds since z > 0 a.e., so that z/(Au + γ)² y² ≥ 0 a.e. Thus ∇²T₀ is positive semi-definite, which shows that T₀ is convex.

If ker(A) ∩ C = {0}, then, recalling that z > 0 a.e., ⟨diag(z/(Au + γ)²) y, y⟩ > 0 for all y ≠ 0, i.e. for all Au ≠ 0. Thus ∇²T₀ is positive definite, which implies that T₀ is strictly convex. ∎

Lemma 3.3: If ker(A) ∩ C = {0}, then T₀ is L¹(Ω)-coercive.

Proof: Suppose that {u_n} ⊂ C is such that ‖u_n‖_{L¹(Ω)} → ∞. Let Ω₁ = {x ∈ Ω : u_n(x) → ∞}. Since Ω is a set of finite measure and ‖u_n‖_{L¹(Ω)} → ∞, Ω₁ has positive measure. Thus, invoking Fubini's Theorem,

    ∫_Ω [Au_n + γ] dx = ∫_Ω ∫_Ω k(x; ζ) u_n(ζ) dζ dx + γ|Ω|
        = ∫_Ω ∫_{Ω₁} k(x; ζ) u_n(ζ) dζ dx + ∫_Ω ∫_{Ω\Ω₁} k(x; ζ) u_n(ζ) dζ dx + γ|Ω|
        ≥ ∫_{Ω₁} u_n(ζ) ∫_Ω k(x; ζ) dx dζ.

Suppose that k(x; ζ) = 0 for almost every (x; ζ) ∈ Ω × Ω₁, in which case ∫_{Ω₁} u_n(ζ) ∫_Ω k(x; ζ) dx dζ = 0 for all n ∈ N. Then set

    u(x) = 1 for x ∈ Ω₁,  u(x) = 0 for x ∈ Ω \ Ω₁,

and

    Au(x) = ∫_Ω k(x; ζ) u(ζ) dζ = ∫_{Ω₁} k(x; ζ) dζ = 0

for almost every x ∈ Ω, contrary to the hypothesis that Au = 0 implies u = 0. Therefore there exists a set Ω₂ ⊂ Ω₁ of positive measure such that ∫_Ω k(x; ζ) dx > 0 for almost every ζ ∈ Ω₂, in which case

    ∫_{Ω₁} u_n(ζ) ∫_Ω k(x; ζ) dx dζ ≥ ∫_{Ω₂} u_n(ζ) ∫_Ω k(x; ζ) dx dζ → ∞,

which shows that ‖u_n‖_{L¹(Ω)} → ∞ implies that ‖Au_n + γ‖_{L¹(Ω)} → ∞. Now, by Jensen's integral inequality [23, Theorem 7.44] and the facts that u_n ≥ 0

and Au_n ≥ 0 a.e., we have

    T₀(u_n) = ∫_Ω [Au_n + γ − z log(Au_n + γ)] dx
        ≥ ‖Au_n + γ‖_{L¹(Ω)} − ‖z‖_{L^∞(Ω)} ∫_Ω log(Au_n + γ) dx
        ≥ ‖Au_n + γ‖_{L¹(Ω)} − |Ω| ‖z‖_{L^∞(Ω)} log( ‖Au_n + γ‖_{L¹(Ω)} / |Ω| ).

Thus ‖u_n‖_{L¹(Ω)} → ∞ implies ‖Au_n + γ‖_{L¹(Ω)} → ∞, which, by the properties of the function x ↦ x − c log x for fixed c > 0, shows that T₀(u_n) → ∞. ∎

The previous two lemmas then lead to the following corollary.

Corollary 3.4: The functional T₀ has a unique minimizer in C if and only if ker(A) ∩ C = {0}.

Proof: (⇒) Suppose that there exists a nonzero u' ∈ ker(A) ∩ C. If T₀ has no minimizer, there is not a unique minimizer, and we are done. Otherwise, if u* is a minimizer of T₀, then

    T₀(u* + u') = ∫_Ω [(A(u* + u') + γ) − z log(A(u* + u') + γ)] dx = ∫_Ω [Au* + γ − z log(Au* + γ)] dx = T₀(u*),

which shows that u* + u' is also a minimizer. Thus T₀ having a unique minimizer implies that ker(A) ∩ C = {0}.

(⇐) Now suppose that ker(A) ∩ C = {0}. By Lemma 3.2, T₀ is strictly convex. By Lemma 3.3, T₀ is L¹(Ω)-coercive, so, by Theorem 3.1, T₀ has a unique minimizer. ∎

Throughout the remainder of this work we assume that ker(A) ∩ C = {0}, and the unique minimizer of T₀ will be denoted u₀. The functional T_α in (6) has a unique minimizer if it can be shown that T_α is strictly convex on C and L¹(Ω)-coercive.

Theorem 3.5: Let α > 0 be a fixed, known constant. Then T_α is strictly convex on C if and only if ker(A) ∩ C = {0}.

Proof: If ker(A) ∩ C = {0}, then T₀ is strictly convex by Lemma 3.2, and the sum of a strictly convex functional and a convex functional is strictly convex. On the other hand, suppose that u₁, u₂ ∈ C ∩ ker(A) are distinct elements and 0 < β < 1. Then, given that u₁, u₂ ≥ 0 a.e., a straightforward calculation shows that

    T_α(βu₁ + (1 − β)u₂) = T₀(βu₁ + (1 − β)u₂) + α‖βu₁ + (1 − β)u₂‖_{L¹(Ω)}
        = βT₀(u₁) + (1 − β)T₀(u₂) + αβ‖u₁‖_{L¹(Ω)} + α(1 − β)‖u₂‖_{L¹(Ω)}
        = βT_α(u₁) + (1 − β)T_α(u₂),

which shows that T_α is not strictly convex. ∎

When ker(A) ∩ C = {0}, the L¹(Ω)-coercivity of T_α follows from the fact that T₀ is L¹(Ω)-coercive.
Then, given both the strict convexity and the L¹(Ω)-coercivity

of T_α, Theorem 3.1 leads immediately to the following corollary.

Corollary 3.6: If ker(A) ∩ C = {0} and α > 0 is fixed, then T_α has a unique minimizer on C.

3.1.2. Stability of Minimizers

We next turn to showing that errors in z and k have a continuous effect on the minimizers of (6). Throughout this section we assume that α > 0 is a fixed, known constant. Suppose that {k_n} ⊂ L^∞(Ω × Ω) is a sequence of perturbed operator kernels, and let

    A_n u(x) = ∫_Ω k_n(x; ζ) u(ζ) dζ

be the associated sequence of operators. We are interested in analyzing estimates for the error in our solution given estimates for the error in these kernels. If ‖k_n − k‖_{L^∞(Ω×Ω)} → 0, then

    ‖A_n u − Au‖_{L¹(Ω)} = ∫_Ω |∫_Ω (k_n(x; ζ) − k(x; ζ)) u(ζ) dζ| dx ≤ |Ω| ‖k_n − k‖_{L^∞(Ω×Ω)} ‖u‖_{L¹(Ω)} → 0.

Thus ‖k_n − k‖_{L^∞(Ω×Ω)} → 0 implies ‖A_n − A‖_{L¹(Ω)} → 0. Suppose also that we have a sequence {z_n} ⊂ L^∞(Ω) such that ‖z_n − z‖_{L^∞(Ω)} → 0. Then we have a sequence of perturbed operator equations A_n u + γ = z_n and a sequence of corresponding optimization problems

    argmin_{u ∈ C} T_{α,n}(u) = argmin_{u ∈ C} ∫_Ω [(A_n u + γ) − z_n log(A_n u + γ)] dx + α‖u‖_{L¹(Ω)}.    (9)

Corollary 3.6 gives the conditions on each A_n under which each of the perturbed operator equations has a unique minimizer.

Lemma 3.7: If ker(A_n) ∩ C = {0}, then for each n ∈ N there exists a unique u*_n ∈ C such that T_{α,n}(u*_n) = inf_{u∈C} T_{α,n}(u).

Supposing the existence of an error-free kernel k and an error-free measurement function z, we denote by u* the unique minimizer of T_α (which, we note, is not equal to u₀).

Theorem 3.8: Suppose that {k_n} ⊂ L^∞(Ω × Ω) is a sequence of perturbed kernels with k_n(x; ζ) ≥ 0 for almost every (x; ζ) and that {z_n} ⊂ L^∞(Ω) is a sequence of functions with z_n > 0 a.e. in Ω for all n. If ‖k_n − k‖_{L^∞(Ω×Ω)} → 0, ‖z_n − z‖_{L^∞(Ω)} → 0, and ker(A_n) ∩ C = {0} for all n ∈ N, then u*_n ⇀ u*.

The proof will proceed by a sequence of lemmas, and we assume in each that T_{α,n} satisfies the hypotheses of Theorem 3.8.
The first lemma shows that the sequence of perturbed Poisson likelihood functionals has a kind of collective coercivity.

Lemma 3.9: If lim_{n→∞} ‖u_n‖_{L¹(Ω)} = ∞, then lim_{n→∞} T_{α,n}(u_n) = ∞.

Proof: Since ‖A_n − A‖_{L¹(Ω)} → 0, the Principle of Uniform Boundedness [11, Theorem 14.1] gives a constant M > 0 such that ‖A_n‖_{L¹(Ω)} < M for all n. The fact that ‖z_n − z‖_{L^∞(Ω)} → 0 implies that there exists a constant Z > 0 such that ‖z_n‖_{L^∞(Ω)} < Z for all n. Now we invoke the fact that {‖z_n‖_{L^∞(Ω)}} is a bounded set of positive real numbers, the facts that u_n ≥ 0 and A_n u_n ≥ 0 a.e., and again Jensen's inequality, to see

that

    T_{α,n}(u_n) = ∫_Ω [(A_n u_n + γ) − z_n log(A_n u_n + γ)] dx + α‖u_n‖_{L¹(Ω)}
        ≥ α‖u_n‖_{L¹(Ω)} − ‖z_n‖_{L^∞(Ω)} ∫_Ω log(A_n u_n + γ) dx
        ≥ α‖u_n‖_{L¹(Ω)} − |Ω| Z log( ‖A_n u_n + γ‖_{L¹(Ω)} / |Ω| )
        ≥ α‖u_n‖_{L¹(Ω)} − |Ω| Z log( (M‖u_n‖_{L¹(Ω)} + γ|Ω|) / |Ω| ).

Since x − c log x → ∞ as x → ∞, we have that T_{α,n}(u_n) → ∞ whenever ‖u_n‖_{L¹(Ω)} → ∞. ∎

The above lemma shows the behavior of the sequence T_{α,n} over a sequence of elements {u_n} ⊂ C. The next lemma relates the values of T_α and T_{α,n} for a similar sequence.

Lemma 3.10: Suppose that {u_n} ⊂ C is such that {‖u_n‖_{L¹(Ω)}} is bounded. Then for ε > 0 and α ≥ 0 there exists N ∈ N such that |T_{α,n}(u_n) − T_α(u_n)| < ε for all n > N.

Proof: Let ε > 0 and {u_n} ⊂ C be such that {‖u_n‖_{L¹(Ω)}} is bounded. Then

    |T_{α,n}(u_n) − T_α(u_n)| ≤ ∫_Ω |A_n u_n − z_n log(A_n u_n + γ) − Au_n + z log(Au_n + γ)| dx
        ≤ ∫_Ω |(A_n − A)u_n| dx + ∫_Ω |z log(Au_n + γ) − z_n log(A_n u_n + γ)| dx.

Since {‖u_n‖_{L¹(Ω)}} is bounded, there exists some constant K₁ > 0 with ‖u_n‖_{L¹(Ω)} < K₁ for all n ∈ N. Since ‖k_n − k‖_{L^∞(Ω×Ω)} → 0, we have that ‖A_n − A‖_{L¹(Ω)} → 0, which implies that there exists N ∈ N such that ‖A_n − A‖_{L¹(Ω)} < ε/(2K₁) for all n > N. Thus

    ∫_Ω |(A_n − A)u_n| dx ≤ ‖A_n − A‖_{L¹(Ω)} ‖u_n‖_{L¹(Ω)} ≤ ‖A_n − A‖_{L¹(Ω)} K₁ ≤ ε/2

for all n > N. Again, by Jensen's inequality we have

    |Ω| log γ ≤ ∫_Ω log(A_n u_n + γ) dx ≤ |Ω| log( ‖A_n u_n + γ‖_{L¹(Ω)} / |Ω| ) ≤ |Ω| log( (‖A_n‖_{L¹(Ω)} ‖u_n‖_{L¹(Ω)} + γ|Ω|) / |Ω| ),

which is bounded above, given that ‖A_n − A‖_{L¹(Ω)} → 0. Thus there exists a constant K₂ > 0 such that ∫_Ω |log(A_n u_n + γ)| dx < K₂ for all n ∈ N. Since ‖z_n − z‖_{L^∞(Ω)} → 0, there exists N ∈ N such that z(x) − ε/(4K₂) ≤ z_n(x) for almost every x ∈ Ω

and all n > N. Thus, for sufficiently large n,

    ∫_Ω [z log(Au_n + γ) − z_n log(A_n u_n + γ)] dx
        ≤ ∫_Ω [z log(Au_n + γ) − (z − ε/(4K₂)) log(A_n u_n + γ)] dx
        = ∫_Ω z log( (Au_n + γ)/(A_n u_n + γ) ) dx + (ε/(4K₂)) ∫_Ω log(A_n u_n + γ) dx
        ≤ ∫_Ω z log( (Au_n + γ)/(A_n u_n + γ) ) dx + ε/4
        ≤ ‖z‖_{L^∞(Ω)} ∫_Ω log( (Au_n + γ)/(A_n u_n + γ) ) dx + ε/4.

Thus it is left to be shown that there exists N ∈ N such that

    ∫_Ω log( (Au_n + γ)/(A_n u_n + γ) ) dx < ε/(4‖z‖_{L^∞(Ω)}).

As was noted, {‖u_n‖_{L¹(Ω)}} is a bounded set; thus

    |(Au_n(x) + γ)/(A_n u_n(x) + γ) − 1| = |(A − A_n)u_n(x)| / (A_n u_n(x) + γ)
        ≤ (1/γ) |∫_Ω (k(x; ζ) − k_n(x; ζ)) u_n(ζ) dζ|
        ≤ ‖k − k_n‖_{L^∞(Ω×Ω)} ‖u_n‖_{L¹(Ω)} / γ → 0

for almost every x ∈ Ω, which implies

    log( (Au_n(x) + γ)/(A_n u_n(x) + γ) ) → 0

for almost every x ∈ Ω. Since Ω is a set of finite measure, the Bounded Convergence Theorem [23, Corollary 5.37] implies that ∫_Ω log( (Au_n + γ)/(A_n u_n + γ) ) dx → 0. Thus there exists N ∈ N such that

    ∫_Ω log( (Au_n + γ)/(A_n u_n + γ) ) dx < ε/(4‖z‖_{L^∞(Ω)})

for all n > N, which yields |T_{α,n}(u_n) − T_α(u_n)| < ε. ∎

Proof of Theorem 3.8: By Lemma 3.9, the uniform boundedness of T_{α,n}(u*_n) implies that {u*_n} is uniformly bounded in C. Since C ⊂ L²(Ω), {u*_n} is weakly compact, which implies that there exists a subsequence {u*_{n_k}} such that u*_{n_k} ⇀ û ∈ C. Let ε > 0 be given. Then by Lemma 3.10 there exists K ∈ N such that |T_{α,n_k}(u*_{n_k}) − T_α(u*_{n_k})| < ε/2 and such that T_{α,n_k}(u*) ≤ T_α(u*) + ε/2 for all

k > K. Thus

    T_α(u*_{n_k}) = T_α(u*_{n_k}) − T_{α,n_k}(u*_{n_k}) + T_{α,n_k}(u*_{n_k})
        ≤ |T_α(u*_{n_k}) − T_{α,n_k}(u*_{n_k})| + T_{α,n_k}(u*_{n_k})
        ≤ ε/2 + T_{α,n_k}(u*)
        ≤ ε/2 + T_α(u*) + ε/2 = T_α(u*) + ε.

The weak lower semicontinuity of T_α then gives T_α(û) ≤ lim inf T_α(u*_{n_k}) ≤ T_α(u*). Since u* is the unique minimizer of T_α on C, û = u*. This shows that every weakly convergent subsequence of {u*_n} converges to u*, which implies that u*_n ⇀ u*. ∎

3.1.3. Convergence of Solutions

Clearly a minimizer of (6) (for α > 0) is not a solution to (1). In this section it is shown that there exists a sequence {α_n} of positive real numbers such that α_n → 0 and the corresponding sequence of minimizers converges to a solution of (1). Notations and hypotheses from above are maintained, so that we have a sequence of perturbed functions {z_n} ⊂ L^∞(Ω) and a sequence of perturbed operator kernels {k_n} ⊂ L^∞(Ω × Ω). Given a sequence {α_n} of positive real numbers tending to 0, we have the sequence of optimization problems

    argmin_{u ∈ C} T_{α_n,n}(u) = argmin_{u ∈ C} ∫_Ω [(A_n u + γ) − z_n log(A_n u + γ)] dx + α_n‖u‖_{L¹(Ω)}.

By Theorem 3.1, for each n ∈ N, T_{α_n,n} has a unique minimizer, which will be denoted by u*_n. We keep this notation for its simplicity, but note that we now get a different sequence of solutions than in Section 3.1.2, since in this case each u*_n corresponds to a different value of α as well as different measurements z_n and k_n. Recall that the unique minimizer of T₀ is denoted by u₀. The treatment follows that in [3, 4, 6].

Theorem 3.11: Let {z_n} ⊂ L^∞(Ω), {k_n} ⊂ L^∞(Ω × Ω), and {α_n} ⊂ (0, ∞) be such that z_n > 0 a.e., ‖z_n − z‖_{L^∞(Ω)} → 0, ‖k_n − k‖_{L^∞(Ω×Ω)} → 0, ker(A_n) ∩ C = {0}, and α_n → 0. If {u*_n} is the sequence of unique minimizers of the perturbed optimization problems

    argmin_{u ∈ C} ∫_Ω [(A_n u + γ) − z_n log(A_n u + γ)] dx + α_n‖u‖_{L¹(Ω)}

and

    ( T_{0,n}(u₀) − inf_{u∈C} T_{0,n}(u) ) / α_n    (10)

remains bounded, then u*_n ⇀ u₀.

We note here that the restrictions on {z_n} and {k_n} are severe but that they are actually quite natural in some applications (see Section 4).
As above, we begin with preliminary lemmas, and we assume the hypotheses of Theorem 3.11 hold throughout this section.

Lemma 3.12: The sequence {T_{α_n,n}(u*_n)} is a bounded sequence of real numbers.

Proof: Firstly, {T_{α_n,n}(u₀)} is a bounded sequence of real numbers. Suppose instead that there exists a subsequence {T_{α_{n_k},n_k}(u₀)} such that T_{α_{n_k},n_k}(u₀) → ∞. Then T_{0,n_k}(u₀) → ∞, since α_{n_k} → 0 and ‖u₀‖_{L¹(Ω)} is fixed. Since T₀(u₀) is constant, we must also therefore have T_{0,n_k}(u₀) → T₀(u₀), which contradicts Lemma 3.10. Thus {T_{α_n,n}(u₀)} is a bounded sequence of numbers. Secondly, note that

    T_{α_n,n}(u*_n) ≤ T_{α_n,n}(u₀)    (11)

since u*_n is the unique minimizer of T_{α_n,n}. Thus {T_{α_n,n}(u*_n)} is bounded above. Now suppose that there exists a subsequence {n_k} ⊂ N such that T_{α_{n_k},n_k}(u*_{n_k}) → −∞. Then T_{0,n_k}(u*_{n_k}) → −∞, which, by Lemma 3.10, implies that T₀(u*_{n_k}) → −∞, a contradiction, since T₀ is uniformly bounded below. Therefore {T_{α_n,n}(u*_n)} is bounded below. ∎

Lemma 3.13: The set {‖u*_n‖_{L²(Ω)}} is bounded.

Proof: Subtracting inf_{u∈C} T_{0,n}(u) from both sides of (11) and dividing by α_n gives

    ( T_{α_n,n}(u*_n) − inf_{u∈C} T_{0,n}(u) ) / α_n ≤ ( T_{α_n,n}(u₀) − inf_{u∈C} T_{0,n}(u) ) / α_n
        = ‖u₀‖_{L¹(Ω)} + ( T_{0,n}(u₀) − inf_{u∈C} T_{0,n}(u) ) / α_n
        ≤ ‖u₀‖_{L¹(Ω)} + K

for some constant K > 0 (which exists by the hypotheses of Theorem 3.11). Expanding the left-hand side gives

    ‖u*_n‖_{L¹(Ω)} ≤ ( T_{0,n}(u*_n) − inf_{u∈C} T_{0,n}(u) ) / α_n + ‖u*_n‖_{L¹(Ω)} = ( T_{α_n,n}(u*_n) − inf_{u∈C} T_{0,n}(u) ) / α_n ≤ ‖u₀‖_{L¹(Ω)} + K.

Thus {‖u*_n‖_{L¹(Ω)}} is a uniformly bounded set, which implies that {‖u*_n‖_{L²(Ω)}} is a bounded set. ∎

Proof of Theorem 3.11: Since {‖u*_n‖_{L²(Ω)}} is bounded by Lemma 3.13, {u*_n} contains a subsequence {u*_{n_k}} that converges weakly to some û ∈ C. First we show that

    T₀(û) = lim_{k→∞} T_{0,n_k}(u*_{n_k}).    (12)

Now

    |T_{0,n_k}(u*_{n_k}) − T₀(û)| ≤ |T_{0,n_k}(u*_{n_k}) − T₀(u*_{n_k})| + |T₀(u*_{n_k}) − T_{0,n_k}(û)| + |T_{0,n_k}(û) − T₀(û)|,

and, by Lemma 3.10, there exists N₁ ∈ N such that

    |T_{0,n_k}(u*_{n_k}) − T₀(u*_{n_k})| + |T_{0,n_k}(û) − T₀(û)| < ε/2

for all n_k > N₁. For the middle term,

    |T₀(u*_{n_k}) − T_{0,n_k}(û)| ≤ |∫_Ω (Au*_{n_k} − A_{n_k}û) dx| + |∫_Ω [z_{n_k} log(A_{n_k}û + γ) − z log(Au*_{n_k} + γ)] dx|.

We first deal with ∫_Ω (Au*_{n_k} − A_{n_k}û) dx, which we split as

    ∫_Ω A(u*_{n_k} − û) dx + ∫_Ω (A − A_{n_k})û dx.

Since Ω is a bounded set of finite measure and A is a bounded linear operator, F(u) = ∫_Ω Au dx is a bounded linear functional. The weak convergence of {u*_{n_k}} to û then implies that ∫_Ω A(u*_{n_k} − û) dx → 0, and ‖A_{n_k} − A‖_{L¹(Ω)} → 0 yields ∫_Ω (A − A_{n_k})û dx → 0. Thus there exists N₂ ∈ N such that |∫_Ω (Au*_{n_k} − A_{n_k}û) dx| < ε/4 for all n_k > N₂. On the other hand, since ‖A_{n_k} − A‖_{L¹(Ω)} → 0, there exists K ∈ R such that ∫_Ω |log(A_{n_k}û + γ)| dx < K. Then the fact that ‖z_n − z‖_{L^∞(Ω)} → 0 implies there exists N₃ ∈ N such that z_{n_k}(x) ≤ z(x) + ε/(8K) for all n_k > N₃ and almost every x ∈ Ω. Thus

    ∫_Ω [z_{n_k} log(A_{n_k}û + γ) − z log(Au*_{n_k} + γ)] dx
        ≤ ∫_Ω z log( (A_{n_k}û + γ)/(Au*_{n_k} + γ) ) dx + (ε/(8K)) ∫_Ω log(A_{n_k}û + γ) dx
        ≤ ∫_Ω z log( (A_{n_k}û + γ)/(Au*_{n_k} + γ) ) dx + ε/8.

Then

    ∫_Ω z log( (A_{n_k}û + γ)/(Au*_{n_k} + γ) ) dx ≤ ‖z‖_{L^∞(Ω)} ∫_Ω log( (A_{n_k}û + γ)/(Au*_{n_k} + γ) ) dx,

and the pointwise convergence of the integrand to zero together with the Bounded Convergence Theorem imply that there exists N₄ ∈ N such that

    ∫_Ω log( (A_{n_k}û + γ)/(Au*_{n_k} + γ) ) dx < ε/(8‖z‖_{L^∞(Ω)})

for all n_k > N₄, which verifies (12). We next note that

    lim_{k→∞} T_{0,n_k}(u*_{n_k}) ≤ T₀(u₀).    (13)

Lemma 3.10 gives lim_{k→∞} T_{0,n_k}(u₀) = T₀(u₀), and (10) implies

    lim_{k→∞} ( T_{0,n_k}(u*_{n_k}) − inf_{u∈C} T_{0,n_k}(u) ) = 0.

Since lim_{k→∞} T_{0,n_k}(u*_{n_k}) exists by (12),

    lim_{k→∞} T_{0,n_k}(u*_{n_k}) = lim_{k→∞} inf_{u∈C} T_{0,n_k}(u) ≤ lim_{k→∞} T_{0,n_k}(u₀) = T₀(u₀).

Therefore, by (12) and (13), T₀(û) ≤ T₀(u₀), which implies that û = u₀, since u₀ is the unique minimizer of T₀. Therefore every weakly convergent subsequence of {u*_n} converges to u₀, which implies that u*_n ⇀ u₀. ∎

Theorem 3.11 proves that computing the minimizer of the regularized Poisson likelihood functional is, in fact, an approximation to computing the minimizer of the unregularized functional.

4. Related Work on Applications

The primary purpose of this work is to develop a theoretical framework and to show that minimizing the L¹-regularized Poisson negative log-likelihood functional is a well-posed problem. Nonetheless this problem is of more than just theoretical interest, and we give a few, non-exhaustive, references here to such problems. Sparse signal problems naturally arise in imaging applications; for example, mapping a star field results in an image that is sparse modulo the background intensity. This problem, specifically, was analyzed by Jeffs et al. in [14, 15], and the Poisson approach is appropriate, since the background noise is a counting process on the telescope CCD array. The fundamental problem in this case is undoing the blur caused by atmospheric effects, and this blur can be modeled as a linear integral operator [12, 13]. In this case, the integral kernel k is known as the point-spread function, and z is the image measured by the telescope. In such applications, the assumptions required of A, z, and k in Section 2.1 are natural. The functions k and z are both measured by the telescope, making them non-negative almost everywhere, and the background noise has strictly positive expectation. Thus all the criteria of the results above are satisfied. Computational methods for related problems in imaging have been developed by Wang et al. [22], who give an active-set method for computing solutions to (1) using an L^p data fidelity with L^q regularization.
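As a concrete, hypothetical illustration of this application setting (the paper itself proposes no algorithm), one can minimize a discretized version of (3) over C = {u ≥ 0} by projected subgradient descent. The blur matrix, the sparse "star field", the step size, and all parameters below are assumptions chosen only for demonstration:

```python
import numpy as np

# Hypothetical sketch: projected subgradient descent for the discretized
# functional (3) with a sparse non-negative signal. Not the paper's method.
n = 80
gamma, alpha = 0.2, 1e-3
x = np.linspace(0.0, 1.0, n)
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.03**2))
A = K / K.sum(axis=1, keepdims=True)          # row-normalized blur ("point-spread") matrix

u_true = np.zeros(n)
u_true[[15, 40, 60]] = [1.0, 2.0, 1.5]        # sparse "star field"
z = A @ u_true + gamma                        # noise-free data, for simplicity

def T(u):
    """Discrete Poisson negative log-likelihood plus L1 penalty."""
    Au = A @ u
    return np.sum(Au + gamma - z * np.log(Au + gamma)) + alpha * np.sum(np.abs(u))

u = np.ones(n)
step = 0.05                                   # assumed small enough for stability
vals = [T(u)]
for _ in range(500):
    Au = A @ u
    grad = A.T @ (1.0 - z / (Au + gamma)) + alpha   # subgradient (valid where u > 0)
    u = np.maximum(0.0, u - step * grad)      # project back onto C = {u >= 0}
    vals.append(T(u))
```

The L¹ term shrinks small values toward zero while the non-negativity projection keeps the iterates in C. The well-posedness results above concern the continuum minimizer; this discrete iteration is only a plausible numerical analogue, not an algorithm with guarantees from this paper.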
A computational approach within a probabilistic framework using the EM algorithm was also developed by Mülthei for minimizing the Poisson likelihood [16-18].

5. Conclusion

In order to reconstruct functions filtered through integral operators and contaminated by background noise of Poisson type, it is appropriate to minimize the Poisson negative log-likelihood functional. Because such a minimization problem is ill-posed, it is necessary to add a regularization term in order to ensure the existence of a unique minimizer that is stable with respect to perturbations in the operator kernel and the measured function. If the scene being imaged is sparse, using the L¹(Ω) norm as a regularization can give more accurate reconstructions than, for example, classical Tikhonov regularization or total-variation regularization. In this work the theoretical details were established to demonstrate that the use of the L¹(Ω) norm as a regularization indeed results in a well-posed problem and that, as the regularization parameter tends to zero, the resulting minimizers converge to the unique minimizer of the unregularized minimization problem.

Acknowledgements

The author would like to thank Carmeliza Navasca and Johnathan Bardsley for helpful discussions on L¹ regularization and Poisson negative log-likelihood estimation, respectively. We would also like to thank the referees for their helpful comments and suggestions on the manuscript.

References

[1] R. Acar and C. R. Vogel, Analysis of Bounded Variation Penalty Methods for Ill-Posed Problems, Inverse Problems, 10 (1994).
[2] R. G. Baraniuk, Compressive Sensing, IEEE Signal Processing Magazine, July 2007.
[3] J. Bardsley, A Theoretical Framework for the Regularization of Poisson Likelihood Estimation Problems, preprint.
[4] J. M. Bardsley and N. Laobeul, Tikhonov Regularized Poisson Likelihood Estimation: Theoretical Justification and a Computational Method, Inverse Problems in Science and Engineering, 16 (2008), no. 2.
[5] J. M. Bardsley and N. Laobeul, An Analysis of Regularization by Diffusion for Ill-Posed Poisson Likelihood Estimation, Inverse Problems in Science and Engineering, to appear.
[6] J. M. Bardsley and A. Luttman, Total Variation-Penalized Poisson Likelihood Estimation for Ill-Posed Problems, Adv. Comput. Math., Special Issue on Mathematical Imaging, to appear.
[7] E. Candès, Compressive Sampling, Proceedings of the International Congress of Mathematicians, 2006.
[8] E. Candès, J. Romberg, and T. Tao, Stable Signal Recovery from Incomplete and Inaccurate Measurements, Comm. Pure Appl. Math., 59 (2006), no. 8.
[9] N. Cao, A. Nehorai, and M. Jacob, Image Reconstruction for Diffuse Optical Tomography Using Sparsity Regularization and Expectation-Maximization Algorithm, Optics Express, 15 (2007), no. 21.
[10] R. Chartrand, Exact Reconstruction from Surprisingly Little Data, Los Alamos Report LA-UR.
[11] J. B. Conway, A Course in Functional Analysis, Second Edition, Springer, Graduate Texts in Mathematics No. 96.
[12] P. C. Hansen, Deconvolution and Regularization with Toeplitz Matrices, Numerical Algorithms, 29 (2002).
[13] P. C. Hansen, J. G. Nagy, and D. P. O'Leary, Deblurring Images: Matrices, Spectra, and Filtering, SIAM, Fundamentals of Algorithms No. 3.
[14] B. D. Jeffs and D. Elsmore, Maximally Sparse Reconstruction of Blurred Star Field Images, Int. Conf. Acoustics, Speech, and Signal Processing, 4 (1991).
[15] B. D. Jeffs and M. Gunsay, Restoration of Blurred Star Field Images by Maximally Sparse Optimization, IEEE Trans. Image Processing, 2 (1993), no. 2.
[16] H. N. Mülthei, Iterative Continuous Maximum Likelihood Reconstruction Methods, Math. Methods Appl. Sci., 15 (1993).
[17] H. N. Mülthei and B. Schorr, On an Iterative Method for a Class of Integral Equations of the First Kind, Math. Methods Appl. Sci., 9 (1987).
[18] H. N. Mülthei and B. Schorr, On Properties of the Iterative Maximum Likelihood Reconstruction Method, Math. Methods Appl. Sci., 11 (1989).
[19] L. A. Shepp and Y. Vardi, Maximum Likelihood Reconstruction in Positron Emission Tomography, IEEE Trans. Med. Imaging, 1 (1982).
[20] A. N. Tikhonov, A. V. Goncharsky, V. V. Stepanov, and A. G. Yagola, Numerical Methods for the Solution of Ill-Posed Problems, Kluwer Academic Publishers.
[21] C. R. Vogel, Computational Methods for Inverse Problems, SIAM, Frontiers in Applied Mathematics No. 23.
[22] Y. Wang, J. Cao, Y. Yuan, C. Yang, and N. Xiu, Regularizing Active Set Method for Nonnegatively Constrained Ill-Posed Multichannel Image Restoration Problem, Appl. Opt., 48 (2009), no. 7.
[23] R. L. Wheeden and A. Zygmund, Measure and Integral: An Introduction to Real Analysis, Marcel Dekker, Series in Pure and Applied Mathematics No. 43.
[24] E. Zeidler, Applied Functional Analysis: Main Principles and their Applications, Springer-Verlag, New York.


More information

Compressibility of Infinite Sequences and its Interplay with Compressed Sensing Recovery

Compressibility of Infinite Sequences and its Interplay with Compressed Sensing Recovery Compressibility of Infinite Sequences and its Interplay with Compressed Sensing Recovery Jorge F. Silva and Eduardo Pavez Department of Electrical Engineering Information and Decision Systems Group Universidad

More information

Two-parameter regularization method for determining the heat source

Two-parameter regularization method for determining the heat source Global Journal of Pure and Applied Mathematics. ISSN 0973-1768 Volume 13, Number 8 (017), pp. 3937-3950 Research India Publications http://www.ripublication.com Two-parameter regularization method for

More information

PROJECTIONS ONTO CONES IN BANACH SPACES

PROJECTIONS ONTO CONES IN BANACH SPACES Fixed Point Theory, 19(2018), No. 1,...-... DOI: http://www.math.ubbcluj.ro/ nodeacj/sfptcj.html PROJECTIONS ONTO CONES IN BANACH SPACES A. DOMOKOS AND M.M. MARSH Department of Mathematics and Statistics

More information

arxiv: v1 [math.na] 26 Nov 2009

arxiv: v1 [math.na] 26 Nov 2009 Non-convexly constrained linear inverse problems arxiv:0911.5098v1 [math.na] 26 Nov 2009 Thomas Blumensath Applied Mathematics, School of Mathematics, University of Southampton, University Road, Southampton,

More information

The Sparsest Solution of Underdetermined Linear System by l q minimization for 0 < q 1

The Sparsest Solution of Underdetermined Linear System by l q minimization for 0 < q 1 The Sparsest Solution of Underdetermined Linear System by l q minimization for 0 < q 1 Simon Foucart Department of Mathematics Vanderbilt University Nashville, TN 3784. Ming-Jun Lai Department of Mathematics,

More information

Math 699 Reading Course, Spring 2007 Rouben Rostamian Homogenization of Differential Equations May 11, 2007 by Alen Agheksanterian

Math 699 Reading Course, Spring 2007 Rouben Rostamian Homogenization of Differential Equations May 11, 2007 by Alen Agheksanterian . Introduction Math 699 Reading Course, Spring 007 Rouben Rostamian Homogenization of ifferential Equations May, 007 by Alen Agheksanterian In this brief note, we will use several results from functional

More information

On the Weak Convergence of the Extragradient Method for Solving Pseudo-Monotone Variational Inequalities

On the Weak Convergence of the Extragradient Method for Solving Pseudo-Monotone Variational Inequalities J Optim Theory Appl 208) 76:399 409 https://doi.org/0.007/s0957-07-24-0 On the Weak Convergence of the Extragradient Method for Solving Pseudo-Monotone Variational Inequalities Phan Tu Vuong Received:

More information

Regularization Parameter Selection Methods for Ill-Posed Poisson Maximum Likelihood Estimation

Regularization Parameter Selection Methods for Ill-Posed Poisson Maximum Likelihood Estimation Regularization Parameter Selection Methods for Ill-Posed Poisson Maximum Likelihood Estimation Johnathan M. Bardsley and John Goldes Department of Mathematical Sciences University of Montana Missoula,

More information

REGULARIZATION METHODS FOR ILL- POSED POISSON IMAGING

REGULARIZATION METHODS FOR ILL- POSED POISSON IMAGING University of Montana ScholarWorks at University of Montana Graduate Student Theses, Dissertations, & Professional Papers Graduate School 2008 REGULARIZATION METHODS FOR ILL- POSED POISSON IMAGING N'Djekornom

More information

INDUSTRIAL MATHEMATICS INSTITUTE. B.S. Kashin and V.N. Temlyakov. IMI Preprint Series. Department of Mathematics University of South Carolina

INDUSTRIAL MATHEMATICS INSTITUTE. B.S. Kashin and V.N. Temlyakov. IMI Preprint Series. Department of Mathematics University of South Carolina INDUSTRIAL MATHEMATICS INSTITUTE 2007:08 A remark on compressed sensing B.S. Kashin and V.N. Temlyakov IMI Preprint Series Department of Mathematics University of South Carolina A remark on compressed

More information

WEAK CONVERGENCE OF RESOLVENTS OF MAXIMAL MONOTONE OPERATORS AND MOSCO CONVERGENCE

WEAK CONVERGENCE OF RESOLVENTS OF MAXIMAL MONOTONE OPERATORS AND MOSCO CONVERGENCE Fixed Point Theory, Volume 6, No. 1, 2005, 59-69 http://www.math.ubbcluj.ro/ nodeacj/sfptcj.htm WEAK CONVERGENCE OF RESOLVENTS OF MAXIMAL MONOTONE OPERATORS AND MOSCO CONVERGENCE YASUNORI KIMURA Department

More information

AN NONNEGATIVELY CONSTRAINED ITERATIVE METHOD FOR POSITRON EMISSION TOMOGRAPHY. Johnathan M. Bardsley

AN NONNEGATIVELY CONSTRAINED ITERATIVE METHOD FOR POSITRON EMISSION TOMOGRAPHY. Johnathan M. Bardsley Volume X, No. 0X, 0X, X XX Web site: http://www.aimsciences.org AN NONNEGATIVELY CONSTRAINED ITERATIVE METHOD FOR POSITRON EMISSION TOMOGRAPHY Johnathan M. Bardsley Department of Mathematical Sciences

More information

Sparsest Solutions of Underdetermined Linear Systems via l q -minimization for 0 < q 1

Sparsest Solutions of Underdetermined Linear Systems via l q -minimization for 0 < q 1 Sparsest Solutions of Underdetermined Linear Systems via l q -minimization for 0 < q 1 Simon Foucart Department of Mathematics Vanderbilt University Nashville, TN 3740 Ming-Jun Lai Department of Mathematics

More information

ON A HYBRID PROXIMAL POINT ALGORITHM IN BANACH SPACES

ON A HYBRID PROXIMAL POINT ALGORITHM IN BANACH SPACES U.P.B. Sci. Bull., Series A, Vol. 80, Iss. 3, 2018 ISSN 1223-7027 ON A HYBRID PROXIMAL POINT ALGORITHM IN BANACH SPACES Vahid Dadashi 1 In this paper, we introduce a hybrid projection algorithm for a countable

More information

AW -Convergence and Well-Posedness of Non Convex Functions

AW -Convergence and Well-Posedness of Non Convex Functions Journal of Convex Analysis Volume 10 (2003), No. 2, 351 364 AW -Convergence Well-Posedness of Non Convex Functions Silvia Villa DIMA, Università di Genova, Via Dodecaneso 35, 16146 Genova, Italy villa@dima.unige.it

More information

Levenberg-Marquardt method in Banach spaces with general convex regularization terms

Levenberg-Marquardt method in Banach spaces with general convex regularization terms Levenberg-Marquardt method in Banach spaces with general convex regularization terms Qinian Jin Hongqi Yang Abstract We propose a Levenberg-Marquardt method with general uniformly convex regularization

More information

A New Estimate of Restricted Isometry Constants for Sparse Solutions

A New Estimate of Restricted Isometry Constants for Sparse Solutions A New Estimate of Restricted Isometry Constants for Sparse Solutions Ming-Jun Lai and Louis Y. Liu January 12, 211 Abstract We show that as long as the restricted isometry constant δ 2k < 1/2, there exist

More information

Necessary conditions for convergence rates of regularizations of optimal control problems

Necessary conditions for convergence rates of regularizations of optimal control problems Necessary conditions for convergence rates of regularizations of optimal control problems Daniel Wachsmuth and Gerd Wachsmuth Johann Radon Institute for Computational and Applied Mathematics RICAM), Austrian

More information

Existence of Minimizers for Fractional Variational Problems Containing Caputo Derivatives

Existence of Minimizers for Fractional Variational Problems Containing Caputo Derivatives Advances in Dynamical Systems and Applications ISSN 0973-5321, Volume 8, Number 1, pp. 3 12 (2013) http://campus.mst.edu/adsa Existence of Minimizers for Fractional Variational Problems Containing Caputo

More information

ANALYSIS OF THE TV REGULARIZATION AND H 1 FIDELITY MODEL FOR DECOMPOSING AN IMAGE INTO CARTOON PLUS TEXTURE. C.M. Elliott and S.A.

ANALYSIS OF THE TV REGULARIZATION AND H 1 FIDELITY MODEL FOR DECOMPOSING AN IMAGE INTO CARTOON PLUS TEXTURE. C.M. Elliott and S.A. COMMUNICATIONS ON Website: http://aimsciences.org PURE AND APPLIED ANALYSIS Volume 6, Number 4, December 27 pp. 917 936 ANALYSIS OF THE TV REGULARIZATION AND H 1 FIDELITY MODEL FOR DECOMPOSING AN IMAGE

More information

A NEW ITERATIVE METHOD FOR THE SPLIT COMMON FIXED POINT PROBLEM IN HILBERT SPACES. Fenghui Wang

A NEW ITERATIVE METHOD FOR THE SPLIT COMMON FIXED POINT PROBLEM IN HILBERT SPACES. Fenghui Wang A NEW ITERATIVE METHOD FOR THE SPLIT COMMON FIXED POINT PROBLEM IN HILBERT SPACES Fenghui Wang Department of Mathematics, Luoyang Normal University, Luoyang 470, P.R. China E-mail: wfenghui@63.com ABSTRACT.

More information

A New Look at First Order Methods Lifting the Lipschitz Gradient Continuity Restriction

A New Look at First Order Methods Lifting the Lipschitz Gradient Continuity Restriction A New Look at First Order Methods Lifting the Lipschitz Gradient Continuity Restriction Marc Teboulle School of Mathematical Sciences Tel Aviv University Joint work with H. Bauschke and J. Bolte Optimization

More information

ENERGY METHODS IN IMAGE PROCESSING WITH EDGE ENHANCEMENT

ENERGY METHODS IN IMAGE PROCESSING WITH EDGE ENHANCEMENT ENERGY METHODS IN IMAGE PROCESSING WITH EDGE ENHANCEMENT PRASHANT ATHAVALE Abstract. Digital images are can be realized as L 2 (R 2 objects. Noise is introduced in a digital image due to various reasons.

More information

Near Ideal Behavior of a Modified Elastic Net Algorithm in Compressed Sensing

Near Ideal Behavior of a Modified Elastic Net Algorithm in Compressed Sensing Near Ideal Behavior of a Modified Elastic Net Algorithm in Compressed Sensing M. Vidyasagar Cecil & Ida Green Chair The University of Texas at Dallas M.Vidyasagar@utdallas.edu www.utdallas.edu/ m.vidyasagar

More information

Morozov s discrepancy principle for Tikhonov-type functionals with non-linear operators

Morozov s discrepancy principle for Tikhonov-type functionals with non-linear operators Morozov s discrepancy principle for Tikhonov-type functionals with non-linear operators Stephan W Anzengruber 1 and Ronny Ramlau 1,2 1 Johann Radon Institute for Computational and Applied Mathematics,

More information

PDEs in Image Processing, Tutorials

PDEs in Image Processing, Tutorials PDEs in Image Processing, Tutorials Markus Grasmair Vienna, Winter Term 2010 2011 Direct Methods Let X be a topological space and R: X R {+ } some functional. following definitions: The mapping R is lower

More information

THEOREMS, ETC., FOR MATH 516

THEOREMS, ETC., FOR MATH 516 THEOREMS, ETC., FOR MATH 516 Results labeled Theorem Ea.b.c (or Proposition Ea.b.c, etc.) refer to Theorem c from section a.b of Evans book (Partial Differential Equations). Proposition 1 (=Proposition

More information

Convergence rates in l 1 -regularization when the basis is not smooth enough

Convergence rates in l 1 -regularization when the basis is not smooth enough Convergence rates in l 1 -regularization when the basis is not smooth enough Jens Flemming, Markus Hegland November 29, 2013 Abstract Sparsity promoting regularization is an important technique for signal

More information

On Total Convexity, Bregman Projections and Stability in Banach Spaces

On Total Convexity, Bregman Projections and Stability in Banach Spaces Journal of Convex Analysis Volume 11 (2004), No. 1, 1 16 On Total Convexity, Bregman Projections and Stability in Banach Spaces Elena Resmerita Department of Mathematics, University of Haifa, 31905 Haifa,

More information

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 11, NOVEMBER On the Performance of Sparse Recovery

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 11, NOVEMBER On the Performance of Sparse Recovery IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 11, NOVEMBER 2011 7255 On the Performance of Sparse Recovery Via `p-minimization (0 p 1) Meng Wang, Student Member, IEEE, Weiyu Xu, and Ao Tang, Senior

More information

Convergence Rates in Regularization for Nonlinear Ill-Posed Equations Involving m-accretive Mappings in Banach Spaces

Convergence Rates in Regularization for Nonlinear Ill-Posed Equations Involving m-accretive Mappings in Banach Spaces Applied Mathematical Sciences, Vol. 6, 212, no. 63, 319-3117 Convergence Rates in Regularization for Nonlinear Ill-Posed Equations Involving m-accretive Mappings in Banach Spaces Nguyen Buong Vietnamese

More information

POISSON noise, also known as photon noise, is a basic

POISSON noise, also known as photon noise, is a basic IEEE SIGNAL PROCESSING LETTERS, VOL. N, NO. N, JUNE 2016 1 A fast and effective method for a Poisson denoising model with total variation Wei Wang and Chuanjiang He arxiv:1609.05035v1 [math.oc] 16 Sep

More information

An Alternative Proof of Primitivity of Indecomposable Nonnegative Matrices with a Positive Trace

An Alternative Proof of Primitivity of Indecomposable Nonnegative Matrices with a Positive Trace An Alternative Proof of Primitivity of Indecomposable Nonnegative Matrices with a Positive Trace Takao Fujimoto Abstract. This research memorandum is aimed at presenting an alternative proof to a well

More information

Some Properties of the Augmented Lagrangian in Cone Constrained Optimization

Some Properties of the Augmented Lagrangian in Cone Constrained Optimization MATHEMATICS OF OPERATIONS RESEARCH Vol. 29, No. 3, August 2004, pp. 479 491 issn 0364-765X eissn 1526-5471 04 2903 0479 informs doi 10.1287/moor.1040.0103 2004 INFORMS Some Properties of the Augmented

More information

Generalized Orthogonal Matching Pursuit- A Review and Some

Generalized Orthogonal Matching Pursuit- A Review and Some Generalized Orthogonal Matching Pursuit- A Review and Some New Results Department of Electronics and Electrical Communication Engineering Indian Institute of Technology, Kharagpur, INDIA Table of Contents

More information

Weak-Star Convergence of Convex Sets

Weak-Star Convergence of Convex Sets Journal of Convex Analysis Volume 13 (2006), No. 3+4, 711 719 Weak-Star Convergence of Convex Sets S. Fitzpatrick A. S. Lewis ORIE, Cornell University, Ithaca, NY 14853, USA aslewis@orie.cornell.edu www.orie.cornell.edu/

More information

Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping.

Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping. Minimization Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping. 1 Minimization A Topological Result. Let S be a topological

More information

8 The SVD Applied to Signal and Image Deblurring

8 The SVD Applied to Signal and Image Deblurring 8 The SVD Applied to Signal and Image Deblurring We will discuss the restoration of one-dimensional signals and two-dimensional gray-scale images that have been contaminated by blur and noise. After an

More information

AN EFFICIENT COMPUTATIONAL METHOD FOR TOTAL VARIATION-PENALIZED POISSON LIKELIHOOD ESTIMATION. Johnathan M. Bardsley

AN EFFICIENT COMPUTATIONAL METHOD FOR TOTAL VARIATION-PENALIZED POISSON LIKELIHOOD ESTIMATION. Johnathan M. Bardsley Volume X, No. 0X, 200X, X XX Web site: http://www.aimsciences.org AN EFFICIENT COMPUTATIONAL METHOD FOR TOTAL VARIATION-PENALIZED POISSON LIKELIHOOD ESTIMATION Johnathan M. Bardsley Department of Mathematical

More information

A convergence result for an Outer Approximation Scheme

A convergence result for an Outer Approximation Scheme A convergence result for an Outer Approximation Scheme R. S. Burachik Engenharia de Sistemas e Computação, COPPE-UFRJ, CP 68511, Rio de Janeiro, RJ, CEP 21941-972, Brazil regi@cos.ufrj.br J. O. Lopes Departamento

More information

Well-posedness for generalized mixed vector variational-like inequality problems in Banach space

Well-posedness for generalized mixed vector variational-like inequality problems in Banach space MATHEMATICAL COMMUNICATIONS 287 Math. Commun. 22(2017), 287 302 Well-posedness for generalized mixed vector variational-like inequality problems in Banach space Anurag Jayswal and Shalini Jha Department

More information

Two-Step Iteration Scheme for Nonexpansive Mappings in Banach Space

Two-Step Iteration Scheme for Nonexpansive Mappings in Banach Space Mathematica Moravica Vol. 19-1 (2015), 95 105 Two-Step Iteration Scheme for Nonexpansive Mappings in Banach Space M.R. Yadav Abstract. In this paper, we introduce a new two-step iteration process to approximate

More information

Stochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions

Stochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions International Journal of Control Vol. 00, No. 00, January 2007, 1 10 Stochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions I-JENG WANG and JAMES C.

More information

Weak and Strong Convergence Theorems for a Finite Family of Generalized Asymptotically Quasi-Nonexpansive Nonself-Mappings

Weak and Strong Convergence Theorems for a Finite Family of Generalized Asymptotically Quasi-Nonexpansive Nonself-Mappings Int. J. Nonlinear Anal. Appl. 3 (2012) No. 1, 9-16 ISSN: 2008-6822 (electronic) http://www.ijnaa.semnan.ac.ir Weak and Strong Convergence Theorems for a Finite Family of Generalized Asymptotically Quasi-Nonexpansive

More information

Optimization for Compressed Sensing

Optimization for Compressed Sensing Optimization for Compressed Sensing Robert J. Vanderbei 2014 March 21 Dept. of Industrial & Systems Engineering University of Florida http://www.princeton.edu/ rvdb Lasso Regression The problem is to solve

More information

Robust error estimates for regularization and discretization of bang-bang control problems

Robust error estimates for regularization and discretization of bang-bang control problems Robust error estimates for regularization and discretization of bang-bang control problems Daniel Wachsmuth September 2, 205 Abstract We investigate the simultaneous regularization and discretization of

More information

APPLICATIONS OF A NONNEGATIVELY CONSTRAINED ITERATIVE METHOD WITH STATISTICALLY BASED STOPPING RULES TO CT, PET, AND SPECT IMAGING

APPLICATIONS OF A NONNEGATIVELY CONSTRAINED ITERATIVE METHOD WITH STATISTICALLY BASED STOPPING RULES TO CT, PET, AND SPECT IMAGING APPLICATIONS OF A NONNEGATIVELY CONSTRAINED ITERATIVE METHOD WITH STATISTICALLY BASED STOPPING RULES TO CT, PET, AND SPECT IMAGING JOHNATHAN M. BARDSLEY Abstract. In this paper, we extend a nonnegatively

More information

MOSCO STABILITY OF PROXIMAL MAPPINGS IN REFLEXIVE BANACH SPACES

MOSCO STABILITY OF PROXIMAL MAPPINGS IN REFLEXIVE BANACH SPACES MOSCO STABILITY OF PROXIMAL MAPPINGS IN REFLEXIVE BANACH SPACES Dan Butnariu and Elena Resmerita Abstract. In this paper we establish criteria for the stability of the proximal mapping Prox f ϕ =( ϕ+ f)

More information

Local strong convexity and local Lipschitz continuity of the gradient of convex functions

Local strong convexity and local Lipschitz continuity of the gradient of convex functions Local strong convexity and local Lipschitz continuity of the gradient of convex functions R. Goebel and R.T. Rockafellar May 23, 2007 Abstract. Given a pair of convex conjugate functions f and f, we investigate

More information

Inverse problem and optimization

Inverse problem and optimization Inverse problem and optimization Laurent Condat, Nelly Pustelnik CNRS, Gipsa-lab CNRS, Laboratoire de Physique de l ENS de Lyon Decembre, 15th 2016 Inverse problem and optimization 2/36 Plan 1. Examples

More information

Stability and Robustness of Weak Orthogonal Matching Pursuits

Stability and Robustness of Weak Orthogonal Matching Pursuits Stability and Robustness of Weak Orthogonal Matching Pursuits Simon Foucart, Drexel University Abstract A recent result establishing, under restricted isometry conditions, the success of sparse recovery

More information

6 The SVD Applied to Signal and Image Deblurring

6 The SVD Applied to Signal and Image Deblurring 6 The SVD Applied to Signal and Image Deblurring We will discuss the restoration of one-dimensional signals and two-dimensional gray-scale images that have been contaminated by blur and noise. After an

More information

Compressive Sensing with Random Matrices

Compressive Sensing with Random Matrices Compressive Sensing with Random Matrices Lucas Connell University of Georgia 9 November 017 Lucas Connell (University of Georgia) Compressive Sensing with Random Matrices 9 November 017 1 / 18 Overview

More information

1-Bit Compressive Sensing

1-Bit Compressive Sensing 1-Bit Compressive Sensing Petros T. Boufounos, Richard G. Baraniuk Rice University, Electrical and Computer Engineering 61 Main St. MS 38, Houston, TX 775 Abstract Compressive sensing is a new signal acquisition

More information

RSP-Based Analysis for Sparsest and Least l 1 -Norm Solutions to Underdetermined Linear Systems

RSP-Based Analysis for Sparsest and Least l 1 -Norm Solutions to Underdetermined Linear Systems 1 RSP-Based Analysis for Sparsest and Least l 1 -Norm Solutions to Underdetermined Linear Systems Yun-Bin Zhao IEEE member Abstract Recently, the worse-case analysis, probabilistic analysis and empirical

More information

A Double Regularization Approach for Inverse Problems with Noisy Data and Inexact Operator

A Double Regularization Approach for Inverse Problems with Noisy Data and Inexact Operator A Double Regularization Approach for Inverse Problems with Noisy Data and Inexact Operator Ismael Rodrigo Bleyer Prof. Dr. Ronny Ramlau Johannes Kepler Universität - Linz Florianópolis - September, 2011.

More information

Spectral theory for compact operators on Banach spaces

Spectral theory for compact operators on Banach spaces 68 Chapter 9 Spectral theory for compact operators on Banach spaces Recall that a subset S of a metric space X is precompact if its closure is compact, or equivalently every sequence contains a Cauchy

More information

of Orthogonal Matching Pursuit

of Orthogonal Matching Pursuit A Sharp Restricted Isometry Constant Bound of Orthogonal Matching Pursuit Qun Mo arxiv:50.0708v [cs.it] 8 Jan 205 Abstract We shall show that if the restricted isometry constant (RIC) δ s+ (A) of the measurement

More information

Sparsity Regularization for Image Reconstruction with Poisson Data

Sparsity Regularization for Image Reconstruction with Poisson Data Sparsity Regularization for Image Reconstruction with Poisson Data Daniel J. Lingenfelter a, Jeffrey A. Fessler a,andzhonghe b a Electrical Engineering and Computer Science, University of Michigan, Ann

More information

Fast Angular Synchronization for Phase Retrieval via Incomplete Information

Fast Angular Synchronization for Phase Retrieval via Incomplete Information Fast Angular Synchronization for Phase Retrieval via Incomplete Information Aditya Viswanathan a and Mark Iwen b a Department of Mathematics, Michigan State University; b Department of Mathematics & Department

More information

PYTHAGOREAN PARAMETERS AND NORMAL STRUCTURE IN BANACH SPACES

PYTHAGOREAN PARAMETERS AND NORMAL STRUCTURE IN BANACH SPACES PYTHAGOREAN PARAMETERS AND NORMAL STRUCTURE IN BANACH SPACES HONGWEI JIAO Department of Mathematics Henan Institute of Science and Technology Xinxiang 453003, P.R. China. EMail: hongwjiao@163.com BIJUN

More information

THE CYCLIC DOUGLAS RACHFORD METHOD FOR INCONSISTENT FEASIBILITY PROBLEMS

THE CYCLIC DOUGLAS RACHFORD METHOD FOR INCONSISTENT FEASIBILITY PROBLEMS THE CYCLIC DOUGLAS RACHFORD METHOD FOR INCONSISTENT FEASIBILITY PROBLEMS JONATHAN M. BORWEIN AND MATTHEW K. TAM Abstract. We analyse the behaviour of the newly introduced cyclic Douglas Rachford algorithm

More information

MAT 544 Problem Set 2 Solutions

MAT 544 Problem Set 2 Solutions MAT 544 Problem Set 2 Solutions Problems. Problem 1 A metric space is separable if it contains a dense subset which is finite or countably infinite. Prove that every totally bounded metric space X is separable.

More information

On Penalty and Gap Function Methods for Bilevel Equilibrium Problems

On Penalty and Gap Function Methods for Bilevel Equilibrium Problems On Penalty and Gap Function Methods for Bilevel Equilibrium Problems Bui Van Dinh 1 and Le Dung Muu 2 1 Faculty of Information Technology, Le Quy Don Technical University, Hanoi, Vietnam 2 Institute of

More information

On the simplest expression of the perturbed Moore Penrose metric generalized inverse

On the simplest expression of the perturbed Moore Penrose metric generalized inverse Annals of the University of Bucharest (mathematical series) 4 (LXII) (2013), 433 446 On the simplest expression of the perturbed Moore Penrose metric generalized inverse Jianbing Cao and Yifeng Xue Communicated

More information

8 The SVD Applied to Signal and Image Deblurring

8 The SVD Applied to Signal and Image Deblurring 8 The SVD Applied to Signal and Image Deblurring We will discuss the restoration of one-dimensional signals and two-dimensional gray-scale images that have been contaminated by blur and noise. After an

More information

A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE

A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE Journal of Applied Analysis Vol. 6, No. 1 (2000), pp. 139 148 A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE A. W. A. TAHA Received

More information

Viscosity Iterative Approximating the Common Fixed Points of Non-expansive Semigroups in Banach Spaces

Viscosity Iterative Approximating the Common Fixed Points of Non-expansive Semigroups in Banach Spaces Viscosity Iterative Approximating the Common Fixed Points of Non-expansive Semigroups in Banach Spaces YUAN-HENG WANG Zhejiang Normal University Department of Mathematics Yingbing Road 688, 321004 Jinhua

More information

Non-negative Quadratic Programming Total Variation Regularization for Poisson Vector-Valued Image Restoration

Non-negative Quadratic Programming Total Variation Regularization for Poisson Vector-Valued Image Restoration University of New Mexico UNM Digital Repository Electrical & Computer Engineering Technical Reports Engineering Publications 5-10-2011 Non-negative Quadratic Programming Total Variation Regularization

More information

ON THE ESSENTIAL BOUNDEDNESS OF SOLUTIONS TO PROBLEMS IN PIECEWISE LINEAR-QUADRATIC OPTIMAL CONTROL. R.T. Rockafellar*

ON THE ESSENTIAL BOUNDEDNESS OF SOLUTIONS TO PROBLEMS IN PIECEWISE LINEAR-QUADRATIC OPTIMAL CONTROL. R.T. Rockafellar* ON THE ESSENTIAL BOUNDEDNESS OF SOLUTIONS TO PROBLEMS IN PIECEWISE LINEAR-QUADRATIC OPTIMAL CONTROL R.T. Rockafellar* Dedicated to J-L. Lions on his 60 th birthday Abstract. Primal and dual problems of

More information

The Generalized Viscosity Implicit Rules of Asymptotically Nonexpansive Mappings in Hilbert Spaces

The Generalized Viscosity Implicit Rules of Asymptotically Nonexpansive Mappings in Hilbert Spaces Applied Mathematical Sciences, Vol. 11, 2017, no. 12, 549-560 HIKARI Ltd, www.m-hikari.com https://doi.org/10.12988/ams.2017.718 The Generalized Viscosity Implicit Rules of Asymptotically Nonexpansive

More information

Existence and Approximation of Fixed Points of. Bregman Nonexpansive Operators. Banach Spaces

Existence and Approximation of Fixed Points of. Bregman Nonexpansive Operators. Banach Spaces Existence and Approximation of Fixed Points of in Reflexive Banach Spaces Department of Mathematics The Technion Israel Institute of Technology Haifa 22.07.2010 Joint work with Prof. Simeon Reich General

More information

SPARSE signal representations have gained popularity in recent
