Laplace-distributed increments, the Laplace prior, and edge-preserving regularization

Johnathan M. Bardsley

Abstract. For a given two-dimensional image, we define the horizontal and vertical increments at a pixel location to be the differences between the intensity value at that pixel and the intensity values at the neighboring pixels to the right and above, respectively. For a typical image, it makes intuitive sense that the increments will usually be near zero, corresponding to areas of smooth variation in image intensity, but will often have large magnitude, corresponding to edges, where sharp intensity changes occur. In this paper, we explore the use of the Laplace increment model, in which the increments are assumed to be independent and identically distributed, zero-mean Laplace random variables; the Laplace distribution has heavy tails, allowing for large increment values. The prior constructed from the Laplace increment model is very similar to the total variation (TV) prior. We perform a theoretical analysis of its properties, which shows that the Laplace prior yields a regularization scheme whose regularized solutions are contained in the space of bounded variation, just as for the TV prior. Moreover, numerical experiments indicate that the Laplace prior yields reconstructions that are qualitatively very similar to those obtained using TV.

Keywords. Inverse problems, regularization, Bayesian inference, total variation, Markov random fields.

2010 Mathematics Subject Classification. 15A29, 65F22, 65C.

1 Introduction

In this paper, we focus on linear inverse problems that can be modeled as a Fredholm integral equation of the first kind, which after numerical discretization yields a system of linear equations of the form $b = Ax$, where $b \in \mathbb{R}^N$ is the observed data, $x$ is the $N \times 1$ vector of unknowns, and $A$ is an $N \times N$ matrix whose singular values decay continuously to zero, so that $A$ is ill-conditioned. In practice, $b$ contains random noise.

The most common choice of noise model is independent and identically distributed (iid) Gaussian, i.e.,

$$b = Ax + \eta, \tag{1.1}$$

where $\eta$ is a Gaussian random vector with components satisfying $\eta_i \sim N(0, \lambda^{-1})$ for all pixels $i$; here the inverse-variance parameter $\lambda$ is known as the precision. Another common noise model, used in both astronomical and medical imaging, is Poisson:

$$b = \mathrm{Poisson}(Ax + \gamma), \tag{1.2}$$

where $\gamma$ is the $N \times 1$ vector of background counts and is assumed known. The probability density functions for (1.1) and (1.2) are given, respectively, by

$$p(b \mid x) \propto \exp\left( -\frac{\lambda}{2} \|Ax - b\|^2 \right), \tag{1.3}$$

$$p(b \mid x) \propto \exp\left( \sum_{j=1}^{N} \left\{ -([Ax]_j + \gamma_j) + b_j \ln([Ax]_j + \gamma_j) \right\} \right), \tag{1.4}$$

where $\propto$ denotes proportionality. In both cases, due to the properties of the matrix $A$, the maximum likelihood estimator

$$x_{\rm ML} = \arg\max_x\; p(b \mid x)$$

is unstable with respect to the noise in the data $b$. Such instability is characteristic of inverse problems, and it stems from the fact that the matrix $A$ is the numerical discretization of a compact operator, defined on a function space, whose singular values decay continuously to zero. The standard technique for overcoming this instability is regularization. For general discussions of inverse problems and techniques for regularization, from both numerical and functional-analytic points of view, see one of the many excellent texts on the subject, e.g., [10, 13, 15, 19].

In the context of Bayesian statistics, regularization corresponds to the choice of the prior probability density function. Bayes' theorem states that given $p(b \mid x)$ and an assumed prior probability density function $p(x)$, the posterior probability density function $p(x \mid b)$ can be written

$$p(x \mid b) \propto p(b \mid x)\, p(x). \tag{1.5}$$

One can then obtain a stable reconstructed image by computing the maximizer of the posterior density, the so-called maximum a posteriori (MAP) estimator, via

$$x_{\rm MAP} = \arg\min_x\; \left\{ -\ln p(b \mid x) - \ln p(x) \right\}. \tag{1.6}$$

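To make the estimators above concrete, the following minimal numpy sketch (not from the paper; the helper names are ours) evaluates the negative log-likelihoods corresponding to (1.3) and (1.4), up to additive constants, and assembles the MAP objective of (1.6):

```python
import numpy as np

def neg_log_gaussian(x, A, b, lam):
    # Negative log-likelihood for the Gaussian model (1.3), up to a constant.
    r = A @ x - b
    return 0.5 * lam * (r @ r)

def neg_log_poisson(x, A, b, gamma):
    # Negative log-likelihood for the Poisson model (1.4), up to a constant;
    # assumes [Ax]_j + gamma_j > 0 for all j.
    z = A @ x + gamma
    return np.sum(z - b * np.log(z))

def map_objective(x, neg_log_like, neg_log_prior):
    # The MAP objective of (1.6): minimizing this over x gives x_MAP.
    return neg_log_like(x) + neg_log_prior(x)
```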
In this paper, we obtain the prior by assuming that the increments (i.e., the differences between neighboring pixel values) are iid Laplace distributed. This approach is known as conditional autoregression [8], and it defines a Markov random field prior $p(x)$.

We begin with the one-dimensional case, in which the increments are defined by $\partial x_i = x_i - x_{i-1}$. Our assumption is then that

$$\partial x_i \sim \mathrm{Laplace}(0, \delta^{-1}), \quad i = 2, \dots, n, \tag{1.7}$$

where $\mathrm{Laplace}(\mu, \delta^{-1})$ has probability density function

$$p(x \mid \mu, \delta^{-1}) = \frac{\delta}{2} \exp\left( -\delta |x - \mu| \right).$$

Because of our assumption of independence, the joint density for $x$ (i.e., the prior) is given by

$$p(x) \propto \exp\left( -\delta \sum_{i=2}^{n} |x_i - x_{i-1}| \right) = \exp\left( -\delta \|Dx\|_1 \right), \tag{1.8}$$

where $\|\cdot\|_1$ denotes the $\ell^1$-norm and $D$ is the forward difference matrix with periodic boundary conditions, i.e.,

$$D = \begin{bmatrix} -1 & 1 & & \\ & -1 & 1 & \\ & & \ddots & \ddots \\ 1 & & & -1 \end{bmatrix} \in \mathbb{R}^{n \times n}.$$

We note that a Neumann or Dirichlet boundary condition can also be assumed here, which corresponds to a modification of the matrix $D$.

The assumption of Laplace-distributed increments is motivated by the fact that in many signals the increments are typically small, but outliers (large increments) are not uncommon. Because the Laplace distribution has heavy tails, large increments (outliers) are much more probable than if a Gaussian increment model (as in [18]) is assumed. We note that the Gaussian increment model leads to the standard regularization function $\delta\, x^T L x$, where $L$ is the discretized negative-Laplacian matrix. To illustrate the difference between the Laplace and Gaussian probability densities, we plot them together in Figure 1.

Figure 1. The Laplace and normal probability density functions, each with mean 0 and variance 1.

The prior (1.8) yields total variation regularization [19]. The connection between the Laplace increment model (1.7) and total variation regularization for 1D signals is discussed in some detail in [12].

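As an illustration of (1.8), here is a minimal numpy sketch (function name ours) that evaluates the negative log-prior $\delta \|Dx\|_1$ under the periodic boundary condition above:

```python
import numpy as np

def neg_log_prior_1d(x, delta):
    # Negative log of the 1D Laplace increment prior (1.8), up to an
    # additive constant: delta * ||D x||_1 with periodic differences.
    dx = np.roll(x, -1) - x        # (Dx)_i = x_{i+1} - x_i, with wrap-around
    return delta * np.abs(dx).sum()
```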
In this paper, we explore the use of the Laplace increment model for two-dimensional (2D) signals, which yields a regularization that is not 2D total variation but gives very similar results. Indeed, we will show that, viewed in the function space setting, these two regularization functions yield convergent regularization schemes with regularized solutions lying in the space of bounded variation.

The paper is organized as follows. In the next section, we present the Laplace increment model for 2D signals, together with a brief theoretical analysis showing that the resulting prior yields a convergent regularization scheme with solutions lying in the space of bounded variation. In Section 3, we implement the regularization on problems from image deblurring and computed tomography. Finally, we end with conclusions in Section 4.

2 A Laplacian Increment Model for 2D Signals

We begin by defining the horizontal and vertical increments, respectively, as

$$\partial^h x_{ij} = x_{i+1,j} - x_{ij} \quad \text{and} \quad \partial^v x_{ij} = x_{i,j+1} - x_{ij},$$

and assume

$$\partial^h x_{ij},\ \partial^v x_{ij} \overset{\text{iid}}{\sim} \mathrm{Laplace}(0, \delta^{-1}), \quad i, j = 1, \dots, n-1. \tag{2.1}$$

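For concreteness, a short numpy sketch (ours, not the paper's code) of the increment arrays in (2.1) for an $n \times n$ image stored as a 2D array:

```python
import numpy as np

def increments(X):
    # Horizontal and vertical increments of an n-by-n image X, as in (2.1):
    # dh[i, j] = X[i+1, j] - X[i, j],  dv[i, j] = X[i, j+1] - X[i, j].
    dh = X[1:, :] - X[:-1, :]
    dv = X[:, 1:] - X[:, :-1]
    return dh, dv
```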
Then the probability density function for $x$ has the form (see [18, Chapter 3])

$$p(x) \propto \exp\left( -\frac{\delta}{2} \left[ \sum_{j=1}^{n} \sum_{i=1}^{n-1} |x_{i+1,j} - x_{ij}| + \sum_{i=1}^{n} \sum_{j=1}^{n-1} |x_{i,j+1} - x_{ij}| \right] \right)$$

$$= \exp\left( -\frac{\delta}{2} \left( \|(D \otimes I)x\|_1 + \|(I \otimes D)x\|_1 \right) \right) = \exp\left( -\frac{\delta}{2} \left( \|D_v x\|_1 + \|D_h x\|_1 \right) \right), \tag{2.2}$$

where $\otimes$ denotes the Kronecker product, $D_v = D \otimes I$ is the discrete vertical derivative, and $D_h = I \otimes D$ is the discrete horizontal derivative. Note that, in contrast, the 2D total variation prior has the form

$$p(x) \propto \exp\left( -\frac{\delta}{2} \sum_{i=1}^{n^2} \sqrt{[D_v x]_i^2 + [D_h x]_i^2} \right). \tag{2.3}$$

It is well known that when the total variation prior is used, the resulting reconstructed images have a cartoon texture; in other words, they are approximately piecewise constant. In the next subsection, we present a theoretical analysis that shows that (2.2) can be expected to yield reconstructed images that are qualitatively similar to those obtained using total variation.

One possible benefit of (2.2) over (2.3) is that different regularization parameters could be used for the vertical and horizontal increment terms, i.e., $\delta_v$ and $\delta_h$ rather than a single $\delta$, which would penalize the vertical and horizontal increments differently. Such an approach is discussed in the Gaussian case in [18].

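The two priors differ only in how the increments are pooled: (2.2) is an anisotropic $\ell^1$ penalty, while (2.3) couples the two directions isotropically. A sketch of both penalties (helper names ours), assuming column-stacked images and the periodic difference matrices built via Kronecker products as above:

```python
import numpy as np
import scipy.sparse as sp

def diff_ops(n):
    # 1D periodic forward-difference matrix D (Section 1) and the 2D
    # operators D_h = I (kron) D and D_v = D (kron) I used in (2.2).
    D = (sp.eye(n, k=1) - sp.eye(n)).tolil()
    D[-1, 0] = 1.0
    D = D.tocsr()
    I = sp.eye(n, format="csr")
    return sp.kron(I, D, format="csr"), sp.kron(D, I, format="csr")

def laplace_penalty(x, Dh, Dv, delta):
    # Anisotropic negative log-prior from (2.2), up to a constant.
    return 0.5 * delta * (np.abs(Dv @ x).sum() + np.abs(Dh @ x).sum())

def tv_penalty(x, Dh, Dv, delta):
    # Isotropic total variation penalty from (2.3), up to a constant.
    return 0.5 * delta * np.sqrt((Dv @ x) ** 2 + (Dh @ x) ** 2).sum()
```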
2.1 Theoretical Analysis

The theoretical analysis of regularization schemes constitutes a significant portion of the work that has been done in the field of inverse problems; see, e.g., [10] and the references therein. While mathematical inverse problems is an interesting field in its own right, such analysis can also yield insights into a computational regularization method that cannot be obtained otherwise. This fact is perhaps best illustrated by the example of total variation (TV) regularization. In the discretized setting, where computations are done, the reconstructions obtained using TV have striking visual qualities that suggest there is something special about the method (see, e.g., the results in the numerical experiments section). However, what makes TV regularization unique can only be made explicit through a theoretical analysis. In particular, for the Gaussian and Poisson likelihoods, (1.3) and (1.4), it is shown in [1] and [6], respectively, that when TV regularization is used, solutions lie in the Banach space of functions of bounded variation [19].

In this section, we show that the Laplace increment prior (2.2) has the same properties as the TV prior. To do this, we mimic the exposition found in [2]. First, consider the functional analogue of (1.1), namely

$$b = Ax. \tag{2.4}$$

Our application of interest is image processing, so $b \in L^\infty(\Omega)$ denotes the image intensity and $x \in L^2(\Omega)$ the (nonnegative) intensity of the unknown object. Each function is defined on a closed, bounded domain $\Omega \subset \mathbb{R}^d$. Finally,

$$Ax(s) \overset{\text{def}}{=} \int_\Omega a(s; t)\, x(t)\, dt,$$

where $a \in L^\infty(\Omega \times \Omega)$ is the error-free, nonnegative point spread function (PSF). Given these assumptions, $A : L^2(\Omega) \to L^2(\Omega)$ is a compact operator, and hence the problem of solving (2.4) for $x$ is ill-posed [19]. Moreover, $Ax \geq 0$ whenever $x \geq 0$; hence, assuming that the true image satisfies $x_{\rm exact} \geq 0$, the error-free data $b = Ax_{\rm exact}$ is bounded below by zero. Note that here, and in the remainder of the document, we omit "almost everywhere" from the mathematical statements in which its presence is called for.

The functional analogue of the MAP estimation problem (1.6) defines the following operator:

$$R_\alpha(a, b) \overset{\text{def}}{=} \arg\min_{x \in C} \left\{ T_\alpha(Ax; b) \overset{\text{def}}{=} T(Ax; b) + \alpha J(x) \right\}, \tag{2.5}$$

where $C = \{ x \in L^2(\Omega) \mid x \geq 0 \}$, and $\alpha$ and $J$ are the regularization parameter and functional, respectively. For the Gaussian likelihood case,

$$T(Ax; b) \overset{\text{def}}{=} \int_\Omega (Ax - b)^2\, dt, \tag{2.6}$$

while for the Poisson likelihood,

$$T(Ax; b) \overset{\text{def}}{=} \int_\Omega \left( (Ax + \gamma) - b \log(Ax + \gamma) \right) dt. \tag{2.7}$$

In the case of total variation regularization, the regularization functional $J$ has the form

$$J(x) \overset{\text{def}}{=} \sup_{y \in Y} \int_\Omega x\, \nabla \cdot y\, dt, \tag{2.8}$$

where $\nabla \cdot$ is the divergence operator and

$$Y = \left\{ y \in C^1(\Omega; \mathbb{R}^d) : \|y(t)\|_2 \leq 1 \ \text{for all } t \in \Omega \right\}.$$

The functional analogue of the Laplace increment prior (2.2) takes the form

$$J(x) = \sum_{i=1}^{d} \sup_{y \in Y} \int_\Omega x\, \frac{\partial y_i}{\partial t_i}\, dt, \tag{2.9}$$

with $Y$ as above and $t = (t_1, \dots, t_d)$. Note that for $d = 1$ these two definitions coincide. If $x$ is continuously differentiable on $\Omega$, (2.8) and (2.9) take the less intimidating forms [19, Remark 8.1]

$$J(x) = \int_\Omega \|\nabla x\|_2\, dt, \tag{2.10}$$

$$J(x) = \sum_{i=1}^{d} \int_\Omega \left| \frac{\partial x}{\partial t_i} \right| dt, \tag{2.11}$$

respectively, where $\nabla$ is the gradient operator. Note that (2.10) and (2.11) are the functional analogues of (2.3) and (2.2), where $d = 2$.

We can finally define the space of functions of bounded variation [11]:

$$BV(\Omega) = \left\{ x \in L^1(\Omega) : J(x) < +\infty \right\}, \tag{2.12}$$

with $J$ defined by (2.10); $BV(\Omega)$ is a Banach space with norm

$$\|x\|_{BV(\Omega)} = \|x\|_{L^1(\Omega)} + J(x).$$

$R_\alpha$ defines a regularization scheme

In this subsection, we define the notion of a regularization scheme and then prove that $R_\alpha$ defined in (2.5), with $J$ given by (2.11), satisfies the conditions of this definition. We also show that the resulting regularized solutions lie in $BV(\Omega)$.

The regularization operator $R_\alpha$ defined in (2.5) has domain

$$B = \left\{ \hat{a} \in L^\infty(\Omega \times \Omega) \mid \hat{a} \geq 0 \right\} \times \left\{ \hat{z} \in L^\infty(\Omega) \mid \hat{z} \geq 0 \right\},$$

which is a closed subset of the Banach space $L^\infty(\Omega \times \Omega) \times L^\infty(\Omega)$ with induced norm

$$\|(\hat{a}, \hat{z})\|_B \overset{\text{def}}{=} \max\left\{ \|\hat{a}\|_{L^\infty(\Omega \times \Omega)},\ \|\hat{z}\|_{L^\infty(\Omega)} \right\}.$$

To define a regularization scheme, we need a sequence of operator equations

$$b_n = A_n x, \tag{2.13}$$

where $b_n \in L^\infty(\Omega)$ is nonnegative, and

$$A_n x(s) \overset{\text{def}}{=} \int_\Omega a_n(s; t)\, x(t)\, dt,$$

with $a_n \in L^\infty(\Omega \times \Omega)$ a nonnegative point spread function (PSF).

Note, then, that $A_n x \geq 0$ whenever $x \in C$. The functions $b_n$ and $a_n$ should be viewed as approximations of the true data $b$ and PSF $a$, respectively. In astronomy and PET, for example, both the data and the PSF are estimated and hence contain errors. Thus our definition of a regularization scheme, which we present now, accounts for errors in the measurements of both $b$ and $a$.

Definition 2.1. $\{R_\alpha\}_{\alpha > 0}$ defines a regularization scheme on $B$ provided

(i) $R_\alpha$ is well-defined and continuous on $B$ for all $\alpha > 0$;

(ii) given a sequence $\{(a_n, z_n)\}_{n=1}^{\infty} \subset B$ such that $(a_n, z_n) \to (a, z)$ in $\|\cdot\|_B$, there exists a positive sequence $\{\alpha_n\}_{n=1}^{\infty}$ such that

$$\left\| R_{\alpha_n}(a_n, z_n) - x_{\rm exact} \right\|_{L^p(\Omega)} \to 0$$

for some $p \geq 1$.

Next, we state and prove the main result of this section.

Theorem 2.2. Let $R_\alpha : B \to C$ be defined as in (2.5), with $T$ defined by (2.6) or (2.7) and $J$ defined by (2.8) or (2.11). Then $\{R_\alpha\}_{\alpha > 0}$ is a regularization scheme provided the null-space of $A$ does not contain the constant functions. Moreover, in both cases, $\mathrm{Range}(R_\alpha) \subset BV(\Omega)$.

Proof. For $J$ defined by (2.10), the result follows from [1] for the least squares fit-to-data functional (2.6) and from [6] for the negative-log Poisson fit-to-data functional (2.7). For $J$ defined by (2.11), the result for both fit-to-data functionals follows from [1] and [6] together with the fact that

$$\frac{1}{\sqrt{d}} \sum_{i=1}^{d} \int_\Omega \left| \frac{\partial x}{\partial t_i} \right| dt \;\leq\; \int_\Omega \|\nabla x\|_2\, dt \;\leq\; \sum_{i=1}^{d} \int_\Omega \left| \frac{\partial x}{\partial t_i} \right| dt.$$

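The inequality used at the end of the proof is the integrated form of the standard equivalence of the $\ell^1$ and $\ell^2$ norms on $\mathbb{R}^d$; for completeness:

$$\frac{1}{\sqrt{d}}\, \|v\|_1 \;\leq\; \|v\|_2 \;\leq\; \|v\|_1 \quad \text{for all } v \in \mathbb{R}^d,$$

so that, taking $v = \nabla x(t)$ pointwise and integrating over $\Omega$,

$$\frac{1}{\sqrt{d}} \sum_{i=1}^{d} \int_\Omega \left| \frac{\partial x}{\partial t_i} \right| dt \;\leq\; \int_\Omega \|\nabla x\|_2\, dt \;\leq\; \sum_{i=1}^{d} \int_\Omega \left| \frac{\partial x}{\partial t_i} \right| dt.$$

In particular, the functionals (2.10) and (2.11) are equivalent, so the coercivity bounds established for one hold, up to constants, for the other.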
3 Numerical Experiments

The theoretical results of the previous section suggest that the use of the Laplace increment prior will yield regularized solutions that are qualitatively similar to those obtained when total variation regularization is used. We will see in this section that this is indeed the case.

Our reconstruction technique is the extension of the lagged-diffusivity fixed point iteration to the Laplace increment case. First, note that $p(x)$ defined by (2.2) is non-differentiable due to the presence of the absolute values, and hence we use the following differentiable approximation:

$$-\ln p(x) \approx \frac{\delta}{2} \left[ \sum_{j=1}^{n} \sum_{i=1}^{n-1} \psi\big( (x_{i+1,j} - x_{ij})^2 \big) + \sum_{i=1}^{n} \sum_{j=1}^{n-1} \psi\big( (x_{i,j+1} - x_{ij})^2 \big) \right],$$

where $\psi(t) = 2\sqrt{t + \beta}$. This expression has gradient $\delta L(x) x$, where (see [19])

$$L(x) = D_v^T\, \mathrm{diag}\big( \psi'\big( (D_v x)^2 \big) \big)\, D_v + D_h^T\, \mathrm{diag}\big( \psi'\big( (D_h x)^2 \big) \big)\, D_h, \tag{3.1}$$

with the squaring applied componentwise. This motivates the following lagged-diffusivity-type algorithm.

Algorithm 1. Set $k = 0$, $L_0 = D_v^T D_v + D_h^T D_h$, and choose $\alpha > 0$.

(i) Compute $x_\alpha^{k+1} = \arg\min_x \left\{ -\ln p(b \mid x) + \frac{\alpha}{2} x^T L_k x \right\}$.

(ii) Update $L_{k+1} = L(x_\alpha^{k+1})$ via (3.1), set $k = k + 1$, and return to step (i).

Remark 3.1. In practice, $\alpha > 0$ can be chosen at the outset (as in Algorithm 1) using a regularization parameter selection method with regularization matrix $L_0$. It can also be updated every $j$th iteration of Algorithm 1. In either case, we advocate the use of generalized cross validation (GCV), which can be found in [19] for the least squares likelihood and in [5] for the Poisson likelihood.

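A minimal sketch of Algorithm 1 for the Gaussian likelihood (1.3), with the noise precision folded into $\alpha$ and the quadratic subproblem in step (i) solved via its normal equations by (unpreconditioned) conjugate gradients; the function name and defaults are illustrative, not the paper's implementation:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def lagged_diffusivity(A, b, Dh, Dv, alpha, beta=0.1, iters=10):
    # A, Dh, Dv are scipy.sparse matrices; b is the data vector.
    x = np.zeros(A.shape[1])
    L = (Dv.T @ Dv + Dh.T @ Dh).tocsr()                  # L_0
    for _ in range(iters):
        # Step (i): minimize 0.5*||Ax - b||^2 + 0.5*alpha*x'Lx.
        H = (A.T @ A + alpha * L).tocsr()
        x, _ = cg(H, A.T @ b, x0=x)
        # Step (ii): reassemble L(x) from (3.1); psi'(t) = 1/sqrt(t + beta).
        wv = 1.0 / np.sqrt((Dv @ x) ** 2 + beta)
        wh = 1.0 / np.sqrt((Dh @ x) ** 2 + beta)
        L = Dv.T @ sp.diags(wv) @ Dv + Dh.T @ sp.diags(wh) @ Dh
    return x
```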
3.1 Image Deblurring with Gaussian Noise

We now test the above iteration on a two-dimensional image deblurring problem. The forward model, mapping the unknown $x$ to the observation $b$, both defined on $[0,1] \times [0,1]$, has convolution form:

$$b(s_1, s_2) = \int_0^1 \int_0^1 a(s_1 - t_1, s_2 - t_2)\, x(t_1, t_2)\, dt_1\, dt_2.$$

For our experiments, we choose a Gaussian convolution kernel $a$ and discretize the integral using midpoint quadrature on a uniform computational grid over $[0,1] \times [0,1]$. We assume that $x$ extends periodically outside of $[0,1] \times [0,1]$, which after discretization yields a linear system of equations $b = Ax$ in which $A$ has block circulant structure. Thus $A$ can be diagonalized by the discrete Fourier transform (DFT) [19]. The data $b$ is generated using (1.1), with the noise variance $\lambda^{-1}$ chosen so that the noise strength is 2% that of the signal strength. The image used to generate the data and the data itself are shown in Figure 2.

Figure 2. On the left is the two-dimensional image used to generate the data, and on the right is the blurred, noisy data.

Since Gaussian noise is assumed, the likelihood function is given by (1.3), yielding a quadratic minimization problem in step (i) of Algorithm 1. However, since the matrix $L_k$ will not have block circulant structure, the preconditioned conjugate gradient method (PCG) must be used to approximately compute $x_\alpha^{k+1}$. For the preconditioner, we use

$$M = A^T A + \alpha \left( D_h^T D_h + D_v^T D_v \right),$$

which, given our assumptions, is diagonalizable by the DFT; indeed,

$$M^{-1} r = \mathrm{vec}\left( \mathrm{IDFT}\left( \frac{\mathrm{DFT}(R)}{|\hat{a}_s|^2 + \alpha\, \hat{l}_s} \right) \right),$$

where $R$ is an $n \times n$ array; $r = \mathrm{vec}(R)$ stacks the columns of $R$ to create $r$; $\hat{l}_s$ is the $n \times n$ eigenvalue array of $D_h^T D_h + D_v^T D_v$; $\hat{a}_s$ is the $n \times n$ eigenvalue array of $A$; the division is taken componentwise; and IDFT is the inverse discrete Fourier transform. See [19] for more detail on the diagonalization of block circulant matrices by the DFT.

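A sketch of applying $M^{-1}$ via the 2D FFT (a hypothetical helper using numpy's FFT), assuming the $n \times n$ eigenvalue arrays of $A$ and of $D_h^T D_h + D_v^T D_v$ under the DFT are available:

```python
import numpy as np

def apply_preconditioner(r, a_hat, l_hat, alpha, n):
    # Applies M^{-1} r via the 2D DFT, as in the formula above; a_hat and
    # l_hat are the n-by-n eigenvalue arrays of A and D_h'D_h + D_v'D_v.
    R = r.reshape((n, n), order="F")       # r = vec(R), column stacking
    R_hat = np.fft.fft2(R)
    Z = np.fft.ifft2(R_hat / (np.abs(a_hat) ** 2 + alpha * l_hat))
    return np.real(Z).ravel(order="F")
```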
Finally, in order to obtain a value for $\alpha$, we implement generalized cross validation (GCV) with regularization matrix $L_0$. This allows us to exploit circulant structure, making the computation very efficient; specifically, we take $\alpha$ to be the minimizer of

$$G(\alpha) = \frac{\alpha^2 \sum_{i,j=1}^{n} [\hat{l}_s]_{ij}^2\, |\hat{B}_{ij}|^2 \big/ \big( |[\hat{a}_s]_{ij}|^2 + \alpha [\hat{l}_s]_{ij} \big)^2}{\left( n^2 - \sum_{i,j=1}^{n} \dfrac{|[\hat{a}_s]_{ij}|^2}{|[\hat{a}_s]_{ij}|^2 + \alpha [\hat{l}_s]_{ij}} \right)^2},$$

where $\hat{B} = \mathrm{DFT}(B)$, with $B$ the $n \times n$ array satisfying $b = \mathrm{vec}(B)$.

We are now ready to test Algorithm 1 on the above image deblurring test problem. We choose $\beta = 0.1$ in our definition of $\psi(t)$ and show the reconstruction after 10 iterations of the algorithm. The reconstruction is given on the left in Figure 3, while the edge map, which is a plot of the diagonal values of $\mathrm{diag}\big( (D_v x_\alpha)^2 + (D_h x_\alpha)^2 \big)$, is plotted on the right. Finally, the relative error for the final reconstruction was $\|x_\alpha - x_{\rm true}\| / \|x_{\rm true}\| = 0.19$.

Figure 3. On the left is the reconstruction obtained after 10 iterations of Algorithm 1, while on the right is the corresponding edge map.

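For completeness, a sketch of evaluating the GCV function $G(\alpha)$ above from the DFT eigenvalue arrays and the transformed data (helper name ours); in practice one can then minimize it over, e.g., a logarithmic grid of $\alpha$ values:

```python
import numpy as np

def gcv(alpha, a_hat, l_hat, B_hat, n):
    # Evaluates G(alpha) using the eigenvalue arrays a_hat, l_hat and the
    # transformed data B_hat = DFT(B); all sums are over the n-by-n arrays.
    filt = np.abs(a_hat) ** 2 + alpha * l_hat
    num = alpha ** 2 * np.sum(l_hat ** 2 * np.abs(B_hat) ** 2 / filt ** 2)
    den = (n ** 2 - np.sum(np.abs(a_hat) ** 2 / filt)) ** 2
    return num / den
```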
3.2 Positron Emission Tomography with Poisson Noise

In positron emission tomography (PET), a radioactive tracer element is injected into the body; the tracer exhibits radioactive decay, resulting in photon emission. The emitted photons that leave the body are recorded by a photon detector, which also determines the line of response (LOR) $L(\omega, y)$ along which the photon(s) have propagated; given a fixed coordinate system, $L(\omega, y)$ is the unique line making an angle $\omega$ with an axis (e.g., the vertical) at a perpendicular distance $y$ from the origin. We parameterize $L(\omega, y)$ by $L(\omega, y) = \{ z(s) \mid 0 \leq s \leq S \}$. In PET, the data $b(\omega, y)$ corresponds to the number of detected incidents along $L(\omega, y)$. The model relating the tracer density $x$ to the data is given by

$$b(\omega, y) = \int_{L(\omega, y)} A_{\omega, y}(z(s))\, x(z(s))\, ds,$$

where the impulse response function $A_{\omega, y}(z(r))$ can be viewed as the probability that an emission event located at $z(r)$ along $L(\omega, y)$ is recorded by the detector system. To determine $A_{\omega, y}$, note that a pair of photons is emitted at a location $z(r)$ along $L(\omega, y)$, with detectors located at $z(0)$ and $z(S)$. The probability that both photons reach the detectors is then

$$A_{\omega, y}(z(r)) = e^{-\int_0^r \mu(z(t))\, dt}\, e^{-\int_r^S \mu(z(t))\, dt} = e^{-\int_{L(\omega, y)} \mu(z(t))\, dt},$$

which does not depend on $r$; here $\mu$ denotes the attenuation function of the body. Hence we can simplify the model to

$$b(\omega, y) = e^{-\int_{L(\omega, y)} \mu(z(t))\, dt} \int_{L(\omega, y)} x(z(s))\, ds. \tag{3.2}$$

Note that dividing both sides of (3.2) by $e^{-\int_{L(\omega, y)} \mu(z(t))\, dt}$ yields the Radon transform, which is what is solved in the computed tomography inverse problem [16].

After discretization, (3.2) can be written as a system of linear equations of the form (1.1). The discretization occurs both in the spatial domain, where $\mu$ and $x$ are defined, and in the Radon transform ($(\omega, y)$) domain, where the data $b$ is defined. We use a uniform $n \times n$ spatial grid, and a uniform grid for the transform domain with $n$ angles and $n$ sensors. In our experiments, $n = 128$, so that the resulting linear system has size $n^2 \times n^2$.

To generate the data $b$, we use the Poisson noise model (1.2) with $\gamma = 1$ and synthetically generated Poisson noise. The true tracer density, shown on the left in Figure 4, is the Shepp-Logan phantom. We take $\mu = 0$ in (3.2) to construct our matrix $A$, which is standard for PET numerical experiments [17], and scale the true tracer density $x$ so that the percent-noise is approximately 11. The data is shown on the right in Figure 4.

Next, we test Algorithm 1 on this synthetic PET example. We again choose $\beta = 0.1$ in our definition of $\psi(t)$ and compute $\alpha$ using the GCV method of [5] for the Poisson negative-log likelihood function with regularization matrix $L_0$. In the Poisson case, however, this choice of $\alpha$ does not work well in later iterations of Algorithm 1; using the hierarchical approach of [3], a modified choice of $\alpha$ can be motivated that does work well, and this is what we use here. On various synthetic tests with PET data, this method of choosing $\alpha$ is effective; however, we have not tested it extensively.

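For reference, a sketch of generating synthetic data from the Poisson model (1.2), as used for Figure 4; the percent-noise definition in the comments is one common choice and is our assumption, not taken from the paper:

```python
import numpy as np

def poisson_data(A, x_true, gamma=1.0, seed=0):
    # Draws b ~ Poisson(A x + gamma). The reported percent-noise is
    # 100 * ||b - E[b]|| / ||E[b]||; since the Poisson variance equals its
    # mean, rescaling x_true changes this noise level.
    rng = np.random.default_rng(seed)
    z = A @ x_true + gamma                 # E[b] = A x + gamma
    b = rng.poisson(z).astype(float)
    percent_noise = 100.0 * np.linalg.norm(b - z) / np.linalg.norm(z)
    return b, percent_noise
```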
Given this choice of $\alpha$, the reconstruction after 10 iterations of Algorithm 1 is given on the left in Figure 5, while the edge map, which is a plot of the diagonal values of $\mathrm{diag}\big( (D_v x_\alpha)^2 + (D_h x_\alpha)^2 \big)$, is plotted on the right. We also computed the relative error $\|x_\alpha - x_{\rm true}\| / \|x_{\rm true}\|$ for the final reconstruction. Notice that in both instances, the reconstructions are qualitatively similar to those obtained using total variation.

Figure 4. On the left is the true image, and on the right is the PET data.

Figure 5. On the left is the reconstruction obtained after 10 iterations of Algorithm 1, while on the right is the corresponding edge map.

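A sketch of the edge map and relative error reported above (helper names ours; column stacking assumed):

```python
import numpy as np

def edge_map(x, Dh, Dv, n):
    # Edge map as described above: (D_v x)^2 + (D_h x)^2 at each pixel,
    # reshaped to an n-by-n image.
    e = (Dv @ x) ** 2 + (Dh @ x) ** 2
    return e.reshape((n, n), order="F")

def relative_error(x, x_true):
    # Relative reconstruction error ||x - x_true|| / ||x_true||.
    return np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```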
4 Conclusions

In this paper, we focused on the use of the Laplace prior, or regularization function, constructed from the assumption that the increments in the unknown image are independent and identically distributed, zero-mean Laplace random variables. The Laplace prior is very similar to the total variation (TV) prior (indeed, in one dimension they are the same) and yields reconstructed images that are both quantitatively and qualitatively very similar. We present a theoretical analysis that shows that, just as for TV, the Laplace prior yields regularized solutions that lie in the space of bounded variation, and we present numerical experiments from both image deblurring and positron emission tomography showing that the Laplace prior works well and yields TV-like reconstructed images. The benefit of using the Laplace prior, as opposed to TV, is that it follows from concrete distributional assumptions regarding the increments in the unknown image, which can be modified to better fit the specific situation.

Bibliography

[1] R. Acar and C. R. Vogel, Analysis of bounded variation penalty methods for ill-posed problems, Inverse Problems 10 (1994).

[2] J. M. Bardsley, A theoretical framework for the regularization of Poisson likelihood estimation problems, Inverse Problems and Imaging 4(1) (2010).

[3] J. M. Bardsley, D. Calvetti, and E. Somersalo, Hierarchical regularization for edge-preserving reconstruction of PET images, Inverse Problems 26 (2010), 035010.

[4] J. M. Bardsley and J. Goldes, An iterative method for edge-preserving MAP estimation when data-noise is Poisson, SIAM Journal on Scientific Computing 32(1) (2010).

[5] J. M. Bardsley and J. Goldes, Regularization parameter selection methods for ill-posed Poisson maximum likelihood estimation, Inverse Problems 25 (2009), 095005.

[6] J. M. Bardsley and A. Luttman, Total variation-penalized Poisson likelihood estimation for ill-posed problems, Advances in Computational Mathematics 31 (2009).

[7] J. M. Bardsley and C. R. Vogel, A nonnegatively constrained convex programming method for image reconstruction, SIAM Journal on Scientific Computing 25(4) (2004).

[8] J. Besag, Spatial interaction and the statistical analysis of lattice systems, Journal of the Royal Statistical Society, Series B 36(2) (1974).

[9] D. Calvetti and E. Somersalo, Introduction to Bayesian Scientific Computing, Springer, 2007.

[10] H. W. Engl, M. Hanke, and A. Neubauer, Regularization of Inverse Problems, Kluwer, 1996.

[11] L. C. Evans and R. Gariepy, Measure Theory and Fine Properties of Functions, CRC Press, Boca Raton, 1992.

[12] M. Green, Statistics of images, the TV algorithm of Rudin-Osher-Fatemi for image denoising, and an improved denoising algorithm, CAM Report 02-55, UCLA, October 2002.

[13] P. C. Hansen, Discrete Inverse Problems: Insight and Algorithms, SIAM, Philadelphia, 2010.

[14] J. Huang and D. Mumford, Statistics of natural images and models, IEEE Conf. on Computer Vision and Pattern Recognition, 1999.

[15] J. Kaipio and E. Somersalo, Statistical and Computational Inverse Problems, Springer, 2005.

[16] F. Natterer and F. Wübbeling, Mathematical Methods in Image Reconstruction, SIAM, 2001.

[17] J. M. Ollinger and J. A. Fessler, Positron-emission tomography, IEEE Signal Processing Magazine, January 1997.

[18] H. Rue and L. Held, Gaussian Markov Random Fields: Theory and Applications, Chapman and Hall/CRC, 2005.

[19] C. R. Vogel, Computational Methods for Inverse Problems, SIAM, Philadelphia, 2002.

[20] C. R. Vogel and M. E. Oman, A fast, robust algorithm for total variation based reconstruction of noisy, blurred images, IEEE Transactions on Image Processing 7 (1998).

Author information

Johnathan M. Bardsley, 32 Campus Drive, Department of Mathematical Sciences, University of Montana, Missoula, MT 59812, USA.
E-mail: bardsleyj@mso.umt.edu
