Dealing with Boundary Artifacts in MCMC-Based Deconvolution


Johnathan M. Bardsley, Department of Mathematical Sciences, University of Montana, Missoula, Montana.
Aaron Luttman, National Security Technologies, LLC, Las Vegas, Nevada.

Abstract

Many numerical methods for deconvolution problems are designed to take advantage of the computational efficiency of spectral methods, but classical approaches to spectral techniques require that particular conditions be applied uniformly across all boundaries of the signal. These boundary conditions (traditionally periodic, Dirichlet, Neumann, or related) are essentially methods for generating data values outside the domain of the signal, but they often lack physical motivation and can result in artifacts in the reconstruction near the boundary. In this work we present a data-driven technique for computing boundary values by solving a regularized and well-posed form of the deconvolution problem on an extended domain. Further, a Bayesian framework is constructed for the deconvolution, and we present a Markov chain Monte Carlo method for sampling from the posterior distribution. There are several advantages to this approach, including that it still takes advantage of the efficiency of spectral methods, that it allows the boundaries of the signal to

This work was done by National Security Technologies, LLC, under Contract No. DE-AC52-06NA25946 with the U.S. Department of Energy and supported by the Site-Directed Research and Development program. The United States Government retains, and the publisher, by accepting the article for publication, acknowledges that the United States Government retains, a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. DOE/NV/25946????

Email addresses: bardsleyj@mso.umt.edu (Johnathan M. Bardsley), LuttmaAB@nv.doe.gov (Aaron Luttman)

Preprint submitted to Linear Algebra and Its Applications, September 3, 2013

be treated in a non-uniform manner, thereby reducing artifacts, and that the sampling scheme gives a natural method for quantifying uncertainties in the reconstruction.

Keywords: imaging, deconvolution, inverse problems, boundary conditions, Bayesian methods, Markov chain Monte Carlo
MSC: 15A29, 65F22, 65C05, 65C60, 94A08

1. Introduction

Many applications in image and signal processing are modeled using convolution:

$$b(s) = \int_{\bar\Omega} a(s-t)\,x(t)\,dt, \quad s \in \Omega, \tag{1}$$

where $b$ corresponds to the measured data, defined on $\Omega \subset \mathbb{R}^d$, which in image processing is commonly referred to as the field of view (FOV); $x$ corresponds to the unknown object, which is to be recovered and is defined on the extended domain $\bar\Omega$ containing $\Omega$; and $a$ is the known convolution kernel, also called the system response or point spread function (PSF). In this manuscript, the dimension $d$ is either 1 or 2.

For a typical kernel $a$, the values of $b(s)$ for $s$ near the boundary of $\Omega$, denoted $\partial\Omega$, will depend upon values of $x(t)$ for $t \notin \Omega$ but near $\partial\Omega$. Thus in order for (1) to be accurate in general, $\bar\Omega$ must contain $\Omega$ as well as the spatial locations $t \notin \Omega$ for which $x(t)$ has influence on $b(s)$ for $s \in \Omega$. Nonetheless, it is standard practice to make assumptions about the values of $x(t)$ for $t \notin \Omega$ so that $\bar\Omega = \Omega$ in (1). These assumptions are called boundary conditions in the imaging literature [7]. They are made for computational reasons, as they allow for the use of highly efficient spectral methods, but when they are inaccurate they also result in unrealistic artifacts in the estimates of $x(t)$ for $t \in \Omega$ near $\partial\Omega$.

We describe a few of the most common boundary conditions and their numerical advantages in Section 2, but in each case a single condition and its associated assumptions are taken uniformly across the entire boundary of the signal. The assumptions often aren't accurate, with the result that boundary artifacts appear in the computed estimates of $x$. In this work, we present a simple and computationally efficient alternative for constructing data-driven boundary conditions by solving the deconvolution problem on an extended domain, based on work developed in [4]. The approach has the

advantage that no explicit assumptions are made about the values of $x(t)$ for $t \notin \Omega$, hence minimizing boundary artifacts, but this comes at the expense of solving a severely underdetermined inverse problem. We overcome this by assuming a prior on $x$ and sampling from the corresponding posterior distribution. In Section 3, we develop a Markov chain Monte Carlo method for the sampling and for using the results to quantify uncertainties in the signal reconstruction.

In Section 4, the method is demonstrated on applications in 1D and 2D. In both cases, the applications come from image deblurring. The 1D example uses real data used for density calibrations in quantitative X-ray imaging. The accuracy of such calibrations depends fundamentally on deblurring the measured images, and the ability to quantify uncertainties associated with the deblurring process is necessary to subsequently estimate errors in the density calibrations. In our 2D example, as is usually the case in imaging, the image boundaries vary significantly in intensity, and uniform boundary conditions are unnatural. A data-driven approach to dealing with the deblurring near the boundaries can show great advantages. In both cases, enough samples are computed to adequately describe the posterior distributions and estimate uncertainties in the reconstructions.

2. Boundary Conditions and Structured Matrix Computations

In practice, signals are measured at discrete points, and these measurements contain random errors. It is, therefore, typical to work with a numerically discretized version of (1) containing an additive error term:

$$b = Ax + \eta, \tag{2}$$

where $x \in \mathbb{R}^n$ and $b \in \mathbb{R}^m$ are discretizations of $x$ and $b$; $A$ is the real-valued $m \times n$ matrix that arises from approximating the integration by a quadrature rule; and $\eta \sim N(0, \lambda^{-1} I)$, which is to say that $\eta$ is an independent and identically distributed (iid) Gaussian random vector with mean zero and variance $\lambda^{-1}$ (or precision $\lambda$) across all pixels.

To look in more detail at the structure of (2), we first consider 1D convolution, which after discretization of (1) is given by

$$b_i = \sum_{j=i-k}^{i+k} a_{i-j}\,x_j = \sum_{j=-k}^{k} a_j\,x_{i-j}, \quad \text{for } i = 1, \dots, m. \tag{3}$$
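For concreteness, the model (2)-(3) is easy to simulate: the sums in (3), taken over the full extended signal, are exactly what a "valid"-mode discrete convolution computes, so no boundary assumption is needed to generate data. A minimal NumPy sketch (the sizes and the PSF are illustrative choices, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

k = 10                        # PSF half-width, so the PSF has 2k+1 taps
m = 80                        # number of measurements (the FOV)
n = m + 2 * k                 # extended signal length

a = np.exp(-0.5 * (np.arange(-k, k + 1) / 3.0) ** 2)
a /= a.sum()                  # discrete PSF {a_j}, j = -k, ..., k

x = rng.random(n)             # unknown signal on the extended domain

# b_i = sum_j a_j x_{i-j}: 'valid' convolution uses only samples of x
# that actually exist, so no boundary condition is required here.
b_clean = np.convolve(x, a, mode="valid")            # length m = n - 2k
lam = 1e4                                            # noise precision lambda
b = b_clean + rng.normal(0.0, lam ** -0.5, size=m)   # eq. (2)
```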

In the matrix-vector notation of (2), this is

$$\begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix} =
\begin{bmatrix}
a_k & \cdots & a_0 & \cdots & a_{-k} & & \\
& a_k & \cdots & a_0 & \cdots & a_{-k} & \\
& & \ddots & & \ddots & & \ddots \\
& & & a_k & \cdots & a_0 & \cdots & a_{-k}
\end{bmatrix}
\begin{bmatrix} x_{-k+1} \\ \vdots \\ x_1 \\ \vdots \\ x_m \\ \vdots \\ x_{m+k} \end{bmatrix}. \tag{4}$$

In this formulation, the discrete PSF $\{a_j\}_{j=-k}^{k}$ is assumed to be known, either by modeling or direct measurement. Since, like $b$, it can be measured, it has its own predetermined size $(2k+1) \times 1$; we do not assume that it is symmetric about $a_0$ unless otherwise stated.

It is evident from (4) that the values of $b_i$ for $i$ near 1 and $m$ will depend upon the elements $(x_{-k+1}, \dots, x_0)$ and $(x_{m+1}, \dots, x_{m+k})$, respectively, all of which lie outside of the FOV. Note that (4) has the form of (2) with $n = m + 2k$ and hence is an underdetermined problem. As stated above, the standard approach for dealing with this issue is to make assumptions about the values of $x$ outside of the FOV based on a priori knowledge or by relating the values to those within the FOV. Periodic boundary conditions correspond to $(x_{-k+1}, \dots, x_0) = (x_{m-k+1}, \dots, x_m)$ and $(x_{m+1}, \dots, x_{m+k}) = (x_1, \dots, x_k)$; Neumann boundary conditions correspond to a reflection of the signal about the boundaries, i.e., $(x_{-k+1}, \dots, x_0) = (x_k, \dots, x_1)$ and $(x_{m+1}, \dots, x_{m+k}) = (x_m, \dots, x_{m-k+1})$; and a zero (or Dirichlet) boundary condition corresponds to the assumption that $(x_{-k+1}, \dots, x_0) = (x_{m+1}, \dots, x_{m+k}) = 0$. In each case, the resulting linear system (2) becomes $m \times m$ with $x = (x_1, \dots, x_m)$, which is the unknown restricted to the FOV.

One of the primary reasons for choosing one of the above boundary conditions is that there are efficient spectral methods that can be used to solve the resulting deconvolution problem. Periodic boundary conditions result in a circulant matrix $A$ that can be diagonalized by the discrete Fourier transform (DFT) [7, 15]. With Neumann boundary conditions, the resulting matrix $A$ has Toeplitz-plus-Hankel structure, and, if the convolution kernel $a$ is symmetric, i.e., $a_{-i} = a_i$, then $A$ can be diagonalized by the discrete cosine transform (DCT) [11, Theorem 3.2]. Dirichlet boundary conditions give a Toeplitz matrix $A$ [7, 15], which can be embedded in a circulant matrix that can be diagonalized by the DFT.
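The computational payoff of the periodic case is easy to verify numerically: the eigenvalues of a circulant matrix are the DFT of its first column, so the matrix-vector product collapses to a pointwise multiply in the Fourier domain. A small sketch (sizes are illustrative):

```python
import numpy as np
from scipy.linalg import circulant

rng = np.random.default_rng(1)
m = 64
c = rng.random(m)             # first column of the circulant matrix A
A = circulant(c)
x = rng.random(m)

a_hat = np.fft.fft(c)         # eigenvalues of A = DFT of its first column
y_fft = np.real(np.fft.ifft(a_hat * np.fft.fft(x)))

assert np.allclose(A @ x, y_fft)   # O(m log m) instead of O(m^2)
```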

In the two-dimensional (2D) case, $b$ and $x$ are obtained by column-stacking the $M \times M$ array $B$ and the $N \times N$ array $X$, denoted $b = \mathrm{vec}(B)$ and $x = \mathrm{vec}(X)$, and $A$ is determined by the $(2K+1) \times (2K+1)$ convolution kernel $a = \{a_{ij}\}_{i,j=-K}^{K}$. The discretization of (1) in this setting yields the 2D discrete convolution equation

$$b_{r,s} = \sum_{i=r-K}^{r+K} \sum_{j=s-K}^{s+K} a_{r-i,\,s-j}\,x_{ij}, \quad \text{for } r, s = 1, \dots, M. \tag{5}$$

Note then that $X$ is $N \times N$ with $N = M + 2K$. After stacking the columns of the two-dimensional arrays defined by (5), the resulting system of linear equations is of the form (2) with $n = N^2$ and $m = M^2$, which is once again underdetermined. (Here we have assumed that $B$ and $X$ are square, for simplicity of notation, but all of the theory and algorithms apply to non-square signals as well, with the appropriate bookkeeping of indices.)

As in the 1D case, boundary conditions can be used to turn (5) into an $M^2 \times M^2$ system of equations. The 2D versions of the zero, periodic, and Neumann boundary conditions are represented, respectively, by the following extensions of $X$ [7]:

$$\begin{bmatrix} 0 & 0 & 0 \\ 0 & X & 0 \\ 0 & 0 & 0 \end{bmatrix}, \qquad
\begin{bmatrix} X & X & X \\ X & X & X \\ X & X & X \end{bmatrix}, \qquad \text{and} \qquad
\begin{bmatrix} X_{vh} & X_h & X_{vh} \\ X_v & X & X_v \\ X_{vh} & X_h & X_{vh} \end{bmatrix},$$

where $X_v$ is the image that results from flipping $X$ across its central vertical axis; $X_h$ is the image that results from flipping $X$ across its central horizontal axis; and $X_{vh}$ is the image that results from flipping $X$ across its vertical then horizontal axes. In all three cases, the central $X$ corresponds to the unknowns within the FOV, and again the primary advantage of these assumptions is that there are efficient spectral methods, involving the 2D discrete Fourier and cosine transforms, that can be used to solve the corresponding deconvolution problems.

The drawbacks to these boundary conditions are that the associated efficient computational methods are only applicable if the condition is applied uniformly to all boundaries, and that these particular boundary conditions often lack physical motivation. Thus we must balance the gain in computational efficiency against the loss of flexibility in how the boundaries are treated.
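In practice, these three extensions correspond to standard padding modes; a sketch using NumPy's pad (the array and the pad width are illustrative):

```python
import numpy as np

X = np.arange(9, dtype=float).reshape(3, 3)
K = 2                                        # pad width = kernel half-width

X_zero     = np.pad(X, K, mode="constant")   # Dirichlet (zero) BCs
X_periodic = np.pad(X, K, mode="wrap")       # periodic BCs
X_neumann  = np.pad(X, K, mode="symmetric")  # reflective (Neumann) BCs
```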

2.1. A Simple Alternative to Boundary Condition Assumptions

A simple alternative to enforcing boundary conditions on $\partial\Omega$ was suggested in [4]. The basic idea in 1D is to solve the problem on the extended FOV represented in (4) by $x = [x_{-k+1}, \dots, x_0, x_1, \dots, x_m, x_{m+1}, \dots, x_{m+k}]^T$. Rather than use a boundary condition to reduce the number of unknowns in $x$ to $m$, i.e. those within the FOV, we instead zero-pad the PSF as follows:

$$\hat{a} = [\,\underbrace{0, \dots, 0}_{(m-1)/2},\; a_{-k}, \dots, a_0, \dots, a_k,\; \underbrace{0, \dots, 0}_{(m-1)/2}\,] \in \mathbb{R}^{m+2k}.$$

(Here we have assumed $m$ is odd for ease of notation, but, again, the results carry over, with appropriate modifications to indices, in the case that $m$ is even.) Then, assuming a periodic boundary condition on $x$, we obtain an $(m+2k) \times (m+2k)$ matrix $\hat{A}$. We then restrict $\hat{A}x$ to the central $m$ elements, i.e. those within the FOV, to obtain the model

$$b = D\hat{A}x, \tag{6}$$

where $D$ is the $m \times (m+2k)$ matrix whose $i$th row is equal to row $k+i$ of $I_{m+2k}$, the $(m+2k) \times (m+2k)$ identity matrix. Moreover, if we restrict $x$ to its central $m$ elements, i.e. those within the FOV, the possible boundary artifacts are removed. Similar arguments in 2D also yield (6), except in that case $\hat{A}$ is diagonalizable by the 2D-DFT and the system is $N^2 \times N^2$, where $N = M + 2K$. Note that multiplication by both $D$ and $\hat{A}$ is extremely efficient and requires low storage. However, (6) is underdetermined and cannot be solved directly using a spectral method.

The focus of the remainder of this paper is on solving the underdetermined linear system (6). A truncated iterative approach was taken in [4], where the Richardson-Lucy iteration was used, and in [14], where the Landweber iteration was used. Here instead we take a Bayesian approach and define a prior on $x$, which makes the resulting maximum a posteriori (MAP) estimation problem over-determined and well-posed. Moreover, the Bayesian formulation of the problem allows us to quantify uncertainties in our estimates both of $x$ and of the regularization parameter by sampling from the posterior density function.
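A sketch of the forward model (6) in 1D, with the zero-padded PSF applied by FFT (i.e., $\hat{A}$ acting as a periodic convolution) followed by restriction to the FOV; function and variable names here are ours:

```python
import numpy as np

def forward_extended(x_ext, a, m):
    """Apply b = D A_hat x: periodic convolution on the extended grid,
    then restriction to the m central samples (the FOV); a sketch of (6).
    x_ext has length n = m + 2k, and a holds the 2k+1 PSF taps."""
    n = x_ext.size
    k = (n - m) // 2
    a_pad = np.zeros(n)
    center = n // 2
    a_pad[center - k : center + k + 1] = a       # zero-padded PSF
    a_pad = np.roll(a_pad, -center)              # move tap a_0 to index 0
    y = np.real(np.fft.ifft(np.fft.fft(a_pad) * np.fft.fft(x_ext)))
    return y[k : k + m]                          # the restriction D
```

For $x$ supported on the extended grid, the restriction discards exactly the samples affected by the periodic wrap-around, so the result agrees with the boundary-condition-free "valid" convolution sketched in Section 2.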

3. The Bayesian Solution of the Problem

In this section, we formulate a Bayesian solution to (6). In order to simplify notation, we drop the hat from $\hat{A}$ and add random noise to obtain

$$b = DAx + \eta, \tag{7}$$

where in 1D, $b, \eta \in \mathbb{R}^m$, $D \in \mathbb{R}^{m \times n}$ with $n = m + 2k$, and $A \in \mathbb{R}^{n \times n}$; while in 2D, $b, \eta \in \mathbb{R}^{M^2}$, $D \in \mathbb{R}^{M^2 \times N^2}$ with $N = M + 2K$, and $A \in \mathbb{R}^{N^2 \times N^2}$. Then, given our assumption that $\eta$ is an iid Gaussian random vector with variance $\lambda^{-1}$, the probability density function for (7) is given by

$$p(b \mid x, \lambda) \propto \exp\left( -\frac{\lambda}{2} \| DAx - b \|^2 \right), \tag{8}$$

where $\propto$ denotes proportionality and $\|\cdot\|$ denotes the Euclidean norm.

Computing the maximizer of the likelihood $L(x \mid b, \lambda) = p(b \mid x, \lambda)$ is not a well-posed problem. The standard technique for overcoming this for inverse problems is regularization, which, in the context of Bayesian statistics, corresponds to the choice of the prior probability density function $p(x \mid \delta)$, where $\delta > 0$ is another precision parameter. In our case, we use a Gaussian Markov random field (GMRF) to model the prior [2], which yields

$$p(x \mid \delta) \propto \exp\left( -\frac{\delta}{2}\, x^T L x \right), \tag{9}$$

where the precision matrix $\delta L$ is sparse, symmetric, and positive semi-definite. We build $L$ by assuming independent Gaussian increments (local differences), as in [2]. In 1D, we model local differences as follows:

$$x_{i+1} - x_i \sim N(0, (w_i \delta)^{-1}), \quad i = 1, \dots, n,$$

where a periodic boundary condition implies that $x_{n+1} = x_1$. This yields (see [2] for details) $L = D^T W D$, where $D$ is the forward difference derivative matrix with periodic boundary conditions, and $W = \mathrm{diag}(w_1, \dots, w_n)$. In 2D, we model local vertical and horizontal differences as follows:

$$x_{i+1,j} - x_{i,j},\; x_{i,j+1} - x_{i,j} \sim N(0, (w_{ij} \delta)^{-1}), \quad i, j = 1, \dots, N,$$

where, again, a periodic boundary condition is assumed. This yields (see [2] for details) $L = D_v^T W D_v + D_h^T W D_h$, where $D_v = D \otimes I$ and $D_h = I \otimes D$,

with $D$ the derivative matrix used in the 1D case, $\otimes$ denoting the Kronecker product, and $W = \mathrm{diag}(\mathrm{vec}(\{w_{ij}\}_{i,j=1}^{N}))$.

Bayes' Theorem then states that the posterior probability density function $p(x \mid b, \lambda, \delta)$ can be expressed as

$$p(x \mid b, \lambda, \delta) \propto p(b \mid x, \lambda)\, p(x \mid \delta) \propto \exp\left( -\frac{\lambda}{2} \|DAx - b\|^2 - \frac{\delta}{2}\, x^T L x \right). \tag{10}$$

Maximizing (10) with respect to $x$ yields the MAP estimator. We can equivalently minimize the negative log of (10), given by

$$-\ln p(x \mid b, \lambda, \delta) \propto \frac{1}{2} \|DAx - b\|^2 + \frac{\alpha}{2}\, x^T L x = \frac{1}{2} \left\| \begin{bmatrix} DA \\ (\alpha L)^{1/2} \end{bmatrix} x - \begin{bmatrix} b \\ 0 \end{bmatrix} \right\|^2, \tag{11}$$

where $\alpha = \delta/\lambda$ is the regularization parameter from classical inverse problems [15]. Note that (11) shows that the MAP estimator is the least squares solution of an over-determined linear system, of dimension $(m+n) \times n$ in 1D and $(M^2+N^2) \times N^2$ in 2D.

3.1. Sampling from the Posterior Density Function

In order to compute an estimate for the maximizer of (10), we assume Gamma-distributed hyper-priors on $\lambda$ and $\delta$ and use the sampling scheme of [1]. Thus

$$p(\lambda) \propto \lambda^{\alpha_\lambda - 1} \exp(-\beta_\lambda \lambda), \tag{12}$$
$$p(\delta) \propto \delta^{\alpha_\delta - 1} \exp(-\beta_\delta \delta), \tag{13}$$

with $\alpha_\lambda = \alpha_\delta = 1$ and $\beta_\lambda = \beta_\delta = 10^{-4}$, which give mean $\alpha/\beta = 10^4$ and variance $\alpha/\beta^2 = 10^8$, respectively. Note that $\alpha = 1$ yields exponentially distributed hyper-priors; however, we present the full Gamma hyper-prior here because it is conjugate to the Gaussian distribution, making sampling straightforward, and other choices for $\alpha$ and $\beta$ may be advantageous in other situations. Given the large variance values, the hyper-priors should have a negligible effect on the sampled values for $\lambda$ and $\delta$.
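A sketch of the 1D prior precision matrix described above, built from the periodic forward-difference matrix (the weights w are all ones here, i.e. the isotropic case; this is our illustration of the construction in [2], not code from the paper):

```python
import numpy as np
import scipy.sparse as sp

n = 100
D = sp.diags([-1, 1], [0, 1], shape=(n, n)).tolil()
D[-1, 0] = 1.0                      # periodic wrap: x_{n+1} = x_1
D = D.tocsr()

w = np.ones(n)                      # iid increments; edit w for anisotropy
L = D.T @ sp.diags(w) @ D           # prior precision (up to the factor delta)

# L is singular: the constant vector lies in its null space, which is why
# the exponent (n-1)/2, rather than n/2, appears for delta in the
# posterior and conditional densities below.
v = np.ones(n)
assert np.allclose(L @ v, 0.0)
```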

The posterior probability density then has the form

$$p(x, \lambda, \delta \mid b) \propto p(b \mid x, \lambda)\,p(\lambda)\,p(x \mid \delta)\,p(\delta) = \lambda^{m/2 + \alpha_\lambda - 1}\, \delta^{(n-1)/2 + \alpha_\delta - 1} \exp\left( -\frac{\lambda}{2} \|DAx - b\|^2 - \frac{\delta}{2}\, x^T L x - \beta_\lambda \lambda - \beta_\delta \delta \right). \tag{14}$$

The prior and hyper-priors were chosen because they are conjugate densities [6], by which we mean that the full conditional densities have the same form as the corresponding prior/hyper-prior; specifically, note that

$$x \mid \lambda, \delta, b \sim N\left( (\lambda A^T D^T D A + \delta L)^{-1} \lambda A^T D^T b,\; (\lambda A^T D^T D A + \delta L)^{-1} \right), \tag{15}$$
$$\lambda \mid x, \delta, b \sim \Gamma\left( m/2 + \alpha_\lambda,\; \tfrac{1}{2}\|DAx - b\|^2 + \beta_\lambda \right), \tag{16}$$
$$\delta \mid x, \lambda, b \sim \Gamma\left( (n-1)/2 + \alpha_\delta,\; \tfrac{1}{2}\, x^T L x + \beta_\delta \right), \tag{17}$$

where $N$ and $\Gamma$ denote Gaussian and Gamma distributions, respectively. The power in (15)-(17) lies in the fact that samples from these three distributions can be computed using standard statistical software, and a Gibbsian approach can be applied to (15)-(17), yielding the following MCMC method of [1] for sampling from (14):

MCMC Method for Sampling from p(x, δ, λ | b):

0. Initialize $\delta_0$ and $\lambda_0$, and set $k = 0$;
1. Compute $x_k \sim N\left( (\lambda_k A^T D^T D A + \delta_k L)^{-1} \lambda_k A^T D^T b,\; (\lambda_k A^T D^T D A + \delta_k L)^{-1} \right)$;
2. Compute $\lambda_{k+1} \sim \Gamma\left( m/2 + \alpha_\lambda,\; \tfrac{1}{2}\|DAx_k - b\|^2 + \beta_\lambda \right)$ and $\delta_{k+1} \sim \Gamma\left( (n-1)/2 + \alpha_\delta,\; \tfrac{1}{2}\, x_k^T L x_k + \beta_\delta \right)$;
3. Set $k = k + 1$ and return to Step 1.

The computational bottleneck in this MCMC method is Step 1.
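Step 2 really is a one-liner with standard software, with one caveat worth flagging: (16)-(17) are written in shape-rate form, while NumPy's Gamma sampler is parameterized by shape and scale, so the rate must be inverted. A sketch (function and variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)

def step2(x, DA, b, L, alpha_lam=1.0, beta_lam=1e-4,
          alpha_del=1.0, beta_del=1e-4):
    """Conjugate Gamma updates (16)-(17); note scale = 1/rate in numpy."""
    m, n = DA.shape
    r = DA @ x - b
    lam = rng.gamma(m / 2 + alpha_lam, 1.0 / (0.5 * r @ r + beta_lam))
    delta = rng.gamma((n - 1) / 2 + alpha_del,
                      1.0 / (0.5 * x @ (L @ x) + beta_del))
    return lam, delta
```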

However, we can efficiently compute the sample $x_k$ by first generating new random data $\hat{b} \sim N(b, \lambda_k^{-1} I)$ and $\hat{c} \sim N(0, (\delta_k L)^{-1})$ and then solving the least squares problem

$$x_k = \arg\min_x \frac{1}{2}\left\| \begin{bmatrix} \lambda_k^{1/2} (DAx - \hat{b}) \\ \delta_k^{1/2} L^{1/2} (x - \hat{c}) \end{bmatrix} \right\|^2 = \arg\min_x \left\{ \frac{\lambda_k}{2} \|DAx - \hat{b}\|^2 + \frac{\delta_k}{2} \|L^{1/2}(x - \hat{c})\|^2 \right\}. \tag{18}$$

To see that $x_k$ has the correct probability density, note that the solution of the normal equations for (18) is given by

$$x_k = (\lambda_k A^T D^T D A + \delta_k L)^{-1} (\lambda_k A^T D^T \hat{b} + \delta_k L \hat{c}) = (\lambda_k A^T D^T D A + \delta_k L)^{-1} \lambda_k A^T D^T b + w, \tag{19}$$

where $w \sim N(0, (\lambda_k A^T D^T D A + \delta_k L)^{-1})$, which agrees with the distribution in Step 1. The solution of (18) (or equivalently, (19)), and hence the sample in Step 1, is computed directly in 1D, whereas in 2D the preconditioned conjugate gradient (PCG) algorithm is used with the preconditioner $M = \lambda_k A^T A + \delta_k L$, which can be diagonalized by the 2D-DFT in our examples.

For determining convergence of the MCMC chain, we use the approach described in [1, 6], which monitors a statistic $\hat{R}$. Once $\hat{R}$ is sufficiently near 1 for all sampled parameters, the samples from the last half of each of the MCMC chains are treated as samples from the target distribution. In [6], an $\hat{R}$ threshold of 1.1 is deemed acceptable. For more detail on this convergence diagnostic, see [1, 6].

Finally, we note that the MCMC method yields sample distributions for $x$, $\lambda$, and $\delta$. Thus we can obtain estimates of $x$, $\lambda$, and $\delta$ by computing, for example, the sample mean or median, and we emphasize that classical regularization parameter selection is not needed in this formulation. Moreover, the samples also provide a means of quantifying uncertainty in the estimates of $x$, $\lambda$, and $\delta$. For example, we can compute sample variances or create histograms of sampled parameters, both of which are done in the numerical experiments that follow.
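Putting the pieces together, here is a dense 1D sketch of the full sampler. It draws x directly from (15) via a Cholesky factorization of the posterior precision, which is equivalent to solving (18); this is our illustration, not the authors' code, and in 2D one would replace the direct solve with PCG as described above:

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def gibbs_sampler(DA, b, L, n_samp=5000, alpha=1.0, beta=1e-4, seed=0):
    """Dense 1D sketch of the MCMC method above. DA is the m x n matrix
    D*A and L the (dense) n x n prior precision; hyper-parameters as in
    (12)-(13). Returns the chains for x, lambda, and delta."""
    rng = np.random.default_rng(seed)
    m, n = DA.shape
    lam, delta = 1.0, 1.0                 # initial values lambda_0, delta_0
    G = DA.T @ DA                         # precompute A^T D^T D A
    ATb = DA.T @ b
    xs, lams, deltas = [], [], []
    for _ in range(n_samp):
        # Step 1: x ~ N(P^{-1} lam A^T D^T b, P^{-1}), P = lam*G + delta*L.
        P = lam * G + delta * L
        R = cholesky(P)                   # upper triangular, P = R^T R
        mean = solve_triangular(R, solve_triangular(R, lam * ATb, trans="T"))
        x = mean + solve_triangular(R, rng.standard_normal(n))
        # Step 2: conjugate Gamma updates (numpy takes shape and scale).
        r = DA @ x - b
        lam = rng.gamma(m / 2 + alpha, 1.0 / (0.5 * r @ r + beta))
        delta = rng.gamma((n - 1) / 2 + alpha,
                          1.0 / (0.5 * x @ (L @ x) + beta))
        xs.append(x); lams.append(lam); deltas.append(delta)
    return np.array(xs), np.array(lams), np.array(deltas)
```

Adding a perturbation of the form $R^{-1}z$, $z \sim N(0, I)$, to the conditional mean gives covariance $(R^T R)^{-1} = P^{-1}$, which is exactly the covariance required in Step 1.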

3.2. Poisson Noise

Another noise distribution that commonly appears in imaging applications is the Poisson model. In this case, rather than (2), we have the statistical model

$$b = \mathrm{Poisson}(Ax + \gamma), \tag{20}$$

where $\gamma$ is the $m \times 1$ vector of background counts, or the so-called dark field, and is assumed known. The probability density function for (20) is given by

$$p(b \mid x) \propto \exp\left( \sum_{j=1}^{m} \left\{ -([Ax]_j + \gamma_j) + b_j \ln([Ax]_j + \gamma_j) \right\} \right), \tag{21}$$

where $\propto$ denotes proportionality, and its negative log can be approximated by

$$-\ln p(b \mid x) \simeq \frac{1}{2} \|C^{-1/2}(DAx - (b - \gamma))\|^2 + O\big(\|h\|_2^3,\, \|h\|_2^2 \|k\|_2,\, \|h\|_2 \|k\|_2^2,\, \|k\|_2^2\big),$$

where $\simeq$ denotes equality up to an additive constant, $C = \mathrm{diag}(b)$, $h = x - x_{\mathrm{true}}$, and $k = b - DAx_{\mathrm{true}}$ [3]. Thus, we use the approximate likelihood function

$$p(b \mid x) \propto \exp\left( -\frac{1}{2} \|C^{-1/2}(DAx - (b - \gamma))\|^2 \right). \tag{22}$$

Using this likelihood function leads to the MAP estimation problem

$$x_\delta = \arg\min_x \left\{ \frac{1}{2} \|C^{-1/2}(DAx - (b - \gamma))\|^2 + \frac{\delta}{2}\, x^T L x \right\}. \tag{23}$$

Notice that the noise precision $\lambda$ is not present in this case. This is due to the fact that $[C]_{ii} = b_i$ provides an approximation of the variance of the data at the $i$th pixel. Thus only $\alpha = \delta$ must be estimated, which can be done as above using, for example, the discrepancy principle [15]. Assuming a Gamma hyper-prior for $\delta$, as above, leads to the posterior density function

$$p(x, \delta \mid b) \propto p(b \mid x)\, p(x \mid \delta)\, p(\delta) = \delta^{(n-1)/2 + \alpha_\delta - 1} \exp\left( -\frac{1}{2} \|C^{-1/2}(DAx - (b - \gamma))\|^2 - \frac{\delta}{2}\, x^T L x - \beta_\delta \delta \right). \tag{24}$$

Note that only $x$ and $\delta$ need to be sampled in this MCMC method, which can be derived as in the Gaussian case.
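A sketch of the corresponding x-update: the conditional for x in (24) is Gaussian with precision $(DA)^T C^{-1} (DA) + \delta L$, and here we draw from it directly via a Cholesky factorization, which is equivalent to Step 1 of the listing that follows (a dense 1D illustration with names of our choosing, not the authors' code):

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def poisson_x_step(DA, b, gamma, L, delta, rng):
    """Draw x from its Gaussian conditional in (24): precision
    P = (DA)^T C^{-1} (DA) + delta*L with C = diag(b)."""
    m, n = DA.shape
    c = b.astype(float)                        # per-pixel variances, C = diag(b)
    P = DA.T @ (DA / c[:, None]) + delta * L   # conditional precision
    R = cholesky(P)                            # upper triangular, P = R^T R
    rhs = DA.T @ ((b - gamma) / c)
    mean = solve_triangular(R, solve_triangular(R, rhs, trans="T"))
    return mean + solve_triangular(R, rng.standard_normal(n))
```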

MCMC Method for Sampling from p(x, δ | b):

0. Initialize $\delta_0$ and set $k = 0$;
1. Compute $\hat{b} \sim N(b, C)$ and $\hat{c} \sim N(0, (\delta_k L)^{-1})$, and then
   $x_k = \arg\min_x \left\{ \frac{1}{2} \|C^{-1/2}(DAx - (\hat{b} - \gamma))\|^2 + \frac{\delta_k}{2} \|L^{1/2}(x - \hat{c})\|^2 \right\};$
2. Compute $\delta_{k+1} \sim \Gamma\left( (n-1)/2 + \alpha_\delta,\; \tfrac{1}{2}\, x_k^T L x_k + \beta_\delta \right)$;
3. Set $k = k + 1$ and return to Step 1.

Once again, in 1D we compute $x_k$ in Step 1 directly, whereas in 2D we use conjugate gradient (CG) iterations. Because of the presence of $C$ in the optimization problem, an efficient preconditioner for CG does not exist; however, $C$ can be viewed as a preconditioner of sorts, as it seems to accelerate the convergence of CG.

4. Numerical Experiments

In this section we present the results obtained by applying our framework to both real and synthetic examples in 1D and 2D.

4.1. One-Dimensional Examples

We begin with a comparison of the results of our method to the results obtained using classical boundary conditions, calculated on synthetic data. This is followed by an example with real data from X-ray radiography, for which we present the results of the sampling with an anisotropic prior and the data-driven boundary conditions.

4.1.1. A One-Dimensional Example for Comparing Boundary Conditions

We begin with a synthetic 1D deconvolution problem in order to illustrate how the various boundary conditions work. Consider the 1D convolution model

$$b(s) = \int_0^1 A(s - s')\,x(s')\,ds', \quad 21/120 \le s \le 100/120,$$

with a Gaussian convolution kernel $A(s) = \exp(-s^2/(2\gamma^2))/\sqrt{2\pi\gamma^2}$, $\gamma > 0$. Then, discretizing the integral using mid-point quadrature yields the matrix $A$ defined by

$$[A]_{ij} = h \exp\left( -((i - j)h)^2 / (2\gamma^2) \right) / \sqrt{2\pi\gamma^2}, \quad 1 \le i, j \le n, \tag{25}$$

where $h = 1/n$, with $n$ the number of grid points in $[0, 1]$. We use $n = 120$ and then restrict to the central 80 grid points to obtain the vector representation, $b$, of $b$ on $[21/120, 100/120]$. Thus the model has the form $b = DAx$, where $D$ is the matrix obtained by extracting rows 21 through 100 of the $120 \times 120$ identity matrix. The true image $x$ used to generate the data is plotted on the upper-left in Figure 1, as is the data $b$ generated using noise model (2), with the noise variance $\lambda^{-1}$ chosen so that the noise strength is 2% that of the signal strength.

Figure 1: The one-dimensional image used to generate the data and the blurred, noisy data (a), and a comparison of classical boundary conditions to the method presented here (b). Images (c) and (d) show zoomed-in views of the left and right boundaries, respectively.
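A sketch of this test problem's construction (the value of γ here is an illustrative choice, not a value specified in the text):

```python
import numpy as np

n, m = 120, 80                 # extended grid and FOV sizes
h, gam = 1.0 / n, 0.02

i = np.arange(1, n + 1)
A = h * np.exp(-((i[:, None] - i[None, :]) * h) ** 2 / (2 * gam ** 2)) \
      / np.sqrt(2 * np.pi * gam ** 2)        # eq. (25)

D = np.eye(n)[20:100, :]       # rows 21 through 100 of the 120x120 identity
DA = D @ A                     # forward model: b = D A x + noise
```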

On the upper-right in Figure 1 is a plot of the MAP reconstruction of $x$ obtained by minimizing (11) using the four different choices of boundary conditions. For regularization, we used $L = D^T D$ and a fixed regularization parameter $\alpha = \delta/\lambda$ chosen by hand. The lower plots in Figure 1 are magnifications of the upper-right plot near the left and right boundaries of the FOV, in the left and right plots respectively. Note that our approach is the only one that retains the model matrix $DA$; the others yield a different $A$ based on the boundary condition assumption.

The results are as one would expect: the worse the boundary condition assumption for the specific example, the worse the boundary artifacts. Listed from worst to best (measured by nearness to the true image near the boundaries) for this example, the boundary conditions are: Dirichlet, periodic, Neumann, and our extended domain approach. The extended domain approach yields only slightly better results than the Neumann boundary condition, and only on the right-hand side, where the Neumann (or reflective) boundary condition is not quite correct; the difference, however, is quite small.

4.1.2. A One-Dimensional Example in X-ray Radiography, Poisson Approximation

Most applications of imaging are qualitative, in that the goal is to produce image reconstructions that look as good as possible. In the security sciences, however, pulsed-power X-ray radiography is a quantitative imaging diagnostic, used to calculate the true locations of image features in 3D space or to calculate the densities of the objects being imaged [1, 16, 9, 13]. In these applications it is often possible to accurately determine the sources and magnitude of noise, and such images are often dominated by Poisson noise due to particle counting on the CCD. In this case, it is natural to use the Poisson formulation detailed in Section 3.2.

Figure 2 (a) shows an image taken from a pulsed-power X-ray radiography system; the object in the scene is a so-called step wedge, which is used to determine the X-ray transmission of the system. The object consists of a single material, with steps of different thicknesses. Thicker regions of the object correspond to darker regions of the image. A single vertical cross-section is extracted from the image (or several image columns are averaged together), and the resulting curve must be deblurred before the transmission can be accurately determined. (See [8] for a simulation-based approach to computing transmission curves. Experimental approaches for high-energy X-ray systems are largely undocumented in the literature.)

Figure 2: Image (a) shows the radiograph of a so-called step wedge used for object density reconstruction. The goal is to distinguish among the darkest shades of gray. In (c), on the bottom, is shown a vertical lineout (cross section) from the image, with the mean of the deblurred samples and 95% credibility bands. Image (b) gives the histogram of the precision parameter (δ) samples.

Given the nature of the data, it is undesirable to use the isotropic smoothness prior $L = D^T D$. Hence we first compute the MAP reconstruction $x_\delta$ defined by (23) with $L = D^T D$ and $\delta$ chosen using the discrepancy principle [15], and then use $x_\delta$ to define an anisotropic smoothness prior as follows:

$$L = D^T W D, \quad \text{with} \quad W = \mathrm{diag}\left( |D x_{\mathrm{DP}}| \right)^{-1},$$

where $x_{\mathrm{DP}}$ is the discrepancy-principle reconstruction $x_\delta$ just described. This prior precision matrix is designed to pick up the edge information, since it allows for large increments (or local differences) where the derivative of the signal is large.

The reconstruction shown in Figure 2 (c) is the mean of the samples computed using the MCMC scheme in Section 3.2. While the mean of the samples does not pick up the edge information clearly, the 95% credibility bands do show the step geometry of the object. Further, there are no discernible boundary artifacts in the reconstruction, due to the approach presented here. Figure 2 (b) shows the histogram of the samples of the precision (or regularization) parameter $\delta$. The discrepancy principle value of the parameter was 0.49; the mean of the samples is 0.8, with an approximately single-peaked distribution.

4.2. Two-Dimensional Examples

Finally, we consider a two-dimensional image deconvolution test case, in which the mathematical model is of the form

$$b(s, t) = \int_{-1/2}^{3/2} \int_{-1/2}^{3/2} a(s - s', t - t')\,x(s', t')\,ds'\,dt', \quad 0 \le s, t \le 1.$$

We discretize using mid-point quadrature on a uniform computational grid over $[-1/2, 3/2] \times [-1/2, 3/2]$. Then the vector $x$ is $N^2 \times 1$ while $b$ is $M^2 \times 1$, since $b$ is defined only over $[0, 1] \times [0, 1]$. This yields a system of linear equations $b = DAx$; note that $D$ is the discretization of the indicator function on $[0, 1] \times [0, 1]$. We assume that $x$ is a periodic function on $[-1/2, 3/2] \times [-1/2, 3/2]$, so that $A$ has block-circulant-with-circulant-blocks (BCCB) structure and can be diagonalized by the 2D-DFT, for low storage requirements and efficient matrix-vector multiplication.

4.2.1. Gaussian Noise, Synthetic Data Test Case

The data $b$ is generated using (7) with the noise variance $\lambda^{-1}$ chosen so that the noise strength is 2% that of the signal strength.
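A sketch of the resulting 2D forward operator: a BCCB matrix-vector product applied via the 2D FFT, followed by restriction to the FOV (function names and the PSF layout are ours):

```python
import numpy as np

def blur_2d_extended(X_ext, psf, M):
    """2D analogue of (6): periodic convolution on the N x N extended
    image (a BCCB matrix diagonalized by the 2D DFT), then restriction
    to the central M x M FOV. psf is a (2K+1) x (2K+1) array."""
    N = X_ext.shape[0]                    # N = M + 2K
    K = (N - M) // 2
    P = np.zeros((N, N))
    c = N // 2
    P[c - K : c + K + 1, c - K : c + K + 1] = psf
    P = np.roll(P, (-c, -c), axis=(0, 1))          # center PSF at (0, 0)
    Y = np.real(np.fft.ifft2(np.fft.fft2(P) * np.fft.fft2(X_ext)))
    return Y[K : K + M, K : K + M]                 # the restriction D
```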

Figure 3: On the left is the two-dimensional image used to generate the data, and on the right is the blurred data corrupted by Gaussian noise.

In order to obtain the noise-free data $DAx$, we begin with an extended true image, compute the 2D discrete convolution (5) assuming periodic BCs, and then restrict to the central sub-image to obtain $DAx$. The central region of the image used to generate the data and the data $b$ are shown in Figure 3.

We reconstruct the image by sampling from the posterior density function $p(x, \lambda, \delta \mid b)$ defined in (14) using the MCMC method described in Section 3.1. We computed 4 parallel MCMC chains and reached an $\hat{R}$ value of 1.05. The initial values for the chains were $\delta = 0.1$ and $\lambda = 5$. We plot the mean of the sampled images, with negative values set to zero, as the reconstruction on the upper-left in Figure 4; it had a relative error of 0.2. We also reconstruct $x$ using MAP estimation with $\alpha = \delta/\lambda$ chosen using GCV and plot it in the upper-right in Figure 4; it had a relative error of $\|x_\alpha - x_{\mathrm{true}}\| / \|x_{\mathrm{true}}\| = 0.0985$. From the samples for $\lambda$ and $\delta$, on the lower-left in Figure 4, we plot histograms for $\lambda$, $\delta$, and the regularization parameter $\alpha = \delta/\lambda$. Note that the noise precision used to generate the data is contained in the 95% credibility interval for the $\lambda$ samples, [3.139, 3.29]. And finally, we plot the pixel-wise standard deviation of the samples for $x$ in the lower-right in Figure 4.

4.2.2. Poisson Noise, Synthetic Data Test Case

Next, we use the same synthetic data example, but with the Poisson noise model (20).

Figure 4: Reconstructions, Gaussian test case. On the upper-left is the sample mean of the $x$ samples computed using the MCMC method. On the upper-right is the MAP reconstruction with $\alpha$ chosen using GCV. On the lower-left are the histograms of the $\lambda$ and $\delta$ samples, as well as of the corresponding $\alpha = \delta/\lambda$. And finally, on the lower-right is the pixel-wise standard deviation computed from the samples for $x$.

Figure 5: On the left is the two-dimensional image used to generate the data, and on the right is the blurred data corrupted by Poisson noise.

We use the Gaussian approximation (22), so that the required large-scale minimization tasks are all quadratic and hence only require an application of CG. The data $b$ in the Poisson case corresponds to counts, so we increase the intensity of the true image from the previous example. The noise-free data $DAx$ is obtained as in the Gaussian case. The central region of the image used to generate the data and the data $b$ are shown in Figure 5.

We reconstruct the image by sampling from the posterior density function $p(x, \delta \mid b)$ defined in (24) using the MCMC method described in Section 3.2. We computed 4 parallel MCMC chains and reached an $\hat{R}$ value of 1.05. The initial value for the chains was $\delta = 0.1$. We plot the mean of the sampled images, with negative values set to zero, as the reconstruction on the upper-left in Figure 6; it had a relative error of $9.58 \times 10^{-2}$. We also reconstruct $x$ using MAP estimation with $\delta$ chosen using GCV. The reconstruction is given in the upper-right in Figure 6, and it had a relative error of $9.54 \times 10^{-2}$. On the lower-left in Figure 6 we plot a histogram of the samples of $\delta$. And finally, we plot the pixel-wise standard deviation of the samples for $x$ in the lower-right in Figure 6, which is smaller in the regions of the true image of lower intensity.

5. Conclusions

We have presented an MCMC sampling scheme with a data-driven approach to dealing with boundary artifacts in deconvolution problems.

Figure 6: Reconstructions, Poisson test case. On the upper-left is the sample mean of the $x$ samples computed using the MCMC method. On the upper-right is the MAP reconstruction with $\delta$ chosen using GCV. On the lower-left is the histogram of the $\delta$ samples. And finally, on the lower-right is the pixel-wise standard deviation computed from the samples for $x$.

The method samples the unknown image $x$ defined on an extended domain and then restricts to the field of view (FOV) of the imaging instrument, thus removing any boundary artifacts. The approach retains computational efficiency by assuming a periodic boundary condition on the extended domain. The resulting model is no longer diagonalizable by a fast transform, but efficient iterative methods nonetheless exist for its solution, and direct methods can be used in 1D cases.

The MCMC method samples the unknown object $x$, as well as the noise precision $\lambda$ and prior precision $\delta$, making regularization parameter selection unnecessary. Moreover, in the Poisson noise case, we use a Gaussian approximation of the negative-log Poisson likelihood to extend our framework. In this case, there is no $\lambda$ parameter, and hence we only sample $x$ and $\delta$. The prior is defined by a Gaussian Markov random field (GMRF), which in all but one of the numerical experiments corresponds to the standard scaled negative-Laplacian precision matrix $\delta L$. In the one experiment involving real data, however, an anisotropic smoothness prior was needed in order to obtain good results. In each of the numerical experiments presented, the sampling scheme worked well and boundary artifacts were negligible in the associated reconstructions.

6. Acknowledgements

The authors would like to thank the U.S. Department of Energy Nevada Radiography Working Group for providing the X-ray radiography data.

7. References

[1] J. M. Bardsley, MCMC-Based Image Reconstruction with Uncertainty Quantification, SIAM J. Sci. Comput., vol. 34, no. 3, 2012, pp. A1316-A1332.

[2] J. M. Bardsley, Gaussian Markov Random Field Priors for Inverse Problems, Inverse Problems and Imaging, vol. 7, no. 2, 2013.

[3] J. M. Bardsley and J. Goldes, Regularization Parameter Selection Methods for Ill-Posed Poisson Maximum Likelihood Estimation, Inverse Problems, 25 (2009).

[4] M. Bertero and P. Boccacci, A simple method for the reduction of boundary effects in the Richardson-Lucy approach to image deconvolution, Astronomy and Astrophysics, 437 (2005).

[5] J. Besag, Spatial Interaction and the Statistical Analysis of Lattice Systems, Journal of the Royal Statistical Society, Series B, 36(2) (1974), pp. 192-236.

[6] A. Gelman, J. B. Carlin, H. S. Stern, and D. B. Rubin, Bayesian Data Analysis, Second Edition, Chapman & Hall/CRC, Texts in Statistical Science, 2004.

[7] P. C. Hansen, J. Nagy, and D. O'Leary, Deblurring Images: Matrices, Spectra, and Filtering, SIAM, Philadelphia, 2006.

[8] G. Hoff, S. F. Firmino, R. M. Papaleo, and M. T. de Vilhena, Estimating Transmission Curves of Primary X-Ray Beams Used in Diagnostic Radiology, IEEE Transactions on Nuclear Science, 59(2) (2012), DOI: 10.1109/TNS.

[9] T. A. Kelley and D. M. Stupin, Radiographic Least Squares Fitting Technique Accurately Measures Dimensions and X-Ray Attenuation, Review of Progress in Quantitative Nondestructive Evaluation, (1998).

[10] D. Mosher, R. Commisso, G. Cooperstein, S. Stephanakis, S. Swanekamp, and F. Young, Rod-Pinch X-Radiography for Diagnosis of Material Response, American Physical Society, 42nd Annual Meeting of the APS Division of Plasma Physics combined with the 10th International Congress on Plasma Physics, October 23-27, 2000, Quebec City, Canada, Meeting ID: DPP, abstract #PO2.1.

[11] M. K. Ng, R. H. Chan, and W. Tang, A fast algorithm for deblurring models with Neumann boundary conditions, SIAM J. Sci. Comput., 21(3) (1999), pp. 851-866.

[12] H. Rue and L. Held, Gaussian Markov Random Fields: Theory and Applications, Chapman and Hall/CRC, 2005.

[13] J. Smith, R. Carlson, et al., Cygnus Dual Beam Radiography Source, Pulsed Power Conference, 2005 IEEE, pp. 334-337, June 2005, doi: 10.1109/PPC.

[14] R. Vio, J. Bardsley, M. Donatelli, and W. Wamsteker, Dealing with edge effects in least-squares image deconvolution problems, Astronomy and Astrophysics, 442 (2005).

[15] C. R. Vogel, Computational Methods for Inverse Problems, SIAM, Philadelphia, 2002.

[16] R. L. Whitman, H. M. Hanson, and K. A. Mueller, Image Analysis for Dynamic Weapons Systems, Los Alamos Report LALP-85-15.


More information

Bagging During Markov Chain Monte Carlo for Smoother Predictions

Bagging During Markov Chain Monte Carlo for Smoother Predictions Bagging During Markov Chain Monte Carlo for Smoother Predictions Herbert K. H. Lee University of California, Santa Cruz Abstract: Making good predictions from noisy data is a challenging problem. Methods

More information

Image Deconvolution. Xiang Hao. Scientific Computing and Imaging Institute, University of Utah, UT,

Image Deconvolution. Xiang Hao. Scientific Computing and Imaging Institute, University of Utah, UT, Image Deconvolution Xiang Hao Scientific Computing and Imaging Institute, University of Utah, UT, hao@cs.utah.edu Abstract. This is a assignment report of Mathematics of Imaging course. The topic is image

More information

A theoretical analysis of L 1 regularized Poisson likelihood estimation

A theoretical analysis of L 1 regularized Poisson likelihood estimation See discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/228359355 A theoretical analysis of L 1 regularized Poisson likelihood estimation Article in

More information

Marginal density. If the unknown is of the form x = (x 1, x 2 ) in which the target of investigation is x 1, a marginal posterior density

Marginal density. If the unknown is of the form x = (x 1, x 2 ) in which the target of investigation is x 1, a marginal posterior density Marginal density If the unknown is of the form x = x 1, x 2 ) in which the target of investigation is x 1, a marginal posterior density πx 1 y) = πx 1, x 2 y)dx 2 = πx 2 )πx 1 y, x 2 )dx 2 needs to be

More information

The Inversion Problem: solving parameters inversion and assimilation problems

The Inversion Problem: solving parameters inversion and assimilation problems The Inversion Problem: solving parameters inversion and assimilation problems UE Numerical Methods Workshop Romain Brossier romain.brossier@univ-grenoble-alpes.fr ISTerre, Univ. Grenoble Alpes Master 08/09/2016

More information

SIAM Journal on Scientific Computing, 1999, v. 21 n. 3, p

SIAM Journal on Scientific Computing, 1999, v. 21 n. 3, p Title A Fast Algorithm for Deblurring Models with Neumann Boundary Conditions Author(s) Ng, MKP; Chan, RH; Tang, WC Citation SIAM Journal on Scientific Computing, 1999, v 21 n 3, p 851-866 Issued Date

More information

Implementing an anisotropic and spatially varying Matérn model covariance with smoothing filters

Implementing an anisotropic and spatially varying Matérn model covariance with smoothing filters CWP-815 Implementing an anisotropic and spatially varying Matérn model covariance with smoothing filters Dave Hale Center for Wave Phenomena, Colorado School of Mines, Golden CO 80401, USA a) b) c) Figure

More information

Recent Advances in Bayesian Inference Techniques

Recent Advances in Bayesian Inference Techniques Recent Advances in Bayesian Inference Techniques Christopher M. Bishop Microsoft Research, Cambridge, U.K. research.microsoft.com/~cmbishop SIAM Conference on Data Mining, April 2004 Abstract Bayesian

More information

AMS classification scheme numbers: 65F10, 65F15, 65Y20

AMS classification scheme numbers: 65F10, 65F15, 65Y20 Improved image deblurring with anti-reflective boundary conditions and re-blurring (This is a preprint of an article published in Inverse Problems, 22 (06) pp. 35-53.) M. Donatelli, C. Estatico, A. Martinelli,

More information

Adaptive Corrected Procedure for TVL1 Image Deblurring under Impulsive Noise

Adaptive Corrected Procedure for TVL1 Image Deblurring under Impulsive Noise Adaptive Corrected Procedure for TVL1 Image Deblurring under Impulsive Noise Minru Bai(x T) College of Mathematics and Econometrics Hunan University Joint work with Xiongjun Zhang, Qianqian Shao June 30,

More information

Bayesian Regression Linear and Logistic Regression

Bayesian Regression Linear and Logistic Regression When we want more than point estimates Bayesian Regression Linear and Logistic Regression Nicole Beckage Ordinary Least Squares Regression and Lasso Regression return only point estimates But what if we

More information

Inverse problems Total Variation Regularization Mark van Kraaij Casa seminar 23 May 2007 Technische Universiteit Eindh ove n University of Technology

Inverse problems Total Variation Regularization Mark van Kraaij Casa seminar 23 May 2007 Technische Universiteit Eindh ove n University of Technology Inverse problems Total Variation Regularization Mark van Kraaij Casa seminar 23 May 27 Introduction Fredholm first kind integral equation of convolution type in one space dimension: g(x) = 1 k(x x )f(x

More information

Inverse problems and uncertainty quantification in remote sensing

Inverse problems and uncertainty quantification in remote sensing 1 / 38 Inverse problems and uncertainty quantification in remote sensing Johanna Tamminen Finnish Meterological Institute johanna.tamminen@fmi.fi ESA Earth Observation Summer School on Earth System Monitoring

More information

NONLINEAR DIFFUSION PDES

NONLINEAR DIFFUSION PDES NONLINEAR DIFFUSION PDES Erkut Erdem Hacettepe University March 5 th, 0 CONTENTS Perona-Malik Type Nonlinear Diffusion Edge Enhancing Diffusion 5 References 7 PERONA-MALIK TYPE NONLINEAR DIFFUSION The

More information

Polynomial accelerated MCMC... and other sampling algorithms inspired by computational optimization

Polynomial accelerated MCMC... and other sampling algorithms inspired by computational optimization Polynomial accelerated MCMC... and other sampling algorithms inspired by computational optimization Colin Fox fox@physics.otago.ac.nz Al Parker, John Bardsley Fore-words Motivation: an inverse oceanography

More information

Efficient MCMC Sampling for Hierarchical Bayesian Inverse Problems

Efficient MCMC Sampling for Hierarchical Bayesian Inverse Problems Efficient MCMC Sampling for Hierarchical Bayesian Inverse Problems Andrew Brown 1,2, Arvind Saibaba 3, Sarah Vallélian 2,3 CCNS Transition Workshop SAMSI May 5, 2016 Supported by SAMSI Visiting Research

More information

Covariance Matrix Simplification For Efficient Uncertainty Management

Covariance Matrix Simplification For Efficient Uncertainty Management PASEO MaxEnt 2007 Covariance Matrix Simplification For Efficient Uncertainty Management André Jalobeanu, Jorge A. Gutiérrez PASEO Research Group LSIIT (CNRS/ Univ. Strasbourg) - Illkirch, France *part

More information

STA 4273H: Statistical Machine Learning

STA 4273H: Statistical Machine Learning STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Statistics! rsalakhu@utstat.toronto.edu! http://www.utstat.utoronto.ca/~rsalakhu/ Sidney Smith Hall, Room 6002 Lecture 3 Linear

More information

Probing the covariance matrix

Probing the covariance matrix Probing the covariance matrix Kenneth M. Hanson Los Alamos National Laboratory (ret.) BIE Users Group Meeting, September 24, 2013 This presentation available at http://kmh-lanl.hansonhub.com/ LA-UR-06-5241

More information

Time-Sensitive Dirichlet Process Mixture Models

Time-Sensitive Dirichlet Process Mixture Models Time-Sensitive Dirichlet Process Mixture Models Xiaojin Zhu Zoubin Ghahramani John Lafferty May 25 CMU-CALD-5-4 School of Computer Science Carnegie Mellon University Pittsburgh, PA 523 Abstract We introduce

More information

Spatial Statistics with Image Analysis. Outline. A Statistical Approach. Johan Lindström 1. Lund October 6, 2016

Spatial Statistics with Image Analysis. Outline. A Statistical Approach. Johan Lindström 1. Lund October 6, 2016 Spatial Statistics Spatial Examples More Spatial Statistics with Image Analysis Johan Lindström 1 1 Mathematical Statistics Centre for Mathematical Sciences Lund University Lund October 6, 2016 Johan Lindström

More information

Development of Stochastic Artificial Neural Networks for Hydrological Prediction

Development of Stochastic Artificial Neural Networks for Hydrological Prediction Development of Stochastic Artificial Neural Networks for Hydrological Prediction G. B. Kingston, M. F. Lambert and H. R. Maier Centre for Applied Modelling in Water Engineering, School of Civil and Environmental

More information

Elaine T. Hale, Wotao Yin, Yin Zhang

Elaine T. Hale, Wotao Yin, Yin Zhang , Wotao Yin, Yin Zhang Department of Computational and Applied Mathematics Rice University McMaster University, ICCOPT II-MOPTA 2007 August 13, 2007 1 with Noise 2 3 4 1 with Noise 2 3 4 1 with Noise 2

More information

Markov Random Fields

Markov Random Fields Markov Random Fields Umamahesh Srinivas ipal Group Meeting February 25, 2011 Outline 1 Basic graph-theoretic concepts 2 Markov chain 3 Markov random field (MRF) 4 Gauss-Markov random field (GMRF), and

More information

Conjugate Gradient (CG) Method

Conjugate Gradient (CG) Method Conjugate Gradient (CG) Method by K. Ozawa 1 Introduction In the series of this lecture, I will introduce the conjugate gradient method, which solves efficiently large scale sparse linear simultaneous

More information

Astronomy. Astrophysics. Least-squares methods with Poissonian noise: Analysis and comparison with the Richardson-Lucy algorithm

Astronomy. Astrophysics. Least-squares methods with Poissonian noise: Analysis and comparison with the Richardson-Lucy algorithm A&A 436, 74 755 (5) DOI:.5/4-636:4997 c ESO 5 Astronomy & Astrophysics Least-squares methods with ian noise: Analysis and comparison with the Richardson-Lucy algorithm R. Vio,3,J.Bardsley,andW.Wamsteker

More information

What is Image Deblurring?

What is Image Deblurring? What is Image Deblurring? When we use a camera, we want the recorded image to be a faithful representation of the scene that we see but every image is more or less blurry, depending on the circumstances.

More information

Parameter Estimation in the Spatio-Temporal Mixed Effects Model Analysis of Massive Spatio-Temporal Data Sets

Parameter Estimation in the Spatio-Temporal Mixed Effects Model Analysis of Massive Spatio-Temporal Data Sets Parameter Estimation in the Spatio-Temporal Mixed Effects Model Analysis of Massive Spatio-Temporal Data Sets Matthias Katzfuß Advisor: Dr. Noel Cressie Department of Statistics The Ohio State University

More information

The Jackknife-Like Method for Assessing Uncertainty of Point Estimates for Bayesian Estimation in a Finite Gaussian Mixture Model

The Jackknife-Like Method for Assessing Uncertainty of Point Estimates for Bayesian Estimation in a Finite Gaussian Mixture Model Thai Journal of Mathematics : 45 58 Special Issue: Annual Meeting in Mathematics 207 http://thaijmath.in.cmu.ac.th ISSN 686-0209 The Jackknife-Like Method for Assessing Uncertainty of Point Estimates for

More information

Towards a Bayesian model for Cyber Security

Towards a Bayesian model for Cyber Security Towards a Bayesian model for Cyber Security Mark Briers (mbriers@turing.ac.uk) Joint work with Henry Clausen and Prof. Niall Adams (Imperial College London) 27 September 2017 The Alan Turing Institute

More information

A Study of Numerical Algorithms for Regularized Poisson ML Image Reconstruction

A Study of Numerical Algorithms for Regularized Poisson ML Image Reconstruction A Study of Numerical Algorithms for Regularized Poisson ML Image Reconstruction Yao Xie Project Report for EE 391 Stanford University, Summer 2006-07 September 1, 2007 Abstract In this report we solved

More information

An application of hidden Markov models to asset allocation problems

An application of hidden Markov models to asset allocation problems Finance Stochast. 1, 229 238 (1997) c Springer-Verlag 1997 An application of hidden Markov models to asset allocation problems Robert J. Elliott 1, John van der Hoek 2 1 Department of Mathematical Sciences,

More information

Regularizing inverse problems. Damping and smoothing and choosing...

Regularizing inverse problems. Damping and smoothing and choosing... Regularizing inverse problems Damping and smoothing and choosing... 141 Regularization The idea behind SVD is to limit the degree of freedom in the model and fit the data to an acceptable level. Retain

More information