Scan Time Optimization for Post-injection PET Scans


Presented at 1998 IEEE Nuc. Sci. Symp. Med. Im. Conf.

Scan Time Optimization for Post-injection PET Scans

Hakan Erdoğan and Jeffrey A. Fessler
4415 EECS Bldg., University of Michigan, Ann Arbor, MI 48109-2122

Abstract
Previous methods for optimizing the scan times for PET transmission and emission scans under a total scan time constraint were based on linear non-statistical methods using noise equivalent count (NEC) criteria. The scan times determined by NEC analysis may be suboptimal when nonlinear statistical image reconstruction methods are used. For statistical image reconstruction, the predicted variance in selected regions of interest is an appropriate alternative to NEC analysis. We propose a new method for optimizing the relative scan times (fractions) based on analytical approximations to the covariance of images reconstructed by both conventional and penalized-likelihood methods. We perform simulations to compare predicted standard deviations with empirical ones. Results show that for statistical transmission image reconstruction, the optimal fraction of the scan time devoted to transmission scanning is shorter than for conventional transmission smoothing.

I. INTRODUCTION
For PET reconstruction, one has to perform two sets of scans, namely transmission and emission scans. One uses the attenuation correction information obtained from the former scan to aid in estimating the radiotracer emission image from the latter. Conventional reconstruction methods are based on linear processing of the transmission and emission data: multiplicative correction of attenuation factors in the sinogram domain, followed by FBP to reconstruct the emission image. This approach ignores the Poisson nature of the data. Recently, there is growing interest in reconstruct/reproject methods for attenuation correction, in which one reconstructs the attenuation map and, possibly after some processing in the image domain, reprojects this map for use in the attenuation correction factor (ACF) computation.
The use of statistical methods for reconstructing attenuation maps as well as emission images is becoming attractive in the medical research community, especially due to faster computers and faster algorithms. In this paper, we compute ACFs using both conventional smoothing and penalized-likelihood reconstruct/reproject (PL) methods for post-injection transmission scans. For brevity, we reconstruct emission images with FBP only. Resolution matching is critical in attenuation correction, so we add a post-filtering step to the statistical reconstructions to yield approximately Gaussian point spread functions, which reduces artifacts from point spread function mismatches. This post-filter reduces the negative sidelobes of the point spread function of penalized-likelihood reconstructions [1]. In this work, we study the effects of emission and transmission scan time durations on the variance of the reconstructed emission image for different reconstruction methods. In particular, we are interested in the optimum scan time fractions under a fixed total scan time constraint, which would result in the smallest variance in a region of interest in the final emission image estimate. Previous studies of scan time optimization [2] were based on NEC criteria with multiple acquisitions of emission and transmission data, and focused on conventional reconstructions. The (co)variance approximations developed here might also be useful for other purposes, such as determining the weights in a weighted least-squares image reconstruction [3]. We analyze both the conventional and statistical reconstruction cases. We give approximate analytical formulas for conventional and quadratic-penalty attenuation map reconstructions and compare empirical results with the analytical predictions. Our analysis is based on Poisson statistics and mathematical approximations [4]. Let $y^E$ and $y^T$ be the emission and post-injection transmission scan count vectors, and let $\mu$ and $\lambda$ be the attenuation map and emission image pixel value vectors, respectively.
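The measurement model and the conventional processing chain described in the following sections can be sketched end-to-end on a toy 1-D sinogram. This is a minimal illustration, not the paper's implementation: all numerical values (counts, rates, contamination fraction, smoothing width) are assumed, the smoothing operator is a simple radial Gaussian convolution, and the final FBP step is omitted (we stop at the attenuation-corrected projections).

```python
import numpy as np

rng = np.random.default_rng(0)

# --- assumed toy constants (not from the paper) ---
n = 128                    # radial bins of a 1-D "sinogram"
tau_T, tau_E = 12.0, 8.0   # transmission / emission scan times (min)
b = 2000.0                 # blank scan count rate
rT, rE = 10.0, 5.0         # randoms rates (transmission / emission)
eps = 1.0                  # detector efficiency times scaling factor
k = 0.05                   # emission-contamination fraction

def smooth(x, fwhm=5.0):
    """Radial Gaussian smoothing (toy stand-in for the smoothing operator)."""
    sigma = fwhm / 2.355
    r = np.arange(-int(4 * sigma), int(4 * sigma) + 1)
    g = np.exp(-0.5 * (r / sigma) ** 2)
    return np.convolve(x, g / g.sum(), mode="same")

# true line integrals, survival probabilities, emission projections
ell = np.linspace(0.1, 1.5, n)
alpha = np.exp(-ell)
p = 80.0 * np.ones(n)
gamma = k * eps * alpha * p           # emission contamination count rate

# independent Poisson measurements with the stated means
yE = rng.poisson(tau_E * (eps * alpha * p + rE))
yT = rng.poisson(tau_T * (b * alpha + gamma + rT))

# conventional chain: contamination estimate, survival probabilities,
# ACFs by reciprocation, attenuated projections, corrected projections
gamma_hat = smooth(k * (yE / tau_E - rE))
alpha_hat = smooth((yT / tau_T - rT - gamma_hat) / b)
eta_hat = 1.0 / alpha_hat
t_hat = smooth((yE / tau_E - rE) / eps)
p_hat = eta_hat * t_hat
```

With high transmission counts, the corrected projections `p_hat` track the true emission projections `p`; feeding `p_hat` into a ramp-filtered FBP would complete the conventional reconstruction described below.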
We define the survival probabilities $\alpha_i(\mu) = e^{-\ell_i(\mu)}$, where $\ell_i(\mu)$ represents the line integral of the attenuation map along projection $i$. We also define the emission contamination count rate $\gamma_i = k\,\varepsilon_i\,\alpha_i(\mu)\,p_i(\lambda)$, where $k$ is the fraction of emission counts contaminating the transmission data (the portion in the transmission window for rotating rod sources), $p_i(\lambda)$ represents the geometric projection of the true emission image, and $\varepsilon_i$ contains the detector efficiencies and a scaling factor that accounts for the emission scan count rate. We assume that the emission scan measurements $y_i^E$ and the transmission scan measurements $y_i^T$ are independent Poisson measurements with corresponding means

$\bar y_i^E = \tau^E\,[\varepsilon_i\,\alpha_i(\mu)\,p_i(\lambda) + r_i^E]$,   (1)
$\bar y_i^T = \tau^T\,[b_i\,\alpha_i(\mu) + \gamma_i + r_i^T]$.   (2)

Here, $\tau^E$ and $\tau^T$ are the emission and transmission scan times, respectively; $\ell_i(\mu)$ and $p_i(\lambda)$ are geometric tomographic projections of the parameters; and $b_i$, $r_i^T$ and $r_i^E$ are the blank scan, transmission scan randoms, and emission scan randoms count rates, respectively. We assume $b_i$, $r_i^T$, $r_i^E$, $k$ and $\varepsilon_i$ are known constants throughout this work. II. ACF ESTIMATION Attenuation correction is a must for quantitatively accurate emission image reconstruction. We define the attenuation correction factors (ACFs) as $\eta_i = 1/\alpha_i(\mu) = e^{\ell_i(\mu)}$: the multiplicative factor that corrects for the effects of attenuation in the emission data. We consider two different ways of estimating the ACFs: 1) the conventional smoothing

method, and 2) the reconstruct/reproject penalized-likelihood (PL) method. In the non-statistical conventional method, we estimate the emission contamination by

$\hat\gamma_i = \mathrm{smooth}\{ k\,(y_i^E/\tau^E - r_i^E) \}$,   (3)

we estimate the survival probabilities by

$\hat\alpha_i = \mathrm{smooth}\{ (y_i^T/\tau^T - r_i^T - \hat\gamma_i)/b_i \}$,   (4)

and we estimate the ACFs by reciprocating the survival probabilities, that is, $\hat\eta_i = 1/\hat\alpha_i$. The smoothing operation is used to reduce noise in the ACFs. We also use smoothing to reduce noise in the emission contamination estimate in (3). In a statistical reconstruction, one estimates the ACFs by $\hat\eta_i = e^{\ell_i(\hat\mu)}$, where $\hat\mu$ is the attenuation map estimate computed by the reconstruction algorithm. The emission contamination estimate (3) is included in the model. The statistical reconstruction is considered in detail in Section V. III. EMISSION IMAGE RECONSTRUCTION For brevity, we consider here only the conventional FBP method to reconstruct emission images. We define the attenuated emission projection function as $t_i = \alpha_i(\mu)\,p_i(\lambda)$. A linear unbiased estimate of this function is

$\hat t_i = \mathrm{smooth}\{ (y_i^E/\tau^E - r_i^E)/\varepsilon_i \}$.   (5)

Then, an estimate of the emission projections can be obtained by $\hat p_i = \hat\eta_i\,\hat t_i$. The emission image is reconstructed by the standard FBP method. We use the ramp filter only, because $\hat p$ is already a smooth estimate of $p(\lambda)$. Thus, $\hat\lambda = \mathrm{FBP}_{\mathrm{ramp}}\{\hat p\}$. IV. EMISSION COVARIANCE ESTIMATES The covariance of the emission image estimate vector $\hat\lambda$ obtained by the above procedure can be written as follows:

$\mathrm{Cov}\{\hat\lambda\} = B\,\mathrm{Cov}\{\hat p\}\,B'$,   (6)

where the matrix $B$ represents the linear FBP operation with a ramp filter. We need to find the covariance of the random vector $\hat p = \hat\eta \odot \hat t$ (elementwise product). The computation of the exact covariance of this expression is computationally intensive and is not desirable. Instead, we prefer to evaluate this covariance as a separable sum of the covariances of the vectors $\hat\eta$ and $\hat t$. For this purpose, we consider the first-order Taylor series expansion of $\hat\eta_i\,\hat t_i$ in the neighborhood of the means $\bar\eta_i$ and $\bar t_i$ of $\hat\eta_i$ and $\hat t_i$, respectively:

$\hat\eta_i\,\hat t_i \approx \bar\eta_i\,\bar t_i + \bar t_i(\hat\eta_i - \bar\eta_i) + \bar\eta_i(\hat t_i - \bar t_i)$,   (7, 8)

and consequently,

$\mathrm{Cov}\{\hat p\} \approx D(\bar t)\,\mathrm{Cov}\{\hat\eta\}\,D(\bar t) + D(\bar\eta)\,\mathrm{Cov}\{\hat t\}\,D(\bar\eta)$,   (9)

where $D(v)$ denotes the diagonal matrix with diagonal entries $v_i$. The ACFs
$\hat\eta$ are not linearly related to variables with known covariances. In the conventional method, $\hat\eta_i = 1/\hat\alpha_i$; in the statistical method, $\hat\eta_i = e^{\ell_i(\hat\mu)}$. These are both nonlinear functions. Since the covariance of $\hat\alpha$ can be found exactly for the conventional method, and the covariance of $\ell(\hat\mu)$ can be approximated for the statistical method, we can linearize these formulas around the means to get an estimate of the covariance of $\hat\eta$. This linearization was the method used in [6] to estimate the variances of the ACFs. But this linearization is not very accurate, especially for the conventional method, because the function $f(x) = 1/x$ cannot be closely approximated by a linear function when the denominator (the survival probability) is close to zero and the variance of the denominator is high. To overcome this problem, we propose an approximation based on the probability distribution of the ACFs. We assume that the $\hat\eta_i$ are lognormally distributed. A random variable is lognormally distributed if its logarithm is normally distributed. We believe this is a very accurate assumption, because $\log\hat\eta_i$ is an estimate of the projection $\ell_i(\mu)$, and projections of any random variable (here $\mu$) can be assumed Gaussian due to the Central Limit Theorem. This provides us extra information about the ACFs. With this assumption, one can compute the mean and variance of the $\hat\eta_i$ directly: in terms of the mean and variance of $\hat\alpha_i$ in the conventional method, and in terms of the mean and variance of $\ell_i(\hat\mu)$ in the statistical method. If $\log\hat\eta_i$ is normal with mean $m_i$ and variance $\sigma_i^2$, then

$E\{\hat\eta_i\} = e^{m_i + \sigma_i^2/2}$,   (10)
$\mathrm{Var}\{\hat\eta_i\} = e^{2m_i + \sigma_i^2}\,(e^{\sigma_i^2} - 1)$.   (11)

For the conventional method, $m_i$ and $\sigma_i^2$ are obtained by matching the moments implied by the mean and variance of $\hat\alpha_i$. Even with the lognormality assumption, the full covariance matrix of $\hat\eta$ is not easy to compute directly, but the diagonal of the matrix is known. So, we propose this approximation for the covariances:

$\mathrm{Cov}\{\hat\eta_i, \hat\eta_j\} \approx \sqrt{\mathrm{Var}\{\hat\eta_i\}\,\mathrm{Var}\{\hat\eta_j\}}\;\rho_{ij}$,   (12, 13)

where $\rho_{ij}$ represents the correlation coefficient of the vector $\hat\alpha$. In matrix form:

$\mathrm{Cov}\{\hat\eta\} \approx D_\eta\,\mathrm{Corr}\{\hat\alpha\}\,D_\eta$, where $D_\eta = D(\sqrt{\mathrm{Var}\{\hat\eta_i\}})$.
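The lognormal moment relations used above are easy to sanity-check by Monte Carlo. A minimal sketch, with arbitrary assumed values for the mean and variance of the log-ACF:

```python
import numpy as np

rng = np.random.default_rng(1)
m, s2 = 0.8, 0.09   # assumed mean / variance of log(eta); arbitrary values

# lognormal samples, as in the ACF distribution assumption
x = rng.normal(m, np.sqrt(s2), size=1_000_000)
eta = np.exp(x)

mean_pred = np.exp(m + s2 / 2)                       # lognormal mean formula
var_pred = np.exp(2 * m + s2) * (np.exp(s2) - 1.0)   # lognormal variance formula

print(abs(eta.mean() - mean_pred) / mean_pred)   # small relative error
print(abs(eta.var() - var_pred) / var_pred)      # small relative error
```

Because the moments of a lognormal variable are available in closed form, the mean and variance of the ACFs follow directly from the mean and variance of the (approximately Gaussian) log-projection, which is exactly the shortcut the text exploits.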

We make sure that the diagonal of the covariance matrix of $\hat\eta$ matches the variances we get from the lognormal assumption. This formula assumes that the correlation structure of $\hat\eta$ is largely determined by the smoothing operator and is the same as the correlation structure of $\hat\alpha$. Plugging this approximation for $\mathrm{Cov}\{\hat\eta\}$ into (9), we get

$\mathrm{Cov}\{\hat p\} \approx D(\bar t)\,D_\eta\,\mathrm{Corr}\{\hat\alpha\}\,D_\eta\,D(\bar t) + D(\bar\eta)\,\mathrm{Cov}\{\hat t\}\,D(\bar\eta)$.   (14)

The mean and covariance of $\hat t$ can be found easily from expression (5), since it is linearly related to $y^E$. A simple analysis yields $\bar t = \mathrm{smooth}\{\alpha(\mu) \odot p(\lambda)\}$ and, from (5) and (1),

$\mathrm{Cov}\{\hat t\} = \frac{1}{\tau^E}\, S\, D(z)\, S'$,   (15)

where $z_i = [\varepsilon_i\,\alpha_i(\mu)\,p_i(\lambda) + r_i^E]/\varepsilon_i^2$ and $S$ is a smoothing convolution matrix along the radial direction of the projection space. It was suggested in [5] that angular smoothing is not desirable in attenuation correction, so we smooth only in the radial direction. For conventional ACF computation, ignoring the noise in the emission contamination estimate, the covariance of $\hat\alpha$ can be found from (4) and (2):

$\mathrm{Cov}\{\hat\alpha\} = \frac{1}{\tau^T}\, S\, D(w)\, S'$,   (16)

where $w_i = [b_i\,\alpha_i(\mu) + \gamma_i + r_i^T]/b_i^2$. Here, $S$ is the same smoothing matrix as in (5); the same operator is used to obtain both $\hat t$ and $\hat\alpha$ to avoid artifacts from resolution mismatch [7, 8]. We used Gaussian smoothing as suggested in [7], which avoids such artifacts in the reconstructed image. The mean of the emission contamination estimate can be determined from (3), and its variance from (1). Using (7, 8), one can find the mean values of $\hat p$ as

$\bar p = \bar\eta \odot \bar t$.   (17)

The variance of the sum over a region of interest in the emission image can be found from (6), (14), (15) and (16) as

$\mathrm{Var}\{u'\hat\lambda\} = u'\,\mathrm{Cov}\{\hat\lambda\}\,u \approx \frac{1}{\tau^E}\,c^E + \frac{1}{\tau^T}\,c^T$,   (18)

with $c^E = \sum_i z_i\,(a_i^E)^2$ and $c^T = \sum_i w_i\,(a_i^T)^2$, where $u$ is a vector of ones in the region of interest and zeros elsewhere. We define the vectors $a^E = S'\,D(\bar\eta)\,B'\,u$ and $a^T = S'\,D(\bar t/\bar\alpha^2)\,B'\,u$, with the diagonal of the transmission term rescaled so that it matches the lognormal variances (11). V.
PENALIZED-LIKELIHOOD ATTENUATION RECONSTRUCTION While the conventional method of ACF computation has been used for some time, reconstruct/reproject methods have gained interest recently. In a statistical reconstruct/reproject method for ACF computation, an attenuation map estimate $\hat\mu$ is found from noisy transmission data by maximizing the penalized-likelihood objective function $\Phi(\mu) = L(\mu; y^T) - \beta R(\mu)$, where $L(\mu; y^T)$ is the log-likelihood function and $R(\mu)$ is a regularizing roughness penalty function. After estimating the attenuation map, we estimate the ACFs by $\hat\eta_i = e^{\ell_i(\hat\mu)}$, where $\ell_i(\hat\mu)$ is the geometric projection of the attenuation map estimate. If one uses FBP for emission reconstruction, then $\hat t$ should be smoothed to yield a resolution similar to that of the ACFs [9], in order to reduce resolution mismatch artifacts. A. Resolution Penalized-likelihood (PL) and penalized weighted least squares (PWLS) methods are very attractive image reconstruction methods due to their superb noise reduction properties. The variance weighting in the PWLS method reduces the variance of the estimates as compared to penalized unweighted least squares (PULS) or FBP reconstructions, because it makes use of the statistical information in the measurements. However, attenuation maps reconstructed with PL or PWLS methods have nonuniform resolution [1], even with a quadratic penalty. This nonuniform resolution is caused by the variance weighting in the PWLS (or PL) method and hence does not exist in a PULS reconstruction. Due to this nonuniform resolution, ACF computation by the PL method from a real transmission scan causes a resolution mismatch between the emission data and the reconstructed ACFs. This mismatch reveals itself as artifacts in the final reconstructed emission image. Fessler's certainty-based penalty [9] yields more uniform resolution in terms of the average FWHM of the point spread function over the image.
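The claim that variance weighting causes nonuniform resolution can be illustrated with a toy 1-D quadratically penalized (weighted) least-squares smoother. This is a cartoon of the effect, not the tomographic case: the forward model is the identity, the second-difference penalty and all weight values are made up, and "resolution" is read off from the local impulse response.

```python
import numpy as np

n = 101
# second-difference roughness penalty Hessian (toy quadratic penalty)
R = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
beta = 5.0

def local_impulse(w, j):
    """Local impulse response of the (P)WLS smoother x = (W + beta*R)^-1 W y
    for an impulse at pixel j (identity forward model)."""
    W = np.diag(w)
    e = np.zeros(n)
    e[j] = 1.0
    return np.linalg.solve(W + beta * R, W @ e)

# spatially varying data weights, as variance weighting would produce
w_nonuniform = np.where(np.arange(n) < n // 2, 4.0, 0.5)
h_left = local_impulse(w_nonuniform, 25)    # high-weight region
h_right = local_impulse(w_nonuniform, 75)   # low-weight region

# unweighted (PULS-like) smoother: uniform weights
h_puls_l = local_impulse(np.ones(n), 25)
h_puls_r = local_impulse(np.ones(n), 75)

print(h_left.max(), h_right.max())      # differ: nonuniform resolution
print(h_puls_l.max(), h_puls_r.max())   # equal: uniform resolution
```

The weighted smoother produces a taller, narrower impulse response where the data weights are large and a flatter, wider one where they are small, while the unweighted version is shift-invariant away from the boundaries, mirroring the PWLS-vs-PULS contrast drawn in the text.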
But it still has nonuniform resolution in that the point spread function is not circularly symmetric: its level contours look like ellipses whose orientations are image dependent and space-variant. Stayman and Fessler have recently proposed a modification to the quadratic penalty [10] which yields more circularly symmetric and uniform resolution properties. We used this modification in our reconstructions; it makes the resolution properties of the PL method close to those of the PULS method. The quadratic PULS method was shown to be essentially equivalent to the FBP method with the following constrained least-squares (CLS) filter, defined in the spatial frequency domain (equation (50) in [9]):

$F(u) = \frac{\mathrm{sinc}^2(u k)}{\mathrm{sinc}^2(u k) + \kappa\,\beta\,u^3}$,   (19)

where $u$ denotes spatial frequency, $k$ is the ratio of the detector strip width to the pixel size of the system model, and $\kappa$ is a constant dependent on the system geometry. This CLS filter has high negative sidelobes in the space domain. The filters that

smooth the ACFs and the emission data have to be matched, so the emission data should be blurred with the same filter (19). But, due to the high negative sidelobes of the filter in (19), after dividing the appropriately blurred emission data by the survival probabilities computed from the reconstructions, we get artifacts around the boundaries of the image, especially for higher blurring amounts (higher $\beta$). So, we conclude that the results in [7] hold only for Gaussian smoothing. To overcome this problem, we first reconstruct a higher resolution image using a smaller $\beta$ value than desired, and then we filter the projections with the following filter:

$F_2(u) = \frac{F_G(u; w_0)}{F(u)}$,

where $F_G(u; w_0)$ is the desired Gaussian filter with desired FWHM $w_0$. Now the emission data is also filtered with the Gaussian-shaped filter $F_G(u; w_0)$. This approach reduces artifacts and yields acceptable images. The ACF computation in this case is done as follows:

$\hat\mu = \arg\max_\mu \Phi(\mu; y^T), \qquad \hat\eta_i = e^{[S_2\,\ell(\hat\mu)]_i}$,   (20)

where $S_2$ is the convolution matrix corresponding to the filter $F_2(u)$ above. B. Covariance Approximations The covariance formula (9) is still valid for PL transmission reconstruction. We use the following first-order Taylor series expansion for the ACFs:

$\hat\eta_i \approx e^{[S_2\,\ell(\check\mu)]_i}\,\{1 + [S_2(\ell(\hat\mu) - \ell(\check\mu))]_i\}$,   (21)

where $\check\mu = \arg\max_\mu \Phi(\mu; \bar y^T)$ is the image reconstructed from noiseless data, which is a very good approximation for the mean of $\hat\mu$ [4]. Note that we do not use the lognormality assumption here, because we believe the above approximation is accurate enough, and the lognormal assumption would lead to much more computation. From (21) and (20),

$\mathrm{Cov}\{\hat\eta\} \approx D(\bar\eta)\,S_2\,G\,\mathrm{Cov}\{\hat\mu\}\,G'\,S_2'\,D(\bar\eta)$,

where $G$ is the geometric projection matrix, so that $\ell(\mu) = G\mu$. To find the covariance of the implicitly defined estimator $\hat\mu$, we use the formulas introduced in [4]. The general form of penalized-likelihood estimates is $\hat\theta = \arg\max_\theta \Phi(\theta, Y)$, where $\theta$ is the parameter vector and $Y$ is the measurement vector. This defines an implicit function $\hat\theta = h(Y)$.
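The implicit-function covariance approach can be checked on a scalar toy problem: a single transmission-type Poisson measurement with mean $\tau(b e^{-\theta} + r)$ and a quadratic penalty, centered (purely to keep the check simple) at the true $\theta$. All constants below are assumed; the delta-method variance is compared against Monte Carlo with a Newton maximizer.

```python
import numpy as np

rng = np.random.default_rng(4)
tau, b, r = 10.0, 500.0, 20.0   # assumed scan time, blank rate, randoms rate
theta_true, beta = 1.0, 50.0    # true line integral; penalty strength
theta0 = theta_true             # penalty centered at truth for this check

def ybar(th):
    return tau * (b * np.exp(-th) + r)

def dybar(th):
    return -tau * b * np.exp(-th)

def theta_hat(y, th=theta_true):
    """Maximize y*log(ybar) - ybar - (beta/2)(th-theta0)^2 by Newton steps."""
    for _ in range(25):
        m, dm = ybar(th), dybar(th)
        g = (y / m - 1.0) * dm - beta * (th - theta0)
        h = -y * dm * dm / (m * m) + (y / m - 1.0) * (-dm) - beta
        th = th - g / h
    return th

# delta-method variance: P Cov{Y} P' with P = [-d2/dth2]^-1 d2/(dth dy)
m0, d0 = ybar(theta_true), dybar(theta_true)
fisher = d0 * d0 / m0
var_pred = fisher / (fisher + beta) ** 2

ths = np.array([theta_hat(rng.poisson(m0)) for _ in range(5000)])
print(var_pred, ths.var())   # close agreement at these count levels
```

The same structure, with the curvature and cross-derivative becoming matrices, gives the attenuation-map covariance used in the paper's variance predictions; the scalar case just makes the matrix inverses explicit.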
A first-order Taylor expansion of $h$ around $\bar Y$ yields the following approximation [4]:

$\mathrm{Cov}\{\hat\theta\} \approx P\,\mathrm{Cov}\{Y\}\,P'$, where $P = [-\nabla^{20}\Phi(\check\theta, \bar Y)]^{-1}\,\nabla^{11}\Phi(\check\theta, \bar Y)$.   (22)

We use this formula to evaluate the covariance of the penalized-likelihood estimate of the attenuation map. We again ignore the noise in the emission contamination estimate and use its mean value in our approximations. With $F = G'\,D(q)\,G + \beta H$, where $q_i = (b_i\bar\alpha_i)^2/(b_i\bar\alpha_i + \gamma_i + r_i^T)$ and $\bar\alpha_i = e^{-\ell_i(\check\mu)}$, the formula yields

$\mathrm{Cov}\{\hat\mu\} \approx \frac{1}{\tau^T}\,F^{-1}\,G'\,D(q)\,G\,F^{-1}$.

Here, $H$ is the Hessian of the penalty function and includes the modified penalty weights [10]. In this case, the variance of the sum over a region can be predicted with a formula similar to (18). The emission part of the formula remains the same, with $a^E = S'\,D(\bar\eta)\,B'\,u$ and $z$ unchanged, but the transmission part changes considerably due to the statistical method: the $a^T$ term should be changed to

$a^T = G\,F^{-1}\,G'\,S_2'\,D(\bar\eta)\,D(\bar t)\,B'\,u$,

and the $w$ term should be $w_i = q_i = (b_i\bar\alpha_i)^2/(b_i\bar\alpha_i + \gamma_i + r_i^T)$, so that $c^T = \sum_i q_i\,(a_i^T)^2$. The most computationally intensive part of this computation is forming $F^{-1}\,G'\,S_2'\,D(\bar\eta)\,D(\bar t)\,B'\,u$; this operation can be performed by solving the system $F x = G'\,S_2'\,D(\bar\eta)\,D(\bar t)\,B'\,u$ using iterative methods such as conjugate gradients. Also, the mean of $\hat\eta$ is now approximated by $\bar\eta_i = e^{[S_2\,\ell(\check\mu)]_i}$. These variance predictions are useful, because they do not require hundreds of empirical reconstructions of data [4]. However, they require knowing the true parameters and noiseless sinograms. For real data, these are not known, but one can still get a good approximation of the variances by replacing the true parameters with their noisy counterparts [4]. Finally, the optimal time fraction for the emission scan can be found by minimizing the variance in (18) with respect to the emission scan time when the total scan time $\tau = \tau^E + \tau^T$ is fixed. For the QPL method, a simple analysis yields

$\frac{\tau^E}{\tau} = \frac{\sqrt{c^E}}{\sqrt{c^E} + \sqrt{c^T}}$.

Note that for the conventional method, the above formula is invalid, because the $c^T$ term is not independent of the scan time duration. VI.
RESULTS We performed a series of simulations to test the proposed variance predictions and to find the optimal scan times under a total scan time constraint. We used two 2-D images, corresponding to the attenuation map and the emission image, to generate noisy transmission and emission data with 50000 and 50000 counts per minute, respectively. The true images are shown in Figure 1. The transmission scan had 5% randoms

and an emission contamination of 5%. The emission scan had 10% randoms. Randoms rates were assumed to be constant. The total scan time was 20 minutes. To obtain empirical standard deviations, 300 realizations were generated for each scan time distribution. The emission images were reconstructed with FBP, with a smoothing filter that yields about 9 mm FWHM resolution in the image domain. ACFs were computed using the conventional and the quadratic penalized-likelihood statistical methods. The resolutions of the ACFs were matched for these two methods. The standard deviations of the sums over the heart region in the reconstructed emission images were found empirically and predicted analytically using the derived formulas. The results are shown in Figure 2 as a plot of standard deviation estimates versus emission scan time fraction. The statistical method not only reduces the overall variance, but also yields a larger optimum emission scan time fraction (about 40%) as compared to the conventional method (about 30%). The standard deviation is reduced by about 15-20% in the statistical method as compared to the conventional method. The predictions match the empirical data for the statistical reconstruction, but the predictions for the conventional method underestimate the standard deviations. We conjecture that the approximations used in deriving the variance formulas cause this mismatch; we are currently working on improving our approximations. Due to the highly nonlinear processing of the data, however, it is likely that some discrepancy between predicted and empirical standard deviation estimates will remain. VII. CONCLUSION We presented new approximate formulas for the covariances of emission images reconstructed with conventional and statistical ACF computation for post-injection scans. These formulas can be used to predict the variance of the sum over a region of interest in the final reconstructed emission image instead of performing expensive empirical reconstructions.
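The scan-time allocation studied above (for the statistical method, where the variance coefficients do not depend on the split) reduces to minimizing $c^E/\tau^E + c^T/\tau^T$ with $\tau^E + \tau^T$ fixed. A minimal numerical sketch, with arbitrary assumed coefficients chosen so the optimum lands near a 40% emission fraction:

```python
import numpy as np

cE, cT = 1.0e4, 2.25e4   # assumed ROI variance coefficients (arbitrary)
tau = 20.0               # total scan time (minutes)

def roi_var(f):
    """Predicted ROI variance as a function of the emission time fraction f."""
    return cE / (f * tau) + cT / ((1.0 - f) * tau)

# closed-form optimum: sqrt(cE) / (sqrt(cE) + sqrt(cT))
f_opt = np.sqrt(cE) / (np.sqrt(cE) + np.sqrt(cT))

# numerical check on a fine grid
f = np.linspace(0.01, 0.99, 9801)
f_num = f[np.argmin(roi_var(f))]
print(f_opt, f_num)   # both near 0.4 for these coefficients
```

Setting the derivative $-c^E/(\tau^E)^2 + c^T/(\tau^T)^2$ to zero gives $\tau^E/\tau^T = \sqrt{c^E/c^T}$, which is the closed form evaluated above; the grid search confirms it.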
These formulas can also be used to determine the optimal scan times devoted to emission and transmission scans under a total scan time constraint. Results show that statistical ACF computation not only reduces the overall standard deviation, but also yields a higher optimum emission scan time fraction than the conventional method. The covariance approximations for the statistical method work well for quadratic penalties, but not for non-quadratic penalties. We plan to extend our work to cover non-quadratic penalties in the future.

Figure 2: Standard deviation of the sum over the heart region versus emission scan time fraction, for conventional and statistical ACF computations (empirical and predicted curves for each method).

VIII. REFERENCES
[1] J. A. Fessler and W. L. Rogers, Spatial resolution properties of penalized-likelihood image reconstruction methods: Space-invariant tomographs, IEEE Tr. Im. Proc., vol. 5, no. 9, pp. 1346-58, September 1996.
[2] T. Beyer, P. E. Kinahan, and D. W. Townsend, Optimization of transmission and emission scan duration in 3D whole-body PET, IEEE Tr. Nuc. Sci., vol. 44, no. 6, pp. 2400-7, December 1997.
[3] C. Comtat, P. E. Kinahan, M. Defrise, C. Michel, and D. W. Townsend, Fast reconstruction of 3D PET data with accurate statistical modeling, IEEE Tr. Nuc. Sci., vol. 45, no. 3, pp. 1083-9, June 1998.
[4] J. A. Fessler, Mean and variance of implicitly defined biased estimators (such as penalized maximum likelihood): Applications to tomography, IEEE Tr. Im. Proc., vol. 5, no. 3, pp. 493-506, March 1996.
[5] M. E. Daube-Witherspoon and R. E. Carson, Investigation of angular smoothing of PET data, in Proc. IEEE Nuc. Sci. Symp. Med. Im. Conf., volume 3, pp. 1557-61, 1996.
[6] C. Stearns and D. C. Wack, A noise equivalent counts approach to transmission imaging and source design, IEEE Tr. Med. Im., vol. 12, no. 2, pp. 287-292, June 1993.
[7] A. Chatziioannou and M. Dahlbom, Detailed investigation of transmission and emission data smoothing protocols and their effects on emission images, IEEE Tr. Nuc. Sci., vol. 43, no. 1, pp. 290-4, February 1996.
[8] H. Erdoǧan and J. A. Fessler, Statistical image reconstruction methods for simultaneous emission/transmission PET scans, in Proc. IEEE Nuc. Sci. Symp. Med. Im. Conf., volume 3, pp. 1579-83, 1996.
[9] J. A. Fessler, Resolution properties of regularized image reconstruction methods, Technical Report 297, Comm. and Sign. Proc. Lab., Dept. of EECS, Univ. of Michigan, Ann Arbor, MI, 48109-2122, August 1995.
[10] J. W. Stayman and J. A. Fessler, Spatially-variant roughness penalty design for uniform resolution in penalized-likelihood image reconstruction, in Proc. IEEE Intl. Conf. on Image Processing, volume 2, pp. 685-9, 1998.

Figure 1: The attenuation map and emission image used to generate simulation data.

This work was supported in part by NIH grants CA-60711 and CA-54362, and by the Whitaker Foundation. The first author was supported by a TUBITAK-NATO Science Fellowship for doctoral studies.