A Generative Perspective on MRFs in Low-Level Vision Supplemental Material

Uwe Schmidt*    Qi Gao*    Stefan Roth
Department of Computer Science, TU Darmstadt
* The first two authors contributed equally to this work.

1. Derivations

1.1. Sampling the Prior

We first rewrite the model density from Eqs. (1) and (2) of the main paper as
\[
p(x; \Theta) = \sum_{z} \frac{1}{Z(\Theta)}\, e^{-\epsilon \|x\|^2 / 2} \prod_{c \in \mathcal{C}} \prod_{i=1}^{N} p(z_{ic})\, \mathcal{N}\!\big(J_i^\mathsf{T} x_{(c)};\, 0,\, \sigma_i^2 / s_{z_{ic}}\big), \tag{1}
\]
where we treat the scales $z \in \{1, \dots, J\}^{N \cdot |\mathcal{C}|}$ for each expert and clique as random variables with $p(z_{ic}) = \alpha_{i z_{ic}}$ (i.e., the GSM mixture weights). Instead of marginalizing out the scales, we can also retain them explicitly and define the joint distribution (cf. [13])
\[
p(x, z; \Theta) = \frac{1}{Z(\Theta)}\, e^{-\epsilon \|x\|^2 / 2} \prod_{c \in \mathcal{C}} \prod_{i=1}^{N} p(z_{ic})\, \mathcal{N}\!\big(J_i^\mathsf{T} x_{(c)};\, 0,\, \sigma_i^2 / s_{z_{ic}}\big). \tag{2}
\]
The conditional distribution $p(x \mid z; \Theta)$ can be derived as the multivariate Gaussian
\[
\begin{aligned}
p(x \mid z; \Theta) &\propto e^{-\epsilon \|x\|^2 / 2} \prod_{c \in \mathcal{C}} \prod_{i=1}^{N} \exp\!\Big( -\frac{s_{z_{ic}}}{2 \sigma_i^2} \big(J_i^\mathsf{T} x_{(c)}\big)^2 \Big) \\
&= \exp\!\Big( -\tfrac{1}{2}\, x^\mathsf{T} \Big( \epsilon I + \sum_{i=1}^{N} \sum_{c \in \mathcal{C}} \frac{s_{z_{ic}}}{\sigma_i^2}\, w_{ic} w_{ic}^\mathsf{T} \Big) x \Big) \\
&\propto \mathcal{N}\!\Big( x;\, 0,\, \Big( \epsilon I + \sum_{i=1}^{N} W_i Z_i W_i^\mathsf{T} \Big)^{-1} \Big),
\end{aligned} \tag{3}
\]
where the $w_{ic}$ are defined such that $w_{ic}^\mathsf{T} x$ is the result of applying filter $J_i$ to clique $c$ of the image $x$, the $Z_i = \operatorname{diag}\{s_{z_{ic}} / \sigma_i^2\}$ are diagonal matrices with one entry per clique, and the $W_i$ are filter matrices that correspond to a convolution of the image with filter $J_i$, i.e. $W_i^\mathsf{T} x = [w_{i c_1}^\mathsf{T} x, \dots, w_{i c_{|\mathcal{C}|}}^\mathsf{T} x]^\mathsf{T} = [J_i^\mathsf{T} x_{(c_1)}, \dots, J_i^\mathsf{T} x_{(c_{|\mathcal{C}|})}]^\mathsf{T}$. Following Levi and Weiss [4, 11], we further rewrite the covariance as the matrix product
\[
\Sigma = \Big( \epsilon I + \sum_{i=1}^{N} W_i Z_i W_i^\mathsf{T} \Big)^{-1}
= \left( [W_1, \dots, W_N, I]
\begin{bmatrix} Z_1 & & & \\ & \ddots & & \\ & & Z_N & \\ & & & \epsilon I \end{bmatrix}
\begin{bmatrix} W_1^\mathsf{T} \\ \vdots \\ W_N^\mathsf{T} \\ I \end{bmatrix} \right)^{-1}
= \big( W Z W^\mathsf{T} \big)^{-1} \tag{4}
\]
and sample $y \sim \mathcal{N}(0, I)$ to obtain a sample $x$ from $p(x \mid z; \Theta)$ by solving the least-squares problem
\[
W Z W^\mathsf{T} x = W \sqrt{Z}\, y
\quad\Longleftrightarrow\quad
x = \big( W Z W^\mathsf{T} \big)^{-1} W \sqrt{Z}\, y. \tag{5}
\]
By using the well-known property
\[
y \sim \mathcal{N}(0, I) \;\Longrightarrow\; A y \sim \mathcal{N}\big(0, A I A^\mathsf{T}\big), \tag{6}
\]
it follows that
\[
x = \big( W Z W^\mathsf{T} \big)^{-1} W \sqrt{Z}\, y
\;\sim\;
\mathcal{N}\!\Big( x;\, 0,\, \big( W Z W^\mathsf{T} \big)^{-1} W \sqrt{Z}\, \Big( \big( W Z W^\mathsf{T} \big)^{-1} W \sqrt{Z} \Big)^{\!\mathsf{T}} \Big)
= \mathcal{N}\!\Big( x;\, 0,\, \big( W Z W^\mathsf{T} \big)^{-1} \Big) \tag{7}
\]
is indeed a valid sample from the conditional distribution as derived in Eq. (3). Since the scales are conditionally independent given the image by construction, the conditional distribution $p(z \mid x; \Theta)$ is readily given as
\[
p(z_{ic} \mid x; \Theta) \propto p(z_{ic})\, \mathcal{N}\!\big( J_i^\mathsf{T} x_{(c)};\, 0,\, \sigma_i^2 / s_{z_{ic}} \big). \tag{8}
\]
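To make the two conditionals above concrete, the following is a minimal illustrative sketch (not the implementation used for the experiments) of one sweep of the auxiliary-variable Gibbs sampler: the scales are drawn from the discrete posterior of Eq. (8), and a new image is drawn from Eq. (3) by solving the least-squares problem of Eq. (5) with conjugate gradients. It assumes the filter matrices $W_i$ are available as precomputed sparse convolution matrices, `scales` holds the GSM scales $s_j$, and `alphas`/`sigmas` hold the mixture weights and base variances; all identifiers are hypothetical.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def sample_scales(x, W_list, alphas, sigmas, scales, rng):
    """Draw z_ic from p(z_ic | x) ~ p(z_ic) N(J_i^T x_(c); 0, sigma_i^2 / s_j), cf. Eq. (8)."""
    z = []
    for W_i, alpha_i, sigma_i in zip(W_list, alphas, sigmas):
        r = W_i.T @ x                                  # filter responses J_i^T x_(c), one per clique
        # log alpha_ij + log N(r_c; 0, sigma_i^2 / s_j), with j-independent constants dropped
        log_p = (np.log(alpha_i)[:, None]
                 + 0.5 * np.log(scales)[:, None]
                 - 0.5 * scales[:, None] * (r[None, :] / sigma_i) ** 2)
        log_p -= log_p.max(axis=0, keepdims=True)      # for numerical stability
        p = np.exp(log_p)
        p /= p.sum(axis=0, keepdims=True)
        u = rng.random(r.size)
        z.append((p.cumsum(axis=0) > u).argmax(axis=0))  # one scale index per clique
    return z

def sample_image(z, W_list, sigmas, scales, eps, n, rng):
    """Draw x from p(x | z) = N(0, (W Z W^T)^{-1}) via the least-squares problem of Eq. (5)."""
    A = sp.identity(n) * eps                           # eps * I block of W Z W^T
    b = np.sqrt(eps) * rng.standard_normal(n)          # corresponding part of W sqrt(Z) y
    for W_i, sigma_i, z_i in zip(W_list, sigmas, z):
        d = scales[z_i] / sigma_i ** 2                 # diagonal of Z_i
        A = A + W_i @ sp.diags(d) @ W_i.T              # accumulate W_i Z_i W_i^T
        b = b + W_i @ (np.sqrt(d) * rng.standard_normal(d.size))  # accumulate W_i sqrt(Z_i) y_i
    x, _ = cg(A, b)                                    # solve (W Z W^T) x = W sqrt(Z) y
    return x
```

A prior sample as shown in Figs. 9 and 10 then results from alternating `sample_scales` and `sample_image` until the chain has reached its equilibrium distribution.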

1.2. Conditional Sampling

In order to avoid extreme values at the less constrained boundary pixels [5] during learning and model analysis, or to perform inpainting of missing pixels given the known ones, we rely on conditional sampling. In particular, we sample the pixels $x_A$ given fixed $x_B$ and scales $z$ according to the conditional Gaussian distribution
\[
p(x_A \mid x_B, z; \Theta), \tag{9}
\]
where $A$ and $B$ denote the index sets of the respective pixels. Without loss of generality, we assume that
\[
x = \begin{bmatrix} x_A \\ x_B \end{bmatrix}, \qquad
\Sigma = \big( W Z W^\mathsf{T} \big)^{-1} = \begin{bmatrix} A & C \\ C^\mathsf{T} & B \end{bmatrix}^{-1}, \tag{10}
\]
where the square sub-matrix $A$ has as many rows and columns as the vector $x_A$ has elements, etc. The conditional distribution of interest can now be derived as
\[
\begin{aligned}
p(x_A \mid x_B, z; \Theta)
&\propto \exp\!\left( -\tfrac{1}{2} \begin{bmatrix} x_A \\ x_B \end{bmatrix}^\mathsf{T} \begin{bmatrix} A & C \\ C^\mathsf{T} & B \end{bmatrix} \begin{bmatrix} x_A \\ x_B \end{bmatrix} \right) \\
&\propto \exp\!\left( -\tfrac{1}{2} \big( x_A + A^{-1} C x_B \big)^\mathsf{T} A \big( x_A + A^{-1} C x_B \big) \right) \\
&\propto \mathcal{N}\!\big( x_A;\, -A^{-1} C x_B,\, A^{-1} \big).
\end{aligned} \tag{11}
\]
The matrices $A$ and $C$ are given by the appropriate sub-matrices of the $W_i$ and $Z_i$, and allow for the same efficient sampling scheme. The mean $\mu = -A^{-1} C x_B$ can also be computed by solving a least-squares problem. Sampling the conditional distribution of scales $p(z \mid x_A, x_B; \Theta) = p(z \mid x; \Theta)$ remains as before.

1.3. Sampling the Posterior for Image Denoising

Assuming additive i.i.d. Gaussian noise with known standard deviation $\sigma$, the posterior given scales $z$ can be written as
\[
\begin{aligned}
p(x \mid y, z; \Theta) &\propto p(y \mid x)\, p(x \mid z; \Theta) \\
&\propto \exp\!\Big( -\frac{1}{2 \sigma^2} \| y - x \|^2 \Big) \exp\!\Big( -\tfrac{1}{2}\, x^\mathsf{T} \Sigma^{-1} x \Big) \\
&\propto \exp\!\Big( -\tfrac{1}{2} \Big( -\frac{2\, x^\mathsf{T} y}{\sigma^2} + x^\mathsf{T} \Big( \frac{I}{\sigma^2} + \Sigma^{-1} \Big) x \Big) \Big) \\
&\propto \mathcal{N}\!\big( x;\, \tilde{\Sigma} y / \sigma^2,\, \tilde{\Sigma} \big),
\end{aligned} \tag{12}
\]
where $\tilde{\Sigma} = \big( I / \sigma^2 + \Sigma^{-1} \big)^{-1}$ and $\Sigma$ is as in Eq. (4). The conditional distribution of the scales $p(z \mid x, y; \Theta) = p(z \mid x; \Theta)$ remains as before.
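The posterior Gaussian of Eq. (12) likewise never needs to be formed explicitly: since $\tilde{\Sigma}^{-1} = I/\sigma^2 + W Z W^\mathsf{T}$, a posterior sample can again be obtained from a single least-squares solve, now augmented by the data term. The sketch below is illustrative only and reuses the conventions and hypothetical helpers of the sketch in Sec. 1.1.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def sample_posterior_image(y, z, W_list, sigmas, scales, eps, sigma_noise, rng):
    """Draw x from p(x | y, z) = N(S y / sigma^2, S), S = (I/sigma^2 + W Z W^T)^{-1}, cf. Eq. (12)."""
    n = y.size
    A = sp.identity(n) * (eps + 1.0 / sigma_noise ** 2)   # eps*I from the prior, I/sigma^2 from the likelihood
    b = y / sigma_noise ** 2                               # data term of the posterior mean
    b = b + np.sqrt(eps) * rng.standard_normal(n)          # random perturbation for the eps*I block
    b = b + rng.standard_normal(n) / sigma_noise           # random perturbation for the likelihood block
    for W_i, sigma_i, z_i in zip(W_list, sigmas, z):
        d = scales[z_i] / sigma_i ** 2                     # diagonal of Z_i
        A = A + W_i @ sp.diags(d) @ W_i.T
        b = b + W_i @ (np.sqrt(d) * rng.standard_normal(d.size))
    x, _ = cg(A, b)                                        # solve (I/sigma^2 + W Z W^T) x = rhs
    return x
```

Solving this system with the randomized right-hand side yields a draw whose mean is $\tilde{\Sigma} y / \sigma^2$ and whose covariance is $\tilde{\Sigma}$, exactly as in Eq. (12).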

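Putting these pieces together, sampling-based MMSE estimation alternates the scale and image updates, discards a burn-in phase, and averages the retained posterior samples, since the MMSE estimate is the posterior mean. The outline below is a sketch under the same assumptions as the previous snippets; chain counts, iteration numbers, and initializations are placeholders rather than the settings used for the experiments reported next.

```python
import numpy as np
# sample_scales and sample_posterior_image refer to the hypothetical sketches above.

def mmse_denoise(y, W_list, alphas, sigmas, scales, eps, sigma_noise,
                 n_chains=4, n_iter=100, burn_in=50, seed=0):
    """Sampling-based MMSE denoising: average posterior samples from several Gibbs chains."""
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(n_chains):                      # independent chains yield less correlated samples
        x = y + rng.standard_normal(y.size)        # crude starting point; the noisy image itself is another option
        for it in range(n_iter):
            z = sample_scales(x, W_list, alphas, sigmas, scales, rng)   # p(z | x, y) = p(z | x), Eq. (8)
            x = sample_posterior_image(y, z, W_list, sigmas, scales,
                                       eps, sigma_noise, rng)           # p(x | y, z), Eq. (12)
            if it >= burn_in:
                samples.append(x)
    return np.mean(samples, axis=0)                # MMSE estimate = posterior mean
```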
2. Image Restoration

To further illustrate the image restoration performance of our approach, we provide the following additional results:

- Table 1 repeats Tab. 1 of the main paper and additionally gives the numerical results of MAP estimation with graph cuts and α-expansion [1]. Note that in most cases α-expansion performs slightly worse in terms of PSNR than conjugate gradients, even (and in fact particularly) for non-convex potentials. Also, using a Student-t potential [3] does not show favorable results.
- Table 2 shows the results of the same experiment as in Tab. 1, but reports the performance in terms of the perceptually more relevant structural similarity index (SSIM) [10]. Note that all of the conclusions reported in the main paper also hold for this perceptual quality metric.
- Table 3 repeats Tab. 2 of the main paper, and additionally reports standard deviations as well as SSIM performance. The SSIM supports the same conclusions about relative performance as the PSNR.
- Figs. 1–6 show denoising results for 6 of the 68 images, for which the average performance is reported in Tab. 2 of the main paper. Note that, in contrast to the tested previous approaches, combining our learned models with MMSE estimation leads to good performance on relatively smooth as well as on strongly textured images.
- Fig. 7 provides a different view of the summary results in Tab. 2 of the paper. Instead of the average performance, we show a per-image comparison between the denoising results of the discriminative approach of [8] (using MAP) and the results of our generatively-trained 3×3 FoE (using MMSE). Note that the PSNR, and particularly the SSIM, shows a substantial performance advantage for our approach.
- Fig. 8 shows an uncropped version of the inpainting result in Fig. 7 of the paper. Additionally, one other inpainting result is provided as further visual illustration.

3. Sampling the Prior and Posterior

The following additional results illustrate properties of the auxiliary-variable Gibbs sampler:

- Fig. 9 shows five subsequent samples (after reaching the equilibrium distribution) from all models listed in Table 1. Note how samples from common pairwise models appear too grainy, while those from previous FoE models are too smooth and without discontinuities.
- Fig. 10 shows two larger samples from our learned models. Note that our pairwise model leads to locally uniform samples with occasional discontinuities that appear spatially isolated ("speckles"). Our learned high-order model, on the other hand, leads to smoothly varying samples with occasional spatially correlated discontinuities, which appear more realistic.
- Fig. 11 illustrates the convergence of the sampling procedure for the prior and the posterior (in the case of denoising); a sketch of the convergence diagnostic used there follows this list.
- Fig. 12 illustrates the advantages of running multiple parallel samplers.
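Fig. 11 assesses approximate convergence with the potential scale reduction factor $\hat{R}$ computed from several chains with over-dispersed starting points. For reference, the following is a generic sketch of this standard Gelman-Rubin diagnostic for a scalar summary statistic, such as the energy traces plotted there; it is the textbook estimate, not code specific to our sampler, and the variable names are illustrative.

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) for a scalar statistic.

    chains: array of shape (m, n), i.e. m chains with n post-burn-in values each
    (e.g. the energy of every sample). Values close to 1 (say, below 1.1)
    indicate approximate convergence.
    """
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    B = n * chains.mean(axis=1).var(ddof=1)       # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()         # average within-chain variance
    var_plus = (n - 1) / n * W + B / n            # pooled variance estimate
    return np.sqrt(var_plus / W)

# Example (hypothetical): r_hat = gelman_rubin(energy_traces); converged = r_hat < 1.1
```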
References

[1] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. PAMI, 23(11):1222–1239, 2001.
[2] A. Buades, B. Coll, and J.-M. Morel. A non-local algorithm for image denoising. CVPR 2005.
[3] X. Lan, S. Roth, D. P. Huttenlocher, and M. J. Black. Efficient belief propagation with learned higher-order Markov random fields. ECCV 2006.
[4] E. Levi. Using natural image priors: Maximizing or sampling? Master's thesis, The Hebrew University of Jerusalem, 2009.
[5] M. Norouzi, M. Ranjbar, and G. Mori. Stacks of convolutional restricted Boltzmann machines for shift-invariant feature learning. CVPR 2009.
[6] J. Portilla, V. Strela, M. J. Wainwright, and E. P. Simoncelli. Image denoising using scale mixtures of Gaussians in the wavelet domain. IEEE TIP, 12(11):1338–1351, 2003.
[7] S. Roth and M. J. Black. Fields of experts. IJCV, 82(2):205–229, 2009.
[8] K. G. G. Samuel and M. F. Tappen. Learning optimized MAP estimates in continuously-valued MRF models. CVPR 2009.
[9] M. F. Tappen, B. C. Russell, and W. T. Freeman. Exploiting the sparse derivative prior for super-resolution and image demosaicing. Int. Workshop SCTV, 2003.
[10] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: From error visibility to structural similarity. IEEE TIP, 13(4):600–612, 2004.
[11] Y. Weiss. Personal communication, 2005.
[12] Y. Weiss and W. T. Freeman. What makes a good model of natural images? CVPR 2007.
[13] M. Welling, G. E. Hinton, and S. Osindero. Learning sparse topographic representations with products of Student-t distributions. NIPS 2002.

| Model | MAP (λ=1), conj. grad., σ=10 | MAP (λ=1), conj. grad., σ=20 | MAP (λ=1), α-exp., σ=10 | MAP (λ=1), α-exp., σ=20 | MAP (opt. λ), conj. grad., σ=10 | MAP (opt. λ), conj. grad., σ=20 | MAP (opt. λ), α-exp., σ=10 | MAP (opt. λ), α-exp., σ=20 | MMSE, σ=10 | MMSE, σ=20 |
|---|---|---|---|---|---|---|---|---|---|---|
| pairwise (marginal fitting) | 28.35 | 23.96 | 27.32 | 23.24 | 30.98 | 26.92 | 30.76 | 26.73 | 29.7 | 24.72 |
| pairwise (generalized Laplacian [9]) | 27.35 | 22.97 | 26.97 | 22.7 | 31.54 | 27.59 | 31.46 | 27.48 | 28.64 | 23.92 |
| pairwise (Laplacian) | 29.36 | 24.27 | 29.37 | 24.25 | 31.91 | 28.11 | 31.9 | 28.11 | 30.34 | 25.47 |
| pairwise (Student-t [3]) | 28.19 | 24.4 | 27.83 | 23.72 | 31.22 | 26.7 | 31.26 | 26.83 | 29.38 | 24.68 |
| pairwise (ours) | 30.27 | 26.48 | 28.14 | 24.12 | 30.41 | 26.55 | 30.5 | 25.98 | 32.9 | 28.32 |
| 5×5 FoE from [7] | 27.92 | 23.81 | – | – | 32.63 | 28.92 | – | – | 29.38 | 24.95 |
| 15×15 FoE from [12] | 22.51 | 20.45 | – | – | 32.27 | 28.47 | – | – | 23.22 | 21.47 |
| 3×3 FoE (ours) | 30.33 | 25.15 | – | – | 32.19 | 27.98 | – | – | 32.85 | 28.91 |

Table 1. Average PSNR (dB) of denoising results for 10 test images [3].

| Model | MAP (λ=1), conj. grad., σ=10 | MAP (λ=1), conj. grad., σ=20 | MAP (λ=1), α-exp., σ=10 | MAP (λ=1), α-exp., σ=20 | MAP (opt. λ), conj. grad., σ=10 | MAP (opt. λ), conj. grad., σ=20 | MAP (opt. λ), α-exp., σ=10 | MAP (opt. λ), α-exp., σ=20 | MMSE, σ=10 | MMSE, σ=20 |
|---|---|---|---|---|---|---|---|---|---|---|
| pairwise (marginal fitting) | .787 | .599 | .742 | .559 | .873 | .748 | .87 | .747 | .833 | .631 |
| pairwise (generalized Laplacian [9]) | .745 | .559 | .727 | .54 | .886 | .777 | .885 | .773 | .84 | .69 |
| pairwise (Laplacian) | .832 | .624 | .832 | .624 | .897 | .81 | .897 | .8 | .869 | .73 |
| pairwise (Student-t [3]) | .784 | .62 | .771 | .584 | .887 | .746 | .889 | .761 | .825 | .631 |
| pairwise (ours) | .855 | .72 | .784 | .64 | .854 | .725 | .853 | .718 | .94 | .88 |
| 5×5 FoE from [7] | .763 | .595 | – | – | .913 | .833 | – | – | .826 | .657 |
| 15×15 FoE from [12] | .515 | .445 | – | – | .93 | .82 | – | – | .564 | .489 |
| 3×3 FoE (ours) | .838 | .638 | – | – | .99 | .798 | – | – | .923 | .839 |

Table 2. Average SSIM [10] of denoising results for 10 test images [3].

| Model | Learning | Inference | PSNR (dB), average | PSNR (dB), std. dev. | SSIM [10], average | SSIM [10], std. dev. |
|---|---|---|---|---|---|---|
| 5×5 FoE from [7] | CD (generative) | MAP w/ λ | 27.44 | 2.36 | .746 | .080 |
| 5×5 FoE from [8] | discriminative | MAP | 27.86 | 2.9 | .776 | .051 |
| pairwise (ours) | CD (generative) | MMSE | 27.54 | 2.5 | .758 | .045 |
| 3×3 FoE (ours) | CD (generative) | MMSE | 27.95 | 2.3 | .788 | .059 |
| Non-local means [2] | – | (MMSE) | 27.5 | 2.12 | .734 | .046 |
| BLS-GSM [6] | – | MMSE | 28.2 | 2.24 | .789 | .059 |

Table 3. Denoising results for 68 test images [7, 8] (σ = 25).

(a) Original. (b) Noisy (σ = 25), PSNR = 20.29 dB, SSIM = .31. (c) Pairwise (Laplacian), PSNR = 27.88 dB, SSIM = .763. (d) Pairwise (g. Lapl. [9]), PSNR = 27.65 dB, SSIM = .749. (e) Pairwise (ours), PSNR = 28.34 dB, SSIM = .779. (f) 3×3 FoE (ours), PSNR = 28.7 dB, SSIM = .829. (g) 5×5 FoE from [7], PSNR = 28.52 dB, SSIM = .816. (h) 5×5 FoE from [8], PSNR = 28.51 dB, SSIM = .89.

Figure 1. Denoising results for test image "Castle": (e, f) MMSE, (c, d, g) MAP w/ λ, (h) MAP.

(a) Original. (b) Noisy (σ = 25), PSNR = 20.22 dB, SSIM = .297. (c) Pairwise (Laplacian), PSNR = 28.78 dB, SSIM = .757. (d) Pairwise (g. Lapl. [9]), PSNR = 27.87 dB, SSIM = .711. (e) Pairwise (ours), PSNR = 29.00 dB, SSIM = .768. (f) 3×3 FoE (ours), PSNR = 29.79 dB, SSIM = .82. (g) 5×5 FoE from [7], PSNR = 29.16 dB, SSIM = .794. (h) 5×5 FoE from [8], PSNR = 29.35 dB, SSIM = .82.

Figure 2. Denoising results for test image "Birds": (e, f) MMSE, (c, d, g) MAP w/ λ, (h) MAP.

(a) Original. (b) Noisy (σ = 25), PSNR = 20.71 dB, SSIM = .57. (c) Pairwise (Laplacian), PSNR = 26.32 dB, SSIM = .789. (d) Pairwise (g. Lapl. [9]), PSNR = 26.12 dB, SSIM = .781. (e) Pairwise (ours), PSNR = 26.51 dB, SSIM = .794. (f) 3×3 FoE (ours), PSNR = 27.00 dB, SSIM = .813. (g) 5×5 FoE from [7], PSNR = 26.84 dB, SSIM = .792. (h) 5×5 FoE from [8], PSNR = 27.6 dB, SSIM = .817.

Figure 3. Denoising results for test image "LA": (e, f) MMSE, (c, d, g) MAP w/ λ, (h) MAP.

(a) Original. (b) Noisy (σ = 25), PSNR = 20.34 dB, SSIM = .475. (c) Pairwise (Laplacian), PSNR = 25.93 dB, SSIM = .674. (d) Pairwise (g. Lapl. [9]), PSNR = 25.36 dB, SSIM = .647. (e) Pairwise (ours), PSNR = 26.12 dB, SSIM = .685. (f) 3×3 FoE (ours), PSNR = 26.27 dB, SSIM = .689. (g) 5×5 FoE from [7], PSNR = 25.36 dB, SSIM = .592. (h) 5×5 FoE from [8], PSNR = 26.19 dB, SSIM = .686.

Figure 4. Denoising results for test image "Goat": (e, f) MMSE, (c, d, g) MAP w/ λ, (h) MAP.

(a) Original. (b) Noisy (σ = 25), PSNR = 22.44 dB, SSIM = .278. (c) Pairwise (Laplacian), PSNR = 28.77 dB, SSIM = .838. (d) Pairwise (g. Lapl. [9]), PSNR = 28.65 dB, SSIM = .831. (e) Pairwise (ours), PSNR = 28.81 dB, SSIM = .829. (f) 3×3 FoE (ours), PSNR = 28.72 dB, SSIM = .834. (g) 5×5 FoE from [7], PSNR = 28.52 dB, SSIM = .82. (h) 5×5 FoE from [8], PSNR = 30.99 dB, SSIM = .81.

Figure 5. Denoising results for test image "Wolf": (e, f) MMSE, (c, d, g) MAP w/ λ, (h) MAP.

(a) Original. (b) Noisy (σ = 25), PSNR = 20.21 dB, SSIM = .136. (c) Pairwise (Laplacian), PSNR = 32.78 dB, SSIM = .89. (d) Pairwise (g. Lapl. [9]), PSNR = 32.36 dB, SSIM = .87. (e) Pairwise (ours), PSNR = 33.51 dB, SSIM = .829. (f) 3×3 FoE (ours), PSNR = 35.28 dB, SSIM = .931. (g) 5×5 FoE from [7], PSNR = 35.00 dB, SSIM = .938. (h) 5×5 FoE from [8], PSNR = 33.63 dB, SSIM = .881.

Figure 6. Denoising results for test image "Airplane": (e, f) MMSE, (c, d, g) MAP w/ λ, (h) MAP.

Figure 7. Comparing the denoising performance (σ = 25) in terms of (a) PSNR and (b) SSIM for 68 test images between our 3×3 FoE (using MMSE) and the 5×5 FoE from [8] (using MAP). Each panel is a per-image scatter plot of PSNR/SSIM using our 3×3 FoE against PSNR/SSIM using the FoE of Samuel and Tappen, with the regions "Ours better" and "Samuel and Tappen better" separated by the black diagonal and the Airplane and Wolf images highlighted. A red circle above the black line means performance is better with our approach.

Figure 8. MMSE-based image inpainting with our learned models: (a) original photograph, (b) restored with our pairwise MRF; (c) original photograph, (d) restored with our 3×3 FoE.

Figure 9. Five subsequent samples (l. to r.) from various MRF models after reaching the equilibrium distribution: (a) pairwise, ours; (b) pairwise, marginal fitting; (c) pairwise, generalized Laplacian from [9]; (d) pairwise, Laplacian; (e) 3×3 FoE, ours; (f) 5×5 FoE from [7]; (g) 15×15 FoE from [12] (convolution with circular boundary handling, no pixels removed). The boundary pixels are removed for better visualization.

Figure 10. 256×256 pixel samples from our learned models after reaching the equilibrium distribution: (a) pairwise MRF; (b) 3×3 FoE. The boundary pixels are removed for better visualization.

Figure 11. Monitoring the convergence of sampling (energy vs. number of iterations). (a) Sampling a 50×50 image from the learned pairwise MRF prior, conditioned on a 1-pixel boundary, using three chains with over-dispersed starting points (red, dashed: interior of the boundary image; blue, solid: median-filtered version; black, dash-dotted: noisy version). Approximate convergence is reached after 250 iterations ($\hat{R} < 1.1$). (b) Sampling the posterior (σ = 20, image size 160×240) with four chains and over-dispersed starting points (red, dashed: noisy image; blue, dash-dotted: Gauss-filtered version; green, solid: median-filtered version; black, dotted: Wiener-filtered version). Approximate convergence is reached after 240 iterations.

Figure 12. Efficiency of sampling-based MMSE denoising with different numbers of samplers (learned pairwise MRF, σ = 20, image size 160×240; PSNR plotted for 1, 2, 4, and 10 samplers). (a) In the case of parallel computing (one sampler per computing core), faster convergence of the denoised image can be achieved. (b) Even when using sequential computing, multiple samplers can improve performance, as the samples are less correlated.