Screen-space processing Rafał Mantiuk Computer Laboratory, University of Cambridge
Cornell Box and tone-mapping (figures: rendering vs. photograph)
Real-world scenes are more challenging } The match could not be achieved if the light source at the top of the box were visible } The display could not reproduce the right level of brightness
Digression: Linear filtering
} Output pixel value is a weighted sum of neighboring pixels:
  g(x, y) = Σ_j Σ_k f(x + j, y + k) · h(j, k)
  where f is the input pixel value, h is the kernel (filter), and g is the resulting pixel value; the sum runs over the neighboring pixels, e.g. j = -1, 0, 1 and k = -1, 0, 1 for a 3x3 neighborhood
} Compact notation: g = f ⊛ h (the convolution operation)
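The weighted sum above can be sketched directly in code. A minimal, illustrative implementation (plain lists, no padding, so border pixels are skipped and the output shrinks):

```python
# Sketch of direct 2D convolution of a grayscale image (list of lists)
# with a small kernel. Borders are skipped (no padding), so the output
# is smaller than the input. Names and values are illustrative only.

def convolve2d(f, h):
    """Convolve image f with kernel h; output shrinks by the kernel radius."""
    kh, kw = len(h), len(h[0])
    rk, ck = kh // 2, kw // 2            # kernel radii
    rows, cols = len(f), len(f[0])
    g = []
    for y in range(rk, rows - rk):
        row = []
        for x in range(ck, cols - ck):
            # weighted sum over the kernel neighborhood
            s = sum(f[y + j][x + k] * h[j + rk][k + ck]
                    for j in range(-rk, rk + 1)
                    for k in range(-ck, ck + 1))
            row.append(s)
        g.append(row)
    return g

box = [[1/9] * 3 for _ in range(3)]      # 3x3 box (mean) filter
img = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
blurred = convolve2d(img, box)           # 2x2 result for a 4x4 input
```

Note that a 4x4 input with a 3x3 kernel and no padding yields only a 2x2 output, which is exactly why the example on the next slide has a result matrix g smaller than the input f.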
Linear filter: example } Why is the matrix g smaller than f?
Padding an image (figures: padded image; padded and blurred image)
What is the computational cost of the convolution? } How many multiplications do we need to do to convolve a 10x10 image with a 9x9 kernel? } The image is padded, but we do not compute the values for the padded pixels
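One way to check the arithmetic behind the question above (assuming one output value is computed per original pixel, and one multiplication per kernel weight):

```python
# Worked answer to the slide's question: a 10x10 image convolved with a
# 9x9 kernel, computing outputs only for the original (non-padded) pixels.
image_pixels = 10 * 10        # one output value per original pixel
mults_per_pixel = 9 * 9       # one multiplication per kernel weight
total = image_pixels * mults_per_pixel
print(total)                  # 8100 multiplications
```

In general the cost is O(N · K²) for N pixels and a KxK kernel, which motivates the separable filters on the next slide.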
Separable kernels
} Convolution can be made much faster if split into two separate steps:
} 1) convolve all rows in the image with a 1D filter
} 2) convolve the columns of the result of 1) with another 1D filter
} But to do this, the kernel must be separable: h = u vᵀ, i.e.

  [ h11 h12 h13 ]   [ u1 ]
  [ h21 h22 h23 ] = [ u2 ] · [ v1 v2 v3 ]
  [ h31 h32 h33 ]   [ u3 ]
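The two-pass idea can be sketched as follows. This is an illustrative implementation (helper names assumed, borders skipped) using the separable 3x3 box filter from the next slide; the result matches full 2D convolution while needing 2K instead of K² multiplications per pixel:

```python
# Sketch of a separable 3x3 box filter applied as two 1D passes:
# rows first, then columns. Illustrative helper names; borders skipped.

def conv_rows(f, u):
    r = len(u) // 2
    return [[sum(row[x + k] * u[k + r] for k in range(-r, r + 1))
             for x in range(r, len(row) - r)]
            for row in f]

def conv_cols(f, v):
    r = len(v) // 2
    cols = len(f[0])
    return [[sum(f[y + k][x] * v[k + r] for k in range(-r, r + 1))
             for x in range(cols)]
            for y in range(r, len(f) - r)]

u = v = [1/3, 1/3, 1/3]                     # 1D components of the 3x3 box filter
img = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
blurred = conv_cols(conv_rows(img, u), v)   # same result as the full 2D box filter
```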
Examples of separable filters
} Box filter:
  [ 1/9 1/9 1/9 ]   [ 1/3 ]
  [ 1/9 1/9 1/9 ] = [ 1/3 ] · [ 1/3 1/3 1/3 ]
  [ 1/9 1/9 1/9 ]   [ 1/3 ]
} Gaussian filter: G(x, y) = u(x) v(y)
} What are the corresponding 1D components of this separable filter (u(x) and v(y))?
Unsharp masking } How to use blurring to sharpen an image? (figures: original image, blurry image, high-pass image, result)
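The idea can be sketched on a 1D signal: subtract a blurred copy from the original to isolate the high-pass detail, then add that detail back, scaled. This is a minimal illustration (a 3-tap box blur stands in for the usual Gaussian; names are illustrative):

```python
# Sketch of unsharp masking on a 1D signal:
# sharpened = original + amount * (original - blurred)

def box_blur_1d(f):
    # simple 3-tap mean; endpoints are kept unchanged
    g = list(f)
    for i in range(1, len(f) - 1):
        g[i] = (f[i - 1] + f[i] + f[i + 1]) / 3
    return g

def unsharp_mask(f, amount=1.0):
    blurred = box_blur_1d(f)
    high_pass = [o - b for o, b in zip(f, blurred)]   # detail layer
    return [o + amount * h for o, h in zip(f, high_pass)]

edge = [0, 0, 0, 9, 9, 9]          # a step edge
sharpened = unsharp_mask(edge)     # overshoot/undershoot appear around the edge
```

The over- and undershoot around the edge increase local contrast, which is what makes the image look sharper.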
Why linear filters?
} Linear functions have two properties (where f is a linear function):
} Additivity: f(x) + f(y) = f(x + y)
} Homogeneity: f(ax) = a f(x)
} Why is it important?
} Linear operations can be performed in an arbitrary order
} Example: blur(a·f + b) = a·blur(f) + b
} Performance can be improved by changing the order of operations
} This is also how separable filters work: h ⊛ f = u ⊛ (v ⊛ f), where f is an image, ⊛ is convolution, and u and v are the components of a separable kernel (h = u vᵀ)
Dynamic range (figure: luminance axis with L_max and L_min, measured for SNR > 3)
Measures of dynamic range (contrast)
} As a ratio: C = L_max / L_min, usually written as C:1, for example 1000:1
} As orders of magnitude or log10 units: C10 = log10(L_max / L_min)
} As stops: C2 = log2(L_max / L_min); one stop is a doubling or halving of the amount of light
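A worked example of the three measures, for an assumed display with a peak luminance of 1000 cd/m² and a black level of 1 cd/m²:

```python
import math

# Worked example of the three dynamic-range measures for an assumed
# display: L_max = 1000 cd/m^2, L_min = 1 cd/m^2.
L_max, L_min = 1000.0, 1.0

C = L_max / L_min               # ratio, written 1000:1
C10 = math.log10(L_max / L_min) # 3 orders of magnitude (log10 units)
C2 = math.log2(L_max / L_min)   # ~9.97 stops (each stop doubles the light)
```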
High dynamic range (HDR) (figure: luminance axis from 10⁻⁶ to 10⁸ cd/m², with example dynamic ranges of 30:1, 1000:1 and 1500:1)
Digital vs. physical image
Basic display model: gamma correction
} Gamma correction is used to encode luminance or tristimulus color values (RGB) in imaging systems (displays, printers, cameras, etc.):
  V_out = a · V_in^γ
  where V_in is the digital signal (luma, 0–1), V_out is the physical signal ((relative) luminance), a is the gain, and γ is the gamma (usually 2.2)
} For color images: R = a (R′)^γ, and the same for green and blue
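A minimal sketch of this display model and its inverse, assuming a = 1 and γ = 2.2 (illustrative function names):

```python
# Sketch of the basic display model: luma V_in (0-1) maps to (relative)
# luminance V_out = a * V_in**gamma, plus its inverse. 'a' is the gain.

def display_model(V_in, a=1.0, gamma=2.2):
    return a * V_in ** gamma            # digital signal -> physical signal

def inverse_display_model(V_out, a=1.0, gamma=2.2):
    return (V_out / a) ** (1 / gamma)   # physical signal -> digital signal

mid_gray = display_model(0.5)           # ~0.22: half luma is ~22% luminance
round_trip = inverse_display_model(mid_gray)   # back to 0.5
```

The fact that a luma of 0.5 maps to only about 22% of peak luminance is the perceptual-uniformity point made on the next slide.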
Why is gamma needed? (figure: pixel value (luma) vs. luminance)
} Gamma-corrected pixel values give a scale of brightness levels that is more perceptually uniform
} At least 12 bits (instead of 8) would be needed to encode each color channel without gamma correction
} And, coincidentally, it was also the response of the CRT gun
Luminance – physical unit
} Luminance – perceived brightness of light, adjusted for the sensitivity of the visual system to wavelengths:
  L_V = ∫₀^∞ L(λ) V(λ) dλ
  where L(λ) is the light spectrum (radiance) and V(λ) is the luminous efficiency function (weighting)
} SI unit: cd/m²
Luma – gray-scale pixel value
} Luma – pixel brightness in gamma-corrected units:
  L′ = 0.2126 R′ + 0.7152 G′ + 0.0722 B′
} R′, G′ and B′ are gamma-corrected colour values; the prime symbol denotes gamma correction
} Used in image/video coding
} Note that relative luminance is often approximated with
  L = 0.2126 R + 0.7152 G + 0.0722 B = 0.2126 (R′)^γ + 0.7152 (G′)^γ + 0.0722 (B′)^γ
  where R, G and B are linear colour values
} Luma and luminance are different concepts despite the similar formulas
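The difference between the two formulas can be demonstrated numerically. A small sketch using the Rec. 709 weights from the slide, assuming γ = 2.2:

```python
# Sketch contrasting luma (weighted sum of gamma-corrected values) with
# relative luminance (weighted sum of linearised values); gamma = 2.2 assumed.

def luma(Rp, Gp, Bp):
    # weighted sum of gamma-corrected values
    return 0.2126 * Rp + 0.7152 * Gp + 0.0722 * Bp

def relative_luminance(Rp, Gp, Bp, gamma=2.2):
    # linearise first, then take the weighted sum
    return 0.2126 * Rp**gamma + 0.7152 * Gp**gamma + 0.0722 * Bp**gamma

# For a non-gray pixel the two quantities differ, which is why luma and
# luminance are distinct concepts despite the similar-looking formulas.
Y_luma = luma(1.0, 0.5, 0.0)
Y_lum = relative_luminance(1.0, 0.5, 0.0)
```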
sRGB colour space } RGB colour space is not a standard. Colours may differ depending on the choice of the primaries } sRGB is a standard colour space, which most displays try to mimic (standard for HDTV) } The chromaticities above are also known as Rec. 709
sRGB colour space } Two-step XYZ to sRGB transformation: } Step 1: linear colour transform } Step 2: non-linearity
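The two steps can be sketched as below. The 3x3 matrix and the piecewise non-linearity are the commonly published sRGB (D65) values; treat the exact coefficients as standard reference data rather than something derived here:

```python
# Sketch of the two-step XYZ -> sRGB transform: a 3x3 linear matrix
# (sRGB/Rec. 709 primaries, D65 white point) followed by the sRGB
# non-linearity. Coefficients as commonly published for sRGB.

M = [[ 3.2406, -1.5372, -0.4986],
     [-0.9689,  1.8758,  0.0415],
     [ 0.0557, -0.2040,  1.0570]]

def srgb_nonlinearity(v):
    # piecewise encoding: linear segment near black, power law elsewhere
    return 12.92 * v if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

def xyz_to_srgb(X, Y, Z):
    linear = [m[0] * X + m[1] * Y + m[2] * Z for m in M]   # step 1
    return [srgb_nonlinearity(v) for v in linear]          # step 2

white = xyz_to_srgb(0.9505, 1.0, 1.089)   # D65 white maps to ~(1, 1, 1)
```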
Why do we need tone mapping? } To reduce excessive dynamic range } To customize the look (colour grading) } To simulate human vision (for example, night vision) } To adapt to a display and viewing conditions } To make rendered images look more realistic } Different tone-mapping operators achieve different goals
Two ways to do tone-mapping
} A: HDR image (luminance, linear RGB) → tone mapping → LDR image (luma, gamma-corrected RGB, sRGB)
} B: HDR image (luminance, linear RGB) → tone mapping → inverse display model → LDR image
} The display model (sometimes known simply as gamma) can account for: display peak luminance, display dynamic range (contrast), and ambient light
Inverse display model as tone mapping
} The inverse display model is the simplest tone mapping
} It transforms linear pixel values to gamma-corrected values:
  R′ = (R / L_white)^(1/γ)
  where the prime (′) denotes a gamma-corrected value and typically γ = 2.2
} L_white is the intensity that should be mapped to the maximum brightness of a display (display white)
} The same operation is applied to the red, green and blue channels
} OpenGL offers sRGB textures to automate RGB ↔ sRGB conversion
} sRGB textures store data in gamma-corrected space
} sRGB is converted to (linear) RGB on texture look-up (and filtering)
} RGB is converted to sRGB when writing to an sRGB texture with glEnable(GL_FRAMEBUFFER_SRGB)
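A minimal sketch of this operator applied to one channel, with an assumed L_white of 100 (a value you would pick per scene); values above L_white are clipped to display white:

```python
# Sketch of the inverse display model used as a tone-mapping operator:
# R' = (R / L_white)**(1/gamma), applied per channel, clipped to [0, 1].
# L_white = 100 is an assumed, scene-dependent choice.

def tonemap_channel(R, L_white=100.0, gamma=2.2):
    v = (R / L_white) ** (1 / gamma)
    return min(max(v, 0.0), 1.0)        # clip to the displayable range

white = tonemap_channel(100.0)   # maps exactly to 1.0 (display white)
bright = tonemap_channel(250.0)  # brighter than L_white -> clipped to 1.0
dark = tonemap_channel(1.0)      # deep shadows are lifted by the 1/gamma curve
```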
Tone-curve } The best tone mapping is the one that does not do anything, i.e. the slope of the tone-mapping curve is equal to 1 (figure: image histogram)
Tone-curve } But in practice contrast (slope) must be limited due to display limitations
Tone-curve } Global tone mapping is a compromise between clipping and contrast compression
Sigmoidal tone-curves } Very common in digital cameras } They mimic the response of analog film } Analog film has been engineered over many years to produce good tone reproduction } Fast to compute
Sigmoidal tone mapping
} Simple formula for a sigmoidal tone-curve:
  R′(x, y) = R(x, y)^b / ((a · L_m)^b + R(x, y)^b)
  where L_m is the geometric mean (or mean of logarithms):
  L_m = exp( (1/N) Σ_(x,y) ln L(x, y) )
  and L(x, y) is the luminance of the pixel (x, y)
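The formula above can be sketched as follows (an epsilon guard against log(0) on black pixels is my addition, not part of the slide's formula):

```python
import math

# Sketch of the sigmoidal tone-curve from the slide:
# R'(x,y) = R^b / ((a * L_m)^b + R^b), with L_m the geometric mean luminance.

def geometric_mean(L):
    eps = 1e-6  # assumed safeguard against log(0) on black pixels
    return math.exp(sum(math.log(v + eps) for v in L) / len(L))

def sigmoid_tonemap(R, L_m, a=1.0, b=1.0):
    return R ** b / ((a * L_m) ** b + R ** b)

luminances = [0.1, 1.0, 10.0, 100.0]       # four orders of magnitude
L_m = geometric_mean(luminances)           # ~3.16 for this example
mapped = [sigmoid_tonemap(R, L_m) for R in luminances]  # all within (0, 1)
```

The sigmoid compresses the full HDR range into (0, 1) while preserving the ordering of luminances; a shifts the curve along the luminance axis and b controls its steepness (contrast), matching the parameter sweep on the next slide.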
Sigmoidal tone mapping – example (figures: a = 0.25, 1, 4; b = 0.5, 1, 2)
Glare Illusion (screenshot: Alan Wake, Remedy Entertainment)
Glare Illusion (examples: photography, painting, computer graphics, HDR rendering in games)
Scattering of the light in the eye (from: Sekuler, R. and Blake, R. Perception, second ed. McGraw-Hill, New York, 1990)
Point spread function of the eye (figure: green – daytime (photopic); red – night time (scotopic))
} What portion of the light is scattered towards a certain visual angle
} To simulate:
} construct a digital filter
} convolve the image with that filter
From: Spencer, G. et al. 1995. Proc. of SIGGRAPH.
Selective application of glare
} A) Glare applied to the entire image:
  I′ = I ⊛ G
  where G is the glare kernel (PSF)
} Reduces image contrast and sharpness
} B) Glare applied only to the clipped pixels:
  I′ = I + I_clipped ⊛ G − I_clipped
  where I_clipped = { I for I > 1; 0 otherwise }
} Better image quality
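Option B can be sketched on a 1D signal. This is an illustrative implementation: a tiny 3-tap kernel stands in for the eye's PSF, and only the clipped highlight (I > 1) is spread into its neighbours:

```python
# Sketch of selective glare (option B): only clipped pixels are convolved
# with the glare kernel, then added back in place of their original values.

def convolve_1d(f, g):
    r = len(g) // 2
    out = [0.0] * len(f)
    for i in range(len(f)):
        for k in range(-r, r + 1):
            if 0 <= i + k < len(f):
                out[i] += f[i + k] * g[k + r]
    return out

def selective_glare(I, g):
    I_clipped = [v if v > 1.0 else 0.0 for v in I]   # keep only clipped pixels
    spread = convolve_1d(I_clipped, g)
    # I' = I + I_clipped (x) G - I_clipped: replace the clipped energy
    # with its scattered version
    return [v + s - c for v, s, c in zip(I, spread, I_clipped)]

psf = [0.1, 0.8, 0.1]                  # toy glare kernel (sums to 1)
image = [0.2, 0.2, 5.0, 0.2, 0.2]      # one clipped highlight (I > 1)
result = selective_glare(image, psf)   # glow bleeds into the neighbours
```

Because the kernel sums to 1, the total energy is preserved: light is only redistributed from the highlight into its surroundings, leaving the unclipped parts of the image sharp.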
Selective application of glare (figures: original image; A) glare applied to the entire image; B) glare applied to clipped pixels only)
Glare (or bloom) in games } Convolution with large, non-separable filters is too slow } The effect is approximated by a combination of Gaussian filters } Each filter with a different sigma } The effect is meant to look good, not to be an accurate model of light scattering } Some games simulate a camera rather than the eye
References
} Tone-mapping
} REINHARD, E., HEIDRICH, W., DEBEVEC, P., PATTANAIK, S., WARD, G., AND MYSZKOWSKI, K. 2010. High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting. Morgan Kaufmann.
} MANTIUK, R.K., MYSZKOWSKI, K., AND SEIDEL, H. 2015. High Dynamic Range Imaging. In: Wiley Encyclopedia of Electrical and Electronics Engineering. John Wiley & Sons, Inc., Hoboken, NJ, USA, 1–42.
} http://www.cl.cam.ac.uk/~rkm38/pdfs/mantiuk15hdri.pdf (Chapter 5)
} Display models
} MANTIUK, R.K. 2016. Perceptual display calibration. In: R.R. Hainich and O. Bimber, eds., Displays: Fundamentals and Applications. CRC Press.
} http://www.cl.cam.ac.uk/~rkm38/pdfs/mantiuk2016perceptual_display.pdf
} Digital Image Processing
} GONZALEZ, R.C. AND WOODS, R.E. 2009. Digital Image Processing. Pearson. Ch. 3 and 7.