Intrinsic Images by Entropy Minimization


Intrinsic Images by Entropy Minimization, or How Peter Pan Really Lost His Shadow. Presentation by: Jordan Frank. Based on work by Finlayson, Hordley, Drew, and Lu [1,2,3]. Image from http://dvd.monstersandcritics.com/reviews/article_1273419.php

Colour Constancy Humans automatically remove the effect of lighting in visual perception. Our vision system is very robust to illumination. We can easily tell that the bricks in the sunlight are the same colour as the bricks in the shade.

Colour Constancy However, this is an ill-posed problem. We cannot tell whether differences in the colour that we detect are due to differences in the colour of the object or to differences in illumination. What if the bricks on the right really were darker? In fact, there are small differences in the colours, even within a single brick.

Invariant Image The goal is to produce an image that is invariant to the effects of illumination. Motivation: add our own illumination. Remove shadows! Almost every presentation prior to this one has talked about how shadows are problematic.

Cameras/Sensors The RGB colour at a pixel results from an integral over the visible wavelengths:

R_k = σ ∫ E(λ) S(λ) Q_k(λ) dλ,  k = R, G, B   (1)

where σ is the Lambertian shading, E(λ) the illumination spectral power distribution, S(λ) the surface spectral reflectance, and Q_k(λ) the camera sensitivity.

Cameras/Sensors For convenience we assume that the camera sensitivity is exactly a Dirac delta function, Q_k(λ) = q_k δ(λ − λ_k), where q_k = Q_k(λ_k) is the strength of the sensor. Then (1) reduces to R_k = σ E(λ_k) S(λ_k) q_k.

More Approximations Supposing that the lighting can be approximated by Planck's law, with Wien's approximation [4], we get

R_k = σ I k_1 λ_k^(−5) e^(−k_2 / (T λ_k)) S(λ_k) q_k   (2)

where k_1, k_2 are constants, the temperature T characterizes the lighting colour, and I gives the overall light intensity.

Removing I and σ We can effectively remove the effects of Lambertian shading and illumination intensity from (2) by dividing, giving the band-ratio 2-vector chromaticities c, with c_k = R_k / R_p, where p is one fixed channel and k = 1, 2 indexes over the remaining responses.
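As a concrete sketch (my own illustration, not the authors' code), the band-ratio log chromaticities can be computed with NumPy; the choice of green as the reference channel p and the test pixel are assumptions:

```python
import numpy as np

def log_chromaticity(rgb, p=1):
    """Band-ratio log chromaticities: divide the remaining channels
    by reference channel p (green here), which cancels the shading
    sigma and overall intensity I, then take logs.
    rgb: float array of shape (..., 3) with strictly positive values."""
    rgb = np.asarray(rgb, dtype=float)
    ratios = np.delete(rgb, p, axis=-1) / rgb[..., p:p + 1]
    return np.log(ratios)

# Scaling a pixel by any positive factor (shading or intensity)
# leaves its log chromaticity unchanged.
px = np.array([0.4, 0.2, 0.1])
assert np.allclose(log_chromaticity(px), log_chromaticity(3.7 * px))
```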

Log Chromaticities Log chromaticities are independent of illumination intensity and translate under a change of illumination colour: (ln R/G, ln B/G) under one light becomes (ln R/G, ln B/G) + (a, b) under a second light. More importantly, the translational term for different illuminants can always be written as (αa, αb), where a, b are constants and α depends on the illumination.
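Spelled out from (2), the log band-ratio splits into an illumination-independent surface term plus a term scaled by 1/T, which is exactly the (αa, αb) translation with α = 1/T (my notation: s_k and e_k collect the constants):

```latex
\ln c_k \;=\; \ln\frac{R_k}{R_p}
\;=\; \ln\frac{s_k}{s_p} \;+\; \frac{1}{T}\,(e_k - e_p),
\qquad
s_k = k_1\,\lambda_k^{-5}\,S(\lambda_k)\,q_k,
\qquad
e_k = -\,k_2/\lambda_k .
```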

Invariance, finally Illumination change translates all log chromaticities in the same direction. Therefore the coordinate axis orthogonal to the direction of illumination variation, y = −(a/b)x, records only illuminant-invariant information.

Example (simulated) Measured log chromaticities, and the same data after rotation, for perfect Dirac-delta camera data. Figures from [3]

Great in theory, but... There are a number of problems with this approach: cameras do not have Dirac delta responses; noise; and how do we determine how much to rotate?

Narrowband, not Dirac Actual measurements from a SONY DXC-930 camera, before and after rotation. The sensors are narrowband rather than Dirac, yet we are still doing pretty well. Figures from [3]

How to calibrate We can carefully calibrate our cameras to determine the best rotation angle through inspection of log chromaticities. Fig. 2 (figure and caption from [1]): (a) Macbeth ColorChecker Chart image under a Planckian light; (b) log-chromaticities of the 24 patches; (c) median chromaticities for 6 patches, imaged under 14 different Planckian illuminants.

How to calibrate Manual calibration is time-consuming and requires numerous images under various lighting conditions. We would like to find the desired parameters from just one image, without even knowing the camera that was used.

Intuition Greyscale images produced by projecting along the invariant direction and along a wrong invariant direction, with the corresponding (log(G/R), log(B/R)) plots. So what might be a good way to determine the best invariant direction? (Hint: we learned about it in this course.) Figures from [1]

Entropy to the Rescue Projecting along the invariant direction yields a low-entropy greyscale image; projecting along a wrong direction yields high entropy. Figures from [1]

Entropy Minimization Algorithm:
1. Form a 2D log-chromaticity representation of the image.
2. For θ = 1..180:
   a) Rotate by θ and take the projection onto the x-axis.
   b) Calculate the entropy.
   c) Keep track of the θ that minimizes entropy.

Problems/Solutions How to calculate entropy? Solution: create a histogram, computing bin widths using Scott's Rule: bin_width = 3.49 std(projected_data) N^(−1/3). Noise in the data? Solution: use only the middle 90% of the projected data. How to go back from rotated data to a usable image? Solution: see the paper; not easy!

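The angle sweep plus the histogram fixes above can be sketched as follows (a minimal reconstruction, not the authors' code; the exact trimming and binning details are assumptions):

```python
import numpy as np

def min_entropy_angle(log_chroma):
    """Sweep projection directions theta = 1..180 degrees over 2D
    log-chromaticity points (shape (N, 2)) and return the angle whose
    1D projection has the minimum-entropy histogram."""
    best_angle, best_entropy = 1, np.inf
    for theta in range(1, 181):
        t = np.deg2rad(theta)
        proj = log_chroma @ np.array([np.cos(t), np.sin(t)])
        # keep only the middle 90% of the projected data (noise)
        lo, hi = np.percentile(proj, [5, 95])
        proj = proj[(proj >= lo) & (proj <= hi)]
        # Scott's Rule bin width: 3.49 * std * N^(-1/3)
        width = 3.49 * proj.std() * len(proj) ** (-1.0 / 3.0)
        if width <= 0:  # degenerate: all projected values identical
            entropy = 0.0
        else:
            nbins = max(1, int(np.ceil((proj.max() - proj.min()) / width)))
            counts, _ = np.histogram(proj, bins=nbins)
            p = counts[counts > 0] / counts.sum()
            entropy = -(p * np.log2(p)).sum()
        if entropy < best_entropy:
            best_angle, best_entropy = theta, entropy
    return best_angle
```

On synthetic data whose clusters are all translated along one direction, the recovered angle is orthogonal to that direction, as the invariance argument predicts.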

Example Original image, entropy as a function of projection angle, and the min-entropy projection. Figures from [1]

Removing Shadows Now that we have an image without shadows, how can we use this to remove shadows from the original image? Image from http://collectibles.about.com/library/priceguides/blpgdspeterpan903.htm

Removing Shadows We need to reintegrate the illumination-invariant image into the original image. Create an edge map for the Mean-Shift-processed original image as well as for the invariant image. If the magnitude of the gradient in the invariant image is close to zero where the gradient of the log response in the original image is high, then this is evidence of a shadow edge.
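A toy version of that gradient test (my own sketch; the thresholds and the use of np.gradient are assumptions, and the paper's edge maps come from Mean-Shift-smoothed images):

```python
import numpy as np

def shadow_edge_mask(log_orig, invariant, t_high=0.3, t_low=0.05):
    """Flag pixels where the original log image has a strong gradient
    but the illumination-invariant image does not: evidence of a
    shadow edge rather than a material edge."""
    def grad_mag(img):
        gy, gx = np.gradient(img)  # per-axis gradients: rows, cols
        return np.hypot(gx, gy)
    return (grad_mag(log_orig) > t_high) & (grad_mag(invariant) < t_low)

# A vertical step present only in the original is flagged as a shadow
# edge; a horizontal step present in both images (a material change)
# is not.
orig = np.zeros((10, 10)); orig[:, 5:] += 1.0; orig[5:, :] += 1.0
inv = np.zeros((10, 10)); inv[5:, :] += 1.0
mask = shadow_edge_mask(orig, inv)
assert mask[2, 4] and not mask[4, 2]
```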

Removing Shadows Left to right: edges in the original image, edges in the invariant image, and the recovered shadow edge. Figures from [2]

Removing Shadows All edges in the image that are not shadow edges are indicative of material changes. There are no sharp changes due to illumination, and so shadows have been removed. Reintegrating the gradient gives a log-response image without shadows; for details, see the paper. To get back to a realistic image, we simply add artificial illumination.

Results (Paper) Entropy as a function of projection angle for several of the paper's test images. Figures from [3]

Results (Paper) Further entropy-versus-angle plots for the remaining test images. Figures from [3]

Results (Mine) The cameras used by the authors of the paper are professional quality. They claim that the methods work on all of the cameras they tested, but the question is whether they actually tested consumer-grade cameras. So I went out on a sunny day and snapped a few shots with my Pentax Optio WPi 6.0-megapixel camera.

Results (Mine) My results weren't quite as spectacular.

Results (Mine)

Questions? "If he thought at all, but I don't believe he ever thought, it was that he and his shadow, when brought near each other, would join like drops of water, and when they did not he was appalled. He tried to stick it on with soap from the bathroom, but that also failed. A shudder passed through Peter, and he sat on the floor and cried." Peter Pan: The Story of Peter and Wendy, J. M. Barrie

References
[1] Graham D. Finlayson, Mark S. Drew, and Cheng Lu, "Intrinsic Images by Entropy Minimization", European Conference on Computer Vision (ECCV 2004), Prague, May 2004. Springer Lecture Notes in Computer Science, Vol. 3023, pp. 582-595, 2004. http://www.cs.sfu.ca/~mark/ftp/eccv04/
[2] Graham D. Finlayson, Steven D. Hordley, and Mark S. Drew, "Removing Shadows from Images", European Conference on Computer Vision (ECCV 2002), Vol. 4, Lecture Notes in Computer Science Vol. 2353, pp. 823-836, 2002. http://www.cs.sfu.ca/~mark/ftp/eccv02/shadowless.pdf
[3] Graham D. Finlayson and Steven D. Hordley, "Color Constancy at a Pixel", Journal of the Optical Society of America A: Optics, Image Science, and Vision, Vol. 18, No. 2, February 2001, pp. 253-264.
[4] G. Wyszecki and W. S. Stiles, Color Science: Concepts and Methods, Quantitative Data and Formulas, 2nd edition, Wiley, New York, 1982.