Single Exposure Enhancement and Reconstruction. Some slides are from: J. Kosecka, Y. Chuang, A. Efros, C. B. Owen, W. Freeman
2 Reconstruction as an Inverse Problem. An original image f passes through a distortion & sampling operator h, noise n is added, and we observe the measurements g = h(f) + n; a reconstruction algorithm then produces the estimate f̂.
3 Given g = h(f) + n, the naive inverse f̂ = h⁻¹(g − n) is not available. Typically: the distortion h is non-invertible or ill-posed, and the noise n is unknown; only its statistical properties can be learnt.
4 Typical Degradation Sources: low illumination; optical distortions (geometric, blurring); sensor distortion (quantization; sampling: spatial + temporal + spectral; sensor noise); atmospheric attenuation (haze, turbulence, ...).
5 Today's Topics: spatial and spectral sampling → de-mosaicing; noise contamination → de-noising; geometric distortion → geometric rectification; illumination → white balancing.
6 Key point: Prior knowledge of Natural Images 6
7 The Image Prior: a probability density P_f(f) defined over the space of all images (figure: P_f(f), with values between 0 and 1, plotted over the image space).
8 Problem: P(image) is complicated to model. It is defined over a huge-dimensional space, is sparsely sampled, and is known to be non-Gaussian (from Mumford & Huang).
9 Spatial and Spectral Sampling Demosaicing CCD CMOS 9
10 Spatial/Spectral Sampling, possible configurations: the light beam split by a beam splitter onto 3 CCDs, or captured by a single CCD.
11 Color Filter Arrays (CFAs) Bayer pattern 11
12 Image Mosaicing 12
13 Inverse Problem: Image Demosaicing The CCD sensor in a digital camera acquires a single color component for each pixel. Problem: How to interpolate the missing components? 13
14 Demosaicing, types of solutions. Exploiting spatial coherence: non-adaptive (nearest-neighbor interpolation, bilinear interpolation, bicubic interpolation, etc.) or adaptive (edge-sensing interpolation). Exploiting color coherence: non-adaptive or adaptive.
15 Nearest neighbor interpolation: Assumption: neighboring pixels are similar. Conclusion: take the value of a missing pixel from its nearest neighbor's value. Assumes a piecewise-constant function; problematic in piecewise-linear areas and near edges.
16 Linear interpolation, 1D: G(k) = (G(k−1) + G(k+1)) / 2. Assumes a piecewise-linear function; artifacts near edges (figure: original, input, linear interpolation).
17 Bilinear interpolation 2D: 17
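As a concrete illustration of slides 16 and 17, here is a minimal bilinear demosaicing sketch in NumPy (not from the slides; the RGGB layout, the kernel values and the `conv2_wrap` helper are my assumptions). Each color plane keeps its Bayer samples and fills the missing pixels by convolving with a bilinear kernel:

```python
import numpy as np

def conv2_wrap(img, k):
    """2-D convolution with periodic padding (wrap keeps the 2x2 Bayer period intact)."""
    r = k.shape[0] // 2
    p = np.pad(img, r, mode='wrap')
    H, W = img.shape
    out = np.zeros((H, W), dtype=float)
    for dy in range(k.shape[0]):
        for dx in range(k.shape[1]):
            out += k[dy, dx] * p[dy:dy + H, dx:dx + W]
    return out

def demosaic_bilinear(raw):
    """Bilinear demosaicing of an RGGB Bayer mosaic (H x W array, H and W even)."""
    H, W = raw.shape
    r_mask = np.zeros((H, W)); r_mask[0::2, 0::2] = 1   # R at even rows/cols
    b_mask = np.zeros((H, W)); b_mask[1::2, 1::2] = 1   # B at odd rows/cols
    g_mask = 1 - r_mask - b_mask                        # G on the two diagonals
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0  # R/B: 2- and 4-neighbor averages
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0  # G: 4-neighbor average
    out = np.empty((H, W, 3))
    out[..., 0] = conv2_wrap(raw * r_mask, k_rb)
    out[..., 1] = conv2_wrap(raw * g_mask, k_g)
    out[..., 2] = conv2_wrap(raw * b_mask, k_rb)
    return out
```

Since the kernels are symmetric, convolution and correlation coincide here; the masks zero out the non-sampled positions so each kernel averages only true samples.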
18 Mosaic Image 18
19 Bilinear Interpolation 19
20 Bilinear Interpolation 20
21 Bilinear Interpolation 21
22 Adaptive Interpolation 2D (example 1): Assumption: neighboring pixels are similar if there is no edge between them. Compute ΔH = |G45 − G43| and ΔV = |G34 − G54|; then
G44 = (G34 + G54)/2 if ΔH >> ΔV,
G44 = (G43 + G45)/2 if ΔV >> ΔH,
G44 = (G34 + G54 + G43 + G45)/4 otherwise.
Problem: G44 does not exploit all available information.
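A sketch of slide 22's edge-sensing rule, using the slide's neighbor naming; the threshold deciding when one gradient "strongly" dominates the other is a hypothetical tuning constant, not from the slides:

```python
def green_adaptive(G34, G54, G43, G45, thresh=10.0):
    """Edge-sensing estimate of the missing G44 (slide 22's rule).
    G34/G54: vertical green neighbors; G43/G45: horizontal green neighbors.
    thresh is an assumed constant deciding when one gradient dominates."""
    dH = abs(G45 - G43)                    # horizontal difference
    dV = abs(G34 - G54)                    # vertical difference
    if dH > dV + thresh:                   # strong horizontal change: a vertical edge,
        return (G34 + G54) / 2.0           # so interpolate along it (vertically)
    if dV > dH + thresh:                   # strong vertical change: a horizontal edge,
        return (G43 + G45) / 2.0           # so interpolate horizontally
    return (G34 + G54 + G43 + G45) / 4.0   # no clear edge: average all four
```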
23 Adaptive Interpolation 2D (example 2): weight each green neighbor by a monotonically descending function φ of the intensity difference in its direction (figure: the profile φ(x)):
w45 = φ(B46 − B44), w34 = φ(B24 − B44), w43 = φ(B42 − B44), w54 = φ(B64 − B44),
G44 = (w45 G45 + w34 G34 + w43 G43 + w54 G54) / (w45 + w34 + w43 + w54).
24 Color Coherence, Diffuse Model: (R, G, B) = (K_R, K_G, K_B) · L, where K is the surface reflectance and L the illumination/shading. In the log domain: (r, g, b) = (log K_R, log K_G, log K_B) + l, with l = log L, so spatially (r(x,y), g(x,y), b(x,y)) = (log K_R, log K_G, log K_B) + l(x,y). Most edges are shading edges in l (common); edges in K are chrominance edges (uncommon). Assumption: color bands are highly correlated in high frequencies.
25 Color Coherence, Example: decompose R = R_L + R_H and G = G_L + G_H into low- and high-frequency bands. At a luminance edge (common) the difference R − G stays flat; at a chrominance edge (uncommon) R − G jumps (figure: intensity profiles I versus x).
26 Outcomes: in high frequencies R_x(x) = G_x(x) = B_x(x), so R(x) − G(x) ≈ Δrg is locally constant (figure: R(x), G(x), and R(x) − G(x) = Δrg versus x). Spatial coherence: G(k) = (G(k−1) + G(k+1))/2. Color coherence: Δrg = average{R(x) − G(x)}, hence R(k) = G(k) + Δrg. Problem: assumes piecewise-constant signals; fails near luminance and chrominance edges. Solution: assume piecewise-linear signals + adaptive interpolation.
27 Exploiting color coherence for green interpolation: since R_x(x) = G_x(x), at a red pixel k two estimates follow: G̃_k = G_{k−1} + (R_k − R_{k−2})/2 and G̃_k = G_{k+1} + (R_k − R_{k+2})/2. These estimates for G_k can be improved using an adaptive scheme.
28 Edge-Sensing Interpolation using Color Coherence: Green: Ĝ = (w1 G̃1 + w2 G̃2 + w3 G̃3 + w4 G̃4) / (w1 + w2 + w3 + w4) over the four neighbors, with weights w_i = φ((B_i − B)² + (G̃_i − G̃)²) sensing edges in both the blue samples and the green estimates. Red: R̂_ij = G_ij + (Σ_k w_k (R_k − G_k)) / (Σ_k w_k), i.e. the color difference R − G is interpolated rather than R itself.
29 Bilinear 29
30 Adaptive + color coherence 30
31 Bilinear 31
32 Adaptive + color coherence 32
33 Bilinear vs. adaptive (comparison)
34 Image Denoising 34
35 Noise Models: additive noise: g = f + n; multiplicative noise: g = f + f·n; general case: P(g | f).
36 Example Noise additive noise multiplicative noise 36
37 Examples of independent and identically distributed (i.i.d.) noise. Gaussian white noise: P(g | f) = (1/(√(2π) σ)) e^{−(g − f)² / (2σ²)} (example shown with σ = 20).
38 Additive uniform noise on [a, b]: P(g | f) = 1/(b − a) if a ≤ g − f ≤ b, and 0 otherwise (example shown with b − a = 20).
39 Impulse noise (salt & pepper): P(g | f) = P_a for g = a, P_b for g = b, and 1 − P_a − P_b for g = f. Note: this noise is not additive!
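The three noise models of slides 37 through 39 can be sampled as follows (a sketch; the parameter defaults are my assumptions, chosen to match the slides' examples):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian(f, sigma=20.0):
    """Additive i.i.d. Gaussian noise: g = f + n, n ~ N(0, sigma)."""
    return f + rng.normal(0.0, sigma, f.shape)

def add_uniform(f, a=-10.0, b=10.0):
    """Additive uniform noise: g = f + n, n ~ U[a, b] (here b - a = 20)."""
    return f + rng.uniform(a, b, f.shape)

def add_salt_pepper(f, Pa=0.05, Pb=0.05, a=0.0, b=255.0):
    """Impulse (salt & pepper) noise: g = a w.p. Pa, g = b w.p. Pb, g = f otherwise.
    Not additive: at corrupted pixels the output no longer depends on f."""
    g = f.astype(float).copy()
    u = rng.uniform(size=f.shape)
    g[u < Pa] = a
    g[(u >= Pa) & (u < Pa + Pb)] = b
    return g
```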
40 Bayesian denoising. Assume the noise component is i.i.d. Gaussian: g = f + n, where n is distributed ~N(0, σ). The MAP estimate of the original f is f̂ = argmax_f P(f | g) = argmax_f P(g | f) P(f) / P(g) (Bayes' rule). This leads to f̂ = argmin_f {(g − f)² + λ ψ(f)}, where ψ(f) is a penalty for improbable f.
41 First-try prior: similarity of neighboring pixels, ψ(f) = Σ_p Σ_{q∈N_p} W_s(p − q) (f_p − f_q)², where W_s is a Gaussian profile giving less weight to distant pixels (figure: the quadratic penalty (f_p − f_q)² and the profile W_s).
42 This leads to Gaussian smoothing filters: with ψ_p(f) = Σ_{q∈N_p} W_s(p − q) (f_p − f_q)², the minimizer is f̂_p = (1 − α) g_p + α (Σ_{q∈N_p} W_s(p − q) g_q) / (Σ_{q∈N_p} W_s(p − q)). Reduces noise but blurs out edges; the parameter α depends on the noise variance.
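Slide 42's filter written out directly in NumPy (a sketch; the radius, sigma_s, and alpha defaults are assumed tuning values, and q ranges over the neighborhood excluding p, as in the slide):

```python
import numpy as np

def gaussian_denoise(g, radius=2, sigma_s=1.5, alpha=0.8):
    """f_hat_p = (1 - alpha) g_p + alpha * sum_q W_s(p-q) g_q / sum_q W_s(p-q)."""
    H, W = g.shape
    pad = np.pad(g, radius, mode='reflect')
    num = np.zeros((H, W), dtype=float)
    den = 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue                     # q != p; the center enters via (1 - alpha)
            w = np.exp(-(dy * dy + dx * dx) / (2.0 * sigma_s ** 2))  # W_s(p - q)
            num += w * pad[radius + dy:radius + dy + H,
                           radius + dx:radius + dx + W]
            den += w
    return (1.0 - alpha) * g + alpha * num / den
```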
43 Noisy Image 43
44 Filtered Image 44
45 The role of the prior term, Gaussian smoothing: f̂ = argmin_f Σ_p {(g_p − f_p)² + λ Σ_{q∈N_p} W_s(p − q) (f_p − f_q)²}, a data term plus a prior term (figure: data term, prior term, and their sum as functions of f_p).
46 Example 2: Prior Term. Edge-sensitive similarity: ψ(f) = Σ_p Σ_{q∈N_p} W_s(p − q) log(1 + (f_p − f_q)²) (figure: the penalty log(1 + (f_p − f_q)²)).
47 This leads to the bilateral filter (edge-preserving smoothing): f̂_p = (1 − α) g_p + α (Σ_{q∈N_p} W_S(p − q) W_R(g_p − g_q) g_q) / (Σ_{q∈N_p} W_S(p − q) W_R(g_p − g_q)), where W_S is a monotonically descending spatial weight and W_R a monotonically descending radiometric weight.
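A brute-force sketch of the bilateral filter on slide 47, with Gaussian choices for W_S and W_R (the window radius, both sigmas, and alpha are my assumptions):

```python
import numpy as np

def bilateral(g, radius=2, sigma_s=2.0, sigma_r=0.1, alpha=1.0):
    """f_hat_p = (1-alpha) g_p + alpha * sum_q W_S(p-q) W_R(g_p-g_q) g_q / (normalizer)."""
    H, W = g.shape
    pad = np.pad(g, radius, mode='reflect')
    num = np.zeros((H, W), dtype=float)
    den = np.zeros((H, W), dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = pad[radius + dy:radius + dy + H,
                          radius + dx:radius + dx + W]                  # g_q
            w_s = np.exp(-(dy * dy + dx * dx) / (2.0 * sigma_s ** 2))   # spatial weight W_S
            w_r = np.exp(-((g - shifted) ** 2) / (2.0 * sigma_r ** 2))  # radiometric weight W_R
            num += w_s * w_r * shifted
            den += w_s * w_r
    return (1.0 - alpha) * g + alpha * num / den
```

Near a sharp step, w_r suppresses contributions from pixels on the other side of the edge, which is exactly what preserves the discontinuity.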
48 Bilateral filter (from P. Milanfar).
49 Gaussian Smoothing: W_S = e^{−(p − q)² / σ_s²}; smooths across edges (figure: intensity I versus x; slide from Darya Frolova and Denis Simakov).
50 Bilateral Filtering: W_S W_R = e^{−((p − q)²/σ_s² + (g_p − g_q)²/σ_R²)}; preserves discontinuities (figure: intensity I versus x; slide from Darya Frolova and Denis Simakov).
51 Noisy Image 51
52 Filtered Image 52
53 Edge preserving smoothing Gaussian smoothing 53
54 Bilateral Filter: Example
55 55
56 56
57 The role of the prior term, robust: f̂ = argmin_f Σ_p {(g_p − f_p)² + ψ_robust(f_p − f_q)} (figure: the robust penalty ψ_robust(v)).
58 The role of the prior term, robust: f̂ = argmin_f Σ_p {(g_p − f_p)² + ψ_robust(f_p − f_q)}, a data term plus a prior term (figure: data term, prior term, and their sum).
59 Example 3: Prior Term. "Images are homogeneous" is out; "images are self-similar" is in.
60 Example 3: Prior Term. ψ(f) = Σ_p Σ_{q∈N_p} W_N(N_p − N_q) (f_p − f_q)², where N_p (and N_q) denotes the local patch neighborhood of p (and q).
61 Neighborhood weights: examples 61
62 This leads to the non-local means filter: f̂_p = (1 − α) g_p + α (Σ_q W_N(N_p − N_q) g_q) / (Σ_q W_N(N_p − N_q)), where W_N is a monotonically descending function, e.g. W_N(N_p − N_q) = e^{−‖N_p − N_q‖² / σ_N²}.
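A brute-force NumPy sketch of slide 62's non-local means filter (the search and patch radii, sigma_n, and alpha are assumed values; the cost is O(H·W·search²·patch²), so real implementations restrict or accelerate the search):

```python
import numpy as np

def nl_means(g, search=3, patch=1, sigma_n=0.1, alpha=1.0):
    """f_hat_p = (1-alpha) g_p + alpha * sum_q W_N(N_p-N_q) g_q / sum_q W_N(N_p-N_q),
    with W_N(N_p - N_q) = exp(-||N_p - N_q||^2 / sigma_n^2) over patches."""
    H, W = g.shape
    padp = np.pad(g, patch, mode='reflect')
    # stack every pixel's (2*patch+1)^2 patch along the last axis
    patches = np.stack([padp[dy:dy + H, dx:dx + W]
                        for dy in range(2 * patch + 1)
                        for dx in range(2 * patch + 1)], axis=-1)
    padg = np.pad(g, search, mode='reflect')
    padP = np.pad(patches, ((search, search), (search, search), (0, 0)), mode='reflect')
    num = np.zeros((H, W), dtype=float)
    den = np.zeros((H, W), dtype=float)
    for dy in range(2 * search + 1):
        for dx in range(2 * search + 1):
            g_q = padg[dy:dy + H, dx:dx + W]
            N_q = padP[dy:dy + H, dx:dx + W, :]
            d2 = ((patches - N_q) ** 2).sum(axis=-1)   # ||N_p - N_q||^2
            w = np.exp(-d2 / sigma_n ** 2)             # W_N(N_p - N_q)
            num += w * g_q
            den += w
    return (1.0 - alpha) * g + alpha * num / den
```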
63 Patch-based Denoising (NL-means): the denoised value is a linear combination w_1 v(N_1) + w_2 v(N_2) + … + w_N v(N_N) of noisy patches (figure: noisy patches v(N_i), v(N_j)).
64 Non-Local means - Example 64
65 Non-Local means - Example 65
66 White Balancing, also known as: illumination estimation, color constancy, color correction.
67 Experiment 1: Yellow illumination From David Brainard 67
68 Experiment 2: Blue illumination From David Brainard 68
69 Experiment 3: Blue illumination From David Brainard 69
70 Yellow illumination Blue illumination Blue illumination 70
71 Conclusions: the eye cares more about objects' intrinsic color than about the color of the light leaving the objects; the HVS applies color adaptation.
72 Color under different illuminations from S. Alpert & D. Simakov 72
73 Experiment 2: which one looks more natural? 73
74 White Balance Problem: the eye cares more about objects' intrinsic color than about the color of the light leaving the objects; the HVS applies color adaptation. When watching a picture on screen or in print, we adapt to the illuminant of the room, not to that of the scene in the picture. We therefore need to discount the color of the light source: white balancing.
75 White Balance & Film: different types of film for fluorescent, tungsten, and daylight; one needs to change the film! Electronic & digital imaging are more flexible.
76 Image formation: image color E_k = ∫ e(λ) s(λ) ρ_k(λ) dλ, for k = R, G, B, where e(λ) is the illumination, s(λ) the object reflectance, and ρ_k(λ) the sensor response (from S. Alpert & D. Simakov).
77 White Balancing: if the spectrum of the light source changes, the spectrum of the reflected light changes. The goal: evaluate the surface color as if it were illuminated with white (canonical) light (from S. Alpert & D. Simakov).
78 Von Kries adaptation: multiply each channel by a gain factor, (R, G, B) = (W_r E_R, W_g E_G, W_b E_B), i.e. a diagonal transform. Note that the light source could have a more complex effect: an arbitrary 3×3 matrix, or a more complex spectrum transformation. The diagonal model can be justified if the sensor spectral sensitivity functions are narrow.
79 Manual white balancing, white spot: take a picture of a neutral object (white or gray) and deduce the weight of each channel; if the object is recorded as (r_w, g_w, b_w), use weights (1/r_w, 1/g_w, 1/b_w).
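Slide 79 as code (a sketch; `img` is assumed to be an H×W×3 float array and `white_patch` a crop of it known to be neutral):

```python
import numpy as np

def manual_white_balance(img, white_patch):
    """Scale each channel by 1 / (its mean over the neutral patch), i.e. the
    slide's weights (1/r_w, 1/g_w, 1/b_w)."""
    rw, gw, bw = white_patch.reshape(-1, 3).mean(axis=0)
    return img / np.array([rw, gw, bw])
```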
80 Manual white balancing light selection Select type of light from a finite set. Correct accordingly. 80
81 Automatic white balancing, gray world assumption: the average color in the image is grey. Use weights (W_R, W_G, W_B) proportional to (1/⟨R⟩_image, 1/⟨G⟩_image, 1/⟨B⟩_image). Note that this also sets the exposure/brightness; usually an 18% grey average is assumed.
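Gray-world white balancing from slide 81 as a short sketch (the 18% grey figure comes from the slide; mapping it to a target mean of 0.18 in a [0, 1] image is my assumption):

```python
import numpy as np

def gray_world(img, target=0.18):
    """Scale each channel so its image-wide mean equals the grey target.
    As the slide notes, this also sets the exposure/brightness."""
    means = img.reshape(-1, 3).mean(axis=0)
    return img * (target / means)
```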
82 Automatic white balancing Brightest pixel assumption Highlights usually have the color of the light source Apply white balancing by using the highlight pixels Problem: Highlight pixels are contaminated with diffuse components 82
83 How can we detect highlight pixels? Following G. J. Klinker, S. A. Shafer and T. Kanade, "A Physical Approach to Color Image Understanding", International Journal of Computer Vision. Two reflectance components: total = diffuse + specular (from S. Alpert & D. Simakov).
84 Reminder - Image Formation (figure: vectors N, L, R, V and angle θ). Diffuse reflection: I_diff = K(λ) e_p(λ) (N · L). Specular reflection: I_spec = K_s(λ) e_p(λ) (R · V)^n. Here e_p is the point-light intensity; K, K_s ∈ [0, 1] are the surface diffuse/specular reflectivities; N is the surface normal, L the light direction, and V the viewing direction.
85 Diffuse object in RGB space Linear cluster in color space from S. Alpert & D. Simakov 85
86 Specular object in RGB space Linear cluster in the direction of the illuminant color from S. Alpert & D. Simakov 86
87 Combined reflectance in RGB space Skewed T from S. Alpert & D. Simakov 87
88 Combined reflectance of several objects Several T-clusters Specular lines are parallel (why?) from S. Alpert & D. Simakov 88
89 Step I: group objects by region growing. Group together diffuse and specular image parts of the same object; grow regions in the image domain so as to form clusters in the color domain (from S. Alpert & D. Simakov).
90 Step II: decompose into diffuse + specular via a coordinate transform in color space: (R, G, B)ᵀ = [C_matte C_spec C_matte×C_spec] (matte, specular, noise)ᵀ, and inversely (matte, specular, noise)ᵀ = [C_matte C_spec C_matte×C_spec]⁻¹ (R, G, B)ᵀ (from S. Alpert & D. Simakov).
91 Decompose into matte + specular (2): apply [C_matte C_spec C_matte×C_spec]⁻¹ to every pixel in RGB space (from S. Alpert & D. Simakov).
92 Decompose into matte + specular (3): project each pixel onto C_matte and C_spec (figure: input = matte + specular; from S. Alpert & D. Simakov).
93 Final results: Reflectance Decomposition input image matte component specular component = + G. J. Klinker, S. A. Shafer and T. Kanade. A Physical Approach to Color Image Understanding. International Journal of Computer Vision, from S. Alpert & D. Simakov 93
94 Geometric Distortion 94
95 Geometric Distortion Correction: no distortion, pincushion, barrel. Radial distortion of the image is caused by imperfect lenses; deviations are most noticeable for rays that pass through the edge of the lens.
96 General Geometric Transformation: operations depend on the pixel's coordinates and are independent of pixel values. x' = f_x(x, y), y' = f_y(x, y), and I(x, y) = I'(f_x(x, y), f_y(x, y)) (figure: (x, y) → (x', y'), I(x, y) → I'(x', y')).
97 Radial Distortions: no distortion, barrel distortion, pincushion distortion. Distortion model (inverse mapping): r_u = r_d + a·r_d² + b·r_d³, where r_u is the undistorted radius and r_d the distorted radius.
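Slide 97's inverse-mapping model as code: given distorted pixel coordinates, it returns the undistorted ones by stretching each pixel along its ray from the image center (the explicit center arguments (cx, cy) are my assumption):

```python
import numpy as np

def undistort_coords(xd, yd, a, b, cx=0.0, cy=0.0):
    """Apply r_u = r_d + a*r_d^2 + b*r_d^3, radii measured from (cx, cy)."""
    dx, dy = np.asarray(xd) - cx, np.asarray(yd) - cy
    r_d = np.hypot(dx, dy)                       # distorted radius
    r_u = r_d + a * r_d ** 2 + b * r_d ** 3      # undistorted radius
    # stretch along the ray; leave the center pixel in place
    scale = np.where(r_d > 0, r_u / np.where(r_d > 0, r_d, 1.0), 1.0)
    return cx + dx * scale, cy + dy * scale
```

To resample an image one would iterate the other way: for each undistorted output pixel solve for r_d (e.g. a few Newton steps) and interpolate the input there.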
98 Geometric Rectification using Calibration target 98
99 Parameter Estimation, minimum least-squared-error solution: stack the model r_u(i) = r_d(i) + a·r_d(i)² + b·r_d(i)³ for all measured radii into M p = b, with rows M_i = (r_d(i)², r_d(i)³), unknowns p = (a, b)ᵀ, and right-hand side b_i = r_u(i) − r_d(i); then p̂ = (MᵀM)⁻¹ Mᵀ b.
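The least-squares solution of slide 99 in NumPy; np.linalg.lstsq solves the same normal equations p̂ = (MᵀM)⁻¹ Mᵀ b, just more stably than forming the inverse:

```python
import numpy as np

def fit_radial_params(r_d, r_u):
    """Estimate (a, b) in r_u = r_d + a*r_d^2 + b*r_d^3 from measured radius pairs."""
    r_d = np.asarray(r_d, dtype=float)
    M = np.column_stack([r_d ** 2, r_d ** 3])   # rows (r_d(i)^2, r_d(i)^3)
    rhs = np.asarray(r_u, dtype=float) - r_d    # b_i = r_u(i) - r_d(i)
    p_hat, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return p_hat                                # (a, b)
```

In practice the radius pairs come from detecting the calibration target's straight lines (slide 98) in the distorted image.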
100 Correcting radial distortion from Helmut Dersch 100
101 Correcting Radial Distortions Before After 101
102 Single Image Enhancement - Summary: demosaicing (sensor correction); denoising (sensor/acquisition correction); white balance (color correction); barrel/pincushion distortion (geometric, lens, correction).
103 THE END 103
Gradient Domain High Dynamic Range Compression Raanan Fattal Dani Lischinski Michael Werman The Hebrew University of Jerusalem School of Computer Science & Engineering 1 In A Nutshell 2 Dynamic Range Quantity
More informationComputer Vision Lecture 3
Computer Vision Lecture 3 Linear Filters 03.11.2015 Bastian Leibe RWTH Aachen http://www.vision.rwth-aachen.de leibe@vision.rwth-aachen.de Demo Haribo Classification Code available on the class website...
More informationImage representation with multi-scale gradients
Image representation with multi-scale gradients Eero P Simoncelli Center for Neural Science, and Courant Institute of Mathematical Sciences New York University http://www.cns.nyu.edu/~eero Visual image
More informationOld painting digital color restoration
Old painting digital color restoration Michail Pappas Ioannis Pitas Dept. of Informatics, Aristotle University of Thessaloniki GR-54643 Thessaloniki, Greece Abstract Many old paintings suffer from the
More informationNotes on Regularization and Robust Estimation Psych 267/CS 348D/EE 365 Prof. David J. Heeger September 15, 1998
Notes on Regularization and Robust Estimation Psych 67/CS 348D/EE 365 Prof. David J. Heeger September 5, 998 Regularization. Regularization is a class of techniques that have been widely used to solve
More informationIntrinsic Images by Entropy Minimization
Intrinsic Images by Entropy Minimization How Peter Pan Really Lost His Shadow Presentation By: Jordan Frank Based on work by Finlayson, Hordley, Drew, and Lu [1,2,3] Image from http://dvd.monstersandcritics.com/reviews/article_1273419.php/dvd_review_peter_pan
More informationMultiview Geometry and Bundle Adjustment. CSE P576 David M. Rosen
Multiview Geometry and Bundle Adjustment CSE P576 David M. Rosen 1 Recap Previously: Image formation Feature extraction + matching Two-view (epipolar geometry) Today: Add some geometry, statistics, optimization
More informationMixture Models and EM
Mixture Models and EM Goal: Introduction to probabilistic mixture models and the expectationmaximization (EM) algorithm. Motivation: simultaneous fitting of multiple model instances unsupervised clustering
More informationToday. MIT 2.71/2.710 Optics 11/10/04 wk10-b-1
Today Review of spatial filtering with coherent illumination Derivation of the lens law using wave optics Point-spread function of a system with incoherent illumination The Modulation Transfer Function
More informationSignal Denoising with Wavelets
Signal Denoising with Wavelets Selin Aviyente Department of Electrical and Computer Engineering Michigan State University March 30, 2010 Introduction Assume an additive noise model: x[n] = f [n] + w[n]
More informationImage Degradation Model (Linear/Additive)
Image Degradation Model (Linear/Additive),,,,,,,, g x y f x y h x y x y G u v F u v H u v N u v 1 Source of noise Objects Impurities Image acquisition (digitization) Image transmission Spatial properties
More informationImage Restoration. Enhancement v.s. Restoration. Typical Degradation Sources. Enhancement vs. Restoration. Image Enhancement: Image Restoration:
Image Retoration Retoration v.. Enhancement Image Denoiing Image Retoration Enhancement v.. Retoration Image Enhancement: A roce which aim to imrove bad image o they will look better. Image Retoration:
More informationAdvances in Computer Vision. Prof. Bill Freeman. Image and shape descriptors. Readings: Mikolajczyk and Schmid; Belongie et al.
6.869 Advances in Computer Vision Prof. Bill Freeman March 3, 2005 Image and shape descriptors Affine invariant features Comparison of feature descriptors Shape context Readings: Mikolajczyk and Schmid;
More informationDigital Matting. Outline. Introduction to Digital Matting. Introduction to Digital Matting. Compositing equation: C = α * F + (1- α) * B
Digital Matting Outline. Introduction to Digital Matting. Bayesian Matting 3. Poisson Matting 4. A Closed Form Solution to Matting Presenting: Alon Gamliel,, Tel-Aviv University, May 006 Introduction to
More informationOptical/IR Observational Astronomy Telescopes I: Optical Principles. David Buckley, SAAO. 24 Feb 2012 NASSP OT1: Telescopes I-1
David Buckley, SAAO 24 Feb 2012 NASSP OT1: Telescopes I-1 1 What Do Telescopes Do? They collect light They form images of distant objects The images are analyzed by instruments The human eye Photographic
More informationSIFT: Scale Invariant Feature Transform
1 SIFT: Scale Invariant Feature Transform With slides from Sebastian Thrun Stanford CS223B Computer Vision, Winter 2006 3 Pattern Recognition Want to find in here SIFT Invariances: Scaling Rotation Illumination
More informationNoise, Image Reconstruction with Noise!
Noise, Image Reconstruction with Noise! EE367/CS448I: Computational Imaging and Display! stanford.edu/class/ee367! Lecture 10! Gordon Wetzstein! Stanford University! What s a Pixel?! photon to electron
More informationA Method for Blur and Similarity Transform Invariant Object Recognition
A Method for Blur and Similarity Transform Invariant Object Recognition Ville Ojansivu and Janne Heikkilä Machine Vision Group, Department of Electrical and Information Engineering, University of Oulu,
More informationTikhonov Regularization in General Form 8.1
Tikhonov Regularization in General Form 8.1 To introduce a more general formulation, let us return to the continuous formulation of the first-kind Fredholm integral equation. In this setting, the residual
More informationDIFFRACTION GRATING. OBJECTIVE: To use the diffraction grating in the formation of spectra and in the measurement of wavelengths.
DIFFRACTION GRATING OBJECTIVE: To use the diffraction grating in the formation of spectra and in the measurement of wavelengths. THEORY: The operation of the grating is depicted in Fig. 1 on page Lens
More informationOptical Systems Program of Studies Version 1.0 April 2012
Optical Systems Program of Studies Version 1.0 April 2012 Standard1 Essential Understand Optical experimental methodology, data analysis, interpretation, and presentation strategies Essential Understandings:
More informationSampling in 1D ( ) Continuous time signal f(t) Discrete time signal. f(t) comb
Sampling in 2D 1 Sampling in 1D Continuous time signal f(t) Discrete time signal t ( ) f [ k] = f( kt ) = f( t) δ t kt s k s f(t) comb k 2 Nyquist theorem (1D) At least 2 sample/period are needed to represent
More informationFitting Narrow Emission Lines in X-ray Spectra
Fitting Narrow Emission Lines in X-ray Spectra Taeyoung Park Department of Statistics, Harvard University October 25, 2005 X-ray Spectrum Data Description Statistical Model for the Spectrum Quasars are
More informationCamera calibration. Outline. Pinhole camera. Camera projection models. Nonlinear least square methods A camera calibration tool
Outline Camera calibration Camera projection models Camera calibration i Nonlinear least square methods A camera calibration tool Applications Digital Visual Effects Yung-Yu Chuang with slides b Richard
More information18/10/2017. Image Enhancement in the Spatial Domain: Gray-level transforms. Image Enhancement in the Spatial Domain: Gray-level transforms
Gray-level transforms Gray-level transforms Generic, possibly nonlinear, pointwise operator (intensity mapping, gray-level transformation): Basic gray-level transformations: Negative: s L 1 r Generic log:
More informationApplication to Hyperspectral Imaging
Compressed Sensing of Low Complexity High Dimensional Data Application to Hyperspectral Imaging Kévin Degraux PhD Student, ICTEAM institute Université catholique de Louvain, Belgium 6 November, 2013 Hyperspectral
More informationProperties of detectors Edge detectors Harris DoG Properties of descriptors SIFT HOG Shape context
Lecture 10 Detectors and descriptors Properties of detectors Edge detectors Harris DoG Properties of descriptors SIFT HOG Shape context Silvio Savarese Lecture 10-16-Feb-15 From the 3D to 2D & vice versa
More information