Single-Image-Based Rain and Snow Removal Using Multi-guided Filter

Xianhui Zheng 1, Yinghao Liao 1,*, Wei Guo 2, Xueyang Fu 2, and Xinghao Ding 2
1 Department of Electronic Engineering, Xiamen University, Xiamen, China
2 Department of Communication Engineering, Xiamen University, Xiamen, China
{yhliao,dxh}@xmu.edu.cn
* Corresponding author.

Abstract. In this paper, we propose a new rain and snow removal method that uses the low frequency part of a single image. It rests on a key difference between clear background edges and rain streaks or snowflakes: the low frequency part clearly separates their properties, since it is the non-rain or non-snow component. We refine the low frequency part into a guidance image and feed the high frequency part to the guided filter as the input image; this yields the non-rain or non-snow component of the high frequency part, which, added back to the low frequency part, gives the restored image. We further sharpen the result based on the properties of clear background edges. Our results show good performance in both rain removal and snow removal.

Keywords: rain removal, snow removal, low frequency.

1 Introduction

A photo taken on a rainy or snowy day is covered with bright streaks (Figure 1). The streaks not only degrade visual quality but also significantly reduce the effectiveness of computer vision algorithms such as object recognition, tracking, and retrieval. Removal of rain or snow has therefore received much attention, especially rain removal. Garg and Nayar suggested a correlation model capturing the dynamics of rain and a physics-based motion blur model explaining the photometry of rain [1]. They then showed how to adjust camera parameters to reduce the effects of rain [2]. Zhang et al. proposed a detection method combining temporal and chromatic properties of rain [5]. Barnum et al. modeled rain or snow streaks with a blurred Gaussian model and detected them from statistical information in frequency space across frames, after which rain or snow can be removed or amplified [6] [7]. Fu et al. proposed a rain removal method via image decomposition, in which the rain component of a single image is removed by dictionary learning and sparse coding [8]. Xu et al. proposed a method using the guided filter to remove rain or snow streaks [10], and later improved its performance by refining the guidance image [11].

In this work, a novel method is proposed based on the difference between clear background edges and rain or snow streaks. The method mainly uses the guided filter to remove rain streaks or snowflakes. Section 2 shows that rain streaks and snowflakes differ from other textures. Section 3 introduces the algorithm that removes them. Section 4 presents experimental results and comparisons with other methods. The last section concludes the paper.

Fig. 1. Intensity profiles along the horizontal direction. (a) Blurred rain streaks; (b) edges with low pixel values; (c) edges whose adjacent pixel values differ; (d) clear background edges that resemble rain. The blue line shows the pixel values of the input image along part of a row; the red line is the corresponding low frequency part obtained by the guided filter; the green line is the edge-enhanced version of the red line. All lines are drawn to the same scale.

2 The Difference between Clear Background Edges and Rain or Snow Streaks

Because of the size and speed of raindrops and snowflakes, they are imaged as bright, blurry streaks. The streak pixels are brighter than adjacent pixels, and they disappear in the low frequency part, as in Figure 1(a). Some background edges are darker than adjacent pixels, as in Figure 1(b); such edges cannot be rain or snow streaks, but their values are raised by the guided filter. Other edges, as in Figure 1(c), have some nearby pixels that are brighter and some that are darker; these edges are retained in the low frequency part, become slightly smoothed, and can be recovered by edge enhancement. There also exist textures that resemble rain or snow streaks in being brighter than adjacent pixels, but they are sharper than rain or snow streaks, as shown in Figure 1(d). After transforming to the low frequency part with appropriate parameters, some of them may turn into faint textures, and we also enhance them so that they come close to the original textures.
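To make this property concrete, the following sketch decomposes a rainy image into its low and high frequency parts with an off-the-shelf guided filter and prints a short row profile of the input and of its low frequency part, in the spirit of Figure 1. This is an illustrative sketch, not the authors' code: the file name is a placeholder, cv2.ximgproc.guidedFilter comes from the opencv-contrib-python package, and the radius and eps values are assumed rather than taken from the paper.

```python
import cv2
import numpy as np

# Placeholder file name; any rainy photograph will do.
img = cv2.imread("rainy.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

# Low frequency part: guided filter with the image as its own guidance.
# radius and eps are illustrative choices, not values from the paper.
low = cv2.ximgproc.guidedFilter(img, img, 15, 1e-3)
high = img - low  # high frequency part containing streaks and fine textures

# Compare a short horizontal profile, as in Figure 1: narrow bright rain
# streaks should be strongly attenuated in `low`, while broad background
# edges survive.
row = img.shape[0] // 2
print("input    :", np.round(img[row, 100:110], 3))
print("low freq :", np.round(low[row, 100:110], 3))
```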

2.1 Image Model of Blurred Rain Streaks or Snow Streaks

In the commonly used mathematical model of a rainy or snowy image, the input image is decomposed into two components, a clean background image and a rain or snow component:

I_{in} = I_b + I_r   (1)

To make the different textures of rain or snow and background visible, the input image is first decomposed into a low frequency part and a high frequency part using the guided image filter. The low frequency part is the non-rain or non-snow component. All the rain and snow streaks lie in the high frequency part, which also contains non-rain or non-snow textures. This view is the same as in [8] [12] [13]. Formula (1) can therefore be rewritten as (2), where I_{bl} is the low frequency part of the background, I_{bh} is the high frequency part of the background, I_{rh} denotes the rain or snow in the high frequency part, and I_{in}^{guide} is the transformation of I_{in} by the guided filter:

I_{in} = I_{bl} + I_{bh} + I_{rh} = I_{in}^{guide} + I_{bh} + I_{rh}   (2)

which can be simplified to (3):

I_{in} = I_{LF} + I_{HF}   (3)

The transformation changes the textures, as shown by the red line in Figure 1. From Figure 1(a) and Figure 3(b) we see that the low frequency part is free of rain and contains the edges of the background. So we obtain a non-rain or non-snow guidance image, and by using it we can remove rain and snow. The experimental results support this idea.

2.2 Guided Filter

The guided filter is an edge-preserving smoothing filter with good behavior near edges [9]. The guidance image can be the input itself or another reference image. Moreover, the guided filter has a fast, exact linear-time algorithm whose computational complexity is independent of the kernel size, so we choose it to realize our idea. The guided filter expresses the output image I_{in}^{guide} as a linear function of the guidance image I:

I_{in}^{guide}(i) = a_k I_i + b_k,  i \in \omega_k   (4)

where, following [9], a_k and b_k are given by

a_k = \left( \frac{1}{|\omega|} \sum_{i \in \omega_k} I_i p_i - \mu_k \bar{p}_k \right) / (\sigma_k^2 + \varepsilon)   (5)

b_k = \bar{p}_k - a_k \mu_k   (6)

Here p is the filtering input, \omega_k is a local window centered at pixel k containing |\omega| pixels, \mu_k and \sigma_k^2 are the mean and variance of I in \omega_k, \bar{p}_k is the mean of p in \omega_k, and \varepsilon is the regularization parameter.
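For reference, the following is a minimal gray-scale implementation of Equations (4)-(6) using box windows, which can also produce the decomposition of Equation (3). It is a sketch under stated assumptions rather than the paper's implementation: the default radius and eps are illustrative, and the box means rely on scipy.ndimage.uniform_filter.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, radius=15, eps=1e-3):
    """Gray-scale guided filter of He et al. [9], Equations (4)-(6).
    I: guidance image, p: filtering input, both float arrays in [0, 1]."""
    size = 2 * radius + 1
    mean = lambda x: uniform_filter(x, size)      # box mean over omega_k

    mu    = mean(I)                               # mu_k, mean of I in omega_k
    p_bar = mean(p)                               # mean of p in omega_k
    var   = mean(I * I) - mu * mu                 # sigma_k^2, variance of I

    a = (mean(I * p) - mu * p_bar) / (var + eps)  # Eq. (5)
    b = p_bar - a * mu                            # Eq. (6)

    # Eq. (4); a and b are averaged over all windows covering each pixel,
    # as in He et al. [9].
    return mean(a) * I + mean(b)

# Decomposition of Equation (3): the image guided-filtered by itself gives
# the low frequency part, and the residual is the high frequency part.
# `img` is a float image in [0, 1] (see the previous sketch).
# I_LF = guided_filter(img, img)
# I_HF = img - I_LF
```

With uniform_filter, each box mean is computed in time independent of the window size, which matches the O(N) behaviour of the guided filter noted above and in Section 4.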

3 Rain and Snow Removal

Fig. 2. Block diagram of the proposed removal method.

The block diagram of the proposed method is shown in Figure 2; it lays out the framework explicitly. First, the input image is decomposed into a low frequency part and a high frequency part using the guided filter. As introduced above, the low frequency part contains no rain or snow. However, due to the effect of the guided filter, the edges of the image become slightly smoothed. To bring the remaining edges closer to the edges of the input image, we apply edge enhancement:

I_{LF} = I_{LF} + \omega \nabla I_{LF}   (7)

where \nabla I_{LF} is the gradient of I_{LF} and \omega = 0.1 in this paper. This enhancement is shown by the green line in Figure 1: the enhanced edges are closer to the background edges, and all of them remain background textures. In this way we obtain a more refined guidance image.

As the input image of the guided filter we use the high frequency part rather than the original image. Because the pixel values of the low frequency image differ greatly from those of the original input image, filtering the original image easily produces unsmooth flakes in the restored result; the high frequency part is closer to the low frequency part and therefore gives better performance. After the guided filtering, the high frequency part retains only its non-rain or non-snow component. Adding back the low frequency part yields a rough recovered image.
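A hedged sketch of this stage, reusing the guided_filter function from Section 2.2, is given below. Equation (7) does not say how the gradient (a vector field) is reduced to an image before being added, so the gradient magnitude is assumed here; the filter parameters are likewise illustrative.

```python
import numpy as np

def enhance_edges(I_LF, omega=0.1):
    # Eq. (7); the gradient magnitude is one possible reading of "gradient".
    gy, gx = np.gradient(I_LF)
    return I_LF + omega * np.sqrt(gx * gx + gy * gy)

# img: input image in [0, 1]; guided_filter as sketched in Section 2.2.
I_LF = guided_filter(img, img)            # low frequency part (Eq. 3)
I_HF = img - I_LF                         # high frequency part

guide   = enhance_edges(I_LF)             # refined guidance image (Eq. 7)
I_HF_bg = guided_filter(guide, I_HF)      # non-rain/non-snow high frequency
I_rough = I_LF + I_HF_bg                  # rough recovered image
```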

Fig. 3. Intermediate results of the proposed method: (a) input image; (b) low frequency part; (c) high frequency part; (d) recovered image; (e) clear recovered image; (f) refined recovered image.

On the one hand, because we do not fully recover edges like those in Figure 1(b) and only raise the values of low-value edge pixels, the recovered image is blurred, as shown in Figure 3(d). But since such edges cannot be rain or snow streaks, we do not want to alter them. On the other hand, the guided filter itself also blurs the recovered image (for example the background near the rain streaks in Figure 3(d)). We therefore sharpen the recovered image as follows:

I_{cr} = \min(I_r, I_{in})   (8)

where I_r denotes the rough recovered image. Due to the effect of the guided filter, the pixel values where rain or snow has been removed are slightly higher than the nearby values, as shown in Figure 3(e), which is visually unpleasant. We therefore take a weighted sum of I_r and I_{cr} to obtain the refined guidance image (Equation 9), and then apply the guided filter once more to obtain the final result:

I_{ref} = \beta I_{cr} + (1 - \beta) I_r   (9)

where \beta = 0.8 for rain removal and \beta = 0.5 for snow removal in this paper.
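Continuing the same sketch, the refinement steps can be written as follows. The paper does not state which image serves as the filtering input in the final guided-filter pass, so using the original input image there is an assumption on our part, as are the filter parameters.

```python
import numpy as np

# img: input image; I_rough: rough recovered image from the previous sketch.
I_cr = np.minimum(I_rough, img)             # Eq. (8): clear recovered image

beta = 0.8                                  # 0.8 for rain, 0.5 for snow
I_ref = beta * I_cr + (1 - beta) * I_rough  # Eq. (9): refined guidance image

# Final pass: guided filter with the refined guidance; the filtering input
# is assumed to be the original image here.
result = guided_filter(I_ref, img)
```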

4 Experimental Results

Figure 3 shows the removal procedure and the intermediate results of the proposed method. Figure 3(b) is the low frequency part of the input image, obtained by applying the guided filter in the horizontal direction (the rain streaks cannot be horizontal); it indeed contains no rain information. Figure 3(c) is the high frequency part of the input image, which contains both background textures and rain streaks. The rough recovered image in Figure 3(d) is rain-free but blurred by the guided filter. Taking the pixel-wise minimum of the input image and the recovered image gives the clear recovered image in Figure 3(e): it is sharp, but contains some flakes. Balancing it with the rough recovered image gives the final result in Figure 3(f), which is free of flakes and very clear.

Figure 4 shows the best result of [11] next to ours. Visually, our result is clearly better: it is sharper and more effective in rain removal (e.g., the zoomed-in region in the red box). Note that both our method and that of [11] use the guided filter, which is an O(N)-time algorithm.

Fig. 4. Comparison of rain removal results: (a) the result in [11]; (b) the proposed method.

Figure 5 compares the snow removal result of [11] with that of our algorithm. Our method clearly removes the snow more accurately, and our restored image is sharper (e.g., the background in the red box in Figure 5).

Fig. 5. Snow removal results: (a) original image; (b) the result in [11]; (c) the proposed method.

5 Conclusion

In this paper, we propose a new method for rain and snow removal from a single image. By analysing the difference between clear background edges and rain or snow streaks, we show that the low frequency part expresses their different characteristics, and on this basis we build a removal method whose core is the guided filter. The results show that our method is effective and efficient for both rain removal and snow removal. Our method and the method of [11] both use the guided filter to remove rain and snow streaks, but our method performs better.

Acknowledgments. The project is supported by the National Natural Science Foundation of China (No. 30900328, 61172179, 61103121), the Natural Science Foundation of Fujian Province of China (No. 2012J05160), the National Key Technology R&D Program (2012BAI07B06), and the Fundamental Research Funds for the Central Universities (No. 2011121051, 2013121023).

References

1. Garg, K., Nayar, S.K.: Detection and removal of rain from videos. In: Proc. CVPR, vol. 1, pp. 528–535 (2004)
2. Garg, K., Nayar, S.K.: When does a camera see rain? In: Proc. CVPR, vol. 2, pp. 1067–1074 (2005)
3. Garg, K., Nayar, S.K.: Photorealistic rendering of rain streaks. Proc. SIGGRAPH 25(3), 996–1002 (2006)
4. Garg, K., Nayar, S.K.: Vision and rain. Proc. IJCV, 3–27 (2007)
5. Zhang, X., et al.: Rain removal in video by combining temporal and chromatic properties. In: Proc. ICME, pp. 461–464 (2006)
6. Barnum, P., Kanade, T., Narasimhan, S.: Spatio-temporal frequency analysis for removing rain and snow from videos. In: Proc. ICCV, pp. 1–8 (2007)
7. Barnum, P., Narasimhan, S., Kanade, T.: Analysis of rain and snow in frequency space. Proc. IJCV 86(2-3), 256–274 (2010)
8. Fu, Y.H., Kang, L.W., Lin, C.W., Hsu, C.: Single-frame-based rain removal via image decomposition. In: IEEE Int. Conf. Acoustics, Speech & Signal Processing, Prague, Czech Republic, pp. 1453–1456 (2011)
9. He, K., Sun, J., Tang, X.: Guided image filtering. In: Daniilidis, K., Maragos, P., Paragios, N. (eds.) ECCV 2010, Part I. LNCS, vol. 6311, pp. 1–14. Springer, Heidelberg (2010)
10. Xu, J., Zhao, W., Liu, P., Tang, X.: Removing rain and snow in a single image using guided filter. In: Proc. CSAE, vol. 2, pp. 304–307 (2012)
11. Xu, J., Zhao, W., Liu, P., Tang, X.: An improved guidance image based method to remove rain and snow in a single image. CCSE Computer and Information Science Journal 5(3), 49–55 (2012)
12. Kang, L.W., Lin, C.W., Fu, Y.H.: Automatic single-image-based rain streaks removal via image decomposition. IEEE Transactions on Image Processing 21(4), 1742–1755 (2012)
13. Chen, D.Y., Chen, C.C., Kang, L.W.: Visual depth guided image rain streaks removal via sparse coding. In: Proc. ISPACS, pp. 151–156 (2012)