Detecting Unfocused Raindrops


In-Vehicle Multipurpose Cameras

By Aurélien Cord and Nicolas Gimonet

Advanced driver assistance systems (ADASs) based on video cameras are becoming pervasive in today's automotive industry. However, while most of these systems perform well in clear weather, their performance degrades drastically in adverse weather, particularly in rain. We present two novel approaches that aim to detect unfocused raindrops on a car windshield using only images from an in-vehicle camera. Based on the photometric properties of raindrops, the algorithms rely on image processing techniques to highlight them. The results will be used to improve ADAS behavior under rainy conditions. Both approaches are compared with each other and with techniques from the literature.

Digital Object Identifier 10.1109/MRA
Date of publication: 7 February 2014
MARCH 2014 IEEE ROBOTICS & AUTOMATION MAGAZINE

Raindrop Detection

Camera-based ADASs are becoming common in new cars: they are used for lane departure warning, forward collision warning, or adaptive headlamp control, for instance. Most of these systems are designed to work in clear visibility conditions. However, under poor weather conditions, such as rain, snow, or fog, their performance is heavily affected. Moreover, such weather conditions strongly impact the driver's safety by reducing both the visibility distance and the tire grip. Therefore, precisely when the driver needs assistance, some failures can be expected to result from the meteorological conditions. Developing algorithms that work perfectly under all weather conditions appears to be unrealistic. Therefore, the reliable detection and characterization of adverse conditions is needed to compensate for the shortcomings of camera-based systems. Using the same vision sensor, systems should be able to evaluate their reliability and extend their functionality. Moreover, it could lead to new ADASs by, for instance, alerting the driver if his behavior is inappropriate for the current visibility conditions. Rainy weather is clearly the most common adverse weather condition. Because it is a challenging task in computer vision, few studies have been published [1]-[6] on rain detection by an onboard camera. These studies generally assumed that the camera focus is set on the windshield. However, due to industrial constraints, cameras dedicated to ADASs are usually installed behind the interior rear-view mirror. Since the camera is very close to the windshield, the windshield is out of focus. As shown in Figure 1, the signal resulting from the presence of a raindrop is a blurred spot with weak visual features, making previous approaches inefficient for their detection.

Figure 1. An image acquired by our camera.

In this article, we present two novel approaches to detect and localize unfocused raindrops in color images and to characterize rainy conditions. Both approaches are based on the photometric properties of the raindrops: brightness and blur. They rely on advanced image processing tools, such as morphological transformations, watershed [7], and Otsu's method [8], to produce a mask localizing raindrops on the windshield. Finally, rain detection is handled by counting the number of detected raindrops.

Related Work

Many techniques for blur detection are presented in the literature, for instance in [9]. However, in the context of images provided by embedded cameras, blur is generated not only by drops but also by the vehicle motion, which would induce many false detections. Starting in 2004 with the development of ADASs, several approaches were attempted to achieve the detection of rainy conditions by an onboard camera [1]-[6], [10]. The proposed approaches differ by the type of detected features (streaks or drops), the technique used (statistical learning or image processing), or the drops' focus on the screen. Garg and Nayar [11]-[13] proposed an accurate photometric model for stationary spherical raindrops and the detection of rain streaks based on the drops' motion. However, even if this research is fundamentally interesting, the static observer and the long temporal integration (30 frames) make the approach unsuitable for moving-camera scenarios. Yamashita et al. [14] and Tanaka et al. [15] proposed several approaches for detecting and removing raindrops or mud blobs from multiple camera images. By comparing different images of the same background, they identified raindrop areas in one image and replaced them with a region extracted from another image. They later proposed a spatio-temporal approach [16] with a fixed camera position.
In these approaches, the drops are required to be almost focused, because the impact of a strongly unfocused raindrop would hide too much of the image. Kurihata et al. [1] used a single onboard camera. Based on a statistical learning method, so-called eigendrops, they identified raindrop characteristics by their circular shape on a large documented database. This approach requires the raindrops to be clearly visible and sharp. Even though results on the sky area are encouraging, it still shows a large number of false detections within complex environments such as urban scenes. This is very restrictive for a vehicle application. Roser and Moosmann [2] and Yan et al. [3] proposed identifying rainy conditions by relying on a global analysis of the image. Using statistical learning, they weighted different image characteristics (color, contrast, and sharpness) to detect the presence or absence of rain. These approaches do not localize the raindrops within the image. Gormer et al. [4] proposed a specific onboard camera adaptation. Relying on a set of mirror lenses and near-infrared light, the bottom of the field of view is focused on the windshield. In this area, the raindrops are easily detectable, and this allows a quantitative measurement of the rain intensity.

Roser and Geiger [5] took advantage of a specific algorithm that models a raindrop image on the windshield. It considers the raindrop's geometric shape and photometric properties and the road-scene geometry. With this model, they identified real raindrops among a preselection of potential candidates. The results are very impressive; however, the photometric effect induced by the defocus is not considered by their raindrop model. Cord and Aubert [6] presented an approach to detect focused raindrops. It relies on image processing techniques combined with raindrop pattern recognition. No camera adjustment for the specific purpose of detecting rain was made, but the distance from the windshield was large enough to ensure focus both on the raindrops and on the scene. Nashashibi et al. [10] presented the first approach to detect strongly unfocused raindrops on the windshield. Combining raindrop features in the images with a temporal approach, they detected the appearance of raindrops in real time. As they admitted in [10], their algorithm may suffer from a lack of flexibility and thus needs improvements to be fully reliable. In this article, we develop two approaches that are entirely dedicated to the detection and localization of unfocused raindrops on the windshield of a moving vehicle. They do not require any specific device, nor the focus to be set on the windshield. To evaluate our approaches, we compare our results with the most recent approach [10].

Methods

Visual Properties of Drops
The visual effects of water drops involve elaborate physical mechanisms, reflecting their physical, optical, and statistical characteristics. The study of those characteristics has led to numerous publications in the context of real-time rendering of realistic rainy scenes.
Researchers have studied raindrop shapes and the induced light-ray interactions [17], raindrop perception relying on the principle of human vision persistence [18], and the rain streaks and sharp intensity changes appearing in images and videos [13]. As explained by Garg and Nayar [13], the visibility of rain streaks in an image strongly depends on the camera setup. Typical onboard camera setups, combined with vehicle motion, make rain streaks invisible: only drops on the windshield are visible in the image (see Figure 1). The complex appearance of unfocused raindrops is nicely described by Nashashibi et al. [10], who listed the following properties:
- Focused raindrops appear as objects well separated from their background. On the contrary, unfocused raindrops are not separated from their background and are quite complicated to locate precisely in the image. The scene can be seen through the raindrop.
- The brightness of raindrops, induced by the sky, tends to be higher than that of the background.
- As the focus decreases, the drop's visual appearance becomes bigger, with a rounded shape, less noticeable contours, and a stronger blurring effect.
- The raindrops move slowly on the windshield and thus can be considered stationary for image sequences of less than 0.5 s.
Relying on those properties, we propose two different algorithms to detect unfocused raindrops in an image.

Figure 2. The image shown in Figure 1 is segmented into the dark region (raindrops are visible only in this region) and the clear region.
Figure 3. The background image and the difference between the original image (see Figure 1) and the background image.
Table 1. The algorithm scheme.
Input (both algorithms): color image from onboard camera
Step 1 (both): segmentation of dark versus clear regions
Step 2A (Algorithm A): background subtraction (raindrop candidate detection)
Step 2B (Algorithm B): watershed (raindrop candidate detection)
Step 3 (both): raindrop pattern recognition
Step 4 (both): temporal filtering
Output (both): mask of detected raindrops

Figure 4. The opened image and the raindrop candidate detection of the background subtraction approach.
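The four-step scheme above can be sketched in code. The following is a minimal Python sketch, not the authors' implementation (which is in MATLAB): the helper names are mine, a fixed darkness threshold stands in for the morphology-plus-Otsu segmentation of Step 1, and a local mean stands in for the large Gaussian filter of Step 2A.

```python
# Images are lists of rows of 0-255 gray values; a "mask" has the same
# shape with True/False entries. All names and thresholds are illustrative.

def segment_dark_region(img, threshold=128):
    # Step 1 (simplified): keep the darker region, where raindrops are
    # visible; the paper uses morphological reconstruction plus Otsu's
    # method rather than this fixed threshold.
    return [[v < threshold for v in row] for row in img]

def candidates_by_background_subtraction(img, dark, win=1, th=20):
    # Step 2A (simplified): a local mean stands in for the large Gaussian
    # blur; pixels well above the smoothed background become candidates,
    # restricted to the dark region from Step 1.
    h, w = len(img), len(img[0])
    out = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ys = range(max(0, y - win), min(h, y + win + 1))
            xs = range(max(0, x - win), min(w, x + win + 1))
            vals = [img[j][i] for j in ys for i in xs]
            bg = sum(vals) / len(vals)
            out[y][x] = dark[y][x] and (img[y][x] - bg) > th
    return out

def temporal_filter(masks):
    # Step 4: keep pixels detected at least twice in three successive images.
    h, w = len(masks[0]), len(masks[0][0])
    return [[sum(m[y][x] for m in masks) >= 2 for x in range(w)]
            for y in range(h)]
```

Step 3 (the shape tests) is omitted here; the temporal vote implements the rule used in the fourth step, at least two detections in three successive images.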

Table 2. The background subtraction parameters.
  Step 1:  structural element: disk of size 2 pixels
  Step 2A: size of Gaussian filter: 11 × 11
           size of structural element (road): 33 × 11
           Th_BG: 0.8
  Step 3:  maximum size of the drop: 16,95
           minimum size of the drop: 1,
           Rmax:
           Th_sim: 0.79

Table 3. The watershed parameters.
  Step 1:  structural element: disk of size 2 pixels
  Step 2B: size of Gaussian filter: 61 × 61
           standard deviation of Gaussian filter: 13
  Step 3:  maximum size of the drop: 4,
           minimum size of the drop: 7
           Rmax: 2
           Th_sim: 0.79

Raindrop Detection Algorithms

In this article, we introduce two rain detection algorithms. They rely on four successive steps, according to the scheme shown in Table 1.

First Step
Because of their physical properties, raindrops appear brighter than their background. Raindrops located against a clear area (generally the sky) are not perceptible, even by a human expert (see Figure 1). Therefore, the first step consists of a segmentation into dark and clear regions, so that the detection methods are applied only to the darker region. Dark areas are extracted by a combination of morphological operations (Figure 2):
1) erosion of the original image by a disk of a diameter of 2 pixels as a structuring element
2) morphological reconstruction [19] of the original image with the eroded image as a mask
3) application of steps 1 and 2 to the complement of the reconstructed image
4) segmentation of the result using Otsu's method [8].

Second Step
Two different approaches are proposed for the second step.

Step 2A, Background Subtraction: The following algorithm is applied to each channel (R, G, B) of the original image. A smoothed image is calculated by applying a large Gaussian filter to produce a drop-free background image.
Because the raindrop brightness is higher than that of the background (see the Visual Properties of Drops section), the difference between the original and smoothed images allows us to extract local maxima corresponding to potential raindrops (Figure 3). A morphological opening is applied to each channel of the image represented in Figure 3 to highlight the contrast of clear raindrops (Figure 4). A binary mask is calculated by applying a threshold Th_BG. Finally, the union of the three masks coming from the three channels (R, G, B) and the application of the mask obtained in the first step produce the raindrop candidate image displayed in Figure 4. Table 2 shows all of the parameters corresponding to this approach.

Step 2B, Watershed: The morphological watershed is a well-known image processing technique for segmenting an image into small regions. It starts with a set of markers corresponding to the centers of each final region. It assigns a label to every pixel by viewing intensity in an image as elevation and simulating rainfall from each marker (see [7] for more details).

Figure 5. The smoothed image's gradient and the raindrop candidate detection of the watershed approach.
Figure 6. A temporal filter is applied to improve precision: (a) original images (images 199, 200, and 201), (b) raindrops detected using the background subtraction method, and (c) detected raindrops after applying the temporal filter. Green: correctly detected raindrops. Blue: missed raindrops. Red: false detections.
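Before the watershed details, the channel-wise processing of Step 2A can be sketched as follows, under simplifying assumptions: a box filter replaces the large Gaussian, the morphological opening is omitted, and the threshold is expressed on raw 0-255 values rather than the paper's Th_BG = 0.8 (which presumably applies to normalized intensities). The function names are mine.

```python
def box_blur(ch, win=2):
    # Stand-in for the paper's large Gaussian filter: local mean over a
    # (2*win+1) x (2*win+1) window, producing a drop-free background.
    h, w = len(ch), len(ch[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ys = range(max(0, y - win), min(h, y + win + 1))
            xs = range(max(0, x - win), min(w, x + win + 1))
            vals = [ch[j][i] for j in ys for i in xs]
            out[y][x] = sum(vals) / len(vals)
    return out

def step2a_candidates(r, g, b, dark_mask, th_bg):
    # Per channel: difference with the smoothed background, thresholding
    # at th_bg, then union of the three binary masks, restricted to the
    # dark region produced by the first step.
    h, w = len(r), len(r[0])
    masks = []
    for ch in (r, g, b):
        bg = box_blur(ch)
        masks.append([[ch[y][x] - bg[y][x] > th_bg for x in range(w)]
                      for y in range(h)])
    return [[(masks[0][y][x] or masks[1][y][x] or masks[2][y][x])
             and dark_mask[y][x] for x in range(w)] for y in range(h)]
```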

A Gaussian filter is applied to the grayscale-converted image to avoid oversegmentation. The watershed is applied to the gradient of the smoothed image (Figure 5), placing a marker at each local minimum. Indeed, the blurring effect of the raindrops (see the Visual Properties of Drops section) induces a local minimum in the gradient image. The morphological watershed segments the full image into small regions (Figure 5). The sky area is not segmented, thanks to the first stage. Each region segmented by the watershed is a raindrop candidate; thus, the next stage consists of rejecting the false detections. Table 3 shows all of the parameters corresponding to this approach.

Third Step
For each raindrop candidate detected during the second step (Steps 2A and 2B), some tests are applied to eliminate false detections, based on the drops' rounded shape:
- The area should be between a minimum and a maximum size.
- The ratio between the lengths of the major and minor axes should be smaller than a threshold Rmax. This ensures that the shape is not too elongated for a raindrop.
- The candidate is compared with an ellipsoid pattern of the same size, and their similarity should be smaller than a threshold value Th_sim (set experimentally).
Each candidate that does not pass the tests is removed.

Fourth Step
The precision is increased using temporal information: areas that are detected fewer than twice in three successive images are removed. This step is shown in Figure 6. The results using all steps are shown in Figure 7.

Parameters
In Table 2, all parameters of Steps 2A and 3 were obtained by simulated annealing, except Th_sim, which was chosen heuristically from histograms. In Table 3, all parameters were chosen heuristically from histograms.

Results

Image Database
The Laboratoire sur les Interactions Véhicules Infrastructure Conducteurs (LIVIC), or Laboratory on Vehicle-Infrastructure-Driver Interactions, is a research laboratory for ADASs with a fleet of several test vehicles. One is equipped with an 8-bit color Omnivision camera. The camera records 800 × 1,280 color images at a frequency of 15 images/s. It is located in the car's interior, behind the central mirror, at a distance of about 3 cm from the windshield. This position allows the camera to be well focused on the road scene but strongly unfocused on the windshield. To develop and evaluate the algorithms, we recorded videos under various weather conditions. We extracted two databases.

The first one is a fully documented raindrop image database. It contains ten subsets of seven consecutive images. For each subset, an operator has marked the center of each raindrop on the central (fourth) image. For example, in Figure 8, the blue stars show the localization of the raindrop centers as marked by the operator. This ground truth of 70 images is used to evaluate the criteria presented in the Performance Measurement section. Note that the last step of the raindrop detection algorithms requires three successive image detections to improve the detector's performance. Thus, of seven consecutive images, the first and second ones do not have a final segmentation (i.e., they do not pass the last step). This first database contains more than 600 raindrops.

The second database corresponds to two video sequences: one with light rain and wiping, and the second during clear weather. It is used to compare the results of our approach with the state-of-the-art algorithm from [10]. Each video is composed of about 600 images. The databases are available upon request.

Figure 7. Raindrop detection using the four steps: the background subtraction approach and the watershed approach. Green: correctly detected raindrops. Blue: missed raindrops. Red: false detections.
Figure 8.
An example of ground truth: blue stars correspond to the marks made by the operator.
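The first two shape tests of the third step (area bounds and major-to-minor axis ratio) can be sketched from the second-order central moments of a candidate region; the ellipsoid-similarity test is omitted, and the function names are illustrative.

```python
import math

def axis_ratio(pixels):
    # pixels: list of (x, y) coordinates of one candidate region.
    # The eigenvalues of the second-order central moment matrix give the
    # squared lengths (up to scale) of the region's major and minor axes.
    n = len(pixels)
    cx = sum(x for x, _ in pixels) / n
    cy = sum(y for _, y in pixels) / n
    mxx = sum((x - cx) ** 2 for x, _ in pixels) / n
    myy = sum((y - cy) ** 2 for _, y in pixels) / n
    mxy = sum((x - cx) * (y - cy) for x, y in pixels) / n
    common = math.sqrt((mxx - myy) ** 2 + 4 * mxy * mxy)
    l1 = (mxx + myy + common) / 2  # major-axis eigenvalue
    l2 = (mxx + myy - common) / 2  # minor-axis eigenvalue
    return math.sqrt(l1 / l2) if l2 > 0 else float("inf")

def keep_candidate(pixels, min_area, max_area, r_max):
    # Area must lie within [min_area, max_area] and the axis ratio must
    # not exceed r_max (Rmax in Tables 2 and 3).
    return min_area <= len(pixels) <= max_area and axis_ratio(pixels) <= r_max
```

A compact square blob passes (ratio near 1), while a thin line of pixels is rejected as too elongated.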

Table 4. Performance measurement.
           Background Subtraction   Watershed
  CDR      67.97%                   67.64%
  FPPI     10.92                    7.42
  Dice     52.92%                   59.94%

Performance Measurement
The raindrop detection performance is related to the numbers of true and false positives. A detection is counted as a true positive if the true raindrop center falls into the detection patch. For each algorithm, the correct detection rate (CDR), the false positives per image (FPPI), and the Dice coefficient are calculated as

  CDR = TP / P,   FPPI = FP / n,   Dice = 2TP / (TP + FP + P),

where TP is the number of true positives, FP is the number of false positives, P is the total number of raindrops, and n is the total number of images. The Dice coefficient can be seen as the size of the intersection of two sets (here, raindrops and detections) divided by the average size of the two sets.

Performance Analysis on Raindrop Localization
Both algorithms are implemented on a CPU (Core i7, 3 GHz) using MATLAB. The raindrop detection based on the background subtraction approach takes ~0.75 s/image, and the one based on watershed takes ~1.5 s/image. These algorithms are at the development stage and not optimized for real-time applications. However, real time could be approached by implementing them in C++ and reducing the image size (here 800 × 1,280). As detailed in the Raindrop Detection Algorithms section, the only parameters of the watershed-based raindrop detection are linked to the smoothing filter (size and standard deviation). Modifying this filter would change the form and size of the final regions and consequently alter the raindrop pattern recognition step. Therefore, it was not possible to plot a receiver operating characteristic curve for this algorithm. To compare the performance of both approaches, all the criteria have to be analyzed at the same time.
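The three criteria defined above can be computed directly from the counts; a minimal sketch (the counts below are synthetic, not the paper's):

```python
def detection_metrics(tp, fp, p, n):
    # CDR = TP / P, FPPI = FP / n, Dice = 2*TP / (TP + FP + P).
    # With P = TP + FN, the denominator equals 2*TP + FP + FN, the usual
    # Dice denominator.
    cdr = tp / p
    fppi = fp / n
    dice = 2 * tp / (tp + fp + p)
    return cdr, fppi, dice
```

For example, 6 true positives and 2 false positives over 4 images containing 10 raindrops give CDR = 0.6, FPPI = 0.5, and Dice = 2/3.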
However, if one approach has a better CDR and a worse FPPI than the other, the conclusion would not be obvious. Moreover, the FPPI's variations with the threshold Th_BG are much bigger than those of the CDR. Therefore, we chose to equalize the CDR of the two methods by adapting the threshold Th_BG of the background subtraction algorithm. The raindrop detection performance criteria are evaluated using the first, fully documented raindrop image database (see the Image Database section) and are presented in Table 4. It confirms that the CDRs are similar for both approaches; however, the FPPI is greater for the background subtraction approach. According to the results obtained for both the FPPI and the Dice coefficient, the watershed approach outperforms the background subtraction approach. The results are shown in Figure 7. The well-detected drops are plotted in green, the missed drops in blue, and the false detections in red. In Figure 7, the numbers of missed raindrops and true detections are similar for both algorithms, but the number of false positives is slightly higher for the background subtraction approach. The missed raindrops are often located on strong edges of the background, such as lane markings. Indeed, both raindrop candidate algorithms then produce two regions, corresponding to the two halves of the raindrop, and those regions are removed during the pattern recognition step due to their small sizes. These figures also show that the two approaches seem to be complementary: the missed raindrops and the false detections differ between the two algorithms. This opens the way to combining them to improve the raindrop candidate detection step, taking advantage of both methods.

Performance Analysis on Rain Prediction
Even for the best method, the FPPI stays quite high (more than seven false detections per image).
We expect that using temporal filtering (the last step described in the Raindrop Detection Algorithms section) over a longer period (~1 s, or 15 images) will strongly decrease the number of false detections. Moreover, to predict rainy situations, we propose that the number of detected raindrops must exceed a threshold greater than the FPPI. To test this, we plot in Figure 9 the number of detections per image on the rainy video of the second database for both approaches. In Figure 9, it appears clearly that the number of detected raindrops falls just after the wiper has cleaned the windshield (wiping starts at image 263). In the following images, the number of detected raindrops increases slowly. In Figure 10, we plot the results for the two approaches under rainy and clear meteorological conditions. This figure clearly shows that the number of detections is higher in rainy situations for both approaches. The maximum value in a clear situation is nine for background subtraction and eight for watershed. By using a threshold of

around ten detections, both algorithms would perform well in the task of predicting rainy situations.

Figure 9. The number of detections per image on the rainy video of the second database: the blue curve is the number of detections made by background subtraction, and the green one is the result of the watershed.
Figure 10. The number of detections per image for the two videos of the second database: blue represents the background subtraction approach, green represents the watershed approach, the solid line corresponds to rainy conditions, and the dashed line to clear weather.
Figure 11. The number of raindrop apparitions per image on the two videos of the second database for the approach of [10]: the red solid line corresponds to rainy conditions and the magenta dashed line to clear weather.

Discussion
To compare our approach with the literature, we first tested the approach proposed by Cord and Aubert [6]. The results were dramatically unsuccessful: not a single raindrop was detected. Indeed, this approach was calibrated to detect focused raindrops on the windshield. It relies on the existence of a strong gradient in the image induced by a raindrop. Obviously, as detailed in the Visual Properties of Drops section, unfocused raindrops induce a decrease of the gradient and cannot be detected by this approach. Then, we compared our results with the only available unfocused raindrop detection algorithm, published by Nashashibi et al. [10], which we have coded in MATLAB, adapting the thresholds to our images.
This approach relies on three steps:
- detection of potential regions, achieved by segmenting the image into raindrop and nonraindrop regions using a priori knowledge of the intensity variation
- filtering of the potential raindrop regions, validating only those that exhibit a lack of contours
- spatial and temporal correlation in a continuous space-time volume to confirm the previously validated regions.

One major difference between our approaches and the one in [10] is that they do not detect raindrops in an image but raindrop apparitions in the current frame of a video. Therefore, it is not possible to directly compare the result of our segmentations with their approach, which works on successive images to detect raindrop apparitions. For the same reason, we have tested it on the second database, because it corresponds to two videos with 600 consecutive frames. In Figure 11, we plotted the number of raindrop apparitions as a function of the image number in the two sequences of the database. The red solid line corresponds to rainy conditions, and the magenta dashed line to clear weather. Even if the number of raindrop apparitions is slightly smaller during the clear situation (dashed line), the two curves are very close and cannot be easily distinguished. Thus, rain detection by applying a threshold on these curves is very difficult. Table 5 lists some characteristics of the curves: the maximum value, the mean, and the number of images with no detection.

Table 5. Curve characteristics.
                            Rainy Conditions   Clear Conditions
  Maximum value
  Mean                      0.8                0.49
  Number of no detections
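The curve characteristics of Table 5 and the threshold-based rain decision discussed earlier reduce to simple statistics over the per-image detection counts. A sketch with illustrative counts; the threshold of ten follows the value suggested in the text, just above the clear-weather maxima of nine and eight.

```python
def curve_characteristics(counts):
    # Maximum value, mean, and number of images with no detection,
    # as listed in Table 5, for one sequence of per-image counts.
    return {
        "max": max(counts),
        "mean": sum(counts) / len(counts),
        "no_detection": sum(1 for c in counts if c == 0),
    }

def predict_rain(counts, threshold=10):
    # Declare rain when the mean number of detected raindrops per image
    # exceeds the threshold (illustrative decision rule).
    return sum(counts) / len(counts) > threshold
```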

Processing the same video database, the approach from [10] does not allow us to reliably predict the presence or absence of rain. Unlike that approach, the results presented here show that our methods succeed in this task.

Conclusion
The challenge here was to provide methods allowing the use of a standard camera for raindrop detection. No adjustment for the specific purpose of rain detection has been made. This choice was made to preserve the usual working state of other camera-based ADASs. The two proposed methods are efficient for the detection of rainy situations. The method based on watershed outperforms the method based on background subtraction in terms of FPPI. However, in terms of calculation time, the background subtraction is twice as fast. By improving the calculation efficiency, the two methods could be combined to ensure reliable raindrop detection. To improve the watershed segmentation, we could use the photometric property that raindrops correspond to local brightness maxima. Indeed, we could adapt the set of markers by aggregating all markers that correspond to local brightness minima, thus limiting the number of regions created and, in part, the false detections. To improve our pattern recognition, we could rely on the geometrical characteristics of the regions (for instance, the ratio between the area and the convex area) of the true and false detections. These characteristics are strongly related to the shape of the raindrop. Detecting raindrops on the windshield leads to a new research field: image restoration under rainy situations. It could aim at enhancing the perception of the driver during rain or at compensating for the image degradations induced by the raindrops' presence to preserve the normal behavior of other camera-based ADASs.
Acknowledgments
Improved Camera-Based Detection Under Adverse Conditions (ICADAC) is a research project involving both France and Germany. It aims at detecting and characterizing weather conditions and restoring a proper video signal to improve and design new driving aids. We would like to thank Clement Boussard for his careful reading. We would also like to thank the reviewers for their constructive comments, which improved the quality of the article.

References
[1] H. Kurihata, T. Takahashi, I. Ide, Y. Mekada, H. Murase, Y. Tamatsu, and T. Miyahara, "Rainy weather recognition from in-vehicle camera images for driver assistance," in Proc. IEEE Intelligent Vehicles Symp., 2005.
[2] M. Roser and F. Moosmann, "Classification of weather situations on single color images," in Proc. IEEE Intelligent Vehicles Symp., 2008.
[3] X. Yan, Y. Luo, and X. Zheng, "Weather recognition based on images captured by vision system in vehicle," in Advances in Neural Networks (Lecture Notes in Computer Science, vol. 5553), W. Yu, H. He, and N. Zhang, Eds. Berlin, Germany: Springer-Verlag, 2009.
[4] S. Gormer, A. Kummert, S.-B. Park, and P. Egbert, "Vision-based rain sensing with an in-vehicle camera," in Proc. IEEE Intelligent Vehicles Symp., 2009.
[5] M. Roser and A. Geiger, "Video-based raindrop detection for improved image registration," in Proc. IEEE Int. Conf. Computer Vision Workshops, 2009.
[6] A. Cord and D. Aubert, "Towards rain detection through use of in-vehicle multipurpose cameras," in Proc. IEEE Intelligent Vehicles Symp., 2011.
[7] F. Meyer, "Topographic distance and watershed lines," Signal Process., vol. 38, no. 1.
[8] N. Otsu, "A threshold selection method from gray-level histograms," Automatica, vol. 11.
[9] R. Liu, Z. Li, and J. Jia, "Image partial blur detection and classification," in Proc. IEEE Conf. Computer Vision Pattern Recognition, 2008.
[10] F. Nashashibi, R. de Charette, and A. Lia, "Detection of unfocused raindrops on a windscreen using low level image processing," presented at the 11th Int. Conf. Control Automation Robotics Vision, Singapore, 2010. [Online].
[11] K. Garg and S. K. Nayar, "Detection and removal of rain from videos," in Proc. IEEE Computer Society Conf. Computer Vision Pattern Recognition, 2004, vol. 1.
[12] K. Garg and S. K. Nayar, "When does a camera see rain?" in Proc. IEEE Int. Conf. Computer Vision, Oct. 2005, vol. 2.
[13] K. Garg and S. K. Nayar, "Vision and rain," Int. J. Comput. Vision, vol. 75, no. 1, pp. 3-27, 2007.
[14] A. Yamashita, M. Kuramoto, T. Kaneko, and K. Miura, "A virtual wiper: Restoration of deteriorated images by using multiple cameras," in Proc. IEEE/RSJ Int. Conf. Intelligent Robots Systems, 2003, vol. 4.
[15] Y. Tanaka, A. Yamashita, T. Kaneko, and K. T. Miura, "Removal of adherent waterdrops from images acquired with a stereo camera system," IEICE Trans. Inform. Systems, vol. E89, no. 7, July 2006.
[16] A. Yamashita, I. Fukuchi, T. Kaneko, and K. T. Miura, "Removal of adherent noises from image sequences by spatio-temporal image processing," in Proc. IEEE Int. Conf. Robotics Automation, 2008.
[17] P. Rousseau, V. Jolivet, and D. Ghazanfarpour, "Realistic real-time rain rendering," Comput. Graph., vol. 30, no. 4, Aug. 2006.
[18] W. Changbo, Z. Wang, X. Zhang, L. Huang, Z. Yang, and Q. Peng, "Real-time modeling and rendering of raining scenes," Vis. Comput., vol. 24, no. 7, 2008.
[19] L. Vincent, "Morphological grayscale reconstruction in image analysis: Applications and efficient algorithms," IEEE Trans. Image Process., vol. 2, no. 2.

Aurélien Cord, IFSTTAR, IM, LIVIC, Versailles, France. aurelien.cord@ifsttar.fr.
Nicolas Gimonet, IFSTTAR, IM, LIVIC, Versailles, France. nicolas.gimonet@gmail.com.


More information

CS 4495 Computer Vision Binary images and Morphology

CS 4495 Computer Vision Binary images and Morphology CS 4495 Computer Vision Binary images and Aaron Bobick School of Interactive Computing Administrivia PS6 should be working on it! Due Sunday Nov 24 th. Some issues with reading frames. Resolved? Exam:

More information

Temporal analysis for implicit compensation of local variations of emission coefficient applied for laser induced crack checking

Temporal analysis for implicit compensation of local variations of emission coefficient applied for laser induced crack checking More Info at Open Access Database www.ndt.net/?id=17661 Abstract Temporal analysis for implicit compensation of local variations of emission coefficient applied for laser induced crack checking by G. Traxler*,

More information

Orientation Map Based Palmprint Recognition

Orientation Map Based Palmprint Recognition Orientation Map Based Palmprint Recognition (BM) 45 Orientation Map Based Palmprint Recognition B. H. Shekar, N. Harivinod bhshekar@gmail.com, harivinodn@gmail.com India, Mangalore University, Department

More information

Morphological image processing

Morphological image processing INF 4300 Digital Image Analysis Morphological image processing Fritz Albregtsen 09.11.2017 1 Today Gonzalez and Woods, Chapter 9 Except sections 9.5.7 (skeletons), 9.5.8 (pruning), 9.5.9 (reconstruction)

More information

Morphology Gonzalez and Woods, Chapter 9 Except sections 9.5.7, 9.5.8, and Repetition of binary dilatation, erosion, opening, closing

Morphology Gonzalez and Woods, Chapter 9 Except sections 9.5.7, 9.5.8, and Repetition of binary dilatation, erosion, opening, closing 09.11.2011 Anne Solberg Morphology Gonzalez and Woods, Chapter 9 Except sections 9.5.7, 9.5.8, 9.5.9 and 9.6.4 Repetition of binary dilatation, erosion, opening, closing Binary region processing: connected

More information

Pole searching algorithm for Wide-field all-sky image analyzing monitoring system

Pole searching algorithm for Wide-field all-sky image analyzing monitoring system Contrib. Astron. Obs. Skalnaté Pleso 47, 220 225, (2017) Pole searching algorithm for Wide-field all-sky image analyzing monitoring system J. Bednář, P. Skala and P. Páta Czech Technical University in

More information

EXTRACTION OF PARKING LOT STRUCTURE FROM AERIAL IMAGE IN URBAN AREAS. Received September 2015; revised January 2016

EXTRACTION OF PARKING LOT STRUCTURE FROM AERIAL IMAGE IN URBAN AREAS. Received September 2015; revised January 2016 International Journal of Innovative Computing, Information and Control ICIC International c 2016 ISSN 1349-4198 Volume 12, Number 2, April 2016 pp. 371 383 EXTRACTION OF PARKING LOT STRUCTURE FROM AERIAL

More information

Lane Marker Parameters for Vehicle s Steering Signal Prediction

Lane Marker Parameters for Vehicle s Steering Signal Prediction Lane Marker Parameters for Vehicle s Steering Signal Prediction ANDRIEJUS DEMČENKO, MINIJA TAMOŠIŪNAITĖ, AUŠRA VIDUGIRIENĖ, LEONAS JAKEVIČIUS 3 Department of Applied Informatics, Department of System Analysis

More information

Multimedia Databases. Previous Lecture. 4.1 Multiresolution Analysis. 4 Shape-based Features. 4.1 Multiresolution Analysis

Multimedia Databases. Previous Lecture. 4.1 Multiresolution Analysis. 4 Shape-based Features. 4.1 Multiresolution Analysis Previous Lecture Multimedia Databases Texture-Based Image Retrieval Low Level Features Tamura Measure, Random Field Model High-Level Features Fourier-Transform, Wavelets Wolf-Tilo Balke Silviu Homoceanu

More information

Multimedia Databases. Wolf-Tilo Balke Philipp Wille Institut für Informationssysteme Technische Universität Braunschweig

Multimedia Databases. Wolf-Tilo Balke Philipp Wille Institut für Informationssysteme Technische Universität Braunschweig Multimedia Databases Wolf-Tilo Balke Philipp Wille Institut für Informationssysteme Technische Universität Braunschweig http://www.ifis.cs.tu-bs.de 4 Previous Lecture Texture-Based Image Retrieval Low

More information

Attentive Generative Adversarial Network for Raindrop Removal from A Single Image

Attentive Generative Adversarial Network for Raindrop Removal from A Single Image Attentive Generative Adversarial Network for Raindrop Removal from A Single Image Rui Qian 1, Robby T. Tan 2,3, Wenhan Yang 1, Jiajun Su 1, and Jiaying Liu 1 1 Institute of Computer Science and Technology,

More information

Multiple Similarities Based Kernel Subspace Learning for Image Classification

Multiple Similarities Based Kernel Subspace Learning for Image Classification Multiple Similarities Based Kernel Subspace Learning for Image Classification Wang Yan, Qingshan Liu, Hanqing Lu, and Songde Ma National Laboratory of Pattern Recognition, Institute of Automation, Chinese

More information

Roadmap. Introduction to image analysis (computer vision) Theory of edge detection. Applications

Roadmap. Introduction to image analysis (computer vision) Theory of edge detection. Applications Edge Detection Roadmap Introduction to image analysis (computer vision) Its connection with psychology and neuroscience Why is image analysis difficult? Theory of edge detection Gradient operator Advanced

More information

VIDEO SYNCHRONIZATION VIA SPACE-TIME INTEREST POINT DISTRIBUTION. Jingyu Yan and Marc Pollefeys

VIDEO SYNCHRONIZATION VIA SPACE-TIME INTEREST POINT DISTRIBUTION. Jingyu Yan and Marc Pollefeys VIDEO SYNCHRONIZATION VIA SPACE-TIME INTEREST POINT DISTRIBUTION Jingyu Yan and Marc Pollefeys {yan,marc}@cs.unc.edu The University of North Carolina at Chapel Hill Department of Computer Science Chapel

More information

Multimedia Databases. 4 Shape-based Features. 4.1 Multiresolution Analysis. 4.1 Multiresolution Analysis. 4.1 Multiresolution Analysis

Multimedia Databases. 4 Shape-based Features. 4.1 Multiresolution Analysis. 4.1 Multiresolution Analysis. 4.1 Multiresolution Analysis 4 Shape-based Features Multimedia Databases Wolf-Tilo Balke Silviu Homoceanu Institut für Informationssysteme Technische Universität Braunschweig http://www.ifis.cs.tu-bs.de 4 Multiresolution Analysis

More information

Detection of Artificial Satellites in Images Acquired in Track Rate Mode.

Detection of Artificial Satellites in Images Acquired in Track Rate Mode. Detection of Artificial Satellites in Images Acquired in Track Rate Mode. Martin P. Lévesque Defence R&D Canada- Valcartier, 2459 Boul. Pie-XI North, Québec, QC, G3J 1X5 Canada, martin.levesque@drdc-rddc.gc.ca

More information

Tennis player segmentation for semantic behavior analysis

Tennis player segmentation for semantic behavior analysis Proposta di Tennis player segmentation for semantic behavior analysis Architettura Software per Robot Mobili Vito Renò, Nicola Mosca, Massimiliano Nitti, Tiziana D Orazio, Donato Campagnoli, Andrea Prati,

More information

Lecture 8: Interest Point Detection. Saad J Bedros

Lecture 8: Interest Point Detection. Saad J Bedros #1 Lecture 8: Interest Point Detection Saad J Bedros sbedros@umn.edu Last Lecture : Edge Detection Preprocessing of image is desired to eliminate or at least minimize noise effects There is always tradeoff

More information

Sky Segmentation in the Wild: An Empirical Study

Sky Segmentation in the Wild: An Empirical Study Sky Segmentation in the Wild: An Empirical Study Radu P. Mihail 1 Scott Workman 2 Zach Bessinger 2 Nathan Jacobs 2 rpmihail@valdosta.edu scott@cs.uky.edu zach@cs.uky.edu jacobs@cs.uky.edu 1 Valdosta State

More information

Effect of Environmental Factors on Free-Flow Speed

Effect of Environmental Factors on Free-Flow Speed Effect of Environmental Factors on Free-Flow Speed MICHAEL KYTE ZAHER KHATIB University of Idaho, USA PATRICK SHANNON Boise State University, USA FRED KITCHENER Meyer Mohaddes Associates, USA ABSTRACT

More information

Intelligent Rain Sensing and Fuzzy Wiper Control Algorithm for Vision-based Smart Windshield Wiper System

Intelligent Rain Sensing and Fuzzy Wiper Control Algorithm for Vision-based Smart Windshield Wiper System 1 Intelligent Sensing and Fuzzy Wiper Control Algorithm for Vision-based Smart Windshield Wiper System Joonwoo Son Seon Bong Lee Department of Mechatronics, Daegu Gyeongbuk Institute Science & Technology,

More information

THE POTENTIAL OF APPLYING MACHINE LEARNING FOR PREDICTING CUT-IN BEHAVIOUR OF SURROUNDING TRAFFIC FOR TRUCK-PLATOONING SAFETY

THE POTENTIAL OF APPLYING MACHINE LEARNING FOR PREDICTING CUT-IN BEHAVIOUR OF SURROUNDING TRAFFIC FOR TRUCK-PLATOONING SAFETY THE POTENTIAL OF APPLYING MACHINE LEARNING FOR PREDICTING CUT-IN BEHAVIOUR OF SURROUNDING TRAFFIC FOR TRUCK-PLATOONING SAFETY Irene Cara Jan-Pieter Paardekooper TNO Helmond The Netherlands Paper Number

More information

Tracking Human Heads Based on Interaction between Hypotheses with Certainty

Tracking Human Heads Based on Interaction between Hypotheses with Certainty Proc. of The 13th Scandinavian Conference on Image Analysis (SCIA2003), (J. Bigun and T. Gustavsson eds.: Image Analysis, LNCS Vol. 2749, Springer), pp. 617 624, 2003. Tracking Human Heads Based on Interaction

More information

A METHOD OF FINDING IMAGE SIMILAR PATCHES BASED ON GRADIENT-COVARIANCE SIMILARITY

A METHOD OF FINDING IMAGE SIMILAR PATCHES BASED ON GRADIENT-COVARIANCE SIMILARITY IJAMML 3:1 (015) 69-78 September 015 ISSN: 394-58 Available at http://scientificadvances.co.in DOI: http://dx.doi.org/10.1864/ijamml_710011547 A METHOD OF FINDING IMAGE SIMILAR PATCHES BASED ON GRADIENT-COVARIANCE

More information

Statistical Filters for Crowd Image Analysis

Statistical Filters for Crowd Image Analysis Statistical Filters for Crowd Image Analysis Ákos Utasi, Ákos Kiss and Tamás Szirányi Distributed Events Analysis Research Group, Computer and Automation Research Institute H-1111 Budapest, Kende utca

More information

Stereoscopic Programmable Automotive Headlights for Improved Safety Road

Stereoscopic Programmable Automotive Headlights for Improved Safety Road Stereoscopic Programmable Automotive Headlights for Improved Safety Road Project ID: 30 Srinivasa Narasimhan (PI), Associate Professor Robotics Institute, Carnegie Mellon University https://orcid.org/0000-0003-0389-1921

More information

Identify the letter of the choice that best completes the statement or answers the question.

Identify the letter of the choice that best completes the statement or answers the question. Chapter 12 - Practice Questions Multiple Choice Identify the letter of the choice that best completes the statement or answers the question. 1) Never remove a radiator cap on a hot engine because a. the

More information

Lecture 8: Interest Point Detection. Saad J Bedros

Lecture 8: Interest Point Detection. Saad J Bedros #1 Lecture 8: Interest Point Detection Saad J Bedros sbedros@umn.edu Review of Edge Detectors #2 Today s Lecture Interest Points Detection What do we mean with Interest Point Detection in an Image Goal:

More information

Automatic estimation of crowd size and target detection using Image processing

Automatic estimation of crowd size and target detection using Image processing Automatic estimation of crowd size and target detection using Image processing Asst Prof. Avinash Rai Dept. of Electronics and communication (UIT-RGPV) Bhopal avinashrai@rgtu.net Rahul Meshram Dept. of

More information

Laplacian Filters. Sobel Filters. Laplacian Filters. Laplacian Filters. Laplacian Filters. Laplacian Filters

Laplacian Filters. Sobel Filters. Laplacian Filters. Laplacian Filters. Laplacian Filters. Laplacian Filters Sobel Filters Note that smoothing the image before applying a Sobel filter typically gives better results. Even thresholding the Sobel filtered image cannot usually create precise, i.e., -pixel wide, edges.

More information

Boosting: Algorithms and Applications

Boosting: Algorithms and Applications Boosting: Algorithms and Applications Lecture 11, ENGN 4522/6520, Statistical Pattern Recognition and Its Applications in Computer Vision ANU 2 nd Semester, 2008 Chunhua Shen, NICTA/RSISE Boosting Definition

More information

C. Watson, E. Churchwell, R. Indebetouw, M. Meade, B. Babler, B. Whitney

C. Watson, E. Churchwell, R. Indebetouw, M. Meade, B. Babler, B. Whitney Reliability and Completeness for the GLIMPSE Survey C. Watson, E. Churchwell, R. Indebetouw, M. Meade, B. Babler, B. Whitney Abstract This document examines the GLIMPSE observing strategy and criteria

More information

Generalized Laplacian as Focus Measure

Generalized Laplacian as Focus Measure Generalized Laplacian as Focus Measure Muhammad Riaz 1, Seungjin Park, Muhammad Bilal Ahmad 1, Waqas Rasheed 1, and Jongan Park 1 1 School of Information & Communications Engineering, Chosun University,

More information

Lecture 7: Edge Detection

Lecture 7: Edge Detection #1 Lecture 7: Edge Detection Saad J Bedros sbedros@umn.edu Review From Last Lecture Definition of an Edge First Order Derivative Approximation as Edge Detector #2 This Lecture Examples of Edge Detection

More information

Used to extract image components that are useful in the representation and description of region shape, such as

Used to extract image components that are useful in the representation and description of region shape, such as Used to extract image components that are useful in the representation and description of region shape, such as boundaries extraction skeletons convex hull morphological filtering thinning pruning Sets

More information

The Detection Techniques for Several Different Types of Fiducial Markers

The Detection Techniques for Several Different Types of Fiducial Markers Vol. 1, No. 2, pp. 86-93(2013) The Detection Techniques for Several Different Types of Fiducial Markers Chuen-Horng Lin 1,*,Yu-Ching Lin 1,and Hau-Wei Lee 2 1 Department of Computer Science and Information

More information

3.8 Combining Spatial Enhancement Methods 137

3.8 Combining Spatial Enhancement Methods 137 3.8 Combining Spatial Enhancement Methods 137 a b FIGURE 3.45 Optical image of contact lens (note defects on the boundary at 4 and 5 o clock). (b) Sobel gradient. (Original image courtesy of Mr. Pete Sites,

More information

Using Entropy and 2-D Correlation Coefficient as Measuring Indices for Impulsive Noise Reduction Techniques

Using Entropy and 2-D Correlation Coefficient as Measuring Indices for Impulsive Noise Reduction Techniques Using Entropy and 2-D Correlation Coefficient as Measuring Indices for Impulsive Noise Reduction Techniques Zayed M. Ramadan Department of Electronics and Communications Engineering, Faculty of Engineering,

More information

Vision for Mobile Robot Navigation: A Survey

Vision for Mobile Robot Navigation: A Survey Vision for Mobile Robot Navigation: A Survey (February 2002) Guilherme N. DeSouza & Avinash C. Kak presentation by: Job Zondag 27 February 2009 Outline: Types of Navigation Absolute localization (Structured)

More information

Reducing False Alarm Rate in Anomaly Detection with Layered Filtering

Reducing False Alarm Rate in Anomaly Detection with Layered Filtering Reducing False Alarm Rate in Anomaly Detection with Layered Filtering Rafa l Pokrywka 1,2 1 Institute of Computer Science AGH University of Science and Technology al. Mickiewicza 30, 30-059 Kraków, Poland

More information

A Contrario Detection of False Matches in Iris Recognition

A Contrario Detection of False Matches in Iris Recognition A Contrario Detection of False Matches in Iris Recognition Marcelo Mottalli, Mariano Tepper, and Marta Mejail Departamento de Computación, Universidad de Buenos Aires, Argentina Abstract. The pattern of

More information

Application of Micro-Flow Imaging (MFI TM ) to The Analysis of Particles in Parenteral Fluids. October 2006 Ottawa, Canada

Application of Micro-Flow Imaging (MFI TM ) to The Analysis of Particles in Parenteral Fluids. October 2006 Ottawa, Canada Application of Micro-Flow Imaging (MFI TM ) to The Analysis of Particles in Parenteral Fluids October 26 Ottawa, Canada Summary The introduction of a growing number of targeted protein-based drug formulations

More information

YOUR VEHICLE WINDOWS

YOUR VEHICLE WINDOWS REDUCED VISIBILITY WHENEVER VISIBILITY IS REDUCED DRIVERS NEED MORE TIME TO USE THE IPDE PROCESS. YOU CAN MAINTAIN A SAFE INTENDED PATH OF TRAVEL BY: SLOWING DOWN TO GIVE YOURSELF MORE TIME SCANNING IN

More information

ADVANCES in NATURAL and APPLIED SCIENCES

ADVANCES in NATURAL and APPLIED SCIENCES ADVANCES in NATURAL and APPLIED SCIENCES ISSN: 1995-0772 Published BY AENSI Publication EISSN: 1998-1090 http://www.aensiweb.com/anas 2016 February 10(2): pages Open Access Journal Analysis of Image Based

More information

Classifying Galaxy Morphology using Machine Learning

Classifying Galaxy Morphology using Machine Learning Julian Kates-Harbeck, Introduction: Classifying Galaxy Morphology using Machine Learning The goal of this project is to classify galaxy morphologies. Generally, galaxy morphologies fall into one of two

More information

A conventional approach to nighttime visibility in adverse weather conditions

A conventional approach to nighttime visibility in adverse weather conditions A conventional approach to nighttime visibility in adverse weather conditions Romain Gallen, Eric Dumont and Nicolas Hautière Keywords: target visibility, headlight, fog 1 Introduction This paper presents

More information

Covariance Tracking Algorithm on Bilateral Filtering under Lie Group Structure Yinghong Xie 1,2,a Chengdong Wu 1,b

Covariance Tracking Algorithm on Bilateral Filtering under Lie Group Structure Yinghong Xie 1,2,a Chengdong Wu 1,b Applied Mechanics and Materials Online: 014-0-06 ISSN: 166-748, Vols. 519-50, pp 684-688 doi:10.408/www.scientific.net/amm.519-50.684 014 Trans Tech Publications, Switzerland Covariance Tracking Algorithm

More information

A Generative Model Based Approach to Motion Segmentation

A Generative Model Based Approach to Motion Segmentation A Generative Model Based Approach to Motion Segmentation Daniel Cremers 1 and Alan Yuille 2 1 Department of Computer Science University of California at Los Angeles 2 Department of Statistics and Psychology

More information

Parking Place Inspection System Utilizing a Mobile Robot with a Laser Range Finder -Application for occupancy state recognition-

Parking Place Inspection System Utilizing a Mobile Robot with a Laser Range Finder -Application for occupancy state recognition- Parking Place Inspection System Utilizing a Mobile Robot with a Laser Range Finder -Application for occupancy state recognition- Sanngoen Wanayuth, Akihisa Ohya and Takashi Tsubouchi Abstract The automated

More information

The SKYGRID Project A Calibration Star Catalog for New Sensors. Stephen A. Gregory Boeing LTS. Tamara E. Payne Boeing LTS. John L. Africano Boeing LTS

The SKYGRID Project A Calibration Star Catalog for New Sensors. Stephen A. Gregory Boeing LTS. Tamara E. Payne Boeing LTS. John L. Africano Boeing LTS The SKYGRID Project A Calibration Star Catalog for New Sensors Stephen A. Gregory Boeing LTS Tamara E. Payne Boeing LTS John L. Africano Boeing LTS Paul Kervin Air Force Research Laboratory POSTER SESSION

More information

Two-Stream Bidirectional Long Short-Term Memory for Mitosis Event Detection and Stage Localization in Phase-Contrast Microscopy Images

Two-Stream Bidirectional Long Short-Term Memory for Mitosis Event Detection and Stage Localization in Phase-Contrast Microscopy Images Two-Stream Bidirectional Long Short-Term Memory for Mitosis Event Detection and Stage Localization in Phase-Contrast Microscopy Images Yunxiang Mao and Zhaozheng Yin (B) Computer Science, Missouri University

More information

Estimation of Rain Drop Size using Image Processing

Estimation of Rain Drop Size using Image Processing GRD Journals- Global Research and Development Journal for Engineering Volume 2 Issue 5 April 2017 ISSN: 2455-5703 Estimation of Rain Drop Size using Image Processing Shraddha Dilip Shetye Department of

More information

Determining absolute orientation of a phone by imaging celestial bodies

Determining absolute orientation of a phone by imaging celestial bodies Technical Disclosure Commons Defensive Publications Series October 06, 2017 Determining absolute orientation of a phone by imaging celestial bodies Chia-Kai Liang Yibo Chen Follow this and additional works

More information

Discriminant Uncorrelated Neighborhood Preserving Projections

Discriminant Uncorrelated Neighborhood Preserving Projections Journal of Information & Computational Science 8: 14 (2011) 3019 3026 Available at http://www.joics.com Discriminant Uncorrelated Neighborhood Preserving Projections Guoqiang WANG a,, Weijuan ZHANG a,

More information

Fusion of xfcd and local road weather data for a reliable determination of the road surface condition SIRWEC Prague 2008

Fusion of xfcd and local road weather data for a reliable determination of the road surface condition SIRWEC Prague 2008 Fusion of xfcd and local road weather data for a reliable determination of the road surface condition SIRWEC Prague 2008 Alexander Dinkel, Axel Leonhardt Technische Universität München Chair of Traffic

More information

Improving the travel time prediction by using the real-time floating car data

Improving the travel time prediction by using the real-time floating car data Improving the travel time prediction by using the real-time floating car data Krzysztof Dembczyński Przemys law Gawe l Andrzej Jaszkiewicz Wojciech Kot lowski Adam Szarecki Institute of Computing Science,

More information

Visual Object Recognition

Visual Object Recognition Visual Object Recognition Lecture 2: Image Formation Per-Erik Forssén, docent Computer Vision Laboratory Department of Electrical Engineering Linköping University Lecture 2: Image Formation Pin-hole, and

More information

Safe Driving in Bad Weather Conditions

Safe Driving in Bad Weather Conditions Training Package 10/12 Safe Driving in Bad Weather Conditions Asia Industrial Gases Association 3 HarbourFront Place, #09-04 HarbourFront Tower 2, Singapore 099254 Internet: http//www.asiaiga.org Acknowledgement

More information

A Modified Incremental Principal Component Analysis for On-Line Learning of Feature Space and Classifier

A Modified Incremental Principal Component Analysis for On-Line Learning of Feature Space and Classifier A Modified Incremental Principal Component Analysis for On-Line Learning of Feature Space and Classifier Seiichi Ozawa 1, Shaoning Pang 2, and Nikola Kasabov 2 1 Graduate School of Science and Technology,

More information

Human Pose Tracking I: Basics. David Fleet University of Toronto

Human Pose Tracking I: Basics. David Fleet University of Toronto Human Pose Tracking I: Basics David Fleet University of Toronto CIFAR Summer School, 2009 Looking at People Challenges: Complex pose / motion People have many degrees of freedom, comprising an articulated

More information

Adaptive Binary Integration CFAR Processing for Secondary Surveillance Radar *

Adaptive Binary Integration CFAR Processing for Secondary Surveillance Radar * BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 9, No Sofia 2009 Adaptive Binary Integration CFAR Processing for Secondary Surveillance Radar Ivan Garvanov, Christo Kabakchiev

More information

Robust License Plate Detection Using Covariance Descriptor in a Neural Network Framework

Robust License Plate Detection Using Covariance Descriptor in a Neural Network Framework MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com Robust License Plate Detection Using Covariance Descriptor in a Neural Network Framework Fatih Porikli, Tekin Kocak TR2006-100 January 2007

More information

AUTOMATED BUILDING DETECTION FROM HIGH-RESOLUTION SATELLITE IMAGE FOR UPDATING GIS BUILDING INVENTORY DATA

AUTOMATED BUILDING DETECTION FROM HIGH-RESOLUTION SATELLITE IMAGE FOR UPDATING GIS BUILDING INVENTORY DATA 13th World Conference on Earthquake Engineering Vancouver, B.C., Canada August 1-6, 2004 Paper No. 678 AUTOMATED BUILDING DETECTION FROM HIGH-RESOLUTION SATELLITE IMAGE FOR UPDATING GIS BUILDING INVENTORY

More information

Feature extraction: Corners and blobs

Feature extraction: Corners and blobs Feature extraction: Corners and blobs Review: Linear filtering and edge detection Name two different kinds of image noise Name a non-linear smoothing filter What advantages does median filtering have over

More information

Edge Detection in Computer Vision Systems

Edge Detection in Computer Vision Systems 1 CS332 Visual Processing in Computer and Biological Vision Systems Edge Detection in Computer Vision Systems This handout summarizes much of the material on the detection and description of intensity

More information

Michelson Interferometer

Michelson Interferometer Michelson Interferometer Objective Determination of the wave length of the light of the helium-neon laser by means of Michelson interferometer subsectionprinciple and Task Light is made to produce interference

More information

Analysis of Forward Collision Warning System. Based on Vehicle-mounted Sensors on. Roads with an Up-Down Road gradient

Analysis of Forward Collision Warning System. Based on Vehicle-mounted Sensors on. Roads with an Up-Down Road gradient Contemporary Engineering Sciences, Vol. 7, 2014, no. 22, 1139-1145 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.12988/ces.2014.49142 Analysis of Forward Collision Warning System Based on Vehicle-mounted

More information

What is Image Deblurring?

What is Image Deblurring? What is Image Deblurring? When we use a camera, we want the recorded image to be a faithful representation of the scene that we see but every image is more or less blurry, depending on the circumstances.

More information

Old painting digital color restoration

Old painting digital color restoration Old painting digital color restoration Michail Pappas Ioannis Pitas Dept. of Informatics, Aristotle University of Thessaloniki GR-54643 Thessaloniki, Greece Abstract Many old paintings suffer from the

More information

Maarten Bieshaar, Günther Reitberger, Stefan Zernetsch, Prof. Dr. Bernhard Sick, Dr. Erich Fuchs, Prof. Dr.-Ing. Konrad Doll

Maarten Bieshaar, Günther Reitberger, Stefan Zernetsch, Prof. Dr. Bernhard Sick, Dr. Erich Fuchs, Prof. Dr.-Ing. Konrad Doll Maarten Bieshaar, Günther Reitberger, Stefan Zernetsch, Prof. Dr. Bernhard Sick, Dr. Erich Fuchs, Prof. Dr.-Ing. Konrad Doll 08.02.2017 By 2030 road traffic deaths will be the fifth leading cause of death

More information

Computer Vision & Digital Image Processing

Computer Vision & Digital Image Processing Computer Vision & Digital Image Processing Image Restoration and Reconstruction I Dr. D. J. Jackson Lecture 11-1 Image restoration Restoration is an objective process that attempts to recover an image

More information

Chapter 6 Lecture. The Cosmic Perspective. Telescopes Portals of Discovery Pearson Education, Inc.

Chapter 6 Lecture. The Cosmic Perspective. Telescopes Portals of Discovery Pearson Education, Inc. Chapter 6 Lecture The Cosmic Perspective Telescopes Portals of Discovery 2014 Pearson Education, Inc. Telescopes Portals of Discovery CofC Observatory 6.1 Eyes and Cameras: Everyday Light Sensors Our goals

More information

Lecture 6: Edge Detection. CAP 5415: Computer Vision Fall 2008

Lecture 6: Edge Detection. CAP 5415: Computer Vision Fall 2008 Lecture 6: Edge Detection CAP 5415: Computer Vision Fall 2008 Announcements PS 2 is available Please read it by Thursday During Thursday lecture, I will be going over it in some detail Monday - Computer

More information

1 Introduction. 2 Wind dependent boundary conditions for oil slick detection. 2.1 Some theoretical aspects

1 Introduction. 2 Wind dependent boundary conditions for oil slick detection. 2.1 Some theoretical aspects On C-Band SAR Based Oil Slick Detection in the Baltic Sea Markku Similä, István Heiler, Juha Karvonen, and Kimmo Kahma Finnish Institute of Marine Research (FIMR), PB 2, FIN-00561, Helsinki, Finland Email

More information

Lecture 04 Image Filtering
