Overtaking & Receding Vehicle Detection for Driver Assistance and Naturalistic Driving Studies

Ravi Kumar Satzoda and Mohan M. Trivedi
University of California San Diego, La Jolla, CA, USA. rsatzoda@eng.ucsd.edu, mtrivedi@ucsd.edu

Abstract: Although on-road vehicle detection is a well-researched area, overtaking and receding vehicle detection with respect to (w.r.t.) the ego-vehicle is less addressed. In this paper, we present a novel appearance-based method for detecting both overtaking and receding vehicles w.r.t. the ego-vehicle. The proposed method is based on Haar-like features that are classified using Adaboost-cascaded classifiers, resulting in detection windows that are tracked temporally in two directions to detect overtaking and receding vehicles. A detailed and novel evaluation method is presented with 51 overtaking and receding events occurring in 27000 video frames. Additionally, an analysis of the detected events is presented, specifically for naturalistic driving studies (NDS), to characterize the overtaking and receding events during a drive. To the best knowledge of the authors, this automated analysis of overtaking/receding events for NDS is the first of its kind in the literature.

I. MOTIVATION & SCOPE

According to the National Highway Traffic Safety Administration (NHTSA), multi-vehicle crashes involving the ego-vehicle and surrounding vehicles result in over 68% of fatalities [1]. Vehicle detection [2], [3], [4], [5], lane detection [6], [7], [8], pedestrian detection [9], and higher-order tasks involving lanes and vehicles such as trajectory analysis [10], [11], [12], using different sensing modalities, are therefore active areas of research for automotive active safety systems, as well as for offline data analysis in naturalistic driving studies (NDS) [13], [14], [15].

A. Related Work

Among the different sensor modalities, monocular camera-based vehicle detection is being extensively researched because of the increasing trend towards understanding the surround scene of the ego-vehicle using multiple cameras [16]. While on-road vehicle detection techniques such as [2], [3], [4] detect vehicles that are visible partially or fully, with varying levels of accuracy and efficiency, techniques such as [17], [18], [19] deal particularly with overtaking vehicles, a problem that is relatively less explored [17]. Fig. 1 defines the terms overtaking and receding vehicles with respect to the ego-vehicle. Most related studies, such as [20], [19], [17], [21], have primarily focused on detecting overtaking vehicles only; vehicles that are receding w.r.t. the ego-vehicle are not addressed in any of these works. Another observation is that overtaking vehicle detection methods can be classified into two broad categories depending on the type of overtaking event being detected. Methods such as [20], [19], [17] detect partially visible overtaking vehicles that are passing by the side of the ego-vehicle as seen from the front view of the ego-vehicle. The second category has methods such as [21], [17] that detect overtaking events as seen from the rear view of the ego-vehicle.

Fig. 1. Definition of overtaking vehicle $V_O$ and receding vehicle $V_R$ with respect to ego-vehicle $V_E$.

Motion cues have been used extensively and in different ways to detect overtaking vehicles. This is understandable considering the fact that the optical flow of an overtaking vehicle is in the opposite direction compared to the background. However, this also limits the use of the same methods for detecting receding vehicles, because the optical flow of a receding vehicle is in the same direction as the background. Furthermore, a study of recent works shows that there are no standard methodologies or datasets available for evaluating overtaking/receding vehicle detection techniques. Each study reports its evaluation in a different way, using different numbers of sequences or frames, and different metrics. This aspect of evaluation should also be addressed for benchmarking purposes.

B. Problem Statement & Scope of Study

In this paper, we focus on monocular vision-based techniques for detecting both overtaking and receding vehicles. More specifically, this work is related to detecting vehicles that are partially visible in the front view of the ego-vehicle as they either emerge from the blind spot and pass the ego-vehicle, or recede into the blind spot as the ego-vehicle passes them (Fig. 1). A vehicle is considered to be overtaking from the moment it first appears in the side view from the ego-vehicle until more than 10% of the rear of the overtaking vehicle is seen from the ego-vehicle. A receding vehicle is defined similarly.

In the forthcoming sections, we present an appearance-based method that is used to temporally track vehicles that are either overtaking or receding with respect to the ego-vehicle. Haar-Adaboost feature classification, coupled with bi-directional temporal tracking of the output appearance windows, enables the detection of both overtaking and receding vehicles. A comprehensive evaluation of the proposed technique is presented using real-world driving datasets. Additionally, the proposed method is extended further to characterize the overtaking and receding events for naturalistic driving studies (NDS) [13]. Based on the available literature on NDS, this characterization is the first study of its kind.

II. OVERTAKING & RECEDING VEHICLE DETECTION METHOD

When a vehicle is overtaking the ego-vehicle, it is visible partially, as shown in Fig. 2. Similarly, when a vehicle is receding w.r.t. the ego-vehicle, it appears partially until it disappears from the view of the ego-vehicle. In this section, we first explain the feature extraction and classifier training that are used for learning the classifiers, which are then used to detect the partially visible vehicles. Then, we explain the proposed detection process for overtaking vehicles, followed by receding vehicles.

Fig. 2. (a) Regions $I_L$ and $I_R$ in the image where the overtaking vehicle $V_O$ and receding vehicle $V_R$ will appear with respect to the ego-vehicle $V_E$; (b) feature extraction windows $I_S^L$ and $I_P^L$ that are used to train two classifiers.

A. Learning Stage: Feature Extraction & Classifier Training

The proposed method uses Haar-like features and two Adaboost classifiers to detect partially visible overtaking and receding vehicles. Given an input image $I$ of size $M \times N$ (columns by rows), we define the two regions $I_L$ and $I_R$ as shown in Fig. 2(a). In order to generate the ground-truth annotations for training, we have considered 2500 training images in which the side view of the vehicle is visible within $I_L$ only. An annotation window $I_S^L$ that encapsulates the entire side view of the vehicle in $I_L$ is marked, as shown by the red window in Fig. 2(b). A second window, denoted by $I_P^L$, is generated from $I_S^L$ by considering the front half of $I_S^L$, as shown in Fig. 2(b). It is to be noted that the annotation windows $I_S^L$ and $I_P^L$ correspond to the full side view and front-part side view, respectively, of the overtaking vehicle to the left of the ego-vehicle, i.e. in $I_L$. The full side-view sample $I_S^L$ is flipped to generate the training samples for overtaking vehicles in the right region $I_R$. Therefore, we get $I_S^R$ and $I_P^R$ using the following:

$$I_S^R(x, y) = I_S^L(M_S - x + 1, y), \quad I_P^R(x, y) = I_P^L(M_P - x + 1, y) \qquad (1)$$

where $M_S$ and $M_P$ are the numbers of columns in $I_S^L$ and $I_P^L$ respectively. Given $I_S^L$ and $I_P^L$, and $I_S^R$ and $I_P^R$, Haar-like features are extracted from each training sample and are then used to learn two Adaboost cascaded classifiers for each region, resulting in the following four classifiers: $C_S^L$, $C_P^L$, $C_S^R$ and $C_P^R$ for $I_S^L$, $I_P^L$, $I_S^R$ and $I_P^R$ respectively. The proposed method is aimed at detecting overtaking vehicles in $I_L$ and $I_R$ both adjacent to the ego-vehicle and in lanes that are laterally farther away from the ego-vehicle. Therefore, the training samples also include vehicles that are overtaking the ego-vehicle in the farther lanes (like the white sedan in $I_L$ in Fig. 2(a)). Such selection also ensures that the classifiers are trained with vehicles of multiple sizes.
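Equation (1) amounts to a horizontal flip of each annotated left-region crop. The short sketch below illustrates this step under our own naming (mirror_for_right_region is not from the paper), assuming the crops are loaded as NumPy arrays:

import numpy as np

def mirror_for_right_region(patch: np.ndarray) -> np.ndarray:
    # Implements Eq. (1): I^R(x, y) = I^L(M - x + 1, y) for a crop with
    # M columns; reversing the column axis performs exactly this mapping.
    return patch[:, ::-1]

# Example: synthesize right-region training samples from left-region crops.
left_crops = [np.zeros((64, 128), dtype=np.uint8)]  # stand-in annotated crops
right_crops = [mirror_for_right_region(c) for c in left_crops]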
B. Overtaking Vehicle Detection Method

In this section, we discuss the proposed overtaking vehicle detection method. Given an input image frame at time $t$, i.e. $I^{(t)}$, the two regions $I_L$ and $I_R$ are marked first. The superscript indicates the temporal reference of the frame $I$. Classifiers $C_S^L$ and $C_P^L$ are applied to $I_L$, and classifiers $C_S^R$ and $C_P^R$ are applied to $I_R$, resulting in possible full or part side-view candidates in $I_L$ and $I_R$. We describe the remainder of the proposed method using the left region $I_L$; the same can be applied to $I_R$.

Let us consider the case when there is no overtaking vehicle $V_O$ in the preceding frames, and in $I^{(t)}$, $V_O$ appears. Let $\mathcal{S}^{(t)}$ and $\mathcal{P}^{(t)}$ be the sets of candidate detection windows given by $C_S^L$ and $C_P^L$. Ideally, $\mathcal{S}^{(t)}$ will not have any candidate windows because $V_O$ appears only partially in $I^{(t)}$. $\mathcal{P}^{(t)}$ is defined as a set of windows such that:

$$\mathcal{P}^{(t)} = \left\{ \omega_{P1}^{(t)}, \omega_{P2}^{(t)}, \ldots, \omega_{Pk}^{(t)}, \ldots, \omega_{PN_P}^{(t)} \right\} \qquad (2)$$

where each $k$-th window $\omega_{Pk}^{(t)} = [x\ y\ w\ h]^T$ denotes the bounding window for the part side view shown earlier in Fig. 2(b): $x$ and $y$ are the positions of the top-left corner of the window (the origin is the top-left corner of the image, $x$ is along the horizontal axis, and $y$ is along the vertical axis), and $w$ and $h$ indicate the width (along the $x$ axis) and height (along the $y$ axis) of the bounding window respectively.

Given a detection window for the part side view, say $\omega_{Pk}^{(t)}$, we first initialize the overtaking event if a part side-view window does not occur in the past $N_O$ frames, i.e.

$$\mathcal{P}^{(t-1)} \cup \cdots \cup \mathcal{P}^{(t-i)} \cup \cdots \cup \mathcal{P}^{(t-N_O)} = \phi \qquad (3)$$

where $t - N_O \le t - i < t$. As part of the initialization, given a set of windows $\mathcal{P}^{(t)}$ in the current frame $I^{(t)}$ that satisfies (3), the window $\omega_{Pk}^{(t)} \in \mathcal{P}^{(t)}$ that is closest to the left edge of the image frame is considered as the initialization window, i.e. it satisfies the following:

$$k = \arg\min_j x_j \qquad (4)$$

This is considered the initialization condition because it is assumed that an overtaking vehicle in $I_L$ would enter first from the left edge of the image w.r.t. the ego-vehicle.
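The initialization test in (3)-(4) needs only the part-window history of the last $N_O$ frames and the leftmost window of the current frame. A minimal sketch, assuming windows are (x, y, w, h) tuples; the function name init_overtaking is ours:

from collections import deque
from typing import Deque, List, Optional, Tuple

Window = Tuple[int, int, int, int]  # (x, y, w, h), origin at image top-left

def init_overtaking(history: Deque[List[Window]],
                    current: List[Window]) -> Optional[Window]:
    # Eq. (3): initialize only if no part side-view window occurred in the
    # past N_O frames held in `history`.
    if any(frame_windows for frame_windows in history):
        return None
    if not current:
        return None  # nothing to initialize with in the current frame
    # Eq. (4): pick the current window closest to the left image edge.
    return min(current, key=lambda w: w[0])

# Example with N_O = 5 empty history frames:
N_O = 5
history: Deque[List[Window]] = deque([[] for _ in range(N_O)], maxlen=N_O)
print(init_overtaking(history, [(40, 200, 80, 60), (12, 210, 90, 64)]))  # leftmost wins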

After getting the initialization window of the part side view in the current frame $I^{(t)}$, the overtaking event detection is initialized. However, the overtaking event is not considered detected until the $(t + N_T)$-th frame. In every frame $I^{(t_i)}$ such that $t < t_i \le t + N_T$, we determine the presence of the nearest part side-view windows between adjacent frames. In order to do this, we start with the $t$-th frame, when we initialized the overtaking event. In the $(t+1)$-th frame, we look for a window $\omega_{Pk}^{(t+1)} \in \mathcal{P}^{(t+1)}$ such that the $k$-th window satisfies the following minimum-distance criterion:

$$k = \arg\min_j \left\| \omega_{Pj}^{(t+1)} - \omega_{Pk}^{(t)} \right\| \quad \text{such that} \quad \left\| \omega_{Pk}^{(t+1)} - \omega_{Pk}^{(t)} \right\| < \delta_e \qquad (5)$$

where $\delta_e$ caters to the distance and the change in scale, if any, due to the movement of the vehicle. If a window is detected in the $(t+1)$-th frame, the $k$-th window selected using the above condition in the $(t+1)$-th frame is used to detect the window in the $(t+2)$-th frame. This process is repeated till the $(t + N_T)$-th frame. An overtaking vehicle is considered to be detected if we find $n_d$ windows within $N_T$ frames from the $t$-th frame. This tracking aids in eliminating any false-positive windows that may be detected by the classifiers sporadically.

After $n_d$ windows are detected in $N_T$ frames, as the overtaking vehicle moves, the front-part side view of the vehicle becomes less trackable; but the vehicle is still overtaking the ego-vehicle. That is when the second classifier $C_S^L$, i.e. the full side-view classifier, is expected to detect the full side view of the overtaking vehicle. $C_S^L$ gives the set of windows $\mathcal{S}$. The conditions for a window $\omega_{Sk} \in \mathcal{S}$ to satisfy are:
1) $\omega_{Pk}$ is already detected, and the overtaking vehicle has been initialized and detected, before $\omega_{Sk}$ is detected.
2) $\omega_{Sk}$ meets the minimum x-position criterion, similar to (4).
3) $\omega_{Pk}$, if found, must lie within the full side-view window, i.e. $\left|\omega_{Pk} \cap \omega_{Sk}\right| > p_{overlap} \cdot (w_P h_P)$, where $p_{overlap}$ is the required percentage overlap between $\omega_{Pk}$ and $\omega_{Sk}$, and $w_P h_P$ is the area of the part side-view window.

The first condition eliminates false-positive windows from $C_S^L$ because a full side view is not considered unless the part side view has already been detected. The second condition eliminates false windows that do not appear to be in the side view from the ego-vehicle. The third condition is an additional step to further narrow down the windows from $C_S^L$ in the presence of part side-view windows detected by $C_P^L$ in the same frame. Therefore, the proposed method continues to indicate an overtaking event as long as the following conditions are met:
1) presence of the part side-view window that is constrained by (5) above;
2) presence of the full side-view window $\omega_{Sk}$ that occurs after $\omega_{Pk}$ is detected.

Fig. 3(a) shows a sample sequence of an overtaking vehicle with time stamps from the beginning of the overtaking event till its end (when 10% of the rear of the overtaking vehicle is visible in the input image frame). The classifier outputs using the part and full side-view classifiers are shown in Fig. 3(a). Fig. 3(b) shows the binarized responses of the classifiers, together with the overtaking detection decision estimated by the proposed method. When the overtaking event initialization takes place, the detection value is set to 2 in the plot in Fig. 3(b). The overtaking event is detected after two frames ($N_T = 3$), when it is set to 3. The ground truth is also plotted in Fig. 3. It can be seen that the proposed method initializes the overtaking detection at $t = 3$, i.e. 3 time units after the actual overtaking starts (from the ground truth). The overtaking event is further confirmed at $t = 6$ and continues till $t = 24$, for as long as the full side-view classifier continues to detect the overtaking vehicle.

Fig. 3. (a) Sequence showing an overtaking vehicle in the far left lane with time stamps (frame numbers); the detections of the part side view (in green) and full side view (in blue) are shown in the images. (b) Plot showing the results from the part and full side-view classifiers and the overtaking detection decision. A classifier detection of a part or full side view is indicated by 1. The overtaking detection estimated by the proposed algorithm is set to 2 when the overtaking event is initialized, followed by 3 when the overtaking event detection is finally made. The ground truth is also shown.

Fig. 4. Block diagram for detecting overtaking and receding vehicles. The flow in black is used to detect overtaking vehicles in the $I_L$ and $I_R$ regions, whereas the flow in red is used to detect receding vehicles.
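The association rule (5) and the $n_d$-of-$N_T$ confirmation drive both directions of the flow in Fig. 4. Below is a sketch of that loop under stated assumptions: the window distance is taken as the Euclidean norm over $[x\ y\ w\ h]$, and all names are ours rather than the paper's:

import math
from typing import List, Optional, Sequence, Tuple

Window = Tuple[int, int, int, int]  # (x, y, w, h)

def window_distance(a: Window, b: Window) -> float:
    # Euclidean distance over [x y w h]; captures both position shift and
    # scale change, which delta_e in Eq. (5) is meant to bound.
    return math.dist(a, b)

def confirm_overtaking(init_window: Window,
                       frames: Sequence[List[Window]],
                       delta_e: float, n_d: int) -> bool:
    # Track the initialization window through frames t+1 .. t+N_T (Eq. 5).
    # `frames` holds the part side-view candidates of each of those frames;
    # the event is confirmed when at least n_d associated windows are found.
    prev, found = init_window, 0
    for candidates in frames:
        best: Optional[Window] = min(
            candidates, key=lambda w: window_distance(w, prev), default=None)
        if best is not None and window_distance(best, prev) < delta_e:
            prev, found = best, found + 1  # chain the association forward
    return found >= n_d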

C. Receding Vehicle Detection

The proposed algorithm for detecting overtaking vehicles can be extended further to detect receding vehicles using the flow diagram shown in Fig. 4. As discussed in the previous sub-section, the classification windows from the part side-view classifier are tracked first, followed by the full side-view classification windows. The flow shown in black arrows from left to right in Fig. 4 enables the detection of overtaking vehicles. If the same rules are applied as discussed in Section II-B, but from the opposite direction as shown by the red arrows, the same algorithm can be used to detect receding vehicles. In order to detect receding vehicles, we use the classification outputs from the full side-view classifier $C_S^L$ to initialize the tracking of the receding vehicle at the $t$-th time instance or frame. If $n_d$ full side-view windows are detected within $N_T$ frames from $I^{(t)}$, then we consider a receding vehicle to be detected by the side of the ego-vehicle. As the vehicle recedes further, the detection windows from the part side-view classifier $C_P^L$ are used to continue detecting the receding vehicle till it moves into the blind spot of the ego-vehicle. The same conditions that are applied on the part and full side-view windows for the overtaking vehicle are interchanged in the case of receding vehicle detection.

D. Relative Velocity Computation

In this sub-section, we use the output of the proposed overtaking and receding vehicle detectors to determine the relative velocity between the ego-vehicle and the neighboring vehicle with which the overtaking/receding event is occurring. This aids in understanding how the overtaking or receding occurred, and is particularly useful for the naturalistic driving studies described in Section IV. We explain the method for the case of overtaking vehicles; similar equations can be derived for receding vehicles.

In order to compute the relative velocity, the image $I^{(t)}$ at a given time instance $t$ is converted to its inverse perspective mapping (IPM) domain [22]. The conversion is performed using the homography matrix [22], which can be computed offline using the camera calibration for a given setup of the camera in the ego-vehicle. The homography matrix is denoted by $H$, and $H$ is used to compute the IPM view of $I^{(t)}$, which is denoted by $\hat{I}^{(t)}$. Fig. 5 illustrates the images $I^{(t)}$ and $I^{(t+1)}$ at $t$ and $t+1$, and their corresponding IPM views $\hat{I}^{(t)}$ and $\hat{I}^{(t+1)}$. The proposed overtaking vehicle detection method gives the position of the overtaking vehicle, as shown by the green rectangular windows in $I^{(t)}$ and $I^{(t+1)}$ in Fig. 5. The bottom-right positions of these windows, denoted by $P^{(t)} = (x+w,\ y+h)$ and $P^{(t+1)}$, are used along with the homography matrix $H$ to compute the position of the vehicle along the road, i.e. the $\hat{y}$ positions in Fig. 5. The following transformation is applied to $P^{(t)}$ and $P^{(t+1)}$:

$$k \left[ \hat{x}\ \hat{y}\ 1 \right]^T = H \left[ x\ y\ 1 \right]^T \qquad (6)$$

where $k$ is a homogeneous scale factor, which results in $\hat{y}^{(t)}$ and $\hat{y}^{(t+1)}$ respectively. The relative velocity between the ego-vehicle and the overtaking vehicle at time instance $t+1$, for the time period $\Delta t = (t+1) - t$, is given by

$$v_r^{(t+1)} = \psi\, \frac{\hat{y}^{(t+1)} - \hat{y}^{(t)}}{\Delta t} \qquad (7)$$

where $\psi$ is the calibration constant that converts the IPM image coordinates into real-world coordinates (in meters), and $v_r^{(t+1)}$ is the relative velocity in m/s at time $t+1$. The velocities of the two vehicles are assumed to be constant over this short time interval in this study; however, a more complete vehicle dynamics model with lateral and longitudinal accelerations could be applied.

Fig. 5. Relative velocity estimation between the ego-vehicle and the neighboring overtaking vehicle.
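As a concrete illustration of (6)-(7), the sketch below projects the tracked window's bottom-right corner into the IPM domain with OpenCV and scales the $\hat{y}$ displacement to m/s. The homography values, psi, and the use of 1/fps for $\Delta t$ are placeholder assumptions; only the two equations themselves come from the paper:

import numpy as np
import cv2

# Placeholder homography H (image -> IPM) and calibration constant psi;
# in practice both come from offline camera calibration, per the paper.
H = np.array([[1.0, 0.2, -50.0],
              [0.0, 2.5, -300.0],
              [0.0, 0.002, 1.0]])
psi = 0.05   # meters per IPM pixel (assumed)
fps = 30.0   # capture frame rate, so delta_t = 1/fps seconds (assumed)

def to_ipm(point_xy):
    # Eq. (6): project an image point into the IPM (bird's-eye) domain.
    pts = np.array([[point_xy]], dtype=np.float64)   # shape (1, 1, 2)
    return cv2.perspectiveTransform(pts, H)[0, 0]    # (x_hat, y_hat)

def relative_velocity(win_t, win_t1):
    # Eq. (7): relative velocity from two adjacent-frame detections.
    # Each window is (x, y, w, h); its bottom-right corner is projected
    # into the IPM domain and the y-displacement is scaled to m/s.
    (x0, y0, w0, h0), (x1, y1, w1, h1) = win_t, win_t1
    y_hat_t = to_ipm((x0 + w0, y0 + h0))[1]
    y_hat_t1 = to_ipm((x1 + w1, y1 + h1))[1]
    return psi * (y_hat_t1 - y_hat_t) * fps

print(relative_velocity((400, 300, 120, 90), (405, 290, 124, 92)))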
It can be seen from Fig. 5 that as the overtaking vehicle overtakes the ego-vehicle, the distance between the two increases. The relative velocity is computed between every two adjacent frames from the beginning of the overtaking event till the end, and the average relative velocity can be computed for the entire overtaking event. In Section IV, we use (7) in the analysis of naturalistic driving data to draw inferences about the driving characteristics of the ego-vehicle during overtaking/receding events.

Fig. 6. Analysis of overtaking events: each blue-red rectangle pair refers to one overtaking event. The x-axis (frame numbers) shows the duration of each event. The duration in the ground truth is shown by the blue rectangle; the corresponding red rectangle indicates the overtaking event detection result from the proposed method.

III. RESULTS & DISCUSSION

In this section, we first evaluate the performance of the proposed overtaking and receding vehicle detection method.

Existing evaluations of overtaking vehicle detection report the number of overtaking events that were detected as compared to the ground truth. However, such an evaluation captures the accuracy of the techniques only partially. In this paper, we present a more detailed evaluation of the overtaking and receding vehicle detections. In order to do this, we considered two video sequences that are captured at 30 frames per second and contain 27000 frames in total. The total number of overtaking events and their properties are listed in Table I. A total of 51 events (34 overtaking and 17 receding events) were annotated. The events occurred either on the lanes immediately adjacent to the ego-lane or on neighbor lanes beyond the adjacent lanes, i.e. the performance is evaluated for overtaking and receding vehicles in multiple lanes.

TABLE I: PROPERTIES OF DATASET FOR EVALUATION

Dataset name                        | Duration     | Remarks
New LISA overtaking vehicle dataset | 1410 frames  | Highway scenario, dense traffic
LISA-Toyota CSRC [15]               | 27000 frames | Highway, urban scenario, highly dense traffic

Overtaking & Receding Vehicle Events
Properties | Left  | Right | Adjacent lanes | Neighbor lanes
Overtaking | 26/34 | 8/34  | 34             | 0
Receding   | 2/17  | 15/17 | 8              | 7
Total      | 28/51 | 23/51 | 42             | 7

In our experiments, all 51 overtaking and receding vehicles were detected, resulting in 100% detection accuracy. However, this needs to be further qualified in terms of how early each event was detected. This is illustrated through the plots in Figs. 6 and 7 for overtaking and receding vehicles respectively. Referring to Fig. 6, the x-axis indicates the frame numbers. Each blue and red rectangle pair corresponds to one overtaking event with respect to the ego-vehicle. The blue rectangle shows the duration of the overtaking event as annotated in the ground truth, whereas the red rectangle shows the duration as estimated by the proposed method. The overlap between the red and blue rectangles indicates how much of the overtaking event is detected by the proposed method when compared to the ground truth. Similarly, the receding events are shown in Fig. 7.

It can be seen from Fig. 6 that it takes a few frames from the start of the overtaking event (as annotated in the ground truth) for the proposed method to detect the overtaking vehicle. This is primarily because the part side-view classifier requires a few frames before the part side view of the overtaking vehicle appears in the regions $I_L$ and $I_R$ clearly enough for the classifiers to detect it accurately. Similarly, in Fig. 7, the proposed algorithm stops detecting the receding vehicle earlier than the ground truth, because the ground-truth annotation extends until the receding vehicle disappears from the image frame. Considering 30 frames per second, the proposed method is able to detect the overtaking events within 0.7s ± 0.5s from the start of the overtaking event. Similarly, in the case of receding vehicle detection, the proposed method stops detecting the receding vehicle 0.4s ± 0.4s before the actual receding event ends in the image frame.

Fig. 7. Analysis of receding events: each blue-red rectangle pair refers to one receding event. The x-axis (frame numbers) shows the duration of each event. The duration in the ground truth is shown by the blue rectangle; the corresponding red rectangle indicates the receding event detection result from the proposed method.
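The latency figures above reduce to frame-index arithmetic at 30 fps; for completeness, a tiny sketch with our own naming:

FPS = 30.0

def detection_latency_s(gt_start_frame: int, det_start_frame: int) -> float:
    # Seconds between the annotated event start and the first detection.
    return (det_start_frame - gt_start_frame) / FPS

# Example: event annotated from frame 1200, first detected at frame 1221.
print(detection_latency_s(1200, 1221))  # 0.7 s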
IV. ANALYSIS FOR NATURALISTIC DRIVING STUDIES (NDS)

In this section, we briefly discuss how the proposed method and the analysis presented in the previous sections can be extended for data analysis in naturalistic driving studies (NDS). Fig. 8 shows three events related to overtaking and receding vehicle detection that are identified and extracted manually by data reductionists in NDS [23]. While the analysis presented in Section III evaluates the detection of such events, we now present ways to extract information from the detected events.

Fig. 8. Overtaking/receding events that are identified by data reductionists in NDS [23]: (a) neighboring vehicle $V_O$ is overtaking ego-vehicle $V_E$; (b) ego-vehicle is overtaking neighboring vehicle $V_R$; (c) neighboring vehicle $V_O$ is overtaking the ego-vehicle and cutting into the ego-lane.

In addition to the number of detections listed in Table I, we extract the following information about the detected events. The duration of the events is indicative of the relative velocities between the ego-vehicle and the overtaking/receding vehicles. Table II is generated from 45 events in one of the drives that was used to generate the results in the previous section. Table II shows the mean, standard deviation, minimum and maximum durations in seconds for the overtaking and receding events with respect to the ego-vehicle. Additionally, identifying overtaking events with durations considerably shorter than the mean could indicate specific maneuvers such as cutting in front of the ego-vehicle, as sketched below.
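Such flagging can be computed directly from the detected event durations; the sketch below uses a one-standard-deviation-below-the-mean threshold, which is our illustrative rule, not one prescribed by the paper:

import statistics

def flag_possible_cut_ins(durations_s, n_std=1.0):
    # Flag events whose duration is well below the mean (possible cut-ins).
    # `durations_s` are per-event durations in seconds, e.g. derived from
    # (end_frame - start_frame) / 30.0 for each detected overtaking event.
    mean = statistics.mean(durations_s)
    std = statistics.stdev(durations_s)
    threshold = mean - n_std * std
    return [i for i, d in enumerate(durations_s) if d < threshold]

durations = [2.1, 1.8, 0.4, 2.5, 1.9, 0.3, 2.2]  # illustrative values only
print(flag_possible_cut_ins(durations))  # indices of suspiciously short events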

TABLE II: NDS METRICS FOR OVERTAKING/RECEDING EVENTS

Duration Metrics     | Overtaking      | Receding
Mean ± deviation (s) | 1.9176 ± 1.1461 | 2.2451 ± 1.7748
Minimum/Maximum (s)  | 0.266/6.33      | 0.3/5.96

Absolute Relative Velocities | Overtaking | Receding
Left/Right (m/s)             | 6.9/9.2    | 12.4/5.6

It should be noted that, with respect to the neighboring vehicle, a receding event indicates an overtaking maneuver by the ego-vehicle. If a receding event is detected on the left of the ego-vehicle, it implies that the ego-vehicle is moving faster than the vehicles in the faster lanes (the left lanes are considered fast lanes in the USA); the ego-vehicle was therefore overtaking the fast-moving vehicles in the faster lanes. Similarly, an overtaking event in the right lane indicates that the other vehicles are moving faster than the ego-vehicle, which can pose a driving hazard for the ego-vehicle. It can be seen from Table I that the proposed method is able to determine that there were 2 receding events on the left of the ego-vehicle and 8 overtaking events on the right of the ego-vehicle. These specific events could be flagged for further analysis by data reductionists in NDS.

The relative velocity estimation method presented in Section II-D was used to determine the average relative velocities for the overtaking and receding events. Table II lists the relative velocities for the different overtaking and receding conditions. From Table II, the following can be inferred about the drive from which the overtaking and receding vehicles were considered. The relative velocity was higher for right overtaking events, i.e. the ego-vehicle was moving slower than the vehicles in its right lane during the drive. Next, the relative velocity is 12.4 m/s during left receding events, which implies that the ego-vehicle accelerated more than the vehicles in the fast lane (left lane) during the receding events in the drive.

The study presented in this section is an initial work on the analysis of overtaking and receding events for naturalistic driving studies. More metrics and analysis items will be explored in future work.

V. CONCLUSIONS

In this paper, an appearance-based technique to detect both overtaking and receding vehicles w.r.t. the ego-vehicle is presented. The classification windows obtained from Haar features and Adaboost cascaded classifiers are tracked in a bi-directional manner, which enables the detection of both optically opposite flowing events. The detailed evaluations show that the proposed method detects/terminates the events within an average of 1 second as compared to the ground truth. The extension of the proposed work to extract drive information for NDS is reported for the first time in the context of receding and overtaking vehicles. Future work will focus on improving the accuracy of the proposed event detection technique, and on extracting more analysis metrics for NDS.

REFERENCES

[1] "Traffic Safety Facts 2011," National Highway Traffic Safety Administration, U.S. Department of Transportation, Tech. Rep., 2011.
[2] S. Sivaraman and M. M. Trivedi, "A General Active-Learning Framework for On-Road Vehicle Recognition and Tracking," IEEE Trans. on Intell. Transp. Sys., vol. 11, no. 2, pp. 267-276, June 2010.
[3] R. K. Satzoda and M. M. Trivedi, "Efficient Lane and Vehicle Detection with Integrated Synergies (ELVIS)," in 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops on Embedded Vision, 2014, to be published.
[4] S. Sivaraman and M. M. Trivedi, "Integrated Lane and Vehicle Detection, Localization, and Tracking: A Synergistic Approach," IEEE Trans. on Intell. Transp. Sys., pp. 1-12, 2013.
[5] E. Ohn-Bar, A. Tawari, S. Martin, and M. M. Trivedi, "Vision on Wheels: Looking at Driver, Vehicle, and Surround for On-Road Maneuver Analysis," in IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2014.
[6] R. K. Satzoda and M. M. Trivedi, "Selective Salient Feature based Lane Analysis," in 2013 IEEE Intell. Transp. Sys. Conference, 2013, pp. 1906-1911.
[7] R. K. Satzoda and M. M. Trivedi, "Vision-based Lane Analysis: Exploration of Issues and Approaches for Embedded Realization," in 2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops on Embedded Vision, 2013, pp. 604-609.
[8] R. K. Satzoda and M. M. Trivedi, "On Performance Evaluation Metrics for Lane Estimation," in International Conference on Pattern Recognition, 2014, to appear.
[9] A. Prioletti, A. Møgelmose, P. Grisleri, M. M. Trivedi, A. Broggi, and T. B. Moeslund, "Part-Based Pedestrian Detection and Feature-Based Tracking for Driver Assistance: Real-Time, Robust Algorithms, and Evaluation," IEEE Trans. on Intell. Transp. Sys., vol. 14, no. 3, pp. 1346-1359, 2013.
[10] B. Morris and M. M. Trivedi, "Trajectory Learning for Activity Understanding: Unsupervised, Multilevel, and Long-Term Adaptive Approach," IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 2287-2301, Nov. 2011.
[11] S. Sivaraman and M. M. Trivedi, "Dynamic Probabilistic Drivability Maps for Lane Change and Merge Driver Assistance," IEEE Trans. on Intell. Transp. Sys., to appear, 2014.
[12] B. Morris and M. Trivedi, "Robust Classification and Tracking of Vehicles in Traffic Video Streams," in 2006 IEEE Intell. Transp. Sys. Conference, 2006, pp. 1078-1083.
[13] R. K. Satzoda and M. M. Trivedi, "Drive Analysis using Vehicle Dynamics and Vision-based Lane Semantics," IEEE Transactions on Intelligent Transportation Systems, to appear, 2014.
[14] L. N. Boyle, S. Hallmark, J. D. Lee, D. V. McGehee, D. M. Neyens, and N. J. Ward, "Integration of Analysis Methods and Development of Analysis Plan - SHRP2 Safety Research," Transportation Research Board of the National Academies, Tech. Rep., 2012.
[15] R. K. Satzoda, P. Gunaratne, and M. M. Trivedi, "Drive Analysis using Lane Semantics for Data Reduction in Naturalistic Driving Studies," in IEEE Intell. Veh. Symp., 2014, pp. 293-298.
[16] S. Sivaraman and M. M. Trivedi, "Looking at Vehicles on the Road: A Survey of Vision-Based Vehicle Detection, Tracking, and Behavior Analysis," IEEE Trans. on Intell. Transp. Sys., vol. 14, no. 4, pp. 1773-1795, Dec. 2013.
[17] A. Ramirez, E. Ohn-Bar, and M. Trivedi, "Integrating Motion and Appearance for Overtaking Vehicle Detection," in IEEE Intell. Veh. Symp., 2014, pp. 96-101.
[18] X. Zhang, P. Jiang, and F. Wang, "Overtaking Vehicle Detection Using a Spatio-temporal CRF," in IEEE Intell. Veh. Symp., 2014, pp. 338-342.
[19] D. Hultqvist, J. Roll, F. Svensson, J. Dahlin, and T. B. Schön, "Detecting and Positioning Overtaking Vehicles using 1D Optical Flow," in IEEE Intell. Veh. Symp., 2014, pp. 861-866.
[20] F. Garcia, P. Cerri, and A. Broggi, "Data Fusion for Overtaking Vehicle Detection based on Radar and Optical Flow," in IEEE Intell. Veh. Symp., 2012, pp. 494-499.
[21] P. Guzmán, J. Díaz, J. Ralli, R. Agís, and E. Ros, "Low-cost Sensor to Detect Overtaking based on Optical Flow," Machine Vision and Applications, vol. 25, no. 3, pp. 699-711, Dec. 2011.
[22] A. Broggi, "Parallel and Local Feature Extraction: A Real-time Approach to Road Boundary Detection," IEEE Trans. on Image Proc., vol. 4, no. 2, pp. 217-223, Jan. 1995.
[23] "Researcher Dictionary for Video Reduction Data," VTTI, Tech. Rep., 2012.