Overtaking & Receding Vehicle Detection for Driver Assistance and Naturalistic Driving Studies


Ravi Kumar Satzoda and Mohan M. Trivedi

Abstract— Although on-road vehicle detection is a well-researched area, overtaking and receding vehicle detection with respect to (w.r.t.) the ego-vehicle is less addressed. In this paper, we present a novel appearance-based method for detecting both overtaking and receding vehicles w.r.t. the ego-vehicle. The proposed method is based on Haar-like features that are classified using Adaboost-cascaded classifiers, resulting in detection windows that are tracked temporally in two directions to detect overtaking and receding vehicles. A detailed and novel evaluation method is presented with 51 overtaking and receding events occurring in video frames. Additionally, an analysis of the detected events is presented, specifically for naturalistic driving studies (NDS), to characterize the overtaking and receding events during a drive. To the best knowledge of the authors, this automated analysis of overtaking/receding events for NDS is the first of its kind in the literature.

I. MOTIVATION & SCOPE

According to the National Highway Traffic Safety Administration (NHTSA), multi-vehicle crashes involving the ego-vehicle and surrounding vehicles result in over 68% of fatalities [1]. Vehicle detection [2], [3], [4], [5], lane detection [6], [7], [8], pedestrian detection [9], and higher-order tasks involving lanes and vehicles, such as trajectory analysis [10], [11], [12], using different sensing modalities, are therefore active areas of research for automotive active safety systems, as well as for offline data analysis in naturalistic driving studies (NDS) [13], [14], [15].

A. Related Work

Among the different sensor modalities, monocular camera-based vehicle detection is being extensively researched because of the increasing trend towards understanding the surround scene of the ego-vehicle using multiple cameras [16].
While on-road vehicle detection techniques such as [2], [3], [4] detect vehicles that are visible partially or fully, with varying levels of accuracy and efficiency, techniques such as [17], [18], [19] deal specifically with overtaking vehicles, which is relatively less explored [17]. Fig. 1 defines the terms overtaking and receding vehicles with respect to the ego-vehicle. Most related studies, such as [20], [19], [17], [21], have primarily focused on detecting overtaking vehicles only. Vehicles that are receding w.r.t. the ego-vehicle are not addressed in any of these works. Another observation is that overtaking vehicle detection methods can be classified into two broad categories depending on the type of overtaking event that is being detected. Methods such as [20], [19], [17] detect the partially visible overtaking vehicles that are passing by the side of the ego-vehicle as seen from the front view of the ego-vehicle. The second category has methods such as [21], [17] that detect overtaking events as seen from the rear view of the ego-vehicle.

Ravi Kumar Satzoda and Mohan M. Trivedi are with the University of California San Diego, La Jolla, CA, USA. rsatzoda@eng.ucsd.edu, mtrivedi@ucsd.edu

Fig. 1. Definition of overtaking vehicle V_O and receding vehicle V_R with respect to ego-vehicle V_E.

Motion cues have been used extensively in different ways to detect overtaking vehicles. This is understandable considering that the optical flow of an overtaking vehicle is in the opposite direction compared to the background. However, this also limits the use of the same methods for detecting receding vehicles, because the optical flow of a receding vehicle is in the same direction as the background. Furthermore, a study of the recent works shows that there are no standard methodologies or datasets available for evaluating overtaking/receding vehicle detection techniques.
Each study reports its evaluation in a different way, using different numbers of sequences or frames, and different metrics are used for the evaluations. This aspect of evaluation should also be addressed for benchmarking purposes.

B. Problem Statement & Scope of Study

In this paper, we focus on monocular vision-based techniques for detecting both overtaking and receding vehicles. More specifically, this work is related to detecting vehicles that are partially visible in the front view of the ego-vehicle as they either emerge from the blind spot and pass the ego-vehicle, or recede into the blind spot as the ego-vehicle passes them (Fig. 1). A vehicle is considered to be overtaking from the moment it first appears in the side view from the ego-vehicle until more than 10% of the rear of the overtaking vehicle is seen from the ego-vehicle. A receding vehicle is defined analogously. In the forthcoming sections, we present an appearance-based method that is used

to temporally track the vehicles that are either overtaking or receding with respect to the ego-vehicle. The Haar-Adaboost feature classification, coupled with a bi-directional temporal tracking of the output appearance windows, enables the detection of both overtaking and receding vehicles. A comprehensive evaluation of the proposed technique is presented using real-world driving datasets. Additionally, the proposed method is extended further to characterize the overtaking and receding events for naturalistic driving studies (NDS) [13]. Based on the available literature on NDS, this characterization is the first study of its kind.

II. OVERTAKING & RECEDING VEHICLE DETECTION METHOD

When a vehicle is overtaking the ego-vehicle, it is visible partially as shown in Fig. 2. Similarly, when a vehicle is receding w.r.t. the ego-vehicle, the vehicle appears partially until it disappears from the view of the ego-vehicle. In this section, we first explain the feature extraction and classifier training that are used for learning the classifiers, which will be used to detect the partially visible vehicles. Then, we explain the proposed detection process for overtaking vehicles, followed by receding vehicles.

Fig. 2. (a) Regions I_L and I_R in the image where overtaking vehicle V_O and receding vehicle V_R will appear with respect to the ego-vehicle V_E, (b) Feature extraction windows I_S^L and I_P^L that are used to train two classifiers.

A. Learning Stage: Feature Extraction & Classifier Training

The proposed method uses Haar-like features and two Adaboost classifiers to detect partially visible overtaking and receding vehicles. Given an input image I of size M x N (columns by rows), we define the two regions I_L and I_R as shown in Fig. 2(a). In order to generate the ground-truth annotations for training, we considered 2500 training images in which the side view of the vehicle is visible within I_L only.
An annotation window I_S^L that encapsulates the entire side view of the vehicle in I_L is marked, as shown by the red window in Fig. 2(b). A second window, denoted by I_P^L, is generated from I_S^L by considering the front half of I_S^L, as shown in Fig. 2(b). It is to be noted that the annotation windows I_S^L and I_P^L correspond to the full side view and front-part side view, respectively, of the overtaking vehicle to the left of the ego-vehicle, i.e. in I_L. The full side view sample I_S^L is flipped horizontally to generate the training samples for overtaking vehicles in the right region I_R. Therefore, we get I_S^R and I_P^R using the following:

I_S^R(x, y) = I_S^L(M_S - x + 1, y),   I_P^R(x, y) = I_P^L(M_P - x + 1, y)   (1)

where M_S and M_P are the numbers of columns in I_S^L and I_P^L respectively. Given I_S^L and I_P^L, and I_S^R and I_P^R, Haar-like features are extracted from each training sample, which are then used to learn two Adaboost cascaded classifiers for each region, resulting in the following four classifiers: C_S^L, C_P^L, C_S^R and C_P^R for I_S^L, I_P^L, I_S^R and I_P^R respectively. The proposed method is aimed at detecting overtaking vehicles in I_L and I_R adjacent to the ego-vehicle, as well as in lanes that are laterally farther away from the ego-vehicle. Therefore, the training samples also include vehicles that are overtaking the ego-vehicle in the farther lanes (like the white sedan in I_L in Fig. 2(a)). Such a selection also ensures that the classifiers are trained with vehicles of multiple sizes.

B. Overtaking Vehicle Detection Method

In this section, we discuss the proposed overtaking vehicle detection method. Given an input image frame at time t, i.e. I^(t), the two regions I_L and I_R are marked first. The superscript indicates the temporal reference of the frame I. Classifiers C_S^L and C_P^L are applied to I_L, and classifiers C_S^R and C_P^R are applied to I_R, resulting in possible full or part side-view candidates in I_L and I_R.
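The horizontal flip in (1) amounts to a column reversal of each training crop. A minimal sketch, assuming the crops are grayscale NumPy arrays (the function name and sample format are illustrative, not from the paper):

```python
import numpy as np

# Illustrative sketch of Eq. (1): mirror left-region training crops to
# obtain right-region samples. `left_samples` is assumed to be a list of
# grayscale crops, each a (rows x cols) numpy array.
def mirror_samples(left_samples):
    # np.fliplr reverses the columns of each crop, which is exactly the
    # mapping I_R(x, y) = I_L(M - x + 1, y) of Eq. (1)
    return [np.fliplr(crop) for crop in left_samples]
```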
We describe the remainder of the proposed method using the left region I_L; the same can be applied to I_R. Let us consider the case when there is no overtaking vehicle, and in I^(t), a vehicle V_O appears. Let S and P be the sets of candidate detection windows given by C_S^L and C_P^L respectively. Ideally, S will not have any candidate windows because V_O appears only partially in I^(t). P is defined as a set of windows such that:

P = { P_1, P_2, ..., P_k, ..., P_{N_P} }   (2)

where each k-th window P_k denotes the bounding window for the part side view shown earlier in Fig. 2(b), and is a set of four standard variables P_k = [x y w h]^T, i.e. the x and y positions of the top-left corner of the window (the origin is considered to be the top-left corner of the image, x is along the horizontal axis, and y is along the vertical axis), and w and h indicate the width (along the x-axis) and height (along the y-axis) of the bounding window respectively. Given a detection window for the part side view, we first initialize the overtaking event if a part side-view window has not occurred in the past N_O frames, i.e.

P^(t-1) U P^(t-i) U ... U P^(t-N_O) = φ   (3)

where t - N_O <= t - i < t. As part of the initialization, given the set of windows P in the current frame I^(t) that satisfies (3), the window P_k that is closest to the left edge of the image frame is considered as the initialization window, i.e. P_k satisfies the following:

k = argmin_j x_j   (4)

This is considered as the initialization condition because it is assumed that an overtaking vehicle in I_L would first enter from the left edge of the image w.r.t. the ego-vehicle.
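The initialization test of (3)–(4) can be sketched as follows. This is a hedged sketch: the function name, window representation, and default N_O value are assumptions for illustration only:

```python
def initialize_overtaking(part_windows, history, n_o=5):
    """Sketch of the initialization step, Eqs. (3)-(4).

    part_windows: list of (x, y, w, h) part side-view detections in the
    current frame. history: list of detection lists for past frames.
    n_o stands in for N_O (its value here is an assumption).
    Returns the initialization window, or None if (3) is not met.
    """
    # Eq. (3): no part side-view window may occur in the past n_o frames
    if any(len(frame) > 0 for frame in history[-n_o:]):
        return None
    if not part_windows:
        return None
    # Eq. (4): pick the window closest to the left image edge (minimum x)
    return min(part_windows, key=lambda win: win[0])
```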

IEEE Conference on Intelligent Transportation Systems, Oct

After getting the initialization window of the part side view in the current frame I^(t), the overtaking event detection is initialized. However, the overtaking event is not considered detected until the (t + N_T)-th frame. In every frame I^(t_i) such that t < t_i <= t + N_T, we determine the presence of the nearest part side-view windows between adjacent frames. In order to do this, we start with the t-th frame, when we initialized the overtaking event. In the (t+1)-th frame, we look for a window P_k^(t+1) such that the k-th window satisfies the following minimum-distance criterion:

k = argmin_j || P_j^(t+1) - P_k^(t) ||  such that  || P_j^(t+1) - P_k^(t) || < δ_e   (5)

where δ_e caters to the distance and the change in scale, if any, due to the movement of the vehicle. If a window is detected in the (t+1)-th frame, the k-th window selected using the above condition in the (t+1)-th frame is used to detect the window in the (t+2)-th frame. This process is repeated till the (t + N_T)-th frame. An overtaking vehicle is considered to be detected if we find n_d windows within N_T frames from the t-th frame. This tracking aids in eliminating any false-positive windows that may be detected by the classifiers sporadically. After n_d windows are detected in N_T frames, as the overtaking vehicle moves, the front-part side view of the vehicle becomes less trackable. But the vehicle is still overtaking the ego-vehicle. That is when the second classifier C_S^L, i.e. the full side-view classifier, is expected to detect the full side view of the overtaking vehicle. C_S^L gives the set of windows S. The conditions for a window S_k in S to satisfy are: 1) a part side-view window P_k is already detected, and the overtaking vehicle has been initialized and detected, before S_k is detected; 2) S_k meets the minimum x-position criterion similar to (4); 3) P_k, if found, must lie within the full side-view window, i.e. area(P_k ∩ S_k) > p_overlap (w_P h_P), where p_overlap is the percentage overlap between P_k and S_k.
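The frame-to-frame association of (5) can be sketched as a nearest-window match with a rejection threshold. A minimal sketch, assuming windows are (x, y, w, h) tuples and using an L1 distance over position and size as a stand-in for the paper's distance-plus-scale criterion:

```python
def match_window(prev, candidates, delta_e):
    """Sketch of the minimum-distance criterion, Eq. (5).

    prev: the tracked window from the previous frame, (x, y, w, h).
    candidates: detection windows in the current frame.
    The L1 distance over all four components (an assumption) captures
    both the position shift and the change in scale.
    """
    def dist(cand):
        return sum(abs(c - p) for c, p in zip(cand, prev))
    if not candidates:
        return None
    best = min(candidates, key=dist)
    # accept the nearest candidate only if it lies within delta_e
    return best if dist(best) < delta_e else None
```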
The first condition eliminates false-positive windows from C_S^L, because a full side view is not considered unless the part side view is already detected. The second condition eliminates false windows that do not appear to be in the side view from the ego-vehicle. The third condition is an additional step to further narrow down the windows from C_S^L in the presence of the part side-view windows that are detected by C_P^L in the same frame. Therefore, the proposed method continues to indicate an overtaking event as long as the following conditions are met: 1) presence of the part side-view window that is constrained by (5) above; 2) presence of the full side-view window S_k that occurs after the part side-view window is detected.

Fig. 3(a) shows a sample sequence of an overtaking vehicle, with the time stamps from the beginning of the overtaking event till the end of the overtaking event (when 10% of the rear of the overtaking vehicle is visible in the input image frame). The classifier outputs using the part and full side-view classifiers are shown in Fig. 3(a). Fig. 3(b) shows the binarized responses of the classifiers. The overtaking detection decision that is estimated by the proposed method is also shown in Fig. 3(b). When the overtaking event initialization takes place, the detection value is set to 2 in the plot in Fig. 3(b). The overtaking event is detected after two frames (N_T = 3), when it is set to 3. The ground truth is also plotted in Fig. 3. It can be seen that the proposed method initializes the overtaking detection at t = 3, i.e. 3 time units after the actual overtaking starts (from the ground truth). The overtaking event is further confirmed at t = 6 and continues till t = 24, as long as the full side-view classifier continues to detect the overtaking vehicle.

Fig. 3. (a) Sequence showing an overtaking vehicle in the far left lane with time stamps (frame numbers). The detection of the part side view (in green) and full side view (in blue) are shown in the images, (b) Plot showing the results from the part and full side-view classifiers and the overtaking detection decision. A classifier detection of a part or full side view is indicated by 1. The overtaking detection as estimated by the proposed algorithm is set to 2 when the initialization of the overtaking event is done, followed by 3 when the overtaking event detection is finally done. The ground truth is also shown.

Fig. 4. Block diagram for detecting overtaking and receding vehicles. The flow in black is used to detect the overtaking vehicles in the I_L and I_R regions, whereas the flow in red is used to detect the receding vehicles.

C. Receding Vehicle Detection

The proposed algorithm for detecting overtaking vehicles can be extended further to detect receding vehicles using

the flow diagram shown in Fig. 4. As discussed in the previous sub-section, the classification windows from the part side-view classifier are tracked first, followed by the full side-view classification windows. The flow shown in black arrows from left to right in Fig. 4 enables the detection of overtaking vehicles. If the same rules are applied as discussed in Section II-B, but from the opposite direction as shown by the red arrows, the same algorithm can be used to detect receding vehicles. In order to detect the receding vehicles, we use the classification outputs from the full side-view classifier C_S^L to initialize the tracking of the receding vehicle at the t-th time instance or frame. If n_d full side-view windows are detected within N_T frames from I^(t), then we consider a receding vehicle by the side of the ego-vehicle to be detected. As the vehicle recedes further, the detection windows from the part side-view classifier C_P^L are used to continue detecting the receding vehicle till it moves into the blind spot of the ego-vehicle. The same conditions that are applied to the part and full side-view windows for the overtaking vehicle are interchanged in the case of receding vehicle detection.

D. Relative Velocity Computation

In this sub-section, we use the output of the proposed overtaking and receding vehicle detectors to determine the relative velocity between the ego-vehicle and the neighboring vehicle with which the overtaking/receding event is occurring. This aids in understanding how the overtaking or receding occurred, and is particularly useful for the naturalistic driving studies that will be described in Section IV. We explain the method for the case of overtaking vehicles; similar equations can be derived for receding vehicles as well. In order to compute the relative velocity, the image I^(t) at a given time instance t is converted to its inverse perspective mapping (IPM) domain [22].
The conversion is performed using the homography matrix [22], which can be computed offline using the camera calibration for a given setup of the camera in the ego-vehicle. The homography matrix is denoted by H. Therefore, H is used to compute the IPM view of I^(t), which is denoted by Ī^(t). Fig. 5 illustrates the images I^(t) and I^(t+1) at t and t + 1, and their corresponding IPM views Ī^(t) and Ī^(t+1). The proposed overtaking vehicle detection method gives the position of the overtaking vehicle, as shown by the green rectangular windows in I^(t) and I^(t+1) in Fig. 5. The bottom-right positions of these windows, denoted by P^(t)(x + w, y + h) and P^(t+1)(x + w, y + h), are used along with the homography matrix H to compute the position of the vehicle along the road, i.e. the ȳ positions in Fig. 5. The following transformation equation is applied to P^(t) and P^(t+1):

k [ x̄ ȳ 1 ]^T = H [ x y 1 ]^T   (6)

which results in ȳ^(t) and ȳ^(t+1) respectively. The relative velocity between the ego-vehicle and the overtaking vehicle at time instance t + 1 for the time period Δt = (t + 1) - t is given by

v_r^(t+1) = ψ ( ȳ^(t+1) - ȳ^(t) ) / Δt   (7)

where ψ is the calibration constant to convert the IPM image coordinates into real-world coordinates (in meters), and v_r^(t+1) is the relative velocity in m/sec at time t + 1. The velocities of the two vehicles are assumed to be constant over the short time interval in this study; however, more complete vehicle dynamics with lateral and longitudinal accelerations can be applied. It can be seen from Fig. 5 that as the overtaking vehicle overtakes the ego-vehicle, the distance between the two increases. The relative velocity is computed between every two adjacent frames from the beginning of the overtaking event till the end, and the average relative velocity can be computed for the entire overtaking event.

Fig. 5. Relative velocity estimation between the ego-vehicle and the neighboring overtaking vehicle.
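Eqs. (6)–(7) can be sketched in a few lines: map the bottom-right window corner through H, normalize by the homogeneous scale k, and difference the resulting ȳ values. A hedged sketch; the function name, argument layout, and values of ψ and Δt are assumptions, not from the paper:

```python
import numpy as np

def relative_velocity(H, corner_t, corner_t1, psi, dt):
    """Sketch of Eqs. (6)-(7). H is the 3x3 IPM homography, corner_t and
    corner_t1 are the bottom-right window corners (x + w, y + h) in two
    adjacent frames, psi is the meters-per-IPM-pixel calibration constant,
    and dt is the frame interval in seconds."""
    def ipm_y(corner):
        x, y = corner
        u = H @ np.array([x, y, 1.0])   # Eq. (6): k [x̄ ȳ 1]^T = H [x y 1]^T
        return u[1] / u[2]              # divide by k to recover ȳ
    # Eq. (7): relative velocity in m/s over the interval dt
    return psi * (ipm_y(corner_t1) - ipm_y(corner_t)) / dt
```

With H set to the identity, the computation reduces to psi times the pixel displacement per second, which makes the scaling role of ψ easy to check.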
In Section IV, we use (7) for the analysis of naturalistic driving data to infer the driving characteristics of the ego-vehicle during overtaking/receding events.

Fig. 6. Analysis of overtaking events: each blue-red rectangle pair refers to one overtaking event. The x-axis (frame numbers) shows the duration of each event. The duration in the ground truth is shown by the blue rectangle. The corresponding red rectangle indicates the overtaking event detection result from the proposed method.

III. RESULTS & DISCUSSION

In this section, we first evaluate the performance of the proposed overtaking and receding vehicle detection method. Existing evaluations for overtaking vehicles indicate

the number of overtaking events that were detected as compared to the ground truth. However, such an evaluation presents the accuracy of the techniques only partially. In this paper, we present a more detailed evaluation of the overtaking and receding vehicles. In order to do this, we considered two video sequences that are captured at 30 frames per second. The total number of overtaking events and their properties are listed in Table I. A total of 51 events (34 overtaking and 17 receding events) were annotated. The events occurred either on the lanes immediately adjacent to the ego-lane or on neighbor lanes beyond the adjacent lanes, i.e. the performance is evaluated for overtaking and receding vehicles in multiple lanes.

TABLE I
PROPERTIES OF DATASET FOR EVALUATION

Dataset name                        | Duration    | Remarks
New LISA overtaking vehicle dataset | 1410 frames | Highway scenario, dense traffic
LISA-Toyota CSRC [15]               | frames      | Highway, urban scenario, highly dense traffic

Overtaking & Receding Vehicle Events
Properties | Left  | Right | Adjacent lanes | Neighbor lanes
Overtaking | 26/34 | 8/34  |                |
Receding   | 2/17  | 15/17 |                |
Total      | 28/51 | 23/51 |                |

In our experiments, all the 51 overtaking and receding vehicles were detected, resulting in 100% detection accuracy. However, this needs to be further explained in terms of how early the event was detected. This is illustrated through the plots in Figs. 6 and 7 for overtaking and receding vehicles respectively. Referring to Fig. 6, the x-axis indicates the frame numbers. Each blue and red rectangle pair corresponds to one overtaking event with respect to the ego-vehicle. The blue rectangle shows the duration of the overtaking event as annotated in the ground truth, whereas the red rectangle shows the duration as estimated by the proposed method. The overlap between the red and blue rectangles indicates how much of the overtaking event is detected by the proposed method when compared to the ground truth.
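The per-event comparison behind the timing plots reduces to the offset, in seconds, between the ground-truth event start and the detected start. A minimal sketch (function and variable names are assumptions) of how such mean/deviation figures can be computed from frame numbers:

```python
import statistics

def detection_delays(gt_starts, det_starts, fps=30.0):
    """Illustrative sketch: per-event detection delay in seconds between
    ground-truth and detected event-start frames, summarized as mean and
    (population) standard deviation."""
    delays = [(d - g) / fps for g, d in zip(gt_starts, det_starts)]
    return statistics.mean(delays), statistics.pstdev(delays)
```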
Similarly, the receding events are plotted in Fig. 7. It can be seen from Fig. 6 that it takes a few frames from the start of the overtaking event (as annotated in the ground truth) for the proposed method to detect the overtaking vehicles. This is primarily because the part side-view classifier requires a few frames before the part side view of the overtaking vehicle appears in the regions I_L and I_R for the classifiers to detect accurately. Similarly, in Fig. 7, the proposed algorithm stops detecting the receding vehicle earlier than the ground truth, because the ground-truth annotation is done till the receding vehicle disappears from the image frame. Considering 30 frames per second, the proposed method is able to detect the overtaking events within 0.7 s ± 0.5 s from the start of the overtaking event. Similarly, in the case of receding vehicle detection, the proposed method stops detecting the receding vehicle 0.4 s ± 0.4 s before the actual receding event stops in the image frame.

Fig. 7. Analysis of receding events: each blue-red rectangle pair refers to one receding event. The x-axis (frame numbers) shows the duration of each event. The duration in the ground truth is shown by the blue rectangle. The corresponding red rectangle indicates the receding event detection result from the proposed method.

IV. ANALYSIS FOR NATURALISTIC DRIVING STUDIES (NDS)

In this section, we briefly discuss how the proposed method and the analysis presented in the previous sections can be extended for data analysis in naturalistic driving studies (NDS). Fig. 8 shows three events related to overtaking and receding vehicle detection which are identified and extracted manually by data reductionists in NDS [23]. While the analysis presented in Section III evaluates the detection of such events, we now present ways to extract information about the detected events.

Fig. 8.
Overtaking/receding events that are identified by data reductionists in NDS [23]: (a) Neighboring vehicle V_O is overtaking ego-vehicle V_E, (b) Ego-vehicle is overtaking neighboring vehicle V_R, (c) Neighboring vehicle V_O is overtaking the ego-vehicle and cutting into the ego-lane.

In addition to the number of detections listed in Table I, we extract the following information about the detected events. The duration of the events indicates the relative velocities between the ego-vehicle and the overtaking/receding vehicles. Table II is generated from 45 events in one of the drives that was used to generate the results in the previous section. Table II shows the mean, standard deviation, and minimum and maximum durations in seconds for the overtaking and receding events with respect to the ego-vehicle. Additionally, identifying overtaking events with durations less than the mean by a considerable margin could indicate specific maneuvers such as cutting in front of the ego-vehicle.
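The duration-based flagging idea above can be sketched as a simple threshold relative to the mean duration. A hedged sketch; the threshold factor is an assumed parameter, not one specified in the paper:

```python
def flag_short_events(durations_s, factor=0.5):
    """Flag events whose duration falls well below the mean duration, as
    candidate cut-in maneuvers for a data reductionist to review.
    durations_s: event durations in seconds; factor is an assumption."""
    mean_dur = sum(durations_s) / len(durations_s)
    # return the indices of events shorter than factor * mean
    return [i for i, d in enumerate(durations_s) if d < factor * mean_dur]
```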

TABLE II
NDS METRICS FOR OVERTAKING/RECEDING EVENTS

Duration Metrics     | Overtaking | Receding
Mean ± deviation (s) |      ±     |      ±
Minimum/Maximum (s)  | 0.266/     |      /5.96

ABSOLUTE RELATIVE VELOCITIES

                 | Overtaking | Receding
Left/Right (m/s) | 6.9/       | 12.4/5.6

It should be noted that, with respect to the neighboring vehicle, a receding event indicates an overtaking maneuver by the ego-vehicle. If a receding event is detected on the left of the ego-vehicle, it implies that the ego-vehicle is moving faster than the vehicles in the faster lanes (the left lanes are considered fast lanes in the USA). Therefore, the ego-vehicle was overtaking the fast-moving vehicles in the faster lanes. Similarly, an overtaking event in the right lane indicates that the other vehicles are moving faster than the ego-vehicle, and they can cause a driving hazard for the ego-vehicle. It can be seen from Table I that the proposed method is able to determine that there were 2 receding events on the left of the ego-vehicle and 8 overtaking events on the right of the ego-vehicle. These specific events could be flagged for further analysis by data reductionists in NDS. The relative velocity estimation method presented in Section II-D was used to determine the average relative velocities for the overtaking and receding events. Table II lists the relative velocities for different overtaking and receding conditions. From Table II, the following can be inferred about the drive from which the overtaking and receding vehicles were considered. The relative velocity was higher for right overtaking events, i.e. the ego-vehicle was moving slower than the vehicles in its right lane during the drive. Next, the relative velocity is 12.4 m/s during left receding events, which implies that the ego-vehicle accelerated more than the vehicles in the fast lane (left lane) during the receding events in the drive. The study presented in this section is an initial work on the analysis of overtaking and receding events for naturalistic driving studies.
More metrics and analysis items will be explored in future works.

V. CONCLUSIONS

In this paper, an appearance-based technique to detect both overtaking and receding vehicles w.r.t. the ego-vehicle is presented. The classification windows obtained from Haar features and Adaboost cascaded classifiers are tracked in a bi-directional manner, which enables the detection of both kinds of events despite their opposite optical flows. The detailed evaluations show that the proposed method detects/terminates the events within an average of 1 second as compared to the ground truth. The extension of the proposed work to extract information about the drive for NDS is reported for the first time in the context of receding and overtaking vehicles. Future work will focus on improving the accuracy of the proposed event detection technique, and also on extracting more analysis metrics for NDS.

REFERENCES

[1] Traffic Safety Facts 2011, National Highway Traffic Safety Administration, U.S. Department of Transportation, Tech. Rep.
[2] S. Sivaraman and M. M. Trivedi, "A General Active-Learning Framework for On-Road Vehicle Recognition and Tracking," IEEE Trans. on Intell. Transp. Sys., vol. 11, no. 2, June.
[3] R. K. Satzoda and M. M. Trivedi, "Efficient Lane and Vehicle Detection with Integrated Synergies (ELVIS)," in 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops on Embedded Vision, 2014, to be published.
[4] S. Sivaraman and M. M. Trivedi, "Integrated Lane and Vehicle Detection, Localization, and Tracking: A Synergistic Approach," IEEE Trans. on Intell. Transp. Sys., pp. 1-12.
[5] E. Ohn-Bar, A. Tawari, S. Martin, and M. M. Trivedi, "Vision on Wheels: Looking at Driver, Vehicle, and Surround for On-Road Maneuver Analysis," in IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops.
[6] R. K. Satzoda and M. M. Trivedi, "Selective Salient Feature based Lane Analysis," in 2013 IEEE Intell. Transp. Sys. Conference, 2013.
[7] R. K. Satzoda and M. M. Trivedi, "Vision-based Lane Analysis: Exploration of Issues and Approaches for Embedded Realization," in 2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops on Embedded Vision, 2013.
[8] R. K. Satzoda and M. M. Trivedi, "On Performance Evaluation Metrics for Lane Estimation," in International Conference on Pattern Recognition, 2014, to appear.
[9] A. Prioletti, A. Møgelmose, P. Grisleri, M. M. Trivedi, A. Broggi, and T. B. Moeslund, "Part-Based Pedestrian Detection and Feature-Based Tracking for Driver Assistance: Real-Time, Robust Algorithms, and Evaluation," IEEE Trans. on Intell. Transp. Sys., vol. 14, no. 3.
[10] B. Morris and M. M. Trivedi, "Trajectory Learning for Activity Understanding: Unsupervised, Multilevel, and Long-Term Adaptive Approach," IEEE Transactions on Pattern Analysis and Machine Intelligence, Nov.
[11] S. Sivaraman and M. M. Trivedi, "Dynamic Probabilistic Drivability Maps for Lane Change and Merge Driver Assistance," IEEE Trans. on Intell. Transp. Sys., to appear.
[12] B. Morris and M. Trivedi, "Robust classification and tracking of vehicles in traffic video streams," 2006 IEEE Intell. Transp. Sys. Conference.
[13] R. K. Satzoda and M. M. Trivedi, "Drive Analysis using Vehicle Dynamics and Vision-based Lane Semantics," IEEE Transactions on Intelligent Transportation Systems, to appear.
[14] L. N. Boyle, S. Hallmark, J. D. Lee, D. V. McGehee, D. M. Neyens, and N. J. Ward, "Integration of Analysis Methods and Development of Analysis Plan - SHRP2 Safety Research," Transportation Research Board of the National Academies, Tech. Rep.
[15] R. K. Satzoda, P. Gunaratne, and M. M. Trivedi, "Drive Analysis using Lane Semantics for Data Reduction in Naturalistic Driving Studies," in IEEE Intell. Veh. Symp., 2014.
[16] S. Sivaraman and M. M. Trivedi, "Looking at Vehicles on the Road: A Survey of Vision-Based Vehicle Detection, Tracking, and Behavior Analysis," IEEE Trans. on Intell. Transp. Sys., vol. 14, no. 4, Dec.
[17] A. Ramirez, E. Ohn-Bar, and M. Trivedi, "Integrating Motion and Appearance for Overtaking Vehicle Detection," IEEE Intell. Veh. Symp.
[18] X. Zhang, P. Jiang, and F. Wang, "Overtaking Vehicle Detection Using a Spatio-temporal CRF," in IEEE Intell. Veh. Symp., 2014.
[19] D. Hultqvist, J. Roll, F. Svensson, J. Dahlin, and T. B. Schön, "Detecting and positioning overtaking vehicles using 1D optical flow," in IEEE Intell. Veh. Symp., 2014.
[20] F. Garcia, P. Cerri, and A. Broggi, "Data Fusion for Overtaking Vehicle Detection based on Radar and Optical Flow," in IEEE Intell. Veh. Symp., 2012.
[21] P. Guzmán, J. Díaz, J. Ralli, R. Agís, and E. Ros, "Low-cost sensor to detect overtaking based on optical flow," Machine Vision and Applications, vol. 25, no. 3, Dec.
[22] A. Broggi, "Parallel and local feature extraction: a real-time approach to road boundary detection," IEEE Trans. on Image Proc., vol. 4, no. 2, Jan.
[23] Researcher Dictionary for Video Reduction Data, VTTI, Tech. Rep., 2012.


More information

Traffic safety research: a challenge for extreme value statistics

Traffic safety research: a challenge for extreme value statistics Traffic safety research: a challenge for extreme value statistics VTTI Driving Transportation with Technology Jenny Jonasson, Astra-Zeneca, Gothenburg Holger Rootzén, Mathematical Sciences, Chalmers http://www.math.chalmers.se/~rootzen/

More information

Advanced Adaptive Cruise Control Based on Collision Risk Assessment

Advanced Adaptive Cruise Control Based on Collision Risk Assessment Advanced Adaptive Cruise Control Based on Collision Risk Assessment Hanwool Woo 1, Yonghoon Ji 2, Yusuke Tamura 1, Yasuhide Kuroda 3, Takashi Sugano 4, Yasunori Yamamoto 4, Atsushi Yamashita 1, and Hajime

More information

2.1 Traffic Stream Characteristics. Time Space Diagram and Measurement Procedures Variables of Interest

2.1 Traffic Stream Characteristics. Time Space Diagram and Measurement Procedures Variables of Interest 2.1 Traffic Stream Characteristics Time Space Diagram and Measurement Procedures Variables of Interest Traffic Stream Models 2.1 Traffic Stream Characteristics Time Space Diagram Speed =100km/h = 27.78

More information

We provide two sections from the book (in preparation) Intelligent and Autonomous Road Vehicles, by Ozguner, Acarman and Redmill.

We provide two sections from the book (in preparation) Intelligent and Autonomous Road Vehicles, by Ozguner, Acarman and Redmill. We provide two sections from the book (in preparation) Intelligent and Autonomous Road Vehicles, by Ozguner, Acarman and Redmill. 2.3.2. Steering control using point mass model: Open loop commands We consider

More information

Time Space Diagrams for Thirteen Shock Waves

Time Space Diagrams for Thirteen Shock Waves Time Space Diagrams for Thirteen Shock Waves Benjamin Coifman Institute of Transportation Studies 19 McLaughlin Hall University of California Berkeley, CA 9472 zephyr@eecs.berkeley.edu http://www.cs.berkeley.edu/~zephyr

More information

FIELD EVALUATION OF SMART SENSOR VEHICLE DETECTORS AT RAILROAD GRADE CROSSINGS VOLUME 4: PERFORMANCE IN ADVERSE WEATHER CONDITIONS

FIELD EVALUATION OF SMART SENSOR VEHICLE DETECTORS AT RAILROAD GRADE CROSSINGS VOLUME 4: PERFORMANCE IN ADVERSE WEATHER CONDITIONS CIVIL ENGINEERING STUDIES Illinois Center for Transportation Series No. 15-002 UILU-ENG-2015-2002 ISSN: 0197-9191 FIELD EVALUATION OF SMART SENSOR VEHICLE DETECTORS AT RAILROAD GRADE CROSSINGS VOLUME 4:

More information

JOINT INTERPRETATION OF ON-BOARD VISION AND STATIC GPS CARTOGRAPHY FOR DETERMINATION OF CORRECT SPEED LIMIT

JOINT INTERPRETATION OF ON-BOARD VISION AND STATIC GPS CARTOGRAPHY FOR DETERMINATION OF CORRECT SPEED LIMIT JOINT INTERPRETATION OF ON-BOARD VISION AND STATIC GPS CARTOGRAPHY FOR DETERMINATION OF CORRECT SPEED LIMIT Alexandre Bargeton, Fabien Moutarde, Fawzi Nashashibi and Anne-Sophie Puthon Robotics Lab (CAOR),

More information

Multiple Changepoint Detection on speed profile in work zones using SHRP 2 Naturalistic Driving Study Data

Multiple Changepoint Detection on speed profile in work zones using SHRP 2 Naturalistic Driving Study Data Multiple Changepoint Detection on speed profile in work zones using SHRP 2 Naturalistic Driving Study Data Hossein Naraghi 2017 Mid-Continent TRS Background Presence of road constructions and maintenance

More information

Visibility Estimation of Traffic Signals under Rainy Weather Conditions for Smart Driving Support

Visibility Estimation of Traffic Signals under Rainy Weather Conditions for Smart Driving Support 2012 15th International IEEE Conference on Intelligent Transportation Systems Anchorage, Alaska, USA, September 16-19, 2012 Visibility Estimation of Traffic Signals under Rainy Weather Conditions for Smart

More information

Evaluation of fog-detection and advisory-speed system

Evaluation of fog-detection and advisory-speed system Evaluation of fog-detection and advisory-speed system A. S. Al-Ghamdi College of Engineering, King Saud University, P. O. Box 800, Riyadh 11421, Saudi Arabia Abstract Highway safety is a major concern

More information

HELLA AGLAIA MOBILE VISION GMBH

HELLA AGLAIA MOBILE VISION GMBH HELLA AGLAIA MOBILE VISION GMBH DRIVING SOFTWARE INNOVATION H E L L A A g l a i a M o b i l e V i s i o n G m b H I C o m p a n y P r e s e n t a t i o n I D e l h i, A u g u s t 2 0 1 7 11 Hella Aglaia

More information

A NOVEL METHOD TO EVALUATE THE SAFETY OF HIGHLY AUTOMATED VEHICLES. Joshua L. Every Transportation Research Center Inc. United States of America

A NOVEL METHOD TO EVALUATE THE SAFETY OF HIGHLY AUTOMATED VEHICLES. Joshua L. Every Transportation Research Center Inc. United States of America A NOVEL METHOD TO EVALUATE THE SAFETY OF HIGHLY AUTOMATED VEHICLES Joshua L. Every Transportation Research Center Inc. United States of America Frank Barickman John Martin National Highway Traffic Safety

More information

arxiv: v1 [cs.cv] 15 May 2018

arxiv: v1 [cs.cv] 15 May 2018 Multi-Modal Trajectory Prediction of Surrounding Vehicles with Maneuver based LSTMs Nachiket Deo and Mohan M. Trivedi arxiv:1805.05499v1 [cs.cv] 15 May 2018 Abstract To safely and efficiently navigate

More information

Statistical Model Checking Applied on Perception and Decision-making Systems for Autonomous Driving

Statistical Model Checking Applied on Perception and Decision-making Systems for Autonomous Driving Statistical Model Checking Applied on Perception and Decision-making Systems for Autonomous Driving J. Quilbeuf 1 M. Barbier 2,3 L. Rummelhard 3 C. Laugier 2 A. Legay 1 T. Genevois 2 J. Ibañez-Guzmán 3

More information

Learning-based approach for online lane change intention prediction

Learning-based approach for online lane change intention prediction Learning-based approach for online lane change intention prediction P. Kumar, Mathias Perrollaz, Stéphanie Lefèvre, Christian Laugier To cite this version: P. Kumar, Mathias Perrollaz, Stéphanie Lefèvre,

More information

Shankar Shivappa University of California, San Diego April 26, CSE 254 Seminar in learning algorithms

Shankar Shivappa University of California, San Diego April 26, CSE 254 Seminar in learning algorithms Recognition of Visual Speech Elements Using Adaptively Boosted Hidden Markov Models. Say Wei Foo, Yong Lian, Liang Dong. IEEE Transactions on Circuits and Systems for Video Technology, May 2004. Shankar

More information

Chapter 5 Traffic Flow Characteristics

Chapter 5 Traffic Flow Characteristics Chapter 5 Traffic Flow Characteristics 1 Contents 2 Introduction The Nature of Traffic Flow Approaches to Understanding Traffic Flow Parameters Connected with Traffic Flow Categories of Traffic Flow The

More information

FPGA Implementation of a HOG-based Pedestrian Recognition System

FPGA Implementation of a HOG-based Pedestrian Recognition System MPC Workshop Karlsruhe 10/7/2009 FPGA Implementation of a HOG-based Pedestrian Recognition System Sebastian Bauer sebastian.bauer@fh-aschaffenburg.de Laboratory for Pattern Recognition and Computational

More information

A RAIN PIXEL RESTORATION ALGORITHM FOR VIDEOS WITH DYNAMIC SCENES

A RAIN PIXEL RESTORATION ALGORITHM FOR VIDEOS WITH DYNAMIC SCENES A RAIN PIXEL RESTORATION ALGORITHM FOR VIDEOS WITH DYNAMIC SCENES V.Sridevi, P.Malarvizhi, P.Mathivannan Abstract Rain removal from a video is a challenging problem due to random spatial distribution and

More information

Statistical Filters for Crowd Image Analysis

Statistical Filters for Crowd Image Analysis Statistical Filters for Crowd Image Analysis Ákos Utasi, Ákos Kiss and Tamás Szirányi Distributed Events Analysis Research Group, Computer and Automation Research Institute H-1111 Budapest, Kende utca

More information

A Probabilistic Relational Model for Characterizing Situations in Dynamic Multi-Agent Systems

A Probabilistic Relational Model for Characterizing Situations in Dynamic Multi-Agent Systems A Probabilistic Relational Model for Characterizing Situations in Dynamic Multi-Agent Systems Daniel Meyer-Delius 1, Christian Plagemann 1, Georg von Wichert 2, Wendelin Feiten 2, Gisbert Lawitzky 2, and

More information

Global Scene Representations. Tilke Judd

Global Scene Representations. Tilke Judd Global Scene Representations Tilke Judd Papers Oliva and Torralba [2001] Fei Fei and Perona [2005] Labzebnik, Schmid and Ponce [2006] Commonalities Goal: Recognize natural scene categories Extract features

More information

Analysis of Forward Collision Warning System. Based on Vehicle-mounted Sensors on. Roads with an Up-Down Road gradient

Analysis of Forward Collision Warning System. Based on Vehicle-mounted Sensors on. Roads with an Up-Down Road gradient Contemporary Engineering Sciences, Vol. 7, 2014, no. 22, 1139-1145 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.12988/ces.2014.49142 Analysis of Forward Collision Warning System Based on Vehicle-mounted

More information

Modeling Complex Temporal Composition of Actionlets for Activity Prediction

Modeling Complex Temporal Composition of Actionlets for Activity Prediction Modeling Complex Temporal Composition of Actionlets for Activity Prediction ECCV 2012 Activity Recognition Reading Group Framework of activity prediction What is an Actionlet To segment a long sequence

More information

Chalmers Publication Library

Chalmers Publication Library Chalmers Publication Library Probabilistic Threat Assessment and Driver Modeling in Collision Avoidance Systems This document has been downloaded from Chalmers Publication Library (CPL). It is the author

More information

EXTRACTION OF PARKING LOT STRUCTURE FROM AERIAL IMAGE IN URBAN AREAS. Received September 2015; revised January 2016

EXTRACTION OF PARKING LOT STRUCTURE FROM AERIAL IMAGE IN URBAN AREAS. Received September 2015; revised January 2016 International Journal of Innovative Computing, Information and Control ICIC International c 2016 ISSN 1349-4198 Volume 12, Number 2, April 2016 pp. 371 383 EXTRACTION OF PARKING LOT STRUCTURE FROM AERIAL

More information

Anticipating Visual Representations from Unlabeled Data. Carl Vondrick, Hamed Pirsiavash, Antonio Torralba

Anticipating Visual Representations from Unlabeled Data. Carl Vondrick, Hamed Pirsiavash, Antonio Torralba Anticipating Visual Representations from Unlabeled Data Carl Vondrick, Hamed Pirsiavash, Antonio Torralba Overview Problem Key Insight Methods Experiments Problem: Predict future actions and objects Image

More information

TRAJECTORY PLANNING FOR AUTOMATED YIELDING MANEUVERS

TRAJECTORY PLANNING FOR AUTOMATED YIELDING MANEUVERS TRAJECTORY PLANNING FOR AUTOMATED YIELDING MANEUVERS, Mattias Brännström, Erik Coelingh, and Jonas Fredriksson Funded by: FFI strategic vehicle research and innovation ITRL conference Why automated maneuvers?

More information

Assessing the uncertainty in micro-simulation model outputs

Assessing the uncertainty in micro-simulation model outputs Assessing the uncertainty in micro-simulation model outputs S. Zhu 1 and L. Ferreira 2 1 School of Civil Engineering, Faculty of Engineering, Architecture and Information Technology, The University of

More information

Unsupervised Learning with Permuted Data

Unsupervised Learning with Permuted Data Unsupervised Learning with Permuted Data Sergey Kirshner skirshne@ics.uci.edu Sridevi Parise sparise@ics.uci.edu Padhraic Smyth smyth@ics.uci.edu School of Information and Computer Science, University

More information

A Study on Performance Analysis of V2V Communication Based AEB System Considering Road Friction at Slopes

A Study on Performance Analysis of V2V Communication Based AEB System Considering Road Friction at Slopes , pp. 71-80 http://dx.doi.org/10.14257/ijfgcn.2016.9.11.07 A Study on Performance Analysis of V2V Communication Based AEB System Considering Road Friction at Slopes Sangduck Jeon 1, Jungeun Lee 1 and Byeongwoo

More information

A Proposed Driver Assistance System in Adverse Weather Conditions

A Proposed Driver Assistance System in Adverse Weather Conditions 1 A Proposed Driver Assistance System in Adverse Weather Conditions National Rural ITS Conference Student Paper Competition Second runner-up Ismail Zohdy Ph.D. Student, Department of Civil & Environmental

More information

Safe Transportation Research & Education Center UC Berkeley

Safe Transportation Research & Education Center UC Berkeley Safe Transportation Research & Education Center UC Berkeley Title: A 3D Computer Simulation Test of the Leibowitz Hypothesis Author: Barton, Joseph E., UC Berkeley Department of Mechanical Engineering

More information

How would surround vehicles move? A Unified Framework for Maneuver Classification and Motion Prediction

How would surround vehicles move? A Unified Framework for Maneuver Classification and Motion Prediction JOURNAL NAME 1 How would surround vehicles move? A Unified Framework for Maneuver Classification and Motion Prediction Nachet Deo, Akshay Rangesh, and Mohan M. Trivedi, Fellow, IEEE Abstract Reliable prediction

More information

On Motion Models for Target Tracking in Automotive Applications

On Motion Models for Target Tracking in Automotive Applications On Motion Models for Target Tracking in Automotive Applications Markus Bühren and Bin Yang Chair of System Theory and Signal Processing University of Stuttgart, Germany www.lss.uni-stuttgart.de Abstract

More information

Available online Journal of Scientific and Engineering Research, 2017, 4(4): Research Article

Available online  Journal of Scientific and Engineering Research, 2017, 4(4): Research Article Available online www.jsaer.com, 2017, 4(4):137-142 Research Article ISSN: 2394-2630 CODEN(USA): JSERBR A Qualitative Examination of the Composition of the Cooperative Vehicles Çağlar Koşun 1, Çağatay Kök

More information

Stereoscopic Programmable Automotive Headlights for Improved Safety Road

Stereoscopic Programmable Automotive Headlights for Improved Safety Road Stereoscopic Programmable Automotive Headlights for Improved Safety Road Project ID: 30 Srinivasa Narasimhan (PI), Associate Professor Robotics Institute, Carnegie Mellon University https://orcid.org/0000-0003-0389-1921

More information

Tracking Human Heads Based on Interaction between Hypotheses with Certainty

Tracking Human Heads Based on Interaction between Hypotheses with Certainty Proc. of The 13th Scandinavian Conference on Image Analysis (SCIA2003), (J. Bigun and T. Gustavsson eds.: Image Analysis, LNCS Vol. 2749, Springer), pp. 617 624, 2003. Tracking Human Heads Based on Interaction

More information

Graphical Object Models for Detection and Tracking

Graphical Object Models for Detection and Tracking Graphical Object Models for Detection and Tracking (ls@cs.brown.edu) Department of Computer Science Brown University Joined work with: -Ying Zhu, Siemens Corporate Research, Princeton, NJ -DorinComaniciu,

More information

The state of the art and beyond

The state of the art and beyond Feature Detectors and Descriptors The state of the art and beyond Local covariant detectors and descriptors have been successful in many applications Registration Stereo vision Motion estimation Matching

More information

Online Appearance Model Learning for Video-Based Face Recognition

Online Appearance Model Learning for Video-Based Face Recognition Online Appearance Model Learning for Video-Based Face Recognition Liang Liu 1, Yunhong Wang 2,TieniuTan 1 1 National Laboratory of Pattern Recognition Institute of Automation, Chinese Academy of Sciences,

More information

Two-View Segmentation of Dynamic Scenes from the Multibody Fundamental Matrix

Two-View Segmentation of Dynamic Scenes from the Multibody Fundamental Matrix Two-View Segmentation of Dynamic Scenes from the Multibody Fundamental Matrix René Vidal Stefano Soatto Shankar Sastry Department of EECS, UC Berkeley Department of Computer Sciences, UCLA 30 Cory Hall,

More information

Analytical investigation on the minimum traffic delay at a two-phase. intersection considering the dynamical evolution process of queues

Analytical investigation on the minimum traffic delay at a two-phase. intersection considering the dynamical evolution process of queues Analytical investigation on the minimum traffic delay at a two-phase intersection considering the dynamical evolution process of queues Hong-Ze Zhang 1, Rui Jiang 1,2, Mao-Bin Hu 1, Bin Jia 2 1 School

More information

Unsupervised Activity Perception in Crowded and Complicated Scenes Using Hierarchical Bayesian Models

Unsupervised Activity Perception in Crowded and Complicated Scenes Using Hierarchical Bayesian Models SUBMISSION TO IEEE TRANS. ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 1 Unsupervised Activity Perception in Crowded and Complicated Scenes Using Hierarchical Bayesian Models Xiaogang Wang, Xiaoxu Ma,

More information

2D Image Processing Face Detection and Recognition

2D Image Processing Face Detection and Recognition 2D Image Processing Face Detection and Recognition Prof. Didier Stricker Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de

More information

CSC487/2503: Foundations of Computer Vision. Visual Tracking. David Fleet

CSC487/2503: Foundations of Computer Vision. Visual Tracking. David Fleet CSC487/2503: Foundations of Computer Vision Visual Tracking David Fleet Introduction What is tracking? Major players: Dynamics (model of temporal variation of target parameters) Measurements (relation

More information

Reconnaissance d objetsd et vision artificielle

Reconnaissance d objetsd et vision artificielle Reconnaissance d objetsd et vision artificielle http://www.di.ens.fr/willow/teaching/recvis09 Lecture 6 Face recognition Face detection Neural nets Attention! Troisième exercice de programmation du le

More information

OBJECT DETECTION FROM MMS IMAGERY USING DEEP LEARNING FOR GENERATION OF ROAD ORTHOPHOTOS

OBJECT DETECTION FROM MMS IMAGERY USING DEEP LEARNING FOR GENERATION OF ROAD ORTHOPHOTOS OBJECT DETECTION FROM MMS IMAGERY USING DEEP LEARNING FOR GENERATION OF ROAD ORTHOPHOTOS Y. Li 1,*, M. Sakamoto 1, T. Shinohara 1, T. Satoh 1 1 PASCO CORPORATION, 2-8-10 Higashiyama, Meguro-ku, Tokyo 153-0043,

More information

Multi-scale Geometric Summaries for Similarity-based Upstream S

Multi-scale Geometric Summaries for Similarity-based Upstream S Multi-scale Geometric Summaries for Similarity-based Upstream Sensor Fusion Duke University, ECE / Math 3/6/2019 Overall Goals / Design Choices Leverage multiple, heterogeneous modalities in identification

More information

A Probabilistic Relational Model for Characterizing Situations in Dynamic Multi-Agent Systems

A Probabilistic Relational Model for Characterizing Situations in Dynamic Multi-Agent Systems A Probabilistic Relational Model for Characterizing Situations in Dynamic Multi-Agent Systems Daniel Meyer-Delius 1, Christian Plagemann 1, Georg von Wichert 2, Wendelin Feiten 2, Gisbert Lawitzky 2, and

More information

arxiv: v1 [physics.soc-ph] 13 Oct 2017

arxiv: v1 [physics.soc-ph] 13 Oct 2017 arxiv:171.5752v1 [physics.soc-ph] 13 Oct 217 Analysis of risk levels for traffic on a multi-lane highway Michael Herty Giuseppe Visconti Institut für Geometrie und Praktische Mathematik (IGPM), RWTH Aachen

More information

Eigen analysis and gray alignment for shadow detection applied to urban scene images

Eigen analysis and gray alignment for shadow detection applied to urban scene images Eigen analysis and gray alignment for shadow detection applied to urban scene images Tiago Souza, Leizer Schnitman and Luciano Oliveira Intelligent Vision Research Lab, CTAI Federal University of Bahia

More information

Deep Reinforcement Learning for Unsupervised Video Summarization with Diversity- Representativeness Reward

Deep Reinforcement Learning for Unsupervised Video Summarization with Diversity- Representativeness Reward Deep Reinforcement Learning for Unsupervised Video Summarization with Diversity- Representativeness Reward Kaiyang Zhou, Yu Qiao, Tao Xiang AAAI 2018 What is video summarization? Goal: to automatically

More information

Design of Driver-Assist Systems Under Probabilistic Safety Specifications Near Stop Signs

Design of Driver-Assist Systems Under Probabilistic Safety Specifications Near Stop Signs Design of Driver-Assist Systems Under Probabilistic Safety Specifications Near Stop Signs The MIT Faculty has made this article openly available. Please share how this access benefits you. Your story matters.

More information

Indirect Clinical Evidence of Driver Inattention as a Cause of Crashes

Indirect Clinical Evidence of Driver Inattention as a Cause of Crashes University of Iowa Iowa Research Online Driving Assessment Conference 2007 Driving Assessment Conference Jul 10th, 12:00 AM Indirect Clinical Evidence of Driver Inattention as a Cause of Crashes Gary A.

More information

Improved Kalman Filter Initialisation using Neurofuzzy Estimation

Improved Kalman Filter Initialisation using Neurofuzzy Estimation Improved Kalman Filter Initialisation using Neurofuzzy Estimation J. M. Roberts, D. J. Mills, D. Charnley and C. J. Harris Introduction It is traditional to initialise Kalman filters and extended Kalman

More information

Section Distance and displacment

Section Distance and displacment Chapter 11 Motion Section 11.1 Distance and displacment Choosing a Frame of Reference What is needed to describe motion completely? A frame of reference is a system of objects that are not moving with

More information

GISLab (UK) School of Computing and Mathematical Sciences Liverpool John Moores University, UK. Dr. Michael Francis. Keynote Presentation

GISLab (UK) School of Computing and Mathematical Sciences Liverpool John Moores University, UK. Dr. Michael Francis. Keynote Presentation GISLab (UK) School of Computing and Mathematical Sciences Liverpool John Moores University, UK Dr. Michael Francis Keynote Presentation Dr. Michael Francis North-West GIS Research Laboratory, LJMU, Liverpool,

More information

ILR Perception System Using Stereo Vision and Radar

ILR Perception System Using Stereo Vision and Radar ILR - 05 Perception System Using Stereo Vision and Radar Team A - Amit Agarwal Harry Golash, Yihao Qian, Menghan Zhang, Zihao (Theo) Zhang Sponsored by: Delphi Automotive November 23, 2016 Table of Contents

More information

Unsupervised Activity Perception in Crowded and Complicated Scenes Using Hierarchical Bayesian Models

Unsupervised Activity Perception in Crowded and Complicated Scenes Using Hierarchical Bayesian Models SUBMISSION TO IEEE TRANS. ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 1 Unsupervised Activity Perception in Crowded and Complicated Scenes Using Hierarchical Bayesian Models Xiaogang Wang, Xiaoxu Ma,

More information

A Multi-task Learning Strategy for Unsupervised Clustering via Explicitly Separating the Commonality

A Multi-task Learning Strategy for Unsupervised Clustering via Explicitly Separating the Commonality A Multi-task Learning Strategy for Unsupervised Clustering via Explicitly Separating the Commonality Shu Kong, Donghui Wang Dept. of Computer Science and Technology, Zhejiang University, Hangzhou 317,

More information

CHAPTER 2. CAPACITY OF TWO-WAY STOP-CONTROLLED INTERSECTIONS

CHAPTER 2. CAPACITY OF TWO-WAY STOP-CONTROLLED INTERSECTIONS CHAPTER 2. CAPACITY OF TWO-WAY STOP-CONTROLLED INTERSECTIONS 1. Overview In this chapter we will explore the models on which the HCM capacity analysis method for two-way stop-controlled (TWSC) intersections

More information

Snow Removal Equipment Visibility

Snow Removal Equipment Visibility Ministry of Transportation Snow Removal Equipment Visibility 2015 Winter Maintenance Peer Exchange September 2015 Winter Equipment Visibility the issue Snow removal equipment often operates under the most

More information

N-mode Analysis (Tensor Framework) Behrouz Saghafi

N-mode Analysis (Tensor Framework) Behrouz Saghafi N-mode Analysis (Tensor Framework) Behrouz Saghafi N-mode Analysis (Tensor Framework) Drawback of 1-mode analysis (e.g. PCA): Captures the variance among just a single factor Our training set contains

More information

Tennis player segmentation for semantic behavior analysis

Tennis player segmentation for semantic behavior analysis Proposta di Tennis player segmentation for semantic behavior analysis Architettura Software per Robot Mobili Vito Renò, Nicola Mosca, Massimiliano Nitti, Tiziana D Orazio, Donato Campagnoli, Andrea Prati,

More information

Shape of Gaussians as Feature Descriptors

Shape of Gaussians as Feature Descriptors Shape of Gaussians as Feature Descriptors Liyu Gong, Tianjiang Wang and Fang Liu Intelligent and Distributed Computing Lab, School of Computer Science and Technology Huazhong University of Science and

More information

A Hierarchical Convolutional Neural Network for Mitosis Detection in Phase-Contrast Microscopy Images

A Hierarchical Convolutional Neural Network for Mitosis Detection in Phase-Contrast Microscopy Images A Hierarchical Convolutional Neural Network for Mitosis Detection in Phase-Contrast Microscopy Images Yunxiang Mao and Zhaozheng Yin (B) Department of Computer Science, Missouri University of Science and

More information

Automotive Radar and Radar Based Perception for Driverless Cars

Automotive Radar and Radar Based Perception for Driverless Cars Automotive Radar and Radar Based Perception for Driverless Cars Radar Symposium February 13, 2017 Auditorium, Ben-Gurion University of the Negev Dr. Juergen Dickmann, DAIMLER AG, Ulm, Germany 1 Radar Team

More information

Unit 2 - Linear Motion and Graphical Analysis

Unit 2 - Linear Motion and Graphical Analysis Unit 2 - Linear Motion and Graphical Analysis Motion in one dimension is particularly easy to deal with because all the information about it can be encapsulated in two variables: x, the position of the

More information

Ranking Verification Counterexamples: An Invariant guided approach

Ranking Verification Counterexamples: An Invariant guided approach Ranking Verification Counterexamples: An Invariant guided approach Ansuman Banerjee Indian Statistical Institute Joint work with Pallab Dasgupta, Srobona Mitra and Harish Kumar Complex Systems Everywhere

More information

Asaf Bar Zvi Adi Hayat. Semantic Segmentation

Asaf Bar Zvi Adi Hayat. Semantic Segmentation Asaf Bar Zvi Adi Hayat Semantic Segmentation Today s Topics Fully Convolutional Networks (FCN) (CVPR 2015) Conditional Random Fields as Recurrent Neural Networks (ICCV 2015) Gaussian Conditional random

More information

Modular Vehicle Control for Transferring Semantic Information Between Weather Conditions Using GANs

Modular Vehicle Control for Transferring Semantic Information Between Weather Conditions Using GANs Modular Vehicle Control for Transferring Semantic Information Between Weather Conditions Using GANs Patrick Wenzel 1,2, Qadeer Khan 1,2, Daniel Cremers 1,2, and Laura Leal-Taixé 1 1 Technical University

More information

Data-Driven Probabilistic Modeling and Verification of Human Driver Behavior

Data-Driven Probabilistic Modeling and Verification of Human Driver Behavior Formal Verification and Modeling in Human-Machine Systems: Papers from the AAAI Spring Symposium Data-Driven Probabilistic Modeling and Verification of Human Driver Behavior D. Sadigh, K. Driggs-Campbell,

More information

OVERLAPPING ANIMAL SOUND CLASSIFICATION USING SPARSE REPRESENTATION

OVERLAPPING ANIMAL SOUND CLASSIFICATION USING SPARSE REPRESENTATION OVERLAPPING ANIMAL SOUND CLASSIFICATION USING SPARSE REPRESENTATION Na Lin, Haixin Sun Xiamen University Key Laboratory of Underwater Acoustic Communication and Marine Information Technology, Ministry

More information

AN ADAPTIVE cruise control (ACC) system, which maintains

AN ADAPTIVE cruise control (ACC) system, which maintains IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, VOL. 8, NO. 3, SEPTEMBER 2007 549 Sensor Fusion for Predicting Vehicles Path for Collision Avoidance Systems Aris Polychronopoulos, Member, IEEE,

More information
