Multimodal Detection of Driver Distraction
-
2017-01-01
Abstract:
Distracted driving has become a major cause of crashes and loss of life. Although legislation prohibits the use of cellphones while driving, people continue to use them. We reason that a system that automatically detects when a driver is distracted and warns them (even shutting down an application if necessary) could prevent some serious accidents. The goal of this project was to extend our previous work on automatically detecting driver distraction by using more modalities than speech alone. This required collecting a new dataset in which car information (gas pedal position, etc.) and video of the driver from a good-quality rear-facing camera were recorded. We collected data from 30 subjects driving the OpenDS simulator we had previously created. In our earlier dataset, we had relied on third-party observation to annotate the places where the driver was believed to be distracted. For the new dataset, we instead asked each subject to watch the video of their own session and to stop the recording at places where they felt they “noticed something else” or “may have been less attentive.” They then rated each such moment using a modification of the NASA Task Load Index, and this annotation was logged and synchronized with the rest of the data. The route conditions were also modified slightly: to the hairpin turns and stop sign of the earlier database we added differently placed turns, a stop light, and street signs that could be seen from a distance. We also placed small packages along the roadway to represent minor visual distractions that should not affect the safety of the driver but could draw their attention.
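The synchronization step described above — attaching each subject's self-annotation marks to the simulator's multimodal log — can be sketched as a nearest-timestamp match. This is a minimal illustration only; the field names (`t`, `gas_pedal`), the tolerance value, and the log layout are assumptions, not the project's actual data schema.

```python
import bisect

def sync_annotations(log, marks, tolerance=0.5):
    """Attach each annotation mark to the nearest log record in time.

    log   -- list of dicts sorted by timestamp 't' (seconds), e.g. car data
    marks -- list of timestamps (seconds) where the subject stopped playback
    Returns (mark_time, matched_record) pairs; marks farther than
    `tolerance` seconds from any record are paired with None.
    """
    times = [r["t"] for r in log]
    out = []
    for m in marks:
        i = bisect.bisect_left(times, m)
        best = None
        # candidates: the record just before and just after the mark
        for j in (i - 1, i):
            if 0 <= j < len(times) and abs(times[j] - m) <= tolerance:
                if best is None or abs(times[j] - m) < abs(times[best] - m):
                    best = j
        out.append((m, log[best] if best is not None else None))
    return out

# Hypothetical log fragment for illustration
log = [{"t": 0.0, "gas_pedal": 0.2},
       {"t": 0.1, "gas_pedal": 0.3},
       {"t": 0.2, "gas_pedal": 0.1}]
print(sync_annotations(log, [0.11, 5.0]))
```

A mark at 0.11 s matches the record at 0.1 s; a mark at 5.0 s, falling outside the tolerance of every record, is paired with None.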