Using Video Analytics to Automatically Annotate Driver Behavior and Context in Naturalistic Driving Data
2024-12-01
Edition: Research Summary Report
Abstract: Naturalistic driving data provides a wealth of information for researchers studying driver behavior and distracted driving. However, manually annotating the videos to extract the data is costly and time-consuming. This project's research team set out to develop a system to analyze videos from the second Strategic Highway Research Program Naturalistic Driving Study dataset and automatically produce annotations and descriptors for events, behavior, and driving scenarios related to transportation safety.(1) The project had four objectives: characterize high-level driver behavior, such as eating or using a cellphone; classify the environment outside the vehicle, such as the position of roadway objects, work zones, and intersections; understand interactions and dependencies between drivers and the surrounding environment, such as looking at a billboard or a passing vehicle; and demonstrate how the video analytics techniques used in this study can help human factors researchers address research questions in novel ways. The researchers developed and tested advanced computer vision algorithms, including deep neural network (DNN)-based methods, to capture the spatial and temporal information embedded in the naturalistic driving videos. The DNN models included convolutional neural network models for image recognition and transformer-based models for processing sequential data. All code developed as part of this project is open source.
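The abstract pairs convolutional models (spatial features within a frame) with transformer-based models (temporal dependencies across frames). The sketch below is not the project's code; it is a minimal, self-contained illustration of those two primitives, using a hypothetical toy "clip" of random grayscale frames, a 2D cross-correlation for per-frame spatial features, and scaled dot-product self-attention (the core transformer operation) across frames.

```python
import numpy as np

def conv2d(frame, kernel):
    """Valid 2D cross-correlation: the spatial-feature primitive in CNNs."""
    H, W = frame.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(frame[i:i + kh, j:j + kw] * kernel)
    return out

def attention(Q, K, V):
    """Scaled dot-product self-attention: the temporal core of transformers."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # rows sum to 1
    return weights @ V

# Hypothetical mini-pipeline over a short clip of grayscale frames.
rng = np.random.default_rng(0)
clip = rng.random((5, 8, 8))           # 5 frames, each 8x8 pixels
edge = np.array([[1.0, -1.0]])         # toy horizontal edge kernel
feats = np.stack([conv2d(f, edge).reshape(-1) for f in clip])  # (5, 56)
ctx = attention(feats, feats, feats)   # temporal context per frame, (5, 56)
print(ctx.shape)
```

In a real pipeline the handwritten convolution would be a trained CNN backbone and the single attention call would be stacked transformer layers with learned projections; the data flow, per-frame spatial features fed into cross-frame attention, is the same.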