ContextVLM: Zero-Shot and Few-Shot Context Understanding for Autonomous Driving using Vision Language Models
-
2024-08-01
Abstract: In recent years, there has been a notable increase in the development of autonomous vehicle (AV) technologies aimed at improving safety in transportation systems. While AVs have been deployed in the real world to some extent, full-scale deployment requires AVs to robustly navigate challenges such as heavy rain, snow, low lighting, construction zones, and GPS signal loss in tunnels. To handle these challenges, an AV must reliably recognize the physical attributes of the environment in which it operates. In this paper, we define context recognition as the task of accurately identifying environmental attributes so that an AV can deal with them appropriately. Specifically, we define 24 environmental contexts capturing a variety of weather, lighting, traffic, and road conditions that an AV must be aware of. Motivated by the need to recognize environmental contexts, we create a context recognition dataset called DrivingContexts with more than 1.6 million context-query pairs relevant to an AV. Since traditional supervised computer vision approaches do not scale well across a variety of contexts, we propose a framework called ContextVLM that uses vision-language models to detect contexts via zero- and few-shot approaches. ContextVLM reliably detects relevant driving contexts with an accuracy of more than 95% on our dataset, while running in real time on a 4GB Nvidia GeForce GTX 1050 Ti GPU on an AV with a latency of 10.5 ms per query.
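The zero-shot recognition described above can be illustrated as prompt-based similarity scoring: each context is turned into a text prompt, and an image is labeled with the contexts whose prompt embeddings it most resembles. The sketch below is purely illustrative and is not the authors' implementation; the context labels, prompt template, and the `fake_encode` stand-in (a deterministic pseudo-encoder substituting for a real vision-language model) are all assumptions.

```python
import zlib
import numpy as np

# Hypothetical subset of the 24 contexts described in the paper
CONTEXTS = ["heavy rain", "snow", "low lighting", "construction zone", "clear daytime"]

def fake_encode(text, dim=64):
    """Stand-in for a VLM encoder: maps text to a deterministic unit vector.
    A real system would embed images and prompts with a pretrained VLM."""
    rng = np.random.default_rng(zlib.crc32(text.encode()))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def classify_contexts(image_emb, contexts=CONTEXTS):
    """Zero-shot classification: rank contexts by cosine similarity between
    the image embedding and each context-prompt embedding."""
    prompts = [f"a driving scene with {c}" for c in contexts]
    sims = np.array([image_emb @ fake_encode(p) for p in prompts])
    scores = np.exp(sims * 20)              # temperature-scaled softmax
    probs = scores / scores.sum()
    order = np.argsort(-probs)
    return [(contexts[i], float(probs[i])) for i in order]

# Simulate an image whose embedding nearly matches the "snow" prompt
img = fake_encode("a driving scene with snow")
img = img + 0.05 * np.random.default_rng(0).standard_normal(64)
img = img / np.linalg.norm(img)
ranked = classify_contexts(img)
```

A few-shot variant would additionally condition the model on a handful of labeled example scenes per context; the ranking step stays the same.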
-
Content Notes: This is an open access article under the terms of the Creative Commons Zero 1.0 Universal (CC0 1.0) license (https://creativecommons.org/publicdomain/zero/1.0/). Please cite this article as: S. Sural, Naren and R. R. Rajkumar, "ContextVLM: Zero-Shot and Few-Shot Context Understanding for Autonomous Driving Using Vision Language Models," 2024 IEEE 27th International Conference on Intelligent Transportation Systems (ITSC), Edmonton, AB, Canada, 2024, pp. 468-475, https://doi.org/10.48550/arXiv.2409.00301.