Finding Vulnerabilities of Autonomous Vehicle Stacks to Physical Adversaries
-
2025-05-01
-
Details
-
Edition: Final Report, 01/01/2024 - 12/31/2024
-
Abstract: Autonomous Driving (AD) vehicles must interact and respond in real time to multiple sensor signals indicating the behavior of other agents in the environment, such as other vehicles and pedestrians near the ego vehicle (i.e., the vehicle itself). While autonomous vehicle (AV) developers generate numerous test cases in simulation to detect safety and security problems, to the best of our knowledge they do not test for malicious physical interactions from attackers, such as placing emergency cones on the hood of an AV or adversarial driving maneuvers that nearby human drivers or other AVs might perform. The main goal of our project is to develop automatic testing tools that evaluate the safety and security of autonomous vehicle stacks against unanticipated critical physical conditions created by attackers. Specifically, we aim to (1) demonstrate adversarial driving maneuvers in different real-world scenarios, highlighting the potential consequences for AV safety and security; (2) build an attack framework in a simulation environment to study the optimal discovery of adversarial driving maneuvers; and (3) contribute to the development of a skilled AV security workforce. In this way, this effort aims to enable the deployment of increasingly trustworthy transportation systems.
-
Main Document Checksum: urn:sha-512:f61ca5989439c353db5aa2af111d3f86f388b9ceecddc5b33130b287ec4c530431f8714ad5261c2a348803ffe018ce58f3fcd556b1cd1cdfd68c875ddfa612a4