Building Digital Twins via GPU-Accelerated Human-Regularized Reinforcement Learning
2024-09-01
Details
Corporate Contributors: United States. Department of Transportation. Federal Highway Administration. Office of Research, Development, and Technology; United States. Department of Transportation. University Transportation Centers (UTC) Program; United States. Department of Transportation. Office of the Assistant Secretary for Research and Technology
Edition: Final report, 09/30/2023 to 09/30/2024
Abstract: We carried out a two-component study investigating how to build high-performing human driver models. First, we investigated how to combine large-scale datasets from autonomous vehicle companies with reinforcement learning methods for training agents. We showed that by adding a supervised learning loss atop the reinforcement learning objective, we could build models that were human-like without reducing agent performance. However, we were bottlenecked by simulator speed. We then designed a new simulator, GPUDrive, that can run thousands of simultaneous simulations of urban environments containing drivers, pedestrians, and cyclists. This simulator is an order of magnitude faster than existing open-source simulators. We demonstrate that in this simulator we can reproduce our prior agent-training results between one and two orders of magnitude faster.
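The "supervised learning loss atop the reinforcement learning objective" can be sketched as a weighted sum of a policy-gradient loss and a behavioral-cloning cross-entropy against logged human actions. The sketch below is illustrative only: the function name, the clipped-PPO surrogate, and the `bc_weight` coefficient are assumptions, not the report's actual implementation.

```python
import numpy as np

def log_softmax(logits):
    """Numerically stable log-softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def human_regularized_loss(logits, actions, advantages, old_log_probs,
                           human_actions, bc_weight=0.1, clip_eps=0.2):
    """Hypothetical combined objective: a clipped PPO surrogate plus a
    behavioral-cloning (supervised) term on human-driver actions."""
    logp = log_softmax(logits)
    idx = np.arange(len(actions))
    ratio = np.exp(logp[idx, actions] - old_log_probs)
    # Standard clipped PPO surrogate (negated, since we minimize a loss).
    ppo_loss = -np.minimum(
        ratio * advantages,
        np.clip(ratio, 1 - clip_eps, 1 + clip_eps) * advantages,
    ).mean()
    # Supervised term: cross-entropy against logged human actions.
    bc_loss = -logp[idx, human_actions].mean()
    return ppo_loss + bc_weight * bc_loss
```

Setting `bc_weight=0` recovers the plain RL loss; increasing it pulls the policy toward the human action distribution, which is the human-likeness/performance trade-off the abstract refers to.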
Main Document Checksum: urn:sha-512:d5b76881a0f047aedf9551a87783a8fc6b91ae4ff51d9e6d850a29980f6fe1b9d153524a42b438ed47539bcdaced4f2c91767e89e974d7c76cd0c9f06007e948