Safety Assurance and Demonstration of Connected Autonomous Vehicles
2025-07-31
Details
Edition: Final Report (July 1, 2024 – June 30, 2025)
Abstract: This proposed larger-scale effort aims to re-define the vision of full autonomy as one of safe autonomy, in which a learning-enabled system is coupled with the foundations of cyber-physical systems to endow it with an explicit awareness of both its capabilities and its limitations. The system then recognizes when it is in or near a zone where its safety cannot be assured, and transitions to a safe fallback state. A multi-pronged approach is adopted to achieve safe autonomy: (a) creating contextual awareness of the operating conditions so that learning- and logic-based behaviors reflect the operational context; (b) determining the location and orientation of the autonomous vehicle (AV) in absolute and relative coordinate frames to serve the needs of different tasks reliably and scalably; (c) defining and enforcing both static and dynamic guards for safe real-time actuation; (d) developing a powerful co-simulation framework to safely and efficiently test system performance under a range of clear and adverse operating conditions; and (e) validating and demonstrating the methodology on Carnegie Mellon University's (CMU's) Cadillac CT6 autonomous vehicle. The effort will also showcase physical demonstrations of vehicle capabilities to researchers and visiting dignitaries.

Recent advances in machine learning (ML) have been significant, and the application potential for ML seems limitless. However, ML in its current form inevitably produces a non-zero rate of false positives and false negatives, which in a safety-critical system can be disastrous, causing damage to life and/or property. At the same time, the judicious use of mathematical foundations, scientific principles, and engineering ingenuity has led to the creation of large-scale yet practical safety-critical systems such as aviation, nuclear power plants, electric grids, and medical devices.

In this effort, the research team builds on the conjecture that learning-enabled systems must necessarily be guided and fenced by logical, explainable, and analyzable safeguards. Specifically, the team proposes to apply its methodology to the domain of connected and autonomous vehicles, which must address a very long tail of known and unknown scenarios.
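As an illustration of item (c), guarded actuation can be pictured as a runtime gate placed in front of the learning-based planner: the ML action passes through only while the vehicle remains inside an envelope where safety can be assured, and otherwise the system drops to a fallback behavior. The sketch below is purely hypothetical — the state fields, thresholds, and action names are invented for illustration and do not represent the team's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class VehicleState:
    speed_mps: float        # current speed (m/s)
    gap_m: float            # distance to nearest obstacle ahead (m)
    perception_conf: float  # perception-stack confidence in [0, 1]

# Hypothetical static guard: a hard limit independent of context.
MAX_SPEED_MPS = 30.0

def dynamic_min_gap(speed_mps: float, decel_mps2: float = 4.0) -> float:
    """Hypothetical dynamic guard: minimum safe gap grows with braking
    distance (v^2 / 2a) plus a fixed buffer."""
    return speed_mps ** 2 / (2.0 * decel_mps2) + 2.0

def select_action(state: VehicleState, ml_action: str) -> str:
    """Gate the learning-based action behind logical safeguards.

    Returns a fallback action whenever a guard is violated or
    perception confidence is too low to assure safety; otherwise
    the ML action passes through unchanged.
    """
    if state.perception_conf < 0.5:
        return "fallback_stop"        # system is aware of its limitations
    if state.speed_mps > MAX_SPEED_MPS:
        return "fallback_decelerate"  # static guard violated
    if state.gap_m < dynamic_min_gap(state.speed_mps):
        return "fallback_decelerate"  # dynamic guard violated
    return ml_action                  # inside the assured-safe envelope
```

The design point is that the guard logic is explicit and analyzable, so the conditions under which the ML output is overridden can be inspected and verified independently of the learned component.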
Main Document Checksum: urn:sha-512:c0121ec5c2dcba1fbd4431138918155ff40f1a76f581595fd9efde5daea1a9e5173082460c88445847793345f8d6c72553cd50b9314d50f4ae7ac7032fb7d2c0