Edition: Final Report (July 1, 2023 – June 30, 2024)
Abstract: Safety is a critical strategic goal of the U.S. DOT in its RD&T plan, with zero fatalities posed as a grand challenge. This proposal directly addresses pedestrian safety, a critical component of automotive safety. The research team targets the design of roadside infrastructure at critical points along roadways where pedestrian accidents are likely, such as intersections, sharp curves, or hidden driveways. Many such locations involve blind spots, where the vehicle or any single roadside sensor has only a partial view of its environment owing to occlusions. There is rich prior art on sensing systems for blind-spot monitoring, ranging from cameras and LiDAR to custom (usually visual) sensors, many of which rely on multiple sensors placed at different vantage points, adding to cost and installation burden. What is lacking is a sensing platform that can be placed at a single vantage point yet offers seamless “through-occlusion” imaging, while operating at long range (hundreds of meters) and offering high resolution (sub-centimeter). In this proposal, the team designs a high-resolution imaging system that sees through obstructions using two complementary platforms: a mmWave radar and a camera. The team specifically focuses on single-chip automotive mmWave radars, which are widely deployed in cars as collision sensors yet are extremely compact, merely centimeters across. The key technical insight is that the mmWave radar and the camera have complementary strengths. While the mmWave radar offers extremely high depth resolution (centimeter-scale even at hundreds of meters) and through-occlusion imaging, its spatial resolution is extremely poor (several degrees). In contrast, cameras offer poor depth resolution (several meters, especially at long range) but high spatial resolution. This work explores mechanisms to achieve the best of both systems on a single combined platform.
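The complementary-resolution idea can be sketched in a few lines. The function below is purely illustrative and not the proposed system: it assumes a radar detection and a camera detection have already been associated, and it combines the radar's precise range with the camera's precise bearing angles to place a single 3D point. All names and numbers are hypothetical.

```python
import math

def fuse_range_bearing(radar_range_m, camera_azimuth_deg, camera_elevation_deg=0.0):
    """Illustrative fusion of one radar/camera detection pair.

    Takes the range from the radar (high depth resolution) and the
    azimuth/elevation from the camera (high spatial resolution) and
    returns a 3D point (x, y, z) in a sensor frame with x forward,
    y left, z up.
    """
    az = math.radians(camera_azimuth_deg)
    el = math.radians(camera_elevation_deg)
    x = radar_range_m * math.cos(el) * math.cos(az)
    y = radar_range_m * math.cos(el) * math.sin(az)
    z = radar_range_m * math.sin(el)
    return (x, y, z)

# Hypothetical example: a pedestrian at ~150 m, 2 degrees off boresight.
point = fuse_range_bearing(150.0, 2.0)
```

The point's depth uncertainty is inherited from the radar (centimeters) and its lateral uncertainty from the camera's angular accuracy, rather than the other way around.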
Unfortunately, classical data-driven sensor fusion does not directly apply here, mainly because of the unique attributes of mmWave radar images. Specifically, mmWave radar outputs suffer from clutter artifacts that must be carefully eliminated to prevent spurious object detections. The team's preliminary work on mmWave- and camera-based depth sensing, presented at IROS ’22, explicitly models and corrects for this effect prior to fusing the sensed data. Through the proposed work, the team seeks to generalize this approach to generate high-resolution 3D point clouds, including through occlusions. Beyond the technical research contributions, the objective is to demonstrate end-to-end benefits of the system for existing stakeholders. To this end, the team will fully implement and evaluate the system on commodity mmWave platforms and camera systems with the support of Bosch, which has generously offered $100K of in-kind support (lab space, equipment, and personnel time). The designs will be deployed at pedestrian intersections, with the first deployment enabled through active collaboration with the City of Pittsburgh Department of Mobility and Infrastructure, with whom the team is in conversations as a deployment partner. The team is acutely aware that pedestrian safety incidents affect under-served, low-income, and housing-displaced populations at a much higher rate than the general population. To this end, the team has engaged the Pittsburgh Parks Conservancy as an equity partner to identify the needs of the community in and around Mellon Square Park in shaping the first deployment.
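The clutter problem can be illustrated with a textbook cell-averaging CFAR (constant false alarm rate) detector, a standard radar technique and not the team's specific correction method: each range bin is compared against an adaptive threshold estimated from nearby training cells, so isolated targets stand out while a varying clutter floor is suppressed. All parameter values below are hypothetical.

```python
def ca_cfar(power, guard=2, train=8, scale=3.0):
    """Cell-averaging CFAR on a 1D range profile.

    A bin is flagged as a detection only if its power exceeds `scale`
    times the mean of surrounding training cells, where the `guard`
    cells immediately around the cell under test are excluded so the
    target's own energy does not inflate the noise estimate.
    """
    detections = []
    n = len(power)
    for i in range(n):
        lo = max(0, i - guard - train)
        hi = min(n, i + guard + train + 1)
        # Training cells: the local window minus the guard band and the cell itself.
        cells = [power[j] for j in range(lo, hi) if abs(j - i) > guard]
        if not cells:
            continue
        noise = sum(cells) / len(cells)
        if power[i] > scale * noise:
            detections.append(i)
    return detections

# Hypothetical range profile: a flat clutter floor with one strong return.
profile = [1.0] * 40
profile[20] = 25.0
print(ca_cfar(profile))  # → [20]
```

Real clutter in roadside scenes is far more structured than this flat floor, which is precisely why the proposed work models and corrects for it explicitly before fusion rather than relying on a fixed detector alone.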