Nicolas Schischka
Recent research has made promising advances in the perception capabilities of autonomous agents. However, handling partial or even full occlusions remains a challenging problem, although it is particularly important in complex urban scenarios: there, the detection rate for vulnerable road users, such as pedestrians hidden behind physical barriers, can drop significantly. This project therefore aims to advance the state of the art by building on methods such as amodal panoptic segmentation, pixel tracking, and depth estimation, while considering heterogeneous sensor modalities as input. Moreover, leveraging information from other road users or from road infrastructure via Vehicle-to-Everything (V2X) communication can provide cues orthogonal to those retrieved from each individual robot's own sensor data. By designing novel learning-based fusion models for collective perception, the project additionally aims to extend and improve upon single-agent models and to develop robust methods for occlusion-aware 3D occupancy estimation.
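To make the collective-perception idea concrete, the sketch below shows a minimal, hand-written baseline for fusing occupancy evidence from multiple agents: each agent contributes an occupancy probability grid, and the grids are combined by summing log-odds, the standard independent-evidence update for occupancy maps. This is only an illustrative toy (the function name `fuse_occupancy_grids` and the 2x2 example grids are invented here), not the learned fusion model the project proposes.

```python
import numpy as np

def fuse_occupancy_grids(grids, eps=1e-6):
    """Fuse per-agent occupancy probability grids (values in [0, 1])
    into a single grid by summing log-odds per cell.

    A cell value of 0.5 encodes "no information" and contributes zero
    log-odds, so an uncertain ego view is overridden by a confident
    remote observation, and vice versa.
    """
    grids = np.clip(np.asarray(grids, dtype=float), eps, 1.0 - eps)
    log_odds = np.log(grids / (1.0 - grids)).sum(axis=0)  # accumulate evidence
    return 1.0 / (1.0 + np.exp(-log_odds))                # back to probability

# Toy scenario: the ego vehicle cannot see behind a barrier (cell (0, 1)
# is uncertain, p = 0.5), while a V2X-connected roadside unit observes a
# pedestrian there (p = 0.9).
ego = np.array([[0.1, 0.5],
                [0.1, 0.1]])
rsu = np.array([[0.1, 0.9],
                [0.5, 0.1]])
fused = fuse_occupancy_grids([ego, rsu])
# The occluded cell (0, 1) now reflects the roadside unit's detection,
# while cells both agents see as free become even more confidently free.
```

A learned fusion model would replace the fixed log-odds rule with trainable feature aggregation, but this baseline illustrates why V2X cues are orthogonal to onboard sensing: evidence about occluded regions simply cannot be recovered from the ego view alone.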