Enhanced Vision-Based Obstacle Sensing During UAM Approach and Landing Operations / Veneruso, Paolo; Miccio, Enrico; Opromolla, Roberto; Fasano, Giancarmine; Gentile, Giacomo; Tiana, Carlo. - (2025), pp. 1-10. (Digital Avionics Systems Conference (DASC), Montreal, QC, Canada, 14-18 September 2025) [10.1109/dasc66011.2025.11257317].
Enhanced Vision-Based Obstacle Sensing During UAM Approach and Landing Operations
Veneruso, Paolo; Miccio, Enrico; Opromolla, Roberto; Fasano, Giancarmine
2025
Abstract
This paper presents a vision-based obstacle sensing architecture designed for Urban Air Mobility approach and landing operations. The proposed system is integrated within a vision-aided navigation pipeline and comprises two main modules: an air-to-air module, responsible for detecting nearby flying vehicles that could pose a collision risk, and an air-to-ground module, which assesses the occupancy status of the assigned landing area. In the air-to-air module, a preliminary analysis defines the sensing requirements, focusing on the Field of View (FOV) needed to detect any potential intruding drone. These requirements lead to the selection of a region of interest in the acquired images that fully encompasses any potential colliding intruder, and support the development of strategies to mitigate collision risks near vertiports. Once intruders are guaranteed to fall within the sensor FOV, the air-to-air detection pipeline applies a global appearance-based intruder detector, a local motion-based confirmation logic, and a tracker to evaluate the trajectory criticality. On the air-to-ground side, the system verifies the safety of the landing area by applying a classification Convolutional Neural Network to the image portion corresponding to the designated landing pattern, which is provided by the vision-aided navigation module. A high-fidelity simulation environment is developed to generate representative datasets and validate the performance of the proposed obstacle sensing architecture in urban scenarios.


