Improvements in low-altitude, non-cooperative sense and avoid are of major interest for collision hazard mitigation within the UTM/UAM/U-Space framework. In this regard, the sensing architecture must be carefully designed so that its detection and tracking performance is suitable for timely and reliable conflict assessment, while respecting size, weight, power, and cost constraints, which are particularly strict for small aerial vehicles. In this context, this paper presents an experimental assessment of non-cooperative sensing solutions based on a lightweight radar and a visual camera, respectively. Visual detections are obtained using a Deep Learning-based neural network, while raw detections produced by the radar are first filtered based on Doppler information to remove ground clutter, and then clustered by means of a centroiding approach. The resulting detection sets are used to generate tentative and firm tracks using customized Kalman filtering techniques. Following a research plan that foresees data gathering with incremental complexity, ground-to-air tests have been carried out using a small UAV as a flying intruder, and Carrier-Phase Differential GNSS to obtain a reference solution and assess visual-based and radar-based detection and tracking performance. Results achieved by the standalone radar and visual sensing solutions clearly highlight the potential of sensor fusion strategies to exploit their complementary characteristics.
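The radar processing chain summarized above (Doppler-based clutter removal followed by centroid clustering of the surviving detections) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name, the greedy clustering strategy, and all threshold values are assumptions chosen for clarity.

```python
import numpy as np

def filter_and_cluster(detections, doppler_threshold=1.0, cluster_radius=5.0):
    """Illustrative radar pre-processing: Doppler gating plus centroiding.

    detections: array of shape (N, 3) with columns [x, y, radial_velocity].
    Thresholds are placeholder values, not those used in the paper.
    """
    # Doppler gating: ground clutter is roughly stationary, so discard
    # detections whose absolute radial velocity is below the threshold.
    moving = detections[np.abs(detections[:, 2]) >= doppler_threshold]

    # Greedy centroiding: assign each detection to the first cluster whose
    # running centroid lies within cluster_radius, else start a new cluster.
    clusters = []  # each entry is a list of 2-D points
    for point in moving[:, :2]:
        for members in clusters:
            centroid = np.mean(members, axis=0)
            if np.linalg.norm(point - centroid) <= cluster_radius:
                members.append(point)
                break
        else:
            clusters.append([point])

    # One centroid per cluster: these would feed the Kalman track initiation.
    return [np.mean(members, axis=0) for members in clusters]
```

Each returned centroid would then serve as a candidate measurement for the tentative/firm track logic mentioned in the abstract.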