
A CNN-Based Relative Navigation Architecture for Proximity Operations in Active Debris Removal Missions

Napolano, Giuseppe; Nocerino, Alessia; Fasano, Giancarmine; Grassi, Michele; Opromolla, Roberto
2024

Abstract

In recent years, the increasing risk of collisions among inactive satellites and debris in Earth orbit has driven research efforts toward key enabling technologies for active debris removal missions. In this framework, visual sensors represent a valuable asset to support relative navigation during close-proximity operations, but several technical challenges must still be addressed to guarantee accurate measurements when operating with uncooperative targets. In this regard, convolutional neural networks have shown promising results in terms of pose estimation accuracy and robustness to variable illumination conditions. This work, developed in the framework of the Autonomous Navigation up to High Earth Orbits (ANHEO) project funded by the Italian Space Agency (ASI), presents an original relative navigation architecture for proximity operations towards an uncooperative inactive satellite, employing convolutional neural networks for monocular pose estimation. Pose estimation follows an indirect approach in which two neural networks of the YOLO family are employed for image processing: the first locates the target satellite in the image, while the second extracts and identifies a set of key-points corresponding to natural target features. The resulting correspondences between the 3D coordinates of the key-points and their positions in the image are used to solve a Perspective-n-Point (PnP) problem combining analytical and numerical solvers. Pose estimates are finally integrated within a filtering scheme to retrieve the relative navigation state. A new dataset of 20,000 synthetic images of the ENVISAT satellite is developed using the open-source software Blender to train the networks, accounting for variable conditions in terms of target relative position, attitude, Sun illumination, and presence of the Earth in the background.
A first testing dataset is generated to assess pose estimation capability under various target observation geometries; a second dataset, composed of a sequence of images acquired along a reference monitoring trajectory around ENVISAT, is then used to validate the entire relative navigation architecture. Results show the capability to estimate the relative position with errors below 6% of the range and the relative attitude with degree-level accuracy, while the relative velocity and the target angular velocity are estimated with errors below 6 cm/s and 0.03°/s, respectively.
A CNN-Based Relative Navigation Architecture for Proximity Operations in Active Debris Removal Missions / Napolano, Giuseppe; Pastore, Mario; Nocerino, Alessia; Fasano, Giancarmine; Grassi, Michele; Opromolla, Roberto. - (2024), pp. 541-559. (75th International Astronautical Congress (IAC 2024), Milan, Italy, 14-18 October 2024) [10.52202/078360-0053].
Files for this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this record: https://hdl.handle.net/11588/1008084