Harnessing Explainable AI in Railway: A Decision Tree-Based Approach / Barbareschi, Mario; Emmanuele, Antonio; Mazzocca, Nicola; Di Torrepadula, Franca Rocco. - (2025), pp. 119-124. (20th European Dependable Computing Conference Companion, EDCC-C 2025, Faculty of Sciences of the University of Lisbon (FCUL), Portugal, 2025) [DOI: 10.1109/edcc-c66476.2025.00043].

Harnessing Explainable AI in Railway: A Decision Tree-Based Approach

Barbareschi, Mario; Emmanuele, Antonio; Mazzocca, Nicola; Di Torrepadula, Franca Rocco
2025

Abstract

In recent years, Artificial Intelligence has gained significant popularity for solving various tasks, including service optimization, system monitoring, and industrial control. Despite its success, adoption in critical systems, such as the railway domain, remains limited. This is primarily due to the high stakes in these systems, where failures can lead to damage to critical infrastructure and risks to human lives. As a result, software in these domains must be deterministic, ensuring that all behaviors can be statically verified. Machine Learning models, due to their complexity, are often perceived as black-box systems and exhibit seemingly nondeterministic behavior, making their integration into such infrastructure challenging. To address this issue, one potential solution is the use of eXplainable Artificial Intelligence (XAI) techniques, which enable the construction of human-interpretable explanations for model predictions. In this paper, we propose a time-series prediction framework for the railway domain by combining XGBoost, a highly accurate tree-based model, with SHAP, a widely used explainability technique.
Files in this record:
Harnessing_Explainable_AI_in_Railway_A_Decision_Tree-Based_Approach.pdf (Adobe PDF, 358.53 kB) — access restricted to authorized users; license: publisher's copyright.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11588/1014865
Citations
  • Scopus: 0