A Comparative Assessment of eXplainable AI Tools in Predicting Hard Disk Drive Health / Amato, F.; Ferraro, A.; Galli, A.; La Gatta, V.; Moscato, F.; Moscato, V.; Postiglione, M.; Sansone, C.; Sperli, G. - 3741:(2024), pp. 574-584. (Paper presented at the 32nd Italian Symposium on Advanced Database Systems, SEBD 2024, held in Villasimius (CA), 23-26 June 2024).
A Comparative Assessment of eXplainable AI Tools in Predicting Hard Disk Drive Health
Amato F.; Ferraro A.; Galli A.; La Gatta V.; Moscato F.; Moscato V.; Postiglione M.; Sansone C.; Sperli G.
2024
Abstract
In addressing the challenge of optimizing maintenance operations in Industry 4.0, recent efforts have focused on predictive maintenance frameworks. However, the effectiveness of these frameworks, which largely rely on complex deep learning models, is hindered by their lack of explainability. To address this, we employ eXplainable Artificial Intelligence (XAI) methodologies to make the decision-making process more understandable for humans. Our study, building on previous work, specifically explores explanations for predictions made by a recurrent neural network-based model designed for a three-dimensional dataset and used to estimate the Remaining Useful Life (RUL) of Hard Disk Drives (HDDs). We compare the explanations provided by different XAI tools, emphasizing the utility of global and local explanations in supporting predictive maintenance tasks. Using the Backblaze Dataset and a Long Short-Term Memory (LSTM) prediction model, our explanation framework evaluates the Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) tools. Results show that SHAP outperforms LIME across various metrics, establishing itself as a suitable and effective solution for HDD predictive maintenance applications.
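The SHAP attributions compared in the paper are grounded in Shapley values from cooperative game theory: each input feature's contribution is its weighted average marginal effect on the prediction over all feature coalitions, relative to a baseline. The following is a minimal, self-contained sketch of that exact computation, using a toy prediction function in place of the paper's LSTM RUL model (the function, feature values, and baseline are illustrative assumptions, not the study's implementation):

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attribution of prediction f(x) relative to f(baseline).

    Features absent from a coalition S are replaced by their baseline values;
    each feature's value is its weighted average marginal contribution over
    all coalitions of the other features.
    """
    n = len(x)
    phis = []
    for i in range(n):
        phi = 0.0
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_out_i = [x[j] if j in S else baseline[j] for j in range(n)]
                with_i = [x[j] if (j in S or j == i) else baseline[j]
                          for j in range(n)]
                phi += w * (f(with_i) - f(with_out_i))
        phis.append(phi)
    return phis

# Toy stand-in for an RUL predictor over two (hypothetical) SMART features.
predict = lambda v: 2.0 * v[0] + 3.0 * v[1]
print(shapley_values(predict, [1.0, 1.0], [0.0, 0.0]))  # [2.0, 3.0]
```

The attributions satisfy the efficiency property that SHAP inherits: they sum exactly to `f(x) - f(baseline)`, which is what makes them directly comparable across tools in evaluations like the one described above. Practical SHAP explainers approximate this sum rather than enumerating all coalitions, since the exact computation is exponential in the number of features.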