
An Explainable 3D-Deep Learning Model for EEG Decoding in Brain–Computer Interface Applications / Suffian, M.; Ieracitano, C.; Morabito, F. C.; Mammone, N.. - In: INTERNATIONAL JOURNAL OF NEURAL SYSTEMS. - ISSN 0129-0657. - 35:13(2025). [10.1142/S012906572550073X]

An Explainable 3D-Deep Learning Model for EEG Decoding in Brain–Computer Interface Applications

Ieracitano, C.
2025

Abstract

Decoding electroencephalographic (EEG) signals is of key importance in the development of brain–computer interface (BCI) systems. However, the high inter-subject variability of EEG signals requires user-specific calibration, which can be time-consuming and limits the application of deep learning approaches, given the large amounts of data these models generally need for proper training. In this context, this paper proposes a multidimensional and explainable deep learning framework for fast and interpretable EEG decoding. In particular, EEG signals are projected into the spatial–spectral–temporal domain and processed by a custom three-dimensional (3D) Convolutional Neural Network, here referred to as EEGCubeNet. The method was validated on EEGs recorded during motor BCI experiments: hand open (HO) and hand close (HC) movement planning was investigated by discriminating each from the absence of movement preparation (resting state, RE). The proposed method relies on a global-to-subject-specific fine-tuning strategy: the model is first trained globally on a population of subjects and then fine-tuned on the final user, significantly reducing adaptation time. Experimental results demonstrate that EEGCubeNet achieves state-of-the-art performance (accuracies of 89.56 ± 4.29 and 89.06 ± 4.86 for the HC-versus-RE and HO-versus-RE binary classification tasks, respectively) with reduced framework complexity and training time. In addition, to enhance transparency, a 3D occlusion sensitivity analysis-based explainability method (here named 3D xAI-OSA) was introduced, generating relevance maps that reveal the features most relevant to each prediction.
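The abstract's spatial–spectral–temporal projection can be illustrated with a minimal sketch, not the authors' implementation: here a multichannel EEG segment is split into time windows, per-channel band power is estimated via the FFT, and the results are stacked into a 3D cube. The band choices, window length, and function name `eeg_to_cube` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def eeg_to_cube(eeg, fs, bands, win_len):
    """Project EEG (channels x samples) into a spectral x temporal x spatial cube.

    eeg     : (n_channels, n_samples) array
    fs      : sampling rate in Hz
    bands   : list of (low, high) frequency bands in Hz
    win_len : window length in samples
    Returns an array of shape (n_bands, n_windows, n_channels).
    """
    n_ch, n_samp = eeg.shape
    n_win = n_samp // win_len
    cube = np.zeros((len(bands), n_win, n_ch))
    freqs = np.fft.rfftfreq(win_len, d=1.0 / fs)
    for w in range(n_win):
        seg = eeg[:, w * win_len:(w + 1) * win_len]
        power = np.abs(np.fft.rfft(seg, axis=1)) ** 2       # per-channel power spectrum
        for b, (lo, hi) in enumerate(bands):
            mask = (freqs >= lo) & (freqs < hi)
            cube[b, w] = power[:, mask].mean(axis=1)        # mean power in this band
    return cube

# Example: 8 channels, 2 s of 128 Hz data, theta/alpha/beta bands, 0.5 s windows
rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 256))
cube = eeg_to_cube(eeg, fs=128, bands=[(4, 8), (8, 13), (13, 30)], win_len=64)
print(cube.shape)  # (3, 4, 8): bands x windows x channels
```

A cube of this shape is a natural input for a 3D convolutional network, since convolution kernels can then span spectral, temporal, and spatial axes jointly.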
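The 3D occlusion sensitivity idea behind 3D xAI-OSA can likewise be sketched in a hedged, model-agnostic way: slide a masking cuboid over the input cube and record how much the classifier's score drops at each position. The patch size, fill value, and the toy `predict` function below are assumptions for illustration; the paper's actual network and parameters are not reproduced here.

```python
import numpy as np

def occlusion_map_3d(predict, x, patch=(2, 2, 2), fill=0.0):
    """3D occlusion sensitivity analysis.

    predict : callable mapping a 3D array to a scalar class score
    x       : 3D input array (e.g. a spectral x temporal x spatial EEG cube)
    patch   : size of the occluding cuboid along each axis
    fill    : value used to occlude (mask) the patch
    Returns a relevance map of the same shape as x: large values mark regions
    whose occlusion hurts the prediction most.
    """
    base = predict(x)
    relevance = np.zeros(x.shape, dtype=float)
    counts = np.zeros(x.shape, dtype=float)
    d0, d1, d2 = patch
    for i in range(x.shape[0] - d0 + 1):
        for j in range(x.shape[1] - d1 + 1):
            for k in range(x.shape[2] - d2 + 1):
                occluded = x.copy()
                occluded[i:i+d0, j:j+d1, k:k+d2] = fill
                drop = base - predict(occluded)          # score drop = relevance
                relevance[i:i+d0, j:j+d1, k:k+d2] += drop
                counts[i:i+d0, j:j+d1, k:k+d2] += 1
    return relevance / np.maximum(counts, 1)             # average over overlaps

# Toy "model": the score depends only on one informative corner of the cube
predict = lambda v: float(v[:2, :2, :2].mean())
x = np.ones((4, 4, 4))
rmap = occlusion_map_3d(predict, x)
# The relevance map peaks in the corner the toy model actually uses.
```

In the paper's setting, the three axes of the relevance map correspond to frequency bands, time windows, and electrodes, so peaks indicate which spectral, temporal, and spatial features drove each prediction.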
Files in this record:

an-explainable-3d-deep-learning-model-for-eeg-decoding-in-brain-computer-interface-applications.pdf

Open access

Type: Publisher's version (PDF)
License: Creative Commons
Size: 5.46 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11588/1033039
Citations
  • Scopus: 3