
Movie-ing the EEG: a Hybrid CNN-Transformer-based Framework for decoding EEG signals for BCI applications

Ieracitano, C.
2025

Abstract

This paper introduces a hybrid deep learning approach for processing electroencephalographic (EEG) signals to track their dynamics over time in the spatial-spectral domain. The method is particularly valuable whenever the temporal evolution of the brain process under analysis is relevant to classification. The proposed hybrid model, herein referred to as EEGConViT, processes spatio-temporal streams of data (EEG movies). Specifically, EEGConViT consists of a custom CNN that encodes each frame into a feature vector; the resulting sequence of feature vectors is augmented with temporal position embeddings and fed to a Transformer that captures sequential dependencies. In this work, EEGConViT was applied to motor-related EEG signals: signals preceding motor execution were processed to assess the model's ability to predict the upcoming sub-movement of the upper limb. The model was evaluated on EEG recordings from 14 subjects, drawn from a publicly available repository. Using a leave-one-subject-out strategy, the model was trained on data from 13 subjects and fine-tuned on the remaining one (cross-subject training followed by calibration on the single subject). Results demonstrate that the approach outperforms comparable models in the literature while significantly reducing training time, an essential factor in medical applications, where both classification performance and rapid calibration are critical.
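The abstract describes the pipeline only at a high level (a CNN encoding each frame, temporal position embeddings, and a Transformer over the frame sequence). The following PyTorch code is a minimal sketch of that idea; all names (FrameEncoder, EEGConViTSketch), layer sizes, frame counts, and hyperparameters are illustrative assumptions, not the authors' actual implementation.

import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    """Hypothetical CNN that encodes one spatial-spectral EEG frame into a vector."""
    def __init__(self, in_channels: int, embed_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse the spatial map to 1x1
            nn.Flatten(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, x):  # x: (batch, channels, height, width)
        return self.net(x)

class EEGConViTSketch(nn.Module):
    """CNN frame encoder + temporal position embeddings + Transformer encoder."""
    def __init__(self, in_channels=1, embed_dim=128, num_frames=16,
                 num_heads=4, num_layers=2, num_classes=2):
        super().__init__()
        self.frame_encoder = FrameEncoder(in_channels, embed_dim)
        # Learned temporal position embeddings, one per frame of the "EEG movie".
        self.pos_embed = nn.Parameter(torch.zeros(1, num_frames, embed_dim))
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=num_heads,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, movie):  # movie: (batch, frames, channels, height, width)
        b, t, c, h, w = movie.shape
        tokens = self.frame_encoder(movie.reshape(b * t, c, h, w)).reshape(b, t, -1)
        tokens = tokens + self.pos_embed[:, :t]  # add temporal positions
        encoded = self.transformer(tokens)       # model sequential dependencies
        return self.head(encoded.mean(dim=1))    # pool over time, classify

# Example: a batch of 8 "EEG movies", each 16 frames of one-channel 32x32 maps.
logits = EEGConViTSketch()(torch.randn(8, 16, 1, 32, 32))
print(logits.shape)  # torch.Size([8, 2])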
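Similarly, the leave-one-subject-out (LOSO) protocol with per-subject calibration described in the abstract can be sketched as follows, reusing the hypothetical EEGConViTSketch model above. The synthetic data, split sizes, and single-epoch loops are placeholders, not the paper's actual pipeline.

import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset, random_split

def make_subject_data(n_trials=32):
    # Synthetic stand-in for one subject's EEG movies (16 frames of 1x32x32 maps).
    return TensorDataset(torch.randn(n_trials, 16, 1, 32, 32),
                         torch.randint(0, 2, (n_trials,)))

def train_epoch(model, loader, opt, loss_fn):
    model.train()
    for movies, labels in loader:
        opt.zero_grad()
        loss_fn(model(movies), labels).backward()
        opt.step()

subjects = [make_subject_data() for _ in range(14)]
for held_out in range(14):
    model = EEGConViTSketch()  # hypothetical model from the sketch above
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = torch.nn.CrossEntropyLoss()
    # 1) Cross-subject training on the other 13 subjects.
    pool = ConcatDataset([d for s, d in enumerate(subjects) if s != held_out])
    train_epoch(model, DataLoader(pool, batch_size=8, shuffle=True), opt, loss_fn)
    # 2) Calibration: brief fine-tuning on part of the held-out subject's data.
    calib, test = random_split(subjects[held_out], [16, 16])
    train_epoch(model, DataLoader(calib, batch_size=8, shuffle=True), opt, loss_fn)
    # 3) Evaluation on the held-out subject's remaining data would follow here.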
Movie-ing the EEG: a Hybrid CNN-Transformer-based Framework for decoding EEG signals for BCI applications / Suffian, M.; Ieracitano, C.; Morabito, F. C.; Mammone, N. - (2025), pp. 1-8. (2025 International Joint Conference on Neural Networks, IJCNN 2025, Pontifical Gregorian University, Italy, 2025) [DOI: 10.1109/IJCNN64981.2025.11228392].
Files in this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11588/1033054
Citations
  • Scopus 0