Explanations in terms of Hierarchically organised Middle Level Features / Apicella, A.; Giugliano, S.; Isgro, F.; Prevete, R. - 3014:(2021), pp. 44-57. (Paper presented at the 2nd Italian Workshop on Explainable Artificial Intelligence, XAI.it 2021, held in 2021).

Explanations in terms of Hierarchically organised Middle Level Features

Apicella A.; Isgro F.; Prevete R.
2021

Abstract

The rapidly growing research area of eXplainable Artificial Intelligence (XAI) focuses on making Machine Learning systems' decisions more transparent and understandable to humans. One of the most successful XAI strategies is to provide explanations in terms of visualisations and, more specifically, low-level input features, such as relevance scores or heat maps over the input produced by methods like sensitivity analysis or layer-wise relevance propagation. The main problem with such methods is that, starting from the relevance of low-level features, the human user must still identify the overall input properties that are salient. Thus, a current line of XAI research attempts to alleviate this weakness of low-level approaches by constructing explanations in terms of input features that represent more salient and understandable input properties for a user, which we call here Middle-Level input Features (MLFs). In addition, another interesting and very recent approach considers hierarchically organised explanations. In this paper, we therefore investigate the possibility of combining MLFs with hierarchical organisation. The potential advantage of providing explanations in terms of hierarchically organised MLFs is grounded in the possibility of presenting explanations at different granularities of MLFs that interact with each other. We experimentally tested our approach on the 300 Birds Species and Cars datasets. The results seem encouraging.
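To make the contrast with low-level methods concrete, the following is a minimal sketch of the kind of low-level relevance map the abstract refers to, using finite-difference sensitivity analysis on a hypothetical toy linear classifier. The model `WEIGHTS`/`score` and the function names are illustrative assumptions, not the models or code used in the paper; the point is only that such methods assign a relevance score to each raw input feature, leaving the grouping into salient properties to the user.

```python
import numpy as np

# Hypothetical toy classifier: a fixed linear scoring function over 4 input
# features. Illustrative stand-in only, not the networks used in the paper.
WEIGHTS = np.array([0.5, -1.0, 2.0, 0.1])

def score(x):
    """Class score for input x under the toy linear model."""
    return float(WEIGHTS @ x)

def sensitivity_map(x, eps=1e-4):
    """Low-level relevance of each input feature, estimated by
    finite-difference sensitivity analysis: |d score / d x_i|."""
    base = score(x)
    rel = np.zeros_like(x)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += eps          # perturb one feature at a time
        rel[i] = abs((score(xp) - base) / eps)
    return rel

x = np.array([1.0, 0.5, -0.2, 0.3])
print(sensitivity_map(x))  # feature 2 (largest-magnitude weight) is most relevant
```

For a linear model the sensitivity map simply recovers the absolute weights; for a deep network it would be a per-pixel heat map, which is exactly the representation whose interpretation burden motivates middle-level features.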
Files for this product:

File: paper4.pdf (open access)
License: Creative Commons
Size: 4.78 MB
Format: Adobe PDF (View/Open)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11588/920651
Citations
  • PMC: ND
  • Scopus: 5
  • Web of Science: ND