

A General Approach to Compute the Relevance of Middle-Level Input Features

Apicella A.; Giugliano S.; Isgro F.; Prevete R.
2021

Abstract

This work proposes a novel general framework, in the context of eXplainable Artificial Intelligence (XAI), to construct explanations of the behaviour of Machine Learning (ML) models in terms of middle-level features, which represent perceptually salient input parts. One can distinguish two ways of providing explanations in XAI: low-level and middle-level explanations. Middle-level explanations were introduced to alleviate some of the deficiencies of low-level explanations, such as, in the context of image classification, the fact that human users are left with a significant interpretive burden: starting from a low-level explanation, one has to identify the properties of the overall input that are perceptually salient for the human visual system. However, a general approach to correctly evaluating the elements of middle-level explanations with respect to ML model responses has never been proposed in the literature. We experimentally evaluate the proposed approach by explaining the decisions made by an ImageNet pre-trained VGG16 model on STL-10 images and by a customised model trained on the JAFFE dataset, using two different computational definitions of middle-level features, and compare it with two other middle-level XAI methods. The results show that our approach can be applied successfully with different computational definitions of middle-level explanations.
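The abstract does not detail how the relevance of a middle-level feature is computed, but the general idea can be illustrated with a minimal, hypothetical sketch: take SLIC superpixels as one possible computational definition of middle-level features and score each one by how much occluding it changes the class probability of an ImageNet pre-trained VGG16. Everything below (the occlusion strategy, the function names middle_level_relevance and class_probs, the parameter choices) is an illustrative assumption, not the authors' implementation.

```python
# Hypothetical sketch, not the paper's algorithm: occlusion-based relevance
# of middle-level features, with SLIC superpixels standing in for one
# computational definition of perceptually salient input parts.
import numpy as np
import torch
from skimage.segmentation import slic
from torchvision import models, transforms

model = models.vgg16(pretrained=True).eval()
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

def class_probs(x):
    """Softmax output for an RGB float image x in [0, 1], shape (224, 224, 3)."""
    t = normalize(torch.from_numpy(x.transpose(2, 0, 1)).float())
    with torch.no_grad():
        return model(t.unsqueeze(0)).softmax(dim=1)

def middle_level_relevance(img, n_segments=50):
    """Score each superpixel by the drop in the predicted class probability
    when that superpixel is replaced by the mean image colour."""
    x = np.asarray(img.convert("RGB").resize((224, 224)), dtype=np.float64) / 255.0
    segments = slic(x, n_segments=n_segments, compactness=10)
    probs = class_probs(x)
    target = probs.argmax(dim=1).item()      # explain the predicted class
    base = probs[0, target].item()
    relevance = {}
    for s in np.unique(segments):
        occluded = x.copy()
        occluded[segments == s] = x.mean(axis=(0, 1))  # mask one middle-level part
        relevance[int(s)] = base - class_probs(occluded)[0, target].item()
    return segments, relevance  # higher score => more relevant input part
```

Under these assumptions, the superpixels with the largest relevance scores would constitute the middle-level explanation for the input image; the framework described in the abstract is meant to be general across such computational definitions of middle-level features, of which superpixels are only one example.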
2021
978-3-030-68795-3
978-3-030-68796-0
A General Approach to Compute the Relevance of Middle-Level Input Features / Apicella, A.; Giugliano, S.; Isgro, F.; Prevete, R. - Vol. 12663 (2021), pp. 189-203. (Paper presented at the 25th International Conference on Pattern Recognition Workshops, ICPR 2020, held in Italy in 2021) [10.1007/978-3-030-68796-0_14].
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11588/860275
Citations
  • Scopus: 4