Active-metric learning for classification of remotely sensed hyperspectral images / Pasolli, Edoardo; Yang, Hsiuhan Lexie; Crawford, Melba M. - In: IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING. - ISSN 0196-2892. - 54:4 (2016), pp. 1925-1939. [10.1109/TGRS.2015.2490482]
Active-metric learning for classification of remotely sensed hyperspectral images
Pasolli, Edoardo; Yang, Hsiuhan Lexie; Crawford, Melba M.
2016
Abstract
Classification of remotely sensed hyperspectral images via supervised approaches is typically affected by high dimensionality of the spectral data and a limited number of labeled samples. Dimensionality reduction via feature extraction and active learning (AL) are two approaches that researchers have investigated independently to deal with these two problems. In this paper, we propose a new method in which the feature extraction and AL steps are combined into a unique framework. The idea is to learn and update a reduced feature space in a supervised way at each iteration of the AL process, thus taking advantage of the increasing labeled information provided by the user. In particular, the computation of the reduced feature space is based on the large-margin nearest neighbor (LMNN) metric learning principle. This strategy is applied in conjunction with k-nearest neighbor (k-NN) classification, for which a new sample selection strategy is proposed. The methodology is validated experimentally on four benchmark hyperspectral data sets. Good improvements in terms of classification accuracy and computational time are achieved with respect to the state-of-the-art strategies that do not combine feature extraction and AL.
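For illustration only, below is a minimal Python sketch of the kind of loop the abstract describes: at each AL iteration a supervised embedding is learned from the current labeled set, a k-NN classifier is applied in that space, and the most ambiguous unlabeled samples are queried for labeling. Note the hypothetical substitutions: scikit-learn's NeighborhoodComponentsAnalysis stands in for the LMNN metric learner, the vote-entropy query rule is a placeholder rather than the sample selection strategy proposed in the paper, and all function and parameter names are invented for this example.

```python
# Hypothetical sketch of an active-metric-learning loop (not the paper's exact
# algorithm). NeighborhoodComponentsAnalysis stands in for LMNN, and the
# vote-entropy query rule stands in for the paper's k-NN sample selection.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier, NeighborhoodComponentsAnalysis


def active_metric_learning(X_labeled, y_labeled, X_pool, y_pool_oracle,
                           n_iterations=10, batch_size=5, n_components=10, k=5):
    """Iteratively learn a reduced feature space and query informative samples."""
    X_lab, y_lab = X_labeled.copy(), y_labeled.copy()
    pool_idx = np.arange(len(X_pool))

    for _ in range(n_iterations):
        # 1) Learn a supervised linear embedding from the current labeled set
        #    (stand-in for the LMNN metric learning step).
        nca = NeighborhoodComponentsAnalysis(n_components=n_components)
        Z_lab = nca.fit_transform(X_lab, y_lab)
        Z_pool = nca.transform(X_pool[pool_idx])

        # 2) k-NN classification in the learned reduced space.
        knn = KNeighborsClassifier(n_neighbors=k).fit(Z_lab, y_lab)
        proba = knn.predict_proba(Z_pool)

        # 3) Query the unlabeled samples whose neighbor votes are most ambiguous
        #    (simple vote-entropy rule, not the paper's selection strategy).
        logp = np.log(np.clip(proba, 1e-12, 1.0))
        entropy = -np.sum(proba * logp, axis=1)
        query = np.argsort(entropy)[-batch_size:]

        # 4) The "user" (oracle) labels the queried samples; add them to the
        #    labeled set and remove them from the unlabeled pool.
        chosen = pool_idx[query]
        X_lab = np.vstack([X_lab, X_pool[chosen]])
        y_lab = np.concatenate([y_lab, y_pool_oracle[chosen]])
        pool_idx = np.delete(pool_idx, query)

    return nca, knn
```

The key design point conveyed by the abstract is that the embedding is re-learned at every iteration, so the metric improves as the labeled set grows instead of being fixed once at the start.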
| File | Type | License | Size | Format |
|---|---|---|---|---|
| Pasolli_2016.pdf (authorized users only) | Post-print document | Private/restricted access | 3.48 MB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.