
From Principal Component Analysis to Autoencoders: a comparison on simulated data from psychometric models / Casella, M.; Dolce, P.; Ponticorvo, M.; Marocco, D. - (2022), pp. 377-381. (Paper presented at the 1st IEEE International Workshop on Metrology for Extended Reality, Artificial Intelligence and Neural Engineering, MetroXRAINE 2022, held in Italy in 2022) [10.1109/MetroXRAINE54828.2022.9967686].

From Principal Component Analysis to Autoencoders: a comparison on simulated data from psychometric models

Casella M.;Dolce P.;Ponticorvo M.;Marocco D.
2022

Abstract

Dimensionality reduction is the search for a low-dimensional space that captures the 'essence' of the original high-dimensional data. Principal Component Analysis (PCA) is one of the most widely used dimensionality reduction techniques in psychology for data analysis and measure development. However, machine learning techniques can manage more complex data and represent a valuable alternative to classical methods. In this work, we consider autoencoders, neural networks with as many inputs as outputs and a smaller number of hidden nodes, which are widely used for dimensionality reduction in several fields of study. Recent literature has compared PCA and autoencoders, focusing especially on differences in image reconstruction. In particular, recent research has proposed the PCA-autoencoder, a neural network that embeds PCA properties. The objective of this work is to compare the performance of the PCA-autoencoder, an autoencoder with uncorrelated hidden features, a simple autoencoder, and PCA on artificial data simulated from factor-based populations. Results show that three-layered linear autoencoders and PCA perform dimensionality reduction very similarly in terms of accuracy and reconstruction error, even on data simulated from psychometric models, and could be used where traditional methods show their limits.
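The comparison described in the abstract can be sketched in a few lines of NumPy: data are simulated from a simple factor model, reduced to k dimensions with PCA (via SVD) and with a three-layered linear autoencoder, and the two reconstruction errors are compared. The specific values below (2 factors, 6 observed variables, 500 cases, the learning rate and iteration count) are illustrative assumptions, not the simulation design or training setup used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate data from a factor-based population (illustrative choice:
# 2 latent factors, 6 observed variables, 500 cases)
n, p, k = 500, 6, 2
loadings = rng.normal(size=(p, k))
factors = rng.normal(size=(n, k))
X = factors @ loadings.T + 0.1 * rng.normal(size=(n, p))
X = X - X.mean(axis=0)                       # centre the data, as PCA assumes

# PCA via SVD: rank-k reconstruction from the top-k right singular vectors
U, S, Vt = np.linalg.svd(X, full_matrices=False)
X_pca = (X @ Vt[:k].T) @ Vt[:k]
pca_err = np.mean((X - X_pca) ** 2)

# Three-layered linear autoencoder (p -> k -> p), trained by plain
# gradient descent on the reconstruction error
W_e = 0.1 * rng.normal(size=(p, k))          # encoder weights
W_d = 0.1 * rng.normal(size=(k, p))          # decoder weights
lr = 0.01
for _ in range(2000):
    H = X @ W_e                              # bottleneck activations
    R = H @ W_d - X                          # reconstruction residual
    grad_d = H.T @ R * 2 / n                 # gradient w.r.t. decoder
    grad_e = X.T @ (R @ W_d.T) * 2 / n       # gradient w.r.t. encoder
    W_d -= lr * grad_d
    W_e -= lr * grad_e

X_ae = X @ W_e @ W_d
ae_err = np.mean((X - X_ae) ** 2)

# A linear autoencoder's optimum spans the same subspace as the top-k
# principal components, so the two errors should nearly coincide
print(f"PCA MSE: {pca_err:.4f}  autoencoder MSE: {ae_err:.4f}")
```

Because PCA gives the optimal rank-k linear reconstruction, the autoencoder's error can approach but never beat it; the closeness of the two values is the phenomenon the paper examines.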
978-1-6654-8574-6
Files for this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11588/906139
Citations
  • Scopus 2