Semantic analysis of test items through large language model embeddings predicts a-priori factorial structure of personality tests / Milano, N.; Luongo, M.; Ponticorvo, M.; Marocco, D. - In: CURRENT RESEARCH IN BEHAVIORAL SCIENCES. - ISSN 2666-5182. - 8:(2025). [10.1016/j.crbeha.2025.100168]
Semantic analysis of test items through large language model embeddings predicts a-priori factorial structure of personality tests
Milano N.; Ponticorvo M.; Marocco D.
2025
Abstract
In this article, we explore the use of Large Language Models (LLMs) for predicting factor loadings in personality tests through the semantic analysis of test items. By leveraging text embeddings generated from LLMs, we evaluate the semantic similarity of test items and their alignment with hypothesized factorial structures without depending on human response data. Our methodology involves using embeddings of items from four different personality tests to examine correlations between item semantics and their grouping into principal factors. Our results indicate that LLM-derived embeddings can effectively capture semantic similarities among test items, showing moderate to high correlation with the factorial structure produced by human respondents in all tests, and can potentially serve as a valid measure of content validity for initial survey design and refinement. This approach offers insight into the robustness of embedding techniques in psychological evaluation and provides a novel perspective on test item analysis.
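To make the described pipeline concrete, the following minimal Python sketch illustrates the kind of analysis the abstract outlines: embedding test items, computing pairwise cosine similarities, and correlating those similarities with the items' a-priori factor grouping. The sentence-transformers library, the all-MiniLM-L6-v2 model, the example items, and the point-biserial statistic are assumptions made here for illustration; the record does not specify which embedding model, test items, or correlation measure the authors actually used.

# A minimal sketch of the embedding-based analysis described in the abstract.
# Assumptions (not specified in the record): sentence-transformers and the
# "all-MiniLM-L6-v2" model stand in for whatever LLM embeddings the authors
# used; the items and factor labels below are illustrative only.
from itertools import combinations

from scipy.stats import pointbiserialr
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical test items with their a-priori factor assignments.
items = [
    ("I am the life of the party.", "Extraversion"),
    ("I start conversations with strangers.", "Extraversion"),
    ("I get stressed out easily.", "Neuroticism"),
    ("I worry about things.", "Neuroticism"),
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
embeddings = model.encode([text for text, _ in items])

# Pairwise semantic similarity between all items.
sim = cosine_similarity(embeddings)

# For every item pair, record its similarity and whether the two items
# belong to the same hypothesized factor.
pair_sims, same_factor = [], []
for i, j in combinations(range(len(items)), 2):
    pair_sims.append(sim[i, j])
    same_factor.append(int(items[i][1] == items[j][1]))

# Point-biserial correlation: are same-factor pairs semantically closer?
r, p = pointbiserialr(same_factor, pair_sims)
print(f"similarity vs. factor grouping: r={r:.2f} (p={p:.3f})")

A positive correlation here would indicate that items assigned to the same factor are semantically closer in embedding space than cross-factor pairs, which is one simple way to operationalize the alignment between item semantics and the hypothesized factorial structure that the abstract reports.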


