Title: Robustness of κ-type coefficients as measures of rater consistency
Authors:
Publication date: 2019
Abstract: The ability to measure how closely raters agree when providing subjective evaluations is a need common to many fields of research. A main criticism arising in agreement studies concerns the dependency of the commonly adopted κ-type agreement coefficients on the frequency distribution of ratings over classification categories. This study aims at investigating the robustness of two inferential benchmarking procedures adopted for characterizing the extent of inter-rater agreement when applied to different κ-type coefficients.
Handle: http://hdl.handle.net/11588/762931
ISBN: 978-88-86638-65-4
Appears in type: 4.1 Articles in conference proceedings
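The criticism summarized in the abstract, that κ-type coefficients depend on the frequency distribution of ratings over categories, can be illustrated with Cohen's κ, the best-known coefficient of this family: κ = (p_o − p_e) / (1 − p_e), where p_o is the observed agreement and p_e the agreement expected by chance from the raters' marginal frequencies. The following minimal Python sketch (not taken from the paper; function and variable names are illustrative) shows two rating pairs with identical raw agreement p_o = 0.5 yielding very different κ values once the marginals are skewed:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters scoring the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e the chance agreement implied by each
    rater's marginal category frequencies.
    """
    n = len(ratings_a)
    # Observed proportion of items on which the raters agree.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement from the product of marginal frequencies.
    marg_a = Counter(ratings_a)
    marg_b = Counter(ratings_b)
    p_e = sum(marg_a[c] * marg_b.get(c, 0) for c in marg_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Same raw agreement (p_o = 0.5) under balanced vs. skewed marginals:
balanced = cohens_kappa(list("AABB"), list("ABAB"))  # kappa = 0.0
skewed = cohens_kappa(list("AAAB"), list("AABA"))    # kappa = -1/3
```

With balanced marginals p_e = 0.5 and κ = 0, while the skewed marginals push p_e to 0.625 and drive κ negative, even though the raters agree on exactly half the items in both cases. This marginal dependence is precisely what motivates studying the robustness of benchmarking procedures across different κ-type coefficients.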
Files attached to this item:
There are no files associated with this item.
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.