
Benchmarking procedures for characterizing the extent of rater agreement: a comparative study

Vanacore A.; Pellegrino M. S.
2022

Abstract

Decision-making processes often rely on subjective evaluations provided by human raters. In the absence of a gold standard against which to check evaluation trueness, a rater's evaluative performance is generally measured through rater agreement coefficients. In this study, some parametric and non-parametric inferential benchmarking procedures for characterizing the extent of rater agreement, assessed via kappa-type agreement coefficients, are illustrated. A Monte Carlo simulation study has been conducted to compare the performance of each procedure in terms of the weighted misclassification rate computed across all agreement categories. Moreover, in order to investigate whether the procedures overestimate or underestimate the level of agreement, misclassifications have also been computed for each category separately. The practical application of the coefficients and inferential benchmarking procedures is illustrated via two real data sets exemplifying different experimental conditions, so as to highlight performance differences due to sample size.
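
For concreteness, the benchmarking scheme summarized above (estimate a kappa-type coefficient, account for its sampling uncertainty, and map the result onto an agreement scale) can be sketched in a few lines of code. The snippet below is only an illustrative sketch, not the specific coefficients or procedures studied in the paper: it assumes Cohen's kappa for two raters, a simple large-sample standard error, a one-sided 95% lower confidence bound, the Landis and Koch benchmark scale, and made-up toy ratings.

    # Illustrative sketch: inferential benchmarking of an agreement coefficient.
    # Assumptions (not from the paper): Cohen's kappa, a simple large-sample
    # standard error, a one-sided 95% lower bound, Landis-Koch categories.
    from collections import Counter
    from math import sqrt
    from statistics import NormalDist

    def cohen_kappa(ratings_a, ratings_b):
        """Return Cohen's kappa and an approximate standard error for two raters."""
        n = len(ratings_a)
        categories = set(ratings_a) | set(ratings_b)
        # observed proportion of agreement
        p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
        # chance agreement from the raters' marginal frequencies
        freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
        p_e = sum(freq_a[c] * freq_b[c] for c in categories) / n ** 2
        kappa = (p_o - p_e) / (1 - p_e)
        se = sqrt(p_o * (1 - p_o) / n) / (1 - p_e)  # rough large-sample approximation
        return kappa, se

    def benchmark(kappa, se, confidence=0.95):
        """Assign an agreement category to the one-sided lower confidence bound."""
        lower = kappa - NormalDist().inv_cdf(confidence) * se
        landis_koch = [(0.81, "almost perfect"), (0.61, "substantial"),
                       (0.41, "moderate"), (0.21, "fair"), (0.0, "slight")]
        for threshold, label in landis_koch:
            if lower >= threshold:
                return lower, label
        return lower, "poor"

    # toy data: 15 subjects rated by two raters on a three-category scale
    a = ["low", "mid", "mid", "high", "low", "mid", "high", "high",
         "low", "mid", "mid", "low", "high", "mid", "low"]
    b = ["low", "mid", "high", "high", "low", "mid", "high", "mid",
         "low", "mid", "mid", "low", "high", "mid", "mid"]
    k, se = cohen_kappa(a, b)
    lower, category = benchmark(k, se)
    print(f"kappa = {k:.3f}, lower bound = {lower:.3f}, benchmark: {category}")

The inferential step is that the agreement category is assigned to the lower confidence bound rather than to the point estimate, so smaller samples yield more conservative benchmarks, which is the sample-size effect the two real data sets in the paper are meant to highlight.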
Benchmarking procedures for characterizing the extent of rater agreement: a comparative study / Vanacore, A.; Pellegrino, M. S. - In: QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL. - ISSN 0748-8017. - 38:3 (2022), pp. 1404-1415. [10.1002/qre.2982]
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11588/880983
Citations
  • PMC: n/a
  • Scopus: 0
  • Web of Science (ISI): 0