In several contexts, ranging from manufacturing to the service sector, quality assessment relies on subjective evaluations provided by a certain (sometimes small) number of raters. Generally, the raters are the final customers but, depending on the specific context and on the scope of the assessment, they may also be field experts or trained quality inspectors. Each rater has an array of options, including subjective and objective assessments, that she/he may use to judge, rank and rate the quality of a product, a service or even a process. In effect, the rater acts as a measurement instrument whose measurement error depends on her/his technical knowledge and skills with respect to the specific evaluation task. Raters’ evaluations are collected and then processed with the aim of obtaining useful information for strategic and/or operational decisions. Starting from the straightforward but often disregarded premise that only reliable raters can provide fair quality evaluations, a procedure is proposed for selecting reliable raters who can be effectively employed to obtain a fair quality diagnosis. Rater reliability is defined as the ability to provide stable and coherent evaluations, and it is measured as a function of two factors: repeatability (i.e. the ability to assign a stable score to the same quality item on different occasions) and consistency (i.e. the ability to score the same quality item coherently using different rating scales). Building on this definition and following a psychometric approach, a rater reliability index has been developed, together with a hypothesis testing procedure for the selection of reliable raters. The index and the selection procedure are illustrated by applying them to test the reliability of a group of students acting as quality assessors of a university teaching course.
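The abstract defines the index only verbally, as a function of repeatability (stable scores for the same item on different occasions) and consistency (coherent scores for the same item on different rating scales). The sketch below is a minimal illustration of that two-factor idea, not the authors' actual formula: both factors are computed as normalized mean absolute agreement on [0, 1], and the combining rule (a geometric mean) is an assumption made purely for illustration.

```python
import math

def repeatability(scores_t1, scores_t2, k):
    """Illustrative repeatability: agreement between two rating occasions
    on the same k-point scale, normalized so 1 = perfectly stable ratings."""
    diffs = [abs(a - b) for a, b in zip(scores_t1, scores_t2)]
    return 1.0 - (sum(diffs) / len(diffs)) / (k - 1)

def consistency(scores_a, scores_b, k_a, k_b):
    """Illustrative consistency: agreement between ratings of the same items
    given on two different scales (k_a-point and k_b-point), after rescaling
    both sets of scores to [0, 1]."""
    a = [(s - 1) / (k_a - 1) for s in scores_a]
    b = [(s - 1) / (k_b - 1) for s in scores_b]
    diffs = [abs(x - y) for x, y in zip(a, b)]
    return 1.0 - sum(diffs) / len(diffs)

def reliability_index(rep, cons):
    """Combine the two factors into a single index in [0, 1]; the geometric
    mean is an illustrative choice, not the index proposed in the paper."""
    return math.sqrt(rep * cons)

# Example: one rater scores four items twice on a 5-point scale,
# then the same items once on a 9-point scale.
rep = repeatability([4, 5, 3, 4], [4, 4, 3, 5], k=5)        # 0.875
cons = consistency([4, 5, 3, 4], [7, 9, 5, 7], k_a=5, k_b=9)
print(reliability_index(rep, cons))
```

A perfectly reliable rater (identical scores across occasions and equivalent scores across scales) attains an index of 1; the paper's selection step would then test whether an observed index is significantly above some reliability threshold, a detail not reproduced here.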

A novel reliability index to evaluate and test rater performance / Vanacore, Amalia; Pellegrino, Maria Sole. - (2015). (Paper presented at the European Network for Business and Industrial Statistics, 15th International Annual Conference, held in Prague, 6-10 September 2015).

A novel reliability index to evaluate and test rater performance

Vanacore, Amalia; Pellegrino, Maria Sole
2015

Abstract

978-961-240-294-5

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11588/670862