An Item Response Theory Approach to Enhance Peer Assessment Effectiveness in Massive Open Online Courses / Nakayama, Minoru; Sciarrone, Filippo; Temperini, Marco; Uto, Masaki. - In: INTERNATIONAL JOURNAL OF DISTANCE EDUCATION TECHNOLOGIES. - ISSN 1539-3100. - 20:1(2022), pp. 1-19. [10.4018/IJDET.313639]

An Item Response Theory Approach to Enhance Peer Assessment Effectiveness in Massive Open Online Courses

Filippo Sciarrone; Marco Temperini
2022

Abstract

Massive Open Online Courses (MOOCs) are effective and flexible resources to educate, train, and empower populations. Peer Assessment (PA) provides a powerful pedagogical strategy to support educational activities and foster learners’ success, even where a very large number of learners is involved. Item Response Theory (IRT) can model students’ features, such as the skill to accomplish a task and the capability to mark tasks. In this paper we investigate the applicability of IRT models to PA in the learning environments of MOOCs. Our main goal is to evaluate the relationships between students’ IRT parameters (ability, strictness) and PA parameters (number of graders per task and rating scale). We use a dataset simulating a large class (1,000 peers), built using a Gaussian distribution of the students’ skills to accomplish a task. The IRT analysis of the PA data indicates that the best estimate of peers' ability is obtained when 15 raters per task are used, with a [1,10] rating scale.
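As a concrete picture of the setup the abstract describes, the following is a minimal, illustrative Python sketch of how such a peer-assessment dataset could be simulated: abilities drawn from a Gaussian, a fixed number of raters per task, a rater strictness term, and grades on a [1,10] scale. The rating rule, the noise levels, and all names introduced here (N_RATERS, strictness, simulate_grades) are assumptions for illustration only; they are not the authors' actual IRT model or code.

    # Illustrative sketch only (not the authors' code): simulate peer-assessment
    # data in the spirit described by the abstract. Abilities follow a Gaussian;
    # each rater has a "strictness" parameter; grades on a [1, 10] scale come
    # from a simple logistic squashing rule, chosen here purely for illustration.
    import numpy as np

    rng = np.random.default_rng(42)

    N_PEERS = 1000                 # size of the simulated class
    N_RATERS = 15                  # raters per task (the configuration the study found best)
    SCALE_MIN, SCALE_MAX = 1, 10   # [1, 10] rating scale

    # Latent ability of each peer (author of a task) ~ Gaussian
    ability = rng.normal(loc=0.0, scale=1.0, size=N_PEERS)
    # Rater strictness: stricter raters shift grades downward
    strictness = rng.normal(loc=0.0, scale=0.5, size=N_PEERS)

    def simulate_grades(author_idx, rater_ids):
        """Return the grades the selected raters give to one author's task."""
        # Linear predictor: author ability minus rater strictness, plus noise
        eta = ability[author_idx] - strictness[rater_ids] + rng.normal(0, 0.3, len(rater_ids))
        # Map the latent score onto the discrete [1, 10] scale
        p = 1.0 / (1.0 + np.exp(-eta))
        grades = np.clip(np.round(SCALE_MIN + p * (SCALE_MAX - SCALE_MIN)), SCALE_MIN, SCALE_MAX)
        return grades.astype(int)

    # Build the full peer-assessment table: each peer's task is graded by
    # N_RATERS randomly chosen other peers.
    records = []
    for author in range(N_PEERS):
        raters = rng.choice(np.delete(np.arange(N_PEERS), author), size=N_RATERS, replace=False)
        for rater, grade in zip(raters, simulate_grades(author, raters)):
            records.append((author, rater, grade))

    print(f"Generated {len(records)} peer grades "
          f"({N_PEERS} tasks x {N_RATERS} raters, scale [{SCALE_MIN},{SCALE_MAX}])")

A dataset of this shape (author, rater, grade) is the kind of input an IRT rater model would then be fitted to in order to recover ability and strictness estimates.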
2022
Item Response Theory; Peer Assessment; Rating Peers; Grading Scale; Strictness; Latent Ability; Pearson Correlation; Gaussian Distribution; Simulation
01 Journal publication::01a Journal article
Files attached to this item
File: Nakayama_An-Item-Response_2022.pdf
Access: open access
Type: Publisher's version (published with the publisher's layout)
License: Creative Commons
Size: 2.24 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11573/1663122
Citations
  • Scopus: 2
  • Web of Science: 1