Evaluation of Programming Skills via Peer Assessment and IRT Estimation Techniques / Nakayama, Minoru; Sciarrone, Filippo; Temperini, Marco; Uto, Masaki. - (2022), pp. 1-8. (Paper presented at the 20th International Conference on Information Technology Based Higher Education and Training, ITHET 2022, held in Antalya, Turkey) [10.1109/ithet56107.2022.10031766].

Evaluation of Programming Skills via Peer Assessment and IRT Estimation Techniques

Sciarrone, Filippo; Temperini, Marco
2022

Abstract

In any study program where Computer Science is taught (whether as a supporting subject or as the main one), the assessment of students' programming skills is a complex and crucial endeavour. Peer assessment (PA) can expose students (peers) to a very effective educational methodology, spur competence, and evaluate skills in a wide range of subjects, including Computer Science and programming. An important feature is that data from PA sessions can be used to model the students and to support automated grading. In this paper we analyse data from experiments in which several PA sessions were conducted, with students having to produce programs and evaluate their peers' programs. The main aim is to see how methods of Item Response Theory (IRT) can be applied in the PA framework to model the students effectively. The results seem encouraging, suggesting that more traditional automated grading techniques can be enriched by IRT methods.
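As a rough illustration of how an IRT-style approach can turn peer-assessment data into student ability estimates, the sketch below fits a Rasch-type formulation (author ability minus rater severity) to simulated binary peer judgements by joint maximum likelihood. This is a hypothetical minimal example, not the specific IRT model or estimation procedure used in the paper; the variable names and the gradient-ascent fitting routine are assumptions made only for illustration.

import numpy as np

# Hypothetical sketch (not the paper's actual model): a Rasch-style peer
# assessment model where rater j judges author i's program correct with
# probability sigmoid(theta_i - beta_j); theta = author ability,
# beta = rater severity. Parameters are fit by joint maximum likelihood
# with plain gradient ascent on simulated data.

rng = np.random.default_rng(0)
n_authors, n_raters = 30, 30
true_theta = rng.normal(0.0, 1.0, n_authors)   # simulated author abilities
true_beta = rng.normal(0.0, 1.0, n_raters)     # simulated rater severities

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Simulated binary peer judgements (1 = program judged correct by the peer)
x = rng.binomial(1, sigmoid(true_theta[:, None] - true_beta[None, :]))

theta = np.zeros(n_authors)                    # ability estimates
beta = np.zeros(n_raters)                      # severity estimates
lr = 0.01
for _ in range(3000):
    resid = x - sigmoid(theta[:, None] - beta[None, :])   # x - P(x = 1)
    theta += lr * resid.sum(axis=1)            # gradient of log-likelihood w.r.t. theta_i
    beta -= lr * resid.sum(axis=0)             # gradient of log-likelihood w.r.t. beta_j
    theta -= theta.mean()                      # anchor the scale (identifiability)

print("correlation with true abilities:",
      round(float(np.corrcoef(theta, true_theta)[0, 1]), 2))

In a real setting the binary judgements would be replaced by the peer grades collected in the PA sessions, and a polytomous (graded-response) IRT model with a proper estimation method would typically be preferable.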
2022
20th International Conference on Information Technology Based Higher Education and Training, ITHET 2022
Item Response Theory; Peer Assessment; Performance Assessment; Programming skill
04 Conference proceedings publication::04b Conference paper in volume
Files attached to this record
File: Nakayama_Evaluation-Programming-Skills_2022.pdf (access restricted to repository administrators)
Type: Publisher's version (published with the publisher's layout)
Licence: All rights reserved
Size: 485.06 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1729491
Citations
  • PubMed Central: not available
  • Scopus: 0
  • Web of Science: 0