Q2A-II, a System to Support Peer Assessment on Homework: A Study on Four Years of Use / Sterbini, A.; Temperini, M. - 14607:(2024), pp. 249-262. (Paper presented at the 8th International Symposium on Emerging Technologies for Education, SETE 2023, held in Sydney, Australia) [10.1007/978-981-97-4246-2_20].
Q2A-II, a System to Support Peer Assessment on Homework: A Study on Four Years of Use
Sterbini A.; Temperini M.
2024
Abstract
Automated assessment of homework assignments is a challenging topic in programming courses. For some years we have been using our Q2A-II system, which supports (1) automated grading of homework programs and (2) formative peer assessment, performed by students on the algorithm descriptions they submit with the homework. Here we present some of the data we collected during four years of activity with Q2A-II, in the framework of a university course on Basics of Computer Programming. Each year, 4 homework assignments were administered to 300–500 learners, with about 8,600 submissions overall (each made of a program plus an algorithmic description) and about 23,300 peer evaluations. On these data we propose several observations, aiming to rate the effectiveness of the initiative, in view of a more in-depth analysis. On the algorithm descriptions we performed a basic textual categorization, using BERTopic-based text-embedding topic extraction. The aim of the classification is exclusively to assess whether a text can or cannot be considered an algorithm description: in two of the research questions we try to validate the classification and to see how differently the authors of such descriptions behave during the peer-assessment activity.
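The binary classification the abstract describes — deciding whether a submitted text reads like an algorithm description at all — can be illustrated with a toy sketch. The paper uses BERTopic (transformer text embeddings plus topic extraction); the code below is not that pipeline: purely for illustration, it replaces the learned embeddings with bag-of-words vectors and a cosine-similarity threshold against a hypothetical "algorithm description" vocabulary. The vocabulary, function names, and threshold are all invented here.

```python
# Toy stand-in for the embedding-based classification idea: score a text by
# its cosine similarity to a (hypothetical) algorithm-description word profile.
import math
from collections import Counter

# Invented profile of words typical of algorithm descriptions (weights are made up).
ALGO_VOCAB = Counter({
    "loop": 3, "array": 3, "input": 2, "output": 2,
    "iterate": 2, "return": 2, "index": 1, "condition": 1,
})

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def looks_like_algorithm_description(text: str, threshold: float = 0.2) -> bool:
    """Crude yes/no decision: is this text similar enough to the profile?"""
    words = Counter(text.lower().split())
    return cosine(words, ALGO_VOCAB) >= threshold

print(looks_like_algorithm_description(
    "iterate over the input array and return the index where the condition holds"))
print(looks_like_algorithm_description("sorry, I had no time this week"))
```

In the actual study, BERTopic's dense embeddings and extracted topics would play the role that the hand-built word profile plays in this sketch.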
File | Type | License | Size | Format | Access
---|---|---|---|---|---
Sterbini_Q2A-II_2024.pdf | Publisher's version (published with the publisher's layout) | All rights reserved | 1.58 MB | Adobe PDF | Archive managers only; contact the author
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.