Modeling peer assessment as a personalized predictor of teacher's grades: The case of OpenAnswer / DE MARSICO, Maria; Sterbini, Andrea; Temperini, Marco. - ELECTRONIC. - (2016), pp. 1-5. (Paper presented at the 15th International Conference on Information Technology Based Higher Education and Training (ITHET), held in Istanbul, Turkey) [10.1109/ITHET.2016.7760743].

Modeling peer assessment as a personalized predictor of teacher's grades: The case of OpenAnswer

DE MARSICO, Maria; STERBINI, Andrea; TEMPERINI, Marco
2016

Abstract

Questions with open answers are rarely used as e-learning assessment tools because of the high workload they impose on the teacher/tutor who must grade them. This can be mitigated by having students grade each other's answers, but the quality of the resulting grades may be highly uncertain. In our OpenAnswer system we have modeled peer assessment as a Bayesian network connecting a set of sub-networks (each representing a participating student) to the answers of the peers she graded. The model has shown a good ability to predict the exact teacher mark (without further information from the teacher) and a very good ability to predict it within 1 mark of the correct one (the ground truth). From the available datasets we noticed that different teachers sometimes disagree in their assessment of the same answer. For this reason, in this paper we explore how the model can be tailored to a specific teacher to improve its prediction ability. To this aim, we parametrically define the CPTs (Conditional Probability Tables) describing the probabilistic dependence of each Bayesian variable on the others in the modeled network, and we optimize the parameters generating the CPTs to obtain the smallest average difference between the predicted grades and the teacher's marks (the ground truth). The optimization is carried out either separately for each teacher available in our datasets or with respect to the whole dataset. The paper discusses the results and shows that the prediction performance of our model, when optimized separately for each teacher, improves on the case in which the model is globally optimized with respect to the whole dataset, which in turn improves on the predictions of raw peer assessment. The improved prediction would allow us to use OpenAnswer, without teacher intervention, as a class monitoring and diagnostic tool.
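To make the optimization idea in the abstract concrete, here is a minimal, self-contained Python sketch: a parametric CPT P(peer_grade | true_mark), a MAP prediction of the mark from peer grades, and a grid search that fits the CPT parameters either per teacher or globally, minimizing the average distance between predicted grades and the teacher's marks. The 0-10 mark scale, the Gaussian-like CPT parametrization, the offset term, and the synthetic data are all illustrative assumptions, not the actual OpenAnswer model or its datasets.

```python
# Hypothetical sketch, not the authors' implementation: a CPT generated by
# parameters (spread, offset), fitted to a teacher's marks by grid search.
import math
import random

GRADES = list(range(11))  # assumed 0..10 mark scale (an illustrative choice)

def cpt_row(true_mark, spread):
    """P(peer_grade | true_mark): a discretized bell curve centred on the
    true mark; `spread` is one of the CPT-generating parameters."""
    weights = [math.exp(-((g - true_mark) ** 2) / (2.0 * spread ** 2))
               for g in GRADES]
    total = sum(weights)
    return [w / total for w in weights]

def predict_mark(peer_grades, spread, offset):
    """MAP estimate of the mark from peer grades (uniform prior), shifted by
    `offset` to mimic tailoring the prediction to one teacher's style."""
    best_mark, best_logp = GRADES[0], -math.inf
    for m in GRADES:
        row = cpt_row(m, spread)
        logp = sum(math.log(row[g]) for g in peer_grades)
        if logp > best_logp:
            best_mark, best_logp = m, logp
    return min(max(best_mark + offset, GRADES[0]), GRADES[-1])

def mean_abs_error(dataset, spread, offset):
    """Average |predicted grade - teacher mark| over the dataset."""
    return sum(abs(predict_mark(pg, spread, offset) - tm)
               for pg, tm in dataset) / len(dataset)

def fit_params(dataset):
    """Grid search for the (spread, offset) pair minimizing the error."""
    grid = [(s, o) for s in (0.5, 1.0, 1.5, 2.0, 3.0)
                   for o in (-2, -1, 0, 1, 2)]
    return min(grid, key=lambda p: mean_abs_error(dataset, *p))

# Toy data: two synthetic "teachers" who systematically disagree by one mark.
random.seed(0)
def synthesize(teacher_bias, n=40):
    data = []
    for _ in range(n):
        true = random.choice(GRADES)
        peers = [min(max(true + random.randint(-2, 2), 0), 10)
                 for _ in range(3)]
        teacher = min(max(true + teacher_bias, 0), 10)
        data.append((peers, teacher))
    return data

per_teacher = {"T1": synthesize(0), "T2": synthesize(-1)}
global_data = [pair for d in per_teacher.values() for pair in d]
global_params = fit_params(global_data)
for name, data in per_teacher.items():
    local_params = fit_params(data)
    print(name,
          "per-teacher MAE:", round(mean_abs_error(data, *local_params), 2),
          "| globally-fitted MAE:", round(mean_abs_error(data, *global_params), 2))
```

On this toy data the per-teacher fit can absorb T2's systematic one-mark disagreement, which a single global fit cannot, mirroring the per-teacher versus whole-dataset comparison described in the abstract.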
2016
15th International Conference on Information Technology Based Higher Education and Training (ITHET)
Modeling peer-assessment; Bayesian networks; Automatic correction of open answers
04 Publication in conference proceedings::04b Conference paper in volume
Files attached to this product

DeMarsico_Postprint_Modeling-peer-assessment_2016.pdf
Access: open access
Note: https://ieeexplore.ieee.org/document/7760743
Type: Post-print document (version following peer review and accepted for publication)
License: All rights reserved
Size: 293.39 kB
Format: Adobe PDF

DeMarsico_Modeling-peer-assessment_2016.pdf
Access: archive administrators only (contact the author)
Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 305.59 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/931134
Citations
  • PMC: not available
  • Scopus: 3
  • Web of Science (ISI): 0