Sterbini, Andrea; Temperini, Marco (2013). Analysis of open answers via mediated peer-assessment. In: Proceedings of the 17th International Conference on System Theory, Control and Computing (ICSTCC), Sinaia, Romania, 11-13 October 2013, pp. 663-668. [DOI: 10.1109/icstcc.2013.6689036]
Analysis of open answers via mediated peer-assessment
STERBINI, Andrea;TEMPERINI, Marco
2013
Abstract
The analysis and grading of open answers (i.e., answers to open-ended questions) is a powerful means to model the state of knowledge and the cognitive level of students in e-learning systems. In a previous work (presented at the last SPEL workshop) we described an approach to open-answer grading, based on Constraint Logic Programming (CLP) and peer assessment, in which each student was modeled as a triple of finite-domain variables: K for the student's Knowledge of the question's topic, C for the Correctness of his/her answer, and J for his/her estimated ability to evaluate ("Judge") a peer's answer. The CLP Prolog module supported the grading process, eventually producing a complete set of grades even though the teacher had actually graded only a (albeit substantial) subset of them. Here we tackle the problem of grading open answers with an alternative approach: peer assessment in a social, collaborative e-learning setting, mediated by the teacher through a simple Bayesian-network model. The model manages student models (based on the same finite-domain variables as above) and again produces automated evaluations for the answers that the teacher has not graded. In particular, we give an account of the OpenAnswer web-based system, which allows teachers and students to use our approach, and report the results of the experiments we conducted.
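To make the idea concrete, the kind of Bayesian model the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's actual network: all conditional probability values, the binarisation of K, C, and J, and the function names are assumptions made here for exposition. A student's Knowledge (K) influences both the Correctness (C) of their answer and their Judging ability (J); a peer grade depends on the graded answer's true correctness and on the grader's J; a teacher's grade anchors one C node, and inference propagates evidence to ungraded answers.

```python
from itertools import product

# Minimal sketch of the kind of Bayesian model described above.
# All CPT values are illustrative assumptions, NOT taken from the paper.
# Variables are binarised: 1 = high/correct, 0 = low/incorrect.

P_K = {0: 0.5, 1: 0.5}  # prior on a student's Knowledge

def p_C(c, k):
    """P(Correctness = c | Knowledge = k): knowing the topic helps."""
    p1 = 0.8 if k else 0.3
    return p1 if c else 1.0 - p1

def p_J(j, k):
    """P(Judging ability = j | Knowledge = k)."""
    p1 = 0.7 if k else 0.4
    return p1 if j else 1.0 - p1

def p_grade(g, c, j):
    """P(peer grade = g | true correctness c, grader's judging ability j).
    A good judge (j = 1) reports the true correctness more reliably."""
    acc = 0.9 if j else 0.6
    return acc if g == c else 1.0 - acc

def posterior_C_A(grade_by_B, teacher_C_B):
    """P(C_A = 1 | B's peer grade of A's answer, teacher's grade of B's answer),
    computed by brute-force enumeration over the hidden variables."""
    num = den = 0.0
    for k_A, c_A, k_B, j_B in product((0, 1), repeat=4):
        w = (P_K[k_A] * p_C(c_A, k_A) *          # student A's model
             P_K[k_B] * p_C(teacher_C_B, k_B) *  # B's answer, graded by the teacher
             p_J(j_B, k_B) *
             p_grade(grade_by_B, c_A, j_B))      # B's peer grade of A's answer
        den += w
        if c_A == 1:
            num += w
    return num / den
```

With these illustrative numbers, a positive peer grade from a student whose own answer the teacher marked correct raises P(C_A = 1) above its 0.55 prior, while a negative grade from the same peer lowers it; the actual OpenAnswer network and CPTs differ.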
File: VE_2013_11573-522791.pdf (access restricted to archive managers)
Type: Publisher's version (published with the publisher's layout)
License: All rights reserved
Size: 978.35 kB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.