Supporting assessment of open answers in a didactic setting / Sterbini, Andrea; Temperini, Marco. - (2012), pp. 678-679. (Paper presented at the 12th IEEE International Conference on Advanced Learning Technologies, ICALT 2012, held in Rome, Italy, 4-6 July 2012) [10.1109/ICALT.2012.149].
Supporting assessment of open answers in a didactic setting
STERBINI, Andrea; TEMPERINI, Marco
2012
Abstract
The Open Answers module is designed to be integrated into SocialX, a social, collaborative, reputation-based e-learning system, to manage answers to open questions. In particular, it aims to support the personal evaluation of students' skills and knowledge in peer-assessment-based learning activities, while mitigating the workload imposed on the teacher by the analysis and correction of answers. In brief: 1) students answer open questions, to be evaluated by peers and by the teacher; 2) students peer-evaluate each other's answers; 3) the teacher grades only a subset of the whole corpus of answers; 4) the system infers an assessment for the remaining answers by exploiting the relations established through the web of the students' peer assessments of those answers, together with the personal evaluation maintained for each student. The peer-assessment data are analyzed through a constraint-logic-based model of students' possible behaviors, yielding a (possibly large) set of hypotheses about the answers' correctness. The teacher is then presented with a minimal set of answers to grade: by grading them, (s)he helps narrow the set of hypotheses. Testing of the constraint-logic-based analysis engine is ongoing, by means of simulation devices that generate suitable sets of student behaviors. © 2012 IEEE.
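The narrowing mechanism the abstract describes can be illustrated with a minimal sketch. This is not the paper's actual constraint-logic engine (which models student behaviors in far more detail); it is a hypothetical toy in which each answer is simply correct or incorrect, a hypothesis assigns a truth value to every answer, and a peer verdict is enforced only when the judging student's own answer is correct under that hypothesis. All names (`answers`, `peer_assessments`, `grade`) are illustrative assumptions.

```python
from itertools import product

# Toy universe: one open answer per student.
answers = ["a1", "a2", "a3"]

# Peer assessments as (judge's own answer, judged answer, verdict).
peer_assessments = [
    ("a1", "a2", True),
    ("a2", "a3", False),
    ("a3", "a1", True),
]

def consistent(hyp):
    """A hypothesis survives if every *reliable* judge's verdict matches it.

    A judge is treated as reliable when their own answer is correct under
    the hypothesis; verdicts from unreliable judges are not enforced.
    """
    return all(hyp[judged] == verdict
               for judge, judged, verdict in peer_assessments
               if hyp[judge])

# Enumerate all correctness hypotheses compatible with the peer assessments.
hypotheses = [dict(zip(answers, vals))
              for vals in product([True, False], repeat=len(answers))
              if consistent(dict(zip(answers, vals)))]

def grade(hyps, answer, correct):
    """The teacher grades one answer; incompatible hypotheses are discarded."""
    return [h for h in hyps if h[answer] == correct]

# A single teacher grade can collapse the hypothesis set considerably.
narrowed = grade(hypotheses, "a1", True)
```

In this toy run the peer assessments alone leave three hypotheses standing, and one teacher grade reduces them to a single one — mirroring step 4 of the workflow, where teacher grading of a small subset of answers lets the system settle the assessment of the rest.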
File: VE_2012_11573-482527.pdf (Adobe PDF, 132.44 kB)
Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Access: repository managers only; contact the author
Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.