Teacher supported peer evaluation through OpenAnswer: A study of some factors / De Marsico, M.; Sterbini, A.; Temperini, M. - 739 (2017), pp. 442-463. (Paper presented at the 8th International Conference on Computer Supported Education, CSEDU 2016, held in Rome, Italy) [10.1007/978-3-319-63184-4_23].
Teacher supported peer evaluation through OpenAnswer: A study of some factors
De Marsico, M.; Sterbini, A.; Temperini, M.
2017
Abstract
The OpenAnswer system computes grades for the answers to open-ended questions given to a class of students, based on the students' peer evaluation and on the teacher's grading work, performed on a subset of the answers. Here we analyze the system's performance, expressed as its capability to infer correct grades from a limited amount of grading work by the teacher. In particular, since performance may well depend on the alternative definitions (value assignments) of several aspects of the system, we present an analysis of such alternative choices, with the aim of identifying which choices result in better system behavior. The factors we investigate are related to the Bayesian framework underpinning OpenAnswer. In particular, we tackle the different possibilities to define the probability distributions of key variables, the conditional probability tables, and the methods to map our statistical variables onto usable grades. Moreover, we analyze the relationship between the two main variables that express the knowledge possessed by a student and her/his peer-assessing skill. By exploring alternative configurations of the system's parameters, we can conclude that Knowledge is in general more difficult than Assessment. The way we reach this (unsurprising) conclusion also provides quantitative evidence for Bloom's ranking.
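To make the setup concrete, here is a minimal illustrative sketch of the kind of Bayesian inference the abstract describes: each student has a latent Knowledge level K and an Assessment skill J, peer grades depend on the graded answer's K and the grader's J through a conditional probability table, the teacher's marks on a subset of answers are treated as exact observations, and the remaining K values are inferred. The three-level scale, the CPT numbers, and the names (p_j_given_k, p_grade, posterior_k) are hypothetical placeholders, not OpenAnswer's actual model; indeed, the paper's point is precisely that these definitions are alternative choices to be compared.

    # Minimal sketch of peer-assessment inference (hypothetical model, not
    # OpenAnswer's actual variable domains or CPTs).
    import itertools

    LEVELS = [0, 1, 2]  # hypothetical grade scale: low / mid / high

    def p_j_given_k(j, k):
        """Hypothetical CPT: assessment skill tends to follow knowledge."""
        return 0.6 if j == k else 0.2

    def p_grade(g, k_author, j_grader):
        """Hypothetical CPT: skilled graders report K accurately, others guess."""
        accurate = 0.8 if j_grader == 2 else (0.5 if j_grader == 1 else 1 / 3)
        return accurate if g == k_author else (1 - accurate) / 2

    def posterior_k(n_students, peer_grades, teacher_marks):
        """Posterior P(K) per student, by brute-force enumeration.

        peer_grades: dict (grader, author) -> grade
        teacher_marks: dict author -> observed K (teacher evidence)
        """
        post = [{k: 0.0 for k in LEVELS} for _ in range(n_students)]
        for ks in itertools.product(LEVELS, repeat=n_students):
            if any(ks[a] != m for a, m in teacher_marks.items()):
                continue  # inconsistent with the teacher's marks
            for js in itertools.product(LEVELS, repeat=n_students):
                w = 1.0
                for s in range(n_students):
                    w *= p_j_given_k(js[s], ks[s])  # uniform prior on K assumed
                for (grader, author), g in peer_grades.items():
                    w *= p_grade(g, ks[author], js[grader])
                for s in range(n_students):
                    post[s][ks[s]] += w
        for d in post:  # normalise each marginal
            z = sum(d.values())
            for k in d:
                d[k] /= z
        return post

    # Toy run: 3 students, each grades the other two; teacher marks student 0.
    grades = {(0, 1): 2, (0, 2): 0, (1, 0): 1, (1, 2): 1, (2, 0): 1, (2, 1): 2}
    print(posterior_k(3, grades, teacher_marks={0: 1}))

Mapping the resulting posterior marginals onto usable grades (for instance via the mode or the expected value of each distribution) is itself one of the alternative choices the paper compares.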
File: DeMarsico_Teacher-supported_2017.pdf (restricted access: archive managers only; contact the author)
Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 2.4 MB | Format: Adobe PDF