How the granularity of evaluation affects reliability of peer-assessment modelization in the OpenAnswer system

DE MARSICO, Maria; STERBINI, Andrea; TEMPERINI, Marco
2014

Abstract

The OpenAnswer system aims to exploit teacher-mediated peer-assessment for the evaluation of answers to open-ended questions. The system models both the learning state of each student and the student's choices during peer-assessment. In OpenAnswer, each student is represented as a Bayesian sub-network made of a triple of finite-domain variables: K for the student's Knowledge about a topic, J for the estimated ability to evaluate ("Judge") the answer of another peer, and C for the Correctness of the answer to a given question. The students' individual sub-networks are connected through further Bayesian variables that model each peer-assessment choice, depending on the type of peer-assessment performed (G for grading, B for choosing the best answer, W for choosing the worst). During an assessment session, each student grades a fixed number of peers' answers. The final result for a given session is a full set of grades for all students' answers, even though the teacher has actually graded only a part of them. The students' assessments are instantiated in the network as evidence, together with the teacher's (possibly partial) grades, so that OpenAnswer can infer the remaining grades. In the previous OpenAnswer implementation, all variables were represented through a probability distribution over three values (Good/Fair/Bad for K and J, correct/fair/wrong for C). We present experiments and simulations showing that, by increasing the domain granularity of all variables from 3 to 6 values (A to F), the information obtained from the Bayesian network achieves higher reliability. © 2014 Springer International Publishing Switzerland.
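For illustration, the following is a minimal sketch, in Python with the pgmpy library, of the kind of two-student network the abstract describes: a (K, J, C) triple per student, connected by a grading variable. The variable names (K1, J1, C1, K2, J2, C2, G12), the edge structure, and all conditional probability values below are assumptions chosen for this example; the paper defines its own conditional tables.

# Illustrative sketch only, not the paper's actual model or parameters:
# two students, where student 1 grades student 2's answer.
# Requires pgmpy (class may be named BayesianModel or
# DiscreteBayesianNetwork in older/newer pgmpy releases).
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Knowledge K drives both the Correctness C of the student's own answer
# and the Judging ability J; the grade G12 that student 1 assigns to
# student 2's answer depends on student 1's J and on C2.
model = BayesianNetwork([
    ("K1", "C1"), ("K1", "J1"),
    ("K2", "C2"), ("K2", "J2"),
    ("J1", "G12"), ("C2", "G12"),
])

# 3-value domains: index 0 = Good/correct, 1 = Fair, 2 = Bad/wrong.
prior_K = [[1 / 3], [1 / 3], [1 / 3]]      # uniform prior on Knowledge
follows_K = [[0.80, 0.15, 0.05],           # assumed P(X | K): X tracks K
             [0.15, 0.70, 0.15],
             [0.05, 0.15, 0.80]]

cpds = [TabularCPD("K1", 3, prior_K),
        TabularCPD("K2", 3, prior_K)]
for student in ("1", "2"):
    for var in ("C", "J"):
        cpds.append(TabularCPD(var + student, 3, follows_K,
                               evidence=["K" + student],
                               evidence_card=[3]))

# Assumed P(G12 | J1, C2): a good judge reports C2 almost faithfully,
# a bad judge grades at random. Columns: (J1, C2), C2 varying fastest.
cpds.append(TabularCPD("G12", 3,
    [[0.90, 0.05, 0.05, 0.70, 0.20, 0.10, 1/3, 1/3, 1/3],
     [0.05, 0.90, 0.05, 0.20, 0.60, 0.20, 1/3, 1/3, 1/3],
     [0.05, 0.05, 0.90, 0.10, 0.20, 0.70, 1/3, 1/3, 1/3]],
    evidence=["J1", "C2"], evidence_card=[3, 3]))

model.add_cpds(*cpds)
assert model.check_model()

# Evidence as in the abstract: the peer grade G12 is observed, and the
# teacher has graded only student 1's answer (C1); the network then
# infers a posterior distribution over the ungraded answer C2.
posterior = VariableElimination(model).query(["C2"],
                                             evidence={"G12": 0, "C1": 0})
print(posterior)

Raising the granularity from 3 to 6 values, as the paper investigates, would amount to changing the cardinality from 3 to 6 and widening the conditional tables accordingly.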
8th International Conference on Augmented Cognition, AC 2014 - Held as Part of 16th International Conference on Human-Computer Interaction, HCI International 2014
social collaborative e-learning; peer-assessment; assessment
04 Publication in conference proceedings::04b Conference paper in a volume
How the granularity of evaluation affects reliability of peer-assessment modelization in the OpenAnswer system / DE MARSICO, Maria; Sterbini, Andrea; Temperini, Marco. - PRINT. - 8534 LNAI:(2014), pp. 212-223. (Paper presented at the 8th International Conference on Augmented Cognition, AC 2014 - Held as Part of the 16th International Conference on Human-Computer Interaction, HCI International 2014, held in Heraklion, Crete, 22-27 June 2014) [10.1007/978-3-319-07527-3_20].
Files attached to this item

File: VE_2014_11573-609386.pdf (access restricted to repository managers; contact the author for a copy)
Type: Publisher's version (published with the publisher's layout)
License: All rights reserved
Size: 545.42 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/609386

Citations
  • PMC: not available
  • Scopus: 2
  • Web of Science: 2