
An Exploration of Open Source Small Language Models for Automated Assessment / Sterbini, A.; Temperini, M. - (2024), pp. 332-337. (Paper presented at the International Conference Information Visualisation, held in Coimbra, Portugal) [10.1109/IV64223.2024.00064].

An Exploration of Open Source Small Language Models for Automated Assessment

Sterbini A.; Temperini M.
2024

Abstract

We explore the classification and assessment capabilities of a selection of Open Source Small Language Models on the specific task of evaluating learners' descriptions of algorithms. The algorithms are described in the framework of programming assignments that learners in a class on Basics of Computer Programming have to answer. Each assignment requires learners to 1) provide a Python program solving the assigned problem, 2) submit a description of the related algorithm, and 3) participate in a formative Peer Assessment session over the submitted algorithm descriptions. Can a Language Model, be it small or large, produce an assessment of the algorithm descriptions? Rather than using any of the best-known, huge, proprietary models, here we explore Small, Open Source Language Models, i.e. models that can run on relatively small computers and whose functioning and training sources are openly documented. We produced a ground-truth evaluation of a large set of algorithm descriptions, taken from one year of use of the Q2A-II system; for this we used an 8-value scale grading the usefulness of each description in a Peer Assessment session. We then tested the agreement of the models' assessments with this ground truth. We also analysed whether a preliminary, automated, binary classification of the descriptions (as useless/useful for a Peer Assessment activity) helps the models grade the usefulness of the descriptions more accurately.
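The abstract does not specify how agreement between the models' grades and the ground truth was measured. Below is a minimal, hypothetical sketch of such a comparison on an 8-value ordinal scale, using quadratic-weighted Cohen's kappa and Spearman correlation (common choices for ordinal grades, not necessarily the ones used in the paper); the grade arrays and the `model_grades` name are illustrative, not taken from the paper.

```python
# Hypothetical sketch: comparing a model's 8-value usefulness grades
# against ground-truth grades. The metrics chosen here (quadratic-weighted
# Cohen's kappa, Spearman correlation) are assumptions, suited to ordinal
# scales; the paper may use different ones.
from sklearn.metrics import cohen_kappa_score
from scipy.stats import spearmanr

# Illustrative grades on an 8-value scale (0..7), one per description.
ground_truth = [0, 3, 7, 5, 2, 6, 4, 1, 7, 3]
model_grades = [1, 3, 6, 5, 2, 7, 4, 0, 7, 2]

# Quadratic weighting penalises large disagreements more than small ones.
kappa = cohen_kappa_score(ground_truth, model_grades, weights="quadratic")
rho, p_value = spearmanr(ground_truth, model_grades)

print(f"Quadratic-weighted kappa: {kappa:.3f}")
print(f"Spearman rho: {rho:.3f} (p={p_value:.3f})")
```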
International Conference Information Visualisation
Algorithm Description Quality; Natural language processing (NLP); Automated Assessment; Open Source Small Language Models; Peer Assessment; Technology Enhanced Learning; Transformer-based Large and Small Language Models
04 Publication in conference proceedings::04b Conference paper in volume
Files attached to this product

File: Sterbini_postprint_Exploration-Open-Source_2024.pdf (archive administrators only)
Note: pdf
Type: Post-print document (version following peer review, accepted for publication)
License: All rights reserved
Size: 188.65 kB
Format: Adobe PDF
Access: Contact the author

File: Sterbini_Exploration-Open-Source_2024.pdf (archive administrators only)
Type: Publisher's version (published with the publisher's layout)
License: All rights reserved
Size: 229.82 kB
Format: Adobe PDF
Access: Contact the author

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1728661
Citations
  • PMC: not available
  • Scopus: 0
  • Web of Science (ISI): 0