Automating Form Completion with Large Language Models / Aiuti, Alessandro; Zeppieri, Stefano; Bisante, Alba; Altamura, Paola; Datla, Venkata Srikanth Varma; D'Elia, Luca; Ershova, Mariia; Malakuczi, Viktor; Rossini, Gabriele; Rotondi, Carmen; Trasciatti, Gabriella; Imbesi, Lorenzo; Panizzi, Emanuele. - (2025), pp. 1-2. (Paper presented at the CHItaly '25 conference held in Salerno, Italy) [10.1145/3750069.3755963].

Automating Form Completion with Large Language Models

Aiuti, Alessandro (first author): Writing – Original Draft Preparation;
Zeppieri, Stefano (second author): Software;
Bisante, Alba: Writing – Review & Editing;
Altamura, Paola; Datla, Venkata Srikanth Varma; D'Elia, Luca; Ershova, Mariia; Malakuczi, Viktor; Rossini, Gabriele; Rotondi, Carmen; Trasciatti, Gabriella; Imbesi, Lorenzo; Panizzi, Emanuele (last author): Supervision

2025

Abstract

Manual form filling, especially when technical data must be retrieved from various documents, remains tedious and error-prone. We present a prototype system using Large Language Models (LLMs) to automate and streamline form completion, demonstrated through a case study in the MICS project, which supports circular economy practices. Our approach combines automatic extraction with human-in-the-loop review, significantly reducing user effort and time while maintaining accuracy. User study results indicate improved usability and satisfaction, but highlight the need for safeguards against AI “hallucinations” and further integration.
ISBN: 9798400721021
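
The full paper is not attached to this record, but the two-stage pattern summarized in the abstract (automatic extraction of field values by an LLM, followed by human-in-the-loop review) can be illustrated with a short sketch. Everything below is hypothetical and not taken from the MICS prototype: the llm_complete stub stands in for a call to any LLM provider, and the field names and sample document are invented placeholders.

import json

def llm_complete(prompt: str) -> str:
    # Hypothetical hook for an LLM call; replace with a real provider.
    # Here it returns a canned JSON response so the sketch runs end to end.
    return '{"material": "recycled PET", "recycled_content_pct": "80", "supplier": null}'

def extract_form_values(document_text: str, field_names: list[str]) -> dict:
    # Ask the model to return the requested form fields as a JSON object.
    prompt = (
        "Extract the following fields from the document and reply only with a "
        "JSON object mapping each field name to its value (use null if absent).\n"
        "Fields: " + ", ".join(field_names) + "\n\n"
        "Document:\n" + document_text
    )
    return json.loads(llm_complete(prompt))

def review_values(values: dict) -> dict:
    # Human-in-the-loop step: the user confirms or corrects each proposed value.
    reviewed = {}
    for field, value in values.items():
        answer = input(f"{field}: {value!r} -> press Enter to accept or type a correction: ").strip()
        reviewed[field] = answer if answer else value
    return reviewed

if __name__ == "__main__":
    # Placeholder source document; in the real system this text would come from
    # the user's uploaded technical documents.
    source = (
        "Datasheet: bottle body made of recycled PET, "
        "80% post-consumer recycled content."
    )
    proposed = extract_form_values(source, ["material", "recycled_content_pct", "supplier"])
    final_form = review_values(proposed)
    print(json.dumps(final_form, indent=2))

In a real deployment, llm_complete would call the chosen model, and the review step would be rendered in the form interface rather than on the command line.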


Use this identifier to cite or link to this item: https://hdl.handle.net/11573/1751523