MAIA: A Benchmark for Multimodal AI Assessment / Testa, Davide; Bonetta, Giovanni; Bernardi, Raffaella; Bondielli, Alessandro; Lenci, Alessandro; Miaschi, Alessio; Passaro, Lucia C.; Magnini, Bernardo. - (2025), pp. 1121-1134. (CLiC-it 2025, Cagliari, Italy).

MAIA: A Benchmark for Multimodal AI Assessment

Testa, Davide (first author); Lenci, Alessandro

Abstract

We introduce MAIA (Multimodal AI Assessment), a multimodal dataset developed as the core component of a competence-oriented benchmark designed for fine-grained investigation of the reasoning abilities of Visual Language Models (VLMs) on videos. The MAIA benchmark has several distinctive features. First, to the best of our knowledge, MAIA is the first Italian-native benchmark addressing video understanding: videos were carefully selected to reflect Italian culture, and the language data (i.e., questions and reference answers) were produced by native Italian speakers. Second, MAIA explicitly includes twelve reasoning categories specifically designed to assess the reasoning abilities of VLMs on videos. Third, we structured the dataset to support two aligned tasks (i.e., statement verification and open-ended visual question answering) built on the same datapoints, thereby allowing VLM coherence to be assessed across task formats. Finally, MAIA integrates, by design, state-of-the-art LLMs into the development process of the benchmark, exploiting their linguistic and reasoning capabilities both for data augmentation and for assessing and improving the overall quality of the data. In the paper we focus on the design principles and the data collection methodology, highlighting how MAIA provides a significant advancement over other available datasets for VLM benchmarking. Data are available on GitHub.
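As a purely illustrative sketch of the aligned-task design described in the abstract, the following Python snippet shows how a single datapoint could expose both task formats and how cross-format coherence could be checked. All field names, the MAIADatapoint class, and the check_coherence helper are hypothetical assumptions for illustration, not the released dataset's schema.

from dataclasses import dataclass

# Hypothetical sketch of a MAIA-style datapoint; field names are
# illustrative assumptions, not the dataset's actual schema.
@dataclass
class MAIADatapoint:
    video_id: str                # identifier of the source video
    reasoning_category: str      # one of the twelve reasoning categories
    question: str                # open-ended VQA prompt (Italian)
    reference_answer: str        # gold answer written by a native speaker
    statement: str               # declarative form of the same content
    statement_is_true: bool      # label for the verification task

def check_coherence(vqa_correct: bool, verification_correct: bool) -> bool:
    """A model is coherent on a datapoint when it succeeds (or fails)
    on both aligned task formats built from the same content."""
    return vqa_correct == verification_correct

Because both tasks are built on the same datapoint, comparing per-item outcomes in this way isolates whether a model's knowledge is stable across formats rather than an artifact of one prompt style.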
2025
CLiC-it 2025
Multimodality, Benchmarking, Vision-Language Models, Multimodal Reasoning, Language Resources
04 Publication in conference proceedings::04b Conference paper in a volume
Files attached to this item
No files are associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11573/1764614
Warning: the displayed data have not been validated by the university.

Citations
  • PMC: n/a
  • Scopus: 0
  • Web of Science: n/a