Automatic analysis of multimodal requirements: a research preview / Bruni, E.; Ferrari, A.; Seyff, N.; Tolomei, G. - 7195 LNCS (2012), pp. 218-224. (Paper presented at the 18th Working Conference on Requirements Engineering: Foundation for Software Quality, REFSQ 2012, held in Essen, Germany) [10.1007/978-3-642-28714-5_19].

Automatic analysis of multimodal requirements: a research preview

Tolomei, G.
2012

Abstract

[Context and motivation] Traditionally, requirements are documented using natural language text. However, several approaches promote the use of rich media requirements descriptions. Apart from text-based descriptions, these multimodal requirements can be enriched with images, audio, or even video. [Question/Problem] The transcription and automated analysis of multimodal information is an important open question that has not been sufficiently addressed by the Requirements Engineering (RE) community so far. Therefore, in this research preview paper we sketch how we plan to tackle research challenges related to the field of multimodal requirements analysis. In particular, we focus on the automation of the analysis process. [Principal idea/results] In our recent research we have started to gather and manually analyze multimodal requirements. Furthermore, we have worked on concepts that enable an initial analysis of multimodal information. The purpose of the planned research is to combine and extend our recent work and to arrive at an approach supporting the automatic analysis of multimodal requirements. [Contribution] In this paper we give a preview of the planned work. We present our research goal, discuss research challenges, and depict an early conceptual solution. © 2012 Springer-Verlag.
18th Working Conference on Requirements Engineering: Foundation for Software Quality, REFSQ 2012
distributional semantics; multimodal requirement descriptions; requirements analysis; similarity-based clustering
04 Conference proceedings publication::04b Conference paper in volume
Files attached to this product
File: Bruni_Automatic_2012.pdf (archive administrators only)
Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 135.47 kB
Format: Adobe PDF — Contact the author

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1382669
Citazioni
  • Scopus: 2