One of the main characteristics of visual analytics is the tight integration between automatic computation and interactive visualization. This generally corresponds to the availability of powerful algorithms for manipulating the data under analysis, transforming it to feed suitable visualizations. This paper focuses on more general-purpose automatic computations and presents a methodological framework that can improve the quality of the visualizations adopted in the analytical process, using the dataset at hand and the actual visualization. In particular, the paper deals with the critical issue of visual clutter reduction, presenting a general strategy for analyzing and reducing clutter through random data sampling. The basic idea is to model the visualization in a virtual space in order to analyze both clutter and data features (e.g., absolute density and relative density). In this way we can measure the visual overlap that is likely to affect a visualization representing a large dataset, obtaining precise visual quality metrics about the visualization degradation and devising automatic sampling strategies to improve the overall image quality. Metrics and algorithms have been tuned taking into account the results of suitable user studies. We describe our proposal using two running case studies, one on 2D scatterplots and the other on parallel coordinates. (C) 2011 Published by Elsevier Ltd.
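The abstract's core idea can be sketched in a few lines: model the visualization as a virtual grid, measure per-cell density, and uniformly subsample until no cell exceeds a capacity threshold. This is a minimal illustration under stated assumptions, not the paper's actual metrics or algorithm; the grid resolution, the capacity model, and the names `grid_density` and `sample_to_capacity` are invented for this example.

```python
import random
from collections import Counter

def grid_density(points, cols, rows):
    """Count points per virtual-grid cell; assumes coordinates in [0, 1)."""
    counts = Counter()
    for x, y in points:
        counts[(int(x * cols), int(y * rows))] += 1
    return counts

def sample_to_capacity(points, cols, rows, capacity, seed=0):
    """Uniformly subsample until every cell holds at most `capacity` points.

    Uniform random sampling preserves relative densities in expectation,
    which is why it can reduce overlap without destroying density patterns.
    """
    rng = random.Random(seed)
    pts = list(points)
    while max(grid_density(pts, cols, rows).values(), default=0) > capacity:
        # Keep ~90% of the remaining points on each pass.
        pts = [p for p in pts if rng.random() < 0.9]
    return pts

rng = random.Random(1)
# A dense cluster plus a sparse background, mimicking an overplotted scatterplot.
data = [(rng.random() * 0.2, rng.random() * 0.2) for _ in range(500)]
data += [(rng.random(), rng.random()) for _ in range(100)]
reduced = sample_to_capacity(data, cols=10, rows=10, capacity=20)
```

After sampling, the maximum per-cell density is bounded, so the dense cluster no longer saturates its cells while the sparse background is largely preserved.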
Improving visual analytics environments through a methodological framework for automatic clutter reduction / Santucci, Giuseppe; Bertini, Enrico. - In: JOURNAL OF VISUAL LANGUAGES AND COMPUTING. - ISSN 1045-926X. - STAMPA. - 22:3(2011), pp. 194-212. [10.1016/j.jvlc.2011.02.002]
Improving visual analytics environments through a methodological framework for automatic clutter reduction
SANTUCCI, Giuseppe
2011
File | Size | Format
---|---|---
VE_2011_11573-375797.pdf | 5.51 MB | Adobe PDF

Type: Publisher's version (published with the publisher's layout)
License: All rights reserved
Access: Archive managers only (contact the author)
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.