We introduce a framework for developing flexible, multimodal environments in which the recognition, interpretation and actuation phases are separated from one another to allow software modularization and component reuse. We illustrate the framework through two hand-based interaction applications in which two different hand detection algorithms are used by the proposed system to capture users' gestures. The recognised gestures are then mapped to application-specific commands, to simulate curve drawing with a graphical tool and to steer puppet animation, respectively. We present both experimental results for the specific applications and the general platform structure. The proposed interactive applications rely on inexpensive devices (such as low-resolution webcams) and efficient, real-time algorithms.
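The separation of recognition, interpretation and actuation described above can be sketched as a minimal pipeline. This is an illustrative sketch only: the class and function names (`Gesture`, `Pipeline`, `recognize`, `actuate`) are hypothetical and not taken from the paper, whose actual API is not shown here.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Gesture:
    name: str  # label produced by some hand-detection algorithm


class Pipeline:
    """Decouples the three phases so each component can be swapped."""

    def __init__(self,
                 recognize: Callable[[object], Gesture],
                 command_map: Dict[str, str],
                 actuate: Callable[[str], str]):
        self.recognize = recognize      # recognition: detector-specific
        self.command_map = command_map  # interpretation: gesture -> command
        self.actuate = actuate          # actuation: application-specific

    def process(self, frame: object) -> str:
        gesture = self.recognize(frame)
        command = self.command_map.get(gesture.name, "noop")
        return self.actuate(command)


# Toy components: a stub "detector" and an actuator that reports commands.
pipeline = Pipeline(
    recognize=lambda frame: Gesture("swipe_left"),
    command_map={"swipe_left": "draw_curve"},
    actuate=lambda cmd: f"executed:{cmd}",
)
print(pipeline.process(frame=None))  # executed:draw_curve
```

Because the phases only communicate through the `Gesture` and command abstractions, a different hand detector or a different application (e.g. curve drawing vs. puppet animation) can be plugged in without touching the other stages.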
|Title:||Framework for Hand-Based Interaction|
|Publication date:||2010|
|Type:||04b Conference paper in proceedings volume|