Abstract
Human communication has been studied from multiple perspectives, yielding contributions to several disciplines. From a computer science point of view, findings in this area have inspired the development of Natural User Interfaces (NUI): interaction mechanisms aimed at replicating the way people communicate, so that information exchange with computational systems happens in a similar fashion. Gestural interfaces are a specific type of NUI focused on analyzing the relationship between body motion and semantic meaning. Although, from a technical perspective, proposals found in the literature have proven highly efficient and accurate at gesture recognition, several authors have reported a lack of naturalness in the interaction with gesture-based applications, leading to the conclusion that NUIs are not usually as natural as they claim to be. Moreover, gestures are culture- and language-specific, which makes them ambiguous, incompletely specified, and difficult to match with semantic meaning when the context is unknown. In this paper, we propose a methodology for developing gesture-based applications that preserves accuracy and efficiency in recognition tasks while prioritizing flexibility: it supports gestures suited to different user contexts through the exploration of user-defined gesture sets and Machine Learning techniques, using a one-shot learning approach.
