Abstract:
Sensing interfaces that rely on head or facial gestures provide effective solutions for hands-free scenarios. Most of these interfaces use sensors attached to the face or placed inside the mouth, making them either obtrusive or limited in input bandwidth. In this paper, we propose ChewIt – a novel intraoral input interface. ChewIt resembles an edible object and allows users to perform various hands-free input operations simply and discreetly. Our design is informed by a series of studies investigating the implications of shape, size, and location for comfort, discreetness, maneuverability, and obtrusiveness. Additionally, we evaluated potential gestures that users could employ to interact with such an intraoral interface. Our findings demonstrate ChewIt's discreetness both from a third-party point of view when the user is not interacting and from a first-person point of view when interacting; identify a location within the mouth where the user can rest the device between interactions; establish a definitive form factor based on parameters such as resting position, orientation, and comfort; derive a set of gestures from an analysis of interaction parameters such as comfort and ease of use; and present a working prototype able to transmit data over Bluetooth and analyze it in real time.