Abstract:
With the growth of the 3D digital gaming and animation industries, 3D digital models are becoming increasingly common. Problems arise because the 3D coordinates required by many modelling operations must be specified using 2D input devices, whether a computer mouse or a drawing tablet. This research addresses this issue by constructing a prototype system that allows the user to provide direct 3D input through movements of the right hand and to select commands using static postures of the left. This is accomplished using a low-cost stereo camera system combined with stereo vision techniques such as CAMShift, stereo matching, PCA, and LDA. As a result, our system performs real-time 3D hand tracking and posture recognition. The hand coordinates and the hand posture are sent to a 3D modelling application, Blender, via a local TCP connection; a Blender plug-in script then performs the actual modelling operations and provides visual feedback to the user. Usability studies were conducted to test the real-world performance of our system. The results showed that for operations that do not require 3D input, the mouse is the better input device; however, for operations that do require 3D input, participants completed tasks much faster using our 3D hand tracker than using the mouse. Overall, participants reported that the 3D input was natural and intuitive to use. Although the user's arm fatigues quickly when using our prototype system, users enjoyed the experience of using it.