Abstract:
This chapter presents a learn-by-demonstration approach for closed-loop, robust, anthropomorphic grasp planning. Human demonstrations are used to transfer skills from the human to the robot artifact, mapping human to robot motion with functional anthropomorphism [1]. In this work we extend the synergistic description adopted in Chaps. 2–6 for human grasping, in Chap. 8 for robotic hand design and, finally, in Chap. 15 for hand pose reconstruction systems, to define a low-dimensional manifold onto which the extracted kinematics of the anthropomorphic robot arm-hand system are projected and in which appropriate Navigation Function (NF) models are trained. The NF models are trained in a task-specific manner for different: (1) subspaces, (2) objects and (3) tasks to be executed with the corresponding object. A vision system based on RGB-D cameras (Microsoft Kinect) provides online feedback, performing object detection and object pose estimation and triggering the appropriate NF models. The NF models formulate a closed-loop velocity control scheme that ensures humanlikeness of robot motion and guarantees convergence to the desired goals. This scheme is supplemented with a grasping control methodology that derives task-specific, force-closure grasps utilizing tactile sensing. The methodology takes into consideration the mechanical and geometric limitations imposed by the robot hand design and enables stable grasps of a variety of everyday objects under a wide range of uncertainties. The efficiency of the proposed methods is verified through extensive experiments with the Mitsubishi PA10 – DLR/HIT II 22-DoF robot arm-hand system.
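Since the abstract does not specify the trained NF models or their gains, the following is only a minimal sketch of how a Koditschek–Rimon style navigation function can drive a closed-loop velocity controller in a low-dimensional (synergy-like) space. The sphere-world obstacle model, the tuning exponent, the gains, and all function names are illustrative assumptions, not the chapter's implementation.

```python
# Minimal sketch (not the chapter's method): navigation-function-based
# closed-loop velocity control in a low-dimensional configuration space.
import numpy as np

def navigation_function(q, q_goal, obstacles, world_radius=2.0, kappa=3.0):
    """Koditschek-Rimon style NF on a sphere world (assumed model).

    q, q_goal : points in the low-dimensional space
    obstacles : list of (center, radius) spherical obstacles
    kappa     : tuning exponent; sufficiently large values remove local minima
    """
    gamma = np.sum((q - q_goal) ** 2)           # attraction toward the goal
    beta = world_radius ** 2 - np.sum(q ** 2)   # workspace boundary term
    for center, radius in obstacles:            # repulsion from each obstacle
        beta *= np.sum((q - center) ** 2) - radius ** 2
    return gamma / (gamma ** kappa + beta) ** (1.0 / kappa)

def nf_velocity(q, q_goal, obstacles, gain=1.0, eps=1e-6):
    """Closed-loop velocity command: negated numerical gradient of the NF."""
    grad = np.zeros_like(q)
    for i in range(len(q)):
        dq = np.zeros_like(q)
        dq[i] = eps
        grad[i] = (navigation_function(q + dq, q_goal, obstacles)
                   - navigation_function(q - dq, q_goal, obstacles)) / (2 * eps)
    return -gain * grad

# Integrating the velocity field drives the state to the goal while
# avoiding the obstacle, from almost any initial configuration.
q = np.array([0.9, 0.8])
q_goal = np.array([0.0, 0.0])
obstacles = [(np.array([0.45, 0.45]), 0.15)]
for _ in range(5000):
    q = q + 0.05 * nf_velocity(q, q_goal, obstacles)
print(np.round(q, 3))  # approximately [0. 0.]
```

In the chapter's setting, the controller would run in the learned low-dimensional manifold rather than in a toy 2-D space, and the NF models would be the task-specific ones trained from the human demonstrations; the convergence guarantee quoted in the abstract is the standard property of navigation functions exploited by such a scheme.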