Adaptive Visual Gesture Recognition for Human-Robot Interaction

Mohammad Hasanuzzaman
Saifuddin Mohammad Tareeq
Tao Zhang
Vuthichai Ampornaramveth
Hironobu Gotoda
Yoshiaki Shirai
Haruki Ueno

Abstract

This paper presents an adaptive visual gesture recognition method for human-robot interaction using a knowledge-based software platform. The system is capable of recognizing users, static gestures composed of face and hand poses, and dynamic gestures of the face in motion. It learns new users and poses using a multi-cluster approach, and combines computer vision and knowledge-based techniques in order to adapt to new users, gestures, and robot behaviors. In the proposed method, a frame-based knowledge model is defined for person-centric gesture interpretation and human-robot interaction. It is implemented using the frame-based Software Platform for Agent and Knowledge Management (SPAK). The effectiveness of this method has been demonstrated by an experimental human-robot interaction system using a humanoid robot, namely 'Robovie'.
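The multi-cluster learning step mentioned in the abstract can be pictured with a short sketch. The following Python snippet is only an illustration of the general idea, not the authors' implementation: the class name MultiClusterPoseLearner, the Euclidean distance measure, the distance_threshold parameter, and the centroid-update rule are all assumed for the example. Each known pose keeps several cluster centroids, so a new user's variant of a pose can be absorbed as an additional cluster instead of distorting the existing ones.

```python
import numpy as np


class MultiClusterPoseLearner:
    """Illustrative multi-cluster pose classifier (not the paper's code).

    Each pose label is represented by one or more cluster centroids in a
    feature space (e.g. normalized face/hand image features). A sample is
    assigned to the nearest centroid; if it lies far from every existing
    cluster, a new cluster is opened so the system can adapt to a new
    user's way of performing the pose.
    """

    def __init__(self, distance_threshold=0.5):
        self.distance_threshold = distance_threshold
        # label -> list of centroid feature vectors
        self.clusters = {}

    def classify(self, feature):
        """Return (label, distance) of the nearest cluster, or (None, inf)."""
        best_label, best_dist = None, float("inf")
        for label, centroids in self.clusters.items():
            for c in centroids:
                d = np.linalg.norm(feature - c)
                if d < best_dist:
                    best_label, best_dist = label, d
        return best_label, best_dist

    def learn(self, feature, label):
        """Add a labelled sample: refine the nearest cluster or start a new one."""
        centroids = self.clusters.setdefault(label, [])
        if centroids:
            dists = [np.linalg.norm(feature - c) for c in centroids]
            i = int(np.argmin(dists))
            if dists[i] < self.distance_threshold:
                # Nudge the nearest centroid toward the new sample.
                centroids[i] = 0.9 * centroids[i] + 0.1 * feature
                return
        # Too far from every existing cluster: open a new one for this user/variant.
        centroids.append(feature.astype(float))


if __name__ == "__main__":
    learner = MultiClusterPoseLearner(distance_threshold=0.5)
    learner.learn(np.array([0.1, 0.2, 0.3]), "two_hands_up")
    print(learner.classify(np.array([0.12, 0.19, 0.31])))
```

In such a scheme, recognition and adaptation share the same data structure: classification is a nearest-centroid lookup, while adding a user or pose only appends clusters, which matches the adaptive behavior described in the abstract.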

Article Details

How to Cite
Hasanuzzaman, M., Mohammad Tareeq, S., Zhang, T., Ampornaramveth, V., Gotoda, H., Shirai, Y., & Ueno, H. (2007). Adaptive Visual Gesture Recognition for Human-Robot Interaction. Malaysian Journal of Computer Science, 20(1), 23–34. Retrieved from https://ejournal.um.edu.my/index.php/MJCS/article/view/6292