HAND GESTURES RECOGNITION USING CONVOLUTION NEURAL NETWORKS

Authors

  • K. SUPARNA
  • R. DEEPIKA

Keywords:

hand-crafted features, convolution neural network (CNN), Hand Gesture Recognition (HGR), video streams, depth clue, color information

Abstract

Hand Gesture Recognition (HGR) aims to interpret sign language as text or speech, so as to facilitate communication between deaf-mute people and ordinary people. This task has broad social impact, but it remains very challenging due to the complexity and large variations of hand actions. Existing methods for HGR use hand-crafted features to describe sign language motion and build classification models based on those features. However, it is difficult to design reliable features that adapt to the large variations of hand gestures. To address this problem, we propose a novel convolutional neural network (CNN) that automatically extracts discriminative spatial-temporal features from raw video streams without any prior knowledge, avoiding manual feature design. To boost performance, multiple channels of video streams, including color information, depth cues, and body joint positions, are used as input to the CNN in order to integrate color, depth, and trajectory information. We validate the proposed model on a real dataset collected with Microsoft Kinect and demonstrate its effectiveness over traditional approaches based on hand-crafted features.
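The abstract describes fusing color, depth, and body-joint (trajectory) channels in a single CNN that learns spatial-temporal features directly from raw video. Below is a minimal, hypothetical sketch of such a multi-channel model, assuming PyTorch, RGB-D clips stacked along the channel axis, and time-averaged joint coordinates; the layer sizes, class count, and module names are illustrative assumptions, not the authors' actual architecture.

# Hypothetical sketch only: module names, layer sizes, and the number of
# gesture classes are assumptions for illustration, not the paper's model.
import torch
import torch.nn as nn

class MultiChannelGestureCNN(nn.Module):
    def __init__(self, num_classes=20, num_joints=20):
        super().__init__()
        # 3D convolutions learn spatial-temporal features directly from the
        # raw RGB-D clip (shape: batch x 4 x frames x height x width).
        self.features = nn.Sequential(
            nn.Conv3d(4, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),  # collapse to one 32-d clip descriptor
        )
        # Trajectory stream: per-frame joint positions (x, y, z per joint),
        # here assumed to be averaged over time into one flat vector.
        self.joints = nn.Sequential(
            nn.Linear(num_joints * 3, 64), nn.ReLU(),
        )
        self.classifier = nn.Linear(32 + 64, num_classes)

    def forward(self, rgbd_clip, joint_vec):
        video_feat = self.features(rgbd_clip).flatten(1)  # (B, 32)
        joint_feat = self.joints(joint_vec)                # (B, 64)
        return self.classifier(torch.cat([video_feat, joint_feat], dim=1))

# Example usage with dummy Kinect-like input: 16 frames of 64x64 RGB-D video
# plus 20 body joints per sample.
model = MultiChannelGestureCNN()
clip = torch.randn(2, 4, 16, 64, 64)   # batch of 2 RGB-D clips
joints = torch.randn(2, 20 * 3)        # time-averaged joint coordinates
logits = model(clip, joints)           # shape (2, num_classes)

In this sketch the 3D convolutions process the color and depth channels jointly, a small fully connected branch encodes the joint positions, and the two descriptors are concatenated before classification, mirroring the color, depth, and trajectory integration described in the abstract.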

Published

05-08-2024

How to Cite

HAND GESTURES RECOGNITION USING CONVOLUTION NEURAL NETWORKS. (2024). International Journal of Information Technology and Computer Engineering, 12(3), 333-340. https://ijitce.org/index.php/ijitce/article/view/680