EFFICIENT GESTURE CLASSIFICATION WITH PRE-TRAINED NEURAL NETWORKS
Keywords:
sign language interpretation, virtual reality interactions, smart home automation, transfer learning in gesture recognition

Abstract
Gesture recognition plays a crucial role in human-computer interaction, enabling intuitive and contactless communication between users and machines. Traditional approaches to gesture classification often require extensive training data and computational resources, making real-time implementation challenging. This study explores the effectiveness of pre-trained neural networks for efficient gesture classification, leveraging transfer learning techniques to improve accuracy while reducing training time. By utilizing models such as Convolutional Neural Networks (CNNs), Vision Transformers (ViTs), and Recurrent Neural Networks (RNNs), the system extracts high-level spatial and temporal features from gesture datasets. The proposed approach enhances recognition speed, optimizes resource utilization, and ensures robustness across different lighting conditions and environments. Experimental results demonstrate that pre-trained models significantly outperform conventional methods in accuracy and generalization, making them ideal for real-world applications such as sign language interpretation, virtual reality interactions, and smart home automation. This research highlights the potential of transfer learning in gesture recognition, paving the way for more accessible and efficient human-machine interfaces.
License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.