RECOGNITION OF AMERICAN SIGN LANGUAGE USING MACHINE LEARNING AND NEURAL NETWORKS
Keywords:
Sign Language Recognition, Manifold, Machine Learning, CNN, Dimension Reduction
Abstract
A solution is needed for the communication barrier that prevents many people with disabilities, including those who are deaf or mute, from interacting with the general public. As a pattern recognition methodology, Sign Language Recognition (SLR) offers a potential solution. This research uses machine learning and deep learning approaches to detect and classify American Sign Language (ASL), recognizing only 24 of the 26 English letters because the letters J and Z require finger movement rather than a static hand shape. To speed up machine learning training and to aid visualization, the first step is dimension reduction using Principal Component Analysis (PCA) and manifold techniques. Several machine learning algorithms are then applied to classify the patterns, including Random Forest Classification (RFC), K-Nearest Neighbor (KNN), Gaussian Naïve Bayes (GNB), Support Vector Machine (SVM), and Stochastic Gradient Descent (SGD). This work uses the Grid Search approach to determine the optimal combination of hyperparameters for the SVM algorithm, resulting in more accurate predictions. The results show that different dimensionality reduction algorithms affect the prediction models' accuracy differently. Specifically, we find that the manifold algorithm is the best dimensionality reduction method for KNN, but not for the other prediction models; for those remaining machine learning algorithms, PCA proves far more practical than manifold techniques. Among the approaches considered, the two most accurate deep learning options for classification are deep neural networks (DNN) and convolutional neural networks (CNN).
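The pipeline described above (dimension reduction with PCA, then an SVM tuned via Grid Search) can be sketched with scikit-learn. This is a minimal illustration, not the paper's implementation: the data here is a synthetic stand-in for flattened hand-sign images, and the component count and hyperparameter grid are assumed values for demonstration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for flattened 28x28 hand-sign images,
# with 24 classes for the 24 static ASL letters (J and Z excluded).
X, y = make_classification(
    n_samples=600, n_features=784, n_informative=40,
    n_classes=24, n_clusters_per_class=1, random_state=0,
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0,
)

# Step 1: dimension reduction with PCA to speed up training
# (50 components is an assumed choice, not taken from the paper).
pca = PCA(n_components=50).fit(X_train)
X_train_p, X_test_p = pca.transform(X_train), pca.transform(X_test)

# Step 2: Grid Search over SVM hyperparameters, as the abstract describes.
param_grid = {"C": [1, 10], "gamma": ["scale", 0.01]}
grid = GridSearchCV(SVC(), param_grid, cv=3)
grid.fit(X_train_p, y_train)

acc = grid.score(X_test_p, y_test)
print(f"best params: {grid.best_params_}, test accuracy: {acc:.3f}")
```

The same pattern applies to the other classifiers mentioned (RFC, KNN, GNB, SGD): fit the reducer on the training split only, transform both splits, then train and evaluate the classifier on the reduced features. Swapping `PCA` for a manifold method such as `sklearn.manifold.Isomap` reproduces the paper's comparison of reduction techniques.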
License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.