Emotion Recognition Using Facial Landmarks and CNNs
DOI: https://doi.org/10.62647/

Keywords: CNN

Abstract
Automated emotion recognition from facial expressions has numerous applications in human-computer
interaction, surveillance, and psychological assessment. This paper presents a hybrid approach that combines
geometric facial landmark detection with deep convolutional neural networks (CNNs) to enhance emotion
classification accuracy. We extract 68 facial landmarks using the Dlib library and augment this geometric
information with grayscale image patches from regions of interest such as the eyes, mouth, and eyebrows. These
data are input into a CNN trained on two benchmark datasets: CK+ and FER2013. The CNN incorporates both
raw pixel data and heatmaps derived from the landmark positions. Data augmentation techniques such as random
rotation, scaling, and horizontal flipping improve the model’s generalization capability. Experimental results show
an accuracy of 88.3% on CK+ and 82.5% on FER2013, surpassing models using only pixel data or landmark
coordinates. Ablation studies confirm that combining geometric and appearance-based features results in higher
robustness, particularly under partial occlusion and varying lighting conditions. These findings suggest that hybrid
architectures can be effectively deployed in real-world affective computing systems, including mobile applications
and embedded platforms.
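The heatmap input described above can be illustrated with a short sketch. This is a minimal, hypothetical rendering, not the paper's implementation: the helper `landmark_heatmaps` and its Gaussian formulation with bandwidth `sigma` are assumptions, and in practice the (x, y) coordinates would come from Dlib's 68-point shape predictor rather than being passed in directly.

```python
import numpy as np

def landmark_heatmaps(landmarks, size=48, sigma=2.0):
    """Render one Gaussian heatmap per (x, y) landmark on a size x size grid.

    `landmarks` is an (N, 2) array of pixel coordinates on the target grid.
    Hypothetical helper for illustration; the abstract does not specify the
    exact heatmap formulation used in the paper.
    """
    ys, xs = np.mgrid[0:size, 0:size]  # ys varies down rows, xs across columns
    maps = np.empty((len(landmarks), size, size), dtype=np.float32)
    for i, (x, y) in enumerate(landmarks):
        # Peak value 1.0 at the landmark, decaying with distance.
        maps[i] = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return maps
```

Under this reading, each of the 68 landmarks yields one heatmap channel that can be stacked with the grayscale image patches to form the CNN's multi-channel input.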
License
Copyright (c) 2018 Author

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
