Emotion Recognition Using Facial Landmarks and CNNs

Authors

  • Monica Chew, Google Inc., Mountain View, CA, USA

DOI:

https://doi.org/10.62647/

Keywords:

CNN

Abstract

Automated emotion recognition from facial expressions has numerous applications in human-computer
interaction, surveillance, and psychological assessment. This paper presents a hybrid approach that combines
geometric facial landmark detection with deep convolutional neural networks (CNNs) to enhance emotion
classification accuracy. We extract 68 facial landmarks using the Dlib library and augment this geometric
information with grayscale image patches from regions of interest such as the eyes, mouth, and eyebrows. These
data are input into a CNN trained on two benchmark datasets: CK+ and FER2013. The CNN incorporates both
raw pixel data and heatmaps derived from the landmark positions. Data augmentation techniques such as random
rotation, scaling, and horizontal flipping improve the model’s generalization capability. Experimental results show
an accuracy of 88.3% on CK+ and 82.5% on FER2013, surpassing models using only pixel data or landmark
coordinates. Ablation studies confirm that combining geometric and appearance-based features results in higher
robustness, particularly under partial occlusion and varying lighting conditions. These findings suggest that hybrid
architectures can be effectively deployed in real-world affective computing systems, including mobile applications
and embedded platforms.
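To illustrate one step of the pipeline described above, the sketch below renders Gaussian heatmaps from landmark coordinates, of the kind the abstract says are fed to the CNN alongside raw pixels. This is a minimal, hypothetical implementation: the function name, the 48×48 grid size, and the σ value are illustrative assumptions, and the two example points stand in for the 68 landmarks that Dlib's shape predictor would normally return.

```python
import numpy as np

def landmark_heatmaps(landmarks, size=48, sigma=2.0):
    """Render one Gaussian heatmap per (x, y) landmark on a size x size grid.

    A sketch of the heatmap encoding mentioned in the abstract; grid size
    and sigma are illustrative choices, not values from the paper.
    """
    ys, xs = np.mgrid[0:size, 0:size]
    maps = np.empty((len(landmarks), size, size), dtype=np.float32)
    for i, (x, y) in enumerate(landmarks):
        # Peak of 1.0 at the landmark, falling off with distance.
        maps[i] = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return maps

# Hypothetical landmark positions; in the real pipeline these would be the
# 68 points returned by Dlib's facial shape predictor.
pts = np.array([[12.0, 20.0], [30.0, 35.0]])
heatmaps = landmark_heatmaps(pts, size=48)
```

Each channel of `heatmaps` can then be stacked with the grayscale image patches before being passed to the network, so the model sees both appearance and geometry.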

Downloads

Download data is not yet available.

Published

30-12-2018

How to Cite

Emotion Recognition Using Facial Landmarks and CNNs. (2018). International Journal of Information Technology and Computer Engineering, 6(4), 92-98. https://doi.org/10.62647/