Allie – AI Voice and Sign Assistant

Authors

  • Sayyada Kaneez Sogra Fatima, B.E. Student, Department of Computer Science and Engineering, ISL Engineering College, Hyderabad, India.
  • Mr. Mohammed Rahmat Ali, Assistant Professor, Department of Computer Science and Engineering, ISL Engineering College, Hyderabad, India.

DOI:

https://doi.org/10.62647/IJITCE2025V13I2sPP374-380

Keywords:

AI

Abstract

Effective communication is a vital aspect of human interaction, yet individuals with speech and hearing disabilities frequently encounter barriers in conveying their thoughts or understanding others. Conventional solutions such as sign language interpreters or visual aids are not always accessible, which can result in social isolation and reduced opportunities for participation. To address this issue, Allie – AI Voice and Sign Assistant has been developed as a smart Android-based mobile application that leverages artificial intelligence (AI) and mobile technology to enable seamless two-way communication.
Allie offers two key features: (1) Voice/Text to Sign Translation, where spoken or typed input is translated into sign language using animated GIFs for commonly used words or phrases (e.g., “Hello”, “Thank you”) and static alphabet images (A–Z) for letter-by-letter expression; and (2) Sign to Text/Voice Recognition, which uses MediaPipe’s HandLandmarker to detect hand landmarks and classifies them with a custom-trained TensorFlow Lite (TFLite) model. This enables users to see the recognized gesture as text and perform context-based actions, such as launching apps (e.g., WhatsApp, YouTube, Spotify) through specific signs.
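
As an illustrative sketch only (not code from the paper), the recognition pipeline described above could be wired together in Kotlin roughly as follows; the asset names hand_landmarker.task and gesture_classifier.tflite, the 63-value landmark feature layout, and the label list are assumptions made for the example.

    import android.content.Context
    import android.graphics.Bitmap
    import com.google.mediapipe.framework.image.BitmapImageBuilder
    import com.google.mediapipe.tasks.core.BaseOptions
    import com.google.mediapipe.tasks.vision.core.RunningMode
    import com.google.mediapipe.tasks.vision.handlandmarker.HandLandmarker
    import org.tensorflow.lite.Interpreter
    import org.tensorflow.lite.support.common.FileUtil

    class GestureRecognizer(context: Context) {

        // HandLandmarker loaded from a bundled task file (asset name assumed).
        private val landmarker = HandLandmarker.createFromOptions(
            context,
            HandLandmarker.HandLandmarkerOptions.builder()
                .setBaseOptions(
                    BaseOptions.builder().setModelAssetPath("hand_landmarker.task").build()
                )
                .setRunningMode(RunningMode.IMAGE)
                .setNumHands(1)
                .build()
        )

        // Custom-trained TFLite gesture classifier (asset name and labels assumed).
        private val classifier =
            Interpreter(FileUtil.loadMappedFile(context, "gesture_classifier.tflite"))
        private val labels =
            listOf("hello", "thank_you", "open_whatsapp", "open_youtube", "open_spotify")

        // Returns the recognized gesture label for a frame, or null if no hand is detected.
        fun recognize(frame: Bitmap): String? {
            val result = landmarker.detect(BitmapImageBuilder(frame).build())
            val hand = result.landmarks().firstOrNull() ?: return null

            // Flatten the 21 hand landmarks into a 63-value feature vector (x, y, z each).
            val features = FloatArray(63)
            hand.forEachIndexed { i, lm ->
                features[i * 3] = lm.x()
                features[i * 3 + 1] = lm.y()
                features[i * 3 + 2] = lm.z()
            }

            // Run the classifier and pick the highest-scoring gesture class.
            val scores = Array(1) { FloatArray(labels.size) }
            classifier.run(arrayOf(features), scores)
            val best = scores[0].indices.maxByOrNull { scores[0][it] } ?: return null
            return labels[best]
        }
    }

Under these assumptions, a camera frame passed to recognize() would yield a label such as "open_whatsapp", which the app could then map to an intent that launches the corresponding application.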
The app is designed for flexibility and usability in both online and offline environments, with sign media stored in the res/drawable directory and optionally in cloud storage using Firebase or Supabase. The custom gesture classification model is built using landmark data collected through a dedicated in-app feature that records hand movements into CSV format. The app’s interface is built using Kotlin, incorporating Jetpack Compose and ViewBinding to ensure a responsive and accessible user experience.
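
A minimal sketch of the kind of in-app landmark recording step described above (not the authors' code); the label-plus-63-coordinates row layout and the output file location are assumptions.

    import java.io.File

    // Appends one labeled hand-landmark sample to a CSV file.
    // Row layout (label, x0, y0, z0, ..., x20, y20, z20) is an assumed convention.
    fun appendSample(csv: File, label: String, features: FloatArray) {
        val row = buildString {
            append(label)
            features.forEach { value ->
                append(',')
                append(value)
            }
            append('\n')
        }
        csv.appendText(row)
    }

Each call would add one training row, for example appendSample(File(context.filesDir, "gestures.csv"), "hello", features); the accumulated CSV could then be used to train the TFLite classifier offline.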
This paper explores the app’s architecture, development methodology, technological choices, and implementation details. It highlights how machine learning and computer vision contribute to building inclusive communication tools for differently-abled individuals. The target users include not only those with speech or hearing impairments but also their families, educators, and learners of sign language. Future enhancements include integration of text-to-speech functionality for gesture outputs, animated avatars for real-time sign rendering, and a built-in chatbot for conversational assistance. By combining AI and mobile innovation, Allie represents a step forward in bridging communication gaps and fostering inclusivity through technology.
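
As an illustration of the planned text-to-speech enhancement (not a feature of the current app), recognized gestures could be spoken aloud with Android's built-in TextToSpeech engine along these lines; the class and utterance names are invented for the example.

    import android.content.Context
    import android.speech.tts.TextToSpeech
    import java.util.Locale

    // Sketch of the planned text-to-speech output for recognized gestures,
    // using Android's built-in TextToSpeech engine.
    class GestureSpeaker(context: Context) {
        private var tts: TextToSpeech? = null

        init {
            tts = TextToSpeech(context) { status ->
                if (status == TextToSpeech.SUCCESS) {
                    tts?.setLanguage(Locale.US)
                }
            }
        }

        fun speak(gestureText: String) {
            tts?.speak(gestureText, TextToSpeech.QUEUE_FLUSH, null, "gesture_utterance")
        }
    }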

Published

14-06-2025

How to Cite

Allie – AI Voice and Sign Assistant. (2025). International Journal of Information Technology and Computer Engineering, 13(2s), 374-380. https://doi.org/10.62647/IJITCE2025V13I2sPP374-380