
Design and Implementation of a Multilingual Hand Gesture Recognition System for the Deaf and Mute
The Multilingual Sign Language Recognition System addresses communication barriers faced by the hearing- and speech-impaired community, especially in multilingual contexts. Sign language serves as the principal means of communication for millions of Deaf and mute individuals globally, providing a comprehensive and organized visual language for interaction. However, owing to the scarcity of interpreters and limited public awareness, individuals who rely on sign language often face significant communication challenges. These barriers lead to social isolation and hinder equal access to essential services such as education, healthcare, and employment. Sign Language Recognition (SLR) systems are designed to bridge this gap by automatically translating sign language into text or speech, fostering inclusive and accessible communication. Such systems hold immense potential to improve interactions between the Deaf community and the wider society, enabling seamless integration in public, professional, and personal domains. This study adopted advanced deep learning techniques: the YOLO algorithm for real-time gesture detection and TensorFlow for classification. The system focused on recognizing hand gestures across multiple sign languages, such as ASL and BSL, and achieved a detection accuracy of 99%. Despite limitations such as dependency on high-performance hardware and the exclusion of facial expressions, the project demonstrated significant potential as an assistive technology.
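The detection-then-classification pipeline described above can be sketched in miniature. The fragment below is a hypothetical illustration, not the authors' implementation: it assumes a YOLO-style detector has already returned (class id, confidence) pairs for each frame, and shows only the post-detection step of filtering low-confidence boxes and mapping class ids to text for a chosen sign language. The label tables and threshold are invented for illustration.

```python
# Hypothetical post-detection step for a multilingual SLR pipeline.
# A real system would populate these tables from the trained model's
# class list; the entries here are placeholders.
ASL_LABELS = {0: "A", 1: "B", 2: "C"}  # class id -> ASL letter (illustrative)
BSL_LABELS = {0: "A", 1: "B", 2: "C"}  # class id -> BSL letter (illustrative)
LABELS = {"ASL": ASL_LABELS, "BSL": BSL_LABELS}

def decode_detections(detections, language="ASL", conf_threshold=0.5):
    """Turn raw detector output into text: drop low-confidence boxes,
    then map each remaining class id to its letter in the chosen language."""
    table = LABELS[language]
    letters = [table.get(class_id, "?")
               for class_id, conf in detections if conf >= conf_threshold]
    return "".join(letters)

# Example: two confident detections survive the 0.5 threshold.
print(decode_detections([(0, 0.92), (1, 0.30), (2, 0.81)]))  # -> "AC"
```

Keeping the language-specific label tables separate from the detector is one way a single detection backbone can serve multiple sign languages, as the abstract suggests for ASL and BSL.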
Index Terms - Detection, Hand Gesture, Multilingual, Recognition, TensorFlow