Kumar, Vijay and Bansal, Vishal and Mekhala and Shoeb, Adnan (2025) Gesture Talk - Bridging Silence with Words. International Journal of Innovative Science and Research Technology, 10 (5): 25may1721. pp. 2924-2929. ISSN 2456-2165
IJISRT25MAY1721.pdf - Published Version
Abstract
Gesture Talk is a real-time sign-language-to-text conversion system intended to bridge communication between deaf and hearing individuals and to promote inclusivity across many other domains. The aim is to develop a low-cost, efficient solution that translates American Sign Language (ASL) gestures into text without requiring an interpreter. The system uses convolutional neural networks (CNNs) with a dual-layer classification algorithm, applies Gaussian blur and adaptive thresholding to pre-process webcam video frames, and incorporates an auto-correct feature to improve word prediction. It was trained and tested on a labeled ASL gesture dataset of 128×128 grayscale images derived from RGB video inputs. Gesture Talk achieves 98.0% recognition accuracy on ASL gestures, surpassing many existing systems, and offers a user-friendly, cross-platform interface deployable on desktop, mobile, and web applications, greatly improving accessibility for deaf individuals.
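To make the preprocessing pipeline concrete, here is a minimal OpenCV sketch of the steps the abstract names: grayscale conversion, Gaussian blur, adaptive thresholding, and resizing to 128×128. The function name `preprocess_frame` and all kernel, sigma, and threshold parameter values are illustrative assumptions, not settings reported in the paper.

```python
import cv2

def preprocess_frame(frame_bgr, size=128):
    """Turn a raw BGR webcam frame into the kind of 128x128
    thresholded grayscale image the abstract describes feeding
    to the CNN (parameter values assumed, not from the paper)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Gaussian blur suppresses sensor noise before thresholding;
    # the 5x5 kernel and sigma=2 are assumed values.
    blurred = cv2.GaussianBlur(gray, (5, 5), 2)
    # Adaptive thresholding separates the hand from the background
    # under uneven lighting; blockSize=11 and C=2 are assumed values.
    binary = cv2.adaptiveThreshold(
        blurred, 255,
        cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
        cv2.THRESH_BINARY_INV,
        11, 2,
    )
    return cv2.resize(binary, (size, size))

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)          # default webcam
    ok, frame = cap.read()
    if ok:
        model_input = preprocess_frame(frame)
        print(model_input.shape)       # (128, 128)
    cap.release()
```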
| Item Type: | Article |
|---|---|
| Subjects: | T Technology > T Technology (General) |
| Divisions: | Faculty of Engineering, Science and Mathematics > School of Electronics and Computer Science |
| Depositing User: | Editor IJISRT Publication |
| Date Deposited: | 19 Jun 2025 10:24 |
| Last Modified: | 19 Jun 2025 10:24 |
| URI: | https://eprint.ijisrt.org/id/eprint/1266 |