- International Journal of Multidisciplinary Studies and Innovative Technologies
- Vol: 6 Issue: 1
- American Sign Language Recognition using YOLOv4 Method
Authors : Ali Al-shaheen, Mesut Çevik, Alzubair Alqaraghuli
Pages : 61-65
Publication Date : 2022-07-20
Article Type : Review
Abstract : Sign language is a means of communication used by people who are unable to speak or hear (deaf and mute), so not everyone can understand it. To facilitate communication between hearing people and deaf or mute people, many systems have been developed that translate the gestures and signs of sign language into words. The aim of this research is to train a model to detect and recognize hand gestures and signs and translate them into letters, numbers, and words using the You Only Look Once (YOLO) method on images or video, including in real time. YOLO is a detection and recognition method based on convolutional neural networks (CNNs) and is characterized by both accuracy and speed. In this research, we created a dataset of 8,000 images divided into 40 classes; for each class, 200 images were taken with different backgrounds and under different lighting conditions, which allows the model to distinguish each sign regardless of the lighting intensity or image clarity. After training the model on this dataset over many iterations, the experiments on image data gave very good results: mAP = 98.01%, average loss = 1.3, recall = 0.96, and F1 = 0.96; on video, the model achieved the same accuracy at 28.9 frames per second (fps).
Keywords : American Sign Language, Real-time Detection, You Only Look Once, YOLO, CNN, Recognition, Hand Gestures, Computer Vision, Machine Learning, Deep Learning
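The reported metrics can be checked for internal consistency: F1 is the harmonic mean of precision and recall, so reporting recall = 0.96 and F1 = 0.96 together implies a precision of about 0.96 as well. A minimal sketch (the function name is ours, not from the paper):

```python
def f1_score(precision: float, recall: float) -> float:
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# With precision = recall = 0.96, the harmonic mean is also 0.96,
# matching the reported F1 of 0.96.
print(round(f1_score(0.96, 0.96), 2))  # → 0.96
```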